Learning Objectives Having finished this chapter, you should be able to: • Interact with an RMarkdown notebook in RStudio • Describe the difference between a variable and a function • Describe the different types of variables • Create a vector or data frame and access its elements • Install and load an R library • Load data from a file and view the data frame This chapter is the first of several distributed throughout the book that will introduce you to increasingly sophisticated things that you can do using the R programming language. The name “R” is a play on the names of the two authors of the software package (Ross Ihaka and Robert Gentleman) as well as an homage to an older statistical software package called “S”. R has become one of the most popular programming languages for statistical analysis and “data science”. Unlike general-purpose programming languages such as Python or Java, R is purpose-built for statistics. That doesn’t mean that you can’t do more general things with it, but the place where it really shines is in data analysis and statistics. 03: Introduction to R Programming a computer is a skill, just like playing a musical instrument or speaking a second language. And just like those skills, it takes a lot of work to get good at it — the only way to acquire a skill is through practice. There is nothing special or magical about people who are experts, other than the quality and quantity of their experience! However, not all practice is equally effective. A large amount of psychological research has shown that practice needs to be deliberate, meaning that it focuses on developing the specific skills that one needs to perform the skill, at a level that is always pushing one’s ability. If you have never programmed before, then it’s going to seem hard, just as it would seem hard for a native English speaker to start speaking Mandarin. However, just as a beginning guitarist needs to learn to play their scales, we will teach you how to perform the basics of programming, which you can then use to do more powerful things. One of the most important aspects of computer programming is that you can try things to your heart’s content; the worst thing that can happen is that the program will crash. Trying new things and making mistakes is one of the keys to learning. The hardest part of programming is figuring out why something didn’t work, which we call debugging. In programming, things are going to go wrong in ways that are often confusing and opaque. Every programmer has a story about spending hours trying to figure out why something didn’t work, only to realize that the problem was completely obvious. The more practice you get, the better you will get at figuring out how to fix these errors. But there are a few strategies that can be helpful. 3.1.1 Use the web In particular, you should take advantage of the fact that there are millions of people programming in R around the world, so nearly any error message you see has already been seen by someone else. Whenever I experience an error that I don’t understand, the first thing that I do is to copy and paste the error message into a search engine Often this will provide several pages discussing the problem and the ways that people have solved it. 3.1.2 Rubber duck debugging The idea behind rubber duck debugging is to pretend that you are trying to explain what your code is doing to an inanimate object, like a rubber duck. Often, the process of explaning it aloud is enough to help you find the problem.
When I am using R in my own work, I generally use a free software package called RStudio, which provides a number of nice tools for working with R. In particular, RStudio provides the ability to create “notebooks” that mix together R code and text (formatted using the Markdown text formatting system). In fact, this book is written using exactly that system! You can see the R code used to generate this book here. 3.03: Getting Started with R When we work with R, we often do this using a command line in which we type commands and it responds to those commands. In the simplest case, if we just type in a number, it will simply respond with that number. Go into the R console and type the number 3. You should see somethign like this: ``````> 3 [1] 3`````` The `>` symbol is the command prompt, which is prompting you to type something in. The next line (`[1] 3`) is R’s answer. Let’s try something a bit more complicated: ``````> 3 + 4 [1] 7`````` R spits out the answer to whatever you type in, as long as it can figure it out. Now let’s try typing in a word: ``````> hello Error: object 'hello' not found`````` What? Why did this happen? When R encounters a letter or word, it assumes that it is referring to the name of a variable — think of `X` from high school algebra. We will return to variables in a little while, but if we want R to print out the word hello then we need to contain it in quotation marks, telling R that it is a character string. ``````> "hello" [1] "hello"`````` There are many types of variables in R. You have already seen two examples: integers (like the number 3) and character strings (like the word “hello”). Another important one is real numbers, which are the most common kind of numbers that we will deal with in statistics, which span the entire number line including the spaces in between the integers. For example: ``````> 1/3 [1] 0.33`````` In reality the result should be 0.33 followed by an infinite number of threes, but R only shows us two decimal points in this example. Another kind of variable is known as a logical variable, because it is based on the idea from logic that a statement can be either true or false. In R, these are capitalized (`TRUE` and `FALSE`). To determine whether a statement is true or not, we use logical operators. You are already familiar with some of these, like the greater-than (`>`) and less-than (`<`) operators. ``````> 1 < 3 [1] TRUE > 2 > 4 [1] FALSE `````` Often we want to know whether two numbers are equal or not equal to one another. There are special operators in R to do this: `==` for equals, and `!=` for not-equals: ``````> 3 == 3 [1] TRUE > 4 != 4 [1] FALSE `````` 3.04: Variables A variable is a symbol that stands for another value (just like “X” in algebra). We can create a variable by assigning a value to it using the `<-` operator. If we then type the name of the variable R will print out its value. ``````> x <- 4 > x [1] 4 `````` The variable now stands for the value that it contains, so we can perform operations on it and get the same answer as if we used the value itself. ``````> x + 3 [1] 7 > x == 5 [1] FALSE`````` We can change the value of a variable by simply assigning a new value to it. ``````> x <- x + 1 > x [1] 5 `````` A note: You can also use the equals sign `=` instead of the` <-` 3.05: Functions A function is an operator that takes some input and gives an output based on the input. For example, let’s say that have a number and we want to determine its absolute value. 
R has a function called `abs()` that takes in a number and outputs its absolute value: ``````> x <- -3 > abs(x) [1] 3 `````` Most functions take an input like the `abs()` function (which we call an argument), but some also have special keywords that can be used to change how the function works. For example, the `rnorm()` function generates random numbers from a normal distribution (which we will learn more about later). Have a look at the help page for this function by typing `help(rnorm)` in the console, which will cause a help page to appear below. The section of the help page for the `rnorm()` function shows the following: ``````rnorm(n, mean = 0, sd = 1) Arguments n number of observations. mean vector of means. sd vector of standard deviations.`````` You can also obtain some examples of how the function is used by typing `example(rnorm)` in the console. We can see that the rnorm function has two arguments, mean and sd, that are shown to be equal to specific values. This means that those values are the default settings, so that if you don’t do anything, then the function will return random numbers with a mean of 0 and a standard deviation of 1. The other argument, n, does not have a default value. Try typing in the function `rnorm()` with no arguments and see what happens — it will return an error telling you that the argument “n” is missing and does not have a default value. If we wanted to create random numbers with a different mean and standard deviation (say mean == 100 and standard deviation == 15), then we could simply set those values in the function call. Let’s say that we would like 5 random numbers from this distribution: ``````> my_random_numbers <- rnorm(5, mean=100, sd=15) > my_random_numbers [1] 104 115 101 97 115 `````` You will see that I set the variable to the name `my_random_numbers`. In general, it’s always good to be as descriptive as possible when creating variables; rather than calling them x or y, use names that describe the actual contents. This will make it much easier to understand what’s going on once things get more complicated.
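Because `rnorm()` generates random values, the specific numbers shown above are just one possible draw and will differ each time the code is run. A minimal sketch of how results like this can be made reproducible using R's `set.seed()` function (the seed value 12345 is arbitrary and just for illustration):

```r
# setting a seed makes the "random" numbers reproducible:
# the same seed always produces the same sequence of draws
set.seed(12345)
my_random_numbers <- rnorm(5, mean = 100, sd = 15)
my_random_numbers
```

Running these lines again with the same seed produces identical values, which is helpful when you want someone else to be able to reproduce your results exactly.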
You may have noticed that the `my_random_numbers` created above wasn’t like the variables that we had seen before — it contained a number of values in it. We refer to this kind of variable as a vector. If you want to create your own new vector, you can do that using the `c()` function: ``````> my_vector <- c(4, 5, 6) > my_vector [1] 4 5 6 `````` You can access the individual elements within a vector by using square brackets along with a number that refers to the location within the vector. These index values start at 1, which is different from many other programming languages that start at zero. Let’s say we want to see the value in the second place of the vector: ``````> my_vector[2] [1] 5 `````` You can also look at a range of positions, by putting the start and end locations with a colon in between: ``````> my_vector[2:3] [1] 5 6 `````` You can also change the values of specific locations using the same indexing: ``````> my_vector[3] <- 7 > my_vector [1] 4 5 7 ``````

3.07: Math with Vectors

You can apply mathematical operations to the elements of a vector just as you would with a single number: ``````> my_vector <- c(4, 5, 6) > my_vector_times_ten <- my_vector*10 > my_vector_times_ten [1] 40 50 60 `````` You can also apply mathematical operations on pairs of vectors. In this case, each matching element is used for the operation. ``````> my_first_vector <- c(1,2,3) > my_second_vector <- c(10, 20, 20) > my_first_vector + my_second_vector [1] 11 22 23 `````` We can also apply logical operations across vectors; again, this will return a vector with the operation applied to the pairs of values at each position. ``````> vector_a <- c(1,2,3) > vector_b <- c(1,2,4) > vector_a == vector_b [1] TRUE TRUE FALSE `````` Most functions will work with vectors just as they would with a single number. For example, let’s say we wanted to obtain the trigonometric sine for each of a set of values. We could create a vector and pass it to the `sin()` function, which will return as many sine values as there are input values: ``````> my_angle_values <- c(0, 1, 2) > my_sin_values <- sin(my_angle_values) > my_sin_values [1] 0.00 0.84 0.91 ``````

3.08: Data Frames

Often in a dataset we will have a number of different variables that we want to work with. Instead of having a different named variable that stores each one, it is often useful to combine all of the separate variables into a single package, which is referred to as a data frame. If you are familiar with a spreadsheet (say from Microsoft Excel) then you already have a basic understanding of a data frame. Let’s say that we have values of price and mileage for three different types of cars. We could start by creating a variable for each one, making sure that the three cars are in the same order for each of the variables: ``````car_model <- c("Ford Fusion", "Hyundai Accent", "Toyota Corolla") car_price <- c(25000, 16000, 18000) car_mileage <- c(27, 36, 32) `````` We can then combine these into a single data frame, using the `data.frame()` function. I like to use "_df" in the names of data frames just to make clear that it’s a data frame, so we will call this one “cars_df”: ``````cars_df <- data.frame(model=car_model, price=car_price, mileage=car_mileage) `````` We can view the data frame by using the `View()` function: ``View(cars_df)`` This will present a view of the data frame much like a spreadsheet, as shown in Figure 2.1. Each of the columns in the data frame contains one of the variables, with the name that we gave it when we created the data frame.
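If you prefer to inspect a data frame in the console rather than in RStudio’s spreadsheet viewer, base R provides a few quick alternatives. A small sketch using the `cars_df` data frame created above:

```r
# number of rows and columns in the data frame
dim(cars_df)

# each column's name, type, and first few values
str(cars_df)

# basic descriptive summaries of each column
summary(cars_df)
```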
We can access each of those columns using the `\$` operator. For example, if we wanted to access the mileage variable, we would combine the name of the data frame with the name of the variable as follows: ``````> cars_df\$mileage [1] 27 36 32 `````` This is just like any other vector, in that we can refer to its individual values using square brackets as we did with regular vectors: ``````> cars_df\$mileage[3] [1] 32 `````` In some of the examples in the book, you will see something called a tibble; this is basically a souped-up version of a data frame, and can be treated mostly in the same way. 3.09: Using R Libraries Many of the useful features in R are not contained in the primary R package, but instead come from libraries that have been developed by various members of the R community. For example, the `ggplot2` package provides a number of features for visualizing data, as we will see in a later chapter. Before we can use a package, we need to install it on our system, using the `install.packages()` function: ``````> install.packages("ggplot2") trying URL 'https://cran.rstudio.com/... Content type 'application/x-gzip' length 3961383 bytes (3.8 MB) ================================================== downloaded 3.8 MB The downloaded binary packages are in /var/folders/.../downloaded_packages `````` This will automatically download the package from the Comprehensive R Archive Network (CRAN) and install it on your system. Once it’s installed, you can then load the library using the `library()` function: ``> library(ggplot2)`` After loading the function, you can now access all of its features. If you want to learn more about its features, you can find them using the help function: ``> help(ggplot2)`` 3.10: Working with Data Files When we are doing statistics, we often need to load in the data that we will analyze. Those data will live in a file on one’s computer or on the internet. For this example, let’s use a file that is hosted on the internet, which contains the gross domestic product (GDP) values for a number of countries around the world. This file is stored as comma-delimited text, meaning that the values for each of the variables in the dataset are separate by commas. There are three variables: the relative rank of the countries, the name of the country, and its GDP value. Here is what the first few lines of the file look like: ``````Rank,Country,GDP 1,Liechtenstein,141100 2,Qatar,104300 3,Luxembourg,81100 `````` We can load a comma-delimited text file into R using the `read.csv()` function, which will accept either the location of a file on one’s computer, or a URL for files that are located on the web: ``````url='https://raw.githubusercontent.com/psych10/ psych10/master/notebooks/Session03-IntroToR/gdp.csv' gdp_df <- read.csv(url)`````` Once you have done this, take a look at the data frame using the `View()` function, and make sure that it looks right — it should have a column for each of the three variables. Let’s say that we wanted to create a new file, which contained GDP values in Euros rather than US Dollars. We use today’s exchange rate, which is 1 USD == 0.90 Euros. To convert from Dollars to Euros, we simply multiple the GDP values by the exchange rate, and assign those values to a new variable within the data frame: ``````> exchange_rate = 0.9 > gdp_df\$GDP_euros <- gdp_df\$GDP * exchange_rate `````` You should now see a new variable within the data frame, called “GDP_euros” which contains the new values. 
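It is good practice to check the result of a computation like this before moving on. A brief sketch of one way to verify the new column, assuming the `gdp_df` and `exchange_rate` objects created above:

```r
# look at the first few rows of the original and converted values
head(gdp_df[, c("Country", "GDP", "GDP_euros")])

# confirm that every converted value equals GDP times the exchange rate
all.equal(gdp_df$GDP_euros, gdp_df$GDP * exchange_rate)
```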
Now let’s save this to a comma-delimited text file on our computer called “gdp_euro.csv”. We do this using the `write.table()` command: ``````> write.table(gdp_df, file='gdp_euro.csv') `````` (A note: by default `write.table()` writes space-delimited values with row names; to produce a true comma-delimited file you could instead use `write.csv(gdp_df, file='gdp_euro.csv', row.names=FALSE)`.) This file will be created within the working directory that RStudio is using. You can find this directory using the `getwd()` function: ``````> getwd() [1] "/Users/me/MyClasses/Psych10/LearningR" ``````

3.11: Suggested Readings and Videos

There are many online resources for learning R.
I mentioned in the Introduction that one of the big discoveries of statistics is the idea that we can better understand the world by throwing away information, and that’s exactly what we are doing when we summarize a dataset. In this Chapter we will discuss why and how to summarize data. 04: Summarizing Data When we summarize data, we are necessarily throwing away information, and one might plausibly object to this. As an example, let’s go back to the PURE study that we discussed in Chapter 1. Are we not supposed to believe that all of the details about each individual matter, beyond those that are summarized in the dataset? What about the specific details of how the data were collected, such as the time of day or the mood of the participant? All of these details are lost when we summarize the data. We summarize data in general because it provides us with a way to generalize - that is, to make general statements that extend beyond specific observations. The importance of generalization was highlighted by the writer Jorge Luis Borges in his short story “Funes the Memorious”, which describes an individual who loses the ability to forget. Borges focuses in on the relation between generalization (i.e. throwing away data) and thinking: “To think is to forget a difference, to generalize, to abstract. In the overly replete world of Funes, there were nothing but details.” Psychologists have long studied all of the ways in which generalization is central to thinking. One example is categorization: We are able to easily recognize different examples of the category of “birds” even though the individual examples may be very different in their surface features (such as an ostrich, a robin, and a chicken). Importantly, generalization lets us make predictions about these individuals – in the case of birds, we can predict that they can fly and eat worms, and that they probably can’t drive a car or speak English. These predictions won’t always be right, but they are often good enough to be useful in the world. 4.02: Summarizing Data Using Tables A simple way to summarize data is to generate a table representing counts of various types of observations. This type of table has been used for thousands of years (see Figure 4.1). Let’s look at some examples of the use of tables, again using the NHANES dataset. Type the command help(NHANES) in the RStudio console, and scroll through the help page, which should open within the Help panel if you are using RStudio. This page provides some information about the dataset as well as a listing of all of the variables included in the dataset. Let’s have a look at a simple variable, called “PhysActive” in the dataset. This variable contains one of three different values: “Yes” or “No” (indicating whether or not the person reports doing “moderate or vigorous-intensity sports, fitness or recreational activities”), or “NA” if the data are missing for that individual. There are different reasons that the data might be missing; for example, this question was not asked of children younger than 12 years of age, while in other cases an adult may have declined to answer the question during the interview. 4.2.1 Frequency distributions Let’s look at how many people fall into each of these categories. 
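One quick way to obtain such counts is base R’s `table()` function; the sketch below assumes the NHANES package has been installed and loaded. (The book builds this summary with tidyverse tools introduced in Chapter 5, and it first removes duplicate participant records, so counts computed on the full raw data frame will differ somewhat from the table that follows.)

```r
library(NHANES)

# count the people in each category of PhysActive,
# including the missing (NA) values
table(NHANES$PhysActive, useNA = "ifany")
```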
We will do this in R by selecting the variable of interest (PhysActive) from the NHANES dataset, grouping the data by the different values of the variable, and then counting how many values there are in each group: PhysActive AbsoluteFrequency No 2473 Yes 2972 NA 1334 This table shows the frequencies of each of the different values; there were 2473 individuals who responded “No” to the question, 2972 who responded “Yes”, and 1334 for whom no response was given. We call this a frequency distribution because it tells us how frequent each of the possible values is within our sample. This shows us the absolute frequency of the two responses, for everyone who actually gave a response. We can see from this that there are more people saying “Yes” than “No”, but it can be hard to tell from absolute numbers how big the difference is. For this reason, we often would rather present the data using relative frequency, which is obtained by dividing each frequency by the sum of all frequencies: $\ relative frequency_{i}=\frac{\text {absolute frequency}_{i}}{\sum_{j=1}^{N} \text {absolute frequency}_{j}}$ The relative frequency provides a much easier way to see how big the imbalance is. We can also interpret the relative frequencies as percentages by multiplying them by 100. In this example, we will drop the NA values as well, since we would like to be able to interpret the relative frequencies of active versus inactive people. Table 4.1: Absolute and relative frequencies and percentages for PhysActive variable PhysActive AbsoluteFrequency RelativeFrequency Percentage No 2473 0.45 45 Yes 2972 0.55 55 This lets us see that 45.4 percent of the individuals in the NHANES sample said “No” and 54.6 percent said “Yes”. 4.2.2 Cumulative distributions The PhysActive variable that we examined above only had two possible values, but often we wish to summarize data that can have many more possible values. When those values are quantitative, then one useful way to summarize them is via what we call a cumulative frequency representation: rather than asking how many observations take on a specific value, we ask how many have a value of at least some specific value. Let’s look at another variable in the NHANES dataset, called SleepHrsNight which records how many hours the participant reports sleeping on usual weekdays. Let’s create a frequency table as we did above, after removing anyone who didn’t provide a response to the question. Table 4.2: Frequency distribution for number of hours of sleep per night in the NHANES dataset SleepHrsNight AbsoluteFrequency RelativeFrequency Percentage 2 9 0.00 0.18 3 49 0.01 0.97 4 200 0.04 3.97 5 406 0.08 8.06 6 1172 0.23 23.28 7 1394 0.28 27.69 8 1405 0.28 27.90 9 271 0.05 5.38 10 97 0.02 1.93 11 15 0.00 0.30 12 17 0.00 0.34 We can already begin to summarize the dataset just by looking at the table; for example, we can see that most people report sleeping between 6 and 8 hours. Let’s plot the data to see this more clearly. To do this we can plot a histogram which shows the number of cases having each of the different values; see left panel of Figure 4.2. The ggplot2() library has a built in histogram function (geom_histogram()) which we will often use. We can also plot the relative frequencies, which we will often refer to as densities - see the right panel of Figure 4.2. What if we want to know how many people report sleeping 5 hours or less? To find this, we can compute a cumulative distribution. 
To compute the cumulative frequency for some value j, we add up the frequencies for all of the values up to and including j:

$\text{cumulative frequency}_{j}=\sum_{i=1}^{j} \text{absolute frequency}_{i}$

Table 4.3: Absolute and cumulative frequency distributions for SleepHrsNight variable

| SleepHrsNight | AbsoluteFrequency | CumulativeFrequency |
|---|---|---|
| 2 | 9 | 9 |
| 3 | 49 | 58 |
| 4 | 200 | 258 |
| 5 | 406 | 664 |
| 6 | 1172 | 1836 |
| 7 | 1394 | 3230 |
| 8 | 1405 | 4635 |
| 9 | 271 | 4906 |
| 10 | 97 | 5003 |
| 11 | 15 | 5018 |
| 12 | 17 | 5035 |

In the left panel of Figure 4.3 we plot the data to see what these representations look like; the absolute frequency values are plotted in solid lines, and the cumulative frequencies are plotted in dashed lines. We see that the cumulative frequency is monotonically increasing – that is, it can only go up or stay constant, but it can never decrease. Again, we usually find the relative frequencies to be more useful than the absolute; those are plotted in the right panel of Figure 4.3.

4.2.3 Plotting histograms

The variables that we examined above were fairly simple, having only a few possible values. Now let’s look at a more complex variable: Age. First let’s plot the Age variable for all of the individuals in the NHANES dataset (see left panel of Figure 4.4). What do you see there? First, you should notice that the number of individuals in each age group is declining over time. This makes sense because the population is being randomly sampled, and thus death over time leads to fewer people in the older age ranges. Second, you probably notice a large spike in the graph at age 80. What do you think that’s about? If you look at the help function for the NHANES dataset, you will see the following definition: “Age in years at screening of study participant. Note: Subjects 80 years or older were recorded as 80.” The reason for this is that the relatively small number of individuals with very high ages would make it potentially easier to identify the specific person in the dataset if you knew their exact age; researchers generally promise their participants to keep their identity confidential, and this is one of the things they can do to help protect their research subjects. This also highlights the fact that it’s always important to know where one’s data have come from and how they have been processed; otherwise we might interpret them improperly, thinking that 80-year-olds had been somehow overrepresented in the sample.

Let’s look at another more complex variable in the NHANES dataset: Height. The histogram of height values is plotted in the right panel of Figure 4.4. The first thing you should notice about this distribution is that most of its density is centered around about 170 cm, but the distribution has a “tail” on the left; there are a small number of individuals with much smaller heights. What do you think is going on here? You may have intuited that the small heights are coming from the children in the dataset. One way to examine this is to plot the histogram with separate colors for children and adults (left panel of Figure 4.5). This shows that all of the very short heights were indeed coming from children in the sample. Let’s create a new version of NHANES that only includes adults, and then plot the histogram just for them (right panel of Figure 4.5). In that plot, the distribution looks much more symmetric. As we will see later, this is a nice example of a normal (or Gaussian) distribution.
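A sketch of how the height histograms described above could be produced with ggplot2, assuming the NHANES and tidyverse packages are loaded (the exact styling of the book’s figures may differ):

```r
library(NHANES)
library(tidyverse)

# color the height histogram by whether each person is a child
NHANES %>%
  drop_na(Height) %>%
  mutate(isChild = Age < 18) %>%
  ggplot(aes(x = Height, fill = isChild)) +
  geom_histogram(binwidth = 1)

# create an adults-only version of the data and plot its histogram
NHANES_adult <- NHANES %>%
  filter(Age >= 18) %>%
  drop_na(Height)

ggplot(NHANES_adult, aes(x = Height)) +
  geom_histogram(binwidth = 1)
```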
4.2.4 Histogram bins

In our earlier example with the sleep variable, the data were reported in whole numbers, and we simply counted the number of people who reported each possible value. However, if you look at a few values of the Height variable in NHANES, you will see that it was measured in centimeters down to the first decimal place:

Table 4.4: A few values from the NHANES data frame.

| Height |
|---|
| 170 |
| 170 |
| 168 |
| 155 |
| 174 |
| 174 |

Panel C of Figure 4.5 shows a histogram that counts the density of each possible value. That histogram looks really jagged, which is because of the variability in specific decimal place values. For example, the value 173.2 occurs 32 times, while the value 173.3 only occurs 15 times. We probably don’t think that there is really such a big difference between the prevalence of these two heights; more likely this is just due to random variability in our sample of people.

In general, when we create a histogram of data that are continuous or where there are many possible values, we will bin the values so that instead of counting and plotting the frequency of every specific value, we count and plot the frequency of values falling within a specific range. That’s why the plot looked less jagged above in Panel B of Figure 4.5; in this panel we set the bin width to 1, which means that the histogram is computed by combining values within bins with a width of one; thus, the values 1.3, 1.5, and 1.6 would all count toward the frequency of the same bin, which would span from values equal to one up through values less than 2. Note that once the bin size has been selected, then the number of bins is determined by the data:

$\text{number of bins} = \frac{\text{range of scores}}{\text{bin width}}$

There is no hard and fast rule for how to choose the optimal bin width. Occasionally it will be obvious (as when there are only a few possible values), but in many cases it would require trial and error. There are methods that try to find an optimal bin size automatically, such as the Freedman-Diaconis method (which is implemented within the nclass.FD() function in R); we will use this function in some later examples.
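A small sketch of how the Freedman-Diaconis rule could be used to choose the number of bins, using the `NHANES_adult` data frame from the sketch above (`nclass.FD()` is part of base R’s grDevices package):

```r
# number of bins suggested by the Freedman-Diaconis rule
# for the adult height values
fd_bins <- nclass.FD(NHANES_adult$Height)
fd_bins

# pass that number of bins to geom_histogram()
ggplot(NHANES_adult, aes(x = Height)) +
  geom_histogram(bins = fd_bins)
```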
Datasets are like snowflakes, in that every one is different, but nonetheless there are patterns that one often sees in different types of data. This allows us to use idealized representations of the data to further summarize them. Let’s take the adult height data plotted in 4.5, and plot them alongside a very different variable: pulse rate (heartbeats per minute), also measured in NHANES (see Figure 4.6). While these plots certainly don’t look exactly the same, both have the general characteristic of being relatively symmetric around a rounded peak in the middle. This shape is in fact one of the commonly observed shapes of distributions when we collect data, which we call the normal (or Gaussian) distribution. This distribution is defined in terms of two values (which we call parameters of the distribution): the location of the center peak (which we call the mean) and the width of the distribution (which is described in terms of a parameter called the standard deviation). Figure 4.6 shows the appropriate normal distribution plotted on top of each of the histrograms.You can see that although the curves don’t fit the data exactly, they do a pretty good job of characterizing the distribution – with just two numbers! As we will see later in the course when we discuss the central limit theorem, there is a deep mathematical reason why many variables in the world exhibit the form of a normal distribution. 4.3.1 Skewness The examples in Figure 4.6 followed the normal distribution fairly well, but in many cases the data will deviate in a systematic way from the normal distribution. One way in which the data can deviate is when they are asymmetric, such that one tail of the distribution is more dense than the other. We refer to this as “skewness”. Skewness commonly occurs when the measurement is constrained to be non-negative, such as when we are counting things or measuring elapsed times (and thus the variable can’t take on negative values). An example of skewness can be seen in the average waiting times at the airport security lines at San Francisco International Airport, plotted in the left panel of Figure 4.7. You can see that while most wait times are less than 20 minutes, there are a number of cases where they are much longer, over 60 minutes! This is an example of a “right-skewed” distribution, where the right tail is longer than the left; these are common when looking at counts or measured times, which can’t be less than zero. It’s less common to see “left-skewed” distributions, but they can occur, for example when looking at fractional values that can’t take a value greater than one. 4.3.2 Long-tailed distributions Historically, statistics has focused heavily on data that are normally distributed, but there are many data types that look nothing like the normal distribution. In particular, many real-world distributions are “long-tailed”, meaning that the right tail extends far beyond the most typical members of the distribution. One of the most interesting types of data where long-tailed distributions occur arise from the analysis of social networks. For an example, let’s look at the Facebook friend data from the Stanford Large Network Database and plot the histogram of number of friends across the 3,663 people in the database (see right panel of Figure 4.7). As we can see, this distribution has a very long right tail – the average person has 24.09 friends, while the person with the most friends (denoted by the blue dot) has 1043! 
Long-tailed distributions are increasingly being recognized in the real world. In particular, many features of complex systems are characterized by these distributions, from the frequency of words in text, to the number of flights in and out of different airports, to the connectivity of brain networks. There are a number of different ways that long-tailed distributions can come about, but a common one occurs in cases of the so-called “Matthew effect” from the Christian Bible: For to every one who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away. — Matthew 25:29, Revised Standard Version This is often paraphrased as “the rich get richer”. In these situations, advantages compound, such that those with more friends have access to even more new friends, and those with more money have the ability do things that increase their riches even more. As the course progresses we will see several examples of long-tailed distributions, and we should keep in mind that many of the tools in statistics can fail when faced with long-tailed data. As Nassim Nicholas Taleb pointed out in his book “The Black Swan”, such long-tailed distributions played a critical role in the 2008 financial crisis, because many of the financial models used by traders assumed that financial systems would follow the normal distribution, which they clearly did not. 4.04: Suggested Readings • The Black Swan: The Impact of the Highly Improbable, by Nassim Nicholas Taleb
This chapter will introduce you to how to summarize data using R, as well as providing an introduction to a popular set of R tools known as the “Tidyverse.” Before doing anything else we need to load the libraries that we will use in this notebook. ``````library(tidyverse) library(cowplot) library(knitr) set.seed(123456) opts_chunk\$set(tidy.opts=list(width.cutoff=80)) options(tibble.width = 60)`````` We will use the NHANES dataset for several of our examples, so let’s load the library that contains the data. ``````# load the NHANES data library # first unload it if it's already loaded, to make sure # we have a clean version if("NHANES" %in% (.packages())){ detach('package:NHANES', unload=TRUE) } library(NHANES)``` ``` 05: Summarizing Data with R (with Lucy King) In this chapter we will introduce a way of working with data in R that is often referred to as the “Tidyverse.” This comprises a set of packages that provide various tools for working with data, as well as a few special ways of using those functions 5.1.1 Making a data frame using tibble() The tidyverse provides its over version of a data frame, which known as a tibble. A tibble is a data frame but with some smart tweaks that make it easier to work with, expecially when using functions from the tidyverse. See here for more information on the function `tibble()`: https://r4ds.had.co.nz/tibbles.html ``````# first create the individual variables n <- c("russ", "lucy", "jaclyn", "tyler") x <- c(1, 2, 3, 4) y <- c(4, 5, 6, 7) z <- c(7, 8, 9, 10) # create the data frame myDataFrame <- tibble( n, #list each of your columns in the order you want them x, y, z ) myDataFrame`````` ``````## # A tibble: 4 x 4 ## n x y z ## <chr> <dbl> <dbl> <dbl> ## 1 russ 1 4 7 ## 2 lucy 2 5 8 ## 3 jaclyn 3 6 9 ## 4 tyler 4 7 10`````` Take a quick look at the properties of the data frame using `glimpse()`: ``glimpse(myDataFrame) `` ``````## Observations: 4 ## Variables: 4 ## \$ n <chr> "russ", "lucy", "jaclyn", "tyler" ## \$ x <dbl> 1, 2, 3, 4 ## \$ y <dbl> 4, 5, 6, 7 ## \$ z <dbl> 7, 8, 9, 10`````` 5.1.2 Selecting an element There are various ways to access the contents within a data frame. 5.1.2.1 Selecting a row or column by name ``myDataFrame\$x`` ``## [1] 1 2 3 4`` The first index refers to the row, the second to the column. ``myDataFrame[1, 2]`` ``````## # A tibble: 1 x 1 ## x ## <dbl> ## 1 1`````` ``myDataFrame[2, 3]`` ``````## # A tibble: 1 x 1 ## y ## <dbl> ## 1 5`````` 5.1.2.2 Selecting a row or column by index ``myDataFrame[1, ]`` ``````## # A tibble: 1 x 4 ## n x y z ## <chr> <dbl> <dbl> <dbl> ## 1 russ 1 4 7`````` ``myDataFrame[, 1]`` ``````## # A tibble: 4 x 1 ## n ## <chr> ## 1 russ ## 2 lucy ## 3 jaclyn ## 4 tyler`````` 5.1.2.3 Select a set of rows ``````myDataFrame %>% slice(1:2) `````` ``````## # A tibble: 2 x 4 ## n x y z ## <chr> <dbl> <dbl> <dbl> ## 1 russ 1 4 7 ## 2 lucy 2 5 8`````` `slice()` is a function that selects out rows based on their row number. You will also notice something we haven’t discussed before: `%>%`. This is called a “pipe”, which is commonly used within the tidyverse; you can read more here. A pipe takes the output from one command and feeds it as input to the next command. In this case, simply writing the name of the data frame (myDataFrame) causes it to be input to the `slice()` command following the pipe. The benefit of pipes will become especially apparent when we want to start stringing together multiple data processing operations into a single command. 
In this example, no new variable is created - the output is printed to the screen, just like it would be if you typed the name of the variable. If you wanted to save it to a new variable, you would use the `<-` assignment operator, like this: ``````myDataFrameSlice <- myDataFrame %>% slice(1:2) myDataFrameSlice`````` ``````## # A tibble: 2 x 4 ## n x y z ## <chr> <dbl> <dbl> <dbl> ## 1 russ 1 4 7 ## 2 lucy 2 5 8`````` 5.1.2.4 Select a set of rows based on specific value(s) ``````myDataFrame %>% filter(n == "russ")`````` ``````## # A tibble: 1 x 4 ## n x y z ## <chr> <dbl> <dbl> <dbl> ## 1 russ 1 4 7`````` `filter()` is a function that retains only those rows that meet your stated criteria. We can also filter for multiple criteria at once — in this example, the `|` symbol indicates “or”: ``````myDataFrame %>% filter(n == "russ" | n == "lucy")`````` ``````## # A tibble: 2 x 4 ## n x y z ## <chr> <dbl> <dbl> <dbl> ## 1 russ 1 4 7 ## 2 lucy 2 5 8`````` 5.1.2.5 Select a set of columns ``````myDataFrame %>% select(x:y)`````` ``````## # A tibble: 4 x 2 ## x y ## <dbl> <dbl> ## 1 1 4 ## 2 2 5 ## 3 3 6 ## 4 4 7`````` `select()` is a function that selects out only those columns you specify using their names You can also specify a vector of columns to select. ``````myDataFrame %>% select(c(x,z))`````` ``````## # A tibble: 4 x 2 ## x z ## <dbl> <dbl> ## 1 1 7 ## 2 2 8 ## 3 3 9 ## 4 4 10`````` 5.1.3 Adding a row or column add a named row ``````tiffanyDataFrame <- tibble( n = "tiffany", x = 13, y = 14, z = 15 ) myDataFrame %>% bind_rows(tiffanyDataFrame)`````` ``````## # A tibble: 5 x 4 ## n x y z ## <chr> <dbl> <dbl> <dbl> ## 1 russ 1 4 7 ## 2 lucy 2 5 8 ## 3 jaclyn 3 6 9 ## 4 tyler 4 7 10 ## 5 tiffany 13 14 15`````` `bind_rows()` is a function that combines the rows from another dataframe to the current dataframe
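The subsection above showed how to add a row; for completeness, here is a brief sketch of one way to add a column using `bind_cols()` from dplyr (the `wDataFrame` name is just for illustration; the `mutate()` function covered in the next section is usually the more natural way to add a column):

```r
# create a single-column tibble with one value per existing row
wDataFrame <- tibble(w = c(11, 12, 13, 14))

# bind it to the existing columns of myDataFrame
myDataFrame %>%
  bind_cols(wDataFrame)
```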
Often we will want to either create a new variable based on an existing variable, or modify the value of an existing variable. Within the tidyverse, we do this using a function called `mutate()`. Let’s start with a toy example by creating a data frame containing a single variable. ``````toy_df <- data.frame(x = c(1,2,3,4)) glimpse(toy_df)`````` ``````## Observations: 4 ## Variables: 1 ## $ x <dbl> 1, 2, 3, 4`````` Let’s say that we wanted to create a new variable called `y` that would contain the value of x multiplied by 10. We could do this using `mutate()` and then assign the result back to the same data frame: ``````toy_df <- toy_df %>% # create a new variable called y that contains x*10 mutate(y = x*10) glimpse(toy_df)`````` ``````## Observations: 4 ## Variables: 2 ## $ x <dbl> 1, 2, 3, 4 ## $ y <dbl> 10, 20, 30, 40`````` We could also overwrite a variable with a new value: ``````toy_df2 <- toy_df %>% # overwrite y with a new value equal to y + 1 mutate(y = y + 1) glimpse(toy_df2)`````` ``````## Observations: 4 ## Variables: 2 ## $ x <dbl> 1, 2, 3, 4 ## $ y <dbl> 11, 21, 31, 41`````` We will use `mutate()` often so it’s an important function to understand. Here we can use it with our example data frame to create a new variable that is the sum of several other variables. ``````myDataFrame <- myDataFrame %>% mutate(total = x + y + z) kable(myDataFrame)``````

| n | x | y | z | total |
|---|---|---|---|---|
| russ | 1 | 4 | 7 | 12 |
| lucy | 2 | 5 | 8 | 15 |
| jaclyn | 3 | 6 | 9 | 18 |
| tyler | 4 | 7 | 10 | 21 |

mutate() is a function that creates a new variable in a data frame using the existing variables. In this case, it creates a variable called total that is the sum of the existing variables x, y, and z.

5.2.1 Remove a column using the select() function

Adding a minus sign to the name of a variable within the `select()` command will remove that variable, leaving all of the others. ``````myDataFrame <- myDataFrame %>% dplyr::select(-total) kable(myDataFrame)``````

| n | x | y | z |
|---|---|---|---|
| russ | 1 | 4 | 7 |
| lucy | 2 | 5 | 8 |
| jaclyn | 3 | 6 | 9 |
| tyler | 4 | 7 | 10 |

5.03: Tidyverse in Action

To see the tidyverse in action, let’s clean up the NHANES dataset. Each individual in the NHANES dataset has a unique identifier stored in the variable `ID`. First let’s look at the number of rows in the dataset: ``nrow(NHANES)`` ``## [1] 10000`` Now let’s see how many unique IDs there are. The `unique()` function returns a vector containing all of the unique values for a particular variable, and the `length()` function returns the length of the resulting vector. ``length(unique(NHANES$ID))`` ``## [1] 6779`` This shows us that while there are 10,000 observations in the data frame, there are only 6779 unique IDs. This means that if we were to use the entire dataset, we would be reusing data from some individuals, which could give us incorrect results. For this reason, we would like to discard any observations that are duplicated. Let’s create a new variable called `NHANES_unique` that will contain only the distinct observations, with no individuals appearing more than once. The `dplyr` library provides a function called `distinct()` that will do this for us. You may notice that we didn’t explicitly load the `dplyr` library above; however, if you look at the messages that appeared when we loaded the `tidyverse` library, you will see that it loaded `dplyr` for us. To create the new data frame with unique observations, we will pipe the NHANES data frame into the `distinct()` function and then save the output to our new variable.
``````NHANES_unique <- NHANES %>% distinct(ID, .keep_all = TRUE)`````` If we count the number of rows in the new data frame, it should be the same as the number of unique IDs (6779): ``nrow(NHANES_unique)`` ``## [1] 6779`` In the next example you will see the power of pipes come to life, when we start tying together multiple functions into a single operation (or “pipeline”).

5.06: Computing a Cumulative Distribution (Section 4.2.2)

Let’s compute a cumulative distribution for the `SleepHrsNight` variable in NHANES. This looks very similar to what we saw in the previous section. ``````# create summary table for relative frequency of different # values of SleepHrsNight SleepHrsNight_cumulative <- NHANES_unique %>% # drop NA values for SleepHrsNight variable drop_na(SleepHrsNight) %>% # remove other variables dplyr::select(SleepHrsNight) %>% # group by values group_by(SleepHrsNight) %>% # create summary table summarize(AbsoluteFrequency = n()) %>% # create relative and cumulative frequencies mutate( RelativeFrequency = AbsoluteFrequency / sum(AbsoluteFrequency), CumulativeDensity = cumsum(RelativeFrequency) ) kable(SleepHrsNight_cumulative)``````

| SleepHrsNight | AbsoluteFrequency | RelativeFrequency | CumulativeDensity |
|---|---|---|---|
| 2 | 9 | 0.00 | 0.00 |
| 3 | 49 | 0.01 | 0.01 |
| 4 | 200 | 0.04 | 0.05 |
| 5 | 406 | 0.08 | 0.13 |
| 6 | 1172 | 0.23 | 0.36 |
| 7 | 1394 | 0.28 | 0.64 |
| 8 | 1405 | 0.28 | 0.92 |
| 9 | 271 | 0.05 | 0.97 |
| 10 | 97 | 0.02 | 0.99 |
| 11 | 15 | 0.00 | 1.00 |
| 12 | 17 | 0.00 | 1.00 |
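A quick sketch of how the cumulative density computed above could be visualized, assuming ggplot2 has been loaded as part of the tidyverse:

```r
# plot the cumulative density as a function of hours of sleep
ggplot(SleepHrsNight_cumulative,
       aes(x = SleepHrsNight, y = CumulativeDensity)) +
  geom_line() +
  geom_point()
```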
Now that you know a bit about the tidyverse, let’s look at the various tools that it provides for working with data. We will use as an example an analysis of whether attitudes about statistics are different between the different student year groups in the class. 5.7.1 Statistics attitude data from course survey These data were collected using the Attitudes Towards Statistics (ATS) scale (from https://www.stat.auckland.ac.nz/~iase/cblumberg/wise2.pdf). The 29-item ATS has two subscales. The Attitudes Toward Field subscale consists of the following 20 items, with reverse-keyed items indicated by an “(R)”: 1, 3, 5, 6(R), 9, 10(R), 11, 13, 14(R), 16(R), 17, 19, 20(R), 21, 22, 23, 24, 26, 28(R), 29 The Attitudes Toward Course subscale consists of the following 9 items: 2(R), 4(R), 7(R), 8, 12(R), 15(R), 18(R), 25(R), 27(R) For our purposes, we will just combine all 29 items together, rather than separating them into these subscales. Note: I have removed the data from the graduate students and 5+ year students, since those would be too easily identifiable given how few there are. Let’s first save the file path to the data. ``attitudeData_file <- 'data/statsAttitude.txt'`` Next, let’s load the data from the file using the tidyverse function `read_tsv()`. There are several functions available for reading in different file formats as part of the the `readr` tidyverse package. ``````attitudeData <- read_tsv(attitudeData_file) glimpse(attitudeData)`````` ``````## Observations: 148 ## Variables: 31 ## \$ `What year are you at Stanford?` <dbl> … ## \$ `Have you ever taken a statistics course before?` <chr> … ## \$ `I feel that statistics will be useful to me in my profession.` <dbl> … ## \$ `The thought of being enrolled in a statistics course makes me nervous.` <dbl> … ## \$ `A good researcher must have training in statistics.` <dbl> … ## \$ `Statistics seems very mysterious to me.` <dbl> … ## \$ `Most people would benefit from taking a statistics course.` <dbl> … ## \$ `I have difficulty seeing how statistics relates to my field of study.` <dbl> … ## \$ `I see being enrolled in a statistics course as a very unpleasant experience.` <dbl> … ## \$ `I would like to continue my statistical training in an advanced course.` <dbl> … ## \$ `Statistics will be useful to me in comparing the relative merits of different objects, methods, programs, etc.` <dbl> … ## \$ `Statistics is not really very useful because it tells us what we already know anyway.` <dbl> … ## \$ `Statistical training is relevant to my performance in my field of study.` <dbl> … ## \$ `I wish that I could have avoided taking my statistics course.` <dbl> … ## \$ `Statistics is a worthwhile part of my professional training.` <dbl> … ## \$ `Statistics is too math oriented to be of much use to me in the future.` <dbl> … ## \$ `I get upset at the thought of enrolling in another statistics course.` <dbl> … ## \$ `Statistical analysis is best left to the "experts" and should not be part of a lay professional's job.` <dbl> … ## \$ `Statistics is an inseparable aspect of scientific research.` <dbl> … ## \$ `I feel intimidated when I have to deal with mathematical formulas.` <dbl> … ## \$ `I am excited at the prospect of actually using statistics in my job.` <dbl> … ## \$ `Studying statistics is a waste of time.` <dbl> … ## \$ `My statistical training will help me better understand the research being done in my field of study.` <dbl> … ## \$ `One becomes a more effective "consumer" of research findings if one has some training in statistics.` <dbl> … 
## \$ `Training in statistics makes for a more well-rounded professional experience.` <dbl> … ## \$ `Statistical thinking can play a useful role in everyday life.` <dbl> … ## \$ `Dealing with numbers makes me uneasy.` <dbl> … ## \$ `I feel that statistics should be required early in one's professional training.` <dbl> … ## \$ `Statistics is too complicated for me to use effectively.` <dbl> … ## \$ `Statistical training is not really useful for most professionals.` <dbl> … ## \$ `Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write.` <dbl> …`````` Right now the variable names are unwieldy, since they include the entire name of the item; this is how Google Forms stores the data. Let’s change the variable names to be somewhat more readable. We will change the names to “ats” where is replaced with the question number and ats indicates Attitudes Toward Statistics scale. We can create these names using the `rename()` and `paste0()` functions. `rename()` is pretty self-explanatory: a new name is assigned to an old name or a column position. The `paste0()` function takes a string along with a set of numbers, and creates a vector that combines the string with the number. ``````nQuestions <- 29 # other than the first two columns, the rest of the columns are for the 29 questions in the statistics attitude survey; we'll use this below to rename these columns based on their question number # use rename to change the first two column names # rename can refer to columns either by their number or their name attitudeData <- attitudeData %>% rename( # rename using columm numbers # The first column is the year Year = 1, # The second column indicates whether the person took stats before StatsBefore = 2 ) %>% rename_at( # rename all the columns except Year and StatsBefore vars(-Year, -StatsBefore), #rename by pasting the word "stat" and the number funs(paste0('ats', 1:nQuestions)) ) # print out the column names names(attitudeData)`````` ``````## [1] "Year" "StatsBefore" "ats1" "ats2" "ats3" ## [6] "ats4" "ats5" "ats6" "ats7" "ats8" ## [11] "ats9" "ats10" "ats11" "ats12" "ats13" ## [16] "ats14" "ats15" "ats16" "ats17" "ats18" ## [21] "ats19" "ats20" "ats21" "ats22" "ats23" ## [26] "ats24" "ats25" "ats26" "ats27" "ats28" ## [31] "ats29"`````` ``````#check out the data again glimpse(attitudeData)`````` ``````## Observations: 148 ## Variables: 31 ## \$ Year <dbl> 3, 4, 2, 1, 2, 3, 4, 2, 2, 2, 4, 2, 3… ## \$ StatsBefore <chr> "Yes", "No", "No", "Yes", "No", "No",… ## \$ ats1 <dbl> 6, 4, 6, 3, 7, 4, 6, 5, 7, 5, 5, 4, 2… ## \$ ats2 <dbl> 1, 5, 5, 2, 7, 5, 5, 4, 2, 2, 3, 3, 7… ## \$ ats3 <dbl> 7, 6, 5, 7, 2, 4, 7, 7, 7, 5, 6, 5, 7… ## \$ ats4 <dbl> 2, 5, 5, 2, 7, 3, 3, 4, 5, 3, 3, 2, 3… ## \$ ats5 <dbl> 7, 5, 6, 7, 5, 4, 6, 6, 7, 5, 3, 5, 4… ## \$ ats6 <dbl> 1, 4, 5, 2, 2, 4, 2, 3, 1, 2, 2, 3, 1… ## \$ ats7 <dbl> 1, 4, 3, 2, 4, 4, 2, 2, 3, 2, 4, 2, 4… ## \$ ats8 <dbl> 2, 1, 4, 3, 1, 4, 4, 4, 7, 3, 2, 4, 1… ## \$ ats9 <dbl> 5, 4, 5, 5, 7, 4, 5, 5, 7, 6, 3, 5, 5… ## \$ ats10 <dbl> 2, 3, 2, 2, 1, 4, 2, 2, 1, 3, 3, 1, 1… ## \$ ats11 <dbl> 6, 4, 6, 2, 7, 4, 6, 5, 7, 3, 3, 4, 2… ## \$ ats12 <dbl> 2, 4, 1, 2, 5, 7, 2, 1, 2, 4, 4, 2, 4… ## \$ ats13 <dbl> 6, 4, 5, 5, 7, 3, 6, 6, 7, 5, 2, 5, 1… ## \$ ats14 <dbl> 2, 4, 3, 3, 3, 4, 2, 1, 1, 3, 3, 2, 1… ## \$ ats15 <dbl> 2, 4, 3, 3, 5, 6, 3, 4, 2, 3, 2, 4, 3… ## \$ ats16 <dbl> 1, 3, 2, 5, 1, 5, 2, 1, 2, 3, 2, 2, 1… ## \$ ats17 <dbl> 7, 7, 5, 7, 7, 4, 6, 6, 7, 6, 6, 7, 4… ## \$ ats18 <dbl> 2, 5, 4, 5, 7, 4, 2, 4, 2, 5, 2, 4, 
6… ## \$ ats19 <dbl> 3, 3, 4, 3, 2, 3, 6, 5, 7, 3, 3, 5, 2… ## \$ ats20 <dbl> 1, 4, 1, 2, 1, 4, 2, 2, 1, 2, 3, 2, 3… ## \$ ats21 <dbl> 6, 3, 5, 5, 7, 5, 6, 5, 7, 3, 4, 6, 6… ## \$ ats22 <dbl> 7, 4, 5, 6, 7, 5, 6, 5, 7, 5, 5, 5, 5… ## \$ ats23 <dbl> 6, 4, 6, 6, 7, 5, 6, 7, 7, 5, 3, 5, 3… ## \$ ats24 <dbl> 7, 4, 4, 6, 7, 5, 6, 5, 7, 5, 5, 5, 3… ## \$ ats25 <dbl> 3, 5, 3, 3, 5, 4, 3, 4, 2, 3, 3, 2, 5… ## \$ ats26 <dbl> 7, 4, 5, 6, 2, 4, 6, 5, 7, 3, 4, 4, 2… ## \$ ats27 <dbl> 2, 4, 2, 2, 4, 4, 2, 1, 2, 3, 3, 2, 1… ## \$ ats28 <dbl> 2, 4, 3, 5, 2, 3, 3, 1, 1, 4, 3, 2, 2… ## \$ ats29 <dbl> 4, 4, 3, 6, 2, 1, 5, 3, 3, 3, 2, 3, 2…`````` The next thing we need to do is to create an ID for each individual. To do this, we will use the `rownames_to_column()` function from the tidyverse. This creates a new variable (which we name “ID”) that contains the row names from the data frame; thsee are simply the numbers 1 to N. ``````# let's add a participant ID so that we will be able to identify them later attitudeData <- attitudeData %>% rownames_to_column(var = 'ID') head(attitudeData)`````` ``````## # A tibble: 6 x 32 ## ID Year StatsBefore ats1 ats2 ats3 ats4 ats5 ## <chr> <dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 1 3 Yes 6 1 7 2 7 ## 2 2 4 No 4 5 6 5 5 ## 3 3 2 No 6 5 5 5 6 ## 4 4 1 Yes 3 2 7 2 7 ## 5 5 2 No 7 7 2 7 5 ## 6 6 3 No 4 5 4 3 4 ## # … with 24 more variables: ats6 <dbl>, ats7 <dbl>, ## # ats8 <dbl>, ats9 <dbl>, ats10 <dbl>, ats11 <dbl>, ## # ats12 <dbl>, ats13 <dbl>, ats14 <dbl>, ats15 <dbl>, ## # ats16 <dbl>, ats17 <dbl>, ats18 <dbl>, ats19 <dbl>, ## # ats20 <dbl>, ats21 <dbl>, ats22 <dbl>, ats23 <dbl>, ## # ats24 <dbl>, ats25 <dbl>, ats26 <dbl>, ats27 <dbl>, ## # ats28 <dbl>, ats29 <dbl>`````` If you look closely at the data, you can see that some of the participants have some missing responses. We can count them up for each individual and create a new variable to store this to a new variable called `numNA` using `mutate()`. We can also create a table showing how many participants have a particular number of NA values. Here we use two additional commands that you haven’t seen yet. The `group_by()` function tells other functions to do their analyses while breaking the data into groups based on one of the variables. Here we are going to want to summarize the number of people with each possible number of NAs, so we will group responses by the numNA variable that we are creating in the first command here. The summarize() function creates a summary of the data, with the new variables based on the data being fed in. In this case, we just want to count up the number of subjects in each group, which we can do using the special n() function from dpylr. ``````# compute the number of NAs for each participant attitudeData <- attitudeData %>% mutate( numNA = rowSums(is.na(.)) # we use the . symbol to tell the is.na function to look at the entire data frame ) # present a table with counts of the number of missing responses attitudeData %>% count(numNA)`````` ``````## # A tibble: 3 x 2 ## numNA n ## <dbl> <int> ## 1 0 141 ## 2 1 6 ## 3 2 1`````` We can see from the table that there are only a few participants with missing data; six people are missing one answer, and one is missing two answers. Let’s find those individuals, using the filter() command from dplyr. filter() returns the subset of rows from a data frame that match a particular test - in this case, whether numNA is > 0. 
``````attitudeData %>% filter(numNA > 0)`````` ``````## # A tibble: 7 x 33 ## ID Year StatsBefore ats1 ats2 ats3 ats4 ats5 ## <chr> <dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 42 2 No NA 2 7 5 6 ## 2 55 1 No 5 3 7 4 5 ## 3 90 1 No 7 2 7 5 7 ## 4 113 2 No 5 7 7 5 6 ## 5 117 2 Yes 6 6 7 4 6 ## 6 137 3 No 7 5 6 5 6 ## 7 139 1 No 7 5 7 5 6 ## # … with 25 more variables: ats6 <dbl>, ats7 <dbl>, ## # ats8 <dbl>, ats9 <dbl>, ats10 <dbl>, ats11 <dbl>, ## # ats12 <dbl>, ats13 <dbl>, ats14 <dbl>, ats15 <dbl>, ## # ats16 <dbl>, ats17 <dbl>, ats18 <dbl>, ats19 <dbl>, ## # ats20 <dbl>, ats21 <dbl>, ats22 <dbl>, ats23 <dbl>, ## # ats24 <dbl>, ats25 <dbl>, ats26 <dbl>, ats27 <dbl>, ## # ats28 <dbl>, ats29 <dbl>, numNA <dbl>`````` There are fancy techniques for trying to guess the value of missing data (known as “imputation”) but since the number of participants with missing values is small, let’s just drop those participants from the list. We can do this using the `drop_na()` function from the `tidyr` package, another tidyverse package that provides tools for cleaning data. We will also remove the numNA variable, since we won’t need it anymore after removing the subjects with missing answeres. We do this using the `select()` function from the `dplyr` tidyverse package, which selects or removes columns from a data frame. By putting a minus sign in front of numNA, we are telling it to remove that column. `select()` and `filter()` are similar - `select()` works on columns (i.e. variables) and `filter()` works on rows (i.e. observations). ``````# this is equivalent to drop_na(attitudeData) attitudeDataNoNA <- attitudeData %>% drop_na() %>% select(-numNA) glimpse(attitudeDataNoNA)`````` ``````## Observations: 141 ## Variables: 32 ## \$ ID <chr> "1", "2", "3", "4", "5", "6", "7", "8… ## \$ Year <dbl> 3, 4, 2, 1, 2, 3, 4, 2, 2, 2, 4, 2, 3… ## \$ StatsBefore <chr> "Yes", "No", "No", "Yes", "No", "No",… ## \$ ats1 <dbl> 6, 4, 6, 3, 7, 4, 6, 5, 7, 5, 5, 4, 2… ## \$ ats2 <dbl> 1, 5, 5, 2, 7, 5, 5, 4, 2, 2, 3, 3, 7… ## \$ ats3 <dbl> 7, 6, 5, 7, 2, 4, 7, 7, 7, 5, 6, 5, 7… ## \$ ats4 <dbl> 2, 5, 5, 2, 7, 3, 3, 4, 5, 3, 3, 2, 3… ## \$ ats5 <dbl> 7, 5, 6, 7, 5, 4, 6, 6, 7, 5, 3, 5, 4… ## \$ ats6 <dbl> 1, 4, 5, 2, 2, 4, 2, 3, 1, 2, 2, 3, 1… ## \$ ats7 <dbl> 1, 4, 3, 2, 4, 4, 2, 2, 3, 2, 4, 2, 4… ## \$ ats8 <dbl> 2, 1, 4, 3, 1, 4, 4, 4, 7, 3, 2, 4, 1… ## \$ ats9 <dbl> 5, 4, 5, 5, 7, 4, 5, 5, 7, 6, 3, 5, 5… ## \$ ats10 <dbl> 2, 3, 2, 2, 1, 4, 2, 2, 1, 3, 3, 1, 1… ## \$ ats11 <dbl> 6, 4, 6, 2, 7, 4, 6, 5, 7, 3, 3, 4, 2… ## \$ ats12 <dbl> 2, 4, 1, 2, 5, 7, 2, 1, 2, 4, 4, 2, 4… ## \$ ats13 <dbl> 6, 4, 5, 5, 7, 3, 6, 6, 7, 5, 2, 5, 1… ## \$ ats14 <dbl> 2, 4, 3, 3, 3, 4, 2, 1, 1, 3, 3, 2, 1… ## \$ ats15 <dbl> 2, 4, 3, 3, 5, 6, 3, 4, 2, 3, 2, 4, 3… ## \$ ats16 <dbl> 1, 3, 2, 5, 1, 5, 2, 1, 2, 3, 2, 2, 1… ## \$ ats17 <dbl> 7, 7, 5, 7, 7, 4, 6, 6, 7, 6, 6, 7, 4… ## \$ ats18 <dbl> 2, 5, 4, 5, 7, 4, 2, 4, 2, 5, 2, 4, 6… ## \$ ats19 <dbl> 3, 3, 4, 3, 2, 3, 6, 5, 7, 3, 3, 5, 2… ## \$ ats20 <dbl> 1, 4, 1, 2, 1, 4, 2, 2, 1, 2, 3, 2, 3… ## \$ ats21 <dbl> 6, 3, 5, 5, 7, 5, 6, 5, 7, 3, 4, 6, 6… ## \$ ats22 <dbl> 7, 4, 5, 6, 7, 5, 6, 5, 7, 5, 5, 5, 5… ## \$ ats23 <dbl> 6, 4, 6, 6, 7, 5, 6, 7, 7, 5, 3, 5, 3… ## \$ ats24 <dbl> 7, 4, 4, 6, 7, 5, 6, 5, 7, 5, 5, 5, 3… ## \$ ats25 <dbl> 3, 5, 3, 3, 5, 4, 3, 4, 2, 3, 3, 2, 5… ## \$ ats26 <dbl> 7, 4, 5, 6, 2, 4, 6, 5, 7, 3, 4, 4, 2… ## \$ ats27 <dbl> 2, 4, 2, 2, 4, 4, 2, 1, 2, 3, 3, 2, 1… ## \$ ats28 <dbl> 2, 4, 3, 5, 2, 3, 3, 1, 1, 4, 3, 2, 2… ## \$ ats29 <dbl> 4, 4, 3, 6, 2, 1, 5, 3, 3, 3, 2, 
3, 2…``````

Try the following on your own: Using the attitudeData data frame, drop the NA values, create a new variable called mystery that contains a value of 1 for anyone who answered 7 to question ats4 ("Statistics seems very mysterious to me"). Create a summary that includes the number of people reporting 7 on this question, and the proportion of people who reported 7.

5.7.1.1 Tidy data

These data are in a format that meets the principles of "tidy data", which state the following:
• Each variable must have its own column.
• Each observation must have its own row.
• Each value must have its own cell.
This is shown graphically in the following figure (from Hadley Wickham, developer of the "tidyverse"): [Following three rules makes a dataset tidy: variables are in columns, observations are in rows, and values are in cells.] In our case, each column represents a variable: `ID` identifies which student responded, `Year` contains their year at Stanford, `StatsBefore` contains whether or not they have taken statistics before, and ats1 through ats29 contain their responses to each item on the ATS scale. Each observation (row) is a response from one individual student. Each value has its own cell (e.g., the values for `Year` and `StatsBefore` are stored in separate cells in separate columns). For an example of data that are NOT tidy, take a look at these data on Belief in Hell - click on the "Table" tab to see the data.
• What are the variables?
• Why aren't these data tidy?

5.7.1.2 Recoding data

We now have tidy data; however, some of the ATS items require recoding. Specifically, some of the items need to be "reverse coded"; these items include: ats2, ats4, ats6, ats7, ats10, ats12, ats14, ats15, ats16, ats18, ats20, ats25, ats27 and ats28. The raw responses for each item are on the 1-7 scale; therefore, for the reverse coded items, we need to reverse them by subtracting the raw score from 8 (such that 7 becomes 1 and 1 becomes 7). To recode these items, we will use the tidyverse `mutate()` function. It's a good idea when recoding to preserve the raw original variables and create new recoded variables with different names. There are two ways we can use the `mutate()` function to recode these variables. The first way is easier for a newcomer to understand, but it is less efficient and more prone to error.
Specifically, we repeat the same code for every variable we want to reverse code as follows: ``````attitudeDataNoNA %>% mutate( ats2_re = 8 - ats2, ats4_re = 8 - ats4, ats6_re = 8 - ats6, ats7_re = 8 - ats7, ats10_re = 8 - ats10, ats12_re = 8 - ats12, ats14_re = 8 - ats14, ats15_re = 8 - ats15, ats16_re = 8 - ats16, ats18_re = 8 - ats18, ats20_re = 8 - ats20, ats25_re = 8 - ats25, ats27_re = 8 - ats27, ats28_re = 8 - ats28 ) `````` ``````## # A tibble: 141 x 46 ## ID Year StatsBefore ats1 ats2 ats3 ats4 ats5 ## <chr> <dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 1 3 Yes 6 1 7 2 7 ## 2 2 4 No 4 5 6 5 5 ## 3 3 2 No 6 5 5 5 6 ## 4 4 1 Yes 3 2 7 2 7 ## 5 5 2 No 7 7 2 7 5 ## 6 6 3 No 4 5 4 3 4 ## 7 7 4 Yes 6 5 7 3 6 ## 8 8 2 Yes 5 4 7 4 6 ## 9 9 2 Yes 7 2 7 5 7 ## 10 10 2 Yes 5 2 5 3 5 ## # … with 131 more rows, and 38 more variables: ats6 <dbl>, ## # ats7 <dbl>, ats8 <dbl>, ats9 <dbl>, ats10 <dbl>, ## # ats11 <dbl>, ats12 <dbl>, ats13 <dbl>, ats14 <dbl>, ## # ats15 <dbl>, ats16 <dbl>, ats17 <dbl>, ats18 <dbl>, ## # ats19 <dbl>, ats20 <dbl>, ats21 <dbl>, ats22 <dbl>, ## # ats23 <dbl>, ats24 <dbl>, ats25 <dbl>, ats26 <dbl>, ## # ats27 <dbl>, ats28 <dbl>, ats29 <dbl>, ats2_re <dbl>, ## # ats4_re <dbl>, ats6_re <dbl>, ats7_re <dbl>, ## # ats10_re <dbl>, ats12_re <dbl>, ats14_re <dbl>, ## # ats15_re <dbl>, ats16_re <dbl>, ats18_re <dbl>, ## # ats20_re <dbl>, ats25_re <dbl>, ats27_re <dbl>, ## # ats28_re <dbl>`````` The second way is more efficient and takes advatange of the use of “scoped verbs” (https://dplyr.tidyverse.org/reference/scoped.html), which allow you to apply the same code to several variables at once. Because you don’t have to keep repeating the same code, you’re less likely to make an error: ``````ats_recode <- #create a vector of the names of the variables to recode c( "ats2", "ats4", "ats6", "ats7", "ats10", "ats12", "ats14", "ats15", "ats16", "ats18", "ats20", "ats25", "ats27", "ats28" ) attitudeDataNoNA <- attitudeDataNoNA %>% mutate_at( vars(ats_recode), # the variables you want to recode funs(re = 8 - .) # the function to apply to each variable )`````` Whenever we do an operation like this, it’s good to check that it actually worked correctly. It’s easy to make mistakes in coding, which is why it’s important to check your work as well as you can. We can quickly select a couple of the raw and recoded columns from our data and make sure things appear to have gone according to plan: ``````attitudeDataNoNA %>% select( ats2, ats2_re, ats4, ats4_re )`````` ``````## # A tibble: 141 x 4 ## ats2 ats2_re ats4 ats4_re ## <dbl> <dbl> <dbl> <dbl> ## 1 1 7 2 6 ## 2 5 3 5 3 ## 3 5 3 5 3 ## 4 2 6 2 6 ## 5 7 1 7 1 ## 6 5 3 3 5 ## 7 5 3 3 5 ## 8 4 4 4 4 ## 9 2 6 5 3 ## 10 2 6 3 5 ## # … with 131 more rows`````` Let’s also make sure that there are no responses outside of the 1-7 scale that we expect, and make sure that no one specified a year outside of the 1-4 range. 
``````attitudeDataNoNA %>% summarise_at( vars(ats1:ats28_re), funs(min, max) )``````

``````## # A tibble: 1 x 86 ## ats1_min ats2_min ats3_min ats4_min ats5_min ats6_min ## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 1 1 2 1 2 1 ## # … with 80 more variables: ats7_min <dbl>, ats8_min <dbl>, ## # ats9_min <dbl>, ats10_min <dbl>, ats11_min <dbl>, ## # ats12_min <dbl>, ats13_min <dbl>, ats14_min <dbl>, ## # ats15_min <dbl>, ats16_min <dbl>, ats17_min <dbl>, ## # ats18_min <dbl>, ats19_min <dbl>, ats20_min <dbl>, ## # ats21_min <dbl>, ats22_min <dbl>, ats23_min <dbl>, ## # ats24_min <dbl>, ats25_min <dbl>, ats26_min <dbl>, ## # ats27_min <dbl>, ats28_min <dbl>, ats29_min <dbl>, ## # ats2_re_min <dbl>, ats4_re_min <dbl>, ## # ats6_re_min <dbl>, ats7_re_min <dbl>, ## # ats10_re_min <dbl>, ats12_re_min <dbl>, ## # ats14_re_min <dbl>, ats15_re_min <dbl>, ## # ats16_re_min <dbl>, ats18_re_min <dbl>, ## # ats20_re_min <dbl>, ats25_re_min <dbl>, ## # ats27_re_min <dbl>, ats28_re_min <dbl>, ats1_max <dbl>, ## # ats2_max <dbl>, ats3_max <dbl>, ats4_max <dbl>, ## # ats5_max <dbl>, ats6_max <dbl>, ats7_max <dbl>, ## # ats8_max <dbl>, ats9_max <dbl>, ats10_max <dbl>, ## # ats11_max <dbl>, ats12_max <dbl>, ats13_max <dbl>, ## # ats14_max <dbl>, ats15_max <dbl>, ats16_max <dbl>, ## # ats17_max <dbl>, ats18_max <dbl>, ats19_max <dbl>, ## # ats20_max <dbl>, ats21_max <dbl>, ats22_max <dbl>, ## # ats23_max <dbl>, ats24_max <dbl>, ats25_max <dbl>, ## # ats26_max <dbl>, ats27_max <dbl>, ats28_max <dbl>, ## # ats29_max <dbl>, ats2_re_max <dbl>, ats4_re_max <dbl>, ## # ats6_re_max <dbl>, ats7_re_max <dbl>, ## # ats10_re_max <dbl>, ats12_re_max <dbl>, ## # ats14_re_max <dbl>, ats15_re_max <dbl>, ## # ats16_re_max <dbl>, ats18_re_max <dbl>, ## # ats20_re_max <dbl>, ats25_re_max <dbl>, ## # ats27_re_max <dbl>, ats28_re_max <dbl>``````

``````attitudeDataNoNA %>% summarise_at( vars(Year), funs(min, max) )``````

``````## # A tibble: 1 x 2 ## min max ## <dbl> <dbl> ## 1 1 4``````

5.7.1.3 Different data formats

Sometimes we need to reformat our data in order to analyze it or visualize it in a specific way. Two tidyverse functions, `gather()` and `spread()`, help us to do this. For example, say we want to examine the distribution of the raw responses to each of the ATS items (i.e., a histogram). In this case, we would need our x-axis to be a single column of the responses across all the ATS items. However, currently the responses for each item are stored in 29 different columns. This means that we need to create a new version of this dataset. It will have four columns:
- ID
- Year
- Question (for each of the ATS items)
- ResponseRaw (for the raw response to each of the ATS items)

Thus, we want to change the format of the dataset from being "wide" to being "long". We do this using the `gather()` function. `gather()` takes a number of variables and reformats them into two variables: one that contains the variable values, and another called the "key" that tells us which variable the value came from. In this case, we want it to reformat the data so that each response to an ATS question is in a separate row and the key column tells us which ATS question it corresponds to. It is much better to see this in practice than to explain in words!
``````attitudeData_long <- attitudeDataNoNA %>% select(-ats_recode) %>% #remove the raw variables that you recoded gather( key = question, # key refers to the new variable containing the question number value = response, # value refers to the new response variable -ID, -Year, -StatsBefore #the only variables we DON'T want to gather ) attitudeData_long %>% slice(1:20)`````` ``````## # A tibble: 20 x 5 ## ID Year StatsBefore question response ## <chr> <dbl> <chr> <chr> <dbl> ## 1 1 3 Yes ats1 6 ## 2 2 4 No ats1 4 ## 3 3 2 No ats1 6 ## 4 4 1 Yes ats1 3 ## 5 5 2 No ats1 7 ## 6 6 3 No ats1 4 ## 7 7 4 Yes ats1 6 ## 8 8 2 Yes ats1 5 ## 9 9 2 Yes ats1 7 ## 10 10 2 Yes ats1 5 ## 11 11 4 No ats1 5 ## 12 12 2 No ats1 4 ## 13 13 3 Yes ats1 2 ## 14 14 1 Yes ats1 6 ## 15 15 2 No ats1 7 ## 16 16 4 No ats1 7 ## 17 17 2 No ats1 7 ## 18 18 2 No ats1 6 ## 19 19 1 No ats1 6 ## 20 20 1 No ats1 3`````` ``glimpse(attitudeData_long)`` ``````## Observations: 4,089 ## Variables: 5 ## \$ ID <chr> "1", "2", "3", "4", "5", "6", "7", "8… ## \$ Year <dbl> 3, 4, 2, 1, 2, 3, 4, 2, 2, 2, 4, 2, 3… ## \$ StatsBefore <chr> "Yes", "No", "No", "Yes", "No", "No",… ## \$ question <chr> "ats1", "ats1", "ats1", "ats1", "ats1… ## \$ response <dbl> 6, 4, 6, 3, 7, 4, 6, 5, 7, 5, 5, 4, 2…`````` Say we now wanted to undo the `gather()` and return our dataset to wide format. For this, we would use the function `spread()`. ``````attitudeData_wide <- attitudeData_long %>% spread( key = question, #key refers to the variable indicating which question each response belongs to value = response ) attitudeData_wide %>% slice(1:20)`````` ``````## # A tibble: 20 x 32 ## ID Year StatsBefore ats1 ats10_re ats11 ats12_re ## <chr> <dbl> <chr> <dbl> <dbl> <dbl> <dbl> ## 1 1 3 Yes 6 6 6 6 ## 2 10 2 Yes 5 5 3 4 ## 3 100 4 Yes 5 6 4 2 ## 4 101 2 No 4 7 2 4 ## 5 102 3 Yes 5 6 5 6 ## 6 103 2 No 6 7 5 7 ## 7 104 2 Yes 6 5 5 3 ## 8 105 3 No 6 6 5 6 ## 9 106 1 No 4 4 4 4 ## 10 107 2 No 1 2 1 1 ## 11 108 2 No 7 7 7 7 ## 12 109 2 No 4 4 4 6 ## 13 11 4 No 5 5 3 4 ## 14 110 3 No 5 7 4 4 ## 15 111 2 No 6 6 6 3 ## 16 112 3 No 6 7 5 7 ## 17 114 2 No 5 4 4 3 ## 18 115 3 No 5 7 5 1 ## 19 116 3 No 5 6 5 5 ## 20 118 2 No 6 6 6 1 ## # … with 25 more variables: ats13 <dbl>, ats14_re <dbl>, ## # ats15_re <dbl>, ats16_re <dbl>, ats17 <dbl>, ## # ats18_re <dbl>, ats19 <dbl>, ats2_re <dbl>, ## # ats20_re <dbl>, ats21 <dbl>, ats22 <dbl>, ats23 <dbl>, ## # ats24 <dbl>, ats25_re <dbl>, ats26 <dbl>, ## # ats27_re <dbl>, ats28_re <dbl>, ats29 <dbl>, ## # ats3 <dbl>, ats4_re <dbl>, ats5 <dbl>, ats6_re <dbl>, ## # ats7_re <dbl>, ats8 <dbl>, ats9 <dbl>`````` Now that we have created a “long” version of our data, they are in the right format to create the plot. We will use the tidyverse function `ggplot()` to create our histogram with `geom_histogram`. ``````attitudeData_long %>% ggplot(aes(x = response)) + geom_histogram(binwidth = 0.5) + scale_x_continuous(breaks = seq.int(1, 7, 1))`````` It looks like responses were fairly positively overall. We can also aggregate each participant’s responses to each question during each year of their study at Stanford to examine the distribution of mean ATS responses across people by year. We will use the `group_by()` and `summarize()` functions to aggregate the responses. 
``````attitudeData_agg <- attitudeData_long %>% group_by(ID, Year) %>% summarize( mean_response = mean(response) ) attitudeData_agg``````

``````## # A tibble: 141 x 3 ## # Groups: ID [141] ## ID Year mean_response ## <chr> <dbl> <dbl> ## 1 1 3 6 ## 2 10 2 4.66 ## 3 100 4 5.03 ## 4 101 2 5.10 ## 5 102 3 4.66 ## 6 103 2 5.55 ## 7 104 2 4.31 ## 8 105 3 5.10 ## 9 106 1 4.21 ## 10 107 2 2.45 ## # … with 131 more rows``````

First let's use the `geom_density()` geometry in `ggplot()` to look at mean responses across people, ignoring year of response. A density plot is like a histogram, but it smooths things over a bit.

``````attitudeData_agg %>% ggplot(aes(mean_response)) + geom_density()``````

Now we can also look at the distribution for each year.

``````attitudeData_agg %>% ggplot(aes(mean_response, color = factor(Year))) + geom_density()``````

Or look at trends in responses across years.

``````attitudeData_agg %>% group_by(Year) %>% summarise( mean_response = mean(mean_response) ) %>% ggplot(aes(Year, mean_response)) + geom_line()``````

This looks like a precipitous drop - but how might that be misleading?
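One thing worth checking before interpreting that trend (a quick sketch, reusing the `attitudeData_agg` data frame created above) is how many students actually contributed to each year's mean; a mean based on only a handful of students will be much noisier than one based on many.

``````# count how many students are behind each year's mean response
attitudeData_agg %>%
  group_by(Year) %>%
  summarise(
    n_students = n(),
    mean_response = mean(mean_response)
  )``````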
Learning Objectives • Describe the principles that distinguish between good and bad graphs, and use them to identify good versus bad graphs. • Understand the human limitations that must be accommodated in order to make effective graphs. • Promise to never create a pie chart. Ever. On January 28, 1986, the Space Shuttle Challenger exploded 73 seconds after takeoff, killing all 7 of the astronauts on board. As when any such disaster occurs, there was an official investigation into the cause of the accident, which found that an O-ring connecting two sections of the solid rocket booster had leaked, resulting in failure of the joint and explosion of the large liquid fuel tank (see figure 6.1). The investigation found that many aspects of the NASA decision making process were flawed, and focused in particular on a meeting that was had between NASA staff and engineers from Morton Thiokol, a contractor who had built the solid rocket boosters. These engineers were particularly concerned because the temperatures were forecast to be very cold on the morning of the launch, and they had data from previous launches showing that performance of the O-rings was compromised at lower temperatures. In a meeting on the evening before the launch, the engineers presented their data to the NASA managers, but were unable to convince them to postpone the launch. Their evidence was a set of hand-written slides showing numbers from various past launches. The visualization expert Edward Tufte has argued that with a proper presentation of all of the data, the engineers could have been much more persuasive. In particular, they could have shown a figure like the one in Figure 6.2, which highlights two important facts. First, it shows that the amount of O-ring damage (defined by the amount of erosion and soot found outside the rings after the solid rocket boosters were retrieved from the ocean in previous flights) was closely related to the temperature at takeoff. Second, it shows that the range of forecasted temperatures for the morning of January 28 (shown in the shaded area) was well outside of the range of all previous launches. While we can’t know for sure, it seems at least plausible that this could have been more persuasive. 06: Data Visualization The goal of plotting data is to present a summary of a dataset in a two-dimensional (or occasionally three-dimensional) presentation. We refer to the dimensions as axes – the horizontal axis is called the X-axis and the vertical axis is called the Y-axis. We can arrange the data along the axes in a way that highlights the data values. These values may be either continuous or categorical. There are many different types of plots that we can use, which have different advantages and disadvantages. Let’s say that we are interested in characterizing the difference in height between men and women in the NHANES dataset. Figure 6.3 shows four different ways to plot these data. 1. The bar graph in panel A shows the difference in means, but doesn’t show us how much spread there is in the data around these means – and as we will see later, knowing this is essential to determine whether we think the difference between the groups is large enough to be important. 2. The second plot shows the bars with all of the data points overlaid - this makes it a bit clearer that the distributions of height for men and women are overlapping, but it’s still hard to see due to the large number of data points. 
In general we prefer using a plotting technique that provides a clearer view of the distribution of the data points. 1. In panel C, we see one example of a violin plot, which plots the distribution of data in each condition (after smoothing it out a bit). 2. Another option is the box plot shown in panel D, which shows the median (central line), a measure of variability (the width of the box, which is based on a measure called the interquartile range), and any outliers (noted by the points at the ends of the lines). These are both effective ways to show data that provide a good feel for the distribution of the data.
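If you would like to experiment with these kinds of plots yourself, here is a rough sketch of how the violin and box plots might be generated with ggplot2. It assumes that the NHANES package (which provides the NHANES data frame with Gender and Height variables) is installed; it is meant as a starting point, not the exact code used to create Figure 6.3.

``````library(NHANES)
library(ggplot2)

# keep individuals with a recorded height
nhanes_height <- subset(NHANES, !is.na(Height))

# violin plot (as in panel C): the smoothed distribution for each group
ggplot(nhanes_height, aes(Gender, Height)) +
  geom_violin()

# box plot (as in panel D): median, interquartile range, and outliers
ggplot(nhanes_height, aes(Gender, Height)) +
  geom_boxplot()``````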
Many books have been written on effective visualization of data. There are some principles that most of these authors agree on, while others are more contentious. Here we summarize some of the major principles; if you want to learn more, then some good resources are listed in the Suggested Readings section at the end of this chapter. 6.2.1 Show the data and make them stand out Let’s say that I had performed a study that examined the relationship between dental health and time spent flossing, and I would like to visualize my data. Figure 6.4 shows four possible presentations of these data. 1. In panel A, we don’t actually show the data, just a line expressing the relationship between the data. This is clearly not optimal, because we can’t actually see what the underlying data look like. Panels B-D show three possible outcomes from plotting the actual data, where each plot shows a different way that the data might have looked. 1. If we saw the plot in Panel B, we would probably be suspicious – rarely would real data follow such a precise pattern. 2. The data in Panel C, on the other hand, look like real data – they show a general trend, but they are messy, as data in the world usually are. 3. The data in Panel D show us that the apparent relationship between the two variables is solely caused by one individual, who we would refer to as an outlier because they fall so far outside of the pattern of the rest of the group. It should be clear that we probably don’t want to conclude very much from an effect that is driven by one data point. This figure highlights why it is always important to look at the raw data before putting too much faith in any summary of the data. 6.2.2 Maximize the data/ink ratio Edward Tufte has proposed an idea called the data/ink ratio: $\ data/ink\ ratio = {\frac {amount\ of\ ink\ used\ on\ data}{total\ amount\ of\ ink}}$ The point of this is to minimize visual clutter and let the data show through. For example, take the two presentations of the dental health data in Figure 6.5. Both panels show the same data, but panel A is much easier to apprehend, because of its relatively higher data/ink ratio. 6.2.3 Avoid chartjunk It’s especially common to see presentations of data in the popular media that are adorned with lots of visual elements that are thematically related to the content but unrelated to the actual data. This is known as chartjunk, and should be avoided at all costs. One good way to avoid chartjunk is to avoid using popular spreadsheet programs to plot one’s data. For example, the chart in Figure 6.6 (created using Microsoft Excel) plots the relative popularity of different religions in the United States. There are at least three things wrong with this figure: • it has graphics overlaid on each of the bars that have nothing to do with the actual data • it has a distracting background texture • it uses three-dimensional bars, which distort the data 6.2.4 Avoid distorting the data It’s often possible to use visualization to distort the message of a dataset. A very common one is use of different axis scaling to either exaggerate or hide a pattern of data. For example, let’s say that we are interested in seeing whether rates of violent crime have changed in the US. In Figure 6.7, we can see these data plotted in ways that either make it look like crime has remained constant, or that it has plummeted. The same data can tell two very different stories! 
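To see how easy it is to tell two different stories with the same numbers, here is a small sketch using made-up values (not the actual crime data); the only difference between the two plots is the range of the Y axis.

``````library(ggplot2)

# hypothetical crime rates showing a slight decline
crime <- data.frame(
  year = 2010:2014,
  rate = c(404, 402, 399, 397, 396)
)

# with the axis starting at zero, the change looks negligible
ggplot(crime, aes(year, rate)) +
  geom_line() +
  coord_cartesian(ylim = c(0, 450))

# zoomed in on the data, the same decline looks dramatic
ggplot(crime, aes(year, rate)) +
  geom_line() +
  coord_cartesian(ylim = c(395, 405))``````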
One of the major controversies in statistical data visualization is how to choose the Y axis, and in particular whether it should always include zero. In his famous book “How to lie with statistics”, Darrell Huff argued strongly that one should always include the zero point in the Y axis. On the other hand, Edward Tufte has argued against this: “In general, in a time-series, use a baseline that shows the data not the zero point; don’t spend a lot of empty vertical space trying to reach down to the zero point at the cost of hiding what is going on in the data line itself.” (from https://qz.com/418083/its-ok-not-to-start-your-y-axis-at-zero/) There are certainly cases where using the zero point makes no sense at all. Let’s say that we are interested in plotting body temperature for an individual over time. In Figure 6.8 we plot the same (simulated) data with or without zero in the Y axis. It should be obvious that by plotting these data with zero in the Y axis (Panel A) we are wasting a lot of space in the figure, given that body temperature of a living person could never go to zero! By including zero, we are also making the apparent jump in temperature during days 21-30 much less evident. In general, my inclination for line plots and scatterplots is to use all of the space in the graph, unless the zero point is truly important to highlight. Edward Tufte introduced the concept of the lie factor to describe the degree to which physical differences in a visualization correspond to the magnitude of the differences in the data. If a graphic has a lie factor near 1, then it is appropriately representing the data, whereas lie factors far from one reflect a distortion of the underlying data. The lie factor supports the argument that one should always include the zero point in a bar chart in many cases. In Figure 6.9 we plot the same data with and without zero in the Y axis. In panel A, the proportional difference in area between the two bars is exactly the same as the proportional difference between the values (i.e. lie factor = 1), whereas in Panel B (where zero is not included) the proportional difference in area between the two bars is roughly 2.8 times bigger than the proportional difference in the values, and thus it visually exaggerates the size of the difference. 6.03: Accommodating Human Limitations Humans have both perceptual and cognitive limitations that can make some visualizations very difficult to understand. It’s always important to keep these in mind when building a visualization. 6.3.1 Perceptual limitations One important perceptual limitation that many people (including myself) suffer from is color blindness. This can make it very difficult to perceive the information in a figure (like the one in Figure 6.10) where there is only color contrast between the elements but no brightness contrast. It is always helpful to use graph elements that differ substantially in brightness and/or texture, in addition to color. There are also “colorblind-friendly” pallettes available for use in R, which we used in Figure ??. Even for people with perfect color vision, there are perceptual limitations that can make some plots ineffective. This is one reason why statisticians never use pie charts: It can be very difficult for humans to accurately perceive differences in the volume of shapes. The pie chart in Figure 6.11 (presenting the same data on religious affiliation that we showed above) shows how tricky this can be. This plot is terrible for several reasons. 
First, it requires distinguishing a large number of colors from very small patches at the bottom of the figure. Second, the visual perspective distorts the relative numbers, such that the pie wedge for Catholic appears much larger than the pie wedge for None, when in fact the number for None is slightly larger (22.8 vs 20.8 percent), as was evident in Figure 6.6. Third, by separating the legend from the graphic, it requires the viewer to hold information in their working memory in order to map between the graphic and legend and to conduct many “table look-ups” in order to continuously match the legend labels to the visualization. And finally, it uses text that is far too small, making it impossible to read without zooming in. Plotting the data using a more reasonable approach (Figure 6.12), we can see the pattern much more clearly. This plot may not look as flashy as the pie chart generated using Excel, but it’s a much more effective and accurate representation of the data. This plot allows the viewer to make comparisons based on the the length of the bars along a common scale (the y-axis). Humans tend to be more accurate when decoding differences based on these perceptual elements than based on area or color. 6.04: Correcting for Other Factors Often we are interested in plotting data where the variable of interest is affected by other factors than the one we are interested in. For example, let’s say that we want to understand how the price of gasoline has changed over time. Figure 6.13 shows historical gas price data, plotted either with or without adjustment for inflation. Whereas the unadjusted data show a huge increase, the adjusted data show that this is mostly just reflective of inflation. Other examples where one needs to adjust data for other factors include population size (as we saw in the crime rate examples in an earlier chapter) and data collected across different seasons.
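As a sketch of what such an adjustment looks like in practice, here is one way to convert nominal prices into constant dollars. The data frame and CPI values below are made up for illustration; the gas price figure itself was built from real consumer price index data.

``````library(tidyverse)

# hypothetical gas prices with a consumer price index (CPI) for each year
gas <- tibble(
  year = c(1980, 1990, 2000, 2010),
  price = c(1.19, 1.15, 1.51, 2.79), # nominal dollars per gallon
  cpi = c(82.4, 130.7, 172.2, 218.1) # illustrative CPI values
)

# express every price in 2010 dollars by scaling with the ratio of CPIs
gas <- gas %>%
  mutate(price_2010_dollars = price * cpi[year == 2010] / cpi)``````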
There are many different tools for plotting data in R, but we will focus on the `ggplot()` function provided by a package called `ggplot2`. ggplot is very powerful, but using it requires getting one's head around how it works.

07: Data Visualization with R (with Anna Khazenzon)

or, the "gg" in ggplot

Each language has a grammar consisting of types of words and the rules with which to string them together into sentences. If a sentence is grammatically correct, we're able to parse it, even though that doesn't ensure that it's interesting, beautiful, or even meaningful. Similarly, plots can be divided up into their core components, which come together via a set of rules. Some of the major components are: • data • aesthetics • geometries • themes

The data are the actual variables we're plotting, which we pass to ggplot through the data argument. As you've learned, ggplot takes a dataframe in which each column is a variable. Now we need to tell ggplot how to plot those variables, by mapping each variable to an axis of the plot. You've seen that when we plot histograms, our variable goes on the x axis. Hence, we set `x=<variable>` in a call to `aes()` within `ggplot()`. This sets aesthetics, which are mappings of data to certain scales, like axes or things like color or shape. The plot still had two axes – x and y – but we didn't need to specify what went on the y axis because ggplot knew by default that it should make a count variable. How was ggplot able to figure that out? Because of geometries, which are shapes we use to represent our data. You've seen `geom_histogram`, which basically gives our graph a bar plot shape, except that it also sets the default y axis variable to be `count`. Other shapes include points and lines, among many others. We'll go over other aspects of the grammar of graphics (such as facets, statistics, and coordinates) as they come up. Let's start visualizing some data by first choosing a theme, which describes all of the non-data ink in our plot, like grid lines and text.

7.02: Getting Started

Load ggplot and choose a theme you like (see here for examples).

``````library(tidyverse) theme_set(theme_bw()) # I like this fairly minimal one``````

7.03: Let's Think Through a Visualization

Principles we want to keep in mind: • Show the data without distortion • Use color, shape, and location to encourage comparisons • Minimize visual clutter (maximize your information to ink ratio) The two questions you want to ask yourself before getting started are: • What type of variable(s) am I plotting? • What comparison do I want to make salient for the viewer (possibly myself)? Figuring out how to highlight a comparison and include relevant variables usually benefits from sketching the plot out first.

7.04: Plotting the Distribution of a Single Variable

How do you choose which geometry to use? ggplot allows you to choose from a number of geometries. This choice will determine what sort of plot you create. We will use the built-in mpg dataset, which contains fuel efficiency data for a number of different cars.

7.4.1 Histogram

The histogram shows the overall distribution of the data. Here we use the nclass.FD function to compute the optimal number of bins.

``````ggplot(mpg, aes(hwy)) + geom_histogram(bins = nclass.FD(mpg\$hwy)) + xlab('Highway mileage') ``````

Instead of creating discrete bins, we can look at relative density continuously.
7.4.2 Density plot ``````ggplot(mpg, aes(hwy)) + geom_density() + xlab('Highway mileage') `````` A note on defaults: The default statistic (or “stat”) underlying `geom_density` is called “density” – not surprising. The default stat for `geom_histogram` is “count”. What do you think would happen if you overrode the default and set `stat="count"`? ``````ggplot(mpg, aes(hwy)) + geom_density(stat = "count")`````` What we discover is that the geometric difference between `geom_histogram` and `geom_density` can actually be generalized. `geom_histogram` is a shortcut for working with `geom_bar`, and `geom_density` is a shortcut for working with `geom_line`. 7.4.3 Bar vs. line plots ``````ggplot(mpg, aes(hwy)) + geom_bar(stat = "count")`````` Note that the geometry tells ggplot what kind of plot to use, and the statistic (stat) tells it what kind of summary to present. ``````ggplot(mpg, aes(hwy)) + geom_line(stat = "density")``````
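If the pairing of geometries and statistics still feels abstract, one more way to see it (just a sketch) is to start from the stat side rather than the geom side; the result is the same density curve as above.

``````# equivalent to geom_density(): the "density" stat drawn with a line geometry
ggplot(mpg, aes(hwy)) +
  stat_density(geom = "line")``````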
Let's check out mileage by car manufacturer. We'll plot one continuous variable by one nominal one. First, let's make a bar plot by choosing the stat "summary" and picking the "mean" function to summarize the data.

``````ggplot(mpg, aes(manufacturer, hwy)) + geom_bar(stat = "summary", fun.y = "mean") + ylab('Highway mileage')``````

One problem with this plot is that it's hard to read some of the labels because they overlap. How could we fix that? Hint: search the web for "ggplot rotate x axis labels" and add the appropriate command. One way to do it is to rotate the labels using the `theme()` function:

``````ggplot(mpg, aes(manufacturer, hwy)) + geom_bar(stat = "summary", fun.y = "mean") + ylab('Highway mileage') + # rotate the x axis labels so that they no longer overlap theme(axis.text.x = element_text(angle = 90, vjust = 0.5))``````

7.5.1 Adding on variables

What if we wanted to add another variable into the mix? Maybe the year of the car is also important to consider. We have a few options here. First, you could map the variable to another aesthetic.

``````# first, year needs to be converted to a factor mpg\$year <- factor(mpg\$year) ggplot(mpg, aes(manufacturer, hwy, fill = year)) + geom_bar(stat = "summary", fun.y = "mean")``````

By default, the bars are stacked on top of one another. If you want to separate them, you can change the `position` argument from its default to "dodge".

``````ggplot(mpg, aes(manufacturer, hwy, fill=year)) + geom_bar(stat = "summary", fun.y = "mean", position = "dodge")``````

We could also put year on the x axis and connect the mean for each manufacturer with a line.

``````ggplot(mpg, aes(year, hwy, group=manufacturer, color=manufacturer)) + geom_line(stat = "summary", fun.y = "mean")``````

For a less visually cluttered plot, let's try faceting. This creates subplots for each value of the `year` variable.

``````ggplot(mpg, aes(manufacturer, hwy)) + # split up the bar plot into two by year facet_grid(year ~ .) + geom_bar(stat = "summary", fun.y = "mean")``````

7.5.2 Plotting dispersion

Instead of looking at just the means, we can get a sense of the entire distribution of mileage values for each manufacturer.

7.5.2.1 Box plot

``````ggplot(mpg, aes(manufacturer, hwy)) + geom_boxplot()``````

A box plot (or box and whiskers plot) uses quartiles to give us a sense of spread. The thickest line, somewhere inside the box, represents the median. The upper and lower bounds of the box (the hinges) are the first and third quartiles (can you use them to approximate the interquartile range?). The lines extending from the hinges are the remaining data points, excluding outliers, which are plotted as individual points.

7.5.2.2 Error bars

Now, let's do something a bit more complex, but much more useful – let's create our own summary of the data, so we can choose which summary statistic to plot and also compute a measure of dispersion of our choosing.

``````# summarise data mpg_summary <- mpg %>% group_by(manufacturer) %>% summarise(n = n(), mean_hwy = mean(hwy), sd_hwy = sd(hwy)) # compute confidence intervals for the error bars # (we'll talk about this later in the course!) limits <- aes( # compute the lower limit of the error bar ymin = mean_hwy - 1.96 * sd_hwy / sqrt(n), # compute the upper limit ymax = mean_hwy + 1.96 * sd_hwy / sqrt(n)) # now we're giving ggplot the mean for each group, # instead of the datapoints themselves ggplot(mpg_summary, aes(manufacturer, mean_hwy)) + # we set stat = "identity" on the summary data geom_bar(stat = "identity") + # we create error bars using the limits we computed above geom_errorbar(limits, width=0.5) ``````

Error bars don't always mean the same thing – it's important to determine whether you're looking at e.g. standard error or confidence intervals (which we'll talk more about later in the course).
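As a quick sketch of how much that choice matters, here is the same plot with standard-error bars instead of 95% confidence intervals, reusing the `mpg_summary` data frame created above; the only change is dropping the 1.96 multiplier.

``````# standard error of the mean for each manufacturer (no 1.96 multiplier)
se_limits <- aes(
  ymin = mean_hwy - sd_hwy / sqrt(n),
  ymax = mean_hwy + sd_hwy / sqrt(n)
)

ggplot(mpg_summary, aes(manufacturer, mean_hwy)) +
  geom_bar(stat = "identity") +
  # these bars are roughly half the length of the 95% confidence intervals
  geom_errorbar(se_limits, width = 0.5)``````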
7.5.2.2.1 Minimizing non-data ink The plot we just created is nice and all, but it’s tough to look at. The bar plots add a lot of ink that doesn’t help us compare engine sizes across manufacturers. Similarly, the width of the error bars doesn’t add any information. Let’s tweak which geometry we use, and tweak the appearance of the error bars. ``````ggplot(mpg_summary, aes(manufacturer, mean_hwy)) + # switch to point instead of bar to minimize ink used geom_point() + # remove the horizontal parts of the error bars geom_errorbar(limits, width = 0) `````` Looks a lot cleaner, but our points are all over the place. Let’s make a final tweak to make learning something from this plot a bit easier. ``````mpg_summary_ordered <- mpg_summary %>% mutate( # we sort manufacturers by mean engine size manufacturer = reorder(manufacturer, -mean_hwy) ) ggplot(mpg_summary_ordered, aes(manufacturer, mean_hwy)) + geom_point() + geom_errorbar(limits, width = 0) `````` 7.5.3 Scatter plot When we have multiple continuous variables, we can use points to plot each variable on an axis. This is known as a scatter plot. You’ve seen this example in your reading. ``````ggplot(mpg, aes(displ, hwy)) + geom_point()`````` 7.5.3.1 Layers of data We can add layers of data onto this graph, like a line of best fit. We use a geometry known as a smooth to accomplish this. ``````ggplot(mpg, aes(displ, hwy)) + geom_point() + geom_smooth(color = "black")`````` We can add on points and a smooth line for another set of data as well (efficiency in the city instead of on the highway). ``````ggplot(mpg) + geom_point(aes(displ, hwy), color = "grey") + geom_smooth(aes(displ, hwy), color = "grey") + geom_point(aes(displ, cty), color = "limegreen") + geom_smooth(aes(displ, cty), color = "limegreen")`````` 7.06: Creating a More Complex Plot In this section we will recreate Figure 6.2 from Chapter @ref{data-visualization}. Here is the code to generate the figure; we will go through each of its sections below. ``````oringDf <- read.table("data/orings.csv", sep = ",", header = TRUE) oringDf %>% ggplot(aes(x = Temperature, y = DamageIndex)) + geom_point() + geom_smooth(method = "loess", se = FALSE, span = 1) + ylim(0, 12) + geom_vline(xintercept = 27.5, size =8, alpha = 0.3, color = "red") + labs( y = "Damage Index", x = "Temperature at time of launch" ) + scale_x_continuous(breaks = seq.int(25, 85, 5)) + annotate( "text", angle=90, x = 27.5, y = 6, label = "Forecasted temperature on Jan 28", size = 5 )``````
Learning Objectives

• Describe the basic equation for statistical models (outcome = model + error)
• Describe different measures of central tendency and dispersion, how they are computed, and which are appropriate under what circumstance.
• Describe the concept of a Z-score and when they are useful.

One of the fundamental activities in statistics is creating models that can summarize data using a small set of numbers, thus providing a compact description of the data. In this chapter we will discuss the concept of a statistical model and how it can be used to describe data.

08: Fitting Models to Data

8.01: Appendix

Proof that the sum of errors from the mean is zero:

$error=\sum_{i=1}^{n}\left(x_{i}-\bar{X}\right)=0$

$\begin{array}{c} {\sum_{i=1}^{n} x_{i}-\sum_{i=1}^{n} \bar{X}=0} \\ {\sum_{i=1}^{n} x_{i}=\sum_{i=1}^{n} \bar{X}} \\ {\sum_{i=1}^{n} x_{i}=n \bar{X}} \\ {\sum_{i=1}^{n} x_{i}=\sum_{i=1}^{n} x_{i}} \end{array} \nonumber$

The final line follows from the definition of the mean ($\bar{X} = \frac{\sum_{i=1}^{n} x_i}{n}$, so $n \bar{X} = \sum_{i=1}^{n} x_i$), which establishes that the sum of the errors, and therefore the average error, is zero.

8.02: What Is a Model

In the physical world, "models" are generally simplifications of things in the real world that nonetheless convey the essence of the thing being modeled. A model of a building conveys the structure of the building while being small and light enough to pick up with one's hands; a model of a cell in biology is much larger than the actual thing, but again conveys the major parts of the cell and their relationships. In statistics, a model is meant to provide a similarly condensed description, but for data rather than for a physical structure. Like physical models, a statistical model is generally much simpler than the data being described; it is meant to capture the structure of the data as simply as possible. In both cases, we realize that the model is a convenient fiction that necessarily glosses over some of the details of the actual thing being modeled. As the statistician George Box famously said: "All models are wrong but some are useful." The basic structure of a statistical model is:

$data = model + error$

This expresses the idea that the data can be described by a statistical model, which describes what we expect to occur in the data, along with the difference between the model and the data, which we refer to as the error.

8.03: Statistical Modeling- An Example

Let's look at an example of fitting a model to data, using the data from NHANES. In particular, we will try to build a model of the height of children in the NHANES sample. First let's load the data and plot them (see Figure 8.1). Remember that we want to describe the data as simply as possible while still capturing their important features. What is the simplest model we can imagine that might still capture the essence of the data? How about the most common value in the dataset (which we call the mode)? This redescribes the entire set of 1691 children in terms of a single number. If we wanted to predict the height of any new children, then our guess would be the same number: 166.5 centimeters.

$\ \hat{height_i}=166.5$

We put the hat symbol over the name of the variable to show that this is our predicted value. The error for this individual would then be the difference between the predicted value ($\ \hat{height_i}$) and their actual height ($\ {height_i}$):

$\ error_i = height_i - \hat{height_i}$

How good of a model is this? In general we define the goodness of a model in terms of the error, which represents the difference between the model and the data; all things being equal, the model that produces lower error is the better model. What we find is that the average individual has a fairly large error of -28.8 centimeters.
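If you would like to try something like this yourself, here is a rough sketch that computes the mode of the children's heights and the average error of the mode model. It assumes the NHANES package is installed, and the subset of children selected here may differ slightly from the one behind the numbers reported in the text.

``````library(NHANES)
library(dplyr)

# children in the NHANES sample with a recorded height
nhanes_child <- NHANES %>%
  filter(Age < 18, !is.na(Height))

# the mode: the most frequently occurring height value
height_mode <- nhanes_child %>%
  count(Height) %>%
  arrange(desc(n)) %>%
  slice(1) %>%
  pull(Height)

# the error for each child under the mode model, and the average error
mode_error <- nhanes_child$Height - height_mode
mean(mode_error)``````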
We would like to have a model where the average error is zero, and it turns out that if we use the arithmetic mean (commonly known as the average) as our model then this will be the case. The mean (often denoted by a bar over the variable, such as $\bar{X}$) is the sum of all of the values, divided by the number of values. Mathematically, we express this as: $\ \bar{X}=\frac{\sum_{i=1}^n x_i}{n}$ We can prove mathematically that the sum of errors from the mean (and thus the average error) is zero (see the proof at the end of the chapter if you are interested). Given that the average error is zero, this seems like a better model. Even though the average of errors from the mean is zero, we can see from the histogram in Figure 8.2 that each individual still has some degree of error; some are positive and some are negative, and those cancel each other out. For this reason, we generally summarize errors in terms of some kind of measure that counts both positive and negative errors as bad. We could use the absolute value of each error value, but it’s more common to use the squared errors, for reasons that we will see later in the course. There are several common ways to summarize the squared error that you will encounter at various points in this book, so it’s important to understand how they relate to one another. First, we could simply add them up; this is referred to as the sum of squared errors. The reason we don’t usually use this is that its magnitude depends on the number of data points, so it can be difficult to interpret unless we are looking at the same number of observations. Second, we could take the mean of the squared error values, which is referred to as the mean squared error (MSE). However, because we squared the values before averaging, they are not on the same scale as the original data; they are in centimeters2. For this reason, it’s also common to take the square root of the MSE, which we refer to as the root mean squared error (RMSE), so that the error is measured in the same units as the original values (in this example, centimeters). The mean has a pretty substantial amount of error – any individual data point will be about 27 cm from the mean on average – but it’s still much better than the mode, which has an average error of about 39 cm. 8.2.1 Improving our model Can we imagine a better model? Remember that these data are from all children in the NHANES sample, who vary from 2 to 17 years of age. Given this wide age range, we might expect that our model of height should also include age. Let’s plot the data for height against age, to see if this relationship really exists. The black points in Panel A of Figure 8.3 show individuals in the dataset, and there seems to be a strong relationship between height and age, as we would expect. Thus, we might build a model that relates height to age: $\ \hat{height_i}=\beta * age_i$ where β \beta is a parameter that we multiply by age to get the smallest error. You may remember from algebra that a line is define as follows: y = slope ∗ x + intercept If age is the X variable, then that means that our prediction of height from age will be a line with a slope of β and an intercept of zero - to see this, let’s plot the best fitting line in blue on top of the data (Panel B in Figure 8.3). Something is clearly wrong with this model, as the line doesn’t seem to follow the data very well. In fact, the RMSE for this model (39.16) is actually higher than the model that only includes the mean! 
The problem comes from the fact that our model only includes age, which means that the predicted value of height from the model must take on a value of zero when age is zero. Even though the data do not include any children with an age of zero, the line is mathematically required to have a y-value of zero when x is zero, which explains why the line is pulled down below the younger datapoints. We can fix this by including a constant value in our model, which basically represents the estimated value of height when age is equal to zero; even though an age of zero is not plausible in this dataset, this is a mathematical trick that will allow the model to account for the overall magnitude of the data. The model is: $\ \hat{height_i}=constant + \beta * age_i$ where constant is a constant value added to the prediction for each individual; we also call the intercept, since it maps onto the intercept in the equation for a line. We will learn later how it is that we actually compute these values for a particular dataset; for now, we will use the lm() function in R to compute the values of the constant and β that give us the smallest error for these particular data. Panel C in Figure 8.3 shows this model applied to the NHANES data, where we see that the line matches the data much better than the one without a constant. Our error is much smaller using this model – only 8.36 centimeters on average. Can you think of other variables that might also be related to height? What about gender? In Panel D of Figure 8.3 we plot the data with lines fitted separately for males and females. From the plot, it seems that there is a difference between males and females, but it is relatively small and only emerges after the age of puberty. Let’s estimate this model and see how the errors look. In Figure 8.4 we plot the root mean squared error values across the different models. From this we see that the model got a little bit better going from mode to mean, much better going from mean to mean+age, and only very slighly better by including gender as well. Figure 8.4: Mean squared error plotted for each of the models tested above.
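Here is a rough sketch of how these models could be fit with `lm()`, assuming the NHANES package is installed; the last model simply adds Gender as a second predictor, which is close to (though not exactly) the separate-lines model shown in Panel D, so the error values will not match the figure exactly.

``````library(NHANES)
library(dplyr)

nhanes_child <- NHANES %>%
  filter(Age < 18, !is.na(Height))

# age only (intercept forced to zero), age plus a constant, and age plus gender
model_age <- lm(Height ~ 0 + Age, data = nhanes_child)
model_age_constant <- lm(Height ~ Age, data = nhanes_child)
model_age_gender <- lm(Height ~ Age + Gender, data = nhanes_child)

# root mean squared error for each model
rmse <- function(model) sqrt(mean(residuals(model)^2))
sapply(list(age = model_age, age_constant = model_age_constant,
            age_gender = model_age_gender), rmse)``````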
There are generally two different things that we want from our statistical model. First, we want it to describe our data well; that is, we want it to have the lowest possible error when modeling our data. Second, we want it to generalize well to new datasets; that is, we want its error to be as low as possible when we apply it to a new dataset. It turns out that these two features can often be in conflict. To understand this, let’s think about where error comes from. First, it can occur if our model is wrong; for example, if we inaccurately said that height goes down with age instead of going up, then our error will be higher than it would be for the correct model. Similarly, if there is an important factor that is missing from our model, that will also increase our error (as it did when we left age out of the model for height). However, error can also occur even when the model is correct, due to random variation in the data, which we often refer to as “measurement error” or “noise”. Sometimes this really is due to error in our measurement – for example, when the measurements rely on a human, such as using a stopwatch to measure elapsed time in a footrace. In other cases, our measurement device is highly accurate (like a digital scale to measure body weight), but the thing being measured is affected by many different factors that cause it to be variable. If we knew all of these factors then we could build a more accurate model, but in reality that’s rarely possible. Let’s use an example to show this. Rather than using real data, we will generate some data for the example using a computer simulation (about which we will have more to say in a few chapters). Let’s say that we want to understand the relationship between a person’s blood alcohol content (BAC) and their reaction time on a simulated driving test. We can generate some simulated data and plot the relationship (see Panel A of Figure 8.5). In this example, reaction time goes up systematically with blood alcohol content – the line shows the best fitting model, and we can see that there is very little error, which is evident in the fact that all of the points are very close to the line. We could also imagine data that show the same linear relationship, but have much more error, as in Panel B of Figure 8.5. Here we see that there is still a systematic increase of reaction time with BAC, but it’s much more variable across individuals. These were both examples where the linear model seems appropriate, and the error reflects noise in our measurement. The linear model specifies that the relationship between two variables follows a straight line. For example, in a linear model, change in BAC is always associated with a specific change in ReactionTime, regardless of the level of BAC. On the other hand, there are other situations where the linear model is incorrect, and error will be increased because the model is not properly specified. Let’s say that we are interested in the relationship between caffeine intake and performance on a test. The relation between stimulants like caffeine and test performance is often nonlinear - that is, it doesn’t follow a straight line. This is because performance goes up with smaller amounts of caffeine (as the person becomes more alert), but then starts to decline with larger amounts (as the person becomes nervous and jittery). We can simulate data of this form, and then fit a linear model to the data (see Panel C of Figure 8.5). 
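Here is a rough sketch of how data of this form could be simulated and fit with a linear model; the numbers are arbitrary and are not the ones used to create Figure 8.5.

``````set.seed(123456)

caffeine <- runif(100, min = 0, max = 500) # caffeine intake in mg
# performance rises and then falls with caffeine (an inverted U), plus noise
test_performance <- -0.001 * (caffeine - 250)^2 + 50 + rnorm(100, sd = 10)

# fitting a straight line to these curved data leaves a large amount of error
linear_fit <- lm(test_performance ~ caffeine)
summary(linear_fit)``````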
The blue line shows the straight line that best fits these data; clearly, there is a high degree of error. Although there is a very lawful relation between test performance and caffeine intake, it follows a curve rather than a straight line. The linear model has high error because it's the wrong model for these data.

8.05: Can a Model Be Too Good

Error sounds like a bad thing, and usually we will prefer a model that has lower error over one that has higher error. However, we mentioned above that there is a tension between the ability of a model to accurately fit the current dataset and its ability to generalize to new datasets, and it turns out that the model with the lowest error often is much worse at generalizing to new datasets! To see this, let's once again generate some data so that we know the true relation between the variables. We will create two simulated datasets, which are generated in exactly the same way – they just have different random noise added to them. The left panel in Figure 8.6 shows that the more complex model (in red) fits the data better than the simpler model (in blue). However, we see the opposite when the same model is applied to a new dataset generated in the same way – here we see that the simpler model fits the new data better than the more complex model. Intuitively, we can see that the more complex model is influenced heavily by the specific data points in the first dataset; since the exact position of these data points was driven by random noise, this leads the more complex model to fit badly on the new dataset. This is a phenomenon that we call overfitting. For now it's important to keep in mind that our model fit needs to be good, but not too good. As Albert Einstein (1933) said: "It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience." Which is often paraphrased as: "Everything should be as simple as it can be, but not simpler."
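A minimal sketch of overfitting (not the simulation behind Figure 8.6): fit a simple and a complex model to one noisy dataset, then see how well each one predicts a second dataset generated in exactly the same way.

``````set.seed(1)

simulate_data <- function(n = 20) {
  x <- runif(n, -2, 2)
  data.frame(x = x, y = x + rnorm(n, sd = 1)) # the true relation is linear
}

train <- simulate_data()
test <- simulate_data()

simple_model <- lm(y ~ x, data = train)            # a straight line
complex_model <- lm(y ~ poly(x, 8), data = train)  # an 8th-order polynomial

rmse <- function(model, data) sqrt(mean((data$y - predict(model, data))^2))

# the complex model wins on the data it was fit to...
c(simple = rmse(simple_model, train), complex = rmse(complex_model, train))
# ...but typically loses on new data generated the same way
c(simple = rmse(simple_model, test), complex = rmse(complex_model, test))``````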
We have already encountered the mean (or average), and in fact most people know about the average even if they have never taken a statistics class. It is commonly used to describe what we call the “central tendency” of a dataset – that is, what value are the data centered around? Most people don’t think of computing a mean as fitting a model to data. However, that’s exactly what we are doing when we compute the mean. We have already seen the formula for computing the mean of a sample of data: $\ \bar{X}=\frac{{\sum_{i=1}^n}x_i}{n}$ Note that I said that this formula was specifically for a sample of data, which is a set of data points selected from a larger population. Using a sample, we wish to characterize a larger population – the full set of individuals that we are interested in. For example, if we are a political pollster our population of interest might be all registered voters, whereas our sample might just include a few thousand people sampled from this population. Later in the course we will talk in more detail about sampling, but for now the important point is that statisticians generally like to use different symbols to differentiate statistics that describe values for a sample from parameters that describe the true values for a population; in this case, the formula for the population mean (denoted as μ) is: $\ \mu=\frac{\sum_{i=1}^Nx_i}{N}$ where N is the size of the entire population. We have already seen that the mean is the summary statistic that is guaranteed to give us a mean error of zero. The mean also has another characteristic: It is the summary statistic that has the lowest possible value for the sum of squared errors (SSE). In statistics, we refer to this as being the “best” estimator. We could prove this mathematically, but instead we will demonstrate it graphically in Figure 8.7. This minimization of SSE is a good feature, and it’s why the mean is the most commonly used statistic to summarize data. However, the mean also has a dark side. Let’s say that five people are in a bar, and we examine each person’s income: Table 8.1: Income for our five bar patrons income person 48000 Joe 64000 Karen 58000 Mark 72000 Andrea 66000 Pat The mean (61600.00) seems to be a pretty good summary of the income of those five people. Now let’s look at what happens if Beyoncé Knowles walks into the bar: Table 8.2: Income for our five bar patrons plus Beyoncé Knowles. income person 4.8e+04 Joe 6.4e+04 Karen 5.8e+04 Mark 7.2e+04 Andrea 6.6e+04 Pat 5.4e+07 Beyonce The mean is now almost 10 million dollars, which is not really representative of any of the people in the bar – in particular, it is heavily driven by the outlying value of Beyoncé. In general, the mean is highly sensitive to extreme values, which is why it’s always important to ensure that there are no extreme values when using the mean to summarize data. 8.5.1 The median If we want to summarize the data in a way that is less sensitive to outliers, we can use another statistic called the median. If we were to sort all of the values in order of their magnitude, then the median is the value in the middle. If there is an even number of values then there will be two values tied for the middle place, in which case we take the mean (i.e. the halfway point) of those two numbers. Let’s look at an example. Say we want to summarize the following values: 8 6 3 14 12 7 6 4 9 If we sort those values: 3 4 6 6 7 8 9 12 14 Then the median is the middle value – in this case, the 5th of the 9 values. 
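Here is a quick sketch of the income example in R, using an income of 54 million dollars to stand in for Beyoncé (the exact figure is made up); it shows how strongly the mean reacts to the outlier while the median barely moves.

``````# incomes for the five bar patrons
incomes <- c(48000, 64000, 58000, 72000, 66000)
mean(incomes)   # 61600
median(incomes) # 64000

# add the outlying income
incomes_with_beyonce <- c(incomes, 54000000)
mean(incomes_with_beyonce)   # jumps to roughly 9 million
median(incomes_with_beyonce) # moves only slightly, to 65000``````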
Whereas the mean minimizes the sum of squared errors, the median minimizes a slightly different quantity: the sum of absolute errors. This explains why it is less sensitive to outliers – squaring is going to exacerbate the effect of large errors compared to taking the absolute value. We can see this in the case of the income example: the median is much more representative of the group as a whole, and less sensitive to the one large outlier.

Table 8.3: Summary statistics for income after arrival of Beyoncé Knowles.

Statistic   Value
Mean        9051333
Median      65000

Given this, why would we ever use the mean? As we will see in a later chapter, the mean is the "best" estimator in the sense that it will vary less from sample to sample compared to other estimators. It's up to us to decide whether that is worth the sensitivity to potential outliers – statistics is all about tradeoffs.
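The claim that the median minimizes the sum of absolute errors can be checked directly with a small sketch, using the five bar patrons' incomes from Table 8.1:

``````incomes <- c(48000, 64000, 58000, 72000, 66000)

# sum of absolute errors for a range of candidate summary values
candidates <- seq(40000, 80000, by = 1000)
sae <- sapply(candidates, function(m) sum(abs(incomes - m)))

# the candidate with the smallest sum of absolute errors is the median (64000)
candidates[which.min(sae)]
median(incomes)``````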
Sometimes we wish to describe the central tendency of a dataset that is not numeric. For example, let's say that we want to know which models of iPhone are most commonly used. Let's say we ask a large group of iPhone users which model each one owns. If we were to take the average of these values, we would see that the mean iPhone model is 9.51, which is clearly nonsensical, since the iPhone model numbers are not meant to be quantitative measurements. In this case, a more appropriate measure of central tendency is the mode, which is the most common value in the dataset, as we discussed above.

8.08: Variability- How Well Does the Mean Fit the Data

Once we have described the central tendency of the data, we often also want to describe how variable the data are – this is sometimes also referred to as "dispersion", reflecting the fact that it describes how widely dispersed the data are. We have already encountered the sum of squared errors above, which is the basis for the most commonly used measures of variability: the variance and the standard deviation. The variance for a population (referred to as $\sigma^2$) is simply the sum of squared errors divided by the number of observations - that is, it is exactly the same as the mean squared error that you encountered earlier:

$\ \sigma^2=\frac{SSE}{N}=\frac{\sum_{i=1}^N(x_i - \mu)^2}{N}$

where $\mu$ is the population mean. The standard deviation is simply the square root of this – that is, the root mean squared error that we saw before. The standard deviation is useful because the errors are in the same units as the original data (undoing the squaring that we applied to the errors). We usually don't have access to the entire population, so we have to compute the variance using a sample, which we refer to as $\ \hat{\sigma}^2$, with the "hat" representing the fact that this is an estimate based on a sample. The equation for $\ \hat{\sigma}^2$ is similar to the one for $\sigma^2$:

$\ \hat{\sigma}^2=\frac{\sum_{i=1}^n(x_i - \bar{X})^2}{n-1}$

The only difference between the two equations is that we divide by n - 1 instead of N. This relates to a fundamental statistical concept: degrees of freedom. Remember that in order to compute the sample variance, we first had to estimate the sample mean $\ \bar{X}$. Having estimated this, one value in the data is no longer free to vary. For example, let's say we have the following data points for a variable x: [3, 5, 7, 9, 11], the mean of which is 7. Because we know that the mean of this dataset is 7, we can compute what any specific value would be if it were missing. For example, let's say we were to obscure the first value (3). Having done this, we still know that its value must be 3, because the mean of 7 implies that the sum of all of the values is 7 * n = 35 and 35 − (5 + 7 + 9 + 11) = 3. So when we say that we have "lost" a degree of freedom, it means that there is a value that is not free to vary after fitting the model. In the context of the sample variance, if we don't account for the lost degree of freedom, then our estimate of the sample variance will be biased, causing us to underestimate the uncertainty of our estimate of the mean.
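A small simulation sketch of why the n - 1 correction matters: repeatedly draw samples from a population whose variance we know, and compare the average value of the two estimators.

``````set.seed(12345)

# population standard deviation of 2, so the true variance is 4
estimates <- replicate(10000, {
  x <- rnorm(10, mean = 0, sd = 2)
  n <- length(x)
  c(
    divide_by_n = sum((x - mean(x))^2) / n,
    divide_by_n_minus_1 = sum((x - mean(x))^2) / (n - 1)
  )
})

# the n - 1 version averages close to 4; the n version comes out too small
rowMeans(estimates)``````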
Having characterized a distribution in terms of its central tendency and variability, it is often useful to express the individual scores in terms of where they sit with respect to the overall distribution. Let's say that we are interested in characterizing the relative level of crimes across different states, in order to determine whether California is a particularly dangerous place. We can ask this question using data for 2014 from the FBI's Uniform Crime Reporting site. The left panel of Figure 8.8 shows a histogram of the number of violent crimes per state, highlighting the value for California. Looking at these data, it seems like California is terribly dangerous, with 153709 crimes in that year. With R it's also easy to generate a map showing the distribution of a variable across states, which is presented in the right panel of Figure 8.8.

It may have occurred to you, however, that CA also has the largest population of any state in the US, so it's reasonable that it will also have a larger number of crimes. If we plot the two against one another (see left panel of Figure 8.9), we see that there is a direct relationship between population and the number of crimes. Instead of using the raw numbers of crimes, we should instead use the per-capita violent crime rate, which we obtain by dividing the number of crimes by the population of the state. The dataset from the FBI already includes this value (expressed as rate per 100,000 people). Looking at the right panel of Figure 8.9, we see that California is not so dangerous after all – its crime rate of 396.10 per 100,000 people is a bit above the mean across states of 346.81, but well within the range of many other states. But what if we want to get a clearer view of how far it is from the rest of the distribution?

The Z-score allows us to express data in a way that provides more insight into each data point's relationship to the overall distribution. The formula to compute a Z-score for a data point, given that we know the value of the population mean $\mu$ and standard deviation $\sigma$, is:

$Z(x) = \frac{x - \mu}{\sigma}$

Intuitively, you can think of a Z-score as telling you how far away from the mean any data point is, in units of standard deviation. We can compute this for the crime rate data, as shown in Figure 8.10.

## [1] "mean of Z-scored data: 1.4658413372004e-16"
## [1] "std deviation of Z-scored data: 1"

The scatterplot shows us that the process of Z-scoring doesn't change the relative distribution of the data points (visible in the fact that the original data and Z-scored data fall on a straight line when plotted against each other) – it just shifts them to have a mean of zero and a standard deviation of one. However, if you look closely, you will see that the mean isn't exactly zero – it's just very small. What is going on here is that the computer represents numbers with a certain amount of numerical precision, which means that there are numbers that are not exactly zero, but are small enough that R considers them to be zero.

Figure 8.11 shows the Z-scored crime data using the geographical view. This provides us with a slightly more interpretable view of the data. For example, we can see that Nevada, Tennessee, and New Mexico all have crime rates that are roughly two standard deviations above the mean.

8.9.1 Interpreting Z-scores

The "Z" in "Z-score" comes from the fact that the standard normal distribution (that is, a normal distribution with a mean of zero and a standard deviation of 1) is often referred to as the "Z" distribution.
We can use the standard normal distribution to help us understand what specific Z-scores tell us about where a data point sits with respect to the rest of the distribution. The upper panel in Figure 8.12 shows that we expect about 16% of values to fall in $Z\ge 1$, and the same proportion to fall in $Z\le -1$. Figure 8.13 shows the same plot for two standard deviations. Here we see that only about 2.3% of values fall in $Z \le -2$ and the same in $Z \ge 2$. Thus, if we know the Z-score for a particular data point, we can estimate how likely or unlikely we would be to find a value at least as extreme as that value, which lets us put values into better context.

8.9.2 Standardized scores

Let's say that instead of Z-scores, we wanted to generate standardized crime scores with a mean of 100 and a standard deviation of 10. This is similar to the standardization that is done with scores from intelligence tests to generate the intelligence quotient (IQ). We can do this by simply multiplying the Z-scores by 10 and then adding 100.

8.9.2.1 Using Z-scores to compare distributions

One useful application of Z-scores is to compare distributions of different variables. Let's say that we want to compare the distributions of violent crimes and property crimes across states. In the left panel of Figure 8.15 we plot those against one another, with CA plotted in blue. As you can see, the raw rates of property crimes are far higher than the raw rates of violent crimes, so we can't just compare the numbers directly. However, we can plot the Z-scores for these data against one another (right panel of Figure 8.15) – here again we see that the distribution of the data does not change. Having put the data into Z-scores for each variable makes them comparable, and lets us see that California is actually right in the middle of the distribution in terms of both violent crime and property crime.

Let's add one more factor to the plot: population. In the left panel of Figure 8.16 we show this using the size of the plotting symbol, which is often a useful way to add information to a plot. Because Z-scores are directly comparable, we can also compute a "Violence difference" score that expresses the relative rate of violent to non-violent (property) crimes across states. We can then plot those scores against population (see right panel of Figure 8.16). This shows how we can use Z-scores to bring different variables together on a common scale.

It is worth noting that the smallest states appear to have the largest differences in both directions. While it might be tempting to look at each state and try to determine why it has a high or low difference score, this probably reflects the fact that the estimates obtained from smaller samples are necessarily going to be more variable, as we will discuss in the later chapter on Sampling.
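As a quick sketch of the standardization described in section 8.9.2 above (using a hypothetical vector of per-capita crime rates, since the FBI data object itself is not shown here), we could compute standardized scores in R as follows:

``````# hypothetical per-capita violent crime rates (per 100,000 people)
crime_rate <- c(396.1, 346.8, 201.1, 635.8, 292.6)

# convert to Z-scores, then rescale to a mean of 100 and a standard deviation of 10
z_crime <- (crime_rate - mean(crime_rate)) / sd(crime_rate)
standardized_crime <- z_crime * 10 + 100

mean(standardized_crime)  # 100
sd(standardized_crime)    # 10``````

Because this is just a linear transformation of the Z-scores, it changes the mean and standard deviation but leaves the shape of the distribution untouched.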
In this chapter we will focus on how to compute the measures of central tendency and variability that were covered in the previous chapter. Most of these can be computed using a built-in R function, but we will show how to do them manually in order to give some intuition about how they work.

09: Fitting Simple Models with R

The mean is defined as the sum of values divided by the number of values being summed:

$\bar{X} =\frac{\sum_{i=1}^nx_i}{n}$

Let's say that we want to obtain the mean height for adults in the NHANES database (contained in the variable `Height`). We would sum the individual heights (using the `sum()` function) and then divide by the number of values:

``sum(NHANES$Height)/length(NHANES$Height)``

``## [1] NA``

This returns the value NA, because there are missing values for some rows, and the `sum()` function doesn't automatically handle those. To address this, we could filter the data frame using `drop_na()` to drop rows with NA values for this variable:

``````height_noNA <- NHANES %>%
  drop_na(Height) %>%
  pull(Height)

sum(height_noNA)/length(height_noNA)``````

``## [1] 160``

There is, of course, a built-in function in R called `mean()` that will compute the mean. Like the `sum()` function, `mean()` will return NA if there are any NA values in the data:

``mean(NHANES$Height)``

``## [1] NA``

The `mean()` function includes an optional argument called `na.rm` that will remove NA values if it is set to TRUE:

``mean(NHANES$Height, na.rm=TRUE)``

``## [1] 160``

9.02: Median

The median is the middle value after sorting the entire set of values. Let's use the cleaned-up `height_noNA` variable created above to determine this for the NHANES height data. First we sort the data in order of their values:

``height_sorted <- sort(height_noNA)``

Next we find the median value. If there is an odd number of values in the list, then this is just the value in the middle, whereas if the number of values is even then we take the average of the two middle values. We can determine whether the number of items is odd or even by dividing the length by two and seeing if there is a remainder; we do this using the `%%` operator, which is known as the modulus and returns the remainder:

``5 %% 2``

``## [1] 1``

Here we will test whether the remainder is equal to one; if it is, then we will take the middle value, otherwise we will take the average of the two middle values. We can do this using an if/else structure, which executes different processes depending on which of the arguments are true:

``````if (logical value) {
  functions to perform if logical value is true
} else {
  functions to perform if logical value is false
}``````

Let's do this with our data. To find the middle value when the number of items is odd, we will divide the length by two and then round up, using the `ceiling()` function:

``````if (length(height_sorted) %% 2 == 1){
  # length of vector is odd: take the middle value
  median_height <- height_sorted[ceiling(length(height_sorted) / 2)]
} else {
  # length of vector is even: average the two middle values
  median_height <- (height_sorted[length(height_sorted) / 2] +
                    height_sorted[1 + length(height_sorted) / 2]) / 2
}

median_height``````

``## [1] 165``

We can compare this to the result from the built-in median function:

``median(height_noNA)``

``## [1] 165``
The mode is the most frequent value that occurs in a variable. R has a function called `mode()`, but if you look at the help page you will see that it doesn't actually compute the mode. In fact, R doesn't have a built-in function to compute the mode, so we need to create one. Let's start with some toy data:

``````mode_test = c('a', 'b', 'b', 'c', 'c', 'c')

mode_test``````

``## [1] "a" "b" "b" "c" "c" "c"``

We can see by eye that the mode is "c", since it occurs more often than the other values. To find it computationally, we first create a table with the counts for each value, using the `table()` function:

``````mode_table <- table(mode_test)

mode_table``````

``````## mode_test
## a b c
## 1 2 3``````

Now we need to find the maximum value. We do this by comparing each value to the maximum of the table; this will work even if there are multiple values with the same frequency (i.e. a tie for the mode).

``````table_max <- mode_table[mode_table == max(mode_table)]

table_max``````

``````## c
## 3``````

This variable is a special kind of value called a named vector, and its name contains the value that we need to identify the mode. We can pull it out using the `names()` function:

``````my_mode <- names(table_max)[1]

my_mode``````

``## [1] "c"``

Let's wrap this up into our own custom function:

``````getmode <- function(v, print_table=FALSE) {
  mode_table <- table(v)
  if (print_table){
    print(kable(mode_table))
  }
  table_max <- mode_table[mode_table == max(mode_table)]
  return(names(table_max))
}``````

We can then apply this to real data. Let's apply it to the `MaritalStatus` variable in the NHANES dataset:

``getmode(NHANES$MaritalStatus)``

``## [1] "Married"``

9.04: Variability

Let's first compute the variance, which is the average squared difference between each value and the mean. Let's do this with our cleaned-up version of the height data, but instead of working with the entire dataset, let's take a random sample of 150 individuals:

``````height_sample <- NHANES %>%
  drop_na(Height) %>%
  sample_n(150) %>%
  pull(Height)``````

First we need to obtain the sum of squared errors from the mean. In R, we can square a vector using `**2`:

``````SSE <- sum((height_sample - mean(height_sample))**2)

SSE``````

``## [1] 63419``

Then we divide by N - 1 to get the estimated variance:

``````var_est <- SSE/(length(height_sample) - 1)

var_est``````

``## [1] 426``

We can compare this to the built-in `var()` function:

``var(height_sample)``

``## [1] 426``

We can get the standard deviation by simply taking the square root of the variance:

``sqrt(var_est)``

``## [1] 21``

which is the same value obtained using the built-in `sd()` function:

``sd(height_sample)``

``## [1] 21``

9.05: Z-scores

A Z-score is obtained by first subtracting the mean and then dividing by the standard deviation of a distribution. Let's do this for the `height_sample` data.

``````mean_height <- mean(height_sample)
sd_height <- sd(height_sample)

z_height <- (height_sample - mean_height)/sd_height``````

Now let's plot the histogram of Z-scores alongside the histogram for the original values. We will use the `plot_grid()` function from the `cowplot` library to plot the two figures alongside one another. First we need to put the values into a data frame, since `ggplot()` requires the data to be contained in a data frame.
``````height_df <- data.frame(orig_height = height_sample,
                        z_height = z_height)

# create individual plots
plot_orig <- ggplot(height_df, aes(orig_height)) +
  geom_histogram()

plot_z <- ggplot(height_df, aes(z_height)) +
  geom_histogram()

# combine into a single figure
plot_grid(plot_orig, plot_z)``````

You will notice that the shapes of the histograms are similar but not exactly the same. This occurs because the binning is slightly different between the two sets of values. However, if we plot them against one another in a scatterplot, we will see that there is a direct linear relation between the two sets of values:

``````ggplot(height_df, aes(orig_height, z_height)) +
  geom_point()``````
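As a quick check (a small sketch reusing the `z_height` variable created above), we can also confirm numerically that the Z-scored values have a mean of essentially zero and a standard deviation of one:

``````mean(z_height)  # very close to zero (limited only by floating point precision)
sd(z_height)    # equal to 1``````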
Learning Objectives

• Describe the sample space for a selected random experiment.
• Compute relative frequency and empirical probability for a given set of events.
• Compute probabilities of single events, complementary events, and the unions and intersections of collections of events.
• Describe the law of large numbers.
• Describe the difference between a probability and a conditional probability.
• Describe the concept of statistical independence.
• Use Bayes' theorem to compute the inverse conditional probability.

Probability theory is the branch of mathematics that deals with chance and uncertainty. It forms an important part of the foundation for statistics, because it provides us with the mathematical tools to describe uncertain events. The study of probability arose in part due to interest in understanding games of chance, like cards or dice. These games provide useful examples of many statistical concepts, because when we repeat these games the likelihood of different outcomes remains (mostly) the same. However, there are deep questions about the meaning of probability that we will not address here; see Suggested Readings at the end if you are interested in learning more about this fascinating topic and its history.

10: Probability

It might strike you that it is a bit odd to talk about the probability of a person having cancer depending on a test result; after all, the person either has cancer or they don't. Historically, there have been two different ways that probabilities have been interpreted. The first (known as the frequentist interpretation) interprets probabilities in terms of long-run frequencies. For example, in the case of a coin flip, it would reflect the relative frequencies of heads in the long run after a large number of flips. While this interpretation might make sense for events that can be repeated many times like a coin flip, it makes less sense for events that will only happen once, like an individual person's life or a particular presidential election; and as the economist John Maynard Keynes famously said, "In the long run, we are all dead."

The other interpretation of probabilities (known as the Bayesian interpretation) is as a degree of belief in a particular proposition. If I were to ask you "How likely is it that the US will return to the moon by 2026?", you can provide an answer to this question based on your knowledge and beliefs, even though there are no relevant frequencies to compute a frequentist probability. One way that we often frame subjective probabilities is in terms of one's willingness to accept a particular gamble. For example, if you think that the probability of the US landing on the moon by 2026 is 0.1 (i.e. odds of 9 to 1 against), then that means that you should be willing to accept a gamble that would pay off with anything more than 9 to 1 odds if the event occurs.

As we will see, these two different definitions of probability are very relevant to the two different ways that statisticians think about testing statistical hypotheses, which we will encounter in later chapters.

10.02: Suggested Readings

• The Drunkard's Walk: How Randomness Rules Our Lives, by Leonard Mlodinow

10.03: Appendix

Proof (Derivation of Bayes' rule).
First, remember the rule for computing a conditional probability:

$P(A|B) = \frac{P(A \cap B)}{P(B)}$

We can rearrange this to get the formula to compute the joint probability using the conditional:

$P(A \cap B) = P(A|B) * P(B)$

Using this we can compute the inverse probability:

$P(B|A) = \frac{P(A \cap B)}{P(A)} = \frac{P(A|B)*P(B)}{P(A)}$

10.04: What Is Probability?

Informally, we usually think of probability as a number that describes the likelihood of some event occurring, which ranges from zero (impossibility) to one (certainty). Sometimes probabilities will instead be expressed in percentages, which range from zero to one hundred, as when the weather forecast predicts a twenty percent chance of rain today. In each case, these numbers are expressing how likely that particular event is, ranging from absolutely impossible to absolutely certain.

To formalize probability theory, we first need to define a few terms:

• An experiment is any activity that produces or observes an outcome. Examples are flipping a coin, rolling a 6-sided die, or trying a new route to work to see if it's faster than the old route.
• The sample space is the set of possible outcomes for an experiment. We represent these by listing them within a set of squiggly brackets. For a coin flip, the sample space is {heads, tails}. For a six-sided die, the sample space is each of the possible numbers that can appear: {1,2,3,4,5,6}. For the amount of time it takes to get to work, the sample space is all possible real numbers greater than zero (since it can't take a negative amount of time to get somewhere, at least not yet). We won't bother trying to write out all of those numbers within the brackets.
• An event is a subset of the sample space. In principle it could be one or more of the possible outcomes in the sample space, but here we will focus primarily on elementary events, which consist of exactly one possible outcome. For example, this could be obtaining heads in a single coin flip, rolling a 4 on a throw of the die, or taking 21 minutes to get home by the new route.

Now that we have those definitions, we can outline the formal features of a probability, which were first defined by the Russian mathematician Andrei Kolmogorov. These are the features that a value has to have if it is going to be a probability. If $P(X_i)$ is the probability of event $X_i$:

• Probability cannot be negative: $P(X_i) \ge 0$.
• The total probability of all outcomes in the sample space is 1; that is, if we take the probability of each element in the sample space and add them up, they must sum to 1. We can express this using the summation symbol $\sum$:

$\sum_{i=1}^{N} P\left(X_{i}\right)=P\left(X_{1}\right)+P\left(X_{2}\right)+\ldots+P\left(X_{N}\right)=1$

This is interpreted as saying "Take all of the N elementary events, which we have labeled from 1 to N, and add up their probabilities. These must sum to one."

• The probability of any individual event cannot be greater than one: $P(X_{i}) \le 1$. This is implied by the previous points; since the probabilities must sum to one, and they can't be negative, any particular probability must be less than or equal to one.
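As a tiny sketch (not from the original text), we can check these features in R for a fair six-sided die:

``````# probabilities for a fair six-sided die
p <- rep(1/6, 6)

all(p >= 0)  # no probability is negative
sum(p)       # the probabilities sum to 1 (up to floating point precision)
all(p <= 1)  # no individual probability exceeds 1``````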
Now that we know what a probability is, how do we actually figure out what the probability is for any particular event?

10.2.1 Personal belief

Let's say that I asked you what the probability was that the Beatles would have been equally successful if they had not replaced their original drummer Pete Best with Ringo Starr in 1962. We will define "success" in terms of the number of number-one hits on the Billboard Hot 100 (which we refer to as Nhits); the Beatles had 20 such number-one hits, so the sample space is {Nhits < 20, Nhits ≥ 20}. We can't actually do the experiment to find the outcome. However, most people with knowledge of the Beatles would be willing to at least offer a guess at the probability of this event. In many cases personal knowledge and/or opinion is the only guide we have for determining the probability of an event, but this is not very scientifically satisfying.

10.2.2 Empirical frequency

Another way to determine the probability of an event is to do the experiment many times and count how often each event happens. From the relative frequency of the different outcomes, we can compute the probability of each. For example, let's say that we are interested in knowing the probability of rain in San Francisco. We first have to define the experiment — let's say that we will look at the National Weather Service data for each day in 2017 and determine whether there was any rain at the downtown San Francisco weather station.

| Number of rainy days | Number of days measured | P(rain) |
|----------------------|-------------------------|---------|
| 73                   | 365                     | 0.2     |

According to these data, in 2017 there were 73 rainy days. To compute the probability of rain in San Francisco, we simply divide the number of rainy days by the number of days counted (365), giving P(rain in SF in 2017) = 0.2.

How do we know that empirical probability gives us the right number? The answer to this question comes from the law of large numbers, which shows that the empirical probability will approach the true probability as the sample size increases. We can see this by simulating a large number of coin flips, and looking at our estimate of the probability of heads after each flip. We will spend more time discussing simulation in a later chapter; for now, just assume that we have a computational way to generate a random outcome for each coin flip. The left panel of Figure 10.1 shows that as the number of samples (i.e., coin flip trials) increases, the estimated probability of heads converges onto the true value of 0.5. However, note that the estimates can be very far off from the true value when the sample sizes are small.

A real-world example of this was seen in the 2017 special election for the US Senate in Alabama, which pitted the Republican Roy Moore against Democrat Doug Jones. The right panel of Figure 10.1 shows the relative amount of the vote reported for each of the candidates over the course of the evening, as an increasing number of ballots were counted. Early in the evening the vote counts were especially volatile, swinging from a large initial lead for Jones to a long period where Moore had the lead, until finally Jones took the lead to win the race. These two examples show that while large samples will ultimately converge on the true probability, the results with small samples can be far off. Unfortunately, many people forget this and overinterpret results from small samples.
This was referred to as the law of small numbers by the psychologists Danny Kahneman and Amos Tversky, who showed that people (even trained researchers) often behave as if the law of large numbers applies even to small samples, giving too much credence to results from small datasets. We will see examples throughout the course of just how unstable statistical results can be when they are generated on the basis of small samples.

10.2.3 Classical probability

It's unlikely that any of us has ever flipped a coin tens of thousands of times, but we are nonetheless willing to believe that the probability of flipping heads is 0.5. This reflects the use of yet another approach to computing probabilities, which we refer to as classical probability. In this approach, we compute the probability directly based on our knowledge of the situation.

Classical probability arose from the study of games of chance such as dice and cards. A famous example arose from a problem encountered by a French gambler who went by the name of Chevalier de Méré. de Méré played two different dice games: In the first he bet on the chance of at least one six on four rolls of a six-sided die, while in the second he bet on the chance of at least one double-six on 24 rolls of two dice. He expected to win money on both of these gambles, but he found that while on average he won money on the first gamble, he actually lost money on average when he played the second gamble many times. To understand this he turned to his friend, the mathematician Blaise Pascal, who is now recognized as one of the founders of probability theory.

How can we understand this question using probability theory? In classical probability, we start with the assumption that all of the elementary events in the sample space are equally likely; that is, when you roll a die, each of the possible outcomes ({1,2,3,4,5,6}) is equally likely to occur. (No loaded dice allowed!) Given this, we can compute the probability of any individual outcome as one divided by the number of possible outcomes:

$P(outcome_i)=\frac{1}{\text{number of possible outcomes}}$

For the six-sided die, the probability of each individual outcome is 1/6.

This is nice, but de Méré was interested in more complex events, like what happens on multiple dice throws. How do we compute the probability of a complex event (which is a union of single events), like rolling a one on the first or the second throw? We represent the union of events mathematically using the $\cup$ symbol: for example, if the probability of rolling a one on the first throw is referred to as $P(Roll1_{throw1})$ and the probability of rolling a one on the second throw is $P(Roll1_{throw2})$, then the union is referred to as $P(Roll1_{throw1} \cup Roll1_{throw2})$.

de Méré thought (incorrectly, as we will see below) that he could simply add together the probabilities of the individual events to compute the probability of the combined event, meaning that the probability of rolling a one on the first or second roll would be computed as follows:

$P(Roll1_{throw1})=1/6$

$P(Roll1_{throw2})=1/6$

de Méré's error:

$P(Roll1_{throw1} \cup Roll1_{throw2})=P(Roll1_{throw1})+P(Roll1_{throw2})=1/6+1/6=1/3$

de Méré reasoned based on this that the probability of at least one six in four rolls was the sum of the probabilities on each of the individual throws: $4 * \frac{1}{6}=\frac{2}{3}$.
Similarly, he reasoned that since the probability of a double-six in a throw of two dice is 1/36, then the probability of at least one double-six on 24 rolls of two dice would be $24 * \frac{1}{36}=\frac{2}{3}$. Yet, while he consistently won money on the first bet, he lost money on the second bet. What gives?

To understand de Méré's error, we need to introduce some of the rules of probability theory. The first is the rule of subtraction, which says that the probability of some event A not happening is one minus the probability of the event happening:

$P(\neg A)=1-P(A)$

where $\neg A$ means "not A". This rule derives directly from the axioms that we discussed above; because A and $\neg A$ are the only possible outcomes, their total probability must sum to 1. For example, if the probability of rolling a one in a single throw is $\frac{1}{6}$, then the probability of rolling anything other than a one is $\frac{5}{6}$.

A second rule tells us how to compute the probability of a conjoint event – that is, the probability that both of two events will occur. We refer to this as an intersection, which is signified by the $\cap$ symbol; thus, $P(A \cap B)$ means the probability that both A and B will occur. This version of the rule tells us how to compute this quantity in the special case when the two events are independent from one another; we will learn later exactly what the concept of independence means, but for now we can just take it for granted that the two die throws are independent events. We compute the probability of the intersection of two independent events by simply multiplying the probabilities of the individual events:

$P(A \cap B)=P(A)*P(B)\ \text{if and only if A and B are independent}$

Thus, the probability of throwing a six on both of two rolls is $\frac{1}{6} * \frac{1}{6} = \frac{1}{36}$.

The third rule tells us how to add together probabilities – and it is here that we see the source of de Méré's error. The addition rule tells us that to obtain the probability of either of two events occurring, we add together the individual probabilities, but then subtract the likelihood of both occurring together:

$P(A \cup B)=P(A)+P(B)-P(A \cap B)$

In a sense, this prevents us from counting those instances twice, and that's what distinguishes the rule from de Méré's incorrect computation. Let's say that we want to find the probability of rolling a one on either of two throws. According to our rules:

$P(Roll1_{throw1} \cup Roll1_{throw2})=P(Roll1_{throw1})+P(Roll1_{throw2})-P(Roll1_{throw1} \cap Roll1_{throw2})$

$=\frac{1}{6}+\frac{1}{6}-\frac{1}{36}=\frac{11}{36}$

Let's use a graphical depiction to get a different view of this rule. Figure 10.2 shows a matrix representing all possible combinations of results across two throws, and highlights the cells that involve a one on either the first or second throw. If you count up the cells in light blue you will see that there are 11 such cells. This shows why the addition rule gives a different answer from de Méré's; if we were to simply add together the probabilities for the two throws as he did, then we would count (1,1) towards both, when it should really only be counted once.

10.2.4 Solving de Méré's problem

Blaise Pascal used the rules of probability to come up with a solution to de Méré's problem.
First, he realized that computing the probability of at least one event out of a combination was tricky, whereas computing the probability that something does not occur across several events is relatively easy – it's just the product of the probabilities of the individual events. Thus, rather than computing the probability of at least one six in four rolls, he instead computed the probability of no sixes across all rolls:

$P(\text{no sixes in four rolls})=\frac{5}{6} * \frac{5}{6} * \frac{5}{6} * \frac{5}{6}=\left(\frac{5}{6}\right)^{4}=0.482$

He then used the fact that the probability of no sixes in four rolls is the complement of at least one six in four rolls (thus they must sum to one), and used the rule of subtraction to compute the probability of interest:

$P(\text{at least one six in four rolls})=1-\left(\frac{5}{6}\right)^{4}=0.517$

de Méré's gamble that he would throw at least one six in four rolls has a probability of greater than 0.5, explaining why de Méré made money on this bet on average.

But what about de Méré's second bet? Pascal used the same trick:

$P(\text{no double six in 24 rolls})=\left(\frac{35}{36}\right)^{24}=0.509$

$P(\text{at least one double six in 24 rolls})=1-\left(\frac{35}{36}\right)^{24}=0.491$

The probability of this outcome was slightly below 0.5, showing why de Méré lost money on average on this bet.
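These computations are easy to reproduce in R (a small sketch, not from the original text):

``````# probability of at least one six in four rolls of a single die
1 - (5/6)^4     # about 0.518 -- a winning bet on average

# probability of at least one double-six in 24 rolls of two dice
1 - (35/36)^24  # about 0.491 -- a losing bet on average``````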
A probability distribution describes the probability of all of the possible outcomes in an experiment. For example, on Jan 20 2018, the basketball player Steph Curry hit only 2 out of 4 free throws in a game against the Houston Rockets. We know that Curry's overall probability of hitting free throws across the entire season was 0.91, so it seems pretty unlikely that he would hit only 50% of his free throws in a game, but exactly how unlikely is it? We can determine this using a theoretical probability distribution; during this course we will encounter a number of these probability distributions, each of which is appropriate to describe different types of data. In this case, we use the binomial distribution, which provides a way to compute the probability of some number of successes out of a number of trials on which there is either success or failure and nothing in between (known as "Bernoulli trials"), given some known probability of success on each trial. This distribution is defined as:

$P(k; n, p)=P(X=k)=\binom{n}{k} p^{k}(1-p)^{n-k}$

This refers to the probability of k successes on n trials when the probability of success is p. You may not be familiar with $\binom{n}{k}$, which is referred to as the binomial coefficient. The binomial coefficient is also referred to as "n-choose-k" because it describes the number of different ways that one can choose k items out of n total items. The binomial coefficient is computed as:

$\binom{n}{k}=\frac{n!}{k!(n-k)!}$

where the exclamation point (!) refers to the factorial of the number:

$n!=\prod_{i=1}^{n} i=n *(n-1) * \ldots * 2 * 1$

In the example of Steph Curry's free throws:

$P(2; 4, 0.91)=\binom{4}{2} 0.91^{2}(1-0.91)^{4-2}=0.040$

This shows that given Curry's overall free throw percentage, it is very unlikely that he would hit only 2 out of 4 free throws. Which just goes to show that unlikely things do actually happen in the real world.

10.3.1 Cumulative probability distributions

Often we want to know not just how likely a specific value is, but how likely it is to find a value that is as extreme or more extreme than a particular value; this will become very important when we discuss hypothesis testing in a later chapter. To answer this question, we can use a cumulative probability distribution; whereas a standard probability distribution tells us the probability of some specific value, the cumulative distribution tells us the probability of a value as large or larger (or as small or smaller) than some specific value.

In the free throw example, we might want to know: What is the probability that Steph Curry hits 2 or fewer free throws out of four, given his overall free throw probability of 0.91? To determine this, we could simply use the binomial probability equation and plug in all of the possible values of k and add them together:

$P(k \le 2)=P(k=2)+P(k=1)+P(k=0)=6e-5+.002+.040=.043$

In many cases the number of possible outcomes would be too large for us to compute the cumulative probability by enumerating all possible values; fortunately, it can be computed directly. For the binomial, we can do this in R using the `pbinom()` function:

Table 10.1: Cumulative probability distribution for number of successful free throws by Steph Curry in 4 attempts.

| numSuccesses | probability |
|--------------|-------------|
| 0            | 0.00        |
| 1            | 0.00        |
| 2            | 0.04        |
| 3            | 0.31        |
| 4            | 1.00        |

From the table we can see that the probability of Curry landing 2 or fewer free throws out of 4 attempts is 0.043.
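As a sketch of how these values could be computed directly (the exact code used to generate the table is not shown here; `dbinom()` and `pbinom()` are base R's binomial distribution functions):

``````# probability of exactly 2 successes out of 4 attempts with p = 0.91
dbinom(2, size = 4, prob = 0.91)  # about 0.040

# cumulative probability of 2 or fewer successes out of 4 attempts
pbinom(2, size = 4, prob = 0.91)  # about 0.043``````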
So far we have limited ourselves to simple probabilities – that is, the probability of a single event or combination of events. However, we often wish to determine the probability of some event given that some other event has occurred, which are known as conditional probabilities.

Let's take the 2016 US Presidential election as an example. There are two simple probabilities that we could use to describe the electorate. First, we know the probability that a voter in the US was affiliated with the Republican party: $p(\text{Republican}) = 0.44$. We also know the probability that a voter cast their vote in favor of Donald Trump: $p(\text{Trump voter}) = 0.46$. However, let's say that we want to know the following: What is the probability that a person cast their vote for Donald Trump, given that they are a Republican?

To compute the conditional probability of A given B (which we write as $P(A|B)$, "probability of A, given B"), we need to know the joint probability (that is, the probability of both A and B occurring) as well as the overall probability of B:

$P(A|B) = \frac{P(A \cap B)}{P(B)}$

That is, we want to know the probability that both things are true, given that the one being conditioned upon is true. It can be useful to think of this graphically. Figure 10.3 shows a flow chart depicting how the full population of voters breaks down into Republicans and Democrats, and how the conditional probability (conditioning on party) further breaks down the members of each party according to their vote.

10.10: Reversing a Conditional Probability - Bayes' Rule

In many cases, we know $P(A|B)$ but we really want to know $P(B|A)$. This commonly occurs in medical screening, where we know $P(\text{positive test result}|\text{disease})$ but what we want to know is $P(\text{disease}|\text{positive test result})$. For example, some doctors recommend that men over the age of 50 undergo screening using a test called prostate specific antigen (PSA) to screen for possible prostate cancer. Before a test is approved for use in medical practice, the manufacturer needs to test two aspects of the test's performance. First, they need to show how sensitive it is – that is, how likely is it to find the disease when it is present: $\text{sensitivity} = P(\text{positive test}|\text{disease})$. They also need to show how specific it is: that is, how likely is it to give a negative result when there is no disease present: $\text{specificity} = P(\text{negative test}|\text{no disease})$. For the PSA test, we know that sensitivity is about 80% and specificity is about 70%. However, these don't answer the question that the physician wants to answer for any particular patient: what is the likelihood that they actually have cancer, given that the test comes back positive? This requires that we reverse the conditional probability that defines sensitivity: instead of $P(\text{positive test}|\text{disease})$ we want to know $P(\text{disease}|\text{positive test})$.

In order to reverse a conditional probability, we can use Bayes' rule:

$P(B|A) = \frac{P(A|B)*P(B)}{P(A)}$

Bayes' rule is fairly easy to derive, based on the rules of probability that we learned earlier in the chapter (see the Appendix for this derivation).
If we have only two outcomes, we can express Bayes' rule in a somewhat clearer way, using the sum rule to redefine $P(A)$:

$P(A) = P(A|B)*P(B) + P(A|\neg B)*P(\neg B)$

Using this, we can redefine Bayes' rule:

$P(B|A) = \frac{P(A|B)*P(B)}{P(A|B)*P(B) + P(A|\neg B)*P(\neg B)}$

We can plug the relevant numbers into this equation to determine the likelihood that an individual with a positive PSA result actually has cancer – but note that in order to do this, we also need to know the overall probability of cancer for the person, which we often refer to as the base rate. Let's take a 60-year-old man, for whom the probability of prostate cancer in the next 10 years is $P(\text{cancer})=0.058$. Using the sensitivity and specificity values that we outlined above, we can compute the individual's likelihood of having cancer given a positive test:

$P(\text{cancer|test}) = \frac{P(\text{test|cancer})*P(\text{cancer})}{P(\text{test|cancer})*P(\text{cancer}) + P(\text{test|}\neg\text{cancer})*P(\neg\text{cancer})}$

$= \frac{0.8*0.058}{0.8*0.058 + 0.3*0.942} = 0.14$

That's pretty small – do you find that surprising? Many people do, and in fact there is a substantial psychological literature showing that people systematically neglect base rates (i.e. overall prevalence) in their judgments.
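A small R sketch (not from the original text, using the numbers given above) makes this computation concrete:

``````sensitivity <- 0.8   # P(positive test | cancer)
specificity <- 0.7   # P(negative test | no cancer)
p_cancer <- 0.058    # base rate (prior probability of cancer)

# P(positive test | no cancer) is one minus the specificity
p_pos_given_nocancer <- 1 - specificity

# marginal probability of a positive test, using the sum rule
p_pos <- sensitivity * p_cancer + p_pos_given_nocancer * (1 - p_cancer)

# Bayes' rule: P(cancer | positive test)
sensitivity * p_cancer / p_pos   # about 0.14``````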
Another way to think of Bayes' rule is as a way to update our beliefs on the basis of data – that is, learning about the world using data. Let's look at Bayes' rule again:

$P(B|A) = \frac{P(A|B)*P(B)}{P(A)}$

The different parts of Bayes' rule have specific names that relate to their role in using Bayes' rule to update our beliefs. We start out with an initial guess about the probability of B ($P(B)$), which we refer to as the prior probability. In the PSA example we used the base rate for the prior, since it was our best guess as to the individual's chance of cancer before we knew the test result. We then collect some data, which in our example was the test result. The degree to which the data A are consistent with outcome B is given by $P(A|B)$, which we refer to as the likelihood. You can think of this as how likely the data are, given the particular hypothesis being tested. In our example, the hypothesis being tested was whether the individual had cancer, and the likelihood was based on our knowledge about the sensitivity of the test (that is, the probability of a positive test outcome given cancer). The denominator ($P(A)$) is referred to as the marginal likelihood, because it expresses the overall likelihood of the data, averaged across all of the possible values of B (which in our example were having cancer and not having cancer). The outcome to the left ($P(B|A)$) is referred to as the posterior – because it's what comes out the back end of the computation.

There is another way of writing Bayes' rule that makes this a bit clearer:

$P(B|A) = \frac{P(A|B)}{P(A)}*P(B)$

The part on the left ($\frac{P(A|B)}{P(A)}$) tells us how much more or less likely the data A are given B, relative to the overall (marginal) likelihood of the data, while the part on the right ($P(B)$) tells us how likely we thought B was before we knew anything about the data. This makes it clearer that the role of Bayes' theorem is to update our prior knowledge based on the degree to which the data are more likely given B than they would be overall. If the hypothesis is more likely given the data than it would be in general, then we increase our belief in the hypothesis; if it's less likely given the data, then we decrease our belief.

10.12: Odds and Odds Ratios

The result in the last section showed that the likelihood that the individual has cancer based on a positive PSA test result is still fairly low, even though it's more than twice as big as it was before we knew the test result. We would often like to quantify the relation between probabilities more directly, which we can do by converting them into odds, which express the relative likelihood of something happening or not:

$\text{odds of A} = \frac{P(A)}{P(\neg A)}$

In our PSA example, the odds of having cancer (given the positive test) are:

$\text{odds of cancer} = \frac{P(\text{cancer})}{P(\neg \text{cancer})} =\frac{0.14}{1 - 0.14} = 0.16$

This tells us that the odds are fairly low of having cancer, even though the test was positive. For comparison, the odds of rolling a 6 in a single dice throw are:

$\text{odds of 6} = \frac{1}{5} = 0.2$

As an aside, this is a reason why many medical researchers have become increasingly wary of the use of widespread screening tests for relatively uncommon conditions; most positive results will turn out to be false positives. We can also use odds to compare different probabilities, by computing what is called an odds ratio – which is exactly what it sounds like.
For example, let's say that we want to know how much the positive test increases the individual's odds of having cancer. We can first compute the prior odds – that is, the odds before we knew that the person had tested positive. These are computed using the base rate:

$\text{prior odds} = \frac{P(\text{cancer})}{P(\neg \text{cancer})} =\frac{0.058}{1 - 0.058} = 0.061$

We can then compare these with the posterior odds, which are computed using the posterior probability:

$\text{odds ratio} =\frac{\text{posterior odds}}{\text{prior odds}}=\frac{0.16}{0.061}=2.62$

This tells us that the odds of having cancer are increased by 2.62 times given the positive test result. An odds ratio is an example of what we will later call an effect size, which is a way of quantifying how relatively large any particular statistical effect is.
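Continuing the PSA example, a small R sketch (not from the original text) of the odds computations:

``````p_cancer <- 0.058           # prior probability (base rate)
p_cancer_given_pos <- 0.14  # posterior probability from Bayes' rule

prior_odds <- p_cancer / (1 - p_cancer)
posterior_odds <- p_cancer_given_pos / (1 - p_cancer_given_pos)

prior_odds                   # about 0.06
posterior_odds               # about 0.16
posterior_odds / prior_odds  # odds ratio of about 2.6``````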
In this chapter we will go over probability computations in R.

11: Probability in R

Let's create a vector of outcomes from one to six, using the `seq()` function to create such a sequence:

``````outcomes <- seq(1, 6)

outcomes``````

``## [1] 1 2 3 4 5 6``

Now let's create a vector of logical values based on whether the outcome in each position is equal to 1. Remember that `==` tests for equality of each element in a vector:

``````outcome1isTrue <- outcomes == 1

outcome1isTrue``````

``## [1] TRUE FALSE FALSE FALSE FALSE FALSE``

Remember that the simple probability of an outcome is the number of occurrences of the outcome divided by the total number of events. To compute a probability, we can take advantage of the fact that TRUE/FALSE are equivalent to 1/0 in R. The formula for the mean (sum of values divided by the number of values) is thus exactly the same as the formula for the simple probability! So, we can compute the probability of the event by simply taking the mean of the logical vector.

``````p1isTrue <- mean(outcome1isTrue)

p1isTrue``````

``## [1] 0.17``

11.01: Basic Probability Calculations

Let's walk through how we computed the empirical frequency of rain in San Francisco. First we load the data:

``````# we will remove the STATION and NAME variables
# since they are identical for all rows
SFrain <- read_csv("data/SanFranciscoRain/1329219.csv") %>%
  dplyr::select(-STATION, -NAME)

glimpse(SFrain)``````

``````## Observations: 365
## Variables: 2
## $ DATE <date> 2017-01-01, 2017-01-02, 2017-01-03, 2017-01…
## $ PRCP <dbl> 0.05, 0.10, 0.40, 0.89, 0.01, 0.00, 0.82, 1.…``````

We see that the data frame contains a variable called `PRCP` which denotes the amount of rain each day. Let's create a new variable called `rainToday` that denotes whether the amount of precipitation was above zero:

``````SFrain <- SFrain %>%
  mutate(rainToday = as.integer(PRCP > 0))

glimpse(SFrain)``````

``````## Observations: 365
## Variables: 3
## $ DATE      <date> 2017-01-01, 2017-01-02, 2017-01-03, 20…
## $ PRCP      <dbl> 0.05, 0.10, 0.40, 0.89, 0.01, 0.00, 0.8…
## $ rainToday <int> 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, …``````

Now we will summarize the data to compute the probability of rain:

``````pRainInSF <- SFrain %>%
  summarize(
    pRainInSF = mean(rainToday)
  ) %>%
  pull()

pRainInSF``````

``## [1] 0.2``

11.02: Conditional Probability (Section 10.4)

Let's determine the conditional probability of someone being unhealthy, given that they are over 70 years of age, using the NHANES dataset. Let's create a new data frame that contains only the relevant variables, dropping any rows with missing values:

``````healthDataFrame <- NHANES %>%
  mutate(
    Over70 = Age > 70,
    Unhealthy = DaysPhysHlthBad > 0
  ) %>%
  dplyr::select(Unhealthy, Over70) %>%
  drop_na()

glimpse(healthDataFrame)``````

``````## Observations: 4,891
## Variables: 2
## $ Unhealthy <lgl> FALSE, FALSE, FALSE, TRUE, FALSE, TRUE,…
## $ Over70    <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALS…``````

First, what's the probability of being over 70?

``````pOver70 <- healthDataFrame %>%
  summarise(pOver70 = mean(Over70)) %>%
  pull()  # to obtain the specific value, we need to extract it from the data frame

pOver70``````

``## [1] 0.11``

Second, what's the probability of being unhealthy?

``````pUnhealthy <- healthDataFrame %>%
  summarise(pUnhealthy = mean(Unhealthy)) %>%
  pull()

pUnhealthy``````

``## [1] 0.36``

What's the probability for each combination of unhealthy/healthy and over 70/not over 70?
We can create a new variable that finds the joint probability by multiplying the two individual binary variables together; since anything times zero is zero, this will only have the value 1 for any case where both are true.

``````pBoth <- healthDataFrame %>%
  mutate(
    both = Unhealthy * Over70
  ) %>%
  summarise(
    pBoth = mean(both)
  ) %>%
  pull()

pBoth``````

``## [1] 0.043``

Finally, what's the probability of someone being unhealthy, given that they are over 70 years of age?

``````pUnhealthyGivenOver70 <- healthDataFrame %>%
  filter(Over70 == TRUE) %>%  # limit to Over70
  summarise(pUnhealthy = mean(Unhealthy)) %>%
  pull()

pUnhealthyGivenOver70``````

``## [1] 0.38``

``````# compute the opposite:
# what is the probability of being over 70 given that
# one is unhealthy?
pOver70givenUnhealthy <- healthDataFrame %>%
  filter(Unhealthy == TRUE) %>%  # limit to Unhealthy
  summarise(pOver70 = mean(Over70)) %>%
  pull()

pOver70givenUnhealthy``````

``## [1] 0.12``
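We can also check these results against the conditional probability formula from the previous chapter, $P(A|B) = \frac{P(A \cap B)}{P(B)}$, reusing the joint and marginal probabilities computed above (a small sketch, not from the original text):

``````# P(Unhealthy | Over70) = P(Unhealthy AND Over70) / P(Over70)
pBoth / pOver70     # same value as pUnhealthyGivenOver70 computed above

# P(Over70 | Unhealthy) = P(Unhealthy AND Over70) / P(Unhealthy)
pBoth / pUnhealthy  # same value as pOver70givenUnhealthy computed above``````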
Learning Objectives

• Distinguish between a population and a sample, and between population parameters and statistics
• Describe the concepts of sampling error and sampling distribution
• Compute the standard error of the mean
• Describe how the Central Limit Theorem determines the nature of the sampling distribution of the mean
• Compute a confidence interval for the mean based on the normal distribution, and describe its proper interpretation

One of the foundational ideas in statistics is that we can make inferences about an entire population based on a relatively small sample of individuals from that population. In this chapter we will introduce the concept of statistical sampling and discuss why it works.

Anyone living in the United States will be familiar with the concept of sampling from the political polls that have become a central part of our electoral process. In some cases, these polls can be incredibly accurate at predicting the outcomes of elections. The best known example comes from the 2008 and 2012 US Presidential elections, when the pollster Nate Silver correctly predicted electoral outcomes for 49/50 states in 2008 and for all 50 states in 2012. Silver did this by combining data from 21 different polls, which vary in the degree to which they tend to lean towards either the Republican or Democratic side. Each of these polls included data from about 1000 likely voters – meaning that Silver was able to almost perfectly predict the pattern of votes of more than 125 million voters using data from only 21,000 people, along with other knowledge (such as how those states have voted in the past).

12: Sampling

Our goal in sampling is to determine the value of a statistic for an entire population of interest, using just a small subset of the population. We do this primarily to save time and effort – why go to the trouble of measuring every individual in the population when just a small sample is sufficient to accurately estimate the variable of interest?

In the election example, the population is all registered voters, and the sample is the set of 1000 individuals selected by the polling organization. The way in which we select the sample is critical to ensuring that the sample is representative of the entire population, which is a main goal of statistical sampling. It's easy to imagine a non-representative sample; if a pollster only called individuals whose names they had received from the local Democratic party, then it would be unlikely that the results of the poll would be representative of the population as a whole. In general, we would define a representative poll as being one in which every member of the population has an equal chance of being selected. When this fails, then we have to worry about whether the statistic that we compute on the sample is biased – that is, whether its value is systematically different from the population value (which we refer to as a parameter). Keep in mind that we generally don't know this population parameter, because if we did then we wouldn't need to sample! But we will use examples where we have access to the entire population, in order to explain some of the key ideas.

It's important to also distinguish between two different ways of sampling: with replacement versus without replacement. In sampling with replacement, after a member of the population has been sampled, they are put back into the pool so that they can potentially be sampled again.
In sampling without replacement, once a member has been sampled they are not eligible to be sampled again. It’s most common to use sampling without replacement, but there will be some contexts in which we will use sampling with replacement, as when we discuss a technique called bootstrapping in Chapter 14.
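The difference between the two approaches is easy to see with R's built-in `sample()` function (a small sketch, not from the original text):

``````pool <- c("Alice", "Bob", "Carol", "Dave")

# without replacement: each member can appear at most once
sample(pool, size = 4, replace = FALSE)

# with replacement: the same member can be drawn more than once
sample(pool, size = 4, replace = TRUE)``````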
Later in the course it will become essential to be able to characterize how variable our samples are, in order to make inferences about the sample statistics. For the mean, we do this using a quantity called the standard error of the mean (SEM), which one can think of as the standard deviation of the sampling distribution. To compute the standard error of the mean for our sample, we divide the estimated standard deviation by the square root of the sample size:

$SEM = \frac{\hat{\sigma}}{\sqrt{n}}$

Note that we have to be careful about computing SEM using the estimated standard deviation if our sample is small (less than about 30).

Because we have many samples from the NHANES population and we actually know the population SEM (which we compute by dividing the population standard deviation by the square root of the sample size), we can confirm that the SEM computed using the population parameter (1.44) is very close to the observed standard deviation of the means for the samples that we took from the NHANES dataset (1.44).

The formula for the standard error of the mean says that the quality of our measurement involves two quantities: the population variability, and the size of our sample. Because the sample size is the denominator in the formula for SEM, a larger sample size will yield a smaller SEM when holding the population variability constant. We have no control over the population variability, but we do have control over the sample size. Thus, if we wish to improve our sample statistics (by reducing their sampling variability) then we should use larger samples. However, the formula also tells us something very fundamental about statistical sampling – namely, that the utility of larger samples diminishes with the square root of the sample size. This means that doubling the sample size will not double the quality of the statistics; rather, it will improve it by a factor of $\sqrt{2}$. In Section 18.3 we will discuss statistical power, which is intimately tied to this idea.

12.04: The Central Limit Theorem

The Central Limit Theorem tells us that as sample sizes get larger, the sampling distribution of the mean will become normally distributed, even if the data within each sample are not normally distributed. We can see this in real data. Let's work with the variable AlcoholYear in the NHANES distribution, which is highly skewed, as shown in the left panel of Figure ??. This distribution is, for lack of a better word, funky – and definitely not normally distributed. Now let's look at the sampling distribution of the mean for this variable. Figure 12.2 shows the sampling distribution for this variable, which is obtained by repeatedly drawing samples of size 50 from the NHANES dataset and taking the mean. Despite the clear non-normality of the original data, the sampling distribution is remarkably close to the normal.

The Central Limit Theorem is important for statistics because it allows us to safely assume that the sampling distribution of the mean will be normal in most cases. This means that we can take advantage of statistical techniques that assume a normal distribution, as we will see in the next section.

12.06: Suggested Readings

• The Signal and the Noise: Why So Many Predictions Fail - But Some Don't, by Nate Silver
First we load the necessary libraries and set up the NHANES adult dataset

``````
library(tidyverse)
library(ggplot2)
library(knitr)
library(cowplot)

set.seed(123456)
opts_chunk$set(tidy.opts=list(width.cutoff=80))
options(tibble.width = 60)

# load the NHANES data library
library(NHANES)

# create a NHANES dataset without duplicated IDs
NHANES <- NHANES %>%
  distinct(ID, .keep_all = TRUE)

# create a dataset of only adults
NHANES_adult <- NHANES %>%
  filter(Age >= 18) %>%
  drop_na(Height)
``````

13: Sampling in R

Here we will repeatedly sample from the NHANES Height variable in order to obtain the sampling distribution of the mean.

``````
sampSize <- 50   # size of sample
nsamps <- 5000   # number of samples we will take

# set up variable to store all of the results
sampMeans <- tibble(meanHeight=rep(NA, nsamps))

# Loop through and repeatedly sample and compute the mean
for (i in 1:nsamps) {
  sampMeans$meanHeight[i] <- NHANES_adult %>%
    sample_n(sampSize) %>%
    summarize(meanHeight=mean(Height)) %>%
    pull(meanHeight)
}
``````

Now let’s plot the sampling distribution. We will also overlay the sampling distribution of the mean predicted on the basis of the population mean and standard deviation, to show that it properly describes the actual sampling distribution.

``````
# pipe the sampMeans data frame into ggplot
sampMeans %>%
  ggplot(aes(meanHeight)) +
  # create histogram using density rather than count
  geom_histogram(
    aes(y = ..density..),
    bins = 50,
    col = "gray",
    fill = "gray"
  ) +
  # add a vertical line for the population mean
  geom_vline(xintercept = mean(NHANES_adult$Height), size=1.5) +
  # add a label for the line
  annotate(
    "text",
    x = 169.6,
    y = .4,
    label = "Population mean",
    size=6
  ) +
  # label the x axis (NHANES heights are recorded in centimeters)
  labs(x = "Height (cm)") +
  # add normal based on population mean/sd
  stat_function(
    fun = dnorm, n = sampSize,
    args = list(
      mean = mean(NHANES_adult$Height),
      sd = sd(NHANES_adult$Height)/sqrt(sampSize)
    ),
    size = 1.5,
    color = "black",
    linetype='dotted'
  )
``````

13.02: Central Limit Theorem

The central limit theorem tells us that the sampling distribution of the mean becomes normal as the sample size grows. Let’s test this by sampling a clearly non-normal variable and looking at the normality of the results using a Q-Q plot. We saw in Figure @ref{fig:alcDist50} that the variable `AlcoholYear` is distributed in a very non-normal way. Let’s first look at the Q-Q plot for these data, to see what it looks like. We will use the `stat_qq()` function from `ggplot2` to create the plot for us.

``````
# prepare the data
NHANES_cleanAlc <- NHANES %>%
  drop_na(AlcoholYear)

ggplot(NHANES_cleanAlc, aes(sample=AlcoholYear)) +
  stat_qq() +
  # add the line for x=y
  stat_qq_line()
``````

We can see from this figure that the distribution is highly non-normal, as the Q-Q plot diverges substantially from the unit line. Now let’s repeatedly sample and compute the mean, and look at the resulting Q-Q plot. We will take samples of various sizes to see the effect of sample size. We will use a function from the `dplyr` package called `do()`, which can run a large number of analyses at once.
``````
set.seed(12345)

sampSizes <- c(16, 32, 64, 128)   # sizes of samples
nsamps <- 1000                    # number of samples we will take

# create the data frame that specifies the analyses
input_df <- tibble(sampSize=rep(sampSizes, nsamps),
                   id=seq(nsamps*length(sampSizes)))

# create a function that samples and returns the mean
# so that we can loop over it using do()
get_sample_mean <- function(sampSize){
  meanAlcYear <- NHANES_cleanAlc %>%
    sample_n(sampSize) %>%
    summarize(meanAlcoholYear = mean(AlcoholYear)) %>%
    pull(meanAlcoholYear)
  return(tibble(meanAlcYear = meanAlcYear, sampSize=sampSize))
}

# loop through sample sizes
# we group by id so that each id will be run separately by do()
all_results = input_df %>%
  group_by(id) %>%
  # "." refers to the data frame being passed in by do()
  do(get_sample_mean(.$sampSize))
``````

Now let’s create separate Q-Q plots for the different sample sizes.

``````
# create empty list to store plots
qqplots = list()

for (N in sampSizes){
  sample_results <- all_results %>%
    filter(sampSize==N)

  qqplots[[toString(N)]] <- ggplot(sample_results, aes(sample=meanAlcYear)) +
    stat_qq() +
    # add the line for x=y
    stat_qq_line(fullrange = TRUE) +
    ggtitle(sprintf('N = %d', N)) +
    xlim(-4, 4)
}

plot_grid(plotlist = qqplots)
``````

This shows that the results become more normally distributed (i.e. following the straight line) as the samples get larger.

13.03: Confidence Intervals (Section @refconfidence-intervals)

Remember that confidence intervals are intervals that will contain the population parameter a certain proportion of the time. In this example we will walk through the simulation that was presented in Section @ref{confidence-intervals} to show that this actually works properly. Here we will use a function called `do()` that lets us run the same analysis many times.
Learning Objectives

• Describe the concept of a Monte Carlo simulation.
• Describe the meaning of randomness in statistics
• Obtain random numbers from the uniform and normal distributions
• Describe the concept of the bootstrap

The use of computer simulations has become an essential aspect of modern statistics. For example, one of the most important books in practical computer science, called Numerical Recipes, says the following: “Offered the choice between mastery of a five-foot shelf of analytical statistics books and middling ability at performing statistical Monte Carlo simulations, we would surely choose to have the latter skill.” In this chapter we will introduce the concept of a Monte Carlo simulation and discuss how it can be used to perform statistical analyses.

14: Resampling and Simulation

The concept of Monte Carlo simulation was devised by the mathematicians Stan Ulam and Nicholas Metropolis, who were working to develop an atomic weapon for the US as part of the Manhattan Project. They needed to compute the average distance that a neutron would travel in a substance before it collided with an atomic nucleus, but they could not compute this using standard mathematics. Ulam realized that these computations could be simulated using random numbers, just like a casino game. In a casino game such as a roulette wheel, numbers are generated at random; to estimate the probability of a specific outcome, you could play the game hundreds of times. Ulam’s uncle had gambled at the Monte Carlo casino in Monaco, which is apparently where the name came from for this new technique. There are four steps to performing a Monte Carlo simulation:

1. Define a domain of possible values
2. Generate random numbers within that domain from a probability distribution
3. Perform a computation using the random numbers
4. Combine the results across many repetitions

As an example, let’s say that I want to figure out how much time to allow for an in-class quiz. Say that we know that the distribution of quiz completion times is normal, with mean of 5 minutes and standard deviation of 1 minute. Given this, how long does the test period need to be so that we expect all students to finish the exam 99% of the time? There are two ways to solve this problem. The first is to calculate the answer using a mathematical theory known as the statistics of extreme values. However, this involves complicated mathematics. Alternatively, we could use Monte Carlo simulation. To do this, we need to generate random samples from a normal distribution.

14.02: Randomness in Statistics

The term “random” is often used colloquially to refer to things that are bizarre or unexpected, but in statistics the term has a very specific meaning: A process is random if it is unpredictable. For example, if I flip a fair coin 10 times, the value of the outcome on one flip does not provide me with any information that lets me predict the outcome on the next flip. It’s important to note that the fact that something is unpredictable doesn’t necessarily mean that it is not deterministic. For example, when we flip a coin, the outcome of the flip is determined by the laws of physics; if we knew all of the conditions in enough detail, we should be able to predict the outcome of the flip. However, many factors combine to make the outcome of the coin flip unpredictable in practice. Psychologists have shown that humans actually have a fairly bad sense of randomness. First, we tend to see patterns when they don’t exist.
In the extreme, this leads to the phenomenon of pareidolia, in which people will perceive familiar objects within random patterns (such as perceiving a cloud as a human face or seeing the Virgin Mary in a piece of toast). Second, humans tend to think of random processes as self-correcting, which leads us to expect that we are “due for a win” after losing many rounds in a game of chance, a phenomenon known as the “gambler’s fallacy”.
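As a small illustration of the point about self-correction, the following sketch (not from the book) simulates a long run of fair coin flips and checks whether heads becomes more likely after a run of three tails; it does not.

``````
# simulate fair coin flips and look at the outcome immediately following
# every run of three tails
set.seed(1)
nFlips <- 1000000
flips <- rbinom(nFlips, size = 1, prob = 0.5)   # 1 = heads, 0 = tails

# positions of the flip that follows three tails in a row
afterThreeTails <- which(
  flips[1:(nFlips - 3)] == 0 &
  flips[2:(nFlips - 2)] == 0 &
  flips[3:(nFlips - 1)] == 0
) + 3

mean(flips[afterThreeTails])   # stays close to 0.5: the process is not "self-correcting"
``````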
Running a Monte Carlo simulation requires that we generate random numbers. Generating truly random numbers (i.e. numbers that are completely unpredictable) is only possible through physical processes, such as the decay of atoms or the rolling of dice, which are difficult to obtain and/or too slow to be useful for computer simulation (though they can be obtained from the NIST Randomness Beacon). In general, instead of truly random numbers we use pseudo-random numbers generated using a computer algorithm; these numbers will seem random in the sense that they are difficult to predict, but the series of numbers will actually repeat at some point. For example, the random number generator used in R will repeat after $2^{19937} - 1$ numbers. That’s far more than the number of seconds in the history of the universe, and we generally think that this is fine for most purposes in statistical analysis. In R, there is a function to generate random numbers for each of the major probability distributions, such as:

• `runif()` - uniform distribution (all values between 0 and 1 equally likely)
• `rnorm()` - normal distribution
• `rbinom()` - binomial distribution (e.g. the number of heads in a set of coin flips)

Figure 14.1 shows examples of numbers generated using the `runif()` and `rnorm()` functions (see the code in Section 15.01). You can also generate random numbers for any distribution if you have a quantile function for the distribution. This is the inverse of the cumulative distribution function; instead of identifying the cumulative probabilities for a set of values, the quantile function identifies the values for a set of cumulative probabilities. Using the quantile function, we can generate random numbers from a uniform distribution, and then map those into the distribution of interest via its quantile function. By default, R will generate a different set of random numbers every time you run one of the random number generator functions described above. However, it is also possible to generate exactly the same set of random numbers, by setting what is called the random seed to a specific value. We will do this in many of the examples in this book, in order to make sure that the examples are reproducible. If we run the `rnorm()` function twice, it will give us different sets of pseudorandom numbers each time:

``print(rnorm(n = 5))``
``## [1] 1.48 0.18 0.21 -0.15 -1.72``
``print(rnorm(n = 5))``
``## [1] -0.691 -2.231 0.391 0.029 -0.647``

However, if we set the random seed to the same value each time using the `set.seed()` function, then it will give us the same series of pseudorandom numbers each time:

``````
set.seed(12345)
print(rnorm(n = 5))
``````
``## [1] 0.59 0.71 -0.11 -0.45 0.61``
``````
set.seed(12345)
print(rnorm(n = 5))
``````
``## [1] 0.59 0.71 -0.11 -0.45 0.61``

14.04: Using Monte Carlo Simulation

Let’s go back to our example of exam finishing times. Let’s say that I administer three quizzes and record the finishing times for each student for each exam, which might look like the distributions presented in Figure 14.2. However, what we really want to know is not what the distribution of finishing times looks like, but rather what the distribution of the longest finishing time for each quiz looks like. To do this, we can simulate the finishing time for a quiz, using the assumption that the finishing times are distributed normally, as stated above; for each of these simulated quizzes, we then record the longest finishing time.
We repeat this simulation a large number of times (5000 should be enough) and record the distribution of maximum finishing times, which is shown in Figure 14.3. This shows that the 99th percentile of the maximum finishing time distribution falls at 8.81, meaning that if we were to give that much time for the quiz, then everyone should finish 99% of the time. It’s always important to remember that our assumptions matter – if they are wrong, then the results of the simulation are useless. In this case, we assumed that the finishing time distribution was normally distributed with a particular mean and standard deviation; if these assumptions are incorrect (and they almost certainly are), then the true answer could be very different.
So far we have used simulation to demonstrate statistical principles, but we can also use simulation to answer real statistical questions. In this section we will introduce a concept known as the bootstrap that lets us use simulation to quantify our uncertainty about statistical estimates. Later in the course, we will see other examples of how simulation can often be used to answer statistical questions, especially when theoretical statistical methods are not available or when their assumptions are too difficult to meet.

14.5.1 Computing the bootstrap

In the section above, we used our knowledge of the sampling distribution of the mean to compute the standard error of the mean and confidence intervals. But what if we can’t assume that the estimates are normally distributed, or we don’t know their distribution? The idea of the bootstrap is to use the data themselves to estimate an answer. The name comes from the idea of pulling one’s self up by one’s own bootstraps, expressing the idea that we don’t have any external source of leverage so we have to rely upon the data themselves. The bootstrap method was conceived by Bradley Efron of the Stanford Department of Statistics, who is one of the world’s most influential statisticians. The idea behind the bootstrap is that we repeatedly sample from the actual dataset; importantly, we sample with replacement, such that the same data point will often end up being represented multiple times within one of the samples. We then compute our statistic of interest on each of the bootstrap samples, and use the distribution of those estimates as a surrogate for the sampling distribution of the statistic. Let’s start by using the bootstrap to estimate the sampling distribution of the mean, so that we can compare the result to the standard error of the mean (SEM) that we discussed earlier. Figure 14.4 shows that the distribution of means across bootstrap samples is fairly close to the theoretical estimate based on the assumption of normality. We can also use the bootstrap samples to compute a confidence interval for the mean, simply by computing the quantiles of interest from the distribution of bootstrap samples.

Table 14.1: Confidence limits for normal distribution and bootstrap methods

type        2.5%   97.5%
Normal      165    172
Bootstrap   165    172

We would not usually employ the bootstrap to compute confidence intervals for the mean (since we can generally assume that the normal distribution is appropriate for the sampling distribution of the mean, as long as our sample is large enough), but this example shows how the method gives us roughly the same result as the standard method based on the normal distribution. The bootstrap would more often be used to generate standard errors for estimates of other statistics where we know or suspect that the normal distribution is not appropriate.

14.06: Suggested Readings

• Computer Age Statistical Inference: Algorithms, Evidence and Data Science, by Bradley Efron and Trevor Hastie

15.01: Generating Random Samples (Section @refgenerating-random-numbers)

Here we will generate random samples from a number of different distributions and plot their histograms.
``````
nsamples <- 10000
nhistbins <- 100

# uniform distribution
p1 <- tibble(
  x = runif(nsamples)
) %>%
  ggplot(aes(x)) +
  geom_histogram(bins = nhistbins) +
  labs(title = "Uniform")

# normal distribution
p2 <- tibble(
  x = rnorm(nsamples)
) %>%
  ggplot(aes(x)) +
  geom_histogram(bins = nhistbins) +
  labs(title = "Normal")

# Chi-squared distribution
p3 <- tibble(
  x = rchisq(nsamples, df=1)
) %>%
  ggplot(aes(x)) +
  geom_histogram(bins = nhistbins) +
  labs(title = "Chi-squared")

# Binomial distribution
p4 <- tibble(
  x = rbinom(nsamples, 20, 0.25)
) %>%
  ggplot(aes(x)) +
  geom_histogram(bins = nhistbins) +
  labs(title = "Binomial (p=0.25, 20 trials)")

plot_grid(p1, p2, p3, p4, ncol = 2)
``````

15.02: Simulating the Maximum Finishing Time

Let’s simulate the finishing times for 5000 quizzes of 150 students each, collecting the maximum value from each simulated quiz, and then plot the distribution of maxima.

``````
# sample maximum value 5000 times and compute 99th percentile
nRuns <- 5000
sampSize <- 150

sampleMax <- function(sampSize = 150) {
  samp <- rnorm(sampSize, mean = 5, sd = 1)
  return(tibble(max=max(samp)))
}

input_df <- tibble(id=seq(nRuns)) %>%
  group_by(id)

maxTime <- input_df %>% do(sampleMax())

cutoff <- quantile(maxTime$max, 0.99)

ggplot(maxTime,aes(max)) +
  geom_histogram(bins = 100) +
  geom_vline(xintercept = cutoff, color = "red")
``````
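As a companion to Section 14.05, here is a minimal sketch of the bootstrap for the mean; this is an illustration rather than the book’s own code, and it assumes the `NHANES_adult` data frame created in the Chapter 13 setup is available.

``````
# draw an original sample of adults, then resample it with replacement many
# times and use the bootstrap means to characterize sampling variability
library(tidyverse)
set.seed(123456)

sampSize <- 250
nBoot <- 1000

origSample <- NHANES_adult %>%
  sample_n(sampSize)

bootMeans <- replicate(nBoot, {
  origSample %>%
    sample_n(sampSize, replace = TRUE) %>%   # resample with replacement
    summarize(meanHeight = mean(Height)) %>%
    pull(meanHeight)
})

sd(bootMeans)                         # bootstrap estimate of the standard error
quantile(bootMeans, c(0.025, 0.975))  # percentile bootstrap 95% confidence interval
``````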
Learning Objectives

• Identify the components of a hypothesis test, including the parameter of interest, the null and alternative hypotheses, and the test statistic.
• Describe the proper interpretations of a p-value as well as common misinterpretations
• Distinguish between the two types of error in hypothesis testing, and the factors that determine them.
• Describe how resampling can be used to compute a p-value.
• Define the concept of statistical power, and compute statistical power for a given statistical test.
• Describe the main criticisms of null hypothesis statistical testing

In the first chapter we discussed the three major goals of statistics:

• Describe
• Decide
• Predict

In this chapter we will introduce the ideas behind the use of statistics to make decisions – in particular, decisions about whether a particular hypothesis is supported by the data.

16: Hypothesis Testing

The specific type of hypothesis testing that we will discuss is known (for reasons that will become clear) as null hypothesis statistical testing (NHST). If you pick up almost any scientific or biomedical research publication, you will see NHST being used to test hypotheses, and in their introductory psychology textbook, Gerrig & Zimbardo (2002) referred to NHST as the “backbone of psychological research”. Thus, learning how to use and interpret the results from hypothesis testing is essential to understanding the results from many fields of research. It is also important for you to know, however, that NHST is deeply flawed, and that many statisticians and researchers (including myself) think that it has been the cause of serious problems in science, which we will discuss in Chapter 32. For more than 50 years, there have been calls to abandon NHST in favor of other approaches (like those that we will discuss in the following chapters):

• “The test of statistical significance in psychological research may be taken as an instance of a kind of essential mindlessness in the conduct of research” (Bakan, 1966)
• Hypothesis testing is “a wrongheaded view about what constitutes scientific progress” (Luce, 1988)

NHST is also widely misunderstood, largely because it violates our intuitions about how statistical hypothesis testing should work. Let’s look at an example to see how it works.

16.02: Null Hypothesis Statistical Testing- An Example

There is great interest in the use of body-worn cameras by police officers, which are thought to reduce the use of force and improve officer behavior. However, in order to establish this we need experimental evidence, and it has become increasingly common for governments to use randomized controlled trials to test such ideas. A randomized controlled trial of the effectiveness of body-worn cameras was performed by the Washington, DC government and DC Metropolitan Police Department in 2015/2016 in order to test the hypothesis that body-worn cameras are effective. Officers were randomly assigned to wear a body-worn camera or not, and their behavior was then tracked over time to determine whether the cameras resulted in less use of force and fewer civilian complaints about officer behavior. Before we get to the results, let’s ask how you would think the statistical analysis might work. Let’s say we want to specifically test the hypothesis of whether the use of force is decreased by the wearing of cameras. The randomized controlled trial provides us with the data to test the hypothesis – namely, the rates of use of force by officers assigned to either the camera or control groups.
The next obvious step is to look at the data and determine whether they provide convincing evidence for or against this hypothesis. That is: What is the likelihood that body-worn cameras reduce the use of force, given the data and everything else we know? It turns out that this is not how null hypothesis testing works. Instead, we first take our hypothesis of interest (i.e. whether body-worn cameras reduce use of force), and flip it on its head, creating a null hypothesis – in this case, the null hypothesis would be that cameras do not reduce use of force. Importantly, we then assume that the null hypothesis is true. We then look at the data, and determine whether the data are sufficiently unlikely under the null hypothesis that we can reject the null in favor of the alternative hypothesis, which is our hypothesis of interest. If there is not sufficient evidence to reject the null, then we say that we “failed to reject” the null. Understanding some of the concepts of NHST, particularly the notorious “p-value”, is invariably challenging the first time one encounters them, because these ideas are so counter-intuitive. As we will see later, there are other approaches that provide a much more intuitive way to address hypothesis testing (but have their own complexities). However, before we get to those, it’s important for you to have a deep understanding of how hypothesis testing works, because it’s clearly not going to go away any time soon.
We can break the process of null hypothesis testing down into a number of steps:

1. Formulate a hypothesis that embodies our prediction (before seeing the data)
2. Collect some data relevant to the hypothesis
3. Specify null and alternative hypotheses
4. Fit a model to the data that represents the alternative hypothesis and compute a test statistic
5. Compute the probability of the observed value of that statistic assuming that the null hypothesis is true
6. Assess the “statistical significance” of the result

For a hands-on example, let’s use the NHANES data to ask the following question: Is physical activity related to body mass index? In the NHANES dataset, participants were asked whether they engage regularly in moderate or vigorous-intensity sports, fitness or recreational activities (stored in the variable `PhysActive`). The researchers also measured height and weight and used them to compute the Body Mass Index (BMI): $BMI = \frac{weight(kg)}{height(m)^2}$

16.3.1 Step 1: Formulate a hypothesis of interest

For step 1, we hypothesize that BMI is greater for people who do not engage in physical activity, compared to those who do.

16.3.2 Step 2: Collect some data

For step 2, we collect some data. In this case, we will sample 250 individuals from the NHANES dataset. Figure 16.1 shows an example of such a sample, with BMI shown separately for active and inactive individuals.

Table 16.1: Summary of BMI data for active versus inactive individuals

PhysActive   N     mean   sd
No           131   30     9.0
Yes          119   27     5.2

16.3.3 Step 3: Specify the null and alternative hypotheses

For step 3, we need to specify our null hypothesis (which we call $H_0$) and our alternative hypothesis (which we call $H_A$). $H_0$ is the baseline against which we test our hypothesis of interest: that is, what would we expect the data to look like if there was no effect? The null hypothesis always involves some kind of equality (=, $\le$, or $\ge$). $H_A$ describes what we expect if there actually is an effect. The alternative hypothesis always involves some kind of inequality ($\ne$, >, or <). Importantly, null hypothesis testing operates under the assumption that the null hypothesis is true unless the evidence shows otherwise. We also have to decide whether to use directional or non-directional hypotheses. A non-directional hypothesis simply predicts that there will be a difference, without predicting which direction it will go. For the BMI/activity example, a non-directional null hypothesis would be: $H_0: BMI_{active} = BMI_{inactive}$ and the corresponding non-directional alternative hypothesis would be: $H_A: BMI_{active} \neq BMI_{inactive}$ A directional hypothesis, on the other hand, predicts which direction the difference would go. For example, we have strong prior knowledge to predict that people who engage in physical activity should weigh less than those who do not, so we would propose the following directional null hypothesis: $H_0: BMI_{active} \ge BMI_{inactive}$ and directional alternative: $H_A: BMI_{active} < BMI_{inactive}$ As we will see later, testing a non-directional hypothesis is more conservative, so this is generally to be preferred unless there is a strong a priori reason to hypothesize an effect in a particular direction. Any directional hypotheses should be specified prior to looking at the data!

16.3.4 Step 4: Fit a model to the data and compute a test statistic

For step 4, we want to use the data to compute a statistic that will ultimately let us decide whether the null hypothesis is rejected or not.
To do this, the model needs to quantify the amount of evidence in favor of the alternative hypothesis, relative to the variability in the data. Thus we can think of the test statistic as providing a measure of the size of the effect compared to the variability in the data. In general, this test statistic will have a probability distribution associated with it, because that allows us to determine how likely our observed value of the statistic is under the null hypothesis. For the BMI example, we need a test statistic that allows us to test for a difference between two means, since the hypotheses are stated in terms of mean BMI for each group. One statistic that is often used to compare two means is the t-statistic, first developed by the statistician William Sealy Gossett, who worked for the Guinness Brewery in Dublin and wrote under the pen name “Student” - hence, it is often called “Student’s t-statistic”. The t-statistic is appropriate for comparing the means of two groups when the sample sizes are relatively small and the population standard deviation is unknown. The t-statistic for comparison of two independent groups is computed as: $t = \frac{\bar{X_1} - \bar{X_2}}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}}$ where $\bar{X}_1$ and $\bar{X}_2$ are the means of the two groups, $S_1^2$ and $S_2^2$ are the estimated variances of the groups, and $n_1$ and $n_2$ are the sizes of the two groups. Note that the denominator is basically an average of the standard error of the mean for the two samples. Thus, one can view the t-statistic as a way of quantifying how large the difference between groups is in relation to the sampling variability of the means that are being compared. The t-statistic is distributed according to a probability distribution known as a t distribution. The t distribution looks quite similar to a normal distribution, but it differs depending on the number of degrees of freedom, which for this example is the number of observations minus 2, since we have computed two means and thus given up two degrees of freedom. When the degrees of freedom are large (say 1000), then the t distribution looks essentially like the normal distribution, but when they are small then the t distribution has longer tails than the normal (see Figure 16.2).

16.3.5 Step 5: Determine the probability of the data under the null hypothesis

This is the step where NHST starts to violate our intuition – rather than determining the likelihood that the null hypothesis is true given the data, we instead determine the likelihood of the data under the null hypothesis - because we started out by assuming that the null hypothesis is true! To do this, we need to know the probability distribution for the statistic under the null hypothesis, so that we can ask how likely the data are under that distribution. Before we move to our BMI data, let’s start with some simpler examples.

16.3.5.1 Randomization: A very simple example

Let’s say that we wish to determine whether a coin is fair. To collect data, we flip the coin 100 times, and we count 70 heads. In this example, $H_0: P(heads)=0.5$ and $H_A: P(heads) \neq 0.5$, and our test statistic is simply the number of heads that we counted. The question that we then want to ask is: How likely is it that we would observe 70 heads if the true probability of heads is 0.5? We can imagine that this might happen very occasionally just by chance, but it doesn’t seem very likely.
To quantify this probability, we can use the binomial distribution: $P(X \le k) = \sum_{i=0}^{k} \binom{N}{i} p^i (1-p)^{(N-i)}$ This equation will tell us the likelihood of a certain number of heads or fewer, given a particular probability of heads. However, what we really want to know is the probability of a certain number or more, which we can obtain by subtracting from one, based on the rules of probability: $P(X \ge k) = 1 - P(X < k)$ We can compute the probability for our example using the pbinom() function. The probability of 69 or fewer heads given P(heads)=0.5 is 0.999961, so the probability of 70 or more heads is simply one minus that value (0.000039) This computation shows us that the likelihood of getting 70 heads if the coin is indeed fair is very small. Now, what if we didn’t have the pbinom() function to tell us the probability of that number of heads? We could instead determine it by simulation – we repeatedly flip a coin 100 times using a true probability of 0.5, and then compute the distribution of the number of heads across those simulation runs. Figure 16.3 shows the result from this simulation. Here we can see that the probability computed via simulation (0.000030) is very close to the theoretical probability (.00004). Let’s do the analogous computation for our BMI example. First we compute the t statistic using the values from our sample that we calculated above, where we find that (t = 3.86). The question that we then want to ask is: What is the likelihood that we would find a t statistic of this size, if the true difference between groups is zero or less (i.e. the directional null hypothesis)? We can use the t distribution to determine this probability. Our sample size is 250, so the appropriate t distribution has 248 degrees of freedom because we lose one for each of the two means that we computed. We can use the pt() function in R to determine the probability of finding a value of the t-statistic greater than or equal to our observed value. Note that we want to know the probability of a value greater than our observed value, but by default pt() gives us the probability of a value less than the one that we provide it, so we have to tell it explicitly to provide us with the “upper tail” probability (by setting lower.tail = FALSE). We find that (p(t > 3.86, df = 248) = 0.000), which tells us that our observed t-statistic value of 3.86 is relatively unlikely if the null hypothesis really is true. In this case, we used a directional hypothesis, so we only had to look at one end of the null distribution. If we wanted to test a non-directional hypothesis, then we would need to be able to identify how unexpected the size of the effect is, regardless of its direction. In the context of the t-test, this means that we need to know how likely it is that the statistic would be as extreme in either the positive or negative direction. To do this, we multiply the observed t value by -1, since the t distribution is centered around zero, and then add together the two tail probabilities to get a two-tailed p-value: (p(t > 3.86 or t < -3.86, df = 248) = 0.000). Here we see that the p value for the two-tailed test is twice as large as that for the one-tailed test, which reflects the fact that an extreme value is less surprising since it could have occurred in either direction. How do you choose whether to use a one-tailed versus a two-tailed test?
The two-tailed test is always going to be more conservative, so it’s always a good bet to use that one, unless you had a very strong prior reason for using a one-tailed test. In that case, you should have written down the hypothesis before you ever looked at the data. In Chapter 32 we will discuss the idea of pre-registration of hypotheses, which formalizes the idea of writing down your hypotheses before you ever see the actual data. You should never make a decision about how to perform a hypothesis test once you have looked at the data, as this can introduce serious bias into the results.

16.3.5.2 Computing p-values using randomization

So far we have seen how we can use the t-distribution to compute the probability of the data under the null hypothesis, but we can also do this using simulation. The basic idea is that we generate simulated data like those that we would expect under the null hypothesis, and then ask how extreme the observed data are in comparison to those simulated data. The key question is: How can we generate data for which the null hypothesis is true? The general answer is that we can randomly rearrange the data in a particular way that makes the data look like they would if the null was really true. This is similar to the idea of bootstrapping, in the sense that it uses our own data to come up with an answer, but it does it in a different way.

16.3.5.3 Randomization: a simple example

Let’s start with a simple example. Let’s say that we want to compare the mean squatting ability of football players with cross-country runners, with $H_0: \mu_{FB} \le \mu_{XC}$ and $H_A: \mu_{FB} > \mu_{XC}$. We measure the maximum squatting ability of 5 football players and 5 cross-country runners (which we will generate randomly, assuming that $\mu_{FB} = 300$, $\mu_{XC} = 140$, and $\sigma = 30$).

Table 16.2: Squatting data for the two groups

group   squat
FB      335
FB      350
FB      230
FB      290
FB      325
XC      115
XC      115
XC      170
XC      175
XC      215

Table 16.3: Squatting data after randomly scrambling group labels

squat   scrambledGroup
335     FB
350     FB
230     XC
290     FB
325     FB
115     XC
115     XC
170     FB
175     XC
215     XC

From the plot in Figure 16.4 it’s clear that there is a large difference between the two groups. We can do a standard t-test to test our hypothesis, using the t.test() command in R, which gives the following result:

##
## 	Two Sample t-test
##
## data:  squat by group
## t = 5, df = 8, p-value = 4e-04
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
##  95 Inf
## sample estimates:
## mean in group FB mean in group XC
##              306              158

If we look at the p-value reported here, we see that the likelihood of such a difference under the null hypothesis is very small, using the t distribution to define the null. Now let’s see how we could answer the same question using randomization. The basic idea is that if the null hypothesis of no difference between groups is true, then it shouldn’t matter which group one comes from (football players versus cross-country runners) – thus, to create data that are like our actual data but also conform to the null hypothesis, we can randomly reorder the group labels for the individuals in the dataset, and then recompute the difference between the groups. The results of such a shuffle are shown in Figure ??. After scrambling the labels, we see that the two groups are now much more similar, and in fact the cross-country group now has a slightly higher mean.
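Here is a minimal sketch (not the book’s own code) of what a single shuffle looks like in R, using the values from Table 16.2; the t statistic for each shuffled dataset could then be computed with t.test() just as above.

``````
# recreate the squatting data from Table 16.2 and perform a single random
# shuffle of the group labels
library(tidyverse)
set.seed(1)

squatDf <- tibble(
  group = rep(c("FB", "XC"), each = 5),
  squat = c(335, 350, 230, 290, 325, 115, 115, 170, 175, 215)
)

shuffledDf <- squatDf %>%
  mutate(scrambledGroup = sample(group))   # permute the labels, breaking any real association

# difference in group means after shuffling
shuffledDf %>%
  group_by(scrambledGroup) %>%
  summarize(meanSquat = mean(squat))
``````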
Now let’s do that 10000 times and store the t statistic for each iteration; this may take a moment to complete. Figure 16.5 shows the histogram of the t-values across all of the random shuffles. As expected under the null hypothesis, this distribution is centered at zero (the mean of the distribution is -0.016). From the figure we can also see that the distribution of t values after shuffling roughly follows the theoretical t distribution under the null hypothesis (with mean=0), showing that randomization worked to generate null data. We can compute the p-value from the randomized data by measuring how many of the shuffled values are at least as extreme as the observed value: p(t > 5.14, df = 8) using randomization = 0.00380. This p-value is very similar to the p-value that we obtained using the t distribution, and both are quite extreme, suggesting that the observed data are very unlikely to have arisen if the null hypothesis is true - and in this case we know that it’s not true, because we generated the data.

16.3.5.3.1 Randomization: BMI/activity example

Now let’s use randomization to compute the p-value for the BMI/activity example. In this case, we will randomly shuffle the `PhysActive` variable and compute the difference between groups after each shuffle, and then compare our observed t statistic to the distribution of t statistics from the shuffled datasets. Figure 16.6 shows the distribution of t values from the shuffled samples, and we can also compute the probability of finding a value as large or larger than the observed value. The p-value obtained from randomization (0.0000) is very similar to the one obtained using the t distribution (0.0001). The advantage of the randomization test is that it doesn’t require that we assume that the data from each of the groups are normally distributed, though the t-test is generally quite robust to violations of that assumption. In addition, the randomization test can allow us to compute p-values for statistics when we don’t have a theoretical distribution like we do for the t-test. We do have to make one main assumption when we use the randomization test, which we refer to as exchangeability. This means that all of the observations are distributed in the same way, such that we can interchange them without changing the overall distribution. The main place where this can break down is when there are related observations in the data; for example, if we had data from individuals in 4 different families, then we couldn’t assume that individuals were exchangeable, because siblings would be closer to each other than they are to individuals from other families. In general, if the data were obtained by random sampling, then the assumption of exchangeability should hold.

16.3.6 Step 6: Assess the “statistical significance” of the result

The next step is to determine whether the p-value that results from the previous step is small enough that we are willing to reject the null hypothesis and conclude instead that the alternative is true. How much evidence do we require? This is one of the most controversial questions in statistics, in part because it requires a subjective judgment – there is no “correct” answer. Historically, the most common answer to this question has been that we should reject the null hypothesis if the p-value is less than 0.05.
This comes from the writings of Ronald Fisher, who has been referred to as “the single most important figure in 20th century statistics” (Efron 1998): “If P is between .1 and .9 there is certainly no reason to suspect the hypothesis tested. If it is below .02 it is strongly indicated that the hypothesis fails to account for the whole of the facts. We shall not often be astray if we draw a conventional line at .05 … it is convenient to draw the line at about the level at which we can say: Either there is something in the treatment, or a coincidence has occurred such as does not occur more than once in twenty trials” (Fisher 1925) However, Fisher never intended $p < 0.05$ to be a fixed rule: “no scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas” (Fisher 1956) Instead, it is likely that it became a ritual due to the reliance upon tables of p-values that were used before computing made it easy to compute p values for arbitrary values of a statistic. All of the tables had an entry for 0.05, making it easy to determine whether one’s statistic exceeded the value needed to reach that level of significance. The choice of statistical thresholds remains deeply controversial, and recently (Benjamin et al., 2018) it has been proposed that the standard threshold be changed from .05 to .005, making it substantially more stringent and thus more difficult to reject the null hypothesis. In large part this move is due to growing concerns that the evidence obtained from a significant result at $p < .05$ is relatively weak; we will return to this issue in Chapter 32.

16.3.6.1 Hypothesis testing as decision-making: The Neyman-Pearson approach

Whereas Fisher thought that the p-value could provide evidence regarding a specific hypothesis, the statisticians Jerzy Neyman and Egon Pearson disagreed vehemently. Instead, they proposed that we think of hypothesis testing in terms of its error rate in the long run: “no test based upon a theory of probability can by itself provide any valuable evidence of the truth or falsehood of a hypothesis. But we may look at the purpose of tests from another viewpoint. Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not often be wrong” (Neyman and Pearson 1933) That is: We can’t know which specific decisions are right or wrong, but if we follow the rules, we can at least know how often our decisions will be wrong on average. To understand the decision making framework that Neyman and Pearson developed, we first need to discuss statistical decision making in terms of the kinds of outcomes that can occur. There are two possible states of reality ($H_0$ is true, or $H_0$ is false), and two possible decisions (reject $H_0$, or fail to reject $H_0$).
There are two ways in which we can make a correct decision:

• We can decide to reject $H_0$ when it is false (in the language of decision theory, we call this a hit)
• We can fail to reject $H_0$ when it is true (we call this a correct rejection)

There are also two kinds of errors we can make:

• We can decide to reject $H_0$ when it is actually true (we call this a false alarm, or Type I error)
• We can fail to reject $H_0$ when it is actually false (we call this a miss, or Type II error)

Neyman and Pearson coined two terms to describe the probability of these two types of errors in the long run:

• P(Type I error) = $\alpha$
• P(Type II error) = $\beta$

That is, if we set $\alpha = .05$, then in the long run we should make a Type I error 5% of the time. In Section 18.3 we will discuss statistical power, which is the complement of the Type II error.

16.3.7 What does a significant result mean?

There is a great deal of confusion about what p-values actually mean (Gigerenzer, 2004). Let’s say that we do an experiment comparing the means between conditions, and we find a difference with a p-value of .01. There are a number of possible interpretations.

16.3.7.1 Does it mean that the probability of the null hypothesis being true is .01?

No. Remember that in null hypothesis testing, the p-value is the probability of the data given the null hypothesis ($P(data|H_0)$). It does not warrant conclusions about the probability of the null hypothesis given the data ($P(H_0|data)$). We will return to this question when we discuss Bayesian inference in a later chapter, as Bayes theorem lets us invert the conditional probability in a way that allows us to determine the latter probability.

16.3.7.2 Does it mean that the probability that you are making the wrong decision is .01?

No. This would be $P(H_0|data)$, but remember as above that p-values are probabilities of data under $H_0$, not probabilities of hypotheses.

16.3.7.3 Does it mean that if you ran the study again, you would obtain the same result 99% of the time?

No. The p-value is a statement about the likelihood of a particular dataset under the null; it does not allow us to make inferences about the likelihood of future events such as replication.

16.3.7.4 Does it mean that you have found a meaningful effect?

No. There is an important distinction between statistical significance and practical significance. As an example, let’s say that we performed a randomized controlled trial to examine the effect of a particular diet on body weight, and we find a statistically significant effect at p<.05. What this doesn’t tell us is how much weight was actually lost, which we refer to as the effect size (to be discussed in more detail in Chapter 18). If we think about a study of weight loss, then we probably don’t think that the loss of ten ounces (i.e. the weight of a bag of potato chips) is practically significant. Let’s look at our ability to detect a significant difference of 1 ounce as the sample size increases. Figure 16.7 shows how the proportion of significant results increases as the sample size increases, such that with a very large sample size (about 262,000 total subjects), we will find a significant result in more than 90% of studies when there is a 1 ounce weight loss. While these are statistically significant, most physicians would not consider a weight loss of one ounce to be practically or clinically significant. We will explore this relationship in more detail when we return to the concept of statistical power in Section 18.3, but it should already be clear from this example that statistical significance is not necessarily indicative of practical significance.
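As a companion to Step 5 above, here is a minimal sketch of how the one-tailed and two-tailed p-values described there can be computed in R; the t value of 3.86 and the 248 degrees of freedom are taken from the BMI example.

``````
tStat <- 3.86   # observed t statistic from the BMI example
dfT <- 248      # degrees of freedom

# one-tailed p-value: probability of a t value this large or larger under the null
pt(tStat, df = dfT, lower.tail = FALSE)

# two-tailed p-value: probability of a value this extreme in either direction
2 * pt(abs(tStat), df = dfT, lower.tail = FALSE)
``````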
So far we have discussed examples where we are interested in testing a single statistical hypothesis, and this is consistent with traditional science which often measured only a few variables at a time. However, in modern science we can often measure millions of variables per individual. For example, in genetic studies that quantify the entire genome, there may be many millions of measures per individual, and in brain imaging we often collect data from more than 100,000 locations in the brain at once. When standard hypothesis testing is applied in these contexts, bad things can happen unless we take appropriate care. Let’s look at an example to see how this might work. There is great interest in understanding the genetic factors that can predispose individuals to major mental illnesses such as schizophrenia, because we know that about 80% of the variation between individuals in the presence of schizophrenia is due to genetic differences. The Human Genome Project and the ensuing revolution in genome science has provided tools to examine the many ways in which humans differ from one another in their genomes. One approach that has been used in recent years is known as a genome-wide association study (GWAS), in which the genome of each individual is characterized at one million or more places in their genome to determine which letters of the genetic code (which we call “variants”) they have at that location. After these have been determined, the researchers perform a statistical test at each location in the genome to determine whether people diagnosed with schizophrenia are more or less likely to have one specific variant at that location. Let’s imagine what would happen if the researchers simply asked whether the test was significant at p<.05 at each location, when in fact there is no true effect at any of the locations. To do this, we generate a large number of simulated t values from a null distribution, and ask how many of them are significant at p<.05. Let’s do this many times, and each time count up how many of the tests come out as significant (see Figure 16.8). This shows that about 5% of all of the tests were significant in each run, meaning that if we were to use p < .05 as our threshold for statistical significance, then even if there were no truly significant relationships present, we would still “find” about 500 genes that were seemingly significant (the expected number of significant results is simply $n * \alpha$). That is because while we controlled for the error per test, we didn’t control the familywise error, or the error across all of the tests, which is what we really want to control if we are going to be looking at the results from a large number of tests. Using p<.05, our familywise error rate in the above example is one – that is, we are pretty much guaranteed to make at least one error in any particular study. A simple way to control for the familywise error is to divide the alpha level by the number of tests; this is known as the Bonferroni correction, named after the Italian statistician Carlo Bonferroni. Using the data from our example above, we see in Figure ?? that only about 5 percent of studies show any significant results using the corrected alpha level of 0.000005 instead of the nominal level of .05.

## [1] "corrected familywise error rate: 0.036"

We have effectively controlled the familywise error, such that the probability of making any errors in our study is controlled at right around .05.
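The following is a minimal sketch of the kind of simulation described above; it is not the book’s own code, and the figure of 10,000 tests per simulated study is an assumption chosen to be consistent with the numbers quoted in the text (about 500 significant tests at p < .05, and a corrected alpha of 0.000005).

``````
# simulate many "studies", each consisting of 10,000 tests of a true null
# hypothesis, and estimate the familywise error rate with and without
# the Bonferroni correction
library(tidyverse)
set.seed(123456)

nTests <- 10000   # tests per simulated study (assumed)
nStudies <- 100   # number of simulated studies
alpha <- 0.05

run_study <- function() {
  z <- rnorm(nTests)                          # test statistics under the null
  p <- 2 * pnorm(abs(z), lower.tail = FALSE)  # two-tailed p-values
  tibble(
    nSig = sum(p < alpha),                    # significant tests, uncorrected
    anySigUncorrected = any(p < alpha),
    anySigBonferroni = any(p < alpha / nTests)
  )
}

results <- map_dfr(seq(nStudies), ~ run_study())

mean(results$nSig)               # roughly alpha * nTests (about 500) per study
mean(results$anySigUncorrected)  # uncorrected familywise error rate (essentially 1)
mean(results$anySigBonferroni)   # Bonferroni familywise error rate (close to alpha)
``````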
17: Hypothesis Testing in R

In this chapter we will present several examples of using R to perform hypothesis testing.

17.01: Simple Example- Coin-flipping (Section 16.3.5.1)

Let’s say that we flipped 100 coins and observed 70 heads. We would like to use these data to test the hypothesis that the true probability is 0.5. First let’s generate our data, simulating 100,000 sets of 100 flips. We use such a large number because it turns out that it’s very rare to get 70 heads, so we need many attempts in order to get a reliable estimate of these probabilities. This will take a couple of minutes to complete.

``````
# simulate 100,000 runs of 100 coin flips to estimate the
# empirical probability of 70 or more heads out of 100 flips
nRuns <- 100000

# create function to toss coins
tossCoins <- function() {
  flips <- runif(100) > 0.5
  return(tibble(nHeads=sum(flips)))
}

# create an input data frame for do()
input_df <- tibble(id=seq(nRuns)) %>%
  group_by(id)

# use do() to perform the coin flips
flip_results <- input_df %>%
  do(tossCoins()) %>%
  ungroup()

p_ge_70_sim <- flip_results %>%
  summarise(p_gt_70 = mean(nHeads >= 70)) %>%
  pull()

p_ge_70_sim
``````
``## [1] 3e-05``

For comparison, we can also compute the p-value for 70 or more heads based on a null hypothesis of $P_{heads}=0.5$, using the binomial distribution.

``````
# compute the probability of 69 or fewer heads,
# when P(heads)=0.5
p_lt_70 <- pbinom(69, 100, 0.5)

# the probability of 70 or more heads is simply
# the complement of p_lt_70
p_ge_70 <- 1 - p_lt_70

p_ge_70
``````
``## [1] 3.9e-05``

17.02: Simulating p-values

In this exercise we will perform hypothesis testing many times in order to test whether the p-values provided by our statistical test are valid. We will sample data from a normal distribution with a mean of zero, and for each sample perform a t-test to determine whether the mean is different from zero. We will then count how often we reject the null hypothesis; since we know that the true mean is zero, these are by definition Type I errors.

``````
nRuns <- 5000

# create input data frame for do()
input_df <- tibble(id=seq(nRuns)) %>%
  group_by(id)

# create a function that will take a sample
# and perform a one-sample t-test
sample_ttest <- function(sampSize=32){
  tt.result <- t.test(rnorm(sampSize))
  return(tibble(pvalue=tt.result$p.value))
}

# perform simulations
sample_ttest_result <- input_df %>%
  do(sample_ttest())

p_error <- sample_ttest_result %>%
  ungroup() %>%
  summarize(p_error = mean(pvalue<.05)) %>%
  pull()

p_error
``````
``## [1] 0.048``

We should see that the proportion of samples with $p < .05$ is about 5%.
Learning Objectives

• Describe the proper interpretation of a confidence interval, and compute a confidence interval for the mean of a given dataset.
• Define the concept of effect size, and compute the effect size for a given test.

In the previous chapter we discussed how we can use data to test hypotheses. Those methods provided a binary answer: we either reject or fail to reject the null hypothesis. However, this kind of decision overlooks a couple of important questions. First, we would like to know how much uncertainty we have about the answer (regardless of which way it goes). In addition, sometimes we don’t have a clear null hypothesis, so we would like to see what range of estimates are consistent with the data. Second, we would like to know how large the effect actually is, since as we saw in the weight loss example in the previous chapter, a statistically significant effect is not necessarily a practically important effect. In this chapter we will discuss methods to address these two questions: confidence intervals to provide a measure of our uncertainty about our estimates, and effect sizes to provide a standardized way to understand how large the effects are. We will also discuss the concept of statistical power which tells us how well we can expect to find any true effects that might exist.

18: Quantifying Effects and Designing Studies

So far in the book we have focused on estimating the specific value of a statistic. For example, let’s say we want to estimate the mean weight of adults in the NHANES dataset. Let’s take a sample from the dataset and estimate the mean. In this sample, the mean weight was 79.92 kilograms. We refer to this as a point estimate since it provides us with a single number to describe our estimate. However, we know from our earlier discussion of sampling error that there is some uncertainty about this estimate, which is described by the standard error. You should also remember that the standard error is determined by two components: the population standard deviation (which is the numerator), and the square root of the sample size (which is in the denominator). The population standard deviation is an unknown but fixed parameter that is not under our control, whereas the sample size is under our control. Thus, we can decrease our uncertainty about the estimate by increasing our sample size – up to the limit of the entire population size, at which point there is no uncertainty at all because we can just calculate the population parameter directly from the data of the entire population. You may also remember that earlier we introduced the concept of a confidence interval, which is a way of describing our uncertainty about a statistical estimate. Remember that a confidence interval describes an interval that will on average contain the true population parameter with a given probability; for example, the 95% confidence interval is an interval that will capture the true population parameter 95% of the time. Note again that this is not a statement about the population parameter; any particular confidence interval either does or does not contain the true parameter. As Jerzy Neyman, the inventor of the confidence interval, said: “The parameter is an unknown constant and no probability statement concerning its value may be made.”(Neyman 1937) The confidence interval for the mean is computed as: $CI = \text{point estimate} \pm \text{critical value} \times \text{standard error}$ where the critical value is determined by the sampling distribution of the estimate.
The important question, then, is what that sampling distribution is.

18.1.1 Confidence intervals using the normal distribution

If we know the population standard deviation, then we can use the normal distribution to compute a confidence interval. We usually don't, but for our example of the NHANES dataset we do (it's 21.3 for weight). Let's say that we want to compute a 95% confidence interval for the mean. The critical values would then be the values of the standard normal distribution that capture the central 95% of the distribution; these are simply the 2.5th percentile and the 97.5th percentile of the distribution, which we can compute using the `qnorm()` function in R, and they come out to $\pm 1.96$. Thus, the confidence interval for the mean ($\bar{X}$) is: $CI = \bar{X} \pm 1.96*SE$ Using the estimated mean from our sample (79.92) and the known population standard deviation, we can compute the confidence interval of [77.28, 82.56].

18.1.2 Confidence intervals using the t distribution

As stated above, if we knew the population standard deviation, then we could use the normal distribution to compute our confidence intervals. However, in general we don't – in which case the t distribution is more appropriate as a sampling distribution. Remember that the t distribution is slightly broader than the normal distribution, especially for smaller samples, which means that the confidence intervals will be slightly wider than they would be if we were using the normal distribution. This incorporates the extra uncertainty that arises when we make conclusions based on small samples. We can compute the 95% confidence interval in a way similar to the normal distribution example above, but the critical value is determined by the 2.5th percentile and the 97.5th percentile of the t distribution, which we can compute using the `qt()` function in R. Thus, the confidence interval for the mean ($\bar{X}$) is: $CI = \bar{X} \pm t_{crit}*SE$ where $t_{crit}$ is the critical t value. For the NHANES weight example (with a sample size of 250), the confidence interval would be $79.92 \pm 1.97 \times SE$, which gives the interval [77.15, 82.69]. Remember that this doesn't tell us anything about the probability of the true population value falling within this interval, since it is a fixed parameter (which we know is 81.77, because we have the entire population in this case) and it either does or does not fall within this specific interval (in this case, it does). Instead, it tells us that in the long run, if we compute the confidence interval using this procedure, 95% of the time that confidence interval will capture the true population parameter.

18.1.3 Confidence intervals and sample size

Because the standard error decreases with sample size, the confidence interval for the mean should get narrower as the sample size increases, providing progressively tighter bounds on our estimate. Figure 18.1 shows an example of how the confidence interval would change as a function of sample size for the weight example. From the figure it's evident that the confidence interval becomes increasingly tighter as the sample size increases, but that increasing the sample size provides diminishing returns, consistent with the fact that the width of the interval is inversely proportional to the square root of the sample size.

18.1.4 Computing confidence intervals using the bootstrap

In some cases we can't assume normality, or we don't know the sampling distribution of the statistic. In these cases, we can use the bootstrap (which we introduced in Chapter 14).
As a reminder, the bootstrap involves repeatedly resampling the data with replacement, and then using the distribution of the statistic computed on those samples as a surrogate for the sampling distribution of the statistic. R includes a package called `boot` that we can use to run the bootstrap and compute confidence intervals. It's always good to use a built-in function to compute a statistic if it is available, rather than coding it up from scratch — both because it saves you extra work, and because the built-in version will be better tested. These are the results when we use the `boot()` function to compute the confidence interval for weight in our NHANES sample:

## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 1000 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = bs, type = "perc")
##
## Intervals :
## Level     Percentile
## 95%   (77, 83 )
## Calculations and Intervals on Original Scale

These values are fairly close to the values obtained using the t distribution above, though not exactly the same.

18.1.5 Relation of confidence intervals to hypothesis tests

There is a close relationship between confidence intervals and hypothesis tests. In particular, if the confidence interval does not include the value specified by the null hypothesis, then the associated statistical test would be statistically significant. For example, if you are testing whether the mean of a sample is greater than zero with $\alpha = 0.05$, you could simply check to see whether zero is contained within the 95% confidence interval for the mean. Things get trickier if we want to compare the means of two conditions (Schenker and Gentleman 2001). There are a couple of situations that are clear. First, if each mean is contained within the confidence interval for the other mean, then there is certainly no significant difference at the chosen confidence level. Second, if there is no overlap between the confidence intervals, then there is certainly a significant difference at the chosen level; in fact, this test is quite conservative, such that the actual error rate will be lower than the chosen level. But what about the case where the confidence intervals overlap one another but don't contain the means for the other group? In this case the answer depends on the relative variability of the two variables, and there is no general answer. In general we should avoid using the "visual test" for overlapping confidence intervals.
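To make these computations concrete, here is a minimal R sketch using the values from the weight example (mean 79.92, population standard deviation 21.3, n = 250). The object names are our own, and because we do not have the book's NHANES sample here, the bootstrap part uses stand-in simulated data; the t-based interval in the text also uses the sample standard deviation, which we approximate with the population value.

```
xbar <- 79.92        # sample mean weight (kg)
sigma <- 21.3        # known population standard deviation
n <- 250
se <- sigma / sqrt(n)

# normal-based 95% CI: critical value from the standard normal distribution
z_crit <- qnorm(0.975)                # 1.96
xbar + c(-1, 1) * z_crit * se         # roughly [77.28, 82.56]

# t-based 95% CI: critical value from the t distribution with n - 1 df
t_crit <- qt(0.975, df = n - 1)       # about 1.97
xbar + c(-1, 1) * t_crit * se         # slightly wider than the normal interval

# percentile bootstrap CI with the boot package, using stand-in data
library(boot)
set.seed(1)
weights <- rnorm(n, mean = xbar, sd = sigma)
mean_fun <- function(d, i) mean(d[i])        # statistic computed on each resample
bs <- boot(weights, statistic = mean_fun, R = 1000)
boot.ci(bs, type = "perc")
```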
“Statistical significance is the least interesting thing about the results. You should describe the results in terms of measures of magnitude – not just, does a treatment affect people, but how much does it affect them.” Gene Glass

In the last chapter, we discussed the idea that statistical significance may not necessarily reflect practical significance. In order to discuss practical significance, we need a standard way to describe the size of an effect in terms of the actual data, which we refer to as an effect size. In this section we will introduce the concept and discuss various ways that effect sizes can be calculated. An effect size is a standardized measurement that compares the size of some statistical effect to a reference quantity, such as the variability of the statistic. In some fields of science and engineering, this idea is referred to as a "signal to noise ratio". There are many different ways that the effect size can be quantified, which depend on the nature of the data.

18.2.1 Cohen's D

One of the most common measures of effect size is known as Cohen's d, named after the statistician Jacob Cohen (who is most famous for his 1994 paper titled "The Earth Is Round (p < .05)"). It is used to quantify the difference between two means, in terms of their standard deviation: $d = \frac{\bar{X}_1 - \bar{X}_2}{s}$ where $\bar{X}_1$ and $\bar{X}_2$ are the means of the two groups, and $s$ is the pooled standard deviation (which is a combination of the standard deviations for the two samples, weighted by their sample sizes): $s = \sqrt{\frac{(n_1 - 1)s^2_1 + (n_2 - 1)s^2_2 }{n_1 +n_2 -2}}$ where $n_1$ and $n_2$ are the sample sizes and $s_1$ and $s_2$ are the standard deviations for the two groups respectively. Note that this is very similar in spirit to the t statistic — the main difference is that the denominator in the t statistic is based on the standard error of the mean, whereas the denominator in Cohen's d is based on the standard deviation of the data. This means that while the t statistic will grow as the sample size gets larger, the value of Cohen's d will remain the same.

There is a commonly used scale for interpreting the size of an effect in terms of Cohen's d:

Table 18.1: Interpretation of Cohen's d
D            Interpretation
0.0 - 0.2    negligible
0.2 - 0.5    small
0.5 - 0.8    medium
0.8 and up   large

It can be useful to look at some commonly understood effects to help understand these interpretations. For example, the effect size for gender differences in height (d = 1.6) is very large by reference to our table above. We can also see this by looking at the distributions of male and female heights in our sample. Figure 18.2 shows that the two distributions are quite well separated, though still overlapping, highlighting the fact that even when there is a very large effect size for the difference between two groups, there will be individuals from each group that are more like the other group. It is also worth noting that we rarely encounter effects of this magnitude in science, in part because they are such obvious effects that we don't need scientific research to find them. As we will see in Chapter 32 on reproducibility, very large reported effects in scientific research often reflect the use of questionable research practices rather than truly huge effects in nature. It is also worth noting that even for such a huge effect, the two distributions still overlap - there will be some females who are taller than the average male, and vice versa.
For most interesting scientific effects, the degree of overlap will be much greater, so we shouldn't immediately jump to strong conclusions about different populations based on even a large effect size.

18.2.2 Pearson's r

Pearson's r, also known as the correlation coefficient, is a measure of the strength of the linear relationship between two continuous variables. We will discuss correlation in much more detail in Chapter 24, so we will save the details for that chapter; here, we simply introduce r as a way to quantify the relation between two variables. r is a measure that varies from -1 to 1, where a value of 1 represents a perfect positive relationship between the variables, 0 represents no relationship, and -1 represents a perfect negative relationship. Figure 18.3 shows examples of various levels of correlation using randomly generated data.

18.2.3 Odds ratio

In our earlier discussion of probability we discussed the concept of odds – that is, the relative likelihood of some event happening versus not happening: $odds\ of\ A = \frac{P(A)}{P(\neg A)}$ We also discussed the odds ratio, which is simply the ratio of two odds. The odds ratio is a useful way to describe effect sizes for binary variables. For example, let's take the case of smoking and lung cancer. A study published in the International Journal of Cancer in 2012 (Pesch et al. 2012) combined data regarding the occurrence of lung cancer in smokers and individuals who have never smoked across a number of different studies. Note that these data come from case-control studies, which means that participants in the studies were recruited because they either did or did not have cancer; their smoking status was then examined. These numbers thus do not represent the prevalence of cancer amongst smokers in the general population – but they can tell us about the relationship between cancer and smoking.

Table 18.2: Cancer occurrence separately for current smokers and those who have never smoked
Status       NeverSmoked    CurrentSmoker
No Cancer    2883           3829
Cancer       220            6784

We can convert these numbers to odds for each of the groups. The odds of having lung cancer for someone who has never smoked are 0.08, whereas the odds of a current smoker having lung cancer are 1.77. The ratio of these odds tells us about the relative likelihood of cancer between the two groups: the odds ratio of 23.22 tells us that the odds of cancer in smokers are roughly 23 times higher than the odds in never-smokers.
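The odds and odds ratio above can be checked directly from the counts in Table 18.2, and a small helper function makes the pooled-standard-deviation version of Cohen's d from the previous section concrete (the function and variable names below are our own, not from the text):

```
# odds and odds ratio for the smoking data in Table 18.2
odds_never  <- 220 / 2883      # cancer vs. no cancer among never-smokers
odds_smoker <- 6784 / 3829     # cancer vs. no cancer among current smokers
odds_smoker / odds_never       # about 23.2

# a small helper for Cohen's d using the pooled standard deviation
cohens_d <- function(x1, x2) {
  n1 <- length(x1); n2 <- length(x2)
  s_pooled <- sqrt(((n1 - 1) * var(x1) + (n2 - 1) * var(x2)) / (n1 + n2 - 2))
  (mean(x1) - mean(x2)) / s_pooled
}

# example with simulated groups whose means differ by one standard deviation
set.seed(1)
cohens_d(rnorm(100, mean = 1), rnorm(100, mean = 0))   # should be near 1
```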
Remember from the previous chapter that under the Neyman-Pearson hypothesis testing approach, we have to specify our level of tolerance for two kinds of errors: false positives (which they called Type I error) and false negatives (which they called Type II error). People often focus heavily on Type I error, because making a false positive claim is generally viewed as a very bad thing; for example, the now discredited claims by Wakefield (1999) that autism was associated with vaccination led to anti-vaccine sentiment that has resulted in substantial increases in childhood diseases such as measles. Similarly, we don't want to claim that a drug cures a disease if it really doesn't. That's why the tolerance for Type I errors is generally set fairly low, usually at $\alpha = 0.05$. But what about Type II errors? The concept of statistical power is the complement of Type II error – that is, it is the likelihood of finding a positive result, given that a true effect exists: $power = 1 - \beta$ Another important aspect of the Neyman-Pearson model that we didn't discuss above is the fact that in addition to specifying the acceptable levels of Type I and Type II errors, we also have to describe a specific alternative hypothesis – that is, what is the size of the effect that we wish to detect? Otherwise, we can't interpret $\beta$ – the likelihood of finding a large effect is always going to be higher than the likelihood of finding a small effect, so $\beta$ will be different depending on the size of effect we are trying to detect.

There are three factors that can affect power:
• Sample size: Larger samples provide greater statistical power
• Effect size: A given design will always have greater power to find a large effect than a small effect (because finding large effects is easier)
• Type I error rate: There is a relationship between Type I error and power such that (all else being equal) decreasing Type I error will also decrease power.

We can see this through simulation. First we simulate a single experiment, in which we compare the means of two groups using a standard t-test; we then repeat this experiment many times while varying the size of the effect (specified in terms of Cohen's d), the Type I error rate, and the sample size, and for each combination we examine how the proportion of significant results (i.e. power) is affected. Figure 18.4 shows an example of how power changes as a function of these factors. This simulation shows us that even with a sample size of 96, we will have relatively little power to find a small effect ($d = 0.2$) with $\alpha = 0.005$. This means that a study designed to do this would be futile – that is, it is almost guaranteed to find nothing even if a true effect of that size exists.

There are at least two important reasons to care about statistical power, one of which we discuss here and the other of which we will return to in Chapter 32. If you are a researcher, you probably don't want to spend your time doing futile experiments. Running an underpowered study is essentially futile, because it means that there is a very low likelihood that one will find an effect, even if it exists.

18.3.1 Power analysis

Fortunately, there are tools available that allow us to determine the statistical power of an experiment. The most common use of these tools is in planning an experiment, when we would like to determine how large our sample needs to be in order to have sufficient power to find our effect of interest.
Let's say that we are interested in running a study of how a particular personality trait differs between users of iOS versus Android devices. Our plan is to collect two groups of individuals and measure them on the personality trait, and then compare the two groups using a t-test. In order to determine the necessary sample size, we can use the `pwr.t.test()` function from the `pwr` library:

##
##      Two-sample t test power calculation
##
##               n = 64
##               d = 0.5
##       sig.level = 0.05
##           power = 0.8
##     alternative = two.sided
##
## NOTE: n is number in *each* group

This tells us that we would need at least 64 subjects in each group in order to have sufficient power to find a medium-sized effect. It's always important to run a power analysis before one starts a new study, to make sure that the study won't be futile due to a sample that is too small.

It might have occurred to you that if the effect size is large enough, then the necessary sample will be very small. For example, if we run the same power analysis with an effect size of d = 2, then we will see that we only need about 5 subjects in each group to have sufficient power to find the difference.

##
##      Two-sample t test power calculation
##
##               n = 5.1
##               d = 2
##       sig.level = 0.05
##           power = 0.8
##     alternative = two.sided
##
## NOTE: n is number in *each* group

However, it's rare in science to be doing an experiment where we expect to find such a large effect – just as we don't need statistics to tell us that 16-year-olds are taller than 6-year-olds. When we run a power analysis, we need to specify an effect size that is plausible for our study, which would usually come from previous research. However, in Chapter 32 we will discuss a phenomenon known as the "winner's curse" that likely results in published effect sizes being larger than the true effect size, so this should also be kept in mind.
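For reference, results like those shown above come from calls of the following form; the `pwr` package and its `pwr.t.test()` function are introduced more fully in the next chapter:

```
library(pwr)

pwr.t.test(d = 0.5, power = 0.8)   # medium effect: about 64 per group
pwr.t.test(d = 2, power = 0.8)     # very large effect: about 5 per group
```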
In this chapter we focus specifically on statistical power.

19: Statistical Power in R

We can perform a power analysis using functions from the `pwr` package. Let's focus on the power of a t-test to detect a difference in means between two groups. Let's say that we think that an effect size of Cohen's d = 0.5 is realistic for the study in question (based on previous research) and would be of scientific interest. We wish to have 80% power to find the effect if it exists. We can compute the sample size needed for adequate power using the `pwr.t.test()` function:

```
library(pwr)

pwr.t.test(d = 0.5, power = 0.8)
```

```
##
##      Two-sample t test power calculation
##
##               n = 64
##               d = 0.5
##       sig.level = 0.05
##           power = 0.8
##     alternative = two.sided
##
## NOTE: n is number in *each* group
```

Thus, about 64 participants would be needed in each group in order to test the hypothesis with adequate power.

19.02: Power Curves

We can also create plots that show us how the power to find an effect varies as a function of effect size and sample size. We will use the `crossing()` function from the `tidyr` package to help with this. This function takes in two vectors, and returns a tibble that contains all possible combinations of those values.

```
library(tidyverse)

effect_sizes <- c(0.2, 0.5, 0.8)
sample_sizes <- seq(10, 500, 10)

input_df <- crossing(effect_sizes, sample_sizes)
glimpse(input_df)
```

```
## Observations: 150
## Variables: 2
## $ effect_sizes <dbl> 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0…
## $ sample_sizes <dbl> 10, 20, 30, 40, 50, 60, 70, 80, 90, …
```

Using this, we can then perform a power analysis for each combination of effect size and sample size to create our power curves. In this case, let's say that we wish to perform a two-sample t-test.

```
# create a function that gets the power value and
# returns it as part of the data frame
get_power <- function(df){
  power_result <- pwr.t.test(n = df$sample_sizes,
                             d = df$effect_sizes,
                             type = 'two.sample')
  df$power <- power_result$power
  return(df)
}

# run get_power for each combination of effect size
# and sample size
power_curves <- input_df %>%
  do(get_power(.)) %>%
  mutate(effect_sizes = as.factor(effect_sizes))
```

Now we can plot the power curves, using a separate line for each effect size.

```
ggplot(power_curves,
       aes(x = sample_sizes, y = power, linetype = effect_sizes)) +
  geom_line() +
  geom_hline(yintercept = 0.8, linetype = 'dotdash')
```

19.03: Simulating Statistical Power

Let's simulate this to see whether the power analysis actually gives the right answer. We will sample data for two groups, with a difference of 0.5 standard deviations between their underlying distributions, and we will look at how often we reject the null hypothesis.
```
nRuns <- 5000
effectSize <- 0.5

# perform power analysis to get sample size
pwr.result <- pwr.t.test(d = effectSize, power = 0.8)

# round up from estimated sample size
sampleSize <- ceiling(pwr.result$n)

# create a function that will generate samples and test for
# a difference between groups using a two-sample t-test
get_t_result <- function(sampleSize, effectSize){
  # take sample for the first group from N(0, 1)
  group1 <- rnorm(sampleSize)
  group2 <- rnorm(sampleSize, mean = effectSize)
  ttest.result <- t.test(group1, group2)
  return(tibble(pvalue = ttest.result$p.value))
}

index_df <- tibble(id = seq(nRuns)) %>%
  group_by(id)

power_sim_results <- index_df %>%
  do(get_t_result(sampleSize, effectSize))

p_reject <- power_sim_results %>%
  ungroup() %>%
  summarize(pvalue = mean(pvalue < .05)) %>%
  pull()

p_reject
```

```
## [1] 0.8
```

This should return a number very close to 0.8.
Learning Objectives
• Describe the main differences between Bayesian analysis and null hypothesis testing
• Describe and perform the steps in a Bayesian analysis
• Describe the effects of different priors, and the considerations that go into choosing a prior
• Describe the difference in interpretation between a confidence interval and a Bayesian credible interval

In this chapter we will take up the approach to statistical modeling and inference that stands in contrast to the null hypothesis testing framework that you encountered in Chapter 16. This is known as "Bayesian statistics" after the Reverend Thomas Bayes, whose theorem you have already encountered in Chapter 10. In this chapter you will learn how Bayes' theorem provides a way of understanding data that solves many of the conceptual problems that we discussed regarding null hypothesis testing.

20: Bayesian Statistics

Say you are walking down the street and a friend of yours walks right by but doesn't say hello. You would probably try to decide why this happened – Did they not see you? Are they mad at you? Are you suddenly cloaked in a magic invisibility shield? One of the basic ideas behind Bayesian statistics is that we want to infer the details of how the data are being generated, based on the data themselves. In this case, you want to use the data (i.e. the fact that your friend did not say hello) to infer the process that generated the data (e.g. whether or not they actually saw you, how they feel about you, etc.).

The idea behind a generative model is that a latent (unseen) process generates the data we observe, usually with some amount of randomness in the process. When we take a sample of data from a population and estimate a parameter from the sample, what we are doing in essence is trying to learn the value of a latent variable (the population mean) that gives rise through sampling to the observed data (the sample mean). Figure 20.1 shows a schematic of this idea. If we know the value of the latent variable, then it's easy to reconstruct what the observed data should look like. For example, let's say that we are flipping a coin that we know to be fair, such that we would expect it to land on heads 50% of the time. We can describe the coin by a binomial distribution with a value of $P_{heads}=0.5$, and then we could generate random samples from such a distribution in order to see what the observed data should look like. However, in general we are in the opposite situation: we don't know the value of the latent variable of interest, but we have some data that we would like to use to estimate it.

20.02: Bayes' Theorem and Inverse Inference

The reason that Bayesian statistics has its name is because it takes advantage of Bayes' theorem to make inferences from data about the underlying process that generated the data. Let's say that we want to know whether a coin is fair. To test this, we flip the coin 10 times and come up with 7 heads. Before this test we were pretty sure that $P_{heads}=0.5$, but finding 7 heads out of 10 flips would certainly give us pause. We already know how to compute the conditional probability that we would flip 7 or more heads out of 10 if the coin is really fair ($P(n\ge7|p_{heads}=0.5)$), using the binomial distribution. (We ask about 7 or more heads, rather than exactly 7, because, just as in null hypothesis testing, we are interested in outcomes at least as extreme as the one we observed.) The resulting probability is 0.055.
That is a fairly small number, but this number doesn't really answer the question that we are asking – it is telling us about the likelihood of 7 or more heads given some particular probability of heads, whereas what we really want to know is the probability of heads. This should sound familiar, as it's exactly the situation that we were in with null hypothesis testing, which told us about the likelihood of data rather than the likelihood of hypotheses. Remember that Bayes' theorem provides us with the tool that we need to invert a conditional probability: $P(H|D) = \frac{P(D|H)*P(H)}{P(D)}$

We can think of this theorem as having four parts:
• prior ($P(Hypothesis)$): Our degree of belief about hypothesis H before seeing the data D
• likelihood ($P(Data|Hypothesis)$): How likely are the observed data D under hypothesis H?
• marginal likelihood ($P(Data)$): How likely are the observed data, combining over all possible hypotheses?
• posterior ($P(Hypothesis|Data)$): Our updated belief about hypothesis H, given the data D

In the case of our coin-flipping example:
- prior ($P_{heads}$): Our degree of belief about the likelihood of flipping heads, which was $P_{heads}=0.5$
- likelihood ($P(\text{7 or more heads out of 10 flips}|P_{heads}=0.5)$): How likely are 7 or more heads out of 10 flips if $P_{heads}=0.5$?
- marginal likelihood ($P(\text{7 or more heads out of 10 flips})$): How likely are we to observe 7 or more heads out of 10 coin flips, in general?
- posterior ($P(P_{heads}|\text{7 or more heads out of 10 coin flips})$): Our updated belief about $P_{heads}$ given the observed coin flips

Here we see one of the primary differences between frequentist and Bayesian statistics. Frequentists do not believe in the idea of a probability of a hypothesis (i.e., our degree of belief about a hypothesis) – for them, a hypothesis is either true or it isn't. Another way to say this is that for the frequentist, the hypothesis is fixed and the data are random, which is why frequentist inference focuses on describing the probability of data given a hypothesis (i.e. the p-value). Bayesians, on the other hand, are comfortable making probability statements about both data and hypotheses.
We ultimately want to use Bayesian statistics to make decisions about hypotheses, but before we do that we need to estimate the parameters that are necessary to make the decision. Here we will walk through the process of Bayesian estimation. Let's use another screening example: airport security screening. If you fly a lot, it's just a matter of time until one of the random explosive screenings comes back positive; I had the particularly unfortunate experience of this happening soon after September 11, 2001, when airport security staff were especially on edge. What the security staff want to know is the likelihood that a person is carrying an explosive, given that the machine has given a positive test. Let's walk through how to calculate this value using Bayesian analysis.

20.3.1 Specifying the prior

To use Bayes' theorem, we first need to specify the prior probability for the hypothesis. In this case, we don't know the real number but we can assume that it's quite small. According to the FAA, there were 971,595,898 air passengers in the U.S. in 2017. Let's say that one out of those travelers was carrying an explosive in their bag — that would give a prior probability of 1 out of 971 million, which is very small! The security personnel may have reasonably held a stronger prior in the months after the 9/11 attack, so let's say that their subjective belief was that one out of every million flyers was carrying an explosive.

20.3.2 Collect some data

The data are composed of the results of the explosive screening test. Let's say that the security staff runs the bag through their testing apparatus 3 times, and it gives a positive reading on 3 of the 3 tests.

20.3.3 Computing the likelihood

We want to compute the likelihood of the data under the hypothesis that there is an explosive in the bag. Let's say that we know (from the machine's manufacturer) that the sensitivity of the test is 0.99 – that is, when a device is present, it will detect it 99% of the time. To determine the likelihood of our data under the hypothesis that a device is present, we can treat each test as a Bernoulli trial (that is, a trial with an outcome of true or false) with a probability of success of 0.99, which we can model using a binomial distribution.

20.3.4 Computing the marginal likelihood

We also need to know the overall likelihood of the data – that is, finding 3 positives out of 3 tests. Computing the marginal likelihood is often one of the most difficult aspects of Bayesian analysis, but for our example it's simple because we can take advantage of the specific form of Bayes' theorem for a binary outcome that we introduced in Section 10.7: $P(E|T) = \frac{P(T|E)*P(E)}{P(T|E)*P(E) + P(T|\neg E)*P(\neg E)}$ where $E$ refers to the presence of explosives, and $T$ refers to a positive test result. The marginal likelihood in this case is a weighted average of the likelihood of the data under either presence or absence of the explosive, multiplied by the probability of the explosive being present (i.e. the prior). In this case, let's say that we know (from the manufacturer) that the specificity of the test is 0.99, such that the likelihood of a positive result when there is no explosive ($P(T|\neg E)$) is 0.01.

20.3.5 Computing the posterior

We now have all of the parts that we need to compute the posterior probability of an explosive being present, given the observed 3 positive outcomes out of 3 tests.
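A minimal R sketch of this computation, using the values assumed above (a prior of one in a million, three positive tests out of three, and sensitivity and specificity of 0.99); the variable names are our own, and the full version appears in the appendix code (Section 21.01):

```
prior <- 1 / 1e6            # subjective prior: one in a million travelers
sensitivity <- 0.99         # P(positive test | explosive present)
specificity <- 0.99         # P(negative test | no explosive)
nTests <- 3
nPositives <- 3

# likelihood of 3/3 positives if an explosive is present
likelihood <- dbinom(nPositives, nTests, sensitivity)

# marginal likelihood: average over both hypotheses, weighted by the prior
marginal <- likelihood * prior +
  dbinom(nPositives, nTests, 1 - specificity) * (1 - prior)

posterior <- likelihood * prior / marginal
posterior   # approximately 0.49
```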
This result shows us that the posterior probability of an explosive in the bag given these positive tests (0.492) is just under 50%, again highlighting the fact that testing for rare events is almost always liable to produce high numbers of false positives, even when the specificity and sensitivity are very high. An important aspect of Bayesian analysis is that it can be sequential. Once we have the posterior from one analysis, it can become the prior for the next analysis!
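As a quick illustration of this sequential idea (our own extension of the example, reusing the quantities from the sketch above), we can feed the posterior from the first three tests back in as the prior for a second round of three positive tests:

```
# assumes prior, sensitivity, specificity, nTests, nPositives from the sketch above
prior2 <- posterior                      # yesterday's posterior is today's prior
likelihood2 <- dbinom(nPositives, nTests, sensitivity)
marginal2 <- likelihood2 * prior2 +
  dbinom(nPositives, nTests, 1 - specificity) * (1 - prior2)
likelihood2 * prior2 / marginal2         # now extremely close to 1
```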
In the previous example there were only two possible outcomes – the explosive is either there or it's not – and we wanted to know which outcome was most likely given the data. However, in other cases we want to use Bayesian estimation to estimate the numeric value of a parameter. Let's say that we want to know about the effectiveness of a new drug for pain; to test this, we can administer the drug to a group of patients and then ask them whether their pain was improved or not after taking the drug. We can use Bayesian analysis to estimate the proportion of people for whom the drug will be effective using these data.

20.4.1 Specifying the prior

In this case, we don't have any prior information about the effectiveness of the drug, so we will use a uniform distribution as our prior, since all values are equally likely under a uniform distribution. In order to simplify the example, we will only look at a subset of 99 possible values of effectiveness (from .01 to .99, in steps of .01). Therefore, each possible value has a prior probability of 1/99.

20.4.2 Collect some data

We need some data in order to estimate the effect of the drug. Let's say that we administer the drug to 100 individuals and find that 64 of them respond positively to the drug.

20.4.3 Computing the likelihood

We can compute the likelihood of the data under any particular value of the effectiveness parameter using the `dbinom()` function in R. In Figure 20.2 you can see the likelihood curves over numbers of responders for several different values of $P_{respond}$. Looking at this, it seems that our observed data are relatively more likely under the hypothesis of $P_{respond}=0.7$, somewhat less likely under the hypothesis of $P_{respond}=0.5$, and quite unlikely under the hypothesis of $P_{respond}=0.3$. One of the fundamental ideas of Bayesian inference is that we should upweight our belief in values of our parameter of interest in proportion to how likely the data are under those values, balanced against what we believed about the parameter values before having seen the data (our prior knowledge).

20.4.4 Computing the marginal likelihood

In addition to the likelihood of the data under different hypotheses, we need to know the overall likelihood of the data, combining across all hypotheses (i.e., the marginal likelihood). Said another way, the marginal likelihood is the likelihood of the data under each hypothesis, averaged together and weighted by the prior probability of those hypotheses; it tells us how likely the data are under our prior beliefs about the hypotheses. Intuitively, the same likelihood under a particular hypothesis provides stronger relative evidence when the other hypotheses predict the data poorly than when the other hypotheses predict the data just as well. The marginal likelihood is primarily important because it helps to ensure that the posterior values are true probabilities. In this case, our use of a set of discrete possible parameter values makes it easy to compute the marginal likelihood, because we can simply compute the likelihood of the data under each parameter value, multiply each by that value's prior probability, and add them all up.
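A compact sketch of this grid computation (the object names are ours; a fuller version appears in the appendix code, Section 21.02):

```
nResponders <- 64
p_grid <- seq(0.01, 0.99, by = 0.01)                     # 99 candidate values
uniform_prior <- rep(1 / length(p_grid), length(p_grid)) # each has prior 1/99

# likelihood of 64 responders out of 100 under each candidate value
likelihood <- dbinom(nResponders, size = 100, prob = p_grid)

# weighted average of the likelihoods, with weights given by the prior
marginal_likelihood <- sum(likelihood * uniform_prior)
```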
20.4.5 Computing the posterior

We now have all of the parts that we need to compute the posterior probability distribution across all possible values of $p_{respond}$, which is shown in Figure 20.3.

20.4.6 Maximum a posteriori (MAP) estimation

Given our data, we would like to obtain an estimate of $p_{respond}$ for our sample. One way to do this is to find the value with the highest posterior probability, which we refer to as the maximum a posteriori (MAP) estimate; in Figure 20.3 it's the value shown with a marker at the top of the distribution. Note that the result (0.64) is simply the proportion of responders from our sample – this occurs because the prior was uniform and thus didn't influence our estimate.

20.4.7 Credible intervals

Often we would like to know not just a single estimate for the posterior, but an interval in which we are confident that the parameter falls. We previously discussed the concept of confidence intervals in the context of frequentist inference, and you may remember that the interpretation of confidence intervals was particularly convoluted: it is an interval that will contain the true value of the parameter 95% of the time across repeated samples. What we really want is an interval in which we are confident that the true parameter falls, and Bayesian statistics can give us such an interval, which we call a credible interval. The interpretation of this credible interval is much closer to what we had hoped we could get from a confidence interval (but could not): it tells us that there is a 95% probability that the value of $p_{respond}$ falls between these two values. Importantly, it shows that we have high confidence that $p_{respond} > 0.0$, meaning that the drug seems to have a positive effect.

In some cases the credible interval can be computed numerically based on a known distribution, but it's more common to generate a credible interval by sampling from the posterior distribution and then computing quantiles of the samples. This is particularly useful when we don't have an easy way to express the posterior distribution numerically, which is often the case in real Bayesian data analysis. One such method (rejection sampling) is explained in more detail in the Appendix at the end of this chapter.

20.4.8 Effects of different priors

In the previous example we used a flat prior, meaning that we didn't have any reason to believe that any particular value of $p_{respond}$ was more or less likely. However, let's say that we had instead started with some previous data: in a previous study, researchers had tested 20 people and found that 10 of them had responded positively. This would have led us to start with a prior belief that the treatment has an effect in 50% of people. We can do the same computation as above, but using the information from our previous study to inform our prior (see panel A in Figure 20.4). Note that the likelihood and marginal likelihood did not change - only the prior changed. The effect of the change in prior was to pull the posterior closer to the mass of the new prior, which is centered at 0.5. Now let's see what happens if we come to the analysis with an even stronger prior belief. Let's say that instead of having previously observed 10 responders out of 20 people, the prior study had instead tested 500 people and found 250 responders.
This should in principle give us a much stronger prior, and as we see in panel B of Figure 20.4 , that’s what happens: The prior is much more concentrated around 0.5, and the posterior is also much closer to the prior. The general idea is that Bayesian inference combines the information from the prior and the likelihood, weighting the relative strength of each. This example also highlights the sequential nature of Bayesian analysis – the posterior from one analysis can become the prior for the next analysis. Finally, it is important to realize that if the priors are strong enough, they can completely overwhelm the data. Let’s say that you have an absolute prior that $p$ is 0.8 or greater, such that you set the prior likelihood of all other values to zero. What happens if we then compute the posterior? In panel C of Figure 20.4 we see that there is zero density in the posterior for any of the values where the prior was set to zero - the data are overwhelmed by the absolute prior.
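The book's code for panel A of Figure 20.4 is not shown here; one plausible way to build a prior from the earlier study (10 responders out of 20) is to use that study's binomial likelihood over the same grid, normalized to sum to one. This sketch continues from the p_grid and likelihood objects defined above, and all names are our own:

```
# prior built from a hypothetical earlier study with 10 responders out of 20
prior_informed <- dbinom(10, size = 20, prob = p_grid)
prior_informed <- prior_informed / sum(prior_informed)

# combine with the likelihood for the current data (64 of 100 responders)
posterior_informed <- likelihood * prior_informed
posterior_informed <- posterior_informed / sum(posterior_informed)

p_grid[which.max(posterior_informed)]   # MAP estimate, pulled from 0.64 toward 0.5
```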
The impact of priors on the resulting inferences is the most controversial aspect of Bayesian statistics. What is the right prior to use? If the choice of prior determines the results (i.e., the posterior), how can you be sure your results are trustworthy? These are difficult questions, but we should not back away just because we are faced with hard questions. As we discussed previously, Bayesian analyses give us interpretable results (credible intervals, etc.). This alone should inspire us to think hard about these questions so that we can arrive at results that are reasonable and interpretable.

There are various ways to choose one's priors, which (as we saw above) can impact the resulting inferences. Sometimes we have a very specific prior, as in the case where we expected our coin to land on heads 50% of the time, but in many cases we don't have such a strong starting point. Uninformative priors attempt to influence the resulting posterior as little as possible, as we saw in the example of the uniform prior above. It's also common to use weakly informative priors (or default priors), which influence the result only very slightly. For example, if we had used a prior based on a binomial distribution with one heads out of two coin flips, it would have been centered around 0.5 but fairly flat, influencing the posterior only slightly. It is also possible to use priors based on the scientific literature or pre-existing data, which we would call empirical priors. In general, however, we will stick to the use of uninformative/weakly informative priors, since they raise the least concern about influencing our results.

20.06: Bayesian Hypothesis Testing

Having learned how to perform Bayesian estimation, we now turn to the use of Bayesian methods for hypothesis testing. Let's say that there are two politicians who differ in their beliefs about whether the public is in favor of an extra tax to support the national parks. Senator Smith thinks that only 40% of people are in favor of the tax, whereas Senator Jones thinks that 60% of people are in favor. They arrange to have a poll done to test this, which asks 1000 randomly selected people whether they support such a tax. The results are that 490 of the people in the polled sample were in favor of the tax. Based on these data, we would like to know: do the data support the claims of one senator over the other, and by how much? We can test this using a concept known as the Bayes factor, which quantifies which hypothesis is better by comparing how well each predicts the observed data.

20.6.1 Bayes factors

The Bayes factor characterizes the relative likelihood of the data under two different hypotheses. It is defined as: $BF = \frac{p(data|H_1)}{p(data|H_2)}$ for two hypotheses $H_1$ and $H_2$. In the case of our two senators, we know how to compute the likelihood of the data under each hypothesis using the binomial distribution. We will put Senator Smith in the numerator and Senator Jones in the denominator, so that a value greater than one will reflect greater evidence for Senator Smith, and a value less than one will reflect greater evidence for Senator Jones. The resulting Bayes factor (3325.26) provides a measure of the evidence that the data provide regarding the two hypotheses - in this case, it tells us that the data support Senator Smith more than 3000 times more strongly than they support Senator Jones.
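Because each senator makes a specific prediction about the proportion in favor, this Bayes factor can be computed directly from two binomial likelihoods (a minimal sketch using the poll numbers above):

```
# likelihood of 490 of 1000 respondents in favor under each senator's hypothesis
likelihood_smith <- dbinom(490, size = 1000, prob = 0.4)
likelihood_jones <- dbinom(490, size = 1000, prob = 0.6)

likelihood_smith / likelihood_jones    # about 3325, favoring Senator Smith
```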
20.6.2 Bayes factors for statistical hypotheses

In the previous example we had specific predictions from each senator, whose likelihood we could quantify using the binomial distribution. However, in real data analysis we generally must deal with uncertainty about our parameters, which complicates the Bayes factor. However, in exchange we gain the ability to quantify the relative amount of evidence in favor of the null versus alternative hypotheses.

Let's say that we are a medical researcher performing a clinical trial for the treatment of diabetes, and we wish to know whether a particular drug reduces blood glucose compared to placebo. We recruit a set of volunteers and randomly assign them to either the drug or placebo group, and we measure the change in hemoglobin A1C (a marker for blood glucose levels) in each group over the period in which the drug or placebo was administered. What we want to know is: is there a difference between the drug and placebo? First, let's generate some data and analyze them using null hypothesis testing (see Figure 20.5). Then let's perform an independent-samples t-test, which shows that there is a significant difference between the groups:

##
##  Welch Two Sample t-test
##
## data:  hbchange by group
## t = 2, df = 32, p-value = 0.02
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
##  0.11  Inf
## sample estimates:
## mean in group 0 mean in group 1
##          -0.082          -0.650

This test tells us that there is a significant difference between the groups, but it doesn't quantify how strongly the evidence supports the null versus alternative hypotheses. To measure that, we can compute a Bayes factor using the `ttestBF()` function from the BayesFactor package in R:

## Bayes factor analysis
## --------------
## [1] Alt., r=0.707 0<d<Inf    : 3.4  ±0%
## [2] Alt., r=0.707 !(0<d<Inf) : 0.12 ±0%
##
## Against denominator:
##   Null, mu1-mu2 = 0
## ---
## Bayes factor type: BFindepSample, JZS

We are particularly interested in the Bayes factor for an effect greater than zero, which is listed in the line marked "[1]" in the report. The Bayes factor here tells us that the alternative hypothesis (i.e. that the difference is greater than zero) is about 3 times more likely than the point null hypothesis (i.e. a mean difference of exactly zero) given the data. Thus, while the effect is significant, the amount of evidence it provides us in favor of the alternative hypothesis is rather weak.

20.6.2.1 One-sided tests

We generally are less interested in testing against the null hypothesis of a specific point value (e.g. mean difference = 0) than we are in testing against a directional null hypothesis (e.g. that the difference is less than or equal to zero). We can also perform a directional (or one-sided) test using the results from the ttestBF analysis, since it provides two Bayes factors: one for the alternative hypothesis that the mean difference is greater than zero, and one for the alternative hypothesis that the mean difference is less than zero.
If we want to assess the relative evidence for a positive effect, we can compute a Bayes factor comparing the relative evidence for a positive versus a negative effect by simply dividing the two Bayes factors returned by the function:

## Bayes factor analysis
## --------------
## [1] Alt., r=0.707 0<d<Inf : 29 ±0%
##
## Against denominator:
##   Alternative, r = 0.707106781186548, mu =/= 0 !(0<d<Inf)
## ---
## Bayes factor type: BFindepSample, JZS

Now we see that the Bayes factor for a positive effect versus a negative effect is substantially larger (almost 30).

20.6.2.2 Interpreting Bayes Factors

How do we know whether a Bayes factor of 2 or 20 is good or bad? There is a general guideline for interpretation of Bayes factors suggested by Kass & Raftery (1995):

BF           Strength of evidence
1 to 3       not worth more than a bare mention
3 to 20      positive
20 to 150    strong
>150         very strong

Based on this, even though the statistical result is significant, the amount of evidence in favor of the alternative vs. the point null hypothesis is weak enough that it is barely worth mentioning, whereas the evidence for the directional hypothesis is relatively strong.

20.6.3 Assessing evidence for the null hypothesis

Because the Bayes factor compares evidence for two hypotheses, it also allows us to assess whether there is evidence in favor of the null hypothesis, which we couldn't do with standard null hypothesis testing (because it starts with the assumption that the null is true). This can be very useful for determining whether a non-significant result really provides strong evidence that there is no effect, or instead just reflects weak evidence overall.

20.07: Suggested Readings

• The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy, by Sharon Bertsch McGrayne
• Doing Bayesian Data Analysis: A Tutorial Introduction with R, by John K. Kruschke

20.08: Appendix

20.8.1 Rejection sampling

We will generate samples from our posterior distribution using a simple algorithm known as rejection sampling.
The idea is that we choose a random value of x (in this case $p_{respond}$) and a random value of y (in this case, drawn from a uniform distribution between 0 and 1), and we accept the sample only if $y < f(x)$ - here, if the value of y is less than the likelihood of the data given that value of $p_{respond}$. Figure 20.6 shows an example of a histogram of samples obtained using rejection sampling, along with the 95% credible interval obtained using this method.

```
# Compute credible intervals for example

nsamples <- 100000

# create random uniform variates for x and y
x <- runif(nsamples)
y <- runif(nsamples)

# create f(x)
fx <- dbinom(x = nResponders, size = 100, prob = x)

# accept samples where y < f(x)
accept <- which(y < fx)
accepted_samples <- x[accept]

credible_interval <- quantile(x = accepted_samples,
                              probs = c(0.025, 0.975))
kable(credible_interval)
```

        x
2.5%    0.54
97.5%   0.73

21.01: A Simple Example (Section 20.3)

```
bayes_df = data.frame(prior = NA,
                      likelihood = NA,
                      marginal_likelihood = NA,
                      posterior = NA)

bayes_df$prior <- 1 / 1000000

nTests <- 3
nPositives <- 3
sensitivity <- 0.99
specificity <- 0.99

bayes_df$likelihood <- dbinom(nPositives, nTests, 0.99)

bayes_df$marginal_likelihood <-
  dbinom(x = nPositives, size = nTests, prob = sensitivity) * bayes_df$prior +
  dbinom(x = nPositives, size = nTests, prob = 1 - specificity) * (1 - bayes_df$prior)

bayes_df$posterior <- (bayes_df$likelihood * bayes_df$prior) / bayes_df$marginal_likelihood
```

21.02: Estimating Posterior Distributions (Section 20.4)

```
# create a table with results
nResponders <- 64
nTested <- 100

drugDf <- tibble(
  outcome = c("improved", "not improved"),
  number = c(nResponders, nTested - nResponders)
)
```

Computing likelihood

```
likeDf <- tibble(resp = seq(1, 99, 1)) %>%
  mutate(
    presp = resp / 100,
    likelihood5 = dbinom(resp, 100, .5),
    likelihood7 = dbinom(resp, 100, .7),
    likelihood3 = dbinom(resp, 100, .3)
  )

ggplot(likeDf, aes(resp, likelihood5)) +
  geom_line() +
  xlab('number of responders') + ylab('likelihood') +
  geom_vline(xintercept = drugDf$number[1], color = 'blue') +
  geom_line(aes(resp, likelihood7), linetype = 'dotted') +
  geom_line(aes(resp, likelihood3), linetype = 'dashed')
```

Computing marginal likelihood

```
# compute marginal likelihood
likeDf <- likeDf %>%
  mutate(uniform_prior = array(1 / n()))

# multiply each likelihood by prior and add them up
marginal_likelihood <- sum(
  dbinom(
    x = nResponders,   # the number who responded to the drug
    size = 100,        # the number tested
    likeDf$presp       # the probability of response under each hypothesis
  ) * likeDf$uniform_prior
)
```

Computing posterior

```
bayesDf <- tibble(
  steps = seq(from = 0.01, to = 0.99, by = 0.01)
) %>%
  mutate(
    likelihoods = dbinom(x = nResponders, size = 100, prob = steps),
    priors = dunif(steps) / length(steps),
    posteriors = (likelihoods * priors) / marginal_likelihood
  )

# compute MAP estimate
MAP_estimate <- bayesDf %>%
  arrange(desc(posteriors)) %>%
  slice(1) %>%
  pull(steps)

ggplot(bayesDf, aes(steps, posteriors)) +
  geom_line() +
  geom_line(aes(steps, priors), color = 'black', linetype = 'dotted') +
  xlab('p(respond)') + ylab('posterior probability of the observed data') +
  annotate(
    "point",
    x = MAP_estimate,
    y = max(bayesDf$posteriors),
    shape = 9,
    size = 3
  )
```

21.03: Bayes Factors (Section 20.6.1)

Example showing how BFs and p-values relate
Learning Objectives
• Describe the concept of a contingency table for categorical data.
• Describe the concept of the chi-squared test for association and compute it for a given contingency table.
• Describe Simpson's paradox and why it is important for categorical data analysis.

So far we have discussed the general concept of statistical modeling and hypothesis testing, and applied them to some simple analyses. In this chapter we will focus on the modeling of categorical relationships, by which we mean relationships between variables that are measured qualitatively. These data are usually expressed in terms of counts; that is, for each value of the variable (or combination of values of multiple variables), how many observations take that value? For example, when we count how many people from each major are in our class, we are fitting a categorical model to the data.

22: Modeling Categorical Relationships

Let's say that I have purchased a bag of 100 candies, which are labeled as having 1/3 chocolates, 1/3 licorices, and 1/3 gumballs. When I count the candies in the bag, we get the following numbers: 30 chocolates, 33 licorices, and 37 gumballs. Because I like chocolate much more than licorice or gumballs, I feel slightly ripped off and I'd like to know if this was just a random accident. To answer that question, I need to know: what is the likelihood that the count would come out this way if the true probability of each candy type is the advertised proportion of 1/3 each?

22.03: Contingency Tables and the Two-way Test

Another way that we often use the chi-squared test is to ask whether two categorical variables are related to one another. As a more realistic example, let's take the question of whether a black driver is more likely to be searched when they are pulled over by a police officer, compared to a white driver. The Stanford Open Policing Project (https://openpolicing.stanford.edu/) has studied this, and provides data that we can use to analyze the question. We will use the data from the State of Connecticut since they are fairly small. These data were first cleaned up to remove all unnecessary data.

The standard way to represent data from a categorical analysis is through a contingency table, which presents the number or proportion of observations falling into each possible combination of values for each of the variables. The table below shows the contingency table for the police search data. It can also be useful to look at the contingency table using proportions rather than raw numbers, since they are easier to compare visually, so we include both absolute and relative numbers here.

Table 22.2: Contingency table for police search data
searched    Black    White     Black (relative)    White (relative)
FALSE       36244    239241    0.13                0.86
TRUE        1219     3108      0.00                0.01

The Pearson chi-squared test allows us to test whether observed frequencies are different from expected frequencies, so we need to determine what frequencies we would expect in each cell if searches and race were unrelated – which we can define as being independent. Remember from the chapter on probability that if X and Y are independent, then: $P(X \cap Y) = P(X) * P(Y)$ That is, the joint probability under the null hypothesis of independence is simply the product of the marginal probabilities of each individual variable. The marginal probabilities are simply the probabilities of each event occurring regardless of other events.
We can compute those marginal probabilities, and then multiply them together to get the expected proportions under independence.

                Black         White
Not searched    P(NS)*P(B)    P(NS)*P(W)    P(NS)
Searched        P(S)*P(B)     P(S)*P(W)     P(S)
                P(B)          P(W)

Table 22.3: Summary of the 2-way contingency table for police search data
searched    driver_race    n         expected    stdSqDiff
FALSE       Black          36244     36884       11.1
TRUE        Black          1219      579         706.3
FALSE       White          239241    238601      1.7
TRUE        White          3108      3748        109.2

We then compute the chi-squared statistic, which comes out to 828.3. To compute a p-value, we need to compare it to the null chi-squared distribution in order to determine how extreme our chi-squared value is compared to our expectation under the null hypothesis. The degrees of freedom for this distribution are $df = (nRows - 1) * (nColumns - 1)$ - thus, for a 2x2 table like the one here, $df = (2-1)*(2-1) = 1$. The intuition here is that computing the expected frequencies requires us to use three values: the total number of observations and the marginal probability for each of the two variables. Thus, once those values are computed, there is only one number that is free to vary, and thus there is one degree of freedom. Given this, we can compute the p-value for the chi-squared statistic, which is about as close to zero as one can get: $3.79 \times 10^{-182}$. This shows that the observed data would be highly unlikely if there was truly no relationship between race and police searches, and thus we should reject the null hypothesis of independence. We can also perform this test easily using the `chisq.test()` function in R:

##
##  Pearson's Chi-squared test
##
## data:  summaryDf2wayTable
## X-squared = 828, df = 1, p-value <2e-16

22.05: Odds Ratios

We can also represent the relative likelihood of different outcomes in the contingency table using the odds ratio that we introduced earlier, in order to better understand the size of the effect. First, we represent the odds of being searched for each race: $odds_{searched|black} = \frac{N_{searched\cap black}}{N_{not\ searched\cap black}} = \frac{1219}{36244} = 0.034$ $odds_{searched|white} = \frac{N_{searched\cap white}}{N_{not\ searched\cap white}} = \frac{3108}{239241} = 0.013$ $odds\ ratio = \frac{odds_{searched|black}}{odds_{searched|white}} = 2.59$ The odds ratio shows that the odds of being searched are 2.59 times higher for black versus white drivers, based on this dataset.

22.06: Bayes Factor

We discussed Bayes factors in the earlier chapter on Bayesian statistics – you may remember that the Bayes factor represents the ratio of the likelihood of the data under each of the two hypotheses: $K = \frac{P(data|H_A)}{P(data|H_0)} = \frac{P(H_A|data)*P(H_0)}{P(H_0|data)*P(H_A)}$ We can compute the Bayes factor for the police search data using the `contingencyTableBF()` function from the BayesFactor package:

## Bayes factor analysis
## --------------
## [1] Non-indep. (a=1) : 1.8e+142 ±0%
##
## Against denominator:
##   Null, independence, a = 1
## ---
## Bayes factor type: BFcontingencyTable, independent multinomial

This shows that the evidence in favor of a relationship between driver race and police searches in this dataset is exceedingly strong.

22.08: Beware of Simpson's Paradox

The contingency tables presented above represent summaries of large numbers of observations, but summaries can sometimes be misleading. Let's take an example from baseball.
The table below shows the batting data (hits/at bats and batting average) for Derek Jeter and David Justice over the years 1995-1997:

| Player        | 1995         | 1996          | 1997          | Combined       |
|---------------|--------------|---------------|---------------|----------------|
| Derek Jeter   | 12/48 .250   | 183/582 .314  | 190/654 .291  | 385/1284 .300  |
| David Justice | 104/411 .253 | 45/140 .321   | 163/495 .329  | 312/1046 .298  |

If you look closely, you will see that something odd is going on: In each individual year Justice had a higher batting average than Jeter, but when we combine the data across all three years, Jeter’s average is actually higher than Justice’s! This is an example of a phenomenon known as Simpson’s paradox, in which a pattern that is present in a combined dataset may not be present in any of the subsets of the data. This occurs when there is another variable that may be changing across the different subsets – in this case, the number of at-bats varies across years, with Justice batting many more times in 1995 (when batting averages were low). We refer to this as a lurking variable, and it’s always important to be attentive to such variables whenever one examines categorical data.
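A small R sketch (not part of the original text; it assumes the tidyverse is loaded, as in the book’s other examples) that reproduces this pattern from the table above:

``````
# hits and at-bats from the table above
batting <- tibble(
  player = rep(c("Jeter", "Justice"), each = 3),
  year   = rep(c(1995, 1996, 1997), 2),
  hits   = c(12, 183, 190, 104, 45, 163),
  atbats = c(48, 582, 654, 411, 140, 495)
)

# batting average within each year: Justice is higher in every year
batting %>%
  mutate(average = hits / atbats)

# batting average combined across years: Jeter is higher overall
batting %>%
  group_by(player) %>%
  summarize(average = sum(hits) / sum(atbats))
``````

The reversal happens because the combined average weights each year by its number of at-bats, and Jeter’s at-bats are concentrated in his better years.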
24.01: An Example- Hate Crimes and Income Inequality

Learning Objectives

• Describe the concept of the correlation coefficient and its interpretation
• Compute the correlation between two continuous variables
• Describe the potential causal influences that can give rise to a correlation.

Most people are familiar with the concept of correlation, and in this chapter we will provide a more formal understanding for this commonly used and misunderstood concept.

24: Modeling Continuous Relationships

In 2017, the web site Fivethirtyeight.com published a story titled Higher Rates Of Hate Crimes Are Tied To Income Inequality, which discussed the relationship between the prevalence of hate crimes and income inequality in the wake of the 2016 Presidential election. The story reported an analysis of hate crime data from the FBI and the Southern Poverty Law Center, on the basis of which they report: “we found that income inequality was the most significant determinant of population-adjusted hate crimes and hate incidents across the United States”. The data for this analysis are included in the `fivethirtyeight` R package, which makes it easy for us to access them. The analysis reported in the story focused on the relationship between income inequality (defined by a quantity called the Gini index — see the Appendix for more details) and the prevalence of hate crimes in each state.

24.02: Is Income Inequality Related to Hate Crimes?

The relationship between income inequality and rates of hate crimes is shown in Figure 24.1. Looking at the data, it seems that there may be a positive relationship between the two variables. How can we quantify that relationship?

24.03: Covariance and Correlation

One way to quantify the relationship between two variables is the covariance. Remember that variance for a single variable is computed as the average squared difference between each data point and the mean:

$s^2 = \frac{\sum_{i=1}^n (x_i - \bar{x})^2}{N - 1}$

This tells us how far each observation is from the mean, on average, in squared units. Covariance tells us whether there is a relation between the deviations of two different variables across observations. It is defined as:

$covariance = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{N - 1}$

This value will be far from zero when x and y are both highly deviant from the mean; if they are deviant in the same direction then the covariance is positive, whereas if they are deviant in opposite directions the covariance is negative. Let’s look at a toy example first. The data are shown in the table below, along with their individual deviations from the mean and their crossproducts.

Table 24.1: Data for toy example of covariance

| x  | y  | y_dev | x_dev | crossproduct |
|----|----|-------|-------|--------------|
| 3  | 5  | -3.6  | -4.6  | 16.56        |
| 5  | 4  | -4.6  | -2.6  | 11.96        |
| 8  | 7  | -1.6  | 0.4   | -0.64        |
| 10 | 10 | 1.4   | 2.4   | 3.36         |
| 12 | 17 | 8.4   | 4.4   | 36.96        |

The covariance is the sum of the crossproducts divided by $N - 1$ (paralleling the variance formula above), which in this case is 17.05.
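A minimal sketch (not in the original text; the `toy_df` name is hypothetical and the tidyverse is assumed to be loaded) verifying this computation for the data in Table 24.1:

``````
# the toy data from Table 24.1
toy_df <- tibble(x = c(3, 5, 8, 10, 12),
                 y = c(5, 4, 7, 10, 17))

toy_df <- toy_df %>%
  mutate(x_dev = x - mean(x),
         y_dev = y - mean(y),
         crossproduct = x_dev * y_dev)

# covariance: sum of crossproducts divided by N - 1
sum(toy_df$crossproduct) / (nrow(toy_df) - 1)   # 17.05

# the built-in function gives the same answer
cov(toy_df$x, toy_df$y)
``````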
We don’t usually use the covariance to describe relationships between variables, because it varies with the overall level of variance in the data. Instead, we would usually use the correlation coefficient (often referred to as Pearson’s correlation after the statistician Karl Pearson). The correlation is computed by scaling the covariance by the standard deviations of the two variables:

$r = \frac{covariance}{s_xs_y} = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{(N - 1)s_x s_y}$

In this case, the value is 0.89. We can also compute the correlation value easily using the cor() function in R, rather than computing it by hand. The correlation coefficient is useful because it varies between -1 and 1 regardless of the nature of the data – in fact, we already discussed the correlation coefficient earlier in the discussion of effect sizes. As we saw in the previous chapter on effect sizes, a correlation of 1 indicates a perfect linear relationship, a correlation of -1 indicates a perfect negative relationship, and a correlation of zero indicates no linear relationship.

24.3.1 Hypothesis testing for correlations

The correlation value of 0.42 seems to indicate a reasonably strong relationship between hate crimes and income inequality, but we can also imagine that this could occur by chance even if there is no relationship. We can test the null hypothesis that the correlation is zero, using a simple equation that lets us convert a correlation value into a t statistic:

$t_{r}=\frac{r \sqrt{N-2}}{\sqrt{1-r^{2}}}$

Under the null hypothesis $H_0: r=0$, this statistic is distributed as a t distribution with $N - 2$ degrees of freedom. We can compute this using the cor.test() function in R:

``````
## 
##  Pearson's product-moment correlation
## 
## data:  hateCrimes$avg_hatecrimes_per_100k_fbi and hateCrimes$gini_index
## t = 3, df = 48, p-value = 0.002
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.16 0.63
## sample estimates:
##  cor 
## 0.42
``````

This test shows that the likelihood of an r value this extreme or more is quite low, so we would reject the null hypothesis of $r=0$. Note that this test assumes that both variables are normally distributed. We could also test this by randomization, in which we repeatedly shuffle the values of one of the variables and compute the correlation, and then compare our observed correlation value to this null distribution to determine how likely our observed value would be under the null hypothesis. The results are shown in Figure 24.2. The p-value computed using randomization is reasonably similar to the answer given by the t-test. We could also use Bayesian inference to estimate the correlation; see the Appendix for more on this.

24.3.2 Robust correlations

You may have noticed something a bit odd in Figure 24.1 – one of the datapoints (the one for the District of Columbia) seemed to be quite separate from the others. We refer to this as an outlier, and the standard correlation coefficient is very sensitive to outliers. For example, in Figure 24.3 we can see how a single outlying data point can cause a very high positive correlation value, even when the actual relationship between the other data points is perfectly negative. One way to address outliers is to compute the correlation on the ranks of the data after ordering them, rather than on the data themselves; this is known as the Spearman correlation.
Whereas the Pearson correlation for the example in Figure 24.3 was 0.83, the Spearman correlation is -0.45, showing that the rank correlation reduces the effect of the outlier. We can compute the rank correlation on the hate crime data using the cor.test function:

``````
## 
##  Spearman's rank correlation rho
## 
## data:  hateCrimes$avg_hatecrimes_per_100k_fbi and hateCrimes$gini_index
## S = 20146, p-value = 0.8
## alternative hypothesis: true rho is not equal to 0
## sample estimates:
##   rho 
## 0.033
``````

Now we see that the correlation is no longer significant (and in fact is very near zero), suggesting that the claims of the FiveThirtyEight blog post may have been incorrect due to the effect of the outlier.
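The call that produces output like the one above is not shown in the text; a hedged sketch, assuming the hateCrimes data frame used for these analyses, is simply:

``````
# the method argument of cor.test() requests the Spearman (rank) correlation
cor.test(hateCrimes$avg_hatecrimes_per_100k_fbi,
         hateCrimes$gini_index,
         method = "spearman")
``````

Note that with tied values R may warn that it cannot compute an exact p-value and will fall back to an approximation.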
When we say that one thing causes another, what do we mean? There is a long history in philosophy of discussion about the meaning of causality, but in statistics one way that we commonly think of causation is in terms of experimental control. That is, if we think that factor X causes factor Y, then manipulating the value of X should also change the value of Y.

In medicine, there is a set of ideas known as Koch’s postulates which have historically been used to determine whether a particular organism causes a disease. The basic idea is that the organism should be present in people with the disease, and not present in those without it – thus, a treatment that eliminates the organism should also eliminate the disease. Further, infecting someone with the organism should cause them to contract the disease. An example of this was seen in the work of Dr. Barry Marshall, who had a hypothesis that stomach ulcers were caused by a bacterium (Helicobacter pylori). To demonstrate this, he infected himself with the bacterium, and soon thereafter developed severe inflammation in his stomach. He then treated himself with an antibiotic, and his stomach soon recovered. He later won the Nobel Prize in Medicine for this work.

Often we would like to test causal hypotheses but we can’t actually do an experiment, either because it’s impossible (“What is the relationship between human carbon emissions and the earth’s climate?”) or unethical (“What are the effects of severe abuse on child brain development?”). However, we can still collect data that might be relevant to those questions. For example, in the latter example, we can potentially collect data from children who have been abused as well as those who have not, and we can then ask whether their brain development differs.

Let’s say that we did such an analysis, and we found that abused children had poorer brain development than non-abused children. Would this demonstrate that abuse causes poorer brain development? No. Whenever we observe a statistical association between two variables, it is certainly possible that one of those two variables causes the other. However, it is also possible that both of the variables are being influenced by a third variable; in this example, it could be that child abuse is associated with family stress, which could also cause poorer brain development through less intellectual engagement, food stress, or many other possible avenues. The point is that a correlation between two variables generally tells us that something is probably causing something else, but it doesn’t tell us what is causing what.

24.4.1 Causal graphs

One useful way to describe causal relations between variables is through a causal graph, which shows variables as circles and causal relations between them as arrows. For example, Figure 24.4 shows the causal relationships between study time and two variables that we think should be affected by it: exam grades and exam finishing times. However, in reality the effects on finishing time and grades are not due directly to the amount of time spent studying, but rather to the amount of knowledge that the student gains by studying. We would usually say that knowledge is a latent variable – that is, we can’t measure it directly but we can see it reflected in variables that we can measure (like grades and finishing times). Figure 24.5 shows this. Here we would say that knowledge mediates the relationship between study time and grades/finishing times.
That means that if we were able to hold knowledge constant (for example, by administering a drug that causes immediate forgetting), then the amount of study time should no longer have an effect on grades and finishing times. Note that if we simply measured exam grades and finishing times we would generally see a negative relationship between them, because people who finish exams the fastest in general get the highest grades. However, if we were to interpret this correlation as a causal relation, this would tell us that in order to get better grades, we should actually finish the exam more quickly! This example shows how tricky the inference of causality from non-experimental data can be.

Within statistics and machine learning, there is a very active research community that is currently studying the question of when and how we can infer causal relationships from non-experimental data. However, these methods often require strong assumptions, and must generally be used with great caution.

24.05: Suggested Readings

• The Book of Why by Judea Pearl - an excellent introduction to the ideas behind causal inference.

24.06: Appendix

24.6.1 Quantifying inequality: The Gini index

Before we look at the analysis reported in the story, it’s first useful to understand how the Gini index is used to quantify inequality. The Gini index is usually defined in terms of a curve that describes the relation between income and the proportion of the population that has income at or less than that level, known as a Lorenz curve. However, another way to think of it is more intuitive: It is the relative mean absolute difference between incomes, divided by two (from https://en.Wikipedia.org/wiki/Gini_coefficient):

$G = \frac{\displaystyle{\sum_{i=1}^n \sum_{j=1}^n \left| x_i - x_j \right|}}{\displaystyle{2n\sum_{i=1}^n x_i}}$

Figure 24.6 shows the Lorenz curves for several different income distributions. The top left panel (A) shows an example with 10 people where everyone has exactly the same income. The lengths of the intervals between points are equal, indicating each person earns an identical share of the total income in the population. The top right panel (B) shows an example where income is normally distributed. The bottom left panel shows an example with high inequality; everyone has equal income ($40,000) except for one person, who has an income of $40,000,000. According to the US Census, the United States had a Gini index of 0.469 in 2010, falling roughly halfway between our normally distributed and maximally unequal examples.

24.6.2 Bayesian correlation analysis

We can also analyze the FiveThirtyEight data using Bayesian analysis, which has two advantages. First, it provides us with a posterior probability – in this case, the probability that the correlation value exceeds zero. Second, the Bayesian estimate combines the observed evidence with a prior, which has the effect of regularizing the correlation estimate, effectively pulling it towards zero. Here we can compute it using the jzs_cor function from the BayesMed package.
``````
## Compiling model graph
##    Resolving undeclared variables
##    Allocating nodes
## Graph information:
##    Observed stochastic nodes: 50
##    Unobserved stochastic nodes: 4
##    Total graph size: 230
## 
## Initializing model
## 
## $Correlation
## [1] 0.41
## 
## $BayesFactor
## [1] 11
## 
## $PosteriorProbability
## [1] 0.92
``````

Notice that the correlation estimated using the Bayesian method is slightly smaller than the one estimated using the standard correlation coefficient, which is due to the fact that the estimate is based on a combination of the evidence and the prior, which effectively shrinks the estimate toward zero. However, notice that the Bayesian analysis is not robust to the outlier, and it still says that there is fairly strong evidence that the correlation is greater than zero.

25.02: Hate Crime Example

Now we will look at the hate crime data from the `fivethirtyeight` package. First we need to prepare the data by getting rid of NA values and creating abbreviations for the states. To do the latter, we use the `state.abb` and `state.name` variables that come with R along with the `match()` function that will match the state names in the `hate_crimes` variable to those in the list.

``````
hateCrimes <- hate_crimes %>%
  mutate(state_abb = state.abb[match(state, state.name)]) %>%
  drop_na(avg_hatecrimes_per_100k_fbi, gini_index)

# manually fix the DC abbreviation
hateCrimes$state_abb[hateCrimes$state == "District of Columbia"] <- 'DC'
``````

``````
## 
##  Pearson's product-moment correlation
## 
## data:  hateCrimes$avg_hatecrimes_per_100k_fbi and hateCrimes$gini_index
## t = 3, df = 48, p-value = 0.001
## alternative hypothesis: true correlation is greater than 0
## 95 percent confidence interval:
##  0.21 1.00
## sample estimates:
##  cor 
## 0.42
``````

Remember that we can also compute the p-value using randomization. To do this, we shuffle the order of one of the variables, so that we break the link between the X and Y variables — effectively making the null hypothesis (that the correlation is less than or equal to zero) true. Here we will first create a function that takes in two variables, shuffles the order of one of them (without replacement) and then returns the correlation between that shuffled variable and the original copy of the second variable (a sketch of such a function appears at the end of this section). Now we take the distribution of observed correlations after shuffling and compare them to our observed correlation, in order to obtain the empirical probability of our observed data under the null hypothesis.

``mean(shuffleDist$cor > corr_results$estimate)``

``## [1] 0.0066``

This value is fairly close (though a bit larger) to the one obtained using `cor.test()`.

25.03: Robust Correlations (24.3.2)

In the previous chapter we also saw that the hate crime data contained one substantial outlier, which appeared to drive the significant correlation. To compute the Spearman correlation, we first need to convert the data into their ranks, which we can do using the `rank()` function:

``````
# rank() (rather than order()) returns the rank of each observation
hateCrimes <- hateCrimes %>%
  mutate(hatecrimes_rank = rank(avg_hatecrimes_per_100k_fbi),
         gini_rank = rank(gini_index))
``````

We can then compute the Spearman correlation by applying the Pearson correlation to the rank variables:

``````
cor(hateCrimes$hatecrimes_rank,
    hateCrimes$gini_rank)
``````

``## [1] 0.033``

We see that this is much smaller than the value obtained using the Pearson correlation on the original data, and matches the Spearman correlation reported in the previous chapter.
We can assess its statistical significance using randomization:

``## [1] 0.0014``

Here we see that the p-value is substantially larger and far from significance.
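The shuffling function referred to above is not shown in the text; here is a hedged sketch (the function name, object names, and number of shuffles are hypothetical) of how such a randomization could be implemented for either the Pearson or the rank-based correlation:

``````
# shuffle one variable to break its link with the other, then recompute
# the correlation; repeating this many times builds a null distribution
shuffleCorr <- function(x, y) {
  cor(sample(x), y)   # sample() without replacement shuffles x
}

shuffleDist <- tibble(
  cor = replicate(2500,
                  shuffleCorr(hateCrimes$avg_hatecrimes_per_100k_fbi,
                              hateCrimes$gini_index))
)

# empirical p-value: proportion of shuffled correlations at least as
# large as the observed one
mean(shuffleDist$cor > corr_results$estimate)
``````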
Learning Objectives

• Describe the concept of linear regression and apply it to a bivariate dataset
• Describe the concept of the general linear model and provide examples of its application
• Describe how cross-validation can allow us to estimate the predictive performance of a model on new data

Remember that early in the book we described the basic model of statistics:

$outcome = model + error$

where our general goal is to find the model that minimizes the error, subject to some other constraints (such as keeping the model relatively simple so that we can generalize beyond our specific dataset). In this chapter we will focus on a particular implementation of this approach, which is known as the general linear model (or GLM). You have already seen the general linear model in the earlier chapter on Fitting Models to Data, where we modeled height in the NHANES dataset as a function of age; here we will provide a more general introduction to the concept of the GLM and its many uses.

Before we discuss the general linear model, let’s first define two terms that will be important for our discussion:

• dependent variable: This is the outcome variable that our model aims to explain (usually referred to as Y)
• independent variable: This is a variable that we wish to use in order to explain the dependent variable (usually referred to as X).

There may be multiple independent variables, but for this course we will focus primarily on situations where there is only one dependent variable in our analysis. A general linear model is one in which the model for the dependent variable is composed of a linear combination of independent variables that are each multiplied by a weight (which is often referred to as the Greek letter beta - $\beta$), which determines the relative contribution of that independent variable to the model prediction.

As an example, let’s generate some simulated data for the relationship between study time and exam grades (see Figure 26.1). Given these data, we might want to engage in each of the three fundamental activities of statistics:

• Describe: How strong is the relationship between grade and study time?
• Decide: Is there a statistically significant relationship between grade and study time?
• Predict: Given a particular amount of study time, what grade do we expect?

In the last chapter we learned how to describe the relationship between two variables using the correlation coefficient, so we can use that to describe the relationship here, and to test whether the correlation is statistically significant using the cor.test() function in R:

``````
## 
##  Pearson's product-moment correlation
## 
## data:  df$grade and df$studyTime
## t = 2, df = 6, p-value = 0.05
## alternative hypothesis: true correlation is greater than 0
## 95 percent confidence interval:
##  0.014 1.000
## sample estimates:
##  cor 
## 0.63
``````

The correlation is quite high, but just barely reaches statistical significance because the sample size is so small.

26: The General Linear Model

We can also use the general linear model to describe the relation between two variables and to decide whether that relationship is statistically significant; in addition, the model allows us to predict the value of the dependent variable given some new value(s) of the independent variable(s). Most importantly, the general linear model will allow us to build models that incorporate multiple independent variables, whereas correlation can only tell us about the relationship between two individual variables.
The specific version of the GLM that we use for this is referred to as linear regression. The term regression was coined by Francis Galton, who had noted that when he compared parents and their children on some feature (such as height), the children of extreme parents (i.e. the very tall or very short parents) generally fell closer to the mean than their parents. This is an extremely important point that we return to below.

The simplest version of the linear regression model (with a single independent variable) can be expressed as follows:

$y = x * \beta_x + \beta_0 + \epsilon$

The $\beta_x$ value tells us how much we would expect y to change given a one-unit change in x. The intercept $\beta_0$ is an overall offset, which tells us what value we would expect y to have when $x=0$; you may remember from our early modeling discussion that this is important to model the overall magnitude of the data, even if $x$ never actually attains a value of zero. The error term $\epsilon$ refers to whatever is left over once the model has been fit. If we want to know how to predict y (which we call $\hat{y}$), then we can drop the error term:

$\hat{y} = x*\hat{\beta_x} + \hat{\beta_0}$

Figure 26.2 shows an example of this model applied to the study time example. We will not go into the details of how the best-fitting slope and intercept are actually estimated from the data; if you are interested, details are available in the Appendix.

26.1.1 Regression to the mean

The concept of regression to the mean was one of Galton’s essential contributions to science, and it remains a critical point to understand when we interpret the results of experimental data analyses. Let’s say that we want to study the effects of a reading intervention on the performance of poor readers. To test our hypothesis, we might go into a school and recruit those individuals in the bottom 25% of the distribution on some reading test, administer the intervention, and then examine their performance. Let’s say that the intervention actually has no effect, such that reading scores for each individual are simply independent samples from a normal distribution. We can simulate this:

Table 26.1: Reading scores for Test 1 (which is lower, because it was the basis for selecting the students) and Test 2 (which is higher because it was not related to Test 1).

|        | Score |
|--------|-------|
| Test 1 | 88    |
| Test 2 | 101   |

If we look at the difference between the mean test performance at the first and second test, it appears that the intervention has helped these students substantially, as their scores have gone up by more than ten points on the test! However, we know that in fact the students didn’t improve at all, since in both cases the scores were simply selected from a random normal distribution. What has happened is that some subjects scored badly on the first test simply due to random chance. If we select just those subjects on the basis of their first test scores, they are guaranteed to move back towards the mean of the entire group on the second test, even if there is no effect of training. This is the reason that we need an untreated control group in order to interpret any changes in reading over time; otherwise we are likely to be tricked by regression to the mean.

26.1.2 The relation between correlation and regression

There is a close relationship between correlation coefficients and regression coefficients.
Remember that Pearson’s correlation coefficient is computed as the ratio of the covariance and the product of the standard deviations of x and y:

$\hat{r} = \frac{covariance_{xy}}{s_x * s_y}$

whereas the regression beta is computed as:

$\hat{\beta} = \frac{covariance_{xy}}{s_x*s_x}$

Based on these two equations, we can derive the relationship between $\hat{r}$ and $\hat{\beta}$:

$covariance_{xy} = \hat{r} * s_x * s_y$

$\hat{\beta_x} = \frac{\hat{r} * s_x * s_y}{s_x * s_x} = \hat{r} * \frac{s_y}{s_x}$

That is, the regression slope is equal to the correlation value multiplied by the ratio of standard deviations of y and x. One thing this tells us is that when the standard deviations of x and y are the same (e.g. when the data have been converted to Z scores), then the correlation estimate is equal to the regression slope estimate.

26.1.3 Standard errors for regression models

If we want to make inferences about the regression parameter estimates, then we also need an estimate of their variability. To compute this, we first need to compute the residual variance or error variance for the model – that is, how much variability in the dependent variable is not explained by the model. We can compute the model residuals as follows:

$residual = y - \hat{y} = y - (x*\hat{\beta_x} + \hat{\beta_0})$

We then compute the sum of squared errors (SSE):

$SS_{error} = \sum_{i=1}^n{(y_i - \hat{y_i})^2} = \sum_{i=1}^n{residuals^2}$

and from this we compute the mean squared error:

$MS_{error} = \frac{SS_{error}}{df} = \frac{\sum_{i=1}^n{(y_i - \hat{y_i})^2} }{N - p}$

where the degrees of freedom ($df$) are determined by subtracting the number of estimated parameters (2 in this case: $\hat{\beta_x}$ and $\hat{\beta_0}$) from the number of observations ($N$). Once we have the mean squared error, we can compute the standard error for the model as:

$SE_{model} = \sqrt{MS_{error}}$

In order to get the standard error for a specific regression parameter estimate, $SE_{\hat{\beta}_x}$, we need to rescale the standard error of the model by the square root of the sum of squares of the X variable:

$SE_{\hat{\beta}_x} = \frac{SE_{model}}{\sqrt{\sum_{i=1}^n{(x_i - \bar{x})^2}}}$

26.1.4 Statistical tests for regression parameters

Once we have the parameter estimates and their standard errors, we can compute a t statistic to tell us the likelihood of the observed parameter estimates compared to some expected value under the null hypothesis. In this case we will test against the null hypothesis of no effect (i.e. $\beta=0$):

$t_{N - p} = \frac{\hat{\beta} - \beta_{expected}}{SE_{\hat{\beta}}} = \frac{\hat{\beta} - 0}{SE_{\hat{\beta}}} = \frac{\hat{\beta}}{SE_{\hat{\beta}}}$

In R, we don’t need to compute these by hand, as they are automatically returned to us by the lm() function:

``````
## 
## Call:
## lm(formula = grade ~ studyTime, data = df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -10.656  -2.719   0.125   4.703   7.469 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    76.16       5.16   14.76  6.1e-06 ***
## studyTime       4.31       2.14    2.01    0.091 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 6.4 on 6 degrees of freedom
## Multiple R-squared:  0.403, Adjusted R-squared:  0.304 
## F-statistic: 4.05 on 1 and 6 DF,  p-value: 0.0907
``````

In this case we see that the intercept is significantly different from zero (which is not very interesting) and that the effect of studyTime on grades is marginally significant (p = .09).
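The call that produced the output above is implied by its Call: line; a minimal sketch (not shown in the original text) is:

``````
# fit the simple regression of grade on study time and summarize it
lm_result <- lm(grade ~ studyTime, data = df)
summary(lm_result)
``````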
26.1.5 Quantifying goodness of fit of the model

Sometimes it’s useful to quantify how well the model fits the data overall, and one way to do this is to ask how much of the variability in the data is accounted for by the model. This is quantified using a value called $R^2$ (also known as the coefficient of determination). If there is only one x variable, then this is easy to compute by simply squaring the correlation coefficient:

$R^2 = r^2$

In the case of our study time example, $R^2 = 0.4$, which means that we have accounted for about 40% of the variance in grades. More generally we can think of $R^2$ as a measure of the fraction of variance in the data that is accounted for by the model, which can be computed by breaking the variance into multiple components:

$SS_{total} = SS_{model} + SS_{error}$

where $SS_{total}$ is the total sum of squared deviations of the data ($y$) from their mean, and $SS_{model}$ and $SS_{error}$ are computed as shown earlier in this chapter. Using this, we can then compute the coefficient of determination as:

$R^2 = \frac{SS_{model}}{SS_{total}} = 1 - \frac{SS_{error}}{SS_{total}}$

A small value of $R^2$ tells us that even if the model fit is statistically significant, it may only explain a small amount of information in the data.
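As a small illustration (not in the original text, and assuming the lm_result fit sketched above), this decomposition can be computed directly from the residuals:

``````
# sum of squared errors and total sum of squares for the grade data
ss_error <- sum(residuals(lm_result)^2)
ss_total <- sum((df$grade - mean(df$grade))^2)

# coefficient of determination
1 - ss_error / ss_total

# this should match the value reported by summary()
summary(lm_result)$r.squared
``````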
Often we would like to understand the effects of multiple variables on some particular outcome, and how they relate to one another. In the context of our study time example, let’s say that we discovered that some of the students had previously taken a course on the topic. If we plot their grades (see Figure 26.3), we can see that those who had a prior course perform much better than those who had not, given the same amount of study time. We would like to build a statistical model that takes this into account, which we can do by extending the model that we built above:

$\hat{y} = \hat{\beta_1}*studyTime + \hat{\beta_2}*priorClass + \hat{\beta_0}$

The results of fitting this model are shown below (see also Figure 26.3).

``````
## 
## Call:
## lm(formula = grade ~ studyTime + priorClass, data = df)
## 
## Residuals:
##       1       2       3       4       5       6       7       8 
##  3.5833  0.7500 -3.5833 -0.0833  0.7500 -6.4167  2.0833  2.9167 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    70.08       3.77   18.60  8.3e-06 ***
## studyTime       5.00       1.37    3.66    0.015 *  
## priorClass1     9.17       2.88    3.18    0.024 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 4 on 5 degrees of freedom
## Multiple R-squared:  0.803, Adjusted R-squared:  0.724 
## F-statistic: 10.2 on 2 and 5 DF,  p-value: 0.0173
``````

26.03: Interactions Between Variables

In the previous model, we assumed that the effect of study time on grade (i.e., the regression slope) was the same for both groups. However, in some cases we might imagine that the effect of one variable might differ depending on the value of another variable, which we refer to as an interaction between variables. Let’s use a new example that asks the question: What is the effect of caffeine on public speaking? First let’s generate some data and plot them. Looking at panel A of Figure 26.4, there doesn’t seem to be a relationship, and we can confirm that by performing linear regression on the data:

``````
## 
## Call:
## lm(formula = speaking ~ caffeine, data = df)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -33.10 -16.02   5.01  16.45  26.98 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   -7.413      9.165   -0.81     0.43
## caffeine       0.168      0.151    1.11     0.28
## 
## Residual standard error: 19 on 18 degrees of freedom
## Multiple R-squared:  0.0642, Adjusted R-squared:  0.0122 
## F-statistic: 1.23 on 1 and 18 DF,  p-value: 0.281
``````

But now let’s say that we find research suggesting that anxious and non-anxious people react differently to caffeine. First let’s plot the data separately for anxious and non-anxious people. As we see from panel B in Figure 26.4, it appears that the relationship between speaking and caffeine is different for the two groups, with caffeine improving performance for people without anxiety and degrading performance for those with anxiety. We’d like to create a statistical model that addresses this question. First let’s see what happens if we just include anxiety in the model.

``````
## 
## Call:
## lm(formula = speaking ~ caffeine + anxiety, data = df)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -32.97  -9.74   1.35  10.53  25.36 
## 
## Coefficients:
##                   Estimate Std. Error t value Pr(>|t|)
## (Intercept)        -12.581      9.197   -1.37     0.19
## caffeine             0.131      0.145    0.91     0.38
## anxietynotAnxious   14.233      8.232    1.73     0.10
## 
## Residual standard error: 18 on 17 degrees of freedom
## Multiple R-squared:  0.204, Adjusted R-squared:  0.11 
## F-statistic: 2.18 on 2 and 17 DF,  p-value: 0.144
``````

Here we see there are no significant effects of either caffeine or anxiety, which might seem a bit confusing.
The problem is that this model is trying to fit the same line relating speaking to caffeine for both groups. If we want to fit them using separate lines, we need to include an interaction in the model, which is equivalent to fitting different lines for each of the two groups; in R this is denoted by the `*` symbol.

``````
## 
## Call:
## lm(formula = speaking ~ caffeine + anxiety + caffeine * anxiety, 
##     data = df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -11.385  -7.103  -0.444   6.171  13.458 
## 
## Coefficients:
##                            Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                 17.4308     5.4301    3.21  0.00546 ** 
## caffeine                    -0.4742     0.0966   -4.91  0.00016 ***
## anxietynotAnxious          -43.4487     7.7914   -5.58  4.2e-05 ***
## caffeine:anxietynotAnxious   1.0839     0.1293    8.38  3.0e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.1 on 16 degrees of freedom
## Multiple R-squared:  0.852, Adjusted R-squared:  0.825 
## F-statistic: 30.8 on 3 and 16 DF,  p-value: 7.01e-07
``````

From these results we see that there are significant effects of both caffeine and anxiety (which we call main effects) and an interaction between caffeine and anxiety. Panel C in Figure 26.4 shows the separate regression lines for each group.

Sometimes we want to compare the relative fit of two different models, in order to determine which is a better model; we refer to this as model comparison. For the models above, we can compare the goodness of fit of the model with and without the interaction, using the `anova()` command in R:

``````
## Analysis of Variance Table
## 
## Model 1: speaking ~ caffeine + anxiety
## Model 2: speaking ~ caffeine + anxiety + caffeine * anxiety
##   Res.Df  RSS Df Sum of Sq    F Pr(>F)    
## 1     17 5639                             
## 2     16 1046  1      4593 70.3  3e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
``````

This tells us that there is good evidence to prefer the model with the interaction over the one without an interaction. Model comparison is relatively simple in this case because the two models are nested – one of the models is a simplified version of the other model. Model comparison with non-nested models can get much more complicated.
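The model-fitting calls themselves are not shown above; a hedged sketch of what they would look like (matching the Call: lines in the output) is:

``````
# additive model: same slope for both groups
lm_additive <- lm(speaking ~ caffeine + anxiety, data = df)

# interaction model: separate slopes for anxious and non-anxious people
lm_interaction <- lm(speaking ~ caffeine + anxiety + caffeine * anxiety,
                     data = df)

# compare the two nested models
anova(lm_additive, lm_interaction)
``````

Note that in R’s formula syntax `caffeine * anxiety` by itself already expands to the two main effects plus their interaction, so the longer formula above is simply a more explicit way of writing the same model.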
It is important to note that despite the fact that it is called the general linear model, we can actually use the same machinery to model effects that don’t follow a straight line (such as curves). The “linear” in the general linear model doesn’t refer to the shape of the response, but instead refers to the fact that the model is linear in its parameters — that is, the predictors in the model only get multiplied by the parameters, rather than entering into a nonlinear relationship such as being raised to a power of the parameter. It’s also common to analyze data where the outcomes are binary rather than continuous, as we saw in the chapter on categorical outcomes. There are ways to adapt the general linear model (known as generalized linear models) that allow this kind of analysis. We will explore both of these points in more detail in the following chapter.

26.05: Criticizing Our Model and Checking Assumptions

The saying “garbage in, garbage out” is as true of statistics as anywhere else. In the case of statistical models, we have to make sure that our model is properly specified and that our data are appropriate for the model. When we say that the model is “properly specified”, we mean that we have included the appropriate set of independent variables in the model. We have already seen examples of misspecified models, in Figure 8.3. Remember that we saw several cases where the model failed to properly account for the data, such as failing to include an intercept. When building a model, we need to ensure that it includes all of the appropriate variables.

We also need to worry about whether our model satisfies the assumptions of our statistical methods. One of the most important assumptions that we make when using the general linear model is that the residuals (that is, the difference between the model’s predictions and the actual data) are normally distributed. This can fail for many reasons, either because the model was not properly specified or because the data that we are modeling are inappropriate. We can use something called a Q-Q (quantile-quantile) plot to see whether our residuals are normally distributed. You have already encountered quantiles — they are the values that cut off a particular proportion of a cumulative distribution. The Q-Q plot presents the quantiles of two distributions against one another; in this case, we will present the quantiles of the actual data against the quantiles of a normal distribution. Figure 26.5 shows examples of two such Q-Q plots. The left panel shows a Q-Q plot for data from a normal distribution, while the right panel shows a Q-Q plot from non-normal data. The data points in the right panel diverge substantially from the line, reflecting the fact that they are not normally distributed.

``````
qq_df <- tibble(norm = rnorm(100),
                unif = runif(100))

p1 <- ggplot(qq_df, aes(sample = norm)) +
  geom_qq() +
  geom_qq_line() +
  ggtitle('Normal data')

p2 <- ggplot(qq_df, aes(sample = unif)) +
  geom_qq() +
  geom_qq_line() +
  ggtitle('Non-normal data')

# plot_grid() comes from the cowplot package
plot_grid(p1, p2)
``````

Model diagnostics will be explored in more detail in the following chapter.

26.06: What Does “Predict” Really Mean?

When we talk about “prediction” in daily life, we are generally referring to the ability to estimate the value of some variable in advance of seeing the data.
However, the term is often used in the context of linear regression to refer to the fitting of a model to the data; the estimated values ($\hat{y}$) are sometimes referred to as “predictions” and the independent variables as “predictors”, even though the model has only been fit to the data at hand. In reality, the fit of a model to the dataset used to estimate its parameters will nearly always be better than its fit to a new dataset (Copas 1983).

As an example, let’s take a sample of 48 children from NHANES and fit a regression model for weight that includes several regressors (age, height, hours spent watching TV and using the computer, and household income) along with their interactions.

Table 26.2: Root mean squared error for model applied to original data and new data, and after shuffling the order of the y variable (in essence making the null hypothesis true)

| Data type     | RMSE (original data) | RMSE (new data) |
|---------------|----------------------|-----------------|
| True data     | 3.0                  | 21              |
| Shuffled data | 7.6                  | 59              |

Here we see that whereas the model fit on the original data showed a very good fit (only off by a few pounds per individual), the same model does a much worse job of predicting the weight values for new children sampled from the same population (off by more than 20 pounds per individual). This happens because the model that we specified is quite complex, since it includes not just each of the individual variables, but also all possible combinations of them (i.e. their interactions), resulting in a model with 32 parameters. Since this is almost as many coefficients as there are data points (i.e., the weights of 48 children), the model overfits the data, just like the complex polynomial curve in our initial example of overfitting in Section 8.4.

Another way to see the effects of overfitting is to look at what happens if we randomly shuffle the values of the weight variable (shown in the second row of the table). Randomly shuffling the values should make it impossible to predict weight from the other variables, because they should have no systematic relationship. This shows us that even when there is no true relationship to be modeled (because shuffling should have obliterated the relationship), the complex model still shows a very low error in its predictions, because it fits the noise in the specific dataset. However, when that model is applied to a new dataset, we see that the error is much larger, as it should be.

26.6.1 Cross-validation

One method that has been developed to help address the problem of overfitting is known as cross-validation. This technique is commonly used within the field of machine learning, which is focused on building models that will generalize well to new data, even when we don’t have a new dataset to test the model. The idea behind cross-validation is that we fit our model repeatedly, each time leaving out a subset of the data, and then test the ability of the model to predict the values in each held-out subset.

Let’s see how that would work for our weight prediction example. In this case we will perform 12-fold cross-validation, which means that we will break the data into 12 subsets, and then fit the model 12 times, in each case leaving out one of the subsets and then testing the model’s ability to accurately predict the value of the dependent variable for those held-out data points. The `caret` package in R provides us with the ability to easily run cross-validation across our dataset (a sketch of such a call appears at the end of this section). Using this package we can run cross-validation on 100 samples from the NHANES dataset, and compute the RMSE for cross-validation, along with the RMSE for the original data and a new dataset, as we computed above.

Table 26.3: Root mean squared error from cross-validation and new data, showing that cross-validation provides a reasonable estimate of the model’s performance on new data.
|                  | Root mean squared error |
|------------------|-------------------------|
| Original data    | 3                       |
| New data         | 24                      |
| Cross-validation | 146                     |

Here we see that cross-validation gives us an estimate of predictive accuracy that is much closer to what we see with a completely new dataset than it is to the inflated accuracy that we see with the original dataset – in fact, it’s even slightly more pessimistic than the average for a new dataset, probably because only part of the data are being used to train each of the models.

Note that using cross-validation properly is tricky, and it is recommended that you consult with an expert before using it in practice. However, this section has hopefully shown you three things:

• “Prediction” doesn’t always mean what you think it means
• Complex models can overfit data very badly, such that one can see seemingly good prediction even when there is no true signal to predict
• You should view claims about prediction accuracy very skeptically unless they have been done using the appropriate methods.
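For concreteness, here is a hedged sketch of the kind of caret call described above; the data frame name and the NHANES variable names used in the formula are assumptions rather than the book’s actual code:

``````
library(caret)

# 12-fold cross-validation
cv_control <- trainControl(method = "cv", number = 12)

# fit the weight model (all predictors and their interactions) with CV;
# nhanes_sample and the variable names are hypothetical placeholders
cv_model <- train(
  Weight ~ Age * Height * TVHrsDay * CompHrsDay * HHIncomeMid,
  data = nhanes_sample,
  method = "lm",
  trControl = cv_control
)

# cross-validated root mean squared error
cv_model$results$RMSE
``````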
26.8.1 Estimating linear regression parameters

We generally estimate the parameters of a linear model from data using linear algebra, which is the form of algebra that is applied to vectors and matrices. If you aren’t familiar with linear algebra, don’t worry – you won’t actually need to use it here, as R will do all the work for us. However, a brief excursion in linear algebra can provide some insight into how the model parameters are estimated in practice.

First, let’s introduce the idea of vectors and matrices; you’ve already encountered them in the context of R, but we will review them here. A matrix is a set of numbers that are arranged in a square or rectangle, such that there are one or more dimensions across which the matrix varies. It is customary to place different observation units (such as people) in the rows, and different variables in the columns. Let’s take our study time data from above. We could arrange these numbers in a matrix, which would have eight rows (one for each student) and two columns (one for study time, and one for grade). If you are thinking “that sounds like a data frame in R” you are exactly right! In fact, a data frame is a specialized version of a matrix, and we can convert a data frame to a matrix using the as.matrix() function.

``````
# simulate the study time data (as earlier in the chapter); the betas vector
# holds the true regression weights used for the simulation (not shown here)
df <- tibble(
  studyTime = c(2, 3, 5, 6, 6, 8, 10, 12) / 3,
  priorClass = c(0, 1, 1, 0, 1, 0, 1, 0)
) %>%
  mutate(
    grade = studyTime * betas[1] + priorClass * betas[2] +
      round(rnorm(8, mean = 70, sd = 5))
  )

df_matrix <- df %>%
  dplyr::select(studyTime, grade) %>%
  as.matrix()
``````

We can write the general linear model in linear algebra as follows:

$Y = X*\beta + E$

This looks very much like the earlier equation that we used, except that the letters are all capitalized, which is meant to express the fact that they are vectors. We know that the grade data go into the $Y$ matrix, but what goes into the $X$ matrix? Remember from our initial discussion of linear regression that we need to include a constant in addition to our independent variable of interest, so the $X$ matrix (known as the design matrix) needs two columns: one containing the study time values, and one containing the value 1 for every individual (see Figure 26.7).

The rules of matrix multiplication tell us that the dimensions of the matrices have to match with one another; in this case, the design matrix has dimensions of 8 (rows) X 2 (columns) and the Y variable has dimensions of 8 X 1. Therefore, the $\beta$ matrix needs to have dimensions 2 X 1, since an 8 X 2 matrix multiplied by a 2 X 1 matrix results in an 8 X 1 matrix (as the matching middle dimensions drop out). The interpretation of the two values in the $\beta$ matrix is that they are the values to be multiplied by study time and 1 respectively to obtain the estimated grade for each individual. We can also view the linear model as a set of individual equations for each individual:

$\hat{y}_1 = studyTime_1*\beta_1 + 1*\beta_2$

$\hat{y}_2 = studyTime_2*\beta_1 + 1*\beta_2$

$\dots$

$\hat{y}_8 = studyTime_8*\beta_1 + 1*\beta_2$

Remember that our goal is to determine the best fitting values of $\beta$ given the known values of $X$ and $Y$. A naive way to do this would be to solve for $\beta$ using simple algebra – here we drop the error term $E$ because it’s out of our control:

$\hat{\beta} = \frac{Y}{X}$

The challenge here is that $X$ and $\beta$ are now matrices, not single numbers – but the rules of linear algebra tell us how to divide by a matrix, which is the same as multiplying by the inverse of the matrix (referred to as $X^{-1}$).
We can do this in R:

``````
# compute beta estimates using linear algebra

# create Y variable 8 x 1 matrix
Y <- as.matrix(df$grade)

# create X variable 8 x 2 matrix
X <- matrix(0, nrow = 8, ncol = 2)

# assign studyTime values to first column in X matrix
X[, 1] <- as.matrix(df$studyTime)

# assign constant of 1 to second column in X matrix
X[, 2] <- 1

# compute the (pseudo)inverse of X using ginv() from the MASS package;
# %*% is the R matrix multiplication operator
beta_hat <- ginv(X) %*% Y  # multiply the pseudoinverse of X by Y
print(beta_hat)
``````

``````
##      [,1]
## [1,]  4.3
## [2,] 76.0
``````

Anyone who is interested in serious use of statistical methods is highly encouraged to invest some time in learning linear algebra, as it provides the basis for nearly all of the tools that are used in standard statistics.
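As an aside (not in the original text), the same estimates can be obtained from the normal equations, $\hat{\beta} = (X^TX)^{-1}X^TY$, using base R:

``````
# least-squares solution via the normal equations
beta_hat_ne <- solve(t(X) %*% X) %*% t(X) %*% Y
print(beta_hat_ne)

# and, of course, lm() gives the same slope and intercept
lm(grade ~ studyTime, data = df)
``````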
To perform linear regression in R, we use the `lm()` function. Let’s generate some data and use this function to compute the linear regression solution.

``````
npoints <- 100
intercept = 10

# slope of X/Y relationship
slope = 0.5

# this lets us control the strength of the relationship
# by varying the amount of noise added to the y variable
noise_sd = 0.6

regression_data <- tibble(x = rnorm(npoints)) %>%
  mutate(y = x*slope + rnorm(npoints)*noise_sd + intercept)

ggplot(regression_data, aes(x, y)) +
  geom_point()
``````

We can then apply `lm()` to these data:

``````
lm_result <- lm(y ~ x, data = regression_data)
summary(lm_result)
``````

``````
## 
## Call:
## lm(formula = y ~ x, data = regression_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.5563 -0.3042 -0.0059  0.3804  1.2522 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   9.9761     0.0580  172.12  < 2e-16 ***
## x             0.3725     0.0586    6.35  6.6e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.58 on 98 degrees of freedom
## Multiple R-squared:  0.292, Adjusted R-squared:  0.284 
## F-statistic: 40.4 on 1 and 98 DF,  p-value: 6.65e-09
``````

We should see three things in the `lm()` results:

• The estimate of the Intercept in the model should be very close to the intercept that we specified
• The estimate for the x parameter should be very close to the slope that we specified
• The residual standard error should be roughly similar to the noise standard deviation that we specified

27.02: Model Criticism and Diagnostics (Section 26.5)

Once we have fitted the model, we want to look at some diagnostics to determine whether the model is actually fitting properly. We can do this using the `autoplot()` function from the `ggfortify` package.

``autoplot(lm_result, which=1:2)``

The left panel in this plot shows the relationship between the predicted (or “fitted”) values and the residuals. We would like to make sure that there is no clear relationship between these two (as we will see below). The right panel shows a Q-Q plot, which helps us assess whether the residuals from the model are normally distributed. In this case, they look reasonably normal, as the points don’t differ too much from the unit line.

27.03: Examples of Problematic Model Fit

Let’s say that there was another variable at play in this dataset, which we were not aware of. This variable causes some of the cases to have much larger values than others, in a way that is unrelated to the X variable. We play a trick here using the `seq()` function to create a sequence from zero to one, and then threshold those values at 0.5 (in order to obtain half of the values as zero and the other half as one) and then multiply by the desired effect size:

``````
effsize = 2
regression_data <- regression_data %>%
  mutate(y2 = y + effsize*(seq(1/npoints, 1, 1/npoints) > 0.5))

lm_result2 <- lm(y2 ~ x, data = regression_data)
summary(lm_result2)
``````

``````
## 
## Call:
## lm(formula = y2 ~ x, data = regression_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.3324 -0.9689 -0.0939  1.0421  2.2591 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   10.978      0.117   93.65   <2e-16 ***
## x              0.270      0.119    2.27    0.025 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.2 on 98 degrees of freedom
## Multiple R-squared:  0.0501, Adjusted R-squared:  0.0404 
## F-statistic: 5.17 on 1 and 98 DF,  p-value: 0.0252
``````

One thing you should notice is that the model now fits overall much worse; the R-squared is about half of what it was in the previous model, which reflects the fact that more variability was added to the data, but it wasn’t accounted for in the model. Let’s see if our diagnostic reports can give us any insight:

``autoplot(lm_result2, which=1:2)``

The residual versus fitted graph doesn’t give us much insight, but we see from the Q-Q plot that the residuals are diverging quite a bit from the unit line.

Let’s look at another potential problem, in which the y variable is nonlinearly related to the X variable. We can create these data by squaring the X variable when we generate the Y variable:

``````
effsize = 2
regression_data <- regression_data %>%
  mutate(y3 = (x**2)*slope + rnorm(npoints)*noise_sd + intercept)

lm_result3 <- lm(y3 ~ x, data = regression_data)
summary(lm_result3)
``````

``````
## 
## Call:
## lm(formula = y3 ~ x, data = regression_data)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -1.610 -0.568 -0.065  0.359  3.266 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  10.5547     0.0844  125.07   <2e-16 ***
## x            -0.0419     0.0854   -0.49     0.62    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.84 on 98 degrees of freedom
## Multiple R-squared:  0.00245, Adjusted R-squared:  -0.00773 
## F-statistic: 0.241 on 1 and 98 DF,  p-value: 0.625
``````

Now we see that there is no significant linear relationship between X and Y. But if we look at the residuals the problem with the model becomes clear:

``autoplot(lm_result3, which=1:2)``

In this case we can see the clearly nonlinear relationship between the predicted and residual values, as well as the clear lack of normality in the residuals. As we noted in the previous chapter, the “linear” in the general linear model doesn’t refer to the shape of the response, but instead refers to the fact that the model is linear in its parameters — that is, the predictors in the model only get multiplied by the parameters (e.g., rather than being raised to a power of the parameter). Here is how we would build a model that could account for the nonlinear relationship:

``````
# create x^2 variable
regression_data <- regression_data %>%
  mutate(x_squared = x**2)

lm_result4 <- lm(y3 ~ x + x_squared, data = regression_data)
summary(lm_result4)
``````

``````
## 
## Call:
## lm(formula = y3 ~ x + x_squared, data = regression_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.4101 -0.3791 -0.0048  0.3908  1.4437 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  10.1087     0.0739   136.8   <2e-16 ***
## x            -0.0118     0.0600    -0.2     0.84    
## x_squared     0.4557     0.0451    10.1   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.59 on 97 degrees of freedom
## Multiple R-squared:  0.514, Adjusted R-squared:  0.504 
## F-statistic: 51.2 on 2 and 97 DF,  p-value: 6.54e-16
``````

Now we see that the effect of the squared term is significant, and if we look at the residual plot we should see that things look much better:

``autoplot(lm_result4, which=1:2)``

Not perfect, but much better than before!
Let’s say that we have a blood test (which is often referred to as a biomarker) and we want to know whether it predicts who is going to have a heart attack within the next year. We will generate a synthetic dataset for a population that is at very high risk for a heart attack in the next year.

``````
# sample size
npatients = 1000

# probability of heart attack
p_heartattack = 0.5

# true relation to biomarker
true_effect <- 0.6

# assume biomarker is normally distributed
disease_df <- tibble(biomarker = rnorm(npatients))

# generate another variable that reflects risk for
# heart attack, which is related to the biomarker
disease_df <- disease_df %>%
  mutate(risk = biomarker*true_effect + rnorm(npatients))

# create another variable that shows who has a
# heart attack, based on the risk variable
disease_df <- disease_df %>%
  mutate(
    heartattack = risk > quantile(disease_df$risk, 1 - p_heartattack))

glimpse(disease_df)
``````

``````
## Observations: 1,000
## Variables: 3
## $ biomarker   <dbl> 1.15, 0.68, 1.21, -0.72, -1.00, -0.12…
## $ risk        <dbl> 1.054, -0.529, 0.675, -0.474, -1.398,…
## $ heartattack <lgl> TRUE, FALSE, TRUE, FALSE, FALSE, TRUE…
``````

Now we would like to build a model that allows us to predict who will have a heart attack from these data. However, you may have noticed that the heartattack variable is a binary variable; because linear regression assumes that the residuals from the model will be normally distributed, and the binary nature of the data violates this assumption, we instead need to use a different kind of model, known as a logistic regression model, which is built to deal with binary outcomes. We can fit this model using the `glm()` function:

``````
glm_result <- glm(heartattack ~ biomarker, data = disease_df,
                  family = binomial())
summary(glm_result)
``````

``````
## 
## Call:
## glm(formula = heartattack ~ biomarker, family = binomial(), data = disease_df)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.1301  -1.0150   0.0305   1.0049   2.1319  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -0.00412    0.06948   -0.06     0.95    
## biomarker    0.99637    0.08342   11.94   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 1386.3  on 999  degrees of freedom
## Residual deviance: 1201.4  on 998  degrees of freedom
## AIC: 1205
## 
## Number of Fisher Scoring iterations: 3
``````

This looks very similar to the output from the `lm()` function, and it shows us that there is a significant relationship between the biomarker and heart attacks. The model provides us with a predicted probability that each individual will have a heart attack; if this is greater than 0.5, then that means that the model predicts that the individual is more likely than not to have a heart attack. We can start by simply comparing those predictions to the actual outcomes.
```
# add predictions to data frame
disease_df <- disease_df %>%
  mutate(prediction = glm_result$fitted.values>0.5,
         heartattack = heartattack)

# create table comparing predicted to actual outcomes
CrossTable(disease_df$prediction,
           disease_df$heartattack,
           prop.t=FALSE,
           prop.r=FALSE,
           prop.chisq=FALSE)
```

```
## 
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Col Total |
## |-------------------------|
## 
## Total Observations in Table:  1000 
## 
##                       | disease_df$heartattack 
## disease_df$prediction |     FALSE |      TRUE | Row Total | 
## ----------------------|-----------|-----------|-----------|
##                 FALSE |       332 |       157 |       489 | 
##                       |     0.664 |     0.314 |           | 
## ----------------------|-----------|-----------|-----------|
##                  TRUE |       168 |       343 |       511 | 
##                       |     0.336 |     0.686 |           | 
## ----------------------|-----------|-----------|-----------|
##          Column Total |       500 |       500 |      1000 | 
##                       |     0.500 |     0.500 |           | 
## ----------------------|-----------|-----------|-----------|
```

This shows us that of the 500 people who had heart attacks, the model correctly predicted a heart attack for 343 of them. It also predicted heart attacks for 168 people who didn't have them, and it failed to predict a heart attack for 157 people who did. This highlights the distinction that we mentioned before between statistical and practical significance: even though the biomarker shows a highly significant relationship to heart attacks, its ability to predict them is still relatively poor. As we will see below, it gets even worse when we try to generalize this to a new group of people.

27.05: Cross-validation (Section 26.6.1)

Cross-validation is a powerful technique that allows us to estimate how well our results will generalize to a new dataset. Here we will build our own cross-validation code to see how it works, continuing the logistic regression example from the previous section.

In cross-validation, we split the data into several subsets (which we usually call folds), iteratively train the model while leaving out one fold, and then test the model on that held-out fold. Let's write our own code to do this splitting; one relatively easy way to do this is to create a vector that contains the fold numbers and then randomly shuffle it to create the fold assignments for each data point.
```
nfolds <- 4   # number of folds

# we use the kronecker() function to repeat the folds
fold <- kronecker(seq(nfolds), rep(1, npatients/nfolds))
# randomly shuffle using the sample() function
fold <- sample(fold)

# add variable to store CV predictions
disease_df <- disease_df %>%
  mutate(CVpred=NA)

# now loop through folds and separate training and test data
for (f in seq(nfolds)){
  # get training and test data
  train_df <- disease_df[fold!=f,]
  test_df <- disease_df[fold==f,]
  # fit model to training data
  glm_result_cv <- glm(heartattack ~ biomarker,
                       data=train_df,
                       family=binomial())
  # get predicted values for the test data; note that predict()
  # returns log-odds by default (use type='response' for probabilities)
  pred <- predict(glm_result_cv, newdata = test_df)
  # convert to prediction and put into data frame
  disease_df$CVpred[fold==f] = (pred>0.5)
}
```

Now let's look at the performance of the model:

```
# create table comparing predicted to actual outcomes
CrossTable(disease_df$CVpred,
           disease_df$heartattack,
           prop.t=FALSE,
           prop.r=FALSE,
           prop.chisq=FALSE)
```

```
## 
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Col Total |
## |-------------------------|
## 
## Total Observations in Table:  1000 
## 
##                   | disease_df$heartattack 
## disease_df$CVpred |     FALSE |      TRUE | Row Total | 
## ------------------|-----------|-----------|-----------|
##             FALSE |       416 |       269 |       685 | 
##                   |     0.832 |     0.538 |           | 
## ------------------|-----------|-----------|-----------|
##              TRUE |        84 |       231 |       315 | 
##                   |     0.168 |     0.462 |           | 
## ------------------|-----------|-----------|-----------|
##      Column Total |       500 |       500 |      1000 | 
##                   |     0.500 |     0.500 |           | 
## ------------------|-----------|-----------|-----------|
```

Now we see that the model accurately predicts fewer than half of the heart attacks that occurred when it is predicting for a new sample. This tells us that this is the level of prediction that we could expect if we were to apply the model to a new sample of patients from the same population.
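If we want to summarize the cross-validated performance with single numbers, we can compute the proportion of correct predictions directly from the data frame. A minimal sketch, assuming the `disease_df` data frame created above with its `CVpred` and `heartattack` columns:

```
# overall cross-validated accuracy: proportion of patients for whom
# the predicted and actual outcomes agree
cv_accuracy <- mean(disease_df$CVpred == disease_df$heartattack)

# sensitivity: proportion of actual heart attacks that were predicted
# (231/500 = 0.462 in the table above)
cv_sensitivity <- mean(disease_df$CVpred[disease_df$heartattack])

cv_accuracy
cv_sensitivity
```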
Learning Objectives • Describe the rationale behind the sign test • Describe how the t-test can be used to compare a single mean to a hypothesized value • Compare the means for two paired or unpaired groups using a two-sample t-test We have already encountered a number of cases where we wanted to ask questions about the mean of a sample. In this chapter, we will delve deeper into the various ways that we can compare means. 28: Comparing Means The simplest question we might want to ask of a mean is whether it has a specific value. Let’s say that we want to test whether the mean BMI value in adults from the NHANES dataset is above 25, which is the lower cutoff for being overweight according to the US Centers for Disease Control. We take a sample of 200 adults in order to ask this question. One simple way to test for this difference is using a test called the sign test, which asks whether the proportion of positive differences between the actual value and the hypothesized value is different than what we would expect by chance. To do this, we take the differences between each data point and the hypothesized mean value and compute their sign. In our sample, we see that 66.0 percent of individuals have a BMI greater than 25. We can then use a binomial test to ask whether this proportion of positive differences is greater than 0.5, using the binom.test() function in R: ## ## Exact binomial test ## ## data: npos and nrow(NHANES_sample) ## number of successes = 132, number of trials = 200, p-value = 4e-06 ## alternative hypothesis: true probability of success is greater than 0.5 ## 95 percent confidence interval: ## 0.6 1.0 ## sample estimates: ## probability of success ## 0.66 Here we see that the proportion of individuals with positive signs would be very surprising under the null hypothesis of $p=0.5$. We can also ask this question using Student’s t-test, which you have already encountered earlier in the book. We will refer to the mean as $\bar{X}$ and the hypothesized population mean as $\mu$. Then, the t test for a single mean is: $t = \frac{\bar{X} - \mu}{SEM}$ where SEM (as you may remember from the chapter on sampling) is defined as: $SEM = \frac{\hat{\sigma}}{\sqrt{n}}$ In essence, the t statistic asks how large the deviation of the sample mean from the hypothesized quantity is with respect to the sampling variability of the mean. We can compute this for the NHANES dataset using the t.test() function in R: ## ## One Sample t-test ## ## data: NHANES_adult\$BMI ## t = 38, df = 4785, p-value <2e-16 ## alternative hypothesis: true mean is not equal to 25 ## 95 percent confidence interval: ## 29 29 ## sample estimates: ## mean of x ## 29 This shows us that the mean BMI in the dataset (28.79) is significantly larger than the cutoff for overweight. 28.02: Comparing Two Means A more common question that often arises in statistics is whether there is a difference between the means of two different groups. Let’s say that we would like to know whether regular marijuana smokers watch more television. We can ask this question using the NHANES dataset; let’s take a sample of 200 individuals from the dataset and test whether the number of hours of television watching per day is related to regular marijuana use. The left panel of Figure 28.1 shows these data using a violin plot. We can also use Student’s t test to test for differences between two groups of independent observations (as we saw in an earlier chapter); we will turn later in the chapter to cases where the observations are not independent. 
As a reminder, the t-statistic for comparison of two independent groups is computed as:

$t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}}$

where $\bar{X}_1$ and $\bar{X}_2$ are the means of the two groups, $S_1^2$ and $S_2^2$ are the variances for each of the groups, and $n_1$ and $n_2$ are the sizes of the two groups. Under the null hypothesis of no difference between means, this statistic is distributed according to a t distribution with $n-2$ degrees of freedom (since we have computed two parameter estimates, namely the means of the two groups).

We can compute the t-test in R using the t.test() function. In this case, we started with the specific hypothesis that smoking marijuana is associated with greater TV watching, so we will use a one-tailed test. Since the t.test function orders the conditions alphabetically, the "No" group comes first, and thus we need to test the alternative hypothesis that the first ("No") group's mean is less than the second ("Yes") group's mean; for this reason, we specify 'less' as our alternative.

```
## 
##  Two Sample t-test
## 
## data:  TVHrsNum by RegularMarij
## t = -3, df = 198, p-value = 0.004
## alternative hypothesis: true difference in means is less than 0
## 95 percent confidence interval:
##   -Inf -0.25
## sample estimates:
##  mean in group No mean in group Yes 
##               2.1               2.8
```

In this case we see that there is a statistically significant difference between groups, in the expected direction: regular pot smokers watch more TV.
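Only the output is shown above; a call along the following lines would produce output of this form. This is a sketch: the data frame name `NHANES_sample` is an assumption from the text, and `var.equal = TRUE` is inferred from the "Two Sample t-test" header (rather than "Welch Two Sample t-test").

```
# one-tailed, pooled-variance t-test; "No" is the first level
# alphabetically, so we test whether its mean is less than "Yes"
t.test(TVHrsNum ~ RegularMarij,
       data = NHANES_sample,
       alternative = 'less',
       var.equal = TRUE)
```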
The t-test is often presented as a specialized tool for comparing means, but it can also be viewed as an application of the general linear model. In this case, the model would look like this:

$\hat{TV} = \hat{\beta}_1 \cdot Marijuana + \hat{\beta}_0$

However, marijuana smoking is a binary variable, so we treat it as a dummy variable as we discussed in the previous chapter, setting it to a value of 1 for smokers and zero for nonsmokers. In that case, $\hat{\beta}_1$ is simply the difference in means between the two groups, and $\hat{\beta}_0$ is the mean for the group that was coded as zero. We can fit this model using the lm() function, and see that it gives the same t statistic as the t-test above:

```
## 
## Call:
## lm(formula = TVHrsNum ~ RegularMarij, data = NHANES_sample)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -2.293 -1.133 -0.133  0.867  2.867 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        2.133      0.119   17.87   <2e-16 ***
## RegularMarijYes    0.660      0.249    2.65   0.0086 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.5 on 198 degrees of freedom
## Multiple R-squared:  0.0343, Adjusted R-squared:  0.0295
## F-statistic: 7.04 on 1 and 198 DF,  p-value: 0.00861
```

We can also view the linear model results graphically (see the right panel of Figure 28.1). In this case, the predicted value for nonsmokers is $\hat{\beta}_0$ (2.13) and the predicted value for smokers is $\hat{\beta}_0 + \hat{\beta}_1$ (2.79). To compute the standard errors for this analysis, we can use exactly the same equations that we used for linear regression, since this really is just another example of linear regression. In fact, if you compare the p-value from the t-test above with the p-value in the linear regression analysis for the marijuana use variable, you will see that the one from the linear regression analysis is exactly twice the one from the t-test, because the linear regression analysis is performing a two-tailed test.

28.3.1 Effect sizes for comparing two means

The most commonly used effect size for a comparison between two means is Cohen's d, which (as you may remember from Chapter 18) is an expression of the effect in terms of standard error units. For the t-test estimated using the general linear model outlined above (i.e. with a single dummy-coded variable), this is expressed as:

$d = \frac{\hat{\beta}_1}{SE_{residual}}$

We can obtain these values from the analysis output above, giving us d = 0.45, which we would generally interpret as a medium-sized effect.

We can also compute $R^2$ for this analysis, which tells us how much of the variance in TV watching is accounted for by the model. This value (which is reported in the summary of the lm() analysis) is 0.03, which tells us that while the effect may be statistically significant, it accounts for relatively little of the variance in TV watching.

28.04: Bayes Factor for Mean Differences

As we discussed in the chapter on Bayesian analysis, Bayes factors provide a way to better quantify evidence in favor of or against the null hypothesis of no difference. In this case, we want to specifically test against the null hypothesis that the difference is greater than zero, because the difference is computed by the function between the first group ('No') and the second group ('Yes'). Thus, we specify a "null interval" going from zero to infinity, which means that the alternative is less than zero.
```
## Bayes factor analysis
## --------------
## [1] Alt., r=0.707 0<d<Inf    : 0.051 ±0%
## [2] Alt., r=0.707 !(0<d<Inf) : 8.7   ±0%
## 
## Against denominator:
##   Null, mu1-mu2 = 0 
## ---
## Bayes factor type: BFindepSample, JZS
```

This shows us that the evidence against the null hypothesis is moderately strong.
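For reference, a call roughly like the following would produce a Bayes factor of this form using the BayesFactor package. This is a sketch: the data frame name is carried over from the t-test above, and the null interval matches the one described in the text.

```
library(BayesFactor)

# restrict the alternative to differences greater than zero
# (No minus Yes), as described in the text
ttestBF(formula = TVHrsNum ~ RegularMarij,
        data = as.data.frame(NHANES_sample),
        nullInterval = c(0, Inf))
```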
In experimental research, we often use within-subjects designs, in which we compare the same person on multiple measurements. The measurements that come from this kind of design are often referred to as repeated measures. For example, in the NHANES dataset blood pressure was measured three times. Let's say that we are interested in testing whether there is a difference in mean blood pressure between the first and second measurements (Figure 28.2). We see that there does not seem to be much of a difference in mean blood pressure between time points (about one point). First let's test for a difference using an independent samples t-test, which ignores the fact that pairs of data points come from the same individuals.

```
## 
##  Two Sample t-test
## 
## data:  BPsys by timepoint
## t = 0.6, df = 398, p-value = 0.5
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -2.1  4.1
## sample estimates:
##  mean in group BPSys1 mean in group BPSys2 
##                   121                  120
```

This analysis shows no significant difference. However, this analysis is inappropriate since it assumes that the two samples are independent, when in fact they are not, since the data come from the same individuals. We can plot the data with a line for each individual to show this.

In this analysis, what we really care about is whether the blood pressure for each person changed in a systematic way between the two measurements, so another way to represent the data is to compute the difference between the two timepoints for each individual, and then analyze these difference scores rather than analyzing the individual measurements. In Figure 28.3, we show a histogram of these difference scores, with a blue line denoting the mean difference.

28.5.1 Sign test

One simple way to test for differences is using the sign test. To do this, we take the differences and compute their sign, and then we use a binomial test to ask whether the proportion of positive signs differs from 0.5.

```
## 
##  Exact binomial test
## 
## data:  npos and nrow(NHANES_sample)
## number of successes = 96, number of trials = 200, p-value = 0.6
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
##  0.41 0.55
## sample estimates:
## probability of success 
##                   0.48
```

Here we see that the proportion of individuals with positive signs (0.48) is not large enough to be surprising under the null hypothesis of $p=0.5$. However, one problem with the sign test is that it throws away information about the magnitude of the differences, and thus might be missing something.

28.5.2 Paired t-test

A more common strategy is to use a paired t-test, which is equivalent to a one-sample t-test for whether the mean difference between the measurements is zero. We can compute this using the `t.test()` function in R and setting `paired=TRUE`.

```
## 
##  Paired t-test
## 
## data:  BPsys by timepoint
## t = 3, df = 199, p-value = 0.007
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  0.29 1.75
## sample estimates:
## mean of the differences 
##                       1
```

With this analysis we see that there is in fact a significant difference between the two measurements.
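Only the test output is shown above; here is a minimal sketch of the paired test, assuming a wide-format data frame (one row per person) named `NHANES_sample_wide` with the two measurements as columns (both the name and the layout are assumptions, not the authors' code):

```
# paired t-test comparing the first and second blood pressure
# measurements within each person
t.test(NHANES_sample_wide$BPSys1,
       NHANES_sample_wide$BPSys2,
       paired = TRUE)
```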
Let's compute the Bayes factor to see how much evidence is provided by the result:

```
## Bayes factor analysis
## --------------
## [1] Alt., r=0.707 : 3 ±0%
## 
## Against denominator:
##   Null, mu = 0 
## ---
## Bayes factor type: BFoneSample, JZS
```

This shows us that although the effect was significant in a paired t-test, it actually provides very little evidence in favor of the alternative hypothesis. The paired t-test can also be defined in terms of a linear model; see the Appendix for more details on this.
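The Bayes factor above is of the one-sample type, computed on the within-person difference scores; a sketch of how it could be obtained with the BayesFactor package, reusing the hypothetical wide-format data frame from the sketch above:

```
library(BayesFactor)

# one-sample Bayes factor on the difference scores, which is
# equivalent to a Bayes factor for the paired comparison
bp_diff <- NHANES_sample_wide$BPSys1 - NHANES_sample_wide$BPSys2
ttestBF(x = bp_diff)
```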
Often we want to compare more than two means to determine whether any of them differ from one another. Let's say that we are analyzing data from a clinical trial for the treatment of high blood pressure. In the study, volunteers are randomized to one of three conditions: Drug 1, Drug 2 or placebo. Let's generate some data and plot them (see Figure 28.4).

28.6.1 Analysis of variance

We would first like to test the null hypothesis that the means of all of the groups are equal – that is, neither of the treatments had any effect. We can do this using a method called analysis of variance (ANOVA). This is one of the most commonly used methods in psychological statistics, and we will only scratch the surface here. The basic idea behind ANOVA is one that we already discussed in the chapter on the general linear model, and in fact ANOVA is just a name for a specific implementation of such a model.

Remember from the last chapter that we can partition the total variance in the data ($SS_{total}$) into the variance that is explained by the model ($SS_{model}$) and the variance that is not ($SS_{error}$). We can then compute a mean square for each of these by dividing them by their degrees of freedom; for the error this is $N - p$ (where $p$ is the number of means that we have computed), and for the model this is $p - 1$:

$MS_{model} = \frac{SS_{model}}{df_{model}} = \frac{SS_{model}}{p-1}$

$MS_{error} = \frac{SS_{error}}{df_{error}} = \frac{SS_{error}}{N - p}$

With ANOVA, we want to test whether the variance accounted for by the model is greater than what we would expect by chance, under the null hypothesis of no differences between means. Whereas for the t distribution the expected value is zero under the null hypothesis, that's not the case here, since sums of squares are always positive numbers. Fortunately, there is another standard distribution that describes how ratios of sums of squares are distributed under the null hypothesis: the F distribution (see Figure 28.5). This distribution has two degrees of freedom, which correspond to the degrees of freedom for the numerator (which in this case is the model) and the denominator (which in this case is the error).

To create an ANOVA model, we extend the idea of dummy coding that you encountered in the last chapter. Remember that for the t-test comparing two means, we created a single dummy variable that took the value of 1 for one of the conditions and zero for the others. Here we extend that idea by creating two dummy variables, one that codes for the Drug 1 condition and the other that codes for the Drug 2 condition. Just as in the t-test, we will have one condition (in this case, placebo) that doesn't have a dummy variable, and thus represents the baseline against which the others are compared; its mean defines the intercept of the model. Let's create the dummy coding for drugs 1 and 2. Now we can fit a model using the same approach that we used in the previous chapter:

```
## 
## Call:
## lm(formula = sysBP ~ d1 + d2, data = df)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -29.084  -7.745  -0.098   7.687  23.431 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   141.60       1.66   85.50  < 2e-16 ***
## d1            -10.24       2.34   -4.37  2.9e-05 ***
## d2             -2.03       2.34   -0.87     0.39    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 9.9 on 105 degrees of freedom
## Multiple R-squared:  0.169, Adjusted R-squared:  0.154
## F-statistic: 10.7 on 2 and 105 DF,  p-value: 5.83e-05
```

The output from this command provides us with two things.
First, it shows us the result of a t-test for each of the dummy variables, which basically tell us whether each of the conditions separately differs from placebo; it appears that Drug 1 does whereas Drug 2 does not. However, keep in mind that if we wanted to interpret these tests, we would need to correct the p-values to account for the fact that we have done multiple hypothesis tests; we will see an example of how to do this in the next chapter. Remember that the hypothesis that we started out wanting to test was whether there was any difference between any of the conditions; we refer to this as an omnibus hypothesis test, and it is the test that is provided by the F statistic. The F statistic basically tells us whether our model is better than a simple model that just includes an intercept. In this case we see that the F test is highly significant, consistent with our impression that there did seem to be differences between the groups (which in fact we know there were, because we created the data). 28.07: Appendix 28.7.1 The paired t-test as a linear model We can also define the paired t-test in terms of a general linear model. To do this, we include all of the measurements for each subject as data points (within a tidy data frame). We then include in the model a variable that codes for the identity of each individual (in this case, the ID variable that contains a subject ID for each person). This is known as a mixed model, since it includes effects of independent variables as well as effects of individuals. The standard model fitting procedure `lm()` can’t do this, but we can do it using the `lmer()` function from a popular R package called lme4, which is specialized for estimating mixed models. The `(1|ID)` in the formula tells `lmer()` to estimate a separate intercept (which is what the `1` refers to) for each value of the `ID` variable (i.e. for each individual in the dataset), and then estimate a common slope relating timepoint to BP. ``````# compute mixed model for paired test lmrResult <- lmer(BPsys ~ timepoint + (1 | ID), data = NHANES_sample_tidy) summary(lmrResult)`````` ``````## Linear mixed model fit by REML. t-tests use Satterthwaite's method [ ## lmerModLmerTest] ## Formula: BPsys ~ timepoint + (1 | ID) ## Data: NHANES_sample_tidy ## ## REML criterion at convergence: 2895 ## ## Scaled residuals: ## Min 1Q Median 3Q Max ## -2.3843 -0.4808 0.0076 0.4221 2.1718 ## ## Random effects: ## Groups Name Variance Std.Dev. ## ID (Intercept) 236.1 15.37 ## Residual 13.9 3.73 ## Number of obs: 400, groups: ID, 200 ## ## Fixed effects: ## Estimate Std. Error df t value Pr(>|t|) ## (Intercept) 121.370 1.118 210.361 108.55 <2e-16 *** ## timepointBPSys2 -1.020 0.373 199.000 -2.74 0.0068 ** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Correlation of Fixed Effects: ## (Intr) ## tmpntBPSys2 -0.167`````` You can see that this shows us a p-value that is very close to the result from the paired t-test computed using the `t.test()` function.
In this example, we will show multiple ways to test a hypothesis about the value of a single mean. As an example, let's test whether the mean systolic blood pressure (BP) in the NHANES dataset (averaged over the three measurements that were taken for each person) is greater than 120 mm Hg, which is the standard value for normal systolic BP.

First let's perform a power analysis to see how large our sample would need to be in order to detect a small difference (Cohen's d = .2).

```
pwr.result <- pwr.t.test(d=0.2, power=0.8,
                         type='one.sample',
                         alternative='greater')
pwr.result
```

```
## 
##      One-sample t test power calculation 
## 
##               n = 156
##               d = 0.2
##       sig.level = 0.05
##           power = 0.8
##     alternative = greater
```

Based on this, we take a sample of 156 individuals from the dataset.

```
NHANES_BP_sample <- NHANES_adult %>%
  drop_na(BPSysAve) %>%
  dplyr::select(BPSysAve) %>%
  sample_n(pwr.result$n)

print('Mean BP:')
```

```
## [1] "Mean BP:"
```

```
meanBP <- NHANES_BP_sample %>%
  summarize(meanBP=mean(BPSysAve)) %>%
  pull()
meanBP
```

```
## [1] 123
```

First let's perform a sign test to see whether the observed mean of 123.11 is significantly greater than the hypothesized value of 120. To do this, we count the number of values that are greater than the hypothesized mean, and then use a binomial test to ask how surprising that number is if the true proportion is 0.5 (as it would be if the distribution were centered at the hypothesized mean).

```
NHANES_BP_sample <- NHANES_BP_sample %>%
  mutate(BPover120=BPSysAve>120)

nOver120 <- NHANES_BP_sample %>%
  summarize(nOver120=sum(BPover120)) %>%
  pull()

binom.test(nOver120, nrow(NHANES_BP_sample), alternative='greater')
```

```
## 
##  Exact binomial test
## 
## data:  nOver120 and nrow(NHANES_BP_sample)
## number of successes = 84, number of trials = 155, p-value = 0.2
## alternative hypothesis: true probability of success is greater than 0.5
## 95 percent confidence interval:
##  0.47 1.00
## sample estimates:
## probability of success 
##                   0.54
```

This shows no significant difference. Next let's perform a one-sample t-test:

```
t.test(NHANES_BP_sample$BPSysAve, mu=120, alternative='greater')
```

```
## 
##  One Sample t-test
## 
## data:  NHANES_BP_sample$BPSysAve
## t = 2, df = 154, p-value = 0.01
## alternative hypothesis: true mean is greater than 120
## 95 percent confidence interval:
##  121 Inf
## sample estimates:
## mean of x 
##       123
```

Here we see that the difference is statistically significant. Finally, we can perform a randomization test to test the hypothesis. Under the null hypothesis we would expect roughly half of the differences from the expected mean to be positive and half to be negative (assuming the distribution is centered around the mean), so we can cause the null hypothesis to be true on average by randomly flipping the signs of the differences.

```
nruns = 5000

# create a function that randomly flips the signs of the
# differences and returns the resulting mean difference
shuffleOneSample <- function(x,mu) {
  # randomly flip signs
  flip <- runif(length(x))>0.5
  diff <- x - mu
  diff[flip]=-1*diff[flip]
  # compute and return the mean of the sign-flipped differences
  return(tibble(meanDiff=mean(diff)))
}

index_df <- tibble(id=seq(nruns)) %>%
  group_by(id)

shuffle_results <- index_df %>%
  do(shuffleOneSample(NHANES_BP_sample$BPSysAve,120))

observed_diff <- mean(NHANES_BP_sample$BPSysAve-120)
p_shuffle <- mean(shuffle_results$meanDiff>observed_diff)
p_shuffle
```

```
## [1] 0.014
```

This gives us a very similar p value to the one observed with the standard t-test.
We might also want to quantify the degree of evidence in favor of the null hypothesis, which we can do using the Bayes factor:

```
ttestBF(NHANES_BP_sample$BPSysAve,
        mu=120,
        nullInterval = c(-Inf, 0))
```

```
## Bayes factor analysis
## --------------
## [1] Alt., r=0.707 -Inf<d<0    : 0.029 ±0.29%
## [2] Alt., r=0.707 !(-Inf<d<0) : 1.8   ±0%
## 
## Against denominator:
##   Null, mu = 120 
## ---
## Bayes factor type: BFoneSample, JZS
```

This tells us that our result doesn't provide particularly strong evidence for either the null or alternative hypothesis; that is, it's inconclusive.
To compare two means from independent samples, we can use the two-sample t-test. Let's say that we want to compare the blood pressure of smokers and non-smokers; we don't have an expectation for the direction, so we will use a two-sided test. First let's perform a power analysis, again for a small effect:

```
power_results_2sample <- pwr.t.test(d=0.2, power=0.8,
                                    type='two.sample')
power_results_2sample
```

```
## 
##      Two-sample t test power calculation 
## 
##               n = 393
##               d = 0.2
##       sig.level = 0.05
##           power = 0.8
##     alternative = two.sided
## 
## NOTE: n is number in *each* group
```

This tells us that we need 394 subjects in each group, so let's sample 394 smokers and 394 nonsmokers from the NHANES dataset, and then put them into a single data frame with a variable denoting their smoking status. (Note that the filters here are arranged so that the data frame names match their contents: `smoker_df` holds the current smokers and `nonsmoker_df` the non-smokers.)

```
smoker_df <- NHANES_adult %>%
  dplyr::filter(SmokeNow=="Yes") %>%
  drop_na(BPSysAve) %>%
  dplyr::select(BPSysAve,SmokeNow) %>%
  sample_n(power_results_2sample$n)

nonsmoker_df <- NHANES_adult %>%
  dplyr::filter(SmokeNow=="No") %>%
  drop_na(BPSysAve) %>%
  dplyr::select(BPSysAve,SmokeNow) %>%
  sample_n(power_results_2sample$n)

sample_df <- smoker_df %>%
  bind_rows(nonsmoker_df)
```

Let's test our hypothesis using a standard two-sample t-test. We can use the formula notation to specify the analysis, just like we would for `lm()`.

```
t.test(BPSysAve ~ SmokeNow, data=sample_df)
```

```
## 
##  Welch Two Sample t-test
## 
## data:  BPSysAve by SmokeNow
## t = 4, df = 775, p-value = 3e-05
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  2.9 7.8
## sample estimates:
##  mean in group No mean in group Yes 
##               125               120
```

This shows us that there is a significant difference, though the direction is surprising: smokers have lower blood pressure! Let's look at the Bayes factor to quantify the evidence:

```
sample_df <- sample_df %>%
  mutate(SmokeNowInt=as.integer(SmokeNow))

ttestBF(formula=BPSysAve ~ SmokeNowInt, data=sample_df)
```

```
## Bayes factor analysis
## --------------
## [1] Alt., r=0.707 : 440 ±0%
## 
## Against denominator:
##   Null, mu1-mu2 = 0 
## ---
## Bayes factor type: BFindepSample, JZS
```

This shows that there is very strong evidence against the null hypothesis of no difference.

29.03: The t-test as a Linear Model (Section 28.3)

We can also use `lm()` to implement these t-tests. The one-sample t-test is basically a test for whether the intercept is different from zero, so we use a model with only an intercept and apply this to the data after subtracting the null hypothesis mean (so that the expectation under the null hypothesis is an intercept of zero):

```
NHANES_BP_sample <- NHANES_BP_sample %>%
  mutate(BPSysAveDiff = BPSysAve-120)

lm_result <- lm(BPSysAveDiff ~ 1, data=NHANES_BP_sample)
summary(lm_result)
```

```
## 
## Call:
## lm(formula = BPSysAveDiff ~ 1, data = NHANES_BP_sample)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -36.11 -13.11  -1.11   9.39  67.89 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)  
## (Intercept)     3.11       1.41     2.2    0.029 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 18 on 154 degrees of freedom
```

You will notice that this p-value is twice as big as the one obtained from the one-sample t-test above; this is because that was a one-tailed test, while `lm()` performs a two-tailed test.
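If we wanted a one-tailed p-value from the `lm()` fit, we could halve the two-tailed value, since the estimate is in the hypothesized direction. A minimal sketch, assuming the `lm_result` object from above:

```
# extract the two-tailed p-value for the intercept and halve it;
# this is only valid because the estimate (3.11) is in the
# hypothesized (positive) direction
coefs <- summary(lm_result)$coefficients
one_tailed_p <- coefs["(Intercept)", "Pr(>|t|)"] / 2
one_tailed_p
```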
We can also run the two-sample t-test using `lm()`: ``````lm_ttest_result <- lm(BPSysAve ~ SmokeNow, data=sample_df) summary(lm_ttest_result)`````` ``````## ## Call: ## lm(formula = BPSysAve ~ SmokeNow, data = sample_df) ## ## Residuals: ## Min 1Q Median 3Q Max ## -45.16 -11.16 -2.16 8.84 101.18 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 125.160 0.897 139.54 < 2e-16 *** ## SmokeNowYes -5.341 1.269 -4.21 2.8e-05 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 18 on 784 degrees of freedom ## Multiple R-squared: 0.0221, Adjusted R-squared: 0.0209 ## F-statistic: 17.7 on 1 and 784 DF, p-value: 2.84e-05`````` This gives the same p-value for the SmokeNowYes variable as it did for the two-sample t-test above.
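The p-values agree closely but not exactly, because the `t.test()` call earlier used the Welch (unequal-variances) version while `lm()` assumes equal variances. Rerunning the t-test with pooled variances should reproduce the `lm()` result exactly; a sketch, assuming the `sample_df` data frame from above:

```
# pooled-variance two-sample t-test; this is the version that is
# mathematically identical to the dummy-coded lm() fit above
t.test(BPSysAve ~ SmokeNow, data = sample_df, var.equal = TRUE)
```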
Let's look at how to perform a paired t-test in R. In this case, let's generate some data for a set of individuals on two tests, where each individual varies in their overall ability, but there is also a practice effect such that performance on the second test is generally better than the first.

First, let's see how big a sample we will require to find a medium (d=0.5) sized effect. Let's say that we want to be extra sure in our results, so we will find the sample size that gives us 95% power to find an effect if it's there:

```
paired_power <- pwr.t.test(d=0.5, power=0.95,
                           type='paired',
                           alternative='greater')
paired_power
```

```
## 
##      Paired t test power calculation 
## 
##               n = 45
##               d = 0.5
##       sig.level = 0.05
##           power = 0.95
##     alternative = greater
## 
## NOTE: n is number of *pairs*
```

Now let's generate a dataset with the required number of subjects:

```
subject_id <- seq(paired_power$n)
# we code the tests as 0/1 so that we can simply
# multiply this by the effect to generate the data
test_id <- c(0,1)
repeat_effect <- 5
noise_sd <- 5

subject_means <- rnorm(paired_power$n, mean=100, sd=15)

# note: the score computation below recycles the full subject_means
# vector rather than using the aligned subMean column, so the subject
# effect does not line up perfectly with subject_id
paired_data <- crossing(subject_id,test_id) %>%
  mutate(subMean=subject_means[subject_id],
         score=subject_means + test_id*repeat_effect + rnorm(paired_power$n, mean=noise_sd))
```

Let's perform a paired t-test on these data. To do that, we need to separate the first and second test data into separate variables, which we can do by converting our long data frame into a wide data frame.

```
paired_data_wide <- paired_data %>%
  spread(test_id, score) %>%
  rename(test1=`0`,
         test2=`1`)

glimpse(paired_data_wide)
```

```
## Observations: 44
## Variables: 4
## $ subject_id <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,…
## $ subMean    <dbl> 116, 95, 103, 91, 97, 91, 89, 97, 99, …
## $ test1      <dbl> 121, 108, 102, 94, 105, 111, 110, 89, …
## $ test2      <dbl> 104, 101, 102, 107, 108, 101, 157, 126…
```

Now we can pass those new variables into the `t.test()` function. Note that `type='paired'` is not actually an argument of `t.test()`; the argument that requests a paired test is `paired=TRUE`. As written, the call below silently ignores `type='paired'` and runs an unpaired Welch test, which is what the output shows; a true paired test would be `t.test(paired_data_wide$test1, paired_data_wide$test2, paired=TRUE)`.

```
paired_ttest_result <- t.test(paired_data_wide$test1,
                              paired_data_wide$test2,
                              type='paired')
paired_ttest_result
```

```
## 
##  Welch Two Sample t-test
## 
## data:  paired_data_wide$test1 and paired_data_wide$test2
## t = -1, df = 73, p-value = 0.2
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -10.5   2.3
## sample estimates:
## mean of x mean of y 
##       108       112
```

This analysis is a bit trickier to perform using the linear model, because we need to estimate a separate intercept for each subject in order to account for the overall differences between subjects. We can't do this using `lm()`, but we can do it using a function called `lmer()` from the `lme4` package. To do this, we need to add `(1|subject_id)` to the formula, which tells `lmer()` to add a separate intercept ("1") for each value of `subject_id`.

```
paired_test_lmer <- lmer(score ~ test_id + (1|subject_id),
                         data=paired_data)
summary(paired_test_lmer)
```

```
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: score ~ test_id + (1 | subject_id)
##    Data: paired_data
## 
## REML criterion at convergence: 719
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.5424 -0.6214 -0.0929  0.7349  2.9793 
## 
## Random effects:
##  Groups     Name        Variance Std.Dev.
##  subject_id (Intercept)   0       0.0    
##  Residual               228      15.1    
## Number of obs: 88, groups:  subject_id, 44
## 
## Fixed effects:
##             Estimate Std. Error    df t value Pr(>|t|)    
## (Intercept)   107.59       2.28 86.00   47.26   <2e-16 ***
## test_id         4.12       3.22 86.00    1.28      0.2    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##         (Intr)
## test_id -0.707
## convergence code: 0
## boundary (singular) fit: see ?isSingular
```

This gives a similar answer to the standard paired t-test. The advantage is that it's more flexible, allowing us to perform repeated measures analyses, as we will see below.
Often we want to compare several different means, to determine whether any of them are different from the others. In this case, let's look at the data from NHANES to determine whether Marital Status is related to sleep quality. First we clean up the data:

```
NHANES_sleep_marriage <- NHANES_adult %>%
  dplyr::select(SleepHrsNight, MaritalStatus, Age) %>%
  drop_na()
```

In this case we are going to treat the full NHANES dataset as our sample, with the goal of generalizing to the entire US population (from which the NHANES dataset is meant to be a representative sample). First let's look at the distribution of the different values of the `MaritalStatus` variable:

```
NHANES_sleep_marriage %>%
  group_by(MaritalStatus) %>%
  summarize(n=n()) %>%
  kable()
```

| MaritalStatus |    n |
|---------------|------|
| Divorced      |  437 |
| LivePartner   |  370 |
| Married       | 2434 |
| NeverMarried  |  889 |
| Separated     |  134 |
| Widowed       |  329 |

There are reasonable numbers of most of these categories, but let's remove the `Separated` category since it has relatively few members:

```
NHANES_sleep_marriage <- NHANES_sleep_marriage %>%
  dplyr::filter(MaritalStatus!="Separated")
```

Now let's use `lm()` to perform an analysis of variance. Since we also suspect that Age is related to the amount of sleep, we will also include Age in the model.

```
lm_sleep_marriage <- lm(SleepHrsNight ~ MaritalStatus + Age,
                        data=NHANES_sleep_marriage)
summary(lm_sleep_marriage)
```

```
## 
## Call:
## lm(formula = SleepHrsNight ~ MaritalStatus + Age, data = NHANES_sleep_marriage)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -5.016 -0.880  0.107  1.082  5.282 
## 
## Coefficients:
##                           Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                6.51758    0.09802   66.49  < 2e-16 ***
## MaritalStatusLivePartner   0.14373    0.09869    1.46  0.14536    
## MaritalStatusMarried       0.23494    0.07094    3.31  0.00093 ***
## MaritalStatusNeverMarried  0.25172    0.08404    3.00  0.00276 ** 
## MaritalStatusWidowed       0.26304    0.10327    2.55  0.01090 *  
## Age                        0.00318    0.00141    2.25  0.02464 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.4 on 4453 degrees of freedom
## Multiple R-squared:  0.00458, Adjusted R-squared:  0.00347
## F-statistic: 4.1 on 5 and 4453 DF,  p-value: 0.00102
```

This tells us that there is a highly significant effect of marital status (based on the F test), though it accounts for a very small amount of variance (less than 1%).

It's also useful to look in more detail at which groups differ from which others, which we can do by examining the estimated marginal means for each group using the `emmeans()` function.

```
# compute the differences between each of the means
leastsquare <- emmeans(lm_sleep_marriage,
                       pairwise ~ MaritalStatus,
                       adjust="tukey")

# display the results by grouping using letters
CLD(leastsquare$emmeans,
    alpha=.05,
    Letters=letters)
```

```
##  MaritalStatus emmean    SE   df lower.CL upper.CL .group
##  Divorced         6.7 0.066 4453      6.5      6.8  a    
##  LivePartner      6.8 0.073 4453      6.7      7.0  ab   
##  Married          6.9 0.028 4453      6.8      7.0   b   
##  NeverMarried     6.9 0.050 4453      6.8      7.0   b   
##  Widowed          6.9 0.082 4453      6.8      7.1  ab   
## 
## Confidence level used: 0.95 
## P value adjustment: tukey method for comparing a family of 5 estimates 
## significance level used: alpha = 0.05
```

The letters in the `.group` column tell us which individual conditions differ from which others; any pair of conditions that don't share a group identifier (in this case, the letters `a` and `b`) are significantly different from one another.
In this case, we see that Divorced people sleep less than Married or NeverMarried individuals; no other pairs differ significantly.

29.5.1 Repeated measures analysis of variance

The standard analysis of variance assumes that the observations are independent, which should be true for different people in the NHANES dataset, but may not be true if the data are based on repeated measures of the same individual. For example, the NHANES dataset involves three measurements of blood pressure for each individual. If we want to test whether there are any differences between those, then we would need to use a repeated measures analysis of variance. We can do this using `lmer()` as we did above. First, we need to create a "long" version of the dataset.

```
NHANES_bp_all <- NHANES_adult %>%
  drop_na(BPSys1,BPSys2,BPSys3) %>%
  dplyr::select(BPSys1,BPSys2,BPSys3, ID) %>%
  gather(test, BPsys, -ID)
```

Then we fit a model that includes a separate intercept for each individual.

```
repeated_lmer <- lmer(BPsys ~ test + (1|ID), data=NHANES_bp_all)
summary(repeated_lmer)
```

```
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: BPsys ~ test + (1 | ID)
##    Data: NHANES_bp_all
## 
## REML criterion at convergence: 89301
## 
## Scaled residuals: 
##    Min     1Q Median     3Q    Max 
## -4.547 -0.513 -0.005  0.495  4.134 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  ID       (Intercept) 280.9    16.8    
##  Residual              16.8     4.1    
## Number of obs: 12810, groups:  ID, 4270
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)  122.0037     0.2641 4605.7049   462.0   <2e-16 ***
## testBPSys2    -0.9283     0.0887 8538.0000   -10.5   <2e-16 ***
## testBPSys3    -1.6215     0.0887 8538.0000   -18.3   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##            (Intr) tsBPS2
## testBPSys2 -0.168       
## testBPSys3 -0.168  0.500
```

This shows us that the second and third tests are significantly different from the first test (which was automatically assigned as the baseline by `lmer()`). We might also want to know whether there is an overall effect of test. We can determine this by comparing the fit of our model to the fit of a model that does not include the test variable, which we will fit here. We then compare the models using the `anova()` function, which performs a likelihood ratio test to compare the two models.

```
repeated_lmer_baseline <- lmer(BPsys ~ (1|ID), data=NHANES_bp_all)

anova(repeated_lmer, repeated_lmer_baseline)
```

```
## Data: NHANES_bp_all
## Models:
## repeated_lmer_baseline: BPsys ~ (1 | ID)
## repeated_lmer: BPsys ~ test + (1 | ID)
##                        Df   AIC   BIC logLik deviance Chisq Chi Df Pr(>Chisq)    
## repeated_lmer_baseline  3 89630 89652 -44812    89624                            
## repeated_lmer           5 89304 89341 -44647    89294   330      2     <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

This shows that blood pressure differs significantly across the three tests.
In this chapter we will bring together everything that we have learned to apply our knowledge to a practical example. 30: Practical statistical modeling There is a set of steps that we generally go through when we want to use our statistical model to test a scientific hypothesis: 1. Specify your question of interest 2. Identify or collect the appropriate data 3. Prepare the data for analysis 4. Determine the appropriate model 5. Fit the model to the data 6. Criticize the model to make sure it fits properly 7. Test hypothesis and quantify effect size Let’s look at a real example. In 2007, Christopher Gardner and colleagues from Stanford published a study in the Journal of the American Medical Association titled “Comparison of the Atkins, Zone, Ornish, and LEARN Diets for Change in Weight and Related Risk Factors Among Overweight Premenopausal Women The A TO Z Weight Loss Study: A Randomized Trial” (Gardner et al. 2007). 30.1.1 1: Specify your question of interest According to the authors, the goal of their study was: To compare 4 weight-loss diets representing a spectrum of low to high carbohydrate intake for effects on weight loss and related metabolic variables. 30.1.2 2: Identify or collect the appropriate data To answer their question, the investigators randomly assigned each of 311 overweight/obese women to one of four different diets (Atkins, Zone, Ornish, or LEARN), and measured their weight and other measures of health over time. The authors recorded a large number of variables, but for the main question of interest let’s focus on a single variable: Body Mass Index (BMI). Further, since our goal is to measure lasting changes in BMI, we will only look at the measurement taken at 12 months after onset of the diet. 30.1.3 3: Prepare the data for analysis The actual data from the A to Z study are not publicly available, so we will use the summary data reported in their paper to generate some synthetic data that roughly match the data obtained in their study. Once we have the data, we can visualize them to make sure that there are no outliers. Violin plots are useful to see the shape of the distributions, as shown in Figure 30.1. Those data look fairly reasonable - in particular, there don’t seem to be any serious outliers. However, we can see that the distributions seem to differ a bit in their variance, with Atkins and Ornish showing greater variability than the others. This means that any analyses that assume the variances are equal across groups might be inappropriate. Fortunately, the ANOVA model that we plan to use is fairly robust to this. 30.1.4 4. Determine the appropriate model There are several questions that we need to ask in order to determine the appropriate statistical model for our analysis. • What kind of dependent variable? • BMI : continuous, roughly normally distributed • What are we comparing? • mean BMI across four diet groups • ANOVA is appropriate • Are observations independent? • Random assignment and use of difference scores should ensure that the assumption of independence is appropriate 30.1.5 5. Fit the model to the data Let’s run an ANOVA on BMI change to compare it across the four diets. It turns out that we don’t actually need to generate the dummy-coded variables ourselves; if we pass `lm()` a categorical variable, it will automatically generate them for us. ``````## ## Call: ## lm(formula = BMIChange12Months ~ diet, data = dietDf) ## ## Residuals: ## Min 1Q Median 3Q Max ## -8.14 -1.37 0.07 1.50 6.33 ## ## Coefficients: ## Estimate Std. 
Error t value Pr(>|t|) ## (Intercept) -1.622 0.251 -6.47 3.8e-10 *** ## dietLEARN 0.772 0.352 2.19 0.0292 * ## dietOrnish 0.932 0.356 2.62 0.0092 ** ## dietZone 1.050 0.352 2.98 0.0031 ** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 2.2 on 307 degrees of freedom ## Multiple R-squared: 0.0338, Adjusted R-squared: 0.0243 ## F-statistic: 3.58 on 3 and 307 DF, p-value: 0.0143`````` Note that lm automatically generated dummy variables that correspond to three of the four diets, leaving the Atkins diet without a dummy variable. This means that the intercept models the Atkins diet, and the other three variables model the difference between each of those diets and the Atkins diet. By default, `lm()` treats the first value (in alphabetical order) as the baseline. 30.1.6 6. Criticize the model to make sure it fits properly The first thing we want to do is to critique the model to make sure that it is appropriate. One thing we can do is to look at the residuals from the model. In the left panel of Figure ??, we plot the residuals for each individual grouped by diet, which are positioned by the mean for each diet. There are no obvious differences in the residuals across conditions, although there are a couple of datapoints (#34 and #304) that seem to be slight outliers. Another important assumption of the statistical tests that we apply to linear models is that the residuals from the model are normally distributed. The right panel of Figure ?? shows a Q-Q (quantile-quantile) plot, which plots the residuals against their expected values based on their quantiles in the normal distribution. If the residuals are normally distributed then the data points should fall along the dashed line — in this case it looks pretty good, except for those two outliers that are once again apparent here. 30.1.7 7. Test hypothesis and quantify effect size First let’s look back at the summary of results from the ANOVA, shown in Step 5 above. The significant F test shows us that there is a significant difference between diets, but we should also note that the model doesn’t actually account for much variance in the data; the R-squared value is only 0.03, showing that the model is only accounting for a few percent of the variance in weight loss. Thus, we would not want to overinterpret this result. The significant result also doesn’t tell us which diets differ from which others. We can find out more by comparing means across conditions using the `emmeans()` (“estimated marginal means”) function: ``````## diet emmean SE df lower.CL upper.CL .group ## Atkins -1.62 0.251 307 -2.11 -1.13 a ## LEARN -0.85 0.247 307 -1.34 -0.36 ab ## Ornish -0.69 0.252 307 -1.19 -0.19 b ## Zone -0.57 0.247 307 -1.06 -0.08 b ## ## Confidence level used: 0.95 ## P value adjustment: tukey method for comparing a family of 4 estimates ## significance level used: alpha = 0.05`````` The letters in the rightmost column show us which of the groups differ from one another, using a method that adjusts for the number of comparisons being performed. This shows that Atkins and LEARN diets don’t differ from one another (since they share the letter a), and the LEARN, Ornish, and Zone diets don’t differ from one another (since they share the letter b), but the Atkins diet differs from the Ornish and Zone diets (since they share no letters). 30.1.7.1 Bayes factor Let’s say that we want to have a better way to describe the amount of evidence provided by the data. 
One way we can do this is to compute a Bayes factor, which we can do by fitting the full model (including diet) and the reduced model (without diet) and then comparing their fit. For the reduced model, we just include a 1, which tells the fitting program to only fit an intercept. Note that this will take a few minutes to run. This shows us that there is very strong evidence (Bayes factor of nearly 100) for differences between the diets.

30.1.8 What about possible confounds?

If we look more closely at the Gardner paper, we will see that they also report statistics on how many individuals in each group had been diagnosed with metabolic syndrome, which is a syndrome characterized by high blood pressure, high blood glucose, excess body fat around the waist, and abnormal cholesterol levels, and is associated with increased risk for cardiovascular problems. Let's first add those data into the summary data frame:

Table 30.1: Presence of metabolic syndrome in each group in the AtoZ study.

| Diet   | N  | P(metabolic syndrome) |
|--------|----|-----------------------|
| Atkins | 77 | 0.29                  |
| LEARN  | 79 | 0.25                  |
| Ornish | 76 | 0.38                  |
| Zone   | 79 | 0.34                  |

Looking at the data it seems that the rates are slightly different across groups, with more metabolic syndrome cases in the Ornish and Zone diets – which were exactly the diets with poorer outcomes. Let's say that we are interested in testing whether the rate of metabolic syndrome was significantly different between the groups, since this might make us concerned that these differences could have affected the results of the diet outcomes.

30.1.8.1 Determine the appropriate model

• What kind of dependent variable?
  • proportions
• What are we comparing?
  • proportion with metabolic syndrome across four diet groups
  • chi-squared test for goodness of fit is appropriate against null hypothesis of no difference

Let's compute that statistic using the `chisq.test()` function. Here we will use the `simulate.p.value` option, which will help deal with the relatively small sample size.

```
## 
##  Pearson's Chi-squared test
## 
## data:  contTable
## X-squared = 4, df = 3, p-value = 0.3
```

This test shows that there is not a significant difference between the groups. However, it doesn't tell us how certain we are that there is no difference; remember that under NHST, we are always working under the assumption that the null is true unless the data show us enough evidence to cause us to reject this null hypothesis.

What if we want to quantify the evidence for or against the null? We can do this using the Bayes factor.

```
## Bayes factor analysis
## --------------
## [1] Non-indep. (a=1) : 0.058 ±0%
## 
## Against denominator:
##   Null, independence, a = 1 
## ---
## Bayes factor type: BFcontingencyTable, independent multinomial
```

This shows us that the alternative hypothesis is 0.058 times as likely as the null hypothesis, which means that the null hypothesis is 1/0.058 ~ 17 times more likely than the alternative hypothesis given these data. This is fairly strong, if not completely overwhelming, evidence in favor of the null hypothesis.
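The code that builds the contingency table and runs these tests isn't shown above; a sketch of how it might look, assuming a per-participant data frame `diet_df` with columns `diet` and `metabolic_syndrome` (these names are assumptions, not the authors' code):

```
# build the diet-by-metabolic-syndrome contingency table
contTable <- table(diet_df$diet, diet_df$metabolic_syndrome)

# chi-squared test of independence; simulate.p.value computes a
# Monte Carlo p-value rather than relying on the asymptotic
# chi-squared approximation
chisq.test(contTable, simulate.p.value = TRUE)

# Bayes factor for the same table, treating each diet group as an
# independent multinomial sample (rows fixed)
library(BayesFactor)
contingencyTableBF(contTable, sampleType = "indepMulti", fixedMargin = "rows")
```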
Learning Objectives

• Describe the concept of P-hacking and its effects on scientific practice
• Describe the concept of positive predictive value and its relation to statistical power
• Describe the concept of pre-registration and how it can help protect against questionable research practices

Most people think that science is a reliable way to answer questions about the world. When our physician prescribes a treatment we trust that it has been shown to be effective through research, and we have similar faith that the airplanes that we fly in aren't going to fall from the sky. However, since 2005 there has been an increasing concern that science may not always work as well as we have long thought that it does. In this chapter we will discuss these concerns about the reproducibility of scientific research, and outline the steps that one can take to make sure that our statistical results are as reproducible as possible.

32: Doing Reproducible Research

Let's say that we are interested in a research project on how children choose what to eat. This is a question that was asked in a study by the well-known eating researcher Brian Wansink and his colleagues in 2012. The standard (and, as we will see, somewhat naive) view goes something like this:

• You start with a hypothesis
  • Branding with popular characters should cause children to choose "healthy" food more often
• You collect some data
  • Offer children the choice between a cookie and an apple with either an Elmo-branded sticker or a control sticker, and record what they choose
• You do statistics to test the null hypothesis
  • "The preplanned comparison shows Elmo-branded apples were associated with an increase in a child's selection of an apple over a cookie, from 20.7% to 33.8%" (based on a $\chi^2$ test; Wansink, Just, and Payne 2012)
• You make a conclusion based on the data
  • "This study suggests that the use of branding or appealing branded characters may benefit healthier foods more than they benefit indulgent, more highly processed foods. Just as attractive names have been shown to increase the selection of healthier foods in school lunchrooms, brands and cartoon characters could do the same with young children." (Wansink, Just, and Payne 2012)

32.02: How Science (Sometimes) Actually Works

Brian Wansink is well known for his books on "Mindless Eating", and his fee for corporate speaking engagements is in the tens of thousands of dollars. In 2017, a set of researchers began to scrutinize some of his published research, starting with a set of papers about how much pizza people ate at a buffet. The researchers asked Wansink to share the data from the studies but he refused, so they dug into his published papers and found a large number of inconsistencies and statistical problems. The publicity around this analysis led a number of others to dig into Wansink's past, including obtaining emails between Wansink and his collaborators. As reported by Stephanie Lee at Buzzfeed, these emails showed just how far Wansink's actual research practices were from the naive model:

…back in September 2008, when Payne was looking over the data soon after it had been collected, he found no strong apples-and-Elmo link — at least not yet. …

"I have attached some initial results of the kid study to this message for your report," Payne wrote to his collaborators. "Do not despair. It looks like stickers on fruit may work (with a bit more wizardry)." …

Wansink also acknowledged the paper was weak as he was preparing to submit it to journals.
The p-value was 0.06, just shy of the gold standard cutoff of 0.05. It was a “sticking point,” as he put it in a Jan. 7, 2012, email. … “It seems to me it should be lower,” he wrote, attaching a draft. “Do you want to take a look at it and see what you think. If you can get the data, and it needs some tweeking, it would be good to get that one value below .05.” … Later in 2012, the study appeared in the prestigious JAMA Pediatrics, the 0.06 p-value intact. But in September 2017, it was retracted and replaced with a version that listed a p-value of 0.02. And a month later, it was retracted yet again for an entirely different reason: Wansink admitted that the experiment had not been done on 8- to 11-year-olds, as he’d originally claimed, but on preschoolers. This kind of behavior finally caught up with Wansink; fifteen of his research studies have been retracted and in 2018 he resigned from his faculty position at Cornell University.
While we think that the kind of fraudulent behavior seen in Wansink's case is relatively rare, it has become increasingly clear that problems with reproducibility are much more widespread in science than previously thought. This became clear in 2015, when a large group of researchers published a study in the journal Science titled "Estimating the reproducibility of psychological science" (Open Science Collaboration 2015). In this study, the researchers took 100 published studies in psychology and attempted to reproduce the results originally reported in the papers. Their findings were shocking: Whereas 97% of the original papers had reported statistically significant findings, only 37% of these effects were statistically significant in the replication study. Although these problems in psychology have received a great deal of attention, they seem to be present in nearly every area of science, from cancer biology (Errington et al. 2014) and chemistry (Baker 2017) to economics (Christensen and Miguel 2016) and the social sciences (Camerer et al. 2018).

The reproducibility crisis that emerged after 2010 was actually predicted by John Ioannidis, a physician from Stanford who wrote a paper in 2005 titled "Why most published research findings are false" (Ioannidis 2005). In this article, Ioannidis argued that the use of null hypothesis statistical testing in the context of modern science will necessarily lead to high levels of false results.

32.3.1 Positive predictive value and statistical significance

Ioannidis' analysis focused on a concept known as the positive predictive value, which is defined as the proportion of positive results (which generally translates to "statistically significant findings") that are true: $PPV = \frac{p(true\ positive\ result)}{p(true\ positive\ result) + p(false\ positive\ result)}$ Assuming that we know the probability that our hypothesis is true ($p(hIsTrue)$), then the probability of a true positive result is simply $p(hIsTrue)$ multiplied by the statistical power of the study: $p(true\ positive\ result) = p(hIsTrue) * (1 - \beta)$ where $\beta$ is the false negative rate. The probability of a false positive result is determined by $p(hIsTrue)$ and the false positive rate $\alpha$: $p(false\ positive\ result) = (1 - p(hIsTrue)) * \alpha$ PPV is then defined as: $PPV = \frac{p(hIsTrue) * (1 - \beta)}{p(hIsTrue) * (1 - \beta) + (1 - p(hIsTrue)) * \alpha}$ Let's first take an example where the probability of our hypothesis being true is high, say 0.8 - though note that in general we cannot actually know this probability. Let's say that we perform a study with the standard values of $\alpha=0.05$ and $\beta=0.2$. We can compute the PPV as: $PPV = \frac{0.8 * (1 - 0.2)}{0.8 * (1 - 0.2) + (1 - 0.8) * 0.05} = 0.98$ This means that if we find a positive result in a study where the hypothesis is likely to be true and power is high, then its likelihood of being true is high. Note, however, that a research field where the hypotheses have such a high likelihood of being true is probably not a very interesting field of research; research is most important when it tells us something new! Let's do the same analysis for a field where $p(hIsTrue)=0.1$ – that is, most of the hypotheses being tested are false.
In this case, assuming a lower statistical power of 0.2 (that is, $\beta = 0.8$), the PPV is: $PPV = \frac{0.1 * (1 - 0.8)}{0.1 * (1 - 0.8) + (1 - 0.1) * 0.05} = 0.307$ This means that in a field where most of the hypotheses are likely to be wrong (that is, an interesting scientific field where researchers are testing risky hypotheses), even when we find a positive result it is more likely to be false than true! In fact, this is just another example of the base rate effect that we discussed in the context of hypothesis testing – when an outcome is unlikely, then it's almost certain that most positive outcomes will be false positives. We can simulate this to show how PPV relates to statistical power, as a function of the prior probability of the hypothesis being true (see Figure 32.1). Unfortunately, statistical power remains low in many areas of science (Smaldino and McElreath 2016), suggesting that many published research findings are false. An amusing example of this was seen in a paper by Jonathan Schoenfeld and John Ioannidis, titled "Is everything we eat associated with cancer? A systematic cookbook review" (Schoenfeld and Ioannidis 2013). They examined a large number of papers that had assessed the relation between different foods and cancer risk, and found that 80% of ingredients had been associated with either increased or decreased cancer risk. In most of these cases, the statistical evidence was weak, and when the results were combined across studies, the result was null.

32.3.2 The winner's curse

Another kind of error can also occur when statistical power is low: Our estimates of the effect size will be inflated. This phenomenon often goes by the term "winner's curse", which comes from economics, where it refers to the fact that for certain types of auctions (where the value is the same for everyone, like a jar of quarters, and the bids are private), the winner is guaranteed to pay more than the good is worth. In science, the winner's curse refers to the fact that the effect size estimated from a significant result (i.e. a winner) is almost always an overestimate of the true effect size. We can simulate this in order to see how the estimated effect size for significant results is related to the actual underlying effect size. Let's generate data for which there is a true effect size of 0.2, and estimate the effect size for those results where there is a significant effect detected. The left panel of Figure 32.2 shows that when power is low, the estimated effect size for significant results can be highly inflated compared to the actual effect size. We can look at a single simulation to see why this is the case. In the right panel of Figure 32.2, you can see a histogram of the estimated effect sizes for 1000 samples, separated by whether the test was statistically significant. It should be clear from the figure that if we estimate the effect size only based on significant results, then our estimate will be inflated; only when most results are significant (i.e. power is high and the effect is relatively large) will our estimate come near the actual effect size.
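As a rough sketch of the simulation described above (written in R, and assuming a hypothetical per-group sample size of 30, which the text does not specify and which is what makes power low here), one can see the inflation directly:

```r
set.seed(1)
true_d <- 0.2   # true standardized effect size, as in the text
n      <- 30    # per-group sample size; an assumption made for illustration

one_study <- function() {
  a <- rnorm(n)                  # control group
  b <- rnorm(n, mean = true_d)   # treatment group with a true effect of 0.2 SD
  d_hat <- (mean(b) - mean(a)) / sqrt((var(a) + var(b)) / 2)   # estimated Cohen's d
  c(d = d_hat, sig = t.test(a, b)$p.value < 0.05)
}

sims <- t(replicate(5000, one_study()))
mean(sims[, "d"])                     # across all simulated studies: close to 0.2
mean(sims[sims[, "sig"] == 1, "d"])   # across significant studies only: inflated
```

Averaged over all simulated studies the estimate sits near the true value of 0.2, but averaged over only the statistically significant studies it is substantially larger; that gap is the winner's curse.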
A popular book entitled "The Compleat Academic: A Career Guide", published by the American Psychological Association (Darley, Zanna, and Roediger 2004), aims to provide aspiring researchers with guidance on how to build a career. In a chapter by well-known social psychologist Daryl Bem titled "Writing the Empirical Journal Article", Bem provides some suggestions about how to write a research paper. Unfortunately, the practices that he suggests are deeply problematic, and have come to be known as questionable research practices (QRPs).

Which article should you write? There are two possible articles you can write: (1) the article you planned to write when you designed your study or (2) the article that makes the most sense now that you have seen the results. They are rarely the same, and the correct answer is (2).

What Bem suggests here is known as HARKing (Hypothesizing After the Results are Known) (Kerr 1998). This might seem innocuous, but it is problematic because it allows the researcher to re-frame a post-hoc conclusion (which we should take with a grain of salt) as an a priori prediction (in which we would have stronger faith). In essence, it allows the researcher to rewrite their theory based on the facts, rather than using the theory to make predictions and then test them – akin to moving the goalpost so that it ends up wherever the ball goes. It thus becomes very difficult to disconfirm incorrect ideas, since the goalpost can always be moved to match the data. Bem continues:

Analyzing data Examine them from every angle. Analyze the sexes separately. Make up new composite indices. If a datum suggests a new hypothesis, try to find further evidence for it elsewhere in the data. If you see dim traces of interesting patterns, try to reorganize the data to bring them into bolder relief. If there are participants you don't like, or trials, observers, or interviewers who gave you anomalous results, drop them (temporarily). Go on a fishing expedition for something — anything — interesting. No, this is not immoral.

What Bem suggests here is known as p-hacking, which refers to trying many different analyses until one finds a significant result. Bem is correct that if one were to report every analysis done on the data then this approach would not be "immoral". However, it is rare to see a paper discuss all of the analyses that were performed on a dataset; rather, papers often only present the analyses that worked - which usually means that they found a statistically significant result. There are many different ways that one might p-hack:
• Analyze data after every subject, and stop collecting data once p<.05
• Analyze many different variables, but only report those with p<.05
• Collect many different experimental conditions, but only report those with p<.05
• Exclude participants to get p<.05
• Transform the data to get p<.05
A well-known paper by Simmons, Nelson, and Simonsohn (2011) showed that the use of these kinds of p-hacking strategies could greatly increase the actual false positive rate, resulting in a high number of false positive results (a small simulation at the end of this section illustrates this for the first of these strategies).

32.4.1 ESP or QRP?

In 2011, Daryl Bem published an article (Bem 2011) that claimed to have found scientific evidence for extrasensory perception. The article states:

This article reports 9 experiments, involving more than 1,000 participants, that test for retroactive influence by "time-reversing" well-established psychological effects so that the individual's responses are obtained before the putatively causal stimulus events occur.
…The mean effect size (d) in psi performance across all 9 experiments was 0.22, and all but one of the experiments yielded statistically significant results.

As researchers began to examine Bem's article, it became clear that he had engaged in all of the QRPs that he had recommended in the chapter discussed above. As Tal Yarkoni pointed out in a blog post that examined the article:
• Sample sizes varied across studies
• Different studies appear to have been lumped together or split apart
• The studies allow many different hypotheses, and it's not clear which were planned in advance
• Bem used one-tailed tests even when it's not clear that there was a directional prediction (so alpha is really 0.1)
• Most of the p-values are very close to 0.05
• It's not clear how many other studies were run but not reported
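To see how much this kind of flexibility matters, here is a minimal R simulation of the first p-hacking strategy listed earlier: testing after every additional pair of subjects and stopping as soon as p < .05, when there is no true effect at all. The starting sample size of 10 and the cap of 100 per group are arbitrary choices for illustration.

```r
set.seed(1)
# There is no true effect, so any "significant" result is a false positive.
optional_stopping <- function(start_n = 10, max_n = 100) {
  a <- rnorm(start_n)
  b <- rnorm(start_n)
  for (n in start_n:max_n) {
    if (t.test(a[1:n], b[1:n])$p.value < 0.05) return(TRUE)  # stop and "publish"
    a <- c(a, rnorm(1))   # otherwise collect one more subject per group
    b <- c(b, rnorm(1))
  }
  FALSE
}

mean(replicate(1000, optional_stopping()))   # far above the nominal 0.05
```

Even though each individual test uses the conventional .05 threshold, stopping at the first significant result pushes the overall false positive rate well above 5%, in the spirit of the Simmons, Nelson, and Simonsohn (2011) findings.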
In the years since the reproducibility crisis arose, there has been a robust movement to develop tools to help protect the reproducibility of scientific research.

32.5.1 Pre-registration

One of the ideas that has gained the greatest traction is pre-registration, in which one submits a detailed description of a study (including all data analyses) to a trusted repository (such as the Open Science Framework or AsPredicted.org). By specifying one's plans in detail prior to analyzing the data, pre-registration provides greater faith that the analyses do not suffer from p-hacking or other questionable research practices. The effects of pre-registration have been seen in clinical trials in medicine. In 2000, the National Heart, Lung, and Blood Institute (NHLBI) began requiring all clinical trials to be pre-registered using the system at ClinicalTrials.gov. This provides a natural experiment to observe the effects of study pre-registration. When Kaplan and Irvin (2015) examined clinical trial outcomes over time, they found that the number of positive outcomes in clinical trials was reduced after 2000 compared to before. While there are many possible causes, it seems likely that prior to study registration researchers were able to change their methods in order to find a positive result, which became more difficult after registration was required.

32.5.2 Reproducible practices

The paper by Simmons, Nelson, and Simonsohn (2011) laid out a set of suggested practices for making research more reproducible, all of which should become standard for researchers:
• Authors must decide the rule for terminating data collection before data collection begins and report this rule in the article.
• Authors must collect at least 20 observations per cell or else provide a compelling cost-of-data-collection justification.
• Authors must list all variables collected in a study.
• Authors must report all experimental conditions, including failed manipulations.
• If observations are eliminated, authors must also report what the statistical results are if those observations are included.
• If an analysis includes a covariate, authors must report the statistical results of the analysis without the covariate.

32.5.3 Replication

One of the hallmarks of science is the idea of replication – that is, other researchers should be able to perform the same study and obtain the same result. Unfortunately, as we saw in the outcome of the Reproducibility Project discussed earlier, many findings are not replicable. The best way to ensure replicability of one's research is to first replicate it on your own; for some studies this just won't be possible, but whenever it is possible one should make sure that one's finding holds up in a new sample. That new sample should be sufficiently powered to find the effect size of interest; in many cases, this will actually require a larger sample than the original. It's important to keep a couple of things in mind with regard to replication. First, the fact that a replication attempt fails does not necessarily mean that the original finding was false; remember that with the standard level of 80% power, there is still a one in five chance that the result will be nonsignificant, even if there is a true effect. For this reason, we generally want to see multiple replications of any important finding before we decide whether or not to believe it. Unfortunately, many fields including psychology have failed to follow this advice in the past, leading to "textbook" findings that turn out to be likely false.
With regard to Daryl Bem's studies of ESP, a large replication attempt involving 7 studies failed to replicate his findings (Galak et al. 2012). Second, remember that the p-value doesn't provide us with a measure of the likelihood of a finding to replicate. As we discussed previously, the p-value is a statement about the likelihood of one's data under a specific null hypothesis; it doesn't tell us anything about the probability that the finding is actually true (as we learned in the chapter on Bayesian analysis). In order to know the likelihood of replication we need to know the probability that the finding is true, which we generally don't know.

32.06: Doing Reproducible Data Analysis

So far we have focused on the ability to replicate other researchers' findings in new experiments, but another important aspect of reproducibility is to be able to reproduce someone's analyses on their own data, which we refer to as computational reproducibility. This requires that researchers share both their data and their analysis code, so that other researchers can both try to reproduce the result as well as potentially test different analysis methods on the same data. There is an increasing move in psychology towards open sharing of code and data; for example, the journal Psychological Science now provides "badges" to papers that share research materials, data, and code, as well as for pre-registration. The ability to reproduce analyses is one reason that we strongly advocate for the use of scripted analyses (such as those using R) rather than using a "point-and-click" software package. It's also a reason that we advocate the use of free and open-source software (like R) as opposed to commercial software packages, which would require others to buy the software in order to reproduce any analyses. There are many ways to share both code and data. A common way to share code is via web sites that support version control for software, such as Github. Small datasets can also be shared via these same sites; larger datasets can be shared through data sharing portals such as Zenodo, or through specialized portals for specific types of data (such as OpenNeuro for neuroimaging data).

32.07: Conclusion - Doing Better Science

It is every scientist's responsibility to improve their research practices in order to increase the reproducibility of their research. It is essential to remember that the goal of research is not to find a significant result; rather, it is to ask and answer questions about nature in the most truthful way possible. Most of our hypotheses will be wrong, and we should be comfortable with that, so that when we find one that's right, we will be even more confident in its truth.

32.08: Suggested Readings

• Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions, by Richard Harris
• Improving your statistical inferences - an online course on how to do better statistical analysis, including many of the points raised in this chapter.
References

Baker, Monya. 2017. "Reproducibility: Check Your Chemistry." Nature 548 (7668): 485–88. https://doi.org/10.1038/548485a.
Bem, Daryl J. 2011. "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect." J Pers Soc Psychol 100 (3): 407–25. https://doi.org/10.1037/a0021524.
Breiman, Leo. 2001. "Statistical Modeling: The Two Cultures (with Comments and a Rejoinder by the Author)." Statist. Sci. 16 (3). The Institute of Mathematical Statistics: 199–231. https://doi.org/10.1214/ss/1009213726.
Camerer, Colin F., Anna Dreber, Felix Holzmeister, Teck-Hua Ho, Jürgen Huber, Magnus Johannesson, Michael Kirchler, et al. 2018. "Evaluating the Replicability of Social Science Experiments in Nature and Science Between 2010 and 2015." Nature Human Behaviour 2: 637–44.
Christensen, Garret S, and Edward Miguel. 2016. "Transparency, Reproducibility, and the Credibility of Economics Research." Working Paper 22989. Working Paper Series. National Bureau of Economic Research. https://doi.org/10.3386/w22989.
Copas, J. B. 1983. "Regression, Prediction and Shrinkage (with Discussion)." Journal of the Royal Statistical Society, Series B: Methodological 45: 311–54.
Darley, John M, Mark P Zanna, and Henry L Roediger. 2004. The Compleat Academic: A Career Guide. 2nd ed. Washington, DC: American Psychological Association. http://www.loc.gov/catdir/toc/fy037/2003041830.html.
Dehghan, Mahshid, Andrew Mente, Xiaohe Zhang, Sumathi Swaminathan, Wei Li, Viswanathan Mohan, Romaina Iqbal, et al. 2017. "Associations of Fats and Carbohydrate Intake with Cardiovascular Disease and Mortality in 18 Countries from Five Continents (PURE): A Prospective Cohort Study." Lancet 390 (10107): 2050–62. https://doi.org/10.1016/S0140-6736(17)32252-3.
Efron, Bradley. 1998. "R. A. Fisher in the 21st Century (Invited Paper Presented at the 1996 R. A. Fisher Lecture)." Statist. Sci. 13 (2). The Institute of Mathematical Statistics: 95–122. https://doi.org/10.1214/ss/1028905930.
Errington, Timothy M, Elizabeth Iorns, William Gunn, Fraser Elisabeth Tan, Joelle Lomax, and Brian A Nosek. 2014. "An Open Investigation of the Reproducibility of Cancer Biology Research." Elife 3 (December). https://doi.org/10.7554/eLife.04333.
Fisher, R.A. 1925. Statistical Methods for Research Workers. Edinburgh: Oliver & Boyd.
Galak, Jeff, Robyn A LeBoeuf, Leif D Nelson, and Joseph P Simmons. 2012. "Correcting the Past: Failures to Replicate Psi." J Pers Soc Psychol 103 (6): 933–48. https://doi.org/10.1037/a0029709.
Gardner, Christopher D, Alexandre Kiazand, Sofiya Alhassan, Soowon Kim, Randall S Stafford, Raymond R Balise, Helena C Kraemer, and Abby C King. 2007. "Comparison of the Atkins, Zone, Ornish, and Learn Diets for Change in Weight and Related Risk Factors Among Overweight Premenopausal Women: The A to Z Weight Loss Study: A Randomized Trial." JAMA 297 (9): 969–77. https://doi.org/10.1001/jama.297.9.969.
Ioannidis, John P A. 2005. "Why Most Published Research Findings Are False." PLoS Med 2 (8): e124. https://doi.org/10.1371/journal.pmed.0020124.
Kaplan, Robert M, and Veronica L Irvin. 2015. "Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time." PLoS One 10 (8): e0132382. https://doi.org/10.1371/journal.pone.0132382.
Kerr, N L. 1998. "HARKing: Hypothesizing After the Results Are Known." Pers Soc Psychol Rev 2 (3): 196–217. https://doi.org/10.1207/s15327957pspr0203_4.
Neyman, J. 1937. "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability." Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 236 (767). The Royal Society: 333–80. https://doi.org/10.1098/rsta.1937.0005.
Neyman, J., and K. Pearson. 1933. "On the Problem of the Most Efficient Tests of Statistical Hypotheses." Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 231 (694-706). The Royal Society: 289–337. https://doi.org/10.1098/rsta.1933.0009.
Open Science Collaboration. 2015. "PSYCHOLOGY. Estimating the Reproducibility of Psychological Science." Science 349 (6251): aac4716. https://doi.org/10.1126/science.aac4716.
Pesch, Beate, Benjamin Kendzia, Per Gustavsson, Karl-Heinz Jöckel, Georg Johnen, Hermann Pohlabeln, Ann Olsson, et al. 2012. "Cigarette Smoking and Lung Cancer–Relative Risk Estimates for the Major Histological Types from a Pooled Analysis of Case-Control Studies." Int J Cancer 131 (5): 1210–9. https://doi.org/10.1002/ijc.27339.
Schenker, Nathaniel, and Jane F. Gentleman. 2001. "On Judging the Significance of Differences by Examining the Overlap Between Confidence Intervals." The American Statistician 55 (3). [American Statistical Association, Taylor & Francis, Ltd.]: 182–86. https://www.jstor.org/stable/2685796.
Simmons, Joseph P, Leif D Nelson, and Uri Simonsohn. 2011. "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant." Psychol Sci 22 (11): 1359–66. https://doi.org/10.1177/0956797611417632.
Smaldino, Paul E, and Richard McElreath. 2016. "The Natural Selection of Bad Science." R Soc Open Sci 3 (9): 160384. https://doi.org/10.1098/rsos.160384.
Stigler, Stephen M. 2016. The Seven Pillars of Statistical Wisdom. Harvard University Press.
Teicholz, Nina. 2014. The Big Fat Surprise. Simon & Schuster.
Wakefield, A J. 1999. "MMR Vaccination and Autism." Lancet 354 (9182): 949–50. https://doi.org/10.1016/S0140-6736(05)75696-8.
Wansink, Brian, David R Just, and Collin R Payne. 2012. "Can Branding Improve School Lunches?" Arch Pediatr Adolesc Med 166 (10): 1–2. https://doi.org/10.1001/archpediatrics.2012.999.
Much of experimental science comes down to measuring changes. Does one medicine work better than another? Do cells with one version of a gene synthesize more of an enzyme than cells with another version? Does one kind of signal processing algorithm detect pulsars better than another? Is one catalyst more effective at speeding a chemical reaction than another? Much of statistics, then, comes down to making judgments about these kinds of differences. We talk about “statistically significant differences” because statisticians have devised ways of telling if the difference between two measurements is really big enough to ascribe to anything but chance. Suppose you’re testing cold medicines. Your new medicine promises to cut the duration of cold symptoms by a day. To prove this, you find twenty patients with colds and give half of them your new medicine and half a placebo. Then you track the length of their colds and find out what the average cold length was with and without the medicine. But all colds aren’t identical. Perhaps the average cold lasts a week, but some last only a few days, and others drag on for two weeks or more, straining the household Kleenex supply. It’s possible that the group of ten patients receiving genuine medicine will be the unlucky types to get two-week colds, and so you’ll falsely conclude that the medicine makes things worse. How can you tell if you’ve proven your medicine works, rather than just proving that some patients are unlucky? 1.02: The Power of p Values Statistics provides the answer. If we know the distribution of typical cold cases – roughly how many patients tend to have short colds, or long colds, or average colds – we can tell how likely it is for a random sample of cold patients to have cold lengths all shorter than average, or longer than average, or exactly average. By performing a statistical test, we can answer the question “If my medication were completely ineffective, what are the chances I’d see data like what I saw?” That’s a bit tricky, so read it again. Intuitively, we can see how this might work. If I only test the medication on one person, it’s unsurprising if he has a shorter cold than average – about half of patients have colds shorter than average. If I test the medication on ten million patients, it’s pretty damn unlikely that all of them will have shorter colds than average, unless my medication works. The common statistical tests used by scientists produce a number called the \(p\) value that quantifies this. Here’s how it’s defined: The P value is defined as the probability, under the assumption of no effect or no difference (the null hypothesis), of obtaining a result equal to or more extreme than what was actually observed.24 So if I give my medication to \(100\) patients and find that their colds are a day shorter on average, the \(p\) value of this result is the chance that, if my medication didn’t do anything at all, my \(100\) patients would randomly have, on average, day-or-more-shorter colds. Obviously, the \(p\) value depends on the size of the effect – colds shorter by four days are less likely than colds shorter by one day – and the number of patients I test the medication on. That’s a tricky concept to wrap your head around. A \(p\) value is not a measure of how right you are, or how significant the difference is; it’s a measure of how surprised you should be if there is no actual difference between the groups, but you got data suggesting there is. 
A bigger difference, or one backed up by more data, suggests more surprise and a smaller \(p\) value. It’s not easy to translate that into an answer to the question “is there really a difference?” Most scientists use a simple rule of thumb: if \(p\) is less than \(0.05\), there’s only a \(5\)% chance of obtaining this data unless the medication really works, so we will call the difference between medication and placebo “significant.” If \(p\) is larger, we’ll call the difference insignificant. But there are limitations. The \(p\) value is a measure of surprise, not a measure of the size of the effect. I can get a tiny \(p\) value by either measuring a huge effect – “this medicine makes people live four times longer” – or by measuring a tiny effect with great certainty. Statistical significance does not mean your result has any practical significance. Similarly, statistical insignificance is hard to interpret. I could have a perfectly good medicine, but if I test it on ten people, I’d be hard-pressed to tell the difference between a real improvement in the patients and plain good luck. Alternately, I might test it on thousands of people, but the medication only shortens colds by three minutes, and so I’m simply incapable of detecting the difference. A statistically insignificant difference does not mean there is no difference at all. There’s no mathematical tool to tell you if your hypothesis is true; you can only see whether it is consistent with the data, and if the data is sparse or unclear, your conclusions are uncertain. But we can’t let that stop us.
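As a concrete illustration in R, with entirely made-up cold durations rather than data from any real trial, a two-sample t test is one common way of turning this question into a p value:

```r
# Hypothetical cold durations in days; these numbers are invented for illustration
placebo  <- c(7, 5, 9, 6, 8, 7, 10, 6, 7, 8)
medicine <- c(6, 5, 7, 5, 8, 6, 9, 5, 6, 7)

# "If the medicine did nothing, how surprising would a difference in average cold
# length at least this large be?" -- the p value reported below is that probability.
t.test(medicine, placebo)
```

Whether the resulting p value falls below 0.05 depends on both the size of the difference and the number of patients, which is exactly the point made above.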
We’ve seen that it’s possible to miss a real effect simply by not taking enough data. In most cases, this is a problem: we might miss a viable medicine or fail to notice an important side-effect. How do we know how much data to collect? Statisticians provide the answer in the form of “statistical power.” The power of a study is the likelihood that it will distinguish an effect of a certain size from pure luck. A study might easily detect a huge benefit from a medication, but detecting a subtle difference is much less likely. Let’s try a simple example. Suppose a gambler is convinced that an opponent has an unfair coin: rather than getting heads half the time and tails half the time, the proportion is different, and the opponent is using this to cheat at incredibly boring coin-flipping games. How to prove it? You can’t just flip the coin a hundred times and count the heads. Even with a perfectly fair coin, you don’t always get fifty heads: You can see that \(50\) heads is the most likely option, but it’s also reasonably likely to get \(45\) or \(57\). So if you get \(57\) heads, the coin might be rigged, but you might just be lucky. Let’s work out the math. Let’s say we look for a \(p\) value of \(0.05\) or less, as scientists typically do. That is, if I count up the number of heads after \(10\) or \(100\) trials and find a deviation from what I’d expect – half heads, half tails – I call the coin unfair if there’s only a \(5\)% chance of getting a deviation that size or larger with a fair coin. Otherwise, I can conclude nothing: the coin may be fair, or it may be only a little unfair. I can’t tell. So, what happens if I flip a coin ten times and apply these criteria? This is called a power curve. Along the horizontal axis, we have the different possibilities for the coin’s true probability of getting heads, corresponding to different levels of unfairness. On the vertical axis is the probability that I will conclude the coin is rigged after ten tosses, based on the \(p\) value of the result. You can see that if the coin is rigged to give heads \(60\)% of the time, and I flip the coin \(10\) times, I only have a \(20\)% chance of concluding that it’s rigged. There’s just too little data to separate rigging from random variation. The coin would have to be incredibly biased for me to always notice. But what if I flip the coin \(100\) times? Or \(1,000\) times? With one thousand flips, I can easily tell if the coin is rigged to give heads \(60\)% of the time. It’s just overwhelmingly unlikely that I could flip a fair coin \(1,000\) times and get more than \(600\) heads. 2.02: The Power of Being Underpowered After hearing all this, you might think calculations of statistical power are essential to medical trials. A scientist might want to know how many patients are needed to test if a new medication improves survival by more than \(10\)%, and a quick calculation of statistical power would provide the answer. Scientists are usually satisfied when the statistical power is \(0.8\) or higher, corresponding to an \(80\)% chance of concluding there’s a real effect. However, few scientists ever perform this calculation, and few journal articles ever mention the statistical power of their tests. Consider a trial testing two different treatments for the same condition. You might want to know which medicine is safer, but unfortunately, side effects are rare. You can test each medicine on a hundred patients, but only a few in each group suffer serious side effects. 
Obviously, you won’t have terribly much data to compare side effect rates. If four people have serious side effects in one group, and three in the other, you can’t tell if that’s the medication’s fault. Unfortunately, many trials conclude with “There was no statistically significant difference in adverse effects between groups” without noting that there was insufficient data to detect any but the largest differences.57 And so doctors erroneously think the medications are equally safe, when one could well be much more dangerous than the other. You might think this is only a problem when the medication only has a weak effect. But no: in one sample of studies published between 1975 and 1990 in prestigious medical journals, \(27\)% of randomized controlled trials gave negative results, but \(64\)% of these didn’t collect enough data to detect a \(50\)% difference in primary outcome between treatment groups. Fifty percent! Even if one medication decreases symptoms by \(50\)% more than the other medication, there’s insufficient data to conclude it’s more effective. And \(84\)% of the negative trials didn’t have the power to detect a \(25\)% difference.17, 4, 11, 16 In neuroscience the problem is even worse. Suppose we aggregate the data collected by numerous neuroscience papers investigating one particular effect and arrive at a strong estimate of the effect’s size. The median study has only a \(20\)% chance of being able to detect that effect. Only after many studies were aggregated could the effect be discerned. Similar problems arise in neuroscience studies using animal models – which raises a significant ethical concern. If each individual study is underpowered, the true effect will only likely be discovered after many studies using many animals have been completed and analyzed, using far more animal subjects than if the study had been done properly the first time.12 That’s not to say scientists are lying when they state they detected no significant difference between groups. You’re just misleading yourself when you assume this means there is no real difference. There may be a difference, but the study was too small to notice it. Let’s consider an example we see every day. 2.03: The Wrong Turn on Red In the 1970s, many parts of the United States began to allow drivers to turn right at a red light. For many years prior, road designers and civil engineers argued that allowing right turns on a red light would be a safety hazard, causing many additional crashes and pedestrian deaths. But the 1973 oil crisis and its fallout spurred politicians to consider allowing right turn on red to save fuel wasted by commuters waiting at red lights. Several studies were conducted to consider the safety impact of the change. For example, a consultant for the Virginia Department of Highways and Transportation conducted a before-and-after study of twenty intersections which began to allow right turns on red. Before the change there were \(308\) accidents at the intersections; after, there were \(337\) in a similar length of time. However, this difference was not statistically significant, and so the consultant concluded there was no safety impact. Several subsequent studies had similar findings: small increases in the number of crashes, but not enough data to conclude these increases were significant. 
As one report concluded, There is no reason to suspect that pedestrian accidents involving RT operations (right turns) have increased after the adoption of [right turn on red]… Based on this data, more cities and states began to allow right turns at red lights. The problem, of course, is that these studies were underpowered. More pedestrians were being run over and more cars were involved in collisions, but nobody collected enough data to show this conclusively until several years later, when studies arrived clearly showing the results: significant increases in collisions and pedestrian accidents (sometimes up to \(100\)% increases).27, 48 The misinterpretation of underpowered studies cost lives.
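R's built-in power.prop.test function makes this kind of power calculation straightforward. The side-effect rates below are invented, chosen only to echo the "a few cases per hundred patients" scenario described earlier:

```r
# Power to detect a difference between a 4% and a 2% serious side-effect rate
# with 100 patients per group, testing at the usual 0.05 level
power.prop.test(n = 100, p1 = 0.04, p2 = 0.02)

# Sample size per group needed to reach the conventional 80% power
power.prop.test(p1 = 0.04, p2 = 0.02, power = 0.8)
```

With only 100 patients per group the power comes out far below 80%, and reaching 80% power requires on the order of a thousand patients per group; this is why "no significant difference in side effects" so often just means "not enough data".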
Many studies strive to collect more data through replication: by repeating their measurements with additional patients or samples, they can be more certain of their numbers and discover subtle relationships that aren’t obvious at first glance. We’ve seen the value of additional data for improving statistical power and detecting small differences. But what exactly counts as a replication? Let’s return to a medical example. I have two groups of \(100\) patients taking different medications, and I seek to establish which medication lowers blood pressure more. I have each group take the medication for a month to allow it to take effect, and then I follow each group for ten days, each day testing their blood pressure. I now have ten data points per patient and \(1,000\) data points per group. Brilliant! \(1,000\) data points is quite a lot, and I can fairly easily establish whether one group has lower blood pressure than the other. When I do calculations for statistical significance I find significant results very easily. But wait: we expect that taking a patient’s blood pressure ten times will yield ten very similar results. If one patient is genetically predisposed to low blood pressure, I have counted his genetics ten times. Had I collected data from \(1,000\) independent patients instead of repeatedly testing \(100\), I would be more confident that differences between groups came from the medicines and not from genetics and luck. I claimed a large sample size, giving me statistically significant results and high statistical power, but my claim is unjustified. This problem is known as pseudoreplication, and it is quite common.38 After testing cells from a culture, a biologist might “replicate” his results by testing more cells from the same culture. Neuroscientists will test multiple neurons from the same animal, incorrectly claiming they have a large sample size because they tested hundreds of neurons from just two rats. In statistical terms, pseudoreplication occurs when individual observations are heavily dependent on each other. Your measurement of a patient’s blood pressure will be highly related to his blood pressure yesterday, and your measurement of soil composition here will be highly correlated with your measurement five feet away. There are several ways to account for this dependence while performing your statistical analysis: 1. Average the dependent data points. For example, average all the blood pressure measurements taken from a single person. This isn’t perfect, though; if you measured some patients more frequently than others, this won’t be reflected in the averaged number. You want a method that somehow counts measurements as more reliable as more are taken. 2. Analyze each dependent data point separately. You could perform an analysis of every patient’s blood pressure on day \(5\), giving you only one data point per person. But be careful, because if you do this for every day, you’ll have problems with multiple comparisons, which we will discuss in the next chapter. 3. Use a statistical model which accounts for the dependence, like a hierarchical model or random effects model. It’s important to consider each approach before analyzing your data, as each method is suited to different situations. Pseudoreplication makes it easy to achieve significance, even though it gives you little additional information on the test subjects. Researchers must be careful not to artificially inflate their sample sizes when they retest samples.
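Here is a minimal R simulation of the blood-pressure example, with made-up means and standard deviations, contrasting the pseudoreplicated analysis with the first remedy above (averaging each patient's repeated readings). There is no true difference between the groups, so every "significant" result is a false positive:

```r
set.seed(1)
n_patients <- 100   # patients per group
n_measures <- 10    # repeated blood-pressure readings per patient

one_trial <- function() {
  sim_group <- function() {
    patient_mean <- rnorm(n_patients, mean = 120, sd = 10)   # stable patient-to-patient differences
    noise <- matrix(rnorm(n_patients * n_measures, sd = 2),  # day-to-day measurement noise
                    nrow = n_patients)
    sweep(noise, 1, patient_mean, "+")                       # one row of readings per patient
  }
  a <- sim_group()
  b <- sim_group()   # same true mean in both groups: no real difference
  c(naive    = t.test(as.vector(a), as.vector(b))$p.value,   # treats 1,000 readings as independent
    averaged = t.test(rowMeans(a), rowMeans(b))$p.value)     # one averaged value per patient
}

res <- replicate(2000, one_trial())
rowMeans(res < 0.05)   # fraction of "significant" results under no true difference
```

The naive analysis, which pretends it has 1,000 independent observations per group, declares a significant difference far more often than 5% of the time, while the averaged analysis stays close to the nominal rate.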
You’ve already seen that \(p\) values are hard to interpret. Getting a statistically insignificant result doesn’t mean there’s no difference. What about getting a significant result? Let’s try an example. Suppose I am testing a hundred potential cancer medications. Only ten of these drugs actually work, but I don’t know which; I must perform experiments to find them. In these experiments, I’ll look for \(p<0.05\) gains over a placebo, demonstrating that the drug has a significant benefit. To illustrate, each square in this grid represents one drug. The blue squares are the drugs that work: As we saw, most trials can’t perfectly detect every good medication. We’ll assume my tests have a statistical power of \(0.8\). Of the ten good drugs, I will correctly detect around eight of them, shown in purple: Of the ninety ineffectual drugs, I will conclude that about \(5\) have significant effects. Why? Remember that \(p\) values are calculated under the assumption of no effect, so \(p=0.05\) means a \(5\)% chance of falsely concluding that an ineffectual drug works. So I perform my experiments and conclude there are \(13\) working drugs: \(8\) good drugs and \(5\) I’ve included erroneously, shown in red: The chance of any given “working” drug being truly effectual is only \(62\)%. If I were to randomly select a drug out of the lot of \(100\), run it through my tests, and discover a \(p<0.05\) statistically significant benefit, there is only a \(62\)% chance that the drug is actually effective. In statistical terms, my false discovery rate – the fraction of statistically significant results which are really false positives – is \(38\)%. Because the base rate of effective cancer drugs is so low – only \(10\)% of our hundred trial drugs actually work – most of the tested drugs do not work, and we have many opportunities for false positives. If I had the bad fortune of possessing a truckload of completely ineffective medicines, giving a base rate of \(0\)%, there is a \(0\)% chance that any statistically significant result is true. Nevertheless, I will get a \(p<0.05\) result for \(5\)% of the drugs in the truck. You often hear people quoting \(p\) values as a sign that error is unlikely. “There’s only a \(1\) in \(10,000\) chance this result arose as a statistical fluke,” they say, because they got \(p=0.0001\). No! This ignores the base rate, and is called the base rate fallacy. Remember how \(p\) values are defined: The P value is defined as the probability, under the assumption of no effect or no difference (the null hypothesis), of obtaining a result equal to or more extreme than what was actually observed. A \(p\) value is calculated under the assumption that the medication does not work and tells us the probability of obtaining the data we did, or data more extreme than it. It does not tell us the chance the medication is effective. When someone uses their \(p\) values to say they’re probably right, remember this. Their study’s probability of error is almost certainly much higher. In fields where most tested hypotheses are false, like early drug trials (most early drugs don’t make it through trials), it’s likely that most “statistically significant” results with \(p<0.05\) are actually flukes. One good example is medical diagnostic tests.
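The arithmetic of this example is short enough to write out directly. A quick sketch in R, using the same numbers as above:

```r
n_drugs   <- 100
base_rate <- 0.10   # fraction of candidate drugs that truly work
power     <- 0.80   # chance of detecting a drug that really works
alpha     <- 0.05   # chance of a false positive for a drug that doesn't work

true_pos  <- n_drugs * base_rate * power          # about 8 drugs correctly flagged
false_pos <- n_drugs * (1 - base_rate) * alpha    # about 4.5 drugs incorrectly flagged

false_pos / (true_pos + false_pos)   # false discovery rate, roughly 0.36
true_pos / (true_pos + false_pos)    # chance a flagged drug really works, roughly 0.64
```

The text rounds the 4.5 expected false positives up to 5, which is why it quotes 62% and 38%; the point is the same either way.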
There has been some controversy over the use of mammograms in screening breast cancer. Some argue that the dangers of false positive results, such as unnecessary biopsies, surgery and chemotherapy, outweigh the benefits of early cancer detection. This is a statistical question. Let’s evaluate it. Suppose \(0.8\)% of women who get mammograms have breast cancer. In \(90\)% of women with breast cancer, the mammogram will correctly detect it. (That’s the statistical power of the test. This is an estimate, since it’s hard to tell how many cancers are missed if we don’t know they’re there.) However, among women with no breast cancer at all, about \(7\)% will get a positive reading on the mammogram, leading to further tests and biopsies and so on. If you get a positive mammogram result, what are the chances you have breast cancer? Ignoring the chance that you, the reader, are male,\(^{[1]}\) the answer is \(9\)%.35 Despite the test only giving false positives for \(7\)% of cancer-free women, analogous to testing for \(p<0.07\), \(91\)% of positive tests are false positives. How did I calculate this? It’s the same method as the cancer drug example. Imagine \(1,000\) randomly selected women who choose to get mammograms. Eight of them (\(0.8\)%) have breast cancer. The mammogram correctly detects \(90\)% of breast cancer cases, so about seven of the eight women will have their cancer discovered. However, there are \(992\) women without breast cancer, and \(7\)% will get a false positive reading on their mammograms, giving us \(70\) women incorrectly told they have cancer. In total, we have \(77\) women with positive mammograms, \(7\) of whom actually have breast cancer. Only \(9\)% of women with positive mammograms have breast cancer. If you administer questions like this one to statistics students and scientific methodology instructors, more than a third fail.35 If you ask doctors, two thirds fail.10 They erroneously conclude that a \(p<0.05\) result implies a \(95\)% chance that the result is true – but as you can see in these examples, the likelihood of a positive result being true depends on what proportion of hypotheses tested are true. And we are very fortunate that only a small proportion of women have breast cancer at any given time. Examine introductory statistical textbooks and you will often find the same error. \(P\) values are counterintuitive, and the base rate fallacy is everywhere. Footnotes [1] Interestingly, being male doesn’t exclude you from getting breast cancer; it just makes it exceedingly unlikely. 4.03: Taking up Arms Against the Base Rate Fallacy You don’t have to be performing advanced cancer research or early cancer screenings to run into the base rate fallacy. What if you’re doing social research? You’d like to survey Americans to find out how often they use guns in self-defense. Gun control arguments, after all, center on the right to self-defense, so it’s important to determine whether guns are commonly used for defense and whether that use outweighs the downsides, such as homicides. One way to gather this data would be through a survey. You could ask a representative sample of Americans whether they own guns and, if so, whether they’ve used the guns to defend their homes in burglaries or defend themselves from being mugged. You could compare these numbers to law enforcement statistics of gun use in homicides and make an informed decision about whether the benefits outweigh the downsides. Such surveys have been done, with interesting results. 
One 1992 telephone survey estimated that American civilians use guns in self-defense up to 2.5 million times every year – that is, about \(1\)% of American adults have defended themselves with firearms. Now, \(34\)% of these cases were in burglaries, giving us \(845,000\) burglaries stymied by gun owners. But in 1992, there were only 1.3 million burglaries committed while someone was at home. Two thirds of these occurred while the homeowners were asleep and were discovered only after the burglar had left. That leaves \(430,000\) burglaries involving homeowners who were at home and awake to confront the burglar – \(845,000\) of which, we are led to believe, were stymied by gun-toting residents.28 Whoops. What happened? Why did the survey overestimate the use of guns in self-defense? Well, for the same reason that mammograms overestimate the incidence of breast cancer: there are far more opportunities for false positives than false negatives. If \(99.9\)% of people have never used a gun in self-defense, but \(1\)% of those people will answer “yes” to any question for fun, and \(1\)% want to look manlier, and \(1\)% misunderstand the question, then you’ll end up vastly overestimating the use of guns in self-defense. What about false negatives? Could this effect be balanced by people who say “no” even though they gunned down a mugger last week? No. If very few people genuinely use a gun in self-defense, then there are very few opportunities for false negatives. They’re overwhelmed by the false positives. This is exactly analogous to the cancer drug example earlier. Here, \(p\) is the probability that someone will falsely claim they’ve used a gun in self-defense. Even if \(p\) is small, your final answer will be wildly wrong. To lower \(p\), criminologists make use of more detailed surveys. The National Crime Victimization surveys, for instance, use detailed sit-down interviews with researchers where respondents are asked for details about crimes and their use of guns in self-defense. With far greater detail in the survey, researchers can better judge whether the incident meets their criteria for self-defense. The results are far smaller – something like \(65,000\) incidents per year, not millions. There’s a chance that survey respondents underreport such incidents, but a much smaller chance of massive overestimation.
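The same arithmetic works for the survey example. A short R sketch using the illustrative rates from the paragraph above (a tiny fraction of genuine incidents, and roughly three percent of mistaken, joking, or bravado-driven "yes" answers):

```r
true_rate <- 0.001   # suppose 0.1% of respondents genuinely used a gun in self-defense
false_yes <- 0.03    # ~1% joking + ~1% bravado + ~1% misunderstanding the question

reported <- true_rate + (1 - true_rate) * false_yes   # fraction of respondents answering "yes"
true_rate / reported                                  # share of "yes" answers that are genuine: ~3%
```

Almost all of the "yes" answers are false positives, which is how a rare event can turn into millions of reported incidents.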
The base rate fallacy shows us that false positives are much more likely than you'd expect from a \(p<0.05\) criterion for significance. Most modern research doesn't make one significance test, however; modern studies compare the effects of a variety of factors, seeking to find those with the most significant effects. For example, imagine testing whether jelly beans cause acne by testing the effect of every single jelly bean color on acne. Making multiple comparisons means multiple chances for a false positive. For example, if I test \(20\) jelly bean flavors which do not cause acne at all, and look for a correlation at \(p<0.05\) significance, I have a \(64\)% chance of a false positive result.54 If I test \(45\) materials, the chance of false positive is as high as \(90\)%. It's easy to make multiple comparisons, and it doesn't have to be as obvious as testing twenty potential medicines. Track the symptoms of a dozen patients for a dozen weeks and test for significant benefits during any of those weeks: bam, that's twelve comparisons. Check for the occurrence of twenty-three potential dangerous side effects: alas, you have sinned. Send out a ten-page survey asking about nuclear power plant proximity, milk consumption, age, number of male cousins, favorite pizza topping, current sock color, and a few dozen other factors for good measure, and you'll find that something causes cancer. Ask enough questions and it's inevitable. A survey of medical trials in the 1980s found that the average trial made \(30\) therapeutic comparisons. In more than half of the trials, the researchers had made so many comparisons that a false positive was highly likely, and the statistically significant results they did report were cast into doubt: they may have found a statistically significant effect, but it could just as easily have been a false positive.54 There exist techniques to correct for multiple comparisons. For example, the Bonferroni correction method says that if you make \(n\) comparisons in the trial, your criterion for significance should be \(p<0.05/n\). This lowers the chances of a false positive to what you'd see from making only one comparison at \(p<0.05\). However, as you can imagine, this reduces statistical power, since you're demanding much stronger correlations before you conclude they're statistically significant. It's a difficult tradeoff, and tragically few papers even consider it.

4.05: Red Herrings in Brain Imaging

Neuroscientists do massive numbers of comparisons regularly. They often perform fMRI studies, where a three-dimensional image of the brain is taken before and after the subject performs some task. The images show blood flow in the brain, revealing which parts of the brain are most active when a person performs different tasks. But how do you decide which regions of the brain are active during the task? A simple method is to divide the brain image into small cubes called voxels. A voxel in the "before" image is compared to the voxel in the "after" image, and if the difference in blood flow is significant, you conclude that part of the brain was involved in the task. Trouble is, there are thousands of voxels to compare and many opportunities for false positives. One study, for instance, tested the effects of an "open-ended mentalizing task" on participants.
Subjects were shown "a series of photographs depicting human individuals in social situations with a specified emotional valence," and asked to "determine what emotion the individual in the photo must have been experiencing." You can imagine how various emotional and logical centers of the brain would light up during this test. The data was analyzed, and certain brain regions were found to change activity during the task. Comparison of images made before and after the mentalizing task showed a \(p=0.001\) difference in an \(81\text{ mm}^3\) cluster in the brain. The study participants? Not college undergraduates paid \$10 for their time, as is usual. No, the test subject was one \(3.8\)-pound Atlantic salmon, which "was not alive at the time of scanning."8 Of course, most neuroscience studies are more sophisticated than this; there are methods of looking for clusters of voxels which all change together, along with techniques for controlling the rate of false positives even when thousands of statistical tests are made. These methods are now widespread in the neuroscience literature, and few papers make such simple errors as I described. Unfortunately, almost every paper tackles the problem differently; a review of \(241\) fMRI studies found that they performed \(223\) unique analysis strategies, which, as we will discuss later, gives the researchers great flexibility to achieve statistically significant results.13

4.06: Controlling the False Discovery Rate

I mentioned earlier that techniques exist to correct for multiple comparisons. The Bonferroni procedure, for instance, says that you can get the right false positive rate by looking for \(p<0.05/n\), where \(n\) is the number of statistical tests you're performing. If you perform a study which makes twenty comparisons, you can use a threshold of \(p<0.0025\) to be assured that there is only a \(5\)% chance you will falsely decide a nonexistent effect is statistically significant. This has drawbacks. By lowering the \(p\) threshold required to declare a result statistically significant, you decrease your statistical power greatly, and fail to detect true effects as well as false ones. There are more sophisticated procedures than the Bonferroni correction which take advantage of certain statistical properties of the problem to improve the statistical power, but they are not magic solutions. Worse, they don't spare you from the base rate fallacy. You can still be misled by your \(p\) threshold and falsely claim there's "only a \(5\)% chance I'm wrong" – you just eliminate some of the false positives. A scientist is more interested in the false discovery rate: what fraction of my statistically significant results are false positives? Is there a statistical test that will let me control this fraction? For many years the answer was simply "no." As you saw in the section on the base rate fallacy, we can compute the false discovery rate if we make an assumption about how many of our tested hypotheses are true – but we'd rather find that out from the data, rather than guessing. In 1995, Benjamini and Hochberg provided a better answer. They devised an exceptionally simple procedure which tells you which \(p\) values to consider statistically significant. I've been saving you from mathematical details so far, but to illustrate just how simple the procedure is, here it is:
1. Perform your statistical tests and get the \(p\) value for each. Make a list and sort it in ascending order.
2. Choose a false-discovery rate and call it \(q\).
Call the number of statistical tests \(m\).
3. Find the largest \(p\) value such that \(p \leq iq/m\), where \(i\) is the \(p\) value’s place in the sorted list.
4. Call that \(p\) value and all smaller than it statistically significant.

You’re done! The procedure guarantees that out of all statistically significant results, no more than a fraction \(q\) of them will be false positives.7 The Benjamini-Hochberg procedure is fast and effective, and it has been widely adopted by statisticians and scientists in certain fields. It usually provides better statistical power than the Bonferroni correction and friends while giving more intuitive results. It can be applied in many different situations, and variations on the procedure provide better statistical power when testing certain kinds of data. Of course, it’s not perfect. In certain strange situations, the Benjamini-Hochberg procedure gives silly results, and it has been mathematically shown that it is always possible to beat it in controlling the false discovery rate. But it’s a start, and it’s much better than nothing.
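To make these numbers concrete, here is a small R sketch using entirely simulated data. The group size of \(25\) per color and the use of a \(t\) test are my own illustrative choices, not details from any of the studies above. It reproduces the \(64\)% and \(90\)% figures, then applies both the Bonferroni and the Benjamini-Hochberg corrections with R's built-in p.adjust().

```r
# Probability of at least one false positive among m independent tests at alpha = 0.05
alpha <- 0.05
1 - (1 - alpha)^20    # about 0.64 for 20 comparisons
1 - (1 - alpha)^45    # about 0.90 for 45 comparisons

# Simulate 20 "jelly bean" comparisons in which no color has any real effect:
# two groups of 25 subjects each, drawn from identical distributions.
set.seed(42)
p_values <- replicate(20, t.test(rnorm(25), rnorm(25))$p.value)

sum(p_values < 0.05)                                    # uncorrected "discoveries"
sum(p.adjust(p_values, method = "bonferroni") < 0.05)   # Bonferroni: same as p < 0.05/20
sum(p.adjust(p_values, method = "BH") < 0.05)           # Benjamini-Hochberg (steps 1-4 above)
```

The "BH" method of p.adjust() implements the sorted-list procedure in steps 1–4: comparing its adjusted values against \(q\) is equivalent to applying the \(p \leq iq/m\) rule directly.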
“We compared treatments A and B with a placebo. Treatment A showed a significant benefit over placebo, while treatment B had no statistically significant benefit. Therefore, treatment A is better than treatment B.” We hear this all the time. It’s an easy way of comparing medications, surgical interventions, therapies, and experimental results. It’s straightforward. It seems to make sense. However, a difference in significance does not always make a significant difference.22 One reason is the arbitrary nature of the \(p<0.05\) cutoff. We could get two very similar results, with \(p=0.04\) and \(p=0.06\), and mistakenly say they’re clearly different from each other simply because they fall on opposite sides of the cutoff. The second reason is that \(p\) values are not measures of effect size, so similar \(p\) values do not always mean similar effects. Two results with identical statistical significance can nonetheless contradict each other. Instead, think about statistical power. If we compare our new experimental drugs Fixitol and Solvix to a placebo but we don’t have enough test subjects to give us good statistical power, then we may fail to notice their benefits. If they have identical effects but we have only \(50\)% power, then there’s a good chance we’ll say Fixitol has significant benefits and Solvix does not. Run the trial again, and it’s just as likely that Solvix will appear beneficial and Fixitol will not. Instead of independently comparing each drug to the placebo, we should compare them against each other. We can test the hypothesis that they are equally effective, or we can construct a confidence interval for the extra benefit of Fixitol over Solvix. If the interval includes zero, then they could be equally effective; if it doesn’t, then one medication is a clear winner. This doesn’t improve our statistical power, but it does prevent the false conclusion that the drugs are different. Our tendency to look for a difference in significance should be replaced by a check for the significance of the difference. Examples of this error in common literature and news stories abound. A huge proportion of papers in neuroscience, for instance, commit the error.44 You might also remember a study a few years ago suggesting that men with more biological older brothers are more likely to be homosexual.9 How did they reach this conclusion? And why older brothers and not older sisters? The authors explain their conclusion by noting that they ran an analysis of various factors and their effect on homosexuality. Only the number of older brothers had a statistically significant effect; number of older sisters, or number of nonbiological older brothers, had no statistically significant effect. But as we’ve seen, that doesn’t guarantee that there’s a significant difference between the effects of older brothers and older sisters. In fact, taking a closer look at the data, it appears there’s no statistically significant difference between the effect of older brothers and older sisters. Unfortunately, not enough data was published in the paper to allow a direct calculation.22 5.02: When Significant Differences are Missed The problem can run the other way. Scientists routinely judge whether a significant difference exists simply by eye, making use of plots like this one: Imagine the two plotted points indicate the estimated time until recovery from some disease in two different groups of patients, each containing ten patients. There are three different things those error bars could represent: 1. 
The standard deviation of the measurements. Calculate how far each observation is from the average, square each difference, and then average the results and take the square root. This is the standard deviation, and it measures how spread out the measurements are from their mean.

2. The standard error of some estimator. For example, perhaps the error bars are the standard error of the mean. If I were to measure many different samples of patients, each containing exactly \(n\) subjects, roughly \(68\)% of the mean recovery times I measure would be within one standard error of the “real” average recovery time. (In the case of estimating means, the standard error is the standard deviation of the measurements divided by the square root of the number of measurements, \(\mathrm{SE} = s/\sqrt{n}\), so the estimate gets better as you get more data – but not too fast.) Many statistical techniques, like least-squares regression, provide standard error estimates for their results.

3. The confidence interval of some estimator. A \(95\)% confidence interval is mathematically constructed to include the true value for \(95\) random samples out of \(100\), so it spans roughly two standard errors in each direction. (In more complicated statistical models this may not be exactly true.)

These three options are all different. The standard deviation is a simple measurement of my data. The standard error tells me how a statistic, like a mean or the slope of a best-fit line, would likely vary if I take many samples of patients. A confidence interval is similar, with an additional guarantee that \(95\)% of \(95\)% confidence intervals should include the “true” value. In the example plot, we have two \(95\)% confidence intervals which overlap. Many scientists would view this and conclude there is no statistically significant difference between the groups. After all, groups \(1\) and \(2\) might not be different – the average time to recover could be \(25\) in both groups, for example, and the differences only appeared because group \(1\) was lucky this time. But does this mean the difference is not statistically significant? What would the \(p\) value be? In this case, \(p<0.05\). There is a statistically significant difference between the groups, even though the confidence intervals overlap.\(^{[1]}\) Unfortunately, many scientists skip hypothesis tests and simply glance at plots to see if confidence intervals overlap. This is actually a much more conservative test – requiring confidence intervals to not overlap is akin to requiring \(p<0.01\) in some cases.50 It is easy to claim two measurements are not significantly different even when they are. Conversely, comparing measurements with standard errors or standard deviations will also be misleading, as standard error bars are shorter than confidence interval bars. Two observations might have standard errors which do not overlap, and yet the difference between the two is not statistically significant. A survey of psychologists, neuroscientists and medical researchers found that the majority made this simple error, with many scientists confusing standard errors, standard deviations, and confidence intervals.6 Another survey of climate science papers found that a majority of papers which compared two groups with error bars made the error.37 Even introductory textbooks for experimental scientists, such as An Introduction to Error Analysis, teach students to judge by eye, hardly mentioning formal hypothesis tests at all.
There are, of course, formal statistical procedures that generate confidence intervals which can be compared by eye, and even correct for multiple comparisons automatically. For example, Gabriel comparison intervals are easily interpreted by eye.19 Overlapping confidence intervals do not mean two values are not significantly different. Similarly, separated standard error bars do not mean two values are significantly different. It’s always best to use the appropriate hypothesis test instead. Your eyeball is not a well-defined statistical procedure.

Footnotes

[1] This was calculated with an unpaired \(t\) test, based on a standard error of \(2.5\) in group \(1\) and \(3.5\) in group \(2\).
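A quick numerical check of footnote [1] in R. The standard errors (\(2.5\) and \(3.5\)) and the group sizes (ten patients each) come from the text; the group means below are invented for illustration, since the original plot is not reproduced here, and the Welch form of the unpaired \(t\) test is my assumption.

```r
# Summary statistics: the SEs and n come from the text; the means are illustrative
m1 <- 22; se1 <- 2.5; n1 <- 10
m2 <- 32; se2 <- 3.5; n2 <- 10

# 95% confidence intervals for each group mean: they overlap
m1 + c(-1, 1) * qt(0.975, n1 - 1) * se1
m2 + c(-1, 1) * qt(0.975, n2 - 1) * se2

# Unpaired (Welch) t test computed from the summary statistics
se_diff <- sqrt(se1^2 + se2^2)
t_stat  <- (m2 - m1) / se_diff
df      <- se_diff^4 / (se1^4 / (n1 - 1) + se2^4 / (n2 - 1))  # Welch-Satterthwaite
2 * pt(-abs(t_stat), df)   # about 0.03: significant despite the overlapping intervals
```

Testing whether zero lies inside a confidence interval for the difference is the comparison the chapter recommends; checking whether two separate intervals overlap is not.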
Medical trials are expensive. Supplying dozens of patients with experimental medications and tracking their symptoms over the course of months takes significant resources, and so many pharmaceutical companies develop “stopping rules,” which allow investigators to end a study early if it’s clear the experimental drug has a substantial effect. For example, if the trial is only half complete but there’s already a statistically significant difference in symptoms with the new medication, the researchers may terminate the study, rather than gathering more data to reinforce the conclusion. When poorly done, however, this can lead to numerous false positives. For example, suppose we’re comparing two groups of patients, one with a medication and one with a placebo. We measure the level of some protein in their bloodstreams as a way of seeing if the medication is working. In this case, though, the medication causes no difference whatsoever: patients in both groups have the same average protein levels, although of course individuals have levels which vary slightly. We start with ten patients in each group, and gradually collect more data from more patients. As we go along, we do a \(t\) test to compare the two groups and see if there is a statistically significant difference between average protein levels. We might see a result like this simulation: This plot shows the \(p\) value of the difference between groups as we collect more data, with the horizontal line indicating the \(p=0.05\) level of significance. At first, there appears to be no significant difference. Then we collect more data and conclude there is. If we were to stop, we’d be misled: we’d believe there is a significant difference between groups when there is none. As we collect yet more data, we realize we were mistaken – but then a bit of luck leads us back to a false positive. You’d expect that the \(p\) value dip shouldn’t happen, since there’s no real difference between groups. After all, taking more data shouldn’t make our conclusions worse, right? And it’s true that if we run the trial again we might find that the groups start out with no significant difference and stay that way as we collect more data, or start with a huge difference and quickly regress to having none. But if we wait long enough and test after every data point, we will eventually cross any arbitrary line of statistical significance, even if there’s no real difference at all. We can’t usually collect infinite samples, so in practice this doesn’t always happen, but poorly implemented stopping rules still increase false positive rates significantly.53 Modern clinical trials are often required to register their statistical protocols in advance, and generally pre-select only a few evaluation points at which they test their evidence, rather than testing after every observation. This causes only a small increase in the false positive rate, which can be adjusted for by carefully choosing the required significance levels and using more advanced statistical techniques.56 But in fields where protocols are not registered and researchers have the freedom to use whatever methods they feel appropriate, there may be false positive demons lurking.
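A sketch of that simulation in R. The number of patients and the number of replications below are arbitrary choices made for speed, not the values behind the book's figure: both groups have identical protein levels, yet peeking after every new pair of patients inflates the false positive rate well beyond \(5\)%.

```r
# Two groups with no true difference; "peek" with a t test after every added pair
set.seed(7)
peek_until_significant <- function(n_start = 10, n_max = 200) {
  a <- rnorm(n_start); b <- rnorm(n_start)
  repeat {
    if (t.test(a, b)$p.value < 0.05) return(TRUE)   # would have stopped the trial here
    if (length(a) >= n_max) return(FALSE)           # ran out of patients: no false positive
    a <- c(a, rnorm(1)); b <- c(b, rnorm(1))        # recruit one more patient per group
  }
}

# Testing once, at the end: the false positive rate stays near the nominal 5%
mean(replicate(500, t.test(rnorm(200), rnorm(200))$p.value < 0.05))

# Testing after every data point: the rate climbs well above 5%
mean(replicate(500, peek_until_significant()))
```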
Medical trials also tend to have inadequate statistical power to detect moderate differences between medications. Researchers want to stop a trial as soon as it shows an effect, yet the trial often lacks the power to detect that effect reliably. Suppose a medication reduces symptoms by \(20\)% over a placebo, but the trial you’re using to test it does not have adequate statistical power to detect this difference. We know that small trials tend to have varying results: it’s easy to get ten lucky patients who have shorter colds than usual, but much harder to get ten thousand who all do. Now imagine running many copies of this trial. Sometimes you get unlucky patients, and so you don’t notice any statistically significant improvement from your drug. Sometimes your patients are exactly average, and the treatment group has their symptoms reduced by \(20\)% – but you don’t have enough data to call this a statistically significant improvement, so you ignore it. Sometimes the patients are lucky and have their symptoms reduced by much more than \(20\)%, and so you stop the trial and say “Look! It works!” You’ve correctly concluded that your medication is effective, but you’ve inflated the size of its effect. You falsely believe it is much more effective than it really is. This effect occurs in pharmacological trials, epidemiological studies, gene association studies (“gene A causes condition B”), psychological studies, and in some of the most-cited papers in the medical literature.30, 32 In fields where trials can be conducted quickly by many independent researchers (such as gene association studies), the earliest published results are often wildly contradictory, because small trials and a demand for statistical significance cause only the most extreme results to be published.33 As a bonus, truth inflation can combine forces with early stopping rules. If most drugs in clinical trials are not effective enough to warrant stopping the trial early, then many trials stopped early will be the result of lucky patients, not brilliant drugs – and by stopping the trial we have deprived ourselves of the extra data needed to tell the difference. Reviews have compared trials stopped early with other studies addressing the same question which did not stop early; in most cases, the trials stopped early exaggerated the effects of their tested treatments, by an average of \(29\)%.3 Of course, we do not know The Truth about any drug being studied, so we cannot tell if a particular study stopped early due to luck or a particularly good drug. Many studies do not even publish the original intended sample size or the stopping rule which was used to justify terminating the study.43 A trial’s early stoppage is not automatic evidence that its results are biased, but it is a suggestive detail.

6.03: Little Extremes

Suppose you’re in charge of public school reform. As part of your research into the best teaching methods, you look at the effect of school size on standardized test scores. Do smaller schools perform better than larger schools? Should you try to build many small schools or a few large schools? To answer this question, you compile a list of the highest-performing schools you have. The average school has about \(1,000\) students, but the top-scoring five or ten schools are almost all smaller than that. It seems that small schools do the best, perhaps because of their personal atmosphere where teachers can get to know students and help them individually.
Then you take a look at the worst-performing schools, expecting them to be large urban schools with thousands of students and overworked teachers. Surprise! They’re all small schools too. What’s going on? Well, take a look at a plot of test scores vs. school size: Smaller schools have more widely varying average test scores, entirely because they have fewer students. With fewer students, there are fewer data points to establish the “true” performance of the teachers, and so the average scores vary widely. As schools get larger, test scores vary less, and in fact increase on average. This example used simulated data, but it’s based on real (and surprising) observations of Pennsylvania public schools.59 Another example: In the United States, counties with the lowest rates of kidney cancer tend to be Midwestern, Southern and Western rural counties. How could this be? You can think of many explanations: rural people get more exercise, inhale less polluted air, and perhaps lead less stressful lives. Perhaps these factors lower their cancer rates. On the other hand, counties with the highest rates of kidney cancer tend to be Midwestern, Southern and Western rural counties. The problem, of course, is that rural counties have the smallest populations. A single kidney cancer patient in a county with ten residents gives that county the highest kidney cancer rate in the nation. Small counties hence have vastly more variable kidney cancer rates, simply because they have so few residents.21
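A sketch of this effect in R using made-up numbers (the score distribution and the range of school sizes are arbitrary; this is not the Pennsylvania data): every school draws its students from exactly the same distribution, yet the best and the worst averages both come from the smallest schools.

```r
# Every school draws students from the same score distribution (mean 100, sd 15),
# so differences between school averages are pure sampling noise.
set.seed(1)
sizes <- sample(50:2000, 500, replace = TRUE)               # 500 schools of varying size
avgs  <- sapply(sizes, function(n) mean(rnorm(n, 100, 15)))
schools <- data.frame(size = sizes, avg = avgs)

head(schools[order(schools$avg), ])    # lowest averages: small schools
head(schools[order(-schools$avg), ])   # highest averages: also small schools

plot(sizes, avgs, xlab = "School size", ylab = "Average test score")
```

The same code, relabeled, describes the kidney cancer counties: small denominators produce extreme rates in both directions.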
There’s a common misconception that statistics is boring and monotonous. Collect lots of data, plug the numbers into Excel or SPSS or R, and beat the software with a stick until it produces some colorful charts and graphs. Done! All the statistician must do is read off the results. But one must choose which commands to use. Two researchers attempting to answer the same question may perform different statistical analyses entirely. There are many decisions to make: 1. Which variables do I adjust for? In a medical trial, for instance, you might control for patient age, gender, weight, BMI, previous medical history, smoking, drug use, or for the results of medical tests done before the start of the study. Which of these factors are important, and which can be ignored? 2. Which cases do I exclude? If I’m testing diet plans, maybe I want to exclude test subjects who came down with uncontrollable diarrhea during the trial, since their results will be abnormal. 3. What do I do with outliers? There will always be some results which are out of the ordinary, for reasons known or unknown, and I may want to exclude them or analyze them specially. Which cases count as outliers, and what do I do with them? 4. How do I define groups? For example, I may want to split patients into “overweight”, “normal”, and “underweight” groups. Where do I draw the lines? What do I do with a muscular bodybuilder whose BMI is in the “overweight” range? 5. What about missing data? Perhaps I’m testing cancer remission rates with a new drug. I run the trial for five years, but some patients will have tumors reappear after six years, or eight years. My data does not include their recurrence. How do I account for this when measuring the effectiveness of the drug? 6. How much data should I collect? Should I stop when I have a definitive result, or continue as planned until I’ve collected all the data? 7. How do I measure my outcomes? A medication could be evaluated with subjective patient surveys, medical test results, prevalence of a certain symptom, or measures such as duration of illness. Producing results can take hours of exploration and analysis to see which procedures are most appropriate. Papers usually explain the statistical analysis performed, but don’t always explain why the researchers chose one method over another, or explain what the results would be had the researchers chosen a different method. Researchers are free to choose whatever methods they feel appropriate – and while they may make the right choices, what would happen if they analyzed the data differently? In simulations, it’s possible to get effect sizes different by a factor of two simply by adjusting for different variables, excluding different sets of cases, and handling outliers differently.30 The effect size is that all-important number which tells you how much of a difference your medication makes. So apparently, being free to analyze how you want gives you enormous control over your results! The most concerning consequence of this statistical freedom is that researchers may choose the statistical analysis most favorable to them, arbitrarily producing statistically significant results by playing with the data until something emerges. Simulation suggests that false positive rates can jump to over \(50\)% for a given dataset just by letting researchers try different statistical analyses until one works.53 Medical researchers have devised ways of preventing this. 
Researchers are often required to draft a clinical trial protocol, explaining how the data will be collected and analyzed. Since the protocol is drafted before the researchers see any data, they can’t possibly craft their analysis to be most favorable to them. Unfortunately, many studies depart from their protocols and perform different analyses, allowing researcher bias to creep in.15, 14 Many other scientific fields have no protocol publication requirement at all. The proliferation of statistical techniques has given us many useful tools, but it seems they have been put to use as blunt objects. One must simply beat the data until it confesses.
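A sketch of this researcher freedom in R. The particular choices tried below (an optional age covariate, an outlier-exclusion rule, and a second outcome measure) are invented for illustration; the point is only that the outcome is pure noise, yet keeping whichever analysis "works" pushes the false positive rate well above the nominal \(5\)%.

```r
# The treatment has no effect on anything; a flexible analyst tries several
# defensible analyses and reports whichever gives the smallest p value.
set.seed(123)
one_flexible_study <- function(n = 40) {
  treatment <- rep(0:1, each = n / 2)
  age       <- rnorm(n, 50, 10)
  symptom   <- rnorm(n)                 # outcome 1: no true effect
  biomarker <- rnorm(n)                 # outcome 2: no true effect either

  try_everything <- function(y) {
    keep <- as.vector(abs(scale(y)) < 2)                     # an "outlier" rule
    c(summary(lm(y ~ treatment))$coefficients["treatment", 4],
      summary(lm(y ~ treatment + age))$coefficients["treatment", 4],
      t.test(y[keep & treatment == 1], y[keep & treatment == 0])$p.value)
  }
  min(c(try_everything(symptom), try_everything(biomarker)))  # keep the "best" result
}

mean(replicate(2000, one_flexible_study()) < 0.05)   # clearly above 0.05
```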
Until now, I have presumed that scientists are capable of making statistical computations with perfect accuracy, and only err in their choice of appropriate numbers to compute. Scientists may misuse the results of statistical tests or fail to make relevant computations, but they can at least calculate a \(p\) value, right? Perhaps not. Surveys of statistically significant results reported in medical and psychological trials suggest that many \(p\) values are wrong, and some statistically insignificant results are actually significant when computed correctly.25, 2 Other reviews find examples of misclassified data, erroneous duplication of data, inclusion of the wrong dataset entirely, and other mixups, all concealed by papers which did not describe their analysis in enough detail for the errors to be easily noticed.1, 26 Sunshine is the best disinfectant, and many scientists have called for experimental data to be made available through the Internet. In some fields, this is now commonplace: there exist gene sequencing databases, protein structure databanks, astronomical observation databases, and earth observation collections containing the contributions of thousands of scientists. Many other fields, however, can’t share their data due to impracticality (particle physics data can include many terabytes of information), privacy issues (in medical trials), a lack of funding or technological support, or just a desire to keep proprietary control of the data and all the discoveries which result from it. And even if the data were all available, would anyone analyze it all to spot errors? Similarly, scientists in some fields have pushed towards making their statistical analyses available through clever technological tools. A tool called Sweave, for instance, makes it easy to embed statistical analyses performed using the popular R programming language inside papers written in LaTeX, the standard for scientific and mathematical publications. The result looks just like any scientific paper, but another scientist reading the paper and curious about its methods can download the source code, which shows exactly how all the numbers were calculated. But would scientists avail themselves of the opportunity? Nobody gets scientific glory by checking code for typos. Another solution might be replication. If scientists carefully recreate the experiments of other scientists and validate their results, it is much easier to rule out the possibility of a typo causing an errant result. Replication also weeds out fluke false positives. Many scientists claim that experimental replication is the heart of science: no new idea is accepted until it has been independently tested and retested around the world and found to hold water. That’s not entirely true; scientists often take previous studies for granted, though occasionally scientists decide to systematically re-test earlier works. One new project, for example, aims to reproduce papers in major psychology journals to determine just how many papers hold up over time – and what attributes of a paper predict how likely it is to stand up to retesting.\(^{[1]}\) In another example, cancer researchers at Amgen retested \(53\) landmark preclinical studies in cancer research. (By “preclinical” I mean the studies did not involve human patients, as they were testing new and unproven ideas.) 
Despite working in collaboration with the authors of the original papers, the Amgen researchers could only reproduce six of the studies.5 Bayer researchers have reported similar difficulties when testing potential new drugs found in published papers.49 This is worrisome. Does the trend hold true for less speculative kinds of medical research? Apparently so: of the top-cited research articles in medicine, a quarter have gone untested after their publication, and a third have been found to be exaggerated or wrong by later research.32 That’s not as extreme as the Amgen result, but it makes you wonder what important errors still lurk unnoticed in important research. Replication is not as prevalent as we would like it to be, and the results are not always favorable. Footnotes [1] The Reproducibility Project, at http://openscienceframework.org/reproducibility/
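To make the earlier Sweave remark concrete, here is a minimal sketch of such a document; the file name and the analysis are placeholders I made up. R code lives between <<>>= and @ markers, and \Sexpr{} drops computed values directly into the LaTeX text, so every reported number can be traced back to the code that produced it.

```latex
\documentclass{article}
\begin{document}

<<recovery-analysis, echo=TRUE>>=
# "recovery_times.csv" is a placeholder file name
recovery <- read.csv("recovery_times.csv")
fit <- t.test(time ~ group, data = recovery)
@

The difference in recovery time between groups gave
$p = \Sexpr{round(fit$p.value, 3)}$.

\end{document}
```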
“Given enough eyeballs, all bugs are shallow.” —Eric S. Raymond We’ve talked about the common mistakes made by scientists, and how the best way to spot them is a bit of outside scrutiny. Peer review provides some of this scrutiny, but a peer reviewer doesn’t have the time to extensively re-analyze data and read code for typos – reviewers can only check that the methodology makes good sense. Sometimes they spot obvious errors, but subtle problems are usually missed.52 This is why many journals and professional societies require researchers to make their data available to other scientists on request. Full datasets are usually too large to print in the pages of a journal, so authors report their results and send the complete data to other scientists if they ask for a copy. Perhaps they will find an error or a pattern the original scientists missed. Or so it goes in theory. In 2005, Jelte Wicherts and colleagues at the University of Amsterdam decided to analyze every recent article in several prominent journals of the American Psychological Association to learn about their statistical methods. They chose the APA partly because it requires authors to agree to share their data with other psychologists seeking to verify their claims. Of the 249 studies they sought data for, they had only received data for 64 six months later. Almost three quarters of study authors never sent their data.61 Of course, scientists are busy people, and perhaps they simply didn’t have the time to compile their datasets, produce documents describing what each variable means and how it was measured, and so on. Wicherts and his colleagues decided they’d test this. They trawled through all the studies looking for common errors which could be spotted by reading the paper, such as inconsistent statistical results, misuse of various statistical tests, and ordinary typos. At least half of the papers had an error, usually minor, but \(15\)% reported at least one statistically significant result which was only significant because of an error. Next, they looked for a correlation between these errors and an unwillingness to share data. There was a clear relationship. Authors who refused to share their data were more likely to have committed an error in their paper, and their statistical evidence tended to be weaker.60 Because most authors refused to share their data, Wicherts could not dig for deeper statistical errors, and many more may be lurking. This is certainly not proof that authors hid their data out of fear their errors may be uncovered, or even that the authors knew about the errors at all. Correlation doesn’t imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing “look over there.”\(^{[1]}\) Footnotes [1] Joke shamelessly stolen from the alternate text of http://xkcd.com/552/. 9.02: Just Leave out the Details Nitpicking statisticians getting you down by pointing out flaws in your paper? There’s one clear solution: don’t publish as much detail! They can’t find the errors if you don’t say how you evaluated your data. I don’t mean to seriously suggest that evil scientists do this intentionally, although perhaps some do. More frequently, details are left out because authors simply forgot to include them, or because journal space limits force their omission. It’s possible to evaluate studies to see what they left out. 
Scientists leading medical trials are required to provide detailed study plans to ethical review boards before starting a trial, so one group of researchers obtained a collection of these plans from a review board. The plans specify which outcomes the study will measure: for instance, a study might monitor various symptoms to see if any are influenced by the treatment. The researchers then found the published results of these studies and looked for how well these outcomes were reported. Roughly half of the outcomes never appeared in the scientific journal papers at all. Many of these were statistically insignificant results which were swept under the rug.\(^{[1]}\) Another large chunk of results were not reported in sufficient detail for scientists to use the results for further meta-analysis.14 Other reviews have found similar problems. A review of medical trials found that most studies omit important methodological details, such as stopping rules and power calculations, with studies in small specialist journals faring worse than those in large general medicine journals.29 Medical journals have begun to combat this problem with standards for reporting of results, such as the CONSORT checklist. Authors are required to follow the checklist’s requirements before submitting their studies, and editors check to make sure all relevant details are included. The checklist seems to work; studies published in journals which follow the guidelines tend to report more essential detail, although not all of it.46 Unfortunately the standards are inconsistently applied and studies often slip through with missing details nonetheless.42 Journal editors will need to make a greater effort to enforce reporting standards. We see that published papers aren’t faring very well. What about unpublished studies? Footnotes [1] Why do we always say “swept under the rug”? Whose rug is it? And why don’t they use a vacuum cleaner instead of a broom? 9.03: Science in a Filing Cabinet Earlier we saw the impact of multiple comparisons and truth inflation on study results. These problems arise when studies make numerous comparisons with low statistical power, giving a high rate of false positives and inflated estimates of effect sizes, and they appear everywhere in published research. But not every study is published. We only ever see a fraction of medical research, for instance, because few scientists bother publishing “We tried this medicine and it didn’t seem to work.” Consider an example: studies of the tumor suppressor protein TP53 and its effect on head and neck cancer. A number of studies suggested that measurements of TP53 could be used to predict cancer mortality rates, since it serves to regulate cell growth and development and hence must function correctly to prevent cancer. When all 18 published studies on TP53 and cancer were analyzed together, the result was a highly statistically significant correlation: TP53 could clearly be measured to tell how likely a tumor is to kill you. But then suppose we dig up unpublished results on TP53: data that had been mentioned in other studies but not published or analyzed. Add this data to the mix and the statistically significant effect vanishes.36 After all, few authors bothered to publish data showing no correlation, so the meta-analysis could only use a biased sample. A similar study looked at reboxetine, an antidepressant sold by Pfizer. 
Several published studies have suggested that it is effective compared to placebo, leading a number of European countries to approve it for prescription to depressed patients. The German Institute for Quality and Efficiency in Health Care, responsible for assessing medical treatments, managed to get unpublished trial data from Pfizer – three times more data than had ever been published – and carefully analyzed it. The result: reboxetine is not effective. Pfizer had only convinced the public that it’s effective by neglecting to mention the studies proving it isn’t.18 This problem is commonly known as publication bias or the file-drawer problem: many studies sit in a file drawer for years, never published, despite the valuable data they could contribute. The problem isn’t simply the bias in published results. Unpublished studies lead to a duplication of effort – if other scientists don’t know you’ve done a study, they may well do it again, wasting money and effort. Regulators and scientific journals have attempted to halt this problem. The Food and Drug Administration requires certain kinds of clinical trials to be registered through its website ClinicalTrials.gov before the trials begin, and requires the publication of results within a year of the end of the trial. Similarly, the International Committee of Medical Journal Editors announced in 2005 that they would not publish studies which had not been pre-registered. Unfortunately, a review of \(738\) registered clinical trials found that only \(22\)% met the legal requirement to publish.47 The FDA has not fined any drug companies for noncompliance, and journals have not consistently enforced the requirement to register trials. Most studies simply vanish.
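A sketch of the file-drawer effect in R, with invented numbers (this is not the TP53 or reboxetine data): many small studies of a weak but real effect are simulated, only the statistically significant ones make it out of the file drawer, and the published record alone overstates the effect.

```r
# Many small studies of a weak but real effect (0.2 standard deviations).
# Only the studies reaching p < 0.05 get "published".
set.seed(99)
one_study <- function(n = 30, effect = 0.2) {
  treated <- rnorm(n, mean = effect); control <- rnorm(n)
  c(estimate = mean(treated) - mean(control),
    p        = t.test(treated, control)$p.value)
}

studies   <- t(replicate(1000, one_study()))
published <- studies[studies[, "p"] < 0.05, ]

mean(studies[, "estimate"])      # all studies pooled: close to the true 0.2
mean(published[, "estimate"])    # the published record alone: far larger
nrow(published) / nrow(studies)  # fraction published (the power of one small study)
```

This is also truth inflation at work: conditioning on statistical significance selects the lucky overestimates.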
I’ve painted a grim picture. But anyone can pick out small details in published studies and produce a tremendous list of errors. Do these problems matter? Well, yes. I wouldn’t have written this otherwise. John Ioannidis’s famous article “Why Most Published Research Findings are False”31 was grounded in mathematical concerns rather than an empirical test of research results. If most research articles have poor statistical power – and they do – and researchers have the freedom to choose among a multitude of analysis methods to get favorable results – and they do – and most tested hypotheses are false while most true hypotheses correspond to very small effects, then we are mathematically guaranteed to get a multitude of false positives. But if you want empiricism, you can have it, courtesy of John Ioannidis and Jonathan Schoenfeld. They studied the question “Is everything we eat associated with cancer?”51\(^{[1]}\) After choosing fifty common ingredients out of a cookbook, they set out to find studies linking them to cancer rates – and found \(216\) studies on forty different ingredients. Of course, most of the studies disagreed with each other. Most ingredients had some studies claiming they increased the risk of getting cancer and others claiming they decreased it. Most of the statistical evidence was weak, and meta-analyses usually showed much smaller effects on cancer rates than the original studies. Of course, being contradicted by follow-up studies and meta-analyses doesn’t prevent a paper from being cited as though it were true. Even effects which have been contradicted by massive follow-up trials with unequivocal results are frequently cited five or ten years later, with scientists apparently not noticing that the results are false.55 Of course, new findings get widely publicized in the press, while contradictions and corrections are hardly ever mentioned.23 You can hardly blame the scientists for not keeping up. Let’s not forget the merely biased results. Poor reporting standards in medical journals mean studies testing new treatments for schizophrenia can neglect to include the scale they used to evaluate symptoms – a handy source of bias, as trials using unpublished scales tend to produce better results than those using previously validated tests.40 Other medical studies simply omit particular results if they’re not favorable or interesting, biasing subsequent meta-analyses to only include positive results. A third of meta-analyses are estimated to suffer from this problem.34 Another review compared meta-analyses to subsequent large randomized controlled trials, considered the gold standard in medicine. In over a third of cases, the randomized trial’s outcome did not correspond well to the meta-analysis.39 Other comparisons of meta-analyses to subsequent research found that most results were inflated, with perhaps a fifth representing false positives.45 Let’s not forget the multitude of physical science papers which misuse confidence intervals.37 Or the peer-reviewed psychology paper allegedly providing evidence for psychic powers, on the basis of uncontrolled multiple comparisons in exploratory studies.58 Unsurprisingly, results failed to be replicated – by scientists who appear not to have calculated the statistical power of their tests.20 We have a problem. Let’s work on fixing it.

Footnotes

[1] An important part of the ongoing Oncological Ontology project to categorize everything into two categories: that which cures cancer and that which causes it.
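Ioannidis’s mathematical argument fits in one formula. If a fraction \(\pi\) of tested hypotheses are actually true, tests have power \(1-\beta\), and the significance threshold is \(\alpha\), then the fraction of “significant” findings that are genuine is \(\mathrm{PPV} = \frac{(1-\beta)\,\pi}{(1-\beta)\,\pi + \alpha\,(1-\pi)}\). The numbers plugged in below (one true hypothesis in ten, \(20\)% power) are illustrative choices of mine, not Ioannidis’s exact figures.

```r
# Positive predictive value: the fraction of "significant" results that are real
ppv <- function(prior, power, alpha = 0.05)
  power * prior / (power * prior + alpha * (1 - prior))

ppv(prior = 0.1, power = 0.2)   # about 0.31: most "discoveries" would be false
ppv(prior = 0.1, power = 0.8)   # better power helps, but it is still only about 0.64
```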
I’ve discussed many statistical problems throughout this guide. They appear in many fields of science: medicine, physics, climate science, biology, chemistry, neuroscience, and many others. Any researcher using statistical methods to analyze data is likely to make a mistake, and as we’ve seen, most of them do. What can we do about it?

11: What Can be Done

Most American science students have a minimal statistical education – perhaps one or two required courses, or even none at all for many students. And even when students have taken statistical courses, professors report that they can’t apply statistical concepts to scientific questions, having never fully understood – or simply forgotten – the appropriate techniques. This needs to change. Almost every scientific discipline depends on statistical analysis of experimental data, and statistical errors waste grant funding and researcher time. Some universities have experimented with statistics courses integrated with science classes, with students immediately applying their statistical knowledge to problems in their field. Preliminary results suggest these methods work: students learn and retain more statistics, and they spend less time whining about being forced to take a statistics course.41 More universities should adopt these techniques, using conceptual tests to see what methods work best. We also need more freely available educational material. I was introduced to statistics when I needed to analyze data in a laboratory and didn’t know how; until strong statistics education is more widespread, many students will find themselves in the same position, and they need resources. Projects like OpenIntro Stats are promising, and I hope to see more in the near future.

11.02: Scientific Publishing

Scientific journals are slowly making progress towards solving many of the problems I have discussed. Reporting guidelines, such as CONSORT for randomized trials, make it clear what information is required for a published paper to be reproducible; unfortunately, as we’ve seen, these guidelines are infrequently enforced. We must continue to pressure journals to hold authors to more rigorous standards. Premier journals need to lead the charge. Nature has begun to do so, announcing a new checklist which authors are required to complete before articles may be published. The checklist requires reporting of sample sizes, statistical power calculations, clinical trial registration numbers, a completed CONSORT checklist, adjustment for multiple comparisons, and sharing of data and source code. The guidelines cover most of the issues discussed in Statistics Done Wrong, except for stopping rules and discussion of any reasons for departing from the trial’s registered protocol. Nature will also make statisticians available to consult for papers as needed. If these guidelines are enforced, the result will be much more reliable and reproducible scientific research. More journals should do the same.

11.03: Your Job

Your task can be expressed in four simple steps:

1. Read a statistics textbook or take a good statistics course. Practice.
2. Plan your data analyses carefully and deliberately, avoiding the misconceptions and errors you have learned.
3. When you find common errors in the scientific literature – such as a simple misinterpretation of \(p\) values – hit the perpetrator over the head with your statistics textbook. It’s therapeutic.
4. Press for change in scientific education and publishing.

It’s our research. Let’s not screw it up.
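As a concrete instance of step 2, and of the power calculations Nature’s checklist asks for, base R’s power.t.test() shows how quickly the required sample size grows as the effect you care about shrinks; the effect sizes below are arbitrary examples.

```r
# Patients per group for 80% power at the usual alpha = 0.05 (two-sided t test)
power.t.test(delta = 1.0, sd = 1, power = 0.8)$n   # large effect:  about 17 per group
power.t.test(delta = 0.5, sd = 1, power = 0.8)$n   # medium effect: about 64 per group
power.t.test(delta = 0.2, sd = 1, power = 0.8)$n   # small effect:  about 394 per group
```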
12: Conclusion Beware false confidence. You may soon develop a smug sense of satisfaction that your work doesn’t screw up like everyone else’s. But I have not given you a thorough introduction to the mathematics of data analysis. There are many ways to foul up statistics beyond these simple conceptual errors. Errors will occur often, because somehow, few undergraduate science degrees or medical schools require courses in statistics and experimental design – and some introductory statistics courses skip over issues of statistical power and multiple inference. This is seen as acceptable despite the paramount role of data and statistical analysis in the pursuit of modern science; we wouldn’t accept doctors who have no experience with prescription medication, so why do we accept scientists with no training in statistics? Scientists need formal statistical training and advice. To quote: “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.” —R. A. Fisher, popularizer of the p value Journals may choose to reject research with poor-quality statistical analyses, and new guidelines and protocols may eliminate some problems, but until we have scientists adequately trained in the principles of statistics, experimental design and data analysis will not be improved. The all-consuming quest for statistical significance will only continue. Change will not be easy. Rigorous statistical standards don’t come free: if scientists start routinely performing statistical power computations, for example, they’ll soon discover they need vastly larger sample sizes to reach solid conclusions. Clinical trials are not free, and more expensive research means fewer published trials. You might object that scientific progress will be slowed needlessly – but isn’t it worse to build our progress on a foundation of unsound results? To any science students: invest in a statistics course or two while you have the chance. To researchers: invest in training, a good book, and statistical advice. And please, the next time you hear someone say “The result was significant with \(p<0.05\), so there’s only a \(1\) in \(20\) chance it’s a fluke!”, please beat them over the head with a statistics textbook for me. Disclaimer: The advice in this guide cannot substitute for the advice of a trained statistical professional. If you think you’re suffering from any serious statistical error, please consult a statistician immediately. I shall not have any liability from any injury to your dignity, statistical error or misconception suffered as a result of your use of this website. Use of this guide to justify rejecting the results of a scientific study without reviewing the evidence in any detail whatsoever is grounds for being slapped upside the head with a very large statistics textbook. This guide should help you find statistical errors, not allow you to selectively ignore science you don’t like.
You are exposed to statistics regularly. If you are a sports fan, then you have the statistics for your favorite player. If you are interested in politics, then you look at the polls to see how people feel about certain issues or candidates. If you are an environmentalist, then you research arsenic levels in the water of a town or analyze the global temperatures. If you are in the business profession, then you may track the monthly sales of a store or use quality control processes to monitor the number of defective parts manufactured. If you are in the health profession, then you may look at how successful a procedure is or the percentage of people infected with a disease. There are many other examples from other areas. To understand how to collect data and analyze it, you need to understand what the field of statistics is and the basic definitions. Definition \(1\) Statistics is the study of how to collect, organize, analyze, and interpret data collected from a group. There are two branches of statistics. One is called descriptive statistics, which is where you collect and organize data. The other is called inferential statistics, which is where you analyze and interpret data. First you need to look at descriptive statistics since you will use descriptive statistics when making inferences. To understand how to create descriptive statistics and then conduct inferences, there are a few definitions that you need to look at. Note that many of the words that are defined have common definitions that are used in non-statistical terminology. In statistics, some have slightly different definitions. It is important that you notice the difference and utilize the statistical definitions. The first thing to decide in a statistical study is whom you want to measure and what you want to measure. You always want to make sure that you can answer the question of whom you measured and what you measured. The who is known as the individual and the what is the variable. Definition \(2\) Individual – a person or object that you are interested in finding out information about. Definition \(3\) Variable – the measurement or observation of the individual. If you put the individual and the variable into one statement, then you obtain a population. Definition \(4\) Population – set of all values of the variable for the entire group of individuals. Notice, the population answers who you want to measure and what you want to measure. Make sure that your population always answers both of these questions. If it doesn’t, then you haven’t given someone who is reading your study the entire picture. As an example, if you just say that you are going to collect data from the senators in the U.S. Congress, you haven’t told your reader what you are going to collect. Do you want to know their income, their highest degree earned, their voting record, their age, their political party, their gender, their marital status, or how they feel about a particular issue? Without telling what you want to measure, your reader has no idea what your study is actually about. Sometimes the population is very easy to collect. For example, if you are interested in finding the average age of all of the current senators in the U.S. Congress, there are only 100 senators. This wouldn’t be hard to find. However, if instead you were interested in knowing the average age that a senator in the U.S. Congress first took office for all senators that ever served in the U.S. Congress, then this would be a bit more work.
It is still doable, but it would take a bit of time to collect. But what if you are interested in finding the average diameter at breast height of all of the Ponderosa Pine trees in the Coconino National Forest? This would be impossible to actually collect. What do you do in these cases? Instead of collecting the entire population, you take a smaller group of the population, a kind of snapshot of the population. This smaller group is called a sample. Definition \(5\) Sample – a subset from the population. It looks just like the population, but contains less data. How you collect your sample can determine how accurate the results of your study are. There are many ways to collect samples. No sampling method is perfect, but some create better samples than others. Sampling techniques will be discussed later. For now, realize that every time you take a sample you will find different data values. The sample is a snapshot of the population, and there is more information than is in the picture. The idea is to try to collect a sample that gives you an accurate picture, but you will never know for sure if your picture is the correct picture. Unlike previous mathematics classes where there was always one right answer, in statistics there can be many answers, and you don’t know which are right. Once you have your data, either from a population or a sample, you need to know how you want to summarize the data. As an example, suppose you are interested in finding the proportion of people who like a candidate, the average height a plant grows to using a new fertilizer, or the variability of the test scores. Understanding how you want to summarize the data helps to determine the type of data you want to collect. Since the population is what we are interested in, you want to calculate a number from the population. This is known as a parameter. As mentioned already, you can’t really collect the entire population. Even though this is the number you are interested in, you can’t really calculate it. Instead you use the number calculated from the sample, called a statistic, to estimate the parameter. Since no two samples are exactly the same, the statistic values are going to be different from sample to sample. They estimate the value of the parameter, but again, you do not know for sure if your answer is correct. Definition \(6\) Parameter – a number calculated from the population. Usually denoted with a Greek letter. This number is a fixed, unknown number that you want to find. Definition \(7\) Statistic – a number calculated from the sample. Usually denoted with letters from the Latin alphabet, though sometimes there is a Greek letter with a ^ (called a hat) above it. Since you can find samples, it is readily known, though it changes depending on the sample taken. It is used to estimate the parameter value. One last concept to mention is that there are two different types of variables – qualitative and quantitative. Each type of variable has different parameters and statistics that you find. It is important to know the difference between them. Definition \(8\) Qualitative or categorical variable – answer is a word or name that describes a quality of the individual. Definition \(9\) Quantitative or numerical variable – answer is a number, something that can be counted or measured from the individual. Example \(1\) stating definitions for qualitative variable In 2010, the Pew Research Center questioned \(1500\) adults in the U.S.
to estimate the proportion of the population favoring marijuana use for medical purposes. It was found that \(73\)% are in favor of using marijuana for medical purposes. State the individual, variable, population, and sample. Solution Individual – a U.S. adult Variable – the response to the question “should marijuana be used for medical purposes?” This is qualitative data since you are recording a person’s response – yes or no. Population – set of all responses of adults in the U.S. Sample – set of 1500 responses of U.S. adults who are questioned. Parameter – proportion of those who favor marijuana for medical purposes calculated from population Statistic– proportion of those who favor marijuana for medical purposes calculated from sample Example \(2\) stating definitions for qualitative variable A parking control officer records the manufacturer of every \(5^{th}\) car in the college parking lot in order to guess the most common manufacturer. Solution Individual – a car in the college parking lot Variable – the name of the manufacturer. This is qualitative data since you are recording a car type. Population – set of all names of the manufacturer of cars in the college parking lot. Sample – set of recorded names of the manufacturer of the cars in college parking lot Parameter – proportion of each car type calculated from population Statistic – proportion of each car type calculated from sample Example \(3\) stating definitions for quantitative variable A biologist wants to estimate the average height of a plant that is given a new plant food. She gives \(10\) plants the new plant food. State the individual, variable, population, and sample. Solution Individual – a plant given the new plant food Variable – the height of the plant (Note: it is not the average height since you cannot measure an average – it is calculated from data.) This is quantitative data since you will have a number. Population – set of all the heights of plants when the new plant food is used Sample – set of \(10\) heights of plants when the new plant food is used Parameter – average height of all plants Statistic – average height of \(10\) plants Example \(4\) stating definitions for quantitative variable A doctor wants to see if a new treatment for cancer extends the life expectancy of a patient versus the old treatment. She gives one group of \(25\) cancer patients the new treatment and another group of \(25\) the old treatment. She then measures the life expectancy of each of the patients. State the individuals, variables, populations, and samples. Solution In this example there are two individuals, two variables, two populations, and two samples. Individual 1: cancer patient given new treatment Individual 2: cancer patient given old treatment Variable 1: life expectancy when given new treatment. This is quantitative data since you will have a number. Variable 2: life expectancy when given old treatment. This is quantitative data since you will have a number. 
Population 1: set of all life expectancies of cancer patients given new treatment Population 2: set of all life expectancies of cancer patients given old treatment Sample 1: set of \(25\) life expectancies of cancer patients given new treatment Sample 2: set of \(25\) life expectancies of cancer patients given old treatment Parameter 1 – average life expectancy of all cancer patients given new treatment Parameter 2 – average life expectancy of all cancer patients given old treatment Statistic 1 – average life expectancy of \(25\) cancer patients given new treatment Statistic 2 – average life expectancy of \(25\) cancer patients given old treatment There are different types of quantitative variables, called discrete or continuous. The difference is in how many values the data can have. If you can actually count the number of data values (even if you are counting to infinity), then the variable is called discrete. If it is not possible to count the number of data values, then the variable is called continuous. Definition \(10\) Discrete data can only take on particular values like integers. Discrete data are usually things you count. Definition \(11\) Continuous data can take on any value. Continuous data are usually things you measure. Example \(5\) discrete or continuous Classify each quantitative variable as discrete or continuous. 1. The weight of a cat. 2. The number of fleas on a cat. 3. The size of a shoe. Solution 1. This is continuous since it is something you measure. 2. This is discrete since it is something you count. 3. This is discrete since it can only take certain values, such as \(7, 7.5, 8, 8.5, 9\). You can't buy a size \(9.73\) shoe. There are also four measurement scales for different types of data, with each building on the ones below it. Measurement Scales: Definition \(12\) Nominal – data is just a name or category. There is no order to any data and since there are no numbers, you cannot do any arithmetic on this level of data. Examples of this are gender, car name, ethnicity, and race. Definition \(13\) Ordinal – data that is nominal, but you can now put the data in order, since one value is more or less than another value. You cannot do arithmetic on this data, but you can now put data values in order. Examples of this are grades (A, B, C, D, F), place value in a race (1st, 2nd, 3rd), and size of a drink (small, medium, large). Definition \(14\) Interval – data that is ordinal, but you can now subtract one value from another and that subtraction makes sense. You can do arithmetic on this data, but only addition and subtraction. Examples of this are temperature and time on a clock. Definition \(15\) Ratio – data that is interval, but you can now divide one value by another and that ratio makes sense. You can now do all arithmetic on this data. Examples of this are height, weight, distance, and time. Nominal and ordinal data come from qualitative variables. Interval and ratio data come from quantitative variables. Most people have a hard time deciding if the data are nominal, ordinal, interval, or ratio. First, if the variable is qualitative (words instead of numbers) then it is either nominal or ordinal. Now ask yourself if you can put the data in a particular order. If you can, it is ordinal. Otherwise, it is nominal. If the variable is quantitative (numbers), then it is either interval or ratio. For ratio data, a value of \(0\) means there is no measurement. This is known as the absolute zero.
If there is an absolute zero in the data, then it means it is ratio. If there is no absolute zero, then the data are interval. An example of an absolute zero is if you have \$\(0\) in your bank account, then you are without money. The amount of money in your bank account is ratio data. Word of caution, sometimes ordinal data is displayed using numbers, such as \(5\) being strongly agree, and \(1\) being strongly disagree. These numbers are not really numbers. Instead they are used to assign numerical values to ordinal data. In reality you should not perform any computations on this data, though many people do. If there are numbers, make sure the numbers are inherent numbers, and not numbers that were assigned. Example \(6\) measurement scale State which measurement scale each is. 1. Time of first class 2. Hair color 3. Length of time to take a test 4. Age groupings (baby, toddler, adolescent, teenager, adult, elderly) Solution 1. This is interval since it is a number, but \(0\) o'clock means midnight and not the absence of time. 2. This is nominal since it is not a number, and there is no specific order for hair color. 3. This is ratio since it is a number, and if you take \(0\) minutes to take a test, it means you didn't take any time to complete it. 4. This is ordinal since it is not a number, but you could put the data in order from youngest to oldest or the other way around. Homework Exercise \(1\) 1. Suppose you want to know how Arizona workers age \(16\) or older travel to work. To estimate the percentage of people who use the different modes of travel, you take a sample containing \(500\) Arizona workers age \(16\) or older. State the individual, variable, population, sample, parameter, and statistic. 2. You wish to estimate the mean cholesterol levels of patients two days after they had a heart attack. To estimate the mean you collect data from \(28\) heart patients. State the individual, variable, population, sample, parameter, and statistic. 3. Print-O-Matic would like to estimate their mean salary of all employees. To accomplish this they collect the salary of \(19\) employees. State the individual, variable, population, sample, parameter, and statistic. 4. To estimate the percentage of households in Connecticut which use fuel oil as a heating source, a researcher collects information from \(1000\) Connecticut households about what fuel is their heating source. State the individual, variable, population, sample, parameter, and statistic. 5. The U.S. Census Bureau needs to estimate the median income of males in the U.S., they collect incomes from \(2500\) males. State the individual, variable, population, sample, parameter, and statistic. 6. The U.S. Census Bureau needs to estimate the median income of females in the U.S., they collect incomes from \(3500\) females. State the individual, variable, population, sample, parameter, and statistic. 7. Eyeglassmatic manufactures eyeglasses and they would like to know the percentage of each defect type made. They review \(25,891\) defects and classify each defect that is made. State the individual, variable, population, sample, parameter, and statistic. 8. The World Health Organization wishes to estimate the mean density of people per square kilometer, they collect data on \(56\) countries. State the individual, variable, population, sample, parameter, and statistic 9. State the measurement scale for each. 1. Cholesterol level 2. Defect type 3. Time of first class 4. 
Opinion on a 5 point scale, with 5 being strongly agree and 1 being strongly disagree 10. State the measurement scale for each. 1. Temperature in degrees Celsius 2. Ice cream flavors available 3. Pain levels on a scale from 1 to 10, 10 being the worst pain ever 4. Salary of employees Answer 1. See solutions 3. See solutions 5. See solutions 7. See solutions 9. 1. ratio 2. nominal 3. interval 4. ordinal
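Since this book does its computations in R, it may help to see how these variable types are usually stored there. The sketch below uses made-up values (none of the data from the examples above): nominal data are stored as factors, ordinal data as ordered factors, and quantitative data as plain numbers.
# Nominal (qualitative): names or categories with no order
hair <- factor(c("brown", "black", "blonde", "brown"))
# Ordinal (qualitative with an order): an ordered factor
drink <- factor(c("small", "large", "medium", "small"), levels=c("small", "medium", "large"), ordered=TRUE)
# Quantitative: plain numbers
height <- c(62.5, 70.1, 65.4, 68.0)   # continuous - something you measure
fleas <- c(0, 3, 1, 7)                # discrete - something you count
class(hair)     # "factor"
class(drink)    # "ordered" "factor"
class(height)   # "numeric"
mean(height)    # arithmetic makes sense for quantitative data
table(hair)     # counts are what make sense for nominal data
Storing ordinal data as an ordered factor keeps R from treating the categories as inherent numbers, which matches the word of caution above about numbers that are merely assigned to ordinal data.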
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/01%3A_Statistical_Basics/1.01%3A_What_is_Statistics.txt
As stated before, if you want to know something about a population, it is often impossible or impractical to examine the whole population. It might be too expensive in terms of time or money. It might be impractical – you can’t test all batteries for their length of lifetime because there wouldn’t be any batteries left to sell. You need to look at a sample. Hopefully the sample behaves the same as the population. When you choose a sample, you want it to be as similar to the population as possible. If you want to test a new painkiller for adults, you would want the sample to include people who are fat, skinny, old, young, healthy, not healthy, male, female, etc. There are many ways to collect a sample. None are perfect, and you are not guaranteed to collect a representative sample. That is, unfortunately, a limitation of sampling. However, there are several techniques that can result in samples that give you a semi-accurate picture of the population. Just remember to be aware that the sample may not be representative. As an example, you can take a random sample from a group of people that is split equally between males and females, yet by chance everyone you choose is female. If this happens, it may be a good idea to collect a new sample if you have the time and money. There are many sampling techniques, though only four will be presented here. The simplest, and the type that is strived for, is a simple random sample. This is where you pick the sample such that every sample has the same chance of being chosen. This type of sample is actually hard to collect, since it is sometimes difficult to obtain a complete list of all individuals. There are many cases where you cannot conduct a truly random sample. However, you can get as close as you can. Now suppose you are interested in what type of music people like. It might not make sense to try to find an answer for everyone in the U.S. You probably don’t like the same music as your parents. The answers vary so much you probably couldn’t find an answer for everyone all at once. It might make sense to look at people in different age groups, or people of different ethnicities. This is called a stratified sample. The issue with this sample type is that sometimes people subdivide the population too much. It is best to just have one stratification. Also, a stratified sample has problems similar to those of a simple random sample. If your population has some order in it, then you could do a systematic sample. This is popular in manufacturing. The problem is that it is possible to miss a manufacturing mistake because of how this sample is taken. If you are collecting polling data based on location, then a cluster sample that divides the population based on geographical means would be the easiest sample to conduct. The problem is that, if you are looking for people's opinions, people who live in the same region may have similar opinions. As you can see, each of the sampling techniques has pluses and minuses. A fifth technique, the convenience sample, is also described below, though it should be avoided. Definition \(1\) A simple random sample (SRS) of size \(n\) is a sample that is selected from a population in a way that ensures that every different possible sample of size \(n\) has the same chance of being selected. Also, every individual associated with the population has the same chance of being selected. Ways to select a simple random sample: Put all names in a hat and draw a certain number of names out. 
Assign each individual a number and use a random number table or a calculator or computer to randomly select the individuals that will be measured. Example \(1\) choosing a simple random sample Describe how to take a simple random sample from a classroom. Solution Give each student in the class a number. Using a random number generator you could then pick the number of students you want to pick. Example \(2\) how not to choose a simple random sample You want to choose \(5\) students out of a class of \(20\). Give some examples of samples that are not simple random samples: Solution Choose \(5\) students from the front row. The people in the last row have no chance of being selected. Choose the \(5\) shortest students. The tallest students have no chance of being selected. Definition \(2\) Stratified sampling is where you break the population into groups called strata, then take a simple random sample from each strata. For example: If you want to look at musical preference, you could divide the individuals into age groups and then conduct simple random samples inside each group. If you want to calculate the average price of textbooks, you could divide the individuals into groups by major and then conduct simple random samples inside each group. Definition \(3\) Systematic sampling is where you randomly choose a starting place then select every \(k\)th individual to measure. For example: You select every 5th item on an assembly line You select every 10th name on the list You select every 3rd customer that comes into the store. Definition \(4\) Cluster sampling is where you break the population into groups called clusters. Randomly pick some clusters then poll all individuals in those clusters. For example: A large city wants to poll all businesses in the city. They divide the city into sections (clusters), maybe a square block for each section, and use a random number generator to pick some of the clusters. Then they poll all businesses in each chosen cluster. You want to measure whether a tree in the forest is infected with bark beetles. Instead of having to walk all over the forest, you divide the forest up into sectors, and then randomly pick the sectors that you will travel to. Then record whether a tree is infected or not for every tree in that sector. Many people confuse stratified sampling and cluster sampling. In stratified sampling you use all the groups and some of the members in each group. Cluster sampling is the other way around. It uses some of the groups and all the members in each group. The four sampling techniques that were presented all have advantages and disadvantages. There is another sampling technique that is sometimes utilized because either the researcher doesn’t know better, or it is easier to do. This sampling technique is known as a convenience sample. This sample will not result in a representative sample, and should be avoided. Definition \(5\) Convenience sample is one where the researcher picks individuals to be included that are easy for the researcher to collect. An example of a convenience sample is if you want to know the opinion of people about the criminal justice system, and you stand on a street corner near the county court house, and questioning the first \(10\) people who walk by. The people who walk by the county court house are most likely involved in some fashion with the criminal justice system, and their opinion would not represent the opinions of all individuals. On a rare occasion, you do want to collect the entire population. 
In that case, you conduct a census. Definition \(6\) A census is when every individual of interest is measured. Example \(3\) sampling type Banner Health is a nonprofit chain of hospitals that operates in several states. Management wants to assess the incidence of complications after surgery. They wish to use a sample of surgery patients. Several sampling techniques are described below. Categorize each technique as simple random sample, stratified sample, systematic sample, cluster sample, or convenience sampling. 1. Obtain a list of patients who had surgery at all Banner Health facilities. Divide the patients according to type of surgery. Draw simple random samples from each group. 2. Obtain a list of patients who had surgery at all Banner Health facilities. Number these patients, and then use a random number table to obtain the sample. 3. Randomly select some Banner Health facilities from each of the seven states, and then include all the patients on the surgery lists of the selected facilities. 4. At the beginning of the year, instruct each Banner Health facility to record any complications from every 100th surgery. 5. Instruct each Banner Health facility to record any complications from 20 surgeries this week and send in the results. Solution 1. This is a stratified sample since the patients were separated into different strata and then random samples were taken from each stratum. The problem with this is that some types of surgeries may have more chances for complications than others. Of course, the stratified sample would show you this. 2. This is a simple random sample since each patient has the same chance of being chosen. The problem with this one is that it will take a while to collect the data. 3. This is a cluster sample since all patients are questioned in each of the selected hospitals. The problem with this is that you could have by chance selected hospitals that have no complications. 4. This is a systematic sample since they selected every 100th surgery. The problem with this is that if every 90th surgery has complications, you wouldn’t see this come up in the data. 5. This is a convenience sample since they left it up to the facility how to do it. The problem with convenience samples is that the person collecting the data will probably collect data from surgeries that had no complications. Homework Exercise \(1\) 1. Researchers want to collect cholesterol levels of U.S. patients who had a heart attack two days prior. The following are different sampling techniques that the researcher could use. Classify each as simple random sample, stratified sample, systematic sample, cluster sample, or convenience sample. 1. The researchers randomly select 5 hospitals in the U.S. then measure the cholesterol levels of all the heart attack patients in each of those hospitals. 2. The researchers list all of the heart attack patients and measure the cholesterol level of every 25th person on the list. 3. The researchers go to one hospital on a given day and measure the cholesterol level of the heart attack patients at that time. 4. The researchers list all of the heart attack patients. They then measure the cholesterol levels of randomly selected patients. 5. The researchers divide the heart attack patients based on race, and then measure the cholesterol levels of randomly selected patients in each race grouping. 2. The quality control officer at a manufacturing plant needs to determine what percentage of items in a batch are defective. The following are different sampling techniques that could be used by the officer. 
Classify each as simple random sample, stratified sample, systematic sample, cluster sample, or convenience sample. 1. The officer lists all of the batches in a given month. The number of defective items is counted in randomly selected batches. 2. The officer takes the first 10 batches and counts the number of defective items. 3. The officer groups the batches made in a month into which shift they are made. The number of defective items is counted in randomly selected batches in each shift. 4. The officer chooses every 15th batch off the line and counts the number of defective items in each chosen batch. 5. The officer divides the batches made in a month into which day they were made. Then certain days are picked and every batch made that day is counted to determine the number of defective items. 3. You wish to determine the GPA of students at your school. Describe what process you would go through to collect a sample if you use a simple random sample. 4. You wish to determine the GPA of students at your school. Describe what process you would go through to collect a sample if you use a stratified sample. 5. You wish to determine the GPA of students at your school. Describe what process you would go through to collect a sample if you use a systematic sample. 6. You wish to determine the GPA of students at your school. Describe what process you would go through to collect a sample if you use a cluster sample. 7. You wish to determine the GPA of students at your school. Describe what process you would go through to collect a sample if you use a convenience sample. Answer 1. 1. Cluster sample 2. Systematic sample 3. Convenience sample 4. Simple random sample 5. Stratified sample 3. See solutions 5. See solutions 7. See solutions
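If you already have a list of the individuals in R, the built-in sample() function will draw a simple random sample, and seq() can build a systematic sample. The roster below is hypothetical; this is only a sketch of the idea, not a full sampling plan.
roster <- paste("Student", 1:20)    # a made-up class list of 20 students
set.seed(42)                        # only so the example is reproducible
srs <- sample(roster, size=5)       # simple random sample: every group of 5 is equally likely
start <- sample(1:4, size=1)        # systematic sample: random starting place...
systematic <- roster[seq(from=start, to=20, by=4)]   # ...then every 4th name on the list
srs
systematic
A stratified sample could be built the same way by running sample() separately inside each stratum, and a cluster sample by using sample() to pick the clusters and then keeping every individual in the chosen clusters.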
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/01%3A_Statistical_Basics/1.02%3A_Sampling_Methods.txt
This section is an introduction to experimental design. This is how to actually design an experiment or a survey so that it is statistically sound. Experimental design is a very involved process, so this is just a small introduction. Guidelines for planning a statistical study 1. Identify the individuals that you are interested in. Realize that you can only make conclusions for these individuals. As an example, if you use a fertilizer on a certain genus of plant, you can’t say how the fertilizer will work on any other types of plants. However, if you diversify too much, then you may not be able to tell if there really is an improvement since you have too many factors to consider. 2. Specify the variable. You want to make sure this is something that you can measure, and make sure that you control for all other factors too. As an example, if you are trying to determine if a fertilizer works by measuring the height of the plants on a particular day, you need to make sure you can control how much fertilizer you put on the plants (which would be your treatment), and make sure that all the plants receive the same amount of sunlight and water and are kept at the same temperature. 3. Specify the population. This is important in order for you to know what conclusions you can make and what individuals you are making the conclusions about. 4. Specify the method for taking measurements or making observations. 5. Determine if you are taking a census or a sample. If taking a sample, decide on the sampling method. 6. Collect the data. 7. Use appropriate descriptive statistics methods and make decisions using appropriate inferential statistics methods. 8. Note any concerns you might have about your data collection methods and list any recommendations for the future. There are two types of studies: Definition \(1\) An observational study is when the investigator collects data merely by watching or asking questions. The investigator doesn’t change anything. Definition \(2\) An experiment is when the investigator changes a variable or imposes a treatment to determine its effect. Example \(1\) observational study or experiment State if the following is an observational study or an experiment. 1. Poll students to see if they favor increasing tuition. 2. Give some students a tutor to see if grades improve. Solution 1. This is an observational study. You are only asking a question. 2. This is an experiment. The tutor is the treatment. Many observational studies involve surveys. A survey uses questions to collect the data and needs to be written so that there is no bias. In an experiment, there are different options. Randomized Two-Treatment Experiment: In this experiment, there are two treatments, and individuals are randomly placed into the two groups. Either both groups get a treatment, or one group gets a treatment and the other gets either nothing or a placebo. The group getting either no treatment or the placebo is called the control group. The group getting the treatment is called the treatment group. The idea of the placebo is that a person thinks they are receiving a treatment, but in reality they are receiving a sugar pill or fake treatment. Doing this helps to account for the placebo effect, which is where a person’s mind makes their body respond to a treatment because they think they are taking the treatment when they are not really taking the treatment. Note that not every experiment needs a placebo, such as when using animals or plants. Also, you can’t always use a placebo or no treatment. 
As an example, if you are testing a new blood pressure medication you can’t give a person with high blood pressure a placebo or no treatment because of moral reasons. Randomized Block Design: A block is a group of subjects that are similar, but the blocks differ from each other. Then randomly assign treatments to subjects inside each block. An example would be separating students into full-time versus part-time, and then randomly picking a certain number full-time students to get the treatment and a certain number part-time students to get the treatment. This way some of each type of student gets the treatment and some do not. Rigorously Controlled Design: Carefully assign subjects to different treatment groups, so that those given each treatment are similar in ways that are important to the experiment. An example would be if you want to have a full-time student who is male, takes only night classes, has a full-time job, and has children in one treatment group, then you need to have the same type of student getting the other treatment. This type of design is hard to implement since you don’t know how many differentiations you would use, and should be avoided. Matched Pairs Design: The treatments are given to two groups that can be matched up with each other in some ways. One example would be to measure the effectiveness of a muscle relaxer cream on the right arm and the left arm of individuals, and then for each individual you can match up their right arm measurement with their left arm. Another example of this would be before and after experiments, such as weight before and weight after a diet. No matter which experiment type you conduct, you should also consider the following: Replication: Repetition of an experiment on more than one subject so you can make sure that the sample is large enough to distinguish true effects from random effects. It is also the ability for someone else to duplicate the results of the experiment. Blind Study: Blind study is where the individual does not know which treatment they are getting or if they are getting the treatment or a placebo. Double-Blind Study: Double-blind study is where neither the individual nor the researcher knows who is getting which treatment or who is getting the treatment and who is getting the placebo. This is important so that there can be no bias created by either the individual or the researcher. One last consideration is the time period that you are collecting the data over. There are three types of time periods that you can consider. Cross-Sectional Study: Data observed, measured, or collected at one point in time. Retrospective (or Case-Control) Study: Data collected from the past using records, interviews, and other similar artifacts. Prospective (or Longitudinal or Cohort) Study: Data collected in the future from groups sharing common factors. Homework Exercise \(1\) 1. You want to determine if cinnamon reduces a person’s insulin sensitivity. You give patients who are insulin sensitive a certain amount of cinnamon and then measure their glucose levels. Is this an observation or an experiment? Why? 2. You want to determine if eating more fruits reduces a person’s chance of developing cancer. You watch people over the years and ask them to tell you how many servings of fruit they eat each day. You then record who develops cancer. Is this an observation or an experiment? Why? 3. A researcher wants to evaluate whether countries with lower fertility rates have a higher life expectancy. 
They collect the fertility rates and the life expectancies of countries around the world. Is this an observation or an experiment? Why? 4. To evaluate whether a new fertilizer improves plant growth more than the old fertilizer, the fertilizer developer gives some plants the new fertilizer and others the old fertilizer. Is this an observation or an experiment? Why? 5. A researcher designs an experiment to determine if a new drug lowers the blood pressure of patients with high blood pressure. The patients are randomly selected to be in the study and they randomly pick which group to be in. Is this a randomized experiment? Why or why not? 6. Doctors trying to see if a new stint works longer for kidney patients, asks patients if they are willing to have one of two different stints put in. During the procedure the doctor decides which stent to put in based on which one is on hand at the time. Is this a randomized experiment? Why or why not? 7. A researcher wants to determine if diet and exercise together helps people lose weight over just exercising. The researcher solicits volunteers to be part of the study, randomly picks which volunteers are in the study, and then lets each volunteer decide if they want to be in the diet and exercise group or the exercise only group. Is this a randomized experiment? Why or why not? 8. To determine if lack of exercise reduces flexibility in the knee joint, physical therapists ask for volunteers to join their trials. They then randomly select the volunteers to be in the group that exercises and to be in the group that doesn’t exercise. Is this a randomized experiment? Why or why not? 9. You collect the weights of tagged fish in a tank. You then put an extra protein fish food in water for the fish and then measure their weight a month later. Are the two samples matched pairs or not? Why or why not? 10. A mathematics instructor wants to see if a computer homework system improves the scores of the students in the class. The instructor teaches two different sections of the same course. One section utilizes the computer homework system and the other section completes homework with paper and pencil. Are the two samples matched pairs or not? Why or why not? 11. A business manager wants to see if a new procedure improves the processing time for a task. The manager measures the processing time of the employees then trains the employees using the new procedure. Then each employee performs the task again and the processing time is measured again. Are the two samples matched pairs or not? Why or why not? 12. The prices of generic items are compared to the prices of the equivalent named brand items. Are the two samples matched pairs or not? Why or why not? 13. A doctor gives some of the patients a new drug for treating acne and the rest of the patients receive the old drug. Neither the patient nor the doctor knows who is getting which drug. Is this a blind experiment, double blind experiment, or neither? Why? 14. One group is told to exercise and one group is told to not exercise. Is this a blind experiment, double blind experiment, or neither? Why? 15. The researchers at a hospital want to see if a new surgery procedure has a better recovery time than the old procedure. The patients are not told which procedure that was used on them, but the surgeons obviously did know. Is this a blind experiment, double blind experiment, or neither? Why? 16. To determine if a new medication reduces headache pain, some patients are given the new medication and others are given a placebo. 
Neither the researchers nor the patients know who is taking the real medication and who is taking the placebo. Is this a blind experiment, double blind experiment, or neither? Why? 17. A new study is underway to track the eating and exercise patterns of people at different time periods in the future, and see who is afflicted with cancer later in life. Is this a cross-sectional study, a retrospective study, or a prospective study? Why? 18. To determine if a new medication reduces headache pain, some patients are given the new medication and others are given a placebo. The pain levels of a patient are then recorded. Is this a cross-sectional study, a retrospective study, or a prospective study? Why? 19. To see if there is a link between smoking and bladder cancer, patients with bladder cancer are asked if they currently smoke or if they smoked in the past. Is this a cross-sectional study, a retrospective study, or a prospective study? Why? 20. The Nurses Health Survey was a survey where nurses were asked to record their eating habits over a period of time, and their general health was recorded. Is this a cross-sectional study, a retrospective study, or a prospective study? Why? 21. Consider a question that you would like to answer. Describe how you would design your own experiment. Make sure you state the question you would like to answer, then determine if an experiment or an observation is to be done, decide if the question needs one or two samples, if two samples are the samples matched, if this is a randomized experiment, if there is any blinding, and if this is a cross-sectional, retrospective, or prospective study. Answer 1. Experiment 3. Observation 5. No, see solutions 7. No, see solutions 9. Yes, see solutions 11. Yes, see solutions 13. Double blind, see solutions 15. Blind, see solutions 17. Prospective, see solutions 19. Retrospective, see solutions 21. See solutions
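The random assignment in a randomized two-treatment experiment can also be done with R's sample() function. The subject list below is made up, so treat this as a sketch of the assignment step only, not a complete experimental protocol.
subjects <- paste("Subject", 1:50)   # 50 hypothetical volunteers
set.seed(7)
treatment_group <- sample(subjects, size=25)          # 25 randomly chosen for the treatment
control_group <- setdiff(subjects, treatment_group)   # the other 25 form the control group
# Equivalent one-step version: randomly shuffle the group labels
groups <- sample(rep(c("treatment", "control"), each=25))
table(groups)   # confirms 25 subjects in each group
For a randomized block design, you would run this same kind of assignment separately within each block.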
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/01%3A_Statistical_Basics/1.03%3A_Experimental_Design.txt
Many studies are conducted and conclusions are made. However, there are occasions where the study is not conducted in the correct manner or the conclusion is not correctly made based on the data. There are many things that you should question when you read a study. There are many reasons for the study to have bias in it. Bias is where a study may have a certain slant or preference for a certain result. The following are a list of some of the questions or issues you should consider to help decide if there is bias in a study. One of the first issues you should ask is who funded the study. If the entity that sponsored the study stands to gain either profits or notoriety from the results, then you should question the results. It doesn’t mean that the results are wrong, but you should scrutinize them on your own to make sure they are sound. As an example if a study says that genetically modified foods are safe, and the study was funded by a company that sells genetically modified food, then one may question the validity of the study. Since the company funds the study and their profits rely on people buying their food, there may be bias. An experiment could have lurking or confounding variables when you cannot rule out the possibility that the observed effect is due to some other variable rather than the factor being studied. An example of this is when you give fertilizer to some plants and no fertilizer to others, but the no fertilizer plants also are placed in a location that doesn’t receive direct sunlight. You won’t know if the plants that received the fertilizer grew taller because of the fertilizer or the sunlight. Make sure you design experiments to eliminate the effects of confounding variables by controlling all the factors that you can. Overgeneralization Overgeneralization is where you do a study on one group and then try to say that it will happen on all groups. An example is doing cancer treatments on rats. Just because the treatment works on rats does not mean it will work on humans. Another example is that until recently most FDA medication testing had been done on white males of a particular age. There is no way to know how the medication affects other genders, ethnic groups, age groups, and races. The new FDA guidelines stresses using individuals from different groups. Cause and Effect Cause and effect is where people decide that one variable causes the other just because the variables are related or correlated. Unless the study was done as an experiment where a variable was controlled, you cannot say that one variable caused the other. Most likely there is another variable that caused both. As an example, there is a relationship between number of drownings at the beach and ice cream sales. This does not mean that ice cream sales increasing causes people to drown. Most likely the cause for both increasing is the heat. Sampling Error This is the difference between the sample results and the true population results. This is unavoidable, and results in the fact that samples are different from each other. As an example, if you take a sample of 5 people’s height in your class, you will get 5 numbers. If you take another sample of 5 people’s heights in your class, you will likely get 5 different numbers. Nonsampling Error This is where the sample is collected poorly either through a biased sample or through error in measurements. Care should be taken to avoid this error. 
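You can see sampling error for yourself in R by drawing two samples from the same set of values and comparing the results. The heights below are simulated rather than real class data; the point is only that two samples from the same population give different answers.
set.seed(11)
class_heights <- rnorm(40, mean=67, sd=3)    # a made-up "population" of 40 heights in inches
sample1 <- sample(class_heights, size=5)     # one sample of 5 heights
sample2 <- sample(class_heights, size=5)     # a second sample of 5 heights
mean(class_heights)   # the population value
mean(sample1)         # one sample's estimate
mean(sample2)         # a different sample gives a different estimate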
Lastly, care should be taken in considering the difference between statistical significance and practical significance. This is a major issue in statistics. Something could be statistically significant, which means that a statistical test shows there is evidence for what you are trying to prove. However, in practice it may not mean much, or there may be other issues to consider. As an example, suppose you find that a new drug for high blood pressure does reduce the blood pressure of patients. When you look at the improvement, it actually doesn’t amount to a large difference. Even though statistically there is a change, it may not be worth marketing the product because it really isn’t that big of a change. Another consideration is that you find the blood pressure medication does improve a person’s blood pressure, but it has serious side effects or it costs a great deal for a prescription. In this case, it wouldn't be practical to use it. In both cases, the study is shown to be statistically significant, but practically you don’t want to use the medication. The main thing to remember in a statistical study is that the statistics are only part of the process. You also want to make sure that there is practical significance too. Surveys Surveys have their own areas of bias that can occur. A few of the issues with surveys are in the wording of the questions, the ordering of the questions, the manner in which the survey is conducted, and the response rate of the survey. The wording of the questions can cause hidden bias, which is where the questions are asked in a way that makes a person respond a certain way. An example is that a poll was done where people were asked if they believe that there should be an amendment to the constitution protecting a woman’s right to choose. About 60% of all people questioned said yes. Another poll was done where people were asked if they believe that there should be an amendment to the constitution protecting the life of an unborn child. About 60% of all people questioned said yes. These two questions deal with the same issue, though the answers imply opposite conclusions, and how the question was asked affected the outcome. The ordering of the questions can also cause hidden bias. An example of this is if you were asked if there should be a fine for texting while driving, but preceding that question is a question asking if you text while driving. By asking a person if they actually partake in the activity, that person now personalizes the question, and that might affect how they answer the next question about creating the fine. Non-response Non-response is where you send out a survey but not everyone returns the survey. You can calculate the response rate by dividing the number of returns by the number of surveys sent. Most response rates are around 30-50%. A response rate less than 30% is very poor, and the results of the survey are not valid. To reduce non-response, it is better to conduct the surveys in person, though these are very expensive. Phones are the next best way to conduct surveys, emails can be effective, and physical mailings are the least desirable way to conduct surveys. Voluntary response Voluntary response is where people are asked to respond via phone, email, or online. The problem with these is that only people who really care about the topic are likely to call or email. These surveys are not scientific, and the results from these surveys are not valid. Note: all studies involve volunteers. 
The difference between a voluntary response survey and a scientific study is that in a scientific study the researchers ask the individuals to be involved, while in a voluntary response survey the individuals become involved of their own choosing. Example \(1\): Bias in a Study Suppose a mathematics department at a community college would like to assess whether computer-based homework improves students’ test scores. They use computer-based homework in one classroom with one teacher and use traditional paper and pencil homework in a different classroom with a different teacher. The students using the computer-based homework had higher test scores. What is wrong with this experiment? Solution Since there were different teachers, you do not know if the better test scores are because of the teacher or the computer-based homework. A better design would be to have the same teacher teach both classes. The control group would utilize traditional paper and pencil homework and the treatment group would utilize the computer-based homework. Both classes would have the same teacher, and the students would be split between the two classes randomly. The only difference between the two groups should be the homework method. Of course, there is still variability between the students, but utilizing the same teacher will reduce any other confounding variables. Example \(2\): Cause and Effect Determine if one variable caused the change in the other variable. 1. Cinnamon was given to a group of people who have diabetes, and then their blood glucose levels were measured a time period later. All other factors for each person were kept the same. Their glucose levels went down. Did the cinnamon cause the reduction? 2. There is a link between spray on tanning products and lung cancer. Does that mean that spray on tanning products cause lung cancer? Solution 1. Since this was a study where the use of cinnamon was controlled, and all other factors were kept constant from person to person, any changes in glucose levels can be attributed to the use of cinnamon. 2. Since there is only a link, and not a study controlling the use of the tanning spray, you cannot say that increased use causes lung cancer. You can say that there is a link, and that there could be a cause, but you cannot say for sure that the spray causes the cancer. Example \(3\): Generalization 1. A researcher conducts a study on the use of ibuprofen on humans and finds that it is safe. Does that mean that all species can use ibuprofen? 2. Aspirin has been used for years to bring down fevers in humans. Originally it was tested on white males between the ages of 25 and 40 and found to be safe. Is it safe to give to everyone? Solution 1. No. Just because a drug is safe to use on one species doesn’t mean it is safe to use for all species. In fact, ibuprofen is toxic to cats. 2. No. Just because one age group can use it doesn’t mean it is safe to use for all age groups. In fact, there has been a link between giving aspirin to a child under the age of 19 who has a fever and Reye’s syndrome. Homework Exercise \(1\) 1. Suppose there is a study where a researcher conducts an experiment to show that deep breathing exercises help to lower blood pressure. The researcher takes two groups of people and has one group perform deep breathing exercises and a series of aerobic exercises every day, while the other group is asked to refrain from any exercises. 
The researcher found that the group performing the deep breathing exercises and the aerobic exercises had lower blood pressure. Discuss any issue with this study. 2. Suppose a car dealership offers a low interest rate and a longer payoff period to customers or a high interest rate and a shorter payoff period to customers, and most customers choose the low interest rate and longer payoff period, does that mean that most customers want a lower interest rate? Explain. 3. Over the years it has been said that coffee is bad for you. When looking at the studies that have shown that coffee is linked to poor health, you will see that people who tend to drink coffee don’t sleep much, tend to smoke, don’t eat healthy, and tend to not exercise. Can you say that the coffee is the reason for the poor health or is there a lurking variable that is the actual cause? Explain. 4. When researchers were trying to figure out what caused polio, they saw a connection between ice cream sales and polio. As ice cream sales increased so did the incident of polio. Does that mean that eating ice cream causes polio? Explain your answer. 5. There is a positive correlation between having a discussion of gun control, which usually occur after a mass shooting, and the sale of guns. Does that mean that the discussion of gun control increases the likelihood that people will buy more guns? Explain. 6. There is a study that shows that people who are obese have a vitamin D deficiency. Does that mean that obesity causes a deficiency in vitamin D? Explain. 7. A study was conducted that shows that polytetrafluoroethylene (PFOA) (Teflon is made from this chemical) has an increase risk of tumors in lab mice. Does that mean that PFOA’s have an increased risk of tumors in humans? Explain. 8. Suppose a telephone poll is conducted by contacting U.S. citizens via landlines about their view of gay marriage. Suppose over 50% of those called do not support gay marriage. Does that mean that you can say over 50% of all people in the U.S. do not support gay marriage? Explain. 9. Suppose that it can be shown to be statistically significant that a smaller percentage of the people are satisfied with your business. The percentage before was 87% and is now 85%. Do you change how you conduct business? Explain? 10. You are testing a new drug for weight loss. You find that the drug does in fact statistically show a weight loss. Do you market the new drug? Why or why not? 11. There was an online poll conducted about whether the mayor of Auckland, New Zealand, should resign due to an affair. The majority of people participating said he should. Should the mayor resign due to the results of this poll? Explain. 12. An online poll showed that the majority of Americans believe that the government covered up events of 9/11. Does that really mean that most Americans believe this? Explain. 13. A survey was conducted at a college asking all employees if they were satisfied with the level of security provided by the security department. Discuss how the results of this question could be biased. 14. An employee survey says, “Employees at this institution are very satisfied with working here. Please rate your satisfaction with the institution.” Discuss how this question could create bias. 15. A survey has a question that says, “Most people are afraid that they will lose their house due to economic collapse. Choose what you think is the biggest issue facing the nation today. 1. Economic collapse 2. Foreign policy issues 3. 
Environmental concerns.” Discuss how this question could create bias. 16. A survey says, “Please rate the career of Roberto Clemente, one of the best right field baseball players in the world.” Discuss how this question could create bias. Answer See solutions
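To make the difference between statistical and practical significance concrete, here is a small simulated sketch in R. It uses a hypothesis test, t.test(), that is not introduced until later in the book, and the satisfaction scores are made up, so treat it purely as an illustration: with a very large sample, even a tiny, practically meaningless difference shows up as statistically significant.
set.seed(3)
before <- rnorm(100000, mean=87, sd=10)     # simulated satisfaction scores before a change
after <- rnorm(100000, mean=86.8, sd=10)    # scores afterward, only 0.2 points lower on average
t.test(before, after)$p.value    # tiny p-value: the drop is statistically significant
mean(before) - mean(after)       # about 0.2 points: practically negligible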
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/01%3A_Statistical_Basics/1.04%3A_How_Not_to_Do_Statistics.txt
In chapter 1, you were introduced to the concepts of population, which again is a collection of all the measurements from the individuals of interest. Remember, in most cases you can’t collect the entire population, so you have to take a sample. Thus, you collect data either through a sample or a census. Now you have a large number of data values. What can you do with them? No one likes to look at just a set of numbers. One thing is to organize the data into a table or graph. Ultimately though, you want to be able to use that graph to interpret the data, to describe the distribution of the data set, and to explore different characteristics of the data. The characteristics that will be discussed in this chapter and the next chapter are: 1. Center: middle of the data set, also known as the average. 2. Variation: how much the data varies. 3. Distribution: shape of the data (symmetric, uniform, or skewed). 4. Qualitative data: analysis of the data 5. Outliers: data values that are far from the majority of the data. 6. Time: changing characteristics of the data over time. This chapter will focus mostly on using the graphs to understand aspects of the data, and not as much on how to create the graphs. There is technology that will create most of the graphs, though it is important for you to understand the basics of how to create them. 02: Graphical Descriptions of Data Remember, qualitative data are words describing a characteristic of the individual. There are several different graphs that are used for qualitative data. These graphs include bar graphs, Pareto charts, and pie charts. Pie charts and bar graphs are the most common ways of displaying qualitative data. A spreadsheet program like Excel can make both of them. The first step for either graph is to make a frequency or relative frequency table. A frequency table is a summary of the data with counts of how often a data value (or category) occurs. Example \(1\) Suppose you have the following data for which type of car students at a college drive? Ford, Chevy, Honda, Toyota, Toyota, Nissan, Kia, Nissan, Chevy, Toyota, Honda, Chevy, Toyota, Nissan, Ford, Toyota, Nissan, Mercedes, Chevy, Ford, Nissan, Toyota, Nissan, Ford, Chevy, Toyota, Nissan, Honda, Porsche, Hyundai, Chevy, Chevy, Honda, Toyota, Chevy, Ford, Nissan, Toyota, Chevy, Honda, Chevy, Saturn, Toyota, Chevy, Chevy, Nissan, Honda, Toyota, Toyota, Nissan Solution A listing of data is too hard to look at and analyze, so you need to summarize it. First you need to decide the categories. In this case it is relatively easy; just use the car type. However, there are several cars that only have one car in the list. In that case it is easier to make a category called other for the ones with low values. Now just count how many of each type of cars there are. For example, there are 5 Fords, 12 Chevys, and 6 Hondas. This can be put in a frequency distribution: Cateogry Frequency Ford 5 Chevy 12 Honda 6 Toyota 12 Nissan 10 Other 5 Total 50 Table \(1\): Frequency Table for Type of Car Data The total of the frequency column should be the number of observations in the data. Since raw numbers are not as useful to tell other people it is better to create a third column that gives the relative frequency of each category. This is just the frequency divided by the total. As an example for Ford category: relative frequency \(= \dfrac{5}{50} = 0.10\) This can be written as a decimal, fraction, or percent. 
You now have a relative frequency distribution: Category Frequency Relative Frequency Ford 5 0.10 Chevy 12 0.24 Honda 6 0.12 Toyota 12 0.24 Nissan 10 0.20 Other 5 0.10 Total 50 1.00 Table \(2\): Relative Frequency Table for Type of Car Data The relative frequency column should add up to 1.00. It might be off a little due to rounding errors. Now that you have the frequency and relative frequency table, it would be good to display this data using a graph. There are several different types of graphs that can be used: bar chart, pie chart, and Pareto charts. Bar graphs or charts consist of the frequencies on one axis and the categories on the other axis. Then you draw rectangles for each category with a height (if frequency is on the vertical axis) or length (if frequency is on the horizontal axis) that is equal to the frequency. All of the rectangles should be the same width, and there should be equal-width gaps between the bars. Example \(2\) drawing a bar graph Draw a bar graph of the data in Example \(1\). Solution Category Frequency Relative Frequency Ford 5 0.10 Chevy 12 0.24 Honda 6 0.12 Toyota 12 0.24 Nissan 10 0.20 Other 5 0.10 Total 50 1.00 Table \(2\): Relative Frequency Table for Type of Car Data Put the frequency on the vertical axis and the category on the horizontal axis. Then just draw a box above each category whose height is the frequency. All graphs are drawn using \(R\). The command in \(R\) to create a bar graph is: variable<-c(type in percentages or frequencies for each class with commas in between values) barplot(variable, names.arg=c("type in name of 1st category", "type in name of 2nd category",…,"type in name of last category"), xlab="type in label for x-axis", ylab="type in label for y-axis", ylim=c(0,number above maximum y value), main="type in title", col="type in a color") – creates a bar graph of the data in a color if you want. For this example the command would be: car<-c(5, 12, 6, 12, 10, 5) barplot(car, names.arg=c("Ford", "Chevy", "Honda", "Toyota", "Nissan", "Other"), xlab="Type of Car", ylab="Frequency", ylim=c(0,12), main="Type of Car Driven by College Students", col="blue") From the graph, you can see that Toyota and Chevy are the most popular cars, with Nissan not far behind. Of the car types you can identify from the graph, Ford appears to be the least popular, though each individual car type in the other category is even less common than Ford. Some key features of a bar graph: • Equal spacing on each axis. • Bars are the same width. • There should be labels on each axis and a title for the graph. • There should be a scaling on the frequency axis and the categories should be listed on the category axis. • The bars don’t touch. You can also draw a bar graph using relative frequency on the vertical axis. This is useful when you want to compare two samples with different sample sizes. The relative frequency graph and the frequency graph should look the same, except for the scaling on the frequency axis. Using R, the command would be: car<-c(0.1, 0.24, 0.12, 0.24, 0.2, 0.1) barplot(car, names.arg=c("Ford", "Chevy", "Honda", "Toyota", "Nissan", "Other"), xlab="Type of Car", ylab="Relative Frequency", main="Type of Car Driven by College Students", col="blue", ylim=c(0,.25)) Another type of graph for qualitative data is a pie chart. A pie chart is where you have a circle and you divide pieces of the circle into pie shapes that are proportional to the size of the relative frequency. There are 360 degrees in a full circle. 
Relative frequency is just the percentage as a decimal. All you have to do to find the angle is multiply the relative frequency by 360 degrees. Remember that 180 degrees is half a circle and 90 degrees is a quarter of a circle. Example \(3\) drawing a pie chart Draw a pie chart of the data in Example \(1\). First you need the relative frequencies. Category Frequency Relative Frequency Ford 5 0.10 Chevy 12 0.24 Honda 6 0.12 Toyota 12 0.24 Nissan 10 0.20 Other 5 0.10 Total 50 1.00 Table \(2\): Relative Frequency Table for Type of Car Data Solution Then you multiply each relative frequency by 360° to obtain the angle measure for each category. Category Relative Frequency Angle (in degrees (°)) Ford 0.10 36.0 Chevy 0.24 86.4 Honda 0.12 43.2 Toyota 0.24 86.4 Nissan 0.20 72.0 Other 0.10 36.0 Total 1.00 360.0 Table \(3\): Pie Chart Angles for Type of Car Data Now draw the pie chart using a compass, protractor, and straight edge. Technology is preferred. If you use technology, there is no need for the relative frequencies or the angles. You can use R to graph the pie chart. In R, the commands would be: pie(variable, labels=c("type in name of 1st category", "type in name of 2nd category",…,"type in name of last category"), main="type in title", col=rainbow(number of categories)) – creates a pie chart with a title and a rainbow of colors for each category. For this example, the commands would be: car<-c(5, 12, 6, 12, 10, 5) pie(car, labels=c("Ford, 10%", "Chevy, 24%", "Honda, 12%", "Toyota, 24%", "Nissan, 20%", "Other, 10%"), main="Type of Car Driven by College Students", col=rainbow(6)) As you can see from the graph, Toyota and Chevy are more popular, while the cars in the other category are liked the least. Of the cars that you can identify from the graph, Ford is liked less than the others. Pie charts are useful for comparing sizes of categories. Bar charts show similar information. It doesn’t really matter which one you use; it is a personal preference, and it also depends on what information you are trying to convey. However, pie charts are best when you only have a few categories and the data can be expressed as a percentage. The data doesn’t have to be percentages to draw the pie chart, but if a data value can fit into multiple categories, you cannot use a pie chart. As an example, if you are asking people what their favorite national park is, and you say to pick the top three choices, then the total number of answers can add up to more than 100% of the people involved. So you cannot use a pie chart to display the favorite national park. A third type of qualitative data graph is a Pareto chart, which is just a bar chart with the bars sorted so that the highest frequencies are on the left. Here is the Pareto chart for the data in Example \(1\). The advantage of Pareto charts is that you can visually see the answers ordered from the most popular to the least popular. This is especially useful in business applications, where you want to know what services your customers like the most, which processes result in the most injuries, which issues employees find most important, and other questions like these. There are many other types of graphs that can be used on qualitative data. There are spreadsheet software packages that will create most of them, and it is better to look at them to see what can be done. It depends on your data as to which may be useful. The next example illustrates one of these types, known as a multiple bar graph. 
Example \(4\) multiple bar graph In the Wii Fit game, you can do four different types of exercises: yoga, strength, aerobic, and balance. The Wii system keeps track of how many minutes you spend on each of the exercises everyday. The following graph is the data for Dylan over one week time period. Discuss any indication you can infer from the graph. Solution It appears that Dylan spends more time on balance exercises than on any other exercises on any given day. He seems to spend less time on strength exercises on a given day. There are several days when the amount of exercise in the different categories is almost equal. The usefulness of a multiple bar graph is the ability to compare several different categories over another variable, in Example \(4\) the variable would be time. This allows a person to interpret the data with a little more ease. Homework Exercise \(1\) 1. Eyeglassomatic manufactures eyeglasses for different retailers. The number of lenses for different activities is in Example \(4\). Activity Grind Multicoat Assemble Make frames Receive finished Unknown Number of lenses 18872 12105 4333 25880 26991 1508 Table \(4\): Data for Eyeglassomatic Grind means that they ground the lenses and put them in frames, multicoat means that they put tinting or scratch resistance coatings on lenses and then put them in frames, assemble means that they receive frames and lenses from other sources and put them together, make frames means that they make the frames and put lenses in from other sources, receive finished means that they received glasses from other source, and unknown means they do not know where the lenses came from. Make a bar chart and a pie chart of this data. State any findings you can see from the graphs. 2. To analyze how Arizona workers ages 16 or older travel to work the percentage of workers using carpool, private vehicle (alone), and public transportation was collected. Create a bar chart and pie chart of the data in Example \(5\). State any findings you can see from the graphs. Transportation type Percentage Carpool 11.6% Private Vehicle (Alone) 75.8% Public Transportation 2.0% Other 10.6% Table \(5\): Data of Travel Mode for Arizona Workers 3. The number of deaths in the US due to carbon monoxide (CO) poisoning from generators from the years 1999 to 2011 are in table #2.1.6 (Hinatov, 2012). Create a bar chart and pie chart of this data. State any findings you see from the graphs. Region Number of Deaths from CO While Using a Generator Urban Core 401 Sub-Urban 97 Large Rural 86 Small Rural/Isolated 111 Table \(6\): Data of Number of Deaths Due to CO Poisoning 4. In Connecticut households use gas, fuel oil, or electricity as a heating source. Example \(7\) shows the percentage of households that use one of these as their principle heating sources ("Electricity usage," 2013), ("Fuel oil usage," 2013), ("Gas usage," 2013). Create a bar chart and pie chart of this data. State any findings you see from the graphs. Heating Source Percentage Electricity 15.3% Fuel Oil 46.3% Gas 35.6% Other 2.85 Table \(7\): Data of Household Heating Sources 5. Eyeglassomatic manufactures eyeglasses for different retailers. They test to see how many defective lenses they made during the time period of January 1 to March 31. Example \(8\) gives the defect and the number of defects. Create a Pareto chart of the data and then describe what this tells you about what causes the most defects. 
Defect type Number of defects Scratch 5865 Right shaped - small 4613 Flaked 1992 Wrong axis 1838 Chamfer wrong 1596 Crazing, cracks 1546 Wrong shape 1485 Wrong PD 1398 Spots and bubbles 1371 Wrong height 1130 Right shape - big 1105 Lost in lab 976 Spots/bubble - intern 976 Table \(8\): Data of Defect Type 6. People in Bangladesh were asked to state what type of birth control method they use. The percentages are given in Example \(9\) ("Contraceptive use," 2013). Create a Pareto chart of the data and then state any findings you can from the graph. Method Percentage Condom 4.50% Pill 28.50% Periodic Abstinence 4.90% Injection 7.00% Female Sterilization 5.00% IUD 0.90% Male Sterilization 0.70% Withdrawal 2.90% Other Modern Methods 0.70% Other Traditional Methods 0.60% Table \(9\): Data of Birth Control Type 7. The percentages of people who use certain contraceptives in Central American countries are displayed in Graph 2.1.6 ("Contraceptive use," 2013). State any findings you can from the graph. Answer See solutions
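The multiple bar graph in Example \(4\) was not accompanied by R commands. One possible way to build such a graph is barplot() with beside=TRUE on a matrix; the minutes below are made up for illustration, since the values behind the Example \(4\) graph are not listed in the text.
exercise<-c("Yoga", "Strength", "Aerobic", "Balance")
days<-c("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")
minutes<-matrix(c(10, 5, 12, 20,
                  15, 8, 10, 25,
                  12, 6, 15, 22,
                   8, 5, 11, 18,
                  14, 7,  9, 24,
                  10, 6, 13, 21,
                  12, 4, 10, 23), nrow=4, dimnames=list(exercise, days))   # hypothetical minutes; each column of the matrix is one day
barplot(minutes, beside=TRUE, col=rainbow(4), legend.text=exercise, main="Minutes of Each Wii Fit Exercise per Day (made-up data)", xlab="Day", ylab="Minutes")
Each group of bars is one day, and within a group the bars are the four exercise types, so you can compare the categories over time just as in Example \(4\).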
The graph for quantitative data looks similar to a bar graph, except there are some major differences. First, in a bar graph the categories can be put in any order on the horizontal axis. There is no set order for these data values. You can’t say how the data is distributed based on the shape, since the shape can change just by putting the categories in different orders. With quantitative data, the data are in specific orders, since you are dealing with numbers. With quantitative data, you can talk about a distribution, since the shape only changes a little bit depending on how many categories you set up. This is called a frequency distribution. This leads to the second difference from bar graphs. In a bar graph, the categories that you made in the frequency table were determined by you. In quantitative data, the categories are numerical categories, and the numbers are determined by how many categories (or what are called classes) you choose. If two people have the same number of categories, then they will have the same frequency distribution. Whereas in qualitative data, there can be many different categories depending on the point of view of the author. The third difference is that the categories touch with quantitative data, and there will be no gaps in the graph. The reason that bar graphs have gaps is to show that the categories do not continue on, like they do in quantitative data. Since the graph for quantitative data is different from qualitative data, it is given a new name. The name of the graph is a histogram. To create a histogram, you must first create the frequency distribution. The idea of a frequency distribution is to take the interval that the data spans and divide it up into equal subintervals called classes. Summary of the Steps Involved in Making a Frequency Distribution 1. Find the range = largest value – smallest value 2. Pick the number of classes to use. Usually the number of classes is between five and twenty. Five classes are used if there are a small number of data points and twenty classes if there are a large number of data points (over 1000 data points). (Note: categories will now be called classes from now on.) 3. Class width = $\dfrac{\text { range }}{\# \text { classes }}$ Always round up to the next integer (even if the answer is already a whole number go to the next integer). If you don’t do this, your last class will not contain your largest data value, and you would have to add another class just for it. If you round up, then your largest data value will fall in the last class, and there are no issues. 4. Create the classes. Each class has limits that determine which values fall in each class. To find the class limits, set the smallest value as the lower class limit for the first class. Then add the class width to the lower class limit to get the next lower class limit. Repeat until you get all the classes. The upper class limit for a class is one less than the lower limit for the next class. 5. In order for the classes to actually touch, then one class needs to start where the previous one ends. This is known as the class boundary. To find the class boundaries, subtract 0.5 from the lower class limit and add 0.5 to the upper class limit. 6. Sometimes it is useful to find the class midpoint. The process is Midpoint $=\dfrac{\text { lower limit +upper limit }}{2}$ 7. To figure out the number of data points that fall in each class, go through each data value and see which class boundaries it is between. 
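Before moving on, here is a small R sketch of steps 1 through 4 using made-up data (steps 5 through 7 are then handled by the cut() and table() commands shown with the worked example below).
x<-c(12, 47, 31, 8, 25, 39, 44, 19, 28, 35)   # made-up data
rng<-max(x)-min(x)              # step 1: range = 47 - 8 = 39
k<-5                            # step 2: number of classes chosen
width<-floor(rng/k)+1           # step 3: round up to the next integer, even when range/k is already whole; here width = 8
lower<-min(x)+width*(0:(k-1))   # step 4: lower class limits 8, 16, 24, 32, 40
upper<-lower+width-1            #         upper class limits 15, 23, 31, 39, 47
cbind(lower, upper)             # the largest data value, 47, falls in the last class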
Utilizing tally marks may be helpful in counting the data values. The frequency for a class is the number of data values that fall in the class. Note The above description is for data values that are whole numbers. If you data value has decimal places, then your class width should be rounded up to the nearest value with the same number of decimal places as the original data. In addition, your class boundaries should have one more decimal place than the original data. As an example, if your data have one decimal place, then the class width would have one decimal place, and the class boundaries are formed by adding and subtracting 0.05 from each class limit. Example $1$ creating a frequency table Example $1$ contains the amount of rent paid every month for 24 students from a statistics course. Make a relative frequency distribution using 7 classes. 1500 1350 350 1200 850 900 1500 1150 1500 900 1400 1100 1250 600 610 960 890 1325 900 800 2550 495 1200 690 Table $1$: Data of Monthly Rent Solution 1. Find the range: largest value - smallest value $= 2550-350=2200$ 2. Pick the number of classes: The directions to say to use 7 classes. 3. Find the class width: width $=\dfrac{\text { range }}{7}=\dfrac{2200}{7} \approx 314.286$ Round up to 315 $\color{text}{Always round up to the next integer even if the width is already an integer.}$ 4. Find the class limits: Start at the smallest value. This is the lower class limit for the first class. Add the width to get the lower limit of the next class. Keep adding the width to get all the lower limits. $350+315=665,665+315=980,980+315=1295 \rightleftharpoons$, The upper limit is one less than the next lower limit: so for the first class the upper class limit would be $665-1=664$. When you have all 7 classes, make sure the last number, in this case the 2550, is at least as large as the largest value in the data. If not, you made a mistake somewhere. 5. Find the class boundaries: Subtract 0.5 from the lower class limit to get the class boundaries. Add 0.5 to the upper class limit for the last class's boundary. $350-0.5=349.5, \quad 665-0.5=664.5,\quad 980-0.5=979.5, \quad 1295-0.5=1294.5 \rightleftharpoons$ Every value in the data should fall into exactly one of the classes. No data values should fall right on the boundary of two classes. 6. Find the class midpoints: midpoint $=\dfrac{\text { lower limit }+\text { upper limit }}{2}$ $\dfrac{350+664}{2}=507, \dfrac{665+979}{2}=822, \rightleftharpoons$ 7. Tally and find the frequency of the data: Go through the data and put a tally mark in the appropriate class for each piece of data by looking to see which class boundaries the data value is between. Fill in the frequency by changing each of the tallies into a number. Class Limits Class Boundaries Class Midpoint Tally Frequency 350-664 349.5-664.5 507 |||| 4 665-979 664.5-979.5 822 $\cancel{||||}$ ||| 8 980-1294 979.5-1294.5 1137 $\cancel{||||}$ 5 1295-1609 1294.5-1609.5 1452 $\cancel{||||}$ | 6 1610-1924 1609.5-1924.5 1767   0 1925-2239 1924.5-2239.5 2082   0 2240-2554 2239.5-2554.5 2397 | 1 Table $2$: Frequency Distribution for Monthly Rent Make sure the total of the frequencies is the same as the number of data points. R command for a frequency distribution: To create a frequency distribution: summary(variable) – so you can find out the minimum and maximum. breaks = seq(min, number above max, by = class width) breaks – so you can see the breaks that R made. variable.cut=cut(variable, breaks, right=FALSE) – this will cut up the data into the classes. 
variable.freq=table(variable.cut) – this will create the frequency table. variable.freq – this will display the frequency table. For the data in Example $1$, the R commands would be: rent<-c(1500, 1350, 350, 1200, 850, 900, 1500, 1150, 1500, 900, 1400, 1100, 1250, 600, 610, 960, 890, 1325, 900, 800, 2550, 495, 1200, 690) summary(rent) Output: $\begin{array}{cccccc}{\text{Min.}} & {1\text{st Qu.}} & {\text{Median}} & {\text{Mean}} & {3\text{rd Qu.}} & {\text{Max.}} \\ {350.0} & {837.5} & {1030.0} & {1082.0} & {1331.0} & {2550.0} \end{array}$ breaks=seq(350, 3000, by = 315) breaks Output: [1] 350 665 980 1295 1610 1925 2240 2555 2870 These are the lower limits of the frequency distribution. You can now write your own table. rent.cut=cut(rent, breaks, right=FALSE) rent.freq=table(rent.cut) rent.freq Output: rent.cut $\begin{array}{cccccccc}{[350,665)} & {[665,980)} & {[980,1.3e+03)} & {[1.3e+03,1.61e+03)} & {[1.61e+03,1.92e+03)} & {[1.92e+03,2.24e+03)} & {[2.24e+03,2.56e+03)} & {[2.56e+03,2.87e+03)} \\ {4} & {8} & {5} & {6} & {0} & {0} & {1} & {0}\end{array}$ It is difficult to determine the basic shape of the distribution by looking at the frequency distribution. It would be easier to look at a graph. The graph of a frequency distribution for quantitative data is called a frequency histogram or just histogram for short. Definition $1$: Histogram A histogram is a graph of the frequencies on the vertical axis and the class boundaries on the horizontal axis. Rectangles where the height is the frequency and the width is the class width are drawn for each class. Example $2$ drawing a histogram Draw a histogram for the distribution from Example $1$. Solution The class boundaries are plotted on the horizontal axis and the frequencies are plotted on the vertical axis. You can plot the midpoints of the classes instead of the class boundaries. Graph 2.2.1 was created using the midpoints because it was easier to do with the software that created the graph. In R, the command is hist(variable, col="type in what color you want", breaks, main="type the title you want", xlab="type the label you want for the horizontal axis", ylim=c(0, number above maximum frequency)) – produces a histogram with the specified color, using the breaks you made for the frequency distribution. For this example, the command in R would be (assuming you created a frequency distribution in R as described previously): hist(rent, col="blue", breaks, right=FALSE, main="Monthly Rent Paid by Students", ylim=c(0,8), xlab="Monthly Rent ($)") If no frequency distribution was created before the histogram, then the command would be: hist(variable, col="type in what color you want", number of classes, main="type the title you want", xlab="type the label you want for the horizontal axis") – produces a histogram with the specified color and number of classes (though the number of classes is an estimate, and R will create a number of classes near this value). For this example, the R command without a frequency distribution created first would be: hist(rent, col="blue", 7, main="Monthly Rent Paid by Students", xlab="Monthly Rent ($)") Notice the graph has the axes labeled, the tick marks are labeled on each axis, and there is a title. Reviewing the graph you can see that most of the students pay around $750 per month for rent, with about $1500 being the other common value. You can see from the graph that most students pay between $600 and $1600 per month for rent. Of course, these values are just estimates from the graph.
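A small side note on the frequency table output shown above: class labels such as [980,1.3e+03) are just R's default of three significant digits in cut(). If you prefer full numbers, cut() has a dig.lab argument (a standard argument of cut(), though not one the textbook uses), which you could add to the commands above:
rent.cut=cut(rent, breaks, right=FALSE, dig.lab=4)   # dig.lab=4 prints labels such as [980,1295) instead of [980,1.3e+03)
rent.freq=table(rent.cut)
rent.freq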
There is a large gap between the $1500 class and the highest data value. This seems to say that one student is paying a great deal more than everyone else. This value could be considered an outlier. An outlier is a data value that is far from the rest of the values. It may be an unusual value or a mistake. It is a data value that should be investigated. In this case, the student lives in a very expensive part of town, thus the value is not a mistake, and is just very unusual. There are other aspects that can be discussed, but first some other concepts need to be introduced. Frequencies are helpful, but understanding the relative size each class is to the total is also useful. To find this you can divide the frequency by the total to create a relative frequency. If you have the relative frequencies for all of the classes, then you have a relative frequency distribution. Definition $2$ Relative Frequency Distribution A variation on a frequency distribution is a relative frequency distribution. Instead of giving the frequencies for each class, the relative frequencies are calculated. Relative frequency $=\dfrac{\text { frequency }}{\# \text { of data points }}$ This gives you percentages of data that fall in each class. Example $3$ creating a relative frequency table Find the relative frequency for the grade data. Solution From Example $1$, the frequency distribution is reproduced in Example $2$. Class Limits Class Boundaries Class Midpoint Frequency 350-664 349.5-664.5 507 4 665-979 664.5-979.5 822 8 980-1294 979.5-1294.5 1127 5 1295-1609 1294.5-1609.5 1452 6 1610-1924 1609.5-1924.5 1767 0 1925-2239 1924.5-2239.5 2082 0 2240-2554 2239.5-2554.5 2397 1 Table $2$: Frequency Distribution for Monthly Rent Divide each frequency by the number of data points. $\dfrac{4}{24}=0.17, \dfrac{8}{24}=0.33, \dfrac{5}{24}=0.21, \rightleftharpoons$ Class Limits Class Boundaries Class Midpoint Frequency Relative Frequency 350-664 349.5-664.5 507 4 0.17 665-979 664.5-979.5 822 8 0.33 980-1294 979.5-1294.5 1127 5 0.21 1295-1609 1294.5-1609.5 1452 6 0.25 1610-1924 1609.5-1924.5 1767 0 0 1925-2239 1924.5-2239.5 2082 0 0 2240-2554 2239.5-2554.5 2397 1 0.04 Total 24 1 Table $3$: Relative Frequency Distribution for Monthly Rent The relative frequencies should add up to 1 or 100%. (This might be off a little due to rounding errors.) The graph of the relative frequency is known as a relative frequency histogram. It looks identical to the frequency histogram, but the vertical axis is relative frequency instead of just frequencies. Example $4$ drawing a relative frequency histogram Draw a relative frequency histogram for the grade distribution from Example $1$. Solution The class boundaries are plotted on the horizontal axis and the relative frequencies are plotted on the vertical axis. (This is not easy to do in R, so use another technology to graph a relative frequency histogram.) Notice the shape is the same as the frequency distribution. Another useful piece of information is how many data points fall below a particular class boundary. As an example, a teacher may want to know how many students received below an 80%, a doctor may want to know how many adults have cholesterol below 160, or a manager may want to know how many stores gross less than$2000 per day. This is known as a cumulative frequency. If you want to know what percent of the data falls below a certain class boundary, then this would be a cumulative relative frequency. 
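Returning briefly to the relative frequency histogram of Example $4$ before continuing with cumulative frequencies below: the text suggests other technology, but one possible workaround in R (a sketch, not the textbook's approach) is to divide the frequency table by the sample size and draw the result with barplot().
rent<-c(1500, 1350, 350, 1200, 850, 900, 1500, 1150, 1500, 900, 1400, 1100, 1250, 600, 610, 960, 890, 1325, 900, 800, 2550, 495, 1200, 690)
breaks=seq(350, 3000, by = 315)
rent.relfreq<-table(cut(rent, breaks, right=FALSE, dig.lab=4))/length(rent)   # relative frequency = frequency / number of data points
barplot(rent.relfreq, space=0, col="blue", main="Monthly Rent Paid by Students", xlab="Monthly Rent ($)", ylab="Relative Frequency")
The space=0 argument makes the bars touch, so the graph has the same shape as the frequency histogram; only the vertical axis changes.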
For cumulative frequencies you are finding how many data values fall below the upper class limit. To create a cumulative frequency distribution, count the number of data points that are below the upper class boundary, starting with the first class and working up to the top class. The last upper class boundary should have all of the data points below it. Also include the number of data points below the lowest class boundary, which is zero. Example $5$ creating a cumulative frequency distribution Create a cumulative frequency distribution for the data in Example $1$. Solution The frequency distribution for the data is in Example $2$. Class Limits Class Boundaries Class Midpoint Frequency 350-664 349.5-664.5 507 4 665-979 664.5-979.5 822 8 980-1294 979.5-1294.5 1127 5 1295-1609 1294.5-1609.5 1452 6 1610-1924 1609.5-1924.5 1767 0 1925-2239 1924.5-2239.5 2082 0 2240-2554 2239.5-2554.5 2397 1 Table $2$: Frequency Distribution for Monthly Rent Now ask yourself how many data points fall below each class boundary. Below 349.5, there are 0 data points. Below 664.5 there are 4 data points, below 979.5, there are 4 + 8 = 12 data points, below 1294.5 there are 4 + 8 + 5 = 17 data points, and continue this process until you reach the upper class boundary. This is summarized in Example $4$. To produce cumulative frequencies in R, you need to have performed the commands for the frequency distribution. Once you have complete that, then use variable.cumfreq=cumsum(variable.freq) – creates the cumulative frequencies for the variable cumfreq0=c(0,variable.cumfreq) – creates a cumulative frequency table for the variable. cumfreq0 – displays the cumulative frequency table. For this example the command would be: rent.cumfreq=cumsum(rent.freq) cumfreq0=c(0,rent.cumfreq) cumfreq0 Output: $\begin{array}{ccccccccc}{}&{[350,665)} & {[665,980)} & {[980,1.3e+03)}& {[1.3e+03, 1.61e+03)}&{[1.61e+03, 1.92e+03)}&{[1.92e+03,2.24e+03)}&{[2.24e+03, 2.56e+03)}&{[2.56e+03, 2.87e+03)} \ {0}&{4} & {12}&{17}&{23}&{23}&{23}&{24}&{24}\end{array}$ Now type this into a table. See Example $4$. Class Limits Class Boundaries Class Midpoint Frequency Cumulative Frequency 350-664 349.5-664.5 507 4 4 665-979 664.5-979.5 822 8 12 980-1294 979.5-1294.5 1127 5 17 1295-1609 1294.5-1609.5 1452 6 23 1610-1924 1609.5-1924.5 1767 0 23 1925-2239 1924.5-2239.5 2082 0 23 2240-2554 2239.5-2554.5 2397 1 24 Table $4$: Cumulative Distribution for Monthly Rent Again, it is hard to look at the data the way it is. A graph would be useful. The graph for cumulative frequency is called an ogive (o-jive). To create an ogive, first create a scale on both the horizontal and vertical axes that will fit the data. Then plot the points of the class upper class boundary versus the cumulative frequency. Make sure you include the point with the lowest class boundary and the 0 cumulative frequency. Then just connect the dots. Example $6$ drawing an ogive Draw an ogive for the data in Example $1$. 
Solution In R, the commands would be: plot(breaks, cumfreq0, main="title you want to use", xlab="label you want to use", ylab="label you want to use", ylim=c(0, number above maximum cumulative frequency)) – plots the ogive lines(breaks, cumfreq0) – connects the dots on the ogive For this example, the commands would be: plot(breaks, cumfreq0, main="Cumulative Frequency for Monthly Rent", xlab="Monthly Rent ($)", ylab="Cumulative Frequency", ylim=c(0,25)) lines(breaks, cumfreq0) The usefulness of an ogive is to allow the reader to find out how many students pay less than a certain value, and also what amount of monthly rent is paid by a certain number of students. As an example, suppose you want to know how many students pay less than $1500 a month in rent. You go up from $1500 on the horizontal axis until you hit the graph, and then you go over to the cumulative frequency axis to see what value corresponds to this point. It appears that around 20 students pay less than $1500. (See Graph 2.2.4.) Also, if you want to know the amount that 15 students pay less than, then you start at 15 on the vertical axis and then go over to the graph and down to the horizontal axis where the line intersects the graph. You can see that 15 students pay less than about $1200 a month. (See Graph 2.2.5.) If you graph the cumulative relative frequency, then you can find out what percentage is below a certain number instead of just the number of people below a certain value. Shapes of the distribution: When you look at a distribution, look at the basic shape. There are some basic shapes that are seen in histograms. Realize though that some distributions have no particular shape. The common shapes are symmetric, skewed, and uniform. Another point of interest is how many peaks a graph may have; this is known as the modality. Symmetric means that you can fold the graph in half down the middle and the two sides will line up. You can think of the two sides as being mirror images of each other. Skewed means one "tail" of the graph is longer than the other. The graph is skewed in the direction of the longer tail (backwards from what you would expect). A uniform graph has all the bars the same height. Modal refers to the number of peaks. Unimodal has one peak and bimodal has two peaks. Usually if a graph has more than two peaks, the modal information is no longer of interest. Other important features to consider are gaps between bars, a repetitive pattern, how spread out the data are, and where the center of the graph is. Examples of Graphs: This graph is roughly symmetric and unimodal: This graph is symmetric and bimodal: This graph is skewed to the right: This graph is skewed to the left and has a gap: This graph is uniform since all the bars are the same height: Example $7$ creating a frequency distribution, histogram, and ogive The following data represent the percent change in tuition levels at public, four-year colleges (inflation adjusted) from 2008 to 2013 (Weissmann, 2013). Create a frequency distribution, histogram, and ogive for the data. 19.5% 40.8% 57.0% 15.1% 17.4% 5.2% 13.0% 15.6% 51.5% 15.6% 14.5% 22.4% 19.5% 31.3% 21.7% 27.0% 13.1% 26.8% 24.3% 38.0% 21.1% 9.3% 46.7% 14.5% 78.4% 67.3% 21.1% 22.4% 5.3% 17.3% 17.5% 36.6% 72.0% 63.2% 15.1% 2.2% 17.5% 36.7% 2.8% 16.2% 20.5% 17.8% 30.1% 63.6% 17.8% 23.2% 25.3% 21.4% 28.5% 9.4% Table $5$: Data of Tuition Levels at Public, Four-Year Colleges Solution 1. Find the range: largest value - smallest value $= 78.4\% - 2.2\% = 76.2\%$ 2.
Pick the number of classes: Since there are 50 data points, then around 6 to 8 classes should be used. Let's use 8. 3. Find the class width: width $=\dfrac{\text { range }}{8}=\dfrac{76.2 \%}{8} \approx 9.525 \%$ Since the data has one decimal place, then the class width should round to one decimal place. Make sure you round up. width $=9.6$% 4. Find the class limits: $2.2 \%+9.6 \%=11.8 \%, 11.8 \%+9.6 \%=21.4 \%, 21.4 \%+9.6 \%=31.0 \%, \leftrightharpoons$ 5. Find the class boundaries: Since the data has one decimal place, the class boundaries should have two decimal places, so subtract 0.05 from the lower class limit to get the class boundaries. Add 0.05 to the upper class limit for the last class’s boundary. $2.2-0.05=2.15 \%, 11.8-0.05=11.75 \%, 21.4-0.05=21.35 \% \leftrightharpoons$ Every value in the data should fall into exactly one of the classes. No data values should fall right on the boundary of two classes. 6. Find the class midpoints: midpoint $=\dfrac{\text { lower limt }+\text { upper limit }}{2}$ $\dfrac{2.2+11.7}{2}=6.95 \%, \dfrac{11.8+21.3}{2}=16.55 \%, \leftrightharpoons$ 7. Tally and find the frequency of the data: Class Limits Class Boundaries Class Midpoint Tally Frequency Relative Frequency Cumulative Frequency 2.2-11.7 2.15-11.75 6.95 $\cancel{||||} |$ 6 0.12 6 11.8-21.3 11.75-21.35 16.55 $\cancel{||||} \cancel{||||} \cancel{||||} \cancel{||||}$ 20 0.40 26 21.4-30.9 21.35-30.95 26.15 $\cancel{||||} \cancel{||||} |$ 11 0.22 37 31.0-45.0 30.95-40.55 35.75 $||||$ 4 0.08 41 40.6-50.1 40.55-50.15 45.35 $||$ 2 0.04 43 50.2-59.7 50.15-59.75 54.95 $||$ 2 0.04 45 59.8-69.3 59.75-69.35 64.55 $|||$ 3 0.06 48 69.4-78.9 69.35-78.95 74.15 $||$ 2 0.04 50 Table $6$: Frequency Distribution for Tuition Levels at Public, Four-Year Colleges Make sure the total of the frequencies is the same as the number of data points. This graph is skewed right, with no gaps. This says that most percent increases in tuition were around 16.55%, with very few states having a percent increase greater than 45.35%. Looking at the ogive, you can see that 30 states had a percent change in tuition levels of about 25% or less. There are occasions where the class limits in the frequency distribution are predetermined. Example $8$ demonstrates this situation. Example $8$ creating a frequency distribution and histogram The following are the percentage grades of 25 students from a statistics course. Make a frequency distribution and histogram. 62 87 81 69 87 62 45 95 76 76 62 71 65 67 72 80 40 77 87 58 84 73 93 64 89 Table $7$: Data of Test Grades Solution Since this data is percent grades, it makes more sense to make the classes in multiples of 10, since grades are usually 90 to 100%, 80 to 90%, and so forth. It is easier to not use the class boundaries, but instead use the class limits and think of the upper class limit being up to but not including the next classes lower limit. As an example the class 80 – 90 means a grade of 80% up to but not including a 90%. A student with an 89.9% would be in the 80-90 class. Class Limit Class Midpoint Tally Freqeuncy 40-50 45 $||$ 2 50-60 55 $|$ 1 60-70 65 $\cancel{||||} ||$ 7 70-80 75 $\cancel{||||} |$ 6 80-90 85 $\cancel{||||} ||$ 7 90-100 95 $||$ 2 Table $8$: Frequency Distribution for Test Grades It appears that most of the students had between 60 to 90%. This graph looks somewhat symmetric and also bimodal. The same number of students earned between 60 to 70% and 80 to 90%. There are other types of graphs for quantitative data. 
They will be explored in the next section. Homework Exercise $1$ 1. The median incomes of males in each state of the United States, including the District of Columbia and Puerto Rico, are given in Example $9$ ("Median income of," 2013). Create a frequency distribution, relative frequency distribution, and cumulative frequency distribution using 7 classes.$42,951 $52,379$42,544 $37,488$49,281 $50,987$60,705 $50,411$66,760 $40,951$43,902 $45,494$41,528 $50,746$45,183 $43,624$43,993 $41,612$46,313 $43,944$56,708 $60,264$50,053 $50,580$40,202 $43,146$41,635 $42,182$41,803 $53,033$60,568 $41,037$50,388 $41,950$44,660 $46,176$41,420 $45,976$47,956 $22,529$48,842 $41,464$40,285 $41,309$43,160 $47,573$44,057 $52,805$53,046 $42,125$46,214 $51,630 Table $9$: Data of Median Income for Males 2. The median incomes of females in each state of the United States, including the District of Columbia and Puerto Rico, are given in Example $10$ ("Median income of," 2013). Create a frequency distribution, relative frequency distribution, and cumulative frequency distribution using 7 classes.$31,862 $40,550$36,048 $30,752$41,817 $40,236$47,476 $40,500$60,332 $33,823$35,438 $37,242$31,238 $39,150$34,023 $33,745$33,269 $32,684$31,844 $34,599$48,748 $46,185$36,931 $40,416$29,548 $33,865$31,067 $33,424$35,484 $41,021$47,155 $32,316$42,113 $33,459$32,462 $35,746$31,274 $36,027$37,089 $22,117$41,412 $31,330$31,329 $33,184$35,301 $32,843$38,177 $40,969$40,993 $29,688$35,890 $34,381 Table $10$: Data of Median Income for Females 3. The density of people per square kilometer for African countries is in Example $11$ ("Density of people," 2013). Create a frequency distribution, relative frequency distribution, and cumulative frequency distribution using 8 classes. 15 16 81 3 62 367 42 123 8 9 337 12 29 70 39 83 26 51 79 6 157 105 42 45 72 72 37 4 36 134 12 3 630 563 72 29 3 13 176 341 415 187 65 194 75 16 41 18 69 49 103 65 143 2 18 31 Table $11$: Data of Density of People per Square Kilometer 4. The Affordable Care Act created a market place for individuals to purchase health care plans. In 2014, the premiums for a 27 year old for the bronze level health insurance are given in Example $12$ ("Health insurance marketplace," 2013). Create a frequency distribution, relative frequency distribution, and cumulative frequency distribution using 5 classes.$114 $119$121 $125$132 $139$139 $141$143 $145$151 $153$156 $159$162 $163$165 $166$170 $170$176 $177$181 $185$185 $186$186 $189$190 $192$196 $203$204 $219$254 \$286 Table $12$: Data of Health Insurance Premiums 5. Create a histogram and relative frequency histogram for the data in Example $9$. Describe the shape and any findings you can from the graph. 6. Create a histogram and relative frequency histogram for the data in Example $10$. Describe the shape and any findings you can from the graph. 7. Create a histogram and relative frequency histogram for the data in Example $11$. Describe the shape and any findings you can from the graph. 8. Create a histogram and relative frequency histogram for the data in Example $12$. Describe the shape and any findings you can from the graph. 9. Create an ogive for the data in Example $9$. Describe any findings you can from the graph. 10. Create an ogive for the data in Example $10$. Describe any findings you can from the graph. 11. Create an ogive for the data in Example $11$. Describe any findings you can from the graph. 12. Create an ogive for the data in Example $12$. Describe any findings you can from the graph. 13. 
Students in a statistics class took their first test. The following are the scores they earned. Create a frequency distribution and histogram for the data using class limits that make sense for grade data. Describe the shape of the distribution. 80 79 89 74 73 67 79 93 70 70 76 88 83 73 81 79 80 85 79 80 79 58 93 94 74 Table $13$: Data of Test 1 Grades 14. Students in a statistics class took their first test. The following are the scores they earned. Create a frequency distribution and histogram for the data using class limits that make sense for grade data. Describe the shape of the distribution. Compare to the graph in question 13. Table $14$: Data of Test 1 Grades 67 67 76 47 85 70 87 76 80 72 84 98 84 64 65 82 81 81 88 74 87 83 Answer See solutions
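For grade-style data such as Example $8$ and exercises 13 and 14, where the class limits are predetermined, you can hand hist() the limits directly as breaks. The sketch below uses the Example $8$ grades; the resulting counts should match Table $8$.
grades<-c(62, 87, 81, 69, 87, 62, 45, 95, 76, 76, 62, 71, 65, 67, 72, 80, 40, 77, 87, 58, 84, 73, 93, 64, 89)
hist(grades, breaks=seq(40, 100, by=10), right=FALSE, col="blue", main="Test Grades in a Statistics Course", xlab="Grade (%)")
# right=FALSE makes each class of the form 80 up to but not including 90, as described in Example 8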
There are many other types of graphs. Some of the more common ones are the frequency polygon, the dot plot, the stem plot, scatter plot, and a time-series plot. There are also many different graphs that have emerged lately for qualitative data. Many are found in publications and websites. The following is a description of the stem plot, the scatter plot, and the time-series plot. Stem Plots Stem plots are a quick and easy way to look at small samples of numerical data. You can look for any patterns or any strange data values. It is easy to compare two samples using stem plots. The first step is to divide each number into 2 parts, the stem (such as the leftmost digit) and the leaf (such as the rightmost digit). There are no set rules, you just have to look at the data and see what makes sense. Example \(1\) stem plot for grade distribution The following are the percentage grades of 25 students from a statistics course. Draw a stem plot of the data. 62 87 81 69 87 62 45 95 76 76 62 71 65 67 72 80 40 77 87 58 84 73 93 64 89 Table \(1\): Data of Test Grades Solution Divide each number so that the tens digit is the stem and the ones digit is the leaf. 62 becomes 6|2. Make a vertical chart with the stems on the left of a vertical bar. Be sure to fill in any missing stems. In other words, the stems should have equal spacing (for example, count by ones or count by tens). The Graph 2.3.1 shows the stems for this example. Now go through the list of data and add the leaves. Put each leaf next to its corresponding stem. Don’t worry about order yet just get all the leaves down. When the data value 62 is placed on the plot it looks like the plot in Graph 2.3.2. When the data value 87 is placed on the plot it looks like the plot in Graph 2.3.3. Filling in the rest of the leaves to obtain the plot in Graph 2.3.4. Now you have to add labels and make the graph look pretty. You need to add a label and sort the leaves into increasing order. You also need to tell people what the stems and leaves mean by inserting a legend. Be careful to line the leaves up in columns. You need to be able to compare the lengths of the rows when you interpret the graph. The final stem plot for the test grade data is in Graph 2.3.5. Now you can interpret the stem-and-leaf display. The data is bimodal and somewhat symmetric. There are no gaps in the data. The center of the distribution is around 70. You can create a stem and leaf plot on R. the command is: stem(variable) – creates a stem and leaf plot, if you do not get a stem plot that shows all of the stems then use scale = a number. Adjust the number until you see all of the stems. So you would have stem(variable, scale = a number) For Example \(1\), the command would be grades<-c(62, 87, 81, 69, 87, 62, 45, 95, 76, 76, 62, 71, 65, 67, 72, 80, 40, 77, 87, 58, 84, 73, 93, 64, 89) stem(grades, scale = 2) Output: The decimal point is 1 digit(s) to the right of the | Now just put a title on the stem plot. Scatter Plot Sometimes you have two different variables and you want to see if they are related in any way. A scatter plot helps you to see what the relationship would look like. A scatter plot is just a plotting of the ordered pairs. Example \(2\) scatter plot Is there any relationship between elevation and high temperature on a given day? The following data are the high temperatures at various cities on a single day and the elevation of the city. 
Elevation (in feet) 7000 4000 6000 3000 7000 4500 5000 Temperature (°F) 50 60 48 70 55 55 60 Table \(2\): Data of Temperature versus Elevation Solution Preliminary: State the random variables. Let x = elevation and y = high temperature. Now plot the x values on the horizontal axis and the y values on the vertical axis. Then set up a scale that fits the data on each axis. Once that is done, just plot the x and y values as ordered pairs. In R, the commands are: independent variable<-c(type in data with commas in between values) dependent variable<-c(type in data with commas in between values) plot(independent variable, dependent variable, main="type in a title you want", xlab="type in a label for the horizontal axis", ylab="type in a label for the vertical axis", ylim=c(0, number above maximum y value)) For this example, that would be: elevation<-c(7000, 4000, 6000, 3000, 7000, 4500, 5000) temperature<-c(50, 60, 48, 70, 55, 55, 60) plot(elevation, temperature, main="Temperature versus Elevation", xlab="Elevation (in feet)", ylab="Temperature (in degrees F)", ylim=c(0, 80)) Looking at the graph, it appears that there is a linear relationship between temperature and elevation. It also appears to be a negative relationship: as elevation increases, the temperature decreases. Time-Series A time-series plot is a graph showing quantitative data measurements in chronological order. For example, a time-series plot is used to show profits over the last 5 years. To create a time-series plot, the time always goes on the horizontal axis, and the other variable goes on the vertical axis. Then plot the ordered pairs and connect the dots. The purpose of a time-series graph is to look for trends over time. Caution: the trend may not continue. Just because you see an increase doesn’t mean the increase will continue forever. As an example, prior to 2007, many people noticed that housing prices were increasing. The belief at the time was that housing prices would continue to increase. However, the housing bubble burst in 2007, and many houses lost value, and haven’t recovered. Example \(3\) Time-series plot The following table tracks the weight of a dieter, where the time in months measures how long since the person started the diet. Time (months) 0 1 2 3 4 5 Weight (pounds) 200 195 192 193 190 187 Table \(3\): Data of Weights versus Time Make a time-series plot of this data. Solution In R, the commands would be: variable1<-c(type in data with commas in between values, this should be the time variable) variable2<-c(type in data with commas in between values) plot(variable1, variable2, ylim=c(0, number over max), main="type in a title you want", xlab="type in a label for the horizontal axis", ylab="type in a label for the vertical axis") lines(variable1, variable2) – connects the dots For this example: time<-c(0, 1, 2, 3, 4, 5) weight<-c(200, 195, 192, 193, 190, 187) plot(time, weight, ylim=c(0,250), main="Weight over Time", xlab="Time (Months)", ylab="Weight (pounds)") lines(time, weight) Notice that over the 5 months the weight appears to be decreasing, though it doesn’t look like there is a large decrease. Be careful when making a graph. If you don’t start the vertical axis at 0, then the change can look much more dramatic than it really is. As an example, Graph 2.3.8 shows Graph 2.3.7 with a different scaling on the vertical axis. Notice the decrease in weight looks much larger than it really is. Homework Exercise \(1\) 1.
Students in a statistics class took their first test. The data in Example \(4\) are the scores they earned. Create a stem plot. 80 79 89 74 73 67 79 93 70 70 76 88 83 73 81 79 80 85 79 80 79 58 93 94 74 Table \(4\): Data of Test 1 Grades 2. Students in a statistics class took their first test. The data in Example \(5\) are the scores they earned. Create a stem plot. Compare to the graph in question 1. 67 67 76 47 85 70 87 76 80 72 84 98 84 64 65 82 81 81 88 74 87 83 Table \(5\): Data of Test 1 Grades 3. When an anthropologist finds skeletal remains, they need to figure out the height of the person. The height of a person (in cm) and the length of one of their metacarpal bone (in cm) were collected and are in Example \(6\) ("Prediction of height," 2013). Create a scatter plot and state if there is a relationship between the height of a person and the length of their metacarpal. Length of Metacarpal Height of Person 45 171 51 178 39 157 41 163 48 172 49 183 46 173 43 175 47 173 Table \(6\): Data of Metacarpal versus Height 4. Example \(7\) contains the value of the house and the amount of rental income in a year that the house brings in ("Capital and rental," 2013). Create a scatter plot and state if there is a relationship between the value of the house and the annual rental income. Value Rental Value Rental Value Rental Value Rental 81000 6656 77000 4576 75000 7280 67500 6864 95000 7904 94000 8736 90000 6240 85000 7072 121000 12064 115000 7904 110000 7072 104000 7904 135000 8320 130000 9776 126000 6240 125000 7904 145000 8320 140000 9568 140000 9152 135000 7488 165000 13312 165000 8528 155000 7488 148000 8320 178000 11856 174000 10400 170000 9568 170000 12688 200000 12272 200000 10608 194000 11232 190000 8320 214000 8528 280000 10400 200000 10400 200000 8320 240000 10192 240000 12064 240000 11648 225000 12480 289000 11648 270000 12896 262000 10192 244500 11232 325000 12480 310000 12480 303000 12272 300000 12480 Table \(7\): Data of House Value versus Rental 5. The World Bank collects information on the life expectancy of a person in each country ("Life expectancy at," 2013) and the fertility rate per woman in the country ("Fertility rate," 2013). The data for 24 randomly selected countries for the year 2011 are in Example \(8\). Create a scatter plot of the data and state if there appears to be a relationship between life expectancy and the number of births per woman. Life Expectancy Fertility Rate Life Expectancy Fertility rate 77.2 1.7 72.3 3.9 55.4 5.8 76.0 1.5 69.9 2.2 66.0 4.2 76.4 2.1 5.9 5.2 75.0 1.8 54.4 6.8 78.2 2.0 62.9 4.7 73.0 2.6 78.3 2.1 70.8 2.8 72.1 2.9 82.6 1.4 80.7 1.4 68.9 2.6 74.2 2.5 81.0 1.5 73.3 1.5 54.2 6.9 67.1 2.4 Table \(8\): Data of Life Expectancy versus Fertility Rate 6. The World Bank collected data on the percentage of gross domestic product (GDP) that a country spends on health expenditures ("Health expenditure," 2013) and the percentage of woman receiving prenatal care ("Pregnant woman receiving," 2013). The data for the countries where this information is available for the year 2011 is in Example \(9\). Create a scatter plot of the data and state if there appears to be a relationship between percentage spent on health expenditure and the percentage of woman receiving prenatal care. Prenatal Care (%) Health Expenditure (% of GDP) 47.9 9.6 54.6 3.7 93.7 5.2 84.7 5.2 100.0 10.0 42.5 4.7 96.4 4.8 77.1 6.0 58.3 5.4 95.4 4.8 78.0 4.1 93.3 6.0 93.3 9.5 93.7 6.8 89.8 6.1 Table \(9\): Data of Prenatal Care versus Health Expenditure 7. 
The Australian Institute of Criminology gathered data on the number of deaths (per 100,000 people) due to firearms during the period 1983 to 1997 ("Deaths from firearms," 2013). The data is in Example \(10\). Create a time-series plot of the data and state any findings you can from the graph. Year 1983 1984 1985 1986 1987 1988 1989 1990 Rate 4.31 4.42 4.52 4.35 4.39 4.21 3.40 3.61 Year 1991 1992 1993 1994 1995 1996 1997 Rate 3.67 3.61 2.98 2.95 2.72 2.95 2.3 Table \(10\): Data of Year versus Number of Deaths due to Firearms 8. The economic crisis of 2008 affected many countries, though some more than others. Some people in Australia have claimed that Australia wasn’t hurt that badly from the crisis. The bank assets (in billions of Australia dollars (AUD)) of the Reserve Bank of Australia (RBA) for the time period of March 2007 through March 2013 are contained in Example \(11\) ("B1 assets of," 2013). Create a time-series plot and interpret any findings. Date Assets in Billions of AUD Mar-2006 96.9 Jun-2006 107.4 Sep-2006 107.2 Dec-2006 116.2 Mar-2007 123.7 Jun-2007 134.0 Sep-2007 123.0 Dec-2007 93.2 Mar-2008 93.7 Jun-2008 105.6 Sep-2008 101.5 Dec-2008 158.8 Mar-2009 118.7 Jun-2009 111.9 Sep-2009 87.0 Dec-2009 86.1 Mar-2010 83.4 Jun-2010 85.7 Sep-2010 74.8 Dec-2010 76.0 Mar-2011 75.7 Jun-2011 75.9 Sep-2011 75.2 Dec-2011 87.9 Mar-2012 91.0 Jun-2012 90.1 Sep-2012 83.9 Dec-2012 95.8 Mar-2013 90.5 Table \(11\): Data of Date versus RBA Assets 9. The consumer price index (CPI) is a measure used by the U.S. government to describe the cost of living. Example \(12\) gives the cost of living for the U.S. from the years 1947 through 2011, with the year 1977 being used as the year that all others are compared (DeNavas-Walt, Proctor & Smith, 2012). Create a time-series plot and interpret. Year CPI-U-RS1 index (December 1977=100) Year CPI-U-RS1 index (December 1977=100) 1947 37.5 1980 127.1 1948 40.5 1981 139.2 1949 40.0 1982 147.6 1950 40.5 1983 153.9 1951 43.7 1984 160.2 1952 44.5 1985 165.7 1953 44.8 1986 168.7 1954 45.2 1987 174.4 1955 45.0 1988 180.8 1956 45.7 1989 188.6 1957 47.2 1990 198.0 1958 48.5 1991 205.1 1959 48.9 1992 210.3 1960 49.7 1993 215.5 1961 50.2 1994 220.1 1962 50.7 1995 225.4 1963 51.4 1996 231.4 1964 52.1 1997 236.4 1965 52.9 1998 239.7 1966 54.4 1999 244.7 1967 56.1 2000 252.9 1968 58.3 2001 260.0 1969 60.9 2002 264.2 1970 63.9 2003 270.1 1971 66.7 2004 277.4 1972 68.7 2005 286.7 1973 73.0 2006 296.1 1974 80.3 2007 304.5 1975 86.9 2008 316.2 1976 91.9 2009 315.0 1977 97.7 2010 320.2 1978 104.4 2011 330.3 1979 114.4 Table \(12\): Data of Time versus CPI 10. The median incomes for all households in the U.S. for the years 1967 to 2011 are given in Example \(13\) (DeNavas-Walt, Proctor & Smith, 2012). Create a time-series plot and interpret. Year Median Income Year Median Income 1967 42,056 1990 49,950 1968 43,868 1991 48,516 1969 45,499 1992 48,117 1970 45,146 1993 47,884 1971 44,707 1994 48,418 1972 46,622 1995 49,935 1973 47,563 1996 50,661 1974 46,057 1997 51,704 1975 44,851 1998 53,582 1976 45,595 1999 54,932 1977 45,884 2000 54,841 1978 47,659 2001 53,646 1979 47,527 2002 53,019 1980 46,024 2003 52,973 1981 45,260 2004 52,788 1982 45,139 2005 53,371 1983 44,823 2006 53,768 1984 46,215 2007 54,489 1985 47,079 2008 52,546 1986 48,746 2009 52,195 1987 49,358 2010 50,831 1988 49,737 2011 50,054 1989 50,624 Table \(13\): Data of Time versus Median Income 11. State everything that makes Graph 2.3.9 a misleading or poor graph. Graph 2.3.9: Example of a Poor Graph 12. 
State everything that makes Graph 2.3.10 a misleading or poor graph (Benen, 2011). Graph 2.3.10: Example of a Poor Graph 13. State everything that makes Graph 2.3.11 a misleading or poor graph ("United States unemployment," 2013). Graph 2.3.11: Example of a Poor Graph 14. State everything that makes Graph 2.3.12 a misleading or poor graph. Graph 2.3.12: Example of a Poor Graph Answer See solutions Data Sources: B1 assets of financial institutions. (2013, June 27). Retrieved from www.rba.gov.au/statistics/tables/xls/b01hist.xls Benen, S. (2011, September 02). [Web log message]. Retrieved from http://www.washingtonmonthly.com/pol...edit031960.php Capital and rental values of Auckland properties. (2013, September 26). Retrieved from http://www.statsci.org/data/oz/rentcap.html Contraceptive use. (2013, October 9). Retrieved from http://www.prb.org/DataFinder/Topic/...gs.aspx?ind=35 Deaths from firearms. (2013, September 26). Retrieved from http://www.statsci.org/data/oz/firearms.html DeNavas-Walt, C., Proctor, B., & Smith, J. U.S. Department of Commerce, U.S. Census Bureau. (2012). Income, poverty, and health insurance coverage in the United States: 2011 (P60-243). Retrieved from website: www.census.gov/prod/2012pubs/p60-243.pdf Density of people in Africa. (2013, October 9). Retrieved from http://www.prb.org/DataFinder/Topic/...249,250,251,25 2,253,254,34227,255,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,27 2,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,294, 295,296,297,298,299,300,301,302,304,305,306,307,308 Department of Health and Human Services, ASPE. (2013). Health insurance marketplace premiums for 2014. Retrieved from website: aspe.hhs.gov/health/reports/2...b_premiumsland scape.pdf Electricity usage. (2013, October 9). Retrieved from http://www.prb.org/DataFinder/Topic/...s.aspx?ind=162 Fertility rate. (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SP.DYN.TFRT.IN Fuel oil usage. (2013, October 9). Retrieved from http://www.prb.org/DataFinder/Topic/...s.aspx?ind=164 Gas usage. (2013, October 9). Retrieved from http://www.prb.org/DataFinder/Topic/...s.aspx?ind=165 Health expenditure. (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SH.XPD.TOTL.ZS Hinatov, M. U.S. Consumer Product Safety Commission, Directorate of Epidemiology. (2012). Incidents, deaths, and in-depth investigations associated with non-fire carbon monoxide from engine-driven generators and other engine-driven tools, 1999-2011. Retrieved from website: www.cpsc.gov/PageFiles/129857/cogenerators.pdf Life expectancy at birth. (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SP.DYN.LE00.IN Median income of males. (2013, October 9). Retrieved from http://www.prb.org/DataFinder/Topic/...s.aspx?ind=137 Median income of males. (2013, October 9). Retrieved from http://www.prb.org/DataFinder/Topic/...s.aspx?ind=136 Prediction of height from metacarpal bone length. (2013, September 26). Retrieved from http://www.statsci.org/data/general/stature.html Pregnant woman receiving prenatal care. (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SH.STA.ANVC.ZS United States unemployment. (2013, October 14). Retrieved from http://www.tradingeconomics.com/unit...mployment-rate Weissmann, J. (2013, March 20). A truly devastating graph on state higher education spending. The Atlantic. Retrieved from http://www.theatlantic.com/business/...ending/274199/
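Related to the caution about vertical-axis scaling in the time-series discussion (Graphs 2.3.7 and 2.3.8) and to exercises 11 through 14 on misleading graphs, the sketch below redraws the dieter data from Example \(3\) two ways so you can see how a truncated axis exaggerates the change; par(mfrow=...) simply puts the two plots side by side.
time<-c(0, 1, 2, 3, 4, 5)
weight<-c(200, 195, 192, 193, 190, 187)
par(mfrow=c(1, 2))   # two plots side by side
plot(time, weight, type="o", ylim=c(185, 200), main="Truncated axis", xlab="Time (Months)", ylab="Weight (pounds)")   # the decrease looks dramatic
plot(time, weight, type="o", ylim=c(0, 250), main="Axis starting at 0", xlab="Time (Months)", ylab="Weight (pounds)")   # same data, modest decrease
par(mfrow=c(1, 1))   # reset the plotting layout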
Chapter 1 discussed what a population, sample, parameter, and statistic are, and how to take different types of samples. Chapter 2 discussed ways to graphically display data. There was also a discussion of important characteristics: center, variations, distribution, outliers, and changing characteristics of the data over time. Distributions and outliers can be answered using graphical means. Finding the center and variation can be done using numerical methods that will be discussed in this chapter. Both graphical and numerical methods are part of a branch of statistics known as descriptive statistics. Later descriptive statistics will be used to make decisions and/or estimate population parameters using methods that are part of the branch called inferential statistics. • 3.1: Measures of Center This section focuses on measures of central tendency. Many statistical question can be answered by knowing the center of the data set. There are three measures of the “center” of the data. They are the mode, median, and mean. Any of the values can be referred to as the “average.” • 3.2: Measures of Spread Variability is an important idea in statistics. Variability describes how the data are spread out. If the data are very close to each other, then there is low variability. If the data are very spread out, then there is high variability. How do you measure variability? It would be good to have a number that measures it. This section will describe some of the different measures of variability, also known as variation. • 3.3: Ranking Along with the center and the variability, another useful numerical measure is the ranking of a number. A percentile is a measure of ranking. It represents a location measurement of a data value to the rest of the values. Many standardized tests give the results as a percentile. Doctors also use percentiles to track a child’s growth. 03: Examining the Evidence Using Graphs and Statistics This section focuses on measures of central tendency. Many times you are asking what to expect on average. Such as when you pick a major, you would probably ask how much you expect to earn in that field. If you are thinking of relocating to a new town, you might ask how much you can expect to pay for housing. If you are planting vegetables in the spring, you might want to know how long it will be until you can harvest. These questions, and many more, can be answered by knowing the center of the data set. There are three measures of the “center” of the data. They are the mode, median, and mean. Any of the values can be referred to as the “average.” • The mode is the data value that occurs the most frequently in the data. To find it, you count how often each data value occurs, and then determine which data value occurs most often. • The median is the data value in the middle of a sorted list of data. To find it, you put the data in order, and then determine which data value is in the middle of the data set. • The mean is the arithmetic average of the numbers. This is the center that most people call the average, though all three – mean, median, and mode – really are averages. There are no symbols for the mode and the median, but the mean is used a great deal, and statisticians gave it a symbol. There are actually two symbols, one for the population parameter and one for the sample statistic. In most cases you cannot find the population parameter, so you use the sample statistic to estimate the population parameter. 
Definition $1$: Population Mean The population mean is given by $\mu=\dfrac{\sum x}{N}$, pronounced mu where • $N$ is the size of the population. • $x$ represents a data value. • $\sum x$ means to add up all of the data values. Definition $2$: Sample Mean Sample Mean: $\overline{x}=\dfrac{\sum x}{n}$, pronounced x bar, where • $n$ is the size of the sample. • $x$ represents a data value. • $\sum x$ means to add up all of the data values. The value for $\overline{x}$ is used to estimate $\mu$ since $\mu$ can't be calculated in most situations. Example $1$ finding the mean, median, and mode Suppose a vet wants to find the average weight of cats. The weights (in pounds) of five cats are in Example $1$. 6.8 8.2 7.5 9.4 8.2 Table $1$: Finding the Mean, Median, and Mode Find the mean, median, and mode of the weight of a cat. Solution Before starting any mathematics problem, it is always a good idea to define the unknown in the problem. In this case, you want to define the variable. The symbol for the variable is $x$. The variable is $x =$ weight of a cat Mean: $\overline{x}=\dfrac{6.8+8.2+7.5+9.4+8.2}{5}=\dfrac{40.1}{5}=8.02$ pounds Median: You need to sort the list for both the median and mode. The sorted list is in Example $2$. 6.8 7.5 8.2 8.2 9.4 Table $2$: Sorted List of Cat's Weights There are 5 data points so the middle of the list would be the 3rd number. (Just put a finger at each end of the list and move them toward the center one number at a time. Where your fingers meet is the median.) 6.8 7.5 8.2 8.2 9.4 Table $3$: Sorted List of Cats' Weights with Median Marked The median is therefore 8.2 pounds. Mode: This is easiest to do from the sorted list that is in Example $2$. Which value appears the most number of times? The number 8.2 appears twice, while all other numbers appear once. Mode = 8.2 pounds. A data set can have more than one mode. If there is a tie between two values for the most number of times then both values are the mode and the data is called bimodal (two modes). If every data point occurs the same number of times, there is no mode. If there are more than two numbers that appear the most times, then usually there is no mode. In Example $1$, there were an odd number of data points. In that case, the median was just the middle number. What happens if there is an even number of data points? What would you do? Example $2$ finding the median with an even number of data points Suppose a vet wants to find the median weight of cats. The weights (in pounds) of six cats are in Example $4$. Find the median. 6.8 8.2 7.5 9.4 8.2 6.3 Table $4$: Weights of Six Cats Solution Variable: $x =$ weight of a cat First sort the list if it is not already sorted. There are 6 numbers in the list so the number in the middle is between the 3rd and 4th number. Use your fingers starting at each end of the list in Example $5$ and move toward the center until they meet. There are two numbers there. 6.3 6.8 7.5 8.2 8.2 9.4 Table $5$: Sorted List of Weights of Six Cats To find the median, just average the two numbers. median $=\dfrac{7.5+8.2}{2}=7.85$ pounds The median is 7.85 pounds. Example $3$ finding mean and median using technology Suppose a vet wants to find the median weight of cats. The weights (in pounds) of six cats are in Example $4$. Find the median Solution Variable: $x=$ weight of a cat You can do the calculations for the mean and median using the technology. 
The procedure for calculating the sample mean ( $\overline{x}$ ) and the sample median (Med) on the TI-83/84 is in Figures 3.1.1 through 3.1.4. First you need to go into the STAT menu, and then Edit. This will allow you to type in your data (see Figure $1$). Once you have the data into the calculator, you then go back to the STAT menu, move over to CALC, and then choose 1-Var Stats (see Figure $2$). The calculator will now put 1-Var Stats on the main screen. Now type in L1 (2nd button and 1) and then press ENTER. (Note if you have the newer operating system on the TI-84, then the procedure is slightly different.) If you press the down arrow, you will see the rest of the output from the calculator. The results from the calculator are in Figure $3$. The commands for finding the mean and median using R are as follows: variable<-c(type in your data with commas in between) To find the mean, use mean(variable) To find the median, use median(variable) So for this example, the commands would be weights<-c(6.8, 8.2, 7.5, 9.4, 8.2, 6.3) mean(weights) [1] 7.733333 median(weights) [1] 7.85 Example $4$ affect of extreme values on mean and median Suppose you have the same set of cats from Example $1$ but one additional cat was added to the data set. Example $6$ contains the six cats’ weights, in pounds. 6.8 7.5 8.2 8.2 9.4 22.1 Table $6$: Weights of Six Cats Find the mean and the median. Solution Variable: $x=$ weight of a cat mean $=\overline{x}=\dfrac{6.8+7.5+8.2+8.2+9.4+22.1}{6}=10.37$ pounds The data is already in order, thus the median is between 8.2 and 8.2. median $=\dfrac{8.2+8.2}{2}=8.2$ pounds The mean is much higher than the median. Why is this? Notice that when the value of 22.1 was added, the mean went from 8.02 to 10.37, but the median did not change at all. This is because the mean is affected by extreme values, while the median is not. The very heavy cat brought the mean weight up. In this case, the median is a much better measure of the center. An outlier is a data value that is very different from the rest of the data. It can be really high or really low. Extreme values may be an outlier if the extreme value is far enough from the center. In Example $4$, the data value 22.1 pounds is an extreme value and it may be an outlier. If there are extreme values in the data, the median is a better measure of the center than the mean. If there are no extreme values, the mean and the median will be similar so most people use the mean. The mean is not a resistant measure because it is affected by extreme values. The median and the mode are resistant measures because they are not affected by extreme values. As a consumer you need to be aware that people choose the measure of center that best supports their claim. When you read an article in the newspaper and it talks about the “average” it usually means the mean but sometimes it refers to the median. Some articles will use the word “median” instead of “average” to be more specific. If you need to make an important decision and the information says “average”, it would be wise to ask if the “average” is the mean or the median before you decide. As an example, suppose that a company wants to use the mean salary as the average salary for the company. This is because the high salaries of the administration will pull the mean higher. The company can say that the employees are paid well because the average is high. 
However, the employees want to use the median since it discounts the extreme values of the administration and will give a lower value of the average. This will make the salaries seem lower and that a raise is in order. Why use the mean instead of the median? The reason is because when multiple samples are taken from the same population, the sample means tend to be more consistent than other measures of the center. The sample mean is the more reliable measure of center. To understand how the different measures of center related to skewed or symmetric distributions, see Figure $5$. As you can see sometimes the mean is smaller than the median and mode, sometimes the mean is larger than the median and mode, and sometimes they are the same values. One last type of average is a weighted average. Weighted averages are used quite often in real life. Some teachers use them in calculating your grade in the course, or your grade on a project. Some employers use them in employee evaluations. The idea is that some activities are more important than others. As an example, a fulltime teacher at a community college may be evaluated on their service to the college, their service to the community, whether their paperwork is turned in on time, and their teaching. However, teaching is much more important than whether their paperwork is turned in on time. When the evaluation is completed, more weight needs to be given to the teaching and less to the paperwork. This is a weighted average. Definition $3$ Weighted Average $\dfrac{\sum x w}{\sum w}$ where $w$ is the weight of the data value, $x$. Example $5$ weighted average In your biology class, your final grade is based on several things: a lab score, scores on two major tests, and your score on the final exam. There are 100 points available for each score. The lab score is worth 15% of the course, the two exams are worth 25% of the course each, and the final exam is worth 35% of the course. Suppose you earned scores of 95 on the labs, 83 and 76 on the two exams, and 84 on the final exam. Compute your weighted average for the course. Solution Variable: $x=$ score The weighted average is $\dfrac{\sum x w}{\sum w}=\dfrac{\text { sum of the scores times their weights }}{\text { sum of all the weights }}$ weighted average $=\dfrac{95(0.15)+83(0.25)+76(0.25)+84(0.35)}{0.15+0.25+0.25+0.35}=\dfrac{83.4}{1.00}=83.4 \%$ A weighted average can be found using technology. The procedure for calculating the weighted average on the TI-83/84 is in Figures 3.1.6 through 3.1.9. First you need to go into the STAT menu, and then Edit. This will allow you to type in the scores into L1 and the weights into L2 (see Figure $6$). Once you have the data into the calculator, you then go back to the STAT menu, move over to CALC, and then choose 1-Var Stats (see Figure $7$). The calculator will now put 1-Var Stats on the main screen. Now type in L1 (2nd button and 1), then a comma (button above the 7 button), and then L2 (2nd button and 2) and then press ENTER. (Note if you have the newer operating system on the TI-84, then the procedure is slightly different.) The results from the calculator are in Figure $9$. The $\overline{x}$ is the weighted average. 
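Before turning to the built-in R command, note that the weighted average can be computed in R directly from the definition formula, which makes the sum-of-products structure explicit. A minimal sketch using the scores and weights from Example $5$ (x and w are the same names used below):
x <- c(95, 83, 76, 84)           # lab, exam 1, exam 2, and final exam scores
w <- c(0.15, 0.25, 0.25, 0.35)   # the corresponding weights
sum(x * w) / sum(w)              # definition formula: sum of xw divided by sum of w
[1] 83.4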
The commands for finding the mean and median using R are as follows: x<-c(type in your data with commas in between) w<-c(type in your weights with commas in between weighted.mean(x,w) So for this example, the commands would be x<-c(95, 83, 76, 84) w<-c(.15, .25, .25, .35) weighted.mean(x,w) [1] 83.4 Example $6$ weighted average The faculty evaluation process at John Jingle University rates a faculty member on the following activities: teaching, publishing, committee service, community service, and submitting paperwork in a timely manner. The process involves reviewing student evaluations, peer evaluations, and supervisor evaluation for each teacher and awarding him/her a score on a scale from 1 to 10 (with 10 being the best). The weights for each activity are 20 for teaching, 18 for publishing, 6 for committee service, 4 for community service, and 2 for paperwork. 1. One faculty member had the following ratings: 8 for teaching, 9 for publishing, 2 for committee work, 1 for community service, and 8 for paperwork. Compute the weighted average of the evaluation. 2. Another faculty member had ratings of 6 for teaching, 8 for publishing, 9 for committee work, 10 for community service, and 10 for paperwork. Compute the weighted average of the evaluation. 3. Which faculty member had the higher average evaluation? Solution a. Variable: $x=$ rating The weighted average is $\dfrac{\sum x w}{\sum w}=\dfrac{\text { sum of the scores times their weights }}{\text { sum of all the weights }}$ evaluation $=\dfrac{8(20)+9(18)+2(6)+1(4)+8(2)}{20+18+6+4+2}=\dfrac{354}{50}=7.08$ b. evaluation $=\dfrac{6(20)+8(18)+9(6)+10(4)+10(2)}{20+18+6+4+2}=\dfrac{378}{50}=7.56$ c. The second faculty member has a higher average evaluation. You can find a weighted average using technology. The last thing to mention is which average is used on which type of data. Mode can be found on nominal, ordinal, interval, and ratio data, since the mode is just the data value that occurs most often. You are just counting the data values. Median can be found on ordinal, interval, and ratio data, since you need to put the data in order. As long as there is order to the data you can find the median. Mean can be found on interval and ratio data, since you must have numbers to add together. Homework Exercise $1$ 1. Cholesterol levels were collected from patients two days after they had a heart attack (Ryan, Joiner & Ryan, Jr, 1985) and are in Example $7$. Find the mean, median, and mode. 270 236 210 142 280 272 160 220 226 242 186 266 206 318 294 282 234 224 276 282 360 310 280 278 288 288 244 236 Table $7$: Cholesterol Levels 2. The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Pacific Ocean are listed in Example $8$ (Lee, 1994). Find the mean, median, and mode. River Length (km) River Length (km) Clarence 209 Clutha 322 Conway 48 Taieri 288 Waiau 169 Shag 72 Hurunui 138 Kakanui 64 Waipara 64 Rangitata 121 Ashley 97 Ophi 80 Waimakariri 161 Pareora 56 Selwyn 95 Waihao 64 Rakaia 145 Waitaki 209 Ashburton 90 Table $8$: Lengths of Rivers (km) Flowing to Pacific Ocean 3. The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Tasman Sea are listed in Example $9$ (Lee, 1994). Find the mean, median, and mode. 
River Length (km) River Length (km) Hollyford 76 Waimea 48 Cascade 64 Motueka 108 Arawhata 68 Takaka 72 Haast 64 Aorere 72 Karangarua 37 Heaphy 35 Cook 32 Karamea 80 Waiho 32 Mokihinui 56 Whataroa 51 Buller 177 Wanganui 56 Grey 121 Waitaha 40 Taramakau 80 Hokitika 64 Arahura 56 Table $9$: Lengths of Rivers (km) Flowing to Tasman Sea 4. Eyeglassmatic manufactures eyeglasses for their retailers. They research to see how many defective lenses they made during the time period of January 1 to March 31. Example $10$ contains the defect and the number of defects. Find the mean, median, and mode. Defect Type Number of Defects Scratch 5865 Right shaped - small 4613 Flaked 1992 Wrong axis 1838 Chamfer wrong 1596 Crazing, cracks 1546 Wrong shape 1485 Wrong PD 1398 Spots and bubbles 1371 Wrong height 1130 Right shape - big 1105 Lost in lab 976 Spots/bubble - intern 976 Table $10$: Number of Defective Lenses 5. Print-O-Matic printing company’s employees have salaries that are contained in Example $11$. Employee Salary ($) CEO 272,500 Driver 58,456 CD74 100,702 CD65 57,380 Embellisher 73,877 Folder 65,270 GTO 74,235 Handwork 52,718 Horizon 76,029 ITEK 64,553 Mgmt 108,448 Platens 69,573 Polar 75,526 Pre Press Manager 108,448 Pre Press Manager/ IT 98,837 Pre Press/ Graphic Artist 75,311 Designer 90,090 Sales 109,739 Administration 66,346 Table $11$: Salaries of Print-O-Matic Printing Company Employees a. Find the mean and median. b. Find the mean and median with the CEO's salary removed. c. What happened to the mean and median when the CEO’s salary was removed? Why? d. If you were the CEO, who is answering concerns from the union that employees are underpaid, which average of the complete data set would you prefer? Why? e. If you were a platen worker, who believes that the employees need a raise, which average would you prefer? Why? 6. Print-O-Matic printing company spends specific amounts on fixed costs every month. The costs of those fixed costs are in Example $12$. Monthly charges Monthly cost ($) Bank charges 482 Cleaning 2208 Computer expensive 2471 Lease payments 2656 Postage 2117 Uniforms 2600 Table $12$: Fixed Costs for Print-O-Matic Printing Company a. Find the mean and median. b. Find the mean and median with the bank charger removed. c. What happened to the mean and median when the bank charger was removed? Why? d. If it is your job to oversee the fixed costs, which average using te complete data set would you prefer to use when submitting a report to administration to show that costs are low? Why? e. If it is your job to find places in the budget to reduce costs, which average using the complete data set would you prefer to use when submitting a report to administration to show that fixed costs need to be reduced? Why? 7. State which type of measurement scale each represents, and then which center measures can be use for the variable? 1. You collect data on people’s likelihood (very likely, likely, neutral, unlikely, very unlikely) to vote for a candidate. 2. You collect data on the diameter at breast height of trees in the Coconino National Forest. 3. You collect data on the year wineries were started. 4. You collect the drink types that people in Sydney, Australia drink. 8. State which type of measurement scale each represents, and then which center measures can be use for the variable? 1. You collect data on the height of plants using a new fertilizer. 2. You collect data on the cars that people drive in Campbelltown, Australia. 3. 
You collect data on the temperature at different locations in Antarctica. 4. You collect data on the first, second, and third winner in a beer competition. 9. Looking at Graph 3.1.1, state if the graph is skewed left, skewed right, or symmetric and then state which is larger, the mean or the median? Graph 3.1.1: Skewed or Symmetric Graph 10. Looking at Graph 3.1.2, state if the graph is skewed left, skewed right, or symmetric and then state which is larger, the mean or the median? Graph 3.1.2: Skewed or Symmetric Graph 11. An employee at Coconino Community College (CCC) is evaluated based on goal setting and accomplishments toward the goals, job effectiveness, competencies, and CCC core values. Suppose for a specific employee, goal 1 has a weight of 30%, goal 2 has a weight of 20%, job effectiveness has a weight of 25%, competency 1 has a goal of 4%, competency 2 has a goal has a weight of 3%, competency 3 has a weight of 3%, competency 4 has a weight of 3%, competency 5 has a weight of 2%, and core values has a weight of 10%. Suppose the employee has scores of 3.0 for goal 1, 3.0 for goal 2, 2.0 for job effectiveness, 3.0 for competency 1, 2.0 for competency 2, 2.0 for competency 3, 3.0 for competency 4, 4.0 for competency 5, and 3.0 for core values. Find the weighted average score for this employee. If an employee has a score less than 2.5, they must have a Performance Enhancement Plan written. Does this employee need a plan? 12. An employee at Coconino Community College (CCC) is evaluated based on goal setting and accomplishments toward goals, job effectiveness, competencies, CCC core values. Suppose for a specific employee, goal 1 has a weight of 20%, goal 2 has a weight of 20%, goal 3 has a weight of 10%, job effectiveness has a weight of 25%, competency 1 has a goal of 4%, competency 2 has a goal has a weight of 3%, competency 3 has a weight of 3%, competency 4 has a weight of 5%, and core values has a weight of 10%. Suppose the employee has scores of 2.0 for goal 1, 2.0 for goal 2, 4.0 for goal 3, 3.0 for job effectiveness, 2.0 for competency 1, 3.0 for competency 2, 2.0 for competency 3, 3.0 for competency 4, and 4.0 for core values. Find the weighted average score for this employee. If an employee that has a score less than 2.5, they must have a Performance Enhancement Plan written. Does this employee need a plan? 13. A statistics class has the following activities and weights for determining a grade in the course: test 1 worth 15% of the grade, test 2 worth 15% of the grade, test 3 worth 15% of the grade, homework worth 10% of the grade, semester project worth 20% of the grade, and the final exam worth 25% of the grade. If a student receives an 85 on test 1, a 76 on test 2, an 83 on test 3, a 74 on the homework, a 65 on the project, and a 79 on the final, what grade did the student earn in the course? 14. A statistics class has the following activities and weights for determining a grade in the course: test 1 worth 15% of the grade, test 2 worth 15% of the grade, test 3 worth 15% of the grade, homework worth 10% of the grade, semester project worth 20% of the grade, and the final exam worth 25% of the grade. If a student receives a 92 on test 1, an 85 on test 2, a 95 on test 3, a 92 on the homework, a 55 on the project, and an 83 on the final, what grade did the student earn in the course? Answer 1. mean = 253.93, median = 268, mode = none 3. mean = 67.68 km, median = 64 km, mode = 56 and 64 km 5. a. mean = $89,370.42, median =$75,311, b. mean = $79,196.56, median =$74,773, c. 
See solutions, d. See solutions, e. See solutions 7. a. ordinal – median and mode, b. ratio – all three, c. interval – all three, d. nominal – mode 9. Skewed right, mean higher 11. 2.71 13. 76.75
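As a check, the first answer can be reproduced with the R commands given in this section; a minimal sketch using the cholesterol levels from problem 1 (the vector name cholesterol is ours):
cholesterol <- c(270, 236, 210, 142, 280, 272, 160, 220, 226, 242, 186, 266, 206, 318, 294, 282, 234, 224, 276, 282, 360, 310, 280, 278, 288, 288, 244, 236)
mean(cholesterol)
[1] 253.9286
median(cholesterol)
[1] 268
A tally with table(cholesterol) shows that 236, 280, 282, and 288 each appear twice and every other value only once, so by the rule given above there is no mode, matching the answer.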
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/03%3A_Examining_the_Evidence_Using_Graphs_and_Statistics/3.01%3A_Measures_of_Center.txt
Variability is an important idea in statistics. If you were to measure the height of everyone in your classroom, every observation would give you a different value. That means not every student has the same height. Thus there is variability in people’s heights. If you were to take a sample of the income level of people in a town, every sample would give you different information. There is variability between samples too. Variability describes how the data are spread out. If the data are very close to each other, then there is low variability. If the data are very spread out, then there is high variability. How do you measure variability? It would be good to have a number that measures it. This section will describe some of the different measures of variability, also known as variation. In Example $1$, the average weight of a cat was calculated to be 8.02 pounds. How much does this tell you about the weight of all cats? Can you tell if most of the weights were close to 8.02 or were the weights really spread out? What are the highest weight and the lowest weight? All you know is that the center of the weights is 8.02 pounds. You need more information. Definition $1$ The range of a set of data is the difference between the highest and the lowest data values (or maximum and minimum values). \begin{align*} \text{Range} &= \text{highest value} - \text{lowest value} \\[4pt] &= \text{maximum value} - \text{minimum value} \end{align*} Example $1$: Finding the Range Look at the following three sets of data. Find the range of each of these. 1. $10, 20, 30, 40, 50$ 2. $10, 29, 30, 31, 50$ 3. $28, 29, 30, 31, 32$ Solution a. Range $= 50 - 10 = 40$ b. Range $= 50 - 10 = 40$ c. Range $= 32 - 28 = 4$ Based on the mean, median, and range in Example $1$ (each data set has a mean of 30 and a median of 30), the first two distributions are the same, but you can see from the graphs that they are different. In Example $1$a the data are spread out equally. In Example $1$b the data has a clump in the middle and a single value at each end. The mean and median are the same for Example $1$c but the range is very different. All the data is clumped together in the middle. The range doesn’t really provide a very accurate picture of the variability. A better way to describe how the data is spread out is needed. Instead of looking at the distance the highest value is from the lowest, how about looking at the distance each value is from the mean? This distance is called the deviation. Example $2$: Finding the Deviations Suppose a vet wants to analyze the weights of cats. The weights (in pounds) of five cats are 6.8, 8.2, 7.5, 9.4, and 8.2. Find the deviation for each of the data values. Solution Variable: $x=$ weight of a cat The mean for this data set is $\overline{x}=8.02$ pounds. $x$ $x-\overline{x}$ 6.8 6.8-8.02=-1.22 8.2 8.2-8.02=0.18 7.5 7.5-8.02=-0.52 9.4 9.4-8.02=1.38 8.2 8.2-8.02=0.18 Table $1$: Deviations of Weights of Cats Now you might want to average the deviations, so you need to add the deviations together. $x$ $x-\overline{x}$ 6.8 6.8-8.02=-1.22 8.2 8.2-8.02=0.18 7.5 7.5-8.02=-0.52 9.4 9.4-8.02=1.38 8.2 8.2-8.02=0.18 Total 0 Table $2$: Sum of Deviations of Weights of Cats This can’t be right. The average distance from the mean cannot be 0. The reason it adds to 0 is because there are some positive and negative values. You need to get rid of the negative signs. How can you do that? You could square each deviation.
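These deviations and their squares are quick to compute in R; a minimal sketch (reusing a weights vector, here holding the five cat weights from Example $2$), which reproduces the values shown in the deviation tables above and the squared deviations in the table that follows:
weights <- c(6.8, 8.2, 7.5, 9.4, 8.2)
deviations <- weights - mean(weights)   # distance of each cat's weight from the mean of 8.02
deviations
[1] -1.22  0.18 -0.52  1.38  0.18
sum(deviations)                         # essentially 0 (R may print a tiny floating-point remainder)
deviations^2                            # squaring removes the negative signs
[1] 1.4884 0.0324 0.2704 1.9044 0.0324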
$x$ $x-\overline{x}$ $(x-\overline{x})^{2}$ 6.8 6.8-8.02 = -1.22 1.4884 8.2 8.2-8.02=0.18 0.0324 7.5 7.5-8.02=-0.52 0.2704 9.4 9.4-8.02=1.38 1.9044 8.2 8.2-8.02=0.18 0.0324 Total 0 3.728 Table $3$: Squared Deviations of Weights of Cats Now average the total of the squared deviations. The only thing is that in statistics there is a strange average here. Instead of dividing by the number of data values you divide by the number of data values minus 1. In this case you would have $s^{2}=\dfrac{3.728}{5-1}=\dfrac{3.728}{4}=0.932 \text { pounds }^{2}$ Notice that this is denoted as $s^{2}$. This is called the variance and it is a measure of the average squared distance from the mean. If you now take the square root, you will get the average distance from the mean. This is called the standard deviation, and is denoted with the letter $s$. $s=\sqrt{.932} \approx 0.965$ pounds The standard deviation is the average (mean) distance from a data point to the mean. It can be thought of as how much a typical data point differs from the mean. Definition $2$: Sample Variance The sample variance formula: $s^{2}=\dfrac{\sum(x-\overline{x})^{2}}{n-1}$ where $\overline{x}$ is the sample mean, $n$ is the sample size, and $\sum$ means to find the sum. Definition $3$: Sample Standard Deviation The sample standard deviation formula: $s=\sqrt{s^{2}}=\sqrt{\dfrac{\sum(x-\overline{x})^{2}}{n-1}}$ The $n-1$ on the bottom has to do with a concept called degrees of freedom. Basically, it makes the sample standard deviation a better approximation of the population standard deviation. Definition $4$: Population Variance The population variance formula: $\sigma^{2}=\dfrac{\sum(x-\mu)^{2}}{N}$ where $\sigma$ is the Greek letter sigma and $\sigma^{2}$ represents the population variance, $\mu$ is the population mean, and N is the size of the population. Definition $5$: Population Standard Deviation The population standard deviation formula: $\sigma=\sqrt{\sigma^{2}}=\sqrt{\dfrac{\sum(x-\mu)^{2}}{N}}$ Note The sum of the deviations should always be 0. If it isn’t, then it is because you rounded, you used the median instead of the mean, or you made an error. Try not to round too much in the calculations for standard deviation since each rounding causes a slight error Example $3$: Finding the Standard Deviation Suppose that a manager wants to test two new training programs. He randomly selects 5 people for each training type and measures the time it takes to complete a task after the training. The times for both trainings are in Example $4$. Which training method is better? Training 1 56 75 48 63 59 Training 2 60 58 66 59 58 Table $4$: Time to Finish Task in Minutes Solution It is important that you define what each variable is since there are two of them. Variable 1: $X_{1}=$ productivity from training 1 Variable 2: $X_{2}=$ productivity from training 2 To answer which training method better, first you need some descriptive statistics. Start with the mean for each sample. $\overline{x}_{1}=\dfrac{56+75+48+63+59}{5}=60.2$ minutes $\overline{x}_{2}=\dfrac{60+58+66+59+58}{5}=60.2$ minutes Since both means are the same values, you cannot answer the question about which is better. Now calculate the standard deviation for each sample. 
$x_{1}$ $x_{1}-\overline{x}_{1}$ $\left(x_{1}-\overline{x}_{1}\right)^{2}$ 56 -4.2 17.64 75 14.8 219.04 48 -12.2 148.84 63 2.8 7.84 59 -1.2 1.44 Total 0 394.8 Table $5$: Squared Deviations for Training 1 $x_{2}$ $x_{2}-\overline{x}_{2}$ $\left(x_{2}-\overline{x}_{2}\right)^{2}$ 60 -0.2 0.04 58 -2.2 4.84 66 5.8 33.64 59 -1.2 1.44 58 -2.2 4.84 Total 0 44.8 Table $6$: Squared Deviations for Training 2 The variance for each sample is: $s_{1}^{2}=\dfrac{394.8}{5-1}=98.7 \text { minutes }^{2}$ $s_{2}^{2}=\dfrac{44.8}{5-1}=11.2 \text { minutes }^{2}$ The standard deviations are: $s_{1}=\sqrt{98.7} \approx 9.93$ minutes $s_{2}=\sqrt{11.2} \approx 3.35$ minutes From the standard deviations, the second training seemed to be the better training since the data is less spread out. This means it is more consistent. It would be better for the managers in this case to have a training program that produces more consistent results so they know what to expect for the time it takes to complete the task. You can do the calculations for the descriptive statistics using the technology. The procedure for calculating the sample mean ( $\overline{x})$ and the sample standard deviation ( $s_{x}$) for $X_{2}$ in Example $3$ on the TI-83/84 is in Figures 3.2.1 through 3.2.4 (the procedure is the same for $X_{1}$). Note the calculator gives you the population standard deviation ( $\sigma_{x}$ ) because it doesn’t know whether the data you input is a population or a sample. You need to decide which value you need to use, based on whether you have a population or sample. In almost all cases you have a sample and will be using $s_{x}$. Also, the calculator uses the notation $s_{x}$ of instead of just $s$. It is just a way for it to denote the information. First you need to go into the STAT menu, and then Edit. This will allow you to type in your data (see Figure $1$). Once you have the data into the calculator, you then go back to the STAT menu, move over to CALC, and then choose 1-Var Stats (see Figure $2$). The calculator will now put 1-Var Stats on the main screen. Now type in L2 (2nd button and 2) and then press ENTER. (Note if you have the newer operating system on the TI-84, then the procedure is slightly different.) The results from the calculator are in Figure $4$. The processes for finding the mean, median, range, standard deviation, and variance on R are as follows: variable<-c(type in your data) To find the mean, use mean(variable) To find the median, use median(variable) To find the range, use range(variable). Then find maximum – minimum. To find the standard deviation, use sd(variable) To find the variance, use var(variable) For the second data set in Example $3$, the commands and results would be productivity_2<-c(60, 58, 66, 59, 58) mean(productivity_2) [1] 60.2 median(productivity_2) [1] 59 range(productivity_2) [1] 58 66 sd(productivity_2) [1] 3.34664 var(productivity_2) [1] 11.2 In general a “small” standard deviation means the data is close together (more consistent) and a “large” standard deviation means the data is spread out (less consistent). Sometimes you want consistent data and sometimes you don’t. As an example if you are making bolts, you want to lengths to be very consistent so you want a small standard deviation. If you are administering a test to see who can be a pilot, you want a large standard deviation so you can tell who are the good pilots and who are the bad ones. What do “small” and “large” mean? To a bicyclist whose average speed is 20 mph, s = 20 mph is huge. 
To an airplane whose average speed is 500 mph, s = 20 mph is nothing. The “size” of the variation depends on the size of the numbers in the problem and the mean. Another situation where you can determine whether a standard deviation is small or large is when you are comparing two different samples such as in example #3.2.3. A sample with a smaller standard deviation is more consistent than a sample with a larger standard deviation. Many other books and authors stress that there is a computational formula for calculating the standard deviation. However, this formula doesn’t give you an idea of what standard deviation is and what you are doing. It is only good for doing the calculations quickly. It goes back to the days when standard deviations were calculated by hand, and the person needed a quick way to calculate the standard deviation. It is an archaic formula that this author is trying to eradicate it. It is not necessary anymore, since most calculators and computers will do the calculations for you with as much meaning as this formula gives. It is suggested that you never use it. If you want to understand what the standard deviation is doing, then you should use the definition formula. If you want an answer quickly, use a computer or calculator. Use of Standard Deviation One of the uses of the standard deviation is to describe how a population is distributed by using Chebyshev’s Theorem. This theorem works for any distribution, whether it is skewed, symmetric, bimodal, or any other shape. It gives you an idea of how much data is a certain distance on either side of the mean. Definition $6$: Chebyshev's Theorem For any set of data: • At least 75% of the data fall in the interval from $\mu-2 \sigma \text { to } \mu+2 \sigma$. • At least 88.9% of the data fall in the interval from $\mu-3 \sigma \text { to } \mu+3 \sigma$. • At least 93.8% of the data fall in the interval from $\mu-4 \sigma \text { to } \mu+4 \sigma$. Example $4$: Using Chebyshev's Theorem The U.S. Weather Bureau has provided the information in Example $7$ about the total annual number of reported strong to violent (F3+) tornados in the United States for the years 1954 to 2012. ("U.S. tornado climatology," 17). 46 47 31 41 24 56 56 23 31 59 39 70 73 85 33 38 45 39 35 22 51 39 51 131 37 24 57 42 28 45 98 35 54 45 30 15 35 64 21 84 40 51 44 62 65 27 34 23 32 28 41 98 82 47 62 21 31 29 32 Table $7$: Annual Number of Violent Tornados in the U.S. 1. Use Chebyshev’s theorem to find an interval centered about the mean annual number of strong to violent (F3+) tornados in which you would expect at least 75% of the years to fall. 2. Use Chebyshev’s theorem to find an interval centered about the mean annual number of strong to violent (F3+) tornados in which you would expect at least 88.9% of the years to fall. Solution a. Variable: $x =$ number of strong or violent (F3+) tornadoes Chebyshev’s theorem says that at least 75% of the data will fall in the interval from $\mu-2 \sigma$ to $\mu+2 \sigma$. You do not have the population, so you need to estimate the population mean and standard deviation using the sample mean and standard deviation. You can find the sample mean and standard deviation using technology: $\overline{x} \approx 46.24, s \approx 22.18$ So, $\mu \approx 46.24, \sigma \approx 22.18$ $\mu-2 \sigma \text { to } \mu+2 \sigma$ $46.24-2(22.18) \text { to } 46.24+2(22.18)$ $46.24-44.36 \text { to } 46.24+44.36$ $1.88 \text { to } 90.60$ Since you can’t have fractional number of tornados, round to the nearest whole number. 
At least 75% of the years have between 2 and 91 strong to violent (F3+) tornados. (Actually, all but three years’ values fall in this interval, that means that $\dfrac{56}{59} \approx 94.9 \%$ actually fall in the interval.) b. Variable: $x =$ number of strong or violent (F3+) tornadoes Chebyshev’s theorem says that at least 88.9% of the data will fall in the interval from $\mu-3 \sigma$ to $\mu+3 \sigma$. $\mu-3 \sigma \text { to } \mu+3 \sigma$ $46.24-3(22.18) \text { to } 46.24+3(22.18)$ $46.24-66.54 \text { to } 46.24+66.54$ $-20.30 \text { to } 112.78$ Since you can’t have negative number of tornados, the lower limit is actually 0. Since you can’t have fractional number of tornados, round to the nearest whole number. At least 88.9% of the years have between 0 and 113 strong to violent (F3+) tornados. (Actually, all but one year falls in this interval, that means that $\dfrac{58}{59} \approx 98.3 \%$ actually fall in the interval.) Chebyshev’s Theorem says that at least 75% of the data is within two standard deviations of the mean. That percentage is fairly high. There isn’t much data outside two standard deviations. A rule that can be followed is that if a data value is within two standard deviations, then that value is a common data value. If the data value is outside two standard deviations of the mean, either above or below, then the number is uncommon. It could even be called unusual. An easy calculation that you can do to figure it out is to find the difference between the data point and the mean, and then divide that answer by the standard deviation. As a formula this would be $\dfrac{x-\mu}{\sigma}$. If you don’t know the population mean, $\mu$, and the population standard deviation, $\sigma$, then use the sample mean, $\overline{x}$, and the sample standard deviation, $s$, to estimate the population parameter values. However, realize that using the sample standard deviation may not actually be very accurate. Example $5$ determining if a value is unusual 1. In 1974, there were 131 strong or violent (F3+) tornados in the United States. Is this value unusual? Why or why not? 2. In 1987, there were 15 strong or violent (F3+) tornados in the United States. Is this value unusual? Why or why not? Solution a. Variable: $x =$ number of strong or violent (F3+) tornadoes To answer this question, first find how many standard deviations 131 is from the mean. From Example $4$, we know $\mu \approx 46.24$ and $\sigma \approx 22.18$. For $x = 131$, $\dfrac{x-\mu}{\sigma}=\dfrac{131-46.24}{22.18} \approx 3.82$ Since this value is more than 2, then it is unusual to have 131 strong or violent (F3+) tornados in a year. b. Variable: $x =$ number of strong or violent (F3+) tornadoes For this question the $x = 15$, $\dfrac{x-\mu}{\sigma}=\dfrac{15-46.24}{22.18} \approx-1.41$ Since this value is between -2 and 2, then it is not unusual to have only 15 strong or violent (F3+) tornados in a year. Homework Exercise $1$ 1. Cholesterol levels were collected from patients two days after they had a heart attack (Ryan, Joiner & Ryan, Jr, 1985) and are in Example $8$. Find the mean, median, range, variance, and standard deviation using technology. 270 236 210 142 280 272 160 220 226 242 186 266 206 318 294 282 234 224 276 282 360 310 280 278 288 288 244 236 Table $8$: Cholesterol Levels 2. The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Pacific Ocean are listed in Example $9$ (Lee, 1994). 
Table $9$: Lengths of Rivers (km) Flowing to Pacific Ocean River Length (km) River Length (km) Clarence 209 Clutha 322 Conway 48 Taieri 288 Waiau 169 Shag 72 Hurunui 138 Kakanui 64 Waipara 64 Waitaki 209 Ashley 97 Waihao 64 Waimakariri 161 Pareora 56 Selwyn 95 Rangitata 121 Rakaia 145 Ophi 80 Ashburton 90 a. Find the mean and median. b. Find the range. c. Find the variance and standard deviation. 3. The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Pacific Ocean are listed in Example $9$ (Lee, 1994). River Length (km) River Length (km) Hollyford 76 Waimea 48 Cascade 64 Motueka 108 Arawhata 68 Takaka 72 Haast 64 Aorere 72 Karangarua 37 Heaphy 35 Cook 32 Karamea 80 Waiho 32 Mokihinui 56 Whataroa 51 Buller 177 Wanganui 56 Grey 121 Waitaha 40 Taramakau 80 Hokitika 64 Arahura 56 Table $10$: Lengths of Rivers (km) Flowing to Tasman Sea a. Find the mean and median. b. Find the range. c. Find the variance and standard deviation. 4. Eyeglassmatic manufactures eyeglasses for their retailers. They test to see how many defective lenses they made the time period of January 1 to March 31. Example $11$ gives the defect and the number of defects. Defect type Number of defects Scratch 5865 Right shaped - small 4613 Flaked 1992 Wrong axis 1838 Chamfer wrong 1596 Crazing, cracks 1546 Wrong shape 1485 Wrong PD 1398 Spots and bubbles 1371 Wrong height 1130 Right shape - big 1105 Lost in lab 976 Spots/bubble - intern 976 Table $11$: Number of Defective Lenses a. Find the mean and median. b. Find the range. c. Find the variance and standard deviation. 5. Print-O-Matic printing company’s employees have salaries that are contained in Example $12$. Find the mean, median, range, variance, and standard deviation using technology. Employee Salary ($) Employee Salary ($) CEO 272,500 Administration 66,346 Driver 58,456 Sales 109,739 CD74 100,702 Designer 90,090 CD65 57,380 Platens 69,573 Embellisher 73,877 Polar 75,526 Folder 65,270 ITEK 64,553 GTO 74,235 Mgmt 108,448 Pre Press Manager 108,448 Handwork 52,718 Pre Press Manager/IT 98,837 Horizon 76,029 Pre Press/ Graphic Artist 75,311 Table $12$: Salaries of Print-O-Matic Printing Company Employees 6. Print-O-Matic printing company spends specific amounts on fixed costs every month. The costs of those fixed costs are in Example $13$. Table $13$: Fixed Costs for Print-O-Matic Printing Company Monthly charges Monthly cost ($) Bank charges 482 Cleaning 2208 Computer expensive 2471 Lease payments 2656 Postage 2117 Uniforms 2600 a. Find the mean and median. b. Find the range. c. Find the variance and standard deviation. 7. Compare the two data sets in problems 2 and 3 using the mean and standard deviation. Discuss which mean is higher and which has a larger spread of the data. 8. Example $14$ contains pulse rates collected from males, who are non-smokers but do drink alcohol ("Pulse rates before," 2013). The before pulse rate is before they exercised, and the after pulse rate was taken after the subject ran in place for one minute. Compare the two data sets using the mean and standard deviation. Discuss which mean is higher and which has a larger spread of the data. Pulse before Pulse after Pulse before Pulse after 76 88 59 92 56 110 60 104 64 126 65 82 50 90 76 150 49 83 145 155 68 136 84 140 68 125 78 141 88 150 85 131 80 146 78 132 78 168 Table $14$: Pulse Rates of Males Before and After Exercise 9. Example $15$ contains pulse rates collected from females, who are non-smokers but do drink alcohol ("Pulse rates before," 2013). 
The before pulse rate is before they exercised, and the after pulse rate was taken after the subject ran in place for one minute. Compare the two data sets using the mean and standard deviation. Discuss which mean is higher and which has a larger spread of the data. Pulse before Pulse after Pulse before Pulse after 96 176 92 120 82 150 70 96 86 150 75 130 72 115 70 119 78 129 70 95 90 160 68 84 88 120 47 136 71 125 64 120 66 89 70 98 76 132 74 168 70 120 85 130 Table $15$: Pulse Rates of Females Before and After Exercise 10. To determine if Reiki is an effective method for treating pain, a pilot study was carried out where a certified second-degree Reiki therapist provided treatment on volunteers. Pain was measured using a visual analogue scale (VAS) immediately before and after the Reiki treatment (Olson & Hanson, 1997) and the data is in Example $16$. Compare the two data sets using the mean and standard deviation. Discuss which mean is higher and which has a larger spread of the data. VAS before VAS after VAS before VAS after 6 3 5 1 2 1 1 0 2 0 6 4 9 1 6 1 3 0 4 4 3 2 4 1 4 1 7 6 5 2 2 1 2 2 4 3 3 0 8 8 Table $16$: Pain Measurements Before and After Reiki Treatment 11. Example $17$ contains data collected on the time it takes in seconds of each passage of play in a game of rugby. ("Time of passages," 2013) Table $17$: Times (in seconds) of rugby plays 39.2 2.7 9.2 14.6 1.9 17.8 15.5 53.8 17.5 27.5 4.8 8.6 22.1 29.8 10.4 9.8 27.7 32.7 32 34.3 29.1 6.5 2.8 10.8 9.2 12.9 7.1 23.8 7.6 36.4 35.6 28.4 37.2 16.8 21.2 14.7 44.5 24.7 36.2 20.9 19.9 24.4 7.9 2.8 2.7 3.9 14.1 28.4 45.5 38 18.5 8.3 56.2 10.2 5.5 2.5 46.8 23.1 9.2 10.3 10.2 22 28.5 24 17.3 12.7 15.5 4 5.6 3.8 21.6 49.3 52.4 50.1 30.5 37.2 15 38.7 3.1 11 10 5 48.8 3.6 12.6 9.9 58.6 37.9 19.4 29.2 12.3 39.2 22.2 39.7 6.4 2.5 34 a. Using technology, find the mean and standard deviation. b. Use Chebyshev’s theorem to find an interval centered about the mean times of each passage of play in the game of rugby in which you would expect at least 75% of the times to fall. c. Use Chebyshev’s theorem to find an interval centered about the mean times of each passage of play in the game of rugby in which you would expect at least 88.9% of the times to fall. 12. Yearly rainfall amounts (in millimeters) in Sydney, Australia, are in table #3.2.18 ("Annual maximums of," 2013). Table $18$: Yearly Rainfall Amounts in Sydney, Australia 146.8 383 90.9 178.1 267.5 95.5 156.5 180 90.9 139.7 200.2 171.7 187.2 184.9 70.1 58 84.1 55.6 133.1 271.8 135.9 71.9 99.4 110.6 47.5 97.8 122.7 58.4 154.4 173.7 118.8 88 84.6 171.5 254.3 185.9 137.2 138.9 96.2 85 45.2 74.7 264.9 113.8 133.4 68.1 156.4 a. Using technology, find the mean and standard deviation. b. Use Chebyshev’s theorem to find an interval centered about the mean yearly rainfall amounts in Sydney, Australia, in which you would expect at least 75% of the amounts to fall. c. Use Chebyshev’s theorem to find an interval centered about the mean yearly rainfall amounts in Sydney, Australia, in which you would expect at least 88.9% of the amounts to fall. 13. The number of deaths attributed to UV radiation in African countries in the year 2002 is given in Example $19$ ("UV radiation: Burden," 2013). Table $19$: Number of Deaths from UV Radiation 50 84 31 338 6 504 40 7 58 204 15 27 39 1 45 174 98 94 199 9 27 58 356 5 45 5 94 26 171 13 57 138 39 3 171 41 1177 102 123 433 35 40 456 125 a. Using technology, find the mean and standard deviation. b. 
Use Chebyshev’s theorem to find an interval centered about the mean number of deaths from UV radiation in which you would expect at least 75% of the numbers to fall. c. Use Chebyshev’s theorem to find an interval centered about the mean number of deaths from UV radiation in which you would expect at least 88.9% of the numbers to fall. 14. The time (in 1/50 seconds) between successive pulses along a nerve fiber ("Time between nerve," 2013) are given in Example $20$. Table $20$: Time (in 1/50 seconds) Between Successive Pulses 10.5 1.5 2.5 5.5 29.5 3 9 27.5 18.5 4.5 7 9.5 1 7 4.5 2.5 7.5 11.5 7.5 4 12 8 3 5.5 7.5 4.5 1.5 10.5 1 7 12 14.5 8 3.5 3.5 2 1 7.5 6 13 7.5 16.5 3 25.5 5.5 14 18 7 27.5 14 a. Using technology, find the mean and standard deviation. b. Use Chebyshev’s theorem to find an interval centered about the mean time between successive pulses along a nerve fiber in which you would expect at least 75% of the times to fall. c. Use Chebyshev’s theorem to find an interval centered about the mean time between successive pulses along a nerve fiber in which you would expect at least 88.9% of the times to fall. 15. Suppose a passage of play in a rugby game takes 75.1 seconds. Would it be unusual for this to happen? Use the mean and standard deviation that you calculated in problem 11. 16. Suppose Sydney, Australia received 300 mm of rainfall in a year. Would this be unusual? Use the mean and standard deviation that you calculated in problem 12. 17. Suppose in a given year there were 2257 deaths attributed to UV radiation in an African country. Is this value unusual? Use the mean and standard deviation that you calculated in problem 13. 18. Suppose it only takes 2 (1/50 seconds) for successive pulses along a nerve fiber. Is this value unusual? Use the mean and standard deviation that you calculated in problem 14. Answer 1. mean = 253.93, median = 268, range = 218, variance = 2276.29, st dev = 47.71 3. a. mean = 67.68 km, median = 64 km, b. range = 145 km, c. variance = 1107.9416 $\mathrm{km}^{2}$, st dev = 33.29 km 5. mean =$89,370.42, median = $75,311, range =$219,782, variance =2298639399, st dev = \$47,944.13 7. See solutions 9. $\overline{x}_{1} \approx 75.45, s_{1} \approx 11.10, \overline{x}_{2} \approx 125.55, s_{2} \approx 24.72$ 11. a. $\overline{x} \approx 21.24 \mathrm{sec}, s \approx 14.95 \mathrm{sec}$ b. $(-8.66 \mathrm{sec}, 51.14 \mathrm{sec})$ c. $(-23.61 \mathrm{sec}, 66.09 \mathrm{sec})$ 13. a. $\overline{x} \approx 130.98, s \approx 205.44$ b. $(-279.90,541.86)$ c. $(-485.34,747.3)$ 15. 3.61 17. 10.35
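The Chebyshev intervals in these answers, and the check for unusual values in Examples $4$ and $5$, involve only a little arithmetic once the mean and standard deviation are known, so they are easy to script. A minimal R sketch using the rounded tornado summary values quoted in the text ($\overline{x} \approx 46.24$, $s \approx 22.18$); the variable names are ours:
xbar <- 46.24                    # sample mean used to estimate mu
s <- 22.18                       # sample standard deviation used to estimate sigma
c(xbar - 2 * s, xbar + 2 * s)    # interval holding at least 75% of the years
[1]  1.88 90.60
c(xbar - 3 * s, xbar + 3 * s)    # interval holding at least 88.9% of the years
[1] -20.30 112.78
(131 - xbar) / s                 # 1974: more than 2 standard deviations above the mean, so unusual
[1] 3.821461
(15 - xbar) / s                  # 1987: within 2 standard deviations of the mean, so not unusual
[1] -1.408476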
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/03%3A_Examining_the_Evidence_Using_Graphs_and_Statistics/3.02%3A_Measures_of_Spread.txt
Along with the center and the variability, another useful numerical measure is the ranking of a number. A percentile is a measure of ranking. It represents a location measurement of a data value to the rest of the values. Many standardized tests give the results as a percentile. Doctors also use percentiles to track a child’s growth. The kth percentile is the data value that has k% of the data at or below that value. Example $1$ interpreting percentile 1. What does a score of the 90th percentile mean? 2. What does a score of the 70th percentile mean? Solution 1. This means that 90% of the scores were at or below this score. (A person did the same as or better than 90% of the test takers.) 2. This means that 70% of the scores were at or below this score. Example $2$ percentile versus score If the test was out of 100 points and you scored at the 80th percentile, what was your score on the test? Solution You don’t know! All you know is that you scored the same as or better than 80% of the people who took the test. If all the scores were really low, you could have still failed the test. On the other hand, if many of the scores were high you could have gotten a 95% or so. There are special percentiles called quartiles. Quartiles are numbers that divide the data into fourths. One fourth (or a quarter) of the data falls between consecutive quartiles. Definition $1$ To find the quartiles: 1. Sort the data in increasing order. 2. Find the median, this divides the data list into 2 halves. 3. Find the median of the data below the median. This value is Q1. 4. Find the median of the data above the median. This value is Q3. Ignore the median in both calculations for Q1 and Q3 If you record the quartiles together with the maximum and minimum you have five numbers. This is known as the five-number summary. The five-number summary consists of the minimum, the first quartile (Q1), the median, the third quartile (Q3), and the maximum (in that order). The interquartile range, IQR, is the difference between the first and third quartiles, Q1 and Q3. Half of the data (50%) falls in the interquartile range. If the IQR is “large” the data is spread out and if the IQR is “small” the data is closer together. Definition $2$ Interquartile Range (IQR) IQR = Q3 - Q1 Determining probable outliers from IQR: fences A value that is less than Q1-$1.5*$IQR (this value is often referred to as a low fence) is considered an outlier. Similarly, a value that is more than Q3$+1.5*$IQR (the high fence) is considered an outlier. A box plot (or box-and-whisker plot) is a graphical display of the five-number summary. It can be drawn vertically or horizontally. The basic format is a box from Q1 to Q3, a vertical line across the box for the median and horizontal lines as whiskers extending out each end to the minimum and maximum. The minimum and maximum can be represented with dots. Don’t forget to label the tick marks on the number line and give the graph a title. An alternate form of a Box-and-Whiskers Plot, known as a modified box plot, only extends the left line to the smallest value greater than the low fence, and extends the left line to the largest value less than the high fence, and displays markers (dots, circles or asterisks) for each outlier. If the data are symmetrical, then the box plot will be visibly symmetrical. If the data distribution has a left skew or a right skew, the line on that side of the box plot will be visibly long. 
If the plot is symmetrical, and the four quartiles are all about the same length, then the data are likely a near uniform distribution. If a box plot is symmetrical, and both outside lines are noticeably longer than the Q1 to median and median to Q3 distance, the distribution is then probably bell-shaped. Example $3$ five-number summary for an even number of data points The total assets in billions of Australian dollars (AUD) of Australian banks for the year 2012 are given in Example $1$ ("Reserve bank of," 2013). Find the five-number summary and the interquartile range (IQR), and draw a box-and-whiskers plot. 2855 2862 2861 2884 3014 2965 2971 3002 3032 2950 2967 2964 Table $1$: Total Assets (in billions of AUD) of Australian Banks Solution Variable: $x =$ total assets of Australian banks First sort the data. 2855 2861 2862 2884 2950 2964 2965 2967 2971 3002 3014 3032 Table $2$: Sorted Data for Total Assets The minimum is 2855 billion AUD and the maximum is 3032 billion AUD. There are 12 data points so the median is the average of the 6th and 7th numbers. Table $3$: Sorted Data for Total Assets with Median To find QI, find the median of the first half of the list. Table $4$: Finding QI To find Q3, find the median of the second half of the list. Table $5$: Finding Q3 The five-number summary is (all numbers in billion AUD) Minimum: 2855 Q1: 2873 Median: 2964.5 Q3: 2986.5 Maximum: 3032 To find the interquartile range, IQR, find Q3-Q1 IQR = 2986.5 - 2873 = 113.5 billion AUD This tells you the middle 50% of assets were within 113.5 billion AUD of each other. You can use the five-number summary to draw the box-and-whiskers plot. The distribution is skewed right because the right tail is longer. Example $4$ five-number summary for an odd number of data points The life expectancy for a person living in one of 11 countries in the region of South East Asia in 2012 is given below ("Life expectancy in," 2013). Find the five-number summary for the data and the IQR, then draw a box-and-whiskers plot. 70 67 69 65 69 77 65 68 75 74 64 Table $6$: Life Expectancy of a Person Living in South-East Asia Solution Variable: $x =$ life expectancy of a person. Sort the data first. 64 65 65 67 68 69 69 70 74 75 77 Table $7$: Sorted Life Expectancies The minimum is 64 years and the maximum is 77 years. There are 11 data points so the median is the 6th number in the list. Table $8$: Finding the Median of Life Expectancies Finding the Q1 and Q3 you need to find the median of the numbers below the median and above the median. The median is not included in either calculation. Table $9$: Finding Q1 Table $10$: Finding Q3 Q1=65 years and Q3=74 years The five-number summary is (in years) Minimum: 64 Q1: 65 Median: 69 Q3: 74 Maximum: 77 To find the interquartile range (IQR) IQR=Q3-Q1=74-65=9 years The middle 50% of life expectancies are within 9 years. This distribution looks somewhat skewed right, since the whisker is longer on the right. However, it could be considered almost symmetric too since the box looks somewhat symmetric. You can draw 2 box plots side by side (or one above the other) to compare 2 samples. Since you want to compare the two data sets, make sure the box plots are on the same axes. As an example, suppose you look at the box-and-whiskers plot for life expectancy for European countries and Southeast Asian countries. 
Looking at the box-and-whiskers plot, you will notice that the three quartiles for life expectancy are all higher for the European countries, yet the minimum life expectancy for the European countries is less than that for the Southeast Asian countries. The life expectancy for the European countries appears to be skewed left, while the life expectancies for the Southeast Asian countries appear to be more symmetric. There are of course more qualities that can be compared between the two graphs. To find the five-number summary using R, the command is: variable<-c(type in data with commas) summary(variable) This command will give you the five number summary and the mean. For Example $4$, the commands would be expectancy<-c(70, 67, 69, 65, 69, 77, 65, 68, 75, 74, 64) summary(expectancy) The output would be: $\begin{array}{cccccc}{\text { Min.}} & {\text{ Ist Qu.}} & {\text{Median}} & {\text{Mean}} & {\text{3rd Qu.}} & {\text{Max.}} \ {64.00} & {66.00} & {69.00} & {69.36} & {72.00} & {77.00} \end{array}$ To draw the box plot the command is boxplot(variable, main="title you want", xlab="label you want", horizontal = TRUE). The horizontal = TRUE orients the box plot to be horizontal. If you leave that part off, the box plot will be vertical by default. For Example $4$, the command is boxplot(expectancy, main="Life Expectancy of Southeast Asian Countries in 2011",horizontal=TRUE, xlab="Life Expectancy") You should get the box plot in Graph 3.3.4. This is known as a modified box plot. Instead of plotting the maximum and minimum, the box plot has as a lower line Q1-1.5*IQR , and as an upper line, Q3+1.5*IQR. Any values below the lower line or above the upper line are considered outliers. Outliers are plotted as dots on the modified box plot. This data set does not have any outliers. Example $5$ putting it all together A random sample was collected on the health expenditures (as a % of GDP) of countries around the world. The data is in Example $11$. Using graphical and numerical descriptive statistics, analyze the data and use it to predict the health expenditures of all countries in the world. 3.35 5.94 10.64 5.24 3.79 5.65 7.66 7.38 5.87 11.15 5.96 4.78 7.75 2.72 9.50 7.69 10.05 11.96 8.18 6.74 5.89 6.20 5.98 8.83 6.78 6.66 9.45 5.41 5.16 8.55 Table $11$: Health Expenditures as a Percentage of GDP Solution First, it might be useful to look at a visualization of the data, so create a histogram. From the graph, the data appears to be somewhat skewed right. So there are some countries that spend more on health based on a percentage of GDP than other countries, but the majority of countries appear to spend around 4 to 8% of their GDP on health. Numerical descriptions might also be useful. Using technology, the mean is 7.03%, the standard deviation is 2.27%, and the five-number summary is minimum = 2.72%, Q1 = 5.71%, median = 6.70%, Q3 = 8.46%, and maximum = 11.96%. To visualize the five-number summary, create a box plot. So it appears that countries spend on average about 7% of their GPD on health. The spread is somewhat low, since the standard deviation is fairly small, which means that the data is fairly consistent. The five-number summary confirms that the data is slightly skewed right. The box plot shows that there are no outliers. So from all of this information, one could say that countries spend a small percentage of their GDP on health and that most countries spend around the same amount. 
There doesn’t appear to be any country that spends much more than other countries or much less than other countries. Homework Exercise $1$ 1. Suppose you take a standardized test and you are in the 10th percentile. What does this percentile mean? Can you say that you failed the test? Explain. 2. Suppose your child takes a standardized test in mathematics and scores in the 96th percentile. What does this percentile mean? Can you say your child passed the test? Explain. 3. Suppose your child is in the 83rd percentile in height and 24th percentile in weight. Describe what this tells you about your child’s stature. 4. Suppose your work evaluates the employees and places them on a percentile ranking. If your evaluation is in the 65th percentile, do you think you are working hard enough? Explain. 5. Cholesterol levels were collected from patients two days after they had a heart attack (Ryan, Joiner & Ryan, Jr, 1985) and are in Example $12$. Find the five-number summary and interquartile range (IQR), and draw a box-and-whiskers plot. 270 236 210 142 280 272 160 220 226 242 186 266 206 318 294 282 234 224 276 282 360 310 280 278 288 288 244 236 Table $12$: Cholesterol Levels 6. The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Pacific Ocean are listed in Example $13$ (Lee, 1994). Find the five-number summary and interquartile range (IQR), and draw a box-and-whiskers plot. River Length (km) River Length (km) Clarence 209 Clutha 322 Conway 48 Taieri 288 Waiau 169 Shag 72 Hurunui 169 Kakanui 64 Waipara 64 Waitaki 209 Ashley 97 Waihao 64 Waimakariri 161 Pareora 56 Selwyn 95 Rangitata 121 Rakaia 145 Ophi 80 Ashburton 90 Table $13$: Lengths of Rivers (km) Flowing to Pacific Ocean 7. The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Tasman Sea are listed in Example $14$ (Lee, 1994). Find the five-number summary and interquartile range (IQR), and draw a box-and-whiskers plot. River Length (km) River Length (km) Hollyford 76 Waimea 48 Cascade 64 Motueka 108 Arawhata 68 Takaka 72 Haast 64 Aorere 72 Karangarua 37 Heaphy 35 Cook 32 Karamea 80 Waiho 32 Mokihinui 56 Whataroa 51 Buller 177 Wanganui 56 Grey 121 Waitaha 40 Taramakau 80 Hokitika 64 Arahura 56 Table $14$: Lengths of Rivers (km) Flowing to Tasman Sea 8. Eyeglassmatic manufactures eyeglasses for their retailers. They test to see how many defective lenses they made the time period of January 1 to March 31. Example $15$ gives the defect and the number of defects. Find the five-number summary and interquartile range (IQR), and draw a box-and-whiskers plot. Defect type Number of defects Scratch 5865 Right shaped - small 4613 Flaked 1992 Wrong axis 1838 Chamfer wrong 1596 Crazing, cracks 1546 Wrong shape 1485 Wrong PD 1398 Spots and bubbles 1371 Wrong height 1130 Right shape - big 1105 Lost in lab 976 Spots/bubble - intern 976 Table $15$: Number of Defective Lenses 9. A study was conducted to see the effect of exercise on pulse rate. Male subjects were taken who do not smoke, but do drink. Their pulse rates were measured ("Pulse rates before," 2013). Then they ran in place for one minute and then measured their pulse rate again. Graph 3.3.7 is of box-and-whiskers plots that were created of the before and after pulse rates. Discuss any conclusions you can make from the graphs. Graph 3.3.7: Box-and-Whiskers Plot of Pulse Rates for Males 10. A study was conducted to see the effect of exercise on pulse rate. Female subjects were taken who do not smoke, but do drink. 
Their pulse rates were measured ("Pulse rates before," 2013). Then they ran in place for one minute, and after measured their pulse rate again. Graph 3.3.8 is of box-and-whiskers plots that were created of the before and after pulse rates. Discuss any conclusions you can make from the graphs. Graph 3.3.8: Box-and-Whiskers Plot of Pulse Rates for Females 11. To determine if Reiki is an effective method for treating pain, a pilot study was carried out where a certified second-degree Reiki therapist provided treatment on volunteers. Pain was measured using a visual analogue scale (VAS) immediately before and after the Reiki treatment (Olson & Hanson, 1997). Graph 3.3.9 is of box-and-whiskers plots that were created of the before and after VAS ratings. Discuss any conclusions you can make from the graphs. Graph 3.3.9: Box-and-Whiskers Plot of Pain Using Reiki 12. The number of deaths attributed to UV radiation in African countries and Middle Eastern countries in the year 2002 were collected by the World Health Organization ("UV radiation: Burden," 2013). Graph 3.3.10 is of box-and-whiskers plots that were created of the deaths in African countries and deaths in Middle Eastern countries. Discuss any conclusions you can make from the graphs. Graph 3.3.10: Box-and-Whiskers Plot of UV Radiation Deaths in Different Regions Answer Note: Q1, Q3, and IQR may differ slightly due to how technology finds them. 1. See solutions 3. See solutions 5. min = 142, Q1 = 225, med = 268, Q3 = 282, max = 360, IQR = 57, see solutions 7. min = 32 km, Q1 = 46 km, med = 64 km, Q3 = 77 km, max = 177 km, IQR = 31 km, see solutions 9. See solutions 11. See solutions Data Sources: Annual maximums of daily rainfall in Sydney. (2013, September 25). Retrieved from http://www.statsci.org/data/oz/sydrain.html Lee, A. (1994). Data analysis: An introduction based on r. Auckland. Retrieved from http://www.statsci.org/data/oz/nzrivers.html Life expectancy in southeast Asia. (2013, September 23). Retrieved from http://apps.who.int/gho/data/node.main.688 Olson, K., & Hanson, J. (1997). Using reiki to manage pain: a preliminary report. Cancer Prev Control, 1(2), 108-13. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/9765732 Pulse rates before and after exercise. (2013, September 25). Retrieved from http://www.statsci.org/data/oz/ms212.html Reserve bank of Australia. (2013, September 23). Retrieved from http://data.gov.au/dataset/banks-assets Ryan, B. F., Joiner, B. L., & Ryan, Jr, T. A. (1985). Cholesterol levels after heart attack. Retrieved from http://www.statsci.org/data/general/cholest.html Time between nerve pulses. (2013, September 25). Retrieved from http://www.statsci.org/data/general/nerve.html Time of passages of play in rugby. (2013, September 25). Retrieved from http://www.statsci.org/data/oz/rugby.html U.S. tornado climatology. (17, May 2013). Retrieved from www.ncdc.noaa.gov/oa/climate/...tornadoes.html UV radiation: Burden of disease by country. (2013, September 4). Retrieved from http://apps.who.int/gho/data/node.main.165?lang=en
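The fence rule defined earlier in this section, which the modified box plot uses to flag outliers, can also be checked directly in R. A minimal sketch using the life-expectancy data from Example $4$; the variable names are ours, and R's quantile() follows its default quartile rule, so Q1 and Q3 differ slightly from the by-hand values (as the note in the answers above points out):
expectancy <- c(70, 67, 69, 65, 69, 77, 65, 68, 75, 74, 64)   # life expectancies from Example 4
q1 <- unname(quantile(expectancy, 0.25))   # 66 by R's default rule (the by-hand method gave 65)
q3 <- unname(quantile(expectancy, 0.75))   # 72 by R's default rule (the by-hand method gave 74)
iqr <- q3 - q1                             # interquartile range: 6
low_fence <- q1 - 1.5 * iqr                # low fence: 57
high_fence <- q3 + 1.5 * iqr               # high fence: 81
expectancy[expectancy < low_fence | expectancy > high_fence]   # numeric(0): no outliers, as the box plot showed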
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/03%3A_Examining_the_Evidence_Using_Graphs_and_Statistics/3.03%3A_Ranking.txt
One story about how probability theory was developed is that a gambler wanted to know when to bet more and when to bet less. He talked to a couple of friends of his that happened to be mathematicians. Their names were Pierre de Fermat and Blaise Pascal. Since then many other mathematicians have worked to develop probability theory. Understanding probabilities are important in life. Examples of mundane questions that probability can answer for you are if you need to carry an umbrella or wear a heavy coat on a given day. More important questions that probability can help with are your chances that the car you are buying will need more maintenance, your chances of passing a class, your chances of winning the lottery, your chances of being in a car accident, and the chances that the U.S. will be attacked by terrorists. Most people do not have a very good understanding of probability, so they worry about being attacked by a terrorist but not about being in a car accident. The probability of being in a terrorist attack is much smaller than the probability of being in a car accident, thus it actually would make more sense to worry about driving. Also, the chance of you winning the lottery is very small, yet many people will spend the money on lottery tickets. Yet, if instead they saved the money that they spend on the lottery, they would have more money. In general, events that have a low probability (under 5%) are unlikely to occur. Whereas if an event has a high probability of happening (over 80%), then there is a good chance that the event will happen. This chapter will present some of the theory that you need to help make a determination of whether an event is likely to happen or not. First you need some definitions. Definition $1$ Experiment: an activity that has specific result that can occur, but it is unknown which results will occur. Definition $2$ Outcomes: the result of an experiment. Definition $3$ Event: a set of certain outcomes of an experiment that you want to have happen. Definition $4$ Sample Space: collection of all possible outcomes of the experiment. Usually denoted as SS. Definition $5$ Event Space: the set of outcomes that make up an event. The symbol is usually a capital letter. Start with an experiment. Suppose that the experiment is rolling a die. The sample space is {1, 2, 3, 4, 5, 6}. The event that you want is to get a 6, and the event space is {6}. To do this, roll a die 10 times. When you do that, you get a 6 two times. Based on this experiment, the probability of getting a 6 is 2 out of 10 or 1/5. To get more accuracy, repeat the experiment more times. It is easiest to put this in a table, where n represents the number of times the experiment is repeated. When you put the number of 6s found over the number of times you repeat the experiment, this is the relative frequency. n Number of 6s Relative Frequency 10 2 0.2 50 6 0.12 100 18 0.18 500 81 0.162 1000 163 0.163 Table $1$: Trials for Die Experiment Notice that as n increased, the relative frequency seems to approach a number. It looks like it is approaching 0.163. You can say that the probability of getting a 6 is approximately 0.163. If you want more accuracy, then increase n even more. These probabilities are called experimental probabilities since they are found by actually doing the experiment. They come about from the relative frequencies and give an approximation of the true probability. 
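One way to see the relative frequencies settle down is to let a computer repeat the experiment. The short R sketch below (R is used for computations elsewhere in this book; the variable names here are just illustrative) simulates rolling a fair die n times and prints the relative frequency of a 6 for each value of n in the table above. Because the rolls are random, your counts will differ from the table, but the relative frequencies should drift toward the true probability as n grows.

# Simulate rolling a fair die n times and track the relative frequency of a 6
set.seed(42)                                       # makes the simulation reproducible
for (n in c(10, 50, 100, 500, 1000)) {
  rolls <- sample(1:6, size = n, replace = TRUE)   # n rolls of a fair six-sided die
  rel_freq <- sum(rolls == 6) / n                  # proportion of rolls that were a 6
  cat("n =", n, " relative frequency of a 6 =", rel_freq, "\n")
}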
The approximate probability of an event A, P(A), is defined below. Definition $6$ Experimental Probabilities $P(A)=\dfrac{\text { number of times } A \text { occurs }}{\text { number of times the experiment was repeated }}$ For the event of getting a 6, the probability would be $\dfrac{163}{1000}=0.163$. You must use experimental probabilities whenever it is not possible to calculate probabilities by other means. For example, to find the probability that a family has 5 children, you would have to look at many families and count how many have 5 children. Then you could calculate the probability. Another example is figuring out whether a die is fair. You would have to roll the die many times and count how often each side comes up. Make sure you repeat an experiment many times, because otherwise you will not be able to estimate the true probability. This is due to the law of large numbers. Definition $7$ Law of large numbers: as n increases, the relative frequency tends towards the actual probability value. Note Probability, relative frequency, percentage, and proportion are all different words for the same concept. Also, probabilities can be given as percentages, decimals, or fractions. Homework Exercise $1$ 1. Example $2$ contains the number of M&M’s of each color that were found in a case (Madison, 2013). Find the probability of choosing each color based on this experiment. Blue Brown Green Orange Red Yellow Total 481 371 483 544 372 369 2620 Table $2$: M&M Distribution 2. Eyeglassomatic manufactures eyeglasses for different retailers. They test to see how many defective lenses they made in the time period of January 1 to March 31. Example $3$ gives the defect and the number of defects. Find the probability of each defect type based on this data. Defect type Number of defects Scratch 5865 Right shaped - small 4613 Flaked 1992 Wrong axis 1838 Chamfer wrong 1596 Crazing, cracks 1546 Wrong shape 1485 Wrong PD 1398 Spots and bubbles 1371 Wrong height 1130 Right shape - big 1105 Lost in lab 976 Spots/bubble - intern 976 Table $3$: Number of Defective Lenses 3. In Australia in 1995, 17 of the 2907 indigenous people in prison died. In that same year, 42 of the 14501 non-indigenous people in prison died ("Aboriginal deaths in," 2013). Find the probability that an indigenous person dies in prison and the probability that a non-indigenous person dies in prison. Compare these numbers and discuss what the numbers may mean. 4. A project conducted by the Australian Federal Office of Road Safety asked people many questions about their cars. One question was the reason that a person chooses a given car, and that data is in Example $4$ ("Car preferences," 2013). Find the probability a person chooses a car for each of the given reasons. Safety Reliability Cost Performance Comfort Looks 84 62 46 34 47 27 Table $4$: Reason for Choosing a Car Answer 1. P(blue) = 0.184, P(brown) = 0.142, P(green) = 0.184, P(orange) = 0.208, P(red) = 0.142, P(yellow) = 0.141 3. P(indigenous person dies) = 0.0058, P(non-indigenous person dies) = 0.0029, see solutions
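Since an experimental probability is just a count divided by a total, a whole table can be converted at once. As a quick illustration (the vector below simply re-enters the M&M counts from problem 1, with names of my choosing), dividing by the total reproduces the probabilities listed in the answers.

# Experimental probabilities for the M&M color counts in problem 1
counts <- c(Blue = 481, Brown = 371, Green = 483,
            Orange = 544, Red = 372, Yellow = 369)
total <- sum(counts)        # 2620 candies in the case
round(counts / total, 3)    # e.g. Blue = 0.184, Brown = 0.142, ...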
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/04%3A_Probability/4.01%3A_Empirical_Probability.txt
It is not always feasible to conduct an experiment over and over again, so it would be better to be able to find the probabilities without conducting the experiment. These probabilities are called Theoretical Probabilities. To be able to do theoretical probabilities, there is an assumption that you need to consider. It is that all of the outcomes in the sample space need to be equally likely outcomes. This means that every outcome of the experiment needs to have the same chance of happening. Example $1$ Equally likely outcomes Which of the following experiments have equally likely outcomes? 1. Rolling a fair die. 2. Flip a coin that is weighted so one side comes up more often than the other. 3. Pull a ball out of a can containing 6 red balls and 8 green balls. All balls are the same size. 4. Picking a card from a deck. 5. Rolling a die to see if it is fair. Solution 1. Since the die is fair, every side of the die has the same chance of coming up. The outcomes are the different sides, so each outcome is equally likely. 2. Since the coin is weighted, one side is more likely to come up than the other side. The outcomes are the different sides, so each outcome is not equally likely. 3. Since each ball is the same size, then each ball has the same chance of being chosen. The outcomes of this experiment are the individual balls, so each outcome is equally likely. Don’t assume that because the chances of pulling a red ball are less than pulling a green ball that the outcomes are not equally likely. The outcomes are the individual balls and they are equally likely. 4. If you assume that the deck is fair, then each card has the same chance of being chosen. Thus the outcomes are equally likely outcomes. You do have to make this assumption. For many of the experiments you will do, you do have to make this kind of assumption. 5. In this case you are not sure the die is fair. The only way to determine if it is fair is to actually conduct the experiment, since you don’t know if the outcomes are equally likely. If the experimental probabilities are fairly close to the theoretical probabilities, then the die is fair. If the outcomes are not equally likely, then you must do experimental probabilities. If the outcomes are equally likely, then you can do theoretical probabilities. Definition $1$: Theoretical Probabilities If the outcomes of an experiment are equally likely, then the probability of event A happening is $P(A)=\dfrac{\# \text { of outcomes in event space }}{\# \text { of outcomes in sample space }}$ Example $2$ calculating theoretical probabilities Suppose you conduct an experiment where you flip a fair coin twice. 1. What is the sample space? 2. What is the probability of getting exactly one head? 3. What is the probability of getting at least one head? 4. What is the probability of getting a head and a tail? 5. What is the probability of getting a head or a tail? 6. What is the probability of getting a foot? 7. What is the probability of each outcome? What is the sum of these probabilities? Solution a. There are several different sample spaces you can do. One is SS={0, 1, 2} where you are counting the number of heads. However, the outcomes are not equally likely since you can get one head by getting a head on the first flip and a tail on the second or a tail on the first flip and a head on the second. There are 2 ways to get that outcome and only one way to get the other outcomes. Instead it might be better to give the sample space as listing what can happen on each flip. 
Let H = head and T = tail, and list which can happen on each flip. SS={HH, HT, TH, TT} b. Let A = getting exactly one head. The event space is A = {HT, TH}. So $P(A)=\dfrac{2}{4} \text { or } \dfrac{1}{2}$ It may not be advantageous to reduce the fractions to lowest terms, since it is easier to compare fractions if they have the same denominator. c. Let B = getting at least one head. At least one head means get one or more. The event space is B = {HT, TH, HH} and $P(B)=\dfrac{3}{4}$ Since P(B) is greater than the P(A), then event B is more likely to happen than event A. d. Let C = getting a head and a tail = {HT, TH} and $P(C)=\dfrac{2}{4}$ This is the same event space as event A, but it is a different event. Sometimes two different events can give the same event space. e. Let D = getting a head or a tail. Since or means one or the other or both and it doesn’t specify the number of heads or tails, then D = {HH, HT, TH, TT} and $P(D)=\dfrac{4}{4}=1$ f. Let E = getting a foot. Since you can’t get a foot, E = {} or the empty set and $P(E)=\dfrac{0}{4}=0$ g. $P(H H)=P(H T)=P(T H)=P(T T)=\dfrac{1}{4}$. If you add all of these probabilities together you get 1. This example had some results in it that are important concepts. They are summarized below: Probability Properties 1. $0 \leq P(\text { event }) \leq 1$ 2. If the P(event)=1, then it will happen and is called the certain event. 3. If the P(event)=0, then it cannot happen and is called the impossible event. 4. $\sum P(\text { outcome })=1$ Example $3$ calculating theoretical probabilities Suppose you conduct an experiment where you pull a card from a standard deck. 1. What is the sample space? 2. What is the probability of getting a Spade? 3. What is the probability of getting a Jack? 4. What is the probability of getting an Ace? 5. What is the probability of not getting an Ace? 6. What is the probability of getting a Spade and an Ace? 7. What is the probability of getting a Spade or an Ace? 8. What is the probability of getting a Jack and an Ace? 9. What is the probability of getting a Jack or an Ace? Solution a. SS = {2S, 3S, 4S, 5S, 6S, 7S, 8S, 9S, 10S, JS, QS, KS, AS, 2C, 3C, 4C, 5C, 6C, 7C, 8C, 9C, 10C, JC, QC, KC, AC, 2D, 3D, 4D, 5D, 6D, 7D, 8D, 9D, 10D, JD, QD, KD, AD, 2H, 3H, 4H, 5H, 6H, 7H, 8H, 9H, 10H, JH, QH, KH, AH} b. Let A = getting a spade = {2S, 3S, 4S, 5S, 6S, 7S, 8S, 9S, 10S, JS, QS, KS, AS} so $P(A)=\dfrac{13}{52}$ c. Let B = getting a Jack = {JS, JC, JH, JD} so $P(B)=\dfrac{4}{52}$ d. Let C = getting an Ace = {AS, AC, AH, AD} so $P(C)=\dfrac{4}{52}$ e. Let D = not getting an Ace = {2S, 3S, 4S, 5S, 6S, 7S, 8S, 9S, 10S, JS, QS, KS, 2C, 3C, 4C, 5C, 6C, 7C, 8C, 9C, 10C, JC, QC, KC, 2D, 3D, 4D, 5D, 6D, 7D, 8D, 9D, 10D, JD, QD, KD, 2H, 3H, 4H, 5H, 6H, 7H, 8H, 9H, 10H, JH, QH, KH} so $P(D)=\dfrac{48}{52}$ Notice, $P(D)+P(C)=\dfrac{48}{52}+\dfrac{4}{52}=1$, so you could have found the probability of D by doing 1 minus the probability of C $P(D)=1-P(C)=1-\dfrac{4}{52}=\dfrac{48}{52}$. f. Let E = getting a Spade and an Ace = {AS} so $P(E)=\dfrac{1}{52}$ g. Let F = getting a Spade and an Ace ={2S, 3S, 4S, 5S, 6S, 7S, 8S, 9S, 10S, JS, QS, KS, AS, AC, AD, AH} so $P(F)=\dfrac{16}{52}$ h. Let G = getting a Jack and an Ace = { } since you can’t do that with one card. So $P(G)=0$ i. 
Let H = getting a Jack or an Ace = {JS, JC, JD, JH, AS, AC, AD, AH} so $P(H)=\dfrac{8}{52}$ Example $4$ calculating theoretical probabilities Suppose you have an iPod Shuffle with the following songs on it: 5 Rolling Stones songs, 7 Beatles songs, 9 Bob Dylan songs, 4 Faith Hill songs, 2 Taylor Swift songs, 7 U2 songs, 4 Mariah Carey songs, 7 Bob Marley songs, 6 Bunny Wailer songs, 7 Elton John songs, 5 Led Zeppelin songs, and 4 Dave Mathews Band songs. The different genre that you have are rock from the 60s which includes Rolling Stones, Beatles, and Bob Dylan; country includes Faith Hill and Taylor Swift; rock of the 90s includes U2 and Mariah Carey; Reggae includes Bob Marley and Bunny Wailer; rock of the 70s includes Elton John and Led Zeppelin; and bluegrasss/rock includes Dave Mathews Band. The way an iPod Shuffle works, is it randomly picks the next song so you have no idea what the next song will be. Now you would like to calculate the probability that you will hear the type of music or the artist that you are interested in. The sample set is too difficult to write out, but you can figure it from looking at the number in each set and the total number. The total number of songs you have is 67. 1. What is the probability that you will hear a Faith Hill song? 2. What is the probability that you will hear a Bunny Wailer song? 3. What is the probability that you will hear a song from the 60s? 4. What is the probability that you will hear a Reggae song? 5. What is the probability that you will hear a song from the 90s or a bluegrass/rock song? 6. What is the probability that you will hear an Elton John or a Taylor Swift song? 7. What is the probability that you will hear a country song or a U2 song? Solution a. There are 4 Faith Hill songs out of the 67 songs, so $P(\text { Faith Hill song })=\dfrac{4}{67}$ b. There are 6 Bunny Wailer songs, so $P(\text { Bunny Wailer })=\dfrac{6}{67}$ c. There are 5, 7, and 9 songs that are classified as rock from the 60s, which is 21 total, so $P(\text { rock from the } 60 \mathrm{s})=\dfrac{21}{67}$ d. There are 6 and 7 songs that are classified as Reggae, which is 13 total, so $P(\text { Reggae })=\dfrac{13}{67}$ e. There are 7 and 4 songs that are songs from the 90s and 4 songs that are bluegrass/rock, for a total of 15, so $P(\text { rock from the } 90 \text { s or bluegrass/rock })=\dfrac{15}{67}$ f. There are 7 Elton John songs and 2 Taylor Swift songs, for a total of 9, so $P(\text { Elton John or Taylor Swift song })=\dfrac{9}{67}$ g. There are 6 country songs and 7 U2 songs, for a total of 13, so $P(\text { country or } \mathrm{U} 2 \text { song })=\dfrac{13}{67}$ Of course you can do any other combinations you would like. Notice in Example $3$ part e, it was mentioned that the probability of event D plus the probability of event C was 1. This is because these two events have no outcomes in common, and together they make up the entire sample space. Events that have this property are called complementary events. Definition $2$: complementary events If two events are complementary events then to find the probability of one just subtract the probability of the other from one. Notation used for complement of A is not A or $A^{c}$. $P(A)+P\left(A^{c}\right)=1, \text { or } P(A)=1-P\left(A^{c}\right)$ Example $5$ complementary events 1. Suppose you know that the probability of it raining today is 0.45. What is the probability of it not raining? 2. Suppose you know the probability of not getting the flu is 0.24. 
What is the probability of getting the flu? 3. In an experiment of picking a card from a deck, what is the probability of not getting a card that is a Queen? Solution a. Since not raining is the complement of raining, then $P(\text { not raining })=1-P(\text { raining })=1-0.45=0.55$ b. Since getting the flu is the complement of not getting the flu, then $P(\text { getting the flu })=1-P(\text { not getting the flu })=1-0.24=0.76$ c. You could do this problem by listing all the ways to not get a queen, but that set is fairly large. One advantage of the complement is that it reduces the workload. You use the complement in many situations to make the work shorter and easier. In this case it is easier to list all the ways to get a Queen, find the probability of the Queen, and then subtract from one. Queen = {QS, QC, QD, QH} so $P(\text { Queen })=\dfrac{4}{52}$ and $P(\text { not Queen })=1-P(\text { Queen })=1-\dfrac{4}{52}=\dfrac{48}{52}$ The complement is useful when you are trying to find the probability of an event that involves the words at least or an event that involves the words at most. As an example of an at least event is suppose you want to find the probability of making at least $50,000 when you graduate from college. That means you want the probability of your salary being greater than or equal to$50,000. An example of an at most event is suppose you want to find the probability of rolling a die and getting at most a 4. That means that you want to get less than or equal to a 4 on the die. The reason to use the complement is that sometimes it is easier to find the probability of the complement and then subtract from 1. Example $6$ demonstrates how to do this. Example $6$ using the complement to find probabilities 1. In an experiment of rolling a fair die one time, find the probability of rolling at most a 4 on the die. 2. In an experiment of pulling a card from a fair deck, find the probability of pulling at least a 5 (ace is a high card in this example). Solution a. The sample space for this experiment is {1, 2, 3, 4, 5, 6}. You want the event of getting at most a 4, which is the same as thinking of getting 4 or less. The event space is {1, 2, 3, 4}. The probability is $P(\text { at most } 4)=\dfrac{4}{6}$ Or you could have used the complement. The complement of rolling at most a 4 would be rolling number bigger than 4. The event space for the complement is {5, 6}. The probability of the complement is $\dfrac{2}{6}$. The probability of at most 4 would be $P(\text { at most } 4)=1-P(\text { more than } 4)=1-\dfrac{2}{6}=\dfrac{4}{6}$ Notice you have the same answer, but the event space was easier to write out. On this example it probability wasn’t that useful, but in the future there will be events where it is much easier to use the complement. b. The sample space for this experiment is SS = {2S, 3S, 4S, 5S, 6S, 7S, 8S, 9S, 10S, JS, QS, KS, AS, 2C, 3C, 4C, 5C, 6C, 7C, 8C, 9C, 10C, JC, QC, KC, AC, 2D, 3D, 4D, 5D, 6D, 7D, 8D, 9D, 10D, JD, QD, KD, AD, 2H, 3H, 4H, 5H, 6H, 7H, 8H, 9H, 10H, JH, QH, KH, AH} Pulling a card that is at least a 5 would involve listing all of the cards that are a 5 or more. It would be much easier to list the outcomes that make up the complement. The complement of at least a 5 is less than a 5. That would be the event of 4 or less. The event space for the complement would be {2S, 3S, 4S, 2C, 3C, 4C, 2D, 3D, 4D, 2H, 3H, 4H}. The probability of the complement would be $\dfrac{12}{52}$. 
The probability of at least a 5 would be $P(\text { at least } \mathbf{a} 5)=1-P(4 \text { or less })=1-\dfrac{12}{52}=\dfrac{40}{52}$ Another concept was show in Example $3$ parts g and i. The problems were looking for the probability of one event or another. In part g, it was looking for the probability of getting a Spade or an Ace. That was equal to $\dfrac{16}{52}$. In part i, it was looking for the probability of getting a Jack or an Ace. That was equal to $\dfrac{8}{52}$. If you look back at the parts b, c, and d, you might notice the following result: $P(\text { Jack })+P(\text { Ace })=P(\text { Jack or Ace }) \text { but } P(\text { Spade })+P(\text { Ace }) \neq P(\text { Spade or } \text { Ace })$ Why does adding two individual probabilities together work in one situation to give the probability of one or another event and not give the correct probability in the other? The reason this is true in the case of the Jack and the Ace is that these two events cannot happen together. There is no overlap between the two events, and in fact the $P(\text { Jack and } \mathrm{Acc})=0$. However, in the case of the Spade and Ace, they can happen together. There is overlap, mainly the ace of spades. The $P(\text { Spade and } \mathrm{Ace}) \neq 0$. When two events cannot happen at the same time, they are called mutually exclusive. In the above situation, the events Jack and Ace are mutually exclusive, while the events Spade and Ace are not mutually exclusive. Addition Rules: If two events A and B are mutually exclusive, then $P(A \text { or } B)=P(A)+P(B) \text { and } P(A \text { and } B)=0$ If two events A and B are not mutually exclusive, then $P(A \text { or } B)=P(A)+P(B)-P(A \text { and } B)$ Example $7$ using addition rules Suppose your experiment is to roll two fair dice. 1. What is the sample space? 2. What is the probability of getting a sum of 5? 3. What is the probability of getting the first die a 2? 4. What is the probability of getting a sum of 7? 5. What is the probability of getting a sum of 5 and the first die a 2? 6. What is the probability of getting a sum of 5 or the first die a 2? 7. What is the probability of getting a sum of 5 and sum of 7? 8. What is the probability of getting a sum of 5 or sum of 7? Solution a. As with the other examples you need to come up with a sample space that has equally likely outcomes. One sample space is to list the sums possible on each roll. That sample space would look like: SS = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}. However, there are more ways to get a sum of 7 then there are to get a sum of 2, so these outcomes are not equally likely. Another thought is to list the possibilities on each roll. As an example you could roll the dice and on the first die you could get a 1. The other die could be any number between 1 and 6, but say it is a 1 also. Then this outcome would look like (1,1). Similarly, you could get (1, 2), (1, 3), (1,4), (1, 5), or (1, 6). Also, you could get a 2, 3, 4, 5, or 6 on the first die instead. Putting this all together, you get the sample space: $\begin{array}{r}{\mathrm{SS}=\{(1,1),(1,2),(1,3),(1,4),(1,5),(1,6)} \ {(2,1),(2,2),(2,3),(2,4),(2,5),(2,6)} \ {(3,1),(3,2),(3,3),(3,4),(3,5),(3,6)} \ {(4,1),(4,2),(4,3),(4,4),(4,5),(4,6)} \ {(5,1),(5,2),(5,3),(5,4),(5,5),(5,6)} \ {(6,1),(6,2),(6,3),(6,4),(6,5),(6,6) \}}\end{array}$ Notice that a (2,3) is different from a (3,2), since the order that you roll the die is important and you can tell the difference between these two outcomes. 
You only need each of the doubles, such as (1,1), once, since the two orders are not distinguishable from each other. This will always be the sample space for rolling two dice. b. Let A = getting a sum of 5 = {(4,1), (3,2), (2,3), (1,4)} so $P(A)=\dfrac{4}{36}$ c. Let B = getting first die a 2 = {(2,1), (2,2), (2,3), (2,4), (2,5), (2,6)} so $P(B)=\dfrac{6}{36}$ d. Let C = getting a sum of 7 = {(6,1), (5,2), (4,3), (3,4), (2,5), (1,6)} so $P(C)=\dfrac{6}{36}$ e. This is the event A and B, which contains the outcome {(2,3)}, so $P(A \text { and } B)=\dfrac{1}{36}$ f. Notice from part e that these two events are not mutually exclusive, so $P(A \text { or } B)=P(A)+P(B)-P(A \text { and } B)$ $=\dfrac{4}{36}+\dfrac{6}{36}-\dfrac{1}{36}$ $=\dfrac{9}{36}$ g. These are the events A and C, which have no outcomes in common. Thus A and C = { }, so $P(A \text { and } C)=0$ h. From part g, these two events are mutually exclusive, so $P(A \text { or } C)=P(A)+P(C)$ $=\dfrac{4}{36}+\dfrac{6}{36}$ $=\dfrac{10}{36}$ Odds Many people like to talk about the odds of something happening or not happening. Mathematicians, statisticians, and scientists prefer to deal with probabilities since odds are difficult to work with, but gamblers prefer to work in odds for figuring out how much they are paid if they win. Definition $3$ The actual odds against event A occurring are the ratio $P\left(A^{c}\right) / P(A)$, usually expressed in the form a:b or a to b, where a and b are integers with no common factors. Definition $4$ The actual odds in favor of event A occurring are the ratio $P(A) / P\left(A^{c}\right)$, which is the reciprocal of the odds against. If the odds against event A are a:b, then the odds in favor of event A are b:a. Definition $5$ The payoff odds against event A occurring are the ratio of the net profit (if you win) to the amount bet. payoff odds against event A = (net profit) : (amount bet) Example $8$ odds against and payoff odds In the game of Craps, if a shooter has a come-out roll of a 7 or an 11, it is called a natural and the pass line wins. The payoff odds are given by a casino as 1:1. 1. Find the probability of a natural. 2. Find the actual odds for a natural. 3. Find the actual odds against a natural. 4. If the casino pays 1:1, how much profit does the casino make on a $10 bet? Solution a. A natural is a 7 or 11. The sample space is $\begin{array}{r}{\mathrm{SS}=\{(1,1),(1,2),(1,3),(1,4),(1,5),(1,6)} \\ {(2,1),(2,2),(2,3),(2,4),(2,5),(2,6)} \\ {(3,1),(3,2),(3,3),(3,4),(3,5),(3,6)} \\ {(4,1),(4,2),(4,3),(4,4),(4,5),(4,6)} \\ {(5,1),(5,2),(5,3),(5,4),(5,5),(5,6)} \\ {(6,1),(6,2),(6,3),(6,4),(6,5),(6,6) \}}\end{array}$ The event space is {(1,6), (2,5), (3,4), (4,3), (5,2), (6,1), (5,6), (6,5)} So $P(7 \text { or } 11)=\dfrac{8}{36}$ b. odds for a natural $=\dfrac{P(7 \text { or } 11)}{P(\text {not } 7 \text { or } 11)}$ $=\dfrac{8 / 36}{1-8 / 36}$ $=\dfrac{8 / 36}{28 / 36}$ $=\dfrac{8}{28}$ $=\dfrac{2}{7}$ c. odds against a natural $=\dfrac{P(\text { not } 7 \text { or } 11)}{P(7 \text { or } 11)}=\dfrac{28}{8}=\dfrac{7}{2}=\dfrac{3.5}{1}$ d. The actual odds against are 3.5 to 1 while the payoff odds are 1 to 1. The casino pays you $10 for your $10 bet. If the casino paid you the actual odds, they would pay $3.50 on every $1 bet, and on $10, they would pay $3.5 * \$ 10=\$ 35$. Their profit is $\$ 35-\$ 10=\$ 25$. Homework Exercise $1$ 1. Example $1$ contains the number of M&M’s of each color that were found in a case (Madison, 2013). Blue Brown Green Orange Red Yellow Total 481 371 483 544 372 369 2620 Table $1$: M&M Distribution a.
Find the probability of choosing a green or red M&M. b. Find the probability of choosing a blue, red, or yellow M&M. c. Find the probability of not choosing a brown M&M. d. Find the probability of not choosing a green M&M. 2. Eyeglassomatic manufactures eyeglasses for different retailers. They test to see how many defective lenses they made in a time period. Example $2$ gives the defect and the number of defects. Defect type Number of defects Scratch 5865 Right shaped - small 4613 Flaked 1992 Wrong axis 1838 Chamfer wrong 1596 Crazing, cracks 1546 Wrong shape 1485 Wrong PD 1398 Spots and bubbles 1371 Wrong height 1130 Right shape - big 1105 Lost in lab 976 Spots/bubble 976 Table $2$: Number of Defective Lenses a. Find the probability of picking a lens that is scratched or flaked. b. Find the probability of picking a lens that is the wrong PD or was lost in lab. c. Find the probability of picking a lens that is not scratched. d. Find the probability of picking a lens that is not the wrong shape. 3. An experiment is to flip a fair coin three times. 1. State the sample space. 2. Find the probability of getting exactly two heads. Make sure you state the event space. 3. Find the probability of getting at least two heads. Make sure you state the event space. 4. Find the probability of getting an odd number of heads. Make sure you state the event space. 5. Find the probability of getting all heads or all tails. Make sure you state the event space. 6. Find the probability of getting exactly two heads or exactly two tails. 7. Find the probability of not getting an odd number of heads. 4. An experiment is rolling a fair die and then flipping a fair coin. 1. State the sample space. 2. Find the probability of getting a head. Make sure you state the event space. 3. Find the probability of getting a 6. Make sure you state the event space. 4. Find the probability of getting a 6 or a head. 5. Find the probability of getting a 3 and a tail. 5. An experiment is rolling two fair dice. 1. State the sample space. 2. Find the probability of getting a sum of 3. Make sure you state the event space. 3. Find the probability of getting the first die is a 4. Make sure you state the event space. 4. Find the probability of getting a sum of 8. Make sure you state the event space. 5. Find the probability of getting a sum of 3 or sum of 8. 6. Find the probability of getting a sum of 3 or the first die is a 4. 7. Find the probability of getting a sum of 8 or the first die is a 4. 8. Find the probability of not getting a sum of 8. 6. An experiment is pulling one card from a fair deck. 1. State the sample space. 2. Find the probability of getting a Ten. Make sure you state the event space. 3. Find the probability of getting a Diamond. Make sure you state the event space. 4. Find the probability of getting a Club. Make sure you state the event space. 5. Find the probability of getting a Diamond or a Club. 6. Find the probability of getting a Ten or a Diamond. 7. An experiment is pulling a ball from an urn that contains 3 blue balls and 5 red balls. 1. Find the probability of getting a red ball. 2. Find the probability of getting a blue ball. 3. Find the odds for getting a red ball. 4. Find the odds for getting a blue ball. 8. In the game of roulette, there is a wheel with spaces marked 0 through 36 and a space marked 00. 1. Find the probability of winning if you pick the number 7 and it comes up on the wheel. 2. Find the odds against winning if you pick the number 7. 3. 
The casino will pay you $20 for every dollar you bet if your number comes up. How much profit is the casino making on the bet? Answer 1. a. P(green or red) = 0.326, b. P(blue, red, or yellow) = 0.466, c. P(not brown) = 0.858, d. P(not green) = 0.816 3. a. See solutions, b. P(2 heads) = 0.375, c. P(at least 2 heads) = 0.50, d. P(odd number of heads) = 0.50, e. P(all heads or all tails) = 0.25, f. P(two heads or two tails) = 0.75, g. P(not an odd number of heads) = 0.50 5. a. See solutions, b. P(sum of 3) = 0.056, c. P(1st die a 4) = 0.167, d. P(sum of 8) = 0.139, e. P(sum of 3 or sum of 8) = 0.194, f. P(sum of 3 or 1st die a 4) = 0.222, g. P(sum of 8 or 1st die a 4) = 0.278, h. P(not getting a sum of 8) = 0.861 7. a. P(red ball) = 0.625, b. P(blue ball) = 0.375, c. 5 to 3, d. 3 to 5
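Answers like those for problem 5 can be double-checked by letting R enumerate the 36 equally likely outcomes instead of writing them out by hand. This is only a sketch; expand.grid() is a base R function, and the object names are my own.

# Enumerate the sample space for rolling two fair dice
ss <- expand.grid(die1 = 1:6, die2 = 1:6)
total <- nrow(ss)                                              # 36 equally likely outcomes
p_sum3 <- sum(ss$die1 + ss$die2 == 3) / total                  # P(sum of 3) = 2/36
p_first4 <- sum(ss$die1 == 4) / total                          # P(first die a 4) = 6/36
p_both <- sum(ss$die1 + ss$die2 == 3 & ss$die1 == 4) / total   # 0, the events are mutually exclusive
p_sum3 + p_first4 - p_both                                     # P(sum of 3 or first die a 4) = 8/36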
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/04%3A_Probability/4.02%3A_Theoretical_Probability.txt
Suppose you want to figure out if you should buy a new car. When you first go and look, you find two cars that you like the most. In your mind they are equal, and so each has a 50% chance that you will pick it. Then you start to look at the reviews of the cars and realize that the first car has had 40% of them needing to be repaired in the first year, while the second car only has 10% of the cars needing to be repaired in the first year. You could use this information to help you decide which car you want to actually purchase. Both cars no longer have a 50% chance of being the car you choose. You could actually calculate the probability you will buy each car, which is a conditional probability. You probably wouldn’t do this, but it gives you an example of what a conditional probability is. Conditional probabilities are probabilities calculated after information is given. This is where you want to find the probability of event A happening after you know that event B has happened. If you know that B has happened, then you don’t need to consider the rest of the sample space. You only need the outcomes that make up event B. Event B becomes the new sample space, which is called the restricted sample space, R. If you always write a restricted sample space when doing conditional probabilities and use this as your sample space, you will have no trouble with conditional probabilities. The notation for conditional probabilities is $P(A, \text { given } B)=P(A | B)$. The event following the vertical line is always the restricted sample space. Example $1$ conditional probabilities 1. Suppose you roll two dice. What is the probability of getting a sum of 5, given that the first die is a 2? 2. Suppose you roll two dice. What is the probability of getting a sum of 7, given the first die is a 4? 3. Suppose you roll two dice. What is the probability of getting the second die a 2, given the sum is a 9? 4. Suppose you pick a card from a deck. What is the probability of getting a Spade, given that the card is a Jack? 5. Suppose you pick a card from a deck. What is the probability of getting an Ace, given the card is a Queen? Solution a. Since you know that the first die is a 2, then this is your restricted sample space, so R = {(2,1), (2,2), (2,3), (2,4), (2,5), (2,6)} Out of this restricted sample space, the way to get a sum of 5 is {(2,3)}. Thus $P(\text { sum of } 5 | \text { the first die is a } 2)=\dfrac{1}{6}$ b. Since you know that the first die is a 4, this is your restricted sample space, so R = {(4,1), (4,2), (4,3), (4,4), (4,5), (4,6)} Out of this restricted sample space, the way to get a sum of 7 is {(4,3)}. Thus $P(\text { sum of } 7 | \text { the first die is a } 4)=\dfrac{1}{6}$ c. Since you know the sum is a 9, this is your restricted sample space, so R = {(3,6), (4,5), (5,4), (6,3)} Out of this restricted sample space there is no way to get the second die a 2. Thus $P(\text { second die is a } 2 | \text { sum is } 9)=0$ d. Since you know that the card is a Jack, this is your restricted sample space, so R = {JS, JC, JD, JH} Out of this restricted sample space, the way to get a Spade is {JS}. Thus $P(\text { Spade } | \mathrm{Jack})=\dfrac{1}{4}$ e. on: Since you know that the card is a Queen, then this is your restricted sample space, so R = {QS, QC, QD, QH} Out of this restricted sample space, there is no way to get an Ace, thus $P(\text { Ace | Queen })=0$ If you look at the results of Example $7$ part d and Example $1$ part b, you will notice that you get the same answer. 
This means that knowing that the first die is a 4 did not change the probability that the sum is a 7. This added knowledge did not help you in any way. It is as if that information was not given at all. However, if you compare Example $7$ part b and Example $1$ part a, you will notice that they are not the same answer. In this case, knowing that the first die is a 2 did change the probability of getting a sum of 5. In the first case, the events sum of 7 and first die is a 4 are called independent events. In the second case, the events sum of 5 and first die is a 2 are called dependent events. Events A and B are considered independent events if the fact that one event happens does not change the probability of the other event happening. In other words, events A and B are independent if the fact that B has happened does not affect the probability of event A happening and the fact that A has happened does not affect the probability of event B happening. Otherwise, the two events are dependent. In symbols, A and B are independent if $P(A | B)=P(A) \text { or } P(B | A)=P(B)$ Example $2$ independent events 1. Suppose you roll two dice. Are the events “sum of 7” and “first die is a 3” independent? 2. Suppose you roll two dice. Are the events “sum of 6” and “first die is a 4” independent? 3. Suppose you pick a card from a deck. Are the events “Jack” and “Spade” independent? 4. Suppose you pick a card from a deck. Are the events “Heart” and “Red” card independent? 5. Suppose you have two children via separate births. Are the events “the first is a boy” and “the second is a girl” independent? 6. Suppose you flip a coin 50 times and get a head every time, what is the probability of getting a head on the next flip? Solution a. To determine if they are independent, you need to see if $P(A | B)=P(A)$. It doesn’t matter which event is A or B, so just assign one as A and one as B. Let A = sum of 7 = {(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)} and B = first die is a 3 = {(3,1), (3,2), (3,3), (3,4), (3,5), (3,6)} $P(A | B)$ means that you assume that B has happened. The restricted sample space is B, R = {(3,1), (3,2), (3,3), (3,4), (3,5), (3,6)} In this restricted sample space, the way for A to happen is {(3,4)}, so $P(A | B)=\dfrac{1}{6}$ The $P(A)=\dfrac{6}{36}=\dfrac{1}{6}$ $P(A | B)=P(A)$ Thus “sum of 7” and “first die is a 3” are independent events. b. To determine if they are independent, you need to see if $P(A | B)=P(A)$. It doesn’t matter which event is A or B, so just assign one as A and one as B. Let A = sum of 6 = {(1,5), (2,4), (3,3), (4,2), (5,1)} and B = first die is a 4 = {(4,1), (4,2), (4,3), (4,4), (4,5), (4,6)}, so $P(A)=\dfrac{5}{36}$ For $P(A | B)$, the restricted sample space is B, R = {(4,1), (4,2), (4,3), (4,4), (4,5), (4,6)} In this restricted sample space, the way for A to happen is {(4,2)}, so $P(A | B)=\dfrac{1}{6}$. In this case, “sum of 6” and “first die is a 4” are dependent since $P(A | B) \neq P(A)$. c. To determine if they are independent, you need to see if $P(A | B)=P(A)$. It doesn’t matter which event is A or B, so just assign one as A and one as B. Let A = Jack = {JS, JC, JD, JH} and B = Spade {2S, 3S, 4S, 5S, 6S, 7S, 8S, 9S, 10S, JS, QS, KS, AS} $P(A)=\dfrac{4}{52}=\dfrac{1}{13}$ For $P(A | B)$, the restricted sample space is B, R = {2S, 3S, 4S, 5S, 6S, 7S, 8S, 9S, 10S, JS, QS, KS, AS} In this restricted sample space, the way A happens is {JS}, so $P(A | B)=\dfrac{1}{13}$ In this case, “Jack” and “Spade” are independent since $P(A | B)=P(A)$. d. 
To determine if they are independent, you need to see if $P(A | B)=P(A)$. It doesn’t matter which event is A or B, so just assign one as A and one as B. Let A = Heart = {2H, 3H, 4H, 5H, 6H, 7H, 8H, 9H, 10H, JH, QH, KH, AH} and B = Red card = {2D, 3D, 4D, 5D, 6D, 7D, 8D, 9D, 10D, JD, QD, KD, AD, 2H, 3H, 4H, 5H, 6H, 7H, 8H, 9H, 10H, JH, QH, KH, AH}, so $P(A)=\dfrac{13}{52}=\dfrac{1}{4}$ For $P(A | B)$, the restricted sample space is B, R = {2D, 3D, 4D, 5D, 6D, 7D, 8D, 9D, 10D, JD, QD, KD, AD, 2H, 3H, 4H, 5H, 6H, 7H, 8H, 9H, 10H, JH, QH, KH, AH} In this restricted sample space, the way A can happen is 13, $P(A | B)=\dfrac{13}{26}=\dfrac{1}{2}$. In this case, “Heart” and “Red” card are dependent, since $P(A | B) \neq P(A)$. e. In this case, you actually don’t need to do any calculations. The gender of one child does not affect the gender of the second child, the events are independent. f. Since one flip of the coin does not affect the next flip (the coin does not remember what it did the time before), the probability of getting a head on the next flip is still one-half. Multiplication Rule: Two more useful formulas: If two events are dependent, then $P(A \text { and } B)=P(A) * P(B | A)$ If two events are independent, then $P(A \text { and } B)=P(A)^{*} P(B)$ If you solve the first equation for $P(B | A)$, you obtain $P(B | A)=\dfrac{P(A \text { and } B)}{P(A)}$, which is a formula to calculate a conditional probability. However, it is easier to find a conditional probability by using the restricted sample space and counting unless the sample space is large. Example $3$ Multiplication rule 1. Suppose you pick three cards from a deck, what is the probability that they are all Queens if the cards are not replaced after they are picked? 2. Suppose you pick three cards from a deck, what is the probability that they are all Queens if the cards are replaced after they are picked and before the next card is picked? Solution a. This sample space is too large to write out, so using the multiplication rule makes sense. Since the cards are not replaced, then the probability will change for the second and third cards. They are dependent events. This means that on the second draw there is one less Queen and one less card, and on the third draw there are two less Queens and 2 less cards. P(3 Queens)=P(Q on 1st and Q on 2nd and Q on 3rd) =P(Q on 1st)*P(Q on 2nd|Q on 1st)*P(Q on 3rd|1st and 2nd Q) $=\dfrac{4}{52} * \dfrac{3}{51} * \dfrac{2}{50}$ $=\dfrac{24}{132600}$ b. Again, the sample space is too large to write out, so using the multiplication rule makes sense. Since the cards are put back, one draw has no affect on the next draw and they are all independent. P(3 Queens)=P(Queen on 1st and Queen on 2nd and Queen on 3rd) =P(Queen on 1st)*P(Queen on 2nd)*P(Queen on 3rd) $=\dfrac{4}{52} * \dfrac{4}{52} * \dfrac{4}{52}$ $=\left(\dfrac{4}{52}\right)^{3}$ $=\dfrac{64}{140608}$ Example $4$ application problem The World Health Organization (WHO) keeps track of how many incidents of leprosy there are in the world. Using the WHO regions and the World Banks income groups, one can ask if an income level and a WHO region are dependent on each other in terms of predicting where the disease is. Data on leprosy cases in different countries was collected for the year 2011 and a summary is presented in Example $1$ ("Leprosy: Number of," 2013). 
WHO Region World Bank Income Group Row Total High Income Upper Middle Income Lower Middle Income Low Income Americas 174 36028 615 0 36817 Eastern Mediterranean 54 6 1883 604 2547 Europe 10 0 0 0 10 Western Pacific 26 216 3689 1155 5086 Africa 0 39 1986 15928 17953 South-East Asia 0 0 149896 10236 160132 Column Total 264 36289 158069 27923 222545 Table $1$: Number of Leprosy Cases 1. Find the probability that a person with leprosy is from the Americas. 2. Find the probability that a person with leprosy is from a high-income country. 3. Find the probability that a person with leprosy is from the Americas and a high-income country. 4. Find the probability that a person with leprosy is from a high-income country, given they are from the Americas. 5. Find the probability that a person with leprosy is from a low-income country. 6. Find the probability that a person with leprosy is from Africa. 7. Find the probability that a person with leprosy is from Africa and a low-income country. 8. Find the probability that a person with leprosy is from Africa, given they are from a low-income country. 9. Are the events that a person with leprosy is from “Africa” and “low-income country” independent events? Why or why not? 10. Are the events that a person with leprosy is from “Americas” and “high-income country” independent events? Why or why not? Solution a. There are 36817 cases of leprosy in the Americas out of 222,545 cases worldwide. So, $P(\text { Americas })=\dfrac{36817}{222545} \approx 0.165$ There is about a 16.5% chance that a person with leprosy lives in a country in the Americas. b. There are 264 cases of leprosy in high-income countries out of 222,545 cases worldwide. So, $P(\text { high-income })=\dfrac{264}{222545} \approx 0.0001$ There is about a 0.1% chance that a person with leprosy lives in a high-income country. c. There are 174 cases of leprosy in countries in a high-income country in the Americas out the 222,545 cases worldwide. So, $P(\text { Americas and high-income })=\dfrac{174}{222545} 0.0008$ There is about a 0.08% chance that a person with leprosy lives in a high-income country in the Americas. d. In this case you know that the person is in the Americas. You don’t need to consider people from Easter Mediterranean, Europe, Western Pacific, Africa, and South-east Asia. You only need to look at the row with Americas at the start. In that row, look to see how many leprosy cases there are from a high-income country. There are 174 countries out of the 36,817 leprosy cases in the Americas. So, $P(\text { high-income } | \text { Americas })=\dfrac{174}{36817} \approx 0.0047$ There is 0.47% chance that a person with leprosy is from a high-income country given that they are from the Americas. e. There are 27,923 cases of leprosy in low-income countries out of the 222,545 leprosy cases worldwide. So, $P(\text { low-income })=\dfrac{27923}{222545} \approx 0.125$ There is a 12.5% chance that a person with leprosy is from a low-income country. f. There are 17,953 cases of leprosy in Africa out of 222,545 leprosy cases worldwide. So, $P(\text { Africa })=\dfrac{17953}{222545} \approx 0.081$ There is an 8.1% chance that a person with leprosy is from Africa. g. There are 15,928 cases of leprosy in low-income countries in Africa out of all the 222,545 leprosy cases worldwide. So, $P(\text { Africa and low-income })=\dfrac{15928}{222545} \approx 0.072$ There is a 7.2% chance that a person with leprosy is from a low-income country in Africa. h. 
In this case you know that the person with leprosy is from a low-income country. You don’t need to include the high-income, upper-middle-income, and lower-middle-income countries. You only need to consider the column headed by low-income. In that column, there are 15,928 cases of leprosy in Africa out of the 27,923 cases of leprosy in low-income countries. So, $P(\text { Africa | low-income })=\dfrac{15928}{27923} \approx 0.570$ There is a 57.0% chance that a person with leprosy is from Africa, given that they are from a low-income country. i. In order for these events to be independent, either $P(\text { Africa } | \text { low-income })=P(\text { Africa })$ or $P(\text { low-income } | \text { Africa })=P(\text { low-income })$ has to be true. Part (h) showed $P(\text { Africa | low-income }) \approx 0.570$ and part (f) showed $P(\text { Africa }) \approx 0.081$. Since these are not equal, these two events are dependent. j. In order for these events to be independent, either $P(\text { Americas } | \text { high-income })=P(\text { Americas })$ or $P(\text { high-income } | \text { Americas })=P(\text { high-income })$ has to be true. Part (d) showed $P(\text { high-income } | \text { Americas }) \approx 0.0047$ and part (b) showed $P(\text { high-income }) \approx 0.001$. Since these are not equal, these two events are dependent. A big deal has been made about the difference between dependent and independent events when calculating the probability of “and” compound events. You must multiply the probability of the first event by the conditional probability of the second event. Why do you care? You need to calculate probabilities when you are performing sampling, as you will learn later. But here is a simplification that can make the calculations a lot easier: when the sample size is very small compared to the population size, you can assume that the conditional probabilities just don't change very much over the sample. For example, consider acceptance sampling. Suppose there is a big population of parts delivered to your factory, say 12,000 parts, and suppose there are 85 defective parts in the population. You decide to randomly select ten parts and reject the shipment if one or more of the sampled parts is defective. What is the probability of rejecting the shipment? There are many different ways you could reject the shipment. For example, maybe the first three parts are good, one is bad, and the rest are good. Or all ten parts could be bad, or maybe the first five. So many ways to reject! But there is only one way that you’d accept the shipment: if all ten parts are good. That would happen if the first part is good, and the second part is good, and the third part is good, and so on. Since the probability of the second part being good is (slightly) dependent on whether the first part was good, technically you should take this into consideration when you calculate the probability that all ten are good. The probability of the first sampled part being good is $\dfrac{12000-85}{12000}=\dfrac{11915}{12000}$. So the probability that all ten are good is $\dfrac{11915}{12000} * \dfrac{11914}{11999} * \dfrac{11913}{11998} * \ldots * \dfrac{11906}{11991} \approx 93.1357 \%$. If instead you assume that the probability doesn’t change much, you get $\left(\dfrac{11915}{12000}\right)^{10} \approx 93.1382 \%$. So as you can see, there is not much difference. So here is the rule: if the sample is very small compared to the size of the population, then you can assume that the probabilities are independent, even though they aren’t technically.
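The two acceptance-sampling numbers above are easy to reproduce. The following minimal R sketch computes both the exact product (sampling without replacement) and the independence approximation; the variable names are mine.

# Probability that all 10 sampled parts are good, with 85 defective parts in 12,000
good <- 12000 - 85                             # 11,915 good parts
exact  <- prod((good - 0:9) / (12000 - 0:9))   # the fractions shrink slightly with each draw
approx <- (good / 12000)^10                    # pretend the draws are independent
c(exact = exact, approx = approx)              # both are approximately 0.9314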
By the way, the probability of rejecting the shipment is $1-0.9314=0.0686=6.86 \%$. Homework Exercise $1$ 1. Are owning a refrigerator and owning a car independent events? Why or why not? 2. Are owning a computer or tablet and paying for Internet service independent events? Why or why not? 3. Are passing your statistics class and passing your biology class independent events? Why or why not? 4. Are owning a bike and owning a car independent events? Why or why not? 5. An experiment is picking a card from a fair deck. 1. What is the probability of picking a Jack given that the card is a face card? 2. What is the probability of picking a heart given that the card is a three? 3. What is the probability of picking a red card given that the card is an ace? 4. Are the events Jack and face card independent events? Why or why not? 5. Are the events red card and ace independent events? Why or why not? 6. An experiment is rolling two dice. 1. What is the probability that the sum is 6 given that the first die is a 5? 2. What is the probability that the first die is a 3 given that the sum is 11? 3. What is the probability that the sum is 7 given that the fist die is a 2? 4. Are the two events sum of 6 and first die is a 5 independent events? Why or why not? 5. Are the two events sum of 7 and first die is a 2 independent events? Why or why not? 7. You flip a coin four times. What is the probability that all four of them are heads? 8. You flip a coin six times. What is the probability that all six of them are heads? 9. You pick three cards from a deck with replacing the card each time before picking the next card. What is the probability that all three cards are kings? 10. You pick three cards from a deck without replacing a card before picking the next card. What is the probability that all three cards are kings? 11. The number of people who survived the Titanic based on class and sex is in Example $2$ ("Encyclopedia Titanica," 2013). Suppose a person is picked at random from the survivors. Class Sex Total Female Male 1st 134 59 193 2nd 94 25 119 3rd 80 58 138 Total 308 142 450 Table $2$: Surviving the Titanic a. What is the probability that a survivor was female? b. What is the probability that a survivor was in the 1st class? c. What is the probability that a survivor was a female given that the person was in 1st class? d. What is the probability that a survivor was a female and in the 1st class? e. What is the probability that a survivor was a female or in the 1st class? f. Are the events survivor is a female and survivor is in 1st class mutually exclusive? Why or why not? g. Are the events survivor is a female and survivor is in 1st class independent? Why or why not? 12. Researchers watched groups of dolphins off the coast of Ireland in 1998 to determine what activities the dolphins partake in at certain times of the day ("Activities of dolphin," 2013). The numbers in Example $3$ represent the number of groups of dolphins that were partaking in an activity at certain times of days. Activity Period Total Morning Noon Afternoon Evening Travel 6 6 14 13 39 Feed 28 4 0 56 88 Social 38 5 9 10 62 Total 72 15 23 79 189 Table $3$: Dolphin Activity a. What is the probability that a dolphin group is partaking in travel? b. What is the probability that a dolphin group is around in the morning? c. What is the probability that a dolphin group is partaking in travel given that it is morning? d. What is the probability that a dolphin group is around in the morning given that it is partaking in socializing? e. 
What is the probability that a dolphin group is around in the afternoon given that it is partaking in feeding? f. What is the probability that a dolphin group is around in the afternoon and is partaking in feeding? g. What is the probability that a dolphin group is around in the afternoon or is partaking in feeding? h. Are the events dolphin group around in the afternoon and dolphin group feeding mutually exclusive events? Why or why not? i. Are the events dolphin group around in the morning and dolphin group partaking in travel independent events? Why or why not? Answer 1. Independent, see solutions 3. Dependent, see solutions 5. a. P(Jack/face card) = 0.333, b. P(heart/card a 3) = 0.25, c. P(red card/ace) = 0.50, d. not independent, see solutions, e. independent, see solutions 7. 0.0625 9. $4.55 \times 10^{-4}$ 11. a. P(female) = 0.684, b. P(1st class) = 0.429, c. P(female/1st class) = 0.694, d. P(female and 1st class) = 0.298, e. P(female or 1st class) = 0.816, f. No, see solutions, g. Dependent, see solutions
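For table problems such as problem 11, the probabilities are just ratios of cell counts, so they are easy to recompute. Below is a hedged R sketch using the Titanic survivor counts; the restricted sample space for the conditional probability is the first-class row, and the object names are my own.

# Probabilities from the Titanic survivor table in problem 11
female_1st <- 134; first_class <- 193; females <- 308; survivors <- 450
p_female           <- females / survivors                      # about 0.684
p_first            <- first_class / survivors                  # about 0.429
p_female_given_1st <- female_1st / first_class                 # restricted sample space: 1st class
p_female_and_1st   <- female_1st / survivors                   # about 0.298
p_female_or_1st    <- p_female + p_first - p_female_and_1st    # addition rule
round(c(p_female, p_first, p_female_given_1st, p_female_and_1st, p_female_or_1st), 3)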
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/04%3A_Probability/4.03%3A_Conditional_Probability.txt
There are times when the sample space or event space is so large that it isn’t feasible to write it out. In that case, it helps to have mathematical tools for counting the size of the sample space and event space. These tools are known as counting techniques. Definition $1$ Multiplication Rule in Counting Techniques If task 1 can be done $m_{1}$ ways, task 2 can be done $m_{2}$ ways, and so forth up to task n, which can be done $m_{n}$ ways, then the number of ways to do tasks 1, 2, …, n together is $m_{1} * m_{2} * \cdots * m_{n}$. Example $1$ multiplication rule in counting A menu offers a choice of 3 salads, 8 main dishes, and 5 desserts. How many different meals consisting of one salad, one main dish, and one dessert are possible? Solution There are three tasks: picking a salad, a main dish, and a dessert. The salad task can be done 3 ways, the main dish task can be done 8 ways, and the dessert task can be done 5 ways. The number of ways to pick a salad, main dish, and dessert is $3 * 8 * 5=120$ different meals. Example $2$ Multiplication rule in counting How many three letter “words” can be made from the letters a, b, and c with no letters repeating? A “word” is just an ordered group of letters. It doesn’t have to be a real word in a dictionary. Solution There are three tasks that must be done in this case. The tasks are to pick the first letter, then the second letter, and then the third letter. The first task can be done 3 ways since there are 3 letters. The second task can be done 2 ways, since the first task took one of the letters. The third task can be done 1 way, since the first and second tasks took two of the letters. There are $3 * 2 * 1=6$ ways (3 for the first letter, 2 for the second letter, and 1 for the third letter). You can also see this with a tree diagram that branches on each choice of letter. Either way, there are 6 different “words.” In Example $2$, the solution was found by finding $3*2*1=6$. Many counting problems involve multiplying a list of decreasing numbers. This is called a factorial. There is a special symbol for this and a special button on your calculator. Definition $2$ Factorial $n !=n(n-1)(n-2) \cdots(3)(2)(1)$ As an example: $5 !=5 * 4 * 3 * 2 * 1=120$ $8 !=8 * 7 * 6 * 5 * 4 * 3 * 2 * 1=40320$ 0 factorial is defined to be 0!=1 and 1 factorial is defined to be 1!=1. Sometimes you are trying to select r objects from n total objects. The number of ways to do this depends on whether the order in which you choose the r objects matters. As an example, if you are trying to call a person on the phone, you have to have their number in the right order. Otherwise, you call someone you didn’t mean to. In this case, the order of the numbers matters. If however you were picking random numbers for the lottery, it doesn’t matter which number you pick first. As long as you have the same numbers that the lottery people pick, you win. In this case the order doesn’t matter. A permutation is an arrangement of items with a specific order. You use permutations to count items when the order matters. When the order doesn’t matter you use combinations. A combination is an arrangement of items when order is not important.
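As a quick check of the factorial values above, R's built-in factorial() function computes n! directly. This is just a sketch; the comments restate the examples from the text.

factorial(3)   # 3 * 2 * 1 = 6, the number of three-letter "words" from a, b, and c
factorial(5)   # 120
factorial(8)   # 40320
factorial(0)   # defined to be 1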
When you do a counting problem, the first thing you should ask yourself is “does order matter?” Definition $3$ Permutation Formula Picking r objects from n total objects when order matters $_{n} P_{r}=\dfrac{n !}{(n-r) !}$ Definition $4$ Combination Formula Picking r objects from n total objects when order doesn’t matter $_{n} C_{r}=\dfrac{n !}{r !(n-r) !}$ Example $3$ calculating the number of ways In a club with 15 members, how many ways can a slate of 3 officers consisting of a president, vice-president, and secretary/treasurer be chosen? Solution In this case the order matters. If you pick person 1 for president, person 2 for vice-president, and person 3 for secretary/treasurer you would have different officers than if you picked person 2 for president, person 1 for vice-president, and person 3 for secretary/treasurer. This is a permutation problem with n=15 and r=3. $_{15} P_{3}=\dfrac{15 !}{(15-3) !}=\dfrac{15 !}{12 !}=2730$ Example $4$ calculating the number of ways Suppose you want to pick 7 people out of 20 people to take part in a survey. How many ways can you do this? Solution In this case the order doesn’t matter, since you just want 7 people. This is a combination with n=20 and r=7. $_{20} C_{7}=\dfrac{20 !}{7 !(20-7) !}=\dfrac{20 !}{7 ! 13 !}=77520$ Most calculators have a factorial button on them, and many have the combination and permutation functions also. R has a combination command. Homework Exercise $1$ 1. You are going to a benefit dinner, and need to decide before the dinner what you want for salad, main dish, and dessert. You have 2 different salads to choose from, 3 main dishes, and 5 desserts. How many different meals are available? 2. How many different phone numbers are possible in the area code 928? 3. You are opening a T-shirt store. You can have long sleeves or short sleeves, three different colors, five different designs, and four different sizes. How many different shirts can you make? 4. The California license plate has one number followed by three letters followed by three numbers. How many different license plates are there? 5. Find $_{9} P_{4}$ 6. Find $_{10} P_{6}$ 7. Find $_{10} P_{5}$ 8. Find $_{20} P_{4}$ 9. You have a group of twelve people. You need to pick a president, treasurer, and secretary from the twelve. How many different ways can you do this? 10. A baseball team has a 25-man roster. A batting order has nine people. How many different batting orders are there? 11. An urn contains five red balls, seven yellow balls, and eight white balls. How many different ways can you pick two red balls? 12. How many ways can you choose seven people from a group of twenty? Answer 1. 30 meals 3. 120 shirts 5. 3024 7. 252 9. 1320 11. 10 Data sources Aboriginal deaths in custody. (2013, September 26). Retrieved from http://www.statsci.org/data/oz/custody.html Activities of dolphin groups. (2013, September 26). Retrieved from http://www.statsci.org/data/general/dolpacti.html Car preferences. (2013, September 26). Retrieved from http://www.statsci.org/data/oz/carprefs.html Encyclopedia Titanica. (2013, November 09). Retrieved from www.encyclopediatitanica.org/ Leprosy: Number of reported cases by country. (2013, September 04). Retrieved from http://apps.who.int/gho/data/node.main.A1639 Madison, J. (2013, October 15). M&M's color distribution analysis. Retrieved from http://joshmadison.com/2007/12/02/mm...tion-analysis/
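As a final note for this section, the permutation and combination calculations above can also be checked in R. Base R has choose() for combinations and factorial(), but no built-in permutation command, so the sketch below defines a small helper (the name perm is just a label chosen for this example). The values match Examples $3$ and $4$.

# combinations: nCr = n! / (r! (n-r)!)
choose(20, 7)    # Example 4: 77520 ways to pick 7 people out of 20

# permutations: nPr = n! / (n-r)!  (helper function, not built into base R)
perm <- function(n, r) factorial(n) / factorial(n - r)
perm(15, 3)      # Example 3: 2730 ways to choose the slate of 3 officers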
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/04%3A_Probability/4.04%3A_Counting_Techniques.txt
• 5.1: Basics of Probability Distributions There are different types of quantitative variables, called discrete or continuous. What is the difference between discrete and continuous data? Discrete data can only take on particular values in a range. Continuous data can take on any value in a range. Discrete data usually arises from counting while continuous data usually arises from measuring. • 5.2: Binomial Probability Distribution The focus of the section was on discrete probability distributions (pdf). To find the pdf for a situation, you usually needed to actually conduct the experiment and collect data. Then you can calculate the experimental probabilities. Normally you cannot calculate the theoretical probabilities instead. However, there are certain types of experiment that allow you to calculate the theoretical probability. One of those types is called a Binomial Experiment. • 5.3: Mean and Standard Deviation of Binomial Distribution If you list all possible values of x in a Binomial distribution, you get the Binomial Probability Distribution (pdf). You can draw a histogram of the pdf and find the mean, variance, and standard deviation of it. 05: Discrete Probability Distributions As a reminder, a variable or what will be called the random variable from now on, is represented by the letter x and it represents a quantitative (numerical) variable that is measured or observed in an experiment. Also remember there are different types of quantitative variables, called discrete or continuous. What is the difference between discrete and continuous data? Discrete data can only take on particular values in a range. Continuous data can take on any value in a range. Discrete data usually arises from counting while continuous data usually arises from measuring. Examples of each How tall is a plant given a new fertilizer? Continuous. This is something you measure. How many fleas are on prairie dogs in a colony? Discrete. This is something you count. If you have a variable, and can find a probability associated with that variable, it is called a random variable. In many cases the random variable is what you are measuring, but when it comes to discrete random variables, it is usually what you are counting. So for the example of how tall is a plant given a new fertilizer, the random variable is the height of the plant given a new fertilizer. For the example of how many fleas are on prairie dogs in a colony, the random variable is the number of fleas on a prairie dog in a colony. Now suppose you put all the values of the random variable together with the probability that that random variable would occur. You could then have a distribution like before, but now it is called a probability distribution since it involves probabilities. A probability distribution is an assignment of probabilities to the values of the random variable. The abbreviation of pdf is used for a probability distribution function. For probability distributions, $0 \leq P(x) \leq 1 \operatorname{and} \sum P(x)=1$ Example $1$: Probability Distribution The 2010 U.S. Census found the chance of a household being a certain size. The data is in Example $1$ ("Households by age," 2013). Size of household 1 2 3 4 5 6 7 or more Probability 26.7% 33.6% 15.8% 13.7% 6.3% 2.4% 1.5% Table $1$: Household Size from US Census of 2010 Solution In this case, the random variable is x = number of people in a household. This is a discrete random variable, since you are counting the number of people in a household. 
This is a probability distribution since you have the x value and the probabilities that go with it, all of the probabilities are between zero and one, and the sum of all of the probabilities is one. You can give a probability distribution in table form (as in Example $1$) or as a graph. The graph looks like a histogram. A probability distribution is basically a relative frequency distribution based on a very large sample. Example $2$ graphing a probability distribution The 2010 U.S. Census found the chance of a household being a certain size. The data is in the table ("Households by age," 2013). Draw a histogram of the probability distribution. Size of household 1 2 3 4 5 6 7 or more Probability 26.7% 33.6% 15.8% 13.7% 6.3% 2.4% 1.5% Table $2$: Household Size from US Census of 2010 Solution State random variable: x = number of people in a household You draw a histogram, where the x values are on the horizontal axis and are the x values of the classes (for the 7 or more category, just call it 7). The probabilities are on the vertical axis. Notice this graph is skewed right. Just as with any data set, you can calculate the mean and standard deviation. In problems involving a probability distribution function (pdf), you consider the probability distribution the population even though the pdf in most cases come from repeating an experiment many times. This is because you are using the data from repeated experiments to estimate the true probability. Since a pdf is basically a population, the mean and standard deviation that are calculated are actually the population parameters and not the sample statistics. The notation used is the same as the notation for population mean and population standard deviation that was used in chapter 3. Note The mean can be thought of as the expected value. It is the value you expect to get if the trials were repeated infinite number of times. The mean or expected value does not need to be a whole number, even if the possible values of x are whole numbers. For a discrete probability distribution function, The mean or expected value is $\mu=\sum x P(x)$ The variance is $\sigma^{2}=\sum(x-\mu)^{2} P(x)$ The standard deviation is $\sigma=\sqrt{\sum(x-\mu)^{2} P(x)}$ where x = the value of the random variable and P(x) = the probability corresponding to a particular x value. Example $3$: Calculating mean, variance, and standard deviation for a discrete probability distribution The 2010 U.S. Census found the chance of a household being a certain size. The data is in the table ("Households by age," 2013). Size of household 1 2 3 4 5 6 7 or more Probability 26.7% 33.6% 15.8% 13.7% 6.3% 2.4% 1.5% Table $3$: Household Size from US Census of 2010 1. Find the mean 2. Find the variance 3. Find the standard deviation 4. Use a TI-83/84 to calculate the mean and standard deviation 5. Using R to calculate the mean Solution State random variable: x= number of people in a household a. To find the mean it is easier to just use a table as shown below. Consider the category 7 or more to just be 7. The formula for the mean says to multiply the x value by the P(x) value, so add a row into the table for this calculation. Also convert all P(x) to decimal form. x 1 2 3 4 5 6 7 P(x) 0.267 0.336 0.158 0.137 0.063 0.024 0.015 xP(x) 0.267 0.672 0.474 0.548 0.315 0.144 0.098 Table $4$: Calculating the Mean for a Discrete PDF Now add up the new row and you get the answer 2.525. This is the mean or the expected value, $\mu$ = 2.525 people. This means that you expect a household in the U.S. 
to have 2.525 people in it. Now of course you can’t have half a person, but what this tells you is that you expect a household to have either 2 or 3 people, with a few more 3-person households than 2-person households. b. To find the variance, again it is easier to use a table than to try to use the formula in one line. Looking at the formula, you will notice that the first operation that you should do is to subtract the mean from each x value. Then you square each of these values. Then you multiply each of these answers by the probability of each x value. Finally you add up all of these values. x 1 2 3 4 5 6 7 P(x) 0.267 0.336 0.158 0.137 0.063 0.024 0.015 $x-\mu$ -1.525 -0.525 0.475 1.475 2.475 3.475 4.475 $(x-\mu)^{2}$ 2.3256 0.2756 0.2256 2.1756 6.1256 12.0756 20.0256 $(x-\mu)^{2} P(x)$ 0.6209 0.0926 0.0356 0.2981 0.3859 0.2898 0.3004 Table $5$: Calculating the Variance for a Discrete PDF Now add up the last row to find the variance, $\sigma^{2}=2.023375 \text { people }^{2}$. (Note: try not to round your numbers too much so you aren’t creating rounding error in your answer. The numbers in the table above were rounded off because of space limitations, but the answer was calculated using many decimal places.) c. To find the standard deviation, just take the square root of the variance, $\sigma=\sqrt{2.023375} \approx 1.422454$ people. This means that you can expect a U.S. household to have 2.525 people in it, with a standard deviation of 1.42 people. d. Go into the STAT menu, then the Edit menu. Type the x values into L1 and the P(x) values into L2. Then go into the STAT menu, then the CALC menu. Choose 1:1-Var Stats. This will put 1-Var Stats on the home screen. Now type in L1,L2 (there is a comma between L1 and L2) and then press ENTER. If you have the newer operating system on the TI-84, then your input will be slightly different. You will see the output in Figure $1$. The mean is 2.525 people and the standard deviation is 1.422 people. e. The command would be weighted.mean(x, p). So for this example, the process would look like: x<-c(1, 2, 3, 4, 5, 6, 7) p<-c(0.267, 0.336, 0.158, 0.137, 0.063, 0.024, 0.015) weighted.mean(x, p) Output: [1] 2.525 So the mean is 2.525. To find the standard deviation, you would need to program the process into R. So it is easier to just do it using the formula. Example $4$ Calculating the expected value In the Arizona lottery called Pick 3, a player pays $1 and then picks a three-digit number. If those three numbers are picked in that specific order, the person wins $500. What is the expected value in this game? Solution To find the expected value, you need to first create the probability distribution. In this case, the random variable x = winnings. If you pick the right numbers in the right order, then you win $500, but you paid $1 to play, so you actually win $499. If you didn’t pick the right numbers, you lose the $1, so the x value is -$1. You also need the probability of winning and losing. Since you are picking a three-digit number, and for each digit there are 10 numbers you can pick with each independent of the others, you can use the multiplication rule. To win, you have to pick the right numbers in the right order. For the first digit, you pick 1 number out of 10; for the second digit, you pick 1 number out of 10; and for the third digit, you pick 1 number out of 10. The probability of picking the right number in the right order is $\dfrac{1}{10} * \dfrac{1}{10} * \dfrac{1}{10}=\dfrac{1}{1000}=0.001$. 
The probability of losing (not winning) would be $1-\dfrac{1}{1000}=\dfrac{999}{1000}=0.999$. Putting this information into a table will help to calculate the expected value. Win or lose x P(x) xP(x) Win $499 0.001 $0.499 Lose -$1 0.999 -$0.999 Table $6$: Finding Expected Value Now add the two values together and you have the expected value. It is $\$ 0.499+(-\$ 0.999)=-\$ 0.50$. In the long run, you will expect to lose $0.50. Since the expected value is not 0, then this game is not fair. Since you lose money, Arizona makes money, which is why they have the lottery. The reason probability is studied in statistics is to help in making decisions in inferential statistics. To understand how that is done, the concept of a rare event is needed. Definition $1$: Rare Event Rule for Inferential Statistics If, under a given assumption, the probability of a particular observed event is extremely small, then you can conclude that the assumption is probably not correct. As an example, suppose you roll an assumed fair die 1000 times and get a six 600 times, when you should have only rolled a six around 167 times. You should then believe that your assumption about it being a fair die is untrue. Determining if an event is unusual If you are looking at a value of x for a discrete variable, and the P(the variable has a value of x or more) < 0.05, then you can consider x an unusually high value. Another way to think of this is if the probability of getting such a high value is less than 0.05, then the event of getting the value x is unusual. Similarly, if the P(the variable has a value of x or less) < 0.05, then you can consider this an unusually low value. Another way to think of this is if the probability of getting a value as small as x is less than 0.05, then the event x is considered unusual. Why is it "x or more" or "x or less" instead of just "x" when you are determining if an event is unusual? Consider this example: you and your friend go out to lunch every day. Instead of Going Dutch (each paying for their own lunch), you decide to flip a coin, and the loser pays for both. Your friend seems to be winning more often than you'd expect, so you want to determine if this is unusual before you decide to change how you pay for lunch (or accuse your friend of cheating). The process for how to calculate these probabilities will be presented in the next section on the binomial distribution. If your friend won 6 out of 10 lunches, the probability of that happening turns out to be about 20.5%, not unusual. The probability of winning 6 or more is about 37.7%. But what happens if your friend won 501 out of 1,000 lunches? That doesn't seem so unlikely! The probability of winning 501 or more lunches is about 48.7%, and that is consistent with your hunch that this isn't so unusual. But the probability of winning exactly 501 lunches is much less, only about 2.5%. That is why the probability of getting exactly that value is not the right question to ask: you should ask the probability of getting that value or more (or that value or less on the other side). The value 0.05 will be explained later, and it is not the only value you can use. Example $5$ is the event unusual The 2010 U.S. Census found the chance of a household being a certain size. The data is in the table ("Households by age," 2013). Size of household 1 2 3 4 5 6 7 or more Probability 26.7% 33.6% 15.8% 13.7% 6.3% 2.4% 1.5% Table $7$: Household Size from US Census of 2010 1. Is it unusual for a household to have six people in the family? 2. 
If you did come upon many families that had six people in the family, what would you think? 3. Is it unusual for a household to have four people in the family? 4. If you did come upon a family that has four people in it, what would you think? Solution State random variable: x= number of people in a household a. To determine this, you need to look at probabilities. However, you cannot just look at the probability of six people. You need to look at the probability of x being six or more people or the probability of x being six or less people. The \begin{aligned} P(x \leq 6) &=P(x=1)+P(x=2)+P(x=3)+P(x=4)+P(x=5)+P(x=6) \ &=26.7 \%+33.6 \%+15.8 \%+13.7 \%+6.3 \%+2.4 \% \ &=98.5 \% \end{aligned} Since this probability is more than 5%, then six is not an unusually low value. The \begin{aligned} P(x \geq 6) &=P(x=6)+P(x \geq 7) \ &=2.4 \%+1.5 \% \ &=3.9 \% \end{aligned} Since this probability is less than 5%, then six is an unusually high value. It is unusual for a household to have six people in the family. b. Since it is unusual for a family to have six people in it, then you may think that either the size of families is increasing from what it was or that you are in a location where families are larger than in other locations. c. To determine this, you need to look at probabilities. Again, look at the probability of x being four or more or the probability of x being four or less. The \begin{aligned} P(x \geq 4) &=P(x=4)+P(x=5)+P(x=6)+P(x=7) \ &=13.7 \%+6.3 \%+2.4 \%+1.5 \% \ &=23.9 \% \end{aligned} Since this probability is more than 5%, four is not an unusually high value. The \begin{aligned} P(x \leq 4) &=P(x=1)+P(x=2)+P(x=3)+P(x=4) \ &=26.7 \%+33.6 \%+15.8 \%+13.7 \% \ &=89.8 \% \end{aligned} Since this probability is more than 5%, four is not an unusually low value. Thus, four is not an unusual size of a family. d. Since it is not unusual for a family to have four members, then you would not think anything is amiss. Homework Exercise $1$ 1. Eyeglassomatic manufactures eyeglasses for different retailers. The number of days it takes to fix defects in an eyeglass and the probability that it will take that number of days are in the table. Number of days Probabilities 1 24.9% 2 10.8% 3 9.1% 4 12.3% 5 13.3% 6 11.4% 7 7.0% 8 4.6% 9 1.9% 10 1.3% 11 1.0% 12 0.8% 13 0.6% 14 0.4% 15 0.2% 16 0.2% 17 0.1% 18 0.1% Table $8$: Number of Days to Fix Defects a. State the random variable. b. Draw a histogram of the number of days to fix defects c. Find the mean number of days to fix defects. d. Find the variance for the number of days to fix defects. e. Find the standard deviation for the number of days to fix defects. f. Find probability that a lens will take at least 16 days to make a fix the defect. g. Is it unusual for a lens to take 16 days to fix a defect? h. If it does take 16 days for eyeglasses to be repaired, what would you think? 2. Suppose you have an experiment where you flip a coin three times. You then count the number of heads. 1. State the random variable. 2. Write the probability distribution for the number of heads. 3. Draw a histogram for the number of heads. 4. Find the mean number of heads. 5. Find the variance for the number of heads. 6. Find the standard deviation for the number of heads. 7. Find the probability of having two or more number of heads. 8. Is it unusual for to flip two heads? 3. The Ohio lottery has a game called Pick 4 where a player pays $1 and picks a four-digit number. If the four numbers come up in the order you picked, then you win$2,500. 
What is your expected value? 4. An LG Dishwasher, which costs $800, has a 20% chance of needing to be replaced in the first 2 years of purchase. A two-year extended warrantee costs$112.10 on a dishwasher. What is the expected value of the extended warranty assuming it is replaced in the first 2 years? Answer 1. a. See solutions, b. See solutions, c. 4.175 days, d. 8.414375 $\text { days }^{2}$, e. 2.901 days, f. 0.004, g. See solutions, h. See solutions 3. -\$0.75
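The text above points out that R's weighted.mean() gives the mean of a discrete probability distribution, but that the standard deviation has to be programmed by hand. One way to do that, shown here as a sketch using the household-size distribution from this section (the names x, p, mu, and sigma are just labels chosen for the example):

x <- c(1, 2, 3, 4, 5, 6, 7)   # values of the random variable (7 stands for "7 or more")
p <- c(0.267, 0.336, 0.158, 0.137, 0.063, 0.024, 0.015)   # P(x)

mu <- sum(x * p)                # mean (expected value): 2.525 people
sigma2 <- sum((x - mu)^2 * p)   # variance: 2.023375 people^2
sigma <- sqrt(sigma2)           # standard deviation: about 1.4225 people
c(mean = mu, variance = sigma2, sd = sigma)

The same three lines of arithmetic work for any discrete pdf, as long as the probabilities add up to one.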
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/05%3A_Discrete_Probability_Distributions/5.01%3A_Basics_of_Probability_Distributions.txt
Section 5.1 introduced the concept of a probability distribution. The focus of the section was on discrete probability distributions (pdf). To find the pdf for a situation, you usually needed to actually conduct the experiment and collect data. Then you can calculate the experimental probabilities. Normally you cannot calculate the theoretical probabilities instead. However, there are certain types of experiment that allow you to calculate the theoretical probability. One of those types is called a Binomial Experiment. Properties of a binomial experiment (or Bernoulli trial) 1. Fixed number of trials, n, which means that the experiment is repeated a specific number of times. 2. The n trials are independent, which means that what happens on one trial does not influence the outcomes of other trials. 3. There are only two outcomes, which are called a success and a failure. 4. The probability of a success doesn’t change from trial to trial, where p = probability of success and q = probability of failure, q = 1-p. If you know you have a binomial experiment, then you can calculate binomial probabilities. This is important because binomial probabilities come up often in real life. Examples of binomial experiments are: • Toss a fair coin ten times, and find the probability of getting two heads. • Question twenty people in class, and look for the probability of more than half being women? • Shoot five arrows at a target, and find the probability of hitting it five times? To develop the process for calculating the probabilities in a binomial experiment, consider Example $1$. Example $1$: Deriving the Binomial Probability Formula Suppose you are given a 3 question multiple-choice test. Each question has 4 responses and only one is correct. Suppose you want to find the probability that you can just guess at the answers and get 2 questions right. (Teachers do this all the time when they make up a multiple-choice test to see if students can still pass without studying. In most cases the students can’t.) To help with the idea that you are going to guess, suppose the test is in Martian. 1. What is the random variable? 2. Is this a binomial experiment? 3. What is the probability of getting 2 questions right? 4. What is the probability of getting zero right, one right, and all three right? Solution a. x = number of correct answers b. 1. There are 3 questions, and each question is a trial, so there are a fixed number of trials. In this case, n = 3. 2. Getting the first question right has no affect on getting the second or third question right, thus the trials are independent. 3. Either you get the question right or you get it wrong, so there are only two outcomes. In this case, the success is getting the question right. 4. The probability of getting a question right is one out of four. This is the same for every trial since each question has 4 responses. In this case, $p=\dfrac{1}{4} \text { and } q=1-\dfrac{1}{4}=\dfrac{3}{4}$ This is a binomial experiment, since all of the properties are met. c. To answer this question, start with the sample space. SS = {RRR, RRW, RWR, WRR, WWR, WRW, RWW, WWW}, where RRW means you get the first question right, the second question right, and the third question wrong. The same is similar for the other outcomes. Now the event space for getting 2 right is {RRW, RWR, WRR}. What you did in chapter four was just to find three divided by eight. However, this would not be right in this case. 
That is because the probability of getting a question right is different from getting a question wrong. What else can you do? Look at just P(RRW) for the moment. Again, that means P(RRW) = P(R on 1st, R on 2nd, and W on 3rd) Since the trials are independent, then P(RRW) = P(R on 1st, R on 2nd, and W on 3rd) = P(R on 1st) * P(R on 2nd) * P(W on 3rd) Just multiply p * p * q $P(\mathrm{RRW})=\dfrac{1}{4} * \dfrac{1}{4} * \dfrac{3}{4}=\left(\dfrac{1}{4}\right)^{2}\left(\dfrac{3}{4}\right)^{1}$ The same is true for P(RWR) and P(WRR). To find the probability of 2 correct answers, just add these three probabilities together. You get \begin{aligned} P(2 \text { correct answers }) &=P(\mathrm{RRW})+P(\mathrm{RWR})+P(\mathrm{WRR}) \ &=\left(\dfrac{1}{4}\right)^{2}\left(\dfrac{3}{4}\right)^{1}+\left(\dfrac{1}{4}\right)^{2}\left(\dfrac{3}{4}\right)^{1}+\left(\dfrac{1}{4}\right)^{2}\left(\dfrac{3}{4}\right)^{1} \ &=3\left(\dfrac{1}{4}\right)^{2}\left(\dfrac{3}{4}\right)^{1} \end{aligned} d. You could go through the same argument that you did above and come up with the following: r right P(r right) 0 right $1 *\left(\dfrac{1}{4}\right)^{0}\left(\dfrac{3}{4}\right)^{3}$ 1 right $3 *\left(\dfrac{1}{4}\right)^{1}\left(\dfrac{3}{4}\right)^{2}$ 2 right $3 *\left(\dfrac{1}{4}\right)^{2}\left(\dfrac{3}{4}\right)^{1}$ 3 right $1 *\left(\dfrac{1}{4}\right)^{3}\left(\dfrac{3}{4}\right)^{0}$ Table $1$: Binomial pattern Hopefully you see the pattern that results. You can now write the general formula for the probabilities for a Binomial experiment. First, the random variable in a binomial experiment is x = number of successes. Be careful, a success is not always a good thing. Sometimes a success is something that is bad, like finding a defect. A success just means you observed the outcome you wanted to see happen. Definition $1$ Binomial Formula for the probability of r successes in n trials is $P(x=r)=_{n} C_{r} p^{r} q^{n-r} \text { where }_{n} C_{r}=\dfrac{n !}{r !(n-r) !}$ The $_{n} C_{r}$ is the number of combinations of n things taking r at a time. It is read “n choose r”. Some other common notations for n choose r are $C_{n, r}$, and $\left( \begin{array}{l}{n} \ {r}\end{array}\right)$. n! means you are multiplying $n *(n-1) *(n-2) * \cdots * 2 * 1$. As an example, $5 !=5 * 4 * 3 * 2 * 1=120$. When solving problems, make sure you define your random variable and state what n, p, q, and r are. Without doing this, the problems are a great deal harder. Example $2$: Calculating Binomial Probabilities When looking at a person’s eye color, it turns out that 1% of people in the world has green eyes ("What percentage of," 2013). Consider a group of 20 people. 1. State the random variable. 2. Argue that this is a binomial experiment. 3. Find the probability that none have green eyes. 4. Find the probability that nine have green eyes. 5. Find the probability that at most three have green eyes. 6. Find the probability that at most two have green eyes. 7. Find the probability that at least four have green eyes. 8. In Europe, four people out of twenty have green eyes. Is this unusual? What does that tell you? Solution a. x = number of people with green eyes b. 1. There are 20 people, and each person is a trial, so there are a fixed number of trials. In this case, $n = 20$. 2. If you assume that each person in the group is chosen at random, the eye color of one person doesn’t affect the eye color of the next person, thus the trials are independent. 3. 
Either a person has green eyes or they do not have green eyes, so there are only two outcomes. In this case, the success is a person has green eyes. 4. The probability of a person having green eyes is 0.01. This is the same for every trial since each person has the same chance of having green eyes. p = 0.01 and q = 1 - 0.01 = 0.99 c. $P(x=0)=_{20} C_{0}(0.01)^{0}(0.99)^{20-0} \approx 0.818$ d. $P(x=9)=_{20} C_{9}(0.01)^{9}(0.99)^{20-9} \approx 1.50 \times 10^{-13} \approx 0.000$ e. At most three means that three is the highest value you will have. Find the probability of x is less than or equal to three. \begin{aligned} P(x \leq 3) &=P(x=0)+P(x=1)+P(x=2)+P(x=3) \ &=_{20} C_{0}(0.01)^{0}(0.99)^{20}+_{20} C_{1}(0.01)^{1}(0.99)^{19} \& +_{20}C_{2}(0.01)^{2}(0.99)^{18}+_{20}C_{3}(0.01)^{3}(0.99)^{17} \ & \approx 0.818+0.165+0.016+0.001>0.999 \end{aligned} The reason the answer is written as being greater than 0.999 is because the answer is actually 0.9999573791, and when that is rounded to three decimal places you get 1. But 1 means that the event will happen, when in reality there is a slight chance that it won’t happen. It is best to write the answer as greater than 0.999 to represent that the number is very close to 1, but isn’t 1. f. \begin{aligned} P(x \leq 2) &=P(x=0)+P(x=1)+P(x=2) \ &=_{20} C_{0}(0.01)^{0}(0.99)^{20}+_{20} C_{1}(0.01)^{1}(0.99)^{19}+_{20} C_{2}(0.01)^{2}(0.99)^{18} \ & \approx 0.818+0.165+0.016 \approx 0.999 \end{aligned} g. At least four means four or more. Find the probability of x being greater than or equal to four. That would mean adding up all the probabilities from four to twenty. This would take a long time, so it is better to use the idea of complement. The complement of being greater than or equal to four is being less than four. That would mean being less than or equal to three. Part (e) has the answer for the probability of being less than or equal to three. Just subtract that number from 1. $P(x \geq 4)=1-P(x \leq 3)=1-0.999=0.001$ Actually the answer is less than 0.001, but it is fine to write it this way. h. Since the probability of finding four or more people with green eyes is much less than 0.05, it is unusual to find four people out of twenty with green eyes. That should make you wonder if the proportion of people in Europe with green eyes is more than the 1% for the general population. If this is true, then you may want to ask why Europeans have a higher proportion of green-eyed people. That of course could lead to more questions. The binomial formula is cumbersome to use, so you can find the probabilities by using technology. On the TI-83/84 calculator, the commands on the TI-83/84 calculators when the number of trials is equal to n and the probability of a success is equal to p are $\text{binompdf}(n, p, r)$ when you want to find P(x=r) and $\text{binomcdf}(n, p, r)$ when you want to find $P(x \leq r)$. If you want to find $P(x \geq r)$, then you use the property that $P(x \geq r)=1-P(x \leq r-1)$, since $x \geq r$ and $x<r$ or $x \leq r-1$ are complementary events. Both binompdf and binomcdf commands are found in the DISTR menu. Using R, the commands are $P(x=r)=\text { dbinom }(r, n, p) \text { and } P(x \leq r)=\text { pbinom }(r, n, p)$. Example $3$ using the binomial command on the ti-83/84 When looking at a person’s eye color, it turns out that 1% of people in the world has green eyes ("What percentage of," 2013). Consider a group of 20 people. 1. State the random variable. 2. Find the probability that none have green eyes. 3. 
Find the probability that nine have green eyes. 4. Find the probability that at most three have green eyes. 5. Find the probability that at most two have green eyes. 6. Find the probability that at least four have green eyes. Solution a. x = number of people with green eyes b. You are looking for P (x=0). Since this problem is x=0, you use the binompdf command on the TI-83/84 or dbinom command on R. On the TI83/84, you go to the DISTR menu, select the binompdf, and then type into the parenthesis your n, p, and r values into your calculator, making sure you use the comma to separate the values. The command will look like $\text{binompdf}(20,.01,0)$ and when you press ENTER you will be given the answer. (If you have the new software on the TI-84, the screen looks a bit different.) On R, the command would look like dbinom(0, 20, 0.01) P (x=0) = 0.8179. Thus there is an 81.8% chance that in a group of 20 people none of them will have green eyes. c. In this case you want to find the P (x=9). Again, you will use the binompdf command or the dbinom command. Following the procedure above, you will have binompdf(20, .01, 9) on the TI-83/84 or dbinom(9,20,0.01) on R. Your answer is $P(x=9)=1.50 \times 10^{-13}$. (Remember when the calculator gives you $1.50 E-13$ and R give you $1.50 e-13$, this is how they display scientific notation.) The probability that out of twenty people, nine of them have green eyes is a very small chance. d. At most three means that three is the highest value you will have. Find the probability of x being less than or equal to three, which is $P(x \leq 3)$. This uses the binomcdf command on the TI-83/84 and pbinom command in R. You use the command on the TI-83/84 of binomcdf(20, .01, 3) and the command on R of pbinom(3,20,0.01) Your answer is 0.99996. Thus there is a really good chance that in a group of 20 people at most three will have green eyes. (Note: don’t round this to one, since one means that the event will happen, when in reality there is a slight chance that it won’t happen. It is best to write the answer out to enough decimal points so it doesn’t round off to one. e. You are looking for $P(x \leq 2)$. Again use binomcdf or pbinom. Following the procedure above you will have $\text{binomcdf}(20,.01,2)$ on the TI-83/84 and pbinom(2,20,0.01), with $P(x \leq 2)=0.998996$. Again there is a really good chance that at most two people in the room will have green eyes. f. At least four means four or more. Find the probability of x being greater than or equal to four. That would mean adding up all the probabilities from four to twenty. This would take a long time, so it is better to use the idea of complement. The complement of being greater than or equal to four is being less than four. That would mean being less than or equal to three. Part (e) has the answer for the probability of being less than or equal to three. Just subtract that number from 1. $P(x \geq 4)=1-P(x \leq 3)=1-0.99996=0.00004$ You can also find this answer by doing the following on TI-83/84: $P(x \geq 4)=1-P(x \leq 3)=1-\text { binomcdf }(20,.01,3)=1-0.99996=0.00004$ on R: $P(x \geq 4)=1-P(x \leq 3)=1-\text { pbinom }(3,20,.01)=1-0.99996=0.0004$ Again, this is very unlikely to happen. There are other technologies that will compute binomial probabilities. Example $4$ calculating binomial probabilities According to the Center for Disease Control (CDC), about 1 in 88 children in the U.S. have been diagnosed with autism ("CDC-data and statistics,," 2013). Suppose you consider a group of 10 children. 1. 
State the random variable. 2. Argue that this is a binomial experiment. 3. Find the probability that none have autism. 4. Find the probability that seven have autism. 5. Find the probability that at least five have autism. 6. Find the probability that at most two have autism. 7. Suppose five children out of ten have autism. Is this unusual? What does that tell you? Solution a. x = number of children with autism b. 1. There are 10 children, and each child is a trial, so there are a fixed number of trials. In this case, n = 10. 2. If you assume that each child in the group is chosen at random, then whether a child has autism does not affect the chance that the next child has autism. Thus the trials are independent. 3. Either a child has autism or they do not have autism, so there are two outcomes. In this case, the success is a child has autism. 4. The probability of a child having autism is 1/88. This is the same for every trial since each child has the same chance of having autism. $p=\dfrac{1}{88}$ and $q=1-\dfrac{1}{88}=\dfrac{87}{88}$. c. Using the formula: $P(x=0)=_{10} C_{0}\left(\dfrac{1}{88}\right)^{0}\left(\dfrac{87}{88}\right)^{10-0} \approx 0.892$ Using the TI-83/84 Calculator: $P(x=0)=\text { binompdf }(10,1 \div 88,0) \approx 0.892$ Using R: $P(x=0)=\operatorname{dbinom}(0,10,1 / 88) \approx 0.892$ d. Using the formula: $P(x=7)=_{10} C_{7}\left(\dfrac{1}{88}\right)^{7}\left(\dfrac{87}{88}\right)^{10-7} \approx 0.000$ Using the TI-83/84 Calculator: $P(x=7)=\text { binompdf }(10,1 \div 88,7) \approx 2.84 \times 10^{-12}$ Using R: $P(x=7)=\operatorname{dbinom}(7,10,1 / 88) \approx 2.84 \times 10^{-12}$ e. Using the formula: \begin{aligned} P(x \geq 5) &=P(x=5)+P(x=6)+P(x=7) \ &+P(x=8)+P(x=9)+P(x=10) \ &=_{10} C_{5}\left(\dfrac{1}{88}\right)^{5}\left(\dfrac{87}{88}\right)^{10-5}+_{10} C_{6}\left(\dfrac{1}{88}\right)^{6}\left(\dfrac{87}{88}\right)^{10-6} \ & +_{10}C_{7}\left(\dfrac{1}{88}\right)^{7}\left(\dfrac{87}{88}\right)^{10-7}+_{10}C_{8}\left(\dfrac{1}{88}\right)^{8}\left(\dfrac{87}{88}\right)^{10-8} \ &+_{10}C_{9}\left(\dfrac{1}{88}\right)^{9}\left(\dfrac{87}{88}\right)^{10-9}+_{10}C_{10}\left(\dfrac{1}{88}\right)^{10}\left(\dfrac{87}{88}\right)^{10-10}\&=0.000+0.000+0.000+0.000+0.000+0.000 \ &=0.000 \end{aligned} Using the TI-83/84 Calculator: To use the calculator you need to use the complement. \begin{aligned} P(x \geq 5) &=1-P(x<5) \ &=1-P(x \leq 4) \ &=1-\text { binomcdf }(10,1 \div 88,4) \ & \approx 1-0.9999999=0.000 \end{aligned} Using R: To use R you need to use the complement. \begin{aligned} P(x \geq 5) &=1-P(x<5) \ &=1-P(x \leq 4) \ &=1-\text { pbinom }(4,10,1 / 88) \ & \approx 1-0.9999999=0.000 \end{aligned} Notice, the answer is given as 0.000 because it rounds to 0.000 at three decimal places. Don’t write 0, since 0 means that the event is impossible to happen. The event of five or more is improbable, but not impossible. f. Using the formula: \begin{aligned} P(x \leq 2) &=P(x=0)+P(x=1)+P(x=2) \ &=_{10} C_{0}\left(\dfrac{1}{88}\right)^{0}\left(\dfrac{87}{88}\right)^{10-0}+_{10} C_{1}\left(\dfrac{1}{88}\right)^{1}\left(\dfrac{87}{88}\right)^{10-1} \ &+_{10} C_{2}\left(\dfrac{1}{88}\right)^{2}\left(\dfrac{87}{88}\right)^{10-2} \ &=0.892+0.103+0.005>0.999 \end{aligned} Using the TI-83/84 Calculator: $P(x \leq 2)=\text { binomcdf }(10,1 \div 88,2) \approx 0.9998$ Using R: $P(x \leq 2)=\text { pbinom }(2,10,1 / 88) \approx 0.9998$ g. Since the probability of five or more children in a group of ten having autism is much less than 5%, it is unusual for this to happen. 
If this does happen, then one may think that the proportion of children diagnosed with autism is actually more than 1/88. Homework Exercise $1$ 1. Suppose a random variable, x, arises from a binomial experiment. If n = 14, and p = 0.13, find the following probabilities using the binomial formula. 1. P (x=5) 2. P (x=8) 3. P (x=12) 4. $P(x \leq 4)$ 5. $P(x \geq 8)$ 6. $P(x \leq 12)$ 2. Suppose a random variable, x, arises from a binomial experiment. If n = 22, and p = 0.85, find the following probabilities using the binomial formula. 1. P (x=18) 2. P (x=5) 3. P (x=20) 4. $P(x \leq 3)$ 5. $P(x \geq 18)$ 6. $P(x \leq 20)$ 3. Suppose a random variable, x, arises from a binomial experiment. If n = 10, and p = 0.70, find the following probabilities using the binomial formula. 1. P (x=2) 2. P (x=8) 3. P (x=7) 4. $P(x \leq 3)$ 5. $P(x \geq 7)$ 6. $P(x \leq 4)$ 4. Suppose a random variable, x, arises from a binomial experiment. If n = 6, and p = 0.30, find the following probabilities using the binomial formula. 1. P (x=1) 2. P (x=5) 3. P (x=3) 4. $P(x \leq 3)$ 5. $P(x \geq 5)$ 6. $P(x \leq 4)$ 5. Suppose a random variable, x, arises from a binomial experiment. If n = 17, and p = 0.63, find the following probabilities using the binomial formula. 1. P (x=8) 2. P (x=15) 3. P (x=14) 4. $P(x \leq 12)$ 5. $P(x \geq 10)$ 6. $P(x \leq 7)$ 6. Suppose a random variable, x, arises from a binomial experiment. If n = 23, and p = 0.22, find the following probabilities using the binomial formula. 1. P (x=21) 2. P (x=6) 3. P (x=12) 4. $P(x \leq 14)$ 5. $P(x \geq 17)$ 6. $P(x \leq 9)$ 7. Approximately 10% of all people are left-handed ("11 little-known facts," 2013). Consider a grouping of fifteen people. 1. State the random variable. 2. Argue that this is a binomial experiment Find the probability that 3. None are left-handed. 4. Seven are left-handed. 5. At least two are left-handed. 6. At most three are left-handed. 7. At least seven are left-handed. 8. Seven of the last 15 U.S. Presidents were left-handed. Is this unusual? What does that tell you? 8. According to an article in the American Heart Association’s publication Circulation, 24% of patients who had been hospitalized for an acute myocardial infarction did not fill their cardiac medication by the seventh day of being discharged (Ho, Bryson & Rumsfeld, 2009). Suppose there are twelve people who have been hospitalized for an acute myocardial infarction. 1. State the random variable. 2. Argue that this is a binomial experiment Find the probability that 3. All filled their cardiac medication. 4. Seven did not fill their cardiac medication. 5. None filled their cardiac medication. 6. At most two did not fill their cardiac medication. 7. At least three did not fill their cardiac medication. 8. At least ten did not fill their cardiac medication. 9. Suppose of the next twelve patients discharged, ten did not fill their cardiac medication, would this be unusual? What does this tell you? 9. Eyeglassomatic manufactures eyeglasses for different retailers. In March 2010, they tested to see how many defective lenses they made, and there were 16.9% defective lenses due to scratches. Suppose Eyeglassomatic examined twenty eyeglasses. 1. State the random variable. 2. Argue that this is a binomial experiment Find the probability that 3. None are scratched. 4. All are scratched. 5. At least three are scratched. 6. At most five are scratched. 7. At least ten are scratched. 8. Is it unusual for ten lenses to be scratched? 
If it turns out that ten lenses out of twenty are scratched, what might that tell you about the manufacturing process? 10. The proportion of brown M&M’s in a milk chocolate packet is approximately 14% (Madison, 2013). Suppose a package of M&M’s typically contains 52 M&M’s. 1. State the random variable. 2. Argue that this is a binomial experiment Find the probability that 3. Six M&M’s are brown. 4. Twenty-five M&M’s are brown. 5. All of the M&M’s are brown. 6. Would it be unusual for a package to have only brown M&M’s? If this were to happen, what would you think is the reason? Answer 1. a. P(x=5) = 0.0212, b. P(x=8) = $1.062 \times 10^{-4}$, c. P(x=12) = $1.605 \times 10^{-9}$, d. $P(x \leq 4)=0.973$, e. $P(x \geq 8)=1.18 \times 10^{-4}$, f. $P(x \leq 12)=0.99999$ 3. a. $P(x=2)=0.0014$, b. $P(x=8)=0.2335$, c. $P(x=7)=0.2668$, d. $P(x \leq 3)=0.0106$, e. $P(x \geq 7)=0.6496$, f. $P(x \leq 4)=0.0473$ 5. a. $P(x=8)=0.0784$, b. $P(x=15)=0.0182$, c. $P(x=14)=0.0534$, d. $P(x \leq 12)=0.8142$, e. $P(x \geq 10)=0.7324$, f. $P(x \leq 7)=0.0557$ 7. a. See solutions, b. See solutions, c. P(x=0) = 0.2059, d. $P(x=7)=2.770 \times 10^{-4}$, e. $P(x \geq 2)=0.4510$, f. $P(x \leq 3)=0.944$, g.$P(x \geq 7)=3.106 \times 10^{-4}$, h. See solutions 9. a. See solutions, b. See solutions, c. $P(x=0)=0.0247$, d. $P(x=20)=3.612 \times 10^{-16}$, e. $P(x \geq 3)=0.6812$, f. $P(x \leq 5)=0.8926$, g. $P(x \geq 10)=6.711 \times 10^{-4}$, h. See solutions
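All of the green-eye calculations in this section can be reproduced in R with the dbinom() and pbinom() commands introduced above. A short sketch mirroring Example $3$, with n = 20 and p = 0.01:

n <- 20
p <- 0.01
dbinom(0, n, p)       # P(x = 0), about 0.818
dbinom(9, n, p)       # P(x = 9), about 1.5e-13
pbinom(3, n, p)       # P(x <= 3), about 0.99996
1 - pbinom(3, n, p)   # P(x >= 4) by the complement, about 0.00004

# the entire probability distribution at once, rounded to three decimal places
round(dbinom(0:n, n, p), 3)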
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/05%3A_Discrete_Probability_Distributions/5.02%3A_Binomial_Probability_Distribution.txt
If you list all possible values of $x$ in a Binomial distribution, you get the Binomial Probability Distribution (pdf). You can draw a histogram of the pdf and find the mean, variance, and standard deviation of it. For a general discrete probability distribution, you can find the mean, the variance, and the standard deviation for a pdf using the general formulas $\mu=\sum x P(x), \sigma^{2}=\sum(x-\mu)^{2} P(x), \text { and } \sigma=\sqrt{\sum(x-\mu)^{2} P(x)}$ These formulas are useful, but if you know the type of distribution, like Binomial, then you can find the mean and standard deviation using easier formulas. They are derived from the general formulas. Note For a Binomial distribution, $\mu$, the expected number of successes, $\sigma^{2}$, the variance, and $\sigma$, the standard deviation for the number of success are given by the formulas: $\mu=n p \quad \sigma^{2}=n p q \quad \sigma=\sqrt{n p q}$ Where p is the probability of success and q = 1 - p. Example $1$ Finding the Probability Distribution, Mean, Variance, and Standard Deviation of a Binomial Distribution When looking at a person’s eye color, it turns out that 1% of people in the world has green eyes ("What percentage of," 2013). Consider a group of 20 people. 1. State the random variable. 2. Write the probability distribution. 3. Draw a histogram. 4. Find the mean. 5. Find the variance. 6. Find the standard deviation. Solution a. x = number of people who have green eyes b. In this case you need to write each value of x and its corresponding probability. It is easiest to do this by using the binompdf command, but don’t put in the r value. You may want to set your calculator to only three decimal places, so it is easier to see the values and you don’t need much more precision than that. The command would look like $\text{binompdf}(20, .01)$. This produces the information in Example $1$. x P (x=r) 0 0.818 1 0.165 2 0.016 3 0.001 4 0.000 5 0.000 6 0.000 7 0.000 8 0.000 9 0.000 10 0.000 $\vdots$ $\vdots$ 20 0.000 Table $1$: Probability Distribution for Number of People with Green Eyes Notice that after x = 4, the probability values are all 0.000. This just means they are really small numbers. c. You can draw the histogram on the TI-83/84 or other technology. The graph would look like in Figure $1$. This graph is very skewed to the right. d. Since this is a binomial, then you can use the formula $\mu=n p$. So $\mu=20(0.01)=0.2$ people. You expect on average that out of 20 people, less than 1 would have green eyes. e. Since this is a binomial, then you can use the formula $\sigma^{2}=n p q$. $q=1-0.01=0.99$ $\sigma^{2}=20(0.01)(0.99)=0.198 \text { people }^{2}$ f. Once you have the variance, you just take the square root of the variance to find the standard deviation. $\sigma=\sqrt{0.198} \approx 0.445$ Homework Exercise $1$ 1. Suppose a random variable, x, arises from a binomial experiment. Suppose n = 6, and p = 0.13. 1. Write the probability distribution. 2. Draw a histogram. 3. Describe the shape of the histogram. 4. Find the mean. 5. Find the variance. 6. Find the standard deviation. 2. Suppose a random variable, x, arises from a binomial experiment. Suppose n = 10, and p = 0.81. 1. Write the probability distribution. 2. Draw a histogram. 3. Describe the shape of the histogram. 4. Find the mean. 5. Find the variance. 6. Find the standard deviation. 3. Suppose a random variable, x, arises from a binomial experiment. Suppose n = 7, and p = 0.50. 1. Write the probability distribution. 2. Draw a histogram. 3. 
Describe the shape of the histogram. 4. Find the mean. 5. Find the variance. 6. Find the standard deviation. 4. Approximately 10% of all people are left-handed. Consider a grouping of fifteen people. 1. State the random variable. 2. Write the probability distribution. 3. Draw a histogram. 4. Describe the shape of the histogram. 5. Find the mean. 6. Find the variance. 7. Find the standard deviation. 5. According to an article in the American Heart Association’s publication Circulation, 24% of patients who had been hospitalized for an acute myocardial infarction did not fill their cardiac medication by the seventh day of being discharged (Ho, Bryson & Rumsfeld, 2009). Suppose there are twelve people who have been hospitalized for an acute myocardial infarction. 1. State the random variable. 2. Write the probability distribution. 3. Draw a histogram. 4. Describe the shape of the histogram. 5. Find the mean. 6. Find the variance. 7. Find the standard deviation. 6. Eyeglassomatic manufactures eyeglasses for different retailers. In March 2010, they tested to see how many defective lenses they made, and there were 16.9% defective lenses due to scratches. Suppose Eyeglassomatic examined twenty eyeglasses. 1. State the random variable. 2. Write the probability distribution. 3. Draw a histogram. 4. Describe the shape of the histogram. 5. Find the mean. 6. Find the variance. 7. Find the standard deviation. 7. The proportion of brown M&M’s in a milk chocolate packet is approximately 14% (Madison, 2013). Suppose a package of M&M’s typically contains 52 M&M’s. 1. State the random variable. 2. Find the mean. 3. Find the variance. 4. Find the standard deviation. Answer 1. a. See solutions, b. See solutions, c. Skewed right, d. 0.78, e. 0.6786, f. 0.8238 3. a. See solutions, b. See solutions, c. Symmetric, d. 3.5, e. 1.75, f. 1.3229 5. a. See solutions, b. See solutions, c. See solutions, d. Skewed right, e. 2.88, f. 2.1888, g. 1.479 7. a. See solutions, b. 7.28, c. 6.2608, d. 2.502 Data Sources: 11 little-known facts about left-handers. (2013, October 21). Retrieved from www.huffingtonpost.com/2012/1...n_2005864.html CDC-data and statistics, autism spectrum disorders - ncbdd. (2013, October 21). Retrieved from http://www.cdc.gov/ncbddd/autism/data.html Ho, P. M., Bryson, C. L., & Rumsfeld, J. S. (2009). Medication adherence. Circulation, 119 (23), 3028-3035. Retrieved from http://circ.ahajournals.org/content/119/23/3028 Households by age of householder and size of household: 1990 to 2010. (2013, October 19). Retrieved from www.census.gov/compendia/stat...es/12s0062.pdf Madison, J. (2013, October 15). M&M's color distribution analysis. Retrieved from http://joshmadison.com/2007/12/02/mm...tion-analysis/ What percentage of people have green eyes?. (2013, October 21). Retrieved from www.ask.com/question/what-per...ave-green-eyes
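The shortcut formulas and the histogram from Example $1$ can also be produced in R. This is only a sketch; barplot() is one of several ways to draw the graph, and the names n, p, q, and probs are just labels.

n <- 20
p <- 0.01
q <- 1 - p
n * p             # mean: 0.2 people
n * p * q         # variance: 0.198 people^2
sqrt(n * p * q)   # standard deviation: about 0.445 people

# probability distribution and its histogram
probs <- dbinom(0:n, n, p)
barplot(probs, names.arg = 0:n, xlab = "x", ylab = "P(x)")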
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/05%3A_Discrete_Probability_Distributions/5.03%3A_Mean_and_Standard_Deviation_of_Binomial_Distribution.txt
Chapter 5 dealt with probability distributions arising from discrete random variables. Mostly that chapter focused on the binomial experiment. There are many other experiments from discrete random variables that exist but are not covered in this book. This chapter deals with probability distributions that arise from continuous random variables. The focus of this chapter is a distribution known as the normal distribution, though realize that there are many other distributions that exist. A few others are examined in future chapters. • 6.1: Uniform Distribution If you have a situation where the probability is always the same, then this is known as a uniform distribution. • 6.2: Graphs of the Normal Distribution Many real life problems produce a histogram that is a symmetric, unimodal, and bellshaped continuous probability distribution. • 6.3: Finding Probabilities for the Normal Distribution The Empirical Rule is just an approximation and only works for certain values. What if you want to find the probability for x values that are not integer multiples of the standard deviation? The probability is the area under the curve. To find areas under the curve, you need calculus. Before technology, you needed to convert every x value to a standardized number, called the z-score or z-value or simply just z. The z-score is a measure of how many standard deviations an x value is from the mean. • 6.4: Assessing Normality The distributions you have seen up to this point have been assumed to be normally distributed, but how do you determine if it is normally distributed. • 6.5: Sampling Distribution and the Central Limit Theorem 06: Continuous Probability Distributions If you have a situation where the probability is always the same, then this is known as a uniform distribution. An example would be waiting for a commuter train. The commuter trains on the Blue and Green Lines for the Regional Transit Authority (RTA) in Cleveland, OH, have a waiting time during peak hours of ten minutes ("2012 annual report," 2012). If you are waiting for a train, you have anywhere from zero minutes to ten minutes to wait. Your probability of having to wait any number of minutes in that interval is the same. This is a uniform distribution. The graph of this distribution is in Figure \(1\). Suppose you want to know the probability that you will have to wait between five and ten minutes for the next train. You can look at the probability graphically such as in Figure \(2\). How would you find this probability? Calculus says that the probability is the area under the curve. Notice that the shape of the shaded area is a rectangle, and the area of a rectangle is length times width. The length is \(10-5=5\) and the width is 0.1. The probability is \(P(5<x<10)=0.1 * 5=0.5\), where and x is the waiting time during peak hours. Example \(1\) finding probabilities in a uniform distribution The commuter trains on the Blue and Green Lines for the Regional Transit Authority (RTA) in Cleveland, OH, have a waiting time during peak rush hour periods of ten minutes ("2012 annual report," 2012). 1. State the random variable. 2. Find the probability that you have to wait between four and six minutes for a train. 3. Find the probability that you have to wait between three and seven minutes for a train. 4. Find the probability that you have to wait between zero and ten minutes for a train. 5. Find the probability of waiting exactly five minutes. Solution a. x = waiting time during peak hours b. \(P(4<x<6)=(6-4)^{*} 0.1=0.2\) c. 
\(P(3<x<7)=(7-3) * 0.1=0.4\) d. \(P(0<x<10)=(10-0) * 0.1=1.0\) e. Since this would be just one line, and the width of the line is 0, then the \(P(x=5)=0 * 0.1=0\). Notice that in Example \(1\)d, the probability is equal to one. This is because the probability that was computed is the area under the entire curve. Just like in discrete probability distributions, where the total probability was one, the probability of the entire curve is one. This is the reason that the height of the curve is 0.1. In general, the height of a uniform distribution that ranges between a and b, is \(\dfrac{1}{b-a}\). Homework Exercise \(1\) 1. The commuter trains on the Blue and Green Lines for the Regional Transit Authority (RTA) in Cleveland, OH, have a waiting time during peak rush hour periods of ten minutes ("2012 annual report," 2012). 1. State the random variable. 2. Find the probability of waiting between two and five minutes. 3. Find the probability of waiting between seven and ten minutes. 4. Find the probability of waiting eight minutes exactly. 2. The commuter trains on the Red Line for the Regional Transit Authority (RTA) in Cleveland, OH, have a waiting time during peak rush hour periods of eight minutes ("2012 annual report," 2012). 1. State the random variable. 2. Find the height of this uniform distribution. 3. Find the probability of waiting between four and five minutes. 4. Find the probability of waiting between three and eight minutes. 5. Find the probability of waiting five minutes exactly. Answer 1. a. See solutions, b. \(P(2<x<5)=0.3\), c. \(P(7<x<10)=0.3\), d. \(P(x=8)=0\)
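Although uniform probabilities are easy to find as length times height, R can compute them as well with the built-in punif() command, which gives the area to the left of a value for a uniform distribution between a stated minimum and maximum. A sketch for the ten-minute waiting-time example:

# waiting time during peak hours is uniform between 0 and 10 minutes
punif(6, min = 0, max = 10) - punif(4, min = 0, max = 10)    # P(4 < x < 6) = 0.2
punif(7, min = 0, max = 10) - punif(3, min = 0, max = 10)    # P(3 < x < 7) = 0.4
punif(10, min = 0, max = 10) - punif(0, min = 0, max = 10)   # P(0 < x < 10) = 1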
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/06%3A_Continuous_Probability_Distributions/6.01%3A_Uniform_Distribution.txt
Many real life problems produce a histogram that is a symmetric, unimodal, and bellshaped continuous probability distribution. For example: height, blood pressure, and cholesterol level. However, not every bell shaped curve is a normal curve. In a normal curve, there is a specific relationship between its “height” and its “width.” Normal curves can be tall and skinny or they can be short and fat. They are all symmetric, unimodal, and centered at $\mu$, the population mean. Figure $1$ shows two different normal curves drawn on the same scale. Both have $\mu = 100$ but the one on the left has a standard deviation of 10 and the one on the right has a standard deviation of 5. Notice that the larger standard deviation makes the graph wider (more spread out) and shorter. Every normal curve has common features. These are detailed in Figure $2$. • The center, or the highest point, is at the population mean, $\mu$. • The transition points (inflection points) are the places where the curve changes from a “hill” to a “valley”. The distance from the mean to the transition point is one standard deviation, $\sigma$. • The area under the whole curve is exactly 1. Therefore, the area under the half below or above the mean is 0.5. The equation that creates this curve is $f(x)=\dfrac{1}{\sigma \sqrt{2 \pi}} e^{-\dfrac{1}{2}\left(\dfrac{x-\mu}{\sigma}\right)^{2}}$ Just as in a discrete probability distribution, the object is to find the probability of an event occurring. However, unlike in a discrete probability distribution where the event can be a single value, in a continuous probability distribution the event must be a range. You are interested in finding the probability of x occurring in the range between a and b, or $P(a \leq x \leq b)=P(a<x<b)$. Calculus tells us that to find this you find the area under the curve above the interval from a to b. $P(a \leq x \leq b)=P(a<x<b)$ is the area under the curve above the integral from a to b. Before looking at the process for finding the probabilities under the normal curve, it is somewhat useful to look at the Empirical Rule that gives approximate values for these areas. The Empirical Rule is just an approximation and it will only be used in this section to give you an idea of what the size of the probabilities is for different shadings. A more precise method for finding probabilities for the normal curve will be demonstrated in the next section. Please do not use the empirical rule except for real rough estimates. Definition $1$: Empirical Rule The Empirical Rule for any normal distribution: • Approximately 68% of the data is within one standard deviation of the mean. • Approximately 95% of the data is within two standard deviations of the mean. • Approximately 99.7% of the data is within three standard deviations of the mean. Be careful, there is still some area left over in each end. Remember, the maximum a probability can be is 100%, so if you calculate 100%-99.7%=0.3% you will see that for both ends together there is 0.3% of the curve. Because of symmetry, you can divide this equally between both ends and find that there is 0.15% in each tail beyond the $\mu \pm 3 \sigma$.
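You can check the Empirical Rule percentages with R's pnorm() command, which gives the area under a normal curve to the left of a value (the same command is used in the next section). For the standard normal curve with mean 0 and standard deviation 1:

pnorm(1) - pnorm(-1)   # within one standard deviation, about 0.683
pnorm(2) - pnorm(-2)   # within two standard deviations, about 0.954
pnorm(3) - pnorm(-3)   # within three standard deviations, about 0.997
1 - pnorm(3)           # area in one tail beyond three standard deviations, about 0.0013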
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/06%3A_Continuous_Probability_Distributions/6.02%3A_Graphs_of_the_Normal_Distribution.txt
The Empirical Rule is just an approximation, and it only works for certain values. What if you want to find the probability for x values that are not integer multiples of the standard deviation? The probability is the area under the curve, and to find areas under the curve you need calculus. Before technology, you needed to convert every x value to a standardized number, called the z-score or z-value or simply just z. The z-score is a measure of how many standard deviations an x value is from the mean. To convert from a normally distributed x value to a z-score, you use the following formula.

Definition $1$: z-score

$z=\dfrac{x-\mu}{\sigma} \label{z-score}$

where $\mu$ = mean of the population of the x value and $\sigma$ = standard deviation for the population of the x value

The z-score is normally distributed, with a mean of 0 and a standard deviation of 1. Its distribution is known as the standard normal curve. Once you have the z-score, you can look it up in a standard normal distribution table.

Definition $2$: standard normal distribution

The standard normal distribution, z, has a mean of $\mu =0$ and a standard deviation of $\sigma =1$.

Luckily, these days technology can find probabilities for you without converting to the z-score and looking the probabilities up in a table. There are many programs available that will calculate the probability for a normal curve, including Excel and the TI-83/84. There are also online sites available. The following examples show how to do the calculation on the TI-83/84 and with R.

The command on the TI-83/84 is in the DISTR menu and is normalcdf(. You then type in the lower limit, upper limit, mean, and standard deviation, in that order and including the commas. The command in R to find the area to the left is pnorm(z-value or x-value, mean, standard deviation).

Example $1$ general normal distribution

The length of a human pregnancy is normally distributed with a mean of 272 days and a standard deviation of 9 days (Bhat & Kushtagi, 2006).

a. State the random variable.
b. Find the probability of a pregnancy lasting more than 280 days.
c. Find the probability of a pregnancy lasting less than 250 days.
d. Find the probability that a pregnancy lasts between 265 and 280 days.
e. Find the length of pregnancy that 10% of all pregnancies last less than.
f. Suppose you meet a woman who says that she was pregnant for less than 250 days. Would this be unusual and what might you think?

Solution

a. x = length of a human pregnancy

b. First translate the statement into a mathematical statement: P(x>280). Now, draw a picture. Remember the center of this normal curve is 272.

To find the probability on the TI-83/84, looking at the picture you realize the lower limit is 280. The upper limit is infinity. The calculator doesn't have infinity on it, so you need to put in a really big number. Some people like to put in 1000, but if you are working with numbers that are bigger than 1000, then you would have to remember to change the upper limit. The safest number to use is $1 \times 10^{99}$, which you put in the calculator as 1E99 (where E is the EE button on the calculator). The command looks like:

$\text{normalcdf}(280,1 E 99,272,9)$

To find the probability in R, remember that R always gives the probability to the left of the value. The total area under the curve is 1, so if you want the area to the right, you find the area to the left and subtract it from 1. The command looks like:

$1-\text { pnorm }(280,272,9)$

Thus, $P(x>280) \approx 0.187$; that is, 18.7% of all pregnancies last more than 280 days.
This is not unusual since the probability is greater than 5%.

c. First translate the statement into a mathematical statement: P(x<250). Now, draw a picture. Remember the center of this normal curve is 272.

To find the probability on the TI-83/84, looking at the picture (though it is hard to see in this case), the lower limit is negative infinity. Again, the calculator doesn't have this on it, so put in a really small number, such as $-1 \times 10^{99}=-1 E 99$ on the calculator.

$P(x<250)=\text { normalcdf }(-1 E 99,250,272,9)=0.0073$

To find the probability in R, remember that R always gives the probability to the left of the value. Looking at the figure, you can see the area you want is to the left. The command looks like:

$P(x<250)=\text { pnorm }(250,272,9)=0.0073$

Thus 0.73% of all pregnancies last less than 250 days. This is unusual since the probability is less than 5%.

d. First translate the statement into a mathematical statement: P(265<x<280). Now, draw a picture. Remember the center of this normal curve is 272. In this case, the lower limit is 265 and the upper limit is 280.

Using the calculator: $P(265<x<280)=\text { normalcdf }(265,280,272,9)=0.595$

To use R, you have to remember that R gives you the area to the left. So $P(x<280)=\text { pnorm }(280,272,9)$ is the area to the left of 280 and $P(x<265)=\text { pnorm }(265,272,9)$ is the area to the left of 265. The area between the two is the bigger one minus the smaller one. So, $P(265<x<280)=\text { pnorm }(280,272,9)-\text { pnorm }(265,272,9)=0.595$.

Thus 59.5% of all pregnancies last between 265 and 280 days.

e. This problem is asking you to find an x value from a probability. You want to find the x value that has 10% of the lengths of pregnancies to the left of it. On the TI-83/84, the command is in the DISTR menu and is called invNorm(. The invNorm( command needs the area to the left. In this case, that is the area you are given. For the command on the calculator, once you have invNorm( on the main screen you type in the probability to the left, mean, and standard deviation, in that order with the commas. In R, the command is qnorm(area to the left, mean, standard deviation). For this example that would be qnorm(0.1, 272, 9).

Thus 10% of all pregnancies last less than approximately 260 days.

f. From part (c) you found that the probability that a pregnancy lasts less than 250 days is 0.73%. Since this is less than 5%, it is very unusual. You would think that either the woman had a premature baby, or that she may be wrong about when she actually became pregnant.

Example $2$ general normal distribution

The mean mathematics SAT score in 2012 was 514 with a standard deviation of 117 ("Total group profile," 2012). Assume the mathematics SAT score is normally distributed.

a. State the random variable.
b. Find the probability that a person has a mathematics SAT score over 700.
c. Find the probability that a person has a mathematics SAT score of less than 400.
d. Find the probability that a person has a mathematics SAT score between a 500 and a 650.
e. Find the mathematics SAT score that represents the top 1% of all scores.

Solution

a. x = mathematics SAT score

b. First translate the statement into a mathematical statement: P(x>700). Now, draw a picture. Remember the center of this normal curve is 514.

On TI-83/84: $P(x>700)=\text { normalcdf }(700,1 E 99,514,117) \approx 0.056$

In R: $P(x>700)=1-\text { pnorm }(700,514,117) \approx 0.056$

There is a 5.6% chance that a person scored above a 700 on the mathematics SAT test. This is not unusual.

c.
First translate the statement into a mathematical statement: P(x<400). Now, draw a picture. Remember the center of this normal curve is 514.

On TI-83/84: $P(x<400)=\text { normalcdf }(-1 E 99,400,514,117) \approx 0.165$

In R: $P(x<400)=\operatorname{pnorm}(400,514,117) \approx 0.165$

So, there is a 16.5% chance that a person scores less than a 400 on the mathematics part of the SAT.

d. First translate the statement into a mathematical statement: P(500<x<650). Now, draw a picture. Remember the center of this normal curve is 514.

On TI-83/84: $P(500<x<650)=\text { normalcdf }(500,650,514,117) \approx 0.425$

In R: $P(500<x<650)=\text { pnorm }(650,514,117)-\text { pnorm }(500,514,117) \approx 0.425$

So, there is a 42.5% chance that a person has a mathematics SAT score between 500 and 650.

e. This problem is asking you to find an x value from a probability. You want to find the x value that has 1% of the mathematics SAT scores to the right of it. Remember, the calculator and R always need the area to the left, so you need to find the area to the left: 1 - 0.01 = 0.99.

On TI-83/84: $\text{invNorm}(.99,514,117) \approx 786$

In R: $\text{qnorm}(.99,514,117) \approx 786$

So, 1% of all people who took the SAT scored over about 786 points on the mathematics SAT.

Homework

Exercise $1$

1. Find each of the probabilities, where z is a z-score from the standard normal distribution with mean $\mu =0$ and standard deviation $\sigma =1$. Make sure you draw a picture for each problem.
   a. P(z<2.36)
   b. P(z>0.67)
   c. P(0<z<2.11)
   d. P(-2.78<z<1.97)
2. Find the z-score corresponding to the given area. Remember, z is distributed as the standard normal distribution with mean $\mu =0$ and standard deviation $\sigma =1$.
   a. The area to the left of z is 15%.
   b. The area to the right of z is 65%.
   c. The area to the left of z is 10%.
   d. The area to the right of z is 5%.
   e. The area between -z and z is 95%. (Hint: draw a picture and figure out the area to the left of -z.)
   f. The area between -z and z is 99%.
3. If a random variable that is normally distributed has a mean of 25 and a standard deviation of 3, convert the given value to a z-score.
   a. x = 23
   b. x = 33
   c. x = 19
   d. x = 45
4. According to the WHO MONICA Project the mean blood pressure for people in China is 128 mmHg with a standard deviation of 23 mmHg (Kuulasmaa, Hense & Tolonen, 1998). Assume that blood pressure is normally distributed.
   a. State the random variable.
   b. Find the probability that a person in China has blood pressure of 135 mmHg or more.
   c. Find the probability that a person in China has blood pressure of 141 mmHg or less.
   d. Find the probability that a person in China has blood pressure between 120 and 125 mmHg.
   e. Is it unusual for a person in China to have a blood pressure of 135 mmHg? Why or why not?
   f. What blood pressure do 90% of all people in China have less than?
5. The size of fish is very important to commercial fishing. A study conducted in 2012 found the length of Atlantic cod caught in nets in Karlskrona to have a mean of 49.9 cm and a standard deviation of 3.74 cm (Ovegard, Berndt & Lunneryd, 2012). Assume the length of fish is normally distributed.
   a. State the random variable.
   b. Find the probability that an Atlantic cod has a length less than 52 cm.
   c. Find the probability that an Atlantic cod has a length of more than 74 cm.
   d. Find the probability that an Atlantic cod has a length between 40.5 and 57.5 cm.
   e. If you found an Atlantic cod to have a length of more than 74 cm, what could you conclude?
   f.
What length are 15% of all Atlantic cod longer than?
6. The mean cholesterol level of women age 45-59 in Ghana, Nigeria, and Seychelles is 5.1 mmol/l and the standard deviation is 1.0 mmol/l (Lawes, Hoorn, Law & Rodgers, 2004). Assume that cholesterol levels are normally distributed.
   a. State the random variable.
   b. Find the probability that a woman age 45-59 in Ghana, Nigeria, or Seychelles has a cholesterol level above 6.2 mmol/l (considered a high level).
   c. Find the probability that a woman age 45-59 in Ghana, Nigeria, or Seychelles has a cholesterol level below 5.2 mmol/l (considered a normal level).
   d. Find the probability that a woman age 45-59 in Ghana, Nigeria, or Seychelles has a cholesterol level between 5.2 and 6.2 mmol/l (considered borderline high).
   e. If you found a woman age 45-59 in Ghana, Nigeria, or Seychelles having a cholesterol level above 6.2 mmol/l, what could you conclude?
   f. What value do 5% of all women age 45-59 in Ghana, Nigeria, or Seychelles have a cholesterol level less than?
7. In the United States, males between the ages of 40 and 49 eat on average 103.1 g of fat every day with a standard deviation of 4.32 g ("What we eat," 2012). Assume that the amount of fat a person eats is normally distributed.
   a. State the random variable.
   b. Find the probability that a man age 40-49 in the U.S. eats more than 110 g of fat every day.
   c. Find the probability that a man age 40-49 in the U.S. eats less than 93 g of fat every day.
   d. Find the probability that a man age 40-49 in the U.S. eats less than 65 g of fat every day.
   e. If you found a man age 40-49 in the U.S. who says he eats less than 65 g of fat every day, would you believe him? Why or why not?
   f. What daily fat level do 5% of all men age 40-49 in the U.S. eat more than?
8. A dishwasher has a mean life of 12 years with an estimated standard deviation of 1.25 years ("Appliance life expectancy," 2013). Assume the life of a dishwasher is normally distributed.
   a. State the random variable.
   b. Find the probability that a dishwasher will last more than 15 years.
   c. Find the probability that a dishwasher will last less than 6 years.
   d. Find the probability that a dishwasher will last between 8 and 10 years.
   e. If you found a dishwasher that lasted less than 6 years, would you think that you have a problem with the manufacturing process? Why or why not?
   f. A manufacturer of dishwashers only wants to replace free of charge 5% of all dishwashers. How long should the manufacturer make the warranty period?
9. The mean starting salary for nurses is $67,694 nationally ("Staff nurse -," 2013). The standard deviation is approximately $10,333. Assume that the starting salary is normally distributed.
   a. State the random variable.
   b. Find the probability that a starting nurse will make more than $80,000.
   c. Find the probability that a starting nurse will make less than $60,000.
   d. Find the probability that a starting nurse will make between $55,000 and $72,000.
   e. If a nurse made less than $50,000, would you think the nurse was underpaid? Why or why not?
   f. What salary do 30% of all nurses make more than?
10. The mean yearly rainfall in Sydney, Australia, is about 137 mm and the standard deviation is about 69 mm ("Annual maximums of," 2013). Assume rainfall is normally distributed.
   a. State the random variable.
   b. Find the probability that the yearly rainfall is less than 100 mm.
   c. Find the probability that the yearly rainfall is more than 240 mm.
   d. Find the probability that the yearly rainfall is between 140 and 250 mm.
   e.
If a year has a rainfall less than 100 mm, does that mean it is an unusually dry year? Why or why not?
   f. What rainfall amount are 90% of all yearly rainfalls more than?

Answer

1. a. $P(z<2.36)=0.9909$, b. $P(z>0.67)=0.2514$, c. $P(0<z<2.11)=0.4826$, d. $P(-2.78<z<1.97)=0.9729$
3. a. -0.6667, b. 2.6667, c. -2, d. 6.6667
5. a. See solutions, b. $P(x<52 \mathrm{cm})=0.7128$, c. $P(x>74 \mathrm{cm})=5.852 \times 10^{-11}$, d. $P(40.5 \mathrm{cm}<x<57.5 \mathrm{cm})=0.9729$, e. See solutions, f. 53.8 cm
7. a. See solutions, b. $P(x>110 \mathrm{g})=0.0551$, c. $P(x<93 \mathrm{g})=0.0097$, d. $P(x<65 \mathrm{g}) \approx 0$ or $5.57 \times 10^{-19}$, e. See solutions, f. 110.2 g
9. a. See solutions, b. $P(x>\$ 80{,}000)=0.1168$, c. $P(x<\$ 60{,}000)=0.2283$, d. $P(\$ 55{,}000<x<\$ 72{,}000)=0.5519$, e. See solutions, f. $73,112
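For readers working in R, here is a small consolidated sketch of the commands described in this section, applied to Example $2$ (mathematics SAT scores with mean 514 and standard deviation 117). The printed values should match the solutions quoted in the example; the last line simply evaluates the z-score formula from Definition $1$ for a score of 700.

```r
# Mathematics SAT scores: mean 514, standard deviation 117 (Example 2)
1 - pnorm(700, mean = 514, sd = 117)           # P(x > 700), about 0.056
pnorm(400, mean = 514, sd = 117)               # P(x < 400), about 0.165
pnorm(650, 514, 117) - pnorm(500, 514, 117)    # P(500 < x < 650), about 0.425
qnorm(0.99, mean = 514, sd = 117)              # score cutting off the top 1%, about 786

# z-score for a score of 700, using z = (x - mu) / sigma
(700 - 514) / 117                              # about 1.59 standard deviations above the mean
```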
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/06%3A_Continuous_Probability_Distributions/6.03%3A_Finding_Probabilities_for_the_Normal_Distribution.txt
The distributions you have seen up to this point have been assumed to be normally distributed, but how do you determine whether a distribution really is normally distributed? One way is to take a sample and look at the sample to determine if it appears normal. If the sample looks normal, then most likely the population is also. Here are some guidelines that are used to help make that determination.

1. Histogram: Make a histogram. For a normal distribution, the histogram should be roughly bell-shaped. For small samples, this is not very accurate, and another method is needed. A distribution may not look normally distributed from the histogram, but it still may be normally distributed.

2. Outliers: For a normal distribution, there should not be more than one outlier. One way to check for outliers is to use a modified box plot. Outliers are values that are shown as dots outside of the rest of the values. If you don't have a modified box plot, outliers are those data values that are:
   • above Q3, the third quartile, by an amount greater than 1.5 times the interquartile range (IQR), or
   • below Q1, the first quartile, by an amount greater than 1.5 times the interquartile range (IQR).

Note

If there is one outlier, that outlier could have a dramatic effect on the results, especially if it is an extreme outlier. However, there are times when a distribution has more than one outlier but is still normally distributed. The guideline of only one outlier is just a guideline.

3. Normal quantile plot (or normal probability plot): This plot is provided through statistical software on a computer or graphing calculator. If the points lie close to a line, the data come from a distribution that is approximately normal. If the points do not lie close to a line, or they show a pattern that is not a line, the data are likely to come from a distribution that is not normally distributed.

To create a histogram on the TI-83/84:

1. Go into the STAT menu, and then choose 1:Edit.
Figure $1$: STAT Menu on TI-83/84
2. Type your data values into L1.
3. Now click STAT PLOT ($2^{\text { nd }} Y=$).
Figure $2$: STAT PLOT Menu on TI-83/84
4. Use 1:Plot1. Press ENTER.
Figure $3$: Plot1 Menu on TI-83/84
5. You will see a new window. The first thing you want to do is turn the plot on. At this point you should be on On; just press ENTER. It will make On dark.
6. Now arrow down to Type: and arrow right to the graph that looks like a histogram (3rd one from the left in the top row).
7. Now arrow down to Xlist. Make sure this says L1. If it doesn't, then put L1 there (2nd number 1). Freq: should be a 1.
Figure $4$: Plot1 Menu on TI-83/84 Setup for Histogram
8. Now you need to set up the correct window to graph on. Click on WINDOW. You need to set up the settings for the x variable. Xmin should be your smallest data value. Xmax should just be a value sufficiently above your highest data value, but not too high. Xscl is your class width that you calculated. Ymin should be 0 and Ymax should be above what you think the highest frequency is going to be. You can always change this if you need to. Yscl is just how often you would like to see a tick mark on the y-axis.
9. Now press GRAPH. You will see a histogram.

To find the IQR and create a box plot on the TI-83/84:

1. Go into the STAT menu, and then choose 1:Edit.
Figure $5$: STAT Menu on TI-83/84
2. Type your data values into L1. If L1 has data in it, arrow up to the name L1, click CLEAR and then press ENTER. The column will now be cleared and you can type the data in.
3.
Go into the STAT menu, move over to CALC, and choose 1-Var Stats. Press ENTER, then type L1 (2nd 1) and then ENTER. This will give you the summary statistics. If you press the down arrow, you will see the five-number summary.
4. To draw the box plot press 2nd STAT PLOT.
Figure $6$: STAT PLOT Menu on TI-83/84
5. Use Plot1. Press ENTER.
Figure $7$: Plot1 Menu on TI-83/84 Setup for Box Plot
6. Put the cursor on On and press ENTER to turn the plot on. Use the down arrow and the right arrow to highlight the box plot in the middle of the second row of types, then press ENTER. Set Data List to L1 (it might already say that) and leave Freq as 1.
7. Now tell the calculator the setup for the units on the x-axis so you can see the whole plot. The calculator will do it automatically if you press ZOOM, which is in the middle of the top row.
Figure $8$: ZOOM Menu on TI-83/84
Then use the down arrow to get to 9:ZoomStat and press ENTER. The box plot will be drawn.
Figure $9$: ZOOM Menu on TI-83/84 with ZoomStat

To create a normal quantile plot on the TI-83/84:

1. Go into the STAT menu, and then choose 1:Edit.
Figure $10$: STAT Menu on TI-83/84
2. Type your data values into L1. If L1 has data in it, arrow up to the name L1, click CLEAR and then press ENTER. The column will now be cleared and you can type the data in.
3. Now click STAT PLOT ($2^{\text { nd }} Y=$). You have three stat plots to choose from.
Figure $11$: STAT PLOT Menu on TI-83/84
4. Use 1:Plot1. Press ENTER.
5. Put the cursor on the word On and press ENTER. This turns on the plot. Arrow down to Type: and use the right arrow to move over to the last graph (it looks like an increasing linear graph). Set Data List to L1 (it might already say that) and set Data Axis to Y. The Mark is up to you.
Figure $12$: Plot1 Menu on TI-83/84 Setup for Normal Quantile Plot
6. Now you need to set up the correct window on which to graph. Click on WINDOW. You need to set up the settings for the x variable. Xmin should be -4. Xmax should be 4. Xscl should be 1. Ymin and Ymax are based on your data: Ymin should be below your lowest data value and Ymax should be above your highest data value. Yscl is just how often you would like to see a tick mark on the y-axis.
7. Now press GRAPH. You will see the normal quantile plot.

To create a histogram in R:

Put the variable in using variable<-c(type in the data with commas between values), using a name for the variable that makes sense for the problem. The command for a histogram is hist(variable). You can then copy the histogram into a word processing program. There are options that you can put in for the title and axis labels. See section 2.2 for the commands for those.

To create a modified box plot in R:

Put the variable in using variable<-c(type in the data with commas between values), using a name for the variable that makes sense for the problem. The command for a box plot is boxplot(variable). You can then copy the box plot into a word processing program. There are options that you can put in for the title, horizontal orientation, and axis labels. See section 3.3 for the commands for those.

To create a normal quantile plot in R:

Put the variable in using variable<-c(type in the data with commas between values), using a name for the variable that makes sense for the problem. The command for a normal quantile plot is qqnorm(variable). You can then copy the normal quantile plot into a word processing program.

Realize that your random variable may be normally distributed even if the sample fails the three tests.
However, if the histogram definitely doesn't look symmetric and bell-shaped, there are outliers that are very extreme, and the normal probability plot doesn't look linear, then you can be fairly confident that the data set does not come from a population that is normally distributed.

Example $1$ is it normal?

In Kiama, NSW, Australia, there is a blowhole. The data in Table $1$ are times in seconds between eruptions ("Kiama blowhole eruptions," 2013). Do the data come from a population that is normally distributed?

83 51 87 60 28 95 8 27
15 10 18 16 29 54 91 8
17 55 10 35 47 77 36 17
21 36 18 40 10 7 34 27
28 56 8 25 68 146 89 18
73 69 9 37 10 82 29 8
60 61 61 18 169 25 8 26
11 83 11 42 17 14 9 12

Table $1$: Time (in Seconds) Between Kiama Blowhole Eruptions

a. State the random variable.
b. Draw a histogram.
c. Find the number of outliers.
d. Draw the normal quantile plot.
e. Do the data come from a population that is normally distributed?

Solution

a. x = time in seconds between eruptions of the Kiama Blowhole

b. The histogram produced is in Figure $13$. It looks skewed right and not symmetric.

c. The box plot is in Figure $14$. There are two outliers. Or, using the IQR method:

$I Q R=Q 3-Q 1=60-14.5=45.5$ seconds
$1.5 * I Q R=1.5 * 45.5=68.25$ seconds
$Q 1-1.5 * I Q R=14.5-68.25=-53.75$ seconds
$Q 3+1.5 * I Q R=60+68.25=128.25$ seconds

Outliers are any values greater than 128.25 seconds or less than -53.75 seconds. Since all the numbers are measurements of time, no data value can be less than 0 seconds, so none fall below -53.75 seconds. There are two values larger than 128.25 seconds, so there are two outliers. Two outliers alone are not a definite indication that the sample does not come from a normal distribution, but the fact that both are well above 128.25 seconds is an indication of an issue.

d. The normal quantile plot is in Figure $15$. This graph looks more like exponential growth than a line.

e. Considering that the histogram is skewed right, there are two extreme outliers, and the normal probability plot does not look linear, the conclusion is that this sample is not from a population that is normally distributed.

Example $2$ is it normal?

One way to measure intelligence is with an IQ score. Table $2$ contains 50 IQ scores. Determine if the sample comes from a population that is normally distributed.

78 92 96 100 67 105 109 75 127 111
93 114 82 100 125 67 94 74 81 98
102 108 81 96 103 91 90 96 86 92
84 92 90 103 115 93 85 116 87 106
85 88 106 104 102 98 116 107 102 89

Table $2$: IQ Scores

a. State the random variable.
b. Draw a histogram.
c. Find the number of outliers.
d. Draw the normal quantile plot.
e. Do the data come from a population that is normally distributed?

Solution

a. x = IQ score

b. The histogram is in Figure $16$. It looks somewhat symmetric, though it could be thought of as slightly skewed right.

c. The modified box plot is in Figure $17$. There are no outliers. Or, using the IQR method:

$I Q R=Q 3-Q 1=105-87=18$
$1.5 * I Q R=1.5 * 18=27$
$Q 1-1.5 * I Q R=87-27=60$
$Q 3+1.5 * I Q R=105+27=132$

Outliers are any values greater than 132 or less than 60. Since the maximum value is 127 and the minimum is 67, there are no outliers.

d. The normal quantile plot is in Figure $18$. This graph looks fairly linear.

e. Considering that the histogram is somewhat symmetric, there are no outliers, and the normal probability plot looks linear, the conclusion is that this sample is from a population that is normally distributed.

Homework

Exercise $1$

1.
Cholesterol data were collected on patients four days after they had a heart attack. The data are in Table $3$. Determine if the data are from a population that is normally distributed.

218 234 214 116 200 276 146
182 238 288 190 236 244 258
240 294 220 200 220 186 352
202 218 248 278 248 270 242

Table $3$: Cholesterol Data Collected Four Days After a Heart Attack

2. The size of fish is very important to commercial fishing. A study conducted in 2012 collected the lengths of Atlantic cod caught in nets in Karlskrona (Ovegard, Berndt & Lunneryd, 2012). Data based on information from the study are in Table $4$. Determine if the data are from a population that is normally distributed.

48 50 50 55 53 50 49 52 61 48
45 47 53 46 50 48 42 44 50 60
54 48 50 49 53 48 52 56 46 46
47 48 48 49 52 47 51 48 45 47

Table $4$: Atlantic Cod Lengths

3. The WHO MONICA Project collected blood pressure data for people in China (Kuulasmaa, Hense & Tolonen, 1998). Data based on information from the study are in Table $5$. Determine if the data are from a population that is normally distributed.

114 141 154 137 131 132 133 156 119
138 86 122 112 114 177 128 137 140
171 129 127 104 97 135 107 136 118
92 182 150 142 97 140 106 76 115
119 125 162 80 138 124 132 143 119

Table $5$: Blood Pressure Values for People in China

4. Annual rainfalls for Sydney, Australia, are given in Table $6$ ("Annual maximums of," 2013). Can you assume rainfall is normally distributed?

146.8 383 90.9 178.1 267.5 95.5 156.5 180
90.9 139.7 200.2 171.7 187.2 184.9 70.1 58
84.1 55.6 133.1 271.8 135.9 71.9 99.4 110.6
47.5 97.8 122.7 58.4 154.4 173.7 118.8 88
84.6 171.5 254.3 185.9 137.2 138.9 96.2 85
45.2 74.7 264.9 113.8 133.4 68.1 156.4

Table $6$: Annual Rainfall in Sydney, Australia

Answer

1. Normally distributed
3. Normally distributed
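For readers doing these checks in R, here is a minimal sketch that applies the three commands described in this section (hist, boxplot, and qqnorm) to the Kiama blowhole data from Table $1$ of Example $1$. The variable name eruptions is just an illustrative choice, and the qqline() call, which adds a reference line to the normal quantile plot, is an optional extra beyond the commands listed above.

```r
# Times (in seconds) between Kiama blowhole eruptions, from Table 1 of Example 1
eruptions <- c(83, 51, 87, 60, 28, 95, 8, 27, 15, 10, 18, 16, 29, 54, 91, 8,
               17, 55, 10, 35, 47, 77, 36, 17, 21, 36, 18, 40, 10, 7, 34, 27,
               28, 56, 8, 25, 68, 146, 89, 18, 73, 69, 9, 37, 10, 82, 29, 8,
               60, 61, 61, 18, 169, 25, 8, 26, 11, 83, 11, 42, 17, 14, 9, 12)

hist(eruptions)      # histogram: is it roughly bell-shaped and symmetric?
boxplot(eruptions)   # modified box plot: points beyond the whiskers are outliers
qqnorm(eruptions)    # normal quantile plot: do the points lie close to a line?
qqline(eruptions)    # optional reference line for judging linearity

summary(eruptions)   # five-number summary, useful for the IQR outlier check by hand
```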
textbooks/stats/Introductory_Statistics/Statistics_with_Technology_2e_(Kozak)/06%3A_Continuous_Probability_Distributions/6.04%3A_Assessing_Normality.txt