Regression through the origin

Sometimes, due to the nature of the problem (e.g., (i) a physical law in which one variable is proportional to another variable, and the goal is to determine the constant of proportionality; (ii) X = sales, Y = profit from sales), or due to empirical considerations (in the full regression model the intercept $\beta_0$ turns out to be insignificant), one may fit the model $Y_i = \beta_1X_i + \varepsilon_i$, where the $\varepsilon_i$ are assumed to be uncorrelated, with mean 0 and variance $\sigma^2$. Then the estimates are:

$\widehat \beta_1 = \widetilde b_1 = \frac{\sum_{i=1}^n X_iY_i}{\sum_{i=1}^n X_i^2}, \qquad \widetilde{SSE} = \sum_{i=1}^n (Y_i - \widetilde b_1 X_i)^2 = \sum_i Y_i^2 - \widetilde b_1^2 \sum_i X_i^2.$

Also, $E(\widetilde b_1) = \beta_1$ and $E(\widetilde{SSE}) = (n-1)\sigma^2$, so that $\widetilde{MSE} = \frac{1}{n-1}\widetilde{SSE}$ is an unbiased estimator of $\sigma^2$ and d.f.$(\widetilde{MSE}) = n-1$. Var$(\widetilde b_1) = \frac{\sigma^2}{\sum_i X_i^2}$, and is estimated by $s^2(\widetilde b_1) = \frac{\widetilde{MSE}}{\sum_i X_i^2}$.

• 100(1 - $\alpha$)% confidence interval for $\beta_1$ : $\widetilde b_1 \pm t(1-\alpha/2; n-1) s(\widetilde b_1)$.
• Estimate of the mean response for $X = X_h$ : $\widetilde Y_h = \widetilde b_1 X_h$, with estimated standard error $s(\widetilde Y_h) = \sqrt{\widetilde{MSE} \frac{X_h^2}{\sum_i X_i^2}}$.
• 100(1 - $\alpha$)% confidence interval for the mean response : $\widetilde Y_h \pm t(1-\alpha/2; n-1) s(\widetilde Y_h)$.
• ANOVA decomposition : $\widetilde{SSTO} = \widetilde{SSR} + \widetilde{SSE}$, where $\widetilde{SSTO} = \sum_i Y_i^2$ with d.f.$(\widetilde{SSTO}) = n$, and $\widetilde{SSR} = \widetilde b_1^2 \sum_i X_i^2$ with d.f.$(\widetilde{SSR}) = 1$. Reject $H_0 : \beta_1 = 0$ if the F-ratio $F^* = \frac{\widetilde{MSR}}{\widetilde{MSE}} > F(1-\alpha;1,n-1)$.

Inverse prediction, or calibration problem

In some experimental studies it is important to know the value of X needed to obtain (on average) a pre-specified value of Y. The following example illustrates such a situation.

X | 10 | 15 | 15 | 20 | 20 | 20 | 25 | 25 | 28 | 30
Y | 160 | 171 | 175 | 182 | 184 | 181 | 188 | 193 | 195 | 200

Here Y = tensile strength of paper, X = amount (percentage) of hardwood in the pulp. We want to find $X_{h(new)}$ for a given value of $Y_{h(new)}$. The estimate is $\widehat X_{h(new)} = \frac{Y_{h(new)} - b_0}{b_1}$. The estimated standard error of prediction is $s(\widehat X_{h(new)})$, where

$s^2(\widehat X_{h(new)}) = \frac{MSE}{b_1^2} \left[1+ \frac{1}{n} + \frac{(\widehat X_{h(new)} - \overline{X})^2}{\sum_i(X_i - \overline{X})^2}\right].$

Then the 100(1 - $\alpha$)% prediction interval for $X_{h(new)}$ is given by $\widehat X_{h(new)} \pm t(1-\alpha/2;n-2) s(\widehat X_{h(new)})$.

Fitted model : $\widehat Y$ = 143.8244 + 1.8786X. SSE = 38.8328, SSTO = 1300.9, SSR = 1262.1, $R^2$ = 0.9701, $\sum_i (X_i - \overline{X})^2$ = 357.6, MSE = 4.8541, $\overline{X}$ = 20.8, $\overline{Y}$ = 182.9.

Contributors
• Yingwen Li
• Debashis Paul
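To make the calibration recipe concrete, here is a minimal Python sketch (not part of the original text) that refits the hardwood data above and computes an inverse-prediction interval for a hypothetical target strength $Y_{h(new)} = 190$; the variable names and that target value are illustrative choices, not from the example.

```python
# Hedged sketch: refit the calibration line for the hardwood data and invert it
# for a hypothetical target Y_h(new) = 190 (my choice, not from the text).
X = [10, 15, 15, 20, 20, 20, 25, 25, 28, 30]
Y = [160, 171, 175, 182, 184, 181, 188, 193, 195, 200]
n = len(X)
xbar, ybar = sum(X) / n, sum(Y) / n
Sxx = sum((x - xbar) ** 2 for x in X)                                # 357.6
Sxy = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
b1 = Sxy / Sxx                                                       # ~1.8786
b0 = ybar - b1 * xbar                                                # ~143.82
MSE = sum((y - b0 - b1 * x) ** 2 for x, y in zip(X, Y)) / (n - 2)    # ~4.854
Y_new = 190.0                                   # hypothetical target strength
X_hat = (Y_new - b0) / b1                       # point estimate of required hardwood %
s2 = (MSE / b1 ** 2) * (1 + 1 / n + (X_hat - xbar) ** 2 / Sxx)
t = 2.306                                       # t(0.975; n-2 = 8) from a t table
print(X_hat - t * s2 ** 0.5, X_hat + t * s2 ** 0.5)  # 95% prediction interval for X
```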
Simple Linear Regression (with one predictor)

Model

$X$ and $Y$ are the predictor and response variables, respectively. Fit the model $Y_i = \beta_0+\beta_1X_i+\epsilon_i$, $i = 1,2,...,n$, where $\epsilon_1 ,..., \epsilon_n$ are uncorrelated, with $E(\epsilon_i)=0$ and $\mathrm{Var}(\epsilon_i)=\sigma^2$.

Interpretation

Look at the scatter plot of $Y$ (vertical axis) versus $X$ (horizontal axis). Consider narrow vertical strips around the different values of $X$:

1. Means (measure of center) of the points falling in the vertical strips lie (approximately) on a straight line with slope $\beta_1$ and intercept $\beta_0$.
2. Standard deviations (measure of spread) of the points falling in each vertical strip are (roughly) the same.

Estimation of $\beta_0$ and $\beta_1$

We employ the method of least squares to estimate $\beta_0$ and $\beta_1$. This means we minimize the sum of squared errors: $Q(\beta_0,\beta_1) = \sum_{i=1}^n(Y_i-\beta_0-\beta_1X_i)^2$. This involves differentiating $Q(\beta_0,\beta_1)$ with respect to the parameters $\beta_0$ and $\beta_1$ and setting the derivatives to zero. This gives us the normal equations:

$nb_0 + b_1\sum_{i=1}^nX_i = \sum_{i=1}^nY_i$
$b_0\sum_{i=1}^nX_i+b_1\sum_{i=1}^nX_i^2 = \sum_{i=1}^nX_iY_i$

Solving these equations, we have:

$b_1=\frac{\sum_{i=1}^nX_iY_i-n\overline{X}\,\overline{Y}}{\sum_{i=1}^nX_i^2-n\overline{X}^2} = \frac{\sum_{i=1}^n(X_i-\overline{X})(Y_i-\overline{Y})}{\sum_{i=1}^n(X_i-\overline{X})^2}, \qquad b_0 = \overline{Y}-b_1\overline{X}$

$b_0$ and $b_1$ are the estimates of $\beta_0$ and $\beta_1$, respectively, and are sometimes denoted $\widehat\beta_0$ and $\widehat\beta_1$.

Prediction

The fitted regression line is given by the equation $\widehat{Y} = b_0 + b_1X$ and is used to predict the value of $Y$ given a value of $X$.

Residuals

These are the quantities $e_i = Y_i - \widehat{Y}_i = Y_i - (b_0 + b_1X_i)$, where $\widehat{Y}_i = b_0 + b_1X_i$. Note that $\epsilon_i = Y_i - \beta_0 - \beta_1X_i$, so the $e_i$'s estimate the $\epsilon_i$'s. Some properties of the regression line and residuals are:

1. $\sum_{i}e_i = 0$.
2. $\sum_{i}e_i^2 \leq \sum_{i}(Y_i - u_0 - u_1X_i)^2$ for any $(u_0, u_1)$ (with equality when $(u_0, u_1) = (b_0, b_1)$).
3. $\sum_{i}Y_i = \sum_{i}\widehat{Y}_i$.
4. $\sum_{i}X_ie_i = 0$.
5. $\sum_{i}\widehat{Y}_ie_i = 0$.
6. The regression line passes through the point $(\overline{X},\overline{Y})$.
7. The slope $b_1$ of the regression line can be expressed as $b_1 = r_{XY}\frac{s_Y}{s_X}$, where $r_{XY}$ is the correlation coefficient between $X$ and $Y$, and $s_X$ and $s_Y$ are the standard deviations of $X$ and $Y$.

The error sum of squares, denoted $SSE$, is given by

$SSE = \sum_{i=1}^ne_i^2 = \sum_{i=1}^n(Y_i - \overline{Y})^2 - b_1^2\sum_{i=1}^n(X_i-\overline{X})^2.$

Estimation of $\sigma^2$

It can be shown that $E(SSE) = (n-2)\sigma^2.$ Therefore, $\sigma^2$ is estimated by the mean squared error, i.e., $MSE = \frac{SSE}{n-2}.$ Note also that this justifies the statement that the degrees of freedom of the errors is $n-2$, which is the sample size ($n$) minus the number of regression coefficients ($\beta_0$ and $\beta_1$) being estimated.

Contributors
• Debashis Paul (UCD)
• Scott Brunstein (UCD)
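The least-squares recipe above translates directly into a few lines of code. The following is a minimal sketch (the function name and the toy data are my own, not from the text), implementing the formulas for $b_1$, $b_0$, SSE and MSE.

```python
# Minimal least-squares fit implementing the formulas for b1, b0, SSE and MSE above.
def fit_simple_linear(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    Sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = Sxy / Sxx                    # slope estimate
    b0 = ybar - b1 * xbar             # intercept estimate
    residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    SSE = sum(e ** 2 for e in residuals)
    MSE = SSE / (n - 2)               # unbiased estimator of sigma^2
    return b0, b1, MSE, residuals

# Toy data (illustrative only); the residuals should sum to ~0 (property 1)
# and be orthogonal to X (property 4).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.8]
b0, b1, MSE, e = fit_simple_linear(x, y)
print(b0, b1, MSE, sum(e), sum(xi * ei for xi, ei in zip(x, e)))
```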
textbooks/stats/Computing_and_Modeling/Supplemental_Modules_(Computing_and_Modeling)/Regression_Analysis/Simple_linear_regression/Regression_through_the_origin.txt
We want to find confidence intervals for more than one parameter simultaneously. For example, we might want to find confidence intervals for $\beta_0$ and $\beta_1$ jointly.

Bonferroni Joint Confidence Intervals

The confidence coefficients for the individual parameters are adjusted upward (above $1 - \alpha$) so that the confidence coefficient for the collection of parameters is at least $1 - \alpha$. This is based on the following inequality:

Theorem (Bonferroni's Inequality)

$P( \beta_0\cap\beta_1)\geq1-P(\beta_0^c)-P(\beta_1^c) \label{Bonferroni}$

for any two events $\beta_0$ and $\beta_1$, where $\beta_0^c$ and $\beta_1^c$ are complements of events $\beta_0$ and $\beta_1$, respectively. We take $\beta_0 =$ the event that the confidence interval for $\beta_0$ covers $\beta_0$, and $\beta_1 =$ the event that the confidence interval for $\beta_1$ covers $\beta_1$. So, if $P(\beta_0) = 1-\alpha_1$ and $P(\beta_1) = 1-\alpha_2$, then $P(\beta_0\cap\beta_1)\geq1-\alpha_1-\alpha_2$, by Bonferroni's inequality (Equation \ref{Bonferroni}). Note that $\beta_0\cap\beta_1$ is the event that the confidence intervals for both parameters cover the respective parameters. Therefore we take $\alpha_1 = \alpha_2 = \alpha/2$ to get joint confidence intervals with confidence coefficient at least $1 - \alpha$:

$b_0 \pm t(1-\alpha/4;n-2) s(b_0)$ and $b_1 \pm t(1-\alpha/4;n-2) s(b_1)$

for $\beta_0$ and $\beta_1$, respectively.

Bonferroni Joint Confidence Intervals for Mean Response

We want to find simultaneous confidence intervals for $E(Y|X = X_h) = \beta_0 + \beta_1X_h$ for g different values of $X_h$. Using Bonferroni's inequality for the intersection of g different events, the confidence intervals with confidence coefficient (at least) $1-\alpha$ are given by $\widehat{Y_h} \pm t(1-\alpha/2g; n-2)s(\widehat{Y_h}).$

Confidence band for the regression line : Working-Hotelling procedure

The confidence band $\widehat{Y_h} \pm\sqrt{2F(1-\alpha;2,n-2)}s(\widehat{Y_h})$ contains the entire regression line (for all values of $X$) with confidence level $1-\alpha$. The Working-Hotelling procedure for obtaining $1-\alpha$ simultaneous confidence intervals for the mean responses, therefore, is to use these confidence limits for the g different values of $X_h$.

Simultaneous prediction intervals

Recall that the standard error of prediction for a new observation $Y_{h(new)}$ with $X = X_h$ is

$s(Y_{h(new)}-\widehat{Y_h}) = \sqrt{MSE\left(1+\frac{1}{n}+\frac{(X_h-\overline{X})^2}{\sum_i(X_i-\overline{X})^2}\right)}$

In order to predict the new observations for g different values of X, we may use one of two procedures:

• Bonferroni procedure : $\widehat{Y_h}\pm t(1-\alpha/2g;n-2)s(Y_{h(new)}-\widehat{Y_h})$.
• Scheffe procedure : $\widehat{Y_h}\pm \sqrt{gF(1-\alpha;g,n-2)}s(Y_{h(new)}-\widehat{Y_h})$.

Remark : A point to note is that, except for the Working-Hotelling procedure for finding simultaneous confidence intervals for the mean response, in all the other cases the confidence intervals become wider as g increases.

Which method to choose : Choose the method which leads to narrower intervals. As a comparison between Bonferroni and Working-Hotelling (for finding confidence intervals for the mean response), the following can be said:

• If g is small, Bonferroni is better.
• If g is large, Working-Hotelling is better (the coefficient of $s(\widehat{Y_h})$ in the confidence limits remains the same even as g becomes large).
Housing data as an example

Fitted regression model : $\widehat{Y_h} = 28.981 + 2.941X$, with $n=19$, $s(b_0) = 8.5438$, $s(b_1) = 0.5412$, $MSE = 11.9512$.

• Simultaneous confidence intervals for $\beta_0$ and $\beta_1$ : For a 95% simultaneous C.I., $t(1-\alpha/4;n-2)=t(0.9875;17) = 2.4581$. The intervals are (for $\beta_0$ and $\beta_1$, respectively)

$28.981 \pm 2.4581 \times 8.5438 \equiv 28.981 \pm 21.002, \qquad 2.941 \pm 2.4581 \times 0.5412 \equiv 2.941 \pm 1.330$

• Simultaneous inference for the mean response at g different values of X : Say g = 3, and the values are as follows.

$X_h$ | 14 | 16 | 18.5
$\widehat{Y_h}$ | 70.155 | 76.037 | 83.390
$s(\widehat{Y_h})$ | 1.2225 | 0.8075 | 1.7011

$t(1-0.05/2g;n-2) = t(0.99167;17) = 2.655$, $\sqrt{2F(0.95;2,n-2)} = \sqrt{2 \times 3.5915} = 2.6801$

The 95% simultaneous confidence intervals for the mean responses are given in the following table:

$X_h$ | 14 | 16 | 18.5
Bonferroni | $70.155 \pm 3.248$ | $76.037 \pm 2.145$ | $83.390 \pm 4.520$
Working-Hotelling | $70.155 \pm 3.276$ | $76.037 \pm 2.164$ | $83.390 \pm 4.559$

• Simultaneous prediction intervals for g different values of X : Again, say g = 3 and the values are 14, 16 and 18.5. In this case $\alpha = 0.05$ and $t(1-\alpha/2g;n-2) = t(0.99167; 17) = 2.655$. Also $\sqrt{gF(1-\alpha;g,n-2)} = \sqrt{3F(0.95;3,17)} = \sqrt{3 \times 3.1968} = 3.0968$. The standard errors and simultaneous 95% C.I. are given in the following table:

$X_h$ | 14 | 16 | 18.5
$\widehat{Y_h}$ | 70.155 | 76.037 | 83.390
$s(Y_{h(new)}-\widehat{Y_h})$ | 3.6668 | 3.5501 | 3.8529
Bonferroni | $70.155 \pm 9.742$ | $76.037 \pm 9.432$ | $83.390 \pm 10.237$
Scheffe | $70.155 \pm 11.355$ | $76.037 \pm 10.994$ | $83.390 \pm 11.932$

Contributors
• Anirudh Kandada (UCD)
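For readers who want to reproduce the critical values used in this example, here is a small sketch; it assumes SciPy is available and uses only its t and F quantile functions. The numbers in the comments are the ones quoted above.

```python
# Sketch (assumes scipy): critical values for the housing-data example above.
from scipy import stats

n, g, alpha = 19, 3, 0.05
t_joint  = stats.t.ppf(1 - alpha / 4, n - 2)               # ~2.4581, for beta_0 and beta_1
t_bonf   = stats.t.ppf(1 - alpha / (2 * g), n - 2)         # ~2.655
wh_mult  = (2 * stats.f.ppf(1 - alpha, 2, n - 2)) ** 0.5   # ~2.6801 (Working-Hotelling)
sch_mult = (g * stats.f.ppf(1 - alpha, g, n - 2)) ** 0.5   # ~3.0968 (Scheffe)

# Bonferroni limits for the mean response at X_h = 16 (values from the table above):
yhat, se = 76.037, 0.8075
print(yhat - t_bonf * se, yhat + t_bonf * se)              # ~76.037 +/- 2.145
```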
textbooks/stats/Computing_and_Modeling/Supplemental_Modules_(Computing_and_Modeling)/Regression_Analysis/Simple_linear_regression/Simultaneous_Inference.txt
Addition rule for matrices

If $c_1, ... , c_k$ are scalars, and $A_1,...,A_k$ are all $m \times n$ matrices, then $B = c_1A_1+c_2A_2+ ... + c_kA_k$ is an $m \times n$ matrix with $(i, j)$-th entry $B(i, j) = c_1A_1(i, j) + c_2A_2(i, j) + ... + c_kA_k(i, j)$, for all $i = 1, ... , m; j = 1, ..., n$. (Note: sometimes we denote the entries of a matrix by $A_{ij}$, and sometimes by $A(i, j)$. But the first index is always for the row and the second index is for the column.)

Transpose of a matrix

If $A$ is an $m \times n$ matrix, then $A^T$ (read "$A$-transpose") is the $n \times m$ matrix $B$ whose $(i, j)$-th entry is $B_{ij} = A_{ji}$ for all $i = 1, ... , n; j = 1, ... ,m$.

Inner product of vectors

If $x$ and $y$ are two $m \times 1$ vectors, then the inner product (or dot product) of $x$ and $y$ is given by $\left \langle x, y \right \rangle = \sum_{i=1}^{m}x_iy_i$. Note that $\left \langle x, y \right \rangle = \left \langle y, x \right \rangle$.

Multiplication of matrices

If $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix, then the product $AB = C$, say, is defined and is an $m \times p$ matrix with $(i, j)$-th entry $C_{ij} = \sum_{k=1}^{n}A_{ik}B_{kj}$ for all $i = 1, ... , m; j = 1, ... ,p$. Note that for $m \times 1$ vectors $x$ and $y$, $\left \langle x, y \right \rangle = x^Ty = y^Tx$. In other words, the $(i, j)$-th entry of $AB$ is the inner product of the $i$-th row of $A$ and the $j$-th column of $B$.

Special matrices

1. Square matrix: A matrix $A$ is square if it is $m \times m$ (that is, number of rows = number of columns).
2. Symmetric matrix: An $m \times m$ (square) matrix $A$ is symmetric if $A = A^T$. That is, for all $1 \leq i, j \leq m$, $A_{ij} = A_{ji}$.
3. Diagonal matrix: An $m \times m$ matrix with all entries zero except (possibly) the entries on the diagonal (that is, the $(i , i)$-th entry for $i = 1, ... , m$) is called a diagonal matrix.
4. Identity matrix: The $m \times m$ diagonal matrix with all diagonal entries equal to 1 is called the identity matrix and is denoted by $I$ (or $I_m$). It has the property that for any $m \times n$ matrix $A$ and any $p \times m$ matrix $B$, $IA = A$ and $BI = B$.
5. One vector: The $m \times 1$ vector with all entries equal to 1 is usually called the one vector (non-standard term) and is denoted by $1$ (or $1_m$).
6. Ones matrix: The $m \times m$ matrix with all entries equal to 1 is denoted by $J$ (or $J_m$). Note that $J_m = 1_m1_{m}^{T}$.
7. Zero vector: The $m \times 1$ vector with all entries zero is called the zero vector and is denoted by $0$ (or $0_m$).

• Multiplication is not commutative: If $A$ and $B$ are both $m \times m$ matrices, then both $AB$ and $BA$ are defined and are $m \times m$ matrices. However, in general $AB \neq BA$. Notice that $I_mB = BI_m = B$, where $I_m$ is the identity matrix.

• Linear independence: The $m \times 1$ vectors $x_1,...,x_k$ ($k$ arbitrary) are said to be linearly dependent if there exist constants $c_1,..., c_k$, not all zero, such that $c_1x_1+c_2x_2 + ... + c_kx_k = 0$. If no such sequence of numbers $c_1, ... , c_k$ exists, then the vectors $x_1, ..., x_k$ are said to be linearly independent.

1. Relationship with dimension: If $k > m$, then $m \times 1$ vectors $x_1, ..., x_k$ are always linearly dependent.
2. Rank of a matrix: For an $m \times n$ matrix $A$, the rank of $A$, written rank($A$), is the maximal number of linearly independent columns of $A$ (treating each column as an $m \times 1$ vector). Also, rank($A$) $\leq$ min$\{m,n\}$.
3. Nonsingular matrix: If an $m \times m$ matrix $A$ has full rank, that is, rank($A$) = $m$ (which is equivalent to saying that all the columns of $A$ are linearly independent), then the matrix $A$ is called nonsingular.

• Inverse of a matrix: If an $m \times m$ matrix $A$ is nonsingular, then it has an inverse, that is, a unique $m \times m$ matrix denoted by $A^{-1}$ that satisfies the relationship $A^{-1}A = I_m = AA^{-1}$.

Inverse of a $2 \times 2$ matrix: Let a $2 \times 2$ matrix $A$ be expressed as $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. Then $A$ is nonsingular (and hence has an inverse) if and only if $ad-bc \neq 0$. If this is satisfied, then the inverse is

$A^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$

• Solution of a system of linear equations: A system of $m$ linear equations in $m$ variables $b_1, ..., b_m$ can be expressed as

$a_{11}b_1 + a_{12}b_2 + ... + a_{1m}b_m = c_1$
$a_{21}b_1 + a_{22}b_2 + ... + a_{2m}b_m = c_2$
$\vdots$
$a_{m1}b_1 + a_{m2}b_2 + ... + a_{mm}b_m = c_m$

Here the coefficients $a_{ij}$ and the constants $c_i$ are considered known. This system can be expressed in matrix form as $Ab = c$, where $A$ is the $m \times m$ matrix with $(i, j)$-th entry $a_{ij}$, and $b$ and $c$ are $m \times 1$ vectors with $i$-th entries $b_i$ and $c_i$, respectively, for $i = 1, ... , m; j = 1, ..., m$. If the matrix $A$ is nonsingular, then a unique solution exists for this system of equations and is given by $b = A^{-1}c$. To see this, note that $A(A^{-1}c) = (AA^{-1})c = I c = c$, which shows that $A^{-1}c$ is a solution. On the other hand, if $b = b^*$ is a solution, then it satisfies $Ab^* = c$. Hence $b^* = I b^* = (A^{-1}A)b^* = A^{-1}(Ab^*) = A^{-1}c$, which proves uniqueness.

Contributors
• Debashis Paul
• Cathy Wang
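As a small illustration of the last two facts, here is a sketch that assumes NumPy is installed; the matrix entries are arbitrary numbers chosen for the example, not from the text.

```python
# Sketch (assumes numpy): the 2x2 inverse formula and solving A b = c.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                      # arbitrary nonsingular example
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]     # ad - bc, nonzero here
A_inv = (1.0 / det) * np.array([[ A[1, 1], -A[0, 1]],
                                [-A[1, 0],  A[0, 0]]])
assert np.allclose(A_inv @ A, np.eye(2))        # A^{-1} A = I_2

c = np.array([1.0, 2.0])
b = np.linalg.solve(A, c)                       # unique solution of A b = c
assert np.allclose(A @ b, c)
assert np.allclose(b, A_inv @ c)                # agrees with b = A^{-1} c
```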
textbooks/stats/Computing_and_Modeling/Supplemental_Modules_(Computing_and_Modeling)/Regression_Analysis/Simple_linear_regression/Some_basic_facts_about_vectors_and_matrices.txt
Lack of Fit

When we have repeated measurements for different values of the predictor variable $X$, it is possible to test whether a linear model fits the data. Suppose that we have data that can be expressed in the form $\{(X_j,Y_{ij}) : i = 1, ..., n_{j}; j = 1, ..., c\}$, where $c > 2$. Assume that the data come from the model

$Y_{ij} = \mu_{j} + \varepsilon_{ij}, \quad i = 1, ..., n_{j}; \; j = 1, ..., c. \qquad (1)$

The null hypothesis under which the linear model holds is $H_{0}: \mu_{j} = \beta_{0} + \beta_{1}X_{j}$ for all $j = 1, ..., c$. Here (1) is the full model and the model specified by $H_{0}$ is the reduced model. We follow the usual procedure for the ANOVA, by computing the sum of squares due to errors for the full and reduced models. Let $\bar{Y}_{j} = \frac{1}{n_{j}} \sum_{i = 1}^{n_{j}} Y_{ij}$ and $\bar{Y} = \frac{1}{n}\sum_{j=1}^{c}n_{j}\bar{Y}_{j} = \frac{1}{n}\sum_{j=1}^{c}\sum_{i=1}^{n_j}Y_{ij}$, where $n = \sum_{j=1}^{c}n_{j}$. Then

$SSTO = \sum_{j=1}^{c}\sum_{i=1}^{n_j}(Y_{ij} - \bar{Y})^2$

$SSPE = SSE_{full} = \sum_{j=1}^{c}\sum_{i=1}^{n_j}(Y_{ij} - \bar{Y}_{j})^2 = \sum_{j=1}^{c}\sum_{i=1}^{n_j}Y_{ij}^2 - \sum_{j=1}^{c}n_{j}\bar{Y}_{j}^2$

$SSE_{red} = SSE = \sum_{j=1}^{c}\sum_{i=1}^{n_j}(Y_{ij} - b_{0} - b_{1}X_{j})^2$

$SSLF = SSE_{red} - SSE_{full}$.

Degrees of freedom

$d.f.(SSPE) = n - c; \quad d.f.(SSLF) = d.f.(SSE_{red}) - d.f.(SSPE) = (n - 2) - (n - c) = c - 2.$

ANOVA Table

Source | d.f. | SS | MS = SS/d.f. | F-statistic
Regression | 1 | SSR | MSR | MSR/MSE
Error | n-2 | SSE = SSLF + SSPE | MSE |
  Lack of fit | c-2 | SSLF | MSLF | MSLF/MSPE
  Pure error | n-c | SSPE | MSPE |
Total | n-1 | SSTO = SSR + SSLF + SSPE | |

Reject $H_{0} : \mu_{j} = \beta_{0} + \beta_{1}X_{j}$ for all $j$, at level $\alpha$, if $F^*_{LF} = \frac{MSLF}{MSPE} > F(1 - \alpha; c - 2, n - c)$.

Example: Growth rate data

In the following example, data are available on the effect of a dietary supplement on the growth rates of rats. Here $X =$ dose of dietary supplement and $Y =$ growth rate. The following table presents the data in a form suitable for the analysis.

$j = 1$: $X_{1} = 10$, $n_{1} = 2$, observations $Y_{11} = 73$, $Y_{21} = 78$, mean $\bar{Y}_{1} = 75.5$
$j = 2$: $X_{2} = 15$, $n_{2} = 2$, observations $Y_{12} = 85$, $Y_{22} = 88$, mean $\bar{Y}_{2} = 86.5$
$j = 3$: $X_{3} = 20$, $n_{3} = 2$, observations $Y_{13} = 90$, $Y_{23} = 91$, mean $\bar{Y}_{3} = 90.5$
$j = 4$: $X_{4} = 25$, $n_{4} = 3$, observations $Y_{14} = 87$, $Y_{24} = 86$, $Y_{34} = 91$, mean $\bar{Y}_{4} = 88$
$j = 5$: $X_{5} = 30$, $n_{5} = 1$, observation $Y_{15} = 75$, mean $\bar{Y}_{5} = 75$
$j = 6$: $X_{6} = 35$, $n_{6} = 2$, observations $Y_{16} = 65$, $Y_{26} = 63$, mean $\bar{Y}_{6} = 64$

So, for these data, $c = 6$ and $n = \sum_{j = 1}^{c}n_{j} = 12$.

$SSTO = \sum_{j}\sum_{i}(Y_{ij} - \bar{Y})^2 = 1096.00$
$SSPE = \sum_{j}\sum_{i}Y_{ij}^2 - \sum_{j}n_{j}\bar{Y}_{j}^2 = 79828 - 79794.5 = 33.50$
$SSE_{red} = \sum_{j}\sum_{i}(Y_{ij} - b_{0} - b_{1}X_{j})^2 = 891.73$ (note: $b_{0} = 92.003$, $b_{1} = -0.498$)
$SSLF = SSE_{red} - SSPE = 891.73 - 33.50 = 858.23$
$d.f.(SSPE) = n - c = 6$
$d.f.(SSLF) = c - 2 = 4$
$MSLF = \frac{SSLF}{c - 2} = 214.5575$
$MSPE = \frac{SSPE}{n - c} = 5.5833$

ANOVA Table:

Source | d.f. | SS | MS | F*
Regression | 1 | 204.27 | 204.27 | MSR/MSE = 2.29
Error | 10 | 891.73 | 89.173 |
  Lack of fit | 4 | 858.23 | 214.56 | MSLF/MSPE = 38.43
  Pure error | 6 | 33.50 | 5.583 |
Total | 11 | 1096.00 | |

$F(0.95;4,6) = 4.534$. Since $F_{LF}^\star = 38.43 > 4.534$, we reject $H_{0} : \mu_{j} = \beta_{0} + \beta_{1}X_{j}$ for all $j$ at the 5% level of significance.
Also, if you test $H_{0}' :$ $\beta_{1} = 0$ against $H_{1}' :$ $\beta_{1} \neq 0$, assuming that the linear model holds, as in the usual ANOVA for linear regression, then the corresponding test statistic is $F^\star = \frac{MSR}{MSE} = 2.29$, which is less than $F(0.95;1,n-2) = F(0.95;1,10) = 4.964$. So, if one assumes that the linear model holds, then the test cannot reject $H_{0}'$ at the 5% level of significance.

Contributors
• Joy Wei
• Debashis Paul
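The whole lack-of-fit computation for the growth-rate data can be reproduced in a few lines of plain Python. This is a sketch with my own variable names; the numbers in the comments are the ones derived above.

```python
# Sketch: lack-of-fit F test for the growth-rate data above.
groups = {10: [73, 78], 15: [85, 88], 20: [90, 91],
          25: [87, 86, 91], 30: [75], 35: [65, 63]}   # X_j -> observations
X = [x for x, ys in groups.items() for _ in ys]
Y = [y for ys in groups.values() for y in ys]
n, c = len(Y), len(groups)                            # 12, 6
xbar, ybar = sum(X) / n, sum(Y) / n
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
      / sum((x - xbar) ** 2 for x in X))              # ~ -0.498
b0 = ybar - b1 * xbar                                 # ~ 92.003
SSE_red = sum((y - b0 - b1 * x) ** 2 for x, y in zip(X, Y))     # ~891.73
SSPE = sum((y - sum(ys) / len(ys)) ** 2
           for ys in groups.values() for y in ys)               # 33.50
SSLF = SSE_red - SSPE                                           # ~858.23
F_LF = (SSLF / (c - 2)) / (SSPE / (n - c))                      # ~38.4
print(round(F_LF, 2))     # compare against F(0.95; 4, 6) = 4.534
```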
textbooks/stats/Computing_and_Modeling/Supplemental_Modules_(Computing_and_Modeling)/Regression_Analysis/Simple_linear_regression/Test_for_Lack_of_Fit.txt
• 1.1: Terminology - Individuals/Population/Variables/Samples. Oddly enough, it is often a lack of clarity about who [or what] you are looking at which makes a lie out of statistics. Here are the terms, then, to keep straight: The units which are the objects of a statistical study are called the individuals in that study, while the collection of all such individuals is called the population of the study. Note that while the term “individuals” sounds like it is talking about people, the individuals in a study could be things, even abstract things like events.
• 1.2: Visual Representation of Data I - Categorical Variables. Suppose we have a population and variable in which we are interested. We get a sample, which could be large or small, and look at the values of our variable for the individuals in that sample. We shall informally refer to this collection of values as a dataset. In this section, we suppose also that the variable we are looking at is categorical. Then we can summarize the dataset by telling which categorical values we saw for the individuals in the sample, and how often we saw those values.
• 1.3: Visual Representation of Data II - Quantitative Variables. Now suppose we have a population and quantitative variable in which we are interested. We get a sample, which could be large or small, and look at the values of our variable for the individuals in that sample. There are two ways we tend to make pictures of datasets like this: stem-and-leaf plots and histograms.
• 1.4: Numerical Descriptions of Data I: Measures of the Center. Oddly enough, there are several measures of central tendency, as ways to define the middle of a dataset are called. There is different work to be done to calculate each of them, and they have different uses, strengths, and weaknesses.
• 1.5: Numerical Descriptions of Data, II: Measures of Spread
• 1.6: Exercises

01: One-Variable Statistics - Basics

One-Variable Statistics: Basics

Terminology: Individuals/Population/Variables/Samples

Oddly enough, it is often a lack of clarity about who [or what] you are looking at which makes a lie out of statistics. Here are the terms, then, to keep straight:

The units which are the objects of a statistical study are called the individuals in that study, while the collection of all such individuals is called the population of the study.

Note that while the term “individuals” sounds like it is talking about people, the individuals in a study could be things, even abstract things like events.

Example 1.1.2. The individuals in a study about a democratic election might be the voters. But if you are going to make an accurate prediction of who will win the election, it is important to be more precise about what exactly is the population of all of those individuals [voters] that you intend to study: is it all eligible voters, all registered voters, the people who actually voted, etc.?

Example 1.1.3. If you want to study if a coin is “fair” or not, you would flip it repeatedly. The individuals would then be flips of that coin, and the population might be something like all the flips ever done in the past and all that will ever be done in the future. These individuals are quite abstract, and in fact it is impossible ever to get your hands on all of them (the ones in the future, for example).

Example 1.1.4. Suppose we’re interested in studying whether doing more homework helps students do better in their studies. So shouldn’t the individuals be the students? Well, which students? How about we look only at college students.
Which college students? OK, how about students at 4-year colleges and universities in the United States, over the last five years – after all, things might be different in other countries and other historical periods. Wait, a particular student might sometimes do a lot of homework and sometimes do very little. And what exactly does “do better in their studies” mean? So maybe we should look at each student in each class they take, then we can look at the homework they did for that class and the success they had in it. Therefore, the individuals in this study would be individual experiences that students in US 4-year colleges and universities had in the last five years, and the population of the study would essentially be the collection of all the names on all class rosters of courses in the last five years at all US 4-year colleges and universities.

When doing an actual scientific study, we are usually not interested so much in the individuals themselves, but rather in their variables:

A variable in a statistical study is the answer to a question the researcher is asking about each individual. There are two types:

• A categorical variable is one whose values have a finite number of possibilities.
• A quantitative variable is one whose values are numbers (so, potentially an infinite number of possibilities).

The variable is something which (as the name says) varies, in the sense that it can have a different value for each individual in the population (although that is not necessary).

Example 1.1.6. In Example 1.1.2, the variable most likely would be who they voted for, a categorical variable with only possible values “Mickey Mouse” or “Daffy Duck” (or whoever the names on the ballot were).

Example 1.1.7. In Example 1.1.3, the variable most likely would be what face of the coin was facing up after the flip, a categorical variable with values “heads” and “tails.”

Example 1.1.8. There are several variables we might use in Example 1.1.4. One might be how many homework problems the student did in that course. Another could be how many hours total the student spent doing homework over that whole semester, for that course. Both of those would be quantitative variables. A categorical variable for the same population would be what letter grade the student got in the course, which has possible values A, A-, B+, …, D-, F.

In many [most?] interesting studies, the population is too large for it to be practical to go observe the values of some interesting variable. Sometimes it is not just impractical, but actually impossible – think of the example we gave of all the flips of the coin, even the ones in the future. So instead, we often work with samples:

A sample is a subset of a population under study.

Often we use the variable \(N\) to indicate the size of a whole population and the variable \(n\) for the size of a sample; as we have said, usually \(n<N\). Later we shall discuss how to pick a good sample, and how much we can learn about a population from looking at the values of a variable of interest only for the individuals in a sample. For the rest of this chapter, however, let’s just consider what to do with these sample values.
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/01%3A__One-Variable_Statistics_-_Basics/1.02%3A_New_Page.txt
Suppose we have a population and variable in which we are interested. We get a sample, which could be large or small, and look at the values of our variable for the individuals in that sample. We shall informally refer to this collection of values as a dataset. In this section, we suppose also that the variable we are looking at is categorical. Then we can summarize the dataset by telling which categorical values we saw for the individuals in the sample, and how often we saw those values. There are two ways we can make pictures of this information: bar charts and pie charts.

Bar Charts I: Frequency Charts

We can take the values which we saw for individuals in the sample along the \(x\)-axis of a graph, and over each such label make a box whose height indicates how many individuals had that value – the frequency of occurrence of that value. [def:frequency] This is called a bar chart. As with all graphs, you should always label all axes. The \(x\)-axis will be labeled with some description of the variable in question, and the \(y\)-axis label will always be “frequency” (or some synonym like “count” or “number of times”).

Example 1.2.1. In Example 1.1.7, suppose we took a sample consisting of the next 10 flips of our coin. Suppose further that 4 of the flips came up heads – write it as “H” – and 6 came up tails, T. Then the corresponding bar chart would have a bar of height 4 over H and a bar of height 6 over T.

Bar Charts II: Relative Frequency Charts

There is a variant of the above kind of bar chart which actually looks nearly the same but changes the labels on the \(y\)-axis. That is, instead of making the height of each bar be how many times each categorical value occurred, we could make it be what fraction of the sample had that categorical value – the relative frequency [def:relfreq]. This fraction is often displayed as a percentage.

Example 1.2.2. The relative frequency version of the above bar chart in Example 1.2.1 would have bars of heights \(0.4\) (40%) over H and \(0.6\) (60%) over T.

Bar Charts III: Cautions

Notice that with bar charts (of either frequency or relative frequency) the variable values along the \(x\)-axis can appear in any order whatsoever. This means that any conclusion you draw from looking at the bar chart must not depend upon that order. For example, it would be foolish to say that the graph in the above Example 1.2.1 “shows an increasing trend,” since it would make just as much sense to put the bars in the other order and then “show a decreasing trend” – both are meaningless.

For relative frequency bar charts, in particular, note that the total of the heights of all the bars must be \(1\) (or \(100\)%). If it is more, something is weird; if it is less, some data has been lost.

Finally, it makes sense for both kinds of bar charts for the \(y\)-axis to run from the logical minimum to maximum. For frequency charts, this means it should go from \(0\) to \(n\) (the sample size). For relative frequency charts, it should go from \(0\) to \(1\) (or \(100\)%). Skimping on how much of this appropriate \(y\)-axis is used is a common trick to lie with statistics.

Example 1.2.3. The coin we looked at in Example 1.2.1 and Example 1.2.2 could well be a fair coin – it didn’t show exactly half heads and half tails, but it was pretty close. Someone who was trying to claim, deceptively, that the coin was not fair might have shown only a portion of the \(y\)-axis. This is actually, in a strictly technical sense, a correct graph. But, looking at it, one might conclude that T seems to occur more than twice as often as H, so the coin is probably not fair ...
until a careful examination of the \(y\)-axis shows that even though the bar for T is more than twice as high as the bar for H, that is an artifact of how much of the \(y\)-axis is being shown.

In summary, bar charts actually don’t have all that much use in sophisticated statistics, but are extremely common in the popular press (and on web sites and so on).

Pie Charts

Another way to make a picture with categorical data is to use the fractions from a relative frequency bar chart, but not for the heights of bars: instead, use them for the sizes of wedges of a pie.

Example 1.2.4. Here’s a pie chart with the relative frequency data from Example 1.2.2.

Pie charts are widely used, but actually they are almost never a good choice. In fact, do an Internet search for the phrase “pie charts are bad” and there will be nearly 3000 hits. Many of the arguments are quite insightful. When you see a pie chart, it is either an attempt (misguided, though) by someone to be folksy and friendly, or it is a sign that the author is quite unsophisticated with data visualization, or, worst of all, it might be a sign that the author is trying to use mathematical methods in a deceptive way.

In addition, all of the cautions we mentioned above for bar charts of categorical data apply, mostly in exactly the same way, to pie charts.
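Both the bar charts and the pie chart above are drawn from the same counts. Here is a tiny sketch, using only Python's standard library, that computes the frequencies and relative frequencies for the ten coin flips of Example 1.2.1 (the list itself is reconstructed from the stated counts of 4 heads and 6 tails).

```python
# Sketch: frequencies and relative frequencies for the 10 coin flips (4 H, 6 T).
from collections import Counter

flips = ["H"] * 4 + ["T"] * 6               # reconstructed sample of Example 1.2.1
freq = Counter(flips)                       # Counter({'T': 6, 'H': 4})
n = len(flips)
rel_freq = {value: count / n for value, count in freq.items()}   # {'H': 0.4, 'T': 0.6}
# the heights of a relative frequency chart (or the pie wedges) must total 1
assert abs(sum(rel_freq.values()) - 1.0) < 1e-9
```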
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/01%3A__One-Variable_Statistics_-_Basics/1.03%3A_New_Page.txt
Now suppose we have a population and quantitative variable in which we are interested. We get a sample, which could be large or small, and look at the values of our variable for the individuals in that sample. There are two ways we tend to make pictures of datasets like this: stem-and-leaf plots and histograms.

Stem-and-leaf Plots

One somewhat old-fashioned way to handle a modest amount of quantitative data produces something between simply a list of all the data values and a graph. It’s not a bad technique to know about in case one has to write down a dataset by hand, but very tedious – and quite unnecessary, if one uses modern electronic tools instead – if the dataset has more than a couple dozen values. The easiest case of this technique is where the data are all whole numbers in the range $0-99$. In that case, one can take off the tens place of each number – call it the stem – and put it on the left side of a vertical bar, and then line up all the ones places – each is a leaf – to the right of that stem. The whole thing is called a stem-and-leaf plot or, sometimes, just a stemplot.

It’s important not to skip any stems which are in the middle of the dataset, even if there are no corresponding leaves. It is also a good idea to allow repeated leaves, if there are repeated numbers in the dataset, so that the length of the row of leaves will give a good representation of how much data is in that general group of data values.

Example 1.3.1. Here is a list of the scores of 30 students on a statistics test:

$\begin{matrix} 86 & 80 & 25 & 77 & 73 & 76 & 88 & 90 & 69 & 93\\ 90 & 83 & 70 & 73 & 73 & 70 & 90 & 83 & 71 & 95\\ 40 & 58 & 68 & 69 & 100 & 78 & 87 & 25 & 92 & 74 \end{matrix}$

As we said, using the tens place (and the hundreds place as well, for the data value $100$) as the stem and the ones place as the leaf, we get the following stemplot [tab:stemplot1]:

Stem | Leaf
10 | 0
9 | 0 0 0 2 3 5
8 | 0 3 3 6 7 8
7 | 0 0 1 3 3 3 4 6 7 8
6 | 8 9 9
5 | 8
4 | 0
3 |
2 | 5 5

One nice feature stem-and-leaf plots have is that they contain all of the data values; they do not lose anything (unlike our next visualization method, for example).

[Frequency] Histograms

The most important visual representation of quantitative data is a histogram. Histograms actually look a lot like a stem-and-leaf plot, except turned on its side and with the row of numbers turned into a vertical bar, like a bar graph. The height of each of these bars would be how many data values lie in that group. Another way of saying that is that we would be making bars whose heights were determined by how many scores were in each group of ten. Note there is still a question of into which bar a value right on the edge would count: e.g., does the data value $50$ count in the bar to the left of that number, or the bar to the right? It doesn’t actually matter which side, but it is important to state which choice is being made.

Example 1.3.2. Continuing with the score data in Example 1.3.1 and putting all data values $x$ satisfying $20\le x<30$ in the first bar, values $x$ satisfying $30\le x<40$ in the second, values $x$ satisfying $40\le x<50$ in the third, etc. – that is, put data values on the edges in the bar to the right – we get the corresponding histogram.

Actually, there is no reason that the bars always have to be ten units wide: it is important that they are all the same size and that it is clear how they handle the edge cases (whether the left or right bar gets a data value on the edge), but they could be any width.
We call the successive ranges of the $x$ coordinates which get put together for each bar the bins or classes, and it is up to the statistician to choose whichever bins – where they start and how wide they are – show the data best. Typically, the smaller the bin size, the more variation (precision) can be seen in the bars ... but sometimes there is so much variation that the result seems to have a lot of random jumps up and down, like static on the radio. On the other hand, using a large bin size makes the picture smoother ... but sometimes, it is so smooth that very little information is left. Some of this is shown in the following Example 1.3.3.

Example 1.3.3. Continuing with the score data in Example 1.3.1 and now using the bins with $x$ satisfying $10\le x<12$, then $12\le x<14$, etc., we get the histogram with bins of width 2. If we use the bins with $x$ satisfying $10\le x<15$, then $15\le x<20$, etc., we get the histogram with bins of width 5. If we use the bins with $x$ satisfying $20\le x<40$, then $40\le x<60$, etc., we get the histogram with bins of width 20. Finally, if we use the bins with $x$ satisfying $0\le x<50$, then $50\le x<100$, and then $100\le x<150$, we get the histogram with bins of width 50.

[Relative Frequency] Histograms

Just as we could have bar charts with absolute (§2.1) or relative (§2.2) frequencies, we can do the same for histograms. Above, in §3.2, we made absolute frequency histograms. If, instead, we divide each of the counts used to determine the heights of the bars by the total sample size, we will get fractions or percents – relative frequencies. We should then change the label on the $y$-axis and the tick-mark numbers on the $y$-axis, but otherwise the graph will look exactly the same (as it did with relative frequency bar charts compared with absolute frequency bar charts).

Example 1.3.4. Let’s make the relative frequency histogram corresponding to the absolute frequency histogram in Example 1.3.2, based on the data from Example 1.3.1 – all we have to do is change the numbers used to make the heights of the bars in the graph by dividing them by the sample size, 30, and then also change the $y$-axis label and tick-mark numbers.

How to Talk About Histograms

Histograms of course tell us what the data values are – the location along the $x$-axis of a bar is the value of the variable – and how many of them have each particular value – the height of the bar tells how many data values are in that bin. This is also given a technical name:

[def:distribution] Given a variable defined on a population, or at least on a sample, the distribution of that variable is a list of all the values the variable actually takes on and how many times it takes on these values.

The reason we like the visual version of a distribution, its histogram, is that our visual intuition can then help us answer general, qualitative questions about what those data must be telling us. The first questions we usually want to answer quickly about the data are:

• What is the shape of the histogram?
• Where is its center?
• How much variability [also called spread] does it show?

When we talk about the general shape of a histogram, we often use the following terms.

[def:symmskew] A histogram is symmetric if the left half is (approximately) the mirror image of the right half. We say a histogram is skewed left if the tail on the left side is longer than on the right.
In other words, left skew is when the left half of the histogram – half in the sense that the total of the bars in this left part is half of the size of the dataset – extends farther to the left than the right does to the right. Conversely, the histogram is skewed right if the right half extends farther to the right than the left does to the left. If the shape of the histogram has one significant peak, then we say it is unimodal, while if it has several such, we say it is multimodal. It is often easy to point to where the center of a distribution looks like it lies, but it is hard to be precise. It is particularly difficult if the histogram is “noisy,” maybe multimodal. Similarly, looking at a histogram, it is often easy to say it is “quite spread out” or “very concentrated in the center,” but it is then hard to go beyond this general sense. Precision in our discussion of the center and spread of a dataset will only be possible in the next section, when we work with numerical measures of these features.
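Binning is the only computation a histogram needs. The following sketch (standard library only, names my own) counts the Example 1.3.1 scores into bins of width 10, putting edge values in the bar to the right as in Example 1.3.2, and also collects the stems and leaves of the stemplot.

```python
# Sketch: bin counts (width 10, edge values go right) and stem/leaf lists
# for the 30 test scores of Example 1.3.1.
scores = [86, 80, 25, 77, 73, 76, 88, 90, 69, 93,
          90, 83, 70, 73, 73, 70, 90, 83, 71, 95,
          40, 58, 68, 69, 100, 78, 87, 25, 92, 74]
width = 10
bins = {lo: sum(lo <= x < lo + width for x in scores)
        for lo in range(20, 110, width)}      # {20: 2, 30: 0, 40: 1, ..., 100: 1}

stems = {}
for x in sorted(scores):
    stems.setdefault(x // 10, []).append(x % 10)   # tens digit -> sorted ones digits
for stem in sorted(stems, reverse=True):
    # (a full stemplot would also show empty stems, such as 3 here)
    print(stem, "|", *stems[stem])
```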
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/01%3A__One-Variable_Statistics_-_Basics/1.04%3A_New_Page.txt
Oddly enough, there are several measures of central tendency, as ways to define the middle of a dataset are called. There is different work to be done to calculate each of them, and they have different uses, strengths, and weaknesses.

For this whole section we will assume we have collected $n$ numerical values, the values of our quantitative variable for the sample we were able to study. When we write formulæ with these values, we can’t give them variable names that look like $a, b, c, \dots$, because we don’t know where to stop (and what would we do if $n$ were more than 26?). Instead, we’ll use the variables $x_1, x_2, \dots, x_n$ to represent the data values.

One more very convenient bit of notation, once we have started writing an unknown number ($n$) of numbers $x_1, x_2, \dots, x_n$, is a way of writing their sum:

[def:summation] If we have $n$ numbers which we write $x_1, \dots, x_n$, then we use the shorthand summation notation $\sum x_i$ to represent the sum $\sum x_i = x_1 + \dots + x_n$.

[eg:subscriptssums] If our dataset were $\{1, 2, 17, -3.1415, 3/4\}$, then $n$ would be 5 and the variables $x_1, \dots, x_5$ would be defined with values $x_1=1$, $x_2=2$, $x_3=17$, $x_4=-3.1415$, and $x_5=3/4$. In addition, we would have $\sum x_i = x_1+x_2+x_3+x_4+x_5=1+2+17-3.1415+ 3/4=17.6085$.

Mode

Let’s first discuss probably the simplest measure of central tendency, and in fact one which was foreshadowed by terms like “unimodal.”

[def:mode] A mode of a dataset $x_1, \dots, x_n$ of $n$ numbers is one of the values $x_i$ which occurs at least as often in the dataset as any other value.

It would be nice to say this in a simpler way, something like “the mode is the value which occurs the most often in the dataset,” but there may not be a single such number.

EXAMPLE 1.4.4. Continuing with the data from Example 1.3.1, it is easy to see, looking at the stem-and-leaf plot, that both 73 and 90 are modes. Note that in some of the histograms we made using these data and different bin widths, the bins containing 73 and 90 were of the same height, while in others they were of different heights. This is an example of how it can be quite hard to see on a histogram where the mode is ... or where the modes are.

Mean

The next measure of central tendency, and certainly the one heard most often in the press, is simply the average. However, in statistics, this is given a different name.

[def:mean] The mean of a dataset $x_1, \dots, x_n$ of $n$ numbers is given by the formula $\left(\sum x_i\right)/n$. If the data come from a sample, we use the notation $\overline{x}$ for the sample mean. If $\{x_1, \dots, x_n\}$ is all of the data from an entire population, we use the notation $\mu_X$ [this is the Greek letter “mu,” pronounced “mew,” to rhyme with “new”] for the population mean.

EXAMPLE 1.4.6. Since we’ve already computed the sum of the data in Example 1.4.2 to be $17.6085$ and there were $5$ values in the dataset, the mean is $\overline{x}=17.6085/5 = 3.5217$.

EXAMPLE 1.4.7. Again using the data from Example 1.3.1, we can calculate the mean $\overline{x}=\left(\sum x_i\right)/n =2246/30=74.8667$.

Notice that the mean in the two examples above was not one of the data values. This is true quite often. What that means is that the phrase “the average whatever,” as in “the average American family has $X$” or “the average student does $Y$,” is not talking about any particular family, and we should not expect any particular family or student to have or do that thing.
Someone with a statistical education should mentally edit every phrase like that they hear to be instead something like “the mean of the variable $X$ on the population of all American families is ...,” or “the mean of the variable $Y$ on the population of all students is ...,” or whatever.

Median

Our third measure of central tendency is not the result of arithmetic, but instead of putting the data values in increasing order.

DEFINITION 1.4.8. Imagine that we have put the values of a dataset $\{x_1, \dots, x_n\}$ of $n$ numbers in increasing (or at least non-decreasing) order, so that $x_1\le x_2\le \dots \le x_n$. Then if $n$ is odd, the median of the dataset is the middle value, $x_{(n+1)/2}$, while if $n$ is even, the median is the mean of the two middle numbers, $\frac{x_{n/2}+x_{(n/2)+1}}{2}$.

EXAMPLE 1.4.9. Working with the data in Example 1.4.2, we must first put them in order, as $\{-3.1415, 3/4, 1, 2, 17\}$, so the median of this dataset is the middle value, $1$.

EXAMPLE 1.4.10. Now let us find the median of the data from Example 1.3.1. Fortunately, in that example, we made a stem-and-leaf plot and even put the leaves in order, so that starting at the bottom and going along the rows of leaves and then up to the next row will give us all the values in order! Since there are 30 values, we count up to the $15^{th}$ and $16^{th}$ values, being 76 and 77, and from this we find that the median of the dataset is $\frac{76+77}{2}=76.5$.

Strengths and Weaknesses of These Measures of Central Tendency

The weakest of the three measures above is the mode. Yes, it is nice to know which value happened most often in a dataset (or which values all happened equally often and more often than all other values). But this often does not necessarily tell us much about the over-all structure of the data.

EXAMPLE 1.4.11. Suppose we had the data

$\begin{matrix} 86 & 80 & 25 & 77 & 73 & 76 & 88 & 90 & 67 & 93\\ 94 & 83 & 72 & 75 & 79 & 70 & 91 & 82 & 71 & 95\\ 40 & 58 & 68 & 69 & 100 & 78 & 87 & 25 & 92 & 74 \end{matrix}$

with corresponding stem-and-leaf plot

Stem | Leaf
10 | 0
9 | 0 1 2 3 4 5
8 | 0 2 3 6 7 8
7 | 0 1 2 3 4 5 6 7 8 9
6 | 7 8 9
5 | 8
4 | 0
3 |
2 | 5 5

This would have a histogram with bins of width 10 that looks exactly like the one in Example 1.3.2 – so the center of the histogram would seem, visually, still to be around the bar over the 80s – but now there is a unique mode of 25.

What this example shows is that a small change in some of the data values, small enough not to change the histogram at all, can change the mode(s) drastically. It also shows that the location of the mode says very little about the data in general or its shape: the mode is based entirely on a possibly accidental coincidence of some values in the dataset, no matter whether those values are in the “center” of the histogram or not.

The mean has a similar problem: a small change in the data, in the sense of adding only one new data value, but one which is very far away from the others, can change the mean quite a bit. Here is an example.

EXAMPLE 1.4.12. Suppose we take the data from Example 1.3.1 but change only one value – such as by changing the 100 to a 1000, perhaps by a simple typo of the data entry. Then if we calculate the mean, we get $\overline{x}=\left(\sum x_i\right)/n =3146/30=104.8667$, which is quite different from the mean of the original dataset.

A data value which seems to be quite different from all (or the great majority of) the rest is called an outlier. What we have just seen is that the mean is very sensitive to outliers.
This is a serious defect, although otherwise the mean is easy to compute, to work with, and to prove theorems about.

Finally, the median is somewhat tedious to compute, because the first step is to put all the data values in order, which can be very time-consuming. But, once that is done, throwing in an outlier tends to move the median only a little bit. Here is an example.

EXAMPLE 1.4.13. If we do as in Example 1.4.12 and change the data value of 100 in the dataset of Example 1.3.1 to 1000, but leave all of the other data values unchanged, it does not change the median at all, since the 1000 is the new largest value, and that does not change the two middle values at all. If instead we take the data of Example 1.3.1 and simply add another value, 1000, without taking away the 100, that does change the median: there are now an odd number of data values, so the median is the middle one after they are put in order, which is 77. So the median has changed by only half a point, from 76.5 to 77. And this would even be true if the value we were adding to the dataset were 1000000 and not just 1000!

In other words, the median is very insensitive to outliers. Since, in practice, it is very easy for datasets to have a few random, bad values (typos, mechanical errors, etc.), which are often outliers, it is usually smarter to use the median than the mean.

As one final point, note that, as we mentioned in §4.2, the word “average,” the unsophisticated version of “mean,” is often incorrectly used as a modifier of the individuals in some population being studied (as in “the average American ...”), rather than as a modifier of the variable in the study (“the average income ...”), indicating a fundamental misunderstanding of what the mean means. If you look a little harder at this misunderstanding, though, perhaps it is based on the idea that we are looking for the center, the “typical” value of the variable.

The mode might seem like a good way – it’s the most frequently occurring value. But we have seen how that is somewhat flawed. The mean might also seem like a good way – it’s the “average,” literally. But we’ve also seen problems with the mean. In fact, the median is probably closest to the intuitive idea of “the center of the data.” It is, after all, a value with the property that both above and below that value lie half of the data values.

One last example to underline this idea:

EXAMPLE 1.4.14. The period of economic difficulty for world markets in the late 2000s and early 2010s is sometimes called the Great Recession. Suppose a politician says that we have come out of that time of troubles, and gives as proof the fact that the average family income has increased from the low value it had during the Great Recession back to the values it had before then, and perhaps is even higher than it was in 2005. It is possible that in fact people are better off, as the increase in this average – mean – seems to imply. But it is also possible that while the mean income has gone up, the median income is still low. This would happen if the histogram of incomes recently still has most of the tall bars down where the variable (family income) is low, but has a few, very high outliers. In short, if the super-rich have gotten even super-richer, that will make the mean (average) go up, even if most of the population has experienced stagnant or decreasing wages – but the median will tell what is happening to most of the population.
So when a politician uses the evidence of the average (mean) as suggested here, it is possible they are trying to hide from the public the reality of what is happening to the rich and the not-so-rich. It is also possible that this politician is simply poorly educated in statistics and doesn’t realize what is going on. You be the judge ... but pay attention so you know what to ask about.

The last thing we need to say about the strengths and weaknesses of our different measures of central tendency is a way to use the weaknesses of the mean and median to our advantage. That is, since the mean is sensitive to outliers, and pulled in the direction of those outliers, while the median is not, we can use the difference between the two to tell us which way a histogram is skewed.

FACT 1.4.15. If the mean of a dataset is larger than the median, then histograms of that dataset will be right-skewed. Similarly, if the mean is less than the median, histograms will be left-skewed.
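All three measures are available in Python's standard-library statistics module (multimode needs Python 3.8 or later). This short sketch re-checks the numbers quoted above and the outlier behaviour of Examples 1.4.12 and 1.4.13.

```python
# Sketch (standard library): the three measures of center for the test scores,
# and how an outlier moves the mean but not the median.
import statistics

scores = [86, 80, 25, 77, 73, 76, 88, 90, 69, 93,
          90, 83, 70, 73, 73, 70, 90, 83, 71, 95,
          40, 58, 68, 69, 100, 78, 87, 25, 92, 74]
print(statistics.multimode(scores))      # [73, 90], as in Example 1.4.4
print(statistics.mean(scores))           # ~74.87
print(statistics.median(scores))         # 76.5

with_typo = [1000 if x == 100 else x for x in scores]   # the typo of Example 1.4.12
print(statistics.mean(with_typo))        # ~104.87: the mean jumps
print(statistics.median(with_typo))      # still 76.5: the median does not move
```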
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/01%3A__One-Variable_Statistics_-_Basics/1.05%3A_New_Page.txt
Range

The simplest – and least useful – measure of the spread of some data is literally how much space on the $x$-axis the histogram takes up. To define this, first a bit of convenient notation:

DEFINITION 1.5.1. Suppose $x_1, \dots, x_n$ is some quantitative dataset. We shall write $x_{min}$ for the smallest and $x_{max}$ for the largest values in the dataset.

With this, we can define our first measure of spread.

DEFINITION 1.5.2. Suppose $x_1, \dots, x_n$ is some quantitative dataset. The range of this data is the number $x_{max}-x_{min}$.

EXAMPLE 1.5.3. Using again the statistics test scores data from Example 1.3.1, we can read off from the stem-and-leaf plot that $x_{min}=25$ and $x_{max}=100$, so the range is $75(=100-25)$.

EXAMPLE 1.5.4. Working now with the made-up data in Example 1.4.2, which was put into increasing order in Example 1.4.9, we can see that $x_{min}=-3.1415$ and $x_{max}=17$, so the range is $20.1415(=17-(-3.1415))$.

The thing to notice here is that since the idea of outliers is that they are outside of the normal behavior of the dataset, if there are any outliers they will definitely be what value gets called $x_{min}$ or $x_{max}$ (or both). So the range is supremely sensitive to outliers: if there are any outliers, the range will be determined exactly by them, and not by what the typical data is doing.

Quartiles and the $IQR$

Let’s try to find a substitute for the range which is not so sensitive to outliers. We want to see how far apart not the maximum and minimum of the whole dataset are, but instead how far apart the typical larger values in the dataset and the typical smaller values are. How can we measure these typical larger and smaller values? One way is to define them in terms of the typical – central – value of the upper half of the data and the typical value of the lower half of the data. Here is the definition we shall use for that concept:

DEFINITION 1.5.5. Imagine that we have put the values of a dataset $\{x_1, \dots, x_n\}$ of $n$ numbers in increasing (or at least non-decreasing) order, so that $x_1\le x_2\le \dots \le x_n$. If $n$ is odd, call the lower half data all the values $\{x_1, \dots, x_{(n-1)/2}\}$ and the upper half data all the values $\{x_{(n+3)/2}, \dots, x_n\}$; if $n$ is even, the lower half data will be the values $\{x_1, \dots, x_{n/2}\}$ and the upper half data all the values $\{x_{(n/2)+1}, \dots, x_n\}$. Then the first quartile, written $Q_1$, is the median of the lower half data, and the third quartile, written $Q_3$, is the median of the upper half data.

Note that the first quartile is halfway through the lower half of the data. In other words, it is a value such that one quarter of the data is smaller. Similarly, the third quartile is halfway through the upper half of the data, so it is a value such that three quarters of the data is smaller. Hence the names “first” and “third quartiles.”

We can build an outlier-insensitive measure of spread out of the quartiles.

DEFINITION 1.5.6. Given a quantitative dataset, its inter-quartile range or $IQR$ is defined by $IQR=Q_3-Q_1$.

EXAMPLE 1.5.7. Yet again working with the statistics test scores data from Example 1.3.1, we can count off the lower and upper half datasets from the stem-and-leaf plot, being respectively

$\rm{Lower}=\{25, 25, 40, 58, 68, 69, 69, 70, 70, 71, 73, 73, 73, 74, 76\}$

and

$\rm{Upper} = \{77, 78, 80, 83, 83, 86, 87, 88, 90, 90, 90, 92, 93, 95, 100\}.$

It follows that, for these data, $Q_1=70$ and $Q_3=88$, so $IQR=18(=88-70)$.

EXAMPLE 1.5.8.
Working again with the made-up data in Example 1.4.2, which was put into increasing order in Example 1.4.9, we can see that the lower half data is $\{-3.1415, .75\}$, the upper half is $\{2, 17\}$, $Q_1=-1.19575(=\frac{-3.1415+.75}{2})$, $Q_3=9.5(=\frac{2+17}{2})$, and $IQR=10.69575(=9.5-(-1.19575))$.

Variance and Standard Deviation

We’ve seen a crude measure of spread, like the crude measure “mode” of central tendency. We’ve also seen a better measure of spread, the $IQR$, which is insensitive to outliers like the median (and built out of medians). It seems that, to fill out the parallel triple of measures, there should be a measure of spread which is similar to the mean. Let’s try to build one.

Suppose the data is sample data. Then how far a particular data value $x_i$ is from the sample mean $\overline{x}$ is just $x_i-\overline{x}$. So the mean displacement from the mean, the mean of $x_i-\overline{x}$, should be a good measure of variability, shouldn’t it? Unfortunately, it turns out that the mean of $x_i-\overline{x}$ is always 0. This is because when $x_i>\overline{x}$, $x_i-\overline{x}$ is positive, while when $x_i<\overline{x}$, $x_i-\overline{x}$ is negative, and it turns out that the positives always exactly cancel the negatives (see if you can prove this algebraically, it’s not hard).

We therefore need to make the numbers $x_i-\overline{x}$ positive before taking their mean. One way to do this is to square them all. Then we take something which is almost the mean of these squared numbers to get another measure of spread or variability:

DEFINITION 1.5.9. Given sample data $x_1, \dots, x_n$ from a sample of size $n$, the sample variance is defined as

$S_x^2 = \frac{\sum \left(x_i-\overline{x}\right)^2}{n-1} .$

Out of this, we then define the sample standard deviation

$S_x = \sqrt{S_x^2} = \sqrt{\frac{\sum \left(x_i-\overline{x}\right)^2}{n-1}} .$

Why do we take the square root in that sample standard deviation? The answer is that the measure we build should have the property that if all the numbers are made twice as big, then the measure of spread should also be twice as big. Or, for example, if we first started working with data measured in feet and then at some point decided to work in inches, the numbers would all be 12 times as big, and it would make sense if the measure of spread were also 12 times as big. The variance does not have this property: if the data are all doubled, the variance increases by a factor of 4. Or if the data are all multiplied by 12, the variance is multiplied by a factor of 144. If we take the square root of the variance, though, we get back to the nice property that doubling the data doubles the measure of spread, etc. For this reason, while we have defined the variance on its own, and some calculators, computers, and on-line tools will tell you the variance whenever you ask them to compute 1-variable statistics, we will in this class only consider the variance a stepping stone on the way to the real measure of spread of data, the standard deviation.

One last thing we should define in this section. For technical reasons that we shall not go into now, the definition of standard deviation is slightly different if we are working with population data and not sample data:

DEFINITION 1.5.10.
Given data $x_1, \dots, x_n$ from an entire population of size $n$, the population variance is defined as $\sigma_X^2 = \frac{\sum \left(x_i-\mu_X\right)^2}{n} .$ Out of this, we then define the population standard deviation $\sigma_X = \sqrt{\sigma_X^2} = \sqrt{\frac{\sum \left(x_i-\mu_X\right)^2}{n}} .$ [This letter $\sigma$ is the lower-case Greek letter sigma, whose upper case $\Sigma$ you've seen elsewhere.]

Now for some examples. Notice that to calculate these values, we shall always use an electronic tool like a calculator or a spreadsheet that has a built-in variance and standard deviation program – experience shows that it is nearly impossible to get all the calculations entered correctly into a non-statistical calculator, so we shall not even try.

EXAMPLE 1.5.11. For the statistics test scores data from Example 1.3.1, entering them into a spreadsheet and using VAR.S and STDEV.S for the sample variance and standard deviation and VAR.P and STDEV.P for population variance and population standard deviation, we get \begin{aligned} S_x^2 &= 331.98\\ S_x &= 18.22\\ \sigma_X^2 &= 320.92\\ \sigma_X &= 17.91\end{aligned}

EXAMPLE 1.5.12. Similarly, for the data in Example 1.4.2, we find in the same way that \begin{aligned} S_x^2 &= 60.60\\ S_x &= 7.78\\ \sigma_X^2 &= 48.48\\ \sigma_X &= 6.96\end{aligned}

Strengths and Weaknesses of These Measures of Spread

We have already said that the range is extremely sensitive to outliers. The $IQR$, however, is built up out of medians, used in different ways, so the $IQR$ is insensitive to outliers. The variance, both sample and population, is built using a process quite like a mean, and in fact also has the mean itself in the defining formula. Since the standard deviation in both cases is simply the square root of the variance, it follows that the sample and population variances and standard deviations are all sensitive to outliers.

This differing sensitivity and insensitivity to outliers is the main difference between the different measures of spread that we have discussed in this section.

One other weakness, in a certain sense, of the $IQR$ is that there are several different definitions in use of the quartiles, based upon whether the median value is included or not when dividing up the data. These are called, for example, QUARTILE.INC and QUARTILE.EXC on some spreadsheets. It can then be confusing which one to use.

A Formal Definition of Outliers – the $1.5\,IQR$ Rule

So far, we have said that outliers are simply data that are atypical. We need a precise definition that can be carefully checked. What we will use is a formula (well, actually two formulæ) that describe that idea of an outlier being far away from the rest of data.

Actually, since outliers should be far away either in being significantly bigger than the rest of the data or in being significantly smaller, we should take a value on the upper side of the rest of the data, and another on the lower side, as the starting points for this far away. We can't pick the $x_{max}$ and $x_{min}$ as those starting points, since they will be the outliers themselves, as we have noticed. So we will use our earlier idea of a value which is typical for the larger part of the data, the quartile $Q_3$, and $Q_1$ for the corresponding lower part of the data.

Now we need to decide how far is far enough away from those quartiles to count as an outlier.
If the data already has a lot of variation, then a new data value would have to be quite far in order for us to be sure that it is not out there just because of the variation already in the data. So our measure of far enough should be in terms of a measure of spread of the data.

Looking at the last section, we see that only the $IQR$ is a measure of spread which is insensitive to outliers – and we definitely don't want to use a measure which is sensitive to the outliers, one which would have been affected by the very outliers we are trying to define.

All this goes together in the following

DEFINITION 1.5.13. [The $1.5\,IQR$ Rule for Outliers] Starting with a quantitative dataset whose first and third quartiles are $Q_1$ and $Q_3$ and whose inter-quartile range is $IQR$, a data value $x$ is [officially, from now on] called an outlier if $x<Q_1-1.5\,IQR$ or $x>Q_3+1.5\,IQR$. Notice this means that $x$ is not an outlier if it satisfies $Q_1-1.5\,IQR\le x\le Q_3+1.5\,IQR$.

EXAMPLE 1.5.14. Let's see if there were any outliers in the test score dataset from Example 1.3.1. We found the quartiles and IQR in Example 1.5.7, so from the $1.5\,IQR$ Rule, a data value $x$ will be an outlier if $x<Q_1-1.5\,IQR=70-1.5\cdot18=43$ or if $x>Q_3+1.5\,IQR=88+1.5\cdot18=115\ .$ Looking at the stemplot in Table 1.3.1, we conclude that the data values $25$, $25$, and $40$ are the outliers in this dataset.

EXAMPLE 1.5.15. Applying the same method to the data in Example 1.4.2, using the quartiles and IQR from Example 1.5.8, the condition for an outlier $x$ is $x<Q_1-1.5\,IQR=-1.19575-1.5\cdot10.69575=-17.239375$ or $x>Q_3+1.5\,IQR=9.5+1.5\cdot10.69575=25.543625\ .$ Since none of the data values satisfy either of these conditions, there are no outliers in this dataset.

The Five-Number Summary and Boxplots

We have seen that numerical summaries of quantitative data can be very useful for quickly understanding (some things about) the data. It is therefore convenient to have a nice package of several of these:

DEFINITION 1.5.16. Given a quantitative dataset $\{x_1, \dots, x_n\}$, the five-number summary of this data is the set of values $\left\{x_{min},\ \ Q_1,\ \ \mathrm{median},\ \ Q_3,\ \ x_{max}\right\}$

EXAMPLE 1.5.17. Why not write down the five-number summary for the same test score data we saw in Example 1.3.1? We've already done most of the work, such as calculating the min and max in Example 1.5.3, the quartiles in Example 1.5.7, and the median in Example 1.4.10, so the five-number summary is \begin{aligned} x_{min}&=25\\ Q_1&=70\\ \mathrm{median}&=76.5\\ Q_3&=88\\ x_{max}&=100\end{aligned}

EXAMPLE 1.5.18. And, for completeness, the five-number summary for the made-up data in Example 1.4.2 is \begin{aligned} x_{min}&=-3.1415\\ Q_1&=-1.19575\\ \mathrm{median}&=1\\ Q_3&=9.5\\ x_{max}&=17\end{aligned} where we got the min and max from Example 1.5.4, the median from Example 1.4.9, and the quartiles from Example 1.5.8.

As we have seen already several times, it is nice to have both a numeric and a graphical/visual version of everything. The graphical equivalent of the five-number summary is

DEFINITION 1.5.19.
Given some quantitative data, a boxplot [sometimes box-and-whisker plot] is a graphical depiction of the five-number summary, as follows:

• an axis is drawn, labelled with the variable of the study
• tick marks and numbers are put on the axis, enough to allow the following visual features to be located numerically
• a rectangle (the box) is drawn parallel to the axis, stretching from values $Q_1$ to $Q_3$ on the axis
• an additional line is drawn, parallel to the sides of the box (which sit at the locations $Q_1$ and $Q_3$), at the axis coordinate of the median of the data
• lines are drawn parallel to the axis from the middle of the sides of the box out to the axis coordinates $x_{min}$ and $x_{max}$, where these whiskers terminate in "T"s.

EXAMPLE 1.5.20. A boxplot for the test score data we started using in Example 1.3.1 is easy to make after we found the corresponding five-number summary in Example 1.5.17:

Sometimes it is nice to make a version of the boxplot which is less sensitive to outliers. Since the endpoints of the whiskers are the only parts of the boxplot which are sensitive in this way, they are all we have to change:

DEFINITION 1.5.21. Given some quantitative data, a boxplot showing outliers [sometimes box-and-whisker plot showing outliers] is a minor modification of the regular boxplot, as follows

• the whiskers only extend as far as the largest and smallest non-outlier data values
• dots are put along the lines of the whiskers at the axis coordinates of any outliers in the dataset

EXAMPLE 1.5.22. A boxplot showing outliers for the test score data we started using in Example 1.3.1 is only a small modification of the one we just made in Example 1.5.20
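For readers who would like to see the arithmetic of this section spelled out rather than hidden inside spreadsheet functions, here is a small Python sketch (not part of the original text) which follows Definition 1.5.5 for the quartiles, Definition 1.5.13 for the $1.5\,IQR$ Rule, and Definitions 1.5.9 and 1.5.10 for the standard deviations, applied to the test scores of Example 1.3.1. Keep in mind, as noted above, that spreadsheet functions such as QUARTILE.INC and QUARTILE.EXC may use slightly different quartile conventions.

```python
# A minimal sketch following Definitions 1.5.5, 1.5.9, 1.5.10, and 1.5.13,
# using the thirty test scores from Example 1.3.1.
from math import sqrt

scores = [25, 25, 40, 58, 68, 69, 69, 70, 70, 71, 73, 73, 73, 74, 76,
          77, 78, 80, 83, 83, 86, 87, 88, 90, 90, 90, 92, 93, 95, 100]

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 == 1 else (xs[mid - 1] + xs[mid]) / 2

def quartiles(xs):
    xs = sorted(xs)
    n = len(xs)
    half = n // 2                      # lower/upper halves as in Definition 1.5.5
    lower, upper = xs[:half], xs[n - half:]
    return median(lower), median(upper)

q1, q3 = quartiles(scores)
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in scores if x < low_fence or x > high_fence]

xbar = sum(scores) / len(scores)
ss = sum((x - xbar) ** 2 for x in scores)
s_x = sqrt(ss / (len(scores) - 1))     # sample standard deviation S_x
sigma_x = sqrt(ss / len(scores))       # population standard deviation sigma_X

print("five-number summary:", min(scores), q1, median(scores), q3, max(scores))
print("IQR =", iqr, " fences:", low_fence, high_fence, " outliers:", outliers)
print("S_x =", round(s_x, 2), " sigma_X =", round(sigma_x, 2))
```

Running this reproduces the values found above: the five-number summary 25, 70, 76.5, 88, 100, the fences 43 and 115 with outliers 25, 25, and 40, and $S_x\approx 18.22$, $\sigma_X\approx 17.91$.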
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/01%3A__One-Variable_Statistics_-_Basics/1.5%3A_Numerical_Descriptions_of_Data_II%3A_Measures_of_Spread.txt
A product development manager at the campus bookstore wants to make sure that the backpacks being sold there are strong enough to carry the heavy books students carry around campus. The manager decides she will collect some data on how heavy are the bags/packs/suitcases students are carrying around at the moment, by stopping the next 100 people she meets at the center of campus and measuring. What are the individuals in this study? What is the population? Is there a sample – what is it? What is the variable? What kind of variable is this?

During a blood drive on campus, 300 people donated blood. Of these, 136 had blood of type $O$, 120 had blood of type $A$, 32 of type $B$, and the rest of type $AB$. Answer the same questions as in the previous exercise for this new situation. Now make at least two visual representations of these data.

Go to the Wikipedia page for "Heights of Presidents and Presidential Candidates of the United States" and look only at the heights of the presidents themselves, in centimeters (cm). Make a histogram with these data using bins of width 5. Explain how you are handling the edge cases in your histogram.

Suppose you go to the supermarket every week for a year and buy a bag of flour, packaged by a major national flour brand, which is labelled as weighing $1kg$. You take the bag home and weigh it on an extremely accurate scale that measures to the nearest ${1/100}^{th}$ of a gram. After the 52 weeks of the year of flour buying, you make a histogram of the accurate weights of the bags. What do you think that histogram will look like? Will it be symmetric or skewed left or right (which one?), where will its center be, will it show a lot of variation/spread or only a little? Explain why you think each of the things you say.

What about if you buy a $1kg$ loaf of bread from the local artisanal bakery – what would the histogram of the accurate weights of those loaves look like (same questions as for the histogram of weights of the bags of flour)?

If you said that those histograms were symmetric, can you think of a measurement you would make in a grocery store or bakery which would be skewed; and if you said the histograms for flour and loaf weights were skewed, can you think of one which would be symmetric? (Explain why, always, of course.) [If you think one of the two above histograms was skewed and one was symmetric (with explanation), you don't need to come up with another one here.]

Twenty sacks of grain weigh a total of $1003kg$. What is the mean weight per sack? Can you determine the median weight per sack from the given information? If so, explain how. If not, give two examples of datasets with the same total weight but different medians.

For the dataset $\{6, -2, 6, 14, -3, 0, 1, 4, 3, 2, 5\}$, which we will call $DS_1$, find the mode(s), mean, and median. Define $DS_2$ by adding $3$ to each number in $DS_1$. What are the mode(s), mean, and median of $DS_2$? Now define $DS_3$ by subtracting $6$ from each number in $DS_1$. What are the mode(s), mean, and median of $DS_3$? Next, define $DS_4$ by multiplying every number in $DS_1$ by 2. What are the mode(s), mean, and median of $DS_4$? Looking at your answers to the above calculations, how do you think the mode(s), mean, and median of datasets must change when you add, subtract, multiply or divide all the numbers by the same constant? Make a specific conjecture!
There is a very hard mathematics competition in which college students in the US and Canada can participate called the William Lowell Putnam Mathematical Competition. It consists of a six-hour long test with twelve problems, graded 0 to 10 on each problem, so the total score could be anything from 0 to 120. The median score last year on the Putnam exam was 0 (as it often is, actually). What does this tell you about the scores of the students who took it? Be as precise as you can. Can you tell what fraction (percentage) of students had a certain score or scores? Can you figure out what the quartiles must be?

Find the range, $IQR$, and standard deviation of the following sample dataset: $DS_1 = \{0, 0, 0, 0, 0, .5, 1, 1, 1, 1, 1\}\quad .$ Now find the range, $IQR$, and standard deviation of the following sample data: $DS_2 = \{0, .5, 1, 1, 1, 1, 1, 1, 1, 1, 1\}\quad .$ Next find the range, $IQR$, and standard deviation of the following sample data: $DS_3 = \{0, 0, 0, 0, 0, 0, 0, 0, 0, .5, 1\}\quad .$ Finally, find the range, $IQR$, and standard deviation of sample data $DS_4$, consisting of 98 0s, one .5, and one 1 (so like $DS_3$ except with 0 occurring 98 times instead of 9 times).

What must be true about a dataset if its range is 0? Give the most interesting example of a dataset with range of 0 and the property you just described that you can think of. What must be true about a dataset if its $IQR$ is 0? Give the most interesting example of a dataset with $IQR$ of 0 and the property you just described that you can think of. What must be true about a dataset if its standard deviation is 0? Give the most interesting example of a dataset with standard deviation of 0 and the property you just described that you can think of.

Here are some boxplots of test scores, out of 100, on a standardized test given in five different classes – the same test, different classes. For each of these plots, $A - E$, describe qualitatively (in the sense of §3.4) but in as much detail as you can, what must have been the histogram for the data behind this boxplot. Also sketch a possible such histogram, for each case.
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/01%3A__One-Variable_Statistics_-_Basics/1.6%3A_Exercises.txt
02: Bi-variate Statistics - Basics

All of the discussion so far has been for studies which have a single variable. We may collect the values of this variable for a large population, or at least the largest sample we can afford to examine, and we may display the resulting data in a variety of graphical ways, and summarize it in a variety of numerical ways. But in the end all this work can only show a single characteristic of the individuals. If, instead, we want to study a relationship, we need to collect two (at least) variables and develop methods of descriptive statistics which show the relationships between the values of these variables.

Relationships in data require at least two variables. While more complex relationships can involve more, in this chapter we will start the project of understanding bivariate data, data where we make two observations for each individual, where we have exactly two variables.

If there is a relationship between the two variables we are studying, the most that we could hope for would be that that relationship is due to the fact that one of the variables causes the other. In this situation, we have special names for these variables.

DEFINITION. In a situation with bivariate data, if one variable can take on any value without (significant) constraint it is called the independent variable, while the second variable, whose value is (at least partially) controlled by the first, is called the dependent variable.

Since the value of the dependent variable depends upon the value of the independent variable, we could also say that it is explained by the independent variable. Therefore the independent variable is also called the explanatory variable and the dependent variable is then called the response variable.

Whenever we have bivariate data and we have made a choice of which variable will be the independent and which the dependent, we write $x$ for the independent and $y$ for the dependent variable.

EXAMPLE. Suppose we have a large warehouse of many different boxes of products ready to ship to clients. Perhaps we have packed all the products in boxes which are perfect cubes, because they are stronger and it is easier to stack them efficiently. We could do a study where

• the individuals would be the boxes of product;
• the population would be all the boxes in our warehouse;
• the independent variable would be, for a particular box, the length of its side in cm;
• the dependent variable would be, for a particular box, the cost to the customer of buying that item, in US dollars.

We might think that the size determines the cost, at least approximately, because the larger boxes contain larger products into which went more raw materials and more labor, so the items would be more expensive.
So, at least roughly, the size may be anything – it is a free, or independent, choice – while the cost is (approximately) determined by the size, so the cost is dependent. Otherwise said, the size explains and the cost is the response. Hence the choice of those variables.

EXAMPLE. Suppose we have exactly the same scenario as above, but now we want to make the different choice where

• the dependent variable would be, for a particular box, the volume of that box.

There is one quite important difference between the two examples above: in one case (the cost), knowing the length of the side of a box gives us a hint about how much it costs (bigger boxes cost more, smaller boxes cost less) but this knowledge is imperfect (sometimes a big box is cheap, sometimes a small box is expensive); while in the other case (the volume), knowing the length of the side of the box perfectly tells us the volume. In fact, there is a simple geometric formula that the volume $V$ of a cube of side length $s$ is given by $V=s^3$. This motivates a last preliminary definition.

DEFINITION. We say that the relationship between two variables is deterministic if knowing the value of one variable completely determines the value of the other. If, instead, knowing one value does not completely determine the other, we say the variables have a non-deterministic relationship.
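To make the contrast concrete, here is a tiny Python sketch (with entirely made-up cost numbers, not data from the text): the volume of each hypothetical box is computed exactly from its side length by the formula $V=s^3$, a deterministic relationship, while the cost is generated with some random noise, so knowing the side length only partially determines it.

```python
# An illustrative sketch, not real warehouse data: volume is deterministic in the
# side length, while the (made-up) cost relationship is non-deterministic.
import random

random.seed(0)
sides_cm = [10, 20, 30, 40, 50]

for s in sides_cm:
    volume = s ** 3                                     # deterministic: V = s^3
    cost = 2 + 0.01 * s ** 2 + random.uniform(-3, 3)    # hypothetical, noisy relationship
    print(f"side {s:2d} cm -> volume {volume:6d} cm^3, cost ~ ${cost:6.2f}")
```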
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/02%3A_Bi-variate_Statistics_-_Basics/2.01%3A_New_Page.txt
When we have bivariate data, the first thing we should always do is draw a graph of this data, to get some feeling about what the data is showing us and what statistical methods it makes sense to try to use. The way to do this is as follows.

DEFINITION. Given bivariate quantitative data, we make the scatterplot of this data as follows: Draw an $x$- and a $y$-axis, and label them with descriptions of the independent and dependent variables, respectively. Then, for each individual in the dataset, put a dot on the graph at location $(x,y)$, if $x$ is the value of that individual's independent variable and $y$ the value of its dependent variable.

After making a scatterplot, we usually describe it qualitatively in three respects:

DEFINITION. If the cloud of data points in a scatterplot generally lies near some curve, we say that the scatterplot has [approximately] that shape. A common shape we tend to find in scatterplots is that it is linear. If there is no visible shape, we say the scatterplot is amorphous, or has no clear shape.

DEFINITION. When a scatterplot has some visible shape – so that we do not describe it as amorphous – how close the cloud of data points is to that curve is called the strength of that association. In this context, a strong [linear, e.g.,] association means that the dots are close to the named curve [line, e.g.,], while a weak association means that the points do not lie particularly close to any of the named curves [line, e.g.,].

DEFINITION. In case a scatterplot has a fairly strong linear association, the direction of the association describes whether the line is increasing or decreasing. We say the association is positive if the line is increasing and negative if it is decreasing. [Note that the words positive and negative here can be thought of as describing the slope of the line which we are saying is the underlying relationship in the scatterplot.]

2.03: New Page

As before (in §§4 and 5), where we moved from describing histograms with words (like symmetric) to describing them with numbers (like the mean), we will now build a numeric measure of the strength and direction of a linear association in a scatterplot.

DEFINITION. Given bivariate quantitative data $\{(x_1,y_1), \dots , (x_n,y_n)\}$ the [Pearson] correlation coefficient of this dataset is $r=\frac{1}{n-1}\sum \frac{(x_i-\overline{x})}{s_x}\frac{(y_i-\overline{y})}{s_y}$ where $s_x$ and $s_y$ are the standard deviations of the $x$ and $y$ datasets by themselves, respectively.

We collect some basic information about the correlation coefficient in the following

FACT. For any bivariate quantitative dataset $\{(x_1,y_1), \dots ,(x_n,y_n)\}$ with correlation coefficient $r$, we have

1. $-1\le r\le 1$ is always true;
2. if $|r|$ is near $1$ – meaning that $r$ is near $\pm 1$ – then the linear association between $x$ and $y$ is strong;
3. if $r$ is near $0$ – meaning that $r$ is positive or negative, but near $0$ – then the linear association between $x$ and $y$ is weak;
4. if $r>0$ then the linear association between $x$ and $y$ is positive, while if $r<0$ then the linear association between $x$ and $y$ is negative;
5. $r$ is the same no matter what units are used for the variables $x$ and $y$ – meaning that if we change the units in either variable, $r$ will not change;
6. $r$ is the same no matter which variable is being used as the explanatory and which as the response variable – meaning that if we switch the roles of the $x$ and the $y$ in our dataset, $r$ will not change.

It is also nice to have some examples of correlation coefficients, such as

Many electronic tools which compute the correlation coefficient $r$ of a dataset also report its square, $r^2$. The reason is explained in the following

FACT. If $r$ is the correlation coefficient between two variables $x$ and $y$ in some quantitative dataset, then its square $r^2$ is the fraction (often described as a percentage) of the variation of $y$ which is associated with variation in $x$.

EXAMPLE. If the square of the correlation coefficient between the independent variable how many hours a week a student studies statistics and the dependent variable how many points the student gets on the statistics final exam is $.64$, then 64% of the variation in scores for that class is associated with variation in how much the students study. The remaining 36% of the variation in scores is due to other random factors like whether a student was coming down with a cold on the day of the final, or happened to sleep poorly the night before the final because of neighbors having a party, or some other issues different from just studying time.

2.04: New Page

Suppose you pick 50 random adults across the United States in January 2017 and measure how tall they are. For each of them, you also get accurate information about how tall their (biological) parents are. Now, using as your individuals these 50 adults and as the two variables their heights and the average of their parents' heights, make a sketch of what you think the resulting scatterplot would look like. Explain why you made the choice you did of one variable to be the explanatory and the other the response variable. Tell what are the shape, strength, and direction you see in this scatterplot, if it shows a deterministic or non-deterministic association, and why you think those conclusions would be true if you were to do this exercise with real data. Is there any time or place other than right now in the United States where you think the data you would collect as above would result in a scatterplot that would look fairly different in some significant way? Explain!

It actually turns out that it is not true that the more a person works, the more they produce ... at least not always. Data on workers in a wide variety of industries show that working more hours produces more of that business's product for a while, but then after too many hours of work, keeping on working makes for almost no additional production. Describe how you might collect data to investigate this relationship, by telling what individuals, population, sample, and variables you would use. Then, assuming the truth of the above statement about what other research in this area has found, make an example of a scatterplot that you think might result from your suggested data collection.

Make a scatterplot of the dataset consisting of the following pairs of measurements: $\left\{(8,16), (9,9), (10,4), (11,1), (12,0), (13,1), (14,4), (15,9), (16,16)\right\} .$ You can do this quite easily by hand (there are only nine points!). Feel free to use an electronic device to make the plot for you, if you have one you know how to use, but copy the resulting picture into the homework you hand in, either by hand or cut-and-paste into an electronic version.
Describe the scatterplot, telling what are the shape, strength, and direction. What do you think would be the correlation coefficient of this dataset? As always, explain all of your reasoning!
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/02%3A_Bi-variate_Statistics_-_Basics/2.02%3A_New_Page.txt
Quick review of equations for lines: Recall the equation of a line is usually in the form $y = mx + b$, where $x$ and $y$ are variables and $m$ and $b$ are numbers. Some basic facts about lines:

• If you are given a number for $x$, you can plug it in to the equation $y=mx+b$ to get a number for $y$, which together give you a point with coordinates $(x, y)$ that is on the line.
• $m$ is the slope, which tells how much the line goes up (increasing $y$) for every unit you move over to the right (increasing $x$) – we often say that the value of the slope is $m=\frac{rise}{run}$. The slope is:
• positive, if the line is tilted up,
• negative, if the line is tilted down,
• zero, if the line is horizontal, and
• undefined, if the line is vertical.
• You can calculate the slope by finding the coordinates $(x_1, y_1)$ and $(x_2, y_2)$ of any two points on the line and then $m = \frac{y_2 - y_1}{x_2 - x_1}$.
• In particular, if $x_2-x_1=1$, then $m =\frac{y_2 - y_1}{1} = y_2 - y_1$ – so if you look at how much the line goes up in each step of one unit to the right, that number will be the slope $m$ (and if it goes down, the slope $m$ will simply be negative). In other words, the slope answers the question "for each step to the right, how much does the line increase (or decrease)?"
• $b$ is the $y$-intercept, which tells the $y$-coordinate of the point where the line crosses the $y$-axis. Another way of saying that is that $b$ is the $y$ value of the line when the $x$ is 0.

03: Linear Regression

Suppose we have some bivariate quantitative data $\{(x_1, y_1), \dots, (x_n, y_n)\}$ for which the correlation coefficient indicates some linear association. It is natural to want to write down explicitly the equation of the best line through the data – the question is what is this line. The most common meaning given to best in this search for the line is the line whose total square error is the smallest possible. We make this notion precise in two steps.

DEFINITION 3.1.1. Given a bivariate quantitative dataset $\{(x_1, y_1), \dots, (x_n, y_n)\}$ and a candidate line $\hat{y} = mx+b$ passing through this dataset, a residual is the difference in $y$-coordinates of an actual data point $(x_i, y_i)$ and the line's $y$ value at the same $x$-coordinate. That is, if the $y$-coordinate of the line when $x = x_i$ is $\hat{y_i} = mx_i + b$, then the residual is the measure of error given by $error_i = y_i - \hat{y_i}$.

Note we use the convention here and elsewhere of writing $\hat{y}$ for the $y$-coordinate on an approximating line, while the plain $y$ variable is left for actual data values, like $y_i$. Here is an example of what residuals look like

Now we are in the position to state the

DEFINITION 3.1.2. Given a bivariate quantitative dataset the least squares regression line, almost always abbreviated to LSRL, is the line for which the sum of the squares of the residuals is the smallest possible.

FACT 3.1.3. If a bivariate quantitative dataset $\{(x_1, y_1), \dots, (x_n, y_n)\}$ has LSRL given by $\hat{y} = mx + b$, then

1. The slope of the LSRL is given by $m=r\frac{s_y}{s_x}$, where $r$ is the correlation coefficient of the dataset.
2. The LSRL passes through the point $(\bar{x},\bar{y})$.
3. It follows that the $y$-intercept of the LSRL is given by $b = \bar{y} - \bar{x}m = \bar{y} - \bar{x}r \frac{s_y}{s_x}$.

It is possible to find the (coefficients of the) LSRL using the above information, but it is often more convenient to use a calculator or other electronic tool.
Such tools also make it very easy to graph the LSRL right on top of the scatterplot – although it is often fairly easy to sketch what the LSRL will likely look like by just making a good guess, using visual intuition, if the linear association is strong (as will be indicated by the correlation coefficient).

EXAMPLE 3.1.4. Here is some data where the individuals are 23 students in a statistics class, the independent variable is the students' total score on their homeworks, while the dependent variable is their final total course points, both out of 100.

$\begin{array}{llllllllll}{x:} & {65} & {65} & {50} & {53} & {59} & {92} & {86} & {84} & {29}\end{array}$
$\begin{array}{llllllllll}{y:} & {74} & {71} & {65} & {60} & {83} & {90} & {84} & {88} & {48}\end{array}$
$\begin{array}{llllllllll}{x:} & {29} & {09} & {64} & {31} & {69} & {10} & {57} & {81} & {81}\end{array}$
$\begin{array}{llllllllll}{y:} & {54} & {25} & {79} & {58} & {81} & {29} & {81} & {94} & {86}\end{array}$
$\begin{array}{llllllllll}{x:} & {80} & {70} & {60} & {62} & {59} \end{array}$
$\begin{array}{llllllllll}{y:} & {95} & {68} & {69} & {83} & {70} \end{array}$

Here is the resulting scatterplot, made with LibreOffice Calc (a free equivalent of Microsoft Excel).

It seems pretty clear that there is quite a strong linear association between these two variables, as is borne out by the correlation coefficient, $r = .935$ (computed with LibreOffice Calc's CORREL). Using then STDEV.S and AVERAGE, we find that the coefficients of the LSRL for this data, $\hat{y} = mx + b$, are $m = r \frac{s_y}{s_x} = .935 \frac{18.701}{23.207} = .754$ and $b = \bar{y} - \bar{x}m = 71 - 58 \cdot .754 = 26.976$

We can also use LibreOffice Calc's Insert Trend Line, with Show Equation, to get all this done automatically. Note that when LibreOffice Calc writes the equation of the LSRL, it uses $f(x)$ in place of $\hat{y}$, as we would.
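If a spreadsheet is not at hand, the same computation can be sketched in a few lines of Python, following the definition of the correlation coefficient from the previous chapter and Fact 3.1.3 for the coefficients of the LSRL. This is only an illustration of the formulas, not the method used in the text (which was LibreOffice Calc); because the example above rounds $r$, $s_x$, $s_y$, $\bar{x}$, and $\bar{y}$ before combining them, the values printed here may differ slightly from the quoted ones in the last decimal places.

```python
# A minimal sketch (plain Python instead of LibreOffice Calc) of the computations in
# Example 3.1.4: the correlation coefficient r and the LSRL coefficients m and b.
from math import sqrt

x = [65, 65, 50, 53, 59, 92, 86, 84, 29, 29, 9, 64, 31, 69, 10, 57, 81, 81, 80, 70, 60, 62, 59]
y = [74, 71, 65, 60, 83, 90, 84, 88, 48, 54, 25, 79, 58, 81, 29, 81, 94, 86, 95, 68, 69, 83, 70]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
s_x = sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))
s_y = sqrt(sum((yi - ybar) ** 2 for yi in y) / (n - 1))

# Pearson correlation coefficient, as defined in the previous chapter
r = sum((xi - xbar) * (yi - ybar) / (s_x * s_y) for xi, yi in zip(x, y)) / (n - 1)

m = r * s_y / s_x          # slope of the LSRL (Fact 3.1.3)
b = ybar - m * xbar        # y-intercept: the LSRL passes through (x-bar, y-bar)

print(f"r = {r:.3f}, LSRL: y-hat = {m:.3f} x + {b:.3f}")
```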
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/03%3A_Linear_Regression/3.01%3A_New_Page.txt
Suppose that we have a bivariate quantitative dataset $\{(x_1,y_1), \dots , (x_n,y_n)\}$ and we have computed its correlation coefficient $r$ and (the coefficients of) its LSRL $\widehat{y}=mx+b$. What is this information good for? The main use of the LSRL is described in the following

DEFINITION. Given a bivariate quantitative dataset and associated LSRL with equation $\widehat{y}=mx+b$, the process of guessing the value of the dependent variable in this relationship to be $mx_0+b$, for $x_0$ any value for the independent variable which satisfies $x_{min}\le x_0\le x_{max}$, is called interpolation.

The idea of interpolation is that we think the LSRL describes as well as possible the relationship between the independent and dependent variables, so that if we have a new $x$ value, we'll use the LSRL equation to predict what would be our best guess of what would be the corresponding $y$. Note we might have a new value of $x$ because we simply lost part of our dataset and are trying to fill it in as best we can. Another reason might be that a new individual came along whose value of the independent variable, $x_0$, was typical of the rest of the dataset – so at the very least $x_{min}\le x_0\le x_{max}$ – and we want to guess what will be the value of the dependent variable for this individual before we measure it. (Or maybe we cannot measure it for some reason.)

A common (but naive) alternate approach to interpolation for a value $x_0$ as above might be to find two values $x_i$ and $x_j$ in the dataset which were as close to $x_0$ as possible, and on either side of it (so $x_i<x_0<x_j$), and simply to guess that the $y$-value for $x_0$ would be the average of $y_i$ and $y_j$. This is not a terrible idea, but it is not as effective as using the LSRL as described above, since we use the entire dataset when we build the coefficients of the LSRL. So the LSRL will give, by the process of interpolation, the best guess for what should be that missing $y$-value based on everything we know, while the "average of $y_i$ and $y_j$" method only pays attention to those two nearest data points and thus may give a very bad guess for the corresponding $y$-value if those two points are not perfectly typical, if they have any randomness, any variation in their $y$-values which is not due to the variation of the $x$. It is thus always best to use interpolation as described above.

EXAMPLE 3.2.2. Working with the statistics students' homework and total course points data from Example 3.1.4, suppose the gradebook of the course instructor was somewhat corrupted and the instructor lost the final course points of the student Janet. If Janet's homework points of 77 were not in the corrupted part of the gradebook, the instructor might use interpolation to guess what Janet's total course points probably were. To do this, the instructor would have plugged in $x=77$ into the equation of the LSRL, $\widehat{y}=mx+b$, to get the estimated total course points of $.754\cdot77+26.976=85.034$.

Another important use of the (coefficients of the) LSRL is to use the underlying meanings of the slope and $y$-intercept. For this, recall that in the equation $y=mx+b$, the slope $m$ tells us how much the line goes up (or down, if the slope is negative) for each increase of the $x$ by one unit, while the $y$-intercept $b$ tells us what would be the $y$ value where the line crosses the $y$-axis, so when the $x$ has the value 0.
In each particular situation that we have bivariate quantitative data and compute an LSRL, we can then use these interpretations to make statements about the relationship between the independent and dependent variables.

EXAMPLE 3.2.3. Look one more time at the data on students' homework and total course points in a statistics class from Example 3.1.4, and the LSRL computed there. We said that the slope of the LSRL was $m=.754$ and the $y$-intercept was $b=26.976$. In context, what this means is that

On average, each additional point of homework corresponded to an increase of $.754$ total course points.

We may hope that this is actually a causal relationship, that the extra work a student does to earn that additional point of homework score helps the student learn more statistics and therefore get $.75$ more total course points. But the mathematics here does not require that causation, it merely tells us the increase in $x$ is associated with that much increase in $y$.

Likewise, we can also conclude from the LSRL that

In general, a student who did no homework at all would earn about $26.976$ total course points.

Again, we cannot conclude that doing no homework causes that terrible final course point total, only that there is an association.
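As a small illustration (a sketch only, using the rounded coefficients $m=.754$ and $b=26.976$ quoted above and the observed homework range of roughly 9 to 92 points in Example 3.1.4), interpolation can be packaged as a tiny function which refuses to guess outside the range of the data:

```python
# A small sketch of interpolation with the LSRL of Example 3.1.4: predict total
# course points from homework points, but only for x values inside the observed range.
M, B = 0.754, 26.976          # LSRL coefficients quoted in Example 3.1.4
X_MIN, X_MAX = 9, 92          # smallest and largest homework scores in that dataset

def interpolate(x0):
    if not (X_MIN <= x0 <= X_MAX):
        raise ValueError(f"x = {x0} is outside the observed range [{X_MIN}, {X_MAX}]")
    return M * x0 + B

print(interpolate(77))        # Janet's homework score -> about 85.03 course points
```

Calling interpolate(77) reproduces the estimate of about 85.03 total course points for Janet from Example 3.2.2.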
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/03%3A_Linear_Regression/3.02%3A_New_Page.txt
Sensitivity to Outliers

The correlation coefficient and the (coefficients of the) LSRL are built out of means and standard deviations and therefore the following fact is completely unsurprising

FACT 3.3.1. The correlation coefficient and the (coefficients of the) LSRL are very sensitive to outliers.

What perhaps is surprising here is that the outliers for bivariate data are a little different from those for 1-variable data.

DEFINITION 3.3.2. An outlier for a bivariate quantitative dataset is one which is far away from the curve which has been identified as underlying the shape of the scatterplot of that data. In particular, a point $(x,y)$ can be a bivariate outlier even if both $x$ is not an outlier for the independent variable data considered alone and $y$ is not an outlier for the dependent variable data alone.

EXAMPLE 3.3.3. Suppose we add one more point (90,30) to the dataset in Example 3.1.4. Neither the $x$- nor $y$-coordinates of this point are outliers with respect to their respective single-coordinate datasets, but it is nevertheless clearly a bivariate outlier, as can be seen in the new scatterplot. In fact, recomputing the correlation coefficient and LSRL, we find quite a change from what we found before, in Example 3.1.4: $r=.704\qquad\text{[which used to be } .935]$ and $\widehat{y}=.529x+38.458\qquad\text{[which used to be }.754x+26.976]$ all because of one additional point!

Causation

The attentive reader will have noticed that we started our discussion of bivariate data by saying we hoped to study when one thing causes another. However, what we've actually done instead is find correlation between variables, which is quite a different thing. Now philosophers have discussed what exactly causation is for millennia, so certainly it is a subtle issue that we will not resolve here. In fact, careful statisticians usually dodge the complexities by talking about relationships, association, and, of course, the correlation coefficient, being careful always not to commit to causation – at least based only on an analysis of the statistical data.

As just one example, where we spoke about the meaning of the square $r^2$ of the correlation coefficient, we were careful to say that $r^2$ measures the variation of the dependent variable which is associated with the variation of the independent variable. A more reckless description would have been to say that one caused the other – but don't fall into that trap! This would be a bad idea because (among other reasons) the correlation coefficient is symmetric in the choice of explanatory and response variables (meaning $r$ is the same no matter which is chosen for which role), while any reasonable notion of causation is asymmetric. E.g., while the correlation is exactly the same very large value no matter which variable is taken to be $x$ and which $y$, most people would say that smoking causes cancer and not the other way around!

We do need to make one caution about this caution, however. If there is a causal relationship between two variables that are being studied carefully, then there will be correlation. So, to quote the great data scientist Edward Tufte,

Correlation is not causation but it sure is a hint.

The first part of this quote (up to the "but") is much more famous and, as a very first step, is a good slogan to live by. Those with a bit more statistical sophistication might instead learn this version, though.
A more sophisticated-sounding version, again due to Tufte, is

Empirically observed covariation is a necessary but not sufficient condition for causality.

Extrapolation

We have said that visual intuition often allows humans to sketch fairly good approximations of the LSRL on a scatterplot, so long as the correlation coefficient tells us there is a strong linear association. If the diligent reader did that with the first scatterplot in Example 3.1.4, probably the resulting line looked much like the line which LibreOffice Calc produced – except humans usually sketch their line all the way to the left and right edges of the graphics box. Automatic tools like LibreOffice Calc do not do that, for a reason.

DEFINITION. Given a bivariate quantitative dataset and associated LSRL with equation $\widehat{y}=mx+b$, the process of guessing the value of the dependent variable in this relationship to be $mx_0+b$, for $x_0$ any value for the independent variable which does not satisfy $x_{min}\le x_0\le x_{max}$ [so, instead, either $x_0<x_{min}$ or $x_0>x_{max}$], is called extrapolation.

Extrapolation is considered a bad, or at least risky, practice. The idea is that we used the evidence in the dataset $\{(x_1,y_1), \dots , (x_n,y_n)\}$ to build the LSRL, but, by definition, all of this data lies in the interval on the $x$-axis from $x_{min}$ to $x_{max}$. There is literally no evidence from this dataset about what the relationship between our chosen explanatory and response variables will be for $x$ outside of this interval. So in the absence of strong reasons to believe that the precise linear relationship described by the LSRL will continue for more $x$'s, we should not assume that it does, and therefore we should not use the LSRL equation to guess values by extrapolation.

The fact is, however, that often the best thing we can do with available information when we want to make predictions out into uncharted territory on the $x$-axis is extrapolation. So while it is perilous, it is reasonable to extrapolate, so long as you are clear about what exactly you are doing.

EXAMPLE 3.3.5. Using again the statistics students' homework and total course points data from Example 3.1.4, suppose the course instructor wanted to predict what would be the total course points for a student who had earned a perfect $100$ points on their homework. Plugging into the LSRL, this would have yielded a guess of $.754\cdot100+26.976=102.376$. Of course, this would have been impossible, since the maximum possible total course score was $100$. Moreover, making this guess is an example of extrapolation, since the $x$ value of $100$ is beyond the largest $x$ value of $x_{max}=92$ in the dataset. Therefore we should not rely on this guess – as makes sense, since it is invalid by virtue of being larger than $100$.

Simpson's Paradox

Our last caution is not so much a way using the LSRL can go wrong, but instead a warning to be ready for something very counter-intuitive to happen – so counter-intuitive, in fact, that it is called a paradox. It usually seems reasonable that if some object is cut into two pieces, both of which have a certain property, then probably the whole object also has that same property. But if the object in question is a population and the property is "has positive correlation," then maybe the unreasonable thing happens.

DEFINITION 3.3.6. Suppose we have a population for which we have a bivariate quantitative dataset.
Suppose further that the population is broken into two (or more) subpopulations for all of which the correlation between the two variables is positive, but the correlation of the variables for the whole dataset is negative. Then this situation is called Simpson's Paradox. [It's also called Simpson's Paradox if the role of positive and negative is reversed in our assumptions.]

The bad news is that Simpson's paradox can happen.

EXAMPLE 3.3.7. Let $P=\{(0,1), (1,0), (9,10), (10,9)\}$ be a bivariate dataset, which is broken into the two subpopulations $P_1=\{(0,1), (1,0)\}$ and $P_2=\{(9,10), (10,9)\}$. Then the correlation coefficients of both $P_1$ and $P_2$ are $r=-1$, but the correlation of all of $P$ is $r=.9756$. This is Simpson's Paradox!

Or, in applications, we can have situations like

EXAMPLE 3.3.8. Suppose we collect data on two sections of a statistics course, in particular on how many hours per week the individual students study for the course and how they do in the course, measured by their total course points at the end of the semester. It is possible that there is a strong positive correlation between these variables for each section by itself, but there is a strong negative correlation when we put all the students into one dataset. In other words, it is possible that the rational advice, based on both individual sections, is study more and you will do better in the course, but that the rational advice based on all the student data put together is study less and you will do better.

3.04: New Page

The age ($x$) and resting heart rate (RHR, $y$) were measured for nine men, yielding this dataset: $\begin{matrix} x:&20&23&30&37&35&45&51&60&63\\ y:&72&71&73&74&74&73&75&75&77 \end{matrix}$ Make a scatterplot of these data. Based on the scatterplot, what do you think the correlation coefficient $r$ will be? Now compute $r$. Compute the LSRL for these data, write down its equation, and sketch it on top of your scatterplot. [You may, of course, do as much of this with electronic tools as you like. However, you should explain what tool you are using, how you used it, and what it must have been doing behind the scenes to get the results which it displayed and you are turning in.]

Continuing with the data and computations of the previous problem: What percentage of the variation in RHR is associated with variation in age? Write the following sentences with blanks filled in: "If I measured the RHR of a 55 year-old man, I would expect it to be ____. Making an estimate like this is called ____." Just looking at the equation of the LSRL, what does it suggest should be the RHR of a newborn baby? Explain. Also explain what an estimate like yours for the RHR of a baby is called. This kind of estimate is considered a bad idea in many cases – explain why in general, and also use specifics from this particular case.

Write down a bivariate quantitative dataset for a population of only two individuals whose LSRL is $\widehat{y}=2x-1$. What is the correlation coefficient of your dataset? Next, add one more point to the dataset in such a way that you don't change the LSRL or correlation coefficient. Finally, can you find a dataset with the same LSRL but having a larger correlation coefficient than you just had? [Hint: fool around with modifications or additions to the datasets you already found in this problem, using an electronic tool to do all the computational work. When you find a good one, write it down and explain what your thinking was as you searched for it.]
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/03%3A_Linear_Regression/3.03%3A_New_Page.txt
We want to imagine doing an experiment in which there is no way to predict what the outcome will be. Of course, if we stop our imagination there, there would be nothing we could say and no point in trying to do any further analysis: the outcome would just be whatever it wanted to be, with no pattern. So let us add the additional assumption that while we cannot predict what will happen any particular time we do the experiment, we can predict general trends, in the long run, if we repeat the experiment many times. To be more precise, we assume that, for any collection \(E\) of possible outcomes of the experiment there is a number \(p(E)\) such that, no matter who does the experiment, no matter when they do it, if they repeat the experiment many times, the fraction of times they would have seen any of the outcomes of \(E\) would be close to that number \(p(E)\). This is called the frequentist approach to the idea of probability. While it is not universally accepted – the Bayesian alternative does in fact have many adherents – it has the virtue of being the most internally consistent way of building a foundation for probability. For that reason, we will follow the frequentist description of probability in this text. Before we jump into the mathematical formalities, we should motivate two pieces of what we just said. First, why talk about sets of outcomes of the experiment instead of talking about individual outcomes? The answer is that we are often interested in sets of outcomes, as we shall see later in this book, so it is nice to set up the machinery from the very start to work with such sets. Or, to give a particular concrete example, suppose you were playing a game of cards and could see your hand but not the other players’ hands. You might be very interested in how likely is it that your hand is a winning hand, i.e., what is the likelihood of the set of all possible configurations of all the rest of the cards in the deck and in your opponents’ hands for which what you have will be the winning hand? It is situations like this which motivate an approach based on sets of outcomes of the random experiment. Another question we might ask is: where does our uncertainty about the experimental results come from? From the beginnings of the scientific method through the turn of the \(20^{th}\) century, it was thought that this uncertainty came from our incomplete knowledge of the system on which we were experimenting. So if the experiment was, say, flipping a coin, the precise amount of force used to propel the coin up into the air, the precise angular motion imparted to the coin by its position just so on the thumbnail of the person doing the flipping, the precise drag that the coin felt as it tumbled through the air caused in part by eddies in the air currents coming from the flap of a butterfly’s wings in the Amazon rainforest – all of these things could significantly contribute to changing whether the coin would eventually come up heads or tails. Unless the coin-flipper was a robot operating in a vacuum, then, there would just be no way to know all of these physical details with enough accuracy to predict the toss. After the turn of the \(20^{th}\) century, matters got even worse (at least for physical determinists): a new theory of physics came along then, called Quantum Mechanics, according to which true randomness is built into the laws of the universe. 
For example, if you have a very dim light source, which produces the absolutely smallest possible "chunks" of light (called photons), and you shine it through first one polarizing filter and then see if it goes through a second filter at a $45^\circ$ angle to the first, then half the photons will get through the second filter, but there is absolutely no way ever to predict whether any particular photon will get though or not. Quantum mechanics is full of very weird, non-intuitive ideas, but it is one of the most well-tested theories in the history of science, and it has passed every test.

04: Probability Theory

Sample Spaces, Set Operations, and Probability Models

Let's get right to the definitions.

DEFINITION 4.1.1. Suppose we have a repeatable experiment we want to investigate probabilistically. The things that happen when we do the experiment, the results of running it, are called the [experimental] outcomes. The set of all outcomes is called the sample space of the experiment. We almost always use the symbol $S$ for this sample space.

EXAMPLE 4.1.2. Suppose the experiment we are doing is "flip a coin." Then the sample space would be $S=\{H, T\}$.

EXAMPLE 4.1.3. For the experiment "roll a [normal, six-sided] die," the sample space would be $S=\{1, 2, 3, 4, 5, 6\}$.

EXAMPLE 4.1.4. For the experiment "roll two dice," the sample space would be \begin{aligned} S=\{&11, 12, 13, 14, 15, 16,\\ &21, 22, 23, 24, 25, 26,\\ &31, 32, 33, 34, 35, 36,\\ &41, 42, 43, 44, 45, 46,\\ &51, 52, 53, 54, 55, 56,\\ &61, 62, 63, 64, 65, 66\}\end{aligned} where the notation "$nm$" means "$1^{st}$ roll resulted in an $n$, $2^{nd}$ in an $m$."

EXAMPLE 4.1.5. Consider the experiment "flip a coin as many times as necessary to see the first Head." This would have the infinite sample space $S=\{H, TH, TTH, TTTH, TTTTH, \dots \} \quad .$

EXAMPLE 4.1.6. Finally, suppose the experiment is "point a Geiger counter at a lump of radioactive material and see how long you have to wait until the next click." Then the sample space $S$ is the set of all positive real numbers, because potentially the waiting time could be any positive amount of time.

As mentioned in the chapter introduction, we are more interested in

DEFINITION 4.1.7. Given a repeatable experiment with sample space $S$, an event is any collection of [some, all, or none of the] outcomes in $S$; i.e., an event is any subset $E$ of $S$, written $E\subset S$.

There is one special set which is a subset of any other set, and therefore is an event in any sample space.

DEFINITION 4.1.8. The set $\{\}$ with no elements is called the empty set, for which we use the notation $\emptyset$.

EXAMPLE 4.1.9. Looking at the sample space $S=\{H, T\}$ in Example 4.1.2, it's pretty clear that the following are all the subsets of $S$: \begin{aligned} &\emptyset\\ &\{H\}\\ &\{T\}\\ &S\ [=\{H, T\}]\end{aligned}

Two parts of that example are always true: $\emptyset$ and $S$ are always subsets of any set $S$.

Since we are going to be working a lot with events, which are subsets of a larger set, the sample space, it is nice to have a few basic terms from set theory:

DEFINITION 4.1.10. Given a subset $E \subset S$ of a larger set $S$, the complement of $E$ is the set $E^c = \{\text{all the elements of } S \text{ which are not in } E\}$.

If we describe an event $E$ in words as all outcomes satisfying some property $X$, the complementary event, consisting of all the outcomes not in $E$, can be described as all outcomes which don't satisfy $X$. In other words, we often describe the event $E^c$ as the event "not $E$."

DEFINITION 4.1.11.
Given two sets $A$ and $B$, their union is the set $A\cup B = \{\text{all elements which are in } A \text{ or } B \text{ [or both]}\}\ .$

Now if event $A$ is those outcomes having property $X$ and $B$ is those with property $Y$, the event $A\cup B$, with all outcomes in $A$ together with all outcomes in $B$, can be described as all outcomes satisfying $X$ or $Y$, thus we sometimes pronounce the event "$A\cup B$" as "$A$ or $B$."

DEFINITION 4.1.12. Given two sets $A$ and $B$, their intersection is the set $A\cap B = \{\text{all elements which are in both } A \text{ and } B\}\ .$

If, as before, event $A$ consists of those outcomes having property $X$ and $B$ is those with property $Y$, the event $A\cap B$ will consist of those outcomes which satisfy both $X$ and $Y$. In other words, "$A\cap B$" can be described as "$A$ and $B$."

Putting together the idea of intersection with the idea of that special subset $\emptyset$ of any set, we get the

DEFINITION 4.1.13. Two sets $A$ and $B$ are called disjoint if $A\cap B=\emptyset$. In other words, sets are disjoint if they have nothing in common.

An exact synonym for disjoint that some authors prefer is mutually exclusive. We will use both terms interchangeably in this book.

Now we are ready for the basic structure of probability.

DEFINITION 4.1.14. Given a sample space $S$, a probability model on $S$ is a choice of a real number $P(E)$ for every event $E\subset S$ which satisfies

1. For all events $E$, $0\le P(E)\le 1$.
2. $P(\emptyset)=0$ and $P(S)=1$.
3. For all events $E$, $P(E^c)=1-P(E)$.
4. If $A$ and $B$ are any two disjoint events, then $P(A\cup B)=P(A)+P(B)$. [This is called the addition rule for disjoint events.]

Venn Diagrams

Venn diagrams are a simple way to display subsets of a fixed set and to show the relationships between these subsets and even the results of various set operations (like complement, union, and intersection) on them. The primary use we will make of Venn diagrams is for events in a certain sample space, so we will use that terminology [even though the technique has much wider application].

To make a Venn Diagram, always start out by making a rectangle to represent the whole sample space:

Within that rectangle, we make circles, ovals, or just blobs, to indicate that portion of the sample space which is some event $E$:

Sometimes the outcomes in the sample space $S$ and in the event $A$ might be indicated in the different parts of the Venn diagram. So, if $S=\{a, b, c, d\}$ and $A=\{a, b\}\subset S$, we might draw this as

The complement $E^c$ of an event $E$ is easy to show on a Venn diagram, since it is simply everything which is not in $E$:

This can actually be helpful in figuring out what must be in $E^c$. In the example above with $S=\{a, b, c, d\}$ and $A=\{a, b\}\subset S$, by looking at what is in the shaded exterior part of our picture of $E^c$, we can see that for that $A$, we would get $A^c=\{c, d\}$.

Moving now to set operations that work with two events, suppose we want to make a Venn diagram with events $A$ and $B$. If we know these events are disjoint, then we would make the diagram as follows:

while if they are known not to be disjoint, we would use instead this diagram:

For example, if $S=\{a, b, c, d\}$, $A=\{a, b\}$, and $B=\{b, c\}$, we would have

When in doubt, it is probably best to use the version with overlap, which then could simply not have any points in it (or could have zero probability, when we get to that, below).
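Readers who know a little programming may find it helpful to see these set operations in a language that has them built in. The following Python sketch (an illustration only, not part of the text) uses the sample space $S=\{a,b,c,d\}$ with $A=\{a,b\}$ and $B=\{b,c\}$ from the discussion above.

```python
# A quick sketch of the set operations just defined, using Python's built-in sets and
# the small sample space S = {a, b, c, d} with A = {a, b} and B = {b, c} from above.
S = {"a", "b", "c", "d"}
A = {"a", "b"}
B = {"b", "c"}

A_complement = S - A        # the complement A^c: everything in S which is not in A
union = A | B               # "A or B"
intersection = A & B        # "A and B"
are_disjoint = intersection == set()   # True only if A and B have nothing in common

print(A_complement)                        # the event {c, d}, as in the Venn diagram above
print(union, intersection, are_disjoint)   # the events {a, b, c} and {b}, and False
```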
Venn diagrams are very good at showing unions and intersections: Another nice thing to do with Venn diagrams is to use them as a visual aid for probability computations. The basic idea is to make a diagram showing the various events sitting inside the usual rectangle, which stands for the sample space, and to put numbers in various parts of the diagram showing the probabilities of those events, or of the results of operations (unions, intersections, and complements) on those events. For example, if we are told that an event $A$ has probability $P(A)=.4$, then we can immediately fill in the $.4$ as follows: But we can also put a number in the exterior of that circle which represents $A$, taking advantage of the fact that that exterior is $A^c$ and the rule for probabilities of complements (point (3) in Definition 4.1.14) to conclude that the appropriate number is $1-.4=.6$: We recommend that, in a Venn diagram showing probability values, you always put a number in the region exterior to all of the events [but inside the rectangle indicating the sample space, of course]. Complicating this process of putting probability numbers in the regions of a Venn diagram a little is the situation where we are given probabilities for both an event and a subset of that event. This most often happens when we are told probabilities both of some events and of their intersection(s). Here is an example: EXAMPLE 4.1.15. Suppose we are told that we have two events $A$ and $B$ in the sample space $S$, which satisfy $P(A)=.4$, $P(B)=.5$, and $P(A\cap B)=.1$. First of all, we know that $A$ and $B$ are not disjoint, since if they were disjoint, that would mean (by definition) that $A\cap B=\emptyset$, and since $P(\emptyset)=0$ but $P(A\cap B)\neq 0$, that is not possible. So we draw a Venn diagram that we’ve seen before: However, it would be unwise simply to write those given numbers $.4$, $.5$, and $.1$ into the three central regions of this diagram. The reason is that the number $.1$ is the probability of $A\cap B$, which is a part of $A$ already, so if we simply write $.4$ in the rest of $A$, we would be counting that $.1$ for the $A\cap B$ twice. Therefore, before we write a number in the rest of $A$, outside of $A\cap B$, we have to subtract the $.1$ for $P(A\cap B)$. That means that the number which goes in the rest of $A$ should be $.4-.1=.3$. A similar reasoning tells us that the number in the part of $B$ outside of $A\cap B$ should be $.5-.1=.4$. That means the Venn diagram with all probabilities written in would be: The approach in the above example is our second important recommendation for how to put numbers in a Venn diagram showing probability values: always put in each region a number which corresponds to the probability of that smallest connected region containing the number, not of any larger region.
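The little subtraction bookkeeping in Example 4.1.15 is easy to mimic on a computer. Here is a minimal sketch in Python (not a tool this text assumes; the variable names are invented for illustration) which, given the three probabilities from that example, produces the number that belongs in each of the four regions of the Venn diagram, using exact fractions so no rounding sneaks in.

from fractions import Fraction

P_A, P_B, P_AB = Fraction(4, 10), Fraction(5, 10), Fraction(1, 10)
only_A = P_A - P_AB                       # the part of A outside A∩B: 3/10
only_B = P_B - P_AB                       # the part of B outside A∩B: 2/5
outside = 1 - (only_A + only_B + P_AB)    # the region outside both events: 1/5
print(only_A, only_B, P_AB, outside)      # 3/10 2/5 1/10 1/5

Note that the four numbers printed add up to 1, as the probabilities of the four disjoint regions of any such Venn diagram must.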
One last point we should make, using the same argument as in the above example. Suppose we have events $A$ and $B$ in a sample space $S$ (again). Suppose we are not sure if $A$ and $B$ are disjoint, so we cannot use the addition rule for disjoint events to compute $P(A\cup B)$. But notice that the events $B$ and $B^c$ are disjoint, so that $A\cap B$ and $A\cap B^c$ are also disjoint and $A = A\cap S = A\cap\left(B\cup B^c\right) = \left(A\cap B\right)\cup\left(A\cap B^c\right)$ is a decomposition of the event $A$ into the two disjoint events $A\cap B$ and $A\cap B^c$. From the addition rule for disjoint events, this means that $P(A)=P(A\cap B)+P(A\cap B^c)\ .$ Similar reasoning tells us both that $P(B)=P(A\cap B)+P(A^c\cap B)$ and that $A\cup B=\left(A\cap B^c\right)\cup\left(A\cap B\right)\cup\left(A^c\cap B\right)$ is a decomposition of $A\cup B$ into disjoint pieces, so that $P(A\cup B)=P(A\cap B^c)+P(A\cap B)+P(A^c\cap B)\ .$ Combining all of these equations, we conclude that \begin{aligned} P(A)+P(B)-P(A\cap B) &=P(A\cap B)+P(A\cap B^c)+P(A\cap B)+P(A^c\cap B)-P(A\cap B)\\ &= P(A\cap B^c)+P(A\cap B)+P(A^c\cap B) + P(A\cap B)-P(A\cap B)\\ &= P(A\cap B^c)+P(A\cap B)+P(A^c\cap B)\\ &= P(A\cup B) \ .\end{aligned} This is important enough to state as a FACT 4.1.16. The Addition Rule for General Events If $A$ and $B$ are events in a sample space $S$ then we have the addition rule for their probabilities $P(A\cup B) = P(A) + P(B) - P(A\cap B)\ .$ This rule is true whether or not $A$ and $B$ are disjoint. Finite Probability Models Here is a situation in which we can calculate a lot of probabilities fairly easily: when the sample space $S$ of some experiment is finite. So let’s suppose the sample space consists of just the outcomes $S=\{o_1, o_2, \dots, o_n\}$. For each of the outcomes, we can compute the probability: \begin{aligned} p_1 &= P(\{o_1\})\\ p_2 &= P(\{o_2\})\\ &\ \ \vdots\\ p_n &= P(\{o_n\})\end{aligned} Let’s think about what the rules for probability models tell us about these numbers $p_1, p_2, \dots, p_n$. First of all, since they are each the probability of an event, we see that \begin{aligned} 0\le &p_1\le 1\\ 0\le &p_2\le 1\\ &\ \ \vdots\\ 0\le &p_n\le 1\end{aligned} Furthermore, since $S=\{o_1, o_2, \dots, o_n\}=\{o_1\}\cup\{o_2\}\cup \dots \cup\{o_n\}$ and all of the events $\{o_1\}, \{o_2\}, \dots, \{o_n\}$ are disjoint, by the addition rule for disjoint events we have \begin{aligned} 1=P(S)&=P(\{o_1, o_2, \dots, o_n\})\\ &=P(\{o_1\}\cup\{o_2\}\cup \dots \cup\{o_n\})\\ &=P(\{o_1\})+P(\{o_2\})+ \dots +P(\{o_n\})\\ &=p_1+p_2+ \dots +p_n\ .\end{aligned} The final thing to notice about this situation of a finite sample space is that if $E\subset S$ is any event, then $E$ will be just a collection of some of the outcomes from $\{o_1, o_2, \dots, o_n\}$ (maybe none, maybe all, maybe an intermediate number). Since, again, the events like $\{o_1\}$ and $\{o_2\}$ and so on are disjoint, we can compute \begin{aligned} P(E) &= P(\{\text{the outcomes } o_j \text{ which make up } E\})\\ &= \sum \{\text{the } p_j\text{'s for the outcomes in } E\}\ .\end{aligned} In other words FACT 4.1.17. A probability model on a sample space $S$ with a finite number, $n$, of outcomes, is nothing other than a choice of real numbers $p_1, p_2, \dots, p_n$, all in the range from $0$ to $1$ and satisfying $p_1+p_2+ \dots +p_n=1$. For such a choice of numbers, we can compute the probability of any event $E\subset S$ as $P(E) = \sum \{\text{the } p_j\text{'s corresponding to the outcomes } o_j \text{ which make up } E\}\ .$
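Fact 4.1.17 translates almost word for word into a few lines of code. The sketch below is in Python (not something this text relies on; the function name prob and the example coin model are invented here for illustration): it stores a finite probability model as a dictionary of outcome probabilities and computes $P(E)$ by summing the $p_j$ of the outcomes in $E$.

def prob(event, model):
    # P(E) for a finite probability model: add up the p_j of the outcomes making up E
    return sum(model[o] for o in event)

p_heads = 0.6                                   # a possibly biased coin
model = {"H": p_heads, "T": 1 - p_heads}        # the p_j's must total 1
print(prob({"H", "T"}, model))                  # 1.0, i.e., P(S) = 1
print(prob(set(), model))                       # 0, i.e., P(∅) = 0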
EXAMPLE 4.1.18. For the coin flip of Example 4.1.2, there are only the two outcomes $H$ and $T$ for which we need to pick two probabilities, call them $p$ and $q$. In fact, since the total must be $1$, we know that $p+q=1$ or, in other words, $q=1-p$. The probabilities for all events (which we listed in Example 4.1.9) are then \begin{aligned} P(\emptyset) &= 0\\ P(\{H\}) &= p\\ P(\{T\}) &= q = 1-p\\ P(\{H,T\}) &= p + q = 1\end{aligned} What we’ve described here is, potentially, a biased coin, since we are not assuming that $p=q$ – the probabilities of getting a head and a tail are not assumed to be the same. The alternative is to assume that we have a fair coin, meaning that $p=q$. Note that in such a case, since $p+q=1$, we have $2p=1$ and so $p=1/2$. That is, the probability of a head (and, likewise, the probability of a tail) in a single throw of a fair coin is $1/2$. EXAMPLE 4.1.19. As in the previous example, we can consider the die of Example 4.1.3 to be a fair die, meaning that the individual face probabilities are all the same. Since they must also total to $1$ (as we saw for all finite probability models), it follows that $p_1 = p_2 = p_3 = p_4 = p_5 = p_6 = 1/6 .$ We can then use this basic information and the formula (for $P(E)$) in Fact 4.1.17 to compute the probability of any event of interest, such as $P(\text{``the roll was even''}) = P(\{2, 4, 6\}) = \frac16 + \frac16 + \frac16 = \frac36 = \frac12\ .$ We should immortalize these last two examples with a DEFINITION 4.1.20. When we are talking about dice, coins, individuals for some task, or another small, practical, finite experiment, we use the term fair to indicate that the probabilities of all individual outcomes are equal (and therefore all equal to the number $1/n$, where $n$ is the number of outcomes in the sample space). A more technical term for the same idea is equiprobable, while a more casual term which is often used for this in very informal settings is “at random” (such as “pick a card at random from this deck” or “pick a random patient from the study group to give the new treatment to...”). EXAMPLE 4.1.21. Suppose we look at the experiment of Example 4.1.4 and add the information that the two dice we are rolling are fair. This actually isn’t quite enough to figure out the probabilities, since we also have to ensure that the fair rolling of the first die doesn’t in any way affect the rolling of the second die. This is technically the requirement that the two rolls be independent, but since we won’t investigate that carefully until §2, below, let us instead here simply say that we assume the two rolls are fair and are in fact completely uninfluenced by anything around them in the world including each other. What this means is that, in the long run, we would expect the first die to show a $1$ roughly ${\frac16}^{th}$ of the time, and in the very long run, the second die would show a $1$ roughly ${\frac16}^{th}$ of those times. This means that the outcome of the “roll two dice” experiment should be $11$ with probability $\frac{1}{36}$ – and the same reasoning would show that all of the outcomes have that probability. In other words, this is an equiprobable sample space with $36$ outcomes each having probability $\frac{1}{36}$. Which in turn enables us to compute any probability we might like, such as \begin{aligned} P(\text{``sum of the two rolls is 4''}) &= P(\{13, 22, 31\})\\ &= \frac{1}{36} + \frac{1}{36} + \frac{1}{36}\\ &= \frac{3}{36}\\ &= \frac{1}{12}\ .\end{aligned}
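Equiprobable sample spaces like the one in Example 4.1.21 are pleasant to work with on a computer as well, since every probability is just a count of favorable outcomes divided by the total count. The following Python sketch (Python is not a tool the text assumes; the names outcomes and favorable are made up here) enumerates the 36 outcomes and recovers the probability $\frac{1}{12}$ computed above.

from itertools import product

outcomes = list(product(range(1, 7), repeat=2))      # the 36 equiprobable outcomes (1,1), (1,2), ..., (6,6)
favorable = [o for o in outcomes if sum(o) == 4]     # (1,3), (2,2), (3,1)
print(len(favorable) / len(outcomes))                # 0.0833..., which is 1/12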
We have described the whole foundation of the theory of probability as coming from imperfect knowledge, in the sense that we don’t know for sure if an event $A$ will happen any particular time we do the experiment but we do know, in the long run, in what fraction of times $A$ will happen. Or, at least, we claim that there is some number $P(A)$ such that after running the experiment $N$ times, out of which $n_A$ of these times are when $A$ happened, $P(A)$ is approximately $n_A/N$ (and this ratio gets closer and closer to $P(A)$ as $N$ gets bigger and bigger). But what if we have some knowledge? In particular, what happens if we know for sure that the event $B$ has happened – will that influence our knowledge of whether $A$ happens or not? As before, when there is randomness involved, we cannot tell for sure if $A$ will happen, but we hope that, given the knowledge that $B$ happened, we can make a more accurate guess about the probability of $A$. EXAMPLE 4.2.1. If you pick a person at random in a certain country on a particular date, you might be able to estimate the probability that the person had a certain height if you knew enough about the range of heights of the whole population of that country. [In fact, below we will make estimates of this kind.] That is, if we define the event $A=\text{``the random person is taller than 1.829 meters (6 feet)''}$ then we might estimate $P(A)$. But consider the event $B=\text{``the random person's parents were both taller than 1.829 meters''}\ .$ Because there is a genetic component to height, if you know that $B$ happened, it would change your idea of how likely, given that knowledge, that $A$ happened. Because genetics are not the only thing which determines a person’s height, you would not be certain that $A$ happened, even given the knowledge of $B$. Let us use the frequentist approach to derive a formula for this kind of probability of $A$ given that $B$ is known to have happened. So think about doing the repeatable experiment many times, say $N$ times. Out of all those times, sometimes $B$ happens, say it happens $n_B$ times. Out of those times, the ones where $B$ happened, sometimes $A$ also happened. These are the cases where both $A$ and $B$ happened – or, converting this to a more mathematical description, the times that $A\cap B$ happened – so we will write it $n_{A\cap B}$. We know that the probability of $A$ happening in the cases where we know for sure that $B$ happened is approximately $n_{A\cap B}/n_B$. Let’s do that favorite trick of multiplying and dividing by the same number, so finding that the probability in which we are interested is approximately $\frac{n_{A\cap B}}{n_B} = \frac{n_{A\cap B}\cdot N}{N\cdot n_B} = \frac{n_{A\cap B}}{N}\cdot\frac{N}{n_B} = \frac{n_{A\cap B}}{N} \Bigg/ \frac{n_B}{N} \approx P(A\cap B) \Big/ P(B)$ Which is why we make the DEFINITION 4.2.2. The conditional probability of $A$ given $B$ is $P(A|B) = \frac{P(A\cap B)}{P(B)}\ .$ Here $P(A|B)$ is pronounced the probability of $A$ given $B$.
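The frequentist ratio $n_{A\cap B}/n_B$ that motivated this definition is easy to watch converge in a simulation. Here is a small Python sketch (Python, the seed, and the number of trials are all choices made here for illustration, not anything the text prescribes) for a fair die, with $B=$ “the roll was even” and $A=$ “the roll was a 2”; the estimate it prints previews the example worked out next.

import random

random.seed(0)
N = 100_000
n_B = n_AandB = 0
for _ in range(N):
    roll = random.randint(1, 6)
    if roll % 2 == 0:          # event B: the roll was even
        n_B += 1
        if roll == 2:          # event A: the roll was a 2
            n_AandB += 1
print(n_AandB / n_B)           # approximately 1/3, i.e., P(A∩B)/P(B)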
Let’s do a simple EXAMPLE 4.2.3. Building off of Example 4.1.19, note that the probability of rolling a $2$ is $P(\{2\})=1/6$ (as is the probability of rolling any other face – it’s a fair die). But suppose that you were told that the roll was even, which is the event $\{2, 4, 6\}$, and asked for the probability that the roll was a $2$ given this prior knowledge. The answer would be $P(\{2\}\mid\{2, 4, 6\})=\frac{P(\{2\}\cap\{2, 4, 6\})}{P(\{2, 4, 6\})} =\frac{P(\{2\})}{P(\{2, 4, 6\})} = \frac{1/6}{1/2} = 1/3\ .$ In other words, the probability of rolling a $2$ on a fair die with no other information is $1/6$, while the probability of rolling a $2$ given that we rolled an even number is $1/3$. So the probability doubled with the given information. Sometimes the probability changes even more than merely doubling: the probability that we rolled a $1$ with no other knowledge is $1/6$, while the probability that we rolled a $1$ given that we rolled an even number is $P(\{1\}\mid\{2, 4, 6\})=\frac{P(\{1\}\cap\{2, 4, 6\})}{P(\{2, 4, 6\})} =\frac{P(\emptyset)}{P(\{2, 4, 6\})} = \frac{0}{1/2} = 0\ .$ But, actually, sometimes the conditional probability for some event is the same as the unconditioned probability. In other words, sometimes knowing that $B$ happened doesn’t change our estimate of the probability of $A$ at all; they are not really related events, at least from the point of view of probability. This motivates the DEFINITION 4.2.4. Two events $A$ and $B$ are called independent if $P(A\mid B)=P(A)$. Plugging the defining formula for $P(A\mid B)$ into the definition of independent, it is easy to see that FACT 4.2.5. Events $A$ and $B$ are independent if and only if $P(A\cap B)=P(A)\cdot P(B)$. EXAMPLE 4.2.6. Still using the situation of Example 4.1.19, we saw in Example 4.2.3 that the events $\{2\}$ and $\{2, 4, 6\}$ are not independent since $P(\{2\}) = 1/6 \neq 1/3 = P(\{2\}\mid\{2, 4, 6\})$ nor are $\{1\}$ and $\{2, 4, 6\}$, since $P(\{1\}) = 1/6 \neq 0 = P(\{1\}\mid\{2, 4, 6\})\ .$ However, look at the events $\{1, 2\}$ and $\{2, 4, 6\}$: \begin{aligned} P(\{1, 2\}) = P(\{1\}) + P(\{2\}) &= 1/6 + 1/6\\ &= 1/3\\ &= \frac{1/6}{1/2}\\ &= \frac{P(\{2\})}{P(\{2, 4, 6\})}\\ &= \frac{P(\{1, 2\}\cap\{2, 4, 6\})}{P(\{2, 4, 6\})}\\ &= P(\{1, 2\}\mid\{2, 4, 6\})\end{aligned} which means that they are independent! EXAMPLE 4.2.7. We can now fully explain what was going on in Example 4.1.21. The two fair dice were supposed to be rolled in a way that the first roll had no effect on the second – this exactly means that the dice were rolled independently. As we saw, this then means that each individual outcome of sample space $S$ had probability $\frac{1}{36}$. But the first roll having any particular value is independent of the second roll having another, e.g., if $A=\{11, 12, 13, 14, 15, 16\}$ is the event in that sample space of getting a $1$ on the first roll and $B=\{14, 24, 34, 44, 54, 64\}$ is the event of getting a $4$ on the second roll, then events $A$ and $B$ are independent, as we check by using Fact 4.2.5: \begin{aligned} P(A\cap B) &= P(\{14\})\\ &= \frac{1}{36}\\ &= \frac16\cdot\frac16\\ &= \frac{6}{36}\cdot\frac{6}{36}\\ &=P(A)\cdot P(B)\ .\end{aligned} On the other hand, the event “the sum of the rolls is $4$,” which is $C=\{13, 22, 31\}$ as a set, is not independent of the value of the first roll, since $P(A\cap C)=P(\{13\})=\frac{1}{36}$ but $P(A)\cdot P(C)=\frac{6}{36}\cdot\frac{3}{36}=\frac16\cdot\frac{1}{12}=\frac{1}{72}$.
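Fact 4.2.5 is also a convenient criterion to check mechanically. This Python sketch (again just an illustration in a language of our choosing; the helper P is defined here, not taken from any library) re-does the two checks of Example 4.2.7 with exact fractions, so the equality tests are not fooled by rounding.

from fractions import Fraction
from itertools import product

S = list(product(range(1, 7), repeat=2))       # the 36 equiprobable outcomes

def P(E):
    # probability of an event in an equiprobable sample space: count / total
    return Fraction(len(E), len(S))

A = {o for o in S if o[0] == 1}                # a 1 on the first roll
B = {o for o in S if o[1] == 4}                # a 4 on the second roll
C = {o for o in S if sum(o) == 4}              # the two rolls sum to 4
print(P(A & B) == P(A) * P(B))                 # True: A and B are independent
print(P(A & C) == P(A) * P(C))                 # False: 1/36 is not 1/72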
Definition and First Examples Suppose we are doing a random experiment and there is some consequence of the result in which we are interested that can be measured by a number. The experiment might be playing a game of chance and the result could be how much you win or lose depending upon the outcome, or the experiment could be which part of the driver’s manual you randomly choose to study and the result how many points you get on the driver’s license test you take the next day, or the experiment might be giving a new drug to a random patient in a medical study and the result would be some medical measurement you make after treatment (blood pressure, white blood cell count, whatever), etc. There is a name for this situation in mathematics: DEFINITION 4.3.1. A choice of a number for each outcome of a random experiment is called a random variable [RV]. If the values an RV takes can be counted, because they are either finite or countably infinite in number, the RV is called discrete; if, instead, the RV takes on all the values in an interval of real numbers, the RV is called continuous. We usually use capital letters to denote RVs and the corresponding lowercase letter to indicate a particular numerical value the RV might have, like $X$ and $x$. EXAMPLE 4.3.2. Suppose we play a silly game where you pay me \$5 to play, then I flip a fair coin and I give you \$10 if the coin comes up heads and \$0 if it comes up tails. Then your net winnings, which would be +\$5 or -\$5 each time you play, are a random variable. Having only two possible values, this RV is certainly discrete. EXAMPLE 4.3.3. Weather phenomena vary so much, due to such small effects – such as the famous butterfly flapping its wings in the Amazon rain forest causing a hurricane in North America – that they appear to be a random phenomenon. Therefore, the temperature observed at some weather station is a continuous random variable whose value can be any real number in some range like $-100$ to $100$ (we’re doing science, so we use ${}^\circ C$). EXAMPLE 4.3.4. Suppose we look at the “roll two fair dice independently” experiment from Example 4.2.7 and Example 4.1.21, which was based on the probability model in Example 4.1.21 and sample space in Example 4.1.4. Let us consider in this situation the random variable $X$ whose value for some pair of dice rolls is the sum of the two numbers showing on the dice. So, for example, $X(11)=2$, $X(12)=3$, etc. In fact, let’s make a table of all the values of $X$: \begin{aligned} X(11) &= 2\\ X(21) = X(12) &= 3\\ X(31) = X(22) = X(13) &=4\\ X(41) = X(32) = X(23) = X(14) &= 5\\ X(51) = X(42) = X(33) = X(24) = X(15) &= 6\\ X(61) = X(52) = X(43) = X(34) = X(25) = X(16) &= 7\\ X(62) = X(53) = X(44) = X(35) = X(26) &= 8\\ X(63) = X(54) = X(45) = X(36) &= 9\\ X(64) = X(55) = X(46) &= 10\\ X(65) = X(56) &= 11\\ X(66) &= 12\end{aligned} Distributions for Discrete RVs The first thing we do with a random variable, usually, is talk about the probabilities associated with it. DEFINITION 4.3.5. Given a discrete RV $X$, its distribution is a list of all of the values $X$ takes on, together with the probability of it taking that value. [Note this is quite similar to Definition 1.3.5 – because it is essentially the same thing.] EXAMPLE 4.3.6. Let’s look at the RV, which we will call $X$, in the silly betting game of Example 4.3.2. As we noticed when we first defined that game, there are two possible values for this RV, +\$5 and -\$5.
We can actually think of “$X=5$” as describing an event, consisting of the set of all outcomes of the coin-flipping experiment which give you a net gain of \$5. Likewise, “$X=-5$” describes the event consisting of the set of all outcomes which give you a net gain of -\$5. These events are as follows: $\begin{array}{r|l} x & \text{Set of outcomes } o \text{ such that } X(o)=x \\ \hline 5 & \{H\} \\ -5 & \{T\} \end{array}$ Since it is a fair coin, the probabilities of these events are known (and very simple), so we conclude that the distribution of this RV is the table $\begin{array}{r|l} x & P(X=x) \\ \hline 5 & 1/2 \\ -5 & 1/2 \end{array}$ EXAMPLE 4.3.7. What about the $X=$ "sum of the face values" RV on the “roll two fair dice, independently” random experiment from Example 4.3.4? We have actually already done most of the work, finding out what values the RV can take and which outcomes cause each of those values. To summarize what we found: $\begin{array}{r|l} x & \text{Set of outcomes } o \text{ such that } X(o)=x \\ \hline 2 & \{11\} \\ 3 & \{21, 12\} \\ 4 & \{31, 22, 13\} \\ 5 & \{41, 32, 23, 14\} \\ 6 & \{51, 42, 33, 24, 15\} \\ 7 & \{61, 52, 43, 34, 25, 16\} \\ 8 & \{62, 53, 44, 35, 26\} \\ 9 & \{63, 54, 45, 36\} \\ 10 & \{64, 55, 46\} \\ 11 & \{65, 56\} \\ 12 & \{66\} \end{array}$ But we have seen that this is an equiprobable situation, where the probability of any event $A$ containing $n$ outcomes is $P(A)=n\cdot\frac{1}{36}$, so we can instantly fill in the distribution table for this RV as $\begin{array}{r|l} x & P(X=x) \\ \hline 2 & \frac{1}{36} \\ 3 & \frac{2}{36} = \frac{1}{18} \\ 4 & \frac{3}{36} = \frac{1}{12} \\ 5 & \frac{4}{36} = \frac{1}{9} \\ 6 & \frac{5}{36} \\ 7 & \frac{6}{36} = \frac{1}{6} \\ 8 & \frac{5}{36} \\ 9 & \frac{4}{36} = \frac{1}{9} \\ 10 & \frac{3}{36} = \frac{1}{12} \\ 11 & \frac{2}{36} = \frac{1}{18} \\ 12 & \frac{1}{36} \end{array}$ One thing to notice about distributions is that if we make a preliminary table, as we just did, of the events consisting of all outcomes which give a particular value when plugged into the RV, then we will have a collection of disjoint events which exhausts all of the sample space. What this means is that the sum of the probability values in the distribution table of an RV is the probability of the whole sample space of that RV’s experiment. Therefore FACT 4.3.8. The sum of the probabilities in a distribution table for a random variable must always equal $1$. It is quite a good idea, whenever you write down a distribution, to check that this Fact is true in your distribution table, simply as a sanity check against simple arithmetic errors.
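That sanity check, and in fact the whole distribution table of Example 4.3.7, can be produced with a few lines of code. Here is a Python sketch (the language and the names counts and dist are choices made only for this illustration) which counts how many of the 36 equiprobable outcomes give each sum and then turns the counts into exact probabilities.

from collections import Counter
from fractions import Fraction
from itertools import product

counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))   # how many outcomes give each sum
dist = {x: Fraction(n, 36) for x, n in sorted(counts.items())}       # the distribution table
print(dist[2], dist[7])        # 1/36 and 1/6, matching the table above
print(sum(dist.values()))      # 1, the sanity check of Fact 4.3.8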
Expectation for Discrete RVs Since we cannot predict what exactly will be the outcome each time we perform a random experiment, we cannot predict with precision what will be the value of an RV on that experiment, each time. But, as we did with the basic idea of probability, maybe we can at least learn something from the long-term trends. It turns out that it is relatively easy to figure out the mean value of an RV over a large number of runs of the experiment. Say $X$ is a discrete RV, for which the distribution tells us that $X$ takes the values $x_1, \dots, x_n$, each with corresponding probability $p_1, \dots, p_n$. Then the frequentist view of probability says that the probability $p_i$ that $X=x_i$ is (approximately) $n_i/N$, where $n_i$ is the number of times $X=x_i$ out of a large number $N$ of runs of the experiment. But if $p_i = n_i/N$ then, multiplying both sides by $N$, $n_i = p_i\,N \ .$ That means that, out of the $N$ runs of the experiment, $X$ will have the value $x_1$ in $p_1\,N$ runs, the value $x_2$ in $p_2\,N$ runs, etc. So the sum of $X$ over those $N$ runs will be $(p_1\,N)x_1+(p_2\,N)x_2 + \dots + (p_n\,N)x_n\ .$ Therefore the mean value of $X$ over these $N$ runs will be the total divided by $N$, which is $p_1\,x_1 + \dots + p_n x_n$. This motivates the definition DEFINITION 4.3.9. Given a discrete RV $X$ which takes on the values $x_1, \dots, x_n$ with probabilities $p_1, \dots, p_n$, the expectation [sometimes also called the expected value] of $X$ is the value $E(X) = \sum p_i\,x_i\ .$ By what we saw just before this definition, we have the following FACT 4.3.10. The expectation of a discrete RV is the mean of its values over many runs of the experiment. Note: The attentive reader will have noticed that we dealt above only with the case of a finite RV, not the case of a countably infinite one. It turns out that all of the above works quite well in that more complex case as well, so long as one is comfortable with a bit of mathematical technology called “summing an infinite series.” We do not assume such a comfort level in our readers at this time, so we shall pass over the details of expectations of infinite, discrete RVs. EXAMPLE 4.3.11. Let’s compute the expectation of the net profit RV $X$ in the silly betting game of Example 4.3.2, whose distribution we computed in Example 4.3.6. Plugging straight into the definition, we see $E(X)=\sum p_i\,x_i = \frac12\cdot5 + \frac12\cdot(-5)=2.5-2.5 = 0 \ .$ In other words, your average net gain playing this silly game many times will be zero. Note that does not mean anything like “if you lose enough times in a row, the chances of starting to win again will go up,” as many gamblers seem to believe, it just means that, in the very long run, we can expect the average winnings to be approximately zero – but no one knows how long that run has to be before the balancing of wins and losses happens. A more interesting example is EXAMPLE 4.3.12. In Example 4.3.7 we computed the distribution of the random variable $X=$ "sum of the face values" on the “roll two fair dice, independently” random experiment from Example 4.3.4. It is therefore easy to plug the values of the probabilities and RV values from the distribution table into the formula for expectation, to get \begin{aligned} E(X) &=\sum p_i\,x_i\\ &= \frac1{36}\cdot2 + \frac2{36}\cdot3 + \frac3{36}\cdot4 + \frac4{36}\cdot5 + \frac5{36}\cdot6 + \frac6{36}\cdot7 + \frac5{36}\cdot8 + \frac4{36}\cdot9 + \frac3{36}\cdot10 + \frac2{36}\cdot11 + \frac1{36}\cdot12\\ &= \frac{2\cdot1 + 3\cdot2 + 4\cdot3 + 5\cdot4 + 6\cdot5 + 7\cdot6 + 8\cdot5 + 9\cdot4 + 10\cdot3 + 11\cdot2 + 12\cdot1}{36}\\ &= 7\end{aligned} So if you roll two fair dice independently and add the numbers which come up, then do this process many times and take the average, in the long run that average will be the value $7$.
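The arithmetic of Example 4.3.12 is exactly the kind of thing a short program does well. This Python sketch (an illustration only; nothing in the text depends on it) rebuilds the distribution of the sum of two fair dice and evaluates $E(X)=\sum p_i\,x_i$ exactly.

from collections import Counter
from fractions import Fraction
from itertools import product

counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
E = sum(Fraction(n, 36) * x for x, n in counts.items())   # the sum of p_i * x_i
print(E)                                                  # 7, as computed by hand above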
Density Functions for Continuous RVs What about continuous random variables? Definition 4.3.5 of distribution explicitly excluded the case of continuous RVs, so does that mean we cannot do probability calculations in that case? There is, when we think about it, something of a problem here. A distribution is supposed to be a list of possible values of the RV and the probability of each such value. But if some continuous RV has values which are an interval of real numbers, there is just no way to list all such numbers – it has been known since the late 1800s that there is no way to make a list like that (see , for a description of a very pretty proof of this fact). In addition, the chance of some random process producing a real number that is exactly equal to some particular value really is zero: for two real numbers to be precisely equal requires infinite accuracy ... think of all of those decimal digits, marching off in orderly rows to infinity, which must match between the two numbers. Rather than a distribution, we do the following: DEFINITION 4.3.13. Let $X$ be a continuous random variable whose values are the real interval $[x_{min},x_{max}]$, where either $x_{min}$ or $x_{max}$ or both may be $\infty$. A [probability] density function for $X$ is a function $f(x)$ defined for $x$ in $[x_{min},x_{max}]$, meaning it is a curve with one $y$ value for each $x$ in that interval, with the property that $P(a<X<b) = \left\{\begin{matrix}\text{the area in the $xy$-plane above the $x$-axis, below}\\ \text{the curve $y=f(x)$, and between $x=a$ and $x=b$.}\end{matrix}\right.$ Graphically, what is going on here is Because of what we know about probabilities, the following is true (and fairly easy to prove): FACT 4.3.14. Suppose $f(x)$ is a density function for the continuous RV $X$ defined on the real interval $[x_{min},x_{max}]$. Then • For all $x$ in $[x_{min},x_{max}]$, $f(x)\ge0$. • The total area under the curve $y=f(x)$, above the $x$-axis, and between $x=x_{min}$ and $x=x_{max}$ is $1$. If we want to capture the idea of picking a real number on the interval $[x_{min},x_{max}]$ at random, where at random means that all numbers have the same chance of being picked (along the lines of fair in Definition 4.1.20), the height of the density function must be the same at all $x$. In other words, the density function $f(x)$ must be a constant $c$. In fact, because of the above Fact 4.3.14, that constant must have the value $\frac1{x_{max}-x_{min}}$. There is a name for this: DEFINITION 4.3.15. The uniform distribution on $[x_{min},x_{max}]$ is the distribution for the continuous RV whose values are the interval $[x_{min},x_{max}]$ and whose density function is the constant function $f(x)=\frac1{x_{max}-x_{min}}$. EXAMPLE 4.3.16. Suppose you take a bus to school every day and because of a chaotic home life (and, let’s face it, you don’t like mornings), you get to the bus stop at a pretty nearly perfectly random time. The bus also doesn’t stick perfectly to its schedule – but it is guaranteed to come at least every $30$ minutes. What this adds up to is the idea that your waiting time at the bus stop is a uniformly distributed RV on the interval $[0,30]$. If you wonder one morning how likely it then is that you will wait for less than $10$ minutes, you can simply compute the area of the rectangle whose base is the interval $[0,10]$ on the $x$-axis and whose height is $\frac1{30}$, which will be $P(0<X<10)=\text{base} \cdot \text{height} =10\cdot\frac1{30}=\frac13\ .$ A picture which should clarify this is the following, where the area of the shaded region represents the probability of having a waiting time from $0$ to $10$ minutes.
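For a uniform density, the area computation is nothing more than the area of a rectangle, so a code version is almost embarrassingly short; still, it is a useful template to compare with the curved densities coming later. This is a Python sketch (the names are invented here) of the bus-stop calculation from Example 4.3.16.

x_min, x_max = 0.0, 30.0
height = 1 / (x_max - x_min)      # the constant height of the uniform density, 1/30
a, b = 0.0, 10.0
print((b - a) * height)           # 0.333..., i.e., P(0 < X < 10) = 1/3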
One technical thing that can be confusing about continuous RVs and their density functions is the question of whether we should write $P(a<X<b)$ or $P(a\le X\le b)$. But if you think about it, we really have three possible events here: \begin{aligned} A &= \{\text{outcomes such that } X=a\},\\ M &= \{\text{outcomes such that } a<X<b\},\ \text{and}\\ B &= \{\text{outcomes such that } X=b\}\ .\end{aligned} Since $X$ always takes on exactly one value for any particular outcome, there is no overlap between these events: they are all disjoint. That means that $P(A\cup M\cup B) = P(A)+P(M)+P(B) = P(M)$ where the last equality is because, as we said above, the probability of a continuous RV taking on exactly one particular value, as it would in events $A$ and $B$, is $0$. The same would be true if we added merely one endpoint of the interval $(a,b)$. To summarize: FACT 4.3.17. If $X$ is a continuous RV with values forming the interval $[x_{min},x_{max}]$ and $a$ and $b$ are in this interval, then $P(a<X<b) = P(a<X\le b) = P(a\le X<b) = P(a\le X\le b)\ .$ As a consequence of this fact, some authors write probability formulæ about continuous RVs with “${}<{}$” and some with “${}\le{}$” and it makes no difference. Let’s do a slightly more interesting example than the uniform distribution: EXAMPLE 4.3.18. Suppose you repeatedly throw darts at a dartboard. You’re not a machine, so the darts hit in different places every time and you think of this as a repeatable random experiment whose outcomes are the locations of the dart on the board. You’re interested in the probabilities of getting close to the center of the board, so you decide for each experimental outcome (location of a dart you threw) to measure its distance to the center – this will be your RV $X$. Being good at this game, you hit near the center more than near the edge and you never completely miss the board, whose radius is $10cm$ – so $X$ is more likely to be near $0$ than near $10$, and it is never greater than $10$. What this means is that the RV has values forming the interval $[0,10]$ and the density function, defined on the same interval, should have its maximum value at $x=0$ and should go down to the value $0$ when $x=10$. You decide to model this situation with the simplest density function you can think of that has the properties we just noticed: a straight line from the highest point of the density function when $x=0$ down to the point $(10,0)$. The figure that will result will be a triangle, and since the total area must be $1$ and the base is $10$ units long, the height must be $.2$ units. [To get that, we solved the equation $1=\frac12bh=\frac12\cdot10\cdot h=5h$ for $h$.] So the graph must be as shown, and the equation of this linear density function would be $y=-\frac1{50}x+.2$ [why? – think about the slope and $y$-intercept!]. To the extent that you trust this model, you can now calculate the probabilities of events like, for example, “hitting the board within that center bull’s-eye of radius $1.5cm$,” which probability would be the area of the shaded region in this graph: The upper-right corner of this shaded region is at $x$-coordinate $1.5$ and is on the line, so its $y$-coordinate is $-\frac1{50}\cdot1.5+.2=.17$ . Since the region is a trapezoid, its area is the distance between the two parallel sides times the average of the lengths of the other two sides, giving $P(0<X<1.5) = 1.5\cdot\frac{.2+.17}2 = .2775\ .$ In other words, the probability of hitting the bull’s-eye, assuming this model of your dart-throwing prowess, is about $28$%. If you don’t remember the formula for the area of a trapezoid, you can do this problem another way: compute the probability of the complementary event, and then take one minus that number. The reason to do this would be that the complementary event corresponds to the shaded region here which is a triangle! Since we surely do remember the formula for the area of a triangle, we find that $P(1.5<X<10)=\frac12bh=\frac{1}{2}\cdot8.5\cdot.17=.7225$ and therefore $P(0<X<1.5)=1-P(1.5<X<10)=1-.7225=.2775$. [It’s nice that we got the same number this way, too!]
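The same two answers can be checked with a few lines of code, which is also a handy pattern when a density is not made of straight-line pieces. Here is a Python sketch (an illustration under the triangular-density assumption of Example 4.3.18; the function name f is ours) that evaluates both the trapezoid area and the complementary triangle area.

def f(x):
    # the linear density from Example 4.3.18: y = -x/50 + 0.2 on [0, 10]
    return -x / 50 + 0.2

trapezoid = 1.5 * (f(0) + f(1.5)) / 2     # width times the average of the two parallel sides
triangle = 0.5 * (10 - 1.5) * f(1.5)      # the complementary region is a triangle
print(trapezoid, 1 - triangle)            # both approximately 0.2775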
The Normal Distribution We’ve seen some examples of continuous RVs, but we have yet to meet the most important one of all. DEFINITION 4.3.19. The Normal distribution with mean $\mu_X$ and standard deviation $\sigma_X$ is the continuous RV which takes on all real values and is governed by the probability density function $\rho(x)=\frac1{\sqrt{2\pi\sigma_X^2}}\,e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}}\ .$ If $X$ is a random variable which follows this distribution, then we say that $X$ is Normally distributed with mean $\mu_X$ and standard deviation $\sigma_X$ or, in symbols, $X$ is $N(\mu_X, \sigma_X)$. [More technical works also call this the Gaussian distribution, named after the great mathematician Carl Friedrich Gauss. But we will not use that term again in this book after this sentence ends.] The good news about this complicated formula is that we don’t really have to do anything with it. We will collect some properties of the Normal distribution which have been derived from this formula, but these properties are useful enough, and other tools such as modern calculators and computers which can find specific areas we need under the graph of $y=\rho(x)$, that we won’t need to work directly with the above formula for $\rho(x)$ again. It is nice to know that $N(\mu_X, \sigma_X)$ does correspond to a specific, known density function, though, isn’t it? It helps to start with an image of what the Normal distribution looks like. Here is the density function for $\mu_X=17$ and $\sigma_X=3$: Now let’s collect some of these useful facts about the Normal distributions. FACT 4.3.20. The density function ρ for the Normal distribution N(μX, σX) is a positive function for all values of x and the total area under the curve y = ρ(x) is 1. This simply means that ρ is a good candidate for the probability density function for some continuous RV. FACT 4.3.21. The density function ρ for the Normal distribution N(μX, σX) is unimodal with maximum at x-coordinate μX. This means that N(μX, σX) is a possible model for an RV X which tends to have one main, central value, and less often has other values farther away. That center is at the location given by the parameter μX, so wherever we want to put the center of our model for X, we just use that for μX. FACT 4.3.22. The density function ρ for the Normal distribution N(μX, σX) is symmetric when reflected across the line x = μX. This means that the amount X misses its center, μX, tends to be about the same when it misses above μX and when it misses below μX. This would correspond to situations where you hit as much to the right as to the left of the center of a dartboard. Or when randomly picked people are as likely to be taller than the average height as they are to be shorter. Or when the time it takes a student to finish a standardized test is as likely to be less than the average as it is to be more than the average. Or in many, many other useful situations. FACT 4.3.23.
The density function ρ for the Normal distribution N(μX, σX) has tails in both directions which are quite thin, in fact get extremely thin as x → ±∞, but never go all the way to 0. This means that N(μX, σX) models situations where the amount X deviates from its average has no particular cut-off in the positive or negative direction. So you are throwing darts at a dart board, for example, and there is no way to know how far your dart may hit to the right or left of the center, maybe even way off the board and down the hall – although that may be very unlikely. Or perhaps the time it takes to complete some task is usually a certain amount, but every once in a while it might take much more time, so much more that there is really no natural limit you might know ahead of time. At the same time, those tails of the Normal distribution are so thin, for values far away from μX, that it can be a good model even for a situation where there is a natural limit to the values of X above or below μX. For example, heights of adult males (in inches) in the United States are fairly well approximated by N(69, 2.8), even though heights can never be less than 0 and N(69, 2.8) has an infinitely long tail to the left – because while that tail is non-zero all the way as x → −∞, it is very, very thin. All of the above Facts are clearly true on the first graph we saw of a Normal distribution density function. FACT 4.3.24. The graph of the density function ρ for the Normal distribution N(μX, σX) has a taller and narrower peak if σX is smaller, and a lower and wider peak if σX is larger. This allows the statistician to adjust how much variation there typically is in a normally distributed RV: By making σX small, we are saying that an RV X which is N(μX, σX) is very likely to have values quite close to its center, μX. If we make σX large, however, X is more likely to have values all over the place – still, centered at μX, but more likely to wander farther away. Let’s make a few versions of the graph we saw for ρ when μX was 17 and σX was 3, but now with different values of σX. First, if σX = 1, we get If, instead, σX = 5, then we get Finally, let’s superimpose all of the above density functions on each other, for one, combined graph: This variety of Normal distributions (one for each μX and σX) is a bit bewildering, so traditionally, we concentrate on one particularly nice one. DEFINITION 4.3.25. The Normal distribution with mean μX = 0 and standard deviation σX = 1 is called the standard Normal distribution and an RV [often written with the variable Z] that is N(0, 1) is described as a standard Normal RV. Here is what the standard Normal probability density function looks like: One nice thing about the standard Normal is that all other Normal distributions can be related to the standard. FACT 4.3.26. If X is N(μX, σX), then Z = (X − μX)/σX is standard Normal. This has a name. DEFINITION 4.3.27. The process of replacing a random variable X which is N(μX, σX) with the standard Normal RV Z = (X − μX)/σX is called standardizing a Normal RV. It used to be that standardization was an important step in solving problems with Normal RVs. A problem would be posed with information about some data that was modelled by a Normal RV with given mean μX and standard deviation σX. Then questions about probabilities for that data could be answered by standardizing the RV and looking up values in a single table of areas under the standard Normal curve.
Today, with electronic tools such as statistical calculators and computers, the standardization step is not really necessary. EXAMPLE 4.3.28. As we noted above, the heights of adult men in the United States, when measured in inches, give a RV $X$ which is $N(69, 2.8)$. What percentage of the population, then, is taller than $6$ feet? First of all, the frequentist point of view on probability tells us that what we are interested in is the probability that a randomly chosen adult American male will be taller than 6 feet – that will be the same as the percentage of the population this tall. In other words, we must find the probability that X > 72, since in inches, 6 feet becomes 72. As X is a continuous RV, we must find the area under its density curve, which is the ρ for N(69, 2.8), between 72 and ∞. That ∞ is a little intimidating, but since the tails of the Normal distribution are very thin, we can stop measuring area when x is some large number and we will have missed only a very tiny amount of area, so we will have a very good approximation. Let’s therefore find the area under ρ from x = 72 up to x = 1000. This can be done in many ways: • With a wide array of online tools – just search for “online normal probability calculator.” One of these yields the value .142. • With a TI-8x calculator, by typing normalcdf(72, 1000, 69, 2.8) which yields the value .1419884174. The general syntax here is normalcdf(a, b, μX, σX) to find P(a < X < b) when X is N(μX, σX). Note you get normalcdf from the calculator’s distributions menu. • Spreadsheets like LibreOffice Calc and Microsoft Excel will compute this by putting the following in a cell =1-NORM.DIST(72, 69, 2.8, 1) giving the value 0.1419883859. Here we are using the command NORM.DIST(b, μX, σX, 1) which computes the area under the density function for N(μX, σX) from −∞ to b. [The last input of “1” to NORM.DIST just tells it that we want to compute the area under the curve. If we used “0” instead, it would simply tell us the particular value of ρ(b), which is of little direct use in probability calculations.] Therefore, by doing 1 − NORM.DIST(72,69,2.8,1), we are taking the total area of 1 and subtracting the area to the left of 72, yielding the area to the right, as we wanted. Therefore, if you want the area between a and b on an N(μX, σX) RV using a spreadsheet, you would put =NORM.DIST(b, μX, σX, 1) - NORM.DIST(a, μX, σX, 1) in a cell. While standardizing a non-standard Normal RV and then looking up values in a table is an old-fashioned method that is tedious and no longer really needed, one old technique still comes in handy some times. It is based on the following: FACT 4.3.29. The 68-95-99.7 Rule: Let X be an N(μX, σX) RV. Then some special values of the area under the graph of the density curve ρ for X are nice to know: • The area under the graph of ρ from x = μX − σX to x = μX + σX, also known as P(μX − σX < X < μX + σX), is .68. • The area under the graph of ρ from x = μX − 2σX to x = μX + 2σX, also known as P(μX − 2σX < X < μX + 2σX), is .95. • The area under the graph of ρ from x = μX − 3σX to x = μX + 3σX, also known as P(μX − 3σX < X < μX + 3σX), is .997. This is also called The Empirical Rule by some authors. Visually: [Figure: the 68-95-99.7 Rule. By Dan Kernler – Own work, CC BY-SA 4.0, commons.wikimedia.org/w/index.php?curid=36506025]
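Both the spreadsheet value from Example 4.3.28 and the three areas in the 68-95-99.7 Rule can be reproduced with nothing more than the error function in Python’s standard math module (this is our own sketch, not a tool the text uses; normal_cdf is a name made up here).

from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # area under the N(mu, sigma) density from -infinity up to x
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, sigma = 69, 2.8
print(1 - normal_cdf(72, mu, sigma))          # approximately 0.1420, as in Example 4.3.28
for k in (1, 2, 3):
    print(normal_cdf(mu + k * sigma, mu, sigma) - normal_cdf(mu - k * sigma, mu, sigma))
# approximately 0.6827, 0.9545, 0.9973, matching the Rule's .68, .95, and .997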
In order to use the 68-95-99.7 Rule in understanding a particular situation, it is helpful to keep an eye out for the numbers that it talks about. Therefore, when looking at a problem, one should notice if the numbers μX + σX, μX − σX, μX + 2σX, μX − 2σX, μX + 3σX, or μX − 3σX are ever mentioned. If so, perhaps this Rule can help. EXAMPLE 4.3.30. In Example 4.3.28, we needed to compute P(X > 72) where X was known to be N(69, 2.8). Is 72 one of the numbers for which we should be looking, to use the Rule? Well, it’s greater than μX = 69, so we could hope that it was μX + σX, μX + 2σX, or μX + 3σX. But those values are μX + σX = 69 + 2.8 = 71.8, μX + 2σX = 69 + 5.6 = 74.6, and μX + 3σX = 69 + 8.4 = 77.4, none of which is what we need. Well, it is true that 72 ≈ 71.8, so we could use that fact and accept that we are only getting an approximate answer – an odd choice, given the availability of tools which will give us extremely precise answers, but let’s just go with it for a minute. Let’s see, the above Rule tells us that P(66.2 < X < 71.8) = P(μX − σX < X < μX + σX) = .68. Now since the total area under any density curve is 1, P(X < 66.2 or X > 71.8) = 1 − P(66.2 < X < 71.8) = 1 − .68 = .32. Since the event “X < 66.2” is disjoint from the event “X > 71.8” (X only takes on one value at a time, so it cannot be simultaneously less than 66.2 and greater than 71.8), we can use the simple rule for addition of probabilities: .32 = P(X < 66.2 or X > 71.8) = P(X < 66.2) + P(X > 71.8). Now, since the density function of the Normal distribution is symmetric around the line x = μX, the two terms on the right in the above equation are equal, which means that P(X > 71.8) = $\ \frac{1}{2}$ (P(X < 66.2) + P(X > 71.8)) = $\ \frac{1}{2}$ · .32 = .16. It might help to visualize the symmetry here as the equality of the two shaded areas in the following graph Now, using the fact that 72 ≈ 71.8, we may say that P(X > 72) ≈ P(X > 71.8) = .16 which, since we know that in fact P(X > 72) = .1419883859, is not a completely terrible approximation. EXAMPLE 4.3.31. Let’s do one more computation in the context of the heights of adult American males, as in the immediately above Example 4.3.30, but now one in which the 68-95-99.7 Rule gives a more precise answer. So say we are asked this time what proportion of adult American men are shorter than 63.4 inches. Why that height, in particular? Well, it’s how tall archaeologists have determined King Tut was in life. [No, that’s made up. It’s just a good number for this problem.] Again, looking through the values μX ± σX, μX ± 2σX, and μX ± 3σX, we notice that 63.4 = 69 − 5.6 = μX − 2σX. Therefore, to answer what fraction of adult American males are shorter than 63.4 inches amounts to asking what is the value of P(X < μX − 2σX). What we know about μX ± 2σX is that the probability of X being between those two values is P(μX − 2σX < X < μX + 2σX) = .95. As in the previous Example, the complementary event to “μX − 2σX < X < μX + 2σX,” which will have probability .05, consists of two pieces “X < μX − 2σX” and “X > μX + 2σX,” which have the same area by symmetry. Therefore \begin{aligned} P(X<63.4) &=P\left(X<\mu_{X}-2 \sigma_{X}\right) \\ &=\frac{1}{2}\left[P\left(X<\mu_{X}-2 \sigma_{X}\right)+P\left(X>\mu_{X}+2 \sigma_{X}\right)\right] \\ &=\frac{1}{2} P\left(X<\mu_{X}-2 \sigma_{X} \text{ or } X>\mu_{X}+2 \sigma_{X}\right) \quad \text{since they're disjoint} \\ &=\frac{1}{2} P\left(\left(\mu_{X}-2 \sigma_{X}<X<\mu_{X}+2 \sigma_{X}\right)^{c}\right) \\ &=\frac{1}{2}\left[1-P\left(\mu_{X}-2 \sigma_{X}<X<\mu_{X}+2 \sigma_{X}\right)\right] \quad \text{by prob. for complements} \\ &=\frac{1}{2} \cdot .05 \\ &=.025 \end{aligned} Just the way finding the particular X values μX ± σX, μX ± 2σX, and μX ± 3σX in a particular situation would tell us the 68-95-99.7 Rule might be useful, so also would finding the probability values .68, .95, .997, or their complements .32, .05, or .003 – or even half of one of those numbers, using the symmetry. EXAMPLE 4.3.32. Continuing with the scenario of Example 4.3.30, let us now figure out what is the height above which there will only be .15% of the population. Notice that .15%, or the proportion .0015, is not one of the numbers in the 68-95-99.7 Rule, nor is it one of their complements – but it is half of one of the complements, being half of .003 . Now, .003 is the complementary probability to .997, which was the probability in the range μX ± 3σX. As we have seen already (twice), the complementary area to that in the region between μX ± 3σX consists of two thin tails which are of equal area, each of these areas being $\ \frac{1}{2}$(1 − .997) = .0015 . This all means that the beginning of that upper tail, above which value lies .15% of the population, is the X value μX + 3σX = 69 + 3 · 2.8 = 77.4. Therefore .15% of adult American males are taller than 77.4 inches.
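As a quick cross-check of that last example (and of how much the Rule’s rounded numbers matter), the same standard-library sketch used earlier gives the tail area above μX + 3σX directly. This is again just illustrative Python; note that the Rule’s .997 is itself a rounding of about .9973, so the true tail is closer to .00135 than to exactly .0015.

from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # area under the N(mu, sigma) density from -infinity up to x
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, sigma = 69, 2.8
print(1 - normal_cdf(mu + 3 * sigma, mu, sigma))   # approximately 0.00135, i.e., about .135%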
A basketball player shoots four free throws, and you write down the sequence of hits and misses. Write down the sample space for thinking of this whole thing as a random experiment. In another game, a basketball player shoots four free throws, and you write down the number of baskets she makes. Write down the sample space for this different random experiment. You take a normal, six-sided die, paint over all the sides, and then write the letter A on all six sides. You then roll the die. What is the sample space of this experiment? Also, list all the possible events for this experiment. [Hint: it may help to look at Example [4.1.9.] Now you paint it over again, and write A on half the sides and B on the other half. Again, say what is the sample space and list all possible events. One more time you paint over the sides, then write A on one third of the faces, B on one third of the other faces, and C on the remaining third. Again, give the sample space and all events. Make a conjecture about how many events there will be if the sample space has \(n\) outcomes in it. Describe a random experiment whose sample space will be the set of all points on the (standard, 2-dimensional, \(xy\)-) plane. The most common last [family] name in the world seems to be Wang [or the variant Wong]. Approximately 1.3% of the global population has this last name. The most common first name in the world seems to be Mohammad [or one of several variants]. Some estimates suggest that perhaps as many as 2% of the global population has this first name. Can you tell, from the above information, what percentage of the world population has the name “Mohammad Wang?” If so, why and what would it be? If not, why not, and can you make any guess about what that percentage would be, anyway? [Hint: think of all the above percentages as probabilities, where the experiment is picking a random person on Earth and asking their name. Carefully describe some events for this experiment, relevant to this problem, and say what their probabilities are. Tell how combining events will or will not compute the probability of the desired event, corresponding to the desired percentage.] [Note: don’t bet on the numbers given in this problem being too accurate – they might be, but there is a wide range of published values for them in public information from different sources, so probably they are only a very crude approximation.] Suppose that when people have kids, the chance of having a boy or a girl is the same. Suppose also that the sexes of successive children in the same family are independent. [Neither of these is exactly true in real life, but let’s pretend for this problem.] The Wang family has two children. If we think of the sexes of these children as the result of a random experiment, what is the sample space? Note that we’re interested in birth order as well, so that should be apparent from the sample space. What are the probabilities of each of the outcomes in your sample space? Why? Now suppose we know that at least one of the Wang children is a boy. Given this information, what is the probability that the Wangs have two boys? Suppose instead that we know that the Wangs’ older child is a boy. What is the probability, given this different information, that both Wang children are boys? To solve this, clearly define events in words and with symbols, compute probabilities, and combine these to get the desired probability. Explain everything you do, of course. Imagine you live on a street with a stop light at both ends of the block. 
You watch cars driving down the street and notice which ones have to stop at the \(1^{st}\) and/or \(2^{nd}\) light (or none). After counting cars and stops for a year, you have seen what a very large number – call it \(N\) – of cars did. Now imagine you decide to think about the experiment “pick a car on this street from the last year at random and notice at which light or lights it has to stop.” Let \(A\) be the event “the car had to stop at the \(1^{st}\) light” and \(B\) be the event “the car had to stop at the \(2^{nd}\) light.” What else would you have to count, over your year of data collection, to estimate the probabilities of \(A\) and of \(B\)? Pick some numbers for all of these variables and show what the probabilities would then be. Make a Venn diagram of this situation. Label each of the four connected regions of this diagram (the countries, if this were a map) with a number from 1 to 4, then provide a key which gives, for each of these numbered regions, both a formula in terms of \(A\), \(B\), unions, intersections, and/or complements, and then also a description entirely in words which do not mention \(A\) or \(B\) or set operations at all. Then put a decimal number in each of the regions indicating the probability of the corresponding event. Wait – for one of the regions, you can’t fill in the probability yet, with the information you’ve collected so far. What else would you have had to count over the data-collection year to estimate this probability? Make up a number and show what the corresponding probability would then be, and add that number to your Venn diagram. Finally, using the probabilities you have chosen, are the events \(A\) and \(B\) independent? Why or why not? Explain in words what this means, in this context. EXERCISE 4.7. Here is a table of the prizes for the EnergyCube Lottery: $\begin{array}{r|l} \text{Prize} & \text{Odds of winning} \\ \hline \$1{,}000{,}000 & 1 \text{ in } 12{,}000{,}000 \\ \$50{,}000 & 1 \text{ in } 1{,}000{,}000 \\ \$100 & 1 \text{ in } 10{,}000 \\ \$7 & 1 \text{ in } 300 \\ \$4 & 1 \text{ in } 25 \end{array}$ We want to transform the above into the [probability] distribution of a random variable \(X\). First of all, let’s make \(X\) represent the net gain a Lottery player would have for the various outcomes of playing – note that the ticket to play costs \$2. How would you modify the above numbers to take into account the ticket costs? Next, notice that the above table gives winning odds, not probabilities. How will you compute the probabilities from those odds? Recall that saying something has odds of “1 in \(n\)” means that it tends to happen about once out of \(n\) runs of the experiment. You might use the word frequentist somewhere in your answer here. Finally, something is missing from the above table of outcomes. What prize – actually the most common one! – is missing from the table, and how will you figure out its probability? After giving all of the above explanations, now write down the full, formal, probability distribution for this “net gain in EnergyCube Lottery plays” random variable, \(X\). In this problem, some of the numbers are quite small and will disappear entirely if you round them. So use a calculator or computer to compute everything here and keep as much accuracy as your device shows for each step of the calculation. EXERCISE 4.8. Continuing with the same scenario as in the previous Exercise 4.7, with the EnergyCube Lottery: What would be your expectation of the average gain per play of this Lottery? Explain fully, of course.
So if you were to play every weekday for a school year (so: five days a week for the 15 weeks of each semester, two semesters in the year), how much would you expect to win or lose in total? Again, use as much accuracy as your computational device has, at every step of these calculations. EXERCISE 4.9. Last problem in the situation of the above Exercise 4.7 about the En- ergyCube Lottery: Suppose your friend plays the lottery and calls you to tell you that she won ... but her cell phone runs out of charge in the middle of the call, and you don’t know how much she won. Given the information that she won, what is the probability that she won more than \$1,000? Continue to use as much numerical accuracy as you can. EXERCISE 4.10. Let’s make a modified version of Example 4.3.18. You are again throwing darts at a dartboard, but you notice that you are very left-handed so your throws pull to the right much more than they pull to the left. What this means is that it is not a very good model of your dart throws just to notice how far they are from the center of the dartboard, it would be better to notice the x-coordinate of where the dart hits, measuring (in cm) with the center of the board at x location 0. This will be your new choice of RV, which you will still call X. You throw repeatedly at the board, measure \(X\), and find out that you never hit more than \(10cm\) to the right of the center, while you are more accurate to the left and never hit more than \(5cm\) in that direction. You do hit the middle (\(X=0\)) the most often, and you guess that the probability decreases linearly to those edges where you never hit. Explain why your \(X\) is a continuous RV, and what its interval \([x_{min},x_{max}]\) of values is. Now sketch the graph of the probability density function for \(X\). [Hint: it will be a triangle, with one side along the interval of values \([x_{min},x_{max}]\) on the \(x\)-axis, and its maximum at the center of the dartboard.] Make sure that you put tick marks and numbers on the axes, enough so that the coordinates of the corners of the triangular graph can be seen easily. [Another hint: it is a useful fact that the total area under the graph of any probability density function is \(1\).] What is the probability that your next throw will be in the bull’s-eye, whose radius, remember, is \(1.5cm\) and which therefore stretches from \(x\) coordinate \(-1.5\) to \(x\)-coordinate \(1.5\)? EXERCISE 4.11. Here’s our last discussion of dartboards [maybe?]: One of the prob- lems with the probability density function approaches from Example 4.3.18 and Exer- cise 4.10 is the assumption that the functions were linear (at least in pieces). It would be much more sensible to assume they were more bell-shaped, maybe like the Normal distribution. Suppose your friend Mohammad Wang is an excellent dart-player. He throws at a board and you measure the x-coordinate of where the dart goes, as in Exercise 4.10 with the center corresponding to x = 0. You notice that his darts are rarely – only 5% of the time in total! – more than 5cm from the center of the board. Fill in the blanks: “MW’s dart hits’ x-coordinates are an RV X which is Normally distributed with mean μX = and standard deviation σX=_______.” Explain, of course. How often does MW completely miss the dartboard? Its radius is \(10cm\). How often does he hit the bull’s-eye? Remember its radius is \(1.5cm\), meaning that it stretches from \(x\) coordinate \(-1.5\) to \(x\)-coordinate \(1.5\).
In this chapter, we start to get very practical on the matter of tracking down good data in the wild and bringing it home. This is actually a very large and important subject – there are entire courses and books on Experimental Design, Survey Methodology, and Research Methods specialized for a range of particular disciplines (medicine, psychology, sociology, criminology, manufacturing reliability, etc.) – so in this book we will only give a broad introduction to some of the basic issues and approaches.

The first component of this introduction will give several of the important definitions for experimental design in the most direct, simplest context: collecting sample data in an attempt to understand a single number about an entire population. As we have mentioned before, usually a population is too large or simply inaccessible, and so to determine an important feature of a population of interest, a researcher must use the accessible, affordable data of a sample. If this approach is to work, the sample must be chosen carefully, so as to avoid the dreaded bias. The basic structure of such studies, the meaning of bias, and some of the methods to select bias-minimizing samples, are the subject of the first section of this chapter.

It is more complicated to collect data which will give evidence for causality, for a causal relationship between two variables under study. But we are often interested in such relationships – which drug is a more effective treatment for some illness, what advertisement will induce more people to buy a particular product, or what public policy leads to the strongest economy. In order to investigate causal relationships, it is necessary not merely to observe, but to do an actual experiment; for causal questions about human subjects, the gold standard is a randomized, placebo-controlled, double-blind experiment, sometimes called simply a randomized, controlled trial [RCT], which we describe in the second section.

There is something in the randomized, controlled experiment which makes many people nervous: those in the control group are not getting what the experimenter likely thinks is the best treatment. So, even though society as a whole may benefit from the knowledge we get through RCTs, it almost seems as if some test subjects are being mistreated. While the scientific research community has come to terms with this apparent injustice, there are definitely experiments which could go too far and cross important ethical lines. In fact, history has shown that a number of experiments have actually been done which we now consider to be clearly unethical. It is therefore important to state clearly some ethical guidelines which future investigations can follow in order to be confident of avoiding mistreatment of test subjects. One particular set of such guidelines for ethical experimentation on human subjects is the topic of the third and last section of this chapter.

05: Bringing Home the Data

Suppose we are studying some population, and in particular a variable defined on that population. We are typically interested in finding out the following kind of characteristic of our population:

DEFINITION 5.1.1. A [population] parameter is a number which is computed by knowing the values of a variable for every individual in the population.

EXAMPLE 5.1.2.
If $X$ is a quantitative variable on some population, the population mean $\mu_X$ of $X$ is a population parameter – to compute this mean, you need to add together the values of $X$ for all of the individuals in the population. Likewise, the population standard deviation $\sigma_X$ of $X$ is another parameter. For example, we asserted in Example 4.3.28 that the heights of adult American men are $N(69, 2.8)$. Both the 69 and 2.8 are population parameters here.

EXAMPLE 5.1.3. If, instead, $X$ were a categorical variable on some population, then the relative frequency (also called the population proportion) of some value $A$ of $X$ – the fraction of the population that has that value – is another population parameter. After all, to compute this fraction, you have to look at every single individual in the population, all $N$ of them, say, and see how many of them, say $N_A$, make $X$ take the value $A$, then compute the relative frequency $N_A/N$.

Sometimes one doesn’t have to look at the specific individuals and compute that fraction $N_A/N$ to find a population proportion. For example, in Example 4.3.28, we found that 14.1988% of adult American men are taller than 6 feet, assuming, as stated above, that adult American men’s heights are distributed like $N(69, 2.8)$ – using, notice, those parameters $\mu_X$ and $\sigma_X$ of the height distribution, for which the entire population must have been examined. What this means is that the relative frequency of the value “yes” for the categorical variable “is this person taller than 6 feet?” is .141988. This relative frequency is also a parameter of the same population of adult American males.

Parameters must be thought of as fixed numbers, out there in the world, which have a single, specific value. However, they are very hard for researchers to get their hands on, since to compute a parameter, the variable values for the entire population must be measured. So while the parameter is a single, fixed value, usually that value is unknown. What can (and does) change is a value coming from a sample.

DEFINITION 5.1.4. A [sample] statistic is a number which is computed by knowing the values of a variable for the individuals from only a sample.

EXAMPLE 5.1.5. Clearly, if we have a population and quantitative variable $X$, then any time we choose a sample out of that population, we get a sample mean $\overline{x}$ and sample standard deviation $S_X$, both of which are statistics. Similarly, if we instead have a categorical variable $Y$ on some population, we take a sample of size $n$ out of the population and count how many individuals in the sample – say $n_A$ – have some value $A$ for their value of $Y$; then $n_A/n$ is a statistic (which is also called the sample proportion and frequently denoted $\hat{p}$).

Two different researchers will choose different samples and so will almost certainly have different values for the statistics they compute, even if they are using the same formula for their statistic and are looking at the same population. Likewise, one researcher taking repeated samples from the same population will probably get different values each time for the statistics they compute. So we should think of a statistic as an easy, accessible number, changing with each sample we take, that is merely an estimate of the thing we want, the parameter, which is one, fixed number out in the world, but hidden from our knowledge.
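To make the contrast between a fixed parameter and a changing statistic concrete, here is a minimal simulation sketch in Python. The population of 100,000 hypothetical heights, the sample sizes, and the 72-inch cutoff for the categorical “taller than 6 feet” variable are all invented for illustration; nothing below comes from a real data set.

```python
import random
import statistics

random.seed(1)

# A made-up "population": 100,000 hypothetical heights (in inches), just for illustration.
population = [random.gauss(69, 2.8) for _ in range(100_000)]

# Population parameters: we need every individual to compute these.
mu_X = statistics.mean(population)
p_tall = sum(h > 72 for h in population) / len(population)  # proportion taller than 6 feet

# Sample statistics: different every time we draw a new sample,
# and usually closer to the parameters as the sample size grows.
for n in [10, 100, 1000, 10_000]:
    sample = random.sample(population, n)      # a simple random sample of size n
    x_bar = statistics.mean(sample)            # sample mean
    p_hat = sum(h > 72 for h in sample) / n    # sample proportion
    print(f"n={n:6d}  x_bar={x_bar:7.3f}  p_hat={p_hat:.3f}")

print(f"parameters: mu_X={mu_X:7.3f}  p={p_tall:.3f}")
```

Re-running this with a different seed gives different statistics every time, while the two parameters stay fixed; that is exactly the picture which the Law of Large Numbers, below, makes precise for the sample mean.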
So while getting sample statistics is practical, we need to be careful that they are good estimates of the corresponding parameters. Here are some ways to get better estimates of this kind:

1. Pick a larger sample. This seems quite obvious, because the larger the sample, the closer it is to being the whole population, and so the better its statistics will estimate the parameters of interest. But in fact, things are not really quite so simple. In many very practical situations, it would be completely infeasible to collect sample data on a sample which was anything more than a minuscule part of the population of interest. For example, a national news organization might want to survey the American population, but it would be entirely prohibitive to get more than a few thousand sample data values, out of a population of hundreds of millions – so, on the order of tenths of a percent. Fortunately, there is a general theorem which tells us that, in the long run, one particular statistic is a good estimator of one particular parameter:

FACT 5.1.6. The Law of Large Numbers: Let $X$ be a quantitative variable on some population. Then as the sizes of samples (each made up of individuals chosen randomly and independently from the population) get bigger and bigger, the corresponding sample means $\overline{x}$ of these SRSs get closer and closer to the population mean $\mu_X$.

2. Pick a better statistic. It makes sense to use the sample mean as a statistic to estimate the population mean and the sample proportion to estimate the population proportion. But it is less clear where the somewhat odd formula for the sample standard deviation came from – remember, it differs from the population standard deviation by having an $n-1$ in the denominator instead of an $n$. The reason, whose proof is too technical to be included here, is that the formula we gave for $S_X$ is a better estimator for $\sigma_X$ than would have been the version which simply had the same $n$ in the denominator.

In a larger sense, “picking a better statistic” is about getting higher quality estimates from your sample. Certainly using a statistic with a clever formula is one way to do that. Another is to make sure that your data is of the highest quality possible. For example, if you are surveying people for their opinions, the way you ask a question can have enormous consequences in how your subjects answer: “Do you support a woman’s right to control her own body and her reproduction?” and “Do you want to protect the lives of unborn children?” are two heavy-handed approaches to asking a question about abortion. Collectively, the impacts of how a question is asked are called wording effects, and are an important topic social scientists must understand well.

3. Pick a better sample. Sample quality is, in many ways, the most important and hardest issue in this kind of statistical study. What we want, of course, is a sample for which the statistic(s) we can compute give good approximations for the parameters in which we are interested. There is a name for this kind of sample, and one technique which is best able to create these good samples: randomness.

DEFINITION 5.1.7. A sample is said to be representative of its population if the values of its sample means and sample proportions for all variables relevant to the subject of the research project are good approximations of the corresponding population means and proportions.

It follows almost by definition that a representative sample is a good one to use in the process of, as we have described above, using a sample statistic as an estimate of a population parameter in which you are interested. The question is, of course, how to get a representative sample.
The answer is that it is extremely hard to build a procedure for choosing samples which guarantees representative samples, but there is a method – using randomness – which at least can reduce as much as possible one specific kind of problem samples might have.

DEFINITION 5.1.8. Any process in a statistical study which tends to produce results which are systematically different from the true values of the population parameters under investigation is called biased. Such a systematic deviation from correct values is called bias.

The key word in this definition is systematically: a process which has a lot of variation might be annoying to use, it might require the researcher to collect a huge amount of data to average together, for example, in order for the estimate to settle down on something near the true value – but it might nevertheless not be biased. A biased process might have less variation, might seem to get close to some particular value very quickly, with little data, but would never give the correct answer, because of the systematic deviation it contained.

The hard part of finding bias is to figure out what might be causing that systematic deviation in the results. When presented with a sampling method for which we wish to think about sources of possible bias, we have to get creative.

EXAMPLE 5.1.9. In a democracy, the opinion of citizens about how good a job their elected officials are doing seems like an interesting measure of the health of that democracy. At the time of this writing, approximately two months after the inauguration of the 45th president of the United States, the widely respected Gallup polling organization reports [Gal17] that 56% of the population approve of the job the president is doing and 40% disapprove. [Presumably, 4% were neutral or had no opinion.] According to the site from which these numbers are taken, “Gallup tracks daily the percentage of Americans who approve or disapprove of the job Donald Trump is doing as president. Daily results are based on telephone interviews with approximately 1,500 national adults....”

Presumably, Gallup used the sample proportion as an estimator computed with the responses from their sample of 1500 adults. So it was a good statistic for the job, and the sample size is quite respectable, even if not a very large fraction of the entire adult American population, which is presumably the target population of this study. Gallup has the reputation for being a quite neutral and careful organization, so we can also hope that the way they worded their questions did not introduce any bias.

A source of bias that does perhaps cause some concern here is that phrase “telephone interviews.” It is impossible to do telephone interviews with people who don’t have telephones, so there is one part of the population they will miss completely. Presumably, also, Gallup knew that if they called during normal working days and hours, they would not get working people at home or even on cell phones. So perhaps they called also, or only, in the evenings and on weekends – but this approach would tend systematically to miss people who had to work very long and/or late hours. So we might worry that a strategy of telephone interviews only would be biased against those who work the longest hours, and those people might tend to have similar political views. In the end, that would result in a systematic error in this sampling method.
Another potential source of bias is that even when a person is able to answer their phone, it is their choice to do so: there is little reward in taking the time to answer an opinion survey, and it is easy simply not to answer or to hang up. It is likely, then, that only those who have quite strong feelings, either positive or negative, or some other strong personal or emotional reason to take the time, will have provided complete responses to this telephone survey. This is potentially distorting, even if we cannot be sure that the effects are systematically in one direction or the other.

[Of course, Gallup pollsters have an enormous amount of experience and have presumably thought the above issues through completely and figured out how to work around them – but we have no particular reason to be completely confident in their results other than our faith in their reputation, without more details about what work-arounds they used. In science, doubt is always appropriate.]

One of the issues we just mentioned about the Gallup polling of presidential approval ratings has its own name:

DEFINITION 5.1.10. A sample selection method that involves any substantial choice of whether to participate or not suffers from what is called voluntary sample bias.

Voluntary sample bias is incredibly common, and yet is such a strong source of bias that it should be taken as a reason to disregard completely the supposed results of any study that it affects. Volunteers tend to have strong feelings that drive them to participate, which can have entirely unpredictable but systematic distorting influence on the data they provide. Web-based opinion surveys, numbers of thumbs-up or -down or of positive or negative comments on a social media post, percentages of people who call in to vote for or against some public statement, etc., etc. – such widely used polling methods produce nonsensical results which will be instantly rejected by anyone with even a modest statistical knowledge. Don’t fall for them!

We did promise above one technique which can robustly combat bias: randomness. Since bias is based on a systematic distortion of data, any method which completely breaks all systematic processes in, for example, sample selection, will avoid bias. The strongest such sampling method is as follows.

DEFINITION 5.1.11. A simple random sample [SRS] is a sample of size $n$, say, chosen from a population by a method which produces all samples of size $n$ from that population with equal probability.

It is oddly difficult to tell if a particular sample is an SRS. Given just a sample, in fact, there is no way to tell – one must ask to see the procedure that was followed to make that sample and then check to see if that procedure would produce any subset of the population, of the same size as the sample, with equal probability. Often, it is easier to see that a sampling method does not make SRSs, by finding some subsets of the population which have the correct size but which the sampling method would never choose, meaning that they have probability zero of being chosen. That would mean some subsets of the correct size would have zero probability and others would have a positive probability, meaning that not all subsets of that size would have the same probability of being chosen.

Note also that in an SRS it is not that every individual has the same probability of being chosen, it must be that every group of individuals of the size of the desired sample has the same probability of being chosen. These are not the same thing!
EXAMPLE 5.1.12. Suppose that on Noah’s Ark, the animals decide they will form an advisory council consisting of an SRS of 100 animals, to help Noah and his family run a tight ship. So a chimpanzee (because it has good hands) puts many small pieces of paper in a basket, one for each type of animal on the Ark, with the animal’s name written on the paper. Then the chimpanzee shakes the basket well and picks fifty names from the basket. Both members of the breeding pair of that named type of animal are then put on the advisory council. Is this an SRS from the entire population of animals on the Ark?

First of all, each animal name has a chance of 50/N, where N is the total number of types of animals on the Ark, of being chosen. Then both the male and female of that type of animal are put on the council. In other words, every individual animal has the same probability – 50/N – of being on the council. And yet there are certainly collections of 100 animals from the Ark which do not consist of 50 breeding pairs: for example, take 50 female birds and 50 female mammals; that collection of 100 animals has no breeding pairs at all. Therefore this is a selection method which picks each individual for the sample with equal probability, but not each collection of 100 animals with the same probability. So it is not an SRS.

With a computer, it is fairly quick and easy to generate an SRS:

FACT 5.1.13. Suppose we have a population of size N out of which we want to pick an SRS of size n, where n < N. Here is one way to do so: assign every individual in the population a unique ID number, with say d digits (maybe student IDs, Social Security numbers, new numbers from 1 to N chosen in any way you like – randomness not needed here, there is plenty of randomness in the next step). Have a computer generate completely random d-digit numbers, one after the other. Each time, pick the individual from the population with that ID number as a new member of the sample. If the next random number generated by the computer is a repeat of one seen before, or if it is a d-digit number that doesn’t happen to be any individual’s ID number, then simply skip to the next random number from the computer. Keep going until you have n individuals in your sample. The sample created in this way will be an SRS.
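For readers who want to see the procedure of FACT 5.1.13 in action, here is a minimal sketch in Python. The population of ID numbers is hypothetical, and in practice one would probably just call a library routine such as random.sample directly; the point of the sketch is only the “generate a random number, skip repeats and non-IDs, repeat until the sample is full” loop.

```python
import random

def srs_by_id(id_numbers, n, d, rng=random):
    """Pick an SRS of size n from the individuals whose (unique, d-digit)
    ID numbers are given, following the generate-and-skip procedure."""
    ids = set(id_numbers)
    chosen = set()
    while len(chosen) < n:
        candidate = rng.randrange(10 ** d)      # a completely random d-digit number
        if candidate in ids and candidate not in chosen:
            chosen.add(candidate)               # a new member of the sample
        # otherwise: a repeat, or not anyone's ID -- just skip to the next number
    return list(chosen)

# Hypothetical example: a population of 500 individuals with 4-digit IDs.
random.seed(2)
population_ids = random.sample(range(10_000), 500)
sample_ids = srs_by_id(population_ids, n=30, d=4)
print(sorted(sample_ids)[:10], "...")
```

Because every d-digit number is equally likely at each step, every collection of n ID numbers ends up with the same chance of being the final sample, which is exactly the defining property of an SRS.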
If we want to draw conclusions about causality, observations are insufficient. This is because simply seeing B always follow A out in the world does not tell us that A causes B. For example, maybe they are both caused by Z, which we didn’t notice had always happened before those A and B, and A is simply a bit faster than B, so it seems always to precede, even to cause, B. If, on the other hand, we go out in the world and do A and then always see B, we would have more convincing evidence that A causes B. Therefore, we distinguish two types of statistical studies:

DEFINITION 5.2.1. An observational study is any statistical study in which the researchers merely look at (measure, talk to, etc.) the individuals in which they are interested. If, instead, the researchers also change something in the environment of their test subjects before (and possibly after and during) taking their measurements, then the study is an experiment.

EXAMPLE 5.2.2. A simple survey of, for example, opinions of voters about political candidates, is an observational study. If, as is sometimes done, the subject is told something like “let me read you a statement about these candidates and then ask you your opinion again” [this is an example of something called push-polling], then the study has become an experiment.

Note that to be considered an experiment, it is not necessary that the study use principles of good experimental design, such as those described in this chapter, merely that the researchers do something to their subjects.

EXAMPLE 5.2.3. If I slap my brother, notice him yelp with pain, and triumphantly turn to you and say “See, slapping hurts!” then I’ve done an experiment, simply because I did something, even if it is a stupid experiment [tiny non-random sample, no comparison, etc., etc.]. If I watch you slap someone, who cries out with pain, and then I make the same triumphant announcement, then I’ve only done an observational study, since the action taken was not by me, the “researcher.”

When we do an experiment, we typically impose our intentional change on a number of test subjects. In this case, no matter the subject of inquiry, we steal a word from the medical community:

DEFINITION 5.2.4. The thing we do to the test subjects in an experiment is called the treatment.

Control Groups

If we are doing an experiment to try to understand something in the world, we should not simply do the interesting new treatment to all of our subjects and see what happens. In a certain sense, if we did that, we would simply be changing the whole world (at least the world of all of our test subjects) and then doing an observational study, which, as we have said, can provide only weak evidence of causality. To really do an experiment, we must compare two treatments. Therefore any real experiment involves at least two groups.

DEFINITION 5.2.5. In an experiment, the collection of test subjects which gets the new, interesting treatment is called the experimental group, while the remaining subjects, who get some other treatment such as simply the past common practice, are collectively called the control group.

When we have to put test subjects into one of these two groups, it is very important to use a selection method which has no bias. The only way to be sure of this is [as discussed before] to use a random assignment of subjects to the experimental or control group.
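As a small illustration of that last point, here is a sketch in Python of one unbiased way to make the assignment: shuffle the list of subjects randomly and cut it in half. The subject labels are, of course, made up.

```python
import random

def random_assignment(subjects, rng=random):
    """Randomly split subjects into an experimental group and a control group."""
    shuffled = subjects[:]          # copy, so the caller's list is untouched
    rng.shuffle(shuffled)           # every ordering is equally likely
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (experimental, control)

random.seed(3)
subjects = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]
experimental, control = random_assignment(subjects)
print("experimental:", experimental)
print("control:     ", control)
```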
Human-Subject Experiments: The Placebo Effect

Humans are particularly hard to study, because their awareness of their environments can have surprising effects on what they do and even what happens, physically, to their bodies. This is not because people fake the results: there can be real changes in patients’ bodies even when you give them a medicine which is not physiologically effective, and real changes in their performance on tests or in athletic events when you merely convince them that they will do better, etc.

DEFINITION 5.2.6. A beneficial consequence of some treatment which should not directly [e.g., physiologically] cause an improvement is called the Placebo Effect. Such a “fake” treatment, which looks real but has no actual physiological effect, is called a placebo.

Note that even though the Placebo Effect is based on giving subjects a “fake” treatment, the effect itself is not fake. It is due to a complex mind-body connection which really does change the concrete, objectively measurable situation of the test subjects.

In the early days of research into the Placebo Effect, the pill that doctors would give as a placebo would look like other pills, but would be made just of sugar (glucose), which (in those quite small quantities) has essentially no physiological consequences and so is a sort of neutral dummy pill. We still often call medical placebos sugar pills even though now they are often made of some even more neutral material, like the starch binder which is used as a matrix containing the active ingredient in regular pills – but without any active ingredient.

Since the Placebo Effect is a real phenomenon with actual, measurable consequences, when making an experimental design and choosing the new treatment and the treatment for the control group, it is important to give the control group something. If they get nothing, they do not have the beneficial consequences of the Placebo Effect, so they will not have as good measurements as the experimental group, even if the experimental treatment had no actual useful effect. So we have to equalize for both groups the benefit provided by the Placebo Effect, and give them both a treatment which looks about the same to the subjects (compare pills to pills, injections to injections, operations to operations, three-hour study sessions in one format to three-hour sessions in another format, etc.).

DEFINITION 5.2.7. An experiment in which there is a treatment group and a control group, which control group is given a convincing placebo, is said to be placebo-controlled.

Blinding

We need one last fundamental tool in experimental design, that of keeping subjects and experimenters ignorant of which subject is getting which treatment, experimental or control. If the test subjects are aware of which group they have been put into, that mind-body connection which causes the Placebo Effect may cause a systematic difference in their outcomes: this would be the very definition of bias. So we don’t tell the patients, and make sure that their control treatment looks just like the real experimental one.

It also could be a problem if the experimenter knew who was getting which treatment. Perhaps if the experimenter knew a subject was only getting the placebo, they would be more compassionate or, alternatively, more dismissive. In either case, the systematically different atmosphere for that group of subjects would again be a possible cause of bias.
Of course, when we say that the experimenter doesn’t know which treatment a particular patient is getting, we mean that they do not know that at the time of the treatment. Records must be kept somewhere, and at the end of the experiment, the data is divided between control and experimental groups to see which was effective.

DEFINITION 5.2.8. When one party is kept ignorant of the treatment being administered in an experiment, we say that the information has been blinded. If neither subjects nor experimenters know who gets which treatment until the end of the experiment (when both must be told, one out of fairness, and one to learn something from the data that was collected), we say that the experiment was double-blind.

Combining it all: RCTs

This, then, is the gold standard for experimental design: to get reliable, unbiased experimental data which can provide evidence of causality, the design must be as follows:

DEFINITION 5.2.9. An experiment which is
• randomized
• placebo-controlled
• double-blind
is called, for short, a randomized, controlled trial [RCT] (where the “placebo-” and “double-blind” are assumed even if not stated).

Confounded Lurking Variables

A couple of last terms in this subject are quite poetic but also very important.

DEFINITION 5.2.10. A lurking variable is a variable which the experimenter did not put into their investigation.

So a lurking variable is exactly the thing experimenters most fear: something they didn’t think of, which might or might not affect the study they are doing.

Next is a situation which also could cause problems for learning from experiments.

DEFINITION 5.2.11. Two variables are confounded when we cannot statistically distinguish their effects on the results of our experiments.

When we are studying something by collecting data and doing statistics, confounded variables are a big problem, because we do not know which of them is the real cause of the phenomenon we are investigating: they are statistically indistinguishable.

The combination of the two above terms is the worst thing for a research project: what if there is a lurking variable (one you didn’t think to investigate) which is confounded with the variable you did study? This would be bad, because then your conclusions would apply equally well (since the variables are statistically identical in their consequences) to that thing you didn’t think of ... so your results could well be completely misunderstanding cause and effect.

The problem of confounding with lurking variables is particularly bad with observational studies. In an experiment, you can intentionally choose your subjects very randomly, which means that the test subjects should be randomly distributed with respect to any lurking variables – but controlled with respect to the variables you are studying – so if the study finds a causal relationship in your study variables, it cannot be confounded with a lurking variable.

EXAMPLE 5.2.12. Suppose you want to investigate whether fancy new athletic shoes make runners faster. If you just do an observational study, you might find that those athletes with the new shoes do run faster. But a lurking variable here could be how rich the athletes are, and perhaps if you looked at rich and poor athletes they would have the same relationship to slow and fast times as the new- vs old-shoe wearing athletes. Essentially, the variable “what kind of shoe is the athlete wearing” (categorical with the two values new and old) is being confounded with the lurking variable.
So the conclusion about causality might be false, and instead the real truth might be that wealthy athletes, who have lots of support, good coaches, good nutrition, and time to devote to their sport, run faster.

If, instead, we did an experiment, we would not have this problem. We would select athletes at random – so some would be wealthy and some not – and give half of them (the experimental group) the fancy new shoes and the other half (the control group) the old type. If the type of shoe was the real cause of fast running, we would see that in our experimental outcome. If really it is the lurking variable of the athlete’s wealth which matters, then we would see that neither group does better than the other, since they both have a mixture of wealthy and poor athletes. If the type of shoe really is the cause of fast running, then we would see a difference between the two groups, even though there were rich and poor athletes in both groups, since only one group had the fancy new shoes.

In short, experiments are better at giving evidence for causality than observational studies in large part because an experiment which finds a causal relationship between two variables cannot be confounding the causal variable under study with a lurking variable.
Experiments with human subjects are technically hard to do, as we have just seen, because of things like the Placebo Effect. Even beyond these difficulties, they are hard because human subjects just don’t do what we tell them, and seem to want to express their free will and autonomy. In fact, history has many (far too many) examples of experiments done on human subjects which did not respect their humanity and autonomy – see, for example, the Wikipedia page on unethical human experimentation.

The ethical principles for human subject research which we give below are largely based on the idea of respecting the humanity and autonomy of the test subjects, since the lack of that respect seems to be the crucial failure of many of the generally acknowledged unethical experiments in history. Therefore the below principles should always be taken as from the point of view of the test subjects, or as if they were designed to create systems which protect those subjects. In particular, a utilitarian calculus of the greatest good for the greatest number might be appealing to some, but modern philosophers of experimental ethics generally do not allow the researchers to make that decision themselves. If, for example, some subjects were willing and chose to experience some negative consequences from being in a study, that might be alright, but it is never to be left up to the researcher.

“Do No Harm”

The Hippocratic Oath, a version of which is thought in popular culture to be sworn by all modern doctors, is actually not used much at all today in its original form. This is not that strange, since it sounds quite odd and archaic to modern ears – it begins

I swear by Apollo the physician, and Asclepius, and Hygieia and Panacea and all the gods and goddesses as my witnesses that...

It also has the odd requirements that physicians not use a knife, remain celibate, etc. One feature, often thought to be part of the Oath, does not exactly appear in the traditional text but is probably considered the most important promise: First, do no harm [sometimes seen in the Latin version, primum nil nocere].

This principle is often thought of as constraining doctors and other care-givers, which is why, for example, the American Medical Association forbids doctors from participation in executions, even when they are legal in certain jurisdictions in the United States. It does seem like a good general idea, in any case, that those who have power and authority over others should, at the very least, not harm them. In the case of human subject experimentation, this is thought of as meaning that researchers must never knowingly harm their patients, and must in fact let the patients decide what they consider harm to be.

Informed Consent

Continuing with the idea of letting subjects decide what harms they are willing to experience or risk, one of the most important ethical principles for human subject research is that test subjects must be asked for informed consent. What this means is that they must be informed of all of the possible consequences, positive and (most importantly) negative, of participation in the study, and then given the right to decide if they want to participate. The information part does not have to tell every detail of the experimental design, but it must give every possible consequence that the researchers can imagine.

It is important when thinking about informed consent to make sure that the subjects really have the ability to exercise their free will fully in their decision to give consent.
If, for example, participation in the experiment is the only way to get some good (health care, monetary compensation in a poor neighborhood, a good grade in a class, advancement in their job, etc.) which they really need or want, the situation itself may deprive them of their ability freely to say no – and therefore yes, freely.

Confidentiality

The Hippocratic Oath does also require healers to protect the privacy of their patients. Continuing with the theme of protecting the autonomy of test subjects, then, it is considered to be entirely the choice of the subject when and how much information about their participation in the experiment will be made public. The kinds of information protected here run from, of course, the subjects’ performance in the experimental activities, all the way to the simple fact of participation itself. Therefore, ethical experimenters must make it possible for a subject to sign up for and then do all parts of the experiment without anyone outside the research team knowing this fact, should the subject want this kind of privacy.

As a practical matter, something must be revealed about the experimental outcomes in order for the scientific community to be able to learn something from that experiment. Typically this public information will consist of measures like sample means and other data which are aggregated from many test subjects’ results. Therefore, even if it were known what the mean was and that a person participated in the study, the public would not be able to figure out what that person’s particular result was.

If the researchers want to give more precise information about one particular test subject’s experiences, or about the experiences of a small enough number of subjects that individual results could be disaggregated from what was published, then the subjects’ identities must be hidden, or anonymized. This is done by removing from scientific reports all personally identifiable information [PII] such as name, social security or other ID number, address, phone number, email address, etc.

External Oversight [IRB]

One last way to protect test subjects and their autonomy which is required in ethical human subject experimentation is to give some other, disinterested, external group as much power and information as the researchers themselves. In the US, this is done by requiring all human subject experimentation to get approval from a group of trained and independent observers, called the Institutional Review Board [IRB], before the start of the experiment. The IRB is given a complete description of all details of the experimental design and then chooses whether or not to give its approval. In cases when the experiment continues for a long period of time (such as more than one year), progress reports must be given to the IRB and its re-approval sought.

Note that the way this IRB requirement is enforced in the US is by requiring approval by a recognized IRB for experimentation by any organization which wants ever to receive US Federal Government monies, in the form of research grants, government contracts, or even student support in schools. IRBs tend to be very strict about following rules, and if they ever see a violation at some such organization, that organization will quickly get excluded from federal funds for a very long time. As a consequence, all universities, NGOs, and research institutes in the US, and even many private organizations or companies, are very careful about proper use of IRBs.
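As a toy illustration of that anonymization step, here is a sketch in Python which strips personally identifiable fields from hypothetical subject records before anything is shared; the field names and records are invented, and a real study would follow its IRB-approved data-handling plan rather than an ad hoc script like this one.

```python
# Fields treated as personally identifiable information [PII] in this toy example.
PII_FIELDS = {"name", "ssn", "address", "phone", "email"}

def anonymize(record, pii_fields=PII_FIELDS):
    """Return a copy of the record with all PII fields removed."""
    return {key: value for key, value in record.items() if key not in pii_fields}

# Hypothetical raw records collected by the research team (never published in this form).
raw = [
    {"name": "A. Subject", "ssn": "000-00-0000", "treatment": "placebo",      "outcome": 12.3},
    {"name": "B. Subject", "ssn": "000-00-0001", "treatment": "experimental", "outcome": 14.1},
]

published = [anonymize(r) for r in raw]
print(published)   # only the treatment and outcome fields remain
```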
5.04: New Page

In practice, wording effects are often an extremely strong influence on the answers people give when surveyed. So...

Suppose you were doing a survey of American voters’ opinions of the president. Think of a way of asking a question which would tend to maximize the number of people who said they approved of the job he is doing. Then think of another way of asking a question which would tend to minimize that number [who say they approve of his job performance].

Think of a survey question you could ask in a survey of the general population of Americans in response to which many [most?] people would lie. State what would be the issue you would be investigating with this survey question, as a clearly defined, formal variable and parameter on the population of all Americans. Also tell exactly what would be the wording of the question you think would get lying responses. Now think of a way to do an observational study which would get more accurate values for this variable and for the parameter of interest. Explain in detail.

Many parents believe that their small children get a bit hyperactive when they eat or drink sweets (candies, sugary sodas, etc.), and so do not let their kids have such things before nap time, for example. A pediatrician at Euphoria State University Teaching Hospital [ESUTH] thinks instead that it is the parents’ expectations about the effects of sugar which cause their children to become hyperactive, and not the sugar at all. Describe a randomized, placebo-controlled, double-blind experiment which would collect data about this ESUTH pediatrician’s hypothesis. Make sure you are clear about which part of your experimental procedure addresses each of those important components of good experimental design.

Is the experiment you described in the previous exercise an ethical one? What must the ESUTH pediatrician do before, during, and after the experiment to make sure it is ethical? Make sure you discuss (at least) the checklist of ethical guidelines from this chapter and how each point applies to this particular experiment.
The purpose of this chapter is to introduce two basic but powerful tools of inferential statistics, the confidence interval and the hypothesis test (also called test of significance), in the simplest case of looking for the population mean of a quantitative RV. This simple case of these tools is based, for both of them, on a beautiful and amazing theorem called the Central Limit Theorem, which is therefore the subject of the first section of the chapter. The following sections then build the ideas and formulæ first for confidence intervals and then for hypothesis tests.

Throughout this chapter, we assume that we are working with some (large) population on which there is defined a quantitative RV $X$. The population mean $\mu_X$ is, of course, a fixed number, out in the world, unchanging but also probably unknown, simply because to compute it we would have to have access to the values of $X$ for the entire population. Strangely, we assume in this chapter that while we do not know $\mu_X$, we do know the population standard deviation $\sigma_X$ of $X$. This is actually quite a silly assumption – how could we know $\sigma_X$ if we didn’t already know $\mu_X$? But we make this assumption because it makes this first version of confidence intervals and hypothesis tests particularly simple. (Later chapters in this Part will remove this silly assumption.)

Finally, we always assume in this chapter that the samples we use are simple random samples, since by now we know that those are the best kind.

06: Inferential Statistics

Taking the average [mean] of a sample of quantitative data is actually a very nice process: the arithmetic is simple, and the average often has the nice property of being closer to the center of the data than the values themselves being combined or averaged. This is because while a random sample may have randomly picked a few particularly large (or particularly small) values from the data, it probably also picked some other small (or large) values, so that the mean will be in the middle. It turns out that these general observations of how nice a sample mean is can be explained and formalized in a very important theorem:

FACT 6.1.1. The Central Limit Theorem [CLT] Suppose we have a large population on which is defined a quantitative random variable $X$ whose population mean is $\mu_X$ and whose population standard deviation is $\sigma_X$. Fix a whole number $n\ge30$. As we take repeated, independent SRSs of size $n$, the distribution of the sample means $\overline{x}$ of these SRSs is approximately $N(\mu_X, \sigma_X/\sqrt{n})$. That is, the distribution of $\overline{x}$ is approximately Normal with mean $\mu_X$ and standard deviation $\sigma_X/\sqrt{n}$. Furthermore, as $n$ gets bigger, the Normal approximation gets better.

Note that the CLT has several nice pieces. First, it tells us that the middle of the histogram of sample means, as we get repeated independent samples, is the same as the mean of the original population – the mean of the sample means is the population mean. We might write this as $\mu_{\overline{X}} = \mu_X$. Second, the CLT tells us precisely how much less variation there is in the sample means because of the process noted above whereby averages are closer to the middle of some data than are the data values themselves. The formula is $\sigma_{\overline{X}} = \sigma_X/\sqrt{n}$. Finally and most amazingly, the CLT actually tells us exactly what is the shape of the distribution for $\overline{x}$ – and it turns out to be that complicated formula we gave in Definition 4.3.19.
This is completely unexpected, but somehow the universe knows that formula for the Normal distribution density function and makes it appear when we construct the histogram of sample means.

Here is an example of how we use the CLT:

EXAMPLE 6.1.2. We have said elsewhere that adult American males’ heights in inches are distributed like $N(69, 2.8)$. Supposing this is true, let us figure out what is the probability that 52 randomly chosen adult American men, lying down in a row with each one’s feet touching the next one’s head, stretch the length of a football field. [Why 52? Well, an American football team may have up to 53 people on its active roster, and one of them has to remain standing to supervise everyone else’s formation lying on the field....]

First of all, notice that a football field is 100 yards long, which is 300 feet or 3600 inches. If every single one of our randomly chosen men was exactly the average height for adult men, that would be a total of $52\cdot69=3588$ inches, so they would not stretch the whole length. But there is variation of the heights, so maybe it will happen sometimes....

So imagine we have chosen 52 random adult American men. Measure each of their heights, and call those numbers $x_1, x_2, \dots, x_{52}$. What we are trying to figure out is whether $\sum x_i \ge 3600$. More precisely, we want to know $P\left(\sum x_i \ge 3600\right)\ .$

Nothing in that looks familiar, but remember that the 52 adult men were chosen randomly. The best way to choose some number, call it $n=52$, of individuals from a population is to choose an SRS of size $n$. Let’s also assume that we did that here. Now, having an SRS, we know from the CLT that the sample mean $\overline{x}$ is $N(69, 2.8/\sqrt{52})$ or, doing the arithmetic, $N(69, .38829)$.

But the question we are considering here doesn’t mention $\overline{x}$, you cry! Well, it almost does: $\overline{x}$ is the sample mean given by $\overline{x} = \frac{\sum x_i}{n} = \frac{\sum x_i}{52} \ .$ What that means is that the inequality $\sum x_i \ge 3600$ amounts to exactly the same thing, by dividing both sides by 52, as the inequality $\frac{\sum x_i}{52} \ge \frac{3600}{52}$ or, in other words, $\overline{x} \ge 69.23077\ .$

Since these inequalities all amount to the same thing, they have the same probabilities, so $P\left(\sum x_i \ge 3600\right) = P\left(\overline{x} \ge 69.23077\right)\ .$ But remember $\overline{x}$ was $N(69, .38829)$, so we can calculate this probability with LibreOffice Calc or Microsoft Excel as
$$\begin{aligned} P\left(\overline{x} \ge 69.23077\right) &= 1 - P\left(\overline{x} < 69.23077\right)\\ &= 1 - \text{NORM.DIST}(69.23077, 69, .38829, 1)\\ &= 1 - .72385\\ &= .27615\end{aligned}$$
where here we first use the probability rule for complements to turn around the inequality into the direction that NORM.DIST calculates. Thus the chance that 52 randomly chosen adult men, lying in one long column, are as long as a football field, is only about 27.6%.
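For readers who would like to check this with something other than a spreadsheet, here is a minimal sketch in Python; the use of scipy.stats here is just one convenient choice of tool, and the Monte Carlo check at the end simply re-draws many hypothetical groups of 52 men under the book's assumption that heights are $N(69, 2.8)$.

```python
from math import sqrt
import random
from scipy.stats import norm

mu, sigma, n = 69, 2.8, 52
se = sigma / sqrt(n)                      # standard deviation of x-bar, about .38829

# Normal-model answer: P(x-bar >= 3600/52) = 1 - P(x-bar < 69.23077...)
p_exact = 1 - norm.cdf(3600 / n, loc=mu, scale=se)
print(f"CLT answer: {p_exact:.5f}")       # roughly 0.276

# Monte Carlo check: simulate many groups of 52 men and count how often they reach 3600 inches.
random.seed(4)
trials = 100_000
hits = sum(sum(random.gauss(mu, sigma) for _ in range(n)) >= 3600 for _ in range(trials))
print(f"simulated:  {hits / trials:.5f}")
```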
As elsewhere in this chapter, we assume that we are working with some (large) population on which there is defined a quantitative RV $X$. The population mean $\mu_X$ is unknown, and we want to estimate it. It is, as before, a fixed number, out in the world, unchanging but also probably unknown, simply because to compute it we would have to have access to the values of $X$ for the entire population. We continue also with our strange assumption that while we do not know $\mu_X$, we do know the population standard deviation $\sigma_X$ of $X$.

Our strategy to estimate $\mu_X$ is to take an SRS of size $n$, compute the sample mean $\overline{x}$ of $X$, and then to guess that $\mu_X\approx\overline{x}$. But this leaves us wondering how good an approximation $\overline{x}$ is of $\mu_X$.

The strategy we take for this is to figure out how close $\mu_X$ must be to $\overline{x}$ – or $\overline{x}$ to $\mu_X$, it’s the same thing – and in fact to be precise enough to say what is the probability that $\mu_X$ is a certain distance from $\overline{x}$. That is, if we choose a target probability, call it $L$, we want to make an interval of real numbers centered on $\overline{x}$ such that the probability of $\mu_X$ being in that interval is $L$.

Actually, that is not really a sensible thing to ask for: probability, remember, is the fraction of times something happens in repeated experiments. But we are not repeatedly choosing $\mu_X$ and seeing if it is in that interval. Just the opposite, in fact: $\mu_X$ is fixed (although unknown to us), and every time we pick a new SRS – that’s the repeated experiment, choosing new SRSs! – we can compute a new interval and hope that that new interval might contain $\mu_X$. The probability $L$ will correspond to the fraction of those newly computed intervals which contain the (fixed, but unknown) $\mu_X$.

How to do even this? Well, the Central Limit Theorem tells us that the distribution of $\overline{x}$ as we take repeated SRSs – exactly the repeatable experiment we are imagining doing – is approximately Normal with mean $\mu_X$ and standard deviation $\sigma_X/\sqrt{n}$. By the 68-95-99.7 Rule:

1. 68% of the time we take samples, the resulting $\overline{x}$ will be within $\sigma_X/\sqrt{n}$ units on the number line of $\mu_X$. Equivalently (since the distance from A to B is the same as the distance from B to A!), 68% of the time we take samples, $\mu_X$ will be within $\sigma_X/\sqrt{n}$ of $\overline{x}$. In other words, 68% of the time we take samples, $\mu_X$ will happen to lie in the interval from $\overline{x}-\sigma_X/\sqrt{n}$ to $\overline{x}+\sigma_X/\sqrt{n}$.

2. Likewise, 95% of the time we take samples, the resulting $\overline{x}$ will be within $2\sigma_X/\sqrt{n}$ units on the number line of $\mu_X$. Equivalently (since the distance from A to B is still the same as the distance from B to A!), 95% of the time we take samples, $\mu_X$ will be within $2\sigma_X/\sqrt{n}$ of $\overline{x}$. In other words, 95% of the time we take samples, $\mu_X$ will happen to lie in the interval from $\overline{x}-2\sigma_X/\sqrt{n}$ to $\overline{x}+2\sigma_X/\sqrt{n}$.

3. Likewise, 99.7% of the time we take samples, the resulting $\overline{x}$ will be within $3\sigma_X/\sqrt{n}$ units on the number line of $\mu_X$. Equivalently (since the distance from A to B is still the same as the distance from B to A!), 99.7% of the time we take samples, $\mu_X$ will be within $3\sigma_X/\sqrt{n}$ of $\overline{x}$.
In other words, 99.7% of the time we take samples, $\mu_X$ will happen to lie in the interval from $\overline{x}-3\sigma_X/\sqrt{n}$ to $\overline{x}+3\sigma_X/\sqrt{n}$.

Notice the general shape here is that the interval goes from $\overline{x}-z_L^*\sigma_X/\sqrt{n}$ to $\overline{x}+z_L^*\sigma_X/\sqrt{n}$, where this number $z_L^*$ has a name:

DEFINITION. The critical value $z_L^*$ with probability $L$ for the Normal distribution is the number such that the Normal distribution $N(\mu_X, \sigma_X)$ has probability $L$ between $\mu_X-z_L^*\sigma_X$ and $\mu_X+z_L^*\sigma_X$. Note the probability $L$ in this definition is usually called the confidence level.

If you think about it, the 68-95-99.7 Rule is exactly telling us that $z_L^*=1$ if $L=.68$, $z_L^*=2$ if $L=.95$, and $z_L^*=3$ if $L=.997$. It’s actually convenient to make a table of similar values, which can be calculated on a computer from the formula for the Normal distribution.

FACT. Here is a useful table of critical values for a range of possible confidence levels:

$L$        .5      .8      .9      .95     .99     .999
$z_L^*$    .674    1.282   1.645   1.960   2.576   3.291

Note that, oddly, the $z_L^*$ here for $L=.95$ is not $2$, but rather $1.96$! This is actually a more accurate value to use, which you may choose to use, or you may continue to use $z_L^*=2$ when $L=.95$, if you like, out of fidelity to the 68-95-99.7 Rule.

Now, using these accurate critical values we can define an interval which tells us where we expect the value of $\mu_X$ to lie.

DEFINITION. For a probability value $L$, called the confidence level, the interval of real numbers from $\overline{x}-z_L^*\sigma_X/\sqrt{n}$ to $\overline{x}+z_L^*\sigma_X/\sqrt{n}$ is called the confidence interval for $\mu_X$ with confidence level $L$.

The meaning of confidence here is quite precise (and a little strange):

FACT. Any particular confidence interval with confidence level $L$ might or might not actually contain the sought-after parameter $\mu_X$. Rather, what it means to have confidence level $L$ is that if we take repeated, independent SRSs and compute the confidence interval again for each new $\overline{x}$ from the new SRSs, then a fraction of size $L$ of these new intervals will contain $\mu_X$.

Note that any particular confidence interval might or might not contain $\mu_X$ not because $\mu_X$ is moving around, but rather the SRSs are different each time, so the $\overline{x}$ is (potentially) different, and hence the interval is moving around. The $\mu_X$ is fixed (but unknown), while the confidence intervals move.

Sometimes the piece we add and subtract from the $\overline{x}$ to make a confidence interval is given a name of its own:

DEFINITION. When we write a confidence interval for the population mean $\mu_X$ of some quantitative variable $X$ in the form $\overline{x}-E$ to $\overline{x}+E$, where $E=z_L^*\sigma_X/\sqrt{n}$, we call $E$ the margin of error [or, sometimes, the sampling error] of the confidence interval.

Note that if a confidence interval is given without a stated confidence level, particularly in the popular press, we should assume that the implied level was .95.

Cautions

The thing that most often goes wrong when using confidence intervals is that the sample data used to compute the sample mean $\overline{x}$ and then the endpoints $\overline{x}\pm E$ of the interval is not from a good SRS. It is hard to get SRSs, so this is not unexpected.
But we nevertheless frequently assume that some sample is an SRS, so that we can use it to make a confidence interval, even if that assumption is not really justified.

Another thing that can happen to make confidence intervals less accurate is to choose too small a sample size $n$. We have seen that our approach to confidence intervals relies upon the CLT, and therefore it typically needs samples of size at least 30.

EXAMPLE. A survey of 463 first-year students at Euphoria State University [ESU] found that the [sample] average of how long they reported studying per week was 15.3 hours. Suppose somehow we know that the population standard deviation of hours of study per week at ESU is 8.5. Then we can find a confidence interval at the 99% confidence level for the mean hours of study per week of all first-year students by calculating the margin of error to be $E=z_L^*\sigma_X/\sqrt{n} = 2.576\cdot8.5/\sqrt{463} = 1.01759$ and then noting that the confidence interval goes from $\overline{x}-E = 15.3 - 1.01759 = 14.28241$ to $\overline{x}+E = 15.3 + 1.01759 = 16.31759\,.$

Note that for this calculation to be doing what we want it to do, we must assume that the 463 first-year students were an SRS out of the entire population of first-year students at ESU.

Note also that what it means that we have 99% confidence in this interval from 14.28241 to 16.31759 (or $[14.28241, 16.31759]$ in interval notation) is not, in fact, that we have any confidence at all in those particular numbers. Rather, we have confidence in the method, in the sense that if we imagine independently taking many future SRSs of size 463 and recalculating new confidence intervals, then 99% of these future intervals will contain the one, fixed, unknown $\mu_X$.
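Here is a minimal sketch in Python of both the critical-value table and the ESU confidence interval computation; the choice of scipy.stats.norm.ppf as the Normal inverse-CDF is just one convenient option, and the tiny differences from the numbers above come only from carrying more decimal places of $z_L^*$ than 2.576.

```python
from math import sqrt
from scipy.stats import norm

def critical_value(L):
    """z_L^*: the Normal distribution has probability L within z_L^* standard deviations of its mean."""
    return norm.ppf((1 + L) / 2)

# Reproduce the table of critical values (up to rounding).
for L in [.5, .8, .9, .95, .99, .999]:
    print(f"L = {L:5}   z* = {critical_value(L):.3f}")

# Confidence interval for the ESU study-hours example: n = 463, x-bar = 15.3, sigma = 8.5, L = .99
n, x_bar, sigma, L = 463, 15.3, 8.5, .99
E = critical_value(L) * sigma / sqrt(n)          # margin of error, about 1.0175
print(f"{L:.0%} CI: from {x_bar - E:.5f} to {x_bar + E:.5f}")   # about 14.28 to 16.32
```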
Let’s start with a motivating example, described somewhat more casually than the rest of the work we usually do, but whose logic is exactly that of the scientific standard for hypothesis testing.

EXAMPLE 6.3.1. Suppose someone has a coin which they claim is a fair coin (including, in the informal notion of a fair coin, that successive flips are independent of each other). You care about this fairness perhaps because you will use the coin in a betting game.

How can you know if the coin really is fair? Obviously, your best approach is to start flipping the coin and see what comes up. If the first flip shows heads [H], you wouldn’t draw any particular conclusion. If the second was also an H, again, so what? If the third was still H, you’re starting to think there’s a run going. If you got all the way to ten Hs in a row, you would be very suspicious, and if the run went to 100 Hs, you would demand that some other coin (or person doing the flipping) be used.

Somewhere between two and 100 Hs in a row, you would go from bland acceptance of fairness to nearly complete conviction that this coin is not fair – why? After all, the person flipping the coin and asserting its fairness could say, correctly, that it is possible for a fair coin to come up H any number of times in a row. Sure, you would reply, but it is very unlikely: that is, given that the coin is fair, the conditional probability that those long runs without Ts would occur is very small. That in turn also explains how you would draw the line, between two and 100 Hs in a row, for when you thought the improbability of that particular run of straight Hs was past the level you would be willing to accept. Other observers might draw the line elsewhere, in fact, so there would not be an absolutely sure conclusion to the question of whether the coin was fair or not.

It might seem that in the above example we only get a probabilistic answer to a yes/no question (is the coin fair or not?) simply because the thing we are asking about is, by nature, a random process: we cannot predict how any particular flip of the coin will come out, but the long-term behavior is what we are asking about; no surprise, then, that the answer will involve likelihood. But perhaps other scientific hypotheses will have more decisive answers, which do not invoke probability. Unfortunately, this will not be the case, because we have seen above that it is wise to introduce probability into an experimental situation, even if it was not there originally, in order to avoid bias. Modern theories of science (such as quantum mechanics, and also, although in a different way, epidemiology, thermodynamics, genetics, and many other sciences) also have some amount of randomness built into their very foundations, so we should expect probability to arise in just about every kind of data.

Let’s get a little more formal and careful about what we need to do with hypothesis testing.

The Formal Steps of Hypothesis Testing

1. State what is the population under study.

2. State what is the variable of interest for this population. For us in this section, that will always be a quantitative variable $X$.

3. State which is the resulting population parameter of interest. For us in this section, that will always be the population mean $\mu_X$ of $X$.

4. State two hypotheses about the value of this parameter.
One, called the null hypothesis and written $H_0$, will be a statement that the parameter of interest has a particular value, so $H_0: \ \mu_X = \mu_0$ where $\mu_0$ is some specific number. The other is the interesting alternative we are considering for the value of that parameter, and is thus called the alternative hypothesis, written $H_a$. The alternative hypothesis can have one of three forms: $H_a: \ \mu_X < \mu_0\,$, $\ H_a: \ \mu_X > \mu_0\,$, or $\ H_a: \ \mu_X \neq \mu_0\,$, where $\mu_0$ is the same specific number as in $H_0$. 5. Gather data from an SRS and compute the sample statistic which is best related to the parameter of interest. For us in this section, that will always be the sample mean $\overline{X}$. 6. Compute the following conditional probability $p=P\left(\text{getting values of the statistic which are as extreme, or more extreme, as the ones you did get}\ \middle|\ H_0\right)\,.$ This is called the $p$-value of the test. 7. If the $p$-value is sufficiently small – typically, $p<.05$ or even $p<.01$ – announce “We reject $H_0$, with $p=\left<\text{number here}\right>$.” Otherwise, announce “We fail to reject $H_0$, with $p=\left<\text{number here}\right>$.” 8. Translate the result just announced into the language of the original question. As you do this, you can say “There is strong statistical evidence that ...” if the $p$-value is very small, while you should merely say something like “There is evidence that...” if the $p$-value is small but not particularly so. Note that the hypotheses $H_0$ and $H_a$ are statements, not numbers. So don’t write something like $H_0 = \mu_X = 17$; you might use $H_0$: “$\mu_X=17$” or $H_0: \ \mu_X=17$ (we always use the latter in this book). How Small is Small Enough, for $p$-values? Remember how the $p$-value is defined: $p=P\left(\text{getting values of the statistic which are as extreme, or more extreme, as the ones you did get}\ \middle|\ H_0\right)\,.$ In other words, if the null hypothesis is true, maybe the behavior we saw with the sample data would sometimes happen, but if the probability is very small, it starts to seem that, under the assumption $H_0$ is true, the sample behavior was a crazy fluke. If the fluke is crazy enough, we might want simply to say that since the sample behavior actually happened, it makes us doubt that $H_0$ is true at all. For example, if $p=.5$, that means that under the assumption $H_0$ is true, we would see behavior like that of the sample about every other time we take an SRS and compute the sample statistic. Not much of a surprise. If $p=.25$, that would still be behavior we would expect to see in about one out of every four SRSs, when $H_0$ is true. When $p$ gets down to $.1$, that is still behavior we expect to see about one time in ten, when $H_0$ is true. That’s rare, but we wouldn’t want to bet anything important on it. Across science, in legal matters, and definitely for medical studies, we start to reject $H_0$ when $p<.05$. After all, if $p<.05$ and $H_0$ is true, then we would expect to see results as extreme as the ones we saw in fewer than one SRS out of 20. There is some terminology for these various cut-offs. DEFINITION 6.3.2. When we are doing a hypothesis test and get a $p$-value which satisfies $p<\alpha$, for some real number $\alpha$, we say the data are statistically significant at level $\alpha$.
Here the value $\alpha$ is called the significance level of the test, as in the phrase “We reject $H_0$ at significance level $\alpha$,” which we would say if $p<\alpha$. EXAMPLE 6.3.3. If we did a hypothesis test and got a $p$-value of $p=.06$, we would say about it that the result was statistically significant at the $\alpha=.1$ level, but not statistically significant at the $\alpha=.05$ level. In other words, we would say “We reject the null hypothesis at the $\alpha=.1$ level,” but also “We fail to reject the null hypothesis at the $\alpha=.05$ level.” FACT 6.3.4. The courts in the United States, as well as the majority of standard scientific and medical tests which do a formal hypothesis test, use the significance level of $\alpha=.05$. In this chapter, when not otherwise specified, we will use that value of $\alpha=.05$ as a default significance level. EXAMPLE 6.3.5. We have said repeatedly in this book that the heights of American males are distributed like $N(69, 2.8)$. Last semester, a statistics student named Mohammad Wong said he thought that had to be wrong, and decided to do a study of the question. MW is a bit shorter than 69 inches, so his conjecture was that the mean height must be less, also. He measured the heights of all of the men in his statistics class, and was surprised to find that the average of those 16 men’s heights was 68 inches (he’s only 67 inches tall, and he thought he was typical, at least for his class$^{12}$). Does this support his conjecture or not? Let’s do the formal hypothesis test. The population that makes sense for this study would be all adult American men today – MW isn’t sure if the claim of American men’s heights having a population mean of 69 inches was always wrong; he is just convinced that it is wrong today. The quantitative variable of interest on that population is their height, which we’ll call $X$. The parameter of interest is the population mean $\mu_X$. The two hypotheses then are $H_0: \ \mu_X = 69 \quad\text{and}\quad H_a: \ \mu_X < 69\,,$ where the basic idea in the null hypothesis is that the claim in this book of men’s heights having mean 69 is true, while the new idea which MW hopes to find evidence for, encoded in the alternative hypothesis, is that the true mean of today’s men’s heights is less than 69 inches (like him). MW now has to make two bad assumptions: the first is that the 16 students in his class are an SRS drawn from the population of interest; the second, that the population standard deviation of the heights of individuals in his population of interest is the same as the population standard deviation of the group of all adult American males asserted elsewhere in this book, 2.8 . These are definitely bad assumptions – particularly that MW’s male classmates are an SRS of the population of today’s adult American males – but he has to make them nevertheless in order to get somewhere. The sample mean height $\overline{X}$ for MW’s SRS of size $n=16$ is $\overline{X}=68$. MW can now calculate the $p$-value of this test, using the Central Limit Theorem. According to the CLT, the distribution of $\overline{X}$ is $N(69, 2.8/\sqrt{16})$.
Therefore the $p$-value is $p=P\left(\text{MW would get values of } \overline{X} \text{ which are as extreme, or more extreme, as the ones he did get}\ \middle|\ H_{0}\right)=P(\overline{X}<68)\,.$ Which, by what we just observed the CLT tells us, is computable by normalcdf(−9999, 68, 69, 2.8/$\sqrt{16}$) on a calculator, or NORM.DIST(68, 69, 2.8/SQRT(16), 1) in a spreadsheet, either of which gives a value around .07656 . This means that if MW uses the 5% significance level, as we often do, the result is not statistically significant. Only at the much cruder 10% significance level would MW say that he rejects the null hypothesis. In other words, he might conclude his project by saying “My research collected data about my conjecture; the result was statistically insignificant at the 5% significance level, but the data, significant at the weaker 10% level, did indicate that the average height of American men is less than the 69 inches we were told it is ($p=.07656$).” People who talk to MW about his study should have additional concerns about his assumptions of having an SRS and of the value of the population standard deviation. Calculations for Hypothesis Testing of Population Means We put together the ideas in §3.1 above and the conclusions of the Central Limit Theorem to summarize what computations are necessary to perform: FACT 6.3.6. Suppose we are doing a formal hypothesis test with variable $X$ and parameter of interest the population mean $\mu_X$. Suppose that somehow we know the population standard deviation $\sigma_X$ of $X$. Suppose the null hypothesis is $H_0: \ \mu_X = \mu_0$ where $\mu_0$ is a specific number. Suppose also that we have an SRS of size $n$ which yielded the sample mean $\overline{X}$. Then exactly one of the following three situations will apply: 1. If the alternative hypothesis is $H_a:\ \mu_X<\mu_0$ then the $p$-value of the test can be calculated in any of the following ways 1. the area to the left of $\overline{X}$ under the graph of a $N(\mu_0, \sigma_X/\sqrt{n})$ distribution, 2. normalcdf$(-9999, \overline{X}, \mu_0, \sigma_X/\sqrt{n})$ on a calculator, or 3. NORM.DIST($\overline{X}$, $\mu_0$, $\sigma_X$/SQRT($n$), 1) on a spreadsheet. 2. If the alternative hypothesis is $H_a:\ \mu_X>\mu_0$ then the $p$-value of the test can be calculated in any of the following ways 1. the area to the right of $\overline{X}$ under the graph of a $N(\mu_0, \sigma_X/\sqrt{n})$ distribution, 2. normalcdf$(\overline{X}, 9999, \mu_0, \sigma_X/\sqrt{n})$ on a calculator, or 3. 1-NORM.DIST($\overline{X}$, $\mu_0$, $\sigma_X$/SQRT($n$), 1) on a spreadsheet. 3. If the alternative hypothesis is $H_a:\ \mu_X\neq \mu_0$ then the $p$-value of the test can be found by using the approach in exactly one of the following three situations: 1. If $\overline{X}<\mu_0$ then $p$ is calculated by any of the following three ways: 1. two times the area to the left of $\overline{X}$ under the graph of a $N(\mu_0, \sigma_X/\sqrt{n})$ distribution, 2. 2 $\times$ normalcdf$(-9999, \overline{X}, \mu_0, \sigma_X/\sqrt{n})$ on a calculator, or 3. 2 $\times$ NORM.DIST($\overline{X}$, $\mu_0$, $\sigma_X$/SQRT($n$), 1) on a spreadsheet. 2. If $\overline{X}>\mu_0$ then $p$ is calculated by any of the following three ways: 1. two times the area to the right of $\overline{X}$ under the graph of a $N(\mu_0, \sigma_X/\sqrt{n})$ distribution, 2. 2 $\times$ normalcdf$(\overline{X}, 9999, \mu_0, \sigma_X/\sqrt{n})$ on a calculator, or 3.
2 $\times$ (1-NORM.DIST($\overline{X}$, $\mu_0$, $\sigma_X$/SQRT($n$), 1)) on a spreadsheet. 3. If $\overline{X}=\mu_0$ then $p=1$. Note that the reason there is that multiplication by two if the alternative hypothesis is $H_a:\ \mu_X\neq \mu_0$ is that there are two directions – the distribution has two tails – in which the values can be more extreme than $\overline{X}$. For this reason we have the following terminology: DEFINITION 6.3.7. If we are doing a hypothesis test and the alternative hypothesis is $H_a:\ \mu_X>\mu_0$ or $H_a:\ \mu_X<\mu_0$ then this is called a one-tailed test. If, instead, the alternative hypothesis is $H_a:\ \mu_X\neq\mu_0$ then this is called a two-tailed test. EXAMPLE 6.3.8. Let’s do one very straightforward example of a hypothesis test: A cosmetics company fills its best-selling 8-ounce jars of facial cream by an automatic dispensing machine. The machine is set to dispense a mean of 8.1 ounces per jar. Uncontrollable factors in the process can shift the mean away from 8.1 and cause either underfill or overfill, both of which are undesirable. In such a case the dispensing machine is stopped and recalibrated. Regardless of the mean amount dispensed, the standard deviation of the amount dispensed always has value .22 ounce. A quality control engineer randomly selects 30 jars from the assembly line each day to check the amounts filled. One day, the sample mean is $\overline{X}=8.2$ ounces. Let us see if there is sufficient evidence in this sample to indicate, at the 1% level of significance, that the machine should be recalibrated. The population under study is all of the jars of facial cream on the day of the 8.2 ounce sample. The variable of interest is the weight $X$ of the jar in ounces. The population parameter of interest is the population mean $\mu_X$ of $X$. The two hypotheses then are $H_0: \ \mu_X = 8.1 \quad\text{and}\quad H_a: \ \mu_X \neq 8.1\,.$ The sample mean is $\overline{X}=8.2$ , and the sample – which we must assume to be an SRS – is of size $n=30$. Using the case in Fact 6.3.6 where the alternative hypothesis is $H_a: \ \mu_X \neq \mu_0$ and the sub-case where $\overline{X} > \mu_0$, we compute the $p$-value by 2*(1-NORM.DIST(8.2, 8.1, .22/SQRT(30), 1)) on a spreadsheet, which yields $p = .01278$ (an R version of this computation appears at the end of this section). Since $p$ is not less than $.01$, we fail to reject $H_0$ at the $\alpha=.01$ level of significance. The quality control engineer should therefore say to company management “Today’s sample, though off weight, did not provide evidence, statistically significant at the stringent level of significance of $\alpha=.01$ that we have chosen to use in these tests, that the jar-filling machine is in need of recalibration today ($p=.01278$).” Cautions As we have seen before, the requirement that the sample we are using in our hypothesis test is a valid SRS is quite important. But it is also quite hard to get such a good sample, so this is often something that can be a real problem in practice, and something which we must assume is true with often very little real reason. It should be apparent from the above Facts and Examples that most of the work in doing a hypothesis test, after careful initial set-up, comes in computing the $p$-value. Be careful of the phrase statistically significant. It does not mean that the effect is large!
There can be a very small effect, the $\overline{X}$ might be very close to $\mu_0$, and yet we might reject the null hypothesis if the population standard deviation $\sigma_X$ were sufficiently small, or even if the sample size $n$ were large enough that $\sigma_X/\sqrt{n}$ became very small. Thus, oddly enough, a statistically significant result, one where the conclusion of the hypothesis test was statistically quite certain, might not be significant in the sense of mattering very much. With enough precision, we can be very sure of small effects. Note that the meaning of the $p$-value is explained above in its definition as a conditional probability. So $p$ does not compute the probability that the null hypothesis $H_0$ is true, or any such simple thing. In contrast, the Bayesian approach to probability, which we chose not to use in this book, in favor of the frequentist approach, does have a kind of hypothesis test which includes something like the direct probability that $H_0$ is true. But we did not follow the Bayesian approach here because in many other ways it is more confusing. In particular, one consequence of the real meaning of the $p$-value as we use it in this book is that sometimes we will reject a true null hypothesis $H_0$ just out of bad luck. In fact, if $p$ is just slightly less than $.05$, we would reject $H_0$ at the $\alpha=.05$ significance level, even though in slightly less than one case in 20 (meaning 1 SRS out of 20 chosen independently) such a rejection happens when $H_0$ is in fact true. We have a name for this situation. DEFINITION 6.3.9. When we reject a true null hypothesis $H_0$ this is called a type I error. Such an error is usually (but not always: it depends upon how the population, variable, parameter, and hypotheses were set up) a false positive, meaning that something exciting and new (or scary and dangerous) was found even though it is not really present in the population. EXAMPLE 6.3.10. Let us look back at the cosmetics company with a jar-filling machine from Example 6.3.8. We don’t know what the median of the SRS data was, but it wouldn’t be surprising if the data were symmetric and therefore the median would be the same as the sample mean $\overline{X}=8.2$ . That means that there were at least 15 jars with at least 8.2 ounces of cream in them, even though the jars are all labelled “8oz.” The company is giving away at least $.2\times15=3$ ounces of the very valuable cream – in fact, probably much more, since that was simply the overfilling in that one sample. So our intrepid quality assurance engineer might well propose to management to increase the significance level $\alpha$ of the testing regime in the factory. It is true that with a larger $\alpha$, it will be easier for simple randomness to result in type I errors, but unless the recalibration process takes a very long time (and so results in fewer jars being filled that day), the cost-benefit analysis probably leans towards fixing the machine slightly too often, rather than waiting until the evidence is extremely strong that it must be done.
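To tie the computational recipes of Fact 6.3.6 to the two worked examples, here is a hedged R sketch of the same calculations; the text itself uses a calculator's normalcdf or a spreadsheet's NORM.DIST, and pnorm() plays the same role in R. All numbers come from the examples above.

```
# Example 6.3.5 (MW's height study): H_a: mu_X < 69, n = 16, sigma_X = 2.8, sample mean 68
pnorm(68, mean = 69, sd = 2.8/sqrt(16))               # about .07656

# Example 6.3.8 (jar-filling machine): H_a: mu_X != 8.1, n = 30, sigma_X = .22, sample mean 8.2
2 * (1 - pnorm(8.2, mean = 8.1, sd = .22/sqrt(30)))   # about .01278
```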
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/06%3A_Inferential_Statistics/6.03%3A_New_Page.txt
You buy seeds of one particular species to plant in your garden, and the information on the seed packet tells you that, based on years of experience with that species, the mean number of days to germination is 22, with standard deviation 2.3 days. What is the population and variable in that information? What parameter(s) and/or statistic(s) are they asserting have particular values? Do you think they can really know the actual parameter(s) and/or statistic's(s') value(s)? Explain.

You plant those seeds on a particular day. What is the probability that the plant closest to your house will germinate within half a day of the reported mean number of days to germination – that is, that it will germinate between 21.5 and 22.5 days after planting?

You are interested in the whole garden, where you planted 160 seeds, as well. What is the probability that the average days to germination of all the plants in your garden is between 21.5 and 22.5 days? How do you know you can use the Central Limit Theorem to answer this problem – what must you assume about those 160 seeds from the seed packet in order for the CLT to apply?

You decide to expand your garden and buy a packet of different seeds. But the printing on the seed packet is smudged, so you can see that the standard deviation for the germination time of that species of plant is 3.4 days, but you cannot see what the mean germination time is. So you plant 100 of these new seeds and note how long each of them takes to germinate: the average for those 100 plants is 17 days. What is a 90% confidence interval for the population mean of the germination times of plants of this species? Show and explain all of your work. What assumption must you make about those 100 seeds from the packet in order for your work to be valid? What does it mean that the interval you gave had 90% confidence? Answer by talking about what would happen if you bought many packets of those kinds of seeds and planted 100 seeds in each of a bunch of gardens around your community.

An SRS of size 120 is taken from the student population at the very large Euphoria State University [ESU], and their GPAs are computed. The sample mean GPA is 2.71 . Somehow, we also know that the population standard deviation of GPAs at ESU is .51 . Give a confidence interval at the 90% confidence level for the mean GPA of all students at ESU. You show the confidence interval you just computed to a fellow student who is not taking statistics. They ask, “Does that mean that 90% of students at ESU have a GPA which is between \(a\) and \(b\)?” where \(a\) and \(b\) are the lower and upper ends of the interval you computed. Answer this question: if the answer is yes, explain why; if the answer is no, explain both why not and what is a better way of explaining this 90% confidence interval.

The recommended daily calorie intake for teenage girls is 2200 calories per day. A nutritionist at Euphoria State University believes the average daily caloric intake of girls in her state to be lower because of the advertising which uses underweight models targeted at teenagers. Our nutritionist finds that the average daily calorie intake for a random sample of size \(n=36\) of teenage girls is 2150. Carefully set up and perform the hypothesis test for this situation and these data. You may need to know that our nutritionist has been doing studies for years and has found that the standard deviation of calorie intake per day in teenage girls is about 200 calories. Do you have confidence in the nutritionist's conclusions?
What does she need to be careful of, or to assume, in order to get the best possible results?

The medication most commonly used today for post-operative pain relief after minor surgery takes an average of 3.5 minutes to ease patients' pain, with a standard deviation of 2.1 minutes. A new drug is being tested which will hopefully bring relief to patients more quickly. For the test, 50 patients were randomly chosen in one hospital after minor surgeries. They were given the new medication, and the time until their pain was relieved was recorded: the average in this group was 3.1 minutes. Does this data provide statistically significant evidence, at the 5% significance level, that the new drug acts more quickly than the old? Clearly show and explain all your set-up and work, of course!

The average household size in a certain region several years ago was 3.14 persons, while the standard deviation was .82 persons. A sociologist wishes to test, at the 5% level of significance, whether the mean household size is different now. Perform the test using new information collected by the sociologist: in a random sample of 75 households this past year, the average size was 2.98 persons.

A medical laboratory claims that the mean turn-around time for performance of a battery of tests on blood samples is 1.88 business days. The manager of a large medical practice believes that the actual mean is larger. A random sample of 45 blood samples had a mean of 2.09 days. Somehow, we know that the population standard deviation of turn-around times is 0.13 day. Carefully set up and perform the relevant test at the 10% level of significance. Explain everything, of course.

6.05: New Page

1. The word “data” is really a plural, corresponding to the singular “datum.” We will try to remember to use plural forms when we talk about “data,” but there will be no penalty for (purely grammatical) failure to do so.↩ 2. Sometimes you will see this written instead as $\sum_{i=1}^n x_i$ . Think of the “$\sum_{i=1}^n{}$” as a little computer program which starts with $i=1$, increases it one step at a time until it gets all the way to $i=n$, and adds up whatever is to the right. So, for example, $\sum_{i=1}^3 2i$ would be $(2*1)+(2*2)+(2*3)$, and so has the value $12$.↩ 3. no pun intended↩ 4. This is a very informal definition of an outlier. Below we will have an extremely precise one.↩ 5. Which we might write 5N$\Sigma$ary for short.↩ 6. Although in the 1950s a doctor (who later was found to be in the pay of the tobacco industry) did say that the clear statistical evidence of association between smoking and cancer might be a sign that cancer causes smoking (I know: crazy!). His theory was that people who have lung tissue which is more prone to developing cancer are more likely to start smoking because somehow the smoke makes that particular tissue feel better. Needless to say, this is not the accepted medical view, because lots of evidence goes against it.↩ 7. It is hard to be certain of the true origins of this phrase. The political scientist Raymond Wolfinger is sometimes given credit – for a version without the “not,” actually. Sometime later, then, it became widespread with the “not.”↩ 8. There are many kinds of infinity in mathematics – in fact, an infinite number of them. The smallest is an infinity that can be counted, like the whole numbers. But then there are many larger infinities, describing sets that are too big even to be counted, like the set of all real numbers.↩ 9.
In fact, in a very precise sense which we will not discuss in this book, the longer you play a game like this, the more you can expect there will be short-term, but very large, wins and losses.↩ 10. By Dan Kernler - Own work, CC BY-SA 4.0, commons.wikimedia.org/w/index.php?curid=36506025 .↩ 11. It dates from the $5^{th}$ century BCE, and is attributed to Hippocrates of Kos.↩ 12. When an experimenter tends to look for information which supports their prior ideas, it's called confirmation bias – MW may have been experiencing a bit of this bias when he mistakenly thought he was average in height for his class.↩
textbooks/stats/Introductory_Statistics/Book%3A_Lies_Damned_Lies_or_Statistics_-_How_to_Tell_the_Truth_with_Statistics_(Poritz)/06%3A_Inferential_Statistics/6.04%3A_New_Page.txt
He who would catch fish must find the water first, they say. If you want to analyze data, you need to obtain them. There are many ways of obtaining data, but the most important are observation and experiment.

Observation is the method where the observer has the least possible influence on the observed. It is important to understand that zero influence is practically impossible because the observer will always change the environment.

The experiment approaches nature the other way. In an experiment, the influence(s) are strictly controlled. Very important here are precise measurements of effects, removal of all interacting factors and (related to that) a contrasting design. The latter means that one experimental group alone makes no sense; there must be at least two: experiment (influence) and control (no influence). Only then can we equalize all possibly interacting factors and take into account solely the results of our influence. Again, no interaction is practically impossible since everything around us is structurally too complicated. Among the most complicated things are we humans, and this is why several special research methods like blind (when patients do not know what they receive, drug or placebo) or even double blind (when the doctor also does not know that) were invented.

1.02: Population and sample

Let us research the simple case: which of two ice-creams is more popular? It would be relatively easy to gather all the information if all these ice-creams were sold in one shop. However, the situation is usually different and there are many different sellers which are really hard to control. In a situation like that, the best choice is sampling. We cannot control everybody but we can control somebody. Sampling is also cheaper, more robust to errors and gives us free hands to perform more data collection and analyses. However, when we receive the information from sampling, another problem becomes apparent—how representative are these results? Is it possible to extrapolate the small piece of sampled information to the whole big population (this is not a biological term) of ice-cream data? Statistics (mathematical statistics, including the theory of sampling) can answer this question.

It is interesting that sampling can be more precise than a total investigation, and not only because it is hard to control the whole variety of cases, so some data will inevitably be mistaken. There are many situations when the smaller size of a sample allows one to obtain more detailed information. For example, in the 19th century many Russian peasants did not remember their age, and all age-related total census data was rounded to tens. However, in this case selective but more thorough sampling (using documents and cross-questioning) could produce a better result. And philosophically, a full investigation is impossible. Even the most complete research is a subset, a sample of something bigger.

1.03: How to obtain the data

There are two main principles of sampling: replication and randomization.

Replication suggests that the same effect will be researched several times. This idea derives from the cornerstone mathematical “big numbers” postulate which in simple words is “the more, the better”. When you count replicates, remember that they must be independent. For example, if you research how light influences plant growth and use five growing chambers, each with ten plants, then the number of replicates is five, not fifty. This is because plants within each chamber are not independent, as they all grow in the same environment, but we research differences between environments.
Five chambers are replicates whereas fifty plants are pseudoreplicates.

Repeated measurements are another complication. For example, in a study of short-term visual memory ten volunteers were supposed to look at the same specific object multiple times. The problem here is that people may remember the object and recall it faster towards the end of a sequence. As a result, these multiple times are not replicates; they are repeated measurements which could tell something about learning but not about memory itself. There are only ten true replicates.

Another important question is how many replicates should be collected. There is an immense amount of publications about it, but in essence, there are two answers: (a) as many as possible and (b) 30. The second answer looks a bit funny, but this rule of thumb is the result of many years of experience. Typically, samples whose size is less than 30 are considered to be small. Nevertheless, even minuscule samples could be useful, and there are methods of data analysis which work with five and even with three replicates. There are also special methods (power analysis) which allow one to estimate how many objects to collect (we will give one example in due course).

Randomization tells, among other things, that every object should have an equal chance to go into the sample. Quite frequently, researchers think that data was randomized while it was not actually collected in a random way. For example, how to select a sample of 100 trees in a big forest? If we simply try to walk and select trees which somehow attracted our attention, this sample will not be random because these trees deviate somehow from the others, and this is why we spotted them. Since one of the best ways of randomization is to introduce an order which is knowingly absent in nature (or at least not related to the study question), a reliable method is, for example, to use a detailed map of the forest, select two random coordinates, and find the tree which is closest to the selected point. However, trees do not grow homogeneously: some of them (like spruces) tend to grow together whereas others (like oaks) prefer to stay apart. With the method described above, spruces will have a better chance to come into the sample, and that breaks the rule of randomization. We might employ a second method and make a transect through the forest using a rope, then select all trees touched by it, and then take, say, every fifth tree to make a total of one hundred. Is the last (second) method appropriate? How to improve it?

Now you know enough to answer another question:

Once upon a time, there was an experiment with the goal of researching the effect of different chemical poisons on weevils. Weevils were held in jars, and chemicals were put on fragments of filter paper. The researcher opened a jar, then picked up the weevil which came out of the jar first, put it on the filter paper and waited until the weevil died. Then the researcher changed the chemical and started the second run of the experiment in the same dish, and so on. But for some unknown reason, the first chemical used was always the strongest (weevils died very fast). Why? How to organize this experiment better?
textbooks/stats/Introductory_Statistics/Book%3A_Visual_Statistics_Use_R_(Shipunov)/01%3A_Data/1.01%3A_Origin_of_the_data.txt
Why we need data analysis

Well, if everything is so complicated, why analyze data? It is frequently evident that one shop has more customers than the other, or one drug is more effective, and so on... —This is correct, but only to some extent. For example, this data

```
2 3 4 2 1 2 2 0
```

is more or less self-explanatory. It is easy to say that here is a tendency, and this tendency is most likely 2. Actually, it is easy to use just a brain to analyze data which contains 5–9 objects. But what about this data?

```
88 22 52 31 51 63 32 57 68 27 15 20 26 3 33 7 35 17 28 32 8 19 60 18 30 104 0 72 51 66 22 44 75 87 95 65 77 34 47 108 9 105 24 29 31 65 12 82
```

(This is a real-world example of some flower measurements in orchids; you can download it from the book data folder as dact.txt.)

It is much harder to say anything about the tendency without calculations: there are too many objects. However, sometimes a big sample is easy enough to understand:

```
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
```

Here everything is so similar that, again, methods of data analysis would be redundant.

As a conclusion, we might say that statistical methods are needed in cases of (1) numerous objects and/or (2) non-uniform data. And of course, if there are not one (like in the examples above) but several variables, our brain does not handle them easily and we again need statistics.

What data analysis can do

1. First of all, data analysis can characterize samples, reveal central tendency (of course, if it is there) and variation. You may think of them as the target and the deviations from it.
2. Then, data analysis reveals differences between samples (usually two samples). For example, in medicine it is very important to understand if there is a difference between physiological characteristics of two groups of patients: those who received the drug in question, and those who received the placebo. There is no other way to understand if the drug works. Statistical tests and effect size estimations will help to understand the reliability of the difference numerically.
3. Data analysis might help in understanding relations within data. There are plenty of relation types. For example, association is the situation when two things frequently occur together (like lightning and thunder). Another type is correlation, which is a way to measure the strength and sign (positive or negative) of a relation. And finally, dependencies allow one not only to spot their presence and measure their strength but also to understand the direction and predict the value of the effect in unknown situations (this is a statistical model).
4. Finally, data analysis might help in understanding the structure of data. This is the most complicated part of statistics because structure includes multiple objects and multiple variables. The most important outcome of the analysis of structure is classification which, in simple words, is an ultimate tool for understanding the world around us.
Without proper classification, most problems are impossible to resolve.

All of the methods above include both descriptive (visualization) methods—which explain the situation—and inferential methods—which employ probability theory and other math. Inferential methods include many varieties (some of them explained below in the main text and in the appendices), e.g., parametric and nonparametric methods, robust methods and re-sampling methods. There are also analyses which fall into several of these categories.

What data analysis cannot do

1. Data analysis cannot read your mind. You should start data analysis only if you know what your data is, and which exact questions you need to answer.
2. Data analysis cannot give you certainty. Most inferential methods are based on the theory of probability.
3. Data analysis does not reflect the world perfectly. It is always based on a sample.

1.05: Answers to exercises

Answer to the exercise about tree sampling. In the case of a transect, spruces still have a better chance to be selected. Also, this forest could have some specific structure along the transect. So to improve the method, one can use several transects and increase the distances between selected trees.

Answer to the weevil question. In that case, the first weevils to come out were always the most active insects, which picked up the lethal dose of the chemical much faster than less active individuals. The rule of replication was also broken here because one dish was used for the whole sequence of experiments. We think that if you read this explanation and understand it, it has already become clear how to improve the experiment.
textbooks/stats/Introductory_Statistics/Book%3A_Visual_Statistics_Use_R_(Shipunov)/01%3A_Data/1.04%3A_What_to_find_in_the_data.txt
Generally, you do not need a computer to process the data. However, contemporary statistics is “heavy” and almost always requires technical help from some kind of software.

02: How to process the data

Almost every computer or smartphone has a calculator. Typically, it can do simple arithmetic, sometimes also square roots and powers. This is enough for basic data processing. However, to do any statistical analysis, such a calculator will need statistical tables which give approximate values of statistics, special characteristics of data distributions. Exact calculation of these statistics is too complicated (for example, it might require integration) and most programs use embedded statistical tables. Calculators usually do not have these tables. An even more important disadvantage of the calculator is the absence of the ability to work with sequences of numbers. To deal with many numbers at once, spreadsheets were invented. The power of the spreadsheet is in data visualization. From the spreadsheet, it is easy to estimate the main parameters of data (especially if the data is small). In addition, spreadsheets have multiple ways to help with entering and converting data. However, as spreadsheets were initially created for accounting, they are still oriented to the tasks typical of that field. Even if they have statistical functions, most of them are not contemporary and are not supported well. Multivariate methods are frequently absent, realization of procedures is not optimal (and frequently hidden from the user), there is no specialized reporting system, and so on.

And thinking of data visualization in spreadsheets—what if the data do not fit the window? In that case, the spreadsheet will start to prevent the understanding of data instead of helping it. Another example—what if you need to work simultaneously with three non-neighboring columns of data? This is also extremely complicated in spreadsheets. This is why specialized statistical software comes onto the scene.

2.02: Statistical software

Graphical systems

There are two groups of statistical software. First, graphical systems which at a glance do not differ much from spreadsheets but are supplied with many more statistical functions and have powerful graphical and report modules. Typical examples are SPSS and MiniTab.

As all visual systems, they are flexible, but only within the given range. If you need something new (a new kind of plot, a new type of calculation, an unusual type of data input), the only possibility is to switch to the non-visual side and use macros or sub-programs. But even more important is that the visual ideology does not work well with more than one user, and does not help if the calculation should be repeated in a different place with different people or several years later. That breaks reproducibility, one of the most important principles of science. Last but not least, in visual software statistical algorithms are hidden from the end-user, so even if you find the name of the procedure you want, it is not exactly clear what the program is going to do.

Statistical environments

This second group of programs uses the command-line interface (CLI). The user enters commands, the system reacts. Sounds simple, but in practice, statistical environments belong to the most complicated systems of data analysis. Generally speaking, the CLI has many disadvantages. It is impossible, for example, to choose an available command from a menu. Instead, the user must remember which commands are available.
Also, this method is so similar to programming that users of statistical environments need to have some programming skills. As a reward, the user has full control over the system: combine all types of analysis, write command sequences into scripts which can be run later at any time, modify graphic output, easily extend the system and, if the system is open source, modify the core statistical environment. The difference between a statistical environment and a graphical system is like the difference between a supermarket and a vending machine!

SAS is one of the most advanced and powerful statistical environments. This commercial system has extensive help and a long history of development. Unfortunately, SAS is frequently overcomplicated even for the experienced programmer, has many “vestiges” of the 1970s (when it was written), is closed-source and extremely expensive...

2.03: The very short history of the S and R

R is a statistical environment. It was created as a freeware analog of the commercial S-Plus, which in turn was an implementation of the S language concept. The S language was first created in 1976 in Bell Labs, and its name was inspired by the famous C language (from the same Bell Labs). S-Plus started at the end of the 1980s, and like much statistical software, was seriously expensive. In August 1993, two New Zealand scientists, Robert Gentleman and Ross Ihaka, decided to make R (this name was, in turn, inspired by S). The idea was to make an independent realization of the S language concept which would differ from S-Plus in some details (for example, in the way it works with local and global variables).

Practically, R is not an imitation of S-Plus but a new “branch” in the family of S software. In the 1990s, R was developing slowly, but when users finally realized its truly amazing opportunities (like the system of R extensions—packages, or libraries) and started to migrate from other statistical systems, R started to grow exponentially. Now, there are thousands of R packages, and R is used almost everywhere! Without any exaggeration, R is now the most important software tool for data analysis.
textbooks/stats/Introductory_Statistics/Book%3A_Visual_Statistics_Use_R_(Shipunov)/02%3A_How_to_process_the_data/2.01%3A_General_purpose_software.txt
R is used everywhere to work with any kind of data. R is capable of doing not only “statistics” in the strict sense but also all kinds of data analysis (like visualization plots), data operations (similar to databasing) and even machine learning and advanced mathematical modeling (which is the niche of other software like Python modules, Octave or MATLAB).

There are several extremely useful features of R: flexibility, reproducibility, open source code and (yes!) the command-line interface. Flexibility allows one to create extension packages for almost all purposes. For the common user, it means that almost everything which has been described in the statistical literature as a method is available in R. And people who professionally work on the creation of statistical methods use R for their research. And (this is a rare case) if the method is not available, it is possible to write the commands implementing it yourself. Reproducibility allows one to repeat the same analysis, without much additional effort, with updated data, or ten years later, or in other institutions. Openness means that it is always possible to look inside the code and find out how exactly the particular procedure was implemented. It is also possible to correct mistakes in the code (since everything made by humans has mistakes and R is not an exception) in a Wikipedia-like communal way.

The command-line interface (CLI) of R is, in truth, superior to the GUI (graphical user interface) of other software. A user of a GUI is just like an ordinary worker, whereas a CLI user is more similar to a foreman who leaves the “dirty work” to the computer, and this is exactly what computers were invented for. The CLI also allows one to make interfaces and connect R with almost any software.

There is also the R “dark side”. R is difficult to learn. This is why you are reading this book. After you install R, you see the welcome screen with a > prompt, and that is it. Many commands are hard to remember, and there are no, or almost no, menus. Sometimes, it is really complicated to find out how to do something particular. As a difference from S-Plus, R makes all calculations in operational memory. Therefore if you accidentally power off the computer, all results not intentionally written to disk will be lost\(^{[1]}\).

2.05: How to download and install R

Since R is free, it is possible to download and install it without any additional procedures. There are several ways to do that depending on your operating system, but generally one needs to google the uppercase letter “R”, which will give the link to the site of the R project. The next step is to find “CRAN” there, the on-line repository of all R-related software. In fact, there are multiple repositories (mirrors), so the next step is to choose the nearest mirror. Then everything is straightforward, and the links will finally get you to the downloading page.

If your operating system has a package manager, software center or similar, installing R is even simpler. All that you need is to find R within the manager and click install\(^{[1]}\). It is also possible to install R and run it from iPhone, iPad and Android phones. However, it is recommended to run R on a full-featured computer since statistical calculations might consume plenty of resources.

Under Windows, R might be installed in two different modes, “one big window with smaller windows inside” (MDI) or “multiple independent windows” (SDI). We recommend using the second (SDI) as R in other operating systems can work only in SDI mode.
It is better to determine the mode during the installation: to make this choice, choose “Custom installation” from one of the first screens. If for some reason you skipped it, it may be done later through the menu (R GUI options). On macOS, in addition to the core R, it is recommended to also install the XQuartz software. Apart from “graphical” R, both Windows and macOS have terminal R applications. While the functionality of the Windows terminal programs is limited, on macOS terminal R runs in a way similar to Linux and therefore makes an appropriate alternative. There are useful features in macOS graphical R, but also restrictions, especially with saving the history of commands (see below). To start using this terminal R application on macOS, the user should first run any available terminal (like Terminal.app).
textbooks/stats/Introductory_Statistics/Book%3A_Visual_Statistics_Use_R_(Shipunov)/02%3A_How_to_process_the_data/2.04%3A_Use_advantages_and_disadvantages_of_the_R.txt
Launching R

Typically, you launch R from the desktop icon or application menu. To launch R from the terminal, type:

Code $1$ (R):

\$ R

—and you will see the R screen. It is even possible to launch R on a remote UNIX server without any graphical system running. In that case, all plots will be written in one PDF file, Rplots.pdf, which will appear in the working directory. If you know how to work with R, it is a good idea to check the fresh installation by typing, for example, plot(1:20) to check if graphics works. If you are a novice to R, proceed to the next section.

First steps

After you successfully opened R, it is good to understand how to exit. After entering the command below with its empty parentheses, be sure to press Enter and answer “n” or “No” to the question:

Code $2$ (R):

q()
Save workspace image? [y/n/c]: n

This simple example already shows that any command (or function, this is almost the same) in R has an argument inside round brackets, parentheses. If there is no argument, you still need these parentheses. If you forget them, R will show the definition of the function instead of quitting:

Code $3$ (R):

q
function (save = "default", status = 0, runLast = TRUE) .Internal(quit(save, status, runLast)) <bytecode: 0x28a5708> <environment: namespace:base>

(For the curious, “bytecode” means that this function was compiled for speed, and “environment” shows the way to call this function. If you want to know the function code, calling it without parentheses does not always work; see the reference card for more advanced methods.)

How to know more about a function? Call the help:

Code $4$ (R):

help(q)

or simply

Code $5$ (R):

?q

And you will see a separate window or (under Linux) the help text in the same window (to exit this help, press q)$^{[1]}$. Now back to the ?q. If you read this help text thoroughly, you might conclude that to quit R without being asked anything, you may want to enter q("no"). Please try it. "no" is the argument of the exit function q(). Actually, not exactly the argument but its value, because in some cases you can skip the name of the argument. The name of the argument is save, so you can type q(save="no"). In fact, most R functions look like function(name="value"); see more detail in Figure $1$. R is pretty liberal about arguments. You will receive the same answers if you enter any of these variants:

Code $6$ (R):

round(1.5, digits=0) round(1.5, d=0) round(d=0, 1.5) round(1.5, 0) round(1.5,) round(1.5)

($^{[2]}$As you see, arguments are matched by name and/or by position. In output, R frequently prints something like [1]; it is just an index of the resulting number(s). What is round()? Run ?round to find out.) It is possible to mess with arguments as long as R “understands” what you want. Please experiment more yourself and find out why this

Code $7$ (R):

round(0, 1.5)

gives the value you probably do not want. If you want to know the arguments of some function, together with their default values, run args():

Code $8$ (R):

args(round) args(q)

There is also an example() function which is quite useful, especially when you learn plotting with R. To run examples supplied with a function, type example(function). Also do not forget to check the demo() function which outputs the list of possible demonstrations; some of them are really handy, say, demo(colors). Here R shows one of its basic principles which came from the Perl language: there is always more than one way to do it. There are many ways to receive help in R! So the default is to ask the “save” question on exit. But why does R ask it?
And what will happen if you answer “yes”? In that case, two files will be written into the R working directory: the binary file .RData and the text file .Rhistory (yes, their names start with a dot). The first contains all objects you created during the R session. The second contains the full history of entered commands. These files will be loaded automatically if you start R from the same directory, and the following message will appear: [Previously saved workspace restored] Frequently, this is not a desirable behavior, especially if you are just learning R and therefore often make mistakes. As long as you study with this book, we strongly recommend answering “no”. If you by chance answered “yes” to the question at the end of the previous R session, you might want to remove the unwanted files:

Code $9$ (R):

unlink(c(".RData", ".Rhistory"))

(Be extremely careful here because R deletes files silently! On macOS, file names might be different; in addition, it is better to uncheck Read history file on startup in the Preferences menu.) If you are bored of answering the question again and again, and at the same time do not want to enter q("no"), there is a third way. Supply the R starting command with the option --no-save (it can be done differently on different operating systems), and you will get rid of it$^{[3]}$.

How to type

When you work in R, the previous command can be recalled if you press the “arrow up” key ($\uparrow$). This is extremely useful and saves plenty of time, especially when you need to run a command similar to the preceding one. On some systems, there is also backward search (Ctrl+R on Linux) which is even more efficient than arrow up. If you mistakenly typed a long command and want to wipe it without submitting it to R, there is the Ctrl+U key (it works on Linux and Windows). If you run R in a terminal which has no apparent way to scroll, use Shift+PgUp and Shift+PgDn. Another really helpful key is Tab. To see how it works, start to type a long command like read.t... and then press Tab. It will call completion with suggestions on how to continue. Completion works not only for commands, but also for objects, command arguments and even for file names! To invoke the latter, start to type read.table(" and then press Tab once or twice; all files in the working directory will be shown. Remember that all brackets (braces, parentheses) and quotes must always be closed. One of the best ways to make sure of it is to enter opening and closing brackets together, and then return your cursor to the middle. Actually, graphical R on macOS does this by default. Also pair all quotes. R accepts two types of quotes, single ’...’ and double "...", but they must be paired with a quote of the same type. A good question is when you need quotes. In general, quotes belong to character strings. The rule of thumb is that objects external to R need quotes whereas internal objects can be called without quotes. R is sensitive to the case of symbols. Commands ls() and Ls() are different! However, spaces do not play any role. These commands are the same:

Code $10$ (R):

round (1.5, digits=0) round(1.5, digits=0) round ( 1.5 , digits = 0 )

Do not be afraid of making errors. On the contrary: make as many mistakes as possible! The more mistakes you make when you learn, the fewer you make when you start to work with R on your own. R is frequently literal when it sees a mistake, and its error messages will help you to decipher it. Conversely, R is perfectly silent when you do well. If your input has no errors, R usually says nothing.
It is, by the way, really hard to crash R. If nevertheless your R seems to hang, press the Esc button (on Linux, try Ctrl+C instead). Yet another appeal to users of this book: Experiment! Try unknown commands, change options, numbers, names, remove parentheses, load any data, run code from the Internet, from help, from your brain. The more you experiment, the better you learn R.

How to play with R

Now, when we know the basics, it is time to do something more interesting in R. Here is a simple task: convert the sequence of numbers from 1 to 9 into a table with three columns. In a spreadsheet or visual statistical software, there will be several steps: (1) make two new columns, (2–3) copy the two pieces into the clipboard and paste them and (4) delete extra rows. In R, this is just one command:

Code $11$ (R):

bb <- matrix(1:9, ncol=3) bb

(The symbol <- is an assignment operator; it is read from right to left. bb is a new R object (it is a good custom to name objects with double letters: fewer chances to intersect with existing R objects). But what is 1:9? Find it$^{[4]}$ out yourself. Hint: it is explained a few pages from this one.)

Again from the above: How to select a sample of 100 trees in the big forest? If you remember, our answer was to produce 100 random pairs of coordinates. If this forest is split into 10,000 squares ($100\times100$), then the required sample might look like:

Code $12$ (R):

coordinates <- expand.grid(1:100, 1:100) sampled.rows <- sample(1:nrow(coordinates), 100) coordinates[sampled.rows, ]

(First, expand.grid() was used above to create all 10,000 combinations of square numbers. Then, the powerful sample() command randomly selects 100 row numbers from however many rows there are in the table coordinates. Note that your results will likely be different since sample() uses the random number generator. Finally, this sampled.rows was used as an index to randomly select 100 rows (pairs of coordinates) from the 10,000 combinations. What is left for you now is to go to the forest and find these trees :-))

Let us now play dice and cards with R:

Code $13$ (R):

dice <- as.vector(outer(1:6, 1:6, paste)) sample(dice, 4, replace=TRUE) sample(dice, 5, replace=TRUE) cards <- paste(rep(c(6:10,"V","D","K","T"), 4), c("Tr","Bu","Ch","Pi")) sample(cards, 6) sample(cards, 6)

(Note here the outer() command which combines values, paste() which joins them into text, rep() which repeats some values, and also the replace=TRUE argument (by default, replace is FALSE). What is replace=FALSE? Please find out. Again, your results could be different from what is shown here. Note also that TRUE or FALSE must always be fully uppercased.)

Overgrown calculator

But the simplest way is to use R as an advanced calculator:

Code $14$ (R):

2+2 2+.2

(Note that you can skip the leading zero in decimal numbers.) The more complicated example, “log10(((sqrt(sum(c(2, 2))))^2)*2.5)”, will be calculated as follows:

1. The vector will be created from two twos: c(2, 2).
2. The sum of its elements will be counted: 2+2=4.
3. The square root is calculated: sqrt(4)=2.
4. It is raised to the power of 2: 2^2=4.
5. The result is multiplied by 2.5: 4*2.5=10.
6. The decimal logarithm is calculated: log10(10)=1.

As you see, it is possible to embed pairs of parentheses. It is a good idea to count opening and closing parentheses before you press Enter; these numbers must be equal. After submission, R will open them, pair by pair, from the deepest pair to the most external one.
So an R expression is in some way similar to a Russian doll, or to an onion, or to an artichoke (Figure $2$): to analyze it, one has to peel it layer by layer. It is also important to say that R (similar to its TeX friend) is among the most deeply thought-out pieces of software. In essence, the R "base" packages cover about 95% of common statistical and data handling needs, and therefore external tools are often redundant. It is wise to keep things simple with R. If there are no parentheses, R will use precedence rules which are similar to the rules known from middle school. For example, in 2+3*5 R will multiply first (3*5=15), and only then calculate the sum (2+15=17). Please check it in R yourself. How to make the result 25? Add parentheses. Let us feed something mathematically illegal to R, for example, the square root or logarithm of $-1$: Code $15$ (R): log(-1) If you thought that R would crash, that was wrong. It returns NaN instead. NaN, "not a number", is one of the reserved words. What about division by zero? Code $16$ (R): 100/0 The result is another reserved word, Inf, infinity.
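A short sketch to try yourself; the exact warning messages may differ slightly between R versions:
```
2 + 3 * 5     # 17: multiplication has higher precedence
(2 + 3) * 5   # 25: parentheses change the order
sqrt(-1)      # NaN plus a warning, but R does not crash
-100/0        # -Inf: infinity has a sign
0/0           # NaN again: even infinity does not help here
```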
How to enter the data from within R We now need to know how to enter data into R. Basic command is c() (shortcut of the word concatenate): Code \(1\) (R): `c(1, 2, 3, 4, 5)` However, in that way your numbers will be forgotten because R does not remember anything which is not saved into object: Code \(2\) (R): ```aa <- c(2, 3, 4, 5, 6, 7, 8) aa``` (Here we created an object aa, assigned to it vector of numbers from one to five, and then printed object with typing its name.) If you want to create and print object simultaneously, use external parentheses: Code \(3\) (R): ```(aa <- c(1, 2, 3, 4, 5, 6, 7, 8, 9)) aa``` (By the way, here we created aa object again, and R silently re-wrote it. R never gives a warning if object already exists!) In addition to c(), we can use commands rep(), seq(), and also the colon (:) operator: Code \(4\) (R): ```rep(1, 5) rep(1:5, each=3) seq(1, 5) 1:5``` How to name your objects R has no strict rules on the naming your objects, but it is better to follow some guidelines: 1. Keep in mind that R is case-sensitive, and, for example, X and x are different names. 2. For objects, use only English letters, numbers, dot and (possibly) underscore. Do not put numbers and dots in the beginning of the name. One of recommended approaches is double-letter (or triple-letter) when you name objects like aa, jjj, xx and so on. 3. In data frames, we recommend to name your columns (characters) with uppercase letters and dots. The examples are throughout of this book. 4. Do not reassign names already given to popular functions (like c()), reserved words (especially T, F, NA, NaN, Inf and NULL) and predefined objects like pi\(^{[1]}\), LETTERS and letters. If you accidentally did it, there is conflict() function to help in these situations. To see all reserved words, type ?Reserved. How to load the text data In essence, data which need to be processed could be of two kinds: text and binary. To avoid unnecessary details, we will accept here that text data is something which you can read and edit in the simple text editor like Geany\(^{[2]}\). But if you want to edit the binary data, you typically need a program which outputted this file in the past. Without the specific software, the binary data is not easy to read. Text data for the statistical processing is usually text tables where every row corresponds with the table row, and columns are separated with delimiters, either invisible, like spaces or tab symbols, or visible, like commas or semicolons. If you want R to “ingest” this kind of data, is is necessary to make sure first that the data file is located within the same directory which R regards as a working directory: Code \(5\) (R): `getwd()` If this is not the directory you want, you can change it with the command: Code \(6\) (R): ```setwd("e:\wrk\temp") # Windows only! getwd()``` Note how R works with backslashes under Windows. Instead of one backslash, you need to enter two. Only in that case R under Windows will understand it. It is also possible to use slashes under Windows, similar to Linux and macOS: Code \(7\) (R): ```setwd("e:/wrk/temp") getwd()``` Please always start each of your R session from changing working directory. Actually, it is not absolutely necessary to remember long paths. You can copy it from your file manager into R. Then, graphical R under Windows and macOS have rudimentary menu system, and it is sometimes easier to change working directory though the menu. 
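Here is a small sketch of path-related commands which change nothing on disk; the setwd() calls are commented out because the directories shown are only hypothetical examples:
```
getwd()                           # where R currently reads and writes files
normalizePath(".")                # the same place, as a full (absolute) path
file.path("data", "mydata.txt")   # builds "data/mydata.txt" on any system
# setwd("c:/wrk/temp")            # Windows (forward slashes work too)
# setwd("~/wrk/temp")             # Linux and macOS
```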
Finally, collection asmisc.r contains function Files() which is the textual file browser, so it is possible to run setwd(Files()) and then follow screen instructions\(^{[3]}\). The next step after you got sure that the working directory is correct, is to check if your data file is in place, with dir() command: Code \(8\) (R): `dir("data")` It is really handy to separate data from all other stuff. Therefore, we assumed above that you have subdirectory data in you working directory, and your data files (including mydata.txt) are in that subdirectory. Please create it (and of course, create the working directory) if you do not have it yet. You can create these with your file manager, or even with R itself: Code \(9\) (R): `dir.create("data")` Now you can load your data with read.table() command. But wait a minute! You need to understand the structure of your file first. Command read.table() is sophisticated but it is not smart enough to determine the data structure on the fly\(^{[4]}\). This is why you need to check data. You can open it in any available simple text editor, in your Web browser, or even from inside R with file.show() or url.show() command. It outputs the data “as is”. This is what you will see: Code \(10\) (R): `file.show("data/mydata.txt")` (By the way, if you type file.show("data/my and press Tab, completion will show you if your file is here—if it is really here. This will save both typing file name and checking the presence with dir().) How did the file mydata.txt appear in your data subdirectory? We assume that you already downloaded it from the repository mentioned in the foreword. If you did not do it, please do it now. It is possible to perform with any browser and even with R: Code \(11\) (R): `download.file("http://ashipunov.info/data/mydata.txt","data/mydata.txt")` (Within parentheses, left part is for URL whereas right tells R how to place and name the downloaded file.) Alternatively, you can check your file directly from the URL with url.show() and then use read.table() from the same URL. Now time finally came to load data into R. We know that all columns have names, and therefore use head=TRUE, and also know that the delimiter is the semicolon, this is why we use sep=";": Code \(12\) (R): `mydata <- read.table("data/mydata.txt", sep=";", head=TRUE)` Immediately after we loaded the data, we must check the new object. There are three ways: Code \(13\) (R): ```mydata <- read.table("data/mydata.txt", sep=";", head=TRUE) str(mydata)``` Third way is to simply type mydata but this is not optimal since when data is large, your computer screen will be messed with content. Commands head() and str() are much more efficient. To summarize, local data file should be loaded into R in three steps: 1. Make sure that you data is in place, with dir() command, Tab completion or through Web browser; 2. Take a look on data with file.show() or url.show() command and determine its structure; 3. Load it with read.table() command using appropriate options (see below). How to load data from Internet Loading remote data takes same three steps from above. However, as the data is not on disk but somewhere else, to check its presence, the best way is to open it in the Internet browser using URL which should be given to you; this also makes the second step because you will see its structure in the browser window. 
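If you prefer to keep a local copy of remote data (for example, to work offline later), one possible approach is to create the data subdirectory and download the file only if it is not already there; the URL is the same one used throughout this chapter:
```
if (!dir.exists("data")) dir.create("data")   # create subdirectory if absent
if (!file.exists("data/mydata.txt"))          # download only once
  download.file("http://ashipunov.info/data/mydata.txt", "data/mydata.txt")
dir("data")                                   # check that the file is in place
```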
It is also possible to check the structure with the command: Code \(14\) (R): `url.show("http://ashipunov.info/data/mydata.txt")` Then you can run read.table() but with URL instead of the file name: Code \(15\) (R): `read.table("http://ashipunov.info/data/mydata.txt", sep=";", head=TRUE)` (Here and below we will sometimes skip creation of new object step. However, remember that you must create new object if you want to use the data in R later. Otherwise, the content will be shown and immediately forgotten.) How to use read.table() Sometimes, you want R to “ingest” not only column names but also row names: Code \(16\) (R): `read.table("data/mydata1.txt", head=TRUE)` (File mydata1.txt\(^{[5]}\) is constructed in the unusual way: its first row has three items whereas all other rows each have four items delimited with the tab symbol—“big invisible space”. Please do not forget to check that beforehand, for example using file.show() or url.show() command.) Sometimes, there are both spaces (inside cells) and tabs (between cells): Code \(17\) (R): `read.table("data/mydata2.txt", sep=" ", quote="", head=TRUE)` If we run read.table() without sep="\t" option (which is “separator is a tab”), R will give an error. Try it. But why did it work for mydata1.txt? This is because the default separator is both space and/or tab. If one of them used as the part of data, the other must be stated as separator explicitly. Note also that since row names contain quote, quoting must be disabled, otherwise data will silently read in a wrong way. How to know what separator is here, tab or space? This is usually simple as most editors, browsers and file.show() / url.show() commands visualize tab as a space which is much broader than single letter. However, do not forget to use monospaced font in your software, other fonts might be deceiving. Sometimes, numbers have comma as a decimal separator (this is another worldwide standard). To input this kind of data, use dec option: Code \(18\) (R): `read.table("data/mydata3.txt", dec=",", se=";", h=T)` (Please note the shortcuts. Shortcuts save typing but could be dangerous if they match several possible names. There are only one read.table() argument which starts with se, but several of them start with s (e.g., skip); therefore it is impossible to reduce se further, into s. Note also that TRUE and FALSE are possible to shrink into T and F, respectively (but this is the only possible way); we will avoid this in the book though.) When read.table() sees character columns, it converts them into factors (see below). To avoid this behavior, use as.is=TRUE option. Command scan() is similar to read.table() but reads all data into only one “column” (one vector). It has, however, one unique feature: Code \(19\) (R): `scan()` (What did happen here? First, we entered scan() with empty first argument, and R changed its prompt to numbers allowing to type numerical data in, element after element. To finish, enter empty row\(^{[6]}. One can paste here even numbers from the clipboard!) How to load binary data Functions from the foreign package (it is installed by default) can read data in MiniTab, S, SAS, SPSS, Stata, Systat, and FoxPro DBF binary formats. To find out more, you may want to call it first with command library(foreign) and then call help about all its commands help(package=foreign). R can upload images. There are multiple packages for this, one of the most developed is pixmap. R can also upload GIS maps of different formats including ArcInfo (packages maps, maptools and others). 
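Returning to the foreign package, here is a quick sketch of how to see what it offers; the SPSS file name in the commented line is purely hypothetical:
```
library(foreign)        # installed together with R by default
ls("package:foreign")   # read.dta(), read.spss(), read.xport(), read.dbf(), ...
# dd <- read.spss("survey.sav", to.data.frame=TRUE)   # hypothetical file
```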
R has its own binary format. It is very fast to write and to load\(^{[7]}\) (useful for big data) but impossible to use with any program other than R: Code \(20\) (R): ```xx <- "apple" save(xx, file="xx.rd") exists("xx") rm(xx) exists("xx") dir() load("xx.rd") xx``` (Here we used several new commands. To save and to load binary files, one needs save() and load() commands, respectively; to remove the object, there is rm() command. To show you that the object was deleted, we used exists() command.) Note also that everything which is written after “#” symbol on the same text string is a comment. R skips all comments without reading. There are many interfaces which connect R to databases including MySQL, PostgresSQL and sqlite (it is possible to call the last one directly from inside R see the documentation for RSQLite and sqldf packages). But what most users actually need is to load the spreadsheet data made with MS Excel or similar programs (like Gnumeric or LibreOffice Calc). There are three ways. First way we recommend to all users of this book: convert Excel file into the text, and then proceed with read.table() command explained above\(^{[8]}\). On macOS, the best way is likely to save data from spreadsheet as tab-delimited text file. On Windows and Linux, if you copy any piece of spreadsheet into clipboard and then paste it into text editor (including R script editor), it becomes the tab-delimited text. The same is possible in macOS but you will need to use some terminal editor (like nano). Another way is to use external packages which convert binary spreadsheets “on the fly”. One is readxl package with main command read_excel(), another is xlsx package with main command read.xlsx(). Please note that these packages are not available by default so you need to download and install them (see below for the explanations). How to load data from clipboard Third way is to use clipboard. It is easy enough: on Linux or Windows you will need to select data in the open spreadsheet, copy it to clipboard, and then in R window type command like: Code \(21\) (R): `read.table("clipboard", sep=" ", head=TRUE)` On macOS, this is slightly different: Code \(22\) (R): `read.table(pipe("pbpaste"), sep=" ", head=TRUE)` (Ignore warnings about “incomplete lines” or “closed connection”. Package clipr unifies the work with clipboard on main OSes.) “Clipboard way” is especially good when your data come out of non-typical software. Note also that entering scan() and then pasting from clipboard (see above) work the same way on all systems. Summarizing the above, recommended data workflow in R might look like: 1. Enter data into the spreadsheet; 2. Save it as a text file with known delimiters (tab and semicolon are preferable), headers and row names (if needed); 3. Load it into R with read.table(); 4. If you must change the data in R, write it afterwards to the external file using write.table() command (see below); 5. Open it in the spreadsheet program again and proceed to the next round. One of its big pluses of this workflow is the separation between data editing and data processing. How to edit data in R If there is a need to change existing objects, you could edit them through R. We do not recommend this though, spreadsheets and text editors are much more advanced then R internal tools. Nevertheless, there is a spreadsheet sub-program embedded into R which is set to edit table-like objects (matrices or data frames). To start it on bb matrix (see above), enter command fix(bb) and edit “in place”. 
Everything which you enter will immediately change your object. This is somewhat contradictory with R principles so there is the similar function edit() which does not change the object but outputs the result to the R window. For other types of objects (not table-like), commands fix() / edit() call internal (on Windows or macOS) or external (on Linux) text editor. To use external editor, you might need to supply an additional argument, edit(..., editor="name") where name could be any text editor which is available in the system. R on Linux has vi editor as a default but it is too advanced for the beginner\(^{[9]}\); we recommend to use nano instead\(^{[10]}\). Also, there is a pico() command which is usually equal to edit(..., editor="nano"). nano editor is usually available also through the macOS terminal. How to save the results Beginners in R simply copy results of the work (like outputs from statistical tests) from the R console into some text file. This is enough if you are the beginner. Earlier or later, however, it becomes necessary to save larger objects (like data frames): Code \(23\) (R): `write.table(file="trees.txt", trees, row.names=FALSE, sep=" ", quote=FALSE)` (File trees.txt, which is made from the internal trees data frame, will be written into the working directory.) Please be really careful with write.table() as R is perfectly silent if the file with the same name trees.txt is already here. Instead of giving you any warning, it simply overwrites it! By the way, “internal data” means that it is accessible from inside R directly, without preliminary loading. You may want to check which internal data is available with command data(). While a scan() is a single-vector variant of read.table(), write() command is the single-vector variant of write.table(). It is now a good time to speak about file name conventions in this book. We highly recommend to follow these simply rules: 1. Use only lowercase English letters, numbers and underscore for the file and directory names (and also dot, but only to separate file extension). 2. Do not use uppercase letters, spaces and other symbols! 3. Make your names short, preferably shorter than 15–20 symbols. 4. For R command (script) files, use extension *.r By the way, for the comfortable work in R, it is strongly recommended to change those options of your operating system which allow it to hide file extensions. On macOS, go to Finder preferences, choose Advanced tab and select the appropriate box. On Windows, click View tab in File Explorer, choose Options, then View again, unselect appropriate box and apply this to all folders. Linux, as a rule, does not hide file extensions. But what if we need to write into the external file our results (like the output from statistical test)? There is the sink() command: Code \(24\) (R): ```sink("1.txt", split=TRUE) 2+2 sink()``` (Here the string “[1] 4” will be written to the external file.), We specified split=TRUE argument because we wanted to see the result on the screen. Specify also append=TRUE if you want to add output to the existing file. To stop sinking, use sink() without arguments. Be sure that you always close sink()! There are many tools and external packages which enhance R to behave like full-featured report system which is not only calculates something for you but also helps you to write the results. One of the simplest is Rresults shell script (http://ashipunov.info/shipunov/r) which works on macOS and Linux. The appendix of the book explains Sweave system. 
There are also knitr and much more. History and scripts To see what you typed during the current R session, run history()\(^{[11]}\): Code \(25\) (R): ```history(100) # 100 last commands history(Inf) # all session commands history(p="plot") # last plot commands``` If you want to save your history of commands, use savehistory() with the appropriate file name (in quotes) as argument\(^{[12]}). While you work with this book, it is a good idea to use savehistory() and save all commands from each R session in the file named, saying, by the date (like 20170116.r) and store this file in your working folder. To do that on macOS, use menu R -> Preferences -> Startup -> History, uncheck Read history file on startup and and enter the name of today’s history file. When you close R, file will appear in your working directory. To save all objects in the binary file, type save.image(). You may want to use it if, for example, you are experimenting with R. R allows to create scripts which might be run later to reproduce your work. Actually, R scripts could be written in any text editor\(^{[13]}\). In the appendix, there is much more about R scripts, but the following will help you to create your own first one: 1. Open the text editor, or just type file.edit("hello.r")\(^{[14]}\) 2. Write there the string print("Hello, world!") 3. Save the file under hello.r name in your working directory 4. Call it from R using the command source("hello.r") 5. ... and you will see [1] "Hello, world!" in R console as if you typed it. (In fact, you can even type in the script "Hello world!" without print(), R will understand what to do.) Then, every time you add any R command to the hello.r, you will see more and more output. Try it. To see input (commands) and output (results) together, type source("hello.r", echo=TRUE). Scripting is the “killer feature” of R. If all your data files are in place, and the R script is made, you may easily return to your calculations years later! Moreover, others can do exactly the same with your data and therefore your research becomes fully reproducible. Even more, if you find that your data must be changed, you run the same script and it will output results which take all changes into account. Command source() allows to load commands not only from local file but also from Internet. You only need to replace file name with URL.
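The whole script exercise can even be done without leaving R; this is only a sketch, and the file name hello.r is the same as above:
```
writeLines('print("Hello, world!")', "hello.r")   # create the script file
source("hello.r")                                 # [1] "Hello, world!"
source("hello.r", echo=TRUE)                      # commands and output together
```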
Graphical systems One of the most valuable part of every statistical software is the ability to make diverse plots. R sets here almost a record. In the base, default installation, several dozens of plot types are already present, more are from recommended lattice package, and much more are in the external packages from CRAN where more than a half of them (several thousands!) is able to produce at least one unique type of plot. Therefore, there are several thousands plot types in R. But this is not all. All these plots could be enhanced by user! Here we will try to describe fundamental principles of R graphics. Let us look on this example (Figure $1$): Code $1$ (R) plot(1:20, main="Title") legend("topleft", pch=1, legend="My wonderful points") (Curious reader will find here many things to experiment with. What, for example, is pch? Change its number in the second row and find out. What if you supply 20:1 instead of 1:20? Please discover and explain.) Command plot() draws the basic plot whereas the legend() adds some details to the already drawn output. These commands represent two basic types of R plotting commands: 1. high-level commands which create new plot, and 2. low-level commands which add features to the existing plot. Consider the following example: Code $2$ (R) plot(1:20, type="n") mtext("Title", line=1.5, font=2) points(1:20) legend("topleft", pch=1, legend="My wonderful points") (These commands make almost the same plot as above! Why? Please find out. And what is different?) Note also that type argument of the plot() command has many values, and some produce interesting and potentially useful output. To know more, try p, l, c, s, h and b types; check also what example(plot) shows. Naturally, the most important plotting command is the plot(). This is a “smart” command$^{[1]}$. It means that plot() “understands” the type of the supplied object, and draws accordingly. For example, 1:20 is a sequence of numbers (numeric vector, see below for more explanation), and plot() “knows” that it requires dots with coordinates corresponding to their indices (x axis) and actual values (y axis). If you supply to the plot() something else, the result most likely would be different. Here is an example (Figure $2$): Code $3$ (R) plot(cars) title(main="Cars from 1920s") Here commands of both types are here again, but they were issued in a slightly different way. cars is an embedded dataset (you may want to call ?cars which give you more information). This data is not a vector but data frame (sort of table) with two columns, speed and distance (actually, stopping distance). Function plot() chooses the scatterplot as a best way to represent this kind of data. On that scatterplot, x axis corresponds with the first column, and y axis—with the second. We recommend to check what will happen if you supply the data frame with three columns (e.g., embedded trees data) or contingency table (like embedded Titanic or HairEyeColor data) to the plot(). There are innumerable ways to alter the plot. For example, this is a bit more fancy “twenty points”: Code $4$ (R) plot(1:20, pch=3, col=6, main="Title") (Please run this example yourself. What are col and pch? What will happen if you set pch=0? If you set col=0? Why?) Sometimes, default R plots are considered to be “too laconic”. This is simply wrong. Plotting system in R is inherited from S where it was thoroughly developed on the base of systematic research made by W.S. Cleveland and others in Bell Labs. There were many experiments$^{[2]}$. 
For example, in order to understand which plot types are easier to catch, they presented different plots and then asked to reproduce data numerically. The research resulted in recommendations of how to make graphic output more understandable and easy to read (please note that it is not always “more attractive”!) In particular, they ended up with the conclusion that elementary graphical perception tasks should be arranged from easiest to hardest like: position along a scale $\rightarrow$ length $\rightarrow$ angle and slope $\rightarrow$ area $\rightarrow$ volume $\rightarrow$ color hue, color saturation and density. So it is easy to lie with statistics, if your plot employs perception tasks mostly from the right site of this sequence. (Do you see now why pie charts are particularly bad?) They applied this paradigm to S and consequently, in R almost everything (point shapes, colors, axes labels, plotting size) in default plots is based on the idea of intelligible graphics. Moreover, even the order of point and color types represents the sequence from the most easily perceived to less easily perceived features. Look on the plot from Figure $3$. Guess how was it done, which commands were used? Many packages extend the graphical capacities of R. Second well-known R graphical subsystem comes from the lattice package (Figure $4$): Code $5$ (R) library(lattice) xyplot(1:20 ~ 1:20, main="title") (We repeated 1:20 twice and added tilde because xyplot() works slightly differently from the plot(). By default, lattice should be already installed in your system.$^{[3]}$) Package lattice is by default already installed on your system. To know which packages are already installed, type library(). Next, below is what will happen with the same 1:20 data if we apply function qplot() from the third popular R graphic subsystem, ggplot2$^{[4]}$ package (Figure $5$): Code $6$ (R) library(ggplot2) qplot(1:20, 1:20, main="title") We already mentioned above that library() command loads the package. But what if this package is absent in your installation? ggplot2 is not installed by default. In that case, you will need to download it from Internet R archive (CRAN) and install. This could be done with install.packages("ggplot2") command (note plural in the command name and quotes in argument). During installation, you will be asked first about preferable Internet mirror (it is usually good idea to choose the first). Then, you may be asked about local or system-wide installation (local one works in most cases). Finally, R for Windows or macOS will simply unpack the downloaded archive whereas R on Linux will compile the package from source. This takes a bit more time and also could require some additional software to be installed. Actually, some packages want additional software regardless to the system. Maximal length and maximal width of birds’ eggs are likely related. Please make a plot from eggs.txt data and confirm (or deny) this hypothesis. Explanations of characters are in companion eggs_c.txt file. Graphical devices This is the second important concept of R graphics. When you enter plot(), R opens screen graphical device and starts to draw there. If the next command is of the same type, R will erase the content of the device and start the new plot. If the next command is the “adding” one, like text(), R will add something to the existing plot. Finally, if the next command is dev.off(), R will close the device. Most of times, you do not need to call screen devices explicitly. 
They will open automatically when you type any of main plotting commands (like plot()). However, sometimes you need more than one graphical window. In that case, open additional device with dev.new() command. Apart from the screen device, there are many other graphical devices in R, and you will need to remember the most useful. They work as follows: Code $7$ (R) png(file="01_20.png", bg="transparent") plot(1:20) text(10, 20, "a") dev.off() png() command opens the graphical device with the same name, and you may apply some options specific to PNG, e.g., transparency (useful when you want to put the image on the Web page with some background). Then you type all your plotting commands without seeing the result because it is now redirected to PNG file connection. When you finally enter dev.off(), connection and device will close, and file with a name 01_20.png will appear in the working directory on disk. Note that R does it silently so if there was the file with the same name, it will be overwritten! So saving plots in R is as simple as to put elephant into the fridge in three steps (Remember? Open fridge – put elephant – close fridge.) This “triple approach” (open device – plot – close device) is the most universal way to save graphics from R. It works on all systems and (what is really important), from the R scripts. For the beginner, however, difficult is that R is here tacit and does not output anything until the very end. Therefore, it is recommended first to enter plotting commands in a common way, and check what is going on the screen graphic device. Then enter name of file graphic device (like png()), and using arrow up, repeat commands in proper sequence. Finally, enter dev.off(). png() is good for, saying, Web pages but outputs only raster images which do not scale well. It is frequently recommended to use vector images like PDF: Code $8$ (R) pdf(file="01_20.pdf", width=8) plot(1:20) text(10, 20, "a") dev.off() (Traditionally, PDF width is measured in inches. Since default is 7 inches, the command above makes a bit wider PDF.) (Above, we used “quaternary approach” because after high-level command, we added some low-level ones. This is also not difficult to remember, and as simple as putting hippo into the fridge in four steps: open fridge – take elephant – put hippo – close fridge.) R also can produce files of SVG (scalable vector graphics) format$^{[5]}$. Important is to always close the device! If you did not, there could be strange consequences: for example, new plots do not appear or some files on disk become inaccessible. If you suspect that it is the case, repeat dev.off() several times until you receive an error like: Code $9$ (R) dev.off() (This is not a dangerous error.) It usually helps. Please create the R script which will make PDF plot by itself. Graphical options We already said that R graphics could be tuned in the almost infinite number of ways. One way of the customization is the modification of graphical options which are preset in R. This is how you, for example, can draw two plots, one under another, in the one window. To do it, change graphical options first (Figure $6$): Code $10$ (R) old.par <- par(mfrow=c(2, 1)) hist(cars$speed, main="") hist(cars$dist, main="") par(old.par) (hist() command creates histogram plots, which break data into bins and then count number of data points in each bin. See more detailed explanations at the end of “one-dimensional” data chapter.) The key command here is par(). 
First, we changed one of its parameters, namely mfrow which regulates number and position of plots within the plotting region. By default mfrow is c(1, 1) which means “one plot vertically and one horizontally”. To protect the older value of par(), we saved them in the object old.par. At the end, we changed par() again to initial values. The separate task is to overlay plots. That may be done in several ways, and one of them is to change the default par(new=...) value from FALSE to TRUE. Then next high-level plotting command will not erase the content of window but draw over the existed content. Here you should be careful and avoid intersecting axes: Code $11$ (R) hist(cars$speed, main="", xaxt="n", xlab="") old.par <- par(new=TRUE) hist(cars$dist, main="", axes=FALSE, xlab="", lty=2) par(old.par) (Try this plot yourself.) Interactive graphics Interactive graphics enhances the data analysis. Interactive tools trace particular points on the plot to their origins in a data, add objects to the arbitrary spots, follow one particular data point across different plots (“brushing”), enhance visualization of multidimensional data, and much more. The core of R graphical system is not very interactive. Only two interactive commands, identify() and locator() come with the default installation. With identify(), R displays information about the data point on the plot. In this mode, the click on the default (left on Windows and Linux) mouse button near the dot reveals its row number in the dataset. This continues until you right-click the mouse (or Command-Click on macOS). Code $12$ (R) plot(1:20) identify(1:20) Identifying points in 1:20 is practically useless. Consider the following: Code $13$ (R) plot(USArrests[, c(1, 3)]) identify(USArrests[, c(1, 3)], labels=row.names(USArrests)) By default, plot() does not name states, only print dots. Yes, this is possible to print all state names but this will flood plot window with names. Command identify() will help if you want to see just outliers. Command locator() returns coordinates of clicked points. With locator() you can add text, points or lines to the plot with the mouse$^{[6]}$. By default, output goes to the console, but with the little trick you can direct it to the plot: Code $14$ (R) plot(1:20) text(locator(), "My beloved point", pos=4) (Again, left click (Linux & Windows) or click (macOS) will mark places; when you stop this with the right click (Linux & Windows) or Command+Click (macOS), the text will appear in previously marked place(s).) How to save the plot which was modified interactively? The “triple approach” explained above will not work because it does not allow interaction. When your plot is ready on the screen, use the following: Code $15$ (R) dev.copy("pdf", "01_20.pdf"); dev.off() This pair of commands (concatenated with command delimiter, semicolon, for the compactness) copy existing plot into the specified file. Plenty of interactive graphics is now available in R through the external packages like iplot, loon, manipulate, playwith, rggobi, rpanel, TeachingDemos and many others.
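To tie together devices and graphical options, here is a sketch which writes the two-histogram figure from above straight into a PNG file; the file name cars_hist.png is arbitrary:
```
png(file="cars_hist.png", width=600, height=600)   # open the file device
old.par <- par(mfrow=c(2, 1))                      # two plots, one under another
hist(cars$speed, main="Speed")
hist(cars$dist, main="Stopping distance")
par(old.par)                                       # restore graphical options
dev.off()                                          # close the device: the file appears on disk
```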
Answer to the question of how to find the R command if you know only what it should do (e.g., “anova”). In order to find this from within R, you may go in several ways. First is to use double question marks command ??: Code \(1\) (R): `??anova` (Output might be long because it includes all installed packages. Pay attention to rows started with “base” and “stats”.) Similar result might be achieved if you start the interactive (Web browser-based) help with help.start() and then enter “anova” into the search box. Second, even simpler way, is to use apropos(): Code \(2\) (R): `apropos("anova")` Sometimes, nothing helps: Code \(3\) (R): ```??clusterization apropos("clusterization")``` Then start to search in the Internet. It might be done from within R: Code \(4\) (R): `RSiteSearch("clusterization")` In the Web browser, you should see the new tab (or window) with the query results. If nothing helps, as the R community. Command help.request() will guide you through posting sequence. Answer to the plot question (Figure 2.8.4): Code \(5\) (R): ```plot(1:20, col="green", xlab="", ylab="") points(20:1, pch=0)``` (Here empty xlab and ylab were used to remove axes labels. Note that pch=0 is the rectangle.) Instead of col="green", one can use col=3. See below palette() command to understand how it works. To know all color names, type colors(). Argument col could also have multiple values. Check what happen if you supply, saying, col=1:3 (pay attention to the very last dots). To know available point types, run example(points) and skip several plots to see the table of points; or simply look on Figure A.1.1 in this book (and read comments how it was made). Answer to the question about eggs. First, let us load the data file. To use read.table() command, we need to know file structure. To know the structure, (1) we need to look on this file from R with url.show() (or without R, in the Internet browser), and also (2) to look on the companion file, eggs_c.txt. From (1) and (2), we conclude that file has three nameless columns from which we will need first and second (egg length and width in mm, respectively). Columns are separated with large space, most likely the Tab symbol. Now we can run read.table(): Code \(6\) (R): `eggs <- read.table("data/eggs.txt")` Next step is always to check the structure of new object: Code \(7\) (R): ```eggs <- read.table("data/eggs.txt") str(eggs)``` It is also the good idea to look on first rows of data: Code \(8\) (R): ```eggs <- read.table("data/eggs.txt") head(eggs)``` Our first and second variables received names V1 (length) and V2 (width). Now we need to plot variables to see possible relation. The best plot in that case is a scatterplot, and to make scatterplot in R, we simply use plot() command: Code \(9\) (R): `plot(V2 ~ V1, data=eggs, xlab="Length, mm", ylab="Width, mm", pch=21, bg="grey")` (Command plot(y ~ x) uses Rformula interface. It is almost the same as plot(x, y)\(^{[1]}\); but note the different order in arguments.) Resulted “cloud” is definitely elongated and slanted as it should be in case of dependence. What would make this more clear, is some kind of the “average” line showing the direction of the relation. 
As usual, there are several possibilities in R (Figure \(1\)): Code \(10\) (R): ```abline(line(eggs$V1, eggs$V2), lty=2, lwd=1.5) lines(loess.smooth(eggs$V1, eggs$V2), col=2, lty=2, lwd=1.5) legend("topleft", lty=2, col=1:2, lwd=1.5, legend=c("Tukey's median-median line", "LOESS curve"))``` (Note the use of line(), lines() and abline(): all three are really different commands. lines() and abline() are low-level graphic commands which add line(s) to the existing plot. The first uses coordinates while the second uses coefficients. line() and loess.smooth() do not draw anything; they calculate numbers to be used by the drawing commands. To see this in more detail, run help() for every command.) The first, line() approach uses John Tukey's algorithm based on medians (see below) whereas loess.smooth() uses the more complicated non-linear LOESS (LOcally wEighted Scatterplot Smoothing) which estimates the overall shape of the curve\(^{[2]}\). Both are approximate but robust, exactly what we need to answer the question. Yes, there is a dependence between egg maximal width and egg maximal length. There is one problem though. Look at Figure \(1\): many "eggs" are overlaid with other points which have exactly the same location, and it is not easy to see how many data points share one spot. We will try to address this in the next chapter. Answer to the R script question. It is enough to create (with any text editor) a text file and name it, for example, my_script1.r. Inside, type the following: pdf("my_plot1.pdf") plot(1:20) dev.off() Create the subdirectory test and copy your script there. Then close R as usual, open it again, direct it (through the menu or with the setwd() command) to make the test subdirectory the working directory, and run: Code \(11\) (R): `source("my_script1.r", echo=TRUE)` If everything is correct, the file my_plot1.pdf will appear in the test directory. Please do not forget to check it: open it with your PDF viewer. If anything went wrong, it is recommended to delete the directory test along with all its content, modify the master copy of the script and repeat the cycle again, until the results become satisfactory.
To process data it is not enough just to obtain it. You need to convert it to the appropriate format, typically to numbers. Since Galileo Galilei, who urged us to "measure what can be measured, and make measurable what cannot be measured", European science has accumulated tremendous experience in transferring surrounding events into numbers. Most of our instruments are devices which translate features of the environment (e.g., temperature, distance) into the numerical language. 03: Types of Data It is extremely important that temperature and distance change smoothly and continuously. This means that if we have two different measures of the temperature, we can always imagine an intermediate value. Any two temperature or distance measurements form an interval including an infinite amount of other possible values. Thus, our first data type is called measurement, or interval. Measurement data is similar to an ideal endless ruler where every tick mark corresponds to a real number. However, measurement data do not always change smoothly and continuously from negative infinity to positive infinity. For example, temperature corresponds to a ray and not a line since it is limited by absolute zero (0 K), and the agreement is that below it no temperature is possible. But the rest of the temperature points along its range are still comparable with real numbers. It is even more interesting to measure angles. Angles change continuously, but after 359\(^\circ\) comes 0\(^\circ\)! Instead of a line, there is a segment with only positive values. This is why a special circular statistics exists that deals with angles. Sometimes, collecting measurement data requires expensive or rare equipment and complex protocols. For example, to estimate the colors of flowers as a continuous variable, you would (as a minimum) have to use a spectrophotometer to measure the wavelength of the reflected light (a numerical representation of visible color). Now let us consider another example. Say, we are counting the customers in a shop. If on one day there were 947 people, and 832 on another, we can easily imagine values in between. It is also evident that on the first day there were more customers. However, the analogy breaks when we consider two consecutive numbers (like 832 and 831) because, since people are not counted in fractions, there is no intermediate. Therefore, these data correspond better to natural than to real numbers. These numbers are ordered, but do not always allow intermediates and are always non-negative. They belong to a different type of measurement data: not continuous, but discrete\(^{[1]}\). Related to the definition of measurement data is the idea of parametricity. With that approach, inferential statistical methods are divided into parametric and nonparametric. Parametric methods work well if: 1. The data type is continuous measurement. 2. The sample size is large enough (usually no less than 30 individual observations). 3. The data distribution is normal or close to it. Such data is often called "normal", and this feature "normality". Should at least one of the above assumptions be violated, the data usually requires nonparametric methods. An important advantage of nonparametric tests is their ability to deal with data without prior assumptions about the distribution. On the other hand, parametric methods are more powerful: the chance of finding an existing pattern is higher because nonparametric algorithms tend to "mask" differences by combining individual observations into groups.
In addition, nonparametric methods for two and more samples often suffer from sensitivity to the inequality of sample distributions. Let us create normal and non-normal data artificially: Code \(1\) (R): ```rnorm(10) runif(10)``` (First command creates 10 random numbers which come from normal distribution. Second creates numbers from uniform distribution\(^{[2]}\) Whereas first set of numbers are concentrated around zero, like in darts game, second set are more or less equally spaced.) But how to tell normal from non-normal? Most simple is the visual way, with appropriate plots (Figure \(1\)): Code \(2\) (R): ```old.par <- par(mfrow=c(2, 1)) hist(rnorm(100), main="Normal data") hist(runif(100), main="Non-normal data") par(old.par)``` (Do you see the difference? Histograms are good to check normality but there are better plots—see next chapter for more advanced methods.) Note again that nonparametric methods are applicable to both “nonparametric” and “parametric” data whereas the opposite is not true (Figure \(2\)). By the way, this last figure (Euler diagram) was created with R by typing the following commands: Code \(3\) (R): ```library(plotrix) plot(c(-1, 1), c(-1, 1), type="n", xlab="", ylab="", axes=FALSE) draw.circle(-.2, 0, .4) draw.circle(.1, 0, .9) text(-.2, 0, "parametric", cex=1.5) text(.1, 0.6, "nonparametric", cex=1.5)``` (We used plotrix package which has the draw.circle() command defined. As you see, one may use R even for these exotic purposes. However, diagrams are better to draw in specialized applications like Inkscape.) Measurement data are usually presented in R as numerical vectors. Often, one vector corresponds with one sample. Imagine that we have data on heights (in cm) of the seven employees in a small firm. Here is how we create a simple vector: Code \(3\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) ``` As you learned from the previous chapter, x is the name of the R object, <- is an assignment operator, and c() is a function to create vector. Every R object has a structure: Code \(4\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) str(x)``` Function str() shows that x is a num, numerical vector. Here is the way to check if an object is a vector: Code \(5\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) is.vector(x)``` There are many is.something()-like functions in R, for example: Code \(6\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) is.numeric(x)``` There are also multiple as.something()-like conversion functions. To sort heights from smallest to biggest, use: Code \(7\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) sort(x)``` To reverse results, use: Code \(8\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) rev(sort(x))``` Measurement data is somehow similar to the common ruler, and R package vegan has a ruler-like linestack() plot useful for plotting linear vectors: One of simple but useful plots is the linestack() timeline plot from vegan package (Figure \(3\)): Code \(1\) (R): ```library(vegan) phanerozoic <- read.table("data/phanerozoic.txt") with(phanerozoic, linestack(541-V2, labels=paste(V1, V2), cex=1))``` In the open repository, file compositae.txt contains results of flowering heads measurements for many species of aster family (Compositae). In particular, we measured the overall diameter of heads (variable HEAD.D) and counted number of rays (“petals”, variable RAYS, see Figure \(4\)). 
Please explore part of this data graphically, with scatterplot(s), and find out whether three species (yellow chamomile, Anthemis tinctoria; garden cosmos, Cosmos bipinnatus; and false chamomile, Tripleurospermum inodorum) differ in the combination of head diameter and number of rays. 3.02: Grades and t-shirts- ranked data Ranked (or ordinal) data do not come directly from measurements and do not easily correspond to numbers. For example, the quality of mattresses could be estimated with some numbers, from bad ("0") to excellent ("5"). These assigned numbers are a matter of convenience. They may be anything. However, they maintain a relationship and continuity. If we grade the most comfortable one as "5", and a somewhat less comfortable one as "4", it is possible to imagine what "4.5" would be. This is why many methods designed for measurement variables are applicable to ranked data. Still, we recommend treating such results with caution and keeping in mind that these grades are arbitrary. By default, R will identify ranked data as a regular numerical vector. Here are seven employees ranked by their heights: Code \(1\) (R): ```rr <- c(2, 1, 3, 3, 1, 1, 2) str(rr)``` Object rr is the same kind of numerical vector, but the numbers "1", "2" and "3" are not measurements, they are ranks, "places". For example, "3" means that this person belongs to the tallest group. Function cut() helps to make the three groups above automatically: Code \(2\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) (hh <- cut(x, 3, labels=c(1:3), ordered_result=TRUE))``` The result is an ordered factor (see below for more explanations). Note that cut() is an irreversible operation, and the "numbers" which you receive are not the numbers (heights) you started from: Code \(3\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) x (hh <- cut(x, 3, labels=c(1:3), ordered_result=TRUE)) as.numeric(hh)``` Ranked data always require nonparametric methods. If we still want to use parametric methods, we have to obtain measurement data (which usually means designing the study differently) and also check it for normality. However, there is a possibility to re-encode ranked data into measurement data. For example, with appropriate care a color description could be encoded as red, green and blue channel intensities. Suppose we examine the average building height in various cities of the world. The straightforward thing to do would be to put the names of places under the variable "city" (nominal data). It is, of course, the easiest way, but such a variable would be almost useless in statistical analysis. Alternatively, we may encode the cities with letters moving from north to south. This way we obtain ranked data, open to many nonparametric methods. Finally, we may record the geographical coordinates of each city. This way we obtain measurement data, which might be suitable for parametric methods of analysis.
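Here is a tiny sketch of these three encodings side by side; the cities and their latitudes are chosen only for illustration:
```
city <- c("Oslo", "Berlin", "Rome")       # nominal: names only
city.rank <- ordered(c("A", "B", "C"))    # ranked: letters from north to south
city.lat <- c(59.9, 52.5, 41.9)           # measurement: latitude in degrees
str(city); str(city.rank); str(city.lat)
```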
Nominal, or categorical, data, unlike ranked, are impossible to order or align. They are even farther away from numbers. For example, if we assign numerical values to males and females (say, “1” and “2”), it would not imply that one sex is somehow “larger” then the other. An intermediate value (like “1.5”) is also hard to imagine. Consequently, nominal indices may be labeled with any letters, words or special characters—it does not matter. Regular numerical methods are just not applicable to nominal data. There are, however, ways around. The simplest one is counting, calculating frequencies for each level of nominal variable. These counts, and other derived measures, are easier to analyze. Character vectors R has several ways to store nominal data. First is a character (textual) vector: Code \(1\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") is.character(sex) is.vector(sex) str(sex)``` (Please note the function str() again. It is must be used each time when you deal with new objects!) By the way, to enter character strings manually, it is easier to start with something like aa <- c(""""), then insert commas and spaces: aa <- c("", "") and finally insert values: aa <- c("b", "c"). Another option is to enter scan(what="char") and then type characters without quotes and commas; at the end, enter empty string. Let us suppose that vector sex records sexes of employees in a small firm. This is how R displays its content: Code \(2\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex``` To select elements from the vector, use square brackets: Code \(3\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex[2:3]``` Yes, square brackets are the command! They are used to index vectors and other R objects. To prove it, run ?"[". Another way to check that is with backticks which allow to use non-trivial calls which are illegal otherwise: Code \(4\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") `[`(sex, 2:3)``` Smart, object-oriented functions in R may “understand” something about object sex: Code \(5\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") table(sex)``` Command table() counts items of each type and outputs the table, which is one of few numerical ways to work with nominal data (next section tells more about counts). Factors But plot() could do nothing with the character vector (check it yourself). To plot the nominal data, we are to inform R first that this vector has to be treated as factor: Code \(6\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) sex.f``` Now plot() will “see” what to do. It will invisibly count items and draw a barplot (Figure \(1\)): Code \(7\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) plot(sex.f)``` It happened because character vector was transformed into an object of a type specific to categorical data, a factor with two levels: Code \(8\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) is.factor(sex.f) is.character(sex.f) str(sex.f) levels(sex.f)``` Code \(9\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) nlevels(sex.f)``` In R, many functions (including plot()) prefer factors to character vectors. Some of them could even transform character into factor, but some not. Therefore, be careful! 
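A small sketch of this difference; nothing here is specific to the sex data, any character vector behaves the same way:
```
sex <- c("male", "female", "male", "male", "female", "male", "male")
table(sex)            # table() happily counts a character vector
summary(sex)          # for characters, summary() reports only length and mode
summary(factor(sex))  # after conversion to factor, it shows the counts
```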
There are some other facts to keep in mind. First (and most important), factors, unlike character vectors, allow for easy transformation into numbers: Code \(10\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) as.numeric(sex.f)``` But why is female 1 and male 2? Answer is really simple: because “female” is the first in alphabetical order. R uses this order every time when factors have to be converted into numbers. Reasons for such transformation become transparent in a following example. Suppose, we also measured weights of the employees from a previous example: Code \(11\) (R): `w <- c(69, 68, 93, 87, 59, 82, 72)` We may wish to plot all three variables: height, weight and sex. Here is one possible way (Figure \(2\)): Code \(12\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) w <- c(69, 68, 93, 87, 59, 82, 72) plot(x, w, pch=as.numeric(sex.f), col=as.numeric(sex.f), xlab="Height, cm", ylab="Weight, kg") legend("topleft", pch=1:2, col=1:2, legend=levels(sex.f))``` Parameters pch (from “print character”) and col (from “color”) define shape and color of the characters displayed in the plot. Depending on the value of the variable sex, data point is displayed as a circle or triangle, and also in black or in red. In general, it is enough to use either shape, or color to distinguish between levels. Note that colors were printed from numbers in accordance with the current palette. To see which numbers mean which colors, type: Code \(13\) (R): `palette()` It is possible to change the default palette using this function with argument. For example, palette(rainbow(8)) will replace default with 8 new “rainbow” colors. To return, type palette("default"). It is also possible to create your own palette, for example with function colorRampPalette() (see examples in next chapters) or using the separate package (like RColorBrewer or cetcolor, the last allows to create perceptually uniform palettes). How to color barplot from Figure \(1\) in black (female) and red (male)? If your factor is made from numbers and you want to convert it back into numbers (this task is not rare!), convert it first to the characters vector, and only then—to numbers: Code \(14\) (R): ```(ff <- factor(3:5)) as.numeric(ff) # incorrect! as.numeric(as.character(ff)) # correct!``` Next important feature of factors is that subset of a factor retains by default the original number of levels, even if some of the levels are not here anymore. Compare: Code \(15\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) sex.f[5:6] sex.f[6:7]``` There are several ways to exclude the unused levels, e.g. with droplevels() command, with drop argument, or by “back and forth” (factor to character to factor) transformation of the data: Code \(16\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) droplevels(sex.f[6:7]) sex.f[6:7, drop=T] factor(as.character(sex.f[6:7]))``` Third, we may order factors. Let us introduce a fourth variable—T-shirt sizes for these seven hypothetical employees: Code \(17\) (R): ```m <- c("L", "S", "XL", "XXL", "S", "M", "L") m.f <- factor(m) m.f``` Here levels follow alphabetical order, which is not appropriate because we want S (small) to be the first. 
Therefore, we must tell R that these data are ordered: Code \(18\) (R): ```m <- c("L", "S", "XL", "XXL", "S", "M", "L") m.f <- factor(m) m.o <- ordered(m.f, levels=c("S", "M", "L", "XL", "XXL")) m.o``` (Now R recognizes relationships between sizes, and m.o variable could be treated as ranked.) In this section, we created quite a few new R objects. One of skills to develop is to understand which objects are present in your session at the moment. To see them, you might want to list objects: Code \(19\) (R): `ls()` If you want all objects together with their structure, use ls.str() command. There is also a more sophisticated version of object listing, which reports objects in a table: Code \(20\) (R): `Ls() # asmisc.r` (To use Ls(), either download asmisc.r and then source() it from the disk, or source from URL mentioned in the preface.) Ls() is also handy when you start to work with large objects: it helps to clean R memory\(^{[1]}\). Logical vectors and binary data Binary data (do not mix with a binary file format) are a special case related with both nominal and ranked data. A good example would be “yes” of “no” reply in a questionnaire, or presence vs. absence of something. Sometimes, binary data may be ordered (as with presence/absence), sometimes not (as with right or wrong answers). Binary data may be presented either as 0/1 numbers, or as logical vector which is the string of TRUE or FALSE values. Imagine that we asked seven employees if they like pizza and encoded their “yes”/“no” answers into TRUE or FALSE: Code \(21\) (R): `(likes.pizza <- c(T, T, F, F, T, T, F))` Resulted vector is not character or factor, it is logical. One of interesting features is that logical vectors participate in arithmetical operations without problems. It is also easy to convert them into numbers directly with as.numeric(), as well as to convert numbers into logical with as.logical(): Code \(22\) (R): ```(likes.pizza <- c(T, T, F, F, T, T, F)) is.vector(likes.pizza) is.factor(likes.pizza) is.character(likes.pizza) is.logical(likes.pizza) likes.pizza * 1 as.logical(c(1, 1, 0)) as.numeric(likes.pizza)``` This is the most useful feature of binary data. All other types of data, from measurement to nominal (the last is most useful), could be converted into logical, and logical is easy to convert into 0/1 numbers: Code \(23\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") Tobin(sex, convert.names=FALSE)``` Afterwards, many specialized methods, such as logistic regression or binary similarity metrics, will become available even to that initially nominal data. As an example, this is how to convert the character sex vector into logical: Code \(24\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") (is.male <- sex == "male") (is.female <- sex == "female")``` (We applied logical expression on the right side of assignment using “is equal?” double equation symbol operator. This is the second numerical way to work with nominal data. Note that one character vector with two types of values became two logical vectors.) Logical vectors are useful also for indexing: Code \(25\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) x > 170 x[x > 170]``` (First, we applied logical expression with greater sign to create the logical vector. Second, we used square brackets to index heights vector; in other words, we selected those heights which are greater than 170 cm.) 
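Since logical values take part in arithmetic (TRUE counts as 1 and FALSE as 0), counts and proportions come almost for free; a small illustration with the heights used above:
```
x <- c(174, 162, 188, 192, 165, 168, 172.5)
sum(x > 170)   # how many heights are above 170 cm
mean(x > 170)  # which proportion of heights is above 170 cm
```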
Apart from the greater and equal signs, there are many other logical operators which allow one to create logical expressions in R (see Table \(1\)):

== EQUAL
<= EQUAL OR LESS
>= EQUAL OR MORE
& AND
| OR
! NOT
!= NOT EQUAL
%in% MATCH

Table \(1\) Some logical operators and how to understand them.

AND and OR operators (& and |) help to build truly advanced and highly useful logical expressions: Code \(26\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") w <- c(69, 68, 93, 87, 59, 82, 72) m <- c("L", "S", "XL", "XXL", "S", "M", "L") ((x < 180) | (w <= 70)) & (sex=="female" | m=="S")``` (Here we selected only those people whose height is less than 180 cm or whose weight is 70 kg or less; these people must also be either females or wear small (S) T-shirts. Note that the use of parentheses allows us to control the order of calculations and also makes the expression more understandable.) Logical expressions are even more powerful if you learn how to use them together with the command ifelse() and the operator if (the latter is frequently supplied with else): Code \(27\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") ifelse(sex=="female", "pink", "blue")``` (Command ifelse() is vectorized so it goes through multiple conditions at once. Operator if takes only one condition.) Note the use of curly braces in the if/else sketch below. Curly braces turn a number of expressions into a single (combined) expression. When there is only a single command, the curly braces are optional. Curly braces may contain two commands on one row if they are separated with a semicolon.
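Here is a minimal sketch of if ... else with curly braces (an illustration added here, not the book's original example):
```
sex <- c("male", "female", "male", "male", "female", "male", "male")
if (sum(sex == "male") > sum(sex == "female")) {
  cat("Males are the majority.\n")
  cat("Difference:", sum(sex == "male") - sum(sex == "female"), "\n")
} else {
  cat("Males are not the majority.\n")
}
```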
These data types arise from modification of the “primary”, original data, mostly from ranked or nominal data that cannot be analyzed head-on. Close to secondary data is an idea of compositional data which are quantitative descriptions of the parts of the whole (probabilities, proportions, percentages etc.) Percentages, proportions and fractions (ratios) are pretty common and do not need detailed explanation. This is how to calculate percentages (rounded to whole numbers) for our sex data: Code \(1\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex.t <- table(sex) round(100*sex.t/sum(sex.t))``` Since it is so easy to lie with proportions, they must be always supplied with the original data. For example, 50% mortality looks extremely high but if it is discovered that there was only 2 patients, then impression is completely different. Ratios are particularly handy when measured objects have widely varying absolute values. For example, weight is not very useful in medicine while the height-to-weight ratio allows successful diagnostics. Counts are just numbers of individual elements inside categories. In R, the easiest way to obtain counts is the table() command. There are many ways to visualize counts and percentages. Bu default, R plots one-dimensional tables (counts) with simple vertical lines (try plot(sex.t) yourself). More popular are pie-charts and barplots. However, they represent data badly. There were multiple experiments when people were asked to look on different kinds of plots, and then to report numbers they actually remember. You can run this experiment yourself. Figure \(1\) is a barplot of top twelve R commands: Code \(2\) (R): ```load("data/com12.rd") exists("com12") # check if our object is here com12.plot <- barplot(com12, names.arg="") text(com12.plot, par("usr")[3]*2, srt=45, pos=2, xpd=TRUE, labels=names(com12))``` (We load()’ed binary file to avoid using commands which we did not yet learn; to load binary file from Internet, use load(url(...)). To make bar labels look better, we applied here the “trick” with rotation. Much more simple but less aesthetic solution is barplot(com12, las=2).) Try looking at this barplot for 3–5 minutes, then withdraw from this book and report numbers seen there, from largest to smallest. Compare with the answer from the end of the chapter. In many experiments like this, researchers found that the most accurately understood graphical feature is the position along the axis, whereas length, angle, area, density and color are each less and less appropriate. This is why from the beginning of R history, pie-charts and barplots were recommended to replace with dotcharts (Figure \(2\)): Code \(3\) (R): ```load("data/com12.rd") exists("com12") # check if our object is here com12.plot <- barplot(com12, names.arg="") dotchart(com12)``` We hope you would agree that the dotchart is easier both to understand and to remember. (Of course, it is possible to make this plot even more understandable with sorting like dotchart(rev(sort(com12)))try it yourself. It is also possible to sort bars, but even sorted barplot is worse then dotchart.) Another useful plot for counts is the word cloud, the image where every item is magnified in accordance with its frequency. This idea came out of text mining tools. 
To make word clouds in R, one might use the wordcloud package (Figure \(3\)): Code \(4\) (R): ```com80 <- read.table("data/com80.txt") library(wordcloud) set.seed(5) # freeze random number generator``` Code \(5\) (R): ```com80 <- read.table("data/com80.txt") wordcloud(words=com80[, 1], freq=com80[, 2], colors=brewer.pal(8, "Dark2"))``` (New com80 object is a data frame with two columns—check it with str() command. Since wordcloud() “wants” words and frequencies separately, we supplied columns of com80 individually to each argument. To select column, we used square brackets with two arguments: e.g., com80[, 1] is the first column. See more about this in the “Inside R” section.) Command set.seed() needs more explanation. It freezes random number generator in such a way that immediately after its first use all random numbers are the same on different computers. Word cloud plot uses random numbers, therefore in order to have plots similar between Figure \(2\) and your computer, it is better run set.seed() immediately before plotting. Its argument should be single integer value, same on all computers. To re-initialize random numbers, run set.seed(NULL). By the way, NULL object is not just an emptiness, it is a really useful tool. For example, it is easy to remove columns from data frame with command like trees[, 3] <- NULL. If some command “wants” to plot but you do not need this feature, suppress plotting with pdf(file=NULL) command (do not forget to close device with dev.off()). Compare with your results: Code \(6\) (R): `set.seed(1); rnorm(1)` Word cloud is a fashionable way to show counts but it has one big minus: whereas it possible to tell which word in more frequent, it is impossible to tell how frequent it is. Dotchart of com80 needs more space (it is better to plot is as big PDF) but there will be both relative and absolute frequencies visible. Try it yourself: Code \(7\) (R): ```com80 <- read.table("data/com80.txt") dotchart(log(com80[, 2]), labels=com80[, 1], cex=.6)``` (We used logarithmic scale to make counts less dispersed and cex parameter to decrease font size.) While counts and percentages usually come from categorical (nominal) data, ranks usually come from the measurement data, like our heights: Code \(8\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) x.ranks <- x names(x.ranks) <- rank(x) x.ranks sort(x.ranks) # easier to spot``` (The “trick” here was to use names to represent ranks. All R objects, along with values, might bear names.) Not only integers, but fractions too may serve as rank; the latter happens when there is an even number of equal measurements (i.e., some items are duplicated): Code \(9\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) x.ranks2 <- c(x, x[3]) # duplicate the 3rd item names(x.ranks2) <- rank(x.ranks2) sort(x.ranks2)``` In general, identical original measurements receive identical ranks. This situation is called a “tie”, just as in sport. Ties may interfere with some nonparametric tests and other calculations based on ranks: Code \(10\) (R): ```x <- c(174, 162, 188, 192, 165, 168, 172.5) x.ranks2 <- c(x, x[3]) # duplicate the 3rd item wilcox.test(x.ranks2)``` (If you did not see Rwarnings before, remember that they might appear even if there is nothing wrong. Therefore, ignore them if you do not understand them. However, sometimes warnings bring useful information.) R always returns a warning if there are ties. It is possible to avoid ties adding small random noise with jitter() command (examples will follow.) 
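As a small preview (assuming the x.ranks2 vector created above), jitter() breaks the tie and the warning disappears; naturally, the test results will change slightly from run to run because the added noise is random:
```
x <- c(174, 162, 188, 192, 165, 168, 172.5)
x.ranks2 <- c(x, x[3])         # duplicate the 3rd item
rank(x.ranks2)                 # tied values share a fractional rank
wilcox.test(jitter(x.ranks2))  # no ties, so no warning
```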
Ranks are widely used in statistics. For example, the popular measure of central tendency, median (see later) is calculated using ranks. They are especially suited for ranked and nonparametric measurement data. Analyses based on ranks are usually more robust but less sensitive.
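To illustrate the link between ranks and the median (a small sketch, not from the original text): for an odd-sized sample, the median is simply the middle value of the sorted data:
```
x <- c(174, 162, 188, 192, 165, 168, 172.5)
sort(x)[(length(x) + 1)/2]  # the middle (4th of 7) value after sorting
median(x)                   # the same number
```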
There is no such thing as a perfect observation, much less a perfect experiment. The larger is the data, the higher is the chance of irregularities. Missing data arises from the almost every source due to imperfect methods, accidents during data recording, faults of computer programs, and many other reasons. Strictly speaking, there are several types of missing data. The easiest to understand is “unknown”, datum that was either not recorded, or even lost. Another type, “both” is a case when condition fits to more then one level. Imagine that we observed the weather and registered sunny days as ones and overcast days with zeros. Intermittent clouds would, in this scheme, fit into both categories. As you see, the presence of “both” data usually indicate poorly constructed methods. Finally, “not applicable”, an impossible or forbidden value, arises when we meet something logically inconsistent with a study framework. Imagine that we study birdhouses and measure beak lengths in birds found there, but suddenly found a squirrel within one of the boxes. No beak, therefore no beak length is possible. Beak length is “not applicable” for the squirrel. In R, all kinds of missing data are denoted with two uppercase letters NA. Imagine, for example, that we asked the seven employees about their typical sleeping hours. Five named the average number of hours they sleep, one person refused to answer, another replied “I do not know” and yet another was not at work at the time. As a result, three NA ’s appeared in the data: Code \(1\) (R): `(hh <- c(8, 10, NA, NA, 8, NA, 8))` We entered NA without quotation marks and R correctly recognizes it among the numbers. Note that multiple kinds of missing data we had were all labeled identically. An attempt to just calculate an average (with a function mean()), will lead to this: Code \(2\) (R): ```hh <- c(8, 10, NA, NA, 8, NA, 8) mean(hh)``` Philosophically, this is a correct result because it is unclear without further instructions how to calculate average of eight values if three of them are not in place. If we still need the numerical value, we can provide one of the following: Code \(3\) (R): ```hh <- c(8, 10, NA, NA, 8, NA, 8) mean(hh, na.rm=TRUE) mean(na.omit(hh))``` The first one allows the function mean() to accept (and skip) missing values, while the second creates a temporary vector by throwing NA s away from the original vector hh. The third way is to substitute (impute) the missing data, e.g. with the sample mean: Code \(4\) (R): ```hh <- c(8, 10, NA, NA, 8, NA, 8) hh.old <- hh hh.old hh[is.na(hh)] <- mean(hh, na.rm=TRUE) hh``` Here we selected from hh values that satisfy condition is.na() and permanently replaced them with a sample mean. To keep the original data, we saved it in a vector with the other name (hh.old). There are many other ways to impute missing data, more complicated are based on bootstrap, regression and/or discriminant analysis. Some are implemented in packages mice and cat. Collection asmisc.r supplied with this book, has Missing.map() function which is useful to determine the “missingness” (volume and relative location of missing data) in big datasets.
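Before any imputation, it is useful to know how much is actually missing; base R can report this directly (a small illustration with the hh vector from above):
```
hh <- c(8, 10, NA, NA, 8, NA, 8)
sum(is.na(hh))    # how many values are missing
which(is.na(hh))  # where exactly they are
mean(is.na(hh))   # proportion of missing values
```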
Problems arising while typing in data are not limited to empty cells. Mistypes and other kinds of errors are also common, and among them most notorious are outliers, highly deviated data values. Some outliers could not be even mistypes, they come from the highly heterogeneous data. Regardless of the origin, they significantly hinder the data analysis as many statistical methods are simply not applicable to the sets with outliers. The easiest way to catch outliers is to look at maximum and minimum for numerical variables, and at the frequency table for character variables. This could be done with handy summary() function. Among plotting methods, boxplot() (and related boxplot.stats()) is probably the best method to visualize outliers. While if it is easy enough to spot a value which differs from the normal range of measurements by an order of magnitude, say “17” instead of “170” cm of height, a typing mistake of “171” instead of “170” is nearly impossible to find. Here we rely on the statistical nature of the data—the more measurements we have, the less any individual mistake will matter. There are multiple robust statistical procedures which are not so influenced from outliers. Many of them are also nonparametric, i.e. not sensitive to assumptions about the distribution of data. We will discuss some robust methods later. Related with outliers is the common mistake in loading data—ignoring headers when they actually exist: Code \(1\) (R): ```m1 <- read.table("data/mydata.txt", sep=";") # wrong! str(m1) m2 <- read.table("data/mydata.txt", sep=";", h=TRUE) # correct! str(m2)``` Command read.table() converts whole columns to factors (or character vectors) even if one data value is not a proper number. This behavior is useful to identify mistypes, like “O” (letter O) instead of “0” (zero), but will lead to problems if headers are not defined explicitly. To diagnose problem, use str(), it helps to distinguish between the wrong and correct way. Do not forget to use str() all the time while you work in R! 3.07: Changing data- basics of transformations In complicated studies involving many data types: measurements and ranks, percentages and counts, parametric, nonparametric and nominal, it is useful to unify them. Sometimes such transformations are easy. Even nominal data may be understood as continuous, given enough information. For example, sex may be recorded as continuous variable of blood testosterone level, possibly with additional measurements. Another, more common way, is to treat discrete data as continuous—it is usually safe, but sometimes may lead to unpleasant surprises. Another possibility is to transform measurement data into ranked. R function cut() allows to perform this operation and create ordered factors. What is completely unacceptable is transforming common nominal data into ranks. If values are not, by their nature, ordered, imposing an artificial order can make the results meaningless. Data are often transformed to make them closer to parametric and to homogenize standard deviations. Distributions with long tails, or only somewhat bell-shaped (as in Figure 4.2.5), might be log-transformed. It is perhaps the most common transformation. There is even a special argument plot(..., log="axis"), where "axis" should be substituted with x or y, presenting it in (natural) logarithmic scale. Another variant is to simply calculate logarithm on the fly like plot(log(...). 
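To see the difference between these two variants (a small illustration using the built-in trees data): log="y" keeps the values intact and only changes the axis, whereas log() changes the values themselves:
```
plot(trees$Girth, trees$Volume, log="y")  # logarithmic axis, raw values
plot(trees$Girth, log(trees$Volume))      # log-transformed values, ordinary axis
```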
Consider some widely used transformations and their implications in R (we assume that your measurements are recorded in the vector data):

• Logarithmic: log(data + 1). It may normalize distributions with positive skew (right-tailed), bring relationships between variables closer to linear and equalize variances. It cannot handle zeros, which is why we added one to each value.
• Square root: sqrt(data). It is similar to the logarithmic transformation in its effects, but cannot handle negatives.
• Inverse: 1/(data + 1). This one stabilizes variances; the raw inverse 1/data cannot handle zeros, hence the added one.
• Square: data^2. Together with the square root, it belongs to the family of power transformations. It may normalize data with negative skew (left-tailed), bring relationships between variables closer to linear and equalize variances.
• Logit: log(p/(1-p)). It is mostly used on proportions to linearize S-shaped, or sigmoid, curves. Along with logit, these types of data are sometimes treated with the arcsine transformation, asin(sqrt(p)). In both cases, p must be between 0 and 1.

While working with multiple variables, keep track of their dimensions. Try not to mix them up, recording one variable in millimeters and another in centimeters. Nevertheless, in multivariate statistics even data measured in common units might have a different nature. In this case, variables are often standardized, e.g. brought to the same mean and/or the same variance with the scale() function. The built-in trees data is a good example: Code \(1\) (R): `scale(trees)` At the end of this explanation of data types, we recommend reviewing a small chart which could be helpful for determining the type of data (Figure \(1\)).
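As a small check of what scale() (mentioned just above) does with its default settings (an illustration, not from the original text): each column is centered by its mean and divided by its standard deviation:
```
gg <- trees$Girth
by.hand <- (gg - mean(gg))/sd(gg)
all.equal(as.numeric(scale(gg)), by.hand)  # should be TRUE
```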
Vectors in numeric, logical or character modes and factors are enough to represent simple data. However, if the data is structured and/or variable, there is frequently a need for more complicated R objects: matrices, lists and data frames. Matrices Matrix is a popular way of presenting tabular data. There are two important things to know about them in R. First, they may have various dimensions. And second—there are, in fact, no true matrices in R. We begin with the second statement. Matrix in R is just a specialized type of vector with additional attributes that help to identify values as belonging to rows and columns. Here we create the simple $2\times2$ matrix from the numerical vector: Code $1$ (R): m <- 1:4 m ma <- matrix(m, ncol=2, byrow=TRUE) ma str(ma) str(m) As str() command reveals, objects m and ma are very similar. What is different is the way they are presented on the screen. Equality of matrices and vectors in even more clear in the next example: Code $2$ (R): m <- 1:4 mb <- m mb attr(mb, "dim") <- c(2, 2) mb In looks like a trick but underlying reason is simple. We assign attribute dim (“dimensions”, size) to the vector mb and state the value of the attribute as c(2, 2), as 2 rows and 2 columns. Why are matrices mb and ma different? Another popular way to create matrices is binding vectors as columns or rows with cbind() and rbind(). Related command t() is used to transpose the matrix, turn it clockwise by $90^\circ$. To index a matrix, use square brackets: Code $3$ (R): m <- 1:4 ma <- matrix(m, ncol=2, byrow=TRUE) ma[1, 2] The rule here is simple: within brackets, first goes first dimension (rows), and second to columns. So to index, use matrix[rows, columns]. The same rule is applicable to data frames (see below). Empty index is equivalent to all values: Code $4$ (R): m <- 1:4 ma <- matrix(m, ncol=2, byrow=TRUE) ma[, ] Common ways of indexing matrix do not allow to select diagonal, let alone L-shaped (“knight’s move”) or sparse selection. However, R will satisfy even these exotic needs. Let us select the diagonal values of ma: Code $5$ (R): m <- 1:4 ma <- matrix(m, ncol=2, byrow=TRUE) (mi <- matrix(c(1, 1, 2, 2), ncol=2, byrow=TRUE)) ma[mi] (Here mi is an indexing matrix. To index 2-dimensional object, it must have two columns. Each row of indexing matrix describes position of the element to select. For example, the second row of mi is equivalent of [2, 2]. As an alternative, there is diag() command but it works only for diagonals.) Much less exotic is the indexing with logical matrix. We already did similar indexing in the example of missing data imputation. This is how it works in matrices: Code $6$ (R): (mn <- matrix(c(NA, 1, 2, 2), ncol=2, byrow=TRUE)) is.na(mn) mn[is.na(mn)] <- 0 mn Since matrices are vectors, all elements of matrix must be of the same mode: either numerical, or character, or logical. If we change mode of one element, the rest of them will change automatically: Code $7$ (R): m <- 1:4 ma <- matrix(m, ncol=2, byrow=TRUE) mean(ma) ma[1, 1] <- "a" mean(ma) ma Two-dimensional matrices are most popular, but there are also multidimensional arrays: Code $8$ (R): m3 <- 1:8 dim(m3) <- c(2, 2, 2) m3 (Instead of attr(..., "dim") we used analogous dim(...) command.) m3 is an array, “3D matrix”. It cannot be displayed as a single table, and R returns it as a series of tables. There are arrays of higher dimensionality; for example, the bui db">Titatic is the 4D array. To index arrays, R requires same square brackets but with three or more elements within. 
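The cbind(), rbind() and t() commands mentioned above deserve a tiny illustration (added here, not from the original text):
```
a <- 1:2
b <- 3:4
cbind(a, b)     # vectors glued together as columns
rbind(a, b)     # vectors glued together as rows
t(cbind(a, b))  # transposed: here the same as rbind(a, b)
```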
Lists List is essentially the collection of anything: Code $9$ (R): l <- list("R", 1:3, TRUE, NA, list("r", 4)) l Here we see that list is a composite thing. Vectors and matrices may only include elements of the same type while lists accommodate anything, including other lists. List elements could have names: Code $10$ (R): fred <- list(name="Fred", wife.name="Mary", no.children=3, child.ages=c(1, 5, 9)) fred Names feature is not unique to lists as many other types of R objects could also have named elements. Values inside vectors, and rows and columns of matrices can have their own unique names: Code $11$ (R): m <- 1:4 ma <- matrix(m, ncol=2, byrow=TRUE) w <- c(69, 68, 93, 87, 59, 82, 72) names(w) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") w row.names(ma) <- c("row1", "row2") colnames(ma) <- c("col1", "col2") ma > > Rick Amanda Peter Alex Kathryn Ben George 69 68 93 87 59 82 72 > col1 col2 row1 1 2 row2 3 4 To remove names, use: Code $12$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) names(w) <- NULL w Let us now to index a list. As you remember, we extracted elements from vectors with square brackets: Code $13$ (R): hh <- c(8, 10, NA, NA, 8, NA, 8) hh[is.na(hh)] <- mean(hh, na.rm=TRUE) hh[3] For matrices/arrays, we used several arguments, in case of two-dimensional ones they are row and column numbers: Code $14$ (R): m <- 1:4 ma <- matrix(m, ncol=2, byrow=TRUE) ma[2, 1] Now, there are at least three ways to get elements from lists. First, we may use the same square brackets: Code $15$ (R): l <- list("R", 1:3, TRUE, NA, list("r", 4)) l[1] str(l[1]) Here the resulting object is also a list. Second, we may use double square brackets: Code $16$ (R): l <- list("R", 1:3, TRUE, NA, list("r", 4)) l str(l) str(l) After this operation we obtain the content of the sub-list, object of the type it had prior to joining into the list. The first object in this example is a character vector, while the fifth is itself a list. Metaphorically, square brackets take egg out of the basket whereas double square brackets will also shell it. Third, we may create names for the elements of the list and then call these names with dollar sign: Code $17$ (R): l <- list("R", 1:3, TRUE, NA, list("r", 4)) names(l) <- c("first", "second", "third", "fourth", "fifth") l$first str(l$first) Dollar sign is a syntactic sugar that allows to write l$first instead of more complicated l. That last R piece might be regarded as a fourth way to index list, with character vector of names. Now consider the following example: Code $18$ (R): l <- list("R", 1:3, TRUE, NA, list("r", 4)) names(l) <- c("first", "second", "third", "fourth", "fifth") l$fir l$fi This happens because dollar sign (and default [[ too) allow for partial matching in the way similar to function arguments. This saves typing time but could potentially be dangerous. With a dollar sign or character vector, the object we obtain by indexing retains its original type, just as with double square bracket. Note that indexing with dollar sign works only in lists. 
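The "egg and basket" difference is easy to see by asking R about the classes (a small illustration):
```
l <- list("R", 1:3, TRUE, NA, list("r", 4))
class(l[2])    # "list": the egg is still in the basket
class(l[[2]])  # "integer": the shelled content itself
```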
If you have to index other objects with named elements, use square brackets with character vectors: Code $19$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) names(w) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") w["Jenny"] > names(w) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn", + "Ben", "George") > Jenny 68 Lists are so important to learn because many functions in Rstore their output as lists: Code $20$ (R): x <- c(174, 162, 188, 192, 165, 168, 172.5) x.ranks2 <- c(x, x[3]) x2.wilcox <- wilcox.test(x.ranks2) str(x2.wilcox) Therefore, if we want to extract any piece of the output (like p-value, see more in next chapters), we need to use the list indexing principles from the above: Code $21$ (R): x <- c(174, 162, 188, 192, 165, 168, 172.5) x.ranks2 <- c(x, x[3]) x2.wilcox <- wilcox.test(x.ranks2) x2.wilcox$p.value Data frames Now let us turn to the one most important type of data representation, data frames. They bear the closest resemblance with spreadsheets and its kind, and they are most commonly used in R. Data frame is a “hybrid”, “chimeric” type of R objects, unidimensional list of same length vectors. In other words, data frame is a list of vectors-columns$^{[1]}$. Each column of the data frame must contain data of the same type (like in vectors), but columns themselves may be of different types (like in lists). Let us create a data frame from our existing vectors: Code $22$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d (It was not absolutely necessary to enter row.names() since our w object could still retain names and they, by rule, will become row names of the whole data frame.) This data frame represents data in short form, with many columns-features. Long form of the same data could, for example, look like: Rick weight 69 Rick height 174.0 Rick size L Rick sex male Amanda weight 68 ... In long form, features are mixed in one column, whereas the other column specifies feature id. This is really useful when we finally come to the two-dimensional data analysis. Commands row.names() or rownames() specify names of data frame rows (objects). For data frame columns (variables), use names() or colnames(). Alternatively, especially if objects w, x, m.o, or sex.f are for some reason absent from the workspace, you can type: Code $23$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d$size <- ordered(d$size, levels=c("S", "M", "L", "XL", "XXL")) ... and then immediately check the structure: Code $24$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") str(d) Since the data frame is in fact a list, we may successfully apply to it all indexing methods for lists. 
More then that, data frames available for indexing also as two-dimensional matrices: Code $25$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d[, 1] d[[1]] d$weight d[, "weight"] d[["weight"]] To be absolutely sure that any of two these methods output the same, run: Code $26$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") identical(d$weight, d[, 1]) To select several columns (all these methods give same results): Code $27$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d[, 2:4] # matrix method d[, c("height", "size", "sex")] d[2:4] # list method subset(d, select=2:4) d[, -1] # negative selection (Three of these methods work also for this data frame rows. Try all of them and find which are not applicable. Note also that negative selection works only for numerical vectors; to use several negative values, type something like d[, -c(2:4)]. Think why the colon is not enough and you need c() here.) Among all these ways, the most popular is the dollar sign and square brackets (Figure $1$). While first is shorter, the second is more universal. Selection by column indices is easy and saves space but it requires to remember these numbers. Here could help the Str() command (note the uppercase) which replaces dollar signs with column numbers (and also indicates with star* sign the presence of NAs, plus shows row names if they are not default):(see Code $24$ (R):) 'data.frame': 7 obs. of 4 variables: 1 weight: int 69 68 93 87 59 82 72 2 height: num 174 162 188 192 165 ... 3 size : Ord.factor w/ 5 levels "S"<"M"<"L"<"XL"<..: 3 1 4 4 sex : Factor w/ 2 levels "female","male": 2 1 2 2 1 2 2 row.names [1:7] "Rick" "Amanda" "Peter" "Alex" "Kathryn" ... Now, how to make a subset, select several objects (rows) which have particular features? One way is through logical vectors. Imagine that we are interesting only in the values obtained from females: Code $28$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d[d$sex=="female", ] (To select only rows, we used the logical expression d$sex==female before the comma.) 
By itself, the above expression returns a logical vector: Code $29$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d$sex=="female" This is why R selected only the rows which correspond to TRUE: 2nd and 5th rows. The result is just the same as: Code $30$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d[c(2, 5), ] Logical expressions could be used to select whole rows and/or columns: Code $31$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d[, names(d) != "weight"] It is also possible to apply more complicated logical expressions: Code $32$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d[d$size== "M" | d$size== "S", ] d[d$size %in% c("M", "L") & d$sex=="male", ] (Second example shows how to compare with several character values at once.) If the process of selection with square bracket, dollar sign and comma looks too complicated, there is another way, with subset() command: Code $33$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") subset(d, sex=="female") However, “classic selection” with [ is preferable (see the more detailed explanation in ?subset). Selection does not only extract the part of data frame, it also allows to replace existing values: Code $34$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d.new <- d d.new[, 1] <- round(d.new[, 1] * 2.20462) d.new (Now weight is in pounds.) 
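The row numbers used above (2 and 5) need not be found by eye: which() converts a logical vector into indices. A small illustration, assuming the data frame d built as before:
```
w <- c(69, 68, 93, 87, 59, 82, 72)
x <- c(174, 162, 188, 192, 165, 168, 172.5)
sex <- c("male", "female", "male", "male", "female", "male", "male")
m.o <- c("L", "S", "XL", "XXL", "S", "M", "L")
d <- data.frame(weight=w, height=x, size=m.o, sex=factor(sex))
row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn", "Ben", "George")
which(d$sex == "female")       # row numbers of the TRUE values (2 and 5)
d[which(d$sex == "female"), ]  # same rows as d[d$sex == "female", ]
```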
Partial matching does not work with the replacement, but there is another interesting effect: Code $35$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d.new <- d d.new$he <- round(d.new$he * 0.0328084) d.new (A bit mysterious, is not it? However, rules are simple. As usual, expression works from right to left. When we called d.new$he on the right, independent partial matching substituted it with d.new$height and converted centimeters to feet. Then replacement starts. It does not understand partial matching and therefore d.new$he on the left returns NULL. In that case, the new column (variable) is silently created. This is because subscripting with $returns NULL if subscript is unknown, creating a powerful method to add columns to the existing data frame.) Another example of “data frame magic” is recycling. Data frame accumulates shorter objects if they evenly fit the data frame after being repeated several times: Code $36$ (R): data.frame(a=1:4, b=1:2) The following table (Table $1$) provides a summary of R subscripting with “[”: subscript effect positive numeric vector selects items with those indices negative numeric vector selects all but those indices character vector selects items with those names (or dimnames) logical vector selects the TRUE (and NA) items missing selects all Table $1$ Subscription with “[”. Command sort() does not work for data frames. To sort values in a d data frame, saying, first with sex and then with height, we have to use more complicated operation: Code $37$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d[order(d$sex, d$height), ] The order() command creates a numerical, not logical, vector with the future order of the rows: Code $38$ (R): w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f, h=TRUE) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") order(d$sex, d$height) Use order() to arrange the columns of the d matrix in alphabetic order. Overview of data types and modes This simple table (Table $2$) shows the four basic R objects: linear rectangular all the same type vector matrix mixed type list data frame Table $2$ Basic ojects. (Most, but not all, vectors are also atomic, check it with is.atomic().) You must know the type (matrix, data frame etc.) and mode (numerical, character etc.) of object you work with. Command str() is especially good for that. If any procedure wants object of some specific mode or type, it is usually easy to convert into it with as.<something>() command. Sometimes, you do not need the conversion at all. For example, matrices are already vectors, and all data frames are already lists (but the reverse is not correct!). 
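A few such conversions, as a minimal sketch (an added illustration, not from the original text):
```
ma <- matrix(1:4, ncol=2)
as.data.frame(ma)  # matrix to data frame
as.vector(ma)      # remove dimensions: back to a plain vector
dd <- data.frame(a=1:3, b=c("x", "y", "z"))
as.matrix(dd)      # modes are unified (here, everything becomes character)
as.list(dd)        # a data frame is already a list of its columns
```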
The following table (Table $3$) overviews R internal data types and lists their most important features.

Vector: numeric, character, or logical
What is it? Sequence of numbers, character strings, or TRUE/FALSE. Made with c(), colon operator :, scan(), rep(), seq() etc.
How to subset? With numbers like vector[1]. With names (if named) like vector["Name"]. With a logical expression like vector[vector > 3].
How to convert? matrix(), rbind(), cbind(), t() to matrix; as.numeric() and as.character() convert modes.

Vector: factor
What is it? Way of encoding vectors. Has values and levels (codes), and sometimes also names.
How to subset? Just like a vector. Factors could also be re-leveled or ordered with factor().
How to convert? c() to numeric vector; droplevels() removes unused levels.

Matrix
What is it? Vector with two dimensions. All elements must be of the same mode. Made with matrix(), cbind() etc.
How to subset? matrix[2, 3] is a cell; matrix[2:3, ] or matrix[matrix[, 1] > 3, ] are rows; matrix[, 3] is a column.
How to convert? Matrix is a vector; c() or dim(...) <- NULL removes dimensions.

List
What is it? Collection of anything. Could be nested (hierarchical). Made with list(). Most statistical outputs are lists.
How to subset? list[2] or (if named) list["Name"] is an element; list[[2]] or list$Name is the content of the element.
How to convert? unlist() to vector; data.frame() only if all elements have the same length.

Data frame
What is it? Named list of anything of same lengths but (possibly) different modes. Data could be short (ids are columns) and/or long (ids are rows). Made with read.table(), data.frame() etc.
How to subset? Like a matrix: df[2, 3] (with numbers) or df[, "Name"] (with names) or df[df[, 1] > 3, ] (logical). Like a list: df[1] or df$Name. Also possible: subset(df, Name > 3).
How to convert? Data frame is a list; matrix() converts to matrix (modes will be unified); t() transposes and converts to matrix.

Table $3$ Overview of the most important R internal data types and ways to work with them.
Answers to the barplot coloring question: Code \(1\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) plot(sex.f, col=1:2) ``` or Code \(2\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) plot(sex.f, col=1:nlevels(sex.f)) ``` (Please try these commands yourself. The second answer is preferable because it will work even in cases when factor has more than two levels.) Answers to the barplot counts question. To see frequencies, from highest to smallest, run: Code \(3\) (R): ```lload("data/com12.rd") exists("com12") com12.plot <- barplot(com12, names.arg="") rev(sort(com12)) ``` or Code \(4\) (R): ```lload("data/com12.rd") exists("com12") com12.plot <- barplot(com12, names.arg="") rev(sort(com12)) sort(com12, decreasing=TRUE) ``` Answer to flowering heads question. First, we need to load the file into R. With url.show() or simply by examining the file in the browser window, we reveal that file has multiple columns divided with wide spaces (likely Tab symbols) and that the first column contains species names with spaces. Therefore, header and separator should be defined explicitly: Code \(5\) (R): ```comp <- read.table("http://ashipunov.info/shipunov/open/compositae.txt", h=TRUE, sep="\t") ``` Next step is always to check the structure of new object: Code \(6\) (R): ```comp <- read.table("http://ashipunov.info/shipunov/open/compositae.txt", h=TRUE, sep="\t") str(comp) ``` Two columns (including species) are factors, others are numerical (integer or not). The resulted object is a data frame. Next is to select our species and remove unused levels: Code \(7\) (R): ```comp <- read.table("http://ashipunov.info/shipunov/open/compositae.txt", h=TRUE, sep="\t") c3 <- comp[comp\$SPECIES %in% c("Anthemis tinctoria", "Cosmos bipinnatus", "Tripleurospermum inodorum"), ] c3\$SPECIES <- droplevels(c3\$SPECIES) ``` To select tree species in one command, we used logical expression made with %in% operator (please see how it works with ?"%in%" command). Removal of redundant levels will help to use species names for scatterplot: Code \(8\) (R): ```comp <- read.table("http://ashipunov.info/shipunov/open/compositae.txt", h=TRUE, sep="\t") c3 <- comp[comp\$SPECIES %in% c("Anthemis tinctoria", "Cosmos bipinnatus", "Tripleurospermum inodorum"), ] c3\$SPECIES <- droplevels(c3\$SPECIES) with(c3, plot(HEAD.D, RAYS, col=as.numeric(SPECIES), pch=as.numeric(SPECIES), xlab="Diameter of head, mm", ylab="Number of rays")) legend("topright", pch=1:3, col=1:3, legend=levels(c3\$SPECIES)) ``` Please make this plot yourself. The key is to use SPECIES factor as number, with as.numeric() command. Function with() allows to ignore cc\$ and therefore saves typing. However, there is one big problem which at first is not easy to recognize: in many places, points overlay each other and therefore amount of visible data points is much less then in the data file. What is worse, we cannot say if first and third species are well or not well segregated because we do not see how many data values are located on the “border” between them. 
This scatterplot problem is well known and there are workarounds: Code \(9\) (R): ```comp <- read.table("http://ashipunov.info/shipunov/open/compositae.txt", h=TRUE, sep="\t") c3 <- comp[comp\$SPECIES %in% c("Anthemis tinctoria", "Cosmos bipinnatus", "Tripleurospermum inodorum"), ] with(c3, plot(jitter(HEAD.D), jitter(RAYS), col=as.numeric(SPECIES), pch=as.numeric(SPECIES), xlab="Diameter of head, mm", ylab="Number of rays")) ``` Please run this code yourself. Function jitter() adds random noise to variables and shits points allowing to see what is below. However, it is still hard to understand the amount of overplotted values. There are also: Code \(10\) (R): ```comp <- read.table("http://ashipunov.info/shipunov/open/compositae.txt", h=TRUE, sep="\t") c3 <- comp[comp\$SPECIES %in% c("Anthemis tinctoria", "Cosmos bipinnatus", "Tripleurospermum inodorum"), ] with(c3, sunflowerplot(HEAD.D, RAYS)) with(c3, smoothScatter(HEAD.D, RAYS)) ``` (Try these variants yourself. When you run the first line of code, you will see sunflower plot, developed exactly to such “overplotting cases”. It reflects how many points are overlaid. However, it is not easy to make sunflowerplot() show overplotting separately for each species. The other approach, smoothScatter() suffers from the same problem\(^{[1]}\).) To overcome this, we developed PPoints() function (Figure \(1\)): Code \(11\) (R): ```comp <- read.table("http://ashipunov.info/shipunov/open/compositae.txt", h=TRUE, sep="\t") c3 <- comp[comp\$SPECIES %in% c("Anthemis tinctoria", "Cosmos bipinnatus", "Tripleurospermum inodorum"), ] with(c3, plot(HEAD.D, RAYS, type="n", xlab="Diameter of head, mm", ylab="Number of rays")) with(c3, PPoints(SPECIES, HEAD.D, RAYS, scale=.9)) legend("topright", pch=1:3, col=1:3, legend=levels(c3\$SPECIES)) ``` Finally, the answer. As one might see, garden cosmos is really separate from two other species which in turn could be distinguished with some certainty, mostly because number of rays in the yellow chamomile is more than 20. This approach is possible to improve. “Data mining” chapter tells how to do that. Answer to the matrix question. While creating matrix ma, we defined byrow=TRUE, i.e. indicated that elements should be joined into a matrix row by row. In case of byrow=FALSE (default) we would have obtained the matrix identical to mb: Code \(12\) (R): ```m <- 1:4 ma <- matrix(m, ncol=2, byrow=FALSE) ma``` Answer to the sorting exercise. To work with columns, we have to use square brackets with a comma and place commands to the right: Code \(13\) (R): ```w <- c(69, 68, 93, 87, 59, 82, 72) x <- c(174, 162, 188, 192, 165, 168, 172.5) sex <- c("male", "female", "male", "male", "female", "male", "male") sex.f <- factor(sex) m.o <- c("L", "S", "XL", "XXL", "S", "M", "L") d <- data.frame(weight=w, height=x, size=m.o, sex=sex.f) row.names(d) <- c("Rick", "Amanda", "Peter", "Alex", "Kathryn","Ben", "George") d.sorted <- d[order(d\$sex, d\$height), ] d.sorted[, order(names(d.sorted))]``` Please note that we cannot just type order() after the comma. This command returns the new order of columns, thus we gave it our column names (names() returns column names for a given data frame). By the way, sort() would have worked here too, since we only needed to rearrange a single vector.
It is always tempting to describe the sample with just one number “to rule them all”. Or only few numbers... This idea is behind central moments, two (or sometimes four) numbers which represent the center or central tendency of sample and its scale (variation, variability, instability, dispersion: there are many synonyms). Third and fourth central moments are not frequently used, they represent asymmetry (shift, skewness) and sharpness (“tailedness”, kurtosis), respectively. Median is the best Mean is a parametric method whereas median depends less on the shape of distribution. Consequently, median is more stable, more robust. Let us go back to our seven hypothetical employees. Here are their salaries (thousands per year): Code \(1\) (R): `salary <- c(21, 19, 27, 11, 102, 25, 21)` Dramatic differences in salaries could be explained by fact that Alex is the custodian whereas Kathryn is the owner of company. Code \(2\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) mean(salary) median(salary)``` We can see that mean does not reflect typical wages very well—it is influenced by higher Kathryn’s salary. Median does a better job because it is calculated in a way radically different from mean. Median is a value that cuts off a half of ordered sample. To illustrate the point, let us make another vector, similar to our salary: Code \(3\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) sort(salary1 <- c(salary, 22)) median(salary1)``` Vector salary1 contains an even number of values, eight, so its median lies in the middle, between two central values (21 and 22). There is also a way to make mean more robust to outliers, trimmed mean which is calculated after removal of marginal values: Code \(4\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) mean(salary, trim=0.2)``` This trimmed mean is calculated after 10% of data was taken from each end and it is significantly closer to the median. There is another measure of central tendency aside from median and mean. It is mode, the most frequent value in the sample. It is rarely used, and mostly applied to nominal data. Here is an example (we took the variable sex from the last chapter): Code \(5\) (R): ```sex <- c("male", "female", "male", "male", "female", "male", "male") t.sex <- table(sex) mode <- names(t.sex[which.max(t.sex)]) mode``` Here the most common value is male\(^{[1]}\). Often we face the task of calculating mean (or median) for the data frames. There are at least three different ways: Code \(6\) (R): ```attach(trees) mean(Girth) mean(Height) detach(trees)``` The first way uses attach() and adds columns from the table to the list of “visible” variables. Now we can address these variables using their names only, omitting the name of the data frame. If you choose to use this command, do not forget to detach() the table. Otherwise, there is a risk of loosing track of what is and is not attached. It is particularly problematic if variable names repeat across different data frames. Note that any changes made to variables will be forgotten after you detach(). The second way uses with()) which is similar to attaching, only here attachment happens within the function body: Code \(7\) (R): `with(trees, mean(Volume)) # Second way` The third way uses the fact that a data frame is just a list of columns. It uses grouping functions from apply() family\(^{[2]}\), for example, sapply() (“apply and simplify”): Code \(8\) (R): `sapply(trees, mean)` What if you must supply an argument to the function which is inside sapply()? 
For example, missing data will return NA without proper argument. In many cases this is possible to specify directly: Code \(9\) (R): ```trees.n <- trees trees.n[2, 1] <- NA sapply(trees.n, mean) sapply(trees.n, mean, na.rm=TRUE)``` In more complicated cases, you might want to define anonymous function (see below). Quartiles and quantiles Quartiles are useful in describing sample variability. Quartiles are values cutting the sample at points of 0%, 25%, 50%, 75% and 100% of the total distribution\(^{[3]}\). Median is nothing else then the third quartile (50%). The first and the fifth quartiles are minimum and maximum of the sample. The concept of quartiles may be expanded to obtain cut-off points at any desired interval. Such measures are called quantiles (from quantum, an increment), with many special cases, e.g. percentiles for percentages. Quantiles are used also to check the normality (see later). This will calculate quartiles: Code \(10\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) quantile(salary, c(0, 0.25, 0.5, 0.75, 1))``` Another way to calculate them: Code \(11\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) fivenum(salary)``` (These two functions sometimes output slightly different results, but this is insignificant for the research. To know more, use help. Boxplots (see below) use fivenum().) The third and most commonly used way is to run summary(): Code \(12\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) summary(salary)``` summary() function is generic so it returns different results for different object types (e.g., for data frames, for measurement data and nominal data): Code \(13\) (R): `summary(PlantGrowth)` In addition, summary() shows the number of missing data values: Code \(14\) (R): ```hh <- c(8, 10, NA, NA, 8, NA, 8) summary(hh)``` Command summary() is also very useful at the first stage of analysis, for example, when we check the quality of data. It shows missing values and returns minimum and maximum: Code \(15\) (R): ```err <- read.table("data/errors.txt", h=TRUE, sep=" ") str(err) summary(err)``` We read the data file into a table and check its structure with str(). We see that variable AGE (which must be the number) has unexpectedly turned into a factor. Output of the summary() explains why: one of age measures was mistyped as a letter a. Moreover, one of the names is empty—apparently, it should have contained NA. Finally, the minimum height is 16.1 cm! This is quite impossible even for the newborns. Most likely, the decimal point was misplaced. Variation Most common parametric measures of variation are variance and standard deviation: Code \(16\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) var(salary); sqrt(var(salary)); sd(salary)``` (As you see, standard deviation is simply the square root of variance; in fact, this function was absent from S language.) Useful non-parametric variation measures are IQR and MAD: Code \(17\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) IQR(salary); mad(salary)``` The first measure, inter-quartile range (IQR), the distance between the second and the fourth quartiles. Second robust measurement of the dispersion is median absolute deviation, which is based on the median of absolute differences between each value and sample median. To report central value and variability together, one of frequent approaches is to use “center ± variation”. Sometimes, they do mean ± standard deviation (which mistakenly called “SEM”, ambiguous term which must be avoided), but this is not robust. 
Non-parametric, robust methods are always preferable, therefore “median ± IQR” or “median ± MAD” will do the best: Code \(18\) (R): ```with(trees, paste(median(Height), IQR(Height), sep="±")) paste(median(trees$Height), mad(trees$Height), sep="±")``` To report variation only, there are more ways. For example, one can use the interval where 95% of the sample lies: Code \(19\) (R): `paste(quantile(trees$Height, c(0.025, 0.975)), collapse="-")` Note that this is not a confidence interval because quantiles and all other descriptive statistics are about the sample, not about the population! However, bootstrap (described in the Appendix) might help to use 95% quantiles to estimate a confidence interval. ... or the 95% range together with a median: Code \(20\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) paste(quantile(trees$Girth, c(0.025, 0.5, 0.975)), collapse="-")``` ... or the scatter of “whiskers” from the boxplot: Code \(21\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) paste(boxplot.stats(trees$Height)$stats[c(1, 5)], collapse="-")``` Related to scale measures are the maximum and minimum. They are easy to obtain with range() or the separate min() and max() functions. Taken alone, they are not so useful because of possible outliers, but together with other measures they might be included in the report: Code \(22\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) HS <- fivenum(trees$Height) paste("(", HS[1], ")", HS[2], "-", HS[4], "(", HS[5], ")", sep="")``` (Here boxplot hinges were used for the main interval.) The figure (Figure \(1\)) summarizes the most important ways to report central tendency and variation with the same Euler diagram which was used to show the relation between parametric and nonparametric approaches (Figure 3.1.2). To compare the variability of characters (especially those measured in different units) one may use a dimensionless coefficient of variation. It has a straightforward calculation: standard deviation divided by mean and multiplied by 100%. Here are variation coefficients for the characteristics of the built-in trees data: Code \(23\) (R): ```salary <- c(21, 19, 27, 11, 102, 25, 21) 100*sapply(trees, sd)/colMeans(trees)``` (To make things simpler, we used colMeans() which calculated means for each column. It comes from a family of similar commands with self-explanatory names: rowMeans(), colSums() and rowSums().)
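For a single variable, the same coefficient of variation is a one-liner (a small illustration with the salaries from the beginning of the chapter):
```
salary <- c(21, 19, 27, 11, 102, 25, 21)
100*sd(salary)/mean(salary)  # coefficient of variation, in percent
```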
Our firm has just seven workers. How to analyze the bigger data? Let us first imagine that our hypothetical company prospers and hired one thousand new workers! We add them to our seven data points, with their salaries drawn randomly from interquartile range of the original sample (Figure \(1\)): Code \(1\) (R): Boxplots ```salary <- c(21, 19, 27, 11, 102, 25, 21) new.1000 <- sample((median(salary) - IQR(salary)) : + (median(salary) + IQR(salary)), 1000, replace=TRUE) salary2 <- c(salary, new.1000) boxplot(salary2, log="y")``` In a code above we also see an example of data generation. Function sample() draws values randomly from a distribution or interval. Here we used replace=TRUE, since we needed to pick a lot of values from a much smaller sample. (The argument replace=FALSE might be needed for imitation of a card game, where each card may only be drawn from a deck once.) Please keep in mind that sampling is random and therefore each iteration will give slightly different results. Let us look at the plot. This is the boxplot (“box-and-whiskers” plot). Kathryn’s salary is the highest dot. It is so high, in fact, that we had to add the parameter log="y" to better visualize the rest of the values. The box (main rectangle) itself is bound by second and fourth quartiles, so that its height equals IQR. Thick line in the middle is a median. By default, the “whiskers” extend to the most extreme data point which is no more than 1.5 times the interquartile range from the box. Values that lay farther away are drawn as separate points and are considered outliers. The scheme (Figure \(2\)) might help in understanding boxplots. Numbers which make the boxplot might be returned with fivenum() command. Boxplot representation was created by a famous American mathematician John W. Tukey as a quick, powerful and consistent way of reflecting main distribution-independent characteristics of the sample. In R, boxplot() is vectorized so we can draw several boxplots at once (Figure \(3\)): Code \(2\) (R): Boxplots `boxplot(scale(trees))` (Parameters of trees were measured in different units, therefore we scale()’d them.) Histogram is another graphical representation of the sample where range is divided into intervals (bins), and consecutive bars are drawn with their height proportional to the count of values in each bin (Figure \(4\)): Code \(3\) (R): Histograms `hist(salary2, breaks=20)` (By default, the command hist() would have divided the range into 10 bins, but here we needed 20 and therefore set them manually. Histogram is sometimes a rather cryptic way to display the data. Commands Histp() and Histr() from the asmisc.r will plot histograms together with percentages on the top of each bar, or overlaid with normal curve (or density—see below), respectively. Please try them yourself.) A numerical analog of a histogram is the function cut(): Code \(4\) (R): `table(cut(salary2, 20))` There are other graphical functions, conceptually similar to histograms. The first is stem-and-leaf plot. stem() is a kind of pseudograph, text histogram. Let us see how it treats the original vector salary: Code \(5\) (R): stem-and-leaf plot `stem(salary, scale=2)` The bar | symbol is a “stem” of the graph. The numbers in front of it are leading digits of the raw values. As you remember, our original data ranged from 11 to 102—therefore we got leading digits from 1 to 10. Each number to the left comes from the next digit of a datum. 
When we have several values with an identical leading digit, like 11 and 19, we place their last digits in a sequence, as “leaves”, to the right of the “stem”. As you see, there are two values between 10 and 20, four values between 20 and 30, etc. Aside from a histogram-like appearance, this function performs an efficient ordering.

Another univariate instrument requires more sophisticated calculations. It is a graph of distribution density, the density plot (Figure \(5\)):

Code \(6\) (R): Density Plots

```
plot(density(salary2, adjust=2))
rug(salary2)
```

(We used an additional graphic function, rug(), which supplies an existing plot with a “ruler” of ticks at the actual data values, so that areas of highest data density become visible.)

Here the histogram is smoothed, turned into a continuous function. The degree to which it is “rounded” depends on the parameter adjust.

Aside from boxplots and a variety of histograms and the like, R and external packages provide many more instruments for univariate plotting. One of the simplest is the stripchart. To make the stripchart more interesting, we complicated it below using its ability to show individual data points:

Code \(7\) (R): stripchart

```
trees.s <- data.frame(scale(trees), class=cut(trees$Girth, 3, labels=c("thin", "medium", "thick")))
stripchart(trees.s[, 1:3], method="jitter", cex=2, pch=21, col=1:3, bg=as.numeric(trees.s$class))
legend("right", legend=levels(trees.s$class), pch=19, pt.cex=2, col=1:3)
```

(By default, the stripchart is horizontal. We used method="jitter" to avoid overplotting, and also scaled all characters to make their distributions comparable. One of the stripchart features is that the col argument colorizes columns whereas the bg argument (which works only for pch from 21 to 25) colorizes rows. We split trees into 3 classes of thickness, and applied these classes as the dots’ background. Note that if data points are shown with multiple colors and/or multiple point types, the legend is always necessary.)

The beeswarm plot requires an external package. It is similar to the stripchart but has several advanced methods to disperse points, plus an ability to control the type of individual points (Figure \(7\)):

Code \(8\) (R): Beeswarm plots

```
library(beeswarm)
beeswarm(trees.s[, 1:3], cex=2, col=1:3, pwpch=rep(as.numeric(trees.s$class), 3))
bxplot(trees.s[, 1:3], add=TRUE)
legend("top", pch=1:3, legend=levels(trees.s$class))
```

(Here with the bxplot() command we added boxplot lines to the beehive graph in order to visualize quartiles and medians. To overlay, we used the argument add=TRUE.)

And one more useful 1-dimensional plot. It is similar to both the boxplot and the density plot (Figure \(8\)):

Code \(9\) (R): Bean plots

```
library(beanplot)
beanplot(trees.s[, 1:3], col=list(1, 2, 3), border=1:3, beanlines="median")
```
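To connect these plots with the numbers behind them, here is a small sketch (an illustration, not from the text) of the 1.5 × IQR whisker rule mentioned above; it assumes the salary2 vector created earlier:

```
q <- quantile(salary2, c(0.25, 0.75))
fences <- q + c(-1.5, 1.5)*IQR(salary2)
salary2[salary2 < fences[1] | salary2 > fences[2]]  # points drawn as separate outliers
boxplot.stats(salary2)$out  # may differ slightly: boxplot() uses fivenum() hinges
```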
We are now ready to make the first step into the world of inferential statistics and use statistical tests. They were invented to solve the main question of statistical analysis (Figure $1$): how to estimate anything about a population using only its sample? This sounds like magic. How can we estimate the whole population if we know nothing about it? However, it is possible if we know some law of the data, a feature which our population should follow. For example, the population could exhibit one of the standard data distributions.

Let us first calculate a confidence interval. This interval predicts, with a given probability (usually 95%), where the particular central tendency (mean or median) of the population is located. Do not confuse it with the 95% quantiles; these measures have a different nature.

We start by checking the hypothesis that the population mean is equal to 0. This is our null hypothesis, H$_0$, which we wish to accept or reject based on the test results.

Code $1$ (R):

`t.test(trees$Height)`

Here we used a variant of the t-test for univariate data which in turn uses the standard Student's t-distribution. First, this test obtains a specific statistic from the original data set, the so-called t-statistic. The test statistic is a single measure of some attribute of a sample; it reduces all the data to one value and, with the help of a standard distribution, allows us to re-create the “virtual population”.

The Student test comes at a price: you should assume that your population is “parametric”, “normal”, i.e. interpretable with a normal distribution (the dart game distribution, see the glossary).

Second, this test estimates whether the statistic derived from our data could reasonably come from the distribution defined by our original assumption. This principle lies at the heart of calculating the p-value. The latter is the probability of obtaining our test statistic, or a more extreme one, if the initial assumption, the null hypothesis, was true (in the above case, that mean tree height equals 0).

What do we see in the output of the test? The t-statistic equals 66.41 at 30 degrees of freedom (df $=30$). The p-value is really low ($2.2\times 10^{-16}$), almost zero, and definitely much lower than the “sacred” significance level of 0.05. Therefore, we reject the null hypothesis, our initial assumption that mean tree height equals 0, and consequently go with the alternative hypothesis, which is the logical opposite of our initial assumption (i.e., “height is not equal to 0”).

However, what is really important at the moment is the confidence interval, a range into which the true population mean should fall with the given probability (95%). Here it is narrow, spanning from 73.7 to 78.3, and does not include zero. The latter means, again, that the null hypothesis is not supported.

If your data do not go well with the normal distribution, you need the more universal (but less powerful) one-sample Wilcoxon signed-rank test. It uses the median instead of the mean to calculate its test statistic V. Our null hypothesis will be that the population median is equal to zero:

Code $2$ (R):

```
salary <- c(21, 19, 27, 11, 102, 25, 21)
wilcox.test(salary, conf.int=TRUE)
```

(Please ignore the warning messages; they simply say that our data has ties: two salaries are identical.)

Here we will also reject our null hypothesis with a high degree of certainty. Passing the argument conf.int=TRUE will return the confidence interval for the population median; it is broad (because the sample size is small) but does not include zero.
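To see where the confidence interval reported by t.test() comes from, here is a manual sketch (assuming only the built-in trees data); it should reproduce the interval quoted above, roughly 73.7 to 78.3:

```
x <- trees$Height
n <- length(x)
se <- sd(x)/sqrt(n)                         # standard error of the mean
mean(x) + qt(c(0.025, 0.975), df=n - 1)*se  # 95% confidence interval for the mean
```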
How do we decide which test to use, parametric or non-parametric, t-test or Wilcoxon? We need to know whether the distribution follows, or at least approaches, normality. This could be checked visually (Figure $1$):

Code $1$ (R):

```
salary <- c(21, 19, 27, 11, 102, 25, 21)
new.1000 <- sample((median(salary) - IQR(salary)) : (median(salary) + IQR(salary)), 1000, replace=TRUE)
salary2 <- c(salary, new.1000)
qqnorm(salary2, main=""); qqline(salary2, col=2)
```

How does the QQ plot work? First, the data points are ordered and each one is assigned to a quantile. Second, a set of theoretical quantiles, the positions that the data points should have occupied in a normal distribution, is calculated. Finally, theoretical and empirical quantiles are paired off and plotted.

We have overlaid the plot with a line coming through the quartiles. When the dots follow the line closely, the empirical distribution is normal. Here many dots at the tails are far from the line. Again, we conclude that the original distribution is not normal.

R also offers numerical instruments that check for normality. The first among them is the Shapiro-Wilk test (please run this code yourself):

Code $2$ (R):

```
salary <- c(21, 19, 27, 11, 102, 25, 21)
new.1000 <- sample((median(salary) - IQR(salary)) : (median(salary) + IQR(salary)), 1000, replace=TRUE)
salary2 <- c(salary, new.1000)
shapiro.test(salary)
shapiro.test(salary2)
```

Here the output is rather terse. The p-values are small, but what was the null hypothesis? Even the built-in help does not state it. To understand, we may run a simple experiment:

Code $3$ (R):

```
set.seed(1638) # freeze random number generator
shapiro.test(rnorm(100))
```

The command rnorm() generates random numbers that follow the normal distribution, as many of them as stated in the argument. Here we have obtained a p-value approaching unity. Clearly, the null hypothesis was “the empirical distribution is normal”. Armed with this little experiment, we may conclude that the distributions of both salary and salary2 are not normal.

The Kolmogorov-Smirnov test works with two distributions. The null hypothesis is that both samples came from the same population. If we want to test one distribution against the normal, the second argument should be pnorm:

Code $4$ (R):

```
salary <- c(21, 19, 27, 11, 102, 25, 21)
new.1000 <- sample((median(salary) - IQR(salary)) : (median(salary) + IQR(salary)), 1000, replace=TRUE)
salary2 <- c(salary, new.1000)
ks.test(scale(salary2), "pnorm")
```

(The result is comparable with the result of the Shapiro-Wilk test. We scaled the data because, by default, the second argument refers to the standard normal distribution.)

The function ks.test() accepts any theoretical distribution as its second argument and therefore could be used to check how reliable it is to approximate the current distribution with any theoretical distribution, not necessarily the normal. However, the Kolmogorov-Smirnov test often returns the wrong answer for samples whose size is $< 50$, so it is less powerful than the Shapiro-Wilk test.

2.2e-16 is so-called exponential notation, a way to show really small numbers like this one ($2.2 \times 10^{-16}$). If this notation is not comfortable to you, there is a way to get rid of it:

Code $5$ (R):

```
salary <- c(21, 19, 27, 11, 102, 25, 21)
new.1000 <- sample((median(salary) - IQR(salary)) : (median(salary) + IQR(salary)), 1000, replace=TRUE)
salary2 <- c(salary, new.1000)
old.options <- options(scipen=100)
ks.test(scale(salary2), "pnorm")
options(old.options)
```

(The option scipen sets the maximal allowable number of zeros before R switches to exponential notation.)
Most of the time these three ways to determine normality are in agreement, but it is not a surprise if they return different results. A normality check is not a death sentence, it is just an opinion based on probability. Again, if the sample size is small, statistical tests and even quantile-quantile plots frequently fail to detect non-normality. In these cases, simpler tools like the stem plot or histogram often provide better help.
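A small sanity check of the two points above (an illustration, not from the text): the Shapiro-Wilk null hypothesis is that the distribution is normal, and with tiny samples the test often lacks the power to reject it even for skewed data:

```
set.seed(1)
shapiro.test(rnorm(100))$p.value  # normal data: p-value is typically large
shapiro.test(rexp(100))$p.value   # skewed (exponential) data: p-value is typically tiny
shapiro.test(rexp(10))$p.value    # same skewed law, small sample: often above 0.05
```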
Shapiro-Wilk test is probably the fastest way to check normality, but its output is not immediately understandable. It is also not easy to apply to whole data frames. Let us create a function which overcomes these problems:

Code \(1\) (R):

`Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")}`

(We used here the fact that in R, test output is usually a list, and each component can be extracted using the `$name` approach described in the previous chapter. How do we know what to extract? Save the test output into an object and run str(obj)!)

The collection asmisc.r contains a slightly more advanced version of Normality() which takes into account that the Shapiro-Wilk test is not so reliable for small samples (\(< 25\) values). To make this Normality() function work, you need to copy the above text into the R console, or into a separate file (preferably with the *.r extension), and then load it with the source() command. The next step is to call the function:

Code \(2\) (R):

```
salary <- c(21, 19, 27, 11, 102, 25, 21)
Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")}
Normality(salary) # asmisc.r
sapply(trees, Normality)
sapply(log(trees+0.01), Normality)
```

(Note that logarithmic conversion can change normality. Check yourself whether the square root does the same.)

Used together with sapply(), this function not only runs the Shapiro-Wilk test several times but also outputs an easily readable result. The most important part is the one which uses the p-value extracted from the test results. The extraction is based on knowledge of the internal structure of the shapiro.test() output. But how do we know the structure of an output object without going into the help?

In many cases, a “stationary”, named function is not necessary, as the user needs some piece of code which runs only once (but runs in a relatively complicated way). This is where the anonymous function helps. It is especially useful within functions of the apply() family. This is how to calculate the mode simultaneously in multiple columns:

Code \(3\) (R):

`sapply(chickwts, function(.x) names(table(.x))[which.max(table(.x))])`

(Here we followed the convention that in anonymous functions, argument names start with a dot.)

Even more useful are simultaneous confidence intervals:

Code \(4\) (R):

```
old.options <- options(warn=-1)
sapply(trees, function(.x) wilcox.test(.x, conf.int=TRUE)$conf.int)
options(old.options)
```

(Here we suppressed multiple “ties” warnings. Do not do it yourself without a strong reason!)

The file betula.txt in the open data repository contains measurements of several birch morphological characters (a companion file in the same repository contains the explanation of the variables). Are there any characters which could be analyzed with parametric methods? Which of these characters is the most variable?

The file dact.txt in the open data repository contains a single measurement variable. Please describe this data with as many characteristics as possible, calculate the appropriate confidence interval and plot this data.

The file nymphaeaceae.txt in the open data repository contains measurements of two species of water lilies. Find which characters distinguish these species most. Provide the answer in the form “if the character is ..., then the species is ...”.

4.06: How good is the proportion

Proportions are frequent in data analysis, especially for categorical variables. How do we check how well a sample proportion corresponds with the population proportion? Here is an example. In a hospital, there was a group of 476 patients undergoing a specific treatment, and 356 among them are smokers (this is the old data). On average, the hospital-wide proportion of smokers is slightly lower than in our group (70% versus 75%, respectively).
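Before running the test, a quick look at the raw numbers may help:

```
356/476        # observed proportion of smokers: about 0.748, i.e. roughly 75%
round(0.7*476) # about 333 smokers would be expected under the hospital-wide 70% rate
```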
To check if this difference is real, we can run the proportions test:

Code \(1\) (R):

`prop.test(x=356, n=476, p=0.7, alternative="two.sided")`

(We used the two.sided option to check both variants of the inequality: larger and smaller. To check only one of them (“one tail”), we need greater or less\(^{[1]}\).)

The confidence interval is narrow. Since the null hypothesis was that “the true proportion is equal to 0.7” and the p-value was less than 0.05, we reject it in favor of the alternative hypothesis, “the true proportion is not equal to 0.7”. Consequently, the proportion of smokers in our group is different from their proportion in the whole hospital.

Now to the example from the Foreword. Which candidate won, A or B? Here the proportion test will help again\(^{[2]}\):

Code \(2\) (R):

`prop.test(x=0.52*262, n=262, p=0.5, alternative="greater")`

According to the confidence interval, the real proportion of people who voted for candidate A varies from 47% to 100%. This could completely change the result of the election! The large p-value also suggests that we cannot reject the null hypothesis: we cannot claim that the true proportion is greater than 0.5. Therefore, using only these data, it is impossible to tell if candidate A won the election.

This exercise is related to phyllotaxis (Figure 4.7.1), a botanical phenomenon in which the leaves on a branch are distributed in accordance with a particular rule. Most amazingly, this rule (the formulas of phyllotaxis) is quite often the Fibonacci rule, a kind of fraction where numerators and denominators are members of the famous Fibonacci sequence. We made the R function Phyllotaxis() which produces these fractions:

Code \(3\) (R):

`sapply(1:10, Phyllotaxis) # asmisc.r`

In the open repository, there is a data file phyllotaxis.txt which contains measurements of phyllotaxis in nature. The variables N.CIRCLES and N.LEAVES are the numerator and denominator, respectively. The variable FAMILY is the name of the plant family. Many formulas in this data file belong to the “classic” Fibonacci group (see above), but some do not. Please count the proportions of non-classic formulas per family, determine which family deviates most, and check whether the proportion of non-classic formulas in this family is statistically different from the average proportion (calculated from the whole data).
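Coming back to the hospital example, a rough manual check (a sketch, not from the text) shows how the proportions test arrives at its p-value; it uses the plain normal approximation, so the number will differ slightly from prop.test(), which applies a continuity correction by default:

```
p.obs <- 356/476
z <- (p.obs - 0.7)/sqrt(0.7*(1 - 0.7)/476)  # standardized difference between proportions
2*pnorm(-abs(z))                            # two-sided p-value, below 0.05 as before
```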
Answer to the question of shapiro.test() output structure. First, we need to recollect that almost everything what we see on the R console, is the result of print()’ing some lists. To extract the component from a list, we can call it by dollar sign and name, or by square brackets and number (if component is not named). Let us check the structure with str(): Code \(1\) (R): `str(shapiro.test(rnorm(100)))` Well, p-value most likely comes from the p.value component, this is easy. Check it: Code \(2\) (R): ```set.seed(1683) shapiro.test(rnorm(100))\$p.value``` This is what we want. Now we can insert it into the body of our function. Answer to the “birch normality” exercise. First, we need to check the data and understand its structure, for example with url.show(). Then we can read it into R, check its variables and apply Normality() function to all appropriate columns: Code \(3\) (R): ```betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) Normality <- function(a) {ifelse(shapiro.test(a)\$p.value < 0.05, "NOT NORMAL", "NORMAL")} str(betula) # asmisc.r sapply(betula[, c(2:4, 7:8)], Normality) # asmisc.r``` (Note how only non-categorical columns were selected for the normality check. We used Str() because it helps to check numbers of variables, and shows that two variables, LOBES and WINGS have missing data. There is no problem in using str() instead.) Only CATKIN (length of female catkin) is available to parametric methods here. It is a frequent case in biological data. What about the graphical check for the normality, histogram or QQ plot? Yes, it should work but we need to repeat it 5 times. However, lattice package allows to make it in two steps and fit on one trellis plot (Figure \(2\)): Code \(4\) (R): ```betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) betula.s <- stack(betula[, c(2:4, 7:8)]) qqmath(~ values | ind, data=betula.s, panel=function(x) {panel.qqmathline(x); panel.qqmath(x)})``` (Library lattice requires long data format where all columns stacked into one and data supplied with identifier column, this is why we used stack() function and formula interface. There are many trellis plots. Please check the trellis histogram yourself: Code \(5\) (R): ```betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) betula.s <- stack(betula[, c(2:4, 7:8)]) bwtheme <- standard.theme("pdf", color=FALSE) histogram(~ values | ind, data=betula.s, par.settings=bwtheme)``` (There was also an example of how to apply grayscale theme to these plots.) As one can see, SCALE.L could be also accepted as “approximately normal”. Among others, LEAF.MAXW is “least normal”. Answer to the birch characters variability exercise. To create a function, it is good to start from prototype: Code \(6\) (R): ```betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) CV <- function(x) {}``` This prototype does nothing, but on the next step you can improve it, for example, with fix(CV) command. Then test CV() with some simple argument. If the result is not satisfactory, fix(CV) again. 
At the end of this process, your function (actually, it “wraps” CV calculation explained above) might look like: Code \(7\) (R): ```betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) CV <- function(x) { 100*sd(x, na.rm=TRUE)/mean(x, na.rm=TRUE) }``` Then sapply() could be used to check variability of each measurement column: Code \(8\) (R): ```betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) CV <- function(x){100*sd(x, na.rm=TRUE)/mean(x, na.rm=TRUE)} sapply(betula[, c(2:4, 7:8)], CV)``` As one can see, LEAF.MAXW (location of the maximal leaf width) has the biggest variability. In the asmisc.r, there is CVs() function which implements this and three other measurements of relative variation. Answer to question about dact.txt data. Companion file dact_c.txt describes it as a random extract from some plant measurements. From the first chapter, we know that it is just one sequence of numbers. Consequently, scan() would be better than read.table(). First, load and check: Code \(9\) (R): ```dact <- scan("data/dact.txt") str(dact)``` Now, we can check the normality with our new function: Code \(10\) (R): ```Normality <- function(a) {ifelse(shapiro.test(a)\$p.value < 0.05, "NOT NORMAL", "NORMAL")} dact <- scan("data/dact.txt") Normality(dact) # asmisc.r``` Consequently, we must apply to dact only those analyses and characteristics which are robust to non-normality: Code \(11\) (R): ```dact <- scan("data/dact.txt") summary(dact)[-4] # no mean IQR(dact) mad(dact)``` Confidence interval for the median: Code \(12\) (R): ```dact <- scan("data/dact.txt") wilcox.test(dact, conf.int=TRUE)\$conf.int``` (Using the idea that every test output is a list, we extracted the confidence interval from output directly. Of course, we knew beforehand that name of a component we need is conf.int; this knowledge could be obtained from the function help (section “Value”). The resulted interval is broad.) To plot single numeric data, histogram (Figure \(3\)) is preferable (boxplots are better for comparison between variables): Code \(13\) (R): ```dact <- scan("data/dact.txt") Histr(dact, xlab="", main="") # asmisc.r``` Similar to histogram is the steam-and-leaf plot: Code \(14\) (R): ```dact <- scan("data/dact.txt") stem(dact)``` In addition, here we will calculate skewness and kurtosis, third and fourth central moments (Figure \(4\)). Skewness is a measure of how asymmetric is the distribution, kurtosis is a measure of how spiky is it. Normal distribution has both skewness and kurtosis zero whereas “flat” uniform distribution has skewness zero and kurtosis approximately \(-1.2\) (check it yourself). What about dact data? From the histogram (Figure \(3\)) and stem-and-leaf we can predict positive skewness (asymmetricity of distribution) and negative kurtosis (distribution flatter than normal). To check, one need to load library e1071 first: Code \(15\) (R): ```dact <- scan("data/dact.txt") library(e1071) skewness(dact) kurtosis(dact)``` Answer to the question about water lilies. First, we need to check the data, load it into R and check the resulted object: Code \(16\) (R): ```ny <- read.table("http://ashipunov.info/shipunov/open/nymphaeaceae.txt", h=TRUE, sep=" ") str(ny) # asmisc.r``` (Function Str() shows column numbers and the presence of NA.) One of possible ways to proceed is to examine differences between species by each character, with four paired boxplots. 
To make them in one row, we will employ for() cycle: Code \(17\) (R): ```ny <- read.table("http://ashipunov.info/shipunov/open/nymphaeaceae.txt", h=TRUE, sep=" ") oldpar <- par(mfrow=c(2, 2)) for (i in 2:5) boxplot(ny[, i] ~ ny[, 1], main=names(ny)[i]) par(oldpar)``` (Not here, but in many other cases, for() in R is better to replace with commands of apply() family. Boxplot function accepts “ordinary” arguments but in this case, formula interface with tilde is much more handy.) Please review this plot yourself. It is even better, however, to compare scaled characters in the one plot. First variant is to load lattice library and create trellis plot similar to Figure 7.1.8 or Figure 7.1.7: Code \(18\) (R): ```ny <- read.table("http://ashipunov.info/shipunov/open/nymphaeaceae.txt", h=TRUE, sep=" ") library(lattice) ny.s <- stack(as.data.frame(scale(ny[ ,2:5]))) ny.s\$SPECIES <- ny\$SPECIES bwplot(SPECIES ~ values | ind, ny.s, xlab="")``` (As usual, trellis plots “want” long form and formula interface.) Please check this plot yourself. Alternative is the Boxplots() (Figure \(5\)) command. It is not a trellis plot, but designed with a similar goal to compare many things at once: Code \(19\) (R): ```ny <- read.table("http://ashipunov.info/shipunov/open/nymphaeaceae.txt", h=TRUE, sep=" ") boxplot(ny[, 2:5], ny[, 1], srt=0, adj=c(.5, 1)) # asmisc.r``` (By default, Boxplots() rotates character labels, but this behavior is not necessary with 4 characters. This plot uses scale() so y-axis is, by default, not provided.) Or, with even more crisp Linechart() (Figure \(6\)): Code \(20\) (R): ```ny <- read.table("http://ashipunov.info/shipunov/open/nymphaeaceae.txt", h=TRUE, sep=" ") Linechart(ny[, 2:5], ny[, 1], se.lwd=2) # asmisc.r``` (Sometimes, IQRs are better to percept if you add grid() to the plot. Try it yourself.) Evidently (after SEPALS), PETALS and STAMENS make the best species resolution. To obtain numerical values, it is better to check the normality first. Note that species identity is the natural, internal feature of our data. Therefore, it is theoretically possible that the same character in one species exhibit normal distribution whereas in another species does not. This is why normality should be checked per character per species. This idea is close to the concept of fixed effects which are so useful in linear models (see next chapters). Fixed effects oppose the random effects which are not natural to the objects studied (for example, if we sample only one species of water lilies in the lake two times). Code \(21\) (R): ```ny <- read.table("http://ashipunov.info/shipunov/open/nymphaeaceae.txt", h=TRUE, sep=" ") Normality <- function(a) {ifelse(shapiro.test(a)\$p.value < 0.05, "NOT NORMAL", "NORMAL")} aggregate(ny[, 3:4], by=list(SPECIES=ny[, 1]), Normality) # asmisc.r``` (Function aggregate() does not only apply anonymous function to all elements of its argument, but also splits it on the fly with by list of factor(s). Similar is tapply() but it works only with one vector. Another variant is to use split() and then apply() reporting function to the each part separately.) By the way, the code above is good for learning but in our particular case, normality check is not required! This is because numbers of petals and stamens are discrete characters and therefore must be treated with nonparametric methods by definition. 
Thus, for confidence intervals, we should proceed with nonparametric methods: Code \(22\) (R): ```ny <- read.table("http://ashipunov.info/shipunov/open/nymphaeaceae.txt", h=TRUE, sep=" ") aggregate(ny[, 3:4],by=list(SPECIES=ny[, 1]),function(.x) wilcox.test(.x, conf.int=TRUE)\$conf.int)``` Confidence intervals reflect the possible location of central value (here median). But we still need to report our centers and ranges (confidence interval is not a range!). We can use either summary() (try it yourself), or some customized output which, for example, can employ median absolute deviation: Code \(23\) (R): ```aggregate(ny[, 3:4], by=list(SPECIES=ny[, 1]), function(.x) paste(median(.x, na.rm=TRUE), mad(.x, na.rm=TRUE), sep="±")) ny <- read.table("http://ashipunov.info/shipunov/open/nymphaeaceae.txt", h=TRUE, sep=" ")``` Now we can give the answer like “if there are 12–16 petals and 100–120 stamens, this is likely a yellow water lily, otherwise, if there are 23–29 petals and 66–88 stamens, this is likely a white water lily”. Answer to the question about phyllotaxis. First, we need to look on the data file, either with url.show(), or in the browser window and determine its structure. There are four tab-separated columns with headers, and at least the second column contains spaces. Consequently, we need to tell read.table() about both separator and headers and then immediately check the “anatomy” of new object: Code \(24\) (R): ```phx <- read.table("http://ashipunov.info/shipunov/open/phyllotaxis.txt", h=TRUE, sep=" ") str(phx)``` As you see, we have 11 families and therefore 11 proportions to create and analyze: Code \(25\) (R): ```phx <- read.table("http://ashipunov.info/shipunov/open/phyllotaxis.txt", h=TRUE, sep=" ") phx10 <- sapply(1:10, Phyllotaxis) phx.all <- paste(phx\$N.CIRCLES, phx\$N.LEAVES, sep="/") phx.tbl <- table(phx\$FAMILY, phx.all %in% phx10) dotchart(sort(phx.tbl[,"FALSE"]/(rowSums(phx.tbl))),lcolor=1, pch=19)``` Here we created 10 first classic phyllotaxis formulas (ten is enough since higher order formulas are extremely rare), then made these formulas (classic and non-classic) from data and finally made a table from the logical expression which checks if real world formulas are present in the artificially made classic sequence. Dotchart (Figure \(7\)) is probably the best way to visualize this table. Evidently, Onagraceae (evening primrose family) has the highest proportion of FALSE’s. Now we need actual proportions and finally, proportion test: Code \(26\) (R): ```phx <- read.table("http://ashipunov.info/shipunov/open/phyllotaxis.txt", h=TRUE, sep=" ") phx.all <- paste(phx\$N.CIRCLES, phx\$N.LEAVES, sep="/") phx10 <- sapply(1:10, Phyllotaxis) phx.tbl <- table(phx\$FAMILY, phx.all %in% phx10) mean.phx.prop <- sum(phx.tbl[, 1])/sum(phx.tbl) prop.test(phx.tbl["Onagraceae", 1], sum(phx.tbl["Onagraceae", ]), mean.phx.prop)``` As you see, proportion of non-classic formulas in Onagraceae (almost 77%) is statistically different from the average proportion of 27%. Answer to the exit poll question from the “Foreword”. Here is the way to calculate how many people we might want to ask to be sure that our sample 48% and 52% are “real” (represent the population): Code \(27\) (R): `power.prop.test(p1=0.48, p2=0.52, power=0.8)` We need to ask almost 5,000 people! To calculate this, we used a kind of power test which are frequently used for planning experiments. We made power=0.8 since it is the typical value of power used in social sciences. 
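To get a feeling for how strongly the required sample size depends on the size of the difference, one may vary p2 (a small illustration under the same 80% power assumption):

```
sapply(c(0.52, 0.55, 0.60), function(p2) power.prop.test(p1=0.48, p2=p2, power=0.8)$n)
```

The closer the two proportions are, the more people we need to ask.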
The next chapter gives the definition of power (as a statistical term) and some more information about the output of power tests.
All the methods covered in this chapter are based on the idea of a statistical test and side-by-side comparison. Even if there are methods which seemingly accept multiple samples (like ANOVA or the analysis of tables), they internally do the same: compare two pooled variations, or expected and observed frequencies.

05: Two-Dimensional Data - Differences

Suppose that we compared two sets of numbers, measurements which came from two samples. From the comparison, we found that they are different. But how do we know that this difference did not arise by chance? In other words, how do we decide that our two samples are truly different, i.e. did not come from one population?

These samples could be, for example, measurements of systolic blood pressure. If we study a drug which potentially lowers blood pressure, it is sensible to mix it randomly with a placebo, and then ask members of the group to report their blood pressure on the first day of the trial and, say, on the tenth day. Then the difference between the two measurements will allow us to decide if there is any effect:

Code $1$ (R):

```
bpress <- read.table("data/bpress.txt", h=TRUE)
head(bpress)
drug.d <- bpress$DRUG.10 - bpress$DRUG.1
placebo.d <- bpress$PLACEBO.10 - bpress$PLACEBO.1
mean(drug.d - placebo.d)
boxplot(bpress)
```

Now, there is a promising effect, a sufficient difference between the blood pressure changes with the drug and with the placebo. This is also visible well with boxplots (check it yourself). How to test it? We already know how to use the p-value, but it is the end of the logical chain. Let us start from the beginning.

Statistical hypotheses

Philosophers postulated that science can never prove a theory, but only disprove it. If we collect 1000 facts that support a theory, it does not mean we have proved it; it is possible that the 1001st piece of evidence will disprove it. This is why in statistical testing we commonly use two hypotheses. The one we are trying to prove is called the alternative hypothesis ($H_1$). The other, default one, is called the null hypothesis ($H_0$). The null hypothesis is a proposition of absence of something (for example, of a difference between two samples or of a relationship between two variables). We cannot prove the alternative hypothesis, but we can reject the null hypothesis and therefore switch to the alternative. If we cannot reject the null hypothesis, then we must stay with it.

Statistical errors

With two hypotheses, there are four possible outcomes (Table $1$). The first (a) and the last (d) outcomes are ideal cases: we either accept the null hypothesis which is correct for the population studied, or we reject $H_0$ when it is wrong.

If we have accepted the alternative hypothesis when it is not true, we have committed a Type I statistical error; we have found a pattern that does not exist. This situation is often called a “false positive”, or “false alarm”. The probability of committing a Type I error is connected with the p-value, which is always reported as one of the results of a statistical test. In fact, the p-value is the probability of obtaining the same or a greater effect if the null hypothesis is true.

Imagine a security officer on night duty who hears something strange. There are two choices: jump and check whether this noise is an indication of something important, or continue to relax. If the noise outside is not important or even not real but the officer jumped, this is a Type I error. The probability of hearing the suspicious noise when actually nothing happens is a p-value.
Table $1$: Statistical hypotheses, including illustrations of (b) Type I and (c) Type II errors. Bigger dots are samples, all dots are population(s).

| sample \ population | Null is true | Alternative is true |
|---|---|---|
| Accept null | (a) correct decision | (c) Type II error |
| Accept alternative | (b) Type I error | (d) correct decision |

For the security officer, it is probably better to commit a Type I error than to skip something important. However, in science the situation is the opposite: we always stay with the $H_0$ when the probability of committing a Type I error is too high. Philosophically, this is a variant of Occam's razor: scientists always prefer not to introduce anything (i.e., switch to the alternative) without necessity.

the man who single-handedly saved the world from nuclear war

This approach can also be found in other spheres of our life. Read the Wikipedia article about Stanislav Petrov (https://en.Wikipedia.org/wiki/Stanislav_Petrov); this is another example of a situation where acting on a false alarm would have been too costly.

The obvious question is: what probability is “too high”? The conventional answer places that threshold at 0.05; the alternative hypothesis is accepted if the p-value is less than 5% (more than 95% confidence level). In medicine, with human lives at stake, the thresholds are set even more strictly, at 1% or even 0.1%. In contrast, in the social sciences it is frequent to accept 10% as a threshold. Whatever is chosen as a threshold, it must be set a priori, before any test. It is not allowed to modify the threshold afterwards in order to find an excuse for a statistical decision already in mind.

Accepting the null hypothesis when in fact the alternative is true is a Type II statistical error: the failure to detect a pattern that actually exists. This is called a “false negative”, or “carelessness”. If the careless security officer does not jump when the noise outside is really important, this is a Type II error. The probability of committing a Type II error is related to the power of the statistical test (Figure $1$): power is one minus this probability, so the smaller the probability of a Type II error, the more powerful the test.
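A tiny simulation (an illustration, not from the text) makes both error types concrete: with no real effect, about 5% of tests still raise a false alarm; with a real effect, the share of correct rejections estimates the power:

```
set.seed(100)
p0 <- replicate(1000, t.test(rnorm(30))$p.value)            # null hypothesis is true
p1 <- replicate(1000, t.test(rnorm(30, mean=0.5))$p.value)  # a real effect of 0.5 exists
mean(p0 < 0.05)  # share of false alarms, close to 0.05
mean(p1 < 0.05)  # share of detected effects, an estimate of the power
```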
Two-sample Tests

Studying two samples, we use the same approach with two hypotheses. The typical null hypothesis is “there is no difference between these two samples”; in other words, they are both drawn from the same population. The alternative hypothesis is “there is a difference between these two samples”. There are many other ways to say that:

• Null: difference equal to 0 $\approx$ samples similar $\approx$ samples related $\approx$ samples came from the same population

• Alternative: difference not equal to 0 $\approx$ samples different $\approx$ samples not related $\approx$ samples came from different populations

And, in terms of p-values: if the p-value is less than the chosen threshold (usually 0.05), we switch to the alternative hypothesis; otherwise, we stay with the null.

If the data are “parametric”, then a parametric t-test is required. If the variables that we want to compare were obtained on different objects, we will use a two-sample t-test for independent variables, which is called with the command t.test():

Code $1$ (R):

```
bpress <- read.table("data/bpress.txt", h=TRUE)
head(bpress)
drug.d <- bpress$DRUG.10 - bpress$DRUG.1
placebo.d <- bpress$PLACEBO.10 - bpress$PLACEBO.1
Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")}
sapply(data.frame(placebo.d, drug.d), Normality) # asmisc.r
t.test(placebo.d, drug.d)
```

There is a long output. Please note the following:

• Apart from normality, there is a second assumption of the classic t-test, homogeneity of variances. However, R by default performs the more complicated Welch test, which does not require homogeneity. This is why the degrees of freedom are not a whole number.

• t is the t statistic and df are the degrees of freedom (related to the number of cases); both are needed to calculate the p-value.

• The confidence interval is the second most important output of the R t.test(). It is recommended to supply confidence intervals and effect sizes (see below) wherever possible. If zero is not within the confidence interval, there is a difference.

• The p-value is small, therefore the probability of “raising the false alarm” when “nothing happens” is also small. Consequently, we reject the null hypothesis (“nothing happens”, “no difference”, “no effect”) and therefore switch to the alternative hypothesis (“there is a difference between drugs”).

We can use the following order from most to least important:

1. p-value is first because it helps to make the decision;
2. confidence interval;
3. t statistic;
4. degrees of freedom.

Results of the t-test did not come out of nowhere. Let us calculate the same thing manually (actually, half-manually, because we will use the degrees of freedom from the above test results):

Code $2$ (R):

```
bpress <- read.table("data/bpress.txt", h=TRUE)
drug.d <- bpress$DRUG.10 - bpress$DRUG.1
placebo.d <- bpress$PLACEBO.10 - bpress$PLACEBO.1
df <- 6.7586
v1 <- var(placebo.d)
v2 <- var(drug.d)
(t.stat <- (mean(placebo.d) - mean(drug.d))/sqrt(v1/5 + v2/5))
(p.value <- 2*pt(-abs(t.stat), df))
```

(The function pt() calculates values of the Student distribution, the one which is used for the t-test. Actually, instead of direct calculation, this and similar functions estimate p-values using tables and approximate formulas. This is because the direct calculation of the exact probability requires integration, determining the square under the curve, like $\alpha$ from Figure 5.1.1.)

Using the t statistic and degrees of freedom, one can calculate the p-value without running the test. This is why, to report the result of a t-test (and the related Wilcoxon test, see later), most researchers list the statistic, degrees of freedom (for the t-test only) and p-value.
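For completeness, the non-integer degrees of freedom mentioned above come from the Welch approximation. A short sketch (assuming v1, v2 and the group size of 5 from the code above) should reproduce the df reported by t.test():

```
n1 <- n2 <- 5
(v1/n1 + v2/n2)^2/((v1/n1)^2/(n1 - 1) + (v2/n2)^2/(n2 - 1))  # Welch-Satterthwaite df
```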
Instead of “short form” from above, you can use a “long form” when the first column of the data frame contains all data, and the second indicates groups: Code $3$ (R): bpress <- read.table("data/bpress.txt", h=TRUE) drug.d <- bpress$DRUG.10 - bpress$DRUG.1 placebo.d <- bpress$PLACEBO.10 - bpress$PLACEBO.1 long <- stack(data.frame(placebo.d, drug.d)) head(long) t.test(values ~ ind, data=long) (Note the formula interface which usually comes together with a long form.) Long form is handy also for plotting and data manipulations (check the plot yourself): Code $4$ (R): bpress <- read.table("data/bpress.txt", h=TRUE) drug.d <- bpress$DRUG.10 - bpress$DRUG.1 placebo.d <- bpress$PLACEBO.10 - bpress$PLACEBO.1 long <- stack(data.frame(placebo.d, drug.d)) boxplot(values ~ ind, data=long) tapply(long$values, long$ind, sd) aggregate(values ~ ind, range, data=long) Another example of long form is the embedded beaver2 data: Code $5$ (R): Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} tapply(beaver2$temp, beaver2$activ, Normality) # asmisc.r boxplot(temp ~ activ, data=beaver2) t.test(temp ~ activ, data=beaver2) (Check the boxplot yourself. We assumed that temperature was measured randomly.) Again, p-value is much less than 0.05, and we must reject the null hypothesis that temperatures are not different when beaver is active or not. To convert long form into short, use unstack() function: Code $6$ (R): uu <- unstack(beaver2, temp ~ activ) str(uu) (Note that result is a list because numbers of observations for active and inactive beaver are different. This is another plus of long form: it can handle subsets of unequal size.) If measurements were obtained on one object, a paired t-test should be used. In fact, it is just one-sample t-test applied to differences between each pair of measurements. To do paired t-test in R, use the parameter paired=TRUE. It is not illegal to choose common t-test for paired data, but paired tests are usually more powerful: Code $7$ (R): bpress <- read.table("data/bpress.txt", h=TRUE) drug.d <- bpress$DRUG.10 - bpress$DRUG.1 placebo.d <- bpress$PLACEBO.10 - bpress$PLACEBO.1 sapply(bpress, Normality) # asmisc.r Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} attach(bpress) t.test(DRUG.1, DRUG.10, paired=TRUE) t.test(DRUG.10 - DRUG.1, mu=0) # same results t.test(DRUG.1, DRUG.10) # non-paired detach(bpress) If the case of blood pressure measurements, common t-test does not “know” which factor is responsible more for the differences: drug influence or individual variation between people. Paired t-test excludes individual variation and allows each person to serve as its own control, this is why it is more precise. Also more precise (if the alternative hypothesis is correctly specified) are one-tailed tests: Code $8$ (R): bpress <- read.table("data/bpress.txt", h=TRUE) drug.d <- bpress$DRUG.10 - bpress$DRUG.1 placebo.d <- bpress$PLACEBO.10 - bpress$PLACEBO.1 attach(bpress) t.test(PLACEBO.10, DRUG.10, alt="greater") # one-tailed test t.test(PLACEBO.10, DRUG.10) # "common" test detach(bpress) (Here we used another alternative hypothesis: instead of guessing difference, we guessed that blood pressure in “placebo” group was greater on 10th day.) Highly important note: all decisions related with the statistical tests (parametric or nonparametric, paired or non-paired, one-sided or two-sided, 0.05 or 0.01) must be done a priori, before the analysis. The “hunting for the p-value” is illegal! 
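The value of pairing is easy to see in a small simulation (hypothetical data, not from the text): each person gets an individual baseline, and the treatment effect is small compared to the variation between people:

```
set.seed(5)
base <- rnorm(10, 120, 15)            # individual "baseline" blood pressures
before <- base + rnorm(10, 0, 2)
after <- base - 5 + rnorm(10, 0, 2)   # true effect of the treatment: -5
t.test(before, after)$p.value               # non-paired: the effect may be masked
t.test(before, after, paired=TRUE)$p.value  # paired: the effect stands out clearly
```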
If we work with nonparametric data, nonparametric Wilcoxon test (also known as a Mann-Whitney test) is required, under the command wilcox.test(): Code $9$ (R): Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} tapply(ToothGrowth$len, ToothGrowth$supp, Normality) # asmisc.r boxplot(len ~ supp, data=ToothGrowth, notch=TRUE) wilcox.test(len ~ supp, data=ToothGrowth) (Please run the boxplot code and note the use of notches. It is commonly accepted that overlapping notches is a sign of no difference. And yes, Wilcoxon test supports that. Notches are not default because in many cases, boxplots are visually not overlapped. By the way, we assumed here that only supp variable is present and ignored dose (see ?ToothGrowth for more details).) And yes, it is really tempting to conclude something except “stay with null” if p-value is 0.06 (Figure $1$) but no. This is not allowed. Like in the t-test, paired data requires the parameter paired=TRUE: Code $10$ (R): Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} w0 <- ChickWeight$weight[ChickWeight$Time == 0] w2 <- ChickWeight$weight[ChickWeight$Time == 2] sapply(data.frame(w0, w2), Normality) boxplot(w0, w2) wilcox.test(w0, w2, paired=TRUE) (Chicken weights are really different between hatching and second day! Please check the boxplot yourself.) Nonparametric tests are generally more universal since they do not assume any particular distribution. However, they are less powerful (prone to Type II error, “carelessness”). Moreover, nonparametric tests based on ranks (like Wilcoxon test) are sensitive to the heterogeneity of variances$^{[1]}$. All in all, parametric tests are preferable when data comply with their assumptions. Table $1$ summarizes this simple procedure. Table $1$: How to choose two-sample test in R. This table should be read from the top right cell. Paired: one object, two measures Non-paired Normal t.test(..., paired=TRUE) t.test(...) Non-normal wilcox.test(..., paired=TRUE) wilcox.test(...) Embedded in R is the classic data set used in the original work of Student (the pseudonym of mathematician William Sealy Gossett who worked for Guinness brewery and was not allowed to use his real name for publications). This work was concerned with comparing the effects of two drugs on the duration of sleep for 10 patients. In R these data are available under the name sleep (Figure $2$ shows corresponding boxplots). The data is in the long form: column extra contains the increase of the sleep times (in hours, positive or negative) while the column group indicates the group (type of drug). Code $11$ (R): plot(extra ~ group, data=sleep) (Plotting uses the “model formula”: in this case, extra ~ group. R is smart enough to understand that group is the “splitting” factor and should be used to make two boxplots.) The effect of each drug on each person is individual, but the average length by which the drug prolongs sleep can be considered a reasonable representation of the “strength” of the drug. With this assumption, we will attempt to use a two sample test to determine whether there is a significant difference between the means of the two samples corresponding to the two drugs. 
First, we need to determine which test to use: Code $12$ (R): Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} tapply(sleep$extra, sleep$group, Normality) # asmisc.r (Data in the long form is perfectly suitable for tapply() which splits first argument in accordance with second, and then apply the third argument to all subsets.) Since the data comply with the normality assumption, we can now employ parametric paired t-test: Code $13$ (R): t.test(extra ~ group, data=sleep, paired=TRUE) (Yes, we should reject null hypothesis about no difference.) How about the probability of Type II errors (false negatives)? It is related with statistical power, and could be calculated through power test: Code $14$ (R): power.t.test(n=10, sig.level=0.05, d=1.58) Therefore, if we want the level of significance 0.05, sample size 10 and the effect (difference between means) 1.58, then probability of false negatives should be approximately $1-0.92=0.08$ which is really low. Altogether, this makes close to 100% our positive predictive value (PPV), probability of our positive result (observed difference) to be truly positive for the whole statistical population. Package caret is able to calculate PPV and other values related with statistical power. It is sometimes said that t-test can handle the number of samples as low as just four. This is not absolutely correct since the power is suffering from small sample sizes, but it is true that main reason to invent t-test was to work with small samples, smaller then “rule of 30” discussed in first chapter. Both t-test and Wilcoxon test check for differences only between measures of central tendency (for example, means). These homogeneous samples Code $15$ (R): (aa <- 1:9) (bb <- rep(5, 9)) have the same mean but different variances (check it yourself), and thus the difference would not be detected with t-test or Wilcoxon test. Of course, tests for scale measures (like var.test()) also exist, and they might find the difference. You might try them yourself. The third homogeneous sample complements the case: Code $16$ (R): (xx <- 51:59) as differences in centers, not in ranges, will now be detected (check it). There are many other two sample tests. One of these, the sign test, is so simple that it does not exist in R by default. The sign test first calculates differences between every pair of elements in two samples of equal size (it is a paired test). Then, it considers only the positive values and disregards others. The idea is that if samples were taken from the same distribution, then approximately half the differences should be positive, and the proportions test will not find a significant difference between 50% and the proportion of positive differences. If the samples are different, then the proportion of positive differences should be significantly more or less than half. Come up with R code to carry out sign test, and test two samples that were mentioned at the beginning of the section. The standard data set airquality contains information about the amount of ozone in the atmosphere around New York City from May to September 1973. The concentration of ozone is presented as a rounded mean for every day. To analyze it conservatively, we use nonparametric methods. Determine how close to normally distributed the monthly concentration measurements are. 
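One quick way to look at the question just posed is to reuse the Normality() helper defined earlier in this chapter (a sketch, not the only possible approach):

```
Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")}
with(airquality, tapply(Ozone, Month, Normality))  # shapiro.test() tolerates the NAs here
```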
Let us test the hypothesis that ozone levels in May and August were the same: Code $17$ (R): wilcox.test(Ozone ~ Month, data=airquality, subset = Month %in% c(5, 8), conf.int=TRUE) (Since Month is a discrete variable as the “number” simply represents the month, the values of Ozone will be grouped by month. We used the parameter subset with the operator %in%, which chooses May and August, the 5th and 8th month. To obtain the confidence interval, we used the additional parameter conf.int. $W$ is the statistic employed in the calculation of p-values. Finally, there were warning messages about ties which we ignored.) The test rejects the null hypothesis, of equality between the distribution of ozone concentrations in May and August, fairly confidently. This is plausible because the ozone level in the atmosphere strongly depends on solar activity, temperature and wind. Differences between samples are well represented by box plots (Figure $3$): Code $18$ (R): boxplot(Ozone ~ Month, data=airquality, subset=Month %in% c(5, 8), notch=TRUE) (Note that in the boxplot() command we use the same formula as the statistical model. Option subset is alternative way to select from data frame.) It is conventionally considered that if the boxes overlap by more than a third of their length, the samples are not significantly different. The last example in this section is related with the discovery of argon. At first, there was no understanding that inert gases exist in nature as they are really hard to discover chemically. But in the end of XIX century, data start to accumulate that something is wrong with nitrogen gas (N$_2$). Physicist Lord Rayleigh presented data which show that densities of nitrogen gas produced from ammonia and nitrogen gas produced from air are different: Code $19$ (R): ar <- read.table("data/argon.txt") unstack(ar, form=V2 ~ V1) As one might see, the difference is really small. However, it was enough for chemist Sir William Ramsay to accept it as a challenge. Both scientists performed series of advanced experiments which finally resulted in the discovery of new gas, argon. In 1904, they received two Nobel Prizes, one in physical science and one in chemistry. From the statistical point of view, most striking is how the visualization methods perform with this data: Code $20$ (R): ar <- read.table("data/argon.txt") means <- tapply(ar$V2, ar$V1, mean, na.rm=TRUE) oldpar <- par(mfrow=1:2) boxplot(V2 ~ V1, data=ar) barplot(means) par(oldpar) The Figure $4$ shows as clear as possible that boxplots have great advantage over traditional barplots, especially in cases of two-sample comparison. We recommend therefore to avoid barplots, and by all means avoid so-called “dynamite plots” (barplots with error bars on tops). Beware of dynamite! Their most important disadvantages are (1) they hide primary data (so they are not exploratory), and in the same time, do not illustrate any statistical test (so they are not inferential); (2) they (frequently wrongly) assume that data is symmetric and parametric; (3) they use space inefficiently, have low data-to-ink ratio; (4) they cause an optical illusion in which the reader adds some of the error bar to the height of the main bar when trying to judge the heights of the main bars; (5) the standard deviation error bar (typical there) has no direct relation even with comparing two samples (see above how t-test works), and has almost nothing to do with comparison of multiple samples (see below how ANOVA works). 
And, of course, they would not have helped Lord Rayleigh and Sir William Ramsay to receive their Nobel Prizes.

Please check the Lord Rayleigh data with the appropriate statistical test and report the results.

So what to do with dynamite plots? Replace them with boxplots. The only disadvantage of boxplots is that they are harder to draw by hand, which sounds funny in the era of computers. This, by the way, partly explains why there are so many dynamite plots around: they are a sort of legacy of pre-computer times.

A supermarket has two cashiers. To analyze their work efficiency, the length of the line at each of their registers is recorded several times a day. The data are recorded in kass.txt. Which cashier processes customers more quickly?

Effect sizes

Statistical tests allow us to make decisions but do not show how different the samples are. Consider the following examples:

Code $21$ (R):

```
wilcox.test(1:10, 1:10 + 0.001, paired=TRUE)
wilcox.test(1:10, 1:10 + 0.0001, paired=TRUE)
```

(Here the difference decreases but the p-value does not grow!)

One of the beginner's mistakes is to think that p-values measure differences, but this is really wrong. P-values are probabilities and are not supposed to measure anything. They can be used only in one, binary, yes/no way: to help with statistical decisions. In addition, the researcher can almost always obtain a reasonably good p-value even if the effect is minuscule, as in the second example above.

To estimate the extent of differences between populations, effect sizes were invented. It is strongly recommended to report them together with p-values.

The package effsize calculates several effect size metrics and provides interpretations of their magnitude. Cohen's d is the parametric effect size metric which indicates the difference between two means:

Code $22$ (R):

```
library(effsize)
cohen.d(extra ~ group, data=sleep)
```

(Note that in this example the effect size is large but the confidence interval includes zero; this spoils the “large” effect.)

If the data is nonparametric, it is better to use Cliff's Delta:

Code $23$ (R):

```
cliff.delta(1:10, 1:10+0.001)
cliff.delta(Ozone ~ Month, data=airquality, subset = Month %in% c(5, 8))
```

Now we have quite a few measurements to keep in memory. The simple table below emphasizes the most frequently used ones:

Table $2$: Most frequently used numerical tools, both for one and two samples.

| | Center | Scale | Test | Effect |
|---|---|---|---|---|
| Parametric | Mean | Standard deviation | t-test | Cohen's d |
| Non-parametric | Median | IQR, MAD | Wilcoxon test | Cliff's Delta |

There are many measures of effect size. In biology, a useful one is the coefficient of divergence ($K$) discovered by Alexander Lyubishchev in 1959, which is related to the recently introduced squared strictly standardized mean difference (SSSMD):

Code $24$ (R):

```
K(extra ~ group, data=sleep) # asmisc.r
summary(K(extra ~ group, data=sleep))
```

Lyubishchev noted that good biological species should have $K>18$; this means no transgression.
The coefficient of divergence is robust to allometric changes: Code $25$ (R): K(extra ~ group, data=sleep) # asmisc.r (aa <- 1:9) summary(K(aa*3, aa*10)) # asmisc.r cliff.delta(1:10, 1:10+0.001) cliff.delta(Ozone ~ Month, data=airquality, subset = Month %in% c(5, 8)) cliff.delta(aa*3, aa*10) There is also a MAD-based nonparametric variant of $K$: Code $26$ (R): K(extra ~ group, data=sleep) # asmisc.r summary(K(1:10, 1:10+0.001, mad=TRUE)) # asmisc.r (dd <- K(Ozone ~ Month, data=airquality[airquality$Month %in% c(5, 8), ], mad=TRUE)) # asmisc.r summary(dd) In the data file grades.txt are the grades of a particular group of students for the first exam (in the column labeled A1) and the second exam (A2), as well as the grades of a second group of students for the first exam (B1). Do the A class grades for the first and second exams differ? Which class did better in the first exam, A or B? Report significances, confidence intervals and effect sizes. In the open repository, file aegopodium.txt contains measurements of leaves of sun and shade Aegopodium podagraria (ground elder) plants. Please find the character which is most different between sun and shade and apply the appropriate statistical test to find if this difference is significant. Report also the confidence interval and effect size.
One way What if we need to know if there are differences between three samples? The first idea might be to make a series of statistical tests between each pair of samples. In the case of three samples, we will need three t-tests or Wilcoxon tests. What is unfortunate is that the number of required tests will grow dramatically with the number of samples. For example, to compare six samples we will need to perform 15 tests! An even more serious problem is that all tests are based on the idea of probability. Consequently, the chance of making a Type I error (false alarm) will grow every time we perform more simultaneous tests on the same sample. For example, in one test, if the null hypothesis is true, there is usually only a 5% chance to reject it by mistake. However, with 20 tests (Figure E.2), if all corresponding null hypotheses are true, the expected number of incorrect rejections is 1! This is called the problem of multiple comparisons. One of the most striking examples of multiple comparisons is the “dead salmon case”. In 2009, a group of researchers published results of MRI testing which detected brain activity in a dead fish! But that was simply because they purposely did not account for multiple comparisons$^{[1]}$. The special technique, ANalysis Of VAriance (ANOVA), was invented to avoid multiple comparisons in the case of more than two samples. In R formula language, ANOVA might be described as response ~ factor where response is the measurement variable. Note that the only difference from the two-sample case above is that the factor in ANOVA has more than two levels. The null hypothesis here is that all samples belong to the same population (“are not different”), and the alternative hypothesis is that at least one sample is divergent and does not belong to the same population (“samples are different”). In terms of p-values: If any sample came from a different population, then the variance between samples should be at least comparable with (or larger than) the variation within samples; in other words, the F-value (or F-ratio) should be $\geq 1$. To check that inferentially, the F-test is applied. If the p-value is small enough, then at least one sample (subset, column) is divergent. ANOVA does not reveal which sample is different. This is because variances in ANOVA are pooled. But what if we still need to know that? Then we should apply post hoc tests. It is not required to run them after ANOVA; what is required is to perform them carefully and always apply p-value adjustment for multiple comparisons. This adjustment typically increases p-values to avoid accumulation from multiple tests. ANOVA and post hoc tests answer different research questions, therefore it is up to the researcher to decide which to perform and when. ANOVA is a parametric method, and this typically goes well with its first assumption, normal distribution of residuals (deviations between observed and expected values). Typically, we check normality of the whole dataset because ANOVA uses pooled data anyway. It is also possible to check normality of residuals directly (see below). Please note that ANOVA tolerates mild deviations from normality, both in data and in residuals. But if the data is clearly nonparametric, it is recommended to use other methods (see below). The second assumption is homogeneity of variance (homoscedasticity), or, simpler, similarity of variances. This is more important and means that sub-samples were collected with similar methods. The third assumption is more general. It was already described in the first chapter: independence of samples.
“Repeated measurements ANOVA” is however possible, but requires more specific approach. All assumptions must be checked before analysis. The best way of data organization for the ANOVA is the long form explained above: two variables, one of them contains numerical data, whereas the other describes grouping (in R terminology, it is a factor). Below, we create the artificial data which describes three types of hair color, height (in cm) and weight (in kg) of 90 persons: Code $1$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) str(hwc) boxplot(WEIGHT ~ COLOR, data=hwc, ylab="Weight, kg") ​​​(Note that notches and other “bells and whistles” do not help here because we want to estimate joint differences; raw boxplot is probably the best choice.) Code $2$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} sapply(hwc[sapply(hwc, is.numeric)], Normality) # asmisc.r tapply(hwc$WEIGHT, hwc$COLOR, var) (Note the use of double sapply() to check normality only for measurement columns.) It looks like both assumptions are met: variance is at least similar, and variables are normal. Now we run the core ANOVA: Code $3$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) wc.aov <- aov(WEIGHT ~ COLOR, data=hwc) summary(wc.aov) This output is slightly more complicated then output from two-sample tests, but contains similar elements (from most to least important): 1. p-value (expressed as Pr(>F)) and its significance; 2. statistic (F value); 3. degrees of freedom (Df) All above numbers should go to the report. In addition, there are also: 1. variance within columns (Sum Sq for Residuals); 2. variance between columns (Sum Sq for COLOR); 3. mean variances (Sum Sq divided by Df) (Grand variance is just a sum of variances between and within columns.) If degrees of freedom are already known, it is easy enough to calculate F value and p-value manually, step by step: Code $4$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) df1 <- 2 df2 <- 87 group.size <- 30 (sq.between <- sum(tapply(hwc$WEIGHT, hwc$COLOR, function(.x) (mean(.x) - mean(hwc$WEIGHT))^2))*group.size) (mean.sq.between <- sq.between/df1) (sq.within <- sum(tapply(hwc$WEIGHT, hwc$COLOR, function(.x) sum((.x - mean(.x))^2)))) (mean.sq.within <- sq.within/df2) (f.value <- mean.sq.between/mean.sq.within) (p.value <- (1 - pf(f.value, df1, df2))) Of course, R calculates all of that automatically, plus also takes into account all possible variants of calculations, required for data with another structure. Related to the above example is also that to report ANOVA, most researches list three things: two values for degrees of freedom, F value and, of course, p-value. All in all, this ANOVA p-value is so small that H$_0$ should be rejected in favor of the hypothesis that at least one sample is different. Remember, ANOVA does not tell which sample is it, but boxplots (Figure $2$) suggest that this might be people with black hairs. 
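Since aov() is a linear model under the hood (this fact is used below to obtain the effect size from summary.lm()), the same analysis can be cross-checked with lm(); a quick sketch, assuming hwc.txt is loaded as before:
# One-way ANOVA as a linear model: same F value and p-value as summary(aov(...))
hwc <- read.table("data/hwc.txt", h=TRUE)
wc.lm <- lm(WEIGHT ~ COLOR, data=hwc)
anova(wc.lm)                  # ANOVA table of the linear model
summary(wc.lm)$r.squared      # this R-squared is the eta-squared used below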
To check the second assumption of ANOVA, that variances should be at least similar, homogeneous, it is sometimes enough to look at the variance of each group with tapply() as above or with aggregate(): Code $5$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) aggregate(hwc[,-1], by=list(COLOR=hwc[, 1]), var) But it is better to test if variances are equal with, for example, bartlett.test() which has the same formula interface: Code $6$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) bartlett.test(WEIGHT ~ COLOR, data=hwc) (The null hypothesis of the Bartlett test is the equality of variances.) An alternative is the nonparametric Fligner-Killeen test: Code $7$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) fligner.test(WEIGHT ~ COLOR, data=hwc) (The null is the same as in the Bartlett test.) The first assumption of ANOVA could also be checked here directly: Code $8$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) wc.aov <- aov(WEIGHT ~ COLOR, data=hwc) Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} Normality(wc.aov$residuals) The effect size of ANOVA is called $\eta^2$ (eta squared). There are many ways to calculate eta squared but the simplest is derived from the linear model (see in next sections). It is handy to define $\eta^2$ as a function: Code $9$ (R): Eta2 <- function(aov){summary.lm(aov)$r.squared} and then use it for results of both classic ANOVA and one-way test (see below): Code $10$ (R): Eta2 <- function(aov){summary.lm(aov)$r.squared} hwc <- read.table("data/hwc.txt", h=TRUE) wc.aov <- aov(WEIGHT ~ COLOR, data=hwc) (ewc <- Eta2(wc.aov)) Mag(ewc) # asmisc.r The second function is an interpreter for $\eta^2$ and similar effect size measures (like the $r$ correlation coefficient or R$^2$ from a linear model). If there is a need to calculate effect sizes for each pair of groups, two-sample effect size measurements like the coefficient of divergence (Lyubishchev’s K) are applicable. One more example of classic one-way ANOVA comes from the data embedded in R (make the boxplot yourself): Code $11$ (R): Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} Normality(chickwts$weight) bartlett.test(weight ~ feed, data=chickwts) boxplot(weight ~ feed, data=chickwts) chicks.aov <- aov(weight ~ feed, data=chickwts) summary(chicks.aov) Eta2 <- function(aov){summary.lm(aov)$r.squared} Eta2(chicks.aov) Mag(Eta2(chicks.aov)) # asmisc.r Consequently, there is a very large difference between weights of chickens on different diets. If there is a goal to find the divergent sample(s) statistically, one can use the post hoc pairwise t-test which takes into account the problem of multiple comparisons described above; this is just a compact way to run many t-tests and adjust the resulting p-values: Code $12$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) pairwise.t.test(hwc$WEIGHT, hwc$COLOR) (This test uses by default the Holm method of p-value correction. Another way is the Bonferroni correction explained below. All available ways of correction are accessible through the p.adjust() function.) Similar to the result of the pairwise t-test (but more detailed) is the result of the Tukey Honest Significant Differences test (Tukey HSD): Code $13$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) wc.aov <- aov(WEIGHT ~ COLOR, data=hwc) TukeyHSD(wc.aov) Are our groups also different by height? If yes, are black-haired still different? Post hoc tests output p-values so they do not measure anything.
If there is a need to calculate group-to-group effect sizes, two-sample effect measures (like Lyubishchev’s K) are generally applicable. To understand pairwise effects, you might want to use the custom function pairwise.Eff() which is based on double sapply(): Code $14$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) pairwise.Eff(hwc$WEIGHT, hwc$COLOR, eff="cohen.d") # asmisc.r The next example is again from the embedded data (make the boxplot yourself): Code $15$ (R): Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} Normality(PlantGrowth$weight) bartlett.test(weight ~ group, data=PlantGrowth) plants.aov <- aov(weight ~ group, data=PlantGrowth) summary(plants.aov) Eta2 <- function(aov){summary.lm(aov)$r.squared} Eta2(plants.aov) Mag(Eta2(plants.aov)) # asmisc.r boxplot(weight ~ group, data=PlantGrowth) with(PlantGrowth, pairwise.t.test(weight, group)) As a result, yields of plants from the two treatment conditions are different, but there is no difference between each of them and the control. However, the overall effect size of this experiment is high. If variances are not similar, then oneway.test() will replace the simple (one-way) ANOVA: Code $16$ (R): hwc2 <- read.table("data/hwc2.txt", h=TRUE) boxplot(WEIGHT ~ COLOR, data=hwc2) Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} sapply(hwc2[, 2:3], Normality) # asmisc.r tapply(hwc2$WEIGHT, hwc2$COLOR, var) bartlett.test(WEIGHT ~ COLOR, data=hwc2) oneway.test(WEIGHT ~ COLOR, data=hwc2) Eta2 <- function(aov){summary.lm(aov)$r.squared} (e2 <- Eta2(aov(WEIGHT ~ COLOR, data=hwc2))) Mag(e2) pairwise.t.test(hwc2$WEIGHT, hwc2$COLOR) # most applicable post hoc (Here we used another data file where variables are normal but group variances are not homogeneous. Please make the boxplot and check the results of the post hoc test yourself.) What if the data is not normal? The first workaround is to apply some transformation which might make the data normal: Code $17$ (R): Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} Normality(InsectSprays$count) # asmisc.r Normality(sqrt(InsectSprays$count)) However, the same transformation could influence variance: Code $18$ (R): bartlett.test(sqrt(count) ~ spray, data=InsectSprays)$p.value Frequently, it is better to use the nonparametric ANOVA replacement, the Kruskal-Wallis test: Code $19$ (R): hwc3 <- read.table("data/hwc3.txt", h=TRUE) boxplot(WEIGHT ~ COLOR, data=hwc3) sapply(hwc3[, 2:3], Normality) # asmisc.r kruskal.test(WEIGHT ~ COLOR, data=hwc3) (Again, another variant of the data file was used, here variables are not even normal. Please make the boxplot yourself.) The effect size of the Kruskal-Wallis test could be calculated with $\epsilon^2$: Code $20$ (R): hwc3 <- read.table("data/hwc3.txt", h=TRUE) Epsilon2 <- function(kw, n) {unname(kw$statistic/((n^2 - 1)/(n+1)))} # n is the number of cases kw <- kruskal.test(WEIGHT ~ COLOR, data=hwc3) Epsilon2(kw, nrow(hwc3)) Mag(Epsilon2(kw, nrow(hwc3))) # asmisc.r The overall effect size is high; it is also well visible on the boxplot (make it yourself): Code $21$ (R): hwc3 <- read.table("data/hwc3.txt", h=TRUE) boxplot(WEIGHT ~ COLOR, data=hwc3) To find out which sample deviates, use a nonparametric post hoc test: Code $22$ (R): hwc3 <- read.table("data/hwc3.txt", h=TRUE) pairwise.wilcox.test(hwc3$WEIGHT, hwc3$COLOR) (There are multiple warnings about ties. To get rid of them, replace the first argument with jitter(hwc3$WEIGHT).
However, since jitter() adds random noise, it is better to be careful and repeat the analysis several times if p-values are close to the threshold like here.) Another post hoc test for the nonparametric one-way layout is Dunn’s test. There is a separate dunn.test package: Code $23$ (R): hwc3 <- read.table("data/hwc3.txt", h=TRUE) library(dunn.test) dunn.test(hwc3$WEIGHT, hwc3$COLOR, method="holm", altp=TRUE) (Output is more advanced but overall results are similar. More post hoc tests like Dunnett’s test exist in the multcomp package.) It is not necessary to check homogeneity of variance before the Kruskal-Wallis test, but please note that it assumes that distribution shapes are not radically different between samples. If this is not the case, one of the workarounds is to transform the data first, either logarithmically or with square root, or to the ranks$^{[2]}$, or even in a more sophisticated way. Another option is to apply permutation tests (see Appendix). As a post hoc test, it is possible to use pairwise.Rro.test() from asmisc.r which does not assume similarity of distributions. The next figure (Figure $3$) contains an Euler diagram which summarizes what was said above about different assumptions and ways of simple ANOVA-like analyses. Please note that there are many more post hoc test procedures than listed, and many of them are implemented in various R packages. The typical sequence of procedures related to one-way analysis is listed below: • Check if data structure is suitable (head(), str(), summary()), is it long or short • Plot (e.g., boxplot(), beanplot()) • Normality, with plot or Normality()-like function • Homogeneity of variance (homoscedasticity) (with bartlett.test() or fligner.test()) • Core procedure (classic aov(), oneway.test() or kruskal.test()) • Optionally, effect size ($\eta^2$ or $\epsilon^2$ with appropriate formula) • Post hoc test, for example TukeyHSD(), pairwise.t.test(), dunn.test() or pairwise.wilcox.test() In the open repository, data file melampyrum.txt contains results of cow-wheat (Melampyrum spp.) measurements in multiple localities. Please find if there is a difference in plant height and leaf length between plants from different localities. Which localities are divergent in each case? To understand the structure of data, use companion file melampyrum_c.txt. All in all, if you have two or more samples represented with measurement data, the following table will help to research differences:
                       two samples                more than two samples
Step 1. Graphic        boxplot(); beanplot()
Step 2. Normality etc. Normality(); hist(); qqnorm() and qqline(); optionally: bartlett.test() or fligner.test()
Step 3. Test           t.test(); wilcox.test()    aov(); oneway.test(); kruskal.test()
Step 4. Effect         cohen.d(); cliff.delta()   optionally: Eta2(); Epsilon2()
Step 5. Pairwise       NA                         TukeyHSD(); pairwise.t.test(); dunn.test()
Table $1$: How to research differences between numerical samples in R.
More than one way Simple, one-way ANOVA uses only one factor in its formula. Frequently, however, we need to analyze results of more sophisticated experiments or observations, when data is split two or more times and possibly by different principles. Our book is not intended to go deeper, and the following is just an introduction to the world of design and analysis of experiments. Some terms, however, are important to explain: Two-way This is when data contains two independent factors. See, for example, ?ToothGrowth data embedded in R (a minimal worked sketch is given at the end of this section). With more factors, three- and more-way layouts are possible.
Repeated measurements This is analogous to paired two-sample cases, but with three or more measurements on each subject. This type of layout might require specific approaches. See ?Orange or ?Loblolly data. Unbalanced When groups have different sizes and/or some factor combinations are absent, then the design is unbalanced; this sometimes complicates calculations. Interaction If there is more than one factor, the factors could work together (interact) to produce the response. Consequently, with two factors, analysis should include statistics for each of them plus a separate statistic for the interaction, three values in total. We will return to interaction later, in the section about ANCOVA (“Many lines”). Here we only mention the useful way to show interactions visually, with an interaction plot (Figure $4$): Code $24$ (R): with(ToothGrowth, interaction.plot(supp, dose, len)) (It is, for example, easy to see from this interaction plot that with dose 2, the type of supplement does not matter.) Figure $4$ Interaction plot for ToothGrowth data. Random and fixed effects Some factors are irrelevant to the research but participate in the response, therefore they must be included in the analysis. Other factors are planned and intentional. Respectively, they are called random and fixed effects. This difference also influences calculations.
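To make the two-way layout and its interaction term concrete, here is the minimal sketch referenced above, using the embedded ToothGrowth data (a sketch only; the interpretation of such designs is beyond this introduction):
# Two fixed factors (supplement type and dose) plus their interaction
tg <- ToothGrowth
tg$dose <- factor(tg$dose)                 # dose is numeric in the original data
summary(aov(len ~ supp * dose, data=tg))   # three effects: supp, dose, supp:dose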
Contingency tables How do you compare samples of categorical data? These frequently are text only, there are have no numbers, like in classic “Fisher’s tea drinker” example$^{[1]}$. A British woman claimed to be able to distinguish whether milk or tea was added to the cup first. To test, she was given 8 cups of tea, in four of which milk was added first: Code $1$ (R): tea <- read.table("data/tea.txt", h=TRUE) head(tea) The only way is to convert it to numbers, and the best way to convert is to count cases, make contingency table: Code $2$ (R): tea <- read.table("data/tea.txt", h=TRUE) (tea.t <- table(tea)) Contingency table is not a matrix or data frame, it is the special type of R object called “table”. In R formula language, contingency tables are described with simple formula ~ factor(s) To use this formula approach, run xtabs() command: Code $3$ (R): tea <- read.table("data/tea.txt", h=TRUE) xtabs(~ GUESS + REALITY, data=tea) (More than one factors have to be connected with + sign.) If there are more than two factors, R can build a multidimensional table and print it as a series of two-dimensional tables. Please call the embedded Titanic data to see how 3-dimensional contingency table looks. A “flat” contingency table can be built if all the factors except one are combined into one multidimensional factor. To do this, use the command ftable(): Code $4$ (R): ftable(Titanic) The function table can be used simply for calculation of frequencies (including missing data, if needed): Code $5$ (R): d <- rep(LETTERS[1:3], 10) is.na(d) <- 3:4 d table(d, useNA="ifany") The function mosaicplot() creates a graphical representation of a contingency table (Figure $1$): Code $6$ (R): titanic <- apply(Titanic, c(1, 4), sum) titanic mosaicplot(titanic, col=c("#485392", "#204F15"), main="", cex.axis=1) (We used mosaicplot() command because apply() outputted a matrix. If the data is a “table” with more than one dimension, object, plot() command will output mosaic plot by default.) Contingency tables are easy enough to make even from numerical data. Suppose that we need to look on association between month and comfortable temperatures in New York. If the temperatures from 64 to 86°F (from 18 to 30°C) are comfort temperatures, then: Code $7$ (R): comfort <- ifelse(airquality$Temp < 64 | airquality$Temp > 86,"uncomfortable", "comfortable") Now we have two categorical variables, comfort and airquality$Month and can proceed to the table: Code $8$ (R): comf.month <- table(comfort, airquality$Month) comf.month Spine plot (Figure $2$) is good for this kind of table, it looks like a visually advanced “hybrid” between histogram, barplot and mosaic plot: Code $9$ (R): comf.month <- table(comfort, airquality$Month) spineplot(t(comf.month)) (Another variant to plot these two-dimensional tables is the dotchart(), please try it yourself. Dotchart is good also for 1-dimensional tables, but sometimes you might need to use the replacement Dotchart1() from asmisc.r—it keeps space for y axis label.) Table tests To find if there is an association in a table, one should compare two frequencies in each cell: predicted (theoretical) and observed. The serious difference is the sign of association. 
Null and alternative hypotheses pairs are typically: • Null: independent distribution of factors $\approx$ no pattern present $\approx$ no association present • Alternative: concerted distribution of factors $\approx$ pattern present $\approx$ there is an association In terms of p-values: Function chisq.test() runs a chi-squared test, one of two most frequently used tests for contingency tables. Two-sample chi-squared (or $\chi^2$) test requires either contingency table or two factors of the same length (to calculate table from them first). Now, what about the table of temperature comfort? assocplot(comf.month) shows some “suspicious” deviations. To check if these are statistically significant: Code $10$ (R): comf.month <- table(comfort, airquality$Month) chisq.test(comf.month) No, they are not associated. As before, there is nothing mysterious in these numbers. Everything is based on differences between expected and observed values: Code $11$ (R): comf.month <- table(comfort, airquality$Month) df <- 4 (expected <- outer(rowSums(comf.month), colSums(comf.month), "*")/sum(comf.month)) (chi.squared <- sum((comf.month - expected)^2/expected)) (p.value <- 1 - pchisq(chi.squared, df)) (Note how expected values calculated and how they look: expected (null) are equal proportions between both rows and columns. June and September have 30 days each, hence slight differences in values—but not in expected proportions.) Let us see now whether hair color and eye color from the 3-dimensional embedded HairEyeColor data are associated. First, we can examine associations graphically with assocplot() (Figure $3$): Code $12$ (R): (HE <- margin.table(HairEyeColor, 1:2)) assocplot(HE) (Instead of apply() used in the previous example, we employed margin.table() which essentially did the same job.) Association plot shows several things: the height of bars reflects the contribution of each cell into the total chi-squared, this allows, for example, to detect outliers. Square of rectangle corresponds with difference between observed and expected value, thus big tall rectangles indicate more association (to understand this better, compare this current plot with assocplot(comf.month)). Color and position of rectangle show the sign of the difference. Overall, it is likely that there is an association. Now we need to check this hypothesis with a test: Code $13$ (R): HE <- margin.table(HairEyeColor, 1:2) chisq.test(HE) The chi-squared test takes as null hypothesis “no pattern”, “no association”. Therefore, in our example, since we reject the null hypothesis, we find that the factors are associated. And what about survival on the “Titanic”? Code $14$ (R): titanic <- apply(Titanic, c(1, 4), sum) chisq.test(titanic) Yes (as reader might remember from the famous movie), survival was associated with being in the particular class. General chi-squared test shows only if asymmetry presents anywhere in the table. This means that if it is significant, then at least one group of passengers has the difference in survival. Like ANOVA, test does not show which one. Post hoc, or pairwise table test is able do show this: Code $15$ (R): titanic <- apply(Titanic, c(1, 4), sum) pairwise.Table2.test(titanic) # asmisc.r From the table of p-values, it is apparent that 3rd class and crew members were not different by survival rates. 
Note that post hoc tests apply p-value adjustment for multiple comparisons; practically, it means that because 7 tests were performed simultaneously, p-values were magnified with some method (here, Benjamini & Hochberg method is default). The file seedlings.txt contains results of an experiment examining germination of seeds infected with different types of fungi. In all, three fungi were tested, 20 seeds were tested for each fungus, and therefore with the controls 80 seeds were tested. Do the germination rates of the infected seeds differ? Let us examine now the more complicated example. A large group of epidemiologists gathered for a party. The next morning, many woke up with symptoms of food poisoning. Because they were epidemiologists, they decided to remember what each of them ate at the banquet, and thus determine what was the cause of the illness. The gathered data take the following format: Code $16$ (R): tox <- read.table("data/poisoning.txt", h=TRUE) head(tox) (We used head() here because the table is really long.) The first variable (ILL) tells whether the participant got sick or not (1 or 2 respectively); the remaining variables correspond to different foods. A simple glance at the data will not reveal anything, as the banquet had 45 participants and 13 different foods. Therefore, statistical methods must be used. Since the data are nominal, we will use contingency tables: Code $17$ (R): tox <- read.table("data/poisoning.txt", h=TRUE) tox.1 <- lapply(tox[, -1], function(.x) table(tox[, 1], .x)) tox.2 <- array(unlist(tox.1), dim=c(dim(tox.1[[1]]), length(tox.1))) # or simply c(2, 2, 13) dimnames(tox.2) <- list(c("ill","not ill"), c("took","didn't take"), names(tox.1)) (First, we ran ILL variable against every column and made a list of small contingency tables. Second, we converted list into 3-dimensional array, just like the Titanic data is, and also made sensible names of dimensions.) Now our data consists of small contingency tables which are elements of array: Code $18$ (R): tox <- read.table("data/poisoning.txt", h=TRUE) tox.1 <- lapply(tox[,-1], function(.x) table(tox[, 1], .x)) tox.2 <- array(unlist(tox.1), dim=c(dim(tox.1[[1]]), length(tox.1))) # or simply c(2, 2, 13) tox.2[,,"TOMATO"] (Note two commas which needed to tell R that we want the third dimension of the array.) Now we need a kind of stratified (with every type of food) table analysis. Since every element in the tox.2 is $2\times2$ table, fourfold plot will visualize this data well (Figure $4$): Code $19$ (R): tox <- read.table("data/poisoning.txt", h=TRUE) tox.1 <- lapply(tox[,-1], function(.x) table(tox[, 1], .x)) tox.2 <- array(unlist(tox.1), dim=c(dim(tox.1[[1]]), length(tox.1))) # or simply c(2, 2, 13) fourfoldplot(tox.2, conf.level=0, col=c("yellow","black")) (In fourfold plots, association corresponds with the difference between two pairs of diagonal sectors. Since we test multiple times, confidence rings are suppressed.) There are some apparent differences, especially for CAESAR, BREAD and TOMATO. To check their significance, we will at first apply chi-squared test multiple times and check out p-values: Code $20$ (R): tox <- read.table("data/poisoning.txt", h=TRUE) tox.1 <- lapply(tox[,-1], function(.x) table(tox[, 1], .x)) tox.2 <- array(unlist(tox.1), dim=c(dim(tox.1[[1]]), length(tox.1))) # or simply c(2, 2, 13) cbind(apply(tox.2, 3, function(.x) chisq.test(.x)$p.value)) (An apply() allows us not to write the code for the test 13 times. You may omit cbind() since it used only to make output prettier. 
There were multiple warnings, and we will return to them soon.) The result is that two foods exhibit significant associations with illness—Caesar salad and tomatoes. The culprit is identified! Almost. After all, it is unlikely that both dishes were contaminated. Now we must try to determine what was the main cause of the food poisoning. We will return to this subject later. Let us discuss one more detail. Above, we applied chi-squared test simultaneously several times. To account for multiple comparisons, we must adjust p-values, magnify them in accordance with the particular rule, for example, with widely known Bonferroni correction rule, or with (more reliable) Benjamini and Hochberg correction rule like in the following example: Code $21$ (R): p.adjust(c(0.005, 0.05, 0.1), method="BH") Now you know how to apply p-value corrections for multiple comparisons. Try to do this for our toxicity data. Maybe, it will help to identify the culprit? The special case of chi-squared test is the goodness-of-fit test, or G-test. We will apply it to the famous data, results of Gregor Mendel first experiment. In this experiment, he crossed pea plants which grew out of round and angled seeds. When he counted seeds from the first generation of hybrids, he found that among 7,324 seeds, 5,474 were round and 1850 were angled. Mendel guessed that true ratio in this and six other experiments is 3:1$^{[2]}$: Code $22$ (R): chisq.test(c(5474, 1850), p=c(3/4, 1/4)) Goodness-of-fit test uses the null that frequencies in the first argument (interpreted as one-dimensional contingency table) are not different from probabilities in the second argument. Therefore, 3:1 ratio is statistically supported. As you might note, it is not radically different from the proportion test explained in the previous chapter. Without p parameter, G-test simply checks if probabilities are equal. Let us check, for example, if numbers of species in supergroups of living organisms on Earth are equal: Code $23$ (R): sp <- read.table("data/species.txt", sep=" ") species <- sp[, 2] names(species) <- sp[, 1] dotchart(rev(sort(log10(species))), xlab="Decimal logarithm of species number", pch=19, pt.cex=1.2) chisq.test(species) Naturally, numbers of species are not equal between supergroups. Some of them like bacteria (supergroup Monera) have surprisingly low number of species, others like insects (supergroup Ecdysozoa)—really large number (Figure $5$). Chi-squared test works well when the number of cases per cell is more then 5. If there are less cases, R gives at least three workarounds. First, instead of p-value estimated from the theoretical distribution, there is a way to calculate it directly, with Fisher exact test. Tea drinker table contains less then 5 cases per cell so it is a good example: Code $24$ (R): tea <- read.table("data/tea.txt", h=TRUE) tea.t <- table(tea) fisher.test(tea.t) Fisher test checks the null if odds ratio is just one. Although in this case, calculation gives odds ratio $(3\mathbin{:}1)/(1\mathbin{:}3)=9$, there are only 8 observations, and confidence interval still includes one. Therefore, contrary to the first impression, the test does not support the idea that aforementioned woman is a good guesser. Fourfold plot (please check it yourself) gives the similar result: Code $25$ (R): tea <- read.table("data/tea.txt", h=TRUE) tea.t <- table(tea) fourfoldplot(tea.t) While there is apparent difference between diagonals, confidence rings significantly intersect. 
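To see where the $(3\mathbin{:}1)/(1\mathbin{:}3)=9$ odds ratio comes from, here is a sketch with a hypothetical $2\times2$ table of the same shape (the counts are assumed, as they depend on the contents of tea.txt):
# Hypothetical counts reproducing the 3:1 vs 1:3 pattern described above
(guess.t <- matrix(c(3, 1, 1, 3), nrow=2,
 dimnames=list(GUESS=c("milk", "tea"), REALITY=c("milk", "tea"))))
(guess.t[1, 1]*guess.t[2, 2])/(guess.t[1, 2]*guess.t[2, 1])  # sample odds ratio = 9
fisher.test(guess.t)  # note: reports the conditional estimate, which is not exactly 9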
The Fisher test is computationally intensive, so it is not recommended for a large number of cases. The second workaround is the Yates continuity correction which in R is the default for the chi-squared test on $2\times2$ tables. We now use data from the original Yates (1934)$^{[3]}$ publication, taken from a study of the influence of breast and artificial feeding on teeth formation: Code $26$ (R): ee <- read.table("data/teeth.txt", h=T) chisq.test(table(ee)) (Note the warning at the end.) The Yates correction is not the default for the summary.table() function: Code $27$ (R): ee <- read.table("data/teeth.txt", h=T) summary(table(ee)) # No correction in summary.table() (Note the different p-value: this is an effect of no correction. For all other kinds of tables (e.g., non-$2\times2$), results of chisq.test() and summary.table() should be similar.) The third way is to simulate the chi-squared test p-value with replication: Code $28$ (R): ee <- read.table("data/teeth.txt", h=T) chisq.test(table(ee), simulate.p.value=T) (Note that since this algorithm is based on a random procedure, p-values might differ.) How to calculate an effect size for the association of categorical variables? One of them is the odds ratio from the Fisher test (see above). There are also several different effect size measures changing from 0 (no association) to (theoretically) 1 (which is an extremely strong association). If you do not want to use external packages, one of them, the $\phi$ coefficient, is easy to calculate from the $\chi$-squared statistic. Code $29$ (R): tea <- read.table("data/tea.txt", h=TRUE) tea.t <- table(tea) sqrt(chisq.test(tea.t, correct=FALSE)$statistic/sum(tea.t)) The $\phi$ coefficient works only for two binary variables. If variables are not binary, there are Tschuprow’s T and Cramer’s V coefficients. Now it is better to use the external code from the asmisc.r distributed with this book: Code $30$ (R): (x <- margin.table(Titanic, 1:2)) VTcoeffs(x) # asmisc.r The R package vcd has the function assocstats() which calculates the odds ratio, $\phi$, Cramer’s V and several other effect measures. In the open repository, file cochlearia.txt contains measurements of morphological characters in several populations (locations) of scurvy-grass, Cochlearia. One of the characters, binary IS.CREEPING, reflects the plant life form: creeping or upright stem. Please check if numbers of creeping plants are different between locations, provide effect sizes and p-values. There are many table tests. For example, the test of proportions from the previous chapter could be easily extended to two samples and therefore could be used as a table test. There is also mcnemar.test() which is used to compare proportions when they belong to the same objects (paired proportions). You might want to check the help (and especially examples) in order to understand how they work. In the betula (see above) data, there are two binary characters: LOBES (position of lobes on the flower bract) and WINGS (the relative size of fruit wings). Please find if proportions of plants with 0 and 1 values of LOBES are different between location 1 and location 2. Are proportions of LOBES and WINGS values different in the whole dataset?
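Returning to effect sizes for tables: if asmisc.r is not at hand, Cramer’s V can be computed with base R only; a sketch, using the standard formula $V=\sqrt{\chi^2/(n(k-1))}$ where $k$ is the smaller table dimension:
# Base-R sketch of Cramer's V (an alternative to VTcoeffs() from asmisc.r)
CramerV <- function(tbl) {
 chi2 <- chisq.test(tbl, correct=FALSE)$statistic   # no Yates correction
 unname(sqrt(chi2/(sum(tbl)*(min(dim(tbl)) - 1))))
}
CramerV(margin.table(Titanic, 1:2))   # same table as in the VTcoeffs() example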
The typical sequence of procedures related to the analysis of tables is listed below: • Check the phenomenon of association: table(), xtabs() • Plot it first: mosaicplot(), spineplot(), assocplot() • Decide if the association is statistically significant: chisq.test(), fisher.test() • Measure how strong the association is: VTcoeffs() • Optionally, if there are more than two groups per case involved, run post hoc pairwise tests with the appropriate correction: pairwise.Table2.test() To conclude this “differences” chapter, here is Table $1$ which will guide the reader through the most frequently used types of analysis. Please note also the much more detailed Table 6.1.1 in the appendix.
                       Normal                                  Non-normal, measurement or ranked     Nominal
$\mathbf{=2}$ samples  Student’s test                          Wilcoxon test                         Chi-squared test (+ post-hoc test)
$\mathbf{>2}$ samples  ANOVA or one-way + some post hoc test   Kruskal-Wallis + some post hoc test
Table $1$: Methods, most frequently used to analyze differences and patterns. This is the simplified variant of Table 6.1.1.
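The measurement-data part of this logic can even be wrapped into a tiny helper function (a sketch only; the name is invented and it is not part of asmisc.r):
# Suggest and run a two-sample test following the logic of the table above
Suggest.test <- function(x, y) {
 normal <- shapiro.test(x)$p.value >= 0.05 && shapiro.test(y)$p.value >= 0.05
 if (normal) t.test(x, y) else wilcox.test(x, y)   # parametric vs. nonparametric
}
with(sleep, Suggest.test(extra[group == 1], extra[group == 2]))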
Two sample tests, effect sizes Answer to the sign test question. It is enough to write: Code $1$ (R): aa <- c(1, 2, 3, 4, 5, 6, 7, 8, 9) bb <- c(5, 5, 5, 5, 5, 5, 5, 5, 5) dif <- aa - bb pos.dif <- dif[dif > 0] prop.test(length(pos.dif), length(dif)) Here the sign test failed to find obvious differences because (like t-test and Wilcoxon test) it considers only central values. Answer to the ozone question. To know if our data are normally distributed, we can apply the Normality() function: Code $2$ (R): Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} ozone.month <- airquality[, c("Ozone","Month")] ozone.month.list <- unstack(ozone.month) sapply(ozone.month.list, Normality) # asmisc.r (Here we applied unstack() function which segregated our data by months.) Answer to the argon question. First, we need to check assumptions: Code $3$ (R): ar <- read.table("data/argon.txt") Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} sapply(unstack(ar, form=V2 ~ V1), Normality) It is clear that in this case, nonparametric test will work better: Code $4$ (R): ar <- read.table("data/argon.txt") wilcox.test(jitter(V2) ~ V1, data=ar) (We used jitter() to break ties. However, be careful and try to check if this random noise does not influence the p-value. Here, it does not.) And yes, boxplots (Figure 5.2.4) told the truth: there is a statistical difference between two set of numbers. Answer to the cashiers question. Check normality first: Code $5$ (R): Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} cashiers <- read.table("data/cashiers.txt", h=TRUE) head(cashiers) sapply(cashiers, Normality) # asmisc.r Now, we can compare means: Code $6$ (R): cashiers <- read.table("data/cashiers.txt", h=TRUE) (cashiers.m <- sapply(cashiers, mean)) It is likely that first cashier has generally bigger lines: Code $7$ (R): cashiers <- read.table("data/cashiers.txt", h=TRUE) with(cashiers, t.test(CASHIER.1, CASHIER.2, alt="greater")) The difference is not significant. Answer to the grades question. First, check the normality: Code $8$ (R): grades <- read.table("data/grades.txt") classes <- split(grades$V1, grades$V2) Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} sapply(classes, Normality) # asmisc.r (Function split() created three new variables in accordance with the grouping factor; it is similar to unstack() from previous answer but can accept groups of unequal size.) Check data (it is also possible to plot boxplots): Code $9$ (R): grades <- read.table("data/grades.txt") classes <- split(grades$V1, grades$V2) sapply(classes, median, na.rm=TRUE) It is likely that the first class has results similar between exams but in the first exam, the second group might have better grades. Since data is not normal, we will use nonparametric methods: Code $10$ (R): grades <- read.table("data/grades.txt") classes <- split(grades$V1, grades$V2) wilcox.test(classes$A1, classes$A2, paired=TRUE, conf.int=TRUE) wilcox.test(classes$B1, classes$A1, alt="greater", conf.int=TRUE) For the first class, we applied the paired test since grades in first and second exams belong to the same people. To see if differences between different classes exist, we used one-sided alternative hypothesis because we needed to understand not if the second class is different, but if it is better. 
As a result, grades of the first class are not significantly different between exams, but the second class performed significantly better than first. First confidence interval includes zero (as it should be in the case of no difference), and second is not of much use. Now effect sizes with suitable nonparametric Cliff’s Delta: Code $11$ (R): grades <- read.table("data/grades.txt") classes <- split(grades$V1, grades$V2) cliff.delta(classes$A1, classes$A2) cliff.delta(classes$B1, classes$A1) Therefore, results of the second class are only slightly better which could even be negligible since confidence interval includes 0. Answer to the question about ground elder leaves (Figure $1$). First, check data, load it and check the object: Code $12$ (R): aa <- read.table("http://ashipunov.info/shipunov/open/aegopodium.txt", h=TRUE) aa$SUN <- factor(aa$SUN, labels=c("shade","sun")) str(aa) (We also converted SUN variable into factor and supplied the proper labels.) Let us check the data for the normality and for the most different character (Figure $2$): Code $13$ (R): aa <- read.table("http://ashipunov.info/shipunov/open/aegopodium.txt", h=TRUE) Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} aggregate(aa[,-5], list(light=aa[,5]), Normality) # asmisc.r Linechart(aa[, 1:4], aa[, 5], xmarks=FALSE, lcolor=1, se.lwd=2, mad=TRUE) # asmisc.r TERM.L (length of the terminal leaflet, it is the rightmost one on Figure $1$), is likely most different between sun and shade. Since this character is normal, we will run more precise parametric test: Code $14$ (R): aa <- read.table("http://ashipunov.info/shipunov/open/aegopodium.txt", h=TRUE) t.test(LEAF.L ~ SUN, data=aa) To report t-test result, one needs to provide degrees of freedom, statistic and p-value, e.g., like “in a Welch test, t statistic is 14.85 on 63.69 degrees of freedom, p-value is close to zero, thus we rejected the null hypothesis”. Effect sizes are usually concerted with p-values but provide additional useful information about the magnitude of differences: Code $15$ (R): aa <- read.table("http://ashipunov.info/shipunov/open/aegopodium.txt", h=TRUE) library(effsize) cohen.d(LEAF.L ~ SUN, data=aa) summary(K(LEAF.L ~ SUN, data=aa)) Both Cohen’s d and Lyubishchev’s K (coefficient of divergence) are large. ANOVA Answer to the height and color questions. Yes on both questions: Code $16$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) summary(aov(HEIGHT ~ COLOR, data=hwc)) pairwise.t.test(hwc$HEIGHT, hwc$COLOR) There are significant differences between all three groups. Answer to the question about differences between cow-wheats (Figure $3$) from seven locations. 
Load the data and check its structure: Code $17$ (R): mm <- read.table("http://ashipunov.info/shipunov/open/melampyrum.txt", h=TRUE) str(mm) Plot it first (Figure $4$): Code $18$ (R): mm <- read.table("http://ashipunov.info/shipunov/open/melampyrum.txt", h=TRUE) old.par <- par(mfrow=c(2, 1), mai=c(0.5, 0.5, 0.1, 0.1)) boxplot(P.HEIGHT ~ LOC, data=mm, col=grey(0.8)) boxplot(LEAF.L ~ LOC, data=mm, col=rgb(173, 204, 90, max=255)) par(old.par) Check assumptions: Code $19$ (R): mm <- read.table("http://ashipunov.info/shipunov/open/melampyrum.txt", h=TRUE) Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} sapply(mm[, c(2, 5)], Normality) bartlett.test(P.HEIGHT ~ LOC, data=mm) Consequently, leaf length must be analyzed with non-parametric procedure, and plant height—with parametric which does not assume homogeneity of variance (one-way test): Code $20$ (R): mm <- read.table("http://ashipunov.info/shipunov/open/melampyrum.txt", h=TRUE) oneway.test(P.HEIGHT ~ LOC, data=mm) pairwise.t.test(mm$P.HEIGHT, mm$LOC) Now the leaf length: Code $21$ (R): mm <- read.table("http://ashipunov.info/shipunov/open/melampyrum.txt", h=TRUE) kruskal.test(LEAF.L ~ LOC, data=mm) pairwise.wilcox.test(mm$LEAF.L, mm$LOC) All in all, location pairs 2–4 and 4–6 are divergent statistically in both cases. This is visible also on boxplots (Figure $5$). There are more significant differences in plant heights, location #6, in particular, is quite outstanding. Contingency tables Answer to the seedlings question. Load data and check its structure: Code $22$ (R): pr <- read.table("data/seedlings.txt", h=TRUE) str(pr) Now, what we need is to examine the table because both variables only look like numbers; in fact, they are categorical. Dotchart (Figure $5$) is a good way to explore 2-dimensional table: Code $23$ (R): pr <- read.table("data/seedlings.txt", h=TRUE) (pr.t <- table(pr)) dotchart(t(pr.t), pch=19, gcolor=2) To explore possible associations visually, we employ vcd package: Code $24$ (R): pr <- read.table("data/seedlings.txt", h=TRUE) pr.t <- table(pr) library(vcd) assoc(pr.t, shade=TRUE, gp=shading_Friendly2,gp_args=list(interpolate=c(1, 1.8))) Both table output and vcd association plot (Figure $6$) suggest some asymmetry (especially for CID80) which is a sign of possible association. Let us check it numerically, with the chi-squared test: Code $25$ (R): pr <- read.table("data/seedlings.txt", h=TRUE) pr.t <- table(pr) chisq.test(pr.t, simulate=TRUE) Yes, there is an association between fungus (or their absence) and germination. How to know differences between particular samples? Here we need a post hoc test: Code $26$ (R): pr <- read.table("data/seedlings.txt", h=TRUE) pairwise.Table2.test(table(pr), exact=TRUE) (Exact Fisher test was used because some counts were really small.) It is now clear that germination patterns form two fungal infections, CID80 and CID105, are significantly different from germination in the control (CID0). Also, significant association was found in the every comparison between three infections; this means that all three germination patterns are statistically different. Finally, one fungus, CID63 produces germination pattern which is not statistically different from the control. Answer to the question about multiple comparisons of toxicity. Here we will go the slightly different way. 
Instead of using array, we will extract p-values right from the original data, and will avoid warnings with the exact test: Code $27$ (R): tox <- read.table("data/poisoning.txt", h=TRUE) tox.p.values <- apply(tox[,-1], 2, function(.x) fisher.test(table(tox[, 1], .x))$p.value) (We cannot use pairwise.Table2.test() from the previous answer since our comparisons have different structure. But we used exact test to avoid warnings related with small numbers of counts.) Now we can adjust p-values: Code $28$ (R): tox <- read.table("data/poisoning.txt", h=TRUE) tox.p.values <- apply(tox[,-1], 2, function(.x) fisher.test(table(tox[, 1], .x))$p.value) round(p.adjust(tox.p.values, method="BH"), 3) Well, now we can say that Caesar salad and tomatoes are statistically supported as culprits. But why table tests always show us two factors? This could be due to the interaction: in simple words, it means that people who took the salad, frequently took tomatoes with it. Answer to the scurvy-grass question. Check the data file, load and check result: Code $29$ (R): cc <-read.table("http://ashipunov.info/shipunov/open/cochlearia.txt", h=TRUE) cc$LOC <- factor(cc$LOC, labels=paste0("loc", levels(cc$LOC))) cc$IS.CREEPING <- factor(cc$IS.CREEPING, labels=c("upright", "creeping")) str(cc) (In addition, we converted LOC and IS.CREEPING to factors and provided new level labels.) Next step is the visual analysis (Figure $7$): Code $30$ (R): cc <-read.table("http://ashipunov.info/shipunov/open/cochlearia.txt", h=TRUE) s.cols <- colorRampPalette(c("white", "forestgreen"))(5)[3:4] spineplot(IS.CREEPING ~ LOC, data=cc, col=s.cols) Some locations look different. To analyze, we need contingency table: Code $31$ (R): cc <-read.table("http://ashipunov.info/shipunov/open/cochlearia.txt", h=TRUE) (cc.lc <- xtabs(~ LOC + IS.CREEPING, data=cc)) Now the test and effect size: Code $32$ (R): cc <-read.table("http://ashipunov.info/shipunov/open/cochlearia.txt", h=TRUE) cc.lc <- xtabs(~ LOC + IS.CREEPING, data=cc) chisq.test(cc.lc, simulate.p.value=TRUE) VTcoeffs(cc.lc)[2, ] # asmisc.r (Run pairwise.Table2.test(cc.lc) yourself to understand differences in details.) Yes, there is a large, statistically significant association between locality and life form of scurvy-grass. Answer to the question about equality of proportions of LOBES character in two birch localities. First, we need to select these two localities (1 and 2) and count proportions there. The shortest way is to use the table() function: Code $33$ (R): (betula.ll <- table(betula[betula$LOC < 3, c("LOC","LOBES")])) Spine plot (Figure $8$) helps to make differences in the table even more apparent: Code $34$ (R): betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) betula.ll <- table(betula[betula$LOC < 3, c("LOC","LOBES")]) birch.cols <- colorRampPalette(c("black", "forestgreen"))(5)[3:4] spineplot(betula.ll, col=birch.cols) (Please also note how to create two colors intermediate between black and dark green.) The most natural choice is prop.test() which is applicable directly to the table() output: Code $35$ (R): betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) betula.ll <- table(betula[betula$LOC < 3, c("LOC","LOBES")]) prop.test(betula.ll) Instead of proportion test, we can use Fisher exact: Code $36$ (R): betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) betula.ll <- table(betula[betula$LOC < 3, c("LOC","LOBES")]) fisher.test(betula.ll) ... 
or chi-squared with simulation (note that one cell has only 4 cases), or with default Yates’ correction: Code $37$ (R): betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) betula.ll <- table(betula[betula$LOC < 3, c("LOC","LOBES")]) chisq.test(betula.ll) All in all, yes, proportions of plants with different position of lobes are different between location 1 and 2. And what about effect size of this association? Code $38$ (R): betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) betula.ll <- table(betula[betula\$LOC < 3, c("LOC","LOBES")]) VTcoeffs(betula.ll)[2, ] # asmisc.r Answer to the question about proportion equality in the whole betula dataset. First, make table: Code $39$ (R): betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) (betula.lw <- table(betula[, c("LOBES","WINGS")])) There is no apparent asymmetry. Since betula.lw is $2\times2$ table, we can apply fourfold plot. It shows differences not only as different sizes of sectors, but also allows to check 95% confidence interval with marginal rings (Figure $9$): Code $40$ (R): betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) betula.lw <- table(betula[, c("LOBES","WINGS")]) fourfoldplot(betula.lw, col=birch.cols) Also not suggestive... Finally, we need to test the association, if any. Noe that samples are related. This is because LOBES and WINGS were measured on the same plants. Therefore, instead of the chi-squared or proportion test we should run McNemar’s test: Code $41$ (R): betula <- read.table("http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) betula.lw <- table(betula[, c("LOBES","WINGS")]) mcnemar.test(betula.lw) We conclude that proportions of two character states in each of characters are not statistically different.
Here we finally come to the world of statistical models, the study of not just differences but how exactly things are related. One of the most important features of models is an ability to predict results. Modeling expands into thousands of varieties; there are experiment planning, Bayesian methods, maximum likelihood, and many others—but we will limit ourselves to correlation, core linear regression, analysis of covariation, and an introduction to logistic models. 06: Two-Dimensional Data - Models To start with relationships, one needs first to find a correlation, i.e., to measure the extent and sign of the relation, and to prove if this is statistically reliable. Note that correlation does not reflect the nature of the relationship (Figure $1$). If we find a significant correlation between variables, this could mean that A depends on B, B depends on A, A and B depend on each other, or A and B depend on a third variable C but have no relation to each other. A famous example is the correlation between ice cream sales and home fires. It would be strange to suggest that eating ice cream causes people to start fires, or that experiencing fires causes people to buy ice cream. In fact, both of these parameters depend on air temperature$^{[1]}$. Numbers alone could be misleading, so there is a simple rule: plot it first. Plot it first The most striking example of relationships where numbers alone do not provide a reliable answer is Anscombe’s quartet, four sets of two variables which have almost identical means and standard deviations: Code $1$ (R): classic.desc <- function(.x) {c(mean=mean(.x, na.rm=TRUE),var=var(.x, na.rm=TRUE))} sapply(anscombe, classic.desc) (The anscombe data is embedded in R. To compact input and output, several tricks were used. Please find them yourself.) Linear model coefficients (see below) are also quite similar but if we plot these data, the picture (Figure $2$) is radically different from what is reflected in the numbers: Code $2$ (R): a.vars <- data.frame(i=c(1, 5), ii=c(2, 6), iii=c(3, 7), iv=c(4, 8)) oldpar <- par(mfrow=c(2, 2), mar=c(4, 4, 1, 1)) for (i in 1:4) { plot(anscombe[a.vars[, i]], pch=19, cex=1.2); abline(lm(anscombe[rev(a.vars[, i])]), lty=2) } (For aesthetic purposes, we put all four plots on the same figure. Note the for operator which produces a cycle repeating one sequence of commands four times. To know more, check ?"for".) To the credit of nonparametric and/or robust numerical methods, they are not so easy to deceive: Code $3$ (R): robust.desc <- function(.x) {c(median=median(.x, na.rm=TRUE),IQR=IQR(.x, na.rm=TRUE), mad=mad(.x, na.rm=TRUE))} sapply(anscombe, robust.desc) It is correct to guess that boxplots should also show the difference. Please try to plot them yourself. Correlation To measure the extent and sign of a linear relationship, we need to calculate the correlation coefficient. The absolute value of the correlation coefficient varies from 0 to 1. Zero means that the values of one variable are unconnected with the values of the other variable. A correlation coefficient of $1$ or $-1$ is evidence of a linear relationship between two variables. A positive value means the correlation is positive (the higher the value of one variable, the higher the value of the other), while negative values mean the correlation is negative (the higher the value of one, the lower of the other).
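Before turning to the built-in functions, the definition can be written out directly; a small sketch (the function name is ours, not from any package):
# Pearson's r from its definition: covariance of x and y scaled by both standard deviations
pearson.sketch <- function(x, y) {
 sum((x - mean(x))*(y - mean(y)))/((length(x) - 1)*sd(x)*sd(y))
}
with(trees, pearson.sketch(Girth, Height))   # compare with cor(Girth, Height) below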
It is easy to calculate correlation coefficient in R: Code $4$ (R): cor(5:15, 7:17) cor(5:15, c(7:16, 23)) (By default, R calculates the parametric Pearson correlation coefficient $r$.) In the simplest case, it is given two arguments (vectors of equal length). It can also be called with one argument if using a matrix or data frame. In this case, the function cor() calculates a correlation matrix, composed of correlation coefficients between all pairs of data columns. Code $5$ (R): cor(trees) As correlation is in fact the effect size of covariance, joint variation of two variables, to calculate it manually, one needs to know individual variances and variance of the difference between variables: Code $6$ (R): with(trees, cor(Girth, Height)) (v1 <- var(trees$Girth)) (v2 <- var(trees$Height)) (v12 <- var(trees$Girth - trees$Height)) (pearson.r <- (v1 + v2 - v12)/(2*sqrt(v1)*sqrt(v2))) Another way is to use cov() function which calculates covariance directly: Code $7$ (R): with(trees, cov(Girth, Height)/(sd(Girth)*sd(Height))) To interpret correlation coefficient values, we can use either symnum() or Topm() functions (see below), or Mag() together with apply(): Code $8$ (R): noquote(apply(cor(trees), 1:2, function(.x) Mag(.x, squared=FALSE))) # asmisc.r If the numbers of observations in the columns are unequal (some columns have missing data), the parameter use becomes important. Default is everything which returns NA whenever there are any missing values in a dataset. If the parameter use is set to complete.obs, observations with missing data are automatically excluded. Sometimes, missing data values are so dispersed that complete.obs will not leave much of it. In that last case, use pairwise.complete.obs which removes missing values pair by pair. Pearson’s parametric correlation coefficients characteristically fail with the Anscombe’s data: Code $9$ (R): diag(cor(anscombe[, 1:4], anscombe[, 5:8])) To overcome the problem, one can use Spearman’s $\rho$ (“rho”, or rank correlation coefficient) which is most frequently used nonparametric correlation coefficient: Code $10$ (R): with(trees, cor(Girth, Height, method="spearman")) diag(cor(anscombe[, 1:4], anscombe[, 5:8], method="s")) (Spearman’s correlation is definitely more robust!) The third kind of correlation coefficient in R is nonparametric Kendall’s $\tau$ (“tau”): Code $11$ (R): with(trees, cor(Girth, Height, method="k")) diag(cor(anscombe[, 1:4], anscombe[, 5:8], method="k")) It is often used to measure association between two ranked or binary variables, i.e. as an alternative to effect sizes of the association in contingency tables. How to check if correlation is statistically significant? As a null hypothesis, we could accept that correlation coefficient is equal to zero (no correlation). If the null is rejected, then correlation is significant: Code $12$ (R): with(trees, cor.test(Girth, Height)) The logic of cor.test() is the same as in tests before (Table 5.1.1, Figure 5.1.1). In terms of p-value: The probability of obtaining the test statistic (correlation coefficient), given the initial assumption of zero correlation between the data is very low—about 0.3%. We would reject H$_0$ and therefore accept an alternative hypothesis that correlation between variables is present. Please note the confidence interval, it indicates here that the true value of the coefficient lies between 0.2 and 0.7. with 95% probability. It is not always easy to read the big correlation table, like in the following example of longley macroeconomic data. 
Fortunately, there are several workarounds, for example, the symnum() function which replaces numbers with letters or symbols in accordance to their value: Code $13$ (R): symnum(cor(longley)) The second way is to represent the correlation matrix with a plot. For example, we may use the heatmap: split everything from $-1$ to $+1$ into equal intervals, assign the color for each interval and show these colors (Figure $3$): Code $14$ (R): cor.l <- cor(longley) dimnames(cor.l) <- lapply(dimnames(cor.l), abbreviate) rgb.palette <- colorRampPalette(c("cadetblue", "khaki")) palette.l <- rgb.palette(length(unique(abs(cor.l)))) library(lattice) levelplot(abs(cor.l), col.regions=palette.l, xlab="", ylab="") (We shortened here long names with the abbreviate() command.) The other interesting way of representing correlations are correlation ellipses (from ellipse package). In that case, correlation coefficients are shown as variously compressed ellipses; when coefficient is close to $-1$ or $+1$, ellipse is more narrow (Figure $4$). The slope of ellipse represents the sign of correlation (negative or positive): Code $15$ (R): library(ellipse) colors <- cm.colors(7) plotcorr(cor.l, type="lower", col=colors[5*cor.l + 2]) Several useful ways to visualize and analyze correlations present in the asmisc.r file supplied with this book: Code $16$ (R): tox <- read.table("data/poisoning.txt", h=TRUE) tox.cor <- cor(tox, method="k") Pleiad(tox.cor, corr=TRUE, lcol="black") # asmisc.r We calculated here Kendall’s correlation coefficient for the binary toxicity data to make the picture used on the title page. Pleiad() not only showed (Figure $5$) that illness is associated with tomato and Caesar salad, but also found two other correlation pleiads: coffee/rice and crab dip/crisps. (By the way, pleiads show one more application of R: analysis of networks.) Function Cor() outputs correlation matrix together with asterisks for the significant correlation tests: Code $17$ (R): tox <- read.table("data/poisoning.txt", h=TRUE) Cor(tox, method="kendall", dec=2) # asmisc.r Finally, function Topm() shows largest correlations by rows: Code $18$ (R): tox <- read.table("data/poisoning.txt", h=TRUE) tox.cor <- cor(tox, method="k") Topm(tox.cor, level=0.4) # asmisc.r Data file traits.txt contains results of the survey where most genetically apparent human phenotype characters were recorded from many individuals. Explanation of these characters are in trait_c.txt file. Please analyze this data with correlation methods.
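To make the behavior of the use= parameter described above more concrete, here is a tiny sketch with invented data containing missing values:
```r
## Toy data with NAs (invented) to illustrate the use= parameter of cor()
dd <- data.frame(a=c(1, 2, 3, 4, NA), b=c(2, 4, 6, NA, 10), c=1:5)
cor(dd)                              # default "everything": NAs propagate
cor(dd, use="complete.obs")          # drop every row containing any NA
cor(dd, use="pairwise.complete.obs") # drop NAs separately for each pair
```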
Single line Analysis of correlation allows to determine if variables are dependent and calculate the strength and sign of the dependence. However, if the goal is to understand the other features of dependence (like direction), and, even more important, predict (extrapolate) results (Figure $1$) we need another kind of analysis, the analysis of regression. It gives much more information on the relationship, but requires us to assign variables beforehand to one of two categories: influence (predictor) or response. This approach is rooted in the nature of the data: for example, we may use air temperature to predict ice cream sales, but hardly the other way around. The most well-known example is a simple linear regression: $\mbox{response} = \mbox{intercept} + \mbox{slope}\times\mbox{influence}$ or, in R formula language, even simpler: response ~ influence That model estimates the average value of response if the value of influence is known (note that both effect and influence are measurement variables). The differences between observed and predicted values are model errors (or, better, residuals). The goal is to minimize residuals (Figure $3$); since residuals could be both positive and negative, it is typically done via squared values, this method is called least squares. Ideally, residuals should have the normal distribution with zero mean and constant variance which is not dependent on effect and influence. In that case, residuals are homogeneous. In other cases, residuals could show heterogeneity. And if there is the dependence between residuals and influence, then most likely the overall model should be non-linear and therefore requires the other kind of analysis. Linear regression model is based on the several assumptions: • Linearity of the relationship. It means that for a unit change in influence, there should always be a corresponding change in effect. Units of change in response variable should retain the same size and sign throughout the range of influence. • Normality of residuals. Please note that normality of data is not an assumption! However, if you want to get rid of most other assumptions, you might want to use other regression methods like LOESS. • Homoscedasticity of residuals. Variability within residuals should remain constant across the whole range of influence, or else we could not predict the effect reliably. The null hypothesis states that nothing in the variability of response is explained by the model. Numerically, R-squared coefficient is the the degree to which the variability of response is explained by the model, therefore null hypothesis is that R-squared equals zero, this approach uses F-statistics (Fisher’s statistics), like in ANOVA. There are also checks of additional null hypotheses that both intercept and slope are zeros. If all three p-values are smaller than the level of significance (0.05), the whole model is statistically significant. Here is an example. The embedded women data contains observations on the height and weight of 15 women. We will try to understand the dependence between weight and height, graphically at first (Figure $2$): Code $1$ (R): women.lm <- lm(weight ~ height, data=women) plot(weight ~ height, data=women, xlab="Height, in", ylab="Weight, lb") grid() abline(women.lm, col="red") Cladd(women.lm, data=women) # asmisc.r legend("bottomright", col=2:1, lty=1:2, legend=c("linear relationship", "95% confidence bands")) (Here we used function Cladd() which adds confidence bands to the plot$^{[1]}$.) 
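Cladd() is a helper from the asmisc.r file; if it is not available, roughly the same confidence bands can be drawn with base R only. A sketch (the grid of 100 heights is arbitrary):
```r
## Base-R sketch of 95% confidence bands, an alternative to Cladd()
women.lm <- lm(weight ~ height, data=women)
new.h <- data.frame(height=seq(min(women$height), max(women$height),
 length.out=100))
bands <- predict(women.lm, newdata=new.h, interval="confidence")
plot(weight ~ height, data=women, xlab="Height, in", ylab="Weight, lb")
abline(women.lm, col="red")
lines(new.h$height, bands[, "lwr"], lty=2)  # lower confidence band
lines(new.h$height, bands[, "upr"], lty=2)  # upper confidence band
```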
Let us visualize residuals better (Figure $3$): Code $2$ (R): women.lm <- lm(weight ~ height, data=women) plot(weight ~ height, data=women, pch=19, col="red") abline(women.lm) with(women, segments(height, fitted(women.lm), height, weight, col="red")) To look on the results of model analysis, we can employ summary(): Code $3$ (R): women.lm <- lm(weight ~ height, data=women) summary(women.lm) This long output is better to read from bottom to the top. We can say that: • The significance of relation (reflected with R-squared) is high from the statistical point of view: F-statistics is $1433$ with overall p-value: 1.091e-14. • The R-squared (use Adjusted R-squared because this is better suited for the model) is really big, $R^2 = 0.9903$. This means that almost all variation in response variable (weight) is explained by predictor (height). R-squared is related with the coefficient of correlation and might be used as the measure of effect size. Since it is squared, high values start from 0.25: Code $4$ (R): Mag(0.9903) # asmisc.r • Both coefficients are statistically different from zero, this might be seen via “stars” (like ***), and also via actual p-values Pr(>|t|): 1.71e-09 for intercept, and 1.09e-14 for height, which represents the slope. To calculate slope in degrees, one might run: Code $5$ (R): women.lm <- lm(weight ~ height, data=women) (atan(women.lm$coefficients[[2]]) * 180)/pi • Overall, our model is: Weight (estimated) = -87.51667 + 3.45 * Height, so if the height grows by 4 inches, the weight will grow on approximately 14 pounds. • The maximal positive residual is $3.1167$ lb, maximal negative is $-1.7333$ lb. • Half of residuals are quite close to the median (within approximately $\pm1$ interval). On the first glance, the model summary looks fine. However, before making any conclusions, we must also check assumptions of the model. The command plot(women.lm) returns four consecutive plots: • First plot, residuals vs. fitted values, is most important. Ideally, it should show no structure (uniform variation and no trend); this satisfies both linearity and homoscedascicity assumptions. • Unfortunately, women.lm model has an obvious trend which indicates non-linearity. Residuals are positive when fitted values are small, negative for fitted values in the mid-range, and positive again for large fitted values. Clearly, the first assumption of the linear regression analysis is violated. Code $6$ (R): oldpar <- par(mfrow=c(3, 3)) ## Uniform variation and no trend: for (i in 1:9) plot(1:50, rnorm(50), xlab="Fitted", ylab="Residuals") title("'Good' Residuals vs. Fitted", outer=TRUE, line=-2) ## Non-uniform variation plus trend: for (i in 1:9) plot(1:50, ((1:50)*rnorm(50) + 50:1), xlab="Fitted",ylab="Residuals") title("'Bad' Residuals vs. Fitted", outer=TRUE, line=-2) par(oldpar) • To understand residuals vs. fitted plots better, please run the following code yourself and look on the resulted plots: • On the the next plot, standardized residuals do not follow the normal line perfectly (see the explanation of the QQ plot in the previous chapter), but they are “good enough”. 
To review different variants of these plots, run the following code yourself: Code $7$ (R): oldpar <- par(mfrow=c(3, 3)) for (i in 1:9) { aa <- rnorm(50); qqnorm(aa, main=""); qqline(aa) } title("'Good' normality QQ plots", outer=TRUE, line=-2) for (i in 1:9) { aa <- rnorm(50)^2; qqnorm(aa, main=""); qqline(aa) } title("'Bad' normality QQ plots", outer=TRUE, line=-2) par(oldpar) • Test for the normality should also work: Code $8$ (R): Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} Normality(women.lm$residuals) # asmisc.r • The third, Scale-Location plot, is similar to the residuals vs. fitted, but instead of “raw” residuals it uses the square roots of their standardized values. It is also used to reveal trends in the magnitudes of residuals. In a good model, these values should be more or less randomly distributed. • Finally, the last plot demonstrates which values exert most influence over the final shape of the model. Here the two values with most leverage are the first and the last measurements, those, in fact, that stay furtherest away from linearity. (If you need to know more about summary and plotting of linear models, check help pages with commands ?summary.lm and ?plot.lm. By the way, as ANOVA has many similarities to the linear model analysis, in R you can run same diagnostic plots for any ANOVA model.) Now it is clear that our first linear model does not work well for our data which is likely non-linear. While there are many non-linear regression methods, let us modify it first in a more simple way to introduce non-linearity. One of simple ways is to add the cubed term, because weight relates with volume, and volume is a cube of linear sizes: Code $9$ (R): women.lm3 <- lm(weight ~ height + I(height^3), data=women) summary(women.lm3) plot(women.lm3, which=1) # just residuals vs. fitted (Function I() was used to tell R that height^3 is arithmetical operation and not the part of model formula.) The quick look on the residuals vs. fitted plot (Figure $4$) shows that this second model fits much better! Confidence bands and predicted line are also look more appropriate : You may want also to see the confidence intervals for linear model parameters. For that purpose, use confint(women.lm). Another example is from egg data studied graphically in the second chapter (Figure 2.9.1). Does the length of the egg linearly relate with with of the egg? Code $10$ (R): eggs <- read.table("data/eggs.txt") eggs.lm <- lm(V2 ~ V1, data=eggs) We can analyze the assumptions first: Code $11$ (R): eggs <- read.table("data/eggs.txt") eggs.lm <- lm(V2 ~ V1, data=eggs) plot(eggs.lm) The most important, residuals vs. fitted is not perfect but could be considered as “good enough” (please check it yourself): there is no obvious trend, and residuals seem to be more or less equally spread (homoscedasticity is fulfilled). Distribution of residuals is close to normal. Now we can interpret the model summary: Code $12$ (R): eggs <- read.table("data/eggs.txt") eggs.lm <- lm(V2 ~ V1, data=eggs) summary(eggs.lm) Significance of the slope means that the line is definitely slanted (this is actually what is called “relation” in common language). However, intercept is not significantly different from zero: Code $13$ (R): eggs <- read.table("data/eggs.txt") eggs.lm <- lm(V2 ~ V1, data=eggs) confint(eggs.lm) (Confidence interval for intercept includes zero.) 
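Since the confidence interval for the intercept includes zero, one optional experiment (not performed in the text) is to refit the egg model without an intercept; in R formulas this is done with "- 1" (or "0 +"). Note that R reports R-squared differently for models without an intercept, so compare such models with care. A sketch:
```r
## Optional sketch (not from the text): egg model without an intercept
eggs <- read.table("data/eggs.txt")
eggs.lm0 <- lm(V2 ~ V1 - 1, data=eggs)  # "- 1" removes the intercept
summary(eggs.lm0)
AIC(lm(V2 ~ V1, data=eggs), eggs.lm0)   # compare with the original model
```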
To check the magnitude of effect size, one can use: Code $14$ (R): eggs <- read.table("data/eggs.txt") eggs.lm <- lm(V2 ~ V1, data=eggs) Mag(summary(eggs.lm)$adj.r.squared) This is a really large effect. Third example is based on a simple idea to check if the success in multiple choice test depends on time spent with it. Data presents in exams.txt file which contains results of two multiple choice tests in a large class: Code $15$ (R): ee <- read.table("http://ashipunov.info/data/exams.txt", h=T) str(ee) First variable is the number of test, two others are order of finishing the work, and resulted number of points (out of 50). We assume here that the order reflects the time spent on test. Select one of two exams: Code $16$ (R): ee <- read.table("http://ashipunov.info/data/exams.txt", h=T) ee3 <- ee[ee$EXAM.N == 3,] ... and plot it first (please check this plot yourself): Code $17$ (R): ee <- read.table("http://ashipunov.info/data/exams.txt", h=T) ee3 <- ee[ee$EXAM.N == 3,] plot(POINTS.50 ~ ORDER, data=ee3) Well, no visible relation occurs. Now we approach it inferentially: Code $18$ (R): ee <- read.table("http://ashipunov.info/data/exams.txt", h=T) ee3 <- ee[ee$EXAM.N == 3,] ee3.lm <- lm(POINTS.50 ~ ORDER, data=ee3) summary(ee3.lm) As usual, this output is read from bottom to the top. First, statistical significance of the relation is absent, and relation (adjusted R-squared) itself is almost zero. Even if intercept is significant, slope is not and therefore could easily be zero. There is no relation between time spent and result of the test. To double check if the linear model approach was at all applicable in this case, run diagnostic plots yourself: Code $19$ (R): ee <- read.table("http://ashipunov.info/data/exams.txt", h=T) ee3 <- ee[ee$EXAM.N == 3,] plot(ee3.lm) And as the final touch, try the regression line and confidence bands: Code $20$ (R): ee <- read.table("http://ashipunov.info/data/exams.txt", h=T) ee3 <- ee[ee$EXAM.N == 3,] abline(ee3.lm) Cladd(ee3.lm, data=ee3) Almost horizontal—no relation. It is also interesting to check if the other exam went the same way. Please find out yourself. ). Please find which morphological measurement characters are most correlated, and check the linear model of their relationships. ) plant. Please find which pair of morphological characters is most correlated and analyze the linear model which includes these characters. Also, check if length of leaf is different between the three biggest populations of sundew. As the linear models and ANOVA have many in common, there is no problem in the analysis of multiple groups with the default linear regression methods. Consider our ANOVA data: Code $21$ (R): hwc <- read.table("data/hwc.txt", h=TRUE) newcolor <- relevel(hwc$COLOR, "brown") summary(lm(cbind(WEIGHT, HEIGHT) ~ newcolor, data=hwc)) This example shows few additional “tricks”. First, this is how to analyze several response variables at once. This is applicable also to aov()try it yourself. Next, it shows how to re-level factor putting one of proximal levels first. That helps to compare coefficients. In our case, it shows that blonds do not differ from browns by weight. Note that “intercepts” here have no clear relation with plotting linear relationships. It is also easy to calculate the effect size because R-squared is the effect size. Last but not least, please check assumptions of the linear model with plot(lm(...)). At the moment in R, this works only for singular response. 
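As noted above, diagnostic plots currently work only for a single response. A minimal workaround sketch is to refit each response separately; it reuses the hwc data and the releveled factor from the code above:
```r
## Sketch: refit each response separately to obtain diagnostic plots,
## since plot(lm(...)) works only for a single response
hwc <- read.table("data/hwc.txt", h=TRUE)
newcolor <- relevel(hwc$COLOR, "brown")
oldpar <- par(mfrow=c(1, 2))
plot(lm(WEIGHT ~ newcolor, data=hwc), which=1)  # residuals vs. fitted, weight
plot(lm(HEIGHT ~ newcolor, data=hwc), which=1)  # residuals vs. fitted, height
par(oldpar)
```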
Is there the linear relation between the weight and height in our ANOVA hwc data? Many lines Sometimes, there is a need to analyze not just linear relationships between variables, but to answer second order question: compare several regression lines. In formula language, this is described as response ~ influence * factor where factor is a categorical variable responsible for the distinction between regression lines, and star (*) indicates that we are simultaneously checking (1) response from influence (predictor), (2) response from factor and (3) response from interaction between influence and factor. This kind of analysis is frequently called ANCOVA, “ANalysis of COVAriation”. The ANCOVA will check if there is any difference between intercept and slope of the first regression line and intercepts and slopes of all other regression lines where each line corresponds with one factor level. Let us start from the example borrowed from M.J. Crawley’s “R Book”. 40 plants were treated in two groups: grazed (in first two weeks of the cultivation) and not grazed. Rootstock diameter was also measured. At the end of season, dry fruit production was measured from both groups. First, we analyze the data graphically: Code $22$ (R): ipo <- read.table("data/ipomopsis.txt", h=TRUE) with(ipo, plot(Root, Fruit, pch=as.numeric(Grazing))) abline(lm(Fruit ~ Root, data=subset(ipo, Grazing=="Grazed"))) abline(lm(Fruit ~ Root, data=subset(ipo, Grazing=="Ungrazed")), lty=2) legend("topleft", lty=1:2, legend=c("Grazed","Ungrazed")) As it is seen on the plot (Figure $5$), regression lines for grazed and non-grazed plants are likely different. Now to the ANCOVA model: Code $23$ (R): ipo <- read.table("data/ipomopsis.txt", h=TRUE) ipo.lm <- lm(Fruit ~ Root * Grazing, data=ipo) summary(ipo.lm) Model output is similar to the linear model but one more term is present. This term indicated interaction which labeled with colon. Since Grazing factor has two level arranged alphabetically, first level (Grazed) used as default and therefore (Intercept) belongs to grazed plants group. The intercept of non-grazed group is labeled as GrazingUngrazed. In fact, this is not even an intercept but difference between intercept of non-grazed group and intercept of grazed group. Analogously, slope for grazed is labeled as Root, and difference between slopes of non-grazed and grazed labeled as Root:GrazingUngrazed. This difference is interaction, or how grazing affects the shape of relation between rootstock size and fruit weight. To convert this output into regression formulas, some calculation will be needed: Fruit = -125.174 + 23.24 * Root (grazed) Fruit = (-125.174 + 30.806) + (23.24 + 0.756) * Root (non-grazed) Note that difference between slopes is not significant. Therefore, interaction could be ignored. Let us check if this is true: Code $24$ (R): ipo <- read.table("data/ipomopsis.txt", h=TRUE) ipo.lm <- lm(Fruit ~ Root * Grazing, data=ipo) ipo.lm2 <- update(ipo.lm, . ~ . - Root:Grazing) summary(ipo.lm2) AIC(ipo.lm) AIC(ipo.lm2) First, we updated our first model by removing the interaction term. This is the additive model. Then summary() told us that all coefficients are now significant (check its output yourself). This is definitely better. Finally, we employed AIC (Akaike’s Information Criterion). AIC came from the theory of information and typically reflects the entropy, in other words, adequacy of the model. The smaller is AIC, the better is a model. Then the second model is the unmistakable winner. 
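Besides AIC, nested models can be compared with an F-test via anova(); for a single interaction term this should agree with the t-test of that coefficient discussed above. A sketch:
```r
## Sketch: F-test comparison of the additive and the full (interaction) model
ipo <- read.table("data/ipomopsis.txt", h=TRUE)
ipo.lm <- lm(Fruit ~ Root * Grazing, data=ipo)
ipo.lm2 <- update(ipo.lm, . ~ . - Root:Grazing)
anova(ipo.lm2, ipo.lm)  # tests whether the interaction term is needed
```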
By the way, we could specify the same additive model using plus sign instead of star in the model formula. What will the AIC tell about our previous example, women data models? Code $25$ (R): AIC(women.lm) AIC(women.lm3) Again, the second model (with the cubed term) is better. It is well known that in the analysis of voting results, dependence between attendance and the number of people voted for the particular candidate, plays a great role. It is possible, for example, to elucidate if elections were falsified. Here we will use the elections.txt data file containing voting results for three different Russian parties in more than 100 districts: Code $26$ (R): elections <- read.table("data/elections.txt", h=TRUE) str(elections) To simplify typing, we will attach() the data frame (if you do the same, do not forget to detach() it at the end) and calculate proportions of voters and the overall attendance: Code $27$ (R): elections <- read.table("data/elections.txt", h=TRUE) attach(elections) PROP <- cbind(CAND.1, CAND.2, CAND.3) / VOTER ATTEN <- (VALID + INVALID) / VOTER Now we will look on the dependence between attendance and voting graphically (Figure $6$): Code $28$ (R): elections <- read.table("data/elections.txt", h=TRUE) ATTEN <- (VALID + INVALID) / VOTER lm.1 <- lm(CAND.1/VOTER ~ ATTEN) lm.2 <- lm(CAND.2/VOTER ~ ATTEN) lm.3 <- lm(CAND.3/VOTER ~ ATTEN) plot(CAND.3/VOTER ~ ATTEN, xlim=c(0, 1), ylim=c(0, 1), xlab="Attendance", ylab="Percent voted for the candidate") points(CAND.1/VOTER ~ ATTEN, pch=2) points(CAND.2/VOTER ~ ATTEN, pch=3) abline(lm.3) abline(lm.2, lty=2) abline(lm.1, lty=3) legend("topleft", lty=c(3, 2, 1), legend=c("Party 1","Party 2","Party 3")) detach(elections) So the third party had a voting process which was suspiciously different from voting processes for two other parties. It was clear even from the graphical analysis but we might want to test it inferentially, using ANCOVA: Code $29$ (R): elections <- read.table("data/elections.txt", h=TRUE) PROP <- cbind(CAND.1, CAND.2, CAND.3) / VOTER ATTEN <- (VALID + INVALID) / VOTER elections2 <- cbind(ATTEN, stack(data.frame(PROP))) names(elections2) <- c("atten","percn","cand") str(elections2) (Here we created and checked the new data frame. In elections2, all variables are now stack()’ed in two columns, and the third column contains the party code.) Code $30$ (R): elections <- read.table("data/elections.txt", h=TRUE) PROP <- cbind(CAND.1, CAND.2, CAND.3) / VOTER ATTEN <- (VALID + INVALID) / VOTER elections2 <- cbind(ATTEN, stack(data.frame(PROP))) ancova.v <- lm(percn ~ atten * cand, data=elections2) summary(ancova.v) Here (Intercept) belongs specifically to the model for first party. Its p-value indicates if it differs significantly from zero. Second coefficient, atten, belongs to the continuous predictor, attendance. It is not an intercept but slope of a regression. It is also compared to zero. Next four rows represent differences from the first party, two for intercepts and two for slopes (this is the traditional way to structure output in R). Last two items represent interactions. We were most interested if there is an interaction between attendance and voting for the third party, this interaction is common in case of falsifications and our results support this idea. (Figure $7$, phenomenon when in each population there are mostly two types of plants: with flowers bearing long stamens and short style, and with flowers bearing long style and short stamens. It was proved that heterostyly helps in pollination. 
Please check if the linear dependencies between lengths of style and stamens are different in these two species. Find also which model is better, full (multiplicative, with interactions) or reduced (additive, without interactions). Figure $7$ Heterostyly in primroses: flowers from the different plants of one population. More then one way, again Armed with the knowledge about AIC, multiplicative and additive models, we can return now to the ANOVA two-way layouts, briefly mentioned before. Consider the following example: Code $31$ (R): ToothGrowth.1 <- with(ToothGrowth, data.frame(len, supp, fdose=factor(dose))) str(ToothGrowth.1) Normality <- function(a) {ifelse(shapiro.test(a)$p.value < 0.05, "NOT NORMAL", "NORMAL")} Normality(ToothGrowth$len) with(ToothGrowth, fligner.test(split(len, list(dose, supp)))) (To start, we converted dose into factor. Otherwise, our model will be ANCOVA instead of ANOVA.) Assumptions met, now the core analysis: Code $32$ (R): ToothGrowth.1 <- with(ToothGrowth, data.frame(len, supp, fdose=factor(dose))) summary(aov(len ~ supp * fdose, data=ToothGrowth.1)) summary(aov(len ~ supp + fdose, data=ToothGrowth.1)) AIC(aov(len ~ supp + fdose, data=ToothGrowth.1)) Now we see what was already visible on the interaction plot (Figure 5.3.4: model with interactions is better, and significant are all three terms: dose, supplement, and interaction between them. Effect size is really high: Code $33$ (R): ToothGrowth.1 <- with(ToothGrowth, data.frame(len, supp, fdose=factor(dose))) Eta2 <- function(aov){summary.lm(aov)\$r.squared} Eta2(aov(len ~ supp * fdose, data=ToothGrowth.1)) Post hoc tests are typically more dangerous in two-way analysis, simply because there are much more comparisons. However, it is possible to run TukeyHSD(): Code $34$ (R): ToothGrowth.1 <- with(ToothGrowth, data.frame(len, supp, fdose=factor(dose))) TukeyHSD(aov(len ~ supp * fdose, data=ToothGrowth.1)) The rest of comparisons is here omitted, but TukeyHSD() has plotting method allowing to plot the single or last element (Figure 6.3.1): Code $35$ (R): ToothGrowth.1 <- with(ToothGrowth, data.frame(len, supp, fdose=factor(dose))) TukeyHSD(aov(len ~ supp * fdose, data=ToothGrowth.1)) plot(TukeyHSD(aov(len ~ supp * fdose, data=ToothGrowth.1)), las=1)
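The interaction which the model confirms can also be drawn with base R. Here is a sketch using interaction.plot(); the axis labels are ours:
```r
## Sketch: visualize the dose x supplement interaction with base R
ToothGrowth.1 <- with(ToothGrowth, data.frame(len, supp, fdose=factor(dose)))
with(ToothGrowth.1, interaction.plot(fdose, supp, len,
 xlab="Dose (as factor)", ylab="Mean tooth length", trace.label="Supplement"))
```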
There are a few analytical methods working with categorical variables. Practically, we are restricted here with proportion tests and chi-squared. However, the goal sometimes is more complicated as we may want to check not only the presence of the correspondence but also its features—something like regression analysis but for the nominal data. In formula language, this might be described as factor ~ influence Below is an example using data from hiring interviews. Programmers with different months of professional experience were asked to write a program on paper. Then the program was entered into the memory of a computer and if it worked, the case was marked with “S” (success) and “F” (failure) otherwise: Code \(1\) (R): ```l <- read.table("data/logit.txt") head(l)``` It is more or less obvious more experienced programmers are more successful. This is even possible to check visually, with cdplot() (Figure \(2\)): Code \(2\) (R): ```l <- read.table("data/logit.txt") cdplot(V2 ~ V1, data=l, xlab="Experience, months", ylab="Success")``` But is it possible to determine numerically the dependence between years of experience and programming success? Contingency tables is not a good solution because V1 is a measurement variable. Linear regression will not work because the response here is a factor. But there is a solution. We can research the model where the response is not a success/failure but the probability of success (which, as all probabilities is a measurement variable changing from 0 to 1): Code \(3\) (R): ```l <- read.table("data/logit.txt") l.logit <- glm(V2 ~ V1, family=binomial, data=l) summary(l.logit)``` Not going deeply into details, we can see here that both parameters of the regression are significant since p-values are small. This is enough to say that the experience influences the programming success. The file seeing.txt came from the results of the following experiment. People were demonstrated some objects for the short time, and later they were asked to describe these objects. First column of the data file contains the person ID, second—the number of object (five objects were shown to each person in sequence) and the third column is the success/failure of description (in binary 0/1 format). Is there dependence between the object number and the success? The output of summary.glm() contains the AIC value. It is accepted that smaller AIC corresponds with the more optimal model. To show it, we will return to the intoxication example from the previous chapter. Tomatoes or salad? Code \(4\) (R): ```tox <- read.table("data/poisoning.txt", h=TRUE) tox.logit <- glm(formula=I(2-ILL) ~ CAESAR + TOMATO, family=binomial, data=tox) tox.logit2 <- update(tox.logit, . ~ . - TOMATO) tox.logit3 <- update(tox.logit, . ~ . - CAESAR)``` At first, we created the logistic regression model. Since it “needs” the binary response, we subtracted the ILL value from 2 so the illness became encoded as 0 and no illness as 1. I() function was used to avoid the subtraction to be interpret as a model formula, and our minus symbol had only arithmetical meaning. On the next step, we used update() to modify the starting model removing tomatoes, then we removed the salad (dots mean that we use all initial influences and responses). Now to the AIC: Code \(5\) (R): ```tox <- read.table("data/poisoning.txt", h=TRUE) tox.logit <- glm(formula=I(2-ILL) ~ CAESAR + TOMATO, family=binomial, data=tox) tox.logit2 <- update(tox.logit, . ~ . - TOMATO) tox.logit3 <- update(tox.logit, . ~ . 
- CAESAR) tox.logit\$aic tox.logit2\$aic tox.logit3\$aic``` The model without tomatoes but with salad is the most optimal. It means that the poisoning agent was most likely the Caesar salad alone. Now, for the sake of completeness, readers might ask whether there are methods similar to logistic regression but with a response of many factor levels instead of two, or with a ranked (ordinal) response. (As a reminder, a measurement variable as a response is a property of linear regression and similar methods.) These methods are called multinomial regression and ordinal regression, and appropriate functions exist in several R packages, e.g., nnet, rms and ordinal. File juniperus.txt in the open repository contains measurements of morphological and ecological characters in several Arctic populations of junipers (Juniperus). Please analyze how measurements are distributed among populations, and check specifically if the needle length is different between locations. Another problem is that junipers of smaller size (height less than 1 m) and with shorter needles (less than 8 mm) were frequently separated from the common juniper (Juniperus communis) into another species, J. sibirica. Please check if plants with J. sibirica characters are present in the data, and whether the probability of being J. sibirica depends on the number of shading pine trees in the vicinity (character PINE.N).
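Returning for a moment to the programmers example: logistic regression coefficients are on the log-odds scale, so exponentiating the slope gives an odds ratio, and predict(..., type="response") returns modeled probabilities of the second factor level (here success, "S"). A sketch; the two experience values, 12 and 24 months, are arbitrary illustrations:
```r
## Sketch: odds ratio and predicted probabilities for the programmers model
l <- read.table("data/logit.txt")
l.logit <- glm(V2 ~ V1, family=binomial, data=l)
exp(coef(l.logit)["V1"])   # odds ratio per additional month of experience
predict(l.logit, data.frame(V1=c(12, 24)), type="response")  # P(success)
```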
Correlation and linear models Answer to the question of human traits. Inspect the data, load it and check the object: Code \(1\) (R): ```traits <- read.table("data/traits.txt", h=TRUE, row.names=1) str(traits)``` Data is binary, so Kendall’s correlation is most natural: Code \(2\) (R): ```traits <- read.table("data/traits.txt", h=TRUE, row.names=1) Cor(traits, method="kendall", dec=1) # asmisc.r traits.c <- cor(traits, method="kendall") Topm(traits.c) # asmisc.r``` We will visualize correlation with Pleiad(), one of advantages of it is to show which correlations are connected, grouped—so-called “correlation pleiads”: Code \(3\) (R): ```traits <- read.table("data/traits.txt", h=TRUE, row.names=1) Cor(traits, method="kendall", dec=1) # asmisc.r traits.c <- cor(traits, method="kendall") Pleiad(traits.c, corr=TRUE, lcol=1, legend=FALSE, off=1.12, pch=21, bg="white", cex=1.1) # asmisc.r``` (Look on the title page to see correlations. One pleiad, CHIN, TONGUE and THUMB is the most apparent.) Answer to the question of the linear dependence between height and weight for the artificial data. Correlation is present but the dependence is weak (Figure \(1\)): Code \(4\) (R): ```hwc <- read.table("data/hwc.txt", h=TRUE) cor.test(hwc\$WEIGHT, hwc\$HEIGHT) w.h <- lm(WEIGHT ~ HEIGHT, data=hwc) summary(w.h) plot(WEIGHT ~ HEIGHT, data=hwc, xlab="Height, cm", ylab="Weight, kg") abline(w.h) Cladd(w.h, data=hwc) # asmisc.r``` The conclusion about weak dependence was made because of low R-squared which means that predictor variable, height, does not explain much of the dependent variable, weight. In addition, many residuals are located outside of IQR. This is also easy to see on the plot where many data points are distant from the regression line and even from 95% confidence bands. Answer to spring draba question. Check file, load and check the object: Code \(5\) (R): ```ee <- read.table("http://ashipunov.info/shipunov/open/erophila.txt", h=TRUE) str(ee) # asmisc.r``` Now, check normality and correlations with the appropriate method: Code \(6\) (R): ```ee <- read.table("http://ashipunov.info/shipunov/open/erophila.txt", h=TRUE) Normality <- function(a) {ifelse(shapiro.test(a)\$p.value < 0.05, "NOT NORMAL", "NORMAL")} sapply(ee[, 2:4], Normality) # asmisc.r Topm(cor(ee[, 2:4], method="spearman")) # asmisc.r with(ee, cor.test(FRUIT.L, FRUIT.MAXW, method="spearman"))``` Therefore, FRUIT.L and FRUIT.MAXW are best candidates for linear model analysis. We will plot it first (Figure \(2\)): Code \(7\) (R): ```ee <- read.table("http://ashipunov.info/shipunov/open/erophila.txt", h=TRUE) ee.lm <- lm(FRUIT.MAXW ~ FRUIT.L, data=ee) plot(FRUIT.MAXW ~ FRUIT.L, data=ee, type="n") Points(ee\$FRUIT.L, ee\$FRUIT.MAXW, scale=.5) # asmisc.r Cladd(ee.lm, ee, ab.lty=1) # asmisc.r``` (Points() is a “single” variant of PPoints() from the above, and was used because there are multiple overlaid data points.) Finally, check the linear model and assumptions: Code \(8\) (R): ```ee <- read.table("http://ashipunov.info/shipunov/open/erophila.txt", h=TRUE) ee.lm <- lm(FRUIT.MAXW ~ FRUIT.L, data=ee) summary(ee.lm) plot(ee.lm, which=1)``` There is a reliable model (p-value: < 2.2e-16) which has a high R-squared value (sqrt(0.4651) = 0.6819824). Slope coefficient is significant whereas intercept is not. 
Homogeneity of residuals is apparent, their normality is also out of question: Code \(9\) (R): ```ee <- read.table("http://ashipunov.info/shipunov/open/erophila.txt", h=TRUE) ee.lm <- lm(FRUIT.MAXW ~ FRUIT.L, data=ee) Normality <- function(a) {ifelse(shapiro.test(a)\$p.value < 0.05, "NOT NORMAL", "NORMAL")} Normality(ee.lm\$residuals)``` Answer to the heterostyly question. First, inspect the file, load the data and check it: Code \(10\) (R): ```he <- read.table("http://ashipunov.info/shipunov/open/heterostyly.txt", h=TRUE) str(he)``` This is how to visualize the phenomenon of heterostyly for all data: Code \(11\) (R): ```he <- read.table("http://ashipunov.info/shipunov/open/heterostyly.txt", h=TRUE) boxplot((STYLE.L-STAMEN.L) ~ (STYLE.L-STAMEN.L)>0, names=c("short","long"), data=he)``` (Please review this plot yourself.) Now we need to visualize linear relationships of question. There are many overlaid data points so the best way is to employ the PPoints() function (Figure \(3\)): Code \(12\) (R): ```he <- read.table("http://ashipunov.info/shipunov/open/heterostyly.txt", h=TRUE) plot(STYLE.L ~ STAMEN.L, data=he, type="n", xlab="Length of stamens, mm", ylab="Length of style, mm") PPoints(he\$SPECIES, he\$STAMEN.L, he\$STYLE.L, scale=.9, cols=1) abline(lm(STYLE.L ~ STAMEN.L, data=he[he\$SPECIES=="veris", ])) abline(lm(STYLE.L ~ STAMEN.L, data=he[he\$SPECIES=="vulgaris", ]), lty=2) legend("topright", pch=1:2, lty=1:2, legend=c("Primula veris", "P. vulgaris"), text.font=3)``` Now to the models. We will assume that length of stamens is the independent variable. Explore, check assumptions and AIC for the full model: Code \(13\) (R): ```he <- read.table("http://ashipunov.info/shipunov/open/heterostyly.txt", h=TRUE) summary(he.lm1 <- lm(STYLE.L ~ STAMEN.L * SPECIES, data=he)) plot(he.lm1, which=1:2) AIC(he.lm1)``` Reduced (additive) model: Code \(14\) (R): ```he <- read.table("http://ashipunov.info/shipunov/open/heterostyly.txt", h=TRUE) summary(he.lm2 <- update(he.lm1, . ~ . - STAMEN.L:SPECIES)) AIC(he.lm2)``` Full model is better, most likely because of strong interactions. To check interactions graphically is possible also with the interaction plot which will treat independent variable as factor: Code \(15\) (R): ```he <- read.table("http://ashipunov.info/shipunov/open/heterostyly.txt", h=TRUE) with(he, interaction.plot(cut(STAMEN.L, quantile(STAMEN.L)), SPECIES, STYLE.L))``` This technical plot (check it yourself) shows the reliable differences between lines of different species. This differences are bigger when stamens are longer. This plot is more suitable for the complex ANOVA but as you see, works also for linear models. Answer to the question about sundew (Drosera) populations. 
First, inspect the file, then load it and check the structure of object: Code \(16\) (R): ```dr <- read.table( "http://ashipunov.info/shipunov/open/drosera.txt", h=TRUE) str(dr) # asmisc.r``` Since we a required to calculate correlation, check the normality first: Code \(17\) (R): ```dr <- read.table( "http://ashipunov.info/shipunov/open/drosera.txt", h=TRUE) Normality <- function(a) {ifelse(shapiro.test(a)\$p.value < 0.05, "NOT NORMAL", "NORMAL")} sapply(dr[,-1], Normality)``` Well, to this data we can apply only nonparametric methods: Code \(18\) (R): ```dr <- read.table( "http://ashipunov.info/shipunov/open/drosera.txt", h=TRUE) dr.cor <- cor(dr[,-1], method="spearman", use="pairwise") Topm(dr.cor) # asmisc.r Pleiad(dr.cor, corr=TRUE, legtext=2, legpos="bottom", leghoriz=TRUE, pch=19, cex=1.2) # asmisc.r``` (Note that "pairwise" was employed, there are many NAs.) The last plot (Figure \(4\)) shows two most important correlation pleiads: one related with leaf size, and another—with inflorescence. Since we know now which characters are most correlated, proceed to linear model. Since in the development of sundews stalk formed first, let us accept STALK.L as independent variable (influence), and INFL.L as dependent variable (response): Code \(19\) (R): ```dr <- read.table( "http://ashipunov.info/shipunov/open/drosera.txt", h=TRUE) summary(dr.lm <- lm(INFL.L ~ STALK.L, data=dr)) plot(dr.lm, which=1:2)``` Reliable model with high R-squared. However, normality of residuals is not perfect (please check model plots yourself). Now to the analysis of leaf length. Determine which three populations are largest and subset the data: Code \(20\) (R): ```dr <- read.table( "http://ashipunov.info/shipunov/open/drosera.txt", h=TRUE) (largest3 <- rev(sort(table(dr[, 1])))[1:3]) dr3 <- dr[dr\$POP %in% names(largest3), ] dr3\$POP <- droplevels(dr3\$POP)``` Now we need to plot them and check if there are visual differences: Code \(21\) (R): ```dr <- read.table( "http://ashipunov.info/shipunov/open/drosera.txt", h=TRUE) largest3 <- rev(sort(table(dr[, 1])))[1:3] dr3 <- dr[dr\$POP %in% names(largest3), ] boxplot(LEAF.L ~ POP, data=dr3)``` Yes, they probably exist (please check the plot yourself.) It is worth to look on similarity of ranges: Code \(22\) (R): ```dr <- read.table( "http://ashipunov.info/shipunov/open/drosera.txt", h=TRUE) largest3 <- rev(sort(table(dr[, 1])))[1:3] dr3 <- dr[dr\$POP %in% names(largest3), ] dr3\$POP <- droplevels(dr3\$POP) tapply(dr3\$LEAF.L, dr3\$POP, mad, na.rm=TRUE) fligner.test(LEAF.L ~ POP, data=dr3)``` The robust range statistic, MAD (median absolute deviation) shows that variations are similar. We also ran the nonparametric analog of Bartlett test to see the statistical significance of this similarity. Yes, variances are statistically similar. Since we have three populations to analyze, we will need something ANOVA-like, but nonparametric: Code \(23\) (R): ```dr <- read.table( "http://ashipunov.info/shipunov/open/drosera.txt", h=TRUE) largest3 <- rev(sort(table(dr[, 1])))[1:3] dr3 <- dr[dr\$POP %in% names(largest3), ] kruskal.test(LEAF.L ~ POP, data=dr3)``` Yes, there is at least one population where leaf length is different from all others. 
To see which, we need a post hoc, pairwise test: Code \(24\) (R): ```dr <- read.table( "http://ashipunov.info/shipunov/open/drosera.txt", h=TRUE) largest3 <- rev(sort(table(dr[, 1])))[1:3] dr3 <- dr[dr\$POP %in% names(largest3), ] dr3\$POP <- droplevels(dr3\$POP) pairwise.wilcox.test(dr3\$LEAF.L, dr3\$POP)``` Population N1 is most divergent whereas Q1 is not really different from L. Logistic regression Answer to the question about demonstration of objects. We will go the same way as in the example about programmers. After loading data, we attach it for simplicity: Code \(25\) (R): ```seeing <- read.table("data/seeing.txt") attach(seeing)``` Check the model: Code \(26\) (R): ```seeing <- read.table("data/seeing.txt") seeing.logit <- glm(V3 ~ V2, family=binomial, data=seeing) summary(seeing.logit)``` (Calling variables, we took into account the fact that R assign names like V1, V2, V3 etc. to “anonymous” columns.) As one can see, the model is significant. It means that some learning takes place within the experiment. It is possible to represent the logistic model graphically (Figure \(5\)): Code \(27\) (R): ```seeing <- read.table("data/seeing.txt") seeing.logit <- glm(V3 ~ V2, family=binomial, data=seeing) tries <- seq(1, 5, length=50) # exactly 50 numbers from 1 to 5 seeing.p <- predict(seeing.logit, list(V2=tries), type="response") plot(V3 ~ jitter(V2, amount=.1), data=seeing, xlab="", ylab="") lines(tries, seeing.p)``` We used predict() function to calculate probabilities of success for non-existed attempts, and also added small random noise with function jitter() to avoid the overlap. Answer to the juniper questions. Check file, load it, check the object: Code \(28\) (R): ```jj <- read.table( "http://ashipunov.info/shipunov/open/juniperus.txt", h=TRUE) jj\$LOC <- factor(jj\$LOC, labels=paste0("loc", levels(jj\$LOC))) str(jj) # asmisc.r``` Analyze morphological and ecological characters graphically (Figure \(6\)): Code \(29\) (R): ```jj <- read.table( "http://ashipunov.info/shipunov/open/juniperus.txt", h=TRUE) j.cols <- colorRampPalette(c("steelblue", "white"))(5)[2:4] boxplot(jj[, 2:7], jj\$LOC, legpos="top", boxcols=j.cols) # asmisc.r``` Now plot length of needles against location (Figure \(7\)): Code \(30\) (R): ```jj <- read.table( "http://ashipunov.info/shipunov/open/juniperus.txt", h=TRUE) j.cols <- colorRampPalette(c("steelblue", "white"))(5)[2:4] spineplot(LOC ~ NEEDLE.L, data=jj, col=j.cols)``` (As you see, spine plot works with measurement data.) Since there is a measurement character and several locations, the most appropriate is ANOVA-like approach. We need to check assumptions first: Code \(31\) (R): ```Normality <- function(a) {ifelse(shapiro.test(a)\$p.value < 0.05, "NOT NORMAL", "NORMAL")} jj <- read.table( "http://ashipunov.info/shipunov/open/juniperus.txt", h=TRUE) Normality(jj\$NEEDLE.L) tapply(jj\$NEEDLE.L, jj\$LOC, var) bartlett.test(NEEDLE.L ~ LOC, data=jj)``` Code \(32\) (R): ```jj <- read.table( "http://ashipunov.info/shipunov/open/juniperus.txt", h=TRUE) oneway.test(NEEDLE.L ~ LOC, data=jj) (eta.squared <-summary(lm(NEEDLE.L ~ LOC, data=jj))\$adj.r.squared) pairwise.t.test(jj\$NEEDLE.L, jj\$LOC)``` (Note how we calculated eta-squared, the effect size of ANOVA. As you see, this could be done through linear model.) There is significant difference between the second and two other locations. And to the second problem. 
First, we make new variable based on logical expression of character differences: Code \(33\) (R): ```jj <- read.table( "http://ashipunov.info/shipunov/open/juniperus.txt", h=TRUE) is.sibirica <- with(jj, (NEEDLE.L < 8 & HEIGHT < 100)) sibirica <- factor(is.sibirica, labels=c("communis", "sibirica")) summary(sibirica)``` There are both “species” in the data. Now, we plot conditional density and analyze logistic regression: Code \(34\) (R): ```jj <- read.table( "http://ashipunov.info/shipunov/open/juniperus.txt", h=TRUE) is.sibirica <- with(jj, (NEEDLE.L < 8 & HEIGHT < 100)) sibirica <- factor(is.sibirica, labels=c("communis", "sibirica")) cdplot(sibirica ~ PINE.N, data=jj, col=j.cols[c(1, 3)]) summary(glm(sibirica ~ PINE.N, data=jj, family=binomial))``` Conditional density plot (Figure \(8\)) shows an apparent tendency, and model summary outputs significance for slope coefficient. On the next page, there is a table (Table \(1\)) with a key which could help to choose the right inferential method if you know number of samples and type of the data. Type of data One variable Two variables Many variables Measurement, normally distributed t-test Difference: t-test (paired and non-paired), F-test (scale) Effect size: Cohen's d, Lyubishchev's K Relation: correlation, linear models Linear models, ANOVA, one-way test, Bartlett test (scale) Post hoc: pairwise-test, Tukey HSD Effect size: R-squared Measurement and ranked Wilcoxon test, Shapiro-Wilk test Difference: Wilcoxon test (paired and non-paired), sign test, robust rank irder test, Ansari-Bradley test (scale) Effect size: Cliff's delta, Lyubishchev's K Relation: nonparametric correlation Linear models, LOESS, Kruskal-Wallis test, Friedman test, Fligner-Killeen test (scale) Post hoc: pairwise Wilcoxon test, pairwise robust rank order test Effect size: R-squared Categorical One sample test of proportions, goodness-of-fit test Association: Chi-squared test, Fisher's exact test, test of proportions, G-test, McNemar's test (paired) Effect size: Cramer's V, Tschuprow's T, odds ratio Association tests (see on the left); generalized linear models of binomial family (= logistic regression) Post hoc: pairwise table test Table \(1\) Key to the most important inferential statistical methods (except multivariate). After you narrow the search with couple of methods, proceed to the main text.
“Data Mining”, “Big Data”, “Machine Learning”, “Pattern Recognition” phrases often mean all statistical methods, analytical and visual which help to understand the structure of data. Data might be of any kind, but it is usually multidimensional, which is best represented with the table of multiple columns a.k.a. variables (which might be of different types: measurement, ranked or categorical) and rows a.k.a objects. So more traditional name for these methods is “multivariate data analysis” or “multivariate statistics”. Data mining is often based on the idea of classification, arrange objects into non-intersecting, frequently hierarchical groups. We use classification all the time (but sometimes do not realize it). We open the door and enter the room, the first thing is to recognize (classify) what is inside. Our brain has the outstanding power of classification, but computers and software are speedily advancing and becoming more brain-like. This is why data mining is related with artificial intelligence. There are even methods calling “neural networks”! In this chapter, along with the other data, we will frequently use the embedded iris data taken from works of Ronald Fisher\(^{[1]}\). There are four characters measured on three species of irises (Figure \(1\)), and fifth column is the species name. References 1. Fisher R.A. 1936. The use of multiple measurements in taxonomic problems. Annals of Eugenics. 7(2): 179–188. 07: Multidimensional Data - Analysis of Structure The most simple operation with multidimensional data is to draw it. Pictographs Pictograph is a plot where each element represents one of objects, and every feature of the element corresponds with one character of the primary object. If the every row of data is unique, pictographs might be useful. Here is the star plot (Figure $1$) example: Code $1$ (R): eq8 <- read.table("data/eq8.txt", h=TRUE) str(eq8) # asmisc.r eq8m <- aggregate(eq8[, 2:9], list(eq8[, 1]), median, na.rm=TRUE) row.names(eq8m) <- eq8m[, 1] eq8m$Group.1 <- NULL stars(eq8m, cex=1.2, lwd=1.2, col.stars=rep("darkseagreen", 8)) (We made every element to represent the species of horsetails, and length of the particular ray corresponds with some morphological characters. It is easy to see, as an example, similarities between Equisetum $\times$litorale and E. fluviatile.) Slightly more exotic pictograph is Chernoff’s faces where features of elements are shown as human face characters (Figure $1$): Code $2$ (R): eq8 <- read.table("data/eq8.txt", h=TRUE) library(TeachingDemos) faces(eq8m) (Original Chernoff’s faces have been implemented in the faces2() function, there is also another variant in symbols() package.) Related to pictographs are ways to overview the whole numeric dataset, matrix or data frame. First, command image() allows for plots like on Figure $3$: Code $3$ (R): image(scale(iris[,-5]), axes=FALSE) axis(2, at=seq(0, 1, length.out=4), labels=abbreviate(colnames(iris[,-5])), las=2) (This is a “portrait” or iris matrix, not extremely informative but useful in many ways. For example, it is well visible that highest, most red, values of Pt.L (abbreviated from Petal.Length) correspond with lowest values of Sp.W (Sepal.Width). It is possible even to spot 3-species structure of this data.) 
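A close relative of image() is the base-R heatmap() which additionally clusters rows and columns and often reveals group structure; a one-line sketch for the same scaled iris matrix (dendrograms and labels are left at their defaults):
```r
## Sketch: heatmap() shades the scaled iris matrix and adds clustering
heatmap(scale(iris[, -5]))  # scale() already returns a matrix
```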
More advanced is the parallel coordinates plot (Figure $4$): Code $4$ (R): library(MASS) parcoord(iris[,-5], col=as.numeric(iris[, 5]), lwd=2) legend("top", bty="n", lty=1, lwd=2, col=1:3, legend=names(table(iris[, 5]))) This is somewhat like the multidimensional stripchart. Every character is represented with one axis which has its values from all plants. Then, for every plant, these values were connected with lines. There are many interesting things which could be spotted from this plot. For example, it is clear that petal characters are more distinguishing than sepal. It is also visible that Iris setosa is more distinct from two other species, and so on. Grouped plots Even boxplots and dotcharts could represent multiple characters of multiple groups, but you will need to scale them first and then manually control positions of plotted elements, or use Boxplots() and Linechart() described in the previous chapter: Code $5$ (R): boxplot(iris[, 1:4], iris[, 5], srt=0, adj=c(.5, 1), legpos="topright") # asmisc.r Linechart(iris[, 1:4], iris[, 5], mad=TRUE) # asmisc.r (Please try these plots yourself.) Function matplot() allows to place multiple scatterplots in one frame, symbols() allows to place multiple smaller plots in desired locations, and function pairs() allows to show multiple scatterplots as a matrix (Figure $5$). Code $6$ (R): pairs(iris[, 1:4], pch=21, bg=as.numeric(iris[, 5]), oma=c(2, 2, 3, 2)) oldpar <- par(xpd=TRUE) legend(0, 1.09, horiz=TRUE, legend=levels(iris[, 5]), pch=21, pt.bg=1:3, bty="n") par(oldpar) (This matrix plot shows dependencies between each possible pair of five variables simultaneously.) Matrix plot is just one of the big variety of R trellis plots. Many of them are in the lattice package (Figure $6$): Code $7$ (R): betula <- read.table( "http://ashipunov.info/shipunov/open/betula.txt", h=TRUE) library(lattice) d.tmp <- do.call(make.groups, betula[, c(2:4, 7:8)]) d.tmp$LOC <- betula$LOC bwplot(data ~ factor(LOC) | which, data=d.tmp, ylab="") (Note how to use make.groups() and do.call() to stack all columns into the long variable (it is also possible to use stack(), see above). When LOC was added to temporary dataset, it was recycled five times—exactly what we need.) Library lattice offers multiple trellis variants of common R plots. For example, one could make the trellis dotchart which will show differences between horsetail species (Figure $7$) Code $8$ (R): eq8 <- read.table("data/eq8.txt", h=TRUE) eq.s <- stack(as.data.frame(scale(eq8m))) eq.s$SPECIES <- row.names(eq8m) dotplot(SPECIES ~ values | ind, eq.s, xlab="") (Here we stacked all numerical columns into one with stack().) Few trellis plots are available in the core R. This is our election data from previous chapter (Figure $8$): Code $9$ (R): elections <- read.table("data/elections.txt", h=TRUE) PROP <- cbind(CAND.1, CAND.2, CAND.3) / VOTER ATTEN <- (VALID + INVALID) / VOTER elections2 <- cbind(ATTEN, stack(data.frame(PROP))) coplot(percn ~ atten | cand, data=elections2, col="red", bg="pink", pch=21, bar.bg=c(fac="lightblue")) 3D plots If there just three numerical variables, we can try to plot all of them with 3-axis plots. Frequently seen in geology, metallurgy and some other fields are ternary plots. They implemented, for example, in the vcd package. 
They use triangle coordinate system which allows to reflect simultaneously three measurement variables and some more categorical characters (via colors, point types etc.): Code $10$ (R): library(vcd) ternaryplot(scale(iris[, 2:4], center=FALSE), cex=.3, col=iris[, 5], main="") grid_legend(0.8, 0.7, pch=19, size=.5, col=1:3, levels(iris[, 5])) The “brick” 3D plot could be done, for example, with the package scatterplot3d (Figure $10$): Code $11$ (R): library(scatterplot3d) i3d <- scatterplot3d(iris[, 2:4], color=as.numeric(iris[, 5]), type="h", pch=14 + as.numeric(iris[, 5]), xlab="Sepal.Width", ylab="", zlab="Petal.Width") dims <- par("usr") x <- dims[1]+ 0.82*diff(dims[1:2]) y <- dims[3]+ 0.1*diff(dims[3:4]) text(x, y, "Petal.Length", srt=40) legend(i3d\$xyz.convert(3.8, 6.5, 1.5), col=1:3, pch=(14 + 1:3), legend=levels(iris[, 5]), bg="white") (Here some additional efforts were used to make y-axis label slanted.) These 3D scatterplots look attractive, but what if some points were hidden from the view? How to rotate and find the best projection? Library RGL will help to create the dynamic 3D plot: Code $12$ (R): library(rgl) plot3d(iris[, 1:3], col=as.numeric(iris[, 5])) Please run these commands yourself. The size of window and projection in RGL plots are controlled with mouse. That will help to understand better the position of every point. In case of iris data, it is visible clearly that one of the species (Iris setosa) is more distinct than two others, and the most “splitting” character is the length of petals (Petal.Length). There are four characters on the plot, because color was used to distinguish species. To save current RGL plot, you will need to run rgl.snapshot() or rgl.postscript() function. Please also note that RGL package depends on the external OpenGL library and therefore on some systems, additional installations might be required. Another 3D possibility is cloud() from lattice package. It is a static plot with the relatively heavy code but important is that user can use different rotations (Figure 7.2.1): Code $13$ (R): library(lattice) p <- cloud(Sepal.Length ~ Petal.Length * Petal.Width, data=iris, groups=Species, par.settings=list(clip=list(panel="off")), auto.key=list(space="top", columns=3, points=TRUE)) update(p[rep(1, 4)], layout=c(2, 2), function(..., screen) panel.cloud(..., screen=list(z=c(-70, 110)[current.column()], x=-70, y=c(140, 0)[current.row()])))
We see that plotting of multivariate data always has two problems: either there are too many elements (e.g., in parallel coordinates) which are hard to understand, or there is a need of some grouping operation (e.g., median or range) which will result in the lost of information. What will be really helpful is to safely process the data first, for example, to reduce dimensions—from many to 2 or 3. These techniques are described in this section. Apart from (a) reduction of dimensionality (projection pursuit), the following methods help to (b) find groups (clusters) in data, (c) discover hidden factors (latent variables) and understand variable importance (feature selection\(^{[1]}\)), (d) recognize objects (e.g., complicated shapes) within data, typically using densities and hiatus (gaps) in multidimensional space, and (e) unmix signals. Classification with primary data Primary is what come directly from observation, and did not yet processes in any way (to make secondary data). Shadows of hyper clouds: PCA RGL (see above) allows to find the best projection manually, with a mouse. However, it is possible to do programmatically, with principal component analysis, PCA. It belongs to the family of non-supervised methods, methods of classification without learning, or ordination. PCA treats the data as points in the virtual multidimensional space where every dimension is the one character. These points make together the multidimensional cloud. The goal of the analysis is to find a line which crosses this cloud along its most elongated part, like pear on the stick (Figure \(2\)). This is the first principal component. The second line is perpendicular to the first and again span the second most elongated part of the cloud. These two lines make the plane on which every point is projected. Let us prove this practically. We will load the two-dimensional (hence only two principal components) black and white pear image and see what PCA does with it: Code \(1\) (R): ```library(png) # package to read PNG images aa <- readPNG("data/pear2d_bw.png") bb <- which(aa == 0, arr.ind=TRUE) # pixels to coordinates ## plot together original (green) and PCA-rotated: bbs <- scale(bb) pps <- scale(prcomp(bb)\$x[, 1:2]) # only two PCs anyway xx <- range(c(bbs[, 1], pps[, 1])) yy <- range(c(bbs[, 2], pps[, 2])) plot(pps, pch=".", col=adjustcolor("black", alpha=0.5), xlim=xx, ylim=yy) points(bbs, pch=".", col=adjustcolor("green", alpha=0.5)) legend("bottomright", fill=adjustcolor(c("green", "black"), alpha=0.5), legend=c("Original", "PCA-rotated"), bty="n", border=0)``` PCA is related with a task of finding the “most average person”. The simple combination of averages will not work, which is well explained in Todd Rose’s “The End of Average” book. However, it is usually possible to find in the hyperspace the configuration of parameters which will suit most of people, and this is what PCA is for. After the PCA procedure, all columns (characters) are transformed into components, and the most informative component is the first, next is the second, then third etc. The number of components is the same as the number of initial characters but first two or three usually include all necessary information. This is why it is possible to use them for 2D visualization of multidimensional data. There are many similarities between PCA and factor analysis (which is out of the scope of this book). 
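If the pear image file is not at hand, the same idea can be reproduced with a small synthetic cloud. A self-contained sketch with invented data:
```r
## Sketch: PCA rotates an elongated cloud so that PC1 spans its longest axis
set.seed(1)
x <- rnorm(500)
y <- 0.5*x + rnorm(500, sd=0.3)  # two correlated variables: elongated cloud
xy.pca <- prcomp(cbind(x, y))
xy.pca$rotation                  # directions (loadings) of the two components
plot(xy.pca$x, asp=1, pch=".")   # the same cloud in rotated coordinates
```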
At first, we will use an example from the open repository presenting measurements of four different populations of sedges: Code \(2\) (R): ```ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE) str(ca) ca.pca <- princomp(scale(ca[,-1]))``` (Function scale() standardizes all variables.) The following (Figure \(4\)) plot is technical screeplot which shows the relative importance of each component: Code \(3\) (R): ```ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE) ca.pca <- princomp(scale(ca[,-1])) plot(ca.pca, main="")``` Here it is easy to see that among four components (same number as initial characters), two first have the highest importances. There is a way to have the same without plotting: Code \(4\) (R): ```ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE) ca.pca <- princomp(scale(ca[,-1])) summary(ca.pca)``` First two components together explain about 84% percents of the total variance. Visualization of PCA is usually made using scores from PCA model (Figure \(5\)): Code \(5\) (R): ```ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE) ca.pca <- princomp(scale(ca[,-1])) ca.p <- ca.pca\$scores[, 1:2] plot(ca.p, type="n", xlab="PC1", ylab="PC2") text(ca.p, labels=ca[, 1], col=ca[, 1]) Hulls(ca.p, ca[, 1]) # asmisc.r``` (Last command draws hulls which help to conclude that first sedges from the third population are intermediate between first and second, they might be even hybrids. If there are three, not two, components which are most important, then any of 3D plots like scatterplot3d() explained above, will help to visualize them.) It is tempting to measure the intersection between hulls. This is possible with Overlap() function, which in turn loads PBSmapping package: Code \(6\) (R): ```ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE) ca.pca <- princomp(scale(ca[,-1])) ca.p <- ca.pca\$scores[, 1:2] ca.h <- Hulls(ca.p, ca[, 1], plot=FALSE) # asmisc.r ca.o <- Overlap(ca.h) # asmisc.r summary(ca.o)``` Sometimes, PCA results are useful to present as a biplot (Figure \(6\)): Code \(7\) (R): ```ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE) ca.pca <- princomp(scale(ca[,-1])) biplot(ca.pca, xlabs=ca[, 1])``` Biplot helps to understand visually how large is the load of each initial character into first two components. For example, characters of height and spike length (but spike width) have a biggest loads into the first component which distinguishes populations most. Function loadings() allows to see this information in the numerical form: Code \(8\) (R): ```ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE) ca.pca <- princomp(scale(ca[,-1])) loadings(ca.pca)``` R has two variants of PCA calculation, first (already discussed) with princomp(), and second with prcomp(). The difference lays in the way how exactly components are calculated. First way is traditional, but second is recommended: Code \(9\) (R): ```iris.pca <- prcomp(iris[, 1:4], scale=TRUE) iris.pca\$rotation iris.p <- iris.pca\$x[, 1:2] plot(iris.p, type="n", xlab="PC1", ylab="PC2") text(iris.p, labels=abbreviate(iris[, 5], 1, method="both.sides"), col=as.numeric(iris[, 5])) Ellipses(iris.p[, 1:2], as.numeric(iris[, 5])) # asmisc.r``` Example above shows some differences between two PCA methods. First, prcomp() conveniently accepts scale option. Second, loadings are taken from the rotation element. Third, scores are in the the element with x name. 
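One more small sketch (not from the original text): the prcomp() object also stores the standard deviations of the components in its sdev element, so the proportions of variance reported by summary() can be computed by hand:

```
iris.pca <- prcomp(iris[, 1:4], scale=TRUE)
round(iris.pca$sdev^2 / sum(iris.pca$sdev^2), 3) # proportion of variance
round(cumsum(iris.pca$sdev^2) / sum(iris.pca$sdev^2), 3) # cumulative proportion
```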
Please run the code yourself to see how to add 95% confidence ellipses to the 2D ordination plot. One might see that Iris setosa (letter “s” on the plot) is seriously divergent from two other species, Iris versicolor (“v”) and Iris virginica (“a”). Packages ade4 and vegan offer many variants of PCA (Figure \(7\)): Code \(10\) (R): ```library(ade4) iris.dudi <- dudi.pca(iris[, 1:4], scannf=FALSE) s.class(iris.dudi\$li, iris[, 5])``` (The plot is similar to the shown on Figure \(5\); however, the differences between groups are here more clear.) In addition, this is possible to use the inferential approach for the PCA: Code \(11\) (R): ```iris.between <- bca(iris.dudi, iris[, 5], scannf=FALSE) randtest(iris.between)``` Monte-Carlo randomization allows to understand numerically how well are Iris species separated with this PCA. The high Observation value (72.2% which is larger than 50%) is the sign of reliable differences. There are other variants of permutation tests for PCA, for example, with anosim() from the vegan package. Please note that principal component analysis is in general a linear technique similar to the analysis of correlations, and it can fail in some complicated cases. Data solitaire: SOM There are several other techniques which allow unsupervised classification of primary data. Self-organizing maps (SOM) is a technique somewhat similar to breaking the deck of cards into several piles: Code \(12\) (R): ```library(kohonen) iris.som <- som(scale(iris[, 1:4]), grid = somgrid(3, 3, "hexagonal")) # 9 "piles" predict(iris.som)\$unit.classif # content of each "pile" iris.som\$distances # distance to the "center of pile" set.seed(1) plot(iris.som, main="") oldpar <- par(new=TRUE, oma=c(2, rep(0, 3))) plot(iris.som, type="mapping", col=iris\$Species, main="", border=0) par(oldpar) legend("top", col=1:3, pch=1, legend=levels(iris\$Species))``` The resulted plot (Figure \(8\)) contains graphical representation of character values, together with the placement of actual data points. In fact, SOM is the non-learning neural network. More advanced Growing Neural Gas (GNG) algorithm uses ideas similar to SOM. Data density: t-SNE With the really big number of samples, t-SNE algorithm (name stands for “t-Distributed Stochastic Neighbor Embedding”) performs better than classical PCA. t-SNE is frequently used for the shape recognition. It is easy enough to employ it in R(Figure \(9\)): Code \(13\) (R): ```library(Rtsne) iris.unique <- unique(iris) set.seed(42) tsne.out <- Rtsne(as.matrix(iris.unique[, 1:4])) SP <- iris.unique\$Species plot(tsne.out\$Y, col=SP, pch=14+as.numeric(SP), xlab="", ylab="") legend("topleft", pch=14+1:nlevels(SP), col=1:nlevels(SP), legend=levels(SP))``` Classification with correspondence Correspondence analysis is the family of techniques similar to PCA, but applicable to categorical data (primary or in contingency tables). Simple variant of the correspondence analysis is implemented in corresp() from MASS package (Figure \(10\)) which works with contingency tables: Code \(14\) (R): ```HE <- margin.table(HairEyeColor, 1:2) HE.df <- Table2df(HE) # asmisc.r biplot(corresp(HE.df, nf = 2), xpd=TRUE) legend("topleft", fill=1:2, legend=c("hair","eyes"))``` (We converted here “table” object HE into the data frame. xpd=TRUE was used to allow text to go out of the plotting box.) This example uses HairEyeColor data from previous chapter. Plot visualizes both parameters so if the particular combination of colors is more frequent, then positions of corresponding words is closer. 
For example, black hairs and brown eyes frequently occur together. The position of these words is more distant from the center (designated with cross) because numerical values of these characters are remote. This possibility to visualize several character sets simultaneously on the one plot is the impressive feature of correspondence analysis (Figure \(11\)): Code \(15\) (R): ```library(vegan) alla <- read.table("data/lakesea_abio.txt", sep=" ", h=TRUE) allc <- read.table("data/lakesea_bio.txt", sep=" ", h=TRUE) all.cca <- cca(allc, alla[,-14]) plot.all.cca <- plot(all.cca, display=c("sp","cn"), xlab="", ylab="") points(all.cca, display="sites", pch=ifelse(alla[, 14], 15, 0)) legend("topleft", pch=c(15, 0), legend=c("lake","sea")) text(-1.6, -4.2, "Carex.lasiocarpa", pos=4)``` This is much more advanced than biplot. Data used here contained both abiotic (ecotopes) and biotic factors (plant species), plus the geography of some Arctic islands: were these lake islands or sea islands. The plot was able to arrange all of these data: for abiotic factors, it used arrows, for biotic—pluses, and for sites (islands themselves as characterized by the sum of all available factors, biotic and abiotic)—squares of different color, depending on geographic origin. All pluses could be identified with the interactive identify(plot.all.cca, "species") command. We did it just for one most outstanding species, Carex lasiocarpa (woolly-fruit sedge) which is clearly associated with lake islands, and also with swamps. Classification with distances Important way of non-supervised classification is to work with distances instead of original data. Distance-based methods need the dissimilarities between each pair of objects to be calculated first. Advantage of these methods is that dissimilarities could be calculated from data of any type: measurement, ranked or nominal. Distances There are myriads of ways to calculate dissimilarity (or similarity which is essentially the reverse dissimilarity)\(^{[2]}\). One of these ways already explained above is a (reverse absolute) correlation. Other popular ways are Euclidean (square) distance and Manhattan (block) distance. Both of them (Figure \(12\)) are useful for measurement variables. Manhattan distances are similar to driving distances, especially when there are not many roads available. The example below are driving distances between biggest North Dakota towns: Code \(16\) (R): ```nd <- read.table("data/nd.txt", h=TRUE, sep=" ", row.names=1) nd.d <- as.dist(nd) str(nd.d)``` In most cases, we need to convert raw variables into distance matrix. The basic way is to use dist(). Note that ranked and binary variables usually require different approaches which are implemented in the vegan (function vegdist()) and cluster packages (function daisy()). The last function recognizes the type of variable and applies the most appropriate metric (including the universal Gower distance); it also accepts the metric specified by user: Code \(17\) (R): ```library(cluster) iris.dist <- daisy(iris[, 1:4], metric="manhattan") str(iris.dist)``` In biology, one can use Smirnov taxonomic distances, available from smirnov package. In the following example, we use plant species distribution data on small islands. The next plot intends to help the reader to understand them better. 
It is just a kind of map which shows geographical locations and sizes of islands: Code \(18\) (R): ```mp <- read.table( "http://ashipunov.info/shipunov/open/moldino_l.txt", h=TRUE, sep=" ", row.names=1) library(MASS) eqscplot(mp\$LON, mp\$LAT, cex=round(log(mp\$SQUARE))-5.5, axes=FALSE, xlab="", ylab="", xpd=T) text(mp\$LON, mp\$LAT, labels=row.names(mp), pos=4, offset=1, cex=.9)``` (Please plot it yourself.) Now we will calculate and visualize Smirnov’s distances: Code \(19\) (R): ```mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep=" ", row.names=1) m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose library(smirnov) m1.Txy <- smirnov(m1) m1.s <- (1 - m1.Txy) # similarity to dissimilarity dimnames(m1.s) <- list(row.names(m1)) symnum(m1.s)``` Smirnov’s distances have an interesting feature: instead of 0 or 1, diagonal of the similarity matrix is filled with the coefficient of uniqueness values (Txx): Code \(20\) (R): ```mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep=" ", row.names=1) m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose library(smirnov) m1.Txy <- smirnov(m1) m1.Txx <- diag(m1.Txy) names(m1.Txx) <- row.names(m1) rev(sort(round(m1.Txx, 3)))``` This means that Verik island is a most unique in regards to plant species occurrence. Making maps: multidimensional scaling There are many things to do with the distance matrix. One of most straightforward is the multidimensional scaling, MDS (the other name is “principal coordinate analysis”, PCoA): Code \(21\) (R): ```nd <- read.table("data/nd.txt", h=TRUE, sep=" ", row.names=1) nd.d <- as.dist(nd) nd.c <- cmdscale(nd.d) new.names <- sub("y C", "y C", row.names(nd)) library(MASS) eqscplot(nd.c, type="n", axes=FALSE, xlab="", ylab="") points(nd.c, pch=19) text(nd.c, labels=new.names, xpd=TRUE, pos=3, cex=0.8)``` Compare the plot (Figure \(13\)) it with any geographical map. If you do not have a map of North Dakota but have these driving distances, cmdscale() allows to re-create the map! So in essence, MDS is a task reverse to navigation (finding driving directions from map): it uses “driving directions” and makes a map from them. Another, less impressive but more useful example (Figure \(14\)) is from raw data of Fisher’s irises: Code \(22\) (R): ```library(KernSmooth) library(cluster) iris.dist <- daisy(iris[, 1:4], metric="manhattan") iris.c <- cmdscale(iris.dist) est <- bkde2D(iris.c, bandwidth=c(.7, 1.5)) plot(iris.c, type="n", xlab="Dim. 1", ylab="Dim. 2") text(iris.c, labels=abbreviate(iris[, 5], 1, method="both.sides")) contour(est\$x1, est\$x2, est\$fhat, add=TRUE, drawlabels=FALSE, lty=3)``` (There is no real difference from PCA because metric multidimensional scaling is related to principal component analysis; also, the internal structure of data is the same.) To make the plot “prettier”, we added here density lines of point closeness estimated with bkde2D() function from the KernSmooth package. Another way to show density is to plot 3D surface like (Figure \(15\)): Code \(23\) (R): ```library(cluster) iris.dist <- daisy(iris[, 1:4], metric="manhattan") iris.c <- cmdscale(iris.dist) est <- bkde2D(iris.c, bandwidth=c(.7, 1.5)) persp(est\$x1, est\$x2, est\$fhat, theta=135, phi=45, col="purple3", shade=0.75, border=NA, xlab="Dim. 1", ylab="Dim. 
2", zlab="Density")``` In addition to cmdscale(), MASS package (functions isoMDS() and sammon()) implements the non-metric multidimensional scaling, and package vegan has the advanced non-metric metaMDS(). Non-metric multidimensional scaling does not have analogs to PCA loadings (importances of variables) and proportion of variance explained by component, but it is possible to calculate surrogate metrics: Code \(24\) (R): ```iris.dist2 <- dist(iris[, 1:4], method="manhattan") ## to remove zero distances: iris.dist2[iris.dist2 == 0] <- abs(jitter(0)) library(MASS) iris.m <- isoMDS(iris.dist2) cor(iris[, 1:4], iris.m\$points) # MDS loadings surrogate MDSv(iris.m\$points) # asmisc.r``` Consequently (and similarly to PCA), sepal width character influences second dimension much more than three other characters. We can also guess that within this non-metric solution, first dimension takes almost 98% of variance. Making trees: hierarchical clustering The other way to process the distance matrix is to perform hierarchical clustering which produces dendrograms, or trees, which are “one and a half dimensional” plots (Figure \(16\)): Code \(25\) (R): ```aa <- read.table("data/atmospheres.txt", h=TRUE, sep=" ", row.names=1) aa.dist <- dist(t(aa)) # because planets are columns aa.hclust <- hclust(aa.dist, method="ward.D") plot(aa.hclust, xlab="", ylab="", sub="")``` Ward’s method of clustering is well known to produce sharp, well-separated clusters (this, however, might lead to false conclusions if data has no apparent structure). Distant planets are most similar (on the height \(\approx25\)), similarity between Venus and Mars is also high (dissimilarity is \(\approx0\)). Earth is more outstanding, similarity with Mercury is lower, on the height \(\approx100\); but since Mercury has no true atmosphere, it could be ignored. The following classification could be produced from this plot: • Earth group: Venus, Mars, Earth, [Mercury] • Jupiter group: Jupiter, Saturn, Uranus, Neptune Instead of this “speculative” approach, one can use cutree() function to produce classification explicitly; this requires the hclust() object and number of desired clusters: Code \(26\) (R): ```library(cluster) iris.dist <- daisy(iris[, 1:4], metric="manhattan") iris.hclust <- hclust(iris.dist) iris.3 <- cutree(iris.hclust, 3) head(iris.3) # show cluster numbers Misclass(iris.3, iris[, 5]) # asmisc.r``` To check how well the selected method performs classification, we wrote the custom function Misclass(). This function calculates the confusion matrix. Please note that Misclass() assumes predicted and observed groups in the same order, see also below for fanny() function results. Confusion matrix is a simple way to assess the predictive power of the model. More advanced technique of same sort is called cross-validation. As an example, user might splut data into 10 equal parts (e.g., with cut()) and then in turn, make each part an “unknown” whereas the rest will become training subset. As you can see from the table, 32% of Iris virginica were misclassified. The last is possible to improve, if we change either distance metric, or clustering method. For example, Ward’s method of clustering gives more separated clusters and slightly better misclassification rates. Please try it yourself. Hierarchical clustering does not by default return any variable importance. 
However, it is still possible to assist the feature selection with clustering heatmap (Figure \(17\)): Code \(27\) (R): ```aa <- read.table("data/atmospheres.txt", h=TRUE, sep=" ", row.names=1) library(cetcolor) heatmap(t(aa), col=cet_pal(12, "coolwarm"), margins=c(9, 6))``` (Here we also used cetcolor package which allows to create perceptually uniform color palettes.) Heatmap separately clusters rows and columns and places result of the image() function in the center. Then it become visible which characters influence which object clusters and vice versa. On this heatmap, for example, Mars and Venus cluster together mostly because of similar levels of carbon dioxide. There are too many irises to plot the resulted dendrogram in the common way. One workaround is to select only some irises (see below). Another method is to use function Ploth() (Figure \(18\)): Code \(28\) (R): ```Ploth(iris.hclust, col=as.numeric(iris[, 5]), pch=16, col.edges=TRUE, horiz=TRUE, leaflab="none") # asmisc.r legend("topleft", fill=1:nlevels(iris[, 5]), legend=levels(iris[, 5])) ``` Ploth() is useful also if one need simply to rotate the dendrogram. Please check the following yourself: Code \(29\) (R): ```tox <- read.table("data/poisoning.txt", h=TRUE) oldpar <- par(mar=c(2, 0, 0, 4)) tox.dist <- as.dist(1 - abs(tox.cor)) Ploth(hclust(tox.dist, method="ward.D"), horiz=TRUE) # asmisc.r par(oldpar)``` (This is also a demonstration of how to use correlation for the distance. As you will see, the same connection between Caesar salad, tomatoes and illness could be visualized with dendrogram. There visible also some other interesting relations.) file. Planet Aqua is entirely covered by shallow water. This ocean is inhabited with various flat organisms (Figure \(19\)). These creatures (we call them “kubricks”) can photosynthesize and/or eat other organisms or their parts (which match with the shape of their mouths), and move (only if they have no stalks). Provide the dendrogram for kubrick species based on result of hierarchical clustering. How to know the best clustering method Hierarchical cluster analysis and relatives (e.g., phylogeny trees) are visually appealing, but there are three important questions which need to be solved: (1) which distance is the best (this also relevant to other distance-based methods); (2) which hierarchical clustering method is the best; and (3) how to assess stability of clusters. Second question is relatively easy to answer. Function Co.test(dist, tree) from asmisc.r reveals consistency between distance object and hierachical clusterization. It is essentially correlation test between initial distances and distances revealed from cophenetic structure of the dendrogram. Cophenetic distances are useful in many ways. For example, to choose the best clusterization method and therefore answer the second question, one might use cophenetic-based Code \(30\) (R): ```mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep=" ", row.names=1) m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose library(smirnov) m1.Txy <- smirnov(m1) m1.s <- (1 - m1.Txy) # similarity to dissimilarity PlotBest.hclust(as.dist(m1.s)) # asmisc.r``` (Make and review this plot yourself. Which clustering is better?) Note, however, these “best” scores are not always best for you. For example, one might still decide to use ward.D because it makes clusters sharp and visually separated. 
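If asmisc.r is not at hand, the idea behind such scores can be reproduced with base R alone: the stats package provides cophenetic(), and the correlation between original and cophenetic distances (a sketch below, not from the original text) gives a crude measure of how well each clustering method preserves the distance structure:

```
## Cophenetic correlations for several clustering methods (higher is better)
iris.dist <- dist(iris[, 1:4], method="manhattan")
for (m in c("complete", "average", "ward.D")) {
 hc <- hclust(iris.dist, method=m)
 cat(m, ":", round(cor(iris.dist, cophenetic(hc)), 3), "\n")
}
```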
To choose the best distance method, one might use the visually similar approach: Code \(31\) (R): ```mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep=" ", row.names=1) m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose PlotBest.dist(m1) # asmisc.r``` (Again, please review the plot yourself.) In fact, it just visualizes the correlation between multidimensional scaling of distances and principal component analysis of raw data. Nevertheless, it is still useful. How to compare clusterings Hierarchical clustering are dendrograms and it is not easy to compare them “out of the box”. Several different methods allow to compare two trees. We can employ methods associated with biological phylogenies (these trees are essentially dendrograms). Suppose that there are two clusterings: Code \(32\) (R): ```aa <- read.table("data/atmospheres.txt", h=TRUE, sep=" ", row.names=1) aa.d1 <- hclust(dist(t(aa))) aa.d2 <- hclust(as.dist(1 - abs(cor(aa, method="s"))), method="ward.D")``` Library ape has dist.topo() function which calculates topological distance between trees, and library phangorn calculates several those indexes: Code \(33\) (R): ```aa <- read.table("data/atmospheres.txt", h=TRUE, sep=" ", row.names=1) library(ape) aa.d1 <- hclust(dist(t(aa))) aa.d2 <- hclust(as.dist(1 - abs(cor(aa, method="s"))), method="ward.D") aa.ph1 <- unroot(as.phylo(aa.d1)) # convert aa.ph2 <- unroot(as.phylo(aa.d2)) dist.topo(aa.ph1, aa.ph2) phangorn::treedist(aa.ph1, aa.ph2)``` Next possibility is to plot two trees side-by-side and show differences with lines connecting same tips (Figure \(20\)): Code \(34\) (R): ```aa <- read.table("data/atmospheres.txt", h=TRUE, sep=" ", row.names=1) library(ape) aa.d1 <- hclust(dist(t(aa))) aa.d2 <- hclust(as.dist(1 - abs(cor(aa, method="s"))), method="ward.D") aa.ph1 <- unroot(as.phylo(aa.d1)) # convert aa.ph2 <- unroot(as.phylo(aa.d2)) ass <- cbind(aa.ph1\$tip.label, aa.ph1\$tip.label) aa.ph2r <- rotate(aa.ph2, c("Earth", "Neptune")) cophyloplot(aa.ph1, aa.ph2r, assoc=ass, space=30, lty=2)``` (Note that sometimes you might need to rotate branch with rotate() function. Rotation does not change dendrogram.) There is also possible to plot consensus tree which shows only those clusters which appear in both clusterings: Code \(35\) (R): ```aa <- read.table("data/atmospheres.txt", h=TRUE, sep=" ", row.names=1) library(ape) aa.d1 <- hclust(dist(t(aa))) aa.d2 <- hclust(as.dist(1 - abs(cor(aa, method="s"))), method="ward.D") aa.ph1 <- unroot(as.phylo(aa.d1)) # convert aa.ph2 <- unroot(as.phylo(aa.d2)) aa.ph2r <- rotate(aa.ph2, c("Earth", "Neptune")) plot(consensus(aa.ph1, aa.ph2r))``` (Please make this plot yourself.) Heatmap could also be used to visualize similarities between two dendrograms: Code \(36\) (R): ```aa <- read.table("data/atmospheres.txt", h=TRUE, sep=" ", row.names=1) library(ape) aa.d1 <- hclust(dist(t(aa))) aa.d2 <- hclust(as.dist(1 - abs(cor(aa, method="s"))), method="ward.D") aa12.match <- Hclust.match(aa.d1, aa.d2) # asmisc.r library(cetcolor) cols <- cet_pal(max(aa12.match), "blues") kheatmap(aa12.match, scale="none", col=cols)``` (Hclust.match() counts matches between two dendrograms (which based on the same data) and then heatmap() plots these counts as colors, and also supplies the consensus configuration as two identical dendrograms on the top and on the left. Please make this plot yourself.) Both multidimendional scaling and hierarchical clustering are distance-based methods. 
Please make and review the following plot (from the vegan3d package) to understand how to compare them: Code \(37\) (R): ```mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep=" ", row.names=1) m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose library(smirnov) m1.Txy <- smirnov(m1) m1.s <- (1 - m1.Txy) # similarity to dissimilarity m1.dist <- as.dist(m1.s) m1.hclust <- hclust(m1.dist) m1.c <- cmdscale(m1.dist) library(vegan3d) orditree3d(m1.c, m1.hclust, text=attr(m1.dist, "Labels"), type="t")``` How good are resulted clusters There are several ways to check how good are resulted clusters, and many are based on the bootstrap replication (see Appendix). Function Jclust() presents a method to bootstrap bipartitions and plot consensus tree with support values (Figure \(21\): Code \(38\) (R): ```mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep=" ", row.names=1) m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose (m1.j <- Jclust(m1, 3, iter=1000)) plot(m1.j, rect.lty=2, sub="")``` (Note that Jclust() uses cutree() and therefore works only if it “knows” the number of desired clusters. Since consensus result relates with cluster number, plots with different numbers of clusters will be different.) Another way is to use pvclust package which has an ability to calculate the support for clusters via bootstrap (Figure \(22\)): Code \(39\) (R): ```mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep=" ", row.names=1) m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose library(pvclust) m1.pvclust <- pvclust(t(m1), method.dist="manhattan", nboot=100, parallel=TRUE) plot(m1.pvclust, col.pv=c("darkgreen", 0, 0), main="")``` (Function pvclust() clusterizes columns, not rows, so we have to transpose data again. On the plot, numerical values of cluster stability (au) are located above each node. The closer are these values to 100, the better.) There is also BootA() function in asmisc.r set which allows to bootstrap clustering with methods from phylogenetic package ape: Code \(40\) (R): ```mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep=" ", row.names=1) m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose m1.ba <- BootA(m1, FUN=function(.x) # asmisc.r as.phylo(hclust(dist(.x, method="minkowski"), method="average")), iter=100, mc.cores=4) plot(m1.ba\$boot.tree, show.node.label=TRUE) plot(m1.ba\$cons.tree) # majority-rule consensus``` (This method requires to make an anonymous function which uses methods you want. It also plots both consensus tree (without support values) and original tree with support values. Please make these trees. Note that by default, only support values greater then 50% are shown.) Making groups: k-means and friends Apart from hierarchical, there are many other ways of clustering. Typically, they do not return any ordination (“map”) and provide only cluster membership. 
For example, k-means clustering tries to obtain the a priori specified number of clusters from the raw data (it does not need the distance matrix to be supplied): Code \(41\) (R): ```eq <- read.table("data/eq.txt", h=TRUE) eq.k <- kmeans(eq[,-1], 2)``` K-means clustering does not plot trees; instead, for every object it returns the number of its cluster: Code \(42\) (R): ```eq <- read.table("data/eq.txt", h=TRUE) eq.k <- kmeans(eq[,-1], 2) str(eq.k\$cluster) Misclass(eq.k\$cluster, eq\$SPECIES) # asmisc.r``` (As you see, misclassification errors are low.) Instead of a priori cluster number, function kmeans() also accepts row numbers of cluster centers. Spectral clustering from kernlab package is superficially similar method capable to separate really tangled elements: Code \(43\) (R): ```library(kernlab) data(spirals) set.seed(1) sc <- specc(spirals, centers=2) plot(spirals, col=sc, xlab="", ylab="")``` Kernel methods (like spectral clustering) recalculate the primary data to make it more suitable for the analysis. Support vector machines (SVM, see below) is another example. There is also kernel PCA (function kpca() in kernlab package). Next group of clustering methods is based on fuzzy logic and takes into account the fuzziness of relations. There is always the possibility that particular object classified in the cluster A belongs to the different cluster B, and fuzzy clustering tries to measure this possibility: Code \(44\) (R): ```library(cluster) iris.f <- fanny(iris[, 1:4], 3) head(data.frame(sp=iris[, 5], iris.f\$membership))``` Textual part of the fanny() output is most interesting. Every row contains multiple membership values which represent the probability of this object to be in the particular cluster. For example, sixth plant most likely belongs to the first cluster but there is also visible attraction to the third cluster. In addition, fanny() can round memberships and produce hard clustering like other cluster methods: Code \(45\) (R): `Misclass(iris.f\$clustering, factor(iris[, 5], levels=c("setosa", "virginica", "versicolor"))) # asmisc.r` (We had to re-level the Species variable because fanny() gives number 2 to the Iris virginica cluster.) How to know cluster numbers All “k-means and friends” methods want to know the number of clusters before they start. So how to know a priori how many clusters present in data? This question is one of the most important in clustering, both practically and theoretically. The visual analysis of banner plot (invented by Kaufman & Rousseeuw, 1990) could predict this number (Figure \(24\)): Code \(46\) (R): ```eq <- read.table("data/eq.txt", h=TRUE) library(cluster) eq.a <- agnes(eq[,-1]) plot(eq.a, which=1, col=c(0, "maroon"))``` White bars on the left represent unclustered data, maroon lines on the right show height of possible clusters. Therefore, two clusters is the most natural solution, four clusters should be the next possible option. Model-based clustering allows to determine how many clusters present in data and also cluster membership. The method assumes that clusters have the particular nature and multidimensional shapes: Code \(47\) (R): ```library(mclust) iris.mclust <- Mclust(iris[, -5]) summary(iris.mclust) # 2 clusters iris.mclust\$classification``` (As you see, it reveals two clusters only. This is explanable because in iris data two species are much more similar then the third one.) 
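Returning to k-means for a moment: below is a small sketch (not from the original text) of the second way to call kmeans(), supplying explicit initial centers taken from chosen rows of the data instead of a plain cluster count; Misclass() is the same asmisc.r helper used above:

```
## Start k-means from one representative row of each species
iris.k <- kmeans(iris[, 1:4], centers=iris[c(1, 51, 101), 1:4])
Misclass(iris.k$cluster, iris[, 5]) # asmisc.r
```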
DBSCAN is the powerful algorithm for the big data (like raster images which consist of billions of pixels) and there is the R package with the same name (in lowercase). DBSCAN reveals how many clusters are in data at particular resolution: Code \(48\) (R): ```library(dbscan) kNNdistplot(iris[, -5], k = 5) abline(h=.5, col = "red", lty=2) (iris.dbscan <- dbscan(iris[, -5], eps = .5, minPts = 5)) plot(iris.p, type="n", xlab="", ylab="") text(iris.p, labels=abbreviate(iris[, 5], 1, method="both.sides"), col=iris.dbscan\$cluster+1)``` (Plots are not shown, please make then yourself. First plot helps to find the size of neighborhood (look on the knee). The second illustrates results. Similar to model-based clustering, DBSCAN by default reveals only two clusters in iris data.) Note that while DBSCAN was not able to recover all three species, it recovered clouds, and also places marginal points in the “noise” group. DBSCAN, as you see, is useful for smoothing, important part of image recognition. Parameter eps allows to change “resolution” of clustering and to find more, or less, clusters. DBSCAN relates with t-SNE (see above) and with supervised methods based on proximity (like kNN, see below). It can also be supervised itself and predict clusters for new points. Note that k-means and DBSCAN are based on specifically calculated proximities, not directly on distances. Data stars contains information about 50 brightest stars in the night sky, their location and constellations. Please use DBSCAN to make artificial constellations on the base of star proximity. How are they related to real constellations? Note that location (right ascension and declination) is given in degrees or hours (sexagesimal system), they must be converted into decimals. “Mean-shift” method searches for modes within data, which in essence, is similar to finding proximities. The core mean-shift algotithm is slow so approximate “blurring” version is typically preferable: Code \(49\) (R): ```library(MeanShift) bandwidth <- quantile(dist(iris[, -5]), 0.25) (bmsClustering(t(iris[, -5]), h=bandwidth))``` Another approach to find cluster number is similar to the PCA screeplot: Code \(50\) (R): ```eq <- read.table("data/eq.txt", h=TRUE) library(cluster) eq.a <- agnes(eq[,-1]) wss <- (nrow(eq[,-1]) - 1) * sum(apply(eq[,-1], 2, var)) for (i in 2:15) wss[i] <- sum(kmeans(eq[,-1], centers=i)\$withinss) barplot(wss, names.arg=1:15, xlab="Number of clusters", main="Sums of squares within groups", yaxt="n", ylab="")``` (Please check this plot yourself. As on the banner plot, it visible that highest relative “cliffs” are after 1 and 4 cluster numbers.) Collection asmisc.r contains function Peaks() which helps to find local maxima in simple data sequence. Number of these peaks on the histogram (with the sensible number of breaks) should point on the number of clusters: Code \(51\) (R): ```histdata <- hist(apply(scale(iris[, -5]), 1, function(.x) sum(abs(.x))), breaks=10, plot=FALSE) sum(Peaks(histdata\$counts))``` ``````> [1] 3`````` (“Three” is the first number of peaks after “one” and does not change when 8 < breaks < 22.) Finally, the integrative package NbClust allows to use diverse methods to assess the putative number of clusters: Code \(52\) (R): ```library(NbClust) iris.nb <- NbClust(iris[, -5], method="ward.D") # wait!``` How to compare different ordinations Most of classification methods result in some ordination, 2D plot which includes all data points. 
This allows us to compare them with Procrustes analysis (see Appendix for more details), which rotates and scales one data matrix to make it maximally similar to the second (target) one. Let us compare the results of classic PCA and t-SNE:

Code \(53\) (R):

```
irisu.pca <- prcomp(iris.unique[, 1:4], scale=TRUE)
irisu.p <- irisu.pca$x[, 1:2]
library(vegan)
irisu.pr <- procrustes(irisu.p, tsne.out$Y)
plot(irisu.pr, ar.col=iris.unique$Species, xlab="", ylab="", main="") # arrows point to target (PCA)
with(iris.unique, legend("topright", pch="↑", col=1:nlevels(Species), legend=levels(Species), bty="n"))
legend("bottomright", lty=2:1, legend=c("PCA", "t-SNE"), bty="n")
```

The resulting plot (Figure 7.3.1) shows how densely t-SNE packs the points and how PCA spreads them out. Which of the two methods makes the better grouping? Find it out yourself.
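To go beyond the visual comparison, one might also ask whether the agreement between the two ordinations is statistically significant. The vegan package has a permutation test for the Procrustes correlation, protest(); a sketch, assuming the objects from the code above are still in the workspace:

```
irisu.prt <- protest(irisu.p, tsne.out$Y) # permutation test of Procrustes correlation
irisu.prt # prints the correlation and its permutation p-value
```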
Methods explained in this section are not only visualizations. Together, they frequently called “classification with learning”, “supervised classification”, “machine learning”, or just “classification”. All of them are based on the idea of learning: ... He scrambled through and rose to his feet. ... He saw nothing but colours—colours that refused to form themselves into things. Moreover, he knew nothing yet well enough to see it: you cannot see things till you know roughly what they are\(^{[1]}\). His first impression was of a bright, pale world—a watercolour world out of a child’s paint-box; a moment later he recognized the flat belt of light blue as a sheet of water, or of something like water, which came nearly to his feet. They were on the shore of a lake or river... C.S.Lewis. Out of the Silent Planet. First, small part of data where identity is already known (training dataset) used to develop (fit) the model of classification (Figure \(2\)). On the next step, this model is used to classify objects with unknown identity (testing dataset). In most of these methods, it is possible to estimate the quality of the classification and also assess the significance of the every character. Let us create training and testing datasets from iris data: Code \(1\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ]``` (iris.unknown is of course the fake unknown so to use it properly, we must specify iris.unknown[, -5]. On the other hand, species information will help to create misclassification table (confusion matrix, see below).) Learning with regression Linear discriminant analysis One of the simplest methods of classification is the linear discriminant analysis (LDA). The basic idea is to create the set of linear functions which “decide” how to classify the particular object. Code \(2\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ] library(MASS) iris.lda <- lda(Species ~ . , data=iris.train) iris.predicted <- predict(iris.lda, iris.unknown[, 1:4]) Misclass(iris.predicted\$class, iris.unknown[, 5]) # asmisc.r``` Training resulted in the hypothesis which allowed almost all plants (with an exception of seven Iris virginica) to be placed into the proper group. Please note that LDA does not require scaling of variables. It is possible to check LDA results with inferential methods. Multidimensional analysis of variation (MANOVA) allows to understand the relation between data and model (classification from LDA): Code \(3\) (R): ```iris.unknown <- iris[-seq(1, nrow(iris), 5), ] ldam <- manova(as.matrix(iris.unknown[, 1:4]) ~ iris.predicted\$class) summary(ldam, test="Wilks")``` Important here are both p-value based on Fisher statistics, and also the value of Wilks’ statistics which is the likelihood ratio (in our case, the probability that groups are not different). It is possible to check the relative importance of every character in LDA with ANOVA-like techniques: Code \(4\) (R): ```iris.unknown <- iris[-seq(1, nrow(iris), 5), ] summary(aov(as.matrix(iris.unknown[, 1:4]) ~ iris.predicted\$class))``` (This idea is applicable to other classification methods too.) ... 
and also visualize LDA results (Figure \(3\)): Code \(5\) (R): ```library(MASS) iris.lda2 <- lda(iris[, 1:4], iris[, 5]) iris.ldap2 <- predict(iris.lda2, dimen=2)\$x plot(iris.ldap2, type="n", xlab="LD1", ylab="LD2") text(iris.ldap2, labels=abbreviate(iris[, 5], 1, method="both.sides")) Ellipses(iris.ldap2, as.numeric(iris[, 5]), centers=TRUE) # asmisc.r``` (Please note 95% confidence ellipses with centers.) To place all points on the plot, we simply used all data as training. Note the good discrimination (higher than in PCA, MDS or clustering), even between close Iris versicolor and I. virginica. This is because LDA frequently overestimates the differences between groups. This feature, and also the parametricity and linearity of LDA made it less used over the last years. With LDA, it is easy to illustrate one more important concept of machine learning: quality of training. Consider the following example: Code \(6\) (R): ```set.seed(230) iris.sample2 <- sample(1:nrow(iris), 30) iris.train2 <- iris[iris.sample2, ] iris.unknown2 <- iris[-iris.sample2, ] library(MASS) iris.lda2 <- lda(Species ~ . , data=iris.train2) iris.predicted2 <- predict(iris.lda2, iris.unknown2[, 1:4]) Misclass(iris.predicted2\$class, iris.unknown2[, 5])``` Misclassification error here almost two times bigger! Why? Code \(7\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] set.seed(230) iris.sample2 <- sample(1:nrow(iris), 30) iris.train2 <- iris[iris.sample2, ] table(iris.train2\$Species)``` Well, using sample() (and particular set.seed() value) resulted in biased training sample, this is why our second model was trained so poorly. Our first way to sample (every 5th iris) was better, and if there is a need to use sample(), consider to sample each species separately. Now, return to the default random number generator settings: Code \(8\) (R): `set.seed(NULL)` Please note that it is widely known that while LDA was developed on biological material, this kind of data rarely meets two key assumptions of this method: (1) multivariate normality and (2) multivariate homoscedasticity. Amazingly, even Fisher’s Iris data with which LDA was invented, does not meet these assumptions! Therefore, we do not recommend to use LDA and keep it here mostly for teaching purposes. Recursive partitioning To replace linear discriminant analysis, multiple methods with similar background ideas were invented. Recursive partitioning, or decision trees (regression trees, classification trees), allow, among other, to make and visualize the sort of discrimination key where every step results in splitting objects in two groups (Figure \(4\)): Code \(9\) (R): ```set.seed(NULL) library(tree) iris.tree <- tree(Species ~ ., data=iris) plot(iris.tree) text(iris.tree)``` We loaded first the tree package containing tree() function (rpart is another package which makes classification trees). Then we again used the whole dataset as training data. The plot shows that all plants with petal length less than 2.45 cm belong to Iris setosa, and from the rest those plants which have petal width less than 1.75 cm and petal length more than 4.95 cm, are I. versicolor; all other irises belong to I. virginica. 
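The rpart package mentioned above grows a very similar tree; here is a minimal sketch (not from the original text):

```
library(rpart)
iris.rp <- rpart(Species ~ ., data=iris)
print(iris.rp) # text form of the tree
plot(iris.rp); text(iris.rp) # and the usual plot
```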
In fact, these trees are result of something similar to “hierarchical discriminant analysis”, and it is possible to use them for the supervised classification: Code \(10\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ] iris.tree2 <- tree(Species ~ ., data=iris.train) iris.tp2 <- predict(iris.tree2, iris.unknown[,-5], type="class") library(MASS) Misclass(iris.tp2, iris.unknown[, 5])``` Try to find out which characters distinguish species of horsetails described in eq.txt data file. File eq_c.txt contains the description of characters. Package party offers sophisticated recursive partitioning methods together with advanced tree plots (Figure \(5\)): Code \(11\) (R): ```library(party) SA <- abbreviate(iris\$Species, 1, method="both.sides") iris.ct <- ctree(factor(SA) ~ ., data=iris[, 1:4]) plot(iris.ct) library(MASS) Misclass(SA, predict(iris.ct))``` (For species names, we used one-letter abbreviations.) Ensemble learnig Random Forest The other method, internally similar to regression trees, rapidly gains popularity. This is the Random Forest. Its name came from the ability to use numerous decision trees and build the complex classification model. Random Forest belongs to bagging ensemble methods; it uses bootstrap (see in Appendix) to multiply the number of trees in the model (hence “forest”). Below is an example of Random Forest classifier made from the iris data: Code \(12\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ] library(randomForest) set.seed(17) iris.rf <- randomForest(Species ~ ., data=iris.train) iris.rfp <- predict(iris.rf, iris.unknown[,-5]) library(MASS) Misclass(iris.rfp, iris.unknown[, 5])``` Here results are similar to LDA but Random Forest allows for more. For example, it can clarify the importance of each character (with function `importance()`), and reveal classification distances (proximities) between all objects of training subset (these distances could be in turn used for clustering). Random Forest could also visualize the multidimensional dataset (Figure \(6\)): Code \(13\) (R): ```library(randomForest) set.seed(17) # because plot is random iris.urf <- randomForest(iris[,-5]) iris.rfm <- MDSplot(iris.urf, iris[, 5], xlab="", ylab="", pch=abbreviate(iris[, 5], 1, method="both.sides")) Pal <- brewer.pal(nlevels(iris[, 5]), "Set1") Hulls(iris.rfm\$points, as.numeric(iris[, 5]), centers=TRUE, usecolor=Pal) # asmisc.r``` (We applied several tricks to show convex hulls and their centroids.) Package ranger implements even faster variant of Random Forest algorithm, it also can employ parallel calculations. Gradient boosting There are many weak classification methods which typically make high misclassification errors. However, many of them are also ultra-fast. So, is it possible to combine many weak learners to make the strong one? Yes! This is what boosting methods do. Gradient boosting employs multi-step optimization and is now among most frequently using learning techniques. In R, there are several gradient boosting packages, for example, xgboost and gbm: Code \(14\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ] library(gbm) set.seed(4) iris.gbm <- gbm(Species ~ ., data=rbind(iris.train, iris.train)) # to make training bigger Distribution not specified, assuming multinomial ... 
plot(iris.gbm) iris.gbm.p1 <- predict.gbm(iris.gbm, iris.unknown, n.trees=iris.gbm\$n.trees) iris.gbm.p2 <- apply(iris.gbm.p1, 1, # membership trick function(.x) colnames(iris.gbm.p1)[which.max(.x)]) library(MASS) Misclass(iris.gbm.p2, iris.unknown[, 5])``` (Plot is purely technical; in the above form, it will show the marginal effect (effect on membership) of the 1st variable. Please make it yourself. “Membership trick” selects the “best species” from three alternatives as gbm() reports classification result in fuzzy form.) Learning with proximity k-Nearest Neighbors algorithm (or kNN) is the “lazy classifier” because it does not work until unknown data is supplied: Code \(15\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ] library(class) iris.knn.pred <- knn(train=iris.train[,-5], test=iris.unknown[,-5], cl=iris.train[, 5], k=5) library(MASS) Misclass(iris.knn.pred, iris.unknown[, 5])``` kNN is based on distance calculation and “voting”. It calculates distances from every unknown object to the every object of the training set. Next, it considers several (5 in the case above) nearest neighbors with known identity and finds which id is prevalent. This prevalent id assigned to the unknown member. Function knn() uses Euclidean distances but in principle, any distance would work for kNN. To illustrate idea of nearest neighbors, we use Voronoi decomposition, the technique which is close to both kNN and distance calculation: Code \(16\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ] iris.p <- prcomp(iris[, 1:4], scale=TRUE)\$x[, 1:2] iris.p1 <- iris.p[seq(1, nrow(iris.p), 5), ] iris.p2 <- iris.p[-seq(1, nrow(iris.p), 5), ] library(tripack) iris.v <- voronoi.mosaic(iris.p1[, 1], iris.p1[, 2], duplicate="remove") plot(iris.v, do.points=FALSE, main="", sub="") points(iris.p1[, 1:2], col=iris.train\$Species, pch=16, cex=2) points(iris.p2[, 1:2], col=iris.unknown\$Species)``` The plot (Figure \(7\)) contains multiple cells which represent neighborhoods of training sample (big dots). This is not exactly what kNN does, but idea is just the same. In fact, Voronoi plot is a good tool to visualize any distance-based approach. Depth classification based on how close an arbitrary point of the space is located to an implicitly defined center of a multidimensional data cloud: Code \(17\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ] library(ddalpha) iris.dd <- ddalpha.train(Species ~ ., data=iris.train) iris.p <- predict(iris.dd, iris.unknown[, -5]) library(MASS) Misclass(unlist(iris.p), iris.unknown[, 5]) # asmisc.r iris.pp <- predict(iris.dd, iris.unknown[, -5], outsider.method="Ignore") sapply(iris.pp, as.character) # shows points outside train clouds``` Learning with rules Naïve Bayes classifier is one of the simplest machine learning algorithms which tries to classify objects based on the probabilities of previously seen attributes. Quite unexpectedly, it is typically a good classifier: Code \(18\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ] library(e1071) iris.nb <- naiveBayes(Species ~ ., data=iris.train) iris.nbp <- predict(iris.nb, iris.unknown[,-5]) library(MASS) Misclass(iris.nbp, iris.unknown[, 5]) # asmisc.r``` Note that Naïve Bayes classifier could use not only numerical like above, but also nominal predictors (which is similar to correspondence analysis.) 
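To illustrate the last point, here is a small sketch (not from the original text) which trains the same kind of Naïve Bayes classifier on purely nominal data, the standard Titanic contingency table expanded into individual cases:

```
library(e1071)
titanic <- as.data.frame(Titanic) # Class, Sex, Age, Survived, Freq
titanic.cases <- titanic[rep(1:nrow(titanic), titanic$Freq), 1:4] # one row per person
tt.nb <- naiveBayes(Survived ~ ., data=titanic.cases)
table(predicted=predict(tt.nb, titanic.cases), observed=titanic.cases$Survived)
```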
Apriori method is similar to regression trees but instead of classifying objects, it researches association rules between classes of objects. This method could be used not only to find these rules but also to make classification. Note that measurement iris data is less suitable for association rules then nominal data, and it needs discretization first: Code \(19\) (R): ```library(arulesCBA) irisd <- as.data.frame(lapply(iris[1:4], discretize, categories=9)) irisd\$Species <- iris\$Species irisd.train <- irisd[seq(1, nrow(irisd), 5), ] irisd.unknown <- irisd[-seq(1, nrow(irisd), 5), ] irisd.cba <- CBA(Species ~ ., irisd.train, supp=0.05, conf=0.8) inspect(irisd.cba\$rules) irisd.cbap <- predict(irisd.cba, irisd.unknown) library(MASS) Misclass(irisd.cbap, irisd.unknown\$Species)``` (Rules are self-explanatory. What do you think, does this method performs better for the nominal data? Please find it out.) Learning from the black box Famous SVM, Support Vector Machines is a kernel technique which calculates parameters of the hyper-planes dividing multiple groups in the multidimensional space of characters: Code \(20\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ] library(e1071) iris.svm <- svm(Species ~ ., data=iris.train) iris.svmp <- predict(iris.svm, iris.unknown[,-5]) Misclass(iris.svmp, iris.unknown[, 5]) # asmisc.r``` Classification, or prediction grid often helps to illustrate the SVM method. Data points are arranged with PCA to reduce dimensionality, and then classifier predicts the identity for the every point in the artificially made grid (Figure \(8\)). This is possible to perform manually but Gradd() function simplifies plotting: Code \(21\) (R): ```iris.p <- prcomp(iris[, 1:4], scale=TRUE)\$x[, 1:2] iris.svm.pca <- svm(Species ~ ., data=cbind(iris[5], iris.p)) plot(iris.p, type="n", xlab="", ylab="") Gradd(iris.svm.pca, iris.p) # gmoon.r text(iris.p, col=as.numeric(iris[, 5]), labels=abbreviate(iris[, 5], 1, method="both.sides"))``` And finally, neural networks! This name is used for the statistical technique based on some features of neural cells, neurons. First, we need to prepare data and convert categorical variable Species into three logical dummy variables: Code \(22\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.dummy <- Tobin(iris.train\$Species, convert.names=FALSE) # asmisc.r iris.train2 <- cbind(iris.train[, -5], iris.dummy) str(iris.train2)``` Now, we call neuralnet package and proceed to the main calculation. The package “wants” to supply all terms in the model explicitly: Code \(23\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.dummy <- Tobin(iris.train\$Species, convert.names=FALSE) # asmisc.r iris.train2 <- cbind(iris.train[, -5], iris.dummy) library(neuralnet) set.seed(17) iris.n <- neuralnet(setosa + versicolor + virginica ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data=iris.train2, hidden=3, lifesign="full")``` (Note use of set.seed(), this is to make your results similar to presented here.) 
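(If asmisc.r is not available, roughly the same dummy variables can be produced with base R. This is a sketch under the assumption that Tobin() simply makes one 0/1 column per species, as its use here suggests:

```
iris.dummy2 <- model.matrix(~ Species - 1, data=iris.train) # one 0/1 column per level
colnames(iris.dummy2) <- levels(iris.train$Species) # strip the "Species" prefix
head(iris.dummy2)
```
)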
Now predict (with compute() function) and check misclassification: Code \(24\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.unknown <- iris[-seq(1, nrow(iris), 5), ] iris.dummy <- Tobin(iris.train\$Species, convert.names=FALSE) # asmisc.r iris.np <- compute(iris.n, iris.unknown[,-5]) iris.np2 <- apply(iris.np\$net.result, 1, function(.x) colnames(iris.dummy)[which.max(.x)]) Misclass(iris.np2, iris.unknown[, 5])``` Results of neural network prediction are fuzzy, similar to the results of fuzzy clustering or regression trees, this is why which.max() was applied for every row. As you see, this is one of the lowest misclassification errors. It is possible to plot the actual network: Code \(25\) (R): ```iris.train <- iris[seq(1, nrow(iris), 5), ] iris.dummy <- Tobin(iris.train\$Species, convert.names=FALSE) # asmisc.r iris.train2 <- cbind(iris.train[, -5], iris.dummy) library(neuralnet) set.seed(17) iris.n <- neuralnet(setosa + versicolor + virginica ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data=iris.train2, hidden=3, lifesign="full") plot(iris.n, rep="best", intercept=FALSE)``` The plot (Figure 7.4.1) is a bit esoteric for the newbie, but hopefully will introduce into the method because there is an apparent multi-layered structure which is used for neural networks decisions.
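For comparison, the nnet package (shipped with R) fits a similar single-hidden-layer network with much less manual preparation, since it accepts a factor response directly; a sketch, not from the original text:

```
library(nnet)
iris.train <- iris[seq(1, nrow(iris), 5), ]
iris.unknown <- iris[-seq(1, nrow(iris), 5), ]
set.seed(17)
iris.nnet <- nnet(Species ~ ., data=iris.train, size=3, trace=FALSE)
iris.nnetp <- predict(iris.nnet, iris.unknown[, -5], type="class")
Misclass(iris.nnetp, iris.unknown[, 5]) # asmisc.r
```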
There is no deep distinction between supervised and non-supervised methods: some non-supervised methods (like SOM or PCA) can use training, whereas some supervised methods (LDA, Random Forest, recursive partitioning) are useful directly as visualizations. And in between there is semi-supervised learning (SSL). It takes into account both data features and data labeling (Figure \(2\)). One of the most important features of SSL is the ability to work with a very small training sample. Many really bright ideas are embedded in SSL; here we illustrate two of them.

Self-learning is when classification is developed in multiple cycles. On each cycle, the testing points which are classified most confidently are labeled and added to the training set:

Code \(1\) (R):

```
library(SSL)
iris.30 <- seq(1, nrow(iris), 30) # only 5 labeled points!
iris.sslt1 <- sslSelfTrain(iris[iris.30, -5], iris[iris.30, 5],
 iris[-iris.30, -5], nrounds=20, n=5) # n found manually, ignore errors while searching
iris.sslt2 <- levels(iris$Species)[iris.sslt1]
Misclass(iris.sslt2, iris[-iris.30, 5])
```

As you see, with only 5 labeled data points (approximately 3% of the data vs. 33% of the data in iris.train), semi-supervised self-learning (based on gradient boosting in this case) reached 73% accuracy.

Another semi-supervised approach is based on graph theory and uses graph label propagation:

Code \(2\) (R):

```
iris.10 <- seq(1, nrow(iris), 10) # 10 labeled points
iris.sslp1 <- sslLabelProp(iris[, -5], iris[iris.10, 5], iris.10,
 graph.type="knn", k=30) # k found manually
iris.sslp2 <- ifelse(round(iris.sslp1) == 0, 1, round(iris.sslp1))
## "practice is when everything works but nobody knows why..."
iris.sslp3 <- levels(iris$Species)[iris.sslp2]
Misclass(iris.sslp3[-iris.10], iris[-iris.10, 5])
```

The idea of this algorithm is similar to what was shown on the illustration (Figure \(2\)) above. Label propagation with 10 labeled points outperforms the Random Forest (see above) which used 30 points.

7.05: Deep Learning

Nowadays, “deep learning” is a bit of a buzzword used to designate software packages which include multiple classification methods, among them always some complicated neural networks (multi-layered, recurrent etc.). In that sense, R with the necessary packages is a deep learning system. What seems to be missing (though actually it is not) is a common interface to all the “animals” in this zoo of methods. Package mlr was created to unify the learning interface in R:

Code \(1\) (R):

```
library(mlr)
...
## 1) Define the task
## Specify the type of analysis (e.g. classification)
## and provide data and response variable
task <- makeClassifTask(data=iris, target="Species")
## 2) Define the learner, use listLearners()[,1]
## Choose a specific algorithm
lrn <- makeLearner("classif.ctree")
n = nrow(iris)
train.set <- sample(n, size=2/3*n)
test.set <- setdiff(1:n, train.set)
## 3) Fit the model
## Train the learner on the task using a random subset
## of the data as training set
model <- train(lrn, task, subset=train.set)
## 4) Make predictions
## Predict values of the response variable for new
## observations by the trained model
## using the other part of the data as test set
pred <- predict(model, task=task, subset=test.set)
## 5) Evaluate the learner
## Calculate the mean misclassification error and accuracy
performance(pred, measures=list(mmce, acc))
```

In addition, R now has interfaces (ways to connect) to (almost) all famous “deep learning” software systems, namely TensorFlow, H2O, Keras, Caffe and MXNet.

7.06: How to choose the right method

So which classification method to use?
There are generally two answers: (1) the one(s) which work best with your data, and (2) as many as possible. The second makes perfect sense because human perception works the same way, using all possible models until it reaches stability in recognition. Remember some optical illusions (e.g., the famous duck-rabbit image, Figure \(1\)) and the Rorschach inkblot test. They illustrate how flexible human cognition is and how many models we really use to recognize objects. At the end of the chapter, we decided to place a decision tree (Figure \(2\)) which helps to select among the most important multivariate methods. Please note that if you decide to transform your data (for example, to make a distance matrix from it), then you might access other methods:
Answer to the stars question. First, load the data and, as suggested above, convert coordinates into decimals:

Code \(1\) (R):

```s50 <- read.table("data/stars.txt", h=T, sep=" ", as.is=T, quote="")
str(s50)
RA10 <- as.numeric(substr(s50$RA, 1, 2)) + as.numeric(substr(s50$RA, 4, 5))/60 +
 as.numeric(substr(s50$RA, 7, 10))/3600
DEC10 <- sign(as.numeric(substr(s50$DEC, 1, 3))) *
 (as.numeric(substr(s50$DEC, 2, 3)) + as.numeric(substr(s50$DEC, 5, 6))/60 +
 as.numeric(substr(s50$DEC, 8, 9))/3600)
coo <- cbind(RA10, DEC10)```

Next, some preliminary plots (please make them yourself):

Code \(2\) (R):

```s50 <- read.table("data/stars.txt", h=T, sep=" ", as.is=T, quote="")
RA10 <- as.numeric(substr(s50$RA, 1, 2)) + as.numeric(substr(s50$RA, 4, 5))/60 +
 as.numeric(substr(s50$RA, 7, 10))/3600
DEC10 <- sign(as.numeric(substr(s50$DEC, 1, 3))) *
 (as.numeric(substr(s50$DEC, 2, 3)) + as.numeric(substr(s50$DEC, 5, 6))/60 +
 as.numeric(substr(s50$DEC, 8, 9))/3600)
coo <- cbind(RA10, DEC10)
oldpar <- par(bg="black", fg="white", mar=rep(0, 4))
plot(coo, pch="*", cex=(3 - s50$VMAG))
Hulls(coo, as.numeric(factor(s50$CONSTEL)), # asmisc.r
 usecolors=rep("white", nlevels(factor(s50$CONSTEL))))
points(runif(100, min(RA10), max(RA10)),
 runif(100, min(DEC10), max(DEC10)), pch=".")
par(oldpar)
plot(coo, type="n")
text(coo, s50$CONSTEL.A)
plot(coo, type="n")
text(coo, s50$NAME)```

Now, load the dbscan package and try to find where the number of “constellations” is maximal:

Code \(3\) (R):

```library(dbscan)
s50 <- read.table("data/stars.txt", h=T, sep=" ", as.is=T, quote="")
RA10 <- as.numeric(substr(s50$RA, 1, 2)) + as.numeric(substr(s50$RA, 4, 5))/60 +
 as.numeric(substr(s50$RA, 7, 10))/3600
DEC10 <- sign(as.numeric(substr(s50$DEC, 1, 3))) *
 (as.numeric(substr(s50$DEC, 2, 3)) + as.numeric(substr(s50$DEC, 5, 6))/60 +
 as.numeric(substr(s50$DEC, 8, 9))/3600)
coo <- cbind(RA10, DEC10)
for (eps in 1:20) cat(c(eps, ":", names(table(dbscan(coo, eps=eps)$cluster))), "\n")```

Plot the prettified “night sky” (Figure \(1\)) with the found constellations:

Code \(4\) (R):

```library(dbscan)
s50 <- read.table("data/stars.txt", h=T, sep=" ", as.is=T, quote="")
RA10 <- as.numeric(substr(s50$RA, 1, 2)) + as.numeric(substr(s50$RA, 4, 5))/60 +
 as.numeric(substr(s50$RA, 7, 10))/3600
DEC10 <- sign(as.numeric(substr(s50$DEC, 1, 3))) *
 (as.numeric(substr(s50$DEC, 2, 3)) + as.numeric(substr(s50$DEC, 5, 6))/60 +
 as.numeric(substr(s50$DEC, 8, 9))/3600)
coo <- cbind(RA10, DEC10)
s50.db <- dbscan(coo, eps=9)
oldpar <- par(bg="black", fg="white", mar=rep(0, 4))
plot(coo, pch=8, cex=(3 - s50$VMAG))
Hulls(coo, s50.db$cluster, # asmisc.r
 usecolors=c("black", "white", "white", "white"))
points(runif(100, min(RA10), max(RA10)),
 runif(100, min(DEC10), max(DEC10)), pch=".")
par(oldpar)```

To assess agreement between two classifications (two systems of constellations) we might use the adjusted Rand index which counts correspondences:

Code \(5\) (R):

```s50 <- read.table("data/stars.txt", h=T, sep=" ", as.is=T, quote="")
library(dbscan)
RA10 <- as.numeric(substr(s50$RA, 1, 2)) + as.numeric(substr(s50$RA, 4, 5))/60 +
 as.numeric(substr(s50$RA, 7, 10))/3600
DEC10 <- sign(as.numeric(substr(s50$DEC, 1, 3))) *
 (as.numeric(substr(s50$DEC, 2, 3)) + as.numeric(substr(s50$DEC, 5, 6))/60 +
 as.numeric(substr(s50$DEC, 8, 9))/3600)
coo <- cbind(RA10, DEC10)
s50.db <- dbscan(coo, eps=9)
Adj.Rand(as.numeric(factor(s50$CONSTEL)), s50.db$cluster) # asmisc.r```

(It is, of course, low.)

Answer to the beer classification exercise.
To make a hierarchical classification, we first need to make the distance matrix. Let us look at the data:

Code \(6\) (R):

```beer <- read.table("data/beer.txt", sep=" ", h=TRUE)
head(beer)```

The data is binary and therefore we need a specific method of distance calculation. We will use here the Jaccard distance implemented in the vegdist() function from the vegan package. It is also possible to use other methods like “binary” from the core dist() function. The next step would be the construction of the dendrogram (Figure \(2\)):

Code \(7\) (R):

```beer <- read.table("data/beer.txt", sep=" ", h=TRUE)
library(vegan)
beer.d <- vegdist(beer, "jaccard")
plot(hclust(beer.d, method="ward.D"), main="", xlab="", sub="")```

There are two big groups (at about the 1.7 dissimilarity level); we can call them “Baltika” and “Budweiser”. On the next split (at approximately 1.4 dissimilarity), there are two subgroups in each group. All other splits are significantly deeper. Therefore, it is possible to make the following hierarchical classification:

• Baltika group
 • Baltika subgroup: Baltika.6, Baltika.9, Ochak.dark, Afanas.light, Sibirskoe, Tula.hard
 • Tula subgroup: Zhigulevsk, Khamovn, Tula.arsenal, Tula.orig
• Budweiser group
 • Budweiser subgroup: Sinebryukh, Vena.porter, Sokol.soft, Budweiser, Sokol.light
 • Ochak subgroup: Baltika.3, Klinsk.dark, Oldm.light, Vena.peterg, Ochak.class, Ochak.special, Klinsk.gold, Sibir.orig, Efes, Bochk.light, Heineken

It is also a good idea to check the resulting classification with any other classification method, like non-hierarchical clustering, multidimensional scaling or even PCA. The more consistent the above classification is with this second approach, the better.

Answer to the plant species classification tree exercise. The tree is self-explanatory but we need to build it first (Figure \(3\)):

Code \(8\) (R):

```library(tree) # the tree() function comes from the tree package
eq <- read.table("data/eq.txt", h=TRUE)
eq.tree <- tree(eq[, 1] ~ ., eq[,-1])
plot(eq.tree); text(eq.tree)```

Answer to the kubricks (Figure \(3\)) question. This is just a plan as you will still need to perform these steps individually:

1. Open R, open Excel or any spreadsheet software and create the data file. This data file should be a table where kubrick species are rows and characters are columns (variables). Every row should start with a name of a kubrick (i.e., letter), and every column should have a header (name of character). For characters, short uppercased names with no spaces are preferable. The top-left cell might stay empty. In every other cell, there should be either 1 (character present) or 0 (character absent). For the characters, you might use “presence of stalk” or “presence of three mouths”, or “ability to make photosynthesis”, or something alike. Since there are 8 kubricks, it is recommended to invent \(N+1\) (in this case, 9) characters.
2. Save your table as a text file, preferably tab-separated (use approaches described in the second chapter), then load it into R with read.table(..., h=TRUE, row.names=1).
3. Calculate the distance matrix with a method applicable for binary (0/1) data, for example, “binary” from dist() or another method (like Jaccard) from vegan::vegdist().
4. Make the dendrogram with hclust() using the appropriate clustering algorithm.

In the data directory, there is a data file, kubricks.txt. It is just an example so it is not necessarily correct and does not contain descriptions of characters.
Since it is pretty obvious how to perform hierarchical clustering (see the “beer” example above), we present below two other possibilities. First, we use MDS plus the MST, minimum spanning tree, the set of lines which show the shortest path connecting all objects on the ordination plot (Figure \(4\)): Code \(9\) (R): ```kubricks <- read.table("data/kubricks.txt", h=TRUE, row.names=1) kubricks.d <- dist(kubricks, method="binary") kubricks.c <- cmdscale(kubricks.d) plot(kubricks.c, type="n", axes=FALSE) text(kubricks.c, labels=row.names(kubricks), cex=2) library(vegan) lines(spantree(kubricks.d), kubricks.c[, 1:2], lty=2)``` Second, we can take into account that kubricks are biological objects. Therefore, with the help of packages ape and phangorn we can try to construct the most parsimonious (i.e., shortest) phylogeny tree for kubricks. Let us accept that kubrick H is the outgroup, the most primitive one: Code \(10\) (R): ```kubricks <- read.table("data/kubricks.txt", h=TRUE, row.names=1) library(phangorn) k <- as.phyDat(data.frame(t(kubricks)), type="USER", levels = c(0, 1)) kd <- dist.hamming(k) # Hamming distance for morphological data kdnj <- NJ(kd) # neighbor-joining tree kp <- optim.parsimony(kdnj, k) ktree <- root(kp, outgroup="H", resolve.root=TRUE) # re-root plot(ktree, cex=2)``` (Make and review this plot yourself.)
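If you want to check how short the resulting tree actually is, package phangorn can also report the parsimony score (the total number of character changes) and the consistency index; a minimal sketch, assuming the objects k and kp created in the code above:

```library(phangorn)
parsimony(kp, k) # parsimony score (tree length) of the optimized tree
CI(kp, k) # consistency index; 1 means no homoplasy```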
The following is for impatient readers who prefer to learn R in a speedy way. They will need to type all commands listed below. Please do not copy-paste them but type them exactly from the keyboard: that way, they will be much easier to remember and consequently to learn. For each command, we recommend reading the help (call it with ?command). As an exception from most other parts of this book, R output and plots are generally not shown below. You will need to check and get them yourself. We also strongly recommend “playing” with commands: modify them and look at how they work.

All of the following relates to an imaginary data file containing information about some insects. The data file is a table of four columns separated with tabs:

SEX COLOR WEIGHT LENGTH
0 1 10.68 9.43
1 1 10.02 10.66
0 2 10.18 10.41
1 1 8.01 9
0 3 10.23 8.98
1 3 9.7 9.71
1 2 9.73 9.09
0 3 11.22 9.23
1 1 9.19 8.97
1 2 11.45 10.34

Companion file bugs_c.txt contains information about these characters:

# Imaginary insects
SEX females 0, males 1
COLOR red 1, blue 2, green 3
LENGTH length of the insect in millimeters

08: Appendix A- Example of R session

If you download your data file from the Internet, go to the read.table() step. Otherwise, proceed as described. Create the working directory on the disk (using only lowercase English letters, numbers and underscore symbols for the name); inside the working directory, create the directory data. Copy into it the data file with the *.txt extension and Tab delimiter (this file could be made in Excel or similar via Save as...). Name the file bugs.txt.

Open R. Using the setwd() command (with the full path and / slashes as argument), change the working directory to the directory where bugs.txt is located. To check the location, type

Code \(1\) (R):

`dir("data")`

... and press the ENTER key (press it at the end of every command). Among others, this command should output the name of the file, bugs.txt.

Now read the data file and create in R memory the object data which will be the working copy of the data file. Type:

Code \(2\) (R):

```dir("data")
data <- read.table("data/bugs.txt", h=TRUE)```

If you use the online approach, replace the data file name with the URL (see the foreword). Look on the data file:

Code \(3\) (R):

```dir("data")
data <- read.table("data/bugs.txt", h=TRUE)
head(data)```

Attention! If anything looks wrong, note that it is not quite handy to change data from inside R. The more sensible approach is to change the initial text file (for example, in Excel) and then read.table() it from disk again.

Look on the data structure: how many characters (variables, columns), how many observations, what are the names of the characters and what is their type and order:

Code \(4\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
str(data)```

Please note that SEX and COLOR are represented with numbers whereas they are categorical variables.

Create a new object which contains data only about females (SEX is 0):

Code \(5\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
data.f <- data[data$SEX == 0, ]```

Now, the object containing data about big (more than 10 mm) males:

Code \(6\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
data.m.big <- data[data$SEX == 1 & data$LENGTH > 10, ]```

By the way, this command is easier not to type but to create from the previous command (this way is preferable in R). To repeat the previous command, press the “\(\uparrow\)” key on the keyboard.

“==” and “&” are logical statements “equal to” and “and”, respectively. They were used for data selection.
Selection also requires square brackets, and if the data is tabular (like our data), there should be a comma inside the square brackets which separates statements about rows from statements concerning columns.

Add a new character (column) to the data file: the relative weight of the bug (the ratio between weight and length), WEIGHT.R:

Code \(7\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
data$WEIGHT.R <- data$WEIGHT/data$LENGTH```

Check the new character using str() (use “\(\uparrow\)”!)

This new character was added only to the memory copy of your data file. It will disappear when you close R. You may want to save the new version of the data file under the new name bugs_new.txt in your data subdirectory:

Code \(8\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
data$WEIGHT.R <- data$WEIGHT/data$LENGTH # re-create the new character
write.table(data, file="data/bugs_new.txt", quote=FALSE)```
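To make sure the file was actually written, you might (this is just a quick check, not part of the original session) list the data directory again and read the new file back:

```dir("data") # should now also list bugs_new.txt
data.new <- read.table("data/bugs_new.txt", h=TRUE)
str(data.new) # the WEIGHT.R column should be there```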
Firstly, look on the basic characteristics of every character: Code \(1\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) summary(data)``` Since SEX and COLOR are categorical, the output in these columns has no sense, but you may want to convert these columns into “true” categorical data. There are multiple possibilities but the simplest is the conversion into factor: Code \(2\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) data1 <- data data1\$SEX <- factor(data1\$SEX, labels=c("female", "male")) data1\$COLOR <- factor(data1\$COLOR, labels=c("red", "blue", "green"))``` (To retain the original data, we copied it first into new object data1. Please check it now with summary() yourself.) summary() command is applicable not only to the whole data frame but also to individual characters (or variables, or columns): Code \(3\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) summary(data\$WEIGHT)``` It is possible to calculate characteristics from summary() one by one. Maximum and minimum: Code \(4\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) min(data\$WEIGHT) max(data\$WEIGHT)``` ... median: Code \(5\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) median(data\$WEIGHT)``` ... mean for WEIGHT and for each character: Code \(6\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) mean(data\$WEIGHT)``` and Code \(7\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) colMeans(data)``` ... and also round the result to one decimal place: Code \(8\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) round(colMeans(data), 1)``` (Again, the output of colMeans() has no sense for SEX and COLOR.) Unfortunately, the commands above (but not summary()) do not work if the data have missed values (NA): Code \(9\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) data2 <- data data2[3, 3] <- NA mean(data2\$WEIGHT)``` To calculate mean without noticing missing data, enter Code \(10\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) data2 <- data mean(data2\$WEIGHT, na.rm=TRUE)``` Another way is to remove rows with NA from the data with: Code \(11\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) data2 <- data data2.o <- na.omit(data2)``` Then, data2.o will be free from missing values. Sometimes, you need to calculate the sum of all character values: Code \(12\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) sum(data\$WEIGHT)``` ... or the sum of all values in one row (we will try the second row): Code \(13\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) sum(data[2, ])``` ... or the sum of all values for every row: Code \(14\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) apply(data, 1, sum)``` (These summarizing exercises are here for training purposes only.) 
For the categorical data, it is sensible to look how many times every value appear in the data file (and that also help to know all values of the character): Code \(15\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) table(data\$SEX) table(data\$COLOR)``` Now transform frequencies into percents (100% is the total number of bugs): Code \(16\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) 100*prop.table(table(data\$SEX))``` One of the most important characteristics of data variability is the standard deviation: Code \(17\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) sd(data\$WEIGHT)``` Calculate standard deviation for each numerical column (columns 3 and 4): Code \(18\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) sapply(data[, 3:4], sd)``` If you want to do the same for data with a missed value, you need something like: Code \(19\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) data2 <- data sapply(data2[, 3:4], sd, na.rm=TRUE)``` Calculate also the coefficient of variation (CV): Code \(20\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) 100*sd(data\$WEIGHT)/mean(data\$WEIGHT)``` We can calculate any characteristic separately for males and females. Means for insect weights: Code \(21\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) tapply(data\$WEIGHT, data\$SEX, mean)``` How many individuals of each color are among males and females? Code \(22\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) table(data\$COLOR, data\$SEX)``` (Rows are colors, columns are males and females.) Now the same in percents: Code \(23\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) 100*prop.table(table(data\$COLOR, data\$SEX))``` Finally, calculate mean values of weight separately for every combination of color and sex (i.e., for red males, red females, green males, green females, and so on): Code \(24\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) tapply(data\$WEIGHT, list(data\$SEX, data\$COLOR), mean)```
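The same grouped means could also be obtained with aggregate() and the formula interface; this is just an alternative sketch, not part of the original session:

```data <- read.table("data/bugs.txt", h=TRUE)
aggregate(WEIGHT ~ SEX + COLOR, data=data, FUN=mean) # long-form output, one row per group```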
At the beginning, visually check the distribution of data. Make histogram: Code \(1\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) hist(data\$WEIGHT, breaks=3)``` (To see more detailed histogram, increase the number of breaks.) If for the histogram you want to split data in the specific way (for example, by 20 units, starting from 0 and ending in 100), type: Code \(2\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) hist(data\$WEIGHT, breaks=c(seq(0, 100, 20))``` Boxplots show outliers, maximum, minimum, quartile range and median for any measurement variable: Code \(3\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) boxplot(data\$LENGTH)``` ... now for males and females separately, using formula interface: Code \(4\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) boxplot(data\$LENGTH ~ data\$SEX)``` There are two commands which together help to check normality of the character: Code \(5\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) qqnorm(data\$WEIGHT); qqline(data\$WEIGHT)``` (These two separate commands work together to make a single plot, this is why we used semicolon. The more dots on the resulting plot are deviated from the line, the more non-normal is the data.) Make scatterplot where all bugs represented with small circles. X axis will represent the length whereas Y axis—the weight: Code \(6\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) plot(data\$LENGTH, data\$WEIGHT, type="p")``` (type="p" is the default for plot(), therefore it is usually omitted.) It is possible to change the size of dots varying the cex parameter. Compare Code \(7\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) plot(data\$LENGTH, data\$WEIGHT, type="p", cex=0.5)``` with Code \(8\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) plot(data\$LENGTH, data\$WEIGHT, type="p", cex=2)``` How to compare? The best way is to have more than one graphical window on the desktop. To start new window, type dev.new(). It is also possible to change the type of plotting symbol. Figure \(1\) shows their numbers. If you want this table on the computer, you can run: Code \(9\) (R): `Ex.points() # gmoon.r` To obtain similar graphic examples about types of lines, default colors, font faces and plot types, load the gmoon.r\(^{[1]}\) and run: Code \(10\) (R): ```Ex.lines() # gmoon.r Ex.cols() # gmoon.r Ex.fonts() # gmoon.r Ex.types() # gmoon.r``` Use symbol 2 (empty triangle): Code \(11\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) plot(data\$LENGTH, data\$WEIGHT, type="p", pch=2)``` Use text codes (0/1) for the SEX instead of graphical symbol: Code \(12\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) plot(data\$LENGTH, data\$WEIGHT, type="n") text(data\$LENGTH, data\$WEIGHT, labels=data\$SEX)``` (Here both commands make one plot together. The first one plots the empty field with axes, the second add there text symbols.) The same plot is possible to make with the single command, but this works only for one-letter labels: Code \(13\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) plot(data\$LENGTH, data\$WEIGHT, pch=as.character(data\$SEX))``` If we want these numbers to have different colors, type: Code \(14\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) plot(data\$LENGTH, data\$WEIGHT, type="n") text(data\$LENGTH, data\$WEIGHT, labels=data\$SEX, col=data\$SEX+1)``` (Again, both commands make one plot. We added +1 because otherwise female signs would be of 0 color, which is “invisible”.) 
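If you wonder which actual colors correspond to these numbers, the current color palette can be listed (a small aside, not from the original text):

`palette() # the first color is used for col=1, the second for col=2, and so on`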
Different symbols for males and females: Code \(15\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) plot(data\$LENGTH, data\$WEIGHT, type="n") points(data\$LENGTH, data\$WEIGHT, pch=data\$SEX)``` The more complicated variant—use symbols from Hershey fonts\(^{[2]}\) which are internal in R(Figure \(2\)): Code \(16\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) plot(data\$LENGTH^3, data\$WEIGHT, type="n", xlab=expression("Volume (cm"^3*")"), ylab="Weight") text(data\$LENGTH^3, data\$WEIGHT, labels=ifelse(data\$SEX, "\MA", "\VE"), vfont=c("serif","plain"), cex=1.5)``` (Note also how expression() was employed to make advanced axes labels. Inside expression(), different parts are joined with star *. To know more, run ?plotmath.) We can paint symbols with different colors: Code \(17\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) plot(data\$LENGTH, data\$WEIGHT, type="n") points(data\$LENGTH, data\$WEIGHT, pch=data\$SEX*3, col=data\$SEX+1)``` Finally, it is good to have a legend: Code \(18\) (R): `legend("bottomright", legend=c("male", "female"), pch=c(0, 3), col=1:2)` And then save the plot as PDF file: Code \(19\) (R): ```dev.copy(pdf, "graph.pdf") dev.off()``` Attention! Saving into the external file, never forget to type dev.off()! If you do not want any of axis and main labels, insert options main="", xlab="", ylab="" into your plot() command. There is also a better way to save plots because it does not duplicate to screen and therefore works better in R scripts: Code \(20\) (R): ```data <- read.table("data/bugs.txt", h=TRUE) pdf("graph.pdf") plot(data\$LENGTH, data\$WEIGHT, type="n") points(data\$LENGTH, data\$WEIGHT, pch=data\$SEX*3, col=data\$SEX+1) legend("bottomright", legend=c("male", "female"), pch=c(0, 3), col=1:2) dev.off()``` (Please note here that R issues no warning if the file with the same name is already exist on the disk, it simply erases it and saves the new one. Be careful!)
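One possible precaution (a small sketch, not part of the original session) is to check for the file before opening the PDF device:

```data <- read.table("data/bugs.txt", h=TRUE)
if (file.exists("graph.pdf")) warning("graph.pdf already exists and will be overwritten!")
pdf("graph.pdf")
plot(data$LENGTH, data$WEIGHT)
dev.off()```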
The significance of difference between means for paired parametric data (t-test for paired data):

Code \(1\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
t.test(data$WEIGHT, data$LENGTH, paired=TRUE)```

... t-test for independent data:

Code \(2\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
t.test(data$WEIGHT, data$LENGTH, paired=FALSE)```

(Last example is for learning purpose only because our data is paired since every row corresponds with one animal. Also, "paired=FALSE" is the default for the t.test(), therefore one can skip it.)

Here is how to compare values of one character between two groups using formula interface:

Code \(3\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
t.test(data$WEIGHT ~ data$SEX)```

Formula was used because our weight/sex data is in the long form:

Code \(4\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
data[, c("WEIGHT", "SEX")]```

Convert weight/sex data into the short form and test:

Code \(5\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
data3 <- unstack(data[, c("WEIGHT", "SEX")])
t.test(data3[[1]], data3[[2]])```

(Note that test results are exactly the same. Only format was different.)

If the p-value is equal or less than 0.05, then the difference is statistically supported. R does not require you to check if the dispersion is the same.

Nonparametric Wilcoxon test for the differences:

Code \(6\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
wilcox.test(data$WEIGHT, data$LENGTH, paired=TRUE)```

One-way test for the differences between three and more groups (the simple variant of ANOVA, analysis of variation):

Code \(7\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
wilcox.test(data$WEIGHT ~ data$SEX)```

Which pair(s) are significantly different?

Code \(8\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
pairwise.t.test(data$WEIGHT, data$COLOR, p.adj="bonferroni")```

(We used Bonferroni correction for multiple comparisons.)

Nonparametric Kruskal-Wallis test for differences between three and more groups:

Code \(9\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
kruskal.test(data$WEIGHT ~ data$COLOR)```

Which pairs are significantly different in this nonparametric test?

Code \(10\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
pairwise.wilcox.test(data$WEIGHT, data$COLOR)```

The significance of the correspondence between categorical data (nonparametric Pearson chi-squared, or \(\chi^2\) test):

Code \(11\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
chisq.test(data$COLOR, data$SEX)```

The significance of proportions (nonparametric):

Code \(12\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
prop.test(sum(data$SEX), length(data$SEX), 0.5)```

(Here we checked if this is true that the proportion of male is different from 50%.)

The significance of linear correlation between variables, parametric way (Pearson correlation test):

Code \(13\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
cor.test(data$WEIGHT, data$LENGTH, method="pearson")```

... and nonparametric way (Spearman’s correlation test):

Code \(14\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
cor.test(data$WEIGHT, data$LENGTH, method="spearman")```

The significance (and many more) of the linear model describing relation of one variable on another:

Code \(15\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
summary(lm(data$LENGTH ~ data$SEX))```

... and analysis of variation (ANOVA) based on the linear model:

Code \(16\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
aov(lm(data$LENGTH ~ data$SEX))```

8.05: Finishing...

Save command history from the menu (on macOS) or with command

Code \(1\) (R):

`savehistory("bugs.r")`

(on Windows or Linux.)
Attention! Always save everything you did in R!

Quit R by typing

Code \(2\) (R):

`q("no")`

Later, you can open the saved bugs.r in any text editor, change it, remove possible mistakes and redundancies, add more commands to it, copy fragments from it into the R window, and finally, run this file as an R script, either from within R, with the command source("bugs.r", echo=TRUE), or even without starting the interactive R session, typing in the console window something like Rscript bugs.r.

There is a deliberate mistake in this chapter. Please find it. Do not look on the next page.

8.06: Answers to exercises

Answer to the question about mistake. This is it:

Code \(1\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
hist(data$WEIGHT, breaks=c(seq(0, 100, 20))```

Here should be

Code \(2\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
hist(data$WEIGHT, breaks=c(seq(0, 100, 20)))```

By the way, non-paired brackets (and also non-paired quotes) are among the most frequent mistakes in R. Even more, the function seq() makes a vector, so the function c() is unnecessary and the better variant of the same command is

Code \(3\) (R):

```data <- read.table("data/bugs.txt", h=TRUE)
hist(data$WEIGHT, breaks=seq(0, 100, 20))```

Now the truth is that there are two mistakes in the text. We are sorry about it, but we believe it will help you to understand R code better. The second is not a syntactic mistake, it is more like an inconsistency between the text and the example. Please find it yourself.
Once upon a time there was a master student. He studied kubricks, and published a nice paper with many plots made in R. Then he graduated, started a family, and they lived happily ever after until ... ten years later, some new kubricks were discovered and he was asked to update his old plots with new data!

(By the way, the intro image was made with R! Here is how, thanks to Paul Murrell’s “R Graphics” book and his grid package:)

Code \(1\) (R):

`Gridmoon() # gmoon.r`

Below are recommendations for those R users who want to make their research reproducible in different labs, on different computers, and also on their own computer but 10 years later (or sometimes just 10 days after). How to proceed? Use R script!

09: Appendix B- Ten Years Later or use R script

Script is a core tool for reproducible, evaluable data analysis. Every R user must know how to make scripts. This is a short instruction for inexperienced users:

1. Save your history of commands, just in case.
2. Then copy-paste all necessary commands from your R console into the text editor (e.g., open a blank file in the R editor with the file.edit() command\(^{[1]}\)). Notes:
(a) Interactive commands which need user input, like help(), identify(), install.packages(), or url.show(), should not go into the script.
(b) All plotting commands should be within pdf(...) / dev.off() or similar.
(c) It is also a good idea to place your package/script loading commands first, then your data loading commands like read.table(), and finally the actual calculations and plotting.
(d) To add a single function, you may (1) type the function name without parentheses, (2) copy-paste the function name and its output into the script, and (3) insert the assignment operator after the name of the function.
(e) Try to optimize your script (Figure \(1\)), e.g., to remove all unnecessary commands. For example, pay attention to those which do not assign or plot anything. Some of them, however, might be useful to show your results on the screen.
(f) To learn how to write your scripts better, read style guides, e.g., Google’s R Style Guide on google.github.io/styleguide/....xml\(^{[2]}\).
3. Save your script. We recommend the .r extension; there are also other opinions like .R or .Rscript. Anyway, please do not forget to tell your OS to show file extensions, this could be really important.
4. Close R, do not save the workspace.
5. Make a test directory inside your working directory, or (if it already exists) delete it (with all contents) and then make it again from scratch.
6. Copy your script into the test directory. Note that the master version of the script (where you will insert changes) should stay outside of the test directory.
7. Start R, make test the working directory.
8. Run your script from within R via source("script_name.r", echo=TRUE). Note that: (a) R runs your script two times, first it checks for errors, then it performs the commands; and (b) all warnings will concentrate at the end of the output (so please do not worry).

It is really important to check your script exactly as described above, because in this case commands and objects saved in a previous session will not interfere with your script commands. Alternatively, you can use the non-interactive way with the Rresults shell script (see below).

If everything is well (please check especially if all plot files exist and open correctly in your independent viewer), then the script is ready. If not, open the script in the editor and try to find a mistake (see below), then correct, close R, re-create (delete old and make new) the test directory and repeat.
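Putting these recommendations together, a minimal skeleton of such a script might look as follows (the file and object names here are only placeholders, not from the original text):

```## packages and external functions first
library(vegan)
source("http://ashipunov.info/r/asmisc.r")
## then data loading (hypothetical data file)
mydata <- read.table("data/mydata.txt", h=TRUE)
## checks
str(mydata)
## calculations
mydata.d <- vegdist(mydata, "jaccard")
## plotting goes into a file, not only to the screen
pdf("mydata_dendrogram.pdf")
plot(hclust(mydata.d, method="ward.D"))
dev.off()```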
When your script is ready, you may use it as the most convenient way to protocol and even to report your work. Most important is that your script is self-sufficient: it downloads all data, loads all packages and makes all plots itself.

Actually, this book is one giant R script. When I run it, all R plots are re-created. This is the first plus: the exact correspondence between code and plots. The second plus is that all code is checked with R, and if there is a mistake, it will simply stop. I do not control textual output because I want to modify it, e.g., to make it fit better with the text. By the way, there is a small trick hidden in the book code. Can you find it?
What if your script does not work? Most likely, there is some message (which you probably do not understand) but outputs nothing or something inappropriate. You will need to debug your script! • First is to find where exactly your script fails. If you run source() command with echo=TRUE option, this is possible just by looking into output. If this is still not clear, run the script piece by piece: open it in any simple text editor and copy-paste pieces of the script from the beginning to the end into the R window. • Above mentioned is related with one of the most important principles of debugging: minimize your code as much as possible, and find the minimal example which still does not work. It is likely that you will see the mistake after minimization. If not, that minimal example will be appropriate to post somewhere with a question. • Related with the above is that if you want to ask somebody else about your R problem, make not only minimal, but minimal self-contained example. If your script loads some data, attach it to your question, or use some embedded R data (like trees or iris), or generate data with sample(), runif(), seq(), rep(), rnorm() or other command. Even R experts are unable to answer questions without data. • Back to the script. In R, many expressions are “Russian dolls” so to understand how they work (or why they do not work), you will need to take them to pieces, “undress”, removing parentheses from the outside and going deeper to the core of expression like: Code \(1\) (R): ```plot(log(trees\$Volume), 1:nrow(trees)) log(trees\$Volume) trees\$Volume trees 1:nrow(trees) nrow(trees)``` This research could be interleaved with occasional calls to the help like ?log or ?nrow. • To make smaller script, do not remove pieces forever. Use commenting instead, both one-line and multi-line. The last is not defined in R directly but one can use: Code \(2\) (R): `if(0) {## print here anything _syntactically correct_}` • If your problem is likely within the large function, especially within the cycle, use some way to “look inside”. For example, with print(): Code \(3\) (R): ```abc <- function(x) { for (i in 1:10) x <- x+1; x } abc(5) abc <- function(x) { for (i in 1:10) { x <- x+1; print(x)}} abc(5)``` Of course, R has much more advanced functions for debugging but frequently minimization and analysis of print()’s (this is called tracing) are enough to solve the problem. • The most common problems are mismatched parentheses or square brackets, and missing commas. Using a text editor with syntax highlighting can eliminate many of these problems. One of useful precautions is always count open and close brackets. These counts should be equal. • Scripts or command sets downloaded from Internet could suffer from automatic tools which, for example, convert quotes into quotes-like (but not readable in R) symbols. The only solution is to carefully replace them with the correct R quotes. By the way, this is another reason why not to use office document editors for R. • Sometimes, your script does not work because your data changed and now conflicts with your script. This should not happen if your script was made using “paranoid mode”, commands which are generally safe for all kinds of data, like mat.or.vec() which makes vector if only one column is specified, and matrix otherwise. Another useful “paranoid” custom is to make checks like if(is.matrix) { ... } everywhere. 
These precautions allow to avoid situations when you updated data start to be of another type, for example, you had in the past one column, and now you have two. Of course, something always should be left to chance, but this means that you should be ready to conflicts of this sort. • Sometimes, script does not work because there were changes in R. For example, in the beginning of its history, R used underscore (_) for the left assignment, together with <-. The story is when S language was in development, on some keyboards underscore was located where on other keyboards there was left arrow (as one symbol). These two assignment operators were inherited in R. Later, R team decided to get rid of underscore as an assignment. Therefore, older scripts might not work in newer R. Another, more recent example was to change clustering method="ward" to method="ward.D". This was because initial implementation of Ward’s method worked well but did not reflect the original description. Consequently, in older versions of R newer scripts might stop to work. Fortunately, in R cases like first (broken backward compatibility) or second (broken forward compatibility) are rare. They are more frequent in R packages though. • If you downloaded the script and do not understand what it is doing, use minimization and other principles explained above. But even more important is to play with a script, change options, change order of commands, feed it with different data, and so on. Remember that (almost) everything what is made by one human could be deciphered by another one.
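As an illustration of the “paranoid mode” advice above, here is a small sketch (the file and column names are hypothetical, not from the original text) of defensive checks which could be placed at the top of a script:

```mydata <- read.table("data/mydata.txt", h=TRUE) # hypothetical data file
## stop early, with a clear message, if the data is not what the script expects
stopifnot(is.data.frame(mydata), ncol(mydata) >= 4)
if (!all(c("SEX", "WEIGHT") %in% names(mydata))) stop("Some expected columns are missing!")```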
Patient: Doc, it hurts when I do this.
Doctor: Don’t do that.

To those readers who want to dig deeper, this section continues to explain why R scripts sometimes do not work, and how to solve these problems.

Advice

Use the Source, Luke!.. The most effective way to know what is going on is to look into the source of the R function of interest. The simplest way to access the source is to type the function name without parentheses. If the function is buried deeper, then try to use methods() and getAnywhere(). In some cases, functions are actually not R code, but C or even Fortran. Download the R source, open it and find out. This last method (download the source) works well for simpler cases too.

Keep it simple Try not to use any external packages, any complicated plots, any custom functions and even some basic functions (like subset()) without absolute need. This increases reproducibility and makes your life easier. Analogously, it is better to avoid running R through any external system. Even the macOS R shell can bring problems (remember the history issues?). RStudio is a great piece of software but it is prone to the same problem.

Learn to love errors and warnings They help! If the code issues an error or a warning, it is a symptom of something wrong. Much worse is when the code does not issue anything but produces unreliable results. However, warnings sometimes are really boring, especially if you know what is going on and why you have them. On macOS it is even worse because they are colored in red... So use the suppressWarnings() function, but again, only when you know what you are doing. You can think of it as headache pills: useful but potentially dangerous.

Subselect by names, not numbers Selecting columns by numbers (like trees[, 2:3]) is convenient but dangerous if you changed your object from the original one. It is always better to use the longer approach and select by names, like

Code \(1\) (R):

`trees[, c("Height", "Volume")]`

When you select by name, be aware of two things. First, selection by one name will return NULL and can make a new column if anything is assigned on the right side. This works only for [[ and $:

Code \(2\) (R):

`trees[, "aaa"]`

Code \(3\) (R):

```trees[["aaa"]]
trees$aaa```

(See also “A Case of Identity” below.)

Second, negative selection works only with numbers:

Code \(4\) (R):

`trees[, -c("Height", "Volume")]`

Code \(5\) (R):

`trees[, -which(names(trees) %in% c("Height", "Volume"))]`

About reserved words, again Try to avoid naming your objects with reserved words (?Reserved). Be especially careful with T, F, and return. If you assign them to any other object, consequences could be unpredictable. This is, by the way, another good reason to write TRUE instead of T and FALSE instead of F (you cannot assign anything to TRUE and FALSE). It is also a really bad idea to assign anything to .Last.value. However, using the default .Last.value (it is not a function, see ?.Last.value) could be a fruitful idea. If you modified internal data and want to restore it, use something like data(trees).

The Case-book of Advanced R user

The Adventure of the Factor String By default, R converts textual strings into factors. This is useful for making contrasts but brings problems into many other applications. To avoid this behavior in read.table(), use the as.is=TRUE option, and in data frame operations, use stringsAsFactors=FALSE (or the global option of the same name). Also, always control the mode of your objects with str().
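Here is a small sketch (not from the original text) showing the difference; note that in newer versions of R (4.0 and later) the default has already changed to stringsAsFactors=FALSE:

```aa <- data.frame(x=c("red", "blue"), stringsAsFactors=TRUE)
str(aa) # x is a factor
bb <- data.frame(x=c("red", "blue"), stringsAsFactors=FALSE)
str(bb) # x stays character```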
A Case of Were-objects When R object undergoes some automatic changes, sooner or later you will see that it changes the type, mode or structure and therefore escapes from your control. Typically, it happens when you make an object smaller: Code \(6\) (R): ```mode(trees) trees2 <- trees[, 2:3] mode(trees2) trees1 <- trees2[, 2] mode(trees1)``` Data frames and matrices normally drop dimensions after reduction. To prevent this, use [, , drop=FALSE] argument. There is even function mat.or.vec(), please check how it works. Factors, on other hand, do not drop levels after reductions. To prevent, use [, drop= TRUE]. Empty zombie objects appear when you apply malformed selection condition: Code \(7\) (R): ```trees.new <- trees[trees[, 1] < 0, ] str(trees.new)``` To avoid such situations (there are more pitfalls of this kind), try to use str() (or Str() from asmisc.r) every time you create new object. A Case of Missing Compare If missing data are present, comparisons should be thought carefully: Code \(8\) (R): ```aa <- c(1, NA, 3) aa[aa != 1] # bad idea aa[aa != 1 & !is.na(aa)] # good idea``` A Case of Outlaw Parameters Consider the following: Code \(9\) (R): ```mean(trees[, 1]) mean(trees[, 1], .2) mean(trees[, 1], t=.2) mean(trees[, 1], tr=.2) mean(trees[, 1], tri=.2) mean(trees[, 1], trim=.2) mean(trees[, 1], trimm=.2) # why?! mean(trees[, 1], anyweirdoption=1) # what?!``` Problem is that R frequently ignores illegal parameters. In some cases, this makes debugging difficult. However, not all functions are equal: Code \(10\) (R): ```IQR(trees[, 1]) IQR(trees[, 1], t=8) IQR(trees[, 1], type=8) IQR(trees[, 1], types=8) ``` Code \(11\) (R): `IQR(trees[, 1], anyweirdoption=1)` And some functions are even more weird: Code \(12\) (R): ```bb <- boxplot(1:20, plot=FALSE) bxp(bb, horiz=T) # plots OK boxplot(1:20, horiz=T) # does not plot horizontally! boxplot(1:20, horizontal=T) # this is what you need``` The general reason of all these different behaviors is that functions above are internally different. The first case is especially harmful because R does not react on your misprints. Be careful. A Case of Identity Similar by consequences is an example when something was selected from list but the name was mistyped: Code \(13\) (R): ```prop.test(3, 23) pval <- prop.test(3, 23)\$pvalue pval pval <- prop.test(3, 23)\$p.value # correct identity! pval``` This is not a bug but a feature of lists and data frames. For example, it will allow to grow them seamlessly. However, mistypes do not raise any errors and therefore this might be a problem when you debug. The Adventure of the Floating Point This is well known to all computer scientists but could be new to unexperienced users: Code \(14\) (R): ```aa <- sqrt(2) aa * aa == 2 aa * aa - 2``` What is going on? Elementary, my dear reader. Computers work only with 0 and 1 and do not know about floating points numbers. Instead of exact comparison, use “near exact” all.equal() which is aware of this situation: Code \(15\) (R): ```all.equal(aa * aa, 2) all.equal(aa * aa - 2, 0)``` A Case of Twin Files Do this small exercise, preferably on two computers, one under Windows and another under Linux: Code \(16\) (R): ```pdf("Ex.pdf") plot(1) dev.off() pdf("ex.pdf") plot(1:3) dev.off()``` On Linux, there are two files with proper numbers of dots in each, but on Windows, there is only one file named Ex.pdf but with three dots! This is even worse on macOS, because typical installation behaves like Windows but there are other variants too. Do not use uppercase in file names. 
And do not use any other symbols (including spaces) except lowercase ASCII letters, underscore, 0–9 numbers, and a dot for the extension. This will help to make your work portable.

A Case of Bad Grammar The style of your scripts could be a matter of taste, but not always. Consider the following:

Code \(17\) (R):

`aa<-3`

This could be interpreted as either

Code \(18\) (R):

`aa <- 3`

or

Code \(19\) (R):

`aa < -3`

Always keep spaces around assignments. Spaces after commas are not so important but they will help to read your script.

A Case of Double Dipping Double comparisons do not work! Use logical concatenation instead.

Code \(20\) (R):

```aa <- 3
0 < aa < 10```

Code \(21\) (R):

```aa <- 3
aa > 0 & aa < 10```

A Case of Factor Join There is no c() for factors in R; the result will not be a factor but numerical codes. This is consistent with the nature of factors. However, if you really want to concatenate factors and return the result as a factor, the ?c help page recommends:

Code \(22\) (R):

```c(factor(LETTERS[1:3]), factor(letters[1:3]))
c.factor <- function(..., recursive=TRUE) unlist(list(...), recursive=recursive)
c(factor(LETTERS[1:3]), factor(letters[1:3]))```

A Case of Bad Font Here is a particularly nasty error:

Code \(23\) (R):

`ll <- seq(0, 1, 1ength=10)`

Unfortunately, this is a well-known problem. It is always better to use a good, visually discernible monospaced font. Also avoid the lowercase “l” as an object name, just in case. Use “j” instead, it is much easier to spot. By the way, the error message shows the problem because it stops printing exactly where something is wrong.
This last section is even more practical. Let us discuss several R scripts. Good This is an example of (almost) ideal R script: Code \(1\) (R): ```### PREPARATIONS library(effsize) Normal <- function(x) { ifelse(shapiro.test(x)\$p.value > 0.05, "NORMAL", "NON-NORMAL")} cc <-read.table("http://ashipunov.info/shipunov/open/ceratophyllum.txt",h=TRUE) ### DATA PROCESING ## check data: str(cc) head(cc) sapply(cc[, 4:5], Normal) # both non-normal ## plot it first: pdf("plot1.pdf") boxplot(cc[, 4:5], ylab="Position of stem, mm on grid") dev.off() ## we only need effect size: cliff.delta(cc\$PLANT1, cc\$PLANT2)``` Its main features: • clearly separated parts: loading of external material (lines 1–12) and processing of the data itself (lines 13–26) • package(s) first (line 3), then custom functions (line 5–7), then data (line 9) • data is checked (lines 16–18) with str() and then checked for normality • after checks, data was plotted first (lines 21–23), then analyzed (line 26) • acceptable style • every step is commented To see how it works, change working directory to where script is located, then load this script into R with: Code \(2\) (R): `source("good.r", echo=TRUE)` Another variant is non-interactive and therefore faster and cleaner. Use Rresults script (works on macOS and Linux) like: Code \(3\) (R): `Rresults good.r` Script will print both input and output to the terminal, plus also save it as a text file and save plots in one PDF file with script name. Since this book is the R script, you will find examples of Rresults output in the on-line book directory\(^{[1]}\). Bad Now consider the following script: Code \(4\) (R): ```wiltingdata<-read.table("http://ashipunov.info/shipunov/open/wilting.txt", h=TRUE) url.show("http://ashipunov.info/shipunov/open/wilting_c.txt") sapply(wiltingdata, Normality) willowsdata<-wiltingdata[grep("Salix",wiltingdata\$SPECIES),] Rro.test(willows[,1],willows[,2]) summary(K(willows[,1],willows[,2])) source("http://ashipunov.info/r/asmisc.r") plot(wiltingadta)``` It is really bad, it simply does not work. Problems start on the first line, and both interactive (with source()) and non-interactive (with Rresults) ways will show it like: Code \(5\) (R): `wiltingdata<- read.table("http://ashipunov.info/shipunov/open/wilting.txt",h=TRUE)` Something is really wrong and you will need to find and correct (debug) it. And since code was not commented, you have to guess what author(s) actually wanted. Other negative features: • no parts, no proper order of loading, checking and plotting • interactive url.show() will block non-interactive applications and therefore is potentially harmful (not a mistake though) • bad style: in particular, no spaces around assignments and no spaces after commas • very long object names (hard to type) Debugging process will consist of multiple tries until we make the working (preferably in the sensible way), “not-too-bad” script. This could be prettified later, most important is to make it work. There are many ways to debug. For example, you can open (1) R in the terminal, (2) text editor\(^{[2]}\) with your script and probably also some advanced (3) file manager. Run the script first to see the problem. Then copy-paste from R to editor and back again. Let us go to the first line problem first. Message is cryptic, but likely this is some conflict between read.table() and the actual data. Therefore, you need to look on data and if you do, you will find that data contains both spaces and tabs. This is why R was confused. 
You should tell it to use tabs:

Code \(6\) (R):

`read.table("http://ashipunov.info/shipunov/open/wilting.txt", h=TRUE, sep="\t")`

The first line starts to work. This way, step by step, you will come to the next stage.

Not too bad This is the result of debugging. It is not yet fully prettified, there are no separated parts and no comments. However, it works, and likely in the way implied by the authors.

Code \(7\) (R):

```wilt <- read.table("http://ashipunov.info/shipunov/open/wilting.txt",
 h=TRUE, sep="\t")
## url.show("http://ashipunov.info/shipunov/open/wilting_c.txt")
source("http://ashipunov.info/r/asmisc.r")
str(wilt)
Normality(wilt[, 2])
willowsdata <- wilt[grep("Salix", wilt$SPECIES), ]
willowsdata$SPECIES <- droplevels(willowsdata$SPECIES)
willows <- split(willowsdata$TIME, willowsdata$SPECIES)
pdf("willows.pdf")
boxplot(willows, ylab="Wilting time, minutes")
dev.off()
Rro.test(willows[[1]], willows[[2]])
summary(K(willows[[1]], willows[[2]]))```

What was changed

• custom commands moved up to line 3 (not to the proper place, better would be line 1, but this position guarantees work)
• url.show() commented out
• checks added (lines 5–6)
• names shortened a bit and style improved (not very important but useful)
• plotting now plots to a file, not just to the screen device
• the object willows appeared out of nowhere, therefore we had to guess what it is, why it was used, and then somehow recreate it (lines 8–9)

We recreated this object, but it is not the same as in the initial script. What is different? Is it possible to make them the same?

9.05: Answers to exercises

Answer to question about book code. If you are really attentive, you might find that some lines of code are preceded by a space before the greater sign. For example, `q()` in the second chapter. Of course, I do not want R to exit so early. This is why this code is not processed. Now you can find other examples and think about why they do not go through R.

Answer to question about recreating the object implied in the “bad” script. Our new object apparently is a list and requires subsetting with double brackets, whereas the original object was likely a matrix with two columns, each representing one species. We can stack() our list and make it a data frame, but this will not help us to subset exactly like in the original version. The other way is to make both species parts of exactly equal lengths and then it is easy to make (e.g., with cbind()) a matrix which will consist of two columns (species). However, this will result in losing some data. Maybe they used some different version of the data? It is hard to tell.

Do not make bad scripts!

(And this concluding image was made with command:)

Code \(1\) (R):

`Gridmoon(Nightsky=FALSE, Moon=FALSE, Stars=FALSE, Hillcol="forestgreen", Text="Use R script!", Textcol="yellow", Textpos=c(.35,.85), Textsize=96) # gmoon.r`
• 10.1: R and databases There are many interfaces which connect R with different database management software, and there is even the package sqldf which allows working with R data frames through commands of the SQL language.
• 10.2: R and time
• 10.3: R and Bootstrapping All generalities like standard deviation and mean are normally taken from a sample but are meant to represent the whole statistical population. Therefore, it is possible that these estimations could be seriously wrong. Statistical techniques like bootstrapping were designed to minimize the risk of these errors. Bootstrap is based only on the given sample but tries to estimate the whole population.
• 10.4: R and shape Analysis of biological shape is a really useful technique. Inspired by the highly influential works of D’Arcy Thompson, it takes into account not the linear measurements but the whole shape of the object: contours of teeth, bones, leaves, flower petals, and even 3D objects like skulls or beaks. Naturally, shape is not exactly measurement data, it should be analyzed with special approaches. There are methods based on the analysis of curves and methods which use landmarks and thin-plate splines (TPS).
• 10.5: R and Bayes
• 10.6: R, DNA and evolution
• 10.7: R and reporting
• 10.8: Answers to exercises

10: Appendix C- R fragments

There are many interfaces which connect R with different database management software, and there is even the package sqldf which allows working with R data frames through commands of the SQL language. However, the R core also can work in a database-like way, even without serious extensions. Table \(1\) shows the correspondence between SQL operators and commands of R.

SQL operator	R function(s)
SELECT	[, subset()
JOIN	merge()
GROUP BY	aggregate(), tapply()
DISTINCT	unique(), duplicated()
ORDER BY	order(), sort(), rev()
WHERE	which(), %in%, ==
LIKE	grep()
INSERT	rbind()
EXCEPT	! and -

Table \(1\) Approximate correspondence between SQL operators and R functions.

One of the most significant disadvantages is that many of these R commands (like merge()) are slow. Below is an example of the user functions which work much faster:

Code \(1\) (R):

```Recode <- function(var, from, to) {
 x <- as.vector(var)
 x.tmp <- x
 for (i in 1:length(from)) {x <- replace(x, x.tmp == from[i], to[i])}
 if(is.factor(var)) factor(x) else x}```

Now we can operate with multiple data frames as with one. This is important if data is organized hierarchically. For example, if we are measuring plants in different regions, we might want to have two tables: the first with regional data, and the second with results of measurements. To connect these two tables, we need a key, the same column which is present in both tables:

Code \(2\) (R):

```locations <- read.table("http://ashipunov.info/shipunov/open/eq_l.txt",
 h=TRUE, sep=";")
measurements <- read.table("http://ashipunov.info/shipunov/open/eq_s.txt",
 h=TRUE, sep=";")
head(locations)
head(measurements)
loc.N.POP <- Recode(measurements$N.POP, locations$N.POP,
 as.character(locations$SPECIES))
head(cbind(species=loc.N.POP, measurements))```

Here we showed how to work with two related tables using the Recode() command. The first table contains locations, the second contains measurements. Species names are only in the first table. If we want to know the correspondence between species and characters, we might want to merge these tables. The key is the N.POP column (location ID).

The recode.r collection of R functions distributed with this book contains the ready-to-use Recode() function and related useful tools.
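For comparison, the same join could also be done with the base merge() function mentioned in Table \(1\); a small sketch, assuming the locations and measurements objects created above:

```eq.merged <- merge(measurements, locations[, c("N.POP", "SPECIES")], by="N.POP")
head(eq.merged) # every measurement row now carries its species name```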
There is another feature related to databasing: quite frequently, there is a need to convert “text to columns”. This is especially important when the data contain pieces of text instead of single words:
Code \(3\) (R):
```
m <- c("Plantago major", "Plantago lanceolata", "Littorella uniflora")
do.call(rbind, strsplit(m, split=" ")) # one space inside quotes
```
(The vectorized do.call() function constructs and evaluates a function call from a list of arguments; here it row-binds the split pieces into a matrix.)
There is also the data encoding operation, which converts categorical data into binary (0/1) form. Several ways are possible:
Code \(4\) (R):
```
m <- c("L", "S", "XL", "XXL", "S", "M", "L")
m.f <- factor(m)
m.o <- ordered(m.f, levels=c("S", "M", "L", "XL", "XXL"))
m.o
model.matrix( ~ m.o - 1, data=data.frame(m.o))
Tobin(m.o, convert.names=FALSE) # asmisc.r
```
R and TeX are friendly pieces of software, so it is possible to make them work together in order to automate book production. Such books will be “semi-static”: the starting data come from a regularly updated database, and then R scripts and TeX create typographically complicated documents. Flora and fauna manuals and checklists are perfect candidates for such semi-static manuals. The book supplements contain the archived folder manual.zip, which illustrates how this approach works on the example of the imaginary “kubricks” (see above).
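Returning to the “text to columns” example above, the split pieces can be attached back to the data as ordinary columns; a small sketch (the column names GENUS and SPECIES are mine, not from the book):
```
m <- c("Plantago major", "Plantago lanceolata", "Littorella uniflora")
parts <- do.call(rbind, strsplit(m, split=" ")) # matrix: one row per name, one column per word
data.frame(GENUS=parts[, 1], SPECIES=parts[, 2])
```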
textbooks/stats/Introductory_Statistics/Book%3A_Visual_Statistics_Use_R_(Shipunov)/10%3A_Appendix_C-_R_fragments/10.01%3A_R_and_databases.txt
If we measure the same object multiple times, especially at regular (sampling) intervals, we will finally have a time series, a specific type of measurement data. While many common methods of data analysis are applicable to time series, there are also multiple specific methods and plots.
Time series frequently have two components, non-random and random. The non-random component could in turn contain a seasonal part, which is related to time periodically (like the seasons of the year, or day and night), and a trend, which is non-random but non-periodical. If a time series has a non-random component, later values should correlate with earlier values. This is autocorrelation. Autocorrelation has lags, intervals of time where the correlation is maximal. These lags could be organized hierarchically. Different time series could be cross-correlated if they are related.
If the goal is to analyze the time series and (1) fill the gaps within it (interpolation) or (2) make a forecast (extrapolation), then one needs to create a time series model (for example, with the arima() function). But before the start, one will need to convert the ordinary data frame or vector into a time series. Conversion of dates is probably the most complicated part:
Code \(1\) (R):
```
dates.df <- data.frame(dates=c("2011-01-01", "2011-01-02", "2011-01-03", "2011-01-04", "2011-01-05"))
str(dates.df\$dates)
dates.1 <- as.Date(dates.df\$dates, "%Y-%m-%d")
str(dates.1)
```
In that example, we showed how to use the as.Date() function to convert one type into another. Actually, our recommendation is to use the fully numerical date:
Code \(2\) (R):
```
d <- c(20130708, 19990203, 17650101)
as.Date(as.character(d), "%Y%m%d")
```
The advantage of this system is that dates here are accessible (for example, for sorting) both as numbers and as dates. And here is how to create a time series of the regular type:
Code \(3\) (R):
```
ts(1:10, # sequence
 frequency = 4, # by quarter
 start = c(1959, 2)) # start in the second quarter of 1959
```
(If the time series is irregular, one may want to apply its() from the its package.)
It is possible to convert a whole matrix. In that case, every column will become a time series:
Code \(4\) (R):
```
z <- ts(matrix(rnorm(30), 10, 3),
 start=c(1961, 1), # start in January 1961
 frequency=12) # by months
class(z)
```
The generic plot() function “knows” how to show time series (Figure \(1\)):
Code \(5\) (R):
```
z <- ts(matrix(rnorm(30), 10, 3), start=c(1961, 1), frequency=12)
plot(z, plot.type="single", # place all series on one plot
 lty=1:3)
```
(There is also the specialized ts.plot() function.)
There are numerous analytical methods applicable to time series. We will show some of them on the example of “non-stop” observations of a carnivorous plant, the sundew (Drosera rotundifolia). In nature, leaves of sundew constantly open and close in the hope of catching and then digesting insect prey (Figure \(2\)). The file sundew.txt contains results of observations related to the fourth leaf of the second plant in the observed group. The leaf condition was noted every 40 minutes, and there were 36 observations per 24 hours. We will try to make a time series from the SHAPE column, which encodes the shape of the leaf blade (1 flat, 2 concave); this is ranked data, since it is possible to imagine SHAPE \(= 1.5\). The file.show() command reveals this structure:
WET;SHAPE
2;1
1;1
1;1
...
Now we can read the file and check it:
Code \(6\) (R):
```
leaf <- read.table("data/sundew.txt", h=TRUE, sep=";")
str(leaf)
summary(leaf)
```
Everything looks fine; there are no visible errors or outliers. Now convert the SHAPE variable into a time series:
Code \(7\) (R):
```
leaf <- read.table("data/sundew.txt", h=TRUE, sep=";")
shape <- ts(leaf\$SHAPE, frequency=36)
```
Let us check it:
Code \(8\) (R):
```
leaf <- read.table("data/sundew.txt", h=TRUE, sep=";")
shape <- ts(leaf\$SHAPE, frequency=36)
str(shape)
```
This looks right, because our observations lasted for slightly more than 3 days. Now assess the periodicity of the time series (the seasonal component) and check for a possible trend (Figure \(3\)):
Code \(9\) (R):
```
leaf <- read.table("data/sundew.txt", h=TRUE, sep=";")
shape <- ts(leaf\$SHAPE, frequency=36)
(acf(shape, main=expression(italic("Drosera")*" leaf")))
```
(Please note also how expression() was used to make part of the title italic, as is traditional in biology.)
The acf() command (auto-correlation function) outputs the coefficients of autocorrelation and also draws the autocorrelation plot. In our case, significant periodicity is absent because almost all peaks lie within the confidence interval. Only the first three peaks are outside; these correspond to lags lower than 0.05 day (about 1 hour or less). This means that within one hour the leaf shape will stay the same; on larger intervals (we have a 24 h period), such predictions are not really possible. However, there is a tendency in the peaks: they become much smaller to the right. This could be the sign of a trend. Check it (Figure \(4\)):
Code \(10\) (R):
```
leaf <- read.table("data/sundew.txt", h=TRUE, sep=";")
shape <- ts(leaf\$SHAPE, frequency=36)
plot(stl(shape, s.window="periodic")\$time.series, main="")
```
As you see, there is a tendency for SHAPE to decrease with time. We used the stl() function (STL, “Seasonal Decomposition of Time Series by Loess”) to show that. STL segregates the time series into seasonal (day length in our case), random and trend components.
WET is the second character in our sundew dataset; it shows the wetness of the leaf. Does wetness have the same periodicity and trend as the leaf shape?
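The arima() function mentioned at the beginning of this section was not demonstrated. Below is a hedged, minimal forecasting sketch; it uses the built-in ldeaths series (monthly deaths from lung diseases in the UK) as a stand-in, since the sundew series is short and ranked, and the model order is an illustrative assumption, not a recommendation:
```
fit <- arima(ldeaths, order=c(1, 0, 0)) # simple AR(1) model
predict(fit, n.ahead=12)\$pred # point forecasts for the next 12 months
```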
textbooks/stats/Introductory_Statistics/Book%3A_Visual_Statistics_Use_R_(Shipunov)/10%3A_Appendix_C-_R_fragments/10.02%3A_R_and_time.txt
Statistics like the standard deviation and the mean are normally taken from a sample but are meant to represent the whole statistical population. Therefore, it is possible that these estimates are seriously wrong. Statistical techniques like bootstrapping were designed to minimize the risk of such errors. The bootstrap is based only on the given sample but tries to estimate the whole population. The idea of the bootstrap was inspired by Buerger and Raspe's “Baron Munchausen's miraculous adventures”, where the main character pulls himself (along with his horse) out of a swamp by his hair (Figure \(1\)). The statistical bootstrap was actively promoted by Bradley Efron since the 1970s but was not used frequently until the 2000s because it is computationally intensive. In essence, the bootstrap is a re-sampling strategy which replaces part of the sample with a subsample of itself. In R, we can simply sample() our data with replacement.
First, we will bootstrap the mean (Figure \(2\)) using the advanced boot package:
Code \(1\) (R):
```
library(boot)
## Statistic to be bootstrapped:
ave.b <- function (data, indices){
 d <- data[indices]
 return(mean(d))
}
(result.b <- boot(data=trees\$Height, statistic=ave.b, R=100))
```
(Note that here, and in many other places in this book, the number of replicates is 100. For real work, however, we recommend at least 1,000.)
Code \(2\) (R):
```
library(boot)
ave.b <- function (data, indices){
 d <- data[indices]
 return(mean(d))
}
result.b <- boot(data=trees\$Height, statistic=ave.b, R=100)
plot(result.b)
```
The boot package also allows one to calculate the 95% confidence interval:
Code \(3\) (R):
```
library(boot)
ave.b <- function (data, indices){
 d <- data[indices]
 return(mean(d))
}
result.b <- boot(data=trees\$Height, statistic=ave.b, R=100)
boot.ci(result.b, type="bca")
```
The more basic bootstrap package bootstraps in a simpler way. To demonstrate, we will use the spur.txt data file. These data are the results of measurements of spur length on 1511 Dactylorhiza orchid flowers. The length of the spur is important because only pollinators with mouth parts comparable to the spur length can successfully pollinate these flowers.
Code \(4\) (R):
```
spur <- scan("data/spur.txt")
library(bootstrap)
result.b2 <- bootstrap(x=spur, 100, function(x) mean(x))
## Median of bootstrapped values:
median(result.b2\$thetastar)
## Confidence interval:
quantile(result.b2\$thetastar, probs=c(0.025, 0.975))
```
The jackknife is similar to the bootstrap, but in that case observations are taken out of the sample one by one, without replacement:
Code \(5\) (R):
```
spur <- scan("data/spur.txt")
result.j <- jackknife(x=spur, function(x) mean(x))
## Median of jackknifed values:
median(result.j\$jack.values)
## Standard error:
result.j\$jack.se
## Confidence interval:
quantile(result.j\$jack.values, probs=c(0.025, 0.975))
```
It is possible to bootstrap the standard deviation and the mean of these data even without any extra package, with a for cycle and sample():
Code \(6\) (R):
```
spur <- scan("data/spur.txt")
boot <- 100
tt <- matrix(ncol=2, nrow=boot)
for (n in 1:boot) {
 spur.sample <- sample(spur, length(spur), replace=TRUE)
 tt[n, 1] <- mean(spur.sample)
 tt[n, 2] <- sd(spur.sample)
}
(result <- data.frame(spur.mean=mean(spur), spur.sd=sd(spur), boot.mean=mean(tt[, 1]), boot.sd=mean(tt[, 2])))
```
(Alternatively, tt could be an empty data frame, but that way takes more computer time, which matters for the bootstrap. What we did above is pre-allocation, a useful way to save time and memory.)
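A natural follow-up to the manual loop above is to turn the bootstrapped values into a rough percentile confidence interval. A minimal sketch, using the Height column of the built-in trees data instead of spur (which needs the external file):
```
set.seed(1) # only to make this sketch reproducible
boot.n <- 100 # at least 1,000 is recommended for real work
means <- numeric(boot.n) # pre-allocation, as discussed above
for (n in 1:boot.n) means[n] <- mean(sample(trees\$Height, replace=TRUE))
quantile(means, probs=c(0.025, 0.975)) # rough 95% percentile interval
```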
Actually, the spur length distribution does not follow the normal law (check it yourself). It is better, then, to estimate the median and the median absolute deviation (instead of the mean and standard deviation), or the median and the 95% range:
Code \(7\) (R):
```
dact <- scan("data/dact.txt")
quantile(dact, c(0.025, 0.5, 0.975))
apply(replicate(100, quantile(sample(dact, length(dact), replace=TRUE), c(0.025, 0.5, 0.975))), 1, mean)
```
(Note the use of the replicate() function, another member of the apply() family.)
This approach allows one to bootstrap almost any measure. Let us, for example, bootstrap the 95% confidence interval for Lyubishchev’s K:
Code \(8\) (R):
```
sleep.K.rep <- replicate(100, K(extra ~ group, data=sleep[sample(1:nrow(sleep), replace=TRUE), ]))
quantile(sleep.K.rep, c(0.025, 0.975))
```
Bootstrap and jackknife are related to numerous other resampling techniques. There are multiple R packages (like coin) providing resampling tests and related procedures:
Code \(9\) (R):
```
library(coin)
wilcox_test(V1 ~ V2, data=subset(grades, V2 %in% c("A1", "B1")), conf.int=TRUE)
```
The bootstrap is also widely used in machine learning. Above there was an example of the Jclust() function from the asmisc.r set; there are also BootA(), BootRF() and BootKNN() to bootstrap non-supervised and supervised results. ) plants. Use bootstrap and resampling methods.
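Related to the coin example above is the plain permutation test, which takes only a few lines of base R. A hedged sketch with the built-in sleep data (the same data used for Lyubishchev's K above): the group labels are reshuffled many times, and the observed difference of group means is compared with the reshuffled ones.
```
set.seed(1)
obs <- diff(tapply(sleep\$extra, sleep\$group, mean)) # observed difference of group means
perm <- replicate(1000, diff(tapply(sleep\$extra, sample(sleep\$group), mean))) # label reshuffling
mean(abs(perm) >= abs(obs)) # two-sided permutation p-value
```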
textbooks/stats/Introductory_Statistics/Book%3A_Visual_Statistics_Use_R_(Shipunov)/10%3A_Appendix_C-_R_fragments/10.03%3A_R_and_Bootstrapping.txt
Analysis of biological shape is a really useful technique. Inspired by the highly influential works of D’Arcy Thompson\(^{[1]}\), it takes into account not the linear measurements but the whole shape of the object: contours of teeth, bones, leaves, flower petals, and even 3D objects like skulls or beaks. Naturally, shape is not exactly measurement data; it should be analyzed with special approaches. There are methods based on the analysis of curves (namely, Fourier coefficients) and methods which use landmarks and thin-plate splines (TPS). The latter method allows one to visualize aligned shapes with PCA (in the so-called tangent space) and to plot transformation grids. In R, several packages are capable of performing this statistical analysis of shape, or geometric morphometry. Fourier analysis is possible with momocs, and the landmark analysis used below with the geomorph package:
Code \(1\) (R):
```
library(geomorph)
TangentSpace2 <- function(A){
 x <- two.d.array(A)
 pc.res <- prcomp(x)
 pcdata <- pc.res\$x
 list(array=x, pc.summary=summary(pc.res), pc.scores=pcdata)
}
```
(One additional function was defined to simplify the workflow.)
The data come from leaf measurements of an alder tree. There are two data files: a classic morphometric dataset with multiple linear measurements, and a geometric morphometric dataset:
Code \(2\) (R):
```
am <- read.table("data/bigaln.txt", sep=";", head=TRUE)
ag <- readland.tps("data/bigaln.tps", specID="imageID")
```
The geometric morphometric data were prepared with a separate program, tpsDig\(^{[2]}\). In the field, every leaf was contoured with a sharp pencil, and then all images were scanned. Next, the PNG images were supplied to tpsDig and went through landmark mapping\(^{[3]}\). In total, there were 12 landmarks: top, base, and endpoints of the first (lower) five pairs of primary leaf veins (Figure \(1\)). Note that in geometric morphometry, the preferable number of cases should be more than 11 times bigger than the number of variables.
The next step is the Generalized Procrustes Analysis (GPA). The name refers to the bandit from Greek mythology who made his victims fit his bed either by stretching their limbs or cutting them off (Figure \(2\)). GPA aligns all images together:
Code \(3\) (R):
```
ag <- readland.tps("data/bigaln.tps", specID="imageID")
gpa.ag <- gpagen(ag)
```
... and next comes principal component analysis on the GPA results:
Code \(4\) (R):
```
ag <- readland.tps("data/bigaln.tps", specID="imageID")
gpa.ag <- gpagen(ag)
ta.ag <- TangentSpace2(gpa.ag\$coords)
screeplot(ta.ag\$pc.summary) # importance of principal components
pca.ag <- ta.ag\$pc.summary\$x
```
(Check the PCA screeplot yourself.)
Now we can plot the results (Figure \(3\)). For example, let us check whether leaves from top branches (high P.1 indices) differ in their shape from leaves of lower branches (small P.1 indices):
Code \(5\) (R):
```
ag <- readland.tps("data/bigaln.tps", specID="imageID")
gpa.ag <- gpagen(ag)
ta.ag <- TangentSpace2(gpa.ag\$coords)
pca.ag <- ta.ag\$pc.summary\$x
pca.ag.ids <- as.numeric(gsub(".png", "", row.names(pca.ag)))
branch <- cut(am\$P.1, 3, labels=c("lower", "middle", "top"))
b.code <- as.numeric(Recode(pca.ag.ids, am\$PIC, branch, char=FALSE)) # recode.r
plot(pca.ag[, 1:2], xlab="PC1", ylab="PC2", pch=19, col=b.code)
legend("topright", legend=paste(levels(branch), "branches"), pch=19, col=1:3)
```
Well, the difference, if it even exists, is small. Now plot the consensus shapes of top and lower leaves.
First, we need mean shapes for the whole dataset and separately for the lower and top branches, and then links to connect the landmarks:
Code \(6\) (R):
```
ag <- readland.tps("data/bigaln.tps", specID="imageID")
gpa.ag <- gpagen(ag)
## b.code (branch codes) comes from the previous code block
c.lower <- mshape(gpa.ag\$coords[, , b.code == 1])
c.top <- mshape(gpa.ag\$coords[, , b.code == 3])
all.mean <- mshape(gpa.ag\$coords)
ag.links <- matrix(c(1, rep(c(2:7, 12:8), each=2), 1), ncol=2, byrow=TRUE)
```
Finally, we plot D’Arcy Thompson’s transformation grids (Figure C.5.1):
Code \(7\) (R):
```
ag <- readland.tps("data/bigaln.tps", specID="imageID")
ag.links <- matrix(c(1, rep(c(2:7, 12:8), each=2), 1), ncol=2, byrow=TRUE)
old.par <- par(mfrow=c(1, 2))
GP <- gridPar(grid.col="grey", tar.link.col="darkseagreen", tar.pt.bg=0)
plotRefToTarget(c.lower, all.mean, links=ag.links, gridPars=GP)
title(main="lower branches", line=-5, cex=0.8)
plotRefToTarget(c.top, all.mean, links=ag.links, gridPars=GP)
title(main="top branches", line=-5, cex=0.8)
par(old.par)
```
A small difference is clearly visible and could be the starting point for further research.
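To back the visual impression with a number, one simple (and admittedly rough) check is to compare the PC1 scores of lower and top leaves. The sketch below reuses pca.ag and b.code from the code above; it is only a univariate shortcut, not a proper multivariate test of shape:
```
pc1 <- split(pca.ag[, 1], b.code) # b.code: 1 lower, 2 middle, 3 top branches
wilcox.test(pc1[["1"]], pc1[["3"]]) # lower vs. top branches on PC1
```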
textbooks/stats/Introductory_Statistics/Book%3A_Visual_Statistics_Use_R_(Shipunov)/10%3A_Appendix_C-_R_fragments/10.04%3A_R_and_shape.txt
Most statistical tests and many methods use the “coin throwing” assumption: however long we throw the coin, the probability of seeing heads is always \(\frac{1}{2}\). There is another approach, the “apple bag”. Suppose we have a closed, non-transparent bag full of red and green apples. We took the first apple. It was red. We took the second one. It was red again. Third time: red again. And again. This means that red apples likely dominate in the bag. This is because the apple bag is not a coin: it is possible to take all apples out of the bag and leave it empty, but it is impossible to use up all coin throws. Coin throwing is unlimited; the apple bag is limited. So if you would like to know the proportion of red to green apples in the bag after you have taken several apples out of it, you need to know some priors: (1) how many apples you took, (2) how many red apples you took, (3) how many apples are in the bag, and then (4) calculate the proportions of everything in accordance with a particular formula. This formula is the famous Bayes formula, but we do not use formulas in this book (except one, and it is already spent).
All in all, Bayesian algorithms use conditional models like our apple bag above. Note that, just as with the apple bag we need to take apples first and then calculate proportions, in Bayesian algorithms we always need sampling. This is why these algorithms are complicated and were never developed well in the pre-computer era.
Below, the Bayesian approach is exemplified with the Bayes factor, which in some ways is a replacement for the p-value. Whereas the p-value approach only allows one to reject or fail to reject the null, Bayes factors allow one to express a preference (a higher degree of belief) towards one of two hypotheses. If there are two hypotheses, M1 and M2, then a Bayes factor of:
< 0: negative (supports M2)
0–5: negligible
5–10: substantial
10–15: strong
15–20: very strong
> 20: decisive
So unlike the p-value, the Bayes factor is also an effect measure, not just a threshold. To calculate Bayes factors in R, one should be careful because there are plenty of pitfalls in Bayesian statistics. However, some simple examples will work. Following is an example of a typical two-sample test, traditional and Bayesian:
Code \(1\) (R):
```
## Restrict to two groups
chickwts <- chickwts[chickwts\$feed %in% c("horsebean", "linseed"), ]
## Drop unused factor levels
chickwts\$feed <- factor(chickwts\$feed)
## Plot data
plot(weight ~ feed, data=chickwts, main="Chick weights")
## Traditional t test
t.test(weight ~ feed, data=chickwts, var.eq=TRUE)
## Compute Bayes factor
library(BayesFactor)
bf <- ttestBF(formula = weight ~ feed, data=chickwts)
bf
```
Many more examples are at http://bayesfactorpcl.r-forge.r-project.org/
10.06: R, DNA and evolution
In biology, the majority of research is now related to DNA-based phylogenetic studies. R is aware of these methods, and one example (a morphological one, though) was presented above. DNA phylogeny research includes numerous steps, and the scripting power of R can be used to automate procedures by joining them in a sort of workflow which we call Ripeline. The book supplements contain the archived folder ripeline.zip, which includes R scripts and data illustrating work with a DNA tabular database, FASTA operations, DNA alignment, flank removal, gapcoding, concatenation, and examples of how to use internal and external tree estimators.
10.07: R and reporting
Literate programming, the idea of the famous Donald Knuth, is a way to interleave code and explanatory comments.
The resulting document is a living report: when you change your code or your data, the change is immediately reflected in the report. There are many ways to create living reports in R using various office document formats, but the most natural way is to use LaTeX. Let us create a text file and call it, for example, test_sweave.rnw (a hypothetical minimal version is sketched below). On the next step, this file should be “fed” to R:
Code \(1\) (R): `Sweave("test_sweave.rnw")`
After that, you will have a new LaTeX file, test_sweave.tex. Finally, with the help of pdfLaTeX you can obtain the PDF shown in Figure C.8.1.
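The contents of test_sweave.rnw are not reproduced in the text. A hypothetical minimal version (my own sketch, not the book's original file) could look like the following: ordinary LaTeX with an R chunk between <<...>>= and @, and inline R output via \Sexpr{}:
```
\documentclass{article}
\begin{document}
Mean height of the built-in trees data is \Sexpr{round(mean(trees\$Height), 1)} ft.
<<myplot, fig=TRUE, echo=TRUE>>=
plot(Height ~ Girth, data=trees) # this R code and its figure go into the report
@
\end{document}
```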
textbooks/stats/Introductory_Statistics/Book%3A_Visual_Statistics_Use_R_(Shipunov)/10%3A_Appendix_C-_R_fragments/10.05%3A_R_and_Bayes.txt
Answer to the sundew wetness question. Let us apply the approach we used for the leaf shape:
Code \(1\) (R):
```
leaf <- read.table("data/sundew.txt", h=TRUE, sep=";")
wet <- ts(leaf\$WET, frequency=36)
str(wet)
acf(wet)
plot(stl(wet, s.window="periodic")\$time.series)
```
(Plots are not shown; please make them yourself.) There is some periodicity with a 0.2 day (about 5 hours) period. However, a trend is likely absent.
Answer to the dodder infestation question. Inspect the data, then load and check it:
Code \(2\) (R):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
str(cu)
```
(Note that the last two columns are ranked. Consequently, only nonparametric methods are applicable here.) Then we need to select the two hosts in question and drop unused levels:
Code \(3\) (R):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
cu2 <- cu[cu\$HOST %in% c("Alchemilla", "Knautia"), ]
cu2 <- droplevels(cu2)
```
It is better to convert this into the short form:
Code \(4\) (R):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
cu2 <- cu[cu\$HOST %in% c("Alchemilla", "Knautia"), ]
cu2 <- droplevels(cu2)
cu2.s <- split(cu2\$DEGREE, cu2\$HOST)
```
Now look at these samples graphically:
Code \(5\) (R):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
cu2 <- cu[cu\$HOST %in% c("Alchemilla", "Knautia"), ]
cu2 <- droplevels(cu2)
cu2.s <- split(cu2\$DEGREE, cu2\$HOST)
boxplot(cu2.s)
```
There is a prominent difference. Now to the numbers:
Code \(6\) (R):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
cu2 <- cu[cu\$HOST %in% c("Alchemilla", "Knautia"), ]
cu2 <- droplevels(cu2)
cu2.s <- split(cu2\$DEGREE, cu2\$HOST)
sapply(cu2.s, median)
cliff.delta(cu2.s\$Alchemilla, cu2.s\$Knautia)
wilcox.test(cu2.s\$Alchemilla, cu2.s\$Knautia)
```
Interesting! Despite the difference between medians and the large effect size, the Wilcoxon test failed to support it statistically. Why? Were the shapes of the distributions similar?
Code \(7\) (R):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
cu2 <- cu[cu\$HOST %in% c("Alchemilla", "Knautia"), ]
cu2 <- droplevels(cu2)
cu2.s <- split(cu2\$DEGREE, cu2\$HOST)
ansari.test(cu2.s\$Alchemilla, cu2.s\$Knautia)
library(beeswarm)
la <- layout(matrix(c(1, 3, 2, 3), ncol=2, byrow=TRUE))
for (i in 1:2) hist(cu2.s[[i]], main=names(cu2.s)[i], xlab="", xlim=range(cu2.s))
bxplot(cu2.s)
beeswarm(cu2.s, cex=1.2, add=TRUE)
```
(Please note how to make a complex layout with the layout() command. This command takes a matrix as its argument and simply places plot number n at the position(s) where n occurs in the matrix. After the layout is created, you can check it with the command layout.show(la).)
As both the Ansari-Bradley test and the plots suggest, the distributions are really different (Figure \(2\)). One workaround is to use the robust rank order test, which is not so sensitive to differences in variation:
Code \(8\) (R):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
cu2 <- cu[cu\$HOST %in% c("Alchemilla", "Knautia"), ]
cu2 <- droplevels(cu2)
cu2.s <- split(cu2\$DEGREE, cu2\$HOST)
Rro.test(cu2.s\$Alchemilla, cu2.s\$Knautia) # asmisc.r
```
This test found significance.
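One small note on the code above: cliff.delta() is not a base R function; it presumably comes from the effsize package (this is my assumption, since the answer does not show where it is loaded), so a library() call may be needed first:
```
library(effsize) # assumed source of cliff.delta(); not loaded in the answer above
cliff.delta(cu2.s\$Alchemilla, cu2.s\$Knautia)
```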
Now we will try to bootstrap the difference between medians:
Code \(9\) (R):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
cu2 <- cu[cu\$HOST %in% c("Alchemilla", "Knautia"), ]
cu2 <- droplevels(cu2)
cu2.s <- split(cu2\$DEGREE, cu2\$HOST)
library(boot)
meddif.b <- function (data, ind) {
 d <- data[ind]
 median(d[cu2\$HOST == "Alchemilla"]) - median(d[cu2\$HOST == "Knautia"])
}
meddif.boot <- boot(data=cu2\$DEGREE, statistic=meddif.b, strata=cu2\$HOST, R=999)
boot.ci(meddif.boot, type="bca")
```
(Please note how the strata argument was used to avoid mixing the two different hosts.)
This is not dissimilar to what we saw above in the effect size output: a large difference, but 0 is included. This could be described as a “prominent but unstable” difference.
It was not asked in the assignment, but how should one analyze the whole dataset when the shapes of the distributions are so different? One possibility is the Kruskal-Wallis test with Monte Carlo replications. By default, it makes 1000 tries:
Code \(10\) (R):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
library(coin)
kruskal_test(DEGREE ~ HOST, data=cu, distribution=approximate())
```
There is no overall significance. This is not a surprise: ANOVA-like tests can sometimes contradict individual or pairwise comparisons. Another possibility is a post hoc robust rank order test:
Code \(11\) (R):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
pairwise.Rro.test(cu\$DEGREE, cu\$HOST) # asmisc.r
```
Now it finds some significant differences, but not for our marginal, unstable case of Alchemilla and Knautia.
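For comparison with pairwise.Rro.test() above, base R offers pairwise.wilcox.test(), which runs Wilcoxon tests for all pairs of hosts with a correction for multiple testing. A minimal sketch (with the caveat, discussed above, that the Wilcoxon test is sensitive to the unequal distribution shapes):
```
cu <- read.table("http://ashipunov.info/shipunov/open/cuscuta.txt", h=TRUE, sep=" ")
pairwise.wilcox.test(cu\$DEGREE, cu\$HOST, p.adjust.method="BH")
```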
textbooks/stats/Introductory_Statistics/Book%3A_Visual_Statistics_Use_R_(Shipunov)/10%3A_Appendix_C-_R_fragments/10.08%3A_Answers_to_exercises.txt
Introduction to data 1.1 (a) Treatment: 10/43 = 0.23 $\rightarrow$ 23%. Control: 2/46 = 0:04 ! 4%. (b) There is a 19% difference between the pain reduction rates in the two groups. At first glance, it appears patients in the treatment group are more likely to experience pain reduction from the acupuncture treatment. (c) Answers may vary but should be sensible. Two possible answers: 1Though the groups' difference is big, I'm skeptical the results show a real difference and think this might be due to chance. 2The difference in these rates looks pretty big, so I suspect acupuncture is having a positive impact on pain. 1.3 (a-i) 143,196 eligible study subjects born in Southern California between 1989 and 1993. (a-ii) Measurements of carbon monoxide, nitrogen dioxide, ozone, and particulate matter less than $10_{\mu m}$ (PM10) collected at air-qualitymonitoring stations as well as length of gestation. These are continuous numerical variables. (a-iii) The research question: "Is there an association between air pollution exposure and preterm births?" (b-i) 600 adult patients aged 18-69 years diagnosed and currently treated for asthma. (b-ii) The variables were whether or not the patient practiced the Buteyko method (categorical) and measures of quality of life, activity, asthma symptoms and medication reduction of the patients (categorical, ordinal). It may also be reasonable to treat the ratings on a scale of 1 to 10 as discrete numerical variables. (b-iii) The research question: "Do asthmatic patients who practice the Buteyko method experience improvement in their condition?" 1.5 (a) $50 \times 3 = 150$. (b) Four continuous numerical variables: sepal length, sepal width, petal length, and petal width. (c) One categorical variable, species, with three levels: setosa, versicolor, and virginica. 1.7 (a) Population of interest: all births in Southern California. Sample: 143,196 births between 1989 and 1993 in Southern California. If births in this time span can be considered to be representative of all births, then the results are generalizable to the population of Southern California. However, since the study is observational, the ndings do not imply causal relationships. (b) Population: all 18-69 year olds diagnosed and currently treated for asthma. Sample: 600 adult patients aged 18-69 years diagnosed and currently treated for asthma. Since the sample consists of voluntary patients, the results cannot necessarily be generalized to the population at large. However, since the study is an experiment, the ndings can be used to establish causal relationships. 1.9 (a) Explanatory: number of study hours per week. Response: GPA. (b) There is a slight positive relationship between the two variables. One respondent reported a GPA above 4.0, which is a data error. There are also a few respondents who reported unusually high study hours (60 and 70 hours/week). The variability in GPA also appears to be larger for students who study less than those who study more. Since the data become sparse as the number of study hours increases, it is somewhat difficult to evaluate the strength of the relationship and also the variability across different numbers of study hours. (c) Observational. (d) Since this is an observational study, a causal relationship is not implied. 1.11 (a) Observational. (b) The professor suspects students in a given section may have similar feelings about the course. 
To ensure each section is reasonably represented, she may choose to randomly select a xed number of students, say 10, from each section for a total sample size of 40 students. Since a random sample of fixed size was taken within each section in this scenario, this represents strati ed sampling. 1.13 Sampling from the phone book would miss unlisted phone numbers, so this would result in bias. People who do not have their numbers listed may share certain characteristics, e.g. consider that cell phones are not listed in phone books, so a sample from the phone book would not necessarily be a representative of the population. 1.15 The estimate will be biased, and it will tend to overestimate the true family size. For example, suppose we had just two families: the first with 2 parents and 5 children, and the second with 2 parents and 1 child. Then if we draw one of the six children at random, 5 times out of 6 we would sample the larger family 1.17 (a) No, this is an observational study. (b) This statement is not justi ed; it implies a causal association between sleep disorders and bullying. However, this was an observational study. A better conclusion would be "School children identi ed as bullies are more likely to suffer from sleep disorders than non-bullies." 1.19 (a) Experiment, as the treatment was assigned to each patient. (b) Response: Duration of the cold. Explanatory: Treatment, with 4 levels: placebo, 1g, 3g, 3g with additives. (c) Patients were blinded. (d) Double-blind with respect to the researchers evaluating the patients, but the nurses who briey interacted with patients during the distribution of the medication were not blinded. We could say the study was partly double-blind. (e) No. The patients were randomly assigned to treatment groups and were blinded, so we would expect about an equal number of patients in each group to not adhere to the treatment. 1.21 (a) Experiment. (b) Treatment is exercise twice a week. Control is no exercise. (c) Yes, the blocking variable is age. (d) No. (e) This is an experiment, so a causal conclusion is reasonable. Since the sample is random, the conclusion can be generalized to the population at large. However, we must consider that a placebo effect is possible. (f) Yes. Randomly sampled people should not be required to participate in a clinical trial, and there are also ethical concerns about the plan to instruct one group not to participate in a healthy behavior, which in this case is exercise. 1.23 (a) Positive association: mammals with longer gestation periods tend to live longer as well. (b) Association would still be positive. (c) No, they are not independent. See part (a). 1.25 (a) 1/linear and 3/nonlinear. (b) 4/some curvature (nonlinearity) may be present on the right side. "Linear" would also be acceptable for the type of relationship for plot 4. (c) 2. 1.27 (a) Decrease: the new score is smaller than the mean of the 24 previous scores. (b) Calculate a weighted mean. Use a weight of 24 for the old mean and 1 for the new mean: $\frac {(24 \times 74 + 1 \times 64)}{(24 + 1)} = 73.6$. There are other ways to solve this exercise that do not use a weighted mean. (c) The new score is more than 1 standard deviation away from the previous mean, so increase. 1.29 Both distributions are right skewed and bimodal with modes at 10 and 20 cigarettes; note that people may be rounding their answers to half a pack or a whole pack. The median of each distribution is between 10 and 15 cigarettes. 
The middle 50% of the data (the IQR) appears to be spread equally in each group and have a width of about 10 to 15. There are potential outliers above 40 cigarettes per day. It appears that more respondents who smoke only a few cigarettes (0 to 5) on the weekdays than on weekends. 1.31 (a) $\bar {x}_{amtWeekends} = 20, \bar {x}_{amtWeekdays} = 16$. (b) $s_{amtWeekends} = 0, s_{amtWeekdays} = 4.18$. In this very small sample, higher on weekdays. 1.33 (a) Both distributions have the same median, 6, and the same IQR. (b) Same IQR, but second distribution has higher median. (c) Second distribution has higher median. IQRs are equal. (d) Second distribution has higher median and larger IQR. 1.35 1.37 Descriptions will vary a little. (a) 2. Unimodal, symmetric, centered at 60, standard deviation of roughly 3. (b) 3. Symmetric and approximately evenly distributed from 0 to 100. (c) 1. Right skewed, unimodal, centered at about 1.5, with most observations falling between 0 and 3. A very small fraction of observations exceed a value of 5. 1.39 The histogram shows that the distribution is bimodal, which is not apparent in the box plot. The box plot makes it easy to identify more precise values of observations outside of the whiskers. 1.41 (a) The median is better; the mean is substantially affected by the two extreme observations. (b) The IQR is better; the standard deviation, like the mean, is substantially affected by the two high salaries. 1.43 The distribution is unimodal and symmetric with a mean of about 25 minutes and a standard deviation of about 5 minutes. There does not appear to be any counties with unusually high or low mean travel times. Since the distribution is already unimodal and symmetric, a log transformation is not necessary. 1.45 Answers will vary. There are pockets of longer travel time around DC, Southeastern NY, Chicago, Minneapolis, Los Angeles, and many other big cities. There is also a large section of shorter average commute times that overlap with farmland in the Midwest. Many farmers' homes are adjacent to their farmland, so their commute would be 0 minutes, which may explain why the average commute time for these counties is relatively low. 1.47 (a) We see the order of the categories and the relative frequencies in the bar plot. (b) There are no features that are apparent in the pie chart but not in the bar plot. (c) We usually prefer to use a bar plot as we can also see the relative frequencies of the categories in this graph. 1.49 The vertical locations at which the ideological groups break into the Yes, No, and Not Sure categories differ, which indicates the variables are dependent. 1.51 (a) False. Instead of comparing counts, we should compare percentages. (b) True. (c) False. We cannot infer a causal relationship from an association in an observational study. However, we can say the drug a person is on affects his risk in this case, as he chose that drug and his choice may be associated with other variables, which is why part (b) is true. The difference in these statements is subtle but important. (d) True. 1.53 (a) Proportion who had heart attack: $\frac {7,979}{227,571} \approx 0.035$ (b) Expected number of cardiovascular problems in the rosiglitazone group if having cardiovascular problems and treatment were independent can be calculated as the number of patients in that group multiplied by the overall rate of cardiovascular problems in the study: $67,593 \times \frac {7,979}{227,571} \approx 2370$. (c-i) H0: Independence model. 
The treatment and cardiovascular problems are independent. They have no relationship, and the difference in incidence rates between the rosiglitazone and pioglitazone groups is due to chance. HA: Alternate model. The treatment and cardiovascular problems are not independent. The difference in the incidence rates between the rosiglitazone and pioglitazone groups is not due to chance, and rosiglitazone is associated with an increased risk of serious cardiovascular problems. (c-ii) A higher number of patients with cardiovascular problems in the rosiglitazone group than expected under the assumption of independence would provide support for the alternative hypothesis. This would suggest that rosiglitazone increases the risk of such problems. (c-iii) In the actual study, we observed 2,593 cardiovascular events in the rosiglitazone group. In the 1,000 simulations under the independence model, we observed somewhat less than 2,593 in all but one or two simulations, which suggests that the actual results did not come from the independence model. That is, the analysis provides strong evidence that the variables are not independent, and we reject the independence model in favor of the alternative. The study's results provide strong evidence that rosiglitazone is associated with an increased risk of cardiovascular problems. Probability 2.1 (a) False. These are independent trials. (b) False. There are red face cards. (c) True. A card cannot be both a face card and an ace. 2.3 (a) 10 tosses. Fewer tosses mean more variability in the sample fraction of heads, meaning there's a better chance of getting at least 60% heads. (b) 100 tosses. More flips means the observed proportion of heads would often be closer to the average, 0.50, and therefore also above 0.40. (c) 100 tosses. With more flips, the observed proportion of heads would often be closer to the average, 0.50. (d) 10 tosses. Fewer ips would increase variability in the fraction of tosses that are heads. 2.5 (a) $0.5^{10} = 0.00098$. (b) $0.5^{10} = 0.00098$. (c) P(at least one tails) = 1 - P(no tails) = $1 - (0.5^{10}) \approx 1 - 0.001 = 0.999$. 2.7 (a) No, there are voters who are both politically Independent and also swing voters. (b) Venn diagram below: (c) 24%. (d) Add up the corresponding disjoint sections in the Venn diagram: 0.24 + 0.11 + 0.12 = 0.47. Alternatively, use the General Addition Rule: 0.35 + 0.23 - 0.11 = 0.47. (e) 1 - 0.47 = 0.53. (f) $P(Independent) \times P(swing) = 0.35 \times 0.23 = 0.08$, which does not equal P(Independent and swing) = 0.11, so the events are dependent. If you stated that this difference might be due to sampling variability in the survey, that answer would also be reasonable (we'll dive into this topic more in later chapters). 2.9 (a) If the class is not graded on a curve, they are independent. If graded on a curve, then neither independent nor disjoint (unless the instructor will only give one A, which is a situation we will ignore in parts (b) and (c)). (b) They are probably not independent: if you study together, your study habits would be related, which suggests your course performances are also related. (c) No. See the answer to part (a) when the course is not graded on a curve. More generally: if two things are unrelated (independent), then one occurring does not preclude the other from occurring. 2.11 (a) $0.16 + 0.09 = 0.25$. (b) $0.17 + 0.09 = 0.26$. (c) Assuming that the education level of the husband and wife are independent: $0.25 \times 0.26 = 0.065$. 
You might also notice we actually made a second assumption: that the decision to get married is unrelated to education level. (d) The husband/wife independence assumption is probably not reasonable, because people often marry another person with a comparable level of education. We will leave it to you to think about whether the second assumption noted in part (c) is reasoanble. 2.13 (a) Invalid. Sum is greater than 1. (b) Valid. Probabilities are between 0 and 1, and they sum to 1. In this class, every student gets a C. (c) Invalid. Sum is less than 1. (d) Invalid. There is a negative probability. (e) Valid. Probabilities are between 0 and 1, and they sum to 1. (f) Invalid. There is a negative probability. 2.15 (a) No, but we could if A and B are independent. (b-i) 0.21. (b-ii) 0.3+0.7-0.21 = 0.79. (b-iii) Same as P(A): 0.3. (c) No, because $0.1 \ne 0.21$, where 0.21 was the value computed under independence from part (a). (d) P(A|B) = 0.1/0.7 = 0.143. 2.17 (a) 0.60 + 0.20 - 0.18 = 0.62. (b) 0.18/0.20 = 0.90. (c) $0.11/0.33 \approx 0.33$. (d) No, otherwise the final answers of parts (b) and (c) would have been equal. (e) $0.06/0.34\approx 0.18$. 2.19 (a) 162/248 = 0.65. (b) 181/252 = 0.72 (c) Under the assumption of a dating choices being independent of hamburger preference, which on the surface seems reasonable: $0.65 \times 0.72 = 0.468$. (d) (252 + 6 - 1)/500 = 0.514 2.21 (a) The tree diagram: (b) $P(can construct|pass) = \frac {P(can construct and pass)}{P(pass)} = \frac {0.80 \times 0.86}{0.8 \times 0.86 + 0.2 \times 0.65} = \frac {0.688}{0.818} \approx 0.84$. 2.23 First draw a tree diagram: Then compute the probability: $P(HIV |+) = \frac {P(HIV and +)}{P(+)} = \frac {0.259 \times 0.997}{0.259 \times 0.997+0.741 \times 0.074} = \frac {0.2582}{0.3131} = 0.8247$. 2.25 A tree diagram of the situation: $P(lupus|positive) = \frac {P(lupus and positive)}{P(positive)} = \frac {0.0196}{0.0196+0.2548} = 0.0714$. Even when a patient tests positive for lupus, there is only a 7.14% chance that he actually has lupus. While House is not exactly right - it is possible that the patient has lupus - his implied skepticism is warranted. 2.27 (a) 0.3. (b) 0.3. (c) 0.3. (d) $0.3 \times 0.3 = 0.09$. (e) Yes, the population that is being sampled from is identical in each draw. 2.29 (a) 2/9. (b) 3/9 = 1/3. (c) $(3/10) \times (2/9) \approx 0.067$. (d) No. In this small population of marbles, removing one marble meaningfully changes the probability of what might be drawn next. 2.31 For 1 leggings (L) and 2 jeans (J), there are three possible orderings: LJJ, JLJ, and JJL. The probability for LJJ is $(5/24) \times (7/23) \times (6/22) = 0.0173$. The other two orderings have the same probability, and these three possible orderings are disjoint events. Final answer: 0.0519. 2.33 (a) 13. (b) No. The students are not a random sample. 2.35 (a) The table below summarizes the probability model: Event X P(X) X . P(X) ${(X - E(X))}^2$ ${(X - E(X))}^2 - P(X)$ 3 hearts 3 blacks Else 50 25 0 $\frac {13}{52} \times \frac {12}{51} \times \frac {11}{50}$=0.0129 $\frac {26}{52} \times \frac {25}{51} \times \frac {24}{50}$ = 0.1176 1 - (0.0129 + 0.1176) = 0.8695 0.65 2.94 0 ${(0.65-3.59)}^2$ = 8.6436 ${(2.94-3.59)}^2$ = 0.4225 ${(0-3.59)}^2$ = 12.8881 $8.6436 \times 0.0129$= 0.1115 $0.4225 \times 0.1176$ = 0.0497 $12.8881 \times 0.8695$ = 11.2062 E(X) = $3.59 V(X) = 11.3674 $SD(X) = \sqrt {V(X)}$= 3.37 (b) E(X-5) = E(X)-5 = 3.59-5 = -$1.41. The standard deviation is the same as the standard deviation of X: $3.37. (c) No. 
The expected earnings is negative, so on average you would lose money playing the game. 2.37 Event X P(X) X . P(X) Boom Normal Recession 0.18 0.09 -0.12 1/3 1/3 1/3 $0.18 \times 1/3 = 0.06$ $0.09 \times 1/3 = 0.03$ $-0.12 \times 1/3 = -0.04$ E(X) = 0.05 The expected return is a 5% increase in value for a single year. 2.39 (a) Expected: -$0.16. Variance: 8.95. SD: $2.99. (b) Expected: -$0.16. SD: $1.73. (c) Expected values are the same, but the SDs differ. The SD from the game with tripled winnings/losses is larger, since the three independent games might go in different directions (e.g.could win one game and lose two games). So the three independent games is lower risk, but in this context it just means we are likely to lose a more stable amount since the expected value is still negative. 2.41 A fair game has an expected value of zero: $5 \times 0.46 + x \times 0.54 = 0$. Solving for x: -$4.26. You would bet $4.26 for the Padres to make the game fair. 2.43 (a) Expected:$3.90. SD: $0.34. (b) Expected:$27.30. SD: $0.89. If you computed part (b) using part (a), you should have ob- tained an SD of$0.90. 2.45 Approximate answers are OK. Answers are only estimates based on the sample. (a) (29 + 32)/144 = 0.42. (b) 21/144 = 0.15. (c) (26 + 12 + 15)/144 = 0.37\). Distributions of random variables 3.1 (a) 8.85%. (b) 6.94%. (c) 58.86%. (d) 4.56%. 3.3 (a) Verbal: $N(\mu = 462; \sigma = 119)$, Quant: $N(\mu = 584; \sigma = 151)$. (b) $Z_{V R} = 1.33, Z_{QR} = 0.57$. (c) She scored 1.33 standard deviations above the mean on the Verbal Reasoning section and 0.57 standard deviations above the mean on the Quantitative Reasoning section. (d) She did better on the Verbal Reasoning section since her Z score on that section was higher. (e) $Perc_{V R} = 0.9082 \approx 91%, Perc_{QR} = 0.7157 \approx 72%$. (f) 100% - 91% = 9% did better than her on VR, and 100% - 72% = 28% did better than her on QR. (g) We cannot compare the raw scores since they are on different scales. Comparing her percentile scores is more appropriate when comparing her performance to others. (h) Answer to part (b) would not change as Z scores can be calculated for distributions that are not normal. However, we could not answer parts (c)-(f) since we cannot use the normal probability table to calculate probabilities and percentiles without a normal model. 3.5 (a) Z = 0.84, which corresponds to 711 on QR. (b) Z = -0.52, which corresponds to 400 on VR. 3.7 (a) $Z = 1.2 \approx 0.1151$. (b) $Z = -1.28 \approx$70.60F or colder. 3.9 (a) N(25; 2.78). (b) $Z = 1.08 \approx 0.1401$. (c) The answers are very close because only the units were changed. (The only reason why they are a little different is because 280C is 82.40F, not precisely 830F.) 3.11 (a) Z = 0.67. (b) $\mu$ = $1650, x =$1800. (c) $0.67 = \frac {1800-1650}{\sigma} = 223.88$. 3.13 $Z = 1.56 \approx 0.0594$, i.e. 6%. 3.15 (a) $Z = 0.73 \approx 0.2327$. (b) If you are bidding on only one auction and set a low maximum bid price, someone will probably outbid you. If you set a high maximum bid price, you may win the auction but pay more than is necessary. If bidding on more than one auction, and you set your maximum bid price very low, you probably won't win any of the auctions. However, if the maximum bid price is even modestly high, you are likely to win multiple auctions. (c) An answer roughly equal to the 10th percentile would be reasonable. Regrettably, no percentile cutoff point guarantees beyond any possible event that you win at least one auction. 
However, you may pick a higher percentile if you want to be more sure of winning an auction. (d) Answers will vary a little but should correspond to the answer in part (c). We use the 10th percentile: $Z = -1:28 \approx 69.80$. 3.17 14/20 = 70% are within 1 SD. Within 2 SD: 19/20 = 95%. Within 3 SD: 20/20 = 100%. They follow this rule closely. 3.19 The distribution is unimodal and symmetric. The superimposed normal curve approximates the distribution pretty well. The points on the normal probability plot also follow a relatively straight line. There is one slightly distant observation on the lower end, but it is not extreme. The data appear to be reasonably approximated by the normal distribution. 3.21 (a) No. The cards are not independent. For example, if the first card is an ace of clubs, that implies the second card cannot be an ace of clubs. Additionally, there are many possible categories, which would need to be simplified. (b) No. There are six events under consideration. The Bernoulli distribution allows for only two events or categories. Note that rolling a die could be a Bernoulli trial if we simply to two events, e.g. rolling a 6 and not rolling a 6, though specifying such details would be necessary. 3.23 (a) ${(1 - 0.471)}^2 \times 0.471 = 0.1318$. (b) $0.471^3 = 0.1045$. (c) $\mu = 1/0.471 = 2.12, \sigma = 2.38$. (d) $\mu = 1/0.30 = 3.33, \sigma = 2.79$. (e) When p is smaller, the event is rarer, meaning the expected number of trials before a success and the standard deviation of the waiting time are higher. 3.25 (a) $0.875^2 \times 0.125 = 0.096$. (b) $\mu = 8, \sigma = 7.48$. 3.27 (a) Yes. The conditions are satisfied: independence, xed number of trials, either success or failure for each trial, and probability of success being constant across trials. (b) 0.200. (c) 0.200. (d) $0.0024+0.0284+0.1323 = 0.1631$. (e) 1 - 0.0024 = 0.9976. 3.29 (a) $\mu = 35, \sigma = 3.24$. (b) Yes. Z = 3.09. Since 45 is more than 2 standard deviations from the mean, it would be considered unusual. Note that the normal model is not required to apply this rule of thumb. (c) Using a normal model: 0.0010. This does indeed appear to be an unusual observation. If using a normal model with a 0.5 correction, the probability would be calculated as 0.0017. 3.31 Want to find the probabiliy that there will be more than 1,786 enrollees. Using the normal model: 0.0537. With a 0.5 correction: 0.0559. 3.33 (a) 1 - 0.753 = 0.5781. (b) 0.1406. (c) 0.4219. (d) 1 - 0.253 = 0.9844. 3.35 (a) Geometric distribution: 0.109. (b) Binomial: 0.219. (c) Binomial: 0.137. (d) 1 - 0.8756 = 0.551. (e) Geometric: 0.084. (f) Using a binomial distribution with n = 6 and p = 0.125, we see that $\mu = 4, \sigma = 1.06$, and Z = -1.89. Since this is within 2 SD, it may not be considered unusual, though this is a borderline case, so we might say the observations is somewhat unusual. 3.37 0 wins (-$3): 0.1458. 1 win (-$1): 0.3936. 2 wins (+$1): 0.3543. 3 wins (+$3): 0.1063. 3.39 (a) $\overset {Anna}{1/5} \times \overset {Ben}{1/4} \times \overset {Carl}{1/3} \times \overset {Damian}{1/2} \times \overset {Eddy}{1/1} = 1/5! = 1/120$. (b) Since the probabilities must add to 1, there must be 5! = 120 possible orderings. (c) 8! = 40,320. 3.41 (a) Geometric: $(5/6)^4 \times (1/6) = 0.0804$. Note that the geometric distribution is just a special case of the negative binomial distribution when there is a single success on the last trial. (b) Binomial: 0.0322. (c) Negative binomial: 0.0193. 
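The geometric probabilities quoted in 3.23(a), 3.25(a), and 3.41(a) can be checked numerically. A hedged, optional R cross-check (not part of the original solutions): dgeom(k, p) returns the probability of k failures before the first success.
```
dgeom(2, 0.471) # 3.23(a): (1 - 0.471)^2 * 0.471 = 0.1318
dgeom(2, 0.125) # 3.25(a): 0.875^2 * 0.125 = 0.096
dgeom(4, 1/6) # 3.41(a): (5/6)^4 * (1/6) = 0.0804
```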
3.43 (a) Negative binomial with n = 4 and p = 0.55, where a success is defined here as a female student. The negative binomial setting is appropriate since the last trial is fixed but the order of the rst 3 trials is unknown. (b) 0.1838. (c)$\binom {3}{1} = 3$. (d) In the binomial model there are no restrictions on the outcome of the last trial. In the negative binomial model the last trial is fixed. Therefore we are interested in the number of ways of orderings of the other k -1 successes in the first n - 1 trials. 3.45 (a) Poisson with $\lambda = 75$. (b) $\mu = \lambda = 75, \sigma = \sqrt {\lambda} = 8.66$. (c) Z = -1.73. Since 60 is within 2 standard deviations of the mean, it would not generally be considered unusual. Note that we often use this rule of thumb even when the normal model does not apply. 3.47 Using Poisson with $\lambda = 75: 0.0402$. Foundations for inference 4.1 (a) Mean. Each student reports a numerical value: a number of hours. (b) Mean. Each student reports a number, which is a percentage, and we can average over these percentages. (c) Proportion. Each student reports Yes or No, so this is a categorical variable and we use a proportion. (d) Mean. Each student reports a number, which is a percentage like in part (b). (e) Proportion. Each student reports whether or not he got a job, so this is a categorical variable and we use a proportion. 4.3 (a) Mean: 13.65. Median: 14. (b) SD: 1.91. IQR: 15 - 13 = 2. (c) Z16 = 1.23, which is not unusual since it is within 2 SD of the mean. Z18 = 2:23, which is generally considered unusual. (d) No. Point estimates that are based on samples only approximate the population parameter, and they vary from one sample to another. (e) We use the SE, which is $1.91/\sqrt {100} = 0.191$ for this sample's mean. 4.5 (a) SE = 2.89. (b) Z = 1.73, which indicates that the two values are not unusually distant from each other when accounting for the uncertainty in John's point estimate. 4.7 (a) We are 95% confident that US residents spend an average of 3.53 to 3.83 hours per day relaxing or pursuing activities they enjoy after an average work day. (b) 95% of such random samples will yield a 95% CI that contains the true average hours per day that US residents spend relaxing or pursuing activities they enjoy after an average work day. (c) They can be a little less confident in capturing the parameter, so the interval will be a little slimmer. 4.9 A common way to decrease the width of the interval without losing con dence is to increase the sample size. It may also be possible to use a more advanced sampling method, such as strati ed sampling, though the required analysis is beyond the scope of this course, and such a sampling method may be difficult in this context. 4.11 (a) False. Provided the data distribution is not very strongly skewed (n = 64 in this sample, so we can be slightly lenient with the skew), the sample mean will be nearly normal, allowing for the method normal approximation described. (b) False. Inference is made on the population parameter, not the point estimate. The point estimate is always in the confidence interval. (c) True. (d) False. The confidence interval is not about a sample mean. (e) False. To be more con dent that we capture the parameter, we need a wider interval. Think about needing a bigger net to be more sure of catching a sh in a murky lake. (f) True. Optional explanation: This is true since the normal model was used to model the sample mean. 
The margin of error is half the width of the interval, and the sample mean is the midpoint of the interval. (g) False. In the calculation of the standard error, we divide the standard deviation by the square root of the sample size. To cut the SE (or margin of error) in half, we would need to sample 22 = 4 times the number of people in the initial sample. 4.13 Independence: sample from < 10% of population. We must assume it is a simple random sample to move forward; in practice, we would investigate whether this is the case, but here we will just report that we are making this assumption. Notice that there are no students who have had no exclusive relationships in the sample, which suggests some student responses are likely missing (perhaps only positive values were reported). The sample size is at least 30. The skew is strong, but the sample is very large so this is not a concern. 90% CI: (2.97, 3.43). We are 90% con dent that the average number of exclusive relationships that Duke students have been in is between 2.97 and 3.43. 4.15 (a) H0 : $\mu$ = 8 (On average, New Yorkers sleep 8 hours a night.) HA : $\mu$ < 8 (On average, New Yorkers sleep less than 8 hours a night.) (b) H0 : $\mu$ = 15 (The average amount of company time each employee spends not working is 15 minutes for March Madness.) HA : $\mu$ > 15 (The average amount of company time each employee spends not working is greater than 15 minutes for March Madness.) 4.17 First, the hypotheses should be about the population mean ($\mu$) not the sample mean. Second, the null hypothesis should have an equal sign and the alternative hypothesis should be about the null hypothesized value, not the observed sample mean. The correct way to set up these hypotheses is shown below: $H_0 : \mu = \text {2 hours}$ $H_A : \mu > \text {2 hours}$ The one-sided test indicates that we our only interested in showing that 2 is an underestimate. Here the interest is in only one direction, so a one-sided test seems most appropriate. If we would also be interested if the data showed strong evidence that 2 was an overestimate, then the test should be two-sided. 4.19 (a) This claim does not seem plausible since 3 hours (180 minutes) is not in the interval. (b) 2.2 hours (132 minutes) is in the 95% confidence interval, so we do not have evidence to say she is wrong. However, it would be more appropriate to use the point estimate of the sample. (c) A 99% con dence interval will be wider than a 95% con dence interval, meaning it would enclose this smaller interval. This means 132 minutes would be in the wider interval, and we would not reject her claim based on a 99% confidence level. 4.21 Independence: The sample is presumably a simple random sample, though we should verify that is the case. Generally, this is what is meant by "random sample", though it is a good idea to actually check. For all following questions and solutions, it may be assumed that "random sample" actually means "simple random sample". 75 ball bearings is smaller than 10% of the population of ball bearings. The sample size is at least 30. The data are only slightly skewed. Under the assumption that the random sample is a simple random sample, $\bar {x}$ will be normally distributed. $H_0 : \mu$ = 7 hours. $H_A : \mu \ne$ 7 hours. $Z = -1.04 \rightarrow$ p-value = $2 \times 0.1492 = 0.2984$. Since the p-value is greater than 0.05, we fail to reject H0. 
The data do not provide convincing evidence that the average lifespan of all ball bearings produced by this machine is different than 7 hours. (Comment on using a one-sided alternative: the worker may be interested in learning if the ball bearings underperform or over-perform the manufacturer's claim, which is why we suggest a two-sided test.)

4.23 (a) Independence: The sample is random and 64 patients would almost certainly make up less than 10% of the ER residents. The sample size is at least 30. No information is provided about the skew. In practice, we would ask to see the data to check this condition, but here we will make the assumption that the skew is not very strong. (b) $H_0 : \mu = 127. H_A : \mu \ne 127$. $Z = 2.15 \rightarrow$ p-value = $2 \times 0.0158 = 0.0316$. Since the p-value is less than $\alpha = 0.05$, we reject H0. The data provide convincing evidence that the average ER wait time has increased over the last year. (c) Yes, it would change. The p-value is greater than 0.01, meaning we would fail to reject H0 at $\alpha = 0.01$.

4.25 $H_0 : \mu = 130. H_A : \mu \ne 130$. Z = 1.39 $\rightarrow$ p-value = $2 \times 0.0823 = 0.1646$, which is larger than $\alpha = 0.05$. The data do not provide convincing evidence that the true average calorie content in bags of potato chips is different than 130 calories.

4.27 (a) H0: Anti-depressants do not help symptoms of Fibromyalgia. HA: Anti-depressants do treat symptoms of Fibromyalgia. Remark: Diana might also have taken special note if her symptoms got much worse, so a more scientific approach would have been to use a two-sided test. While parts (b)-(d) use the one-sided version, your answers will be a little different if you used a two-sided test. (b) Concluding that anti-depressants work for the treatment of Fibromyalgia symptoms when they actually do not. (c) Concluding that anti-depressants do not work for the treatment of Fibromyalgia symptoms when they actually do. (d) If she makes a Type 1 error, she will continue taking medication that does not actually treat her disorder. If she makes a Type 2 error, she will stop taking medication that could treat her disorder.

4.29 (a) If the null hypothesis is rejected in error, then the regulators concluded that the adverse effect was higher in those taking the drug than those who did not take the drug when in reality the rates are the same for the two groups. (b) If the null hypothesis is not rejected but should have been, then the regulators failed to identify that the adverse effect was higher in those taking the drug. (c) Answers may vary a little. If all 403 drugs are actually okay, then about $403 \times 0.05 \approx 20$ drugs will have a Type 1 error. Of the 42 suspect drugs, we would expect about 20/42 to represent an error while about $22/42 \approx 52\%$ would actually be drugs with adverse effects. (d) There is not enough information to tell.

4.31 (a) Independence: The sample is random. In practice, we should ask whether 70 customers is less than 10% of the population (we'll assume this is the case for this exercise). The sample size is at least 30. No information is provided about the skew, so this is another item we would typically ask about. For now, we'll assume the skew is not very strong. (b) $H_0 : \mu = 18. H_A : \mu > 18$. $Z = 3.46 \rightarrow$ p-value = 0.0003, which is less than $\alpha = 0.05$, so we reject H0. There is strong evidence that the average revenue per customer is greater than \$18. (c) (18.65, 19.85). (d) Yes.
The hypothesis test rejects the notion that $\mu = 18$, and this value is not in the confidence interval. (e) Even though the increase in average revenue per customer appears to be significant, the restaurant owner may want to consider other criteria, such as total profits. With a longer happy hour, the revenue over the entire evening may actually drop since lower prices are offered for a longer time. Also, costs usually rise when prices are lowered. A better measure to consider may be an increase in total profits for the entire evening.

4.33 (a) The distribution is unimodal and strongly right skewed with a median between 5 and 10 years old. Ages range from 0 to slightly over 50 years old, and the middle 50% of the distribution is roughly between 5 and 15 years old. There are potential outliers on the higher end. (b) When the sample size is small, the sampling distribution is right skewed, just like the population distribution. As the sample size increases, the sampling distribution gets more unimodal, symmetric, and approaches normality. The variability also decreases. This is consistent with the Central Limit Theorem.

4.35 The centers are the same in each plot, and each data set is from a nearly normal distribution (see Section 4.2.6), though the histograms may not look very normal since each represents only 100 data points. The only way to tell which plot corresponds to which scenario is to examine the variability of each distribution. Plot B is the most variable, followed by Plot A, then Plot C. This means Plot B will correspond to the original data, Plot A to the sample means with size 5, and Plot C to the sample means with size 25.

4.37 (a) Right skewed. There is a long tail on the higher end of the distribution but a much shorter tail on the lower end. (b) Less than, as the median would be less than the mean in a right skewed distribution. (c) We should not. (d) Even though the population distribution is not normal, the conditions for inference are reasonably satisfied, with the possible exception of skew. If the skew isn't very strong (we should ask to see the data), then we can use the Central Limit Theorem to estimate this probability. For now, we'll assume the skew isn't very strong, though the description suggests it is at least moderate to strong. Use N(1.3; $SE_{\bar {x}} = 0.3/\sqrt {60}$): Z = 2.58 $\rightarrow$ 0.0049. (e) It would decrease it by a factor of $1/\sqrt {2}$.

4.39 (a) $Z = -3.33 \rightarrow 0.0004$. (b) The population SD is known and the data are nearly normal, so the sample mean will be nearly normal with distribution $N(\mu, \sigma / \sqrt {n})$, i.e. N(2.5; 0.0055). (c) $Z = -10.54 \rightarrow \approx 0$. (d) (Figure not shown.) (e) We could not estimate (a) without a nearly normal population distribution. We also could not estimate (c) since the sample size is not sufficient to yield a nearly normal sampling distribution if the population distribution is not nearly normal.

4.41 (a) We cannot use the normal model for this calculation, but we can use the histogram. About 500 songs are shown to be longer than 5 minutes, so the probability is about $500/3000 = 0.167$. (b) Two different answers are reasonable. Option 1: Since the population distribution is only slightly skewed to the right, even a small sample size will yield a nearly normal sampling distribution. We also know that the songs are sampled randomly and the sample size is less than 10% of the population, so the length of one song in the sample is independent of another.
We are looking for the probability that the total length of 15 songs is more than 60 minutes, which means that the average song should last at least 60/15 = 4 minutes. Using $SE = 1.62/ \sqrt {15}$, $Z = 1.31 \rightarrow 0.0951$. Option 2: Since the population distribution is not normal, a small sample size may not be sufficient to yield a nearly normal sampling distribution. Therefore, we cannot estimate the probability using the tools we have learned so far. (c) We can now be confident that the conditions are satisfied. $Z = 0.92 \rightarrow 0.1788$.

4.43 (a) $H_0 : \mu _{2009} = \mu _{2004}$. $H_A : \mu _{2009} \ne \mu _{2004}$. (b) $\bar {x}_{2009} - \bar {x}_{2004} = -3.6$ spam emails per day. (c) The null hypothesis was not rejected, and the data do not provide convincing evidence that the true average number of spam emails per day in years 2004 and 2009 are different. The observed difference is about what we might expect from sampling variability alone. (d) Yes, since the hypothesis of no difference was not rejected in part (c).

4.45 (a) $H_0 : p_{2009} = p_{2004}$. $H_A : p_{2009} \ne p_{2004}$. (b) -7%. (c) The null hypothesis was rejected. The data provide strong evidence that the true proportion of those who once a month or less frequently delete their spam email was higher in 2004 than in 2009. The difference is so large that it cannot easily be explained as being due to chance. (d) No, since the null difference, 0, was rejected in part (c).

4.47 (a) Scenario I is higher. Recall that a sample mean based on less data tends to be less accurate and have larger standard errors. (b) Scenario I is higher. The higher the confidence level, the higher the corresponding margin of error. (c) They are equal. The sample size does not affect the calculation of the p-value for a given Z score. (d) Scenario I is higher. If the null hypothesis is harder to reject (lower $\alpha$), then we are more likely to make a Type 2 error.

4.49 $10 \ge 2.58 \times \frac {102}{\sqrt {n}} \rightarrow n \ge 692.5319$. He should survey at least 693 customers.

4.51 (a) The null hypothesis would be that the mean this year is also 128 minutes. The alternative hypothesis would be that the mean is different from 128 minutes. (b) First calculate the SE: $\frac {39}{\sqrt {64}} = 4.875$. Next, identify the Z scores that would result in rejecting H0: $Z_{lower}$ = -1.96, $Z_{upper}$ = 1.96. In each case, calculate the corresponding sample mean cutoff: $\bar {x}_{lower}$ = 118.445 and $\bar {x}_{upper}$ = 137.555. (c) Construct Z scores for the values from part (b) but using the supposed true distribution (i.e. $\mu$ = 135), i.e. not using the null value ($\mu$ = 128). The probability of correctly rejecting the null hypothesis would be 0.0003 + 0.3015 = 0.3018 using these two cutoffs, and the probability of a Type 2 error would then be 1 - 0.3018 = 0.6982.

Inference for numerical data

5.1 (a) For each observation in one data set, there is exactly one specially corresponding observation in the other data set for the same geographic location. The data are paired. (b) H0 : $\mu_{diff} = 0$ (There is no difference in average daily high temperature between January 1, 1968 and January 1, 2008 in the continental US.) $H_A : \mu_{diff} > 0$ (Average daily high temperature on January 1, 1968 was lower than average daily high temperature on January 1, 2008 in the continental US.) If you chose a two-sided test, that would also be acceptable.
If this is the case, note that your p-value will be a little bigger than what is reported here in part (d). (c) Independence: locations are random and represent less than 10% of all possible locations in the US. The sample size is at least 30. We are not given the distribution to check the skew. In practice, we would ask to see the data to check this condition, but here we will move forward under the assumption that it is not strongly skewed. (d) $Z = 1.60 \rightarrow$ p-value = 0.0548. (e) Since the p-value > $\alpha$ (since none is given, use 0.05), fail to reject H0. The data do not provide strong evidence of temperature warming in the continental US. However, it should be noted that the p-value is very close to 0.05. (f) Type 2, since we may have incorrectly failed to reject H0. There may be an increase, but we were unable to detect it. (g) Yes, since we failed to reject H0, which had a null value of 0.

5.3 (a) (-0.03, 2.23). (b) We are 90% confident that the average daily high on January 1, 2008 in the continental US was 0.13 degrees lower to 2.13 degrees higher than the average daily high on January 1, 1968. (c) No, since 0 is included in the interval.

5.5 (a) Each of the 36 mothers is related to exactly one of the 36 fathers (and vice-versa), so there is a special correspondence between the mothers and fathers. (b) $H_0 : \mu _{diff}$ = 0. $H_A : \mu _{diff} \ne 0$. Independence: random sample from less than 10% of population. Sample size of at least 30. The skew of the differences is, at worst, slight. $Z = 2.72 \rightarrow$ p-value = 0.0066. Since p-value < 0.05, reject H0. The data provide strong evidence that the average IQ scores of mothers and fathers of gifted children are different, and the data indicate that mothers' scores are higher than fathers' scores for the parents of gifted children.

5.7 Independence: Random samples that are less than 10% of the population. Both samples are at least of size 30. In practice, we'd ask for the data to check the skew (which is not provided), but here we will move forward under the assumption that the skew is not extreme (there is some leeway in the skew for such large samples). Use z* = 1.65. 90% CI: (0.16, 5.84). We are 90% confident that the average score in 2008 was 0.16 to 5.84 points higher than the average score in 2004.

5.9 (a) $H_0 : \mu _{2008} = \mu _{2004} \rightarrow \mu _{2004} - \mu _{2008} = 0$ (Average math score in 2008 is equal to average math score in 2004.) $H_A : \mu _{2008} \ne \mu _{2004} \rightarrow \mu _{2004} - \mu _{2008} \ne 0$ (Average math score in 2008 is different than average math score in 2004.) Conditions necessary for inference were checked in Exercise 5.7. Z = -1.74 $\rightarrow$ p-value = 0.0818. Since the p-value < $\alpha$, reject H0. The data provide strong evidence that the average math score for 13 year old students has changed between 2004 and 2008. (b) Yes, a Type 1 error is possible. We rejected H0, but it is possible H0 is actually true. (c) No, since we rejected H0 in part (a).

5.11 (a) We are 95% confident that those on the Paleo diet lose 0.891 pounds less to 4.891 pounds more than those in the control group. (b) No. The value representing no difference between the diets, 0, is included in the confidence interval. (c) The change would have shifted the confidence interval by 1 pound, yielding CI = (0.109, 5.891), which does not include 0. Had we observed this result, we would have rejected H0.

5.13 Independence and sample size conditions are satisfied.
Almost any degree of skew is reasonable with such large samples. Compute the joint SE: $\sqrt {SE^2_M + SE^2_W} = 0.114$. The 95% CI: (-11.32, -10.88). We are 95% confident that the average body fat percentage in men is 11.32% to 10.88% lower than the average body fat percentage in women.

5.15 (a) df = 6 - 1 = 5, $t^*_5$ = 2.02 (column with two tails of 0.10, row with df = 5). (b) df = 21 - 1 = 20, $t^*_{20} = 2.53$ (column with two tails of 0.02, row with df = 20). (c) df = 28, $t^*_{28} = 2.05$. (d) df = 11, $t^*_{11} = 3.11$.

5.17 The mean is the midpoint: $\bar {x} = 20$. Identify the margin of error: ME = 1.015, then use $t^*_{35} = 2.03$ and $SE = s/\sqrt {n}$ in the formula for margin of error to identify s = 3.

5.19 (a) $H_0: \mu = 8$ (New Yorkers sleep 8 hrs per night on average.) $H_A: \mu < 8$ (New Yorkers sleep less than 8 hrs per night on average.) (b) Independence: The sample is random and from less than 10% of New Yorkers. The sample is small, so we will use a t distribution. For this size sample, slight skew is acceptable, and the min/max suggest there is not much skew in the data. T = -1.75. df = 25 - 1 = 24. (c) 0.025 < p-value < 0.05. If in fact the true population mean of the amount New Yorkers sleep per night was 8 hours, the probability of getting a random sample of 25 New Yorkers where the average amount of sleep is 7.73 hrs per night or less is between 0.025 and 0.05. (d) Since p-value < 0.05, reject H0. The data provide strong evidence that New Yorkers sleep less than 8 hours per night on average. (e) No, as we rejected H0.

5.21 $t^*_{19}$ is 1.73 for a one-tail. We want the lower tail, so set -1.73 equal to the T score, then solve for $\bar {x}$: 56.91.

5.23 No, he should not move forward with the test since the distributions of total personal income are very strongly skewed. When sample sizes are large, we can be a bit lenient with skew. However, such strong skew observed in this exercise would require somewhat large sample sizes, somewhat higher than 30.

5.25 (a) These data are paired. For example, the Friday the 13th in say, September 1991, would probably be more similar to the Friday the 6th in September 1991 than to Friday the 6th in another month or year. (b) Let $\mu _{diff} = \mu _{sixth} - \mu _{thirteenth}$. $H_0 : \mu _{diff} = 0$. $H_A : \mu _{diff} \ne 0$. (c) Independence: The months selected are not random. However, if we think these dates are roughly equivalent to a simple random sample of all such Friday 6th/13th date pairs, then independence is reasonable. To proceed, we must make this strong assumption, though we should note this assumption in any reported results. With fewer than 10 observations, we would need to use the t distribution to model the sample mean. The normal probability plot of the differences shows an approximately straight line. There isn't a clear reason why this distribution would be skewed, and since the normal quantile plot looks reasonable, we can mark this condition as reasonably satisfied. (d) T = 4.94 for df = 10 - 1 = 9 $\rightarrow$ p-value < 0.01. (e) Since p-value < 0.05, reject H0. The data provide strong evidence that the average number of cars at the intersection is higher on Friday the 6th than on Friday the 13th. (We might believe this intersection is representative of all roads, i.e. there is higher traffic on Friday the 6th relative to Friday the 13th. However, we should be cautious of the required assumption for such a generalization.)
(f) If the average number of cars passing the intersection actually was the same on Friday the 6th and 13th, then the probability that we would observe a test statistic so far from zero is less than 0.01. (g) We might have made a Type 1 error, i.e. incorrectly rejected the null hypothesis.

5.27 (a) $H_0 : \mu _{diff} = 0$. $H_A : \mu _{diff} \ne 0$. T = -2.71. df = 5. 0.02 < p-value < 0.05. Since p-value < 0.05, reject H0. The data provide strong evidence that the average number of traffic accident related emergency room admissions is different between Friday the 6th and Friday the 13th. Furthermore, the data indicate that the direction of that difference is that accidents are lower on Friday the 6th relative to Friday the 13th. (b) (-6.49, -0.17). (c) This is an observational study, not an experiment, so we cannot so easily infer a causal intervention implied by this statement. It is true that there is a difference. However, for example, this does not mean that a responsible adult going out on Friday the 13th has a higher chance of harm than on any other night.

5.29 (a) Chickens fed linseed weighed an average of 218.75 grams while those fed horsebean weighed an average of 160.20 grams. Both distributions are relatively symmetric with no apparent outliers. There is more variability in the weights of chickens fed linseed. (b) $H_0 : \mu _{ls} = \mu _{hb}$. $H_A : \mu _{ls} \ne \mu _{hb}$. We leave the conditions to you to consider. T = 3.02, df = min(11, 9) = 9 $\rightarrow$ 0.01 < p-value < 0.02. Since p-value < 0.05, reject H0. The data provide strong evidence that there is a significant difference between the average weights of chickens that were fed linseed and horsebean. (c) Type 1, since we rejected H0. (d) Yes, since p-value > 0.01, we would have failed to reject H0.

5.31 $H_0 : \mu _C = \mu _S$. $H_A : \mu _C \ne \mu _S$. T = 3.48, df = 11 $\rightarrow$ p-value < 0.01. Since p-value < 0.05, reject H0. The data provide strong evidence that the average weight of chickens that were fed casein is different than the average weight of chickens that were fed soybean (with weights from casein being higher). Since this is a randomized experiment, the observed difference can be attributed to the diet.

5.33 $H_0 : \mu _T = \mu _C$. $H_A : \mu _T \ne \mu _C$. T = 2.24, df = 21 $\rightarrow$ 0.02 < p-value < 0.05. Since p-value < 0.05, reject H0. The data provide strong evidence that the average food consumption by the patients in the treatment and control groups are different. Furthermore, the data indicate patients in the distracted eating (treatment) group consume more food than patients in the control group.

5.35 Let $\mu _{diff} = \mu _{pre} - \mu _{post}$. $H_0 : \mu _{diff} = 0$: Treatment has no effect. $H_A : \mu _{diff} > 0$: Treatment is effective in reducing Pd T scores; the average pre-treatment score is higher than the average post-treatment score. Note that the reported values are pre minus post, so we are looking for a positive difference, which would correspond to a reduction in the psychopathic deviant T score. Conditions are checked as follows. Independence: The subjects are randomly assigned to treatments, so the patients in each group are independent. All three sample sizes are smaller than 30, so we use t tests. Distributions of differences are somewhat skewed. The sample sizes are small, so we cannot reliably relax this assumption. (We will proceed, but we would not report the results of this specific analysis, at least for treatment group 1.)
For all three groups: df = 13. $T_1 = 1.89$ (0.025 < p-value < 0.05), $T_2 = 1.35$ (p-value = 0.10), $T_3 = -1.40$ (p-value > 0.10). The only significant test reduction is found in Treatment 1; however, we had earlier noted that this result might not be reliable due to the skew in the distribution. Note that the calculation of the p-value for Treatment 3 was unnecessary: the sample mean indicated an increase in Pd T scores under this treatment (as opposed to a decrease, which was the result of interest). That is, we could tell without formally completing the hypothesis test that the p-value would be large for this treatment group.

5.37 $H_0: \mu _1 = \mu _2 = \dots = \mu _6$. HA: The average weight varies across some (or all) groups. Independence: Chicks are randomly assigned to feed types (presumably kept separate from one another), therefore independence of observations is reasonable. Approx. normal: the distributions of weights within each feed type appear to be fairly symmetric. Constant variance: Based on the side-by-side box plots, the constant variance assumption appears to be reasonable. There are differences in the actual computed standard deviations, but these might be due to chance as these are quite small samples. $F_{5,65} = 15.36$ and the p-value is approximately 0. With such a small p-value, we reject H0. The data provide convincing evidence that the average weight of chicks varies across some (or all) feed supplement groups.

5.39 (a) H0: The mean MET for each group is equal to the others. HA: At least one pair of means is different. (b) Independence: We don't have any information on how the data were collected, so we cannot assess independence. To proceed, we must assume the subjects in each group are independent. In practice, we would inquire for more details. Approx. normal: The data are bound below by zero and the standard deviations are larger than the means, indicating very strong skew. However, since the sample sizes are extremely large, even extreme skew is acceptable. Constant variance: This condition is sufficiently met, as the standard deviations are reasonably consistent across groups. (c) ANOVA table (last column omitted); the Total row has Df = 50738 and Sum Sq = 25,575,327. (d) Since the p-value is very small, reject H0. The data provide convincing evidence that the average MET differs between at least one pair of groups.

5.41 (a) H0: Average GPA is the same for all majors. HA: At least one pair of means are different. (b) Since p-value > 0.05, fail to reject H0. The data do not provide convincing evidence of a difference between the average GPAs across three groups of majors. (c) The total degrees of freedom is 195 + 2 = 197, so the sample size is 197 + 1 = 198.

5.43 (a) False. As the number of groups increases, so does the number of comparisons and hence the modified significance level decreases. (b) True. (c) True. (d) False. We need observations to be independent regardless of sample size.

5.45 (a) H0: Average score difference is the same for all treatments. HA: At least one pair of means are different. (b) We should check conditions. If we look back to the earlier exercise, we will see that the patients were randomized, so independence is satisfied. There are some minor concerns about skew, especially with the third group, though this may be acceptable. The standard deviations across the groups are reasonably similar. Since the p-value is less than 0.05, reject H0.
The data provide convincing evidence of a difference between the average reduction in score among treatments. (c) We determined that at least two means are different in part (b), so we now conduct $K = 3 \times 2/2 = 3$ pairwise t tests that each use $\alpha = 0.05/3 = 0.0167$ for a significance level. Use the following hypotheses for each pairwise test. H0: The two means are equal. HA: The two means are different. The sample sizes are equal and we use the pooled SD, so we can compute SE = 3.7 with the pooled df = 39. The p-value only for Trmt 1 vs. Trmt 3 may be statistically significant: 0.01 < p-value < 0.02. Since we cannot tell, we should use a computer to get the p-value, 0.015, which is statistically significant for the adjusted significance level. That is, we have identified Treatment 1 and Treatment 3 as having different effects. Checking the other two comparisons, the differences are not statistically significant.

Inference for categorical data

6.1 (a) False. Doesn't satisfy success-failure condition. (b) True. The success-failure condition is not satisfied. In most samples we would expect $\hat {p}$ to be close to 0.08, the true population proportion. While $\hat {p}$ can be much above 0.08, it is bound below by 0, suggesting it would take on a right skewed shape. Plotting the sampling distribution would confirm this suspicion. (c) False. $SE_{\hat {p}} = 0.0243$, and $\hat {p} = 0.12$ is only $\frac {0.12-0.08}{0.0243} = 1.65$ SEs away from the mean, which would not be considered unusual. (d) True. $\hat {p} = 0.12$ is 2.32 standard errors away from the mean, which is often considered unusual. (e) False. Decreases the SE by a factor of $1/\sqrt {2}$.

6.3 (a) True. See the reasoning of 6.1(b). (b) True. We take the square root of the sample size in the SE formula. (c) True. The independence and success-failure conditions are satisfied. (d) True. The independence and success-failure conditions are satisfied.

6.5 (a) False. A confidence interval is constructed to estimate the population proportion, not the sample proportion. (b) True. 95% CI: 70% $\pm$ 8%. (c) True. By the definition of a confidence interval. (d) True. Quadrupling the sample size decreases the SE and ME by a factor of $1/\sqrt {4}$. (e) False. The 95% CI is entirely above 50%.

6.7 With a random sample from < 10% of the population, independence is satisfied. The success-failure condition is also satisfied. ME = $z^* \sqrt {\frac {\hat {p}(1- \hat {p})}{n}} = 1.96 \sqrt {\frac {0.56 \times 0.44}{600}} = 0.0397 \approx 4\%$.

6.9 (a) Proportion of graduates from this university who found a job within one year of graduating. $\hat {p} = 348/400 = 0.87$. (b) This is a random sample from less than 10% of the population, so the observations are independent. Success-failure condition is satisfied: 348 successes, 52 failures, both well above 10. (c) (0.8371, 0.9029). We are 95% confident that approximately 84% to 90% of graduates from this university found a job within one year of completing their undergraduate degree. (d) 95% of such random samples would produce a 95% confidence interval that includes the true proportion of students at this university who found a job within one year of graduating from college. (e) (0.8267, 0.9133). Similar interpretation as before. (f) 99% CI is wider, as we are more confident that the true proportion is within the interval and so need to cover a wider range.

6.11 (a) No. The sample only represents students who took the SAT, and this was also an online survey. (b) (0.5289, 0.5711).
We are 95% confident that 53% to 57% of high school seniors are fairly certain that they will participate in a study abroad program in college. (c) 90% of such random samples would produce a 90% confidence interval that includes the true proportion. (d) Yes. The interval lies entirely above 50%.

6.13 (a) This is an appropriate setting for a hypothesis test. H0 : p = 0.50. HA : p > 0.50. Both independence and the success-failure condition are satisfied. $Z = 1.12 \rightarrow$ p-value = 0.1314. Since the p-value > $\alpha$ = 0.05, we fail to reject H0. The data do not provide strong evidence in favor of the claim. (b) Yes, since we did not reject H0 in part (a).

6.15 (a) $H_0 : p = 0.38$. $H_A : p \ne 0.38$. Independence (random sample, < 10% of population) and the success-failure condition are satisfied. $Z = -20 \rightarrow$ p-value $\approx 0$. Since the p-value is very small, we reject H0. The data provide strong evidence that the proportion of Americans who only use their cell phones to access the internet is different than the Chinese proportion of 38%, and the data indicate that the proportion is lower in the US. (b) If in fact 38% of Americans used their cell phones as a primary access point to the internet, the probability of obtaining a random sample of 2,254 Americans where 17% or less or 59% or more use only their cell phones to access the internet would be approximately 0. (c) (0.1545, 0.1855). We are 95% confident that approximately 15.5% to 18.6% of all Americans primarily use their cell phones to browse the internet.

6.17 (a) $H_0 : p = 0.5. H_A : p > 0.5$. Independence (random sample, < 10% of population) is satisfied, as is the success-failure condition (using $p_0 = 0.5$, we expect 40 successes and 40 failures). $Z = 2.91 \rightarrow$ p-value = 0.0018. Since the p-value < 0.05, we reject the null hypothesis. The data provide strong evidence that the rate of correctly identifying a soda for these people is significantly better than just by random guessing. (b) If in fact people cannot tell the difference between diet and regular soda and they randomly guess, the probability of getting a random sample of 80 people where 53 or more identify a soda correctly would be 0.0018.

6.19 (a) Independence is satisfied (random sample from < 10% of the population), as is the success-failure condition (40 smokers, 160 non-smokers). The 95% CI: (0.145, 0.255). We are 95% confident that 14.5% to 25.5% of all students at this university smoke. (b) We want z*SE to be no larger than 0.02 for a 95% confidence level. We use z* = 1.96 and plug in the point estimate $\hat {p} = 0.2$ within the SE formula: $1.96 \sqrt {\frac {0.2(1 - 0.2)}{n}} \le 0.02$. The sample size n should be at least 1,537.

6.21 The margin of error, which is computed as z*SE, must be smaller than 0.01 for a 90% confidence level. We use z* = 1.65 for a 90% confidence level, and we can use the point estimate $\hat {p} = 0.52$ in the formula for SE. $1.65 \sqrt {\frac {0.52(1 - 0.52)}{n}} \le 0.01$. Therefore, the sample size n must be at least 6,796.

6.23 This is not a randomized experiment, and it is unclear whether people would be affected by the behavior of their peers. That is, independence may not hold. Additionally, there are only 5 interventions under the provocative scenario, so the success-failure condition does not hold. Even if we consider a hypothesis test where we pool the proportions, the success-failure condition will not be satisfied.
Since one condition is questionable and the other is not satisfied, the difference in sample proportions will not follow a nearly normal distribution.

6.25 (a) False. The entire confidence interval is above 0. (b) True. (c) True. (d) True. (e) False. It is simply the negated and reordered values: (-0.06, -0.02).

6.27 (a) (0.23, 0.33). We are 95% confident that the proportion of Democrats who support the plan is 23% to 33% higher than the proportion of Independents who do. (b) True.

6.29 (a) College grads: 23.7%. Non-college grads: 33.7%. (b) Let $p_{CG}$ and $p_{NCG}$ represent the proportion of college graduates and non-college graduates who responded "do not know". $H_0 : p_{CG} = p_{NCG}. H_A : p_{CG} \ne p_{NCG}$. Independence is satisfied (random sample, < 10% of the population), and the success-failure condition, which we would check using the pooled proportion ($\hat {p} = 235/827 = 0.284$), is also satisfied. $Z = -3.18 \rightarrow$ p-value = 0.0014. Since the p-value is very small, we reject H0. The data provide strong evidence that the proportion of college graduates who do not have an opinion on this issue is different than that of non-college graduates. The data also indicate that fewer college grads say they "do not know" than non-college grads (i.e. the data indicate the direction after we reject H0).

6.31 (a) College grads: 35.2%. Non-college grads: 33.9%. (b) Let $p_{CG}$ and $p_{NCG}$ represent the proportion of college graduates and non-college grads who support offshore drilling. H0 : $p_{CG} = p_{NCG}. H_A : p_{CG} \ne p_{NCG}$. Independence is satisfied (random sample, < 10% of the population), and the success-failure condition, which we would check using the pooled proportion ($\hat {p} = 286/827 = 0.346$), is also satisfied. $Z = 0.39 \rightarrow$ p-value = 0.6966. Since the p-value > $\alpha$ (0.05), we fail to reject H0. The data do not provide strong evidence of a difference between the proportions of college graduates and non-college graduates who support offshore drilling in California.

6.33 Subscript C means control group. Subscript T means truck drivers. (a) H0 : pC = pT. HA : pC $\ne$ pT. Independence is satisfied (random samples, < 10% of the population), as is the success-failure condition, which we would check using the pooled proportion ($\hat {p} = 70/495 = 0.141$). $Z = -1.58 \rightarrow$ p-value = 0.1164. Since the p-value is high, we fail to reject H0. The data do not provide strong evidence that the rates of sleep deprivation are different for non-transportation workers and truck drivers.

6.35 (a) Summary of the study (counts of virologic failure by treatment group):
Nevaripine: Yes 26, No 94, Total 120
Lopinavir: Yes 10, No 110, Total 120
Total: Yes 36, No 204, Total 240
(b) H0 : pN = pL. There is no difference in virologic failure rates between the Nevaripine and Lopinavir groups. HA : pN $\ne$ pL. There is some difference in virologic failure rates between the Nevaripine and Lopinavir groups. (c) Random assignment was used, so the observations in each group are independent. If the patients in the study are representative of those in the general population (something impossible to check with the given information), then we can also confidently generalize the findings to the population. The success-failure condition, which we would check using the pooled proportion ($\hat {p} = 36/240 = 0.15$), is satisfied. $Z = 3.04 \rightarrow$ p-value = 0.0024. Since the p-value is low, we reject H0. There is strong evidence of a difference in virologic failure rates between the Nevaripine and Lopinavir groups; treatment and virologic failure do not appear to be independent.
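For readers who want to check the arithmetic in a two-proportion solution such as 6.35, the pooled z-test is easy to reproduce with a few lines of code. The following is a minimal sketch (not part of the original solutions), written in Python and assuming scipy is available for the normal tail probability; the counts come from the summary table above, and the variable names are only illustrative.

```python
from math import sqrt
from scipy.stats import norm

# Counts from the 6.35 summary: virologic failures out of each treatment group
x_nev, n_nev = 26, 120   # Nevaripine
x_lop, n_lop = 10, 120   # Lopinavir

p_nev = x_nev / n_nev                          # sample proportion, Nevaripine
p_lop = x_lop / n_lop                          # sample proportion, Lopinavir
p_pool = (x_nev + x_lop) / (n_nev + n_lop)     # pooled proportion = 36/240 = 0.15

# Standard error under H0 uses the pooled proportion for both groups
se = sqrt(p_pool * (1 - p_pool) * (1 / n_nev + 1 / n_lop))

z = (p_nev - p_lop) / se          # test statistic
p_value = 2 * norm.sf(abs(z))     # two-sided p-value

# Prints Z close to 2.9; small differences from the value quoted in the
# answer key can come from rounding or from how the SE is computed.
print(f"Z = {z:.2f}, two-sided p-value = {p_value:.4f}")
```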
6.37 (a) False. The chi-square distribution has one parameter called degrees of freedom. (b) True. (c) True. (d) False. As the degrees of freedom increases, the shape of the chi-square distribution becomes more symmetric.

6.39 (a) H0: The distribution of the format of the book used by the students follows the professor's predictions. HA: The distribution of the format of the book used by the students does not follow the professor's predictions. (b) $E_{hard copy} = 126 \times 0.60 = 75.6$. $E_{print} = 126 \times 0.25 = 31.5$. $E_{online} = 126 \times 0.15 = 18.9$. (c) Independence: The sample is not random. However, if the professor has reason to believe that the proportions are stable from one term to the next and students are not affecting each other's study habits, independence is probably reasonable. Sample size: All expected counts are at least 5. Degrees of freedom: df = k - 1 = 3 - 1 = 2 is more than 1. (d) $X^2 = 2.32$, df = 2, p-value > 0.3. (e) Since the p-value is large, we fail to reject H0. The data do not provide strong evidence indicating the professor's predictions were statistically inaccurate.

6.41 (a) Two-way table (counts of quitting by treatment):
Patch + support group: Quit Yes 40, Quit No 110, Total 150
Only patch: Quit Yes 30, Quit No 120, Total 150
Total: Quit Yes 70, Quit No 230, Total 300
(b-i) $E_{row_1,col_1} = \frac {(\text{row 1 total}) \times (\text{col 1 total})}{\text{table total}} = \frac {150 \times 70}{300} = 35$. This is lower than the observed value. (b-ii) $E_{row_2,col_2} = \frac {(\text{row 2 total}) \times (\text{col 2 total})}{\text{table total}} = \frac {150 \times 230}{300} = 115$. This is lower than the observed value.

6.43 H0: The opinion of college grads and non-grads is not different on the topic of drilling for oil and natural gas off the coast of California. HA: Opinions regarding the drilling for oil and natural gas off the coast of California have an association with earning a college degree. Expected counts: $E_{row 1,col 1} = 151.5$, $E_{row 1,col 2} = 134.5$, $E_{row 2,col 1} = 162.1$, $E_{row 2,col 2} = 143.9$, $E_{row 3,col 1} = 124.5$, $E_{row 3,col 2} = 110.5$. Independence: The samples are both random, unrelated, and from less than 10% of the population, so independence between observations is reasonable. Sample size: All expected counts are at least 5. Degrees of freedom: $df = (R - 1) \times (C - 1) = (3 - 1) \times (2 - 1) = 2$, which is greater than 1. $X^2 = 11.47$, df = 2 $\rightarrow$ 0.001 < p-value < 0.005. Since the p-value < $\alpha$, we reject H0. There is strong evidence that there is an association between support for offshore drilling and having a college degree.

6.45 (a) H0 : There is no relationship between gender and how informed Facebook users are about adjusting their privacy settings. HA : There is a relationship between gender and how informed Facebook users are about adjusting their privacy settings. (b) The expected counts: $E_{row 1,col 1} = 296.6$, $E_{row 1,col 2} = 369.3$, $E_{row 2,col 1} = 162.1$, $E_{row 2,col 2} = 68.2$, $E_{row 3,col 1} = 7.6$, $E_{row 3,col 2} = 9.4$. The sample is random, all expected counts are above 5, and $df = (3 - 1) \times (2 - 1) = 2 > 1$, so we may proceed with the test.

6.47 It is not appropriate. There are only 9 successes in the sample, so the success-failure condition is not met.

6.49 (a) H0 : p = 0.69. HA : p $\ne$ 0.69. (b) $\hat {p} = \frac {17}{30} = 0.57$. (c) The success-failure condition is not satisfied; note that it is appropriate to use the null value ($p_0 = 0.69$) to compute the expected number of successes and failures. (d) Answers may vary. Each student can be represented with a card.
Take 100 cards, 69 black cards representing those who follow the news about Egypt and 31 red cards representing those who do not. Shuffle the cards and draw with replacement (shuffling each time in between draws) 30 cards representing the 30 high school students. Calculate the proportion of black cards in this sample, $\hat {p} _{sim}$, i.e. the proportion of those who follow the news in the simulation. Repeat this many times (e.g. 10,000 times) and plot the resulting sample proportions. The p-value will be two times the proportion of simulations where $\hat {p}_{sim} \le 0.57$. (Note: we would generally use a computer to perform these simulations.) (e) The p-value is about 0.001 + 0.005 + 0.020 + 0.035 + 0.075 = 0.136, meaning the two-sided p-value is about 0.272. Your p-value may vary slightly since it is based on a visual estimate. Since the p-value is greater than 0.05, we fail to reject H0. The data do not provide strong evidence that the proportion of high school students who followed the news about Egypt is different than the proportion of American adults who did.

6.51 The subscript pr corresponds to provocative and con to conservative. (a) $H_0 : p_{pr} = p_{con}$. $H_A : p_{pr} \ne p_{con}$. (b) -0.35. (c) The left tail for the p-value is calculated by adding up the two left bins: 0.005 + 0.015 = 0.02. Doubling the one tail, the p-value is 0.04. (Students may have approximate results, and a small number of students may have a p-value of about 0.05.) Since the p-value is low, we reject H0. The data provide strong evidence that people react differently under the two scenarios.

Introduction to linear regression

7.1 (a) The residual plot will show randomly distributed residuals around 0. The variance is also approximately constant. (b) The residuals will show a fan shape, with higher variability for smaller x. There will also be many points on the right above the line. There is trouble with the model being fit here.

7.3 (a) Strong relationship, but a straight line would not fit the data. (b) Strong relationship, and a linear fit would be reasonable. (c) Weak relationship, and trying a linear fit would be reasonable. (d) Moderate relationship, but a straight line would not fit the data. (e) Strong relationship, and a linear fit would be reasonable. (f) Weak relationship, and trying a linear fit would be reasonable.

7.5 (a) Exam 2 since there is less of a scatter in the plot of final exam grade versus exam 2. Notice that the relationship between Exam 1 and the Final Exam appears to be slightly nonlinear. (b) Exam 2 and the final are relatively close to each other chronologically, or Exam 2 may be cumulative so has greater similarities in material to the final exam. Answers may vary for part (b).

7.7 (a) $R = -0.7 \rightarrow$ (4). (b) $R = 0.45 \rightarrow$ (3). (c) $R = 0.06 \rightarrow$ (1). (d) $R = 0.92 \rightarrow$ (2).

7.9 (a) The relationship is positive, weak, and possibly linear. However, there do appear to be some anomalous observations along the left where several students have the same height that is notably far from the cloud of the other points. Additionally, there are many students who appear not to have driven a car, and they are represented by a set of points along the bottom of the scatterplot. (b) There is no obvious explanation why simply being tall should lead a person to drive faster. However, one confounding factor is gender. Males tend to be taller than females on average, and personal experiences (anecdotal) may suggest they drive faster.
If we were to follow up on this suspicion, we would find that sociological studies confirm this suspicion. (c) Males are taller on average and they drive faster. The gender variable is indeed an important confounding variable.

7.11 (a) There is a somewhat weak, positive, possibly linear relationship between the distance traveled and travel time. There is clustering near the lower left corner that we should take special note of. (b) Changing the units will not change the form, direction or strength of the relationship between the two variables. If longer distances measured in miles are associated with longer travel time measured in minutes, longer distances measured in kilometers will be associated with longer travel time measured in hours. (c) Changing units doesn't affect correlation: R = 0.636.

7.13 (a) There is a moderate, positive, and linear relationship between shoulder girth and height. (b) Changing the units, even if just for one of the variables, will not change the form, direction or strength of the relationship between the two variables.

7.15 In each part, we may write the husband ages as a linear function of the wife ages: (a) $age_H = age_W + 3$; (b) $age_H = age_W - 2$; and (c) $age_H = age_W/2$. Therefore, the correlation will be exactly 1 in all three parts. An alternative way to gain insight into this solution is to create a mock data set, such as a data set of 5 women with ages 26, 27, 28, 29, and 30 (or some other set of ages). Then, based on the description, say for part (a), we can compute their husbands' ages as 29, 30, 31, 32, and 33. We can plot these points to see they fall on a straight line, and they always will. The same approach can be applied to the other parts as well.

7.17 (a) There is a positive, very strong, linear association between the number of tourists and spending. (b) Explanatory: number of tourists (in thousands). Response: spending (in millions of US dollars). (c) We can predict spending for a given number of tourists using a regression line. This may be useful information for determining how much the country may want to spend in advertising abroad, or to forecast expected revenues from tourism. (d) Even though the relationship appears linear in the scatterplot, the residual plot actually shows a nonlinear relationship. This is not a contradiction: residual plots can show divergences from linearity that can be difficult to see in a scatterplot. A simple linear model is inadequate for modeling these data. It is also important to consider that these data are observed sequentially, which means there may be a hidden structure that is not evident in the current data but that is important to consider.

7.19 (a) First calculate the slope: $b_1 = R \times \frac {s_y}{s_x} = 0.636 \times \frac {113}{99} = 0.726$. Next, make use of the fact that the regression line passes through the point $(\bar {x}, \bar {y})$: $\bar {y} = b_0 + b_1 \times \bar {x}$. Plug in $\bar {x}$, $\bar {y}$, and $b_1$, and solve for $b_0$: 51. Solution: $\widehat{\text{travel time}} = 51 + 0.726 \times \text{distance}$. (b) $b_1$: For each additional mile in distance, the model predicts an additional 0.726 minutes in travel time. $b_0$: When the distance traveled is 0 miles, the travel time is expected to be 51 minutes. It does not make sense to have a travel distance of 0 miles in this context. Here, the y-intercept serves only to adjust the height of the line and is meaningless by itself. (c) $R^2 = 0.636^2 = 0.40$. About 40% of the variability in travel time is accounted for by the model, i.e.
explained by the distance traveled. (d) $\widehat{\text{travel time}} = 51 + 0.726 \times \text{distance} = 51 + 0.726 \times 103 \approx 126$ minutes. (Note: we should be cautious in our predictions with this model since we have not yet evaluated whether it is a well-fit model.) (e) $e_i = y_i - \hat {y}_i = 168 - 126 = 42$ minutes. A positive residual means that the model underestimates the travel time. (f) No, this calculation would require extrapolation.

7.21 The relationship between the variables is somewhat linear. However, there are two apparent outliers. The residuals do not show a random scatter around 0. A simple linear model may not be appropriate for these data, and we should investigate the two outliers.

7.23 (a) $\sqrt {R^2} = 0.849$. Since the trend is negative, R is also negative: $R = -0.849$. (b) $b_0 = 55.34$. $b_1 = -0.537$. (c) For a neighborhood with 0% reduced-fee lunch, we would expect 55.34% of the bike riders to wear helmets. (d) For every additional percentage point of reduced-fee lunches in a neighborhood, we would expect 0.537% fewer kids to be wearing helmets. (e) $\hat {y} = 40 \times (-0.537) + 55.34 = 33.86$, $e = 40 - \hat {y} = 6.14$. There are 6.14% more bike riders wearing helmets than predicted by the regression model in this neighborhood.

7.25 (a) The outlier is in the upper-left corner. Since it is horizontally far from the center of the data, it is a point with high leverage. Since the slope of the regression line would be very different if fit without this point, it is also an influential point. (b) The outlier is located in the lower-left corner. It is horizontally far from the rest of the data, so it is a high-leverage point. The line again would look notably different if the fit excluded this point, meaning the outlier is influential. (c) The outlier is in the upper-middle of the plot. Since it is near the horizontal center of the data, it is not a high-leverage point. This means it also will have little or no influence on the slope of the regression line.

7.27 (a) There is a negative, moderate-to-strong, somewhat linear relationship between percent of families who own their home and the percent of the population living in urban areas in 2010. There is one outlier: a state where 100% of the population is urban. The variability in the percent of homeownership also increases as we move from left to right in the plot. (b) The outlier is located in the bottom right corner, horizontally far from the center of the other points, so it is a point with high leverage. It is an influential point since excluding this point from the analysis would greatly affect the slope of the regression line.

7.29 (a) The relationship is positive, moderate-to-strong, and linear. There are a few outliers but no points that appear to be influential. (b) $\widehat{\text{weight}} = -105.0113 + 1.0176 \times \text{height}$. Slope: For each additional centimeter in height, the model predicts the average weight to be 1.0176 additional kilograms (about 2.2 pounds). Intercept: People who are 0 centimeters tall are expected to weigh -105.0113 kilograms. This is obviously not possible. Here, the y-intercept serves only to adjust the height of the line and is meaningless by itself. (c) H0: The true slope coefficient of height is zero ($\beta _1 = 0$). HA: The true slope coefficient of height is greater than zero ($\beta _1 > 0$). A two-sided test would also be acceptable for this application.
The p-value for the two-sided alternative hypothesis ($\beta _1 \ne 0$) is incredibly small, so the p-value for the one-sided hypothesis will be even smaller. That is, we reject H0. The data provide convincing evidence that height and weight are positively correlated. The true slope parameter is indeed greater than 0. (d) $R^2 = 0.72^2 = 0.52$. Approximately 52% of the variability in weight can be explained by the height of individuals.

7.31 (a) $H_0: \beta _1 = 0$. $H_A: \beta _1 > 0$. A two-sided test would also be acceptable for this application. The p-value, as reported in the table, is incredibly small. Thus, for a one-sided test, the p-value will also be incredibly small, and we reject $H_0$. The data provide convincing evidence that wives' and husbands' heights are positively correlated. (b) $\widehat{\text{height}}_W = 43.5755 + 0.2863 \times \text{height}_H$. (c) Slope: For each additional inch in husband's height, the average wife's height is expected to be an additional 0.2863 inches on average. Intercept: Men who are 0 inches tall are expected to have wives who are, on average, 43.5755 inches tall. The intercept here is meaningless, and it serves only to adjust the height of the line. (d) The slope is positive, so R must also be positive. $R = \sqrt {0.09} = 0.30$. (e) 63.2612. Since $R^2$ is low, the prediction based on this regression model is not very reliable. (f) No, we should avoid extrapolating.

7.33 (a) 25.75. (b) $H_0: \beta _1 = 0$. $H_A: \beta _1 \ne 0$. A one-sided test also may be reasonable for this application. T = 2.23, df = 23 $\rightarrow$ p-value between 0.02 and 0.05. So we reject H0. There is an association between gestational age and head circumference. We can also say that the association is positive.

Multiple and logistic regression

8.1 (a) $\widehat{\text{baby weight}} = 123.05 - 8.94 \times \text{smoke}$. (b) The estimated body weight of babies born to smoking mothers is 8.94 ounces lower than babies born to non-smoking mothers. Smoker: $123.05 - 8.94 \times 1 = 114.11$ ounces. Non-smoker: $123.05 - 8.94 \times 0 = 123.05$ ounces. (c) $H_0: \beta _1 = 0. H_A: \beta _1 \ne 0$. $T = -8.65$, and the p-value is approximately 0. Since the p-value is very small, we reject $H_0$. The data provide strong evidence that the true slope parameter is different than 0 and that there is an association between birth weight and smoking. Furthermore, having rejected $H_0$, we can conclude that smoking is associated with lower birth weights.

8.3 (a) $\widehat{\text{baby weight}} = -80.41 + 0.44 \times \text{gestation} - 3.33 \times \text{parity} - 0.01 \times \text{age} + 1.15 \times \text{height} + 0.05 \times \text{weight} - 8.40 \times \text{smoke}$. (b) gestation: The model predicts a 0.44 ounce increase in the birth weight of the baby for each additional day of pregnancy, all else held constant. age: The model predicts a 0.01 ounce decrease in the birth weight of the baby for each additional year in mother's age, all else held constant. (c) Parity might be correlated with one of the other variables in the model, which complicates model estimation. (d) $\widehat{\text{baby weight}} = 120.58$. e = 120 - 120.58 = -0.58. The model over-predicts this baby's birth weight. (e) $R^2 = 0.2504$. $R^2_{adj} = 0.2468$.

8.5 (a) (-0.32, 0.16). We are 95% confident that male students on average have GPAs 0.32 points lower to 0.16 points higher than females when controlling for the other variables in the model. (b) Yes, since the p-value is larger than 0.05 in all cases (not including the intercept).

8.7 (a) There is not a significant relationship between the age of the mother and the birth weight of the baby.
We should consider removing this variable from the model. (b) All other variables are statistically significant at the 5% level.

8.9 Based on the p-value alone, either gestation or smoke should be added to the model first. However, since the adjusted $R^2$ for the model with gestation is higher, it would be preferable to add gestation in the first step of the forward-selection algorithm. (Other explanations are possible. For instance, it would be reasonable to only use the adjusted $R^2$.)

8.11 Nearly normal residuals: The normal probability plot shows a nearly normal distribution of the residuals; however, there are some minor irregularities at the tails. With a data set so large, these would not be a concern. Constant variability of residuals: The scatterplot of the residuals versus the fitted values does not show any overall structure. However, values that have very low or very high fitted values appear to also have somewhat larger outliers. In addition, the residuals do appear to have constant variability between the two parity and smoking status groups, though these items are relatively minor. Independent residuals: The scatterplot of residuals versus the order of data collection shows a random scatter, suggesting that there are no apparent structures related to the order the data were collected. Linear relationships between the response variable and numerical explanatory variables: The residuals vs. height and weight of mother are randomly distributed around 0. The residuals vs. length of gestation plot also does not show any clear or strong remaining structures, with the possible exception of very short or long gestations. The rest of the residuals do appear to be randomly distributed around 0. All concerns raised here are relatively mild. There are some outliers, but there is so much data that the influence of such observations will be minor.

8.13 (a) There are a few potential outliers, e.g. on the left in the total length variable, but nothing that will be of serious concern in a data set this large. (b) When coefficient estimates are sensitive to which variables are included in the model, this typically indicates that some variables are collinear. For example, a possum's gender may be related to its head length, which would explain why the coefficient (and p-value) for sex male changed when we removed the head length variable. Likewise, a possum's skull width is likely to be related to its head length, probably even much more closely related than the head length was to gender.

8.15 (a) The logistic model relating $\hat {p}_i$ to the predictors may be written as $\log \left(\frac {\hat {p}_i}{1- \hat {p}_i}\right) = 33.5095 - 1.4207 \times \text{sex\_male}_i - 0.2787 \times \text{skull\_width}_i + 0.5687 \times \text{total\_length}_i$. Only total_length has a positive association with a possum being from Victoria. (b) $\hat {p} = 0.0062$. While the probability is very near zero, we have not run diagnostics on the model. We might also be a little skeptical that the model will remain accurate for a possum found in a US zoo. For example, perhaps the zoo selected a possum with specific characteristics but only looked in one region. On the other hand, it is encouraging that the possum was caught in the wild. (Answers regarding the reliability of the model probability will vary.)

Contributors
• David M Diez (Google/YouTube)
• Christopher D Barr (Harvard School of Public Health)
• Mine Çetinkaya-Rundel (Duke University)
Exercises: OpenStax

These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax.

1.2: Definitions of Statistics, Probability, and Key Terms

For each of the following eight exercises, identify: a. the population, b. the sample, c. the parameter, d. the statistic, e. the variable, and f. the data. Give examples where appropriate.

Q 1.2.1 A fitness center is interested in the mean amount of time a client exercises in the center each week.

Q 1.2.2 Ski resorts are interested in the mean age that children take their first ski and snowboard lessons. They need this information to plan their ski classes optimally.

S 1.2.2 1. all children who take ski or snowboard lessons 2. a group of these children 3. the population mean age of children who take their first snowboard lesson 4. the sample mean age of children who take their first snowboard lesson 5. $X =$ the age of one child who takes his or her first ski or snowboard lesson 6. values for $X$, such as 3, 7, and so on

Q 1.2.3 A cardiologist is interested in the mean recovery period of her patients who have had heart attacks.

Q 1.2.4 Insurance companies are interested in the mean health costs each year of their clients, so that they can determine the costs of health insurance.

S 1.2.5 1. the clients of the insurance companies 2. a group of the clients 3. the mean health costs of the clients 4. the mean health costs of the sample 5. $X =$ the health costs of one client 6. values for $X$, such as 34, 9, 82, and so on

Q 1.2.6 A politician is interested in the proportion of voters in his district who think he is doing a good job.

Q 1.2.7 A marriage counselor is interested in the proportion of clients she counsels who stay married.

S 1.2.7 1. all the clients of this counselor 2. a group of clients of this marriage counselor 3. the proportion of all her clients who stay married 4. the proportion of the sample of the counselor's clients who stay married 5. $X =$ the number of couples who stay married 6. yes, no

Q 1.2.8 Political pollsters may be interested in the proportion of people who will vote for a particular cause.

Q 1.2.9 A marketing company is interested in the proportion of people who will buy a particular product.

S 1.2.9 1. all people (maybe in a certain geographic area, such as the United States) 2. a group of the people 3. the proportion of all people who will buy the product 4. the proportion of the sample who will buy the product 5. $X =$ the number of people who will buy it 6. buy, not buy

Use the following information to answer the next three exercises: A Lake Tahoe Community College instructor is interested in the mean number of days Lake Tahoe Community College math students are absent from class during a quarter.

Q 1.2.10 What is the population she is interested in? 1. all Lake Tahoe Community College students 2. all Lake Tahoe Community College English students 3. all Lake Tahoe Community College students in her classes 4. all Lake Tahoe Community College math students

Q 1.2.11 Consider the following: $X =$ number of days a Lake Tahoe Community College math student is absent. In this case, $X$ is an example of a: 1. variable. 2. population. 3. statistic. 4. data.

Answer a

Q 1.2.12 The instructor's sample produces a mean number of days absent of 3.5 days. This value is an example of a: 1. parameter. 2. data. 3. statistic. 4. variable.
1.3: Data, Sampling, and Variation in Data and Sampling Practice Exercise 1.3.11 “Number of times per week” is what type of data? 1. qualitative 2. quantitative discrete 3. quantitative continuous Use the following information to answer the next four exercises: A study was done to determine the age, number of times per week, and the duration (amount of time) of residents using a local park in San Antonio, Texas. The first house in the neighborhood around the park was selected randomly, and then the resident of every eighth house in the neighborhood around the park was interviewed. Exercise 1.3.12 The sampling method was 1. simple random 2. systematic 3. stratified 4. cluster Answer b Exercise 1.3.13 “Duration (amount of time)” is what type of data? 1. qualitative 2. quantitative discrete 3. quantitative continuous Exercise 1.3.14 The colors of the houses around the park are what kind of data? 1. qualitative 2. quantitative discrete 3. quantitative continuous Answer a Exercise 1.3.15 The population is ______________________ Exercise 1.3.16 Table contains the total number of deaths worldwide as a result of earthquakes from 2000 to 2012. Year Total Number of Deaths 2000 231 2001 21,357 2002 11,685 2003 33,819 2004 228,802 2005 88,003 2006 6,605 2007 712 2008 88,011 2009 1,790 2010 320,120 2011 21,953 2012 768 Total 823,856 Use Table to answer the following questions. 1. What is the proportion of deaths between 2007 and 2012? 2. What percent of deaths occurred before 2001? 3. What is the percent of deaths that occurred in 2003 or after 2010? 4. What is the fraction of deaths that happened before 2012? 5. What kind of data is the number of deaths? 6. Earthquakes are quantified according to the amount of energy they produce (examples are 2.1, 5.0, 6.7). What type of data is that? 7. What contributed to the large number of deaths in 2010? In 2004? Explain. Answer 1. 0.5242 2. 0.03% 3. 6.86% 4. $\frac{823,088}{823,856}$ 5. quantitative discrete 6. quantitative continuous 7. In both years, underwater earthquakes produced massive tsunamis. For the following four exercises, determine the type of sampling used (simple random, stratified, systematic, cluster, or convenience). Exercise 1.3.17 A group of test subjects is divided into twelve groups; then four of the groups are chosen at random. Exercise 1.3.18 A market researcher polls every tenth person who walks into a store. Answer systematic Exercise 1.3.19 The first 50 people who walk into a sporting event are polled on their television preferences. Exercise 1.3.20 A computer generates 100 random numbers, and 100 people whose names correspond with the numbers on the list are chosen. Answer simple random Use the following information to answer the next seven exercises: Studies are often done by pharmaceutical companies to determine the effectiveness of a treatment program. Suppose that a new AIDS antibody drug is currently under study. It is given to patients once the AIDS symptoms have revealed themselves. Of interest is the average (mean) length of time in months patients live once starting the treatment. Two researchers each follow a different set of 40 AIDS patients from the start of treatment until their deaths. The following data (in months) are collected. 
3; 4; 11; 15; 16; 17; 22; 44; 37; 16; 14; 24; 25; 15; 26; 27; 33; 29; 35; 44; 13; 21; 22; 10; 12; 8; 40; 32; 26; 27; 31; 34; 29; 17; 8; 24; 18; 47; 33; 34 3; 14; 11; 5; 16; 17; 28; 41; 31; 18; 14; 14; 26; 25; 21; 22; 31; 2; 35; 44; 23; 21; 21; 16; 12; 18; 41; 22; 16; 25; 33; 34; 29; 13; 18; 24; 23; 42; 33; 29 Exercise 1.3.21 Complete the tables using the data provided: Researcher A Survival Length (in months) Frequency Relative Frequency Cumulative Relative Frequency 0.5–6.5 6.5–12.5 12.5–18.5 18.5–24.5 24.5–30.5 30.5–36.5 36.5–42.5 42.5–48.5 Researcher B Survival Length (in months) Frequency Relative Frequency Cumulative Relative Frequency 0.5–6.5 6.5–12.5 12.5–18.5 18.5–24.5 24.5–30.5 30.5–36.5 36.5-45.5 Exercise 1.3.22 Determine what the key term data refers to in the above example for Researcher A. Answer values for $X$, such as 3, 4, 11, and so on Exercise 1.3.23 List two reasons why the data may differ. Exercise 1.3.24 Can you tell if one researcher is correct and the other one is incorrect? Why? Answer No, we do not have enough information to make such a claim. Exercise 1.3.25 Would you expect the data to be identical? Why or why not? Exercise 1.3.26 How might the researchers gather random data? Answer Take a simple random sample from each group. One way is by assigning a number to each patient and using a random number generator to randomly select patients. Exercise 1.3.27 Suppose that the first researcher conducted his survey by randomly choosing one state in the nation and then randomly picking 40 patients from that state. What sampling method would that researcher have used? Exercise 1.3.28 Suppose that the second researcher conducted his survey by choosing 40 patients he knew. What sampling method would that researcher have used? What concerns would you have about this data set, based upon the data collection method? Answer This would be convenience sampling and is not random. Use the following data to answer the next five exercises: Two researchers are gathering data on hours of video games played by school-aged children and young adults. They each randomly sample different groups of 150 students from the same school. They collect the following data. Researcher A Hours Played per Week Frequency Relative Frequency Cumulative Relative Frequency 0–2 26 0.17 0.17 2–4 30 0.20 0.37 4–6 49 0.33 0.70 6–8 25 0.17 0.87 8–10 12 0.08 0.95 10–12 8 0.05 1 Researcher B Hours Played per Week Frequency Relative Frequency Cumulative Relative Frequency 0–2 48 0.32 0.32 2–4 51 0.34 0.66 4–6 24 0.16 0.82 6–8 12 0.08 0.90 8–10 11 0.07 0.97 10–12 4 0.03 1 Exercise 1.3.29 Give a reason why the data may differ. Exercise 1.3.30 Would the sample size be large enough if the population is the students in the school? Answer Yes, the sample size of 150 would be large enough to reflect a population of one school. Exercise 1.3.31 Would the sample size be large enough if the population is school-aged children and young adults in the United States? Exercise 1.3.32 Researcher A concludes that most students play video games between four and six hours each week. Researcher B concludes that most students play video games between two and four hours each week. Who is correct? Answer Even though the specific data support each researcher’s conclusions, the different results suggest that more data need to be collected before the researchers can reach a conclusion. Exercise 1.3.33 As part of a way to reward students for participating in the survey, the researchers gave each student a gift card to a video game store. 
Would this affect the data if students knew about the award before the study? Use the following data to answer the next five exercises: A pair of studies was performed to measure the effectiveness of a new software program designed to help stroke patients regain their problem-solving skills. Patients were asked to use the software program twice a day, once in the morning and once in the evening. The studies observed 200 stroke patients recovering over a period of several weeks. The first study collected the data in the first Table. The second study collected the data in the second Table. Group Showed improvement No improvement Deterioration Used program 142 43 15 Did not use program 72 110 18 Group Showed improvement No improvement Deterioration Used program 105 74 19 Did not use program 89 99 12 Exercise 1.3.34 Given what you know, which study is correct? Answer There is not enough information given to judge if either one is correct or incorrect. Exercise 1.3.35 The first study was performed by the company that designed the software program. The second study was performed by the American Medical Association. Which study is more reliable? Exercise 1.3.36 Both groups that performed the study concluded that the software works. Is this accurate? Answer The software program seems to work because the second study shows that more patients improve while using the software than not. Even though the difference is not as large as that in the first study, the results from the second study are likely more reliable and still show improvement. Exercise 1.3.37 The company takes the two studies as proof that their software causes mental improvement in stroke patients. Is this a fair statement? Exercise 1.3.38 Patients who used the software were also a part of an exercise program whereas patients who did not use the software were not. Does this change the validity of the conclusions from Exercise? Answer Yes, because we cannot tell if the improvement was due to the software or the exercise; the data is confounded, and a reliable conclusion cannot be drawn. New studies should be performed. Exercise 1.3.39 Is a sample size of 1,000 a reliable measure for a population of 5,000? Exercise 1.3.40 Is a sample of 500 volunteers a reliable measure for a population of 2,500? Answer No, even though the sample is large enough, the fact that the sample consists of volunteers makes it a self-selected sample, which is not reliable. Exercise 1.3.41 A question on a survey reads: "Do you prefer the delicious taste of Brand X or the taste of Brand Y?" Is this a fair question? Exercise 1.3.42 Is a sample size of two representative of a population of five? Answer No, even though the sample is a large portion of the population, two responses are not enough to justify any conclusions. Because the population is so small, it would be better to include everyone in the population to get the most accurate data. Exercise 1.3.43 Is it possible for two experiments to be well run with similar sample sizes to get different data? Bringing It Together Exercise 1.3.44 Seven hundred and seventy-one distance learning students at Long Beach City College responded to surveys in the 2010-11 academic year. Highlights of the summary report are listed below. 
LBCC Distance Learning Survey Results Have computer at home 96% Unable to come to campus for classes 65% Age 41 or over 24% Would like LBCC to offer more DL courses 95% Took DL classes due to a disability 17% Live at least 16 miles from campus 13% Took DL courses to fulfill transfer requirements 71% 1. What percent of the students surveyed do not have a computer at home? 2. About how many students in the survey live at least 16 miles from campus? 3. If the same survey were done at Great Basin College in Elko, Nevada, do you think the percentages would be the same? Why? Exercise 1.3.45 Several online textbook retailers advertise that they have lower prices than on-campus bookstores. However, an important factor is whether the Internet retailers actually have the textbooks that students need in stock. Students need to be able to get textbooks promptly at the beginning of the college term. If the book is not available, then a student would not be able to get the textbook at all, or might get a delayed delivery if the book is back ordered. A college newspaper reporter is investigating textbook availability at online retailers. He decides to investigate one textbook for each of the following seven subjects: calculus, biology, chemistry, physics, statistics, geology, and general engineering. He consults textbook industry sales data and selects the most popular nationally used textbook in each of these subjects. He visits websites for a random sample of major online textbook sellers and looks up each of these seven textbooks to see if they are available in stock for quick delivery through these retailers. Based on his investigation, he writes an article in which he draws conclusions about the overall availability of all college textbooks through online textbook retailers. Write an analysis of his study that addresses the following issues: Is his sample representative of the population of all college textbooks? Explain why or why not. Describe some possible sources of bias in this study, and how it might affect the results of the study. Give some suggestions about what could be done to improve the study. Answer Answers will vary. Sample answer: The sample is not representative of the population of all college textbooks. Two reasons why it is not representative are that he only sampled seven subjects and he only investigated one textbook in each subject. There are several possible sources of bias in the study. The seven subjects that he investigated are all in mathematics and the sciences; there are many subjects in the humanities, social sciences, and other subject areas, (for example: literature, art, history, psychology, sociology, business) that he did not investigate at all. It may be that different subject areas exhibit different patterns of textbook availability, but his sample would not detect such results. He also looked only at the most popular textbook in each of the subjects he investigated. The availability of the most popular textbooks may differ from the availability of other textbooks in one of two ways: • the most popular textbooks may be more readily available online, because more new copies are printed, and more students nationwide are selling back their used copies OR • the most popular textbooks may be harder to find available online, because more student demand exhausts the supply more quickly. In reality, many college students do not use the most popular textbook in their subject, and this study gives no useful information about the situation for those less popular textbooks. 
He could improve this study by: • expanding the selection of subjects he investigates so that it is more representative of all subjects studied by college students, and • expanding the selection of textbooks he investigates within each subject to include a mixed representation of both the most popular and less popular textbooks. For the following exercises, identify the type of data that would be used to describe a response (quantitative discrete, quantitative continuous, or qualitative), and give an example of the data. Q 1.3.1 number of tickets sold to a concert S 1.3.1 quantitative discrete, 150 Q 1.3.2 percent of body fat Q 1.3.3 favorite baseball team S 1.3.3 qualitative, Oakland A’s Q 1.3.4 time in line to buy groceries Q 1.3.5 number of students enrolled at Evergreen Valley College S 1.3.5 quantitative discrete, 11,234 students Q 1.3.6 most-watched television show Q 1.3.7 brand of toothpaste S 1.3.7 qualitative, Crest Q 1.3.8 distance to the closest movie theater Q 1.3.9 age of executives in Fortune 500 companies S 1.3.9 quantitative continuous, 47.3 years Q 1.3.10 number of competing computer spreadsheet software packages Use the following information to answer the next two exercises: A study was done to determine the age, number of times per week, and the duration (amount of time) of resident use of a local park in San Jose. The first house in the neighborhood around the park was selected randomly and then every 8th house in the neighborhood around the park was interviewed. Q 1.3.11 “Number of times per week” is what type of data? 1. qualitative 2. quantitative discrete 3. quantitative continuous b Q 1.3.12 “Duration (amount of time)” is what type of data? 1. qualitative 2. quantitative discrete 3. quantitative continuous Q 1.3.13 Airline companies are interested in the consistency of the number of babies on each flight, so that they have adequate safety equipment. Suppose an airline conducts a survey. Over Thanksgiving weekend, it surveys six flights from Boston to Salt Lake City to determine the number of babies on the flights. It determines the amount of safety equipment needed by the result of that study. 1. Using complete sentences, list three things wrong with the way the survey was conducted. 2. Using complete sentences, list three ways that you would improve the survey if it were to be repeated. S 1.3.13 1. The survey was conducted using six similar flights. The survey would not be a true representation of the entire population of air travelers. Conducting the survey on a holiday weekend will not produce representative results. 2. Conduct the survey during different times of the year. Conduct the survey using flights to and from various locations. Conduct the survey on different days of the week. Q 1.3.14 Suppose you want to determine the mean number of students per statistics class in your state. Describe a possible sampling method in three to five complete sentences. Make the description detailed. Q 1.3.15 Suppose you want to determine the mean number of cans of soda drunk each month by students in their twenties at your school. Describe a possible sampling method in three to five complete sentences. Make the description detailed. S 1.3.15 Answers will vary. Sample Answer: You could use a systematic sampling method. Stop the tenth person as they leave one of the buildings on campus at 9:50 in the morning. Then stop the tenth person as they leave a different building on campus at 1:50 in the afternoon. 
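S 1.3.15 and the "every eighth house" setup both describe systematic sampling: choose a random starting point, then take every k-th unit from an ordered frame. A minimal sketch of that procedure follows, using a hypothetical frame of house labels (the frame, size, and seed are invented purely for illustration).

```python
# Minimal sketch of systematic sampling: random start, then every k-th unit.
# The frame of 240 house labels is hypothetical, used only for illustration.
import random

random.seed(2)

houses = [f"house_{i}" for i in range(1, 241)]  # hypothetical ordered frame
k = 8                                           # sampling interval ("every eighth house")

start = random.randint(0, k - 1)                # random starting position
sample = houses[start::k]                       # every eighth house thereafter

print(f"start index: {start}, sample size: {len(sample)}")
print(sample[:5])
```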
Q 1.3.16 List some practical difficulties involved in getting accurate results from a telephone survey. Q 1.3.17 List some practical difficulties involved in getting accurate results from a mailed survey. S 1.3.17 Answers will vary. Sample Answer: Many people will not respond to mail surveys. If they do respond to the surveys, you can’t be sure who is responding. In addition, mailing lists can be incomplete. Q 1.3.18 With your classmates, brainstorm some ways you could overcome these problems if you needed to conduct a phone or mail survey. Q 1.3.19 The instructor takes her sample by gathering data on five randomly selected students from each Lake Tahoe Community College math class. The type of sampling she used is 1. cluster sampling 2. stratified sampling 3. simple random sampling 4. convenience sampling b Q 1.3.20 A study was done to determine the age, number of times per week, and the duration (amount of time) of residents using a local park in San Jose. The first house in the neighborhood around the park was selected randomly and then every eighth house in the neighborhood around the park was interviewed. The sampling method was: 1. simple random 2. systematic 3. stratified 4. cluster Q 1.3.21 Name the sampling method used in each of the following situations: 1. A woman in the airport is handing out questionnaires to travelers asking them to evaluate the airport’s service. She does not ask travelers who are hurrying through the airport with their hands full of luggage, but instead asks all travelers who are sitting near gates and not taking naps while they wait. 2. A teacher wants to know if her students are doing homework, so she randomly selects rows two and five and then calls on all students in row two and all students in row five to present the solutions to homework problems to the class. 3. The marketing manager for an electronics chain store wants information about the ages of its customers. Over the next two weeks, at each store location, 100 randomly selected customers are given questionnaires to fill out asking for information about age, as well as about other variables of interest. 4. The librarian at a public library wants to determine what proportion of the library users are children. The librarian has a tally sheet on which she marks whether books are checked out by an adult or a child. She records this data for every fourth patron who checks out books. 5. A political party wants to know the reaction of voters to a debate between the candidates. The day after the debate, the party’s polling staff calls 1,200 randomly selected phone numbers. If a registered voter answers the phone or is available to come to the phone, that registered voter is asked whom he or she intends to vote for and whether the debate changed his or her opinion of the candidates. S 1.3.21 1. convenience 2. cluster 3. stratified 4. systematic 5. simple random A “random survey” was conducted of 3,274 people of the “microprocessor generation” (people born since 1971, the year the microprocessor was invented). It was reported that 48% of those individuals surveyed stated that if they had $2,000 to spend, they would use it for computer equipment. Also, 66% of those surveyed considered themselves relatively savvy computer users. 1. Do you consider the sample size large enough for a study of this type? Why or why not? 2. Based on your “gut feeling,” do you believe the percents accurately reflect the U.S. population for those individuals born since 1971? 
If not, do you think the percents of the population are actually higher or lower than the sample statistics? Why? Additional information: The survey, reported by Intel Corporation, was filled out by individuals who visited the Los Angeles Convention Center to see the Smithsonian Institute's road show called “America’s Smithsonian.” 3. With this additional information, do you feel that all demographic and ethnic groups were equally represented at the event? Why or why not? 4. With the additional information, comment on how accurately you think the sample statistics reflect the population parameters. Q 1.3.23 The Gallup-Healthways Well-Being Index is a survey that follows trends of U.S. residents on a regular basis. There are six areas of health and wellness covered in the survey: Life Evaluation, Emotional Health, Physical Health, Healthy Behavior, Work Environment, and Basic Access. Some of the questions used to measure the Index are listed below. Identify the type of data obtained from each question used in this survey: qualitative, quantitative discrete, or quantitative continuous. 1. Do you have any health problems that prevent you from doing any of the things people your age can normally do? 2. During the past 30 days, for about how many days did poor health keep you from doing your usual activities? 3. In the last seven days, on how many days did you exercise for 30 minutes or more? 4. Do you have health insurance coverage? S 1.3.23 1. qualitative 2. quantitative discrete 3. quantitative discrete 4. qualitative Q 1.3.24 In advance of the 1936 Presidential Election, a magazine titled Literary Digest released the results of an opinion poll predicting that the republican candidate Alf Landon would win by a large margin. The magazine sent post cards to approximately 10,000,000 prospective voters. These prospective voters were selected from the subscription list of the magazine, from automobile registration lists, from phone lists, and from club membership lists. Approximately 2,300,000 people returned the postcards. 1. Think about the state of the United States in 1936. Explain why a sample chosen from magazine subscription lists, automobile registration lists, phone books, and club membership lists was not representative of the population of the United States at that time. 2. What effect does the low response rate have on the reliability of the sample? 3. Are these problems examples of sampling error or nonsampling error? 4. During the same year, George Gallup conducted his own poll of 30,000 prospective voters. His researchers used a method they called "quota sampling" to obtain survey answers from specific subsets of the population. Quota sampling is an example of which sampling method described in this module? Q 1.3.25 Crime-related and demographic statistics for 47 US states in 1960 were collected from government agencies, including the FBI's Uniform Crime Report. One analysis of this data found a strong connection between education and crime indicating that higher levels of education in a community correspond to higher crime rates. Which of the potential problems with samples discussed in [link] could explain this connection? S 1.3.26 Causality: The fact that two variables are related does not guarantee that one variable is influencing the other. We cannot assume that crime rate impacts education level or that education level impacts crime rate. Confounding: There are many factors that define a community other than education level and crime rate. 
Communities with high crime rates and high education levels may have other lurking variables that distinguish them from communities with lower crime rates and lower education levels. Because we cannot isolate these variables of interest, we cannot draw valid conclusions about the connection between education and crime. Possible lurking variables include police expenditures, unemployment levels, region, average age, and size.
Q 1.3.27 YouPolls is a website that allows anyone to create and respond to polls. One question posted April 15 asks: “Do you feel happy paying your taxes when members of the Obama administration are allowed to ignore their tax liabilities?”1 As of April 25, 11 people responded to this question. Each participant answered “NO!” Which of the potential problems with samples discussed in this module could explain this connection?
Q 1.3.28 A scholarly article about response rates begins with the following quote: “Declining contact and cooperation rates in random digit dial (RDD) national telephone surveys raise serious concerns about the validity of estimates drawn from such research.”2 The Pew Research Center for People and the Press admits: “The percentage of people we interview – out of all we try to interview – has been declining over the past decade or more.”3 1. What are some reasons for the decline in response rate over the past decade? 2. Explain why researchers are concerned with the impact of the declining response rate on public opinion polls.
S 1.3.28 1. Possible reasons: increased use of caller ID, decreased use of landlines, increased use of private numbers, voice mail, privacy managers, hectic nature of personal schedules, decreased willingness to be interviewed 2. When a large number of people refuse to participate, then the sample may not have the same characteristics as the population. Perhaps the majority of people willing to participate are doing so because they feel strongly about the subject of the survey.
1.4: Frequency, Frequency Tables, and Levels of Measurement
Q 1.4.1 Fifty part-time students were asked how many courses they were taking this term. The (incomplete) results are shown below: Part-time Student Course Loads # of Courses Frequency Relative Frequency Cumulative Relative Frequency 1 30 0.6 2 15 3 1. Fill in the blanks in Table. 2. What percent of students take exactly two courses? 3. What percent of students take one or two courses?
Q 1.4.2 Sixty adults with gum disease were asked the number of times per week they used to floss before their diagnosis. The (incomplete) results are shown in Table. Flossing Frequency for Adults with Gum Disease # Flossing per Week Frequency Relative Frequency Cumulative Relative Freq. 0 27 0.4500 1 18 3 0.9333 6 3 0.0500 7 1 0.0167 1. Fill in the blanks in Table. 2. What percent of adults flossed six times per week? 3. What percent flossed at most three times per week?
S 1.4.2 1. # Flossing per Week Frequency Relative Frequency Cumulative Relative Frequency 0 27 0.4500 0.4500 1 18 0.3000 0.7500 3 11 0.1833 0.9333 6 3 0.0500 0.9833 7 1 0.0167 1 2. 5.00% 3. 93.33%
Q 1.4.3 Nineteen immigrants to the U.S. were asked how many years, to the nearest year, they have lived in the U.S. The data are as follows: 2; 5; 7; 2; 2; 10; 20; 15; 0; 7; 0; 20; 5; 12; 15; 12; 4; 5; 10. Table was produced.
Frequency of Immigrant Survey Responses Data Frequency Relative Frequency Cumulative Relative Frequency 0 2 $\frac{2}{19}$ 0.1053 2 3 $\frac{3}{19}$ 0.2632 4 1 $\frac{1}{19}$ 0.3158 5 3 $\frac{3}{19}$ 0.4737 7 2 $\frac{2}{19}$ 0.5789 10 2 $\frac{2}{19}$ 0.6842 12 2 $\frac{2}{19}$ 0.7895 15 1 $\frac{1}{19}$ 0.8421 20 1 $\frac{1}{19}$ 1.0000 1. Fix the errors in Table. Also, explain how someone might have arrived at the incorrect number(s). 2. Explain what is wrong with this statement: “47 percent of the people surveyed have lived in the U.S. for 5 years.” 3. Fix the statement in part 2 to make it correct. 4. What fraction of the people surveyed have lived in the U.S. five or seven years? 5. What fraction of the people surveyed have lived in the U.S. at most 12 years? 6. What fraction of the people surveyed have lived in the U.S. fewer than 12 years? 7. What fraction of the people surveyed have lived in the U.S. from five to 20 years, inclusive?
Q 1.4.4 How much time does it take to travel to work? Table shows the mean commute time by state for workers at least 16 years old who are not working at home. Find the mean travel time, and round off the answer properly. 24.0 24.3 25.9 18.9 27.5 17.9 21.8 20.9 16.7 27.3 18.2 24.7 20.0 22.6 23.9 18.0 31.4 22.3 24.0 25.5 24.7 24.6 28.1 24.9 22.6 23.6 23.4 25.7 24.8 25.5 21.2 25.7 23.1 23.0 23.9 26.0 16.3 23.1 21.4 21.5 27.0 27.0 18.6 31.7 23.3 30.1 22.9 23.3 21.7 18.6
S 1.4.4 The sum of the travel times is 1,173.1. Divide the sum by 50 to calculate the mean value: 23.462. Because each state’s travel time was measured to the nearest tenth, round this calculation to the nearest hundredth: 23.46.
Q 1.4.5 Forbes magazine published data on the best small firms in 2012. These were firms that had been publicly traded for at least a year, had a stock price of at least $5 per share, and had reported annual revenue between $5 million and $1 billion. Table shows the ages of the chief executive officers for the first 60 ranked firms. Age Frequency Relative Frequency Cumulative Relative Frequency 40–44 3 45–49 11 50–54 13 55–59 16 60–64 10 65–69 6 70–74 1 1. What is the frequency for CEO ages between 54 and 65? 2. What percentage of CEOs are 65 years or older? 3. What is the relative frequency of ages under 50? 4. What is the cumulative relative frequency for CEOs younger than 55? 5. Which graph shows the relative frequency and which shows the cumulative relative frequency?
Use the following information to answer the next two exercises: Table contains data on hurricanes that have made direct hits on the U.S. between 1851 and 2004. A hurricane is given a strength category rating based on the minimum wind speed generated by the storm. Frequency of Hurricane Direct Hits Category Number of Direct Hits Relative Frequency Cumulative Frequency Total = 273 1 109 0.3993 0.3993 2 72 0.2637 0.6630 3 71 0.2601 _____ 4 18 _____ 0.9890 5 3 0.0110 1.0000
Q 1.4.6 What is the relative frequency of direct hits that were category 4 hurricanes? 1. 0.0768 2. 0.0659 3. 0.2601 4. Not enough information to calculate b
Q 1.4.7 What is the relative frequency of direct hits that were AT MOST a category 3 storm? 1. 0.3480 2. 0.9231 3. 0.2601 4. 0.3370
In each session, performance was measured on a variety of tasks including a driving simulation. Use key terms from this module to describe the design of this experiment. S 1.5.1 Explanatory variable: amount of sleep Response variable: performance measured in assigned tasks Treatments: normal sleep and 27 hours of total sleep deprivation Experimental Units: 19 professional drivers Lurking variables: none – all drivers participated in both treatments Random assignment: treatments were assigned in random order; this eliminated the effect of any “learning” that may take place during the first experimental session Control/Placebo: completing the experimental session under normal sleep conditions Blinding: researchers evaluating subjects’ performance must not know which treatment is being applied at the time Q 1.5.2 An advertisement for Acme Investments displays the two graphs in Figures to show the value of Acme’s product in comparison with the Other Guy’s product. Describe the potentially misleading visual effect of these comparison graphs. How can this be corrected? As the graphs show, Acme consistently outperforms the Other Guys! Q 1.5.3 The graph in Figure shows the number of complaints for six different airlines as reported to the US Department of Transportation in February 2013. Alaska, Pinnacle, and Airtran Airlines have far fewer complaints reported than American, Delta, and United. Can we conclude that American, Delta, and United are the worst airline carriers since they have the most complaints? S 1.5.3 You cannot assume that the numbers of complaints reflect the quality of the airlines. The airlines shown with the greatest number of complaints are the ones with the most passengers. You must consider the appropriateness of methods for presenting data; in this case displaying totals is misleading.
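Several of the Section 1.4 exercises (Q 1.4.1, Q 1.4.2, Q 1.4.6, Q 1.4.7) come down to dividing each frequency by the total and accumulating the results. A short sketch using the hurricane direct-hit counts quoted before Q 1.4.6 shows one way to fill in the blank cells and check the multiple-choice answers.

```python
# Relative and cumulative relative frequencies for the hurricane direct-hit
# counts quoted in the table before Q 1.4.6 (total = 273 direct hits).
counts = {1: 109, 2: 72, 3: 71, 4: 18, 5: 3}
total = sum(counts.values())  # 273

cumulative = 0.0
for category in sorted(counts):
    rel = counts[category] / total
    cumulative += rel
    print(f"Category {category}: rel. freq. = {rel:.4f}, cum. rel. freq. = {cumulative:.4f}")

# Q 1.4.6: category 4 relative frequency = 18/273, about 0.0659.
# Q 1.4.7: "at most category 3" = cumulative relative frequency through category 3, about 0.9231.
```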
2.2: Stem-and-Leaf Graphs (Stemplots), Line Graphs, and Bar Graphs Q 2.2.1 Student grades on a chemistry exam were: 77, 78, 76, 81, 86, 51, 79, 82, 84, 99 1. Construct a stem-and-leaf plot of the data. 2. Are there any potential outliers? If so, which scores are they? Why do you consider them outliers? Q 2.2.2 The table below contains the 2010 obesity rates in U.S. states and Washington, DC. State Percent (%) State Percent (%) State Percent (%) Alabama 32.2 Kentucky 31.3 North Dakota 27.2 Alaska 24.5 Louisiana 31.0 Ohio 29.2 Arizona 24.3 Maine 26.8 Oklahoma 30.4 Arkansas 30.1 Maryland 27.1 Oregon 26.8 California 24.0 Massachusetts 23.0 Pennsylvania 28.6 Colorado 21.0 Michigan 30.9 Rhode Island 25.5 Connecticut 22.5 Minnesota 24.8 South Carolina 31.5 Delaware 28.0 Mississippi 34.0 South Dakota 27.3 Washington, DC 22.2 Missouri 30.5 Tennessee 30.8 Florida 26.6 Montana 23.0 Texas 31.0 Georgia 29.6 Nebraska 26.9 Utah 22.5 Hawaii 22.7 Nevada 22.4 Vermont 23.2 Idaho 26.5 New Hampshire 25.0 Virginia 26.0 Illinois 28.2 New Jersey 23.8 Washington 25.5 Indiana 29.6 New Mexico 25.1 West Virginia 32.5 Iowa 28.4 New York 23.9 Wisconsin 26.3 Kansas 29.4 North Carolina 27.8 Wyoming 25.1 1. Use a random number generator to randomly pick eight states. Construct a bar graph of the obesity rates of those eight states. 2. Construct a bar graph for all the states beginning with the letter "A." 3. Construct a bar graph for all the states beginning with the letter "M." S 2.2.2 1. Example solution for using the random number generator for the TI-84+ to generate a simple random sample of 8 states. Instructions are as follows. • Number the entries in the table 1–51 (Includes Washington, DC; Numbered vertically) • Press MATH • Arrow over to PRB • Press 5:randInt( • Enter 51,1,8) Eight numbers are generated (use the right arrow key to scroll through the numbers). The numbers correspond to the numbered states (for this example: {47 21 9 23 51 13 25 4}. If any numbers are repeated, generate a different number by using 5:randInt(51,1)). Here, the states (and Washington DC) are {Arkansas, Washington DC, Idaho, Maryland, Michigan, Mississippi, Virginia, Wyoming}. Corresponding percents are {30.1, 22.2, 26.5, 27.1, 30.9, 34.0, 26.0, 25.1}. For each of the following data sets, create a stem plot and identify any outliers. Exercise 2.2.7 The miles per gallon rating for 30 cars are shown below (lowest to highest). 19, 19, 19, 20, 21, 21, 25, 25, 25, 26, 26, 28, 29, 31, 31, 32, 32, 33, 34, 35, 36, 37, 37, 38, 38, 38, 38, 41, 43, 43 Answer Stem Leaf 1 9 9 9 2 0 1 1 5 5 5 6 6 8 9 3 1 1 2 2 3 4 5 6 7 7 8 8 8 8 4 1 3 3 The height in feet of 25 trees is shown below (lowest to highest). 25, 27, 33, 34, 34, 34, 35, 37, 37, 38, 39, 39, 39, 40, 41, 45, 46, 47, 49, 50, 50, 53, 53, 54, 54 The data are the prices of different laptops at an electronics store. Round each value to the nearest ten. 249, 249, 260, 265, 265, 280, 299, 299, 309, 319, 325, 326, 350, 350, 350, 365, 369, 389, 409, 459, 489, 559, 569, 570, 610 Answer Stem Leaf 2 5 5 6 7 7 8 3 0 0 1 2 3 3 5 5 5 7 7 9 4 1 6 9 5 6 7 7 6 1 The data are daily high temperatures in a town for one month. 61, 61, 62, 64, 66, 67, 67, 67, 68, 69, 70, 70, 70, 71, 71, 72, 74, 74, 74, 75, 75, 75, 76, 76, 77, 78, 78, 79, 79, 95 For the next three exercises, use the data to construct a line graph. Exercise 2.2.8 In a survey, 40 people were asked how many times they visited a store before making a major purchase. The results are shown in the Table below. 
Number of times in store Frequency 1 4 2 10 3 16 4 6 5 4 Answer Exercise 2.2.9 In a survey, several people were asked how many years it has been since they purchased a mattress. The results are shown in Table. Years since last purchase Frequency 0 2 1 8 2 13 3 22 4 16 5 9 Exercise 2.2.10 Several children were asked how many TV shows they watch each day. The results of the survey are shown in the Table below. Number of TV Shows Frequency 0 12 1 18 2 36 3 7 4 2 Answer Exercise 2.2.11 The students in Ms. Ramirez’s math class have birthdays in each of the four seasons. Table shows the four seasons, the number of students who have birthdays in each season, and the percentage (%) of students in each group. Construct a bar graph showing the number of students. Seasons Number of students Proportion of population Spring 8 24% Summer 9 26% Autumn 11 32% Winter 6 18% Using the data from Mrs. Ramirez’s math class supplied in the table above, construct a bar graph showing the percentages. Answer Exercise 2.2.12 David County has six high schools. Each school sent students to participate in a county-wide science competition. Table shows the percentage breakdown of competitors from each school, and the percentage of the entire student population of the county that goes to each school. Construct a bar graph that shows the population percentage of competitors from each school. High School Science competition population Overall student population Alabaster 28.9% 8.6% Concordia 7.6% 23.2% Genoa 12.1% 15.0% Mocksville 18.5% 14.3% Tynneson 24.2% 10.1% West End 8.7% 28.8% Use the data from the David County science competition supplied in Exercise. Construct a bar graph that shows the county-wide population percentage of students at each school. Answer 2.3: Histograms, Frequency, Polygons, and Time Series Graphs Q 2.3.1 Suppose that three book publishers were interested in the number of fiction paperbacks adult consumers purchase per month. Each publisher conducted a survey. In the survey, adult consumers were asked the number of fiction paperbacks they had purchased the previous month. The results are as follows: Publisher A # of books Freq. Rel. Freq. 0 10 1 12 2 16 3 12 4 8 5 6 6 2 8 2 Publisher B # of books Freq. Rel. Freq. 0 18 1 24 2 24 3 22 4 15 5 10 7 5 9 1 Publisher C # of books Freq. Rel. Freq. 0–1 20 2–3 35 4–5 12 6–7 2 8–9 1 1. Find the relative frequencies for each survey. Write them in the charts. 2. Using either a graphing calculator, computer, or by hand, use the frequency column to construct a histogram for each publisher's survey. For Publishers A and B, make bar widths of one. For Publisher C, make bar widths of two. 3. In complete sentences, give two reasons why the graphs for Publishers A and B are not identical. 4. Would you have expected the graph for Publisher C to look like the other two graphs? Why or why not? 5. Make new histograms for Publisher A and Publisher B. This time, make bar widths of two. 6. Now, compare the graph for Publisher C to the new graphs for Publishers A and B. Are the graphs more similar or more different? Explain your answer. Q 2.3.2 Often, cruise ships conduct all on-board transactions, with the exception of gambling, on a cashless basis. At the end of the cruise, guests pay one bill that covers all onboard transactions. Suppose that 60 single travelers and 70 couples were surveyed as to their on-board bills for a seven-day cruise from Los Angeles to the Mexican Riviera. Following is a summary of the bills for each group. Singles Amount($) Frequency Rel. 
Frequency 51–100 5 101–150 10 151–200 15 201–250 15 251–300 10 301–350 5 Couples Amount($) Frequency Rel. Frequency 100–150 5 201–250 5 251–300 5 301–350 5 351–400 10 401–450 10 451–500 10 501–550 10 551–600 5 601–650 5 1. Fill in the relative frequency for each group. 2. Construct a histogram for the singles group. Scale the x-axis by $50 widths. Use relative frequency on the y-axis. 3. Construct a histogram for the couples group. Scale the x-axis by$50 widths. Use relative frequency on the y-axis. 4. Compare the two graphs: 1. List two similarities between the graphs. 2. List two differences between the graphs. 3. Overall, are the graphs more similar or different? 5. Construct a new graph for the couples by hand. Since each couple is paying for two individuals, instead of scaling the x-axis by $50, scale it by$100. Use relative frequency on the y-axis. 6. Compare the graph for the singles with the new graph for the couples: 1. List two similarities between the graphs. 2. Overall, are the graphs more similar or different? 7. How did scaling the couples graph differently change the way you compared it to the singles graph? 8. Based on the graphs, do you think that individuals spend the same amount, more or less, as singles as they do person by person as a couple? Explain why in one or two complete sentences. S 2.3.2 Singles Amount($) Frequency Relative Frequency 51–100 5 0.08 101–150 10 0.17 151–200 15 0.25 201–250 15 0.25 251–300 10 0.17 301–350 5 0.08 Couples Amount($) Frequency Relative Frequency 100–150 5 0.07 201–250 5 0.07 251–300 5 0.07 301–350 5 0.07 351–400 10 0.14 401–450 10 0.14 451–500 10 0.14 501–550 10 0.14 551–600 5 0.07 601–650 5 0.07 1. See the tables above 2. In the following histogram data values that fall on the right boundary are counted in the class interval, while values that fall on the left boundary are not counted (with the exception of the first interval where both boundary values are included). 3. In the following histogram, the data values that fall on the right boundary are counted in the class interval, while values that fall on the left boundary are not counted (with the exception of the first interval where values on both boundaries are included). 4. Compare the two graphs: 1. Answers may vary. Possible answers include: • Both graphs have a single peak. The percentage of people who own at most three t-shirts costing more than $19 each is approximately: 1. 21 2. 59 3. 41 4. Cannot be determined S 2.3.4 c Q 2.3.5 If the data were collected by asking the first 111 people who entered the store, then the type of sampling is: 1. cluster 2. simple random 3. stratified 4. convenience Q 2.3.6 Following are the 2010 obesity rates by U.S. states and Washington, DC. 
State Percent (%) State Percent (%) State Percent (%) Alabama 32.2 Kentucky 31.3 North Dakota 27.2 Alaska 24.5 Louisiana 31.0 Ohio 29.2 Arizona 24.3 Maine 26.8 Oklahoma 30.4 Arkansas 30.1 Maryland 27.1 Oregon 26.8 California 24.0 Massachusetts 23.0 Pennsylvania 28.6 Colorado 21.0 Michigan 30.9 Rhode Island 25.5 Connecticut 22.5 Minnesota 24.8 South Carolina 31.5 Delaware 28.0 Mississippi 34.0 South Dakota 27.3 Washington, DC 22.2 Missouri 30.5 Tennessee 30.8 Florida 26.6 Montana 23.0 Texas 31.0 Georgia 29.6 Nebraska 26.9 Utah 22.5 Hawaii 22.7 Nevada 22.4 Vermont 23.2 Idaho 26.5 New Hampshire 25.0 Virginia 26.0 Illinois 28.2 New Jersey 23.8 Washington 25.5 Indiana 29.6 New Mexico 25.1 West Virginia 32.5 Iowa 28.4 New York 23.9 Wisconsin 26.3 Kansas 29.4 North Carolina 27.8 Wyoming 25.1 Construct a bar graph of obesity rates of your state and the four states closest to your state. Hint: Label the $x$-axis with the states. S 2.3.7 Answers will vary. Exercise 2.3.6 Sixty-five randomly selected car salespersons were asked the number of cars they generally sell in one week. Fourteen people answered that they generally sell three cars; nineteen generally sell four cars; twelve generally sell five cars; nine generally sell six cars; eleven generally sell seven cars. Complete the table. Data Value (# cars) Frequency Relative Frequency Cumulative Relative Frequency Exercise 2.3.7 What does the frequency column in the Table above sum to? Why? Answer 65 Exercise 2.3.8 What does the relative frequency column in in the Table above sum to? Why? Exercise 2.3.9 What is the difference between relative frequency and frequency for each data value in in the Table above ? Answer The relative frequency shows the proportion of data points that have each value. The frequency tells the number of data points that have each value. Exercise 2.3.10 What is the difference between cumulative relative frequency and relative frequency for each data value? Exercise 2.3.11 To construct the histogram for the data in in the Table above , determine appropriate minimum and maximum x and y values and the scaling. Sketch the histogram. Label the horizontal and vertical axes with words. Include numerical scaling. Answer Answers will vary. One possible histogram is shown: Exercise 2.3.12 Construct a frequency polygon for the following: 1. Pulse Rates for Women Frequency 60–69 12 70–79 14 80–89 11 90–99 1 100–109 1 110–119 0 120–129 1 2. Actual Speed in a 30 MPH Zone Frequency 42–45 25 46–49 14 50–53 7 54–57 3 58–61 1 3. Tar (mg) in Nonfiltered Cigarettes Frequency 10–13 1 14–17 0 18–21 15 22–25 7 26–29 2 Exercise 2.3.13 Construct a frequency polygon from the frequency distribution for the 50 highest ranked countries for depth of hunger. Depth of Hunger Frequency 230–259 21 260–289 13 290–319 5 320–349 7 350–379 1 380–409 1 410–439 1 Answer Find the midpoint for each class. These will be graphed on the x-axis. The frequency values will be graphed on the y-axis values. Exercise 2.3.14 Use the two frequency tables to compare the life expectancy of men and women from 20 randomly selected countries. Include an overlayed frequency polygon and discuss the shapes of the distributions, the center, the spread, and any outliers. What can we conclude about the life expectancy of women compared to men? 
Life Expectancy at Birth – Women Frequency 49–55 3 56–62 3 63–69 1 70–76 3 77–83 8 84–90 2 Life Expectancy at Birth – Men Frequency 49–55 3 56–62 3 63–69 1 70–76 1 77–83 7 84–90 5 Exercise 2.3.15 Construct a times series graph for (a) the number of male births, (b) the number of female births, and (c) the total number of births. Sex/Year 1855 1856 1857 1858 1859 1860 1861 Female 45,545 49,582 50,257 50,324 51,915 51,220 52,403 Male 47,804 52,239 53,158 53,694 54,628 54,409 54,606 Total 93,349 101,821 103,415 104,018 106,543 105,629 107,009 Sex/Year 1862 1863 1864 1865 1866 1867 1868 1869 Female 51,812 53,115 54,959 54,850 55,307 55,527 56,292 55,033 Male 55,257 56,226 57,374 58,220 58,360 58,517 59,222 58,321 Total 107,069 109,341 112,333 113,070 113,667 114,044 115,514 113,354 Sex/Year 1871 1870 1872 1871 1872 1827 1874 1875 Female 56,099 56,431 57,472 56,099 57,472 58,233 60,109 60,146 Male 60,029 58,959 61,293 60,029 61,293 61,467 63,602 63,432 Total 116,128 115,390 118,765 116,128 118,765 119,700 123,711 123,578 Answer Exercise 2.3.16 The following data sets list full time police per 100,000 citizens along with homicides per 100,000 citizens for the city of Detroit, Michigan during the period from 1961 to 1973. Year 1961 1962 1963 1964 1965 1966 1967 Police 260.35 269.8 272.04 272.96 272.51 261.34 268.89 Homicides 8.6 8.9 8.52 8.89 13.07 14.57 21.36 Year 1968 1969 1970 1971 1972 1973 Police 295.99 319.87 341.43 356.59 376.69 390.19 Homicides 28.03 31.49 37.39 46.26 47.24 52.33 1. Construct a double time series graph using a common x-axis for both sets of data. 2. Which variable increased the fastest? Explain. 3. Did Detroit’s increase in police officers have an impact on the murder rate? Explain. 2.4: Measures of the Location of the Data Q 2.4.1 The median age for U.S. blacks currently is 30.9 years; for U.S. whites it is 42.3 years. 1. Based upon this information, give two reasons why the black median age could be lower than the white median age. 2. Does the lower median age for blacks necessarily mean that blacks die younger than whites? Why or why not? 3. How might it be possible for blacks and whites to die at approximately the same age, but for the median age for whites to be higher? Q 2.4.2 Six hundred adult Americans were asked by telephone poll, "What do you think constitutes a middle-class income?" The results are in the Table below. Also, include left endpoint, but not the right endpoint. Salary ($) Relative Frequency < 20,000 0.02 20,000–25,000 0.09 25,000–30,000 0.19 30,000–40,000 0.26 40,000–50,000 0.18 50,000–75,000 0.17 75,000–99,999 0.02 100,000+ 0.01 1. What percentage of the survey answered "not sure"? 2. What percentage think that middle-class is from $25,000 to$50,000? 3. Construct a histogram of the data. 1. Should all bars have the same width, based on the data? Why or why not? 2. How should the <20,000 and the 100,000+ intervals be handled? Why? 4. Find the 40th and 80th percentiles 5. Construct a bar graph of the data S 2.4.2 1. $1 - (0.02 + 0.09 + 0.19 + 0.26 + 0.18 + 0.17 + 0.02 + 0.01) = 0.06$ 2. $0.19 + 0.26 + 0.18 = 0.63$ 3. Check student’s solution. 4. 40th percentile will fall between 30,000 and 40,000 80th percentile will fall between 50,000 and 75,000 5. Check student’s solution. Q 2.4.3 Given the following box plot: 1. which quarter has the smallest spread of data? What is that spread? 2. which quarter has the largest spread of data? What is that spread? 3. find the interquartile range (IQR). 4. 
are there more data in the interval 5–10 or in the interval 10–13? How do you know this? 5. which interval has the fewest data in it? How do you know this? 1. 0–2 2. 2–4 3. 10–12 4. 12–13 5. need more information Q 2.4.4 The following box plot shows the U.S. population for 1990, the latest available year. 1. Are there fewer or more children (age 17 and under) than senior citizens (age 65 and over)? How do you know? 2. 12.6% are age 65 and over. Approximately what percentage of the population are working age adults (above age 17 to age 65)? S 2.4.4 1. more children; the left whisker shows that 25% of the population are children 17 and younger. The right whisker shows that 25% of the population are adults 50 and older, so adults 65 and over represent less than 25%. 2. 62.4% 2.5: Box Plots Q 2.5.1 In a survey of 20-year-olds in China, Germany, and the United States, people were asked the number of foreign countries they had visited in their lifetime. The following box plots display the results. 1. In complete sentences, describe what the shape of each box plot implies about the distribution of the data collected. 2. Have more Americans or more Germans surveyed been to over eight foreign countries? 3. Compare the three box plots. What do they imply about the foreign travel of 20-year-old residents of the three countries when compared to each other? Q 2.5.2 Given the following box plot, answer the questions. 1. Think of an example (in words) where the data might fit into the above box plot. In 2–5 sentences, write down the example. 2. What does it mean to have the first and second quartiles so close together, while the second to third quartiles are far apart? S 2.5.2 1. Answers will vary. Possible answer: State University conducted a survey to see how involved its students are in community service. The box plot shows the number of community service hours logged by participants over the past year. 2. Because the first and second quartiles are close, the data in this quarter is very similar. There is not much variation in the values. The data in the third quarter is much more variable, or spread out. This is clear because the second quartile is so far away from the third quartile. Q 2.5.3 Given the following box plots, answer the questions. 1. In complete sentences, explain why each statement is false. 1. Data 1 has more data values above two than Data 2 has above two. 2. The data sets cannot have the same mode. 3. For Data 1, there are more data values below four than there are above four. 2. For which group, Data 1 or Data 2, is the value of “7” more likely to be an outlier? Explain why in complete sentences. Q 2.5.4 A survey was conducted of 130 purchasers of new BMW 3 series cars, 130 purchasers of new BMW 5 series cars, and 130 purchasers of new BMW 7 series cars. In it, people were asked the age they were when they purchased their car. The following box plots display the results. 1. In complete sentences, describe what the shape of each box plot implies about the distribution of the data collected for that car series. 2. Which group is most likely to have an outlier? Explain how you determined that. 3. Compare the three box plots. What do they imply about the age of purchasing a BMW from the series when compared to each other? 4. Look at the BMW 5 series. Which quarter has the smallest spread of data? What is the spread? 5. Look at the BMW 5 series. Which quarter has the largest spread of data? What is the spread? 6. Look at the BMW 5 series. Estimate the interquartile range (IQR). 7. 
Look at the BMW 5 series. Are there more data in the interval 31 to 38 or in the interval 45 to 55? How do you know this? 8. Look at the BMW 5 series. Which interval has the fewest data in it? How do you know this? 1. 31–35 2. 38–41 3. 41–64 S 2.5.4 1. Each box plot is spread out more in the greater values. Each plot is skewed to the right, so the ages of the top 50% of buyers are more variable than the ages of the lower 50%. 2. The BMW 3 series is most likely to have an outlier. It has the longest whisker. 3. Comparing the median ages, younger people tend to buy the BMW 3 series, while older people tend to buy the BMW 7 series. However, this is not a rule, because there is so much variability in each data set. 4. The second quarter has the smallest spread. There seems to be only a three-year difference between the first quartile and the median. 5. The third quarter has the largest spread. There seems to be approximately a 14-year difference between the median and the third quartile. 6. IQR ~ 17 years 7. There is not enough information to tell. Each interval lies within a quarter, so we cannot tell exactly where the data in that quarter is concentrated. 8. The interval from 31 to 35 years has the fewest data values. Twenty-five percent of the values fall in the interval 38 to 41, and 25% fall between 41 and 64. Since 25% of values fall between 31 and 38, we know that fewer than 25% fall between 31 and 35. Q 2.5.5 Twenty-five randomly selected students were asked the number of movies they watched the previous week. The results are as follows: # of movies Frequency 0 5 1 9 2 6 3 4 4 1 Construct a box plot of the data. 2.6: Measures of the Center of the Data Q 2.6.1 The most obese countries in the world have obesity rates that range from 11.4% to 74.6%. This data is summarized in the following table. Percent of Population Obese Number of Countries 11.4–20.45 29 20.45–29.45 13 29.45–38.45 4 38.45–47.45 0 47.45–56.45 2 56.45–65.45 1 65.45–74.45 0 74.45–83.45 1 1. What is the best estimate of the average obesity percentage for these countries? 2. The United States has an average obesity rate of 33.9%. Is this rate above average or below? 3. How does the United States compare to other countries? Q 2.6.2 The table below gives the percent of children under five considered to be underweight. What is the best estimate for the mean percentage of underweight children? Percent of Underweight Children Number of Countries 16–21.45 23 21.45–26.9 4 26.9–32.35 9 32.35–37.8 7 37.8–43.25 6 43.25–48.7 1 S 2.6.2 The mean percentage, $\bar{x} = \frac{1328.65}{50} = 26.75$ 2.7: Skewness and the Mean, Median, and Mode Q 2.7.1 The median age of the U.S. population in 1980 was 30.0 years. In 1991, the median age was 33.1 years. 1. What does it mean for the median age to rise? 2. Give two reasons why the median age could rise. 3. For the median age to rise, is the actual number of children less in 1991 than it was in 1980? Why or why not? 2.8: Measures of the Spread of the Data Use the following information to answer the next nine exercises: The population parameters below describe the full-time equivalent number of students (FTES) each year at Lake Tahoe Community College from 1976–1977 through 2004–2005. • $\mu = 1000$ FTES • median = 1,014 FTES • $\sigma = 474$ FTES • first quartile = 528.5 FTES • third quartile = 1,447.5 FTES • $n = 29$ years Q 2.8.1 A sample of 11 years is taken. About how many are expected to have a FTES of 1014 or above? Explain how you determined your answer. 
S 2.8.1 The median value is the middle value in the ordered list of data values. The median value of a set of 11 will be the 6th number in order. Six years will have totals at or below the median.
Q 2.8.2 75% of all years have an FTES: 1. at or below: _____ 2. at or above: _____
Q 2.8.3 The population standard deviation = _____ 474 FTES
Q 2.8.4 What percent of the FTES were from 528.5 to 1447.5? How do you know?
Q 2.8.5 What is the IQR? What does the IQR represent? 919
Q 2.8.6 How many standard deviations away from the mean is the median?
Additional Information: The population FTES for 2005–2006 through 2010–2011 was given in an updated report. The data are reported here. Year 2005–06 2006–07 2007–08 2008–09 2009–10 2010–11 Total FTES 1,585 1,690 1,735 1,935 2,021 1,890
Q 2.8.7 Calculate the mean, median, standard deviation, the first quartile, the third quartile and the IQR. Round to one decimal place.
S 2.8.7 • mean = 1,809.3 • median = 1,812.5 • standard deviation = 151.2 • first quartile = 1,690 • third quartile = 1,935 • IQR = 245
Q 2.8.8 Construct a box plot for the FTES for 2005–2006 through 2010–2011 and a box plot for the FTES for 1976–1977 through 2004–2005.
Q 2.8.9 Compare the IQR for the FTES for 1976–1977 through 2004–2005 with the IQR for the FTES for 2005–2006 through 2010–2011. Why do you suppose the IQRs are so different?
S 2.8.9 Hint: Think about the number of years covered by each time period and what happened to higher education during those periods.
Q 2.8.11 Three students were applying to the same graduate school. They came from schools with different grading systems. Which student had the best GPA when compared to other students at his school? Explain how you determined your answer. Student GPA School Average GPA School Standard Deviation Thuy 2.7 3.2 0.8 Vichet 87 75 20 Kamala 8.6 8 0.4
Q 2.8.12 A music school has budgeted to purchase three musical instruments. They plan to purchase a piano costing $3,000, a guitar costing $550, and a drum set costing $600. The mean cost for a piano is $4,000 with a standard deviation of $2,500. The mean cost for a guitar is $500 with a standard deviation of $200. The mean cost for drums is $700 with a standard deviation of $100. Which cost is the lowest, when compared to other instruments of the same type? Which cost is the highest when compared to other instruments of the same type? Justify your answer.
S 2.8.12 For pianos, the cost of the piano is 0.4 standard deviations BELOW the mean. For guitars, the cost of the guitar is 0.25 standard deviations ABOVE the mean. For drums, the cost of the drum set is 1.0 standard deviations BELOW the mean. Of the three, the drums cost the lowest in comparison to the cost of other instruments of the same type. The guitar costs the most in comparison to the cost of other instruments of the same type.
Q 2.8.13 An elementary school class ran one mile with a mean of 11 minutes and a standard deviation of three minutes. Rachel, a student in the class, ran one mile in eight minutes. A junior high school class ran one mile with a mean of nine minutes and a standard deviation of two minutes. Kenji, a student in the class, ran one mile in 8.5 minutes. A high school class ran one mile with a mean of seven minutes and a standard deviation of four minutes. Nedda, a student in the class, ran one mile in eight minutes. 1. Why is Kenji considered a better runner than Nedda, even though Nedda ran faster than he? 2. Who is the fastest runner with respect to his or her class? Explain why.
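Q 2.8.11 through Q 2.8.13 all rely on the same comparison: convert each value to a z-score, z = (x − mean)/standard deviation, computed within its own group. A brief sketch with the numbers quoted in Q 2.8.11 and Q 2.8.13 follows; for the running times, a more negative z-score means a faster run relative to the class.

```python
# z-score comparisons used in Q 2.8.11 - Q 2.8.13: z = (x - mean) / sd within each group.
def z_score(x, mean, sd):
    return (x - mean) / sd

# GPA comparison (values from the table in Q 2.8.11).
students = {"Thuy": (2.7, 3.2, 0.8), "Vichet": (87, 75, 20), "Kamala": (8.6, 8, 0.4)}
for name, (gpa, school_mean, school_sd) in students.items():
    print(f"{name}: z = {z_score(gpa, school_mean, school_sd):+.2f}")

# Mile-run comparison (values from Q 2.8.13). Lower times are better, so the most
# negative z-score marks the fastest runner relative to his or her class.
runners = {"Rachel": (8, 11, 3), "Kenji": (8.5, 9, 2), "Nedda": (8, 7, 4)}
for name, (time, class_mean, class_sd) in runners.items():
    print(f"{name}: z = {z_score(time, class_mean, class_sd):+.2f}")
```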
Q 2.8.14 The most obese countries in the world have obesity rates that range from 11.4% to 74.6%. This data is summarized in the table below.
Percent of Population Obese Number of Countries
11.4–20.45 29
20.45–29.45 13
29.45–38.45 4
38.45–47.45 0
47.45–56.45 2
56.45–65.45 1
65.45–74.45 0
74.45–83.45 1
1. What is the best estimate of the average obesity percentage for these countries?
2. What is the standard deviation for the listed obesity rates?
3. The United States has an average obesity rate of 33.9%. Is this rate above average or below?
4. How “unusual” is the United States’ obesity rate compared to the average rate? Explain.
S 2.8.14
• $\bar{x} = 23.32$
• Using the TI 83/84, we obtain a standard deviation of $s_{x} = 12.95$.
• The obesity rate of the United States is 10.58% higher than the average obesity rate.
• Since the standard deviation is 12.95, we see that $23.32 + 12.95 = 36.27$ is the obesity percentage that is one standard deviation from the mean. The United States obesity rate is slightly less than one standard deviation from the mean. Therefore, we can assume that the United States, while 34% obese, does not have an unusually high percentage of obese people.
Q 2.8.15 The Table below gives the percent of children under five considered to be underweight.
Percent of Underweight Children Number of Countries
16–21.45 23
21.45–26.9 4
26.9–32.35 9
32.35–37.8 7
37.8–43.25 6
43.25–48.7 1
1. What is the best estimate for the mean percentage of underweight children?
2. What is the standard deviation?
3. Which interval(s) could be considered unusual? Explain.
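S 2.8.14 quotes $\bar{x} = 23.32$ and $s_{x} = 12.95$ from a TI-83/84. The Python sketch below (an illustration, not part of the text) shows the underlying grouped-data calculation: each interval is represented by its midpoint, and the sample standard deviation uses an $n - 1$ divisor, which is what the calculator's $s_{x}$ reports.

```python
# A minimal sketch (not from the text) reproducing the grouped-data estimates
# reported in S 2.8.14: treat every value in a class as its interval midpoint.
import math

intervals = [(11.4, 20.45), (20.45, 29.45), (29.45, 38.45), (38.45, 47.45),
             (47.45, 56.45), (56.45, 65.45), (65.45, 74.45), (74.45, 83.45)]
freqs = [29, 13, 4, 0, 2, 1, 0, 1]

midpoints = [(lo + hi) / 2 for lo, hi in intervals]
n = sum(freqs)                                            # 50 countries
mean = sum(m * f for m, f in zip(midpoints, freqs)) / n   # about 23.32
var = sum(f * (m - mean) ** 2 for m, f in zip(midpoints, freqs)) / (n - 1)
sd = math.sqrt(var)                                       # about 12.95, matching the TI-83/84 value

print(round(mean, 2), round(sd, 2))
```

Swapping in the underweight-children table from Q 2.8.15 gives its mean and standard deviation the same way.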
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. 3.2: Terminology Q 3.2.1 The graph in Figure 3.2.1  displays the sample sizes and percentages of people in different age and gender groups who were polled concerning their approval of Mayor Ford’s actions in office. The total number in the sample of all the age groups is 1,045. 1. Define three events in the graph. 2. Describe in words what the entry 40 means. 3. Describe in words the complement of the entry in question 2. 4. Describe in words what the entry 30 means. 5. Out of the males and females, what percent are males? 6. Out of the females, what percent disapprove of Mayor Ford? 7. Out of all the age groups, what percent approve of Mayor Ford? 8. Find $P(\text{Approve|Male})$. 9. Out of the age groups, what percent are more than 44 years old? 10. Find $P(\text{Approve|Age} < 35)$. Q 3.2.2 Explain what is wrong with the following statements. Use complete sentences. 1. If there is a 60% chance of rain on Saturday and a 70% chance of rain on Sunday, then there is a 130% chance of rain over the weekend. 2. The probability that a baseball player hits a home run is greater than the probability that he gets a successful hit. S 3.2.2 1. You can't calculate the joint probability knowing the probability of both events occurring, which is not in the information given; the probabilities should be multiplied, not added; and probability is never greater than 100% 2. A home run by definition is a successful hit, so he has to have at least as many successful hits as home runs. 3.3: Independent and Mutually Exclusive Events Use the following information to answer the next 12 exercises. The graph shown is based on more than 170,000 interviews done by Gallup that took place from January through December 2012. The sample consists of employed Americans 18 years of age or older. The Emotional Health Index Scores are the sample space. We randomly sample one Emotional Health Index Score. Q 3.3.1 Find the probability that an Emotional Health Index Score is 82.7. Q 3.3.2 Find the probability that an Emotional Health Index Score is 81.0. 0 Q 3.3.3 Find the probability that an Emotional Health Index Score is more than 81? Q 3.3.4 Find the probability that an Emotional Health Index Score is between 80.5 and 82? 0.3571 Q 3.3.5 If we know an Emotional Health Index Score is 81.5 or more, what is the probability that it is 82.7? Q 3.3.6 What is the probability that an Emotional Health Index Score is 80.7 or 82.7? 0.2142 Q 3.3.7 What is the probability that an Emotional Health Index Score is less than 80.2 given that it is already less than 81. Q 3.3.8 What occupation has the highest emotional index score? Physician (83.7) Q 3.3.9 What occupation has the lowest emotional index score? Q 3.3.10 What is the range of the data? S 3.3.10 83.7 − 79.6 = 4.1 Q 3.3.11 Compute the average EHIS. Q 3.3.12 If all occupations are equally likely for a certain individual, what is the probability that he or she will have an occupation with lower than average EHIS? S 3.3.12 $P(\text{Occupation} < 81.3) = 0.5$ 3.4: Two Basic Rules of Probability Q 3.4.1 On February 28, 2013, a Field Poll Survey reported that 61% of California registered voters approved of allowing two people of the same gender to marry and have regular marriage laws apply to them. Among 18 to 39 year olds (California registered voters), the approval rating was 78%. 
Six in ten California registered voters said that the upcoming Supreme Court’s ruling about the constitutionality of California’s Proposition 8 was either very or somewhat important to them. Out of those CA registered voters who support same-sex marriage, 75% say the ruling is important to them. In this problem, let: • $\text{C} =$ California registered voters who support same-sex marriage. • $\text{B} =$ California registered voters who say the Supreme Court’s ruling about the constitutionality of California’s Proposition 8 is very or somewhat important to them • $\text{A} =$ California registered voters who are 18 to 39 years old. 1. Find $P(\text{C})$. 2. Find $P(\text{B})$. 3. Find $P(\text{C|A})$. 4. Find $P(\text{B|C})$. 5. In words, what is $\text{C|A}$? 6. In words, what is $\text{B|C}$? 7. Find $P(\text{C AND B})$. 8. In words, what is $\text{C AND B}$? 9. Find $P(\text{C OR B})$. 10. Are $\text{C}$ and $\text{B}$ mutually exclusive events? Show why or why not. Q 3.4.2 After Rob Ford, the mayor of Toronto, announced his plans to cut budget costs in late 2011, the Forum Research polled 1,046 people to measure the mayor’s popularity. Everyone polled expressed either approval or disapproval. These are the results their poll produced: • In early 2011, 60 percent of the population approved of Mayor Ford’s actions in office. • In mid-2011, 57 percent of the population approved of his actions. • In late 2011, the percentage of popular approval was measured at 42 percent. 1. What is the sample size for this study? 2. What proportion in the poll disapproved of Mayor Ford, according to the results from late 2011? 3. How many people polled responded that they approved of Mayor Ford in late 2011? 4. What is the probability that a person supported Mayor Ford, based on the data collected in mid-2011? 5. What is the probability that a person supported Mayor Ford, based on the data collected in early 2011? S 3.4.2 1. The Forum Research surveyed 1,046 Torontonians. 2. 58% 3. 42% of 1,046 = 439 (rounding to the nearest integer) 4. 0.57 5. 0.60. Use the following information to answer the next three exercises. The casino game, roulette, allows the gambler to bet on the probability of a ball, which spins in the roulette wheel, landing on a particular color, number, or range of numbers. The table used to place bets contains of 38 numbers, and each number is assigned to a color and a range. Q 3.4.3 1. List the sample space of the 38 possible outcomes in roulette. 2. You bet on red. Find $P(\text{red})$. 3. You bet on -1st 12- (1st Dozen). Find $P(\text{-1st 12-})$. 4. You bet on an even number. Find $P(\text{even number})$. 5. Is getting an odd number the complement of getting an even number? Why? 6. Find two mutually exclusive events. 7. Are the events Even and 1st Dozen independent? Q 3.4.4 Compute the probability of winning the following types of bets: 1. Betting on two lines that touch each other on the table as in 1-2-3-4-5-6 2. Betting on three numbers in a line, as in 1-2-3 3. Betting on one number 4. Betting on four numbers that touch each other to form a square, as in 10-11-13-14 5. Betting on two numbers that touch each other on the table, as in 10-11 or 10-13 6. Betting on 0-00-1-2-3 7. Betting on 0-1-2; or 0-00-2; or 00-2-3 S 3.4.4 1. $P(\text{Betting on two line that touch each other on the table}) = \frac{6}{38}$ 2. $P(\text{Betting on three numbers in a line}) = \frac{3}{38}$ 3. $P(\text{Betting on one number}) = \frac{1}{38}$ 4. 
$P(\text{Betting on four numbers that touch each other to form a square}) = \frac{4}{38}$ 5. $P(\text{Betting on two numbers that touch each other on the table}) = \frac{2}{38}$ 6. $P(\text{Betting on 0-00-1-2-3}) = \frac{5}{38}$ 7. $P(\text{Betting on 0-1-2; or 0-00-2; or 00-2-3}) = \frac{3}{38}$
Q 3.4.5 Compute the probability of winning the following types of bets: 1. Betting on a color 2. Betting on one of the dozen groups 3. Betting on the range of numbers from 1 to 18 4. Betting on the range of numbers 19–36 5. Betting on one of the columns 6. Betting on an even or odd number (excluding zero)
Q 3.4.6 Suppose that you have eight cards. Five are green and three are yellow. The five green cards are numbered 1, 2, 3, 4, and 5. The three yellow cards are numbered 1, 2, and 3. The cards are well shuffled. You randomly draw one card. • $\text{G} =$ card drawn is green • $\text{E} =$ card drawn is even-numbered 1. List the sample space. 2. $P(\text{G}) =$ _____ 3. $P(\text{G|E}) =$ _____ 4. $P(\text{G AND E}) =$ _____ 5. $P(\text{G OR E}) =$ _____ 6. Are $\text{G}$ and $\text{E}$ mutually exclusive? Justify your answer numerically.
S 3.4.6 1. $\{G1, G2, G3, G4, G5, Y1, Y2, Y3\}$ 2. $\frac{5}{8}$ 3. $\frac{2}{3}$ 4. $\frac{2}{8}$ 5. $\frac{6}{8}$ 6. No, because $P(\text{G AND E})$ does not equal 0.
Q 3.4.7 Roll two fair dice. Each die has six faces. 1. List the sample space. 2. Let $\text{A}$ be the event that either a three or four is rolled first, followed by an even number. Find $P(\text{A})$. 3. Let $\text{B}$ be the event that the sum of the two rolls is at most seven. Find $P(\text{B})$. 4. In words, explain what “$P(\text{A|B})$” represents. Find $P(\text{A|B})$. 5. Are $\text{A}$ and $\text{B}$ mutually exclusive events? Explain your answer in one to three complete sentences, including numerical justification. 6. Are $\text{A}$ and $\text{B}$ independent events? Explain your answer in one to three complete sentences, including numerical justification.
Q 3.4.8 A special deck of cards has ten cards. Four are green, three are blue, and three are red. When a card is picked, its color is recorded. An experiment consists of first picking a card and then tossing a coin. 1. List the sample space. 2. Let $\text{A}$ be the event that a blue card is picked first, followed by landing a head on the coin toss. Find $P(\text{A})$. 3. Let $\text{B}$ be the event that a red or green is picked, followed by landing a head on the coin toss. Are the events $\text{A}$ and $\text{B}$ mutually exclusive? Explain your answer in one to three complete sentences, including numerical justification. 4. Let $\text{C}$ be the event that a red or blue is picked, followed by landing a head on the coin toss. Are the events $\text{A}$ and $\text{C}$ mutually exclusive? Explain your answer in one to three complete sentences, including numerical justification.
S 3.4.9 The coin toss is independent of the card picked first. 1. $\{(G,H) (G,T) (B,H) (B,T) (R,H) (R,T)\}$ 2. $P(\text{A}) = P(\text{blue})P(\text{head}) = \left(\frac{3}{10}\right)\left(\frac{1}{2}\right) = \frac{3}{20}$ 3. Yes, $\text{A}$ and $\text{B}$ are mutually exclusive because they cannot happen at the same time; you cannot pick a card that is both blue and also (red or green). $P(\text{A AND B}) = 0$ 4. No, $\text{A}$ and $\text{C}$ are not mutually exclusive because they can occur at the same time. In fact, $\text{C}$ includes all of the outcomes of $\text{A}$; if the card chosen is blue it is also (red or blue).
$P(\text{A AND C}) = P(\text{A}) = \frac{3}{20}$
Q 3.4.10 An experiment consists of first rolling a die and then tossing a coin. 1. List the sample space. 2. Let $\text{A}$ be the event that either a three or a four is rolled first, followed by landing a head on the coin toss. Find $P(\text{A})$. 3. Let $\text{B}$ be the event that the first and second tosses land on heads. Are the events $\text{A}$ and $\text{B}$ mutually exclusive? Explain your answer in one to three complete sentences, including numerical justification.
Q 3.4.11 An experiment consists of tossing a nickel, a dime, and a quarter. Of interest is the side the coin lands on. 1. List the sample space. 2. Let $\text{A}$ be the event that there are at least two tails. Find $P(\text{A})$. 3. Let $\text{B}$ be the event that the first and second tosses land on heads. Are the events $\text{A}$ and $\text{B}$ mutually exclusive? Explain your answer in one to three complete sentences, including justification.
S 3.4.12 1. $S = \{\text{(HHH), (HHT), (HTH), (HTT), (THH), (THT), (TTH), (TTT)}\}$ 2. $\frac{4}{8}$ 3. Yes, because if $\text{A}$ has occurred, it is impossible to obtain two tails. In other words, $P(\text{A AND B}) = 0$.
Q 3.4.13 Consider the following scenario: Let $P(\text{C}) = 0.4$. Let $P(\text{D}) = 0.5$. Let $P(\text{C|D}) = 0.6$. 1. Find $P(\text{C AND D})$. 2. Are $\text{C}$ and $\text{D}$ mutually exclusive? Why or why not? 3. Are $\text{C}$ and $\text{D}$ independent events? Why or why not? 4. Find $P(\text{C OR D})$. 5. Find $P(\text{D|C})$.
Q 3.4.14 $\text{Y}$ and $\text{Z}$ are independent events. 1. Rewrite the basic Addition Rule $P(\text{Y OR Z}) = P(\text{Y}) + P(\text{Z}) - P(\text{Y AND Z})$ using the information that $\text{Y}$ and $\text{Z}$ are independent events. 2. Use the rewritten rule to find $P(\text{Z})$ if $P(\text{Y OR Z}) = 0.71$ and $P(\text{Y}) = 0.42$.
S 3.4.14 1. If $\text{Y}$ and $\text{Z}$ are independent, then $P(\text{Y AND Z}) = P(\text{Y})P(\text{Z})$, so $P(\text{Y OR Z}) = P(\text{Y}) + P(\text{Z}) - P(\text{Y})P(\text{Z})$. 2. 0.5
Q 3.4.15 $\text{G}$ and $\text{H}$ are mutually exclusive events. $P(\text{G}) = 0.5$, $P(\text{H}) = 0.3$. 1. Explain why the following statement MUST be false: $P(\text{H|G}) = 0.4$. 2. Find $P(\text{H OR G})$. 3. Are $\text{G}$ and $\text{H}$ independent or dependent events? Explain in a complete sentence.
Q 3.4.16 Approximately 281,000,000 people over age five live in the United States. Of these people, 55,000,000 speak a language other than English at home. Of those who speak another language at home, 62.3% speak Spanish. Let: $\text{E} =$ speaks English at home; $\text{E′} =$ speaks another language at home; $\text{S} =$ speaks Spanish. Finish each probability statement by matching the correct answer.
Probability Statements Answers
a. $P(\text{E′}) =$ i. 0.8043
b. $P(\text{E}) =$ ii. 0.623
c. $P(\text{S and E′}) =$ iii. 0.1957
d. $P(\text{S|E′}) =$ iv. 0.1219
1. iii 2. i 3. iv 4. ii
Q 3.4.17 In 1994, the U.S. government held a lottery to issue 55,000 Green Cards (permits for non-citizens to work legally in the U.S.). Renate Deutsch, from Germany, was one of approximately 6.5 million people who entered this lottery. Let $G =$ won green card. 1. What was Renate’s chance of winning a Green Card? Write your answer as a probability statement. 2. In the summer of 1994, Renate received a letter stating she was one of 110,000 finalists chosen. Once the finalists were chosen, assuming that each finalist had an equal chance to win, what was Renate’s chance of winning a Green Card?
Write your answer as a conditional probability statement. Let $\text{F} =$ was a finalist. 3. Are $\text{G}$ and $\text{F}$ independent or dependent events? Justify your answer numerically and also explain why. 4. Are $\text{G}$ and $\text{F}$ mutually exclusive events? Justify your answer numerically and explain why. Q 3.4.18 Three professors at George Washington University did an experiment to determine if economists are more selfish than other people. They dropped 64 stamped, addressed envelopes with \$10 cash in different classrooms on the George Washington campus. 44% were returned overall. From the economics classes 56% of the envelopes were returned. From the business, psychology, and history classes 31% were returned. Let: $\text{R} =$ money returned; $\text{E} =$ economics classes; $\text{O} =$ other classes 1. Write a probability statement for the overall percent of money returned. 2. Write a probability statement for the percent of money returned out of the economics classes. 3. Write a probability statement for the percent of money returned out of the other classes. 4. Is money being returned independent of the class? Justify your answer numerically and explain it. 5. Based upon this study, do you think that economists are more selfish than other people? Explain why or why not. Include numbers to justify your answer. S 3.4.18 1. $P(\text{R}) = 0.44$ 2. $P(\text{R|E}) = 0.56$ 3. $P(\text{R|O}) = 0.31$ 4. No, whether the money is returned is not independent of which class the money was placed in. There are several ways to justify this mathematically, but one is that the money placed in economics classes is not returned at the same overall rate; $P(\text{R|E}) \neq P(\text{R})$. 5. No, this study definitely does not support that notion; in fact, it suggests the opposite. The money placed in the economics classrooms was returned at a higher rate than the money place in all classes collectively; $P(\text{R|E}) > P(\text{R})$. Q 3.4.19 The following table of data obtained from www.baseball-almanac.com shows hit information for four players. Suppose that one hit from the table is randomly selected. Name Single Double Triple Home Run Total Hits Babe Ruth 1,517 506 136 714 2,873 Jackie Robinson 1,054 273 54 137 1,518 Ty Cobb 3,603 174 295 114 4,189 Hank Aaron 2,294 624 98 755 3,771 Total 8,471 1,577 583 1,720 12,351 Are "the hit being made by Hank Aaron" and "the hit being a double" independent events? 1. Yes, because $P(\text{hit by Hank Aaron|hit is a double}) = P(\text{hit by Hank Aaron})$ 2. No, because $P(\text{hit by Hank Aaron|hit is a double}) \neq P(\text{hit is a double})$ 3. No, because $P(\text{hit is by Hank Aaron|hit is a double}) \neq P(\text{hit by Hank Aaron})$ 4. Yes, because $P(\text{hit is by Hank Aaron|hit is a double}) = P(\text{hit is a double})$ Q 3.4.29 United Blood Services is a blood bank that serves more than 500 hospitals in 18 states. According to their website, a person with type O blood and a negative Rh factor (Rh-) can donate blood to any person with any blood type. Their data show that 43% of people have type O blood and 15% of people have Rh- factor; 52% of people have type O or Rh- factor. 1. Find the probability that a person has both type O blood and the Rh- factor. 2. Find the probability that a person does NOT have both type O blood and the Rh- factor. S 3.4.30 1. 
$P(\text{type O OR Rh-}) = P(\text{type O}) + P(\text{Rh-}) - P(\text{type O AND Rh-})$ $0.52 = 0.43 + 0.15 - P(\text{type O AND Rh-})$; solve to find $P(\text{type O AND Rh-}) = 0.06$ 6% of people have type O, Rh- blood 2. $P(\text{NOT(type O AND Rh-)}) = 1 - P(\text{type O AND Rh-}) = 1 - 0.06 = 0.94$ 94% of people do not have type O, Rh- blood Q 3.4.31 At a college, 72% of courses have final exams and 46% of courses require research papers. Suppose that 32% of courses have a research paper and a final exam. Let F be the event that a course has a final exam. Let R be the event that a course requires a research paper. 1. Find the probability that a course has a final exam or a research project. 2. Find the probability that a course has NEITHER of these two requirements. Q 3.4.32 In a box of assorted cookies, 36% contain chocolate and 12% contain nuts. Of those, 8% contain both chocolate and nuts. Sean is allergic to both chocolate and nuts. 1. Find the probability that a cookie contains chocolate or nuts (he can't eat it). 2. Find the probability that a cookie does not contain chocolate or nuts (he can eat it). S 3.4.32 1. Let $C =$ be the event that the cookie contains chocolate. Let $N =$ the event that the cookie contains nuts. 2. $P(\text{C OR N}) = P(\text{C}) + P(\text{N}) - P(\text{C AND N}) = 0.36 + 0.12 - 0.08 = 0.40$ 3. $P(\text{NEITHER chocolate NOR nuts}) = 1 - P(\text{C OR N}) = 1 - 0.40 = 0.60$ Q 3.4.33 A college finds that 10% of students have taken a distance learning class and that 40% of students are part time students. Of the part time students, 20% have taken a distance learning class. Let $\text{D} =$ event that a student takes a distance learning class and $\text{E} =$ event that a student is a part time student 1. Find $P(\text{D AND E})$. 2. Find $P(\text{E|D})$. 3. Find $P(\text{D OR E})$. 4. Using an appropriate test, show whether $\text{D}$ and $\text{E}$ are independent. 5. Using an appropriate test, show whether $\text{D}$ and $\text{E}$ are mutually exclusive. 3.5: Contingency Tables Use the information in the Table to answer the next eight exercises. The table shows the political party affiliation of each of 67 members of the US Senate in June 2012, and when they are up for reelection. Up for reelection: Democratic Party Republican Party Other Total November 2014 20 13 0 November 2016 10 24 0 Total Q 3.5.1 What is the probability that a randomly selected senator has an “Other” affiliation? 0 Q 3.5.2 What is the probability that a randomly selected senator is up for reelection in November 2016? Q 3.5.3 What is the probability that a randomly selected senator is a Democrat and up for reelection in November 2016? S 3.5.3 $\frac{10}{67}$ Q 3.5.4 What is the probability that a randomly selected senator is a Republican or is up for reelection in November 2014? Q 3.5.5 Suppose that a member of the US Senate is randomly selected. Given that the randomly selected senator is up for reelection in November 2016, what is the probability that this senator is a Democrat? S 3.5.5 $\frac{10}{34}$ Q 3.5.6 Suppose that a member of the US Senate is randomly selected. What is the probability that the senator is up for reelection in November 2014, knowing that this senator is a Republican? Q 3.5.7 The events “Republican” and “Up for reelection in 2016” are ________ 1. mutually exclusive. 2. independent. 3. both mutually exclusive and independent. 4. neither mutually exclusive nor independent. d Q 3.5.8 The events “Other” and “Up for reelection in November 2016” are ________ 1. 
mutually exclusive. 2. independent. 3. both mutually exclusive and independent. 4. neither mutually exclusive nor independent. Q 3.5.9 This table gives the number of participants in the recent National Health Interview Survey who had been treated for cancer in the previous 12 months. The results are sorted by age, race (black or white), and sex. We are interested in possible relationships between age, race, and sex. Race and Sex 15-24 25-40 41-65 over 65 TOTALS white, male 1,165 2,036 3,703   8,395 white, female 1,076 2,242 4,060   9,129 black, male 142 194 384   824 black, female 131 290 486   1,061 all others TOTALS 2,792 5,279 9,354   21,081 Do not include "all others" for parts f and g. 1. Fill in the column for cancer treatment for individuals over age 65. 2. Fill in the row for all other races. 3. Find the probability that a randomly selected individual was a white male. 4. Find the probability that a randomly selected individual was a black female. 5. Find the probability that a randomly selected individual was black 6. Find the probability that a randomly selected individual was a black or white male. 7. Out of the individuals over age 65, find the probability that a randomly selected individual was a black or white male. S 3.5.9 1. Race and Sex 1–14 15–24 25–64 over 64 TOTALS white, male 210 3,360 13,610 4,870 22,050 white, female 80 580 3,380 890 4,930 black, male 10 460 1,060 140 1,670 black, female 0 40 270 20 330 all others       100 TOTALS 310 4,650 18,780 6,020 29,760 2. Race and Sex 1–14 15–24 25–64 over 64 TOTALS white, male 210 3,360 13,610 4,870 22,050 white, female 80 580 3,380 890 4,930 black, male 10 460 1,060 140 1,670 black, female 0 40 270 20 330 all others 10 210 460 100 780 TOTALS 310 4,650 18,780 6,020 29,760 3. $\frac{22,050}{29,760}$ 4. $\frac{330}{29,760}$ 5. $\frac{2,000}{29,760}$ 6. $\frac{23,720}{29,760}$ 7. $\frac{5,010}{6,020}$ Use the following information to answer the next two exercises. The table of data obtained fromwww.baseball-almanac.com shows hit information for four well known baseball players. Suppose that one hit from the table is randomly selected. NAME Single Double Triple Home Run TOTAL HITS Babe Ruth 1,517 506 136 714 2,873 Jackie Robinson 1,054 273 54 137 1,518 Ty Cobb 3,603 174 295 114 4,189 Hank Aaron 2,294 624 98 755 3,771 TOTAL 8,471 1,577 583 1,720 12,351 Q 3.5.10 Find $P(\text{hit was made by Babe Ruth})$. 1. $\frac{1518}{2873}$ 2. $\frac{2873}{12351}$ 3. $\frac{583}{12351}$ 4. $\frac{4189}{12351}$ Q 3.5.11 Find $P(\text{hit was made by Ty Cobb|The hit was a Home Run})$. 1. $\frac{4189}{12351}$ 2. $\frac{114}{1720}$ 3. $\frac{1720}{4189}$ 4. $\frac{114}{12351}$ b Q 3.5.12 Table identifies a group of children by one of four hair colors, and by type of hair. Hair Type Brown Blond Black Red Totals Wavy 20   15 3 43 Straight 80 15   12 Totals   20     215 1. Complete the table. 2. What is the probability that a randomly selected child will have wavy hair? 3. What is the probability that a randomly selected child will have either brown or blond hair? 4. What is the probability that a randomly selected child will have wavy brown hair? 5. What is the probability that a randomly selected child will have red hair, given that he or she has straight hair? 6. If $\text{B}$ is the event of a child having brown hair, find the probability of the complement of $\text{B}$. 7. In words, what does the complement of $\text{B}$ represent? 
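For the baseball table used in Q 3.5.10 and Q 3.5.11, a conditional probability such as $P(\text{hit by Ty Cobb | home run})$ is simply the Ty Cobb entry of the Home Run column divided by that column's total. The fragment below (a sketch, not part of the original exercises) reproduces the printed answer choice b; the dictionary of home-run counts restates the table.

```python
# A brief sketch (not part of the text) of the conditional-probability arithmetic
# behind Q 3.5.11: restrict to the "Home Run" column of the hits table and divide.
from fractions import Fraction

home_runs = {"Babe Ruth": 714, "Jackie Robinson": 137, "Ty Cobb": 114, "Hank Aaron": 755}
total_home_runs = sum(home_runs.values())        # 1,720

p_cobb_given_hr = Fraction(home_runs["Ty Cobb"], total_home_runs)
print(p_cobb_given_hr)    # 57/860, i.e. 114/1720, matching answer choice b
```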
Q 3.5.13 In a previous year, the weights of the members of the San Francisco 49ers and the Dallas Cowboys were published in the San Jose Mercury News. The factual data were compiled into the following table. Shirt# ≤ 210 211–250 251–290 > 290 1–33 21 5 0 0 34–66 6 18 7 4 66–99 6 12 22 5 For the following, suppose that you randomly select one player from the 49ers or Cowboys. 1. Find the probability that his shirt number is from 1 to 33. 2. Find the probability that he weighs at most 210 pounds. 3. Find the probability that his shirt number is from 1 to 33 AND he weighs at most 210 pounds. 4. Find the probability that his shirt number is from 1 to 33 OR he weighs at most 210 pounds. 5. Find the probability that his shirt number is from 1 to 33 GIVEN that he weighs at most 210 pounds. S 3.5.13 1. $\frac{26}{106}$ 2. $\frac{33}{106}$ 3. $\frac{21}{106}$ 4. $\left(\frac{26}{106}\right) + \left(\frac{33}{106}\right) - \left(\frac{21}{106}\right) = \left(\frac{38}{106}\right)$ 5. $\frac{21}{33}$ 3.6: Tree and Venn Diagrams Exercise 3.6.8 The probability that a man develops some form of cancer in his lifetime is 0.4567. The probability that a man has at least one false positive test result (meaning the test comes back for cancer when the man does not have it) is 0.51. Let: $\text{C} =$ a man develops cancer in his lifetime; $\text{P} =$ man has at least one false positive. Construct a tree diagram of the situation. Answer Bring It Together Use the following information to answer the next two exercises. Suppose that you have eight cards. Five are green and three are yellow. The cards are well shuffled. Exercise 3.6.9 Suppose that you randomly draw two cards, one at a time, with replacement. Let $\text{G}_{1} =$ first card is green Let $\text{G}_{2} =$ second card is green 1. Draw a tree diagram of the situation. 2. Find $P(\text{G}_{1} \text{AND} \text{G}_{2})$. 3. Find $P(\text{at least one green})$. 4. Find $P(\text{G}_{2}|\text{G}_{1})$. 5. Are $\text{G}_{1}$ and $\text{G}_{2}$ independent events? Explain why or why not. Answer 1. $P(\text{GG}) = \left(\frac{5}{8}\right)\left(\frac{5}{8}\right) = \frac{25}{64}$ 2. $P(\text{at least one green}) = P(\text{GG}) + P(\text{GY}) + P(\text{YG}) = \frac{25}{64} + \frac{15}{64} + \frac{15}{64} = \frac{55}{64}$ 3. $P(\text{G|G}) = \frac{5}{8}$ 4. Yes, they are independent because the first card is placed back in the bag before the second card is drawn; the composition of cards in the bag remains the same from draw one to draw two. Exercise 3.6.10 Suppose that you randomly draw two cards, one at a time, without replacement. $\text{G}_{1} =$ first card is green $\text{G}_{2} =$ second card is green 1. Draw a tree diagram of the situation. 2. Find $P(\text{G}_{1} \text{AND G}_{2})$. 3. Find $P(\text{at least one green})$. 4. Find $P(\text{G}_{2}|\text{G}_{1})$. 5. Are $\text{G}_{2}$ and $\text{G}_{1}$ independent events? Explain why or why not. Use the following information to answer the next two exercises. The percent of licensed U.S. drivers (from a recent year) that are female is 48.60. Of the females, 5.03% are age 19 and under; 81.36% are age 20–64; 13.61% are age 65 or over. Of the licensed U.S. male drivers, 5.04% are age 19 and under; 81.43% are age 20–64; 13.53% are age 65 or over. Exercise 3.6.11 Complete the following. 1. Construct a table or a tree diagram of the situation. 2. Find $P(\text{driver is female})$. 3. Find $P(\text{driver is age 65 or over|driver is female})$. 4. Find $P(\text{driver is age 65 or over AND female})$. 5. 
In words, explain the difference between the probabilities in part c and part d. 6. Find $P(\text{driver is age 65 or over})$. 7. Are being age 65 or over and being female mutually exclusive events? How do you know? Answer 1.   <20 20–64 >64 Totals Female 0.0244 0.3954 0.0661 0.486 Male 0.0259 0.4186 0.0695 0.514 Totals 0.0503 0.8140 0.1356 1 2. $P(\text{F}) = 0.486$ 3. $P(\text{>64|F}) = 0.1361$ 4. $P(\text{>64 and F}) = P(\text{F}) P(\text{>64|F}) = (0.486)(0.1361) = 0.0661$ 5. $P(\text{>64|F})$ is the percentage of female drivers who are 65 or older and $P(\text{>64 and F})$ is the percentage of drivers who are female and 65 or older. 6. $P(\text{>64}) = P(\text{>64 and F}) + P(\text{>64 and M}) = 0.1356$ 7. No, being female and 65 or older are not mutually exclusive because they can occur at the same time $P(\text{>64 and F}) = 0.0661$. Exercise 3.6.12 Suppose that 10,000 U.S. licensed drivers are randomly selected. 1. How many would you expect to be male? 2. Using the table or tree diagram, construct a contingency table of gender versus age group. 3. Using the contingency table, find the probability that out of the age 20–64 group, a randomly selected driver is female. Exercise 3.6.13 Approximately 86.5% of Americans commute to work by car, truck, or van. Out of that group, 84.6% drive alone and 15.4% drive in a carpool. Approximately 3.9% walk to work and approximately 5.3% take public transportation. 1. Construct a table or a tree diagram of the situation. Include a branch for all other modes of transportation to work. 2. Assuming that the walkers walk alone, what percent of all commuters travel alone to work? 3. Suppose that 1,000 workers are randomly selected. How many would you expect to travel alone to work? 4. Suppose that 1,000 workers are randomly selected. How many would you expect to drive in a carpool? Answer 1.   Car, Truck or Van Walk Public Transportation Other Totals Alone 0.7318 Not Alone 0.1332 Totals 0.8650 0.0390 0.0530 0.0430 1 2. If we assume that all walkers are alone and that none from the other two groups travel alone (which is a big assumption) we have: $P(\text{Alone}) = 0.7318 + 0.0390 = 0.7708$. 3. Make the same assumptions as in (b) we have: (0.7708)(1,000) = 771 4. (0.1332)(1,000) = 133 Exercise 3.6.14 When the Euro coin was introduced in 2002, two math professors had their statistics students test whether the Belgian one Euro coin was a fair coin. They spun the coin rather than tossing it and found that out of 250 spins, 140 showed a head (event $\text{H}$) while 110 showed a tail (event $\text{T}$). On that basis, they claimed that it is not a fair coin. 1. Based on the given data, find $P(\text{H})$ and $P(\text{T})$. 2. Use a tree to find the probabilities of each possible outcome for the experiment of tossing the coin twice. 3. Use the tree to find the probability of obtaining exactly one head in two tosses of the coin. 4. Use the tree to find the probability of obtaining at least one head. Exercise 3.6.15 Use the following information to answer the next two exercises. The following are real data from Santa Clara County, CA. As of a certain time, there had been a total of 3,059 documented cases of AIDS in the county. They were grouped into the following categories: * includes homosexual/bisexual IV drug users Homosexual/Bisexual IV Drug User* Heterosexual Contact Other Totals Female 0 70 136 49 ____ Male 2,146 463 60 135 ____ Totals ____ ____ ____ ____ ____ Suppose a person with AIDS in Santa Clara County is randomly selected. 1. 
Find $P(\text{Person is female})$. 2. Find $P(\text{Person has a risk factor of heterosexual contact})$. 3. Find $P(\text{Person is female OR has a risk factor of IV drug user})$. 4. Find $P(\text{Person is female AND has a risk factor of homosexual/bisexual})$. 5. Find $P(\text{Person is male AND has a risk factor of IV drug user})$. 6. Find $P(\text{Person is female GIVEN person got the disease from heterosexual contact})$. 7. Construct a Venn diagram. Make one group females and the other group heterosexual contact.
Answer
The completed contingency table is as follows:
* includes homosexual/bisexual IV drug users
Homosexual/Bisexual IV Drug User* Heterosexual Contact Other Totals
Female 0 70 136 49 255
Male 2,146 463 60 135 2,804
Totals 2,146 533 196 184 3,059
1. $\frac{255}{3059}$ 2. $\frac{196}{3059}$ 3. $\frac{718}{3059}$ 4. 0 5. $\frac{463}{3059}$ 6. $\frac{136}{196}$
Exercise 3.6.16 Answer these questions using probability rules. Do NOT use the contingency table. Three thousand fifty-nine cases of AIDS had been reported in Santa Clara County, CA, through a certain date. Those cases will be our population. Of those cases, 6.4% obtained the disease through heterosexual contact and 7.4% are female. Out of the females with the disease, 53.3% got the disease from heterosexual contact. 1. Find $P(\text{Person is female})$. 2. Find $P(\text{Person obtained the disease through heterosexual contact})$. 3. Find $P(\text{Person is female GIVEN person got the disease from heterosexual contact})$. 4. Construct a Venn diagram representing this situation. Make one group females and the other group heterosexual contact. Fill in all values as probabilities.
Use the following information to answer the next two exercises. This tree diagram shows the tossing of an unfair coin followed by drawing one bead from a cup containing three red $(\text{R})$, four yellow ($\text{Y}$) and five blue ($\text{B}$) beads. For the coin, $P(\text{H}) = \frac{2}{3}$ and $P(\text{T}) = \frac{1}{3}$ where $\text{H}$ is heads and $\text{T}$ is tails.
Q 3.6.1 Find $P(\text{tossing a Head on the coin AND a Red bead})$. 1. $\frac{2}{3}$ 2. $\frac{5}{15}$ 3. $\frac{6}{36}$ 4. $\frac{5}{36}$
Q 3.6.2 Find $P(\text{Blue bead})$. 1. $\frac{15}{36}$ 2. $\frac{10}{36}$ 3. $\frac{10}{12}$ 4. $\frac{6}{36}$ a
Q 3.6.3 A box of cookies contains three chocolate and seven butter cookies. Miguel randomly selects a cookie and eats it. Then he randomly selects another cookie and eats it. (How many cookies did he take?) 1. Draw the tree that represents the possibilities for the cookie selections. Write the probabilities along each branch of the tree. 2. Are the probabilities for the flavor of the SECOND cookie that Miguel selects independent of his first selection? Explain. 3. For each complete path through the tree, write the event it represents and find the probabilities. 4. Let $\text{S}$ be the event that both cookies selected were the same flavor. Find $P(\text{S})$. 5. Let $\text{T}$ be the event that the cookies selected were different flavors. Find $P(\text{T})$ by two different methods: by using the complement rule and by using the branches of the tree. Your answers should be the same with both methods. 6. Let $\text{U}$ be the event that the second cookie selected is a butter cookie. Find $P(\text{U})$.
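The tree-diagram answers in Exercise 3.6.9 can be double-checked by brute force: with replacement, the two draws form an $8 \times 8$ grid of equally likely ordered pairs. The Python sketch below (not part of the text) enumerates that grid and recovers $P(\text{GG}) = \frac{25}{64}$ and $P(\text{at least one green}) = \frac{55}{64}$.

```python
# A brief sketch (not from the text) that checks the with-replacement tree in
# Exercise 3.6.9 by enumerating the 8 x 8 equally likely ordered draws.
from fractions import Fraction
from itertools import product

cards = ["G"] * 5 + ["Y"] * 3              # five green, three yellow
outcomes = list(product(cards, repeat=2))  # draw twice, with replacement

p_gg = Fraction(sum(1 for a, b in outcomes if a == "G" and b == "G"), len(outcomes))
p_at_least_one_g = Fraction(sum(1 for a, b in outcomes if "G" in (a, b)), len(outcomes))

print(p_gg, p_at_least_one_g)              # 25/64 and 55/64, matching the printed answers
```

For the without-replacement version in Exercise 3.6.10, the same idea applies with `itertools.permutations` over the eight physical cards instead of `product`.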
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. 4.2: Probability Distribution Function (PDF) for a Discrete Random Variable Q 4.2.1 Suppose that the PDF for the number of years it takes to earn a Bachelor of Science (B.S.) degree is given in Table. $x$ $P(x)$ 3 0.05 4 0.40 5 0.30 6 0.15 7 0.10 1. In words, define the random variable $X$. 2. What does it mean that the values zero, one, and two are not included for $x$ in the PDF? Exercise 4.3.5 Complete the expected value table. $x$ $P(x)$ $x*P(x)$ 0 0.2 1 0.2 2 0.4 3 0.2 Exercise 4.3.6 Find the expected value from the expected value table. $x$ $P(x)$ $x*P(x)$ 2 0.1 4 0.3 4(0.3) = 1.2 6 0.4 6(0.4) = 2.4 8 0.2 8(0.2) = 1.6 Answer $0.2 + 1.2 + 2.4 + 1.6 = 5.4$ Exercise 4.3.7 Find the standard deviation. $x$ $P(x)$ $x*P(x)$ $(x – \mu)^{2}P(x)$ 2 0.1 2(0.1) = 0.2 (2–5.4)2(0.1) = 1.156 4 0.3 4(0.3) = 1.2 (4–5.4)2(0.3) = 0.588 6 0.4 6(0.4) = 2.4 (6–5.4)2(0.4) = 0.144 8 0.2 8(0.2) = 1.6 (8–5.4)2(0.2) = 1.352 Exercise 4.3.8 Identify the mistake in the probability distribution table. $x$ $P(x)$ $x*P(x)$ 1 0.15 0.15 2 0.25 0.50 3 0.30 0.90 4 0.20 0.80 5 0.15 0.75 Answer The values of $P(x)$ do not sum to one. Exercise 4.3.9 Identify the mistake in the probability distribution table. $x$ $P(x)$ $x*P(x)$ 1 0.15 0.15 2 0.25 0.40 3 0.25 0.65 4 0.20 0.85 5 0.15 1 Use the following information to answer the next five exercises: A physics professor wants to know what percent of physics majors will spend the next several years doing post-graduate research. He has the following probability distribution. $x$ $P(x)$ $x*P(x)$ 1 0.35 2 0.20 3 0.15 4 5 0.10 6 0.05 Exercise 4.3.10 Define the random variable $X$. Answer Let $X =$ the number of years a physics major will spend doing post-graduate research. Exercise 4.3.11 Define $P(x)$, or the probability of $x$. Exercise 4.3.12 Find the probability that a physics major will do post-graduate research for four years. $P(x = 4) =$ _______ Answer $1 – 0.35 – 0.20 – 0.15 – 0.10 – 0.05 = 0.15$ Exercise 4.3.13 Find the probability that a physics major will do post-graduate research for at most three years. $P(x \leq 3) =$ _______ Exercise 4.3.14 On average, how many years would you expect a physics major to spend doing post-graduate research? Answer $1(0.35) + 2(0.20) + 3(0.15) + 4(0.15) + 5(0.10) + 6(0.05) = 0.35 + 0.40 + 0.45 + 0.60 + 0.50 + 0.30 = 2.6$ years Use the following information to answer the next seven exercises: A ballet instructor is interested in knowing what percent of each year's class will continue on to the next, so that she can plan what classes to offer. Over the years, she has established the following probability distribution. • Let $X =$ the number of years a student will study ballet with the teacher. • Let $P(x) =$ the probability that a student will study ballet $x$ years. Exercise 4.3.15 Complete Table using the data provided. $x$ $P(x)$ $x*P(x)$ 1 0.10 2 0.05 3 0.10 4 5 0.30 6 0.20 7 0.10 Exercise 4.3.16 In words, define the random variable $X$. Answer $X$ is the number of years a student studies ballet with the teacher. Exercise 4.3.17 $P(x = 4) =$ _______ Exercise 4.3.18 $P(x < 4) =$ _______ Answer $0.10 + 0.05 + 0.10 = 0.25$ Exercise 4.3.19 On average, how many years would you expect a child to study ballet with this teacher? Exercise 4.3.20 What does the column "$P(x)$" sum to and why? Answer The sum of the probabilities sum to one because it is a probability distribution. 
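The tables in Exercises 4.3.5 through 4.3.8 all follow the same recipe: $\mu = \sum x P(x)$ and $\sigma = \sqrt{\sum (x - \mu)^{2} P(x)}$. The short sketch below (an illustration, not part of the exercises) runs that recipe on the distribution used in Exercises 4.3.6 and 4.3.7, reproducing the mean of 5.4 and the $(x - \mu)^{2}P(x)$ column shown there; the names `mu`, `var`, and `sigma` are just illustrative.

```python
# A minimal sketch (not part of the exercises) of the expected-value table used
# in Exercises 4.3.6 and 4.3.7: mu = sum x*P(x), sigma^2 = sum (x - mu)^2 * P(x).
import math

x = [2, 4, 6, 8]
p = [0.1, 0.3, 0.4, 0.2]

mu = sum(xi * pi for xi, pi in zip(x, p))                # 5.4
var = sum((xi - mu) ** 2 * pi for xi, pi in zip(x, p))   # 1.156 + 0.588 + 0.144 + 1.352 = 3.24
sigma = math.sqrt(var)                                   # 1.8

print(round(mu, 2), round(var, 3), round(sigma, 2))
```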
Exercise 4.3.21 What does the column "$x*P(x)$" sum to and why? Exercise 4.3.22 You are playing a game by drawing a card from a standard deck and replacing it. If the card is a face card, you win $30. If it is not a face card, you pay$2. There are 12 face cards in a deck of 52 cards. What is the expected value of playing the game? Answer $−2\left(\dfrac{40}{52}\right)+30\left(\dfrac{12}{52}\right) = −1.54 + 6.92 = 5.38$ Exercise 4.3.23 You are playing a game by drawing a card from a standard deck and replacing it. If the card is a face card, you win $30. If it is not a face card, you pay$2. There are 12 face cards in a deck of 52 cards. Should you play the game? 4.3: Mean or Expected Value and Standard Deviation Q 4.3.1 A theater group holds a fund-raiser. It sells 100 raffle tickets for $5 apiece. Suppose you purchase four tickets. The prize is two passes to a Broadway show, worth a total of$150. 1. What are you interested in here? 2. In words, define the random variable $X$. 3. List the values that $X$ may take on. 4. Construct a PDF. 5. If this fund-raiser is repeated often and you always purchase four tickets, what would be your expected average winnings per raffle? Q 4.3.2 A game involves selecting a card from a regular 52-card deck and tossing a coin. The coin is a fair coin and is equally likely to land on heads or tails. • If the card is a face card, and the coin lands on Heads, you win $6 • If the card is a face card, and the coin lands on Tails, you win$2 • If the card is not a face card, you lose $2, no matter what the coin shows. 1. Find the expected value for this game (expected net gain or loss). 2. Explain what your calculations indicate about your long-term average profits and losses on this game. 3. Should you play this game to win money? S 4.3.2 The variable of interest is $X$, or the gain or loss, in dollars. The face cards jack, queen, and king. There are $(3)(4) = 12$ face cards and $52 – 12 = 40$ cards that are not face cards. We first need to construct the probability distribution for $X$. We use the card and coin events to determine the probability for each outcome, but we use the monetary value of $X$ to determine the expected value. Card Event $X$ net gain/loss $P(X)$ Face Card and Heads 6 $\left(\frac{12}{52}\right) \left(\frac{1}{2}\right) = \left(\frac{6}{52}\right)$ Face Card and Tails 2 $\left(\frac{12}{52}\right) \left(\frac{1}{2}\right) = \left(\frac{6}{52}\right)$ (Not Face Card) and (H or T) –2 $\left(\frac{40}{52}\right) (1) = \left(\frac{40}{52}\right)$ • $\text{Expected value} = (6)\left(\frac{6}{52}\right) + (2)\left(\frac{6}{52}\right) + (-2)\left(\frac{40}{52}\right) = -\frac{32}{52}$ • $\text{Expected value} = –0.62$, rounded to the nearest cent • If you play this game repeatedly, over a long string of games, you would expect to lose 62 cents per game, on average. • You should not play this game to win money because the expected value indicates an expected average loss. Q 4.3.3 You buy a lottery ticket to a lottery that costs$10 per ticket. There are only 100 tickets available to be sold in this lottery. In this lottery there are one $500 prize, two$100 prizes, and four $25 prizes. Find your expected gain or loss. Q 4.3.4 Complete the PDF and answer the questions. $x$ $P(x)$ $xP(x)$ 0 0.3 1 0.2 2 3 0.4 1. Find the probability that $x = 2$. 2. Find the expected value. S 4.3.4 1. 0.1 2. 1.6 Q 4.3.5 Suppose that you are offered the following “deal.” You roll a die. If you roll a six, you win$10. If you roll a four or five, you win $5. 
If you roll a one, two, or three, you pay $6. 1. What are you ultimately interested in here (the value of the roll or the money you win)? 2. In words, define the Random Variable $X$. 3. List the values that $X$ may take on. 4. Construct a PDF. 5. Over the long run of playing this game, what are your expected average winnings per game? 6. Based on numerical values, should you take the deal? Explain your decision in complete sentences.
Q 4.3.6 A venture capitalist, willing to invest $1,000,000, has three investments to choose from. The first investment, a software company, has a 10% chance of returning $5,000,000 profit, a 30% chance of returning $1,000,000 profit, and a 60% chance of losing the million dollars. The second company, a hardware company, has a 20% chance of returning $3,000,000 profit, a 40% chance of returning $1,000,000 profit, and a 40% chance of losing the million dollars. The third company, a biotech firm, has a 10% chance of returning $6,000,000 profit, a 70% chance of no profit or loss, and a 20% chance of losing the million dollars. 1. Construct a PDF for each investment. 2. Find the expected value for each investment. 3. Which is the safest investment? Why do you think so? 4. Which is the riskiest investment? Why do you think so? 5. Which investment has the highest expected return, on average?
S 4.3.6 1.
Software Company
$x$ $P(x)$
5,000,000 0.10
1,000,000 0.30
–1,000,000 0.60
Hardware Company
$x$ $P(x)$
3,000,000 0.20
1,000,000 0.40
–1,000,000 0.40
Biotech Firm
$x$ $P(x)$
6,000,000 0.10
0 0.70
–1,000,000 0.20
2. $200,000; $600,000; $400,000 3. third investment because it has the lowest probability of loss 4. first investment because it has the highest probability of loss 5. second investment
Q 4.3.7 Suppose that 20,000 married adults in the United States were randomly surveyed as to the number of children they have. The results are compiled and are used as theoretical probabilities. Let $X =$ the number of children married people have.
$x$ $P(x)$ $xP(x)$
0 0.10
1 0.20
2 0.30
3
4 0.10
5 0.05
6 (or more) 0.05
1. Find the probability that a married adult has three children. 2. In words, what does the expected value in this example represent? 3. Find the expected value. 4. Is it more likely that a married adult will have two to three children or four to six children? How do you know?
Q 4.3.8 Suppose that the PDF for the number of years it takes to earn a Bachelor of Science (B.S.) degree is given as in the Table.
$x$ $P(x)$
3 0.05
4 0.40
5 0.30
6 0.15
7 0.10
On average, how many years do you expect it to take for an individual to earn a B.S.?
S 4.3.8 4.85 years
Q 4.3.9 People visiting video rental stores often rent more than one DVD at a time. The probability distribution for DVD rentals per customer at Video To Go is given in the following table. There is a five-video limit per customer at this store, so nobody ever rents more than five DVDs.
$x$ $P(x)$
0 0.03
1 0.50
2 0.24
3
4 0.07
5 0.04
1. Describe the random variable $X$ in words. 2. Find the probability that a customer rents three DVDs. 3. Find the probability that a customer rents at least four DVDs. 4. Find the probability that a customer rents at most two DVDs. Another shop, Entertainment Headquarters, rents DVDs and video games. The probability distribution for DVD rentals per customer at this shop is given as follows. They also have a five-DVD limit per customer.
$x$ $P(x)$
0 0.35
1 0.25
2 0.20
3 0.10
4 0.05
5 0.05
5. At which store is the expected number of DVDs rented per customer higher? 6.
If Video to Go estimates that they will have 300 customers next week, how many DVDs do they expect to rent next week? Answer in sentence form. 7. If Video to Go expects 300 customers next week, and Entertainment HQ projects that they will have 420 customers, for which store is the expected number of DVD rentals for next week higher? Explain. 8. Which of the two video stores experiences more variation in the number of DVD rentals per customer? How do you know that? Q 4.3.10 A “friend” offers you the following “deal.” For a$10 fee, you may pick an envelope from a box containing 100 seemingly identical envelopes. However, each envelope contains a coupon for a free gift. • Ten of the coupons are for a free gift worth $6. • Eighty of the coupons are for a free gift worth$8. • Six of the coupons are for a free gift worth $12. • Four of the coupons are for a free gift worth$40. Based upon the financial gain or loss over the long run, should you play the game? 1. Yes, I expect to come out ahead in money. 2. No, I expect to come out behind in money. 3. It doesn’t matter. I expect to break even. b Q 4.3.11 Florida State University has 14 statistics classes scheduled for its Summer 2013 term. One class has space available for 30 students, eight classes have space for 60 students, one class has space for 70 students, and four classes have space for 100 students. 1. What is the average class size assuming each class is filled to capacity? 2. Space is available for 980 students. Suppose that each class is filled to capacity and select a statistics student at random. Let the random variable $X$ equal the size of the student’s class. Define the PDF for $X$. 3. Find the mean of $X$. 4. Find the standard deviation of $X$. In a lottery, there are 250 prizes of $5, 50 prizes of$25, and ten prizes of $100. Assuming that 10,000 tickets are to be issued and sold, what is a fair price to charge to break even? S 4.3.12 Let $X =$ the amount of money to be won on a ticket. The following table shows the PDF for $X$. $x$ $P(x)$ 0 0.969 5 $\frac{250}{10,000} = 0.025$ 25 $\frac{50}{10,000} = 0.005$ 100 $\frac{10}{10,000} = 0.001$ Calculate the expected value of $X$. $0(0.969) + 5(0.025) + 25(0.005) + 100(0.001) = 0.35$ A fair price for a ticket is$0.35. Any price over $0.35 will enable the lottery to raise money. 4.4: Binomial Distribution Q 4.4.1 According to a recent article the average number of babies born with significant hearing loss (deafness) is approximately two per 1,000 babies in a healthy baby nursery. The number climbs to an average of 30 per 1,000 babies in an intensive care nursery. Suppose that 1,000 babies from healthy baby nurseries were randomly surveyed. Find the probability that exactly two babies were born deaf. Use the following information to answer the next four exercises. Recently, a nurse commented that when a patient calls the medical advice line claiming to have the flu, the chance that he or she truly has the flu (and not just a nasty cold) is only about 4%. Of the next 25 patients calling in claiming to have the flu, we are interested in how many actually have the flu. Q 4.4.2 Define the random variable and list its possible values. S 4.4.2 $X =$ the number of patients calling in claiming to have the flu, who actually have the flu. $X = 0, 1, 2, ...25$ Q 4.4.3 State the distribution of $X$. Q 4.4.4 Find the probability that at least four of the 25 patients actually have the flu. S 4.4.4 0.0165 Q 4.4.5 On average, for every 25 patients calling in, how many do you expect to have the flu? 
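The flu-line answers S 4.4.2 and S 4.4.4 come from a binomial model with $n = 25$ and $p = 0.04$. The sketch below (not part of the text) reproduces them in Python; `scipy.stats.binom` is used here as a stand-in for the TI-83/84 `binomcdf` command referenced in later solutions.

```python
# A short sketch (not from the text) reproducing S 4.4.2 and S 4.4.4:
# X ~ B(25, 0.04) is the number of callers who actually have the flu.
from scipy.stats import binom

n, p = 25, 0.04
p_at_least_4 = 1 - binom.cdf(3, n, p)   # P(X >= 4) = 1 - P(X <= 3), about 0.0165
expected = n * p                        # 1 patient per 25 callers, on average

print(round(p_at_least_4, 4), expected)
```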
Q 4.4.6 People visiting video rental stores often rent more than one DVD at a time. The probability distribution for DVD rentals per customer at Video To Go is given Table. There is five-video limit per customer at this store, so nobody ever rents more than five DVDs. $x$ $P(x)$ 0 0.03 1 0.50 2 0.24 3 4 0.07 5 0.04 1. Describe the random variable $X$ in words. 2. Find the probability that a customer rents three DVDs. 3. Find the probability that a customer rents at least four DVDs. 4. Find the probability that a customer rents at most two DVDs. S 4.4.6 1. $X =$ the number of DVDs a Video to Go customer rents 2. 0.12 3. 0.11 4. 0.77 Q 4.4.7 A school newspaper reporter decides to randomly survey 12 students to see if they will attend Tet (Vietnamese New Year) festivities this year. Based on past years, she knows that 18% of students attend Tet festivities. We are interested in the number of students who will attend the festivities. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many of the 12 students do we expect to attend the festivities? 5. Find the probability that at most four students will attend. 6. Find the probability that more than two students will attend. Use the following information to answer the next three exercises: The probability that the San Jose Sharks will win any given game is 0.3694 based on a 13-year win history of 382 wins out of 1,034 games played (as of a certain date). An upcoming monthly schedule contains 12 games. Q 4.4.8 The expected number of wins for that upcoming month is: 1. 1.67 2. 12 3. $\frac{382}{1043}$ 4. 4.43 S 4.4.8 d. 4.43 Let $X =$ the number of games won in that upcoming month. Q 4.4.9 What is the probability that the San Jose Sharks win six games in that upcoming month? 1. 0.1476 2. 0.2336 3. 0.7664 4. 0.8903 Q 4.4.10 What is the probability that the San Jose Sharks win at least five games in that upcoming month? 1. 0.3694 2. 0.5266 3. 0.4734 4. 0.2305 S 4.4.10 c Q 4.4.11 A student takes a ten-question true-false quiz, but did not study and randomly guesses each answer. Find the probability that the student passes the quiz with a grade of at least 70% of the questions correct. Q 4.4.12 A student takes a 32-question multiple-choice exam, but did not study and randomly guesses each answer. Each question has three possible choices for the answer. Find the probability that the student guesses more than 75% of the questions correctly. S 4.4.13 • $X =$ number of questions answered correctly • $X \sim B(32, \frac{1}{3})$ • We are interested in MORE THAN 75% of 32 questions correct. 75% of 32 is 24. We want to find $P(x > 24)$. The event "more than 24" is the complement of "less than or equal to 24." • Using your calculator's distribution menu: $1 – \text{binomcdf}(32, \frac{1}{3}, 24)$ • $P(x > 24) = 0$ • The probability of getting more than 75% of the 32 questions correct when randomly guessing is very small and practically zero. Q 4.4.14 Six different colored dice are rolled. Of interest is the number of dice that show a one. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. On average, how many dice would you expect to show a one? 5. Find the probability that all six dice show a one. 6. Is it more likely that three or that four dice will show a one? Use numbers to justify your answer numerically. 
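For the San Jose Sharks exercises, $X \sim B(12, 0.3694)$, so the expected number of wins is $np$ and "at least five wins" is one minus a cumulative probability, the same `1 – binomcdf(...)` pattern shown in S 4.4.13. The sketch below (an illustration, not part of the original) checks the printed values 4.43 (S 4.4.8) and 0.4734 (S 4.4.10) with `scipy.stats.binom`, which here substitutes for the TI-83/84.

```python
# A hedged sketch (not part of the original exercises): X ~ B(12, 0.3694) wins
# in a 12-game month, checking the printed answers 4.43 and 0.4734.
from scipy.stats import binom

n, p = 12, 0.3694
mean_wins = n * p                         # 4.43 expected wins
p_at_least_5 = 1 - binom.cdf(4, n, p)     # P(X >= 5), about 0.4734

print(round(mean_wins, 2), round(p_at_least_5, 4))
```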
Q 4.4.15 More than 96 percent of the very largest colleges and universities (more than 15,000 total enrollments) have some online offerings. Suppose you randomly pick 13 such institutions. We are interested in the number that offer distance learning courses. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. On average, how many schools would you expect to offer such courses? 5. Find the probability that at most ten offer such courses. 6. Is it more likely that 12 or that 13 will offer such courses? Use numbers to justify your answer numerically and answer in a complete sentence. S 4.4.15 1. $X =$ the number of college and universities that offer online offerings. 2. 0, 1, 2, …, 13 3. $X \sim B(13, 0.96)$ 4. 12.48 5. 0.0135 6. $P(x = 12) = 0.3186 P(x = 13) = 0.5882$ More likely to get 13. Q 4.4.16 Suppose that about 85% of graduating students attend their graduation. A group of 22 graduating students is randomly chosen. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many are expected to attend their graduation? 5. Find the probability that 17 or 18 attend. 6. Based on numerical values, would you be surprised if all 22 attended graduation? Justify your answer numerically. Q 4.4.17 At The Fencing Center, 60% of the fencers use the foil as their main weapon. We randomly survey 25 fencers at The Fencing Center. We are interested in the number of fencers who do not use the foil as their main weapon. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many are expected to not to use the foil as their main weapon? 5. Find the probability that six do not use the foil as their main weapon. 6. Based on numerical values, would you be surprised if all 25 did not use foil as their main weapon? Justify your answer numerically. S 4.4.17 1. $X =$ the number of fencers who do not use the foil as their main weapon 2. 0, 1, 2, 3,... 25 3. $X \sim B(25,0.40)$ 4. 10 5. 0.0442 6. The probability that all 25 not use the foil is almost zero. Therefore, it would be very surprising. Q 4.4.18 Approximately 8% of students at a local high school participate in after-school sports all four years of high school. A group of 60 seniors is randomly chosen. Of interest is the number who participated in after-school sports all four years of high school. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many seniors are expected to have participated in after-school sports all four years of high school? 5. Based on numerical values, would you be surprised if none of the seniors participated in after-school sports all four years of high school? Justify your answer numerically. 6. Based upon numerical values, is it more likely that four or that five of the seniors participated in after-school sports all four years of high school? Justify your answer numerically. Q 4.4.19 The chance of an IRS audit for a tax return with over$25,000 in income is about 2% per year. We are interested in the expected number of audits a person with that income has in a 20-year period. Assume each year is independent. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. 
$X \sim$ _____(_____,_____) 4. How many audits are expected in a 20-year period? 5. Find the probability that a person is not audited at all. 6. Find the probability that a person is audited more than twice. S 4.4.19 1. $X =$ the number of audits in a 20-year period 2. 0, 1, 2, …, 20 3. $X \sim B(20, 0.02)$ 4. 0.4 5. 0.6676 6. 0.0071 Q 4.4.20 It has been estimated that only about 30% of California residents have adequate earthquake supplies. Suppose you randomly survey 11 California residents. We are interested in the number who have adequate earthquake supplies. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. What is the probability that at least eight have adequate earthquake supplies? 5. Is it more likely that none or that all of the residents surveyed will have adequate earthquake supplies? Why? 6. How many residents do you expect will have adequate earthquake supplies? Q 4.4.21 There are two similar games played for Chinese New Year and Vietnamese New Year. In the Chinese version, fair dice with numbers 1, 2, 3, 4, 5, and 6 are used, along with a board with those numbers. In the Vietnamese version, fair dice with pictures of a gourd, fish, rooster, crab, crayfish, and deer are used. The board has those six objects on it, also. We will play with bets being $1. The player places a bet on a number or object. The “house” rolls three dice. If none of the dice show the number or object that was bet, the house keeps the $1 bet. If one of the dice shows the number or object bet (and the other two do not show it), the player gets back his or her $1 bet, plus $1 profit. If two of the dice show the number or object bet (and the third die does not show it), the player gets back his or her $1 bet, plus $2 profit. If all three dice show the number or object bet, the player gets back his or her $1 bet, plus $3 profit. Let $X =$ number of matches and $Y =$ profit per game. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. List the values that $Y$ may take on. Then, construct one PDF table that includes both $X$ and $Y$ and their probabilities. 5. Calculate the average expected matches over the long run of playing this game for the player. 6. Calculate the average expected earnings over the long run of playing this game for the player. 7. Determine who has the advantage, the player or the house. S 4.4.21 1. $X =$ the number of matches 2. 0, 1, 2, 3 3. $X \sim B\left(3, \frac{1}{6}\right)$ 4. In dollars: −1, 1, 2, 3 5. $\frac{1}{2}$ 6. Multiply each $Y$ value by the corresponding $X$ probability from the PDF table. The answer is −0.0787. You lose about eight cents, on average, per game. 7. The house has the advantage. Q 4.4.22 According to The World Bank, only 9% of the population of Uganda had access to electricity as of 2009. Suppose we randomly sample 150 people in Uganda. Let $X =$ the number of people who have access to electricity. 1. What is the probability distribution for $X$? 2. Using the formulas, calculate the mean and standard deviation of $X$. 3. Use your calculator to find the probability that 15 people in the sample have access to electricity. 4. Find the probability that at most ten people in the sample have access to electricity. 5. Find the probability that more than 25 people in the sample have access to electricity.
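Q 4.4.22 asks for the mean and standard deviation "using the formulas" ($\mu = np$, $\sigma = \sqrt{npq}$) and for several binomial probabilities. As a hedged illustration only (the exercise itself expects the formulas and a calculator), the sketch below shows how such quantities could be computed; scipy and Python are assumptions, not part of the text.

```python
# Illustrative sketch for a binomial setup like Q 4.4.22: n = 150, p = 0.09.
from math import sqrt
from scipy.stats import binom

n, p = 150, 0.09
mu = n * p                      # mean: mu = np = 13.5
sigma = sqrt(n * p * (1 - p))   # standard deviation: sqrt(npq), about 3.5
print(mu, sigma)

print(binom.pmf(15, n, p))      # P(X = 15): exactly 15 people with access
print(binom.cdf(10, n, p))      # P(X <= 10): at most ten
print(binom.sf(25, n, p))       # P(X > 25): more than 25
```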
Q 4.4.23 The literacy rate for a nation measures the proportion of people age 15 and over that can read and write. The literacy rate in Afghanistan is 28.1%. Suppose you choose 15 people in Afghanistan at random. Let $X =$ the number of people who are literate. 1. Sketch a graph of the probability distribution of $X$. 2. Using the formulas, calculate the (i) mean and (ii) standard deviation of $X$. 3. Find the probability that more than five people in the sample are literate. Is it is more likely that three people or four people are literate. S 4.4.23 1. $X \sim B(15, 0.281)$ 1. Mean $= \mu = np = 15(0.281) = 4.215$ 2. Standard Deviation $= \sigma = \sqrt{npq} = \sqrt{15(0.281)(0.719)} = 1.7409$ 2. $P(x > 5) = 1 – P(x ≤ 5) = 1 – \text{binomcdf}(15, 0.281, 5) = 1 – 0.7754 = 0.2246$ $P(x = 3) = \text{binompdf}(15, 0.281, 3) = 0.1927$ $P(x = 4) = \text{binompdf}(15, 0.281, 4) = 0.2259$ It is more likely that four people are literate that three people are. 4.5: Geometric Distribution Q 4.5.1 A consumer looking to buy a used red Miata car will call dealerships until she finds a dealership that carries the car. She estimates the probability that any independent dealership will have the car will be 28%. We are interested in the number of dealerships she must call. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. On average, how many dealerships would we expect her to have to call until she finds one that has the car? 5. Find the probability that she must call at most four dealerships. 6. Find the probability that she must call three or four dealerships. Q 4.5.2 Suppose that the probability that an adult in America will watch the Super Bowl is 40%. Each person is considered independent. We are interested in the number of adults in America we must survey until we find one who will watch the Super Bowl. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many adults in America do you expect to survey until you find one who will watch the Super Bowl? 5. Find the probability that you must ask seven people. 6. Find the probability that you must ask three or four people. S 4.5.2 1. $X =$ the number of adults in America who are surveyed until one says he or she will watch the Super Bowl. 2. $X \sim G(0.40)$ 3. 2.5 4. 0.0187 5. 0.2304 Q 4.5.3 It has been estimated that only about 30% of California residents have adequate earthquake supplies. Suppose we are interested in the number of California residents we must survey until we find a resident who does not have adequate earthquake supplies. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. What is the probability that we must survey just one or two residents until we find a California resident who does not have adequate earthquake supplies? 5. What is the probability that we must survey at least three California residents until we find a California resident who does not have adequate earthquake supplies? 6. How many California residents do you expect to need to survey until you find a California resident who does not have adequate earthquake supplies? 7. How many California residents do you expect to need to survey until you find a California resident who does have adequate earthquake supplies? Q 4.5.4 In one of its Spring catalogs, L.L. 
Bean® advertised footwear on 29 of its 192 catalog pages. Suppose we randomly survey 20 pages. We are interested in the number of pages that advertise footwear. Each page may be picked more than once. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many pages do you expect to advertise footwear on them? 5. Is it probable that all twenty will advertise footwear on them? Why or why not? 6. What is the probability that fewer than ten will advertise footwear on them? 7. Reminder: A page may be picked more than once. We are interested in the number of pages that we must randomly survey until we find one that has footwear advertised on it. Define the random variable $X$ and give its distribution. 8. What is the probability that you only need to survey at most three pages in order to find one that advertises footwear on it? 9. How many pages do you expect to need to survey in order to find one that advertises footwear? S 4.5.4 1. $X =$ the number of pages that advertise footwear 2. $X$ takes on the values 0, 1, 2, ..., 20 3. $X \sim B(20, \frac{29}{192})$ 4. 3.02 5. No 6. 0.9997 7. $X =$ the number of pages we must survey until we find one that advertises footwear. $X \sim G(\frac{29}{192})$ 8. 0.3881 9. 6.6207 pages Q 4.5.5 Suppose that you are performing the probability experiment of rolling one fair six-sided die. Let $\text{F}$ be the event of rolling a four or a five. You are interested in how many times you need to roll the die in order to obtain the first four or five as the outcome. • $p =$ probability of success (event $\text{F}$ occurs) • $q =$ probability of failure (event $\text{F}$ does not occur) 1. Write the description of the random variable $X$. 2. What are the values that $X$ can take on? 3. Find the values of $p$ and $q$. 4. Find the probability that the first occurrence of event $\text{F}$ (rolling a four or five) is on the second trial. Q 4.5.5 Ellen has music practice three days a week. She practices for all of the three days 85% of the time, two days 8% of the time, one day 4% of the time, and no days 3% of the time. One week is selected at random. What values does $X$ take on? 0, 1, 2, and 3 Q 4.5.6 The World Bank records the prevalence of HIV in countries around the world. According to their data, “Prevalence of HIV refers to the percentage of people ages 15 to 49 who are infected with HIV.”1 In South Africa, the prevalence of HIV is 17.3%. Let $X =$ the number of people you test until you find a person infected with HIV. 1. Sketch a graph of the distribution of the discrete random variable $X$. 2. What is the probability that you must test 30 people to find one with HIV? 3. What is the probability that you must ask ten people? 4. Find the (i) mean and (ii) standard deviation of the distribution of $X$. Q 4.5.7 According to a recent Pew Research poll, 75% of millenials (people born between 1981 and 1995) have a profile on a social networking site. Let $X =$ the number of millenials you ask until you find a person without a profile on a social networking site. 1. Describe the distribution of $X$. 2. Find the (i) mean and (ii) standard deviation of $X$. 3. What is the probability that you must ask ten people to find one person without a social networking site? 4. What is the probability that you must ask 20 people to find one person without a social networking site? 5. What is the probability that you must ask at most five people? S 4.5.7 1. $X \sim \text{G}(0.25)$ 1. 
Mean $= \mu = \frac{1}{p} = \frac{1}{0.25} = 4$ 2. Standard Deviation $= \sigma = \sqrt{\frac{1-p}{p^{2}}} = \sqrt{\frac{1-0.25}{0.25^{2}}} \approx 3.4641$ 2. $P(x = 10) = \text{geometpdf}(0.25, 10) = 0.0188$ 3. $P(x = 20) = \text{geometpdf}(0.25, 20) = 0.0011$ 4. $P(x \leq 5) = \text{geometcdf}(0.25, 5) = 0.7627$ Footnotes 1. ”Prevalence of HIV, total (% of populations ages 15-49),” The World Bank, 2013. Available online at http://data.worldbank.org/indicator/...last&sort=desc (accessed May 15, 2013). 4.6: Hypergeometric Distribution Q 4.6.1 A group of Martial Arts students is planning on participating in an upcoming demonstration. Six are students of Tae Kwon Do; seven are students of Shotokan Karate. Suppose that eight students are randomly picked to be in the first demonstration. We are interested in the number of Shotokan Karate students in that first demonstration. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many Shotokan Karate students do we expect to be in that first demonstration? Q 4.6.2 In one of its Spring catalogs, L.L. Bean® advertised footwear on 29 of its 192 catalog pages. Suppose we randomly survey 20 pages. We are interested in the number of pages that advertise footwear. Each page may be picked at most once. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many pages do you expect to advertise footwear on them? 5. Calculate the standard deviation. S 4.6.2 1. $X =$ the number of pages that advertise footwear 2. 0, 1, 2, 3, ..., 20 3. $X \sim \text{H}(29, 163, 20); r = 29, b = 163, n = 20$ 4. 3.03 5. 1.5197 Q 4.6.3 Suppose that a technology task force is being formed to study technology awareness among instructors. Assume that ten people will be randomly chosen to be on the committee from a group of 28 volunteers, 20 who are technically proficient and eight who are not. We are interested in the number on the committee who are not technically proficient. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many instructors do you expect on the committee who are not technically proficient? 5. Find the probability that at least five on the committee are not technically proficient. 6. Find the probability that at most three on the committee are not technically proficient. Q 4.6.4 Suppose that nine Massachusetts athletes are scheduled to appear at a charity benefit. The nine are randomly chosen from eight volunteers from the Boston Celtics and four volunteers from the New England Patriots. We are interested in the number of Patriots picked. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. Are you choosing the nine athletes with or without replacement? S 4.6.4 1. $X =$ the number of Patriots picked 2. 0, 1, 2, 3, 4 3. $X \sim H(4, 8, 9)$ 4. Without replacement Q 4.6.5 A bridge hand is defined as 13 cards selected at random and without replacement from a deck of 52 cards. In a standard deck of cards, there are 13 cards from each suit: hearts, spades, clubs, and diamonds. What is the probability of being dealt a hand that does not contain a heart? 1. What is the group of interest? 2. How many are in the group of interest? 3. How many are in the other group? 4. 
Let $X =$ _________. What values does $X$ take on? 5. The probability question is $P$(_______). 6. Find the probability in question. 7. Find the (i) mean and (ii) standard deviation of $X$. 4.7: Poisson Distribution Q 4.7.1 The switchboard in a Minneapolis law office gets an average of 5.5 incoming phone calls during the noon hour on Mondays. Experience shows that the existing staff can handle up to six calls in an hour. Let $X =$ the number of calls received at noon. 1. Find the mean and standard deviation of $X$. 2. What is the probability that the office receives at most six calls at noon on Monday? 3. Find the probability that the law office receives six calls at noon. What does this mean to the law office staff who get, on average, 5.5 incoming phone calls at noon? 4. What is the probability that the office receives more than eight calls at noon? S 4.7.1 1. $X \sim P(5.5); \mu= 5.5; \sigma = \sqrt{5.5} \approx 2.3452$ 2. $P(x \leq 6) = \text{poissoncdf}(5.5, 6) \approx 0.6860$ 3. There is a 15.7% probability that the law staff will receive more calls than they can handle. 4. $P(x > 8) = 1 – P(x \leq 8) = 1 – \text{poissoncdf}(5.5, 8) \approx 1 – 0.8944 = 0.1056$ Q 4.7.2 The maternity ward at Dr. Jose Fabella Memorial Hospital in Manila in the Philippines is one of the busiest in the world with an average of 60 births per day. Let $X =$ the number of births in an hour. 1. Find the mean and standard deviation of $X$. 2. Sketch a graph of the probability distribution of $X$. 3. What is the probability that the maternity ward will deliver three babies in one hour? 4. What is the probability that the maternity ward will deliver at most three babies in one hour? 5. What is the probability that the maternity ward will deliver more than five babies in one hour? Q 4.7.3 A manufacturer of Christmas tree light bulbs knows that 3% of its bulbs are defective. Find the probability that a string of 100 lights contains at most four defective bulbs using both the binomial and Poisson distributions. S 4.7.3 Let $X =$ the number of defective bulbs in a string. Using the Poisson distribution: • $\mu = np = 100(0.03) = 3$ • $X \sim P(3)$ • $P(x \leq 4) = \text{poissoncdf}(3, 4) \approx 0.8153$ Using the binomial distribution: • $X \sim \text{B}(100, 0.03)$ • $P(x \leq 4) = \text{binomcdf}(100, 0.03, 4) \approx 0.8179$ The Poisson approximation is very good—the difference between the probabilities is only 0.0026. Q 4.7.4 The average number of children a Japanese woman has in her lifetime is 1.37. Suppose that one Japanese woman is randomly chosen. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. Find the probability that she has no children. 5. Find the probability that she has fewer children than the Japanese average. 6. Find the probability that she has more children than the Japanese average. Q 4.7.5 The average number of children a Spanish woman has in her lifetime is 1.47. Suppose that one Spanish woman is randomly chosen. 1. In words, define the Random Variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. Find the probability that she has no children. 5. Find the probability that she has fewer children than the Spanish average. 6. Find the probability that she has more children than the Spanish average . S 4.7.5 1. $X =$ the number of children for a Spanish woman 2. 0, 1, 2, 3,... 3. $X \sim P(1.47)$ 4. 0.2299 5. 0.5679 6. 
0.4321 Q 4.7.6 Fertile, female cats produce an average of three litters per year. Suppose that one fertile, female cat is randomly chosen. In one year, find the probability she produces: 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _______ 4. Find the probability that she has no litters in one year. 5. Find the probability that she has at least two litters in one year. 6. Find the probability that she has exactly three litters in one year. Q 4.7.7 he chance of having an extra fortune in a fortune cookie is about 3%. Given a bag of 144 fortune cookies, we are interested in the number of cookies with an extra fortune. Two distributions may be used to solve this problem, but only use one distribution to solve the problem. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many cookies do we expect to have an extra fortune? 5. Find the probability that none of the cookies have an extra fortune. 6. Find the probability that more than three have an extra fortune. 7. As $n$ increases, what happens involving the probabilities using the two distributions? Explain in complete sentences. S 4.7.7 1. $X =$ the number of fortune cookies that have an extra fortune 2. 0, 1, 2, 3,... 144 3. $X \sim B(144, 0.03)$ or $P(4.32)$ 4. 4.32 5. 0.0124 or 0.0133 6. 0.6300 or 0.6264 7. As $n$ gets larger, the probabilities get closer together. Q 4.7.8 According to the South Carolina Department of Mental Health web site, for every 200 U.S. women, the average number who suffer from anorexia is one. Out of a randomly chosen group of 600 U.S. women determine the following. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many are expected to suffer from anorexia? 5. Find the probability that no one suffers from anorexia. 6. Find the probability that more than four suffer from anorexia. Q 4.7.9 The chance of an IRS audit for a tax return with over $25,000 in income is about 2% per year. Suppose that 100 people with tax returns over$25,000 are randomly picked. We are interested in the number of people audited in one year. Use a Poisson distribution to answer the following questions. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many are expected to be audited? 5. Find the probability that no one was audited. 6. Find the probability that at least three were audited. S 4.7.9 1. $X =$ the number of people audited in one year 2. 0, 1, 2, ..., 100 3. $X \sim P(2)$ 4. 2 5. 0.1353 6. 0.3233 Q 4.7.10 Approximately 8% of students at a local high school participate in after-school sports all four years of high school. A group of 60 seniors is randomly chosen. Of interest is the number that participated in after-school sports all four years of high school. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. How many seniors are expected to have participated in after-school sports all four years of high school? 5. Based on numerical values, would you be surprised if none of the seniors participated in after-school sports all four years of high school? Justify your answer numerically. 6. 
Based on numerical values, is it more likely that four or that five of the seniors participated in after-school sports all four years of high school? Justify your answer numerically. Q 4.7.11 On average, Pierre, an amateur chef, drops three pieces of egg shell into every two cake batters he makes. Suppose that you buy one of his cakes. 1. In words, define the random variable $X$. 2. List the values that $X$ may take on. 3. Give the distribution of $X$. $X \sim$ _____(_____,_____) 4. On average, how many pieces of egg shell do you expect to be in the cake? 5. What is the probability that there will not be any pieces of egg shell in the cake? 6. Let’s say that you buy one of Pierre’s cakes each week for six weeks. What is the probability that there will not be any egg shell in any of the cakes? 7. Based upon the average given for Pierre, is it possible for there to be seven pieces of shell in the cake? Why? S 4.7.11 1. $X =$ the number of shell pieces in one cake 2. 0, 1, 2, 3,... 3. $X \sim P(1.5)$ 4. 1.5 5. 0.2231 6. 0.0001 7. Yes Use the following information to answer the next two exercises: The average number of times per week that Mrs. Plum’s cats wake her up at night because they want to play is ten. We are interested in the number of times her cats wake her up each week. Q 4.7.12 In words, the random variable $X =$ _________________ 1. the number of times Mrs. Plum’s cats wake her up each week. 2. the number of times Mrs. Plum’s cats wake her up each hour. 3. the number of times Mrs. Plum’s cats wake her up each night. 4. the number of times Mrs. Plum’s cats wake her up. Q 4.7.13 Find the probability that her cats will wake her up no more than five times next week. 1. 0.5000 2. 0.9329 3. 0.0378 4. 0.0671 d
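The Poisson answers above, for example $P(x = 0) = 0.2231$ for Pierre's cakes with mean 1.5 and the choice 0.0671 for Mrs. Plum's cats, come from the Poisson pmf and cdf. A minimal sketch, assuming Python with scipy rather than the calculator functions used in the text, reproduces them:

```python
# Minimal sketch (scipy assumed) reproducing two Poisson answers given above.
from scipy.stats import poisson

# Pierre's cakes: X ~ P(1.5)
print(poisson.pmf(0, 1.5))        # P(X = 0) ~ 0.2231, no shell pieces in one cake
print(poisson.pmf(0, 1.5) ** 6)   # no shell in six independent cakes, ~ 0.0001

# Mrs. Plum's cats: X ~ P(10)
print(poisson.cdf(5, 10))         # P(X <= 5) ~ 0.0671, "no more than five times"
```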
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. Continuous Probability Functions For each probability and percentile problem, draw the picture. Q 5.2.1 Consider the following experiment. You are one of 100 people enlisted to take part in a study to determine the percent of nurses in America with an R.N. (registered nurse) degree. You ask nurses if they have an R.N. degree. The nurses answer “yes” or “no.” You then calculate the percentage of nurses with an R.N. degree. You give that percentage to your supervisor. 1. What part of the experiment will yield discrete data? 2. What part of the experiment will yield continuous data? Q 5.2.2 When age is rounded to the nearest year, do the data stay continuous, or do they become discrete? Why? S 5.2.2 Age is a measurement, regardless of the accuracy used. Exercise 5.2.2 Which type of distribution does the graph illustrate? Answer Uniform Distribution Exercise 5.2.3 Which type of distribution does the graph illustrate? Exercise 5.2.4 Which type of distribution does the graph illustrate? Answer Normal Distribution Exercise 5.2.5 What does the shaded area represent? $P($___$< x <$ ___$)$ Exercise 5.2.6 What does the shaded area represent? $P($___$< x <$ ___$)$ Answer $P(6 < x < 7)$ Exercise 5.2.7 For a continuous probability distribution, $0 \leq x \leq 15$. What is $P(x > 15)$? Exercise 5.2.8 What is the area under $f(x)$ if the function is a continuous probability density function? Answer one Exercise 5.2.9 For a continuous probability distribution, $0 \leq x \leq 10$. What is $P(x = 7)$? Exercise 5.2.11 A continuous probability function is restricted to the portion between $x = 0$ and $7$. What is $P(x = 10)$? Answer zero Exercise 5.2.12 $f(x)$ for a continuous probability function is $\frac{1}{5}$, and the function is restricted to $0 \leq x \leq 5$. What is $P(x < 0)$? Exercise 5.2.13 $f(x)$, a continuous probability function, is equal to $\frac{1}{12}$, and the function is restricted to $0 \leq x \leq 12$. What is $P(0 < x) < 12)$? Answer one Exercise 5.2.14 Find the probability that $x$ falls in the shaded area. Exercise 5.2.15 Find the probability that $x$ falls in the shaded area. Answer 0.625 Exercise 5.2.16 Find the probability that $x$ falls in the shaded area. Exercise 5.2.17 $f(x)$, a continuous probability function, is equal to $\frac{1}{3}$ and the function is restricted to $1 \leq x \leq 4$. Describe $P(x > \frac{3}{2})$. Answer The probability is equal to the area from $x = \frac{3}{2}$ to $x = 4$ above the x-axis and up to $f(x) = \frac{1}{3}$. The Uniform Distribution For each probability and percentile problem, draw the picture. Q 5.3.1 Births are approximately uniformly distributed between the 52 weeks of the year. They can be said to follow a uniform distribution from one to 53 (spread of 52 weeks). 1. $X \sim$ _________ 2. Graph the probability distribution. 3. $f(x) =$ _________ 4. $\mu =$ _________ 5. $\sigma =$ _________ 6. Find the probability that a person is born at the exact moment week 19 starts. That is, find $P(x = 19) =$ _________ 7. $P(2 < x < 31) =$ _________ 8. Find the probability that a person is born after week 40. 9. $P(12 < x|x < 28) =$ _________ 10. Find the 70th percentile. 11. Find the minimum for the upper quarter. Q 5.3.2 A random number generator picks a number from one to nine in a uniform manner. 1. $X \sim$ _________ 2. Graph the probability distribution. 3. $f(x) =$ _________ 4. $\mu =$ _________ 5. $\mu =$ _________ 6. 
$P(3.5 < x < 7.25) =$ _________ 7. $P(x > 5.67) =$ _________ 8. $P(x > 5|x > 3) =$ _________ 9. Find the 90th percentile. S 5.3.2 1. $X \sim U(1, 9)$ 2. Check student’s solution. 3. $f(x) = \frac{1}{8}$ where $1 \leq x \leq 9$ 4. five 5. 2.3 6. $\frac{15}{32}$ 7. $\frac{333}{800}$ 8. $\frac{2}{3}$ 9. 8.2 Q 5.3.3 According to a study by Dr. John McDougall of his live-in weight loss program at St. Helena Hospital, the people who follow his program lose between six and 15 pounds a month until they approach trim body weight. Let’s suppose that the weight loss is uniformly distributed. We are interested in the weight loss of a randomly selected individual following the program for one month. 1. Define the random variable. $X =$ _________ 2. $X \sim$ _________ 3. Graph the probability distribution. 4. $f(x) =$ _________ 5. $\mu =$ _________ 6. $\sigma =$ _________ 7. Find the probability that the individual lost more than ten pounds in a month. 8. Suppose it is known that the individual lost more than ten pounds in a month. Find the probability that he lost less than 12 pounds in the month. 9. $P(7 < x < 13|x > 9) =$ __________. State this in a probability question, similarly to parts g and h, draw the picture, and find the probability. Q 5.3.4 A subway train on the Red Line arrives every eight minutes during rush hour. We are interested in the length of time a commuter must wait for a train to arrive. The time follows a uniform distribution. 1. Define the random variable. $X =$ _______ 2. $X \sim$ _______ 3. Graph the probability distribution. 4. $f(x) =$ _______ 5. $\mu =$ _______ 6. $\sigma =$ _______ 7. Find the probability that the commuter waits less than one minute. 8. Find the probability that the commuter waits between three and four minutes. 9. Sixty percent of commuters wait more than how long for the train? State this in a probability question, similarly to parts g and h, draw the picture, and find the probability. S 5.3.4 1. $X$ represents the length of time a commuter must wait for a train to arrive on the Red Line. 2. $X \sim U(0, 8)$ 3. $f(x) = \frac{1}{8}$ where $0 \leq x \leq 8$ 4. four 5. 2.31 6. $\frac{1}{8}$ 7. $\frac{1}{8}$ 8. 3.2 Q 5.3.6 The age of a first grader on September 1 at Garden Elementary School is uniformly distributed from 5.8 to 6.8 years. We randomly select one first grader from the class. 1. Define the random variable. $X =$ _________ 2. $X \sim$ _________ 3. Graph the probability distribution. 4. $f(x) =$ _________ 5. $\mu =$ _________ 6. $\sigma =$ _________ 7. Find the probability that she is over 6.5 years old. 8. Find the probability that she is between four and six years old. 9. Find the 70th percentile for the age of first graders on September 1 at Garden Elementary School. Use the following information to answer the next three exercises. The Sky Train from the terminal to the rental-car and long-term parking center is supposed to arrive every eight minutes. The waiting times for the train are known to follow a uniform distribution. Q 5.3.7 What is the average waiting time (in minutes)? 1. zero 2. two 3. three 4. four d Q 5.3.8 Find the 30th percentile for the waiting times (in minutes). 1. two 2. 2.4 3. 2.75 4. three Q 5.3.9 The probability of waiting more than seven minutes given a person has waited more than four minutes is? 1. 0.125 2. 0.25 3. 0.5 4. 0.75 b Q 5.3.10 The time (in minutes) until the next bus departs a major bus depot follows a distribution with $f(x) = \frac{1}{20}$ where $x$ goes from 25 to 45 minutes. 1. Define the random variable.
$X =$ ________ 2. $X \sim$ ________ 3. Graph the probability distribution. 4. The distribution is ______________ (name of distribution). It is _____________ (discrete or continuous). 5. $\mu =$ ________ 6. $\sigma =$ ________ 7. Find the probability that the time is at most 30 minutes. Sketch and label a graph of the distribution. Shade the area of interest. Write the answer in a probability statement. 8. Find the probability that the time is between 30 and 40 minutes. Sketch and label a graph of the distribution. Shade the area of interest. Write the answer in a probability statement. 9. $P(25 < x < 55) =$ _________. State this in a probability statement, similarly to parts g and h, draw the picture, and find the probability. 10. Find the 90th percentile. This means that 90% of the time, the time is less than _____ minutes. 11. Find the 75th percentile. In a complete sentence, state what this means. (See part j.) 12. Find the probability that the time is more than 40 minutes given (or knowing that) it is at least 30 minutes. Q 5.3.11 Suppose that the value of a stock varies each day from $16 to $25 with a uniform distribution. 1. Find the probability that the value of the stock is more than $19. 2. Find the probability that the value of the stock is between $19 and $22. 3. Find the upper quartile - 25% of all days the stock is above what value? Draw the graph. 4. Given that the stock is greater than $18, find the probability that the stock is more than $21. S 5.3.11 1. The probability density function of $X$ is $\frac{1}{25-16} = \frac{1}{9}$. $P(X > 19) = (25 - 19)\left(\frac{1}{9}\right) = \frac{6}{9} = \frac{2}{3}$. 2. The area must be 0.25, and $0.25 = (\text{width})\left(\frac{1}{9}\right)$, so $\text{width} = (0.25)(9) = 2.25$. Thus, the upper quartile is $25 - 2.25 = 22.75$. 3. This is a conditional probability question. $P(x > 21| x > 18)$. You can do this two ways: • Draw the graph where a is now 18 and b is still 25. The height is $\frac{1}{25 - 18} = \frac{1}{7}$. So, $P(x > 21|x > 18) = (25 - 21)\left(\frac{1}{7}\right) = \frac{4}{7}$. • Use the formula: $P(x > 21|x > 18) = \frac{P(x > 21 \text{ AND } x > 18)}{P(x > 18)} = \frac{P(x > 21)}{P(x > 18)} = \frac{25 - 21}{25 - 18} = \frac{4}{7}$. Q 5.3.12 A fireworks show is designed so that the time between fireworks is between one and five seconds, and follows a uniform distribution. 1. Find the average time between fireworks. 2. Find the probability that the time between fireworks is greater than four seconds. Q 5.3.13 The number of miles driven by a truck driver falls between 300 and 700, and follows a uniform distribution. 1. Find the probability that the truck driver goes more than 650 miles in a day. 2. Find the probability that the truck driver goes between 400 and 650 miles in a day. 3. At least how many miles does the truck driver travel on the furthest 10% of days? S 5.3.13 1. $P(X > 650) = \frac{700-650}{700-300} = \frac{50}{400} = \frac{1}{8} = 0.125$. 2. $P(400 < X < 650) = \frac{650-400}{700-300} = \frac{250}{400} = 0.625$ 3. $0.10 = \frac{\text{width}}{700-300}$, so $\text{width} = 400(0.10) = 40$. Since $700 - 40 = 660$, the drivers travel at least 660 miles on the furthest 10% of days. The Exponential Distribution Use the following information to answer the next ten exercises. A customer service representative must spend different amounts of time with each customer to resolve various concerns.
The amount of time spent with each customer can be modeled by the following distribution: $X \sim Exp(0.2)$ Exercise 5.4.8 What type of distribution is this? Exercise 5.4.9 Are outcomes equally likely in this distribution? Why or why not? Answer No, outcomes are not equally likely. In this distribution, more people require a little bit of time, and fewer people require a lot of time, so it is more likely that someone will require less time. Exercise 5.4.10 What is $m$? What does it represent? Exercise 5.4.11 What is the mean? Answer five Exercise 5.4.12 What is the standard deviation? Exercise 5.4.13 State the probability density function. Answer $f(x) = 0.2e^{-0.2x}$ Exercise 5.4.14 Graph the distribution. Exercise 5.4.15 Find $P(2 < x < 10)$. Answer 0.5350 Exercise 5.4.16 Find $P(x > 6)$. Exercise 5.4.17 Find the 70th percentile. Answer 6.02 Use the following information to answer the next seven exercises. A distribution is given as $X \sim Exp(0.75)$. Exercise 5.4.18 What is $m$? Exercise 5.4.19 What is the probability density function? Answer $f(x) = 0.75e^{-0.75x}$ Exercise 5.4.20 What is the cumulative distribution function? Exercise 5.4.21 Draw the distribution. Answer Exercise 5.4.22 Find $P(x < 4)$. Exercise 5.4.23 Find the 30th percentile. Answer 0.4756 Exercise 5.4.24 Find the median. Exercise 5.4.25 Which is larger, the mean or the median? Answer The mean is larger. The mean is $\frac{1}{m} = \frac{1}{0.75} \approx 1.33$, which is greater than $0.9242$. Use the following information to answer the next 16 exercises. Carbon-14 is a radioactive element with a half-life of about 5,730 years. Carbon-14 is said to decay exponentially. The decay rate is 0.000121. We start with one gram of carbon-14. We are interested in the time (years) it takes to decay carbon-14. Exercise 5.4.26 What is being measured here? Exercise 5.4.27 Are the data discrete or continuous? Answer continuous Exercise 5.4.28 In words, define the random variable $X$. Exercise 5.4.29 What is the decay rate ($m$)? Answer $m = 0.000121$ Exercise 5.4.30 The distribution for $X$ is ______. Exercise 5.4.31 Find the amount (percent of one gram) of carbon-14 lasting less than 5,730 years. This means, find $P(x < 5,730)$. 1. Sketch the graph, and shade the area of interest. 2. Find the probability. $P(x < 5,730) =$_ __________ Answer 1. Check student's solution 2. $P(x < 5,730) = 0.5001$ Exercise 5.4.32 Find the percentage of carbon-14 lasting longer than 10,000 years. 1. Sketch the graph, and shade the area of interest. 2. Find the probability. P($x$ > 10,000) = ________ Exercise 5.2.33 Thirty percent (30%) of carbon-14 will decay within how many years? 1. Sketch the graph, and shade the area of interest. 2. Find the value $k$ such that $P(x < k) = 0.30$. Answer 1. Check student's solution. 2. $k = 2947.73$ Q 5.4.1 Suppose that the length of long distance phone calls, measured in minutes, is known to have an exponential distribution with the average length of a call equal to eight minutes. 1. Define the random variable. $X =$ ________________. 2. Is $X$ continuous or discrete? 3. $X \sim$ ________ 4. $\mu =$ ________ 5. $\sigma =$ ________ 6. Draw a graph of the probability distribution. Label the axes. 7. Find the probability that a phone call lasts less than nine minutes. 8. Find the probability that a phone call lasts more than nine minutes. 9. Find the probability that a phone call lasts between seven and nine minutes. 10. If 25 phone calls are made one after another, on average, what would you expect the total to be? 
Why? Q 5.4.2 Suppose that the useful life of a particular car battery, measured in months, decays with parameter 0.025. We are interested in the life of the battery. 1. Define the random variable. $X =$ _________________________________. 2. Is $X$ continuous or discrete? 3. $X \sim$ ________ 4. On average, how long would you expect one car battery to last? 5. On average, how long would you expect nine car batteries to last, if they are used one after another? 6. Find the probability that a car battery lasts more than 36 months. 7. Seventy percent of the batteries last at least how long? S 5.4.2 1. $X =$ the useful life of a particular car battery, measured in months. 2. $X$ is continuous. 3. $X ~\sim \text{Exp}(0.025)$ 4. 40 months 5. 360 months 6. 0.4066 7. 14.27 Q 5.4.3 The percent of persons (ages five and older) in each state who speak a language at home other than English is approximately exponentially distributed with a mean of 9.848. Suppose we randomly pick a state. 1. Define the random variable. $X =$ _________________________________. 2. Is $X$ continuous or discrete? 3. $X ~\sim$ ________ 4. $\mu =$ ________ 5. $\sigma =$ ________ 6. Draw a graph of the probability distribution. Label the axes. 7. Find the probability that the percent is less than 12. 8. Find the probability that the percent is between eight and 14. 9. The percent of all individuals living in the United States who speak a language at home other than English is 13.8. 1. Why is this number different from 9.848%? 2. What would make this number higher than 9.848%? Q 5.4.4 The time (in years) after reaching age 60 that it takes an individual to retire is approximately exponentially distributed with a mean of about five years. Suppose we randomly pick one retired individual. We are interested in the time after age 60 to retirement. 1. Define the random variable. $X =$ _________________________________. 2. Is $X$ continuous or discrete? 3. $X \sim$ ________ 4. $\mu =$ ________ 5. $\sigma =$ ________ 6. Draw a graph of the probability distribution. Label the axes. 7. Find the probability that the person retired after age 70. 8. Do more people retire before age 65 or after age 65? 9. In a room of 1,000 people over age 80, how many do you expect will NOT have retired yet? S 5.4.4 1. $X =$ the time (in years) after reaching age 60 that it takes an individual to retire 2. $X$ is continuous. 3. $X ~ \text{Exp}\left(\frac{1}{5}\right)$ 4. five 5. five 6. Check student’s solution. 7. 0.1353 8. before 9. 18.3 Q 5.4.5 The cost of all maintenance for a car during its first year is approximately exponentially distributed with a mean of$150. 1. Define the random variable. $X =$ _________________________________. 2. $X \sim$ ________ 3. $\mu =$ ________ 4. $\sigma =$ ________ 5. Draw a graph of the probability distribution. Label the axes. 6. Find the probability that a car required over \$300 for maintenance during its first year. Use the following information to answer the next three exercises. The average lifetime of a certain new cell phone is three years. The manufacturer will replace any cell phone failing within two years of the date of purchase. The lifetime of these cell phones is known to follow an exponential distribution. Q 5.4.6 The decay rate is: 1. 0.3333 2. 0.5000 3. 2 4. 3 a Q 5.4.7 What is the probability that a phone will fail within two years of the date of purchase? 1. 0.8647 2. 0.4866 3. 0.2212 4. 0.9997 Q 5.4.8 What is the median lifetime of these phones (in years)? 1. 0.1941 2. 1.3863 3. 2.0794 4. 
5.5452 c Q 5.4.9 Let $X \sim \text{Exp}(0.1)$. 1. decay rate = ________ 2. $\mu =$ ________ 3. Graph the probability distribution function. 4. On the graph, shade the area corresponding to $P(x < 6)$ and find the probability. 5. Sketch a new graph, shade the area corresponding to $P(3 < x < 6)$ and find the probability. 6. Sketch a new graph, shade the area corresponding to $P(x < 7)$ and find the probability. 7. Sketch a new graph, shade the area corresponding to the 40th percentile and find the value. 8. Find the average value of $x$. Q 5.4.10 Suppose that the longevity of a light bulb is exponential with a mean lifetime of eight years. 1. Find the probability that a light bulb lasts less than one year. 2. Find the probability that a light bulb lasts between six and ten years. 3. Seventy percent of all light bulbs last at least how long? 4. A company decides to offer a warranty to give refunds to light bulbs whose lifetime is among the lowest two percent of all bulbs. To the nearest month, what should be the cutoff lifetime for the warranty to take place? 5. If a light bulb has lasted seven years, what is the probability that it fails within the 8th year? S 5.4.10 Let $T =$ the lifetime of a light bulb. The decay parameter is $m = \frac{1}{8}$, and $T \sim \text{Exp}(\frac{1}{8})$. The cumulative distribution function is $P(T < t) = 1 - e^{-\frac{t}{8}}$. 1. Therefore, $P(T < 1) = 1 - e^{-\frac{1}{8}} \approx 0.1175$. 2. We want to find $P(6 < t < 10)$. To do this, $P(6 < t < 10) = P(t < 10) - P(t < 6) = \left(1 - e^{-\frac{10}{8}}\right) - \left(1 - e^{-\frac{6}{8}}\right) \approx 0.7135 - 0.5276 = 0.1859$ 3. We want to find $t$ such that $0.70 = P(T > t) = 1 - P(T < t) = 1 - \left(1 - e^{-\frac{t}{8}}\right) = e^{-\frac{t}{8}}$. Solving for $t$, $e^{-\frac{t}{8}} = 0.70$, so $-\frac{t}{8} = \text{ln}(0.70)$ and $t = -8 \cdot \text{ln}(0.70) \approx 2.85$ years. Or use $t = \frac{\text{ln(area to the right)}}{(-m)} = \frac{\text{ln}(0.70)}{-\frac{1}{8}} \approx 2.85$ years 4. We want to find $t$ such that $0.02 = P(T < t) = 1 - e^{-\frac{t}{8}}$. Solving for $t$, $e^{-\frac{t}{8}} = 0.98$, so $-\frac{t}{8} = \text{ln}(0.98)$, and $t = -8 \cdot \text{ln}(0.98) \approx 0.1616$ years, or roughly two months. The warranty should cover light bulbs that last less than 2 months. Or use $t = \frac{\text{ln(area to the right)}}{(-m)} = \frac{\text{ln}(1-0.02)}{-\frac{1}{8}} \approx 0.1616$. 5. We must find $P(T < 8|T > 7)$. Notice that by the rule of complement events, $P(T < 8|T > 7) = 1 - P(T > 8|T > 7)$. By the memoryless property $(P(X > r + t|X > r) = P(X > t))$, $P(T > 8|T > 7) = P(T > 1) = 1 - \left(1 - e^{-\frac{1}{8}}\right) = e^{-\frac{1}{8}} \approx 0.8825$. Therefore, $P(T < 8|T > 7) = 1 - 0.8825 = 0.1175$. Q 5.4.11 At a 911 call center, calls come in at an average rate of one call every two minutes. Assume that the time that elapses from one call to the next has the exponential distribution. 1. On average, how much time occurs between five consecutive calls? 2. Find the probability that after a call is received, it takes more than three minutes for the next call to occur. 3. Ninety percent of all calls occur within how many minutes of the previous call? 4. Suppose that two minutes have elapsed since the last call. Find the probability that the next call will occur within the next minute. 5. Find the probability that less than 20 calls occur within an hour. Q 5.4.12 In major league baseball, a no-hitter is a game in which a pitcher, or pitchers, doesn't give up any hits throughout the game. No-hitters occur at a rate of about three per season.
Assume that the duration of time between no-hitters is exponential. 1. What is the probability that an entire season elapses with a single no-hitter? 2. If an entire season elapses without any no-hitters, what is the probability that there are no no-hitters in the following season? 3. What is the probability that there are more than 3 no-hitters in a single season? S 5.4.12 Let $X =$ the number of no-hitters throughout a season. Since the duration of time between no-hitters is exponential, the number of no-hitters per season is Poisson with mean $\lambda = 3$. Therefore, $P(X = 0) = \frac{3^{0}e^{-3}}{0!} = e^{-3} \approx 0.0498$ NOTE You could let $T =$ duration of time between no-hitters. Since the time is exponential and there are 3 no-hitters per season, then the time between no-hitters is $\frac{1}{3}$ season. For the exponential, $\mu = \frac{1}{3}$. Therefore, $m = \frac{1}{\mu} = 3$ and $T \sim \text{Exp}(3)$. 1. The desired probability is $P(T > 1) = 1 - P(T < 1) = 1 - (1 - e^{-3}) = e^{-3} \approx 0.0498$. 2. Let $T =$ duration of time between no-hitters. We find $P(T > 2|T > 1)$, and by the memoryless property this is simply $P(T > 1)$, which we found to be 0.0498 in part a. 3. Let $X =$ the number of no-hitters in a season. Assume that $X$ is Poisson with mean $\lambda = 3$. Then $P(X > 3) = 1 - P(X \leq 3) = 0.3528$. Q 5.4.13 During the years 1998–2012, a total of 29 earthquakes of magnitude greater than 6.5 have occurred in Papua New Guinea. Assume that the time spent waiting between earthquakes is exponential. 1. What is the probability that the next earthquake occurs within the next three months? 2. Given that six months has passed without an earthquake in Papua New Guinea, what is the probability that the next three months will be free of earthquakes? 3. What is the probability of zero earthquakes occurring in 2014? 4. What is the probability that at least two earthquakes will occur in 2014? Q 5.4.14 According to the American Red Cross, about one out of nine people in the U.S. have Type B blood. Suppose the blood types of people arriving at a blood drive are independent. In this case, the number of Type B blood types that arrive roughly follows the Poisson distribution. 1. If 100 people arrive, how many on average would be expected to have Type B blood? 2. What is the probability that over 10 people out of these 100 have type B blood? 3. What is the probability that more than 20 people arrive before a person with type B blood is found? S 5.4.14 1. $\frac{100}{9} = 11.11$ 2. $P(X > 10) = 1 - P(X \leq 10) = 1 - \text{poissoncdf}(11.11, 10) \approx 0.5532$. 3. The number of people with Type B blood encountered roughly follows the Poisson distribution, so the number of people $X$ who arrive between successive Type B arrivals is roughly exponential with mean $\mu = 9$ and $m = \frac{1}{9}$. The cumulative distribution function of $X$ is $P(X < x) = 1 - e^{-\frac{x}{9}}$. Thus, $P(X > 20) = 1 - P(X \leq 20) = 1 - \left(1 - e^{-\frac{20}{9}}\right) \approx 0.1084$. NOTE We could also deduce that each person arriving has an 8/9 chance of not having Type B blood. So the probability that none of the first 20 people who arrive have Type B blood is $\left(\frac{8}{9}\right)^{20} \approx 0.0948$. (The geometric distribution is more appropriate than the exponential because the number of people between Type B people is discrete instead of continuous.)
Assume that the duration between visits has the exponential distribution. 1. Find the probability that the duration between two successive visits to the web site is more than ten minutes. 2. The top 25% of durations between visits are at least how long? 3. Suppose that 20 minutes have passed since the last visit to the web site. What is the probability that the next visit will occur within the next 5 minutes? 4. Find the probability that less than 7 visits occur within a one-hour period. Q 5.4.16 At an urgent care facility, patients arrive at an average rate of one patient every seven minutes. Assume that the duration between arrivals is exponentially distributed. 1. Find the probability that the time between two successive visits to the urgent care facility is less than 2 minutes. 2. Find the probability that the time between two successive visits to the urgent care facility is more than 15 minutes. 3. If 10 minutes have passed since the last arrival, what is the probability that the next person will arrive within the next five minutes? 4. Find the probability that more than eight patients arrive during a half-hour period. S 5.4.16 Let $T =$ duration (in minutes) between successive visits. Since patients arrive at a rate of one patient every seven minutes, $\mu = 7$ and the decay constant is $m = \frac{1}{7}$. The cdf is $P(T < t) = 1 - e^{-\frac{t}{7}}$. 1. $P(T < 2) = 1 - e^{-\frac{2}{7}} \approx 0.2485$. 2. $P(T > 15) = 1 - P(T < 15) = 1 - \left(1 - e^{-\frac{15}{7}}\right) = e^{-\frac{15}{7}} \approx 0.1173$. 3. By the memoryless property, $P(T < 15|T > 10) = P(T < 5) = 1 - e^{-\frac{5}{7}} \approx 0.5105$. 4. Let $X =$ the number of patients arriving during a half-hour period. Then $X$ has the Poisson distribution with a mean of $\frac{30}{7}$, $X \sim \text{Poisson}\left(\frac{30}{7}\right)$. Find $P(X > 8) = 1 - P(X \leq 8) \approx 0.0311$.
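The exponential calculations in the urgent-care solution above can be mirrored in software by parameterizing the distribution with its mean (scale = 7 minutes). The sketch below is only an illustration under that assumption; it uses Python and scipy rather than the calculator commands in the text.

```python
# Illustrative check of the urgent-care solution (mean wait = 7 minutes); scipy assumed.
from scipy.stats import expon, poisson

mean_wait = 7                          # minutes between arrivals
print(expon.cdf(2, scale=mean_wait))   # P(T < 2)  ~ 0.2485
print(expon.sf(15, scale=mean_wait))   # P(T > 15) ~ 0.1173
print(expon.cdf(5, scale=mean_wait))   # memoryless: P(T < 15 | T > 10) = P(T < 5) ~ 0.5105

print(poisson.sf(8, 30 / 7))           # arrivals per half hour ~ Poisson(30/7); P(X > 8) ~ 0.0311
```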
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. 6.2: The Standard Normal Distribution Use the following information to answer the next two exercises: The patient recovery time from a particular surgical procedure is normally distributed with a mean of 5.3 days and a standard deviation of 2.1 days. Q 6.2.1 What is the median recovery time? 1. 2.7 2. 5.3 3. 7.4 4. 2.1 Q 6.2.2 What is the z-score for a patient who takes ten days to recover? 1. 1.5 2. 0.2 3. 2.2 4. 7.3 c Q 6.2.3 The length of time to find a parking space at 9 A.M. follows a normal distribution with a mean of five minutes and a standard deviation of two minutes. If the mean is significantly greater than the standard deviation, which of the following statements is true? 1. The data cannot follow the uniform distribution. 2. The data cannot follow the exponential distribution.. 3. The data cannot follow the normal distribution. 1. I only 2. II only 3. III only 4. I, II, and III Q 6.2.4 The heights of the 430 National Basketball Association players were listed on team rosters at the start of the 2005–2006 season. The heights of basketball players have an approximate normal distribution with mean, µ = 79 inches and a standard deviation, σ = 3.89 inches. For each of the following heights, calculate the z-score and interpret it using complete sentences. 1. 77 inches 2. 85 inches 3. If an NBA player reported his height had a z-score of 3.5, would you believe him? Explain your answer. S 6.2.4 1. Use the $z$-score formula. $z = –0.5141$. The height of 77 inches is 0.5141 standard deviations below the mean. An NBA player whose height is 77 inches is shorter than average. 2. Use the $z$-score formula. $z = 1.5424$. The height 85 inches is 1.5424 standard deviations above the mean. An NBA player whose height is 85 inches is taller than average. 3. Height $= 79 + 3.5(3.89) = 90.67$ inches, which is over 7.7 feet tall. There are very few NBA players this tall so the answer is no, not likely. Q 6.2.5 The systolic blood pressure (given in millimeters) of males has an approximately normal distribution with mean $\mu = 125$ and standard deviation $\sigma = 14$. Systolic blood pressure for males follows a normal distribution. 1. Calculate the z-scores for the male systolic blood pressures 100 and 150 millimeters. 2. If a male friend of yours said he thought his systolic blood pressure was 2.5 standard deviations below the mean, but that he believed his blood pressure was between 100 and 150 millimeters, what would you say to him? Q 6.2.6 Kyle’s doctor told him that the z-score for his systolic blood pressure is 1.75. Which of the following is the best interpretation of this standardized score? The systolic blood pressure (given in millimeters) of males has an approximately normal distribution with mean $\mu = 125$ and standard deviation $\sigma = 14$. If $X =$ a systolic blood pressure score then $X \sim N(125, 14)$. 1. Which answer(s) is/are correct? 1. Kyle’s systolic blood pressure is 175. 2. Kyle’s systolic blood pressure is 1.75 times the average blood pressure of men his age. 3. Kyle’s systolic blood pressure is 1.75 above the average systolic blood pressure of men his age. 4. Kyles’s systolic blood pressure is 1.75 standard deviations above the average systolic blood pressure for men. 2. Calculate Kyle’s blood pressure. S 6.2.6 1. iv 2. Kyle’s blood pressure is equal to $125 + (1.75)(14) = 149.5$. Q 6.2.7 Height and weight are two measurements used to track a child’s development. 
The World Health Organization measures child development by comparing the weights of children who are the same height and the same gender. In 2009, weights for all 80 cm girls in the reference population had a mean $\mu = 10.2$ kg and standard deviation $\sigma = 0.8$ kg. Weights are normally distributed. $X \sim N(10.2, 0.8)$. Calculate the z-scores that correspond to the following weights and interpret them. 1. 11 kg 2. 7.9 kg 3. 12.2 kg Q 6.2.8 In 2005, 1,475,623 students heading to college took the SAT. The distribution of scores in the math section of the SAT follows a normal distribution with mean $\mu = 520$ and standard deviation $\sigma = 115$. 1. Calculate the $z$-score for an SAT score of 720. Interpret it using a complete sentence. 2. What math SAT score is 1.5 standard deviations above the mean? What can you say about this SAT score? 3. For 2012, the SAT math test had a mean of 514 and standard deviation 117. The ACT math test is an alternate to the SAT and is approximately normally distributed with mean 21 and standard deviation 5.3. If one person took the SAT math test and scored 700 and a second person took the ACT math test and scored 30, who did better with respect to the test they took? S 6.2.8 Let $X =$ an SAT math score and $Y =$ an ACT math score. 1. $X = 720 \frac{720-520}{15} = 1.74$ The exam score of 720 is 1.74 standard deviations above the mean of 520. 2. $z = 1.5$ The math SAT score is $520 + 1.5(115) \approx 692.5$. The exam score of 692.5 is 1.5 standard deviations above the mean of 520. 3. $\frac{X-\mu}{\sigma} = \frac{700-514}{117} \approx 1.59$, the z-score for the SAT. $\frac{Y-\mu}{\sigma} = \frac{30-21}{5.3} \approx 1.70$, the z-scores for the ACT. With respect to the test they took, the person who took the ACT did better (has the higher z-score). 6.3: Using the Normal Distribution Use the following information to answer the next two exercises: The patient recovery time from a particular surgical procedure is normally distributed with a mean of 5.3 days and a standard deviation of 2.1 days. Q 6.3.1 What is the probability of spending more than two days in recovery? 1. 0.0580 2. 0.8447 3. 0.0553 4. 0.9420 Q 6.3.2 The 90th percentile for recovery times is? 1. 8.89 2. 7.07 3. 7.99 4. 4.32 S 6.3.2 c Use the following information to answer the next three exercises: The length of time it takes to find a parking space at 9 A.M. follows a normal distribution with a mean of five minutes and a standard deviation of two minutes. Q 6.3.3 Based upon the given information and numerically justified, would you be surprised if it took less than one minute to find a parking space? 1. Yes 2. No 3. Unable to determine Q 6.3.4 Find the probability that it takes at least eight minutes to find a parking space. 1. 0.0001 2. 0.9270 3. 0.1862 4. 0.0668 d Q 6.3.5 Seventy percent of the time, it takes more than how many minutes to find a parking space? 1. 1.24 2. 2.41 3. 3.95 4. 6.05 Q 6.3.6 According to a study done by De Anza students, the height for Asian adult males is normally distributed with an average of 66 inches and a standard deviation of 2.5 inches. Suppose one Asian adult male is randomly chosen. Let $X =$ height of the individual. 1. $X \sim$ _____(_____,_____) 2. Find the probability that the person is between 65 and 69 inches. Include a sketch of the graph, and write a probability statement. 3. Would you expect to meet many Asian adult males over 72 inches? Explain why or why not, and justify your answer numerically. 4. 
The middle 40% of heights fall between what two values? Sketch the graph, and write the probability statement. S 6.3.6 1. $X \sim N(66, 2.5)$ 2. 0.5404 3. No, the probability that an Asian male is over 72 inches tall is 0.0082 Q 6.3.7 IQ is normally distributed with a mean of 100 and a standard deviation of 15. Suppose one individual is randomly chosen. Let $X =$ IQ of an individual. 1. $X \sim$ _____(_____,_____) 2. Find the probability that the person has an IQ greater than 120. Include a sketch of the graph, and write a probability statement. 3. MENSA is an organization whose members have the top 2% of all IQs. Find the minimum IQ needed to qualify for the MENSA organization. Sketch the graph, and write the probability statement. 4. The middle 50% of IQs fall between what two values? Sketch the graph and write the probability statement. Q 6.3.8 The percent of fat calories that a person in America consumes each day is normally distributed with a mean of about 36 and a standard deviation of 10. Suppose that one individual is randomly chosen. Let $X =$ percent of fat calories. 1. $X \sim$ _____(_____,_____) 2. Find the probability that the percent of fat calories a person consumes is more than 40. Graph the situation. Shade in the area to be determined. 3. Find the maximum number for the lower quarter of percent of fat calories. Sketch the graph and write the probability statement. S 6.3.8 1. $X \sim N(36, 10)$ 2. The probability that a person consumes more than 40% of their calories as fat is 0.3446. 3. Approximately 25% of people consume less than 29.26% of their calories as fat. Q 6.3.9 Suppose that the distance of fly balls hit to the outfield (in baseball) is normally distributed with a mean of 250 feet and a standard deviation of 50 feet. 1. If $X =$ distance in feet for a fly ball, then $X \sim$ _____(_____,_____) 2. If one fly ball is randomly chosen from this distribution, what is the probability that this ball traveled fewer than 220 feet? Sketch the graph. Scale the horizontal axis X. Shade the region corresponding to the probability. Find the probability. 3. Find the 80th percentile of the distribution of fly balls. Sketch the graph, and write the probability statement. Q 6.3.10 In China, four-year-olds average three hours a day unsupervised. Most of the unsupervised children live in rural areas, considered safe. Suppose that the standard deviation is 1.5 hours and the amount of time spent alone is normally distributed. We randomly select one Chinese four-year-old living in a rural area. We are interested in the amount of time the child spends alone per day. 1. In words, define the random variable $X$. 2. $X \sim$ _____(_____,_____) 3. Find the probability that the child spends less than one hour per day unsupervised. Sketch the graph, and write the probability statement. 4. What percent of the children spend over ten hours per day unsupervised? 5. Seventy percent of the children spend at least how long per day unsupervised? S 6.3.10 1. $X =$ number of hours that a Chinese four-year-old in a rural area is unsupervised during the day. 2. $X ~ N(3, 1.5)$ 3. The probability that the child spends less than one hour a day unsupervised is 0.0918. 4. The probability that a child spends over ten hours a day unsupervised is less than 0.0001. 5. 2.21 hours Q 6.3.11 In the 1992 presidential election, Alaska’s 40 election districts averaged 1,956.8 votes per district for President Clinton. The standard deviation was 572.3. (There are only 40 election districts in Alaska.) 
The distribution of the votes per district for President Clinton was bell-shaped. Let $X =$ number of votes for President Clinton for an election district. 1. State the approximate distribution of $X$. 2. Is 1,956.8 a population mean or a sample mean? How do you know? 3. Find the probability that a randomly selected district had fewer than 1,600 votes for President Clinton. Sketch the graph and write the probability statement. 4. Find the probability that a randomly selected district had between 1,800 and 2,000 votes for President Clinton. 5. Find the third quartile for votes for President Clinton. Q 6.3.12 Suppose that the duration of a particular type of criminal trial is known to be normally distributed with a mean of 21 days and a standard deviation of seven days. 1. In words, define the random variable $X$. 2. $X \sim$ _____(_____,_____) 3. If one of the trials is randomly chosen, find the probability that it lasted at least 24 days. Sketch the graph and write the probability statement. 4. Sixty percent of all trials of this type are completed within how many days? S 6.3.12 1. $X =$ the distribution of the number of days a particular type of criminal trial will take 2. $X \sim N(21, 7)$ 3. The probability that a randomly selected trial will last more than 24 days is 0.3336. 4. 22.77 Q 6.3.13 Terri Vogel, an amateur motorcycle racer, averages 129.71 seconds per 2.5 mile lap (in a seven-lap race) with a standard deviation of 2.28 seconds. The distribution of her race times is normally distributed. We are interested in one of her randomly selected laps. 1. In words, define the random variable $X$. 2. $X \sim$ _____(_____,_____) 3. Find the percent of her laps that are completed in less than 130 seconds. 4. The fastest 3% of her laps are under _____. 5. The middle 80% of her laps are from _______ seconds to _______ seconds. Q 6.3.14 Thuy Dau, Ngoc Bui, Sam Su, and Lan Voung conducted a survey as to how long customers at Lucky claimed to wait in the checkout line until their turn. Let $X =$ time in line. Table displays the ordered real data (in minutes): 0.50 4.25 5 6 7.25 1.75 4.25 5.25 6 7.25 2 4.25 5.25 6.25 7.25 2.25 4.25 5.5 6.25 7.75 2.25 4.5 5.5 6.5 8 2.5 4.75 5.5 6.5 8.25 2.75 4.75 5.75 6.5 9.5 3.25 4.75 5.75 6.75 9.5 3.75 5 6 6.75 9.75 3.75 5 6 6.75 10.75 1. Calculate the sample mean and the sample standard deviation. 2. Construct a histogram. 3. Draw a smooth curve through the midpoints of the tops of the bars. 4. In words, describe the shape of your histogram and smooth curve. 5. Let the sample mean approximate $\mu$ and the sample standard deviation approximate $\sigma$. The distribution of $X$ can then be approximated by $X \sim$ _____(_____,_____) 6. Use the distribution in part e to calculate the probability that a person will wait fewer than 6.1 minutes. 7. Determine the cumulative relative frequency for waiting less than 6.1 minutes. 8. Why aren’t the answers to part f and part g exactly the same? 9. Why are the answers to part f and part g as close as they are? 10. If only ten customers has been surveyed rather than 50, do you think the answers to part f and part g would have been closer together or farther apart? Explain your conclusion. S 6.3.14 1. $\text{mean} = 5.51$, $s = 2.15$ 2. Check student's solution. 3. Check student's solution. 4. Check student's solution. 5. $X \sim N(5.51, 2.15)$ 6. 0.6029 7. The cumulative frequency for less than 6.1 minutes is 0.64. 8. 
The answers to part f and part g are not exactly the same, because the normal distribution is only an approximation to the real one. 9. The answers to part f and part g are close, because a normal distribution is an excellent approximation when the sample size is greater than 30. 10. The approximation would have been less accurate, because the smaller sample size means that the data does not fit normal curve as well. Q 6.3.15 Suppose that Ricardo and Anita attend different colleges. Ricardo’s GPA is the same as the average GPA at his school. Anita’s GPA is 0.70 standard deviations above her school average. In complete sentences, explain why each of the following statements may be false. 1. Ricardo’s actual GPA is lower than Anita’s actual GPA. 2. Ricardo is not passing because his z-score is zero. 3. Anita is in the 70th percentile of students at her college. Q 6.3.16 Table shows a sample of the maximum capacity (maximum number of spectators) of sports stadiums. The table does not include horse-racing or motor-racing stadiums. 40,000 40,000 45,050 45,500 46,249 48,134 49,133 50,071 50,096 50,466 50,832 51,100 51,500 51,900 52,000 52,132 52,200 52,530 52,692 53,864 54,000 55,000 55,000 55,000 55,000 55,000 55,000 55,082 57,000 58,008 59,680 60,000 60,000 60,492 60,580 62,380 62,872 64,035 65,000 65,050 65,647 66,000 66,161 67,428 68,349 68,976 69,372 70,107 70,585 71,594 72,000 72,922 73,379 74,500 75,025 76,212 78,000 80,000 80,000 82,300 1. Calculate the sample mean and the sample standard deviation for the maximum capacity of sports stadiums (the data). 2. Construct a histogram. 3. Draw a smooth curve through the midpoints of the tops of the bars of the histogram. 4. In words, describe the shape of your histogram and smooth curve. 5. Let the sample mean approximate $\mu$ and the sample standard deviation approximate $\sigma$. The distribution of $X$ can then be approximated by $X \sim$ _____(_____,_____). 6. Use the distribution in part e to calculate the probability that the maximum capacity of sports stadiums is less than 67,000 spectators. 7. Determine the cumulative relative frequency that the maximum capacity of sports stadiums is less than 67,000 spectators. Hint: Order the data and count the sports stadiums that have a maximum capacity less than 67,000. Divide by the total number of sports stadiums in the sample. 8. Why aren’t the answers to part f and part g exactly the same? S 6.3.16 1. $\text{mean} = 60,136$, $s = 10,468$ 2. Answers will vary. 3. Answers will vary. 4. Answers will vary. 5. $X \sim N(60136, 10468)$ 6. 0.7440 7. The cumulative relative frequency is $\frac{43}{60} = 0.717$. 8. The answers for part f and part g are not the same, because the normal distribution is only an approximation. Q 6.3.17 An expert witness for a paternity lawsuit testifies that the length of a pregnancy is normally distributed with a mean of 280 days and a standard deviation of 13 days. An alleged father was out of the country from 240 to 306 days before the birth of the child, so the pregnancy would have been less than 240 days or more than 306 days long if he was the father. The birth was uncomplicated, and the child needed no medical intervention. What is the probability that he was NOT the father? What is the probability that he could be the father? Calculate the z-scores first, and then use those to calculate the probability. Q 6.3.18 A NUMMI assembly line, which has been operating since 1984, has built an average of 6,000 cars and trucks a week. 
Generally, 10% of the cars were defective coming off the assembly line. Suppose we draw a random sample of n = 100 cars. Let X represent the number of defective cars in the sample. What can we say about X in regard to the 68-95-99.7 empirical rule (one standard deviation, two standard deviations and three standard deviations from the mean are being referred to)? Assume a normal distribution for the defective cars in the sample.

S 6.3.18

• $n = 100; p = 0.1; q = 0.9$
• $\mu = np = (100)(0.10) = 10$
• $\sigma = \sqrt{npq} = \sqrt{(100)(0.1)(0.9)} = 3$

1. $z = \pm 1$: $x_{1} = \mu + z\sigma = 10 + 1(3) = 13$ and $x_{2} = \mu - z\sigma = 10 - 1(3) = 7$. About 68% of the defective cars will fall between seven and 13.
2. $z = \pm 2$: $x_{1} = \mu + z\sigma = 10 + 2(3) = 16$ and $x_{2} = \mu - z\sigma = 10 - 2(3) = 4$. About 95% of the defective cars will fall between four and 16.
3. $z = \pm 3$: $x_{1} = \mu + z\sigma = 10 + 3(3) = 19$ and $x_{2} = \mu - z\sigma = 10 - 3(3) = 1$. About 99.7% of the defective cars will fall between one and 19.

Q 6.3.19

We flip a coin 100 times ($n = 100$) and note that it only comes up heads 20% ($p = 0.20$) of the time. The mean and standard deviation for the number of times the coin lands on heads is $\mu = 20$ and $\sigma = 4$ (verify the mean and standard deviation). Solve the following:

1. There is about a 68% chance that the number of heads will be somewhere between ___ and ___.
2. There is about a ____ chance that the number of heads will be somewhere between 12 and 28.
3. There is about a ____ chance that the number of heads will be somewhere between eight and 32.

Q 6.3.20

A \$1 scratch off lotto ticket will be a winner one out of five times. Out of a shipment of $n = 190$ lotto tickets, find the probability that there are

1. somewhere between 34 and 54 prizes.
2. somewhere between 54 and 64 prizes.
3. more than 64 prizes.

S 6.3.20

• $n = 190; p = \frac{1}{5} = 0.2; q = 0.8$
• $\mu = np = (190)(0.2) = 38$
• $\sigma = \sqrt{npq} = \sqrt{(190)(0.2)(0.8)} = 5.5136$

1. $P(34 < x < 54) = \text{normalcdf}(34,54,38,5.5136) = 0.7641$
2. $P(54 < x < 64) = \text{normalcdf}(54,64,38,5.5136) = 0.0018$
3. $P(x > 64) = \text{normalcdf}(64,10^{99},38,5.5136) = 0.0000012$ (approximately 0)

Q 6.3.22

Facebook provides a variety of statistics on its Web site that detail the growth and popularity of the site. On average, 28 percent of 18 to 34 year olds check their Facebook profiles before getting out of bed in the morning. Suppose this percentage follows a normal distribution with a standard deviation of five percent.

1. Find the probability that the percent of 18 to 34-year-olds who check Facebook before getting out of bed in the morning is at least 30.
2. Find the 95th percentile, and express it in a sentence.
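The solutions in this chapter quote TI-83/84 commands such as normalcdf and invNorm. As a rough cross-check only (it is not part of the original exercise set, and it assumes Python with scipy.stats is available), the same numbers can be reproduced with the normal CDF and its inverse:

```python
# Cross-checking a few chapter 6 answers with scipy.stats.norm,
# an assumed stand-in for the TI-83/84 normalcdf/invNorm commands.
from math import sqrt
from scipy.stats import norm

# S 6.3.18: normal approximation to the number of defective cars,
# n = 100, p = 0.1, so mu = np = 10 and sigma = sqrt(npq) = 3.
mu, sigma = 100 * 0.1, sqrt(100 * 0.1 * 0.9)
for k in (1, 2, 3):
    lo, hi = mu - k * sigma, mu + k * sigma
    prob = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
    print(f"within {k} sd: ({lo:.0f}, {hi:.0f}), probability {prob:.3f}")
# prints roughly 0.683, 0.954, 0.997 -- the 68-95-99.7 rule.

# S 6.3.20: lotto tickets, mu = 190 * 0.2 = 38, sigma = sqrt(190 * 0.2 * 0.8).
mu, sigma = 190 * 0.2, sqrt(190 * 0.2 * 0.8)
print(norm.cdf(54, mu, sigma) - norm.cdf(34, mu, sigma))  # about 0.7641
print(norm.cdf(64, mu, sigma) - norm.cdf(54, mu, sigma))  # about 0.0018
print(norm.sf(64, mu, sigma))                             # about 0.0000012

# Percentile questions use the inverse CDF, e.g. the 90th percentile of
# recovery times in Q 6.3.2 (mean 5.3 days, sd 2.1 days):
print(norm.ppf(0.90, 5.3, 2.1))                           # about 7.99
```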
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. 7.2: The Central Limit Theorem for Sample Means (Averages) Previously, De Anza statistics students estimated that the amount of change daytime statistics students carry is exponentially distributed with a mean of $0.88. Suppose that we randomly pick 25 daytime statistics students. 1. In words, $Χ =$ ____________ 2. $Χ \sim$ _____(_____,_____) 3. In words, $\bar{X} =$ ____________ 4. $\bar{X} \sim$ ______ (______, ______) 5. Find the probability that an individual had between$0.80 and $1.00. Graph the situation, and shade in the area to be determined. 6. Find the probability that the average of the 25 students was between$0.80 and $1.00. Graph the situation, and shade in the area to be determined. 7. Explain why there is a difference in part e and part f. S 7.2.1 1. $Χ =$ amount of change students carry 2. $Χ \sim E(0.88, 0.88)$ 3. $\bar{X} =$ average amount of change carried by a sample of 25 students. 4. $\bar{X} \sim N(0.88, 0.176)$ 5. 0.0819 6. 0.1882 7. The distributions are different. Part a is exponential and part b is normal. Q 7.2.2 Suppose that the distance of fly balls hit to the outfield (in baseball) is normally distributed with a mean of 250 feet and a standard deviation of 50 feet. We randomly sample 49 fly balls. 1. If $\bar{X} =$ average distance in feet for 49 fly balls, then $\bar{X} \sim$ _______(_______,_______) 2. What is the probability that the 49 balls traveled an average of less than 240 feet? Sketch the graph. Scale the horizontal axis for $\bar{X}$. Shade the region corresponding to the probability. Find the probability. 3. Find the 80th percentile of the distribution of the average of 49 fly balls. Q 7.2.3 According to the Internal Revenue Service, the average length of time for an individual to complete (keep records for, learn, prepare, copy, assemble, and send) IRS Form 1040 is 10.53 hours (without any attached schedules). The distribution is unknown. Let us assume that the standard deviation is two hours. Suppose we randomly sample 36 taxpayers. 1. In words, $Χ =$ _____________ 2. In words, $\bar{X} =$ _____________ 3. $\bar{X} \sim$ _____(_____,_____) 4. Would you be surprised if the 36 taxpayers finished their Form 1040s in an average of more than 12 hours? Explain why or why not in complete sentences. 5. Would you be surprised if one taxpayer finished his or her Form 1040 in more than 12 hours? In a complete sentence, explain why. S 7.2.3 1. length of time for an individual to complete IRS form 1040, in hours. 2. mean length of time for a sample of 36 taxpayers to complete IRS form 1040, in hours. 3. $N\left(10.53, \frac{1}{3}\right)$ 4. Yes. I would be surprised, because the probability is almost 0. 5. No. I would not be totally surprised because the probability is 0.2312 Q 7.2.4 Suppose that a category of world-class runners are known to run a marathon (26 miles) in an average of 145 minutes with a standard deviation of 14 minutes. Consider 49 of the races. Let $\bar{X}$ the average of the 49 races. 1. $\bar{X} \sim$ _____(_____,_____) 2. Find the probability that the runner will average between 142 and 146 minutes in these 49 marathons. 3. Find the 80th percentile for the average of these 49 marathons. 4. Find the median of the average running times. Q 7.2.5 The length of songs in a collector’s iTunes album collection is uniformly distributed from two to 3.5 minutes. Suppose we randomly pick five albums from the collection. 
There are a total of 43 songs on the five albums. 1. In words, $Χ =$ _________ 2. $\bar{X} \sim$ _____________ 3. In words, $\bar{X} =$ _____________ 4. $\bar{X} \sim$ _____(_____,_____) 5. Find the first quartile for the average song length. 6. The IQR(interquartile range) for the average song length is from _______–_______. S 7.2.6 1. the length of a song, in minutes, in the collection 2. $U(2, 3.5)$ 3. the average length, in minutes, of the songs from a sample of five albums from the collection 4. $N(2.75, 0.0220)$ 5. 2.74 minutes 6. 0.03 minutes Q 7.2.7 In 1940 the average size of a U.S. farm was 174 acres. Let’s say that the standard deviation was 55 acres. Suppose we randomly survey 38 farmers from 1940. 1. In words, $Χ =$ _____________ 2. In words, $\bar{X} =$ _____________ 3. $\bar{X} \sim$ _____(_____,_____) 4. The IQR for $\bar{X}$ is from _______ acres to _______ acres. Q 7.2.8 Determine which of the following are true and which are false. Then, in complete sentences, justify your answers. 1. When the sample size is large, the mean of $\bar{X}$ is approximately equal to the mean of $Χ$. 2. When the sample size is large, $\bar{X}$ is approximately normally distributed. 3. When the sample size is large, the standard deviation of $\bar{X}$ is approximately the same as the standard deviation of $Χ$. S 7.2.8 1. True. The mean of a sampling distribution of the means is approximately the mean of the data distribution. 2. True. According to the Central Limit Theorem, the larger the sample, the closer the sampling distribution of the means becomes normal. 3. The standard deviation of the sampling distribution of the means will decrease making it approximately the same as the standard deviation of $Χ$ as the sample size increases. Q 7.2.9 The percent of fat calories that a person in America consumes each day is normally distributed with a mean of about 36 and a standard deviation of about ten. Suppose that 16 individuals are randomly chosen. Let $\bar{X} =$ average percent of fat calories. 1. $\bar{X} \sim$ ______(______, ______) 2. For the group of 16, find the probability that the average percent of fat calories consumed is more than five. Graph the situation and shade in the area to be determined. 3. Find the first quartile for the average percent of fat calories. Q 7.2.10 The distribution of income in some Third World countries is considered wedge shaped (many very poor people, very few middle income people, and even fewer wealthy people). Suppose we pick a country with a wedge shaped distribution. Let the average salary be$2,000 per year with a standard deviation of $8,000. We randomly survey 1,000 residents of that country. 1. In words, $Χ =$ _____________ 2. In words, $\bar{X} =$ _____________ 3. $\bar{X} \sim$ _____(_____,_____) 4. How is it possible for the standard deviation to be greater than the average? 5. Why is it more likely that the average of the 1,000 residents will be from$2,000 to $2,100 than from$2,100 to $2,200? S 7.2.10 1. $Χ =$ the yearly income of someone in a third world country 2. the average salary from samples of 1,000 residents of a third world country 3. $\bar{X} \sim N\left(2000, \frac{8000}{\sqrt{1000}}\right)$ 4. Very wide differences in data values can have averages smaller than standard deviations. 5. The distribution of the sample mean will have higher probabilities closer to the population mean. $P(2000 < \bar{X} < 2100) = 0.1537$ $P(2100 < \bar{X} < 2200) = 0.1317$ Q 7.2.11 Which of the following is NOT TRUE about the distribution for averages? 1. 
The mean, median, and mode are equal. 2. The area under the curve is one. 3. The curve never touches the x-axis. 4. The curve is skewed to the right. Q 7.2.12 The cost of unleaded gasoline in the Bay Area once followed an unknown distribution with a mean of$4.59 and a standard deviation of $0.10. Sixteen gas stations from the Bay Area are randomly chosen. We are interested in the average cost of gasoline for the 16 gas stations. The distribution to use for the average cost of gasoline for the 16 gas stations is: 1. $\bar{X} \sim N(4.59, 0.10)$ 2. $\bar{X} \sim N\left(4.59, \frac{0.10}{\sqrt{16}}\right)$ 3. $\bar{X} \sim N\left(4.59, \frac{16}{0.10}\right)$ 4. $\bar{X} \sim N\left(4.59, \frac{\sqrt{16}}{0.10}\right)$ S 7.2.12 b 7.3: The Central Limit Theorem for Sums Q 7.3.1 Which of the following is NOT TRUE about the theoretical distribution of sums? 1. The mean, median and mode are equal. 2. The area under the curve is one. 3. The curve never touches the x-axis. 4. The curve is skewed to the right. Q 7.3.2 Suppose that the duration of a particular type of criminal trial is known to have a mean of 21 days and a standard deviation of seven days. We randomly sample nine trials. 1. In words, $\sum X =$ ______________ 2. $\sum X \sim$ _____(_____,_____) 3. Find the probability that the total length of the nine trials is at least 225 days. 4. Ninety percent of the total of nine of these types of trials will last at least how long? S 7.3.2 1. the total length of time for nine criminal trials 2. $N(189, 21)$ 3. 0.0432 4. 162.09; ninety percent of the total nine trials of this type will last 162 days or more. Q 7.3.3 Suppose that the weight of open boxes of cereal in a home with children is uniformly distributed from two to six pounds with a mean of four pounds and standard deviation of 1.1547. We randomly survey 64 homes with children. 1. In words, $X =$_____________ 2. The distribution is _______. 3. In words, $\sum X =$ _______________ 4. $\sum X \sim$ _____(_____,_____) 5. Find the probability that the total weight of open boxes is less than 250 pounds. 6. Find the 35th percentile for the total weight of open boxes of cereal. Q 7.3.4 Salaries for teachers in a particular elementary school district are normally distributed with a mean of$44,000 and a standard deviation of $6,500. We randomly survey ten teachers from that district. 1. In words, $X =$ ______________ 2. $X \sim$ _____(_____,_____) 3. In words, $\sum X =$ _____________ 4. $\sum X \sim$ _____(_____,_____) 5. Find the probability that the teachers earn a total of over$400,000. 6. Find the 90th percentile for an individual teacher's salary. 7. Find the 90th percentile for the sum of ten teachers' salary. 8. If we surveyed 70 teachers instead of ten, graphically, how would that change the distribution in part d? 9. If each of the 70 teachers received a $3,000 raise, graphically, how would that change the distribution in part b? S 7.3.4 1. $X =$ the salary of one elementary school teacher in the district 2. $X \sim N(44,000, 6,500)$ 3. $\sum X \sim$ sum of the salaries of ten elementary school teachers in the sample 4. $\sum X \sim N(44000, 20554.80)$ 5. 0.9742 6.$52,330.09 7. 466,342.04 8. Sampling 70 teachers instead of ten would cause the distribution to be more spread out. It would be a more symmetrical normal curve. 1. almost zero 2. 0.1587 3. 0.0943 4. unknown a 2. \$46,634 Q 7.4.15 The average length of a maternity stay in a U.S. hospital is said to be 2.4 days with a standard deviation of 0.9 days. 
We randomly survey 80 women who recently bore children in a U.S. hospital. 1. In words, $X =$ _____________ 2. In words, $\bar{X} =$ ___________________ 3. $\bar{X} \sim$ _____(_____,_____) 4. In words, $\sum X =$ _______________ 5. $\sum X \sim$ _____(_____,_____) 6. Is it likely that an individual stayed more than five days in the hospital? Why or why not? 7. Is it likely that the average stay for the 80 women was more than five days? Why or why not? 8. Which is more likely: 1. An individual stayed more than five days. 2. the average stay of 80 women was more than five days. 9. If we were to sum up the women’s stays, is it likely that, collectively they spent more than a year in the hospital? Why or why not? For each problem, wherever possible, provide graphs and use the calculator. Q 7.4.16 NeverReady batteries has engineered a newer, longer lasting AAA battery. The company claims this battery has an average life span of 17 hours with a standard deviation of 0.8 hours. Your statistics class questions this claim. As a class, you randomly select 30 batteries and find that the sample mean life span is 16.7 hours. If the process is working properly, what is the probability of getting a random sample of 30 batteries in which the sample mean lifetime is 16.7 hours or less? Is the company’s claim reasonable? S 7.4.16 • We have $\mu = 17, \sigma = 0.8, \bar{x} = 16.7$, and $n = 30$. To calculate the probability, we usenormalcdf(lower, upper, $\mu$, $\frac{\sigma}{\sqrt{n}}$) = normalcdf$\left(E–99,16.7,17,\frac{0.8}{\sqrt{30}}\right) = 0.0200$. • If the process is working properly, then the probability that a sample of 30 batteries would have at most 16.7 lifetime hours is only 2%. Therefore, the class was justified to question the claim. Q 7.4.17 Men have an average weight of 172 pounds with a standard deviation of 29 pounds. 1. Find the probability that 20 randomly selected men will have a sum weight greater than 3600 lbs. 2. If 20 men have a sum weight greater than 3500 lbs, then their total weight exceeds the safety limits for water taxis. Based on (a), is this a safety concern? Explain. Q 7.4.18 M&M candies large candy bags have a claimed net weight of 396.9 g. The standard deviation for the weight of the individual candies is 0.017 g. The following table is from a stats experiment conducted by a statistics class. Red Orange Yellow Brown Blue Green 0.751 0.735 0.883 0.696 0.881 0.925 0.841 0.895 0.769 0.876 0.863 0.914 0.856 0.865 0.859 0.855 0.775 0.881 0.799 0.864 0.784 0.806 0.854 0.865 0.966 0.852 0.824 0.840 0.810 0.865 0.859 0.866 0.858 0.868 0.858 1.015 0.857 0.859 0.848 0.859 0.818 0.876 0.942 0.838 0.851 0.982 0.868 0.809 0.873 0.863 0.803 0.865 0.809 0.888 0.932 0.848 0.890 0.925 0.842 0.940 0.878 0.793 0.832 0.833 0.905 0.977 0.807 0.845 0.850 0.841 0.852 0.830 0.932 0.778 0.856 0.833 0.814 0.842 0.881 0.791 0.778 0.818 0.810 0.786 0.864 0.881 0.853 0.825 0.864 0.855 0.873 0.942 0.880 0.825 0.882 0.869 0.931 0.912 0.887 The bag contained 465 candies and he listed weights in the table came from randomly selected candies. Count the weights. 1. Find the mean sample weight and the standard deviation of the sample weights of candies in the table. 2. Find the sum of the sample weights in the table and the standard deviation of the sum the of the weights. 3. If 465 M&Ms are randomly selected, find the probability that their weights sum to at least 396.9. 4. Is the Mars Company’s M&M labeling accurate? S7.4.19 1. For the sample, we have $n = 100, \bar{x} = 0.862, s = 0.05$ 2. 
$\sum \bar{x} = 85.65, \sum s = 5.18$ 3. normalcdf$(396.9,E99,(465)(0.8565),(0.05)\left(\sqrt{465}\right)) \approx 1$ 4. Since the probability of a sample of size 465 having at least a mean sum of 396.9 is approximately 1, we can conclude that Mars is correctly labeling their M&M packages. Q7.4.20 The Screw Right Company claims their $\frac{3}{4}$ inch screws are within $\pm 0.23$ of the claimed mean diameter of 0.750 inches with a standard deviation of 0.115 inches. The following data were recorded. 0.757 0.723 0.754 0.737 0.757 0.741 0.722 0.741 0.743 0.742 0.740 0.758 0.724 0.739 0.736 0.735 0.760 0.750 0.759 0.754 0.744 0.758 0.765 0.756 0.738 0.742 0.758 0.757 0.724 0.757 0.744 0.738 0.763 0.756 0.760 0.768 0.761 0.742 0.734 0.754 0.758 0.735 0.740 0.743 0.737 0.737 0.725 0.761 0.758 0.756 The screws were randomly selected from the local home repair store. 1. Find the mean diameter and standard deviation for the sample 2. Find the probability that 50 randomly selected screws will be within the stated tolerance levels. Is the company’s diameter claim plausible? Q 7.4.21 Your company has a contract to perform preventive maintenance on thousands of air-conditioners in a large city. Based on service records from previous years, the time that a technician spends servicing a unit averages one hour with a standard deviation of one hour. In the coming week, your company will service a simple random sample of 70 units in the city. You plan to budget an average of 1.1 hours per technician to complete the work. Will this be enough time? Q 7.4.22 Use normalcdf $(E–99,1.1,1,\frac{1}{\sqrt{70}}) = 0.7986.$ This means that there is an 80% chance that the service time will be less than 1.1 hours. It could be wise to schedule more time since there is an associated 20% chance that the maintenance time will be greater than 1.1 hours. Q 7.4.23 A typical adult has an average IQ score of 105 with a standard deviation of 20. If 20 randomly selected adults are given an IQ test, what is the probability that the sample mean scores will be between 85 and 125 points? Q 7.4.24 Certain coins have an average weight of 5.201 grams with a standard deviation of 0.065 g. If a vending machine is designed to accept coins whose weights range from 5.111 g to 5.291 g, what is the expected number of rejected coins when 280 randomly selected coins are inserted into the machine? S 7.4.25 Since we have normalcdf$(5.111,5.291,5.201,\frac{0.065}{\sqrt{280}}) \approx 1$, we can conclude that practically all the coins are within the limits, therefore, there should be no rejected coins out of a well selected sample of size 280. Chapter Review The central limit theorem can be used to illustrate the law of large numbers. The law of large numbers states that the larger the sample size you take from a population, the closer the sample mean $\bar{x}$ gets to $\mu$. Use the following information to answer the next ten exercises: A manufacturer produces 25-pound lifting weights. The lowest actual weight is 24 pounds, and the highest is 26 pounds. Each weight is equally likely so the distribution of weights is uniform. A sample of 100 weights is taken. Exercise 7.4.7 1. What is the distribution for the weights of one 25-pound lifting weight? What is the mean and standard deivation? 2. What is the distribution for the mean weight of 100 25-pound lifting weights? 3. Find the probability that the mean actual weight for the 100 weights is less than 24.9. Answer 1. $U(24, 26), 25, 0.5774$ 2. $N(25, 0.0577)$ 3. 
0.0416 Exercise 7.4.8 Draw the graph from Exercise Exercise 7.4.9 Find the probability that the mean actual weight for the 100 weights is greater than 25.2. Answer 0.0003 Exercise 7.4.10 Draw the graph from Exercise Exercise 7.4.11 Find the 90th percentile for the mean weight for the 100 weights. Answer 25.07 Exercise 7.4.12 Draw the graph from Exercise Exercise 7.4.13 1. What is the distribution for the sum of the weights of 100 25-pound lifting weights? 2. Find $P(\sum x < 2,450)$. Answer 1. $N(2,500, 5.7735)$ 2. 0 Exercise 7.4.14 Draw the graph from Exercise Exercise 7.4.15 Find the 90th percentile for the total weight of the 100 weights. Answer 2,507.40 Exercise 7.4.16 Draw the graph from Exercise Use the following information to answer the next five exercises: The length of time a particular smartphone's battery lasts follows an exponential distribution with a mean of ten months. A sample of 64 of these smartphones is taken. Exercise 7.4.17 1. What is the standard deviation? 2. What is the parameter $m$? Answer 1. 10 2. $\dfrac{1}{10}$ Exercise 7.4.18 What is the distribution for the length of time one battery lasts? Exercise 7.4.19 What is the distribution for the mean length of time 64 batteries last? Answer $N\left(10, \dfrac{10}{8}\right)$ Exercise 7.4.20 What is the distribution for the total length of time 64 batteries last? Exercise 7.4.21 Find the probability that the sample mean is between seven and 11. Answer 0.7799 Exercise 7.4.22 Find the 80th percentile for the total length of time 64 batteries last. Exercise 7.4.23 Find the $IQR$ for the mean amount of time 64 batteries last. Answer 1.69 Exercise 7.4.24 Find the middle 80% for the total amount of time 64 batteries last. Use the following information to answer the next eight exercises: A uniform distribution has a minimum of six and a maximum of ten. A sample of 50 is taken. Exercise 7.4.25 Find $P(\sum x > 420)$ Answer 0.0072 Exercise 7.4.26 Find the 90th percentile for the sums. Exercise 7.4.27 Find the 15th percentile for the sums. Answer 391.54 Exercise 7.4.28 Find the first quartile for the sums. Exercise 7.4.29 Find the third quartile for the sums. Answer 405.51 Exercise 7.4.30 Find the 80th percentile for the sums.
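The review exercises above (7.4.7 through 7.4.30) all come down to the same two facts: the sample mean is approximately $N\left(\mu, \frac{\sigma}{\sqrt{n}}\right)$ and the sample sum is approximately $N\left(n\mu, \sqrt{n}\,\sigma\right)$. The short sketch below, which assumes Python with scipy.stats rather than the calculator commands used in the printed solutions, reproduces several of the stated answers as a cross-check:

```python
# A minimal CLT cross-check for the chapter review exercises, assuming scipy.stats.
from math import sqrt
from scipy.stats import norm

# Lifting weights uniform on (24, 26): mu = 25, sigma = 2/sqrt(12) ~ 0.5774, n = 100.
mu, sigma, n = 25, 2 / sqrt(12), 100
se = sigma / sqrt(n)                        # standard error of the sample mean
print(norm.cdf(24.9, mu, se))               # Exercise 7.4.7, part 3: about 0.0416
print(norm.sf(25.2, mu, se))                # Exercise 7.4.9: about 0.0003
print(norm.ppf(0.90, mu, se))               # Exercise 7.4.11: about 25.07

# Sum of the 100 weights: N(n*mu, sqrt(n)*sigma) = N(2500, 5.7735).
print(norm.ppf(0.90, n * mu, sqrt(n) * sigma))   # Exercise 7.4.15: about 2507.40

# Battery lifetimes: exponential with mean 10, so sigma = 10; n = 64 batteries.
mu, sigma, n = 10, 10, 64
se = sigma / sqrt(n)
print(norm.cdf(11, mu, se) - norm.cdf(7, mu, se))  # Exercise 7.4.21: about 0.7799
```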
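The confidence-interval exercises that follow (Sections 8.2 and 8.3) rely on a z-interval when the population standard deviation is known and on the Student's t-interval when only the sample standard deviation is available. As a hedged illustration (assuming Python with scipy.stats; the helper names z_interval and t_interval are invented for this sketch and are not part of the text or of any library), the following reproduces the intervals reported in S 8.2.1 and S 8.3.4:

```python
# A rough sketch of the z- and t-intervals used in the chapter 8 exercises,
# assuming scipy.stats as a stand-in for the TI ZInterval/TInterval commands.
import statistics
from math import sqrt
from scipy.stats import norm, t

def z_interval(xbar, sigma, n, cl):
    """Confidence interval for mu when the population sigma is known."""
    z = norm.ppf(1 - (1 - cl) / 2)
    ebm = z * sigma / sqrt(n)          # error bound for the mean (EBM)
    return xbar - ebm, xbar + ebm

def t_interval(xbar, s, n, cl):
    """Confidence interval for mu when only the sample s is available."""
    tcrit = t.ppf(1 - (1 - cl) / 2, df=n - 1)
    ebm = tcrit * s / sqrt(n)
    return xbar - ebm, xbar + ebm

# Q/S 8.2.1 (heights of male Swedes, known sigma = 3, n = 48, xbar = 71):
print(z_interval(71, 3, 48, 0.95))     # roughly (70.151, 71.849)

# Q/S 8.3.4 (tranquilizer effective times, sample data, t-interval):
data = [2.7, 2.8, 3.0, 2.3, 2.3, 2.2, 2.8, 2.1, 2.4]
print(t_interval(statistics.mean(data), statistics.stdev(data), len(data), 0.95))
# roughly (2.27, 2.76)
```

Any small differences from the printed answers come from rounding the sample statistics before hand calculation.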
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. 8.2: A Single Population Mean using the Normal Distribution Q 8.2.1 Among various ethnic groups, the standard deviation of heights is known to be approximately three inches. We wish to construct a 95% confidence interval for the mean height of male Swedes. Forty-eight male Swedes are surveyed. The sample mean is 71 inches. The sample standard deviation is 2.8 inches. 1. $\bar{x}$ =________ 2. $\sigma$ =________ 3. $n$ =________ 1. In words, define the random variables $X$ and $\bar{X}$. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 95% confidence interval for the population mean height of male Swedes. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. What will happen to the error bound obtained if 1,000 male Swedes are surveyed instead of 48? Why? S 8.2.1 1. 71 2. 3 3. 48 1. X is the height of a Swedish male, and is the mean height from a sample of 48 Swedish males. 2. Normal. We know the standard deviation for the population, and the sample size is greater than 30. 1. CI: (70.151, 71.849) ii. EBM = 0.849 e. The error bound will decrease in size, because the sample size increased. Recall, when all factors remain unchanged, an increase in sample size decreases variability. Thus, we do not need as large an interval to capture the true population mean. Q 8.2.2 Announcements for 84 upcoming engineering conferences were randomly picked from a stack of IEEE Spectrum magazines. The mean length of the conferences was 3.94 days, with a standard deviation of 1.28 days. Assume the underlying population is normal. 1. In words, define the random variables $X$ and $\bar{X}$. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 95% confidence interval for the population mean length of engineering conferences. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. Q 8.2.3 Suppose that an accounting firm does a study to determine the time needed to complete one person’s tax forms. It randomly surveys 100 people. The sample mean is 23.6 hours. There is a known standard deviation of 7.0 hours. The population distribution is assumed to be normal. 1. $\bar{x} =$ ________ 2. $\sigma =$ ________ 3. $n =$ ________ 1. In words, define the random variables $X$ and $\bar{X}$. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 95% confidence interval for the population mean time to complete the tax forms. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. If the firm wished to increase its level of confidence and keep the error bound the same by taking another survey, what changes should it make? 5. If the firm did another survey, kept the error bound the same, and only surveyed 49 people, what would happen to the level of confidence? Why? 6. Suppose that the firm decided that it needed to be at least 96% confident of the population mean length of time to within one hour. How would the number of people the firm surveys change? Why? S 8.2.3 1. $\bar{x} = 23.6$ 2. $\sigma = 7$ 3. $n = 100$ 1. $X$ is the time needed to complete an individual tax form. $\bar{X}$ is the mean time to complete tax forms from a sample of 100 customers. 2. $N\left(23.6, \frac{7}{\sqrt{100}}\right)$ because we know sigma. 1. (22.228, 24.972) 2. $EBM = 1.372$ 3. It will need to change the sample size. 
The firm needs to determine what the confidence level should be, then apply the error bound formula to determine the necessary sample size. 4. The confidence level would increase as a result of a larger interval. Smaller sample sizes result in more variability. To capture the true population mean, we need to have a larger interval. 5. According to the error bound formula, the firm needs to survey 206 people. Since we increase the confidence level, we need to increase either our error bound or the sample size. Q 8.2.4 A sample of 16 small bags of the same brand of candies was selected. Assume that the population distribution of bag weights is normal. The weight of each bag was then recorded. The mean weight was two ounces with a standard deviation of 0.12 ounces. The population standard deviation is known to be 0.1 ounce. 1. $\bar{x} =$________ 2. $\sigma =$________ 3. $s_{x} =$________ 1. In words, define the random variable $X$. 2. In words, define the random variable $\bar{X}$. 3. Which distribution should you use for this problem? Explain your choice. 4. Construct a 90% confidence interval for the population mean weight of the candies. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 5. Construct a 98% confidence interval for the population mean weight of the candies. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 6. In complete sentences, explain why the confidence interval in part f is larger than the confidence interval in part e. 7. In complete sentences, give an interpretation of what the interval in part f means. Q 8.2.5 A camp director is interested in the mean number of letters each child sends during his or her camp session. The population standard deviation is known to be 2.5. A survey of 20 campers is taken. The mean from the sample is 7.9 with a sample standard deviation of 2.8. 1. $\bar{x} =$________ 2. $\sigma =$________ 3. $x =$________ 1. Define the random variables $X$ and $\bar{X}$ in words. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 90% confidence interval for the population mean number of letters campers send home. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. What will happen to the error bound and confidence interval if 500 campers are surveyed? Why? S 8.2.5 1. 7.9 2. 2.5 3. 20 1. $X$ is the number of letters a single camper will send home. $\bar{X}$ is the mean number of letters sent home from a sample of 20 campers. 2. $N 7.9\left(\frac{2.5}{\sqrt{20}}\right)$ 3. 1. CI: (6.98, 8.82) 1. $EBM: 0.92$  4. The error bound and confidence interval will decrease. Q 8.2.6 What is meant by the term “90% confident” when constructing a confidence interval for a mean? 1. If we took repeated samples, approximately 90% of the samples would produce the same confidence interval. 2. If we took repeated samples, approximately 90% of the confidence intervals calculated from those samples would contain the sample mean. 3. If we took repeated samples, approximately 90% of the confidence intervals calculated from those samples would contain the true value of the population mean. 4. If we took repeated samples, the sample mean would equal the population mean in approximately 90% of the samples. Q 8.2.7 The Federal Election Commission collects information about campaign contributions and disbursements for candidates and political committees each election cycle. 
During the 2012 campaign season, there were 1,619 candidates for the House of Representatives across the United States who received contributions from individuals. Table shows the total receipts from individuals for a random selection of 40 House candidates rounded to the nearest $100. The standard deviation for this data to the nearest hundred is $\sigma$ =$909,200. $3,600$1,243,900 $10,900$385,200 $581,500$7,400 $2,900$400 $3,714,500$632,500 $391,000$467,400 $56,800$5,800 $405,200$733,200 $8,000$468,700 $75,200$41,000 $13,300$9,500 $953,800$1,113,500 $1,109,300$353,900 $986,100$88,600 $378,200$13,200 $3,800$745,100 $5,800$3,072,100 $1,626,700$512,900 $2,309,200$6,600 $202,400$15,800 1. Find the point estimate for the population mean. 2. Using 95% confidence, calculate the error bound. 3. Create a 95% confidence interval for the mean total individual contributions. 4. Interpret the confidence interval in the context of the problem. S 8.2.7 1. $\bar{x} = 568,873$ 2. $CL = 0.95 \alpha = 1 - 0.95 = 0.05 z_{\frac{\alpha}{2}} = 1.96$ $EBM = z_{0.025 } \frac{\sigma}{\sqrt{n}} = 1.96 \frac{909200}{\sqrt{40}} = 281,764$ 3. $\bar{x}$ − EBM = 568,873 − 281,764 = 287,109 $\bar{x}$ + EBM = 568,873 + 281,764 = 850,637 Alternate solution: 1. Press STAT and arrow over to TESTS. 2. Arrow down to 7:ZInterval. 3. Press ENTER. 4. Arrow to Stats and press ENTER. 5. Arrow down and enter the following values: • $\sigma: 909,200$ • $\bar{x}: 568,873$ • $n: 40$ • $CL: 0.95$ 6. Arrow down to Calculate and press ENTER. 7. The confidence interval is ($287,114,$850,632). 8. Notice the small difference between the two solutions–these differences are simply due to rounding error in the hand calculations. 4. We estimate with 95% confidence that the mean amount of contributions received from all individuals by House candidates is between $287,109 and$850,637. Q 8.2.8 The American Community Survey (ACS), part of the United States Census Bureau, conducts a yearly census similar to the one taken every ten years, but with a smaller percentage of participants. The most recent survey estimates with 90% confidence that the mean household income in the U.S. falls between $69,720 and$69,922. Find the point estimate for mean U.S. household income and the error bound for mean U.S. household income. Q 8.2.9 The average height of young adult males has a normal distribution with standard deviation of 2.5 inches. You want to estimate the mean height of students at your college or university to within one inch with 93% confidence. How many male students must you measure? S 8.2.9 Use the formula for $EBM$, solved for $n$: $n = \frac{z^{2}\sigma^{2}}{EBM^{2}}$ From the statement of the problem, you know that $\sigma$ = 2.5, and you need $EBM = 1$. $z = z_{0.035} = 1.812$ (This is the value of $z$ for which the area under the density curve to the right of $z$ is 0.035.) $n = \frac{z^{2}\sigma^{2}}{EBM^{2}} = \frac{1.812^{2}2.5^{2}}{1^{2}} \approx 20.52$ You need to measure at least 21 male students to achieve your goal. 8.3: A Single Population Mean using the Student t Distribution Q 8.3.1 In six packages of “The Flintstones® Real Fruit Snacks” there were five Bam-Bam snack pieces. The total number of snack pieces in the six bags was 68. We wish to calculate a 96% confidence interval for the population proportion of Bam-Bam snack pieces. 1. Define the random variables $X$ and $P′$ in words. 2. Which distribution should you use for this problem? Explain your choice 3. Calculate $p′$. 4. 
Construct a 96% confidence interval for the population proportion of Bam-Bam snack pieces per bag. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 5. Do you think that six packages of fruit snacks yield enough data to give accurate results? Why or why not? Q 8.3.2 A random survey of enrollment at 35 community colleges across the United States yielded the following figures: 6,414; 1,550; 2,109; 9,350; 21,828; 4,300; 5,944; 5,722; 2,825; 2,044; 5,481; 5,200; 5,853; 2,750; 10,012; 6,357; 27,000; 9,414; 7,681; 3,200; 17,500; 9,200; 7,380; 18,314; 6,557; 13,713; 17,768; 7,493; 2,771; 2,861; 1,263; 7,285; 28,165; 5,080; 11,622. Assume the underlying population is normal. 1. $\bar{x} =$ __________ 2. $s_{x} =$ __________ 3. $n =$ __________ 4. $n – 1 =$ __________ 1. Define the random variables $X$ and $\bar{X}$ in words. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 95% confidence interval for the population mean enrollment at community colleges in the United States. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. What will happen to the error bound and confidence interval if 500 community colleges were surveyed? Why? S 8.3.2 1. 8629 2. 6944 3. 35 4. 34 1. $t_{34}$ 1. $CI: (6244, 11,014)$ 2. $EB = 2385$ 2. It will become smaller Q 8.3.3 Suppose that a committee is studying whether or not there is waste of time in our judicial system. It is interested in the mean amount of time individuals waste at the courthouse waiting to be called for jury duty. The committee randomly surveyed 81 people who recently served as jurors. The sample mean wait time was eight hours with a sample standard deviation of four hours. 1. $\bar{x} =$ __________ 2. $s_{x} =$ __________ 3. $n =$ __________ 4. $n – 1 =$ __________ 1. Define the random variables $X$ and $\bar{X}$ in words. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 95% confidence interval for the population mean time wasted. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. Explain in a complete sentence what the confidence interval means. Q 8.3.4 A pharmaceutical company makes tranquilizers. It is assumed that the distribution for the length of time they last is approximately normal. Researchers in a hospital used the drug on a random sample of nine patients. The effective period of the tranquilizer for each patient (in hours) was as follows: 2.7; 2.8; 3.0; 2.3; 2.3; 2.2; 2.8; 2.1; and 2.4. 1. $\bar{x} =$ __________ 2. $s_{x} =$ __________ 3. $n =$ __________ 4. $n – 1 =$ __________ 1. Define the random variable $X$ in words. 2. Define the random variable $\bar{X}$ in words. 3. Which distribution should you use for this problem? Explain your choice. 4. Construct a 95% confidence interval for the population mean length of time. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 5. What does it mean to be “95% confident” in this problem? S 8.3.4 1. $\bar{x} = 2.51$ 2. $s_{x} = 0.318$ 3. $n = 9$ 4. $n - 1 = 8$ 1. the effective length of time for a tranquilizer 2. the mean effective length of time of tranquilizers from a sample of nine patients 3. We need to use a Student’s-t distribution, because we do not know the population standard deviation. 1. $CI: (2.27, 2.76)$ 2. Check student's solution. 3. $EBM: 0.25$ 4. 
If we were to sample many groups of nine patients, 95% of the samples would contain the true population mean length of time. Q 8.3.5 Suppose that 14 children, who were learning to ride two-wheel bikes, were surveyed to determine how long they had to use training wheels. It was revealed that they used them an average of six months with a sample standard deviation of three months. Assume that the underlying population distribution is normal. 1. $\bar{x} =$ __________ 2. $s_{x} =$ __________ 3. $n =$ __________ 4. $n – 1 =$ __________ 1. Define the random variable $X$ in words. 2. Define the random variable $\bar{X}$ in words. 3. Which distribution should you use for this problem? Explain your choice. 4. Construct a 99% confidence interval for the population mean length of time using training wheels. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 5. Why would the error bound change if the confidence level were lowered to 90%? Q 8.3.6 The Federal Election Commission (FEC) collects information about campaign contributions and disbursements for candidates and political committees each election cycle. A political action committee (PAC) is a committee formed to raise money for candidates and campaigns. A Leadership PAC is a PAC formed by a federal politician (senator or representative) to raise money to help other candidates’ campaigns. The FEC has reported financial information for 556 Leadership PACs that operating during the 2011–2012 election cycle. The following table shows the total receipts during this cycle for a random selection of 20 Leadership PACs. $46,500.00$0 $40,966.50$105,887.20 $5,175.00$29,050.00 $19,500.00$181,557.20 $31,500.00$149,970.80 $2,555,363.20$12,025.00 $409,000.00$60,521.70 $18,000.00$61,810.20 $76,530.80$119,459.20 $0$63,520.00 $6,500.00$502,578.00 $705,061.10$708,258.90 $135,810.00$2,000.00 $2,000.00$0 $1,287,933.80$219,148.30 $\bar{x} = 251,854.23$ $s = 521,130.41$ Use this sample data to construct a 96% confidence interval for the mean amount of money raised by all Leadership PACs during the 2011–2012 election cycle. Use the Student's $t$-distribution. S 8.3.6 $\bar{x} = 251,854.23$ $s = 521,130.41$ Note that we are not given the population standard deviation, only the standard deviation of the sample. There are 30 measures in the sample, so $n = 30$, and $df = 30 - 1 = 29$ $CL = 0.96$, so $\alpha = 1 - CL = 1 - 0.96 = 0.04$ $\frac{\alpha}{2} = 0.02 t_{0.02} = t_{0.02} = 2.150$ $EBM = t_{\frac{\alpha}{2}}\left(\frac{s}{\sqrt{n}}\right) = 2.150\left(\frac{521,130.41}{\sqrt{30}}\right) - 204,561.66$ $\bar{x} - EBM = 251,854.23 - 204,561.66 = 47,292.57$ $\bar{x} + EBM = 251,854.23+ 204,561.66 = 456,415.89$ We estimate with 96% confidence that the mean amount of money raised by all Leadership PACs during the 2011–2012 election cycle lies between $47,292.57 and$456,415.89. Alternate Solution Enter the data as a list. Press STAT and arrow over to TESTS. Arrow down to 8:TInterval. Press ENTER. Arrow to Data and press ENTER. Arrow down and enter the name of the list where the data is stored. Enter Freq: 1 Enter C-Level: 0.96 Arrow down to Calculate and press Enter. The 96% confidence interval is ($47,262,$456,447). The difference between solutions arises from rounding differences. Forbes magazine published data on the best small firms in 2012. These were firms that had been publicly traded for at least a year, have a stock price of at least $5 per share, and have reported annual revenue between$5 million and $1 billion. 
The Table shows the ages of the corporate CEOs for a random sample of these firms. 48 58 51 61 56 59 74 63 53 50 59 60 60 57 46 55 63 57 47 55 57 43 61 62 49 67 67 55 55 49 Use this sample data to construct a 90% confidence interval for the mean age of CEO’s for these top small firms. Use the Student's t-distribution. Q 8.3.8 Unoccupied seats on flights cause airlines to lose revenue. Suppose a large airline wants to estimate its mean number of unoccupied seats per flight over the past year. To accomplish this, the records of 225 flights are randomly selected and the number of unoccupied seats is noted for each of the sampled flights. The sample mean is 11.6 seats and the sample standard deviation is 4.1 seats. 1. $\bar{x} =$ __________ 2. $s_{x} =$ __________ 3. $n =$ __________ 4. $n-1 =$ __________ 1. Define the random variables $X$ and $\bar{X}$ in words. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 92% confidence interval for the population mean number of unoccupied seats per flight. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. S 8.3.8 1. $\bar{x} =$ 2. $s_{x} =$ 3. $n =$ 4. $n-1 =$ 1. $X$ is the number of unoccupied seats on a single flight. $\bar{X}$ is the mean number of unoccupied seats from a sample of 225 flights. 2. We will use a Student’s $t$-distribution, because we do not know the population standard deviation. 1. $CI: (11.12 , 12.08)$ 2. Check student's solution. 3. $EBM$: 0.48 Q 8.3.9 In a recent sample of 84 used car sales costs, the sample mean was$6,425 with a standard deviation of $3,156. Assume the underlying distribution is approximately normal. 1. Which distribution should you use for this problem? Explain your choice. 2. Define the random variable $\bar{X}$ in words. 3. Construct a 95% confidence interval for the population mean cost of a used car. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. Explain what a “95% confidence interval” means for this study. Q 8.3.10 Six different national brands of chocolate chip cookies were randomly selected at the supermarket. The grams of fat per serving are as follows: 8; 8; 10; 7; 9; 9. Assume the underlying distribution is approximately normal. 1. Construct a 90% confidence interval for the population mean grams of fat per serving of chocolate chip cookies sold in supermarkets. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 2. If you wanted a smaller error bound while keeping the same level of confidence, what should have been changed in the study before it was done? 3. Go to the store and record the grams of fat per serving of six brands of chocolate chip cookies. 4. Calculate the mean. 5. Is the mean within the interval you calculated in part a? Did you expect it to be? Why or why not? S 8.3.10 1. CI: (7.64 , 9.36) 2. $EBM: 0.86$ 1. The sample should have been increased. 2. Answers will vary. 3. Answers will vary. 4. Answers will vary. Q 8.3.11 A survey of the mean number of cents off that coupons give was conducted by randomly surveying one coupon per page from the coupon sections of a recent San Jose Mercury News. The following data were collected: 20¢; 75¢; 50¢; 65¢; 30¢; 55¢; 40¢; 40¢; 30¢; 55¢;$1.50; 40¢; 65¢; 40¢. Assume the underlying distribution is approximately normal. 1. $\bar{x} =$ __________ 2. $s_{x} =$ __________ 3. $n =$ __________ 4. $n-1 =$ __________ 1. Define the random variables $X$ and $\bar{X}$ in words. 2. 
Which distribution should you use for this problem? Explain your choice. 3. Construct a 95% confidence interval for the population mean worth of coupons. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. If many random samples were taken of size 14, what percent of the confidence intervals constructed should contain the population mean worth of coupons? Explain why. Use the following information to answer the next two exercises: A quality control specialist for a restaurant chain takes a random sample of size 12 to check the amount of soda served in the 16 oz. serving size. The sample mean is 13.30 with a sample standard deviation of 1.55. Assume the underlying population is normally distributed. Q 8.3.12 Find the 95% Confidence Interval for the true population mean for the amount of soda served. 1. (12.42, 14.18) 2. (12.32, 14.29) 3. (12.50, 14.10) 4. Impossible to determine b Q 8.3.13 What is the error bound? 1. 0.87 2. 1.98 3. 0.99 4. 1.74 8.4: A Population Proportion Q 8.4.1 Insurance companies are interested in knowing the population percent of drivers who always buckle up before riding in a car. 1. When designing a study to determine this population proportion, what is the minimum number you would need to survey to be 95% confident that the population proportion is estimated to within 0.03? 2. If it were later determined that it was important to be more than 95% confident and a new survey was commissioned, how would that affect the minimum number you would need to survey? Why? S 8.4.1 1. 1,068 2. The sample size would need to be increased since the critical value increases as the confidence level increases. Q 8.4.2 Suppose that the insurance companies did do a survey. They randomly surveyed 400 drivers and found that 320 claimed they always buckle up. We are interested in the population proportion of drivers who claim they always buckle up. 1. $x =$ __________ 2. $n =$ __________ 3. $p′ =$ __________ 1. Define the random variables $X$ and $P′$, in words. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 95% confidence interval for the population proportion who claim they always buckle up. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. If this survey were done by telephone, list three difficulties the companies might have in obtaining random results. Q 8.4.3 According to a recent survey of 1,200 people, 61% feel that the president is doing an acceptable job. We are interested in the population proportion of people who feel the president is doing an acceptable job. 1. Define the random variables $X$ and $P′$ in words. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 90% confidence interval for the population proportion of people who feel the president is doing an acceptable job. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. S 8.4.3 1. $X =$ the number of people who feel that the president is doing an acceptable job; $P′ =$ the proportion of people in a sample who feel that the president is doing an acceptable job. 2. $N\left(0.61, \sqrt{\frac{(0.61)(0.39)}{1200}}\right)$ 1. $CI: (0.59, 0.63)$ 2. Check student’s solution 3. $EBM: 0.02$ Q 8.4.4 An article regarding interracial dating and marriage recently appeared in the Washington Post. 
Of the 1,709 randomly selected adults, 315 identified themselves as Latinos, 323 identified themselves as blacks, 254 identified themselves as Asians, and 779 identified themselves as whites. In this survey, 86% of blacks said that they would welcome a white person into their families. Among Asians, 77% would welcome a white person into their families, 71% would welcome a Latino, and 66% would welcome a black person. 1. We are interested in finding the 95% confidence interval for the percent of all black adults who would welcome a white person into their families. Define the random variables $X$ and $P′$, in words. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 95% confidence interval. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. Q 8.4.5 Refer to the information in Exercise. 1. Construct three 95% confidence intervals. 1. percent of all Asians who would welcome a white person into their families. 2. percent of all Asians who would welcome a Latino into their families. 3. percent of all Asians who would welcome a black person into their families. 2. Even though the three point estimates are different, do any of the confidence intervals overlap? Which? 3. For any intervals that do overlap, in words, what does this imply about the significance of the differences in the true proportions? 4. For any intervals that do not overlap, in words, what does this imply about the significance of the differences in the true proportions? S 8.4.5 1. (0.72, 0.82) 2. (0.65, 0.76) 3. (0.60, 0.72) 1. Yes, the intervals (0.72, 0.82) and (0.65, 0.76) overlap, and the intervals (0.65, 0.76) and (0.60, 0.72) overlap. 2. We can say that there does not appear to be a significant difference between the proportion of Asian adults who say that their families would welcome a white person into their families and the proportion of Asian adults who say that their families would welcome a Latino person into their families. 3. We can say that there is a significant difference between the proportion of Asian adults who say that their families would welcome a white person into their families and the proportion of Asian adults who say that their families would welcome a black person into their families. Q 8.4.6 Stanford University conducted a study of whether running is healthy for men and women over age 50. During the first eight years of the study, 1.5% of the 451 members of the 50-Plus Fitness Association died. We are interested in the proportion of people over 50 who ran and died in the same eight-year period. 1. Define the random variables $X$ and $P′$ in words. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 97% confidence interval for the population proportion of people over 50 who ran and died in the same eight–year period. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. Explain what a “97% confidence interval” means for this study. Q 8.4.7 A telephone poll of 1,000 adult Americans was reported in an issue of Time Magazine. One of the questions asked was “What is the main problem facing the country?” Twenty percent answered “crime.” We are interested in the population proportion of adult Americans who feel that crime is the main problem. 1. Define the random variables $X$ and $P′$ in words. 2. Which distribution should you use for this problem? Explain your choice. 3. 
Construct a 95% confidence interval for the population proportion of adult Americans who feel that crime is the main problem. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. Suppose we want to lower the sampling error. What is one way to accomplish that? 5. The sampling error given by Yankelovich Partners, Inc. (which conducted the poll) is $\pm 3%$. In one to three complete sentences, explain what the ±3% represents. S 8.4.7 1. $X =$ the number of adult Americans who feel that crime is the main problem; $P′ =$ the proportion of adult Americans who feel that crime is the main problem 2. Since we are estimating a proportion, given $P′ = 0.2$ and $n = 1000$, the distribution we should use is $N\left(0.2, \sqrt{\frac{(0.2)(0.8)}{1000}}\right)$. 1. $CI: (0.18, 0.22)$ 2. Check student’s solution. 3. $EBM: 0.02$ 3. One way to lower the sampling error is to increase the sample size. 4. The stated “$\pm 3%$” represents the maximum error bound. This means that those doing the study are reporting a maximum error of 3%. Thus, they estimate the percentage of adult Americans who feel that crime is the main problem to be between 18% and 22%. Q 8.4.8 Refer to Exercise. Another question in the poll was “[How much are] you worried about the quality of education in our schools?” Sixty-three percent responded “a lot”. We are interested in the population proportion of adult Americans who are worried a lot about the quality of education in our schools. 1. Define the random variables $X$ and $P′$ in words. 2. Which distribution should you use for this problem? Explain your choice. 3. Construct a 95% confidence interval for the population proportion of adult Americans who are worried a lot about the quality of education in our schools. 1. State the confidence interval. 2. Sketch the graph. 3. Calculate the error bound. 4. The sampling error given by Yankelovich Partners, Inc. (which conducted the poll) is $\pm 3%$. In one to three complete sentences, explain what the ±3% represents. Use the following information to answer the next three exercises: According to a Field Poll, 79% of California adults (actual results are 400 out of 506 surveyed) feel that “education and our schools” is one of the top issues facing California. We wish to construct a 90% confidence interval for the true proportion of California adults who feel that education and the schools is one of the top issues facing California. Q 8.4.9 A point estimate for the true population proportion is: 1. 0.90 2. 1.27 3. 0.79 4. 400 c Q 8.4.10 A 90% confidence interval for the population proportion is _______. 1. (0.761, 0.820) 2. (0.125, 0.188) 3. (0.755, 0.826) 4. (0.130, 0.183) Q 8.4.11 The error bound is approximately _____. 1. 1.581 2. 0.791 3. 0.059 4. 0.030 S 8.4.11 d Use the following information to answer the next two exercises: Five hundred and eleven (511) homes in a certain southern California community are randomly surveyed to determine if they meet minimal earthquake preparedness recommendations. One hundred seventy-three (173) of the homes surveyed met the minimum recommendations for earthquake preparedness, and 338 did not. Q 8.4.12 Find the confidence interval at the 90% Confidence Level for the true population proportion of southern California community homes meeting at least the minimum recommendations for earthquake preparedness. 1. (0.2975, 0.3796) 2. (0.6270, 0.6959) 3. (0.3041, 0.3730) 4.
(0.6204, 0.7025) Q 8.4.13 The point estimate for the population proportion of homes that do not meet the minimum recommendations for earthquake preparedness is ______. 1. 0.6614 2. 0.3386 3. 173 4. 338 a Q 8.4.14 On May 23, 2013, Gallup reported that of the 1,005 people surveyed, 76% of U.S. workers believe that they will continue working past retirement age. The confidence level for this study was reported at 95% with a $\pm 3%$ margin of error. 1. Determine the estimated proportion from the sample. 2. Determine the sample size. 3. Identify $CL$ and $\alpha$. 4. Calculate the error bound based on the information provided. 5. Compare the error bound in part d to the margin of error reported by Gallup. Explain any differences between the values. 6. Create a confidence interval for the results of this study. 7. A reporter is covering the release of this study for a local news station. How should she explain the confidence interval to her audience? Q 8.4.15 A national survey of 1,000 adults was conducted on May 13, 2013 by Rasmussen Reports. It concluded with 95% confidence that 49% to 55% of Americans believe that big-time college sports programs corrupt the process of higher education. 1. Find the point estimate and the error bound for this confidence interval. 2. Can we (with 95% confidence) conclude that more than half of all American adults believe this? 3. Use the point estimate from part a and $n = 1,000$ to calculate a 75% confidence interval for the proportion of American adults that believe that major college sports programs corrupt higher education. 4. Can we (with 75% confidence) conclude that at least half of all American adults believe this? S 8.4.15 1. $p′ = \frac{(0.55+0.49)}{2} = 0.52; EBP = 0.55 - 0.52 = 0.03$ 2. No, the confidence interval includes values less than or equal to 0.50. It is possible that less than half of the population believe this. 3. $CL = 0.75$, so $\alpha = 1 – 0.75 = 0.25$ and $\frac{\alpha}{2} = 0.125 z_{\frac{\alpha}{2}} = 1.150$. (The area to the right of this $z$ is 0.125, so the area to the left is $1 – 0.125 = 0.875$.) $EBP = (1.150)\sqrt{\frac{0.52(0.48)}{1,000}} \approx 0.018$ $(p′ - EBP, p′ + EBP) = (0.52 – 0.018, 0.52 + 0.018) = (0.502, 0.538)$ Alternate Solution STAT TESTS A: 1-PropZinterval with $x = (0.52)(1,000), n = 1,000, CL = 0.75$. Answer is (0.502, 0.538) 4. Yes – this interval does not fall less than 0.50 so we can conclude that at least half of all American adults believe that major sports programs corrupt education – but we do so with only 75% confidence. Q 8.4.16 Public Policy Polling recently conducted a survey asking adults across the U.S. about music preferences. When asked, 80 of the 571 participants admitted that they have illegally downloaded music. 1. Create a 99% confidence interval for the true proportion of American adults who have illegally downloaded music. 2. This survey was conducted through automated telephone interviews on May 6 and 7, 2013. The error bound of the survey compensates for sampling error, or natural variability among samples. List some factors that could affect the survey’s outcome that are not covered by the margin of error. 3. Without performing any calculations, describe how the confidence interval would change if the confidence level changed from 99% to 90%. Q 8.4.17 You plan to conduct a survey on your college campus to learn about the political awareness of students. 
You want to estimate the true proportion of college students on your campus who voted in the 2012 presidential election with 95% confidence and a margin of error no greater than five percent. How many students must you interview? S 8.4.17 $CL = 0.95 \alpha = 1 - 0.95 = 0.05 \frac{\alpha}{2} = 0.025 z_{\frac{\alpha}{2}} = 1.96.$ Use $p′ = q′ = 0.5$. $n = \frac{z_{\frac{\alpha}{2}}^{2}p'q'}{EPB^{2}} = \frac{1.96^{2}(0.5)(0.5)}{0.05^{2}} = 384.16$ You need to interview at least 385 students to estimate the proportion to within 5% at 95% confidence. Q 8.4.18 In a recent Zogby International Poll, nine of 48 respondents rated the likelihood of a terrorist attack in their community as “likely” or “very likely.” Use the “plus four” method to create a 97% confidence interval for the proportion of American adults who believe that a terrorist attack in their community is likely or very likely. Explain what this confidence interval means in the context of the problem.
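A quick numerical check of the “plus four” interval asked for in Q 8.4.18 can be done in a few lines. The sketch below is not part of the original exercise set; it simply adds two successes and two failures to the counts given in the problem (9 of 48 respondents) and applies the usual normal-approximation interval at the 97% level, with illustrative variable names.

# Minimal "plus four" confidence interval sketch for Q 8.4.18 (illustrative only).
from math import sqrt
from scipy.stats import norm

x, n, conf = 9, 48, 0.97
x_adj, n_adj = x + 2, n + 4              # add two successes and two failures
p_tilde = x_adj / n_adj                  # adjusted sample proportion
z = norm.ppf(1 - (1 - conf) / 2)         # critical value, about 2.17 for 97% confidence
ebp = z * sqrt(p_tilde * (1 - p_tilde) / n_adj)
print((round(p_tilde - ebp, 4), round(p_tilde + ebp, 4)))

By this calculation the interval comes out to roughly (0.09, 0.33); stating what that range means for the proportion of adults who consider a terrorist attack in their community likely is the point of the exercise.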
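The sample-size answer in S 8.4.17 above is just as easy to reproduce. The short sketch below is an illustration only; it uses the conservative planning value $p′ = q′ = 0.5$ and rounds the result up to the next whole person, as the worked solution does.

# Minimum-sample-size sketch matching S 8.4.17 (illustrative only).
from math import ceil
from scipy.stats import norm

conf, ebp, p = 0.95, 0.05, 0.5           # confidence level, error bound, planning proportion
z = norm.ppf(1 - (1 - conf) / 2)         # about 1.96
n = z**2 * p * (1 - p) / ebp**2          # about 384.1 (the text rounds z to 1.96 and gets 384.16)
print(ceil(n))                           # 385 students, as in the worked solution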
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. 9.2: Null and Alternative Hypotheses Q 9.2.1 Some of the following statements refer to the null hypothesis, some to the alternate hypothesis. State the null hypothesis, $H_{0}$, and the alternative hypothesis. $H_{a}$, in terms of the appropriate parameter $(\mu \text{or} p)$. 1. The mean number of years Americans work before retiring is 34. 2. At most 60% of Americans vote in presidential elections. 3. The mean starting salary for San Jose State University graduates is at least $100,000 per year. 4. Twenty-nine percent of high school seniors get drunk each month. 5. Fewer than 5% of adults ride the bus to work in Los Angeles. 6. The mean number of cars a person owns in her lifetime is not more than ten. 7. About half of Americans prefer to live away from cities, given the choice. 8. Europeans have a mean paid vacation each year of six weeks. 9. The chance of developing breast cancer is under 11% for women. 10. Private universities' mean tuition cost is more than$20,000 per year. S 9.2.1 1. $H_{0}: \mu = 34; H_{a}: \mu \neq 34$ 2. $H_{0}: p \leq 0.60; H_{a}: p > 0.60$ 3. $H_{0}: \mu \geq 100,000; H_{a}: \mu < 100,000$ 4. $H_{0}: p = 0.29; H_{a}: p \neq 0.29$ 5. $H_{0}: p = 0.05; H_{a}: p < 0.05$ 6. $H_{0}: \mu \leq 10; H_{a}: \mu > 10$ 7. $H_{0}: p = 0.50; H_{a}: p \neq 0.50$ 8. $H_{0}: \mu = 6; H_{a}: \mu \neq 6$ 9. $H_{0}: p ≥ 0.11; H_{a}: p < 0.11$ 10. $H_{0}: \mu \leq 20,000; H_{a}: \mu > 20,000$ Q 9.2.2 Over the past few decades, public health officials have examined the link between weight concerns and teen girls' smoking. Researchers surveyed a group of 273 randomly selected teen girls living in Massachusetts (between 12 and 15 years old). After four years the girls were surveyed again. Sixty-three said they smoked to stay thin. Is there good evidence that more than thirty percent of the teen girls smoke to stay thin? The alternative hypothesis is: 1. $p < 0.30$ 2. $p \leq 0.30$ 3. $p \geq 0.30$ 4. $p > 0.30$ Q 9.2.3 A statistics instructor believes that fewer than 20% of Evergreen Valley College (EVC) students attended the opening night midnight showing of the latest Harry Potter movie. She surveys 84 of her students and finds that 11 attended the midnight showing. An appropriate alternative hypothesis is: 1. $p = 0.20$ 2. $p > 0.20$ 3. $p < 0.20$ 4. $p \leq 0.20$ c Q 9.2.4 Previously, an organization reported that teenagers spent 4.5 hours per week, on average, on the phone. The organization thinks that, currently, the mean is higher. Fifteen randomly chosen teenagers were asked how many hours per week they spend on the phone. The sample mean was 4.75 hours with a sample standard deviation of 2.0. Conduct a hypothesis test. The null and alternative hypotheses are: 1. $H_{0}: \bar{x} = 4.5, H_{a}: \bar{x} > 4.5$ 2. $H_{0}: \mu \geq 4.5, H_{a}: \mu < 4.5$ 3. $H_{0}: \mu = 4.75, H_{a}: \mu > 4.75$ 4. $H_{0}: \mu = 4.5, H_{a}: \mu > 4.5$ 9.3: Outcomes and the Type I and Type II Errors Q 9.3.1 State the Type I and Type II errors in complete sentences given the following statements. 1. The mean number of years Americans work before retiring is 34. 2. At most 60% of Americans vote in presidential elections. 3. The mean starting salary for San Jose State University graduates is at least $100,000 per year. 4. Twenty-nine percent of high school seniors get drunk each month. 5. Fewer than 5% of adults ride the bus to work in Los Angeles. 6. 
The mean number of cars a person owns in his or her lifetime is not more than ten. 7. About half of Americans prefer to live away from cities, given the choice. 8. Europeans have a mean paid vacation each year of six weeks. 9. The chance of developing breast cancer is under 11% for women. 10. Private universities mean tuition cost is more than$20,000 per year. S 9.3.1 1. Type I error: We conclude that the mean is not 34 years, when it really is 34 years. Type II error: We conclude that the mean is 34 years, when in fact it really is not 34 years. 2. Type I error: We conclude that more than 60% of Americans vote in presidential elections, when the actual percentage is at most 60%.Type II error: We conclude that at most 60% of Americans vote in presidential elections when, in fact, more than 60% do. 3. Type I error: We conclude that the mean starting salary is less than $100,000, when it really is at least$100,000. Type II error: We conclude that the mean starting salary is at least $100,000 when, in fact, it is less than$100,000. 4. Type I error: We conclude that the proportion of high school seniors who get drunk each month is not 29%, when it really is 29%. Type II error: We conclude that the proportion of high school seniors who get drunk each month is 29% when, in fact, it is not 29%. 5. Type I error: We conclude that fewer than 5% of adults ride the bus to work in Los Angeles, when the percentage that do is really 5% or more. Type II error: We conclude that 5% or more adults ride the bus to work in Los Angeles when, in fact, fewer that 5% do. 6. Type I error: We conclude that the mean number of cars a person owns in his or her lifetime is more than 10, when in reality it is not more than 10. Type II error: We conclude that the mean number of cars a person owns in his or her lifetime is not more than 10 when, in fact, it is more than 10. 7. Type I error: We conclude that the proportion of Americans who prefer to live away from cities is not about half, though the actual proportion is about half. Type II error: We conclude that the proportion of Americans who prefer to live away from cities is half when, in fact, it is not half. 8. Type I error: We conclude that the duration of paid vacations each year for Europeans is not six weeks, when in fact it is six weeks. Type II error: We conclude that the duration of paid vacations each year for Europeans is six weeks when, in fact, it is not. 9. Type I error: We conclude that the proportion is less than 11%, when it is really at least 11%. Type II error: We conclude that the proportion of women who develop breast cancer is at least 11%, when in fact it is less than 11%. 10. Type I error: We conclude that the average tuition cost at private universities is more than $20,000, though in reality it is at most$20,000. Type II error: We conclude that the average tuition cost at private universities is at most $20,000 when, in fact, it is more than$20,000. Q 9.3.2 For statements a-j in Exercise 9.109, answer the following in complete sentences. 1. State a consequence of committing a Type I error. 2. State a consequence of committing a Type II error. Q 9.3.3 When a new drug is created, the pharmaceutical company must subject it to testing before receiving the necessary permission from the Food and Drug Administration (FDA) to market the drug. Suppose the null hypothesis is “the drug is unsafe.” What is the Type II Error? 1. To conclude the drug is safe when in, fact, it is unsafe. 2. Not to conclude the drug is safe when, in fact, it is safe. 3. 
To conclude the drug is safe when, in fact, it is safe. 4. Not to conclude the drug is unsafe when, in fact, it is unsafe. b Q 9.3.4 A statistics instructor believes that fewer than 20% of Evergreen Valley College (EVC) students attended the opening midnight showing of the latest Harry Potter movie. She surveys 84 of her students and finds that 11 of them attended the midnight showing. The Type I error is to conclude that the percent of EVC students who attended is ________. 1. at least 20%, when in fact, it is less than 20%. 2. 20%, when in fact, it is 20%. 3. less than 20%, when in fact, it is at least 20%. 4. less than 20%, when in fact, it is less than 20%. Q 9.3.4 It is believed that Lake Tahoe Community College (LTCC) Intermediate Algebra students get less than seven hours of sleep per night, on average. A survey of 22 LTCC Intermediate Algebra students generated a mean of 7.24 hours with a standard deviation of 1.93 hours. At a level of significance of 5%, do LTCC Intermediate Algebra students get less than seven hours of sleep per night, on average? The Type II error is not to reject that the mean number of hours of sleep LTCC students get per night is at least seven when, in fact, the mean number of hours 1. is more than seven hours. 2. is at most seven hours. 3. is at least seven hours. 4. is less than seven hours. d Q 9.3.5 Previously, an organization reported that teenagers spent 4.5 hours per week, on average, on the phone. The organization thinks that, currently, the mean is higher. Fifteen randomly chosen teenagers were asked how many hours per week they spend on the phone. The sample mean was 4.75 hours with a sample standard deviation of 2.0. Conduct a hypothesis test, the Type I error is: 1. to conclude that the current mean hours per week is higher than 4.5, when in fact, it is higher 2. to conclude that the current mean hours per week is higher than 4.5, when in fact, it is the same 3. to conclude that the mean hours per week currently is 4.5, when in fact, it is higher 4. to conclude that the mean hours per week currently is no higher than 4.5, when in fact, it is not higher 9.4: Distribution Needed for Hypothesis Testing Q 9.4.1 It is believed that Lake Tahoe Community College (LTCC) Intermediate Algebra students get less than seven hours of sleep per night, on average. A survey of 22 LTCC Intermediate Algebra students generated a mean of 7.24 hours with a standard deviation of 1.93 hours. At a level of significance of 5%, do LTCC Intermediate Algebra students get less than seven hours of sleep per night, on average? The distribution to be used for this test is $\bar{X} \sim$ ________________ 1. $N\left(7.24, \frac{1.93}{\sqrt{22}}\right)$ 2. $N\left(7.24, 1.93\right)$ 3. $t_{22}$ 4. $t_{21}$ d 9.5: Rare Events, the Sample, Decision and Conclusion Q 9.5.1 The National Institute of Mental Health published an article stating that in any one-year period, approximately 9.5 percent of American adults suffer from depression or a depressive illness. Suppose that in a survey of 100 people in a certain town, seven of them suffered from depression or a depressive illness. Conduct a hypothesis test to determine if the true proportion of people in that town suffering from depression or a depressive illness is lower than the percent in the general adult American population. 1. Is this a test of one mean or proportion? 2. State the null and alternative hypotheses. $H_{0}$ : ____________________ $H_{a}$ : ____________________ 3. 
Is this a right-tailed, left-tailed, or two-tailed test? 4. What symbol represents the random variable for this test? 5. In words, define the random variable for this test. 6. Calculate the following: 1. $x =$ ________________ 2. $n =$ ________________ 3. $p′ =$ _____________ 7. Calculate $\sigma_{x} =$ __________. Show the formula set-up. 8. State the distribution to use for the hypothesis test. 9. Find the $p\text{-value}$. 10. At a pre-conceived $\alpha = 0.05$, what is your: 1. Decision: 2. Reason for the decision: 3. Conclusion (write out in a complete sentence): 9.6: Additional Information and Full Hypothesis Test Examples For each of the word problems, use a solution sheet to do the hypothesis test. The solution sheet is found in [link]. Please feel free to make copies of the solution sheets. For the online version of the book, it is suggested that you copy the .doc or the .pdf files. Note If you are using a Student's $t$-distribution for one of the following homework problems, you may assume that the underlying population is normally distributed. (In general, you must first prove that assumption, however.) Q 9.6.1. A particular brand of tires claims that its deluxe tire averages at least 50,000 miles before it needs to be replaced. From past studies of this tire, the standard deviation is known to be 8,000. A survey of owners of that tire design is conducted. From the 28 tires surveyed, the mean lifespan was 46,500 miles with a standard deviation of 9,800 miles. Using $\alpha = 0.05$, is the data highly inconsistent with the claim? S 9.6.1 1. $H_{0}: \mu \geq 50,000$ 2. $H_{a}: \mu < 50,000$ 3. Let $\bar{X} =$ the average lifespan of a brand of tires. 4. normal distribution 5. $z = -2.315$ 6. $p\text{-value} = 0.0103$ 7. Check student’s solution. 1. alpha: 0.05 2. Decision: Reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$ is less than 0.05. 4. Conclusion: There is sufficient evidence to conclude that the mean lifespan of the tires is less than 50,000 miles. 8. $(43,537, 49,463)$ Q 9.6.2 From generation to generation, the mean age when smokers first start to smoke varies. However, the standard deviation of that age remains constant of around 2.1 years. A survey of 40 smokers of this generation was done to see if the mean starting age is at least 19. The sample mean was 18.1 with a sample standard deviation of 1.3. Do the data support the claim at the 5% level? The cost of a daily newspaper varies from city to city. However, the variation among prices remains steady with a standard deviation of 20¢. A study was done to test the claim that the mean cost of a daily newspaper is $1.00. Twelve costs yield a mean cost of 95¢ with a standard deviation of 18¢. Do the data support the claim at the 1% level? S 9.6.3 1. $H_{0}: \mu = 1.00$ 2. $H_{a}: \mu \neq 1.00$ 3. Let $\bar{X} =$ the average cost of a daily newspaper. 4. normal distribution 5. $z = –0.866$ 6. $p\text{-value} = 0.3865$ 7. Check student’s solution. 1. $\alpha: 0.01$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$ is greater than 0.01. 4. Conclusion: There is sufficient evidence to support the claim that the mean cost of daily papers is$1. The mean cost could be $1. 8. $(0.84, 1.06)$ Q 9.6.4 An article in the San Jose Mercury News stated that students in the California state university system take 4.5 years, on average, to finish their undergraduate degrees. Suppose you believe that the mean time is longer. 
You conduct a survey of 49 students and obtain a sample mean of 5.1 with a sample standard deviation of 1.2. Do the data support your claim at the 1% level? Q 9.6.5 The mean number of sick days an employee takes per year is believed to be about ten. Members of a personnel department do not believe this figure. They randomly survey eight employees. The number of sick days they took for the past year are as follows: 12; 4; 15; 3; 11; 8; 6; 8. Let $x =$ the number of sick days they took for the past year. Should the personnel team believe that the mean number is ten? S 9.6.5 1. $H_{0}: \mu = 10$ 2. $H_{a}: \mu \neq 10$ 3. Let $\bar{X} =$ the mean number of sick days an employee takes per year. 4. Student’s t-distribution 5. $t = –1.12$ 6. $p\text{-value} = 0.300$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$ is greater than 0.05. 4. Conclusion: At the 5% significance level, there is insufficient evidence to conclude that the mean number of sick days is not ten. 8. $(4.9443, 11.806)$ Q 9.6.6 In 1955, Life Magazine reported that the 25-year-old mother of three worked, on average, an 80-hour week. Recently, many groups have been studying whether or not the women's movement has, in fact, resulted in an increase in the average work week for women (combining employment and at-home work). Suppose a study was done to determine if the mean work week has increased. 81 women were surveyed with the following results. The sample mean was 83; the sample standard deviation was ten. Does it appear that the mean work week has increased for women at the 5% level? Q 9.6.7 Your statistics instructor claims that 60 percent of the students who take her Elementary Statistics class go through life feeling more enriched. For some reason that she can't quite figure out, most people don't believe her. You decide to check this out on your own. You randomly survey 64 of her past Elementary Statistics students and find that 34 feel more enriched as a result of her class. Now, what do you think? S 9.6.7 1. $H_{0}: p \geq 0.6$ 2. $H_{a}: p < 0.6$ 3. Let $P′ =$ the proportion of students who feel more enriched as a result of taking Elementary Statistics. 4. normal for a single proportion 5. –1.12 6. $p\text{-value} = 0.1308$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$ is greater than 0.05. 4. Conclusion: There is insufficient evidence to conclude that less than 60 percent of her students feel more enriched. 8. Confidence Interval: $(0.409, 0.654)$ The “plus-4s” confidence interval is $(0.411, 0.648)$ Q 9.6.8 A Nissan Motor Corporation advertisement read, “The average man’s I.Q. is 107. The average brown trout’s I.Q. is 4. So why can’t man catch brown trout?” Suppose you believe that the brown trout’s mean I.Q. is greater than four. You catch 12 brown trout. A fish psychologist determines the I.Q.s as follows: 5; 4; 7; 3; 6; 4; 5; 3; 6; 3; 8; 5. Conduct a hypothesis test of your belief. Q 9.6.9 Refer to Exercise 9.119. Conduct a hypothesis test to see if your decision and conclusion would change if your belief were that the brown trout’s mean I.Q. is not four. S 9.6.9 1. $H_{0}: \mu = 4$ 2. $H_{a}: \mu \neq 4$ 3. Let $\bar{X} =$ the average I.Q. of a set of brown trout. 4. two-tailed Student's t-test 5. $t = 1.95$ 6. $p\text{-value} = 0.076$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3.
Reason for decision: The $p\text{-value}$ is greater than 0.05 4. Conclusion: There is insufficient evidence to conclude that the average IQ of brown trout is not four. 8. $(3.8865,5.9468)$ Q 9.6.10 According to an article in Newsweek, the natural ratio of girls to boys is 100:105. In China, the birth ratio is 100: 114 (46.7% girls). Suppose you don’t believe the reported figures of the percent of girls born in China. You conduct a study. In this study, you count the number of girls and boys born in 150 randomly chosen recent births. There are 60 girls and 90 boys born of the 150. Based on your study, do you believe that the percent of girls born in China is 46.7? Q 9.6.11 A poll done for Newsweek found that 13% of Americans have seen or sensed the presence of an angel. A contingent doubts that the percent is really that high. It conducts its own survey. Out of 76 Americans surveyed, only two had seen or sensed the presence of an angel. As a result of the contingent’s survey, would you agree with the Newsweek poll? In complete sentences, also give three reasons why the two polls might give different results. S 9.6.11 1. $H_{0}: p \geq 0.13$ 2. $H_{a}: p < 0.13$ 3. Let $P′ =$ the proportion of Americans who have seen or sensed angels 4. normal for a single proportion 5. –2.688 6. $p\text{-value} = 0.0036$ 7. Check student’s solution. 1. alpha: 0.05 2. Decision: Reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$e is less than 0.05. 4. Conclusion: There is sufficient evidence to conclude that the percentage of Americans who have seen or sensed an angel is less than 13%. 8. $(0, 0.0623)$. The“plus-4s” confidence interval is (0.0022, 0.0978) Q 9.6.12 The mean work week for engineers in a start-up company is believed to be about 60 hours. A newly hired engineer hopes that it’s shorter. She asks ten engineering friends in start-ups for the lengths of their mean work weeks. Based on the results that follow, should she count on the mean work week to be shorter than 60 hours? Data (length of mean work week): 70; 45; 55; 60; 65; 55; 55; 60; 50; 55. Q 9.6.13 Use the “Lap time” data for Lap 4 (see [link]) to test the claim that Terri finishes Lap 4, on average, in less than 129 seconds. Use all twenty races given. S 9.6.13 1. $H_{0}: \mu \geq 129$ 2. $H_{a}: \mu < 129$ 3. Let $\bar{X} =$ the average time in seconds that Terri finishes Lap 4. 4. Student's t-distribution 5. $t = 1.209$ 6. 0.8792 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$ is greater than 0.05. 4. Conclusion: There is insufficient evidence to conclude that Terri’s mean lap time is less than 129 seconds. 8. $(128.63, 130.37)$ Q 9.6.14 Use the “Initial Public Offering” data (see [link]) to test the claim that the mean offer price was$18 per share. Do not use all the data. Use your random number generator to randomly survey 15 prices. Note The following questions were written by past students. They are excellent problems! Q 9.6.15 "Asian Family Reunion," by Chau Nguyen Every two years it comes around. We all get together from different towns. In my honest opinion, It's not a typical family reunion. Not forty, or fifty, or sixty, But how about seventy companions! The kids would play, scream, and shout One minute they're happy, another they'll pout. The teenagers would look, stare, and compare From how they look to what they wear. The men would chat about their business That they make more, but never less. 
Money is always their subject And there's always talk of more new projects. The women get tired from all of the chats They head to the kitchen to set out the mats. Some would sit and some would stand Eating and talking with plates in their hands. Then come the games and the songs And suddenly, everyone gets along! With all that laughter, it's sad to say That it always ends in the same old way. They hug and kiss and say "good-bye" And then they all begin to cry! I say that 60 percent shed their tears But my mom counted 35 people this year. She said that boys and men will always have their pride, So we won't ever see them cry. I myself don't think she's correct, So could you please try this problem to see if you object? S 9.6.15 1. $H_{0}: p = 0.60$ 2. $H_{a}: p < 0.60$ 3. Let $P′ =$ the proportion of family members who shed tears at a reunion. 4. normal for a single proportion 5. –1.71 6. 0.0438 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: $p\text{-value} < \alpha$ 4. Conclusion: At the 5% significance level, there is sufficient evidence to conclude that the proportion of family members who shed tears at a reunion is less than 0.60. However, the test is weak because the $p\text{-value}$ and alpha are quite close, so other tests should be done. 8. We are 95% confident that between 38.29% and 61.71% of family members will shed tears at a family reunion. $(0.3829, 0.6171)$. The“plus-4s” confidence interval (see chapter 8) is $(0.3861, 0.6139)$ Note that here the “large-sample” $1 - \text{PropZTest}$ provides the approximate $p\text{-value}$ of 0.0438. Whenever a $p\text{-value}$ based on a normal approximation is close to the level of significance, the exact $p\text{-value}$ based on binomial probabilities should be calculated whenever possible. This is beyond the scope of this course. Q 9.6.16 "The Problem with Angels," by Cyndy Dowling Although this problem is wholly mine, The catalyst came from the magazine, Time. On the magazine cover I did find The realm of angels tickling my mind. Inside, 69% I found to be In angels, Americans do believe. Then, it was time to rise to the task, Ninety-five high school and college students I did ask. Viewing all as one group, Random sampling to get the scoop. So, I asked each to be true, "Do you believe in angels?" Tell me, do! Hypothesizing at the start, Totally believing in my heart That the proportion who said yes Would be equal on this test. Lo and behold, seventy-three did arrive, Out of the sample of ninety-five. Now your job has just begun, Solve this problem and have some fun. Q 9.6.17 "Blowing Bubbles," by Sondra Prull Studying stats just made me tense, I had to find some sane defense. Some light and lifting simple play To float my math anxiety away. Blowing bubbles lifts me high Takes my troubles to the sky. POIK! They're gone, with all my stress Bubble therapy is the best. The label said each time I blew The average number of bubbles would be at least 22. I blew and blew and this I found From 64 blows, they all are round! But the number of bubbles in 64 blows Varied widely, this I know. 20 per blow became the mean They deviated by 6, and not 16. From counting bubbles, I sure did relax But now I give to you your task. Was 22 a reasonable guess? Find the answer and pass this test! S 9.6.17 1. $H_{0}: \mu \geq 22$ 2. $H_{a}: \mu < 22$ 3. Let $\bar{X} =$ the mean number of bubbles per blow. 4. Student's t-distribution 5. –2.667 6. $p\text{-value} = 0.00486$ 7. 
Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$ is less than 0.05. 4. Conclusion: There is sufficient evidence to conclude that the mean number of bubbles per blow is less than 22. 8. $(18.501, 21.499)$ Q 9.6.18 "Dalmatian Darnation," by Kathy Sparling A greedy dog breeder named Spreckles Bred puppies with numerous freckles The Dalmatians he sought Possessed spot upon spot The more spots, he thought, the more shekels. His competitors did not agree That freckles would increase the fee. They said, “Spots are quite nice But they don't affect price; One should breed for improved pedigree.” The breeders decided to prove This strategy was a wrong move. Breeding only for spots Would wreak havoc, they thought. His theory they want to disprove. They proposed a contest to Spreckles Comparing dog prices to freckles. In records they looked up One hundred one pups: Dalmatians that fetched the most shekels. They asked Mr. Spreckles to name An average spot count he'd claim To bring in big bucks. Said Spreckles, “Well, shucks, It's for one hundred one that I aim.” Said an amateur statistician Who wanted to help with this mission. “Twenty-one for the sample Standard deviation's ample: They examined one hundred and one Dalmatians that fetched a good sum. They counted each spot, Mark, freckle and dot And tallied up every one. Instead of one hundred one spots They averaged ninety six dots Can they muzzle Spreckles’ Obsession with freckles Based on all the dog data they've got? Q 9.6.19 "Macaroni and Cheese, please!!" by Nedda Misherghi and Rachelle Hall As a poor starving student I don't have much money to spend for even the bare necessities. So my favorite and main staple food is macaroni and cheese. It's high in taste and low in cost and nutritional value. One day, as I sat down to determine the meaning of life, I got a serious craving for this, oh, so important, food of my life. So I went down the street to Greatway to get a box of macaroni and cheese, but it was SO expensive! $2.02 !!! Can you believe it? It made me stop and think. The world is changing fast. I had thought that the mean cost of a box (the normal size, not some super-gigantic-family-value-pack) was at most$1, but now I wasn't so sure. However, I was determined to find out. I went to 53 of the closest grocery stores and surveyed the prices of macaroni and cheese. Here are the data I wrote in my notebook: Price per box of Mac and Cheese: • 5 stores @ $2.02 • 15 stores @$0.25 • 3 stores @ $1.29 • 6 stores @$0.35 • 4 stores @ $2.27 • 7 stores @$1.50 • 5 stores @ $1.89 • 8 stores @ 0.75. I could see that the cost varied but I had to sit down to figure out whether or not I was right. If it does turn out that this mouth-watering dish is at most$1, then I'll throw a big cheesy party in our next statistics lab, with enough macaroni and cheese for just me. (After all, as a poor starving student I can't be expected to feed our class of animals!) S 9.6.19 1. $H_{0}: \mu \leq 1$ 2. $H_{a}: \mu > 1$ 3. Let $\bar{X} =$ the mean cost in dollars of macaroni and cheese in a certain town. 4. Student's $t$-distribution 5. $t = 0.340$ 6. $p\text{-value} = 0.36756$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$ is greater than 0.05 4. Conclusion: The mean cost could be $1, or less. 
At the 5% significance level, there is insufficient evidence to conclude that the mean price of a box of macaroni and cheese is more than$1. 8. $(0.8291, 1.241)$ Q 9.6.20 "William Shakespeare: The Tragedy of Hamlet, Prince of Denmark," by Jacqueline Ghodsi THE CHARACTERS (in order of appearance): • HAMLET, Prince of Denmark and student of Statistics • POLONIUS, Hamlet’s tutor • HOROTIO, friend to Hamlet and fellow student Scene: The great library of the castle, in which Hamlet does his lessons Act I (The day is fair, but the face of Hamlet is clouded. He paces the large room. His tutor, Polonius, is reprimanding Hamlet regarding the latter’s recent experience. Horatio is seated at the large table at right stage.) POLONIUS: My Lord, how cans’t thou admit that thou hast seen a ghost! It is but a figment of your imagination! HAMLET: I beg to differ; I know of a certainty that five-and-seventy in one hundred of us, condemned to the whips and scorns of time as we are, have gazed upon a spirit of health, or goblin damn’d, be their intents wicked or charitable. POLONIUS If thou doest insist upon thy wretched vision then let me invest your time; be true to thy work and speak to me through the reason of the null and alternate hypotheses. (He turns to Horatio.) Did not Hamlet himself say, “What piece of work is man, how noble in reason, how infinite in faculties? Then let not this foolishness persist. Go, Horatio, make a survey of three-and-sixty and discover what the true proportion be. For my part, I will never succumb to this fantasy, but deem man to be devoid of all reason should thy proposal of at least five-and-seventy in one hundred hold true. HORATIO (to Hamlet): What should we do, my Lord? HAMLET: Go to thy purpose, Horatio. HORATIO: To what end, my Lord? HAMLET: That you must teach me. But let me conjure you by the rights of our fellowship, by the consonance of our youth, but the obligation of our ever-preserved love, be even and direct with me, whether I am right or no. (Horatio exits, followed by Polonius, leaving Hamlet to ponder alone.) Act II (The next day, Hamlet awaits anxiously the presence of his friend, Horatio. Polonius enters and places some books upon the table just a moment before Horatio enters.) POLONIUS: So, Horatio, what is it thou didst reveal through thy deliberations? HORATIO: In a random survey, for which purpose thou thyself sent me forth, I did discover that one-and-forty believe fervently that the spirits of the dead walk with us. Before my God, I might not this believe, without the sensible and true avouch of mine own eyes. POLONIUS: Give thine own thoughts no tongue, Horatio. (Polonius turns to Hamlet.) But look to’t I charge you, my Lord. Come Horatio, let us go together, for this is not our test. (Horatio and Polonius leave together.) HAMLET: To reject, or not reject, that is the question: whether ‘tis nobler in the mind to suffer the slings and arrows of outrageous statistics, or to take arms against a sea of data, and, by opposing, end them. (Hamlet resignedly attends to his task.) (Curtain falls) Q 9.6.21 "Untitled," by Stephen Chen I've often wondered how software is released and sold to the public. Ironically, I work for a company that sells products with known problems. Unfortunately, most of the problems are difficult to create, which makes them difficult to fix. I usually use the test program X, which tests the product, to try to create a specific problem. When the test program is run to make an error occur, the likelihood of generating an error is 1%. 
So, armed with this knowledge, I wrote a new test program Y that will generate the same error that test program X creates, but more often. To find out if my test program is better than the original, so that I can convince the management that I'm right, I ran my test program to find out how often I can generate the same error. When I ran my test program 50 times, I generated the error twice. While this may not seem much better, I think that I can convince the management to use my test program instead of the original test program. Am I right? S 9.6.21 1. $H_{0}: p = 0.01$ 2. $H_{a}: p > 0.01$ 3. Let $P′ =$ the proportion of errors generated 4. Normal for a single proportion 5. 2.13 6. 0.0165 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis 3. Reason for decision: The $p\text{-value}$ is less than 0.05. 4. Conclusion: At the 5% significance level, there is sufficient evidence to conclude that the proportion of errors generated is more than 0.01. 8. Confidence interval: $(0, 0.094)$. The“plus-4s” confidence interval is $(0.004, 0.144)$. Q 9.6.22 "Japanese Girls’ Names" by Kumi Furuichi It used to be very typical for Japanese girls’ names to end with “ko.” (The trend might have started around my grandmothers’ generation and its peak might have been around my mother’s generation.) “Ko” means “child” in Chinese characters. Parents would name their daughters with “ko” attaching to other Chinese characters which have meanings that they want their daughters to become, such as Sachiko—happy child, Yoshiko—a good child, Yasuko—a healthy child, and so on. However, I noticed recently that only two out of nine of my Japanese girlfriends at this school have names which end with “ko.” More and more, parents seem to have become creative, modernized, and, sometimes, westernized in naming their children. I have a feeling that, while 70 percent or more of my mother’s generation would have names with “ko” at the end, the proportion has dropped among my peers. I wrote down all my Japanese friends’, ex-classmates’, co-workers, and acquaintances’ names that I could remember. Following are the names. (Some are repeats.) Test to see if the proportion has dropped for this generation. Ai, Akemi, Akiko, Ayumi, Chiaki, Chie, Eiko, Eri, Eriko, Fumiko, Harumi, Hitomi, Hiroko, Hiroko, Hidemi, Hisako, Hinako, Izumi, Izumi, Junko, Junko, Kana, Kanako, Kanayo, Kayo, Kayoko, Kazumi, Keiko, Keiko, Kei, Kumi, Kumiko, Kyoko, Kyoko, Madoka, Maho, Mai, Maiko, Maki, Miki, Miki, Mikiko, Mina, Minako, Miyako, Momoko, Nana, Naoko, Naoko, Naoko, Noriko, Rieko, Rika, Rika, Rumiko, Rei, Reiko, Reiko, Sachiko, Sachiko, Sachiyo, Saki, Sayaka, Sayoko, Sayuri, Seiko, Shiho, Shizuka, Sumiko, Takako, Takako, Tomoe, Tomoe, Tomoko, Touko, Yasuko, Yasuko, Yasuyo, Yoko, Yoko, Yoko, Yoshiko, Yoshiko, Yoshiko, Yuka, Yuki, Yuki, Yukiko, Yuko, Yuko. Q 9.6.23 "Phillip’s Wish," by Suzanne Osorio My nephew likes to play Chasing the girls makes his day. He asked his mother If it is okay To get his ear pierced. She said, “No way!” To poke a hole through your ear, Is not what I want for you, dear. He argued his point quite well, Says even my macho pal, Mel, Has gotten this done. It’s all just for fun. C’mon please, mom, please, what the hell. Again Phillip complained to his mother, Saying half his friends (including their brothers) Are piercing their ears And they have no fears He wants to be like the others. She said, “I think it’s much less. We must do a hypothesis test. And if you are right, I won’t put up a fight. 
But, if not, then my case will rest.” We proceeded to call fifty guys To see whose prediction would fly. Nineteen of the fifty Said piercing was nifty And earrings they’d occasionally buy. Then there’s the other thirty-one, Who said they’d never have this done. So now this poem’s finished. Will his hopes be diminished, Or will my nephew have his fun? S 9.6.23 1. $H_{0}: p = 0.50$ 2. $H_{a}: p < 0.50$ 3. Let $P′ =$ the proportion of friends that has a pierced ear. 4. normal for a single proportion 5. –1.70 6. $p\text{-value} = 0.0448$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis 3. Reason for decision: The $p\text{-value}$ is less than 0.05. (However, they are very close.) 4. Conclusion: There is sufficient evidence to support the claim that less than 50% of his friends have pierced ears. 8. Confidence Interval: $(0.245, 0.515)$: The “plus-4s” confidence interval is $(0.259, 0.519)$. Q 9.6.24 "The Craven," by Mark Salangsang Once upon a morning dreary In stats class I was weak and weary. Pondering over last night’s homework Whose answers were now on the board This I did and nothing more. While I nodded nearly napping Suddenly, there came a tapping. As someone gently rapping, Rapping my head as I snore. Quoth the teacher, “Sleep no more.” “In every class you fall asleep,” The teacher said, his voice was deep. “So a tally I’ve begun to keep Of every class you nap and snore. The percentage being forty-four.” “My dear teacher I must confess, While sleeping is what I do best. The percentage, I think, must be less, A percentage less than forty-four.” This I said and nothing more. “We’ll see,” he said and walked away, And fifty classes from that day He counted till the month of May The classes in which I napped and snored. The number he found was twenty-four. At a significance level of 0.05, Please tell me am I still alive? Or did my grade just take a dive Plunging down beneath the floor? Upon thee I hereby implore. Q 9.6.25 Toastmasters International cites a report by Gallop Poll that 40% of Americans fear public speaking. A student believes that less than 40% of students at her school fear public speaking. She randomly surveys 361 schoolmates and finds that 135 report they fear public speaking. Conduct a hypothesis test to determine if the percent at her school is less than 40%. S 9.6.25 1. $H_{0}: p = 0.40$ 2. $H_{a}: p < 0.40$ 3. Let $P′ =$ the proportion of schoolmates who fear public speaking. 4. normal for a single proportion 5. –1.01 6. $p\text{-value} = 0.1563$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$ is greater than 0.05. 4. Conclusion: There is insufficient evidence to support the claim that less than 40% of students at the school fear public speaking. 8. Confidence Interval: $(0.3241, 0.4240)$: The “plus-4s” confidence interval is $(0.3257, 0.4250)$. Q 9.6.26 Sixty-eight percent of online courses taught at community colleges nationwide were taught by full-time faculty. To test if 68% also represents California’s percent for full-time faculty teaching the online classes, Long Beach City College (LBCC) in California, was randomly selected for comparison. In the same year, 34 of the 44 online courses LBCC offered were taught by full-time faculty. Conduct a hypothesis test to determine if 68% represents California. NOTE: For more accurate results, use more California community colleges and this past year's data. 
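Nearly all of the single-proportion problems in this section reduce to the same normal-approximation z-test. As a cross-check on S 9.6.25 above (135 of 361 schoolmates, testing $p = 0.40$ against $p < 0.40$), the sketch below is illustrative only and should reproduce the reported statistic of about –1.01 and p-value of about 0.1563.

# One-proportion z-test sketch, cross-checking S 9.6.25 (illustrative only).
from math import sqrt
from scipy.stats import norm

x, n, p0 = 135, 361, 0.40
p_hat = x / n                                  # sample proportion, about 0.374
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)     # about -1.01
p_value = norm.cdf(z)                          # left-tailed test; about 0.156
print(round(z, 2), round(p_value, 4))

Changing x, n, p0, and the tail direction adapts the same few lines to exercises such as Q 9.6.26 and Q 9.6.27.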
Q 9.6.27 According to an article in Bloomberg Businessweek, New York City's most recent adult smoking rate is 14%. Suppose that a survey is conducted to determine this year’s rate. Nine out of 70 randomly chosen N.Y. City residents reply that they smoke. Conduct a hypothesis test to determine if the rate is still 14% or if it has decreased. S 9.6.27 1. $H_{0}: p = 0.14$ 2. $H_{a}: p < 0.14$ 3. Let $P′ =$ the proportion of NYC residents that smoke. 4. normal for a single proportion 5. –0.2756 6. $p\text{-value} = 0.3914$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$ is greater than 0.05. 4. Conclusion: At the 5% significance level, there is insufficient evidence to conclude that the proportion of NYC residents who smoke is less than 0.14. 8. Confidence Interval: $(0.0502, 0.2070)$: The “plus-4s” confidence interval (see chapter 8) is $(0.0676, 0.2297)$. Q 9.6.28 The mean age of De Anza College students in a previous term was 26.6 years old. An instructor thinks the mean age for online students is older than 26.6. She randomly surveys 56 online students and finds that the sample mean is 29.4 with a standard deviation of 2.1. Conduct a hypothesis test. Q 9.6.29 Registered nurses earned an average annual salary of $69,110. For that same year, a survey was conducted of 41 California registered nurses to determine if the annual salary is higher than $69,110 for California nurses. The sample average was $71,121 with a sample standard deviation of $7,489. Conduct a hypothesis test. S 9.6.29 1. $H_{0}: \mu = 69,110$ 2. $H_{a}: \mu > 69,110$ 3. Let $\bar{X} =$ the mean salary in dollars for California registered nurses. 4. Student's t-distribution 5. $t = 1.719$ 6. $p\text{-value}: 0.0466$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: The $p\text{-value}$ is less than 0.05. 4. Conclusion: At the 5% significance level, there is sufficient evidence to conclude that the mean salary of California registered nurses exceeds \$69,110. 8. $(68,757, 73,485)$ Q 9.6.30 La Leche League International reports that the mean age of weaning a child from breastfeeding is age four to five worldwide. In America, most nursing mothers wean their children much earlier. Suppose a random survey is conducted of 21 U.S. mothers who recently weaned their children. The mean weaning age was nine months (3/4 year) with a standard deviation of 4 months. Conduct a hypothesis test to determine if the mean weaning age in the U.S. is less than four years old. Q 9.6.31 Over the past few decades, public health officials have examined the link between weight concerns and teen girls' smoking. Researchers surveyed a group of 273 randomly selected teen girls living in Massachusetts (between 12 and 15 years old). After four years the girls were surveyed again. Sixty-three said they smoked to stay thin. Is there good evidence that more than thirty percent of the teen girls smoke to stay thin? After conducting the test, your decision and conclusion are 1. Reject $H_{0}$: There is sufficient evidence to conclude that more than 30% of teen girls smoke to stay thin. 2. Do not reject $H_{0}$: There is not sufficient evidence to conclude that less than 30% of teen girls smoke to stay thin. 3. Do not reject $H_{0}$: There is not sufficient evidence to conclude that more than 30% of teen girls smoke to stay thin. 4.
Reject $H_{0}$: There is sufficient evidence to conclude that less than 30% of teen girls smoke to stay thin. c Q 9.6.32 A statistics instructor believes that fewer than 20% of Evergreen Valley College (EVC) students attended the opening night midnight showing of the latest Harry Potter movie. She surveys 84 of her students and finds that 11 of them attended the midnight showing. At a 1% level of significance, an appropriate conclusion is: 1. There is insufficient evidence to conclude that the percent of EVC students who attended the midnight showing of Harry Potter is less than 20%. 2. There is sufficient evidence to conclude that the percent of EVC students who attended the midnight showing of Harry Potter is more than 20%. 3. There is sufficient evidence to conclude that the percent of EVC students who attended the midnight showing of Harry Potter is less than 20%. 4. There is insufficient evidence to conclude that the percent of EVC students who attended the midnight showing of Harry Potter is at least 20%. Q 9.6.33 Previously, an organization reported that teenagers spent 4.5 hours per week, on average, on the phone. The organization thinks that, currently, the mean is higher. Fifteen randomly chosen teenagers were asked how many hours per week they spend on the phone. The sample mean was 4.75 hours with a sample standard deviation of 2.0. Conduct a hypothesis test. At a significance level of $\alpha = 0.05$, what is the correct conclusion? 1. There is enough evidence to conclude that the mean number of hours is more than 4.75 2. There is enough evidence to conclude that the mean number of hours is more than 4.5 3. There is not enough evidence to conclude that the mean number of hours is more than 4.5 4. There is not enough evidence to conclude that the mean number of hours is more than 4.75 S 9.6.33 c Instructions: For the following ten hypothesis-testing exercises, answer each question. State the null and alternate hypothesis. State the $p\text{-value}$. State $\alpha$. What is your decision? Write a conclusion. Answer any other questions asked in the problem. Q 9.6.34 According to the Center for Disease Control website, in 2011 at least 18% of high school students have smoked a cigarette. An Introduction to Statistics class in Davies County, KY conducted a hypothesis test at the local high school (a medium-sized school of approximately 1,200 students with a small-city demographic) to determine if the local high school’s percentage was lower. One hundred fifty students were chosen at random and surveyed. Of the 150 students surveyed, 82 have smoked. Using a significance level of 0.05 and appropriate statistical evidence, conduct a hypothesis test and state the conclusions. Q 9.6.35 A recent survey in the N.Y. Times Almanac indicated that 48.8% of families own stock. A broker wanted to determine if this survey could be valid. He surveyed a random sample of 250 families and found that 142 owned some type of stock. At the 0.05 significance level, can the survey be considered to be accurate? S 9.6.35 1. $H_{0}: p = 0.488$ $H_{a}: p \neq 0.488$ 2. $p\text{-value} = 0.0114$ 3. $\alpha = 0.05$ 4. Reject the null hypothesis. 5. At the 5% level of significance, there is enough evidence to conclude that the proportion of families that own stocks is not 48.8%. 6. The survey does not appear to be accurate. Q 9.6.36 Driver error can be listed as the cause of approximately 54% of all fatal auto accidents, according to the American Automobile Association.
Thirty randomly selected fatal accidents are examined, and it is determined that 14 were caused by driver error. Using $\alpha = 0.05$, is the AAA proportion accurate? Q 9.6.37 The US Department of Energy reported that 51.7% of homes were heated by natural gas. A random sample of 221 homes in Kentucky found that 115 were heated by natural gas. Does the evidence support the claim at the $\alpha = 0.05$ level in Kentucky? Are the results applicable across the country? Why? S 9.6.37 1. $H_{0}: p = 0.517$ $H_{a}: p \neq 0.517$ 2. $p\text{-value} = 0.9203$. 3. $\alpha = 0.05$. 4. Do not reject the null hypothesis. 5. At the 5% significance level, there is not enough evidence to conclude that the proportion of homes in Kentucky that are heated by natural gas differs from 0.517. 6. However, we cannot generalize this result to the entire nation. First, the sample’s population is only the state of Kentucky. Second, it is reasonable to assume that homes in the extreme north and south will have extreme high usage and low usage, respectively. We would need to expand our sample base to include these possibilities if we wanted to generalize this claim to the entire nation. Q 9.6.38 For Americans using library services, the American Library Association claims that at most 67% of patrons borrow books. The library director in Owensboro, Kentucky feels this is not true, so she asked a local college statistics class to conduct a survey. The class randomly selected 100 patrons and found that 82 borrowed books. Did the class demonstrate that the percentage was higher in Owensboro, KY? Use the $\alpha = 0.01$ level of significance. What is the possible proportion of patrons that do borrow books from the Owensboro Library? Q 9.6.39 The Weather Underground reported that the mean amount of summer rainfall for the northeastern US is at least 11.52 inches. Ten cities in the northeast are randomly selected and the mean rainfall amount is calculated to be 7.42 inches with a standard deviation of 1.3 inches. At the $\alpha = 0.05$ level, can it be concluded that the mean rainfall was below the reported average? What if $\alpha = 0.01$? Assume the amount of summer rainfall follows a normal distribution. S 9.6.39 1. $H_{0}: \mu \geq 11.52$ $H_{a}: \mu < 11.52$ 2. $p\text{-value} = 0.000002$ which is almost 0. 3. $\alpha = 0.05$. 4. Reject the null hypothesis. 5. At the 5% significance level, there is enough evidence to conclude that the mean amount of summer rain in the northeastern US is less than 11.52 inches, on average. 6. We would make the same conclusion if alpha were 1% because the $p\text{-value}$ is almost 0. Q 9.6.40 A survey in the N.Y. Times Almanac finds the mean commute time (one way) is 25.4 minutes for the 15 largest US cities. The Austin, TX chamber of commerce feels that Austin’s commute time is less and wants to publicize this fact. The mean for 25 randomly selected commuters is 22.1 minutes with a standard deviation of 5.3 minutes. At the $\alpha = 0.10$ level, is the Austin, TX commute significantly less than the mean commute time for the 15 largest US cities? Q 9.6.41 A report by the Gallup Poll found that a woman visits her doctor, on average, at most 5.8 times each year. A random sample of 20 women results in these yearly visit totals: 3; 2; 1; 3; 7; 2; 9; 4; 6; 6; 8; 0; 5; 6; 4; 2; 1; 3; 4; 1. At the $\alpha = 0.05$ level, can it be concluded that the sample mean is higher than 5.8 visits per year? S 9.6.41 1. $H_{0}: \mu \leq 5.8$ $H_{a}: \mu > 5.8$ 2. $p\text{-value} = 0.9987$ 3. $\alpha = 0.05$ 4.
Do not reject the null hypothesis. 5. At the 5% level of significance, there is not enough evidence to conclude that a woman visits her doctor, on average, more than 5.8 times a year. Q 9.6.42 According to the N.Y. Times Almanac, the mean family size in the U.S. is 3.18. A sample of a college math class resulted in the following family sizes: 5; 4; 5; 4; 4; 3; 6; 4; 3; 3; 5; 5; 6; 3; 3; 2; 7; 4; 5; 2; 2; 2; 3; 2. At the $\alpha = 0.05$ level, is the class’s mean family size greater than the national average? Does the Almanac result remain valid? Why? Q 9.6.43 The student academic group on a college campus claims that freshman students study at least 2.5 hours per day, on average. One Introduction to Statistics class was skeptical. The class took a random sample of 30 freshman students and found a mean study time of 137 minutes with a standard deviation of 45 minutes. At the $\alpha = 0.01$ level, is the student academic group’s claim correct? S 9.6.43 1. $H_{0}: \mu \geq 150$ $H_{a}: \mu < 150$ 2. $p\text{-value} = 0.0622$ 3. $\alpha = 0.01$ 4. Do not reject the null hypothesis. 5. At the 1% significance level, there is not enough evidence to conclude that freshman students study less than 2.5 hours per day, on average. 6. The student academic group’s claim appears to be correct.
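The one-sample $t$-tests above (for example, S 9.6.43) can be checked directly from their summary statistics. The following sketch is not part of the original exercises; it assumes Python with NumPy and SciPy and simply reproduces the left-tailed test statistic and $p\text{-value}$ for the freshman study-time problem.

```python
import numpy as np
from scipy import stats

# Summary statistics from Q/S 9.6.43: 30 freshmen, mean study time 137 min,
# sample SD 45 min; the claim is at least 2.5 hours (150 min), so Ha: mu < 150.
n, xbar, s, mu0 = 30, 137, 45, 150

t_stat = (xbar - mu0) / (s / np.sqrt(n))   # about -1.58
p_value = stats.t.cdf(t_stat, df=n - 1)    # left tail, about 0.062
print(round(t_stat, 2), round(p_value, 4))
```

A $p\text{-value}$ near 0.062 exceeds $\alpha = 0.01$, which matches the "do not reject" decision reported in the solution.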
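The single-proportion exercises in this chapter (such as Q/S 9.6.35) rely on the normal approximation $z = \frac{\hat{p} - p_0}{\sqrt{p_0(1-p_0)/n}}$. The sketch below is illustrative only, again assuming Python with SciPy rather than anything used in the original text; it reproduces the two-tailed $p\text{-value}$ for the stock-ownership survey.

```python
from math import sqrt
from scipy.stats import norm

# Q/S 9.6.35: 142 of 250 families own stock; H0: p = 0.488 vs. Ha: p != 0.488
n, x, p0 = 250, 142, 0.488
p_hat = x / n

z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)   # about 2.53
p_value = 2 * norm.sf(abs(z))                # two-tailed, about 0.0114
print(round(z, 2), round(p_value, 4))
```

Since 0.0114 is below $\alpha = 0.05$, the null hypothesis is rejected, as in the printed solution.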
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. 10.2: Two Population Means with Unknown Standard Deviations Use the following information to answer the next 15 exercises: Indicate if the hypothesis test is for 1. independent group means, population standard deviations, and/or variances known 2. independent group means, population standard deviations, and/or variances unknown 3. matched or paired samples 4. single mean 5. two proportions 6. single proportion Exercise 10.2.3 It is believed that 70% of males pass their drivers test in the first attempt, while 65% of females pass the test in the first attempt. Of interest is whether the proportions are in fact equal. Answer two proportions Exercise 10.2.4 A new laundry detergent is tested on consumers. Of interest is the proportion of consumers who prefer the new brand over the leading competitor. A study is done to test this. Exercise 10.2.5 A new windshield treatment claims to repel water more effectively. Ten windshields are tested by simulating rain without the new treatment. The same windshields are then treated, and the experiment is run again. A hypothesis test is conducted. Answer matched or paired samples Exercise 10.2.6 S 10.3.3 Subscripts: 1 = boys, 2 = girls 1. $H_{0}: \mu_{1} \leq \mu_{2}$ 2. $H_{a}: \mu_{1} > \mu_{2}$ 3. The random variable is the difference in the mean auto insurance costs for boys and girls. 4. normal 5. test statistic: $z = 2.50$ 6. $p\text{-value}: 0.0062$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for Decision: $p\text{-value} < \alpha$ 4. Conclusion: At the 5% significance level, there is sufficient evidence to conclude that the mean cost of auto insurance for teenage boys is greater than that for girls. Q 10.3.4 A group of transfer bound students wondered if they will spend the same mean amount on texts and supplies each year at their four-year university as they have at their community college. They conducted a random survey of 54 students at their community college and 66 students at their local four-year university. The sample means were $947 and$1,011, respectively. The population standard deviations are known to be $254 and$87, respectively. Conduct a hypothesis test to determine if the means are statistically the same. Q 10.3.5 Some manufacturers claim that non-hybrid sedan cars have a lower mean miles-per-gallon (mpg) than hybrid ones. Suppose that consumers test 21 hybrid sedans and get a mean of 31 mpg with a standard deviation of seven mpg. Thirty-one non-hybrid sedans get a mean of 22 mpg with a standard deviation of four mpg. Suppose that the population standard deviations are known to be six and three, respectively. Conduct a hypothesis test to evaluate the manufacturers claim. S 10.3.5 Subscripts: 1 = non-hybrid sedans, 2 = hybrid sedans 1. $H_{0}: \mu_{1} \geq \mu_{2}$ 2. $H_{a}: \mu_{1} < \mu_{2}$ 3. The random variable is the difference in the mean miles per gallon of non-hybrid sedans and hybrid sedans. 4. normal 5. test statistic: 6.36 6. $p\text{-value}: 0$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: $p\text{-value} < \alpha$ 4. Conclusion: At the 5% significance level, there is sufficient evidence to conclude that the mean miles per gallon of non-hybrid sedans is less than that of hybrid sedans. 
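For two-sample problems with known population standard deviations, such as Q/S 10.3.5, the test statistic is $z = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}$. The sketch below is an illustration under the assumption of Python with SciPy (not part of the exercises); it reproduces the statistic of magnitude about 6.36 quoted in that solution.

```python
from math import sqrt
from scipy.stats import norm

# Q/S 10.3.5, subscripts: 1 = non-hybrid sedans, 2 = hybrid sedans
n1, xbar1, sigma1 = 31, 22, 3    # non-hybrid: 31 cars, mean 22 mpg, sigma 3
n2, xbar2, sigma2 = 21, 31, 6    # hybrid: 21 cars, mean 31 mpg, sigma 6

z = (xbar1 - xbar2) / sqrt(sigma1**2 / n1 + sigma2**2 / n2)  # about -6.36
p_value = norm.cdf(z)            # left-tailed (Ha: mu1 < mu2); essentially 0
print(round(z, 2), p_value)
```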
Q 10.3.6 A baseball fan wanted to know if there is a difference between the number of games played in a World Series when the American League won the series versus when the National League won the series. From 1922 to 2012, the population standard deviation of games won by the American League was 1.14, and the population standard deviation of games won by the National League was 1.11. Of 19 randomly selected World Series games won by the American League, the mean number of games won was 5.76. The mean number of 17 randomly selected games won by the National League was 5.42. Conduct a hypothesis test. Q 10.3.7 One of the questions in a study of marital satisfaction of dual-career couples was to rate the statement “I’m pleased with the way we divide the responsibilities for childcare.” The ratings went from one (strongly agree) to five (strongly disagree). Table contains ten of the paired responses for husbands and wives. Conduct a hypothesis test to see if the mean difference in the husband’s versus the wife’s satisfaction level is negative (meaning that, within the partnership, the husband is happier than the wife). Wife’s Score 2 2 3 3 4 2 1 1 2 4 Husband’s Score 2 2 1 3 2 1 1 1 2 4 S 10.3.7 1. $H_{0}: \mu_{d} = 0$ 2. $H_{a}: \mu_{d} < 0$ 3. The random variable $X_{d}$ is the average difference between husband’s and wife’s satisfaction level. 4. $t_{9}$ 5. test statistic: $t = –1.86$ 6. $p\text{-value}: 0.0479$ 7. Check student’s solution 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis, but run another test. 3. Reason for Decision: $p\text{-value} < \alpha$ 4. Conclusion: This is a weak test because alpha and the p-value are close. However, there is insufficient evidence to conclude that the mean difference is negative. 10.4: Comparing Two Independent Population Proportions Use the following information for the next five exercises. Two types of phone operating system are being tested to determine if there is a difference in the proportions of system failures (crashes). Fifteen out of a random sample of 150 phones with OS1 had system failures within the first eight hours of operation. Nine out of another random sample of 150 phones with OS2 had system failures within the first eight hours of operation. OS2 is believed to be more stable (have fewer crashes) than OS1. Exercise 10.4.2 Is this a test of means or proportions? Exercise 10.4.3 What is the random variable? Answer $P'_{OS_{1}} - P'_{OS_{2}} =$ difference in the proportions of phones that had system failures within the first eight hours of operation with OS1 and OS2. Exercise 10.4.4 State the null and alternative hypotheses. Exercise 10.4.5 What is the $p\text{-value}$? Answer 0.1018 Exercise 10.4.6 What can you conclude about the two operating systems? Use the following information to answer the next twelve exercises. In the recent Census, three percent of the U.S. population reported being of two or more races. However, the percent varies tremendously from state to state. Suppose that two random surveys are conducted. In the first random survey, out of 1,000 North Dakotans, only nine people reported being of two or more races. In the second random survey, out of 500 Nevadans, 17 people reported being of two or more races. Conduct a hypothesis test to determine if the population percents are the same for the two states or if the percent for Nevada is statistically higher than for North Dakota. Exercise 10.4.7 Is this a test of means or proportions? 
Answer proportions Exercise 10.4.8 State the null and alternative hypotheses. 1. $H_{0}$: _________ 2. $H_{a}$: _________ Exercise 10.4.9 Is this a right-tailed, left-tailed, or two-tailed test? How do you know? Answer right-tailed Exercise 10.4.10 What is the random variable of interest for this test? Exercise 10.4.11 In words, define the random variable for this test. Answer The random variable is the difference in proportions (percents) of the populations that are of two or more races in Nevada and North Dakota. Exercise 10.4.12 Which distribution (normal or Student's t) would you use for this hypothesis test? Exercise 10.4.13 Explain why you chose the distribution you did for the Exercise 10.56. Answer Our sample sizes are much greater than five each, so we use the normal for two proportions distribution for this hypothesis test. Exercise 10.4.14 Calculate the test statistic. Exercise 10.4.15 Sketch a graph of the situation. Mark the hypothesized difference and the sample difference. Shade the area corresponding to the $p\text{-value}$. Answer Check student’s solution. Exercise 10.4.16 Find the $p\text{-value}$. Exercise 10.4.17 At a pre-conceived $\alpha = 0.05$, what is your: 1. Decision: 2. Reason for the decision: 3. Conclusion (write out in a complete sentence): Answer 1. Reject the null hypothesis. 2. $p\text{-value} < \alpha$ 3. At the 5% significance level, there is sufficient evidence to conclude that the proportion (percent) of the population that is of two or more races in Nevada is statistically higher than that in North Dakota. Exercise 10.4.18 Does it appear that the proportion of Nevadans who are two or more races is higher than the proportion of North Dakotans? Why or why not? DIRECTIONS: For each of the word problems, use a solution sheet to do the hypothesis test. The solution sheet is found in [link]. Please feel free to make copies of the solution sheets. For the online version of the book, it is suggested that you copy the .doc or the .pdf files. NOTE If you are using a Student's t-distribution for one of the following homework problems, including for paired data, you may assume that the underlying population is normally distributed. (In general, you must first prove that assumption, however.) Q 10.4.1 A recent drug survey showed an increase in the use of drugs and alcohol among local high school seniors as compared to the national percent. Suppose that a survey of 100 local seniors and 100 national seniors is conducted to see if the proportion of drug and alcohol use is higher locally than nationally. Locally, 65 seniors reported using drugs or alcohol within the past month, while 60 national seniors reported using them. Q 10.4.2 We are interested in whether the proportions of female suicide victims for ages 15 to 24 are the same for the whites and the blacks races in the United States. We randomly pick one year, 1992, to compare the races. The number of suicides estimated in the United States in 1992 for white females is 4,930. Five hundred eighty were aged 15 to 24. The estimate for black females is 330. Forty were aged 15 to 24. We will let female suicide victims be our population. S 10.4.2 1. $H_{0}: P_{W} = P_{B}$ 2. $H_{a}: P_{W} \neq P_{B}$ 3. The random variable is the difference in the proportions of white and black suicide victims, aged 15 to 24. 4. normal for two proportions 5. test statistic: –0.1944 6. $p\text{-value}: 0.8458$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. 
Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: At the 5% significance level, there is insufficient evidence to conclude that the proportions of white and black female suicide victims, aged 15 to 24, are different. Q 10.4.3 Elizabeth Mjelde, an art history professor, was interested in whether the value from the Golden Ratio formula, $\left(\frac{(larger + smaller dimension}{larger dimension}\right)$ was the same in the Whitney Exhibit for works from 1900 to 1919 as for works from 1920 to 1942. Thirty-seven early works were sampled, averaging 1.74 with a standard deviation of 0.11. Sixty-five of the later works were sampled, averaging 1.746 with a standard deviation of 0.1064. Do you think that there is a significant difference in the Golden Ratio calculation? Q 10.4.4 A recent year was randomly picked from 1985 to the present. In that year, there were 2,051 Hispanic students at Cabrillo College out of a total of 12,328 students. At Lake Tahoe College, there were 321 Hispanic students out of a total of 2,441 students. In general, do you think that the percent of Hispanic students at the two colleges is basically the same or different? S 10.4.4 Subscripts: 1 = Cabrillo College, 2 = Lake Tahoe College 1. $H_{0}: p_{1} = p_{2}$ 2. $H_{a}: p_{1} \neq p_{2}$ 3. The random variable is the difference between the proportions of Hispanic students at Cabrillo College and Lake Tahoe College. 4. normal for two proportions 5. test statistic: 4.29 6. $p\text{-value}: 0.00002$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: p-value < alpha 4. Conclusion: There is sufficient evidence to conclude that the proportions of Hispanic students at Cabrillo College and Lake Tahoe College are different. Use the following information to answer the next three exercises. Neuroinvasive West Nile virus is a severe disease that affects a person’s nervous system . It is spread by the Culex species of mosquito. In the United States in 2010 there were 629 reported cases of neuroinvasive West Nile virus out of a total of 1,021 reported cases and there were 486 neuroinvasive reported cases out of a total of 712 cases reported in 2011. Is the 2011 proportion of neuroinvasive West Nile virus cases more than the 2010 proportion of neuroinvasive West Nile virus cases? Using a 1% level of significance, conduct an appropriate hypothesis test. • “2011” subscript: 2011 group. • “2010” subscript: 2010 group Q 10.4.5 This is: 1. a test of two proportions 2. a test of two independent means 3. a test of a single mean 4. a test of matched pairs. Q 10.4.6 An appropriate null hypothesis is: 1. $p_{2011} \leq p_{2010}$ 2. $p_{2011} \geq p_{2010}$ 3. $\mu_{2011} \leq \mu_{2010}$ 4. $p_{2011} > p_{2010}$ a Q 10.4.7 The $p\text{-value}$ is 0.0022. At a 1% level of significance, the appropriate conclusion is 1. There is sufficient evidence to conclude that the proportion of people in the United States in 2011 who contracted neuroinvasive West Nile disease is less than the proportion of people in the United States in 2010 who contracted neuroinvasive West Nile disease. 2. There is insufficient evidence to conclude that the proportion of people in the United States in 2011 who contracted neuroinvasive West Nile disease is more than the proportion of people in the United States in 2010 who contracted neuroinvasive West Nile disease. 3. 
There is insufficient evidence to conclude that the proportion of people in the United States in 2011 who contracted neuroinvasive West Nile disease is less than the proportion of people in the United States in 2010 who contracted neuroinvasive West Nile disease. 4. There is sufficient evidence to conclude that the proportion of people in the United States in 2011 who contracted neuroinvasive West Nile disease is more than the proportion of people in the United States in 2010 who contracted neuroinvasive West Nile disease. Q 10.4.8 Researchers conducted a study to find out if there is a difference in the use of eReaders by different age groups. Randomly selected participants were divided into two age groups. In the 16- to 29-year-old group, 7% of the 628 surveyed use eReaders, while 11% of the 2,309 participants 30 years old and older use eReaders. S 10.4.9 Test: two independent sample proportions. Random variable: $p′_{1} - p′_{2}$ Distribution: $H_{0}: p_{1} = p_{2}$ $H_{a}: p_{1} \neq p_{2}$ The proportion of eReader users is different for the 16- to 29-year-old users from that of the 30 and older users. Graph: two-tailed $p\text{-value}: 0.0033$ Decision: Reject the null hypothesis. Conclusion: At the 5% level of significance, from the sample data, there is sufficient evidence to conclude that the proportion of eReader users 16 to 29 years old is different from the proportion of eReader users 30 and older. Q 10.4.10 are considered obese if their body mass index (BMI) is at least 30. The researchers wanted to determine if the proportion of women who are obese in the south is less than the proportion of southern men who are obese. The results are shown in Table. Test at the 1% level of significance. Number who are obese Sample size Men 42,769 155,525 Women 67,169 248,775 Q 10.4.11 Two computer users were discussing tablet computers. A higher proportion of people ages 16 to 29 use tablets than the proportion of people age 30 and older. Table details the number of tablet owners for each age group. Test at the 1% level of significance. 16–29 year olds 30 years old and older Own a Tablet 69 231 Sample Size 628 2,309 S 10.4.11 Test: two independent sample proportions Random variable: $p′_{1} - p′_{2}$ Distribution: $H_{0}: p_{1} = p_{2}$ $H_{a}: p_{1} > p_{2}$ A higher proportion of tablet owners are aged 16 to 29 years old than are 30 years old and older. Graph: right-tailed $p\text{-value}: 0.2354$ Decision: Do not reject the $H_{0}$. Conclusion: At the 1% level of significance, from the sample data, there is not sufficient evidence to conclude that a higher proportion of tablet owners are aged 16 to 29 years old than are 30 years old and older. Q 10.4.12 A group of friends debated whether more men use smartphones than women. They consulted a research study of smartphone use among adults. The results of the survey indicate that of the 973 men randomly sampled, 379 use smartphones. For women, 404 of the 1,304 who were randomly sampled use smartphones. Test at the 5% level of significance. Q 10.4.13 While her husband spent 2½ hours picking out new speakers, a statistician decided to determine whether the percent of men who enjoy shopping for electronic equipment is higher than the percent of women who enjoy shopping for electronic equipment. The population was Saturday afternoon shoppers. Out of 67 men, 24 said they enjoyed the activity. Eight of the 24 women surveyed claimed to enjoy the activity. Interpret the results of the survey. S 10.4.13 Subscripts: 1: men; 2: women 1. 
$H_{0}: p_{1} \leq p_{2}$ 2. $H_{a}: p_{1} > p_{2}$ 3. $P'_{1} - P\_{2}$ is the difference between the proportions of men and women who enjoy shopping for electronic equipment. 4. normal for two proportions 5. test statistic: 0.22 6. $p\text{-value}: 0.4133$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for Decision: $p\text{-value} > \alpha$ 4. Conclusion: At the 5% significance level, there is insufficient evidence to conclude that the proportion of men who enjoy shopping for electronic equipment is more than the proportion of women. Q 10.4.14 We are interested in whether children’s educational computer software costs less, on average, than children’s entertainment software. Thirty-six educational software titles were randomly picked from a catalog. The mean cost was $31.14 with a standard deviation of$4.69. Thirty-five entertainment software titles were randomly picked from the same catalog. The mean cost was $33.86 with a standard deviation of$10.87. Decide whether children’s educational software costs less, on average, than children’s entertainment software. Q 10.4.15 Joan Nguyen recently claimed that the proportion of college-age males with at least one pierced ear is as high as the proportion of college-age females. She conducted a survey in her classes. Out of 107 males, 20 had at least one pierced ear. Out of 92 females, 47 had at least one pierced ear. Do you believe that the proportion of males has reached the proportion of females? S 10.4.15 1. $H_{0}: p_{1} = p_{2}$ 2. $H_{a}: p_{1} \neq p_{2}$ 3. $P'_{1} - P\_{2}$ is the difference between the proportions of men and women that have at least one pierced ear. 4. normal for two proportions 5. test statistic: –4.82 6. $p\text{-value}: 0$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for Decision: $p\text{-value} < \alpha$ 4. Conclusion: At the 5% significance level, there is sufficient evidence to conclude that the proportions of males and females with at least one pierced ear is different. Q 10.4.16 Use the data sets found in [link] to answer this exercise. Is the proportion of race laps Terri completes slower than 130 seconds less than the proportion of practice laps she completes slower than 135 seconds? Q 10.4.17 "To Breakfast or Not to Breakfast?" by Richard Ayore In the American society, birthdays are one of those days that everyone looks forward to. People of different ages and peer groups gather to mark the 18th, 20th, …, birthdays. During this time, one looks back to see what he or she has achieved for the past year and also focuses ahead for more to come. If, by any chance, I am invited to one of these parties, my experience is always different. Instead of dancing around with my friends while the music is booming, I get carried away by memories of my family back home in Kenya. I remember the good times I had with my brothers and sister while we did our daily routine. Every morning, I remember we went to the shamba (garden) to weed our crops. I remember one day arguing with my brother as to why he always remained behind just to join us an hour later. In his defense, he said that he preferred waiting for breakfast before he came to weed. He said, “This is why I always work more hours than you guys!” And so, to prove him wrong or right, we decided to give it a try. One day we went to work as usual without breakfast, and recorded the time we could work before getting tired and stopping. 
On the next day, we all ate breakfast before going to work. We recorded how long we worked again before getting tired and stopping. Of interest was our mean increase in work time. Though not sure, my brother insisted that it was more than two hours. Using the data in Table, solve our problem. Work hours with breakfast Work hours without breakfast 8 6 7 5 9 5 5 4 9 7 8 7 10 7 7 5 6 6 9 5 S 10.4.17 1. $H_{0}: \mu_{d} = 0$ 2. $H_{a}: \mu_{d} > 0$ 3. The random variable $X_{d}$ is the mean difference in work times on days when eating breakfast and on days when not eating breakfast. 4. $t_{9}$ 5. test statistic: 4.8963 6. $p\text{-value}: 0.0004$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for Decision:$p\text{-value} < \alpha$ 4. Conclusion: At the 5% level of significance, there is sufficient evidence to conclude that the mean difference in work times on days when eating breakfast and on days when not eating breakfast has increased. 10.5: Matched or Paired Samples Use the following information to answer the next five exercises. A study was conducted to test the effectiveness of a software patch in reducing system failures over a six-month period. Results for randomly selected installations are shown in Table. The “before” value is matched to an “after” value, and the differences are calculated. The differences have a normal distribution. Test at the 1% significance level. Installation A B C D E F G H Before 3 6 4 2 5 8 2 6 After 1 5 2 0 1 0 2 2 Exercise 10.5.4 What is the random variable? Answer the mean difference of the system failures Exercise 10.5.5 State the null and alternative hypotheses. Exercise 10.5.6 What is the $p\text{-value}$? Answer 0.0067 Exercise 10.5.7 Draw the graph of the $p\text{-value}$. Exercise 10.5.8 What conclusion can you draw about the software patch? Answer With a $p\text{-value} 0.0067$, we can reject the null hypothesis. There is enough evidence to support that the software patch is effective in reducing the number of system failures. Use the following information to answer next five exercises. A study was conducted to test the effectiveness of a juggling class. Before the class started, six subjects juggled as many balls as they could at once. After the class, the same six subjects juggled as many balls as they could. The differences in the number of balls are calculated. The differences have a normal distribution. Test at the 1% significance level. Subject A B C D E F Before 3 4 3 2 4 5 After 4 5 6 4 5 7 Exercise 10.5.9 State the null and alternative hypotheses. Exercise 10.5.10 What is the $p\text{-value}$? Answer 0.0021 Exercise 10.5.11 What is the sample mean difference? Exercise 10.5.12 Draw the graph of the $p\text{-value}$. Answer Exercise 10.5.13 What conclusion can you draw about the juggling class? Use the following information to answer the next five exercises. A doctor wants to know if a blood pressure medication is effective. Six subjects have their blood pressures recorded. After twelve weeks on the medication, the same six subjects have their blood pressure recorded again. For this test, only systolic pressure is of concern. Test at the 1% significance level. Patient A B C D E F Before 161 162 165 162 166 171 After 158 159 166 160 167 169 Exercise 10.5.14 State the null and alternative hypotheses. Answer $H_{0}: \mu_{d} \geq 0$ $H_{a}: \mu_{d} < 0$ Exercise 10.5.15 What is the test statistic? Exercise 10.5.16 What is the $p\text{-value}$? 
Answer 0.0699 Exercise 10.5.17 What is the sample mean difference? Exercise 10.5.18 What is the conclusion? Answer We decline to reject the null hypothesis. There is not sufficient evidence to support that the medication is effective. Bringing It Together Use the following information to answer the next ten exercises. indicate which of the following choices best identifies the hypothesis test. 1. independent group means, population standard deviations and/or variances known 2. independent group means, population standard deviations and/or variances unknown 3. matched or paired samples 4. single mean 5. two proportions 6. single proportion Exercise 10.5.19 A powder diet is tested on 49 people, and a liquid diet is tested on 36 different people. The population standard deviations are two pounds and three pounds, respectively. Of interest is whether the liquid diet yields a higher mean weight loss than the powder diet. Exercise 10.5.20 A new chocolate bar is taste-tested on consumers. Of interest is whether the proportion of children who like the new chocolate bar is greater than the proportion of adults who like it. Answer e Exercise 10.5.21 The mean number of English courses taken in a two–year time period by male and female college students is believed to be about the same. An experiment is conducted and data are collected from nine males and 16 females. Exercise 10.5.22 A football league reported that the mean number of touchdowns per game was five. A study is done to determine if the mean number of touchdowns has decreased. Answer d Exercise 10.5.23 A study is done to determine if students in the California state university system take longer to graduate than students enrolled in private universities. One hundred students from both the California state university system and private universities are surveyed. From years of research, it is known that the population standard deviations are 1.5811 years and one year, respectively. Exercise 10.5.24 According to a YWCA Rape Crisis Center newsletter, 75% of rape victims know their attackers. A study is done to verify this. Answer f Exercise 10.5.25 According to a recent study, U.S. companies have a mean maternity-leave of six weeks. Exercise 10.5.26 A recent drug survey showed an increase in use of drugs and alcohol among local high school students as compared to the national percent. Suppose that a survey of 100 local youths and 100 national youths is conducted to see if the proportion of drug and alcohol use is higher locally than nationally. Answer e Exercise 10.5.27 A new SAT study course is tested on 12 individuals. Pre-course and post-course scores are recorded. Of interest is the mean increase in SAT scores. The following data are collected: Pre-course score Post-course score 1 300 960 920 1010 1100 840 880 1100 1070 1250 1320 860 860 1330 1370 790 770 990 1040 1110 1200 740 850 Exercise 10.5.28 University of Michigan researchers reported in the Journal of the National Cancer Institute that quitting smoking is especially beneficial for those under age 49. In this American Cancer Society study, the risk (probability) of dying of lung cancer was about the same as for those who had never smoked. Answer f Exercise 10.5.29 Lesley E. Tan investigated the relationship between left-handedness vs. right-handedness and motor competence in preschool children. 
Random samples of 41 left-handed preschool children and 41 right-handed preschool children were given several tests of motor skills to determine if there is evidence of a difference between the children based on this experiment. The experiment produced the means and standard deviations shown Table. Determine the appropriate test and best distribution to use for that test. Left-handed Right-handed Sample size 41 41 Sample mean 97.5 98.1 Sample standard deviation 17.5 19.2 1. Two independent means, normal distribution 2. Two independent means, Student’s-t distribution 3. Matched or paired samples, Student’s-t distribution 4. Two population proportions, normal distribution Exercise 10.5.30 A golf instructor is interested in determining if her new technique for improving players’ golf scores is effective. She takes four (4) new students. She records their 18-hole scores before learning the technique and then after having taken her class. She conducts a hypothesis test. The data are as Table. Player 1 Player 2 Player 3 Player 4 Mean score before class 83 78 93 87 Mean score after class 80 80 86 86 This is: 1. a test of two independent means. 2. a test of two proportions. 3. a test of a single mean. 4. a test of a single proportion. Answer a DIRECTIONS: For each of the word problems, use a solution sheet to do the hypothesis test. The solution sheet is found in Appendix E. Please feel free to make copies of the solution sheets. For the online version of the book, it is suggested that you copy the .doc or the .pdf files. NOTE If you are using a Student's t-distribution for the homework problems, including for paired data, you may assume that the underlying population is normally distributed. (When using these tests in a real situation, you must first prove that assumption, however.) Q 10.5.1 Ten individuals went on a low–fat diet for 12 weeks to lower their cholesterol. The data are recorded in Table. Do you think that their cholesterol levels were significantly lowered? Starting cholesterol level Ending cholesterol level 140 140 220 230 110 120 240 220 200 190 180 150 190 200 360 300 280 300 260 240 S 10.5.1 $p\text{-value} = 0.1494$ At the 5% significance level, there is insufficient evidence to conclude that the medication lowered cholesterol levels after 12 weeks. Use the following information to answer the next two exercises. A new AIDS prevention drug was tried on a group of 224 HIV positive patients. Forty-five patients developed AIDS after four years. In a control group of 224 HIV positive patients, 68 developed AIDS after four years. We want to test whether the method of treatment reduces the proportion of patients that develop AIDS after four years or if the proportions of the treated group and the untreated group stay the same. Let the subscript $t =$ treated patient and $ut =$ untreated patient. Q 10.5.2 The appropriate hypotheses are: 1. $H_{0}: p_{t} < p_{ut}$ and $H_{a}: p_{t} \geq p_{ut}$ 2. $H_{0}: p_{t} \leq p_{ut}$ and $H_{a}: p_{t} > p_{ut}$ 3. $H_{0}: p_{t} = p_{ut}$ and $H_{a}: p_{t} \neq p_{ut}$ 4. $H_{0}: p_{t} = p_{ut}$ and $H_{a}: p_{t} < p_{ut}$ Q 10.5.3 If the $p\text{-value}$ is 0.0062 what is the conclusion (use $\alpha = 0.05$)? 1. The method has no effect. 2. There is sufficient evidence to conclude that the method reduces the proportion of HIV positive patients who develop AIDS after four years. 3. There is sufficient evidence to conclude that the method increases the proportion of HIV positive patients who develop AIDS after four years. 4. 
There is insufficient evidence to conclude that the method reduces the proportion of HIV positive patients who develop AIDS after four years. S 10.5.3 b Use the following information to answer the next two exercises. An experiment is conducted to show that blood pressure can be consciously reduced in people trained in a “biofeedback exercise program.” Six subjects were randomly selected and blood pressure measurements were recorded before and after the training. The difference between blood pressures was calculated (after - before) producing the following results: $\bar{x}_{d} = -10.2$ $s_{d} = 8.4$. Using the data, test the hypothesis that the blood pressure has decreased after the training. Q 10.5.4 The distribution for the test is: 1. $t_{5}$ 2. $t_{6}$ 3. $N(-10.2, 8.4)$ 4. $N\left(-10.2, \frac{8.4}{\sqrt{6}}\right)$ Q 10.5.5 If $\alpha = 0.05$, the $p\text{-value}$ and the conclusion are 1. 0.0014; There is sufficient evidence to conclude that the blood pressure decreased after the training. 2. 0.0014; There is sufficient evidence to conclude that the blood pressure increased after the training. 3. 0.0155; There is sufficient evidence to conclude that the blood pressure decreased after the training. 4. 0.0155; There is sufficient evidence to conclude that the blood pressure increased after the training. c Q 10.5.6 A golf instructor is interested in determining if her new technique for improving players’ golf scores is effective. She takes four new students. She records their 18-hole scores before learning the technique and then after having taken her class. She conducts a hypothesis test. The data are as follows. Player 1 Player 2 Player 3 Player 4 Mean score before class 83 78 93 87 Mean score after class 80 80 86 86 The correct decision is: 1. Reject $H_{0}$. 2. Do not reject the $H_{0}$. Q 10.5.7 A local cancer support group believes that the estimate for new female breast cancer cases in the south is higher in 2013 than in 2012. The group compared the estimates of new female breast cancer cases by southern state in 2012 and in 2013. The results are in Table. Southern States 2012 2013 Alabama 3,450 3,720 Arkansas 2,150 2,280 Florida 15,540 15,710 Georgia 6,970 7,310 Kentucky 3,160 3,300 Louisiana 3,320 3,630 Mississippi 1,990 2,080 North Carolina 7,090 7,430 Oklahoma 2,630 2,690 South Carolina 3,570 3,580 Tennessee 4,680 5,070 Texas 15,050 14,980 Virginia 6,190 6,280 S 10.5.7 Test: two matched pairs or paired samples (t-test) Random variable: $\bar{X}_{d}$ Distribution: $t_{12}$ $H_{0}: \mu_{d} = 0 H_{a}: \mu_{d} > 0$ The mean of the differences of new female breast cancer cases in the south between 2013 and 2012 is greater than zero. The estimate for new female breast cancer cases in the south is higher in 2013 than in 2012. Graph: right-tailed $p\text{-value}: 0.0004$ Decision: Reject $H_{0}$ Conclusion: At the 5% level of significance, from the sample data, there is sufficient evidence to conclude that there was a higher estimate of new female breast cancer cases in 2013 than in 2012. Q 10.5.8 A traveler wanted to know if the prices of hotels are different in the ten cities that he visits the most often. The list of the cities with the corresponding hotel prices for his two favorite hotel chains is in Table. Test at the 1% level of significance. 
Cities Hyatt Regency prices in dollars Hilton prices in dollars Atlanta 107 169 Boston 358 289 Chicago 209 299 Dallas 209 198 Denver 167 169 Indianapolis 179 214 Los Angeles 179 169 New York City 625 459 Philadelphia 179 159 Washington, DC 245 239 Q 10.5.9 A politician asked his staff to determine whether the underemployment rate in the northeast decreased from 2011 to 2012. The results are in Table. Northeastern States 2011 2012 Connecticut 17.3 16.4 Delaware 17.4 13.7 Maine 19.3 16.1 Maryland 16.0 15.5 Massachusetts 17.6 18.2 New Hampshire 15.4 13.5 New Jersey 19.2 18.7 New York 18.5 18.7 Ohio 18.2 18.8 Pennsylvania 16.5 16.9 Rhode Island 20.7 22.4 Vermont 14.7 12.3 West Virginia 15.5 17.3 S 10.5.9 Test: matched or paired samples (t-test) Difference data: $\{–0.9, –3.7, –3.2, –0.5, 0.6, –1.9, –0.5, 0.2, 0.6, 0.4, 1.7, –2.4, 1.8\}$ Random Variable: $\bar{X}_{d}$ Distribution: $H_{0}: \mu_{d} = 0 H_{a}: \mu_{d} < 0$ The mean of the differences of the rate of underemployment in the northeastern states between 2012 and 2011 is less than zero. The underemployment rate went down from 2011 to 2012. Graph: left-tailed. $p\text{-value}: 0.1207$ Decision: Do not reject $H_{0}$. Conclusion: At the 5% level of significance, from the sample data, there is not sufficient evidence to conclude that there was a decrease in the underemployment rates of the northeastern states from 2011 to 2012.
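The matched-pairs tests in this section reduce to a one-sample $t$-test on the differences. As a check on S 10.5.9, the sketch below (Python with SciPy assumed; not part of the original solution) runs a left-tailed test on the 2012 minus 2011 underemployment differences listed there.

```python
import numpy as np
from scipy import stats

# Difference data (2012 - 2011) for the 13 northeastern states in S 10.5.9
d = np.array([-0.9, -3.7, -3.2, -0.5, 0.6, -1.9, -0.5, 0.2,
              0.6, 0.4, 1.7, -2.4, 1.8])

# A paired test is a one-sample t-test on the differences, Ha: mu_d < 0
t_stat, p_value = stats.ttest_1samp(d, popmean=0, alternative='less')
print(round(t_stat, 2), round(p_value, 4))   # roughly -1.23 and 0.12
```

A $p\text{-value}$ of about 0.12 is above $\alpha = 0.05$, consistent with the decision not to reject $H_{0}$.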
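The two-proportion exercises in Section 10.4 (for example, the OS1 versus OS2 phone-failure setup) use a pooled proportion in the standard error. The following sketch is illustrative only and assumes Python with SciPy; with rounding it agrees with the $p\text{-value}$ of roughly 0.10 quoted for that exercise.

```python
from math import sqrt
from scipy.stats import norm

# OS1 vs. OS2 phone failures: 15 of 150 and 9 of 150; Ha: p1 > p2
x1, n1, x2, n2 = 15, 150, 9, 150
p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)

se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se               # about 1.28
p_value = norm.sf(z)             # right tail, about 0.10
print(round(z, 2), round(p_value, 4))
```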
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. 11.2: Facts about the Chi-Square Distribution Decide whether the following statements are true or false. Q 11.2.1 As the number of degrees of freedom increases, the graph of the chi-square distribution looks more and more symmetrical. true Q 11.2.2 The standard deviation of the chi-square distribution is twice the mean. Q 11.2.3 The mean and the median of the chi-square distribution are the same if $df = 24$. false 11:3: Goodness-of-Fit Test For each problem, use a solution sheet to solve the hypothesis test problem. Go to [link] for the chi-square solution sheet. Round expected frequency to two decimal places. Q 11.3.1 A six-sided die is rolled 120 times. Fill in the expected frequency column. Then, conduct a hypothesis test to determine if the die is fair. The data in Table are the result of the 120 rolls. Face Value Frequency Expected Frequency 1 15 2 29 3 16 4 15 5 30 6 15 The marital status distribution of the U.S. male population, ages 15 and older, is as shown in Table.Q 11.3.2 Marital Status Percent Expected Frequency never married 31.3 married 56.1 widowed 2.5 divorced/separated 10.1 Suppose that a random sample of 400 U.S. young adult males, 18 to 24 years old, yielded the following frequency distribution. We are interested in whether this age group of males fits the distribution of the U.S. adult population. Calculate the frequency one would expect when surveying 400 people. Fill in Table, rounding to two decimal places. Marital Status Frequency never married 140 married 238 widowed 2 divorced/separated 20 S 11.3.2 Marital Status Percent Expected Frequency never married 31.3 125.2 married 56.1 224.4 widowed 2.5 10 divorced/separated 10.1 40.4 1. The data fits the distribution. 2. The data does not fit the distribution. 3. 3 4. chi-square distribution with $df = 3$ 5. 19.27 6. 0.0002 7. Check student’s solution. 1. $\alpha = 0.05$ 2. Decision: Reject null 3. Reason for decision: $p\text{-value} < \alpha$ 4. Conclusion: Data does not fit the distribution. Use the following information to answer the next two exercises: The columns in Table contain the Race/Ethnicity of U.S. Public Schools for a recent year, the percentages for the Advanced Placement Examinee Population for that class, and the Overall Student Population. Suppose the right column contains the result of a survey of 1,000 local students from that year who took an AP Exam. Race/Ethnicity AP Examinee Population Overall Student Population Survey Frequency Asian, Asian American, or Pacific Islander 10.2% 5.4% 113 Black or African-American 8.2% 14.5% 94 Hispanic or Latino 15.5% 15.9% 136 American Indian or Alaska Native 0.6% 1.2% 10 White 59.4% 61.6% 604 Not reported/other 6.1% 1.4% 43 Q 11.3.3 Perform a goodness-of-fit test to determine whether the local results follow the distribution of the U.S. overall student population based on ethnicity. Q 11.3.4 Perform a goodness-of-fit test to determine whether the local results follow the distribution of U.S. AP examinee population, based on ethnicity. S 11.3.4 1. $H_{0}$: The local results follow the distribution of the U.S. AP examinee population 2. $H_{0}$: The local results do not follow the distribution of the U.S. AP examinee population 3. $df = 5$ 4. chi-square distribution with $df = 5$ 5. chi-square test statistic = 13.4 6. $p\text{-value} = 0.0199$ 7. Check student’s solution. 1. $\alpha = 0.05$ 2. Decision: Reject null when $a = 0.05$ 3. 
Reason for Decision: $p\text{-value} < \alpha$ 4. Conclusion: Local data do not fit the AP Examinee Distribution. 5. Decision: Do not reject null when $a = 0.01$ 6. Conclusion: There is insufficient evidence to conclude that local data do not follow the distribution of the U.S. AP examinee distribution. Q 11.3.5 The City of South Lake Tahoe, CA, has an Asian population of 1,419 people, out of a total population of 23,609. Suppose that a survey of 1,419 self-reported Asians in the Manhattan, NY, area yielded the data in Table. Conduct a goodness-of-fit test to determine if the self-reported sub-groups of Asians in the Manhattan area fit that of the Lake Tahoe area. Race Lake Tahoe Frequency Manhattan Frequency Asian Indian 131 174 Chinese 118 557 Filipino 1,045 518 Japanese 80 54 Korean 12 29 Vietnamese 9 21 Other 24 66 Use the following information to answer the next two exercises: UCLA conducted a survey of more than 263,000 college freshmen from 385 colleges in fall 2005. The results of students' expected majors by gender were reported in The Chronicle of Higher Education (2/2/2006). Suppose a survey of 5,000 graduating females and 5,000 graduating males was done as a follow-up last year to determine what their actual majors were. The results are shown in the tables for Exercise and Exercise. The second column in each table does not add to 100% because of rounding. Q 11.3.6 Conduct a goodness-of-fit test to determine if the actual college majors of graduating females fit the distribution of their expected majors. Major Women - Expected Major Women - Actual Major Arts & Humanities 14.0% 670 Biological Sciences 8.4% 410 Business 13.1% 685 Education 13.0% 650 Engineering 2.6% 145 Physical Sciences 2.6% 125 Professional 18.9% 975 Social Sciences 13.0% 605 Technical 0.4% 15 Other 5.8% 300 Undecided 8.0% 420 S 11.3.6 1. $H_{0}$: The actual college majors of graduating females fit the distribution of their expected majors 2. $H_{a}$: The actual college majors of graduating females do not fit the distribution of their expected majors 3. $df = 10$ 4. chi-square distribution with $df = 10$ 5. $\text{test statistic} = 11.48$ 6. $p\text{-value} = 0.3211$ 7. Check student’s solution. 1. $\alpha = 0.05$ 2. Decision: Do not reject null when $a = 0.05$ and $a = 0.01$ 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: There is insufficient evidence to conclude that the distribution of actual college majors of graduating females fits the distribution of their expected majors. Q 11.3.7 Conduct a goodness-of-fit test to determine if the actual college majors of graduating males fit the distribution of their expected majors. Major Men - Expected Major Men - Actual Major Arts & Humanities 11.0% 600 Biological Sciences 6.7% 330 Business 22.7% 1130 Education 5.8% 305 Engineering 15.6% 800 Physical Sciences 3.6% 175 Professional 9.3% 460 Social Sciences 7.6% 370 Technical 1.8% 90 Other 8.2% 400 Undecided 6.6% 340 Read the statement and decide whether it is true or false. Q 11.3.8 In a goodness-of-fit test, the expected values are the values we would expect if the null hypothesis were true. true Q 11.3.9 In general, if the observed values and expected values of a goodness-of-fit test are not close together, then the test statistic can get very large and on a graph will be way out in the right tail. Q 11.3.10 Use a goodness-of-fit test to determine if high school principals believe that students are absent equally during the week or not. 
true Q 11.3.11 The test to use to determine if a six-sided die is fair is a goodness-of-fit test. Q 11.3.12 In a goodness-of fit test, if the p-value is 0.0113, in general, do not reject the null hypothesis. false Q 11.3.13 A sample of 212 commercial businesses was surveyed for recycling one commodity; a commodity here means any one type of recyclable material such as plastic or aluminum. Table shows the business categories in the survey, the sample size of each category, and the number of businesses in each category that recycle one commodity. Based on the study, on average half of the businesses were expected to be recycling one commodity. As a result, the last column shows the expected number of businesses in each category that recycle one commodity. At the 5% significance level, perform a hypothesis test to determine if the observed number of businesses that recycle one commodity follows the uniform distribution of the expected values. Business Type Number in class Observed Number that recycle one commodity Expected number that recycle one commodity Office 35 19 17.5 Retail/Wholesale 48 27 24 Food/Restaurants 53 35 26.5 Manufacturing/Medical 52 21 26 Hotel/Mixed 24 9 12 Q 11.3.14 Table contains information from a survey among 499 participants classified according to their age groups. The second column shows the percentage of obese people per age class among the study participants. The last column comes from a different study at the national level that shows the corresponding percentages of obese people in the same age classes in the USA. Perform a hypothesis test at the 5% significance level to determine whether the survey participants are a representative sample of the USA obese population. Age Class (Years) Obese (Percentage) Expected USA average (Percentage) 20–30 75.0 32.6 31–40 26.5 32.6 41–50 13.6 36.6 51–60 21.9 36.6 61–70 21.0 39.7 S 11.3.14 1. $H_{0}$: Surveyed obese fit the distribution of expected obese 2. $H_{a}$: Surveyed obese do not fit the distribution of expected obese 3. $df = 4$ 4. chi-square distribution with $df = 4$ 5. $\text{test statistic} = 54.01$ 6. $p\text{-value} = 0$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: $p\text{-value} < \alpha$ 4. Conclusion: At the 5% level of significance, from the data, there is sufficient evidence to conclude that the surveyed obese do not fit the distribution of expected obese. 11.4: Test of Independence For each problem, use a solution sheet to solve the hypothesis test problem. Go to Appendix E for the chi-square solution sheet. Round expected frequency to two decimal places. Q 11.4.1 A recent debate about where in the United States skiers believe the skiing is best prompted the following survey. Test to see if the best ski area is independent of the level of the skier. U.S. Ski Area Beginner Intermediate Advanced Tahoe 20 30 40 Utah 10 30 60 Colorado 10 40 50 Q 11.4.2 Car manufacturers are interested in whether there is a relationship between the size of car an individual drives and the number of people in the driver’s family (that is, whether car size and family size are independent). To test this, suppose that 800 car owners were randomly surveyed with the results in Table. Conduct a test of independence. Family Size Sub & Compact Mid-size Full-size Van & Truck 1 20 35 40 35 2 20 50 70 80 3–4 20 50 100 90 5+ 20 30 70 70 S 11.4.2 1. $H_{0}$: Car size is independent of family size. 2. $H_{a}$: Car size is dependent on family size. 3. $df = 9$ 4. 
chi-square distribution with $df = 9$ 5. $\text{test statistic} = 15.8284$ 6. $p\text{-value} = 0.0706$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: At the 5% significance level, there is insufficient evidence to conclude that car size and family size are dependent. Q 11.4.3 College students may be interested in whether or not their majors have any effect on starting salaries after graduation. Suppose that 300 recent graduates were surveyed as to their majors in college and their starting salaries after graduation. Table shows the data. Conduct a test of independence. Major < $50,000$50,000 – $68,999$69,000 + English 5 20 5 Engineering 10 30 60 Nursing 10 15 15 Business 10 20 30 Psychology 20 30 20 Q 11.4.4 Some travel agents claim that honeymoon hot spots vary according to age of the bride. Suppose that 280 recent brides were interviewed as to where they spent their honeymoons. The information is given in Table. Conduct a test of independence. Location 20–29 30–39 40–49 50 and over Niagara Falls 15 25 25 20 Poconos 15 25 25 10 Europe 10 25 15 5 Virgin Islands 20 25 15 5 1. $H_{0}$: Honeymoon locations are independent of bride’s age. 2. $H_{a}$: Honeymoon locations are dependent on bride’s age. 3. $df = 9$ 4. chi-square distribution with $df = 9$ 5. $\text{test statistic} = 15.7027$ 6. $p\text{-value} = 0.0734$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: At the 5% significance level, there is insufficient evidence to conclude that honeymoon location and bride age are dependent. Q 11.4.5 A manager of a sports club keeps information concerning the main sport in which members participate and their ages. To test whether there is a relationship between the age of a member and his or her choice of sport, 643 members of the sports club are randomly selected. Conduct a test of independence. Sport 18 - 25 26 - 30 31 - 40 41 and over racquetball 42 58 30 46 tennis 58 76 38 65 swimming 72 60 65 33 Q 11.4.6 A major food manufacturer is concerned that the sales for its skinny french fries have been decreasing. As a part of a feasibility study, the company conducts research into the types of fries sold across the country to determine if the type of fries sold is independent of the area of the country. The results of the study are shown in Table. Conduct a test of independence. Type of Fries Northeast South Central West skinny fries 70 50 20 25 curly fries 100 60 15 30 steak fries 20 40 10 10 S 11.4.6 1. $H_{0}$: The types of fries sold are independent of the location. 2. $H_{a}$: The types of fries sold are dependent on the location. 3. $df = 6$ 4. chi-square distribution with $df = 6$ 5. $\text{test statistic} =18.8369$ 6. $p\text{-value} = 0.0044$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: At the 5% significance level, There is sufficient evidence that types of fries and location are dependent. Q 11.4.7 According to Dan Lenard, an independent insurance agent in the Buffalo, N.Y. area, the following is a breakdown of the amount of life insurance purchased by males in the following age groups. He is interested in whether the age of the male and the amount of life insurance purchased are independent events. Conduct a test for independence. 
Age of Males None < $200,000$200,000–$400,000$401,001–$1,000,000$1,000,001+ 20–29 40 15 40 0 5 30–39 35 5 20 20 10 40–49 20 0 30 0 30 50+ 40 30 15 15 10 Q 11.4.8 Suppose that 600 thirty-year-olds were surveyed to determine whether or not there is a relationship between the level of education an individual has and salary. Conduct a test of independence. Annual Salary Not a high school graduate High school graduate College graduate Masters or doctorate < $30,000 15 25 10 5$30,000–$40,000 20 40 70 30$40,000–$50,000 10 20 40 55$50,000–$60,000 5 10 20 60$60,000+ 0 5 10 150 S 11.4.8 1. $H_{0}$: Salary is independent of level of education. 2. $H_{a}$: Salary is dependent on level of education. 3. $df = 12$ 4. chi-square distribution with $df = 12$ 5. $\text{test statistic} = 255.7704$ 6. $p\text{-value} = 0$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: At the 5% significance level, There is sufficient evidence that types of fries and location are dependent. Read the statement and decide whether it is true or false. Q 11.4.9 The number of degrees of freedom for a test of independence is equal to the sample size minus one. Q 11.4.10 The test for independence uses tables of observed and expected data values. true Q 11.4.11 The test to use when determining if the college or university a student chooses to attend is related to his or her socioeconomic status is a test for independence. Q 11.4.12 In a test of independence, the expected number is equal to the row total multiplied by the column total divided by the total surveyed. true Q 11.4.13 An ice cream maker performs a nationwide survey about favorite flavors of ice cream in different geographic areas of the U.S. Based on Table, do the numbers suggest that geographic location is independent of favorite ice cream flavors? Test at the 5% significance level. U.S. region/Flavor Strawberry Chocolate Vanilla Rocky Road Mint Chocolate Chip Pistachio Row total East 8 31 27 8 15 7 96 Midwest 10 32 22 11 15 6 96 West 12 21 22 19 15 8 97 South 15 28 30 8 15 6 102 Column Total 45 112 101 46 60 27 391 Q 11.4.14 Table provides a recent survey of the youngest online entrepreneurs whose net worth is estimated at one million dollars or more. Their ages range from 17 to 30. Each cell in the table illustrates the number of entrepreneurs who correspond to the specific age group and their net worth. Are the ages and net worth independent? Perform a test of independence at the 5% significance level. Age Group\ Net Worth Value (in millions of US dollars) 1–5 6–24 ≥25 Row Total 17–25 8 7 5 20 26–30 6 5 9 20 Column Total 14 12 14 40 S 11.4.14 1. $H_{0}$: Age is independent of the youngest online entrepreneurs’ net worth. 2. $H_{5}$: Age is dependent on the net worth of the youngest online entrepreneurs. 3. $df = 2$ 4. chi-square distribution with $df = 2$ 5. $\text{test statistic} = 1.76$ 6. $p\text{-value} = 0.4144$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: At the 5% significance level, there is insufficient evidence to conclude that age and net worth for the youngest online entrepreneurs are dependent. Q 11.4.15 A 2013 poll in California surveyed people about taxing sugar-sweetened beverages. The results are presented in Table, and are classified by ethnic group and response type. Are the poll responses independent of the participants’ ethnic group? 
Conduct a test of independence at the 5% significance level. Opinion/Ethnicity Asian-American White/Non-Hispanic African-American Latino Row Total Against tax 48 433 41 160 628 In Favor of tax 54 234 24 147 459 No opinion 16 43 16 19 84 Column Total 118 710 71 272 1171 11.5: Test for Homogeneity For each word problem, use a solution sheet to solve the hypothesis test problem. Go to [link] for the chi-square solution sheet. Round expected frequency to two decimal places. Q 11.5.1 A psychologist is interested in testing whether there is a difference in the distribution of personality types for business majors and social science majors. The results of the study are shown in Table. Conduct a test of homogeneity. Test at a 5% level of significance. Open Conscientious Extrovert Agreeable Neurotic Business 41 52 46 61 58 Social Science 72 75 63 80 65 S 11.5.1 1. $H_{0}$: The distribution for personality types is the same for both majors 2. $H_{a}$: The distribution for personality types is not the same for both majors 3. $df = 4$ 4. chi-square with $df = 4$ 5. $\text{test statistic} = 3.01$ 6. $p\text{-value} = 0.5568$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: There is insufficient evidence to conclude that the distribution of personality types is different for business and social science majors. Q 11.5.2 Do men and women select different breakfasts? The breakfasts ordered by randomly selected men and women at a popular breakfast place is shown in Table. Conduct a test for homogeneity at a 5% level of significance. French Toast Pancakes Waffles Omelettes Men 47 35 28 53 Women 65 59 55 60 Q 11.5.3 A fisherman is interested in whether the distribution of fish caught in Green Valley Lake is the same as the distribution of fish caught in Echo Lake. Of the 191 randomly selected fish caught in Green Valley Lake, 105 were rainbow trout, 27 were other trout, 35 were bass, and 24 were catfish. Of the 293 randomly selected fish caught in Echo Lake, 115 were rainbow trout, 58 were other trout, 67 were bass, and 53 were catfish. Perform a test for homogeneity at a 5% level of significance. S 11.5.3 1. $H_{0}$: The distribution for fish caught is the same in Green Valley Lake and in Echo Lake. 2. $H_{a}$: The distribution for fish caught is not the same in Green Valley Lake and in Echo Lake. 3. $df = 3$ 4. chi-square with $df = 3$ 5. $\text{test statistic} = 11.75$ 6. $p\text{-value} = 0.0083$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: There is evidence to conclude that the distribution of fish caught is different in Green Valley Lake and in Echo Lake Q 11.5.4 In 2007, the United States had 1.5 million homeschooled students, according to the U.S. National Center for Education Statistics. In Table you can see that parents decide to homeschool their children for different reasons, and some reasons are ranked by parents as more important than others. According to the survey results shown in the table, is the distribution of applicable reasons the same as the distribution of the most important reason? Provide your assessment at the 5% significance level. Did you expect the result you obtained? 
Reasons for Homeschooling Applicable Reason (in thousands of respondents) Most Important Reason (in thousands of respondents) Row Total Concern about the environment of other schools 1,321 309 1,630 Dissatisfaction with academic instruction at other schools 1,096 258 1,354 To provide religious or moral instruction 1,257 540 1,797 Child has special needs, other than physical or mental 315 55 370 Nontraditional approach to child’s education 984 99 1,083 Other reasons (e.g., finances, travel, family time, etc.) 485 216 701 Column Total 5,458 1,477 6,935 Q 11.5.5 When looking at energy consumption, we are often interested in detecting trends over time and how they correlate among different countries. The information in Table shows the average energy use (in units of kg of oil equivalent per capita) in the USA and the joint European Union countries (EU) for the six-year period 2005 to 2010. Do the energy use values in these two areas come from the same distribution? Perform the analysis at the 5% significance level. Year European Union United States Row Total 2010 3,413 7,164 10,577 2009 3,302 7,057 10,359 2008 3,505 7,488 10,993 2007 3,537 7,758 11,295 2006 3,595 7,697 11,292 2005 3,613 7,847 11,460 Column Total 20,965 45,011 65,976 S 11.5.5 1. $H_{0}$: The distribution of average energy use in the USA is the same as in Europe between 2005 and 2010. 2. $H_{a}$: The distribution of average energy use in the USA is not the same as in Europe between 2005 and 2010. 3. $df = 5$ 4. chi-square with $df = 5$ 5. $\text{test statistic} = 2.7434$ 6. $p\text{-value} = 0.7395$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: At the 5% significance level, there is insufficient evidence to conclude that the average energy use values in the US and EU come from different distributions for the period from 2005 to 2010. Q 11.5.6 The Insurance Institute for Highway Safety collects safety information about all types of cars every year, and publishes a report of Top Safety Picks among all cars, makes, and models. Table presents the number of Top Safety Picks in six car categories for the two years 2009 and 2013. Analyze the table data to conclude whether the distribution of cars that earned the Top Safety Picks safety award has remained the same between 2009 and 2013. Derive your results at the 5% significance level. Year \ Car Type Small Mid-Size Large Small SUV Mid-Size SUV Large SUV Row Total 2009 12 22 10 10 27 6 87 2013 31 30 19 11 29 4 124 Column Total 43 52 29 21 56 10 211
$H_{0}$: The distribution for technology use is the same for community college students and university students. 2. $H_{a}$: The distribution for technology use is not the same for community college students and university students. 3. $df = 2$ 4. chi-square with $df = 2$ 5. $\text{test statistic} = 7.05$ 6. $p\text{-value} = 0.0294$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: $p\text{-value} < \alpha$ 4. Conclusion: There is sufficient evidence to conclude that the distribution of technology use for statistics homework is not the same for statistics students at community colleges and at universities. Read the statement and decide whether it is true or false. Q 11.6.2 If $df = 2$, the chi-square distribution has a shape that reminds us of the exponential. 11.7: Test of a Single Variance Use the following information to answer the next twelve exercises: Suppose an airline claims that its flights are consistently on time with an average delay of at most 15 minutes. It claims that the average delay is so consistent that the variance is no more than 150 minutes. Doubting the consistency part of the claim, a disgruntled traveler calculates the delays for his next 25 flights. The average delay for those 25 flights is 22 minutes with a standard deviation of 15 minutes. Q 11.7.1 Is the traveler disputing the claim about the average or about the variance? Q 11.7.2 A sample standard deviation of 15 minutes is the same as a sample variance of __________ minutes. 225 Q 11.7.3 Is this a right-tailed, left-tailed, or two-tailed test? Q 11.7.4 $H_{0}$: __________ S 11.7.4 $H_{0}: \sigma^{2} \leq 150$ Q 11.7.5 $df =$ ________ Q 11.7.6 chi-square test statistic = ________ 36 Q 11.7.7 $p\text{-value} =$ ________ Q 11.7.8 Graph the situation. Label and scale the horizontal axis. Mark the mean and test statistic. Shade the $p\text{-value}$. S 11.7.8 Check student’s solution. Q 11.7.9 Let $\alpha = 0.05$ Decision: ________ Conclusion (write out in a complete sentence.): ________ Q 11.7.10 How did you know to test the variance instead of the mean? S 11.7.10 The claim is that the variance is no more than 150 minutes. Q 11.7.11 If an additional test were done on the claim of the average delay, which distribution would you use? Q 11.7.12 If an additional test were done on the claim of the average delay, but 45 flights were surveyed, which distribution would you use? S 11.7.12 Student's $t$- or normal distribution For each word problem, use a solution sheet to solve the hypothesis test problem. Go to [link] for the chi-square solution sheet. Round expected frequency to two decimal places. Q 11.7.13 A plant manager is concerned her equipment may need recalibrating. It seems that the actual weight of the 15 oz. cereal boxes it fills has been fluctuating. The standard deviation should be at most 0.5 oz. In order to determine if the machine needs to be recalibrated, 84 randomly selected boxes of cereal from the next day’s production were weighed. The standard deviation of the 84 boxes was 0.54. Does the machine need to be recalibrated? S 11.7.20 1. $H_{0}: \sigma^{2} = 25^{2}$ 2. $H_{a}: \sigma^{2} > 25^{2}$ 3. $df = n - 1 = 7$ 4. test statistic: $\chi^{2} = \chi^{2}_{7} = \frac{(n-1)s^{2}}{25^{2}} = \frac{(8-1)(34.29)^{2}}{25^{2}} = 13.169$ 5. $p\text{-value}: P(\chi^{2}_{7} > 13.169) = 1- P(\chi^{2}_{7} \leq 13.169) = 0.0681$ 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis 3. Reason for decision: $p\text{-value} > \alpha$ 4.
Conclusion: At the 5% level, there is insufficient evidence to conclude that the variance is more than 625. Q 11.7.21 A company packages apples by weight. One of the weight grades is Class A apples. Class A apples have a mean weight of 150 g, and there is a maximum allowed weight tolerance of 5% above or below the mean for apples in the same consumer package. A batch of apples is selected to be included in a Class A apple package. Given the following apple weights of the batch, does the fruit comply with the Class A grade weight tolerance requirements? Conduct an appropriate hypothesis test. 1. at the 5% significance level 2. at the 1% significance level Weights in selected apple batch (in grams): 158; 167; 149; 169; 164; 139; 154; 150; 157; 171; 152; 161; 141; 166; 172;
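The single-variance tests in this section all reduce to the statistic $\chi^{2} = \frac{(n-1)s^{2}}{\sigma_{0}^{2}}$ compared against a chi-square distribution with $n - 1$ degrees of freedom. The sketch below is not part of the original exercises: it assumes Python with SciPy (and a helper-function name of my own choosing), and uses the cereal-box numbers from Q 11.7.13 ($n = 84$, $s = 0.54$ oz, claimed $\sigma \leq 0.5$ oz) to show how the statistic and $p$-value can be checked.

# A minimal sketch (assumed tooling: Python with SciPy) of the chi-square test of a
# single variance used throughout Section 11.7.
from scipy import stats

def single_variance_test(n, s, sigma0, tail="right"):
    """Chi-square test of H0: sigma^2 = sigma0^2 against a one- or two-tailed Ha."""
    df = n - 1
    chi2_stat = df * s**2 / sigma0**2          # (n - 1) s^2 / sigma0^2
    if tail == "right":
        p = stats.chi2.sf(chi2_stat, df)       # P(chi2_df > statistic)
    elif tail == "left":
        p = stats.chi2.cdf(chi2_stat, df)      # P(chi2_df < statistic)
    else:
        p = 2 * min(stats.chi2.sf(chi2_stat, df), stats.chi2.cdf(chi2_stat, df))
    return chi2_stat, df, p

# Cereal-box problem (Q 11.7.13): is sigma larger than the claimed 0.5 oz?
stat, df, p = single_variance_test(n=84, s=0.54, sigma0=0.5, tail="right")
print(f"chi-square = {stat:.2f}, df = {df}, p-value = {p:.4f}")

For these inputs the statistic is $(84-1)(0.54)^{2}/(0.5)^{2} \approx 96.8$, and the printed upper-tail $p$-value should come out at roughly 0.14.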
textbooks/stats/Introductory_Statistics/Exercises_(Introductory_Statistics)/Exercises%3A_OpenStax/11.E%3A_The_Chi-Square_Distribution_%28Exercises%29.txt
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. 12.2: Linear Equations Q 12.2.1 For each of the following situations, state the independent variable and the dependent variable. 1. A study is done to determine if elderly drivers are involved in more motor vehicle fatalities than other drivers. The number of fatalities per 100,000 drivers is compared to the age of drivers. 2. A study is done to determine if the weekly grocery bill changes based on the number of family members. 3. Insurance companies base life insurance premiums partially on the age of the applicant. 4. Utility bills vary according to power consumption. 5. A study is done to determine if a higher education reduces the crime rate in a population. S 12.2.1 1. independent variable: age; dependent variable: fatalities 2. independent variable: # of family members; dependent variable: grocery bill 3. independent variable: age of applicant; dependent variable: insurance premium 4. independent variable: power consumption; dependent variable: utility bill 5. independent variable: higher education (years); dependent variable: crime rates Q 12.2.2 Piece-rate systems are widely debated incentive payment plans. In a recent study of loan officer effectiveness, the following piece-rate system was examined: % of goal reached < 80 80 100 120 Incentive n/a $4,000 with an additional $125 added per percentage point from 81–99% $6,500 with an additional $125 added per percentage point from 101–119% $9,500 with an additional $125 added per percentage point starting at 121% If a loan officer makes 95% of his or her goal, write the linear function that applies based on the incentive plan table. In context, explain the y-intercept and slope. 12.3: Scatter Plots Q 12.3.1 The Gross Domestic Product Purchasing Power Parity is an indication of a country’s currency value compared to another country. Table shows the GDP PPP of Cuba as compared to US dollars. Construct a scatter plot of the data. Year Cuba’s PPP Year Cuba’s PPP 1999 1,700 2006 4,000 2000 1,700 2007 11,000 2002 2,300 2008 9,500 2003 2,900 2009 9,700 2004 3,000 2010 9,900 2005 3,500 S 12.3.1 Check student’s solution. Q 12.3.2 The following table shows the poverty rates and cell phone usage in the United States. Construct a scatter plot of the data. Year Poverty Rate Cellular Usage per Capita 2003 12.7 54.67 2005 12.6 74.19 2007 12 84.86 2009 12 90.82 Q 12.3.3 Does the higher cost of tuition translate into higher-paying jobs? The table lists the top ten colleges based on mid-career salary and the associated yearly tuition costs. Construct a scatter plot of the data. School Mid-Career Salary (in thousands) Yearly Tuition Princeton 137 28,540 Harvey Mudd 135 40,133 CalTech 127 39,900 US Naval Academy 122 0 West Point 120 0 MIT 118 42,050 Lehigh University 118 43,220 NYU-Poly 117 39,565 Babson College 117 40,400 Stanford 114 54,506 S 12.3.3 For graph: check student’s solution. Note that tuition is the independent variable and salary is the dependent variable. Q 12.3.4 If the level of significance is 0.05 and the $p\text{-value}$ is 0.06, what conclusion can you draw? Q 12.3.5 If there are 15 data points in a set of data, what is the number of degrees of freedom? 13 12.4: The Regression Equation Q 12.4.1 What is the process through which we can calculate a line that goes through a scatter plot with a linear pattern? Q 12.4.2 Explain what it means when a correlation has an $r^{2}$ of 0.72.
S 12.4.2 It means that 72% of the variation in the dependent variable ($y$) can be explained by the variation in the independent variable ($x$). Q 12.4.3 Can a coefficient of determination be negative? Why or why not? 12.5: Testing the Significance of the Correlation Coefficient Q 12.5.1 If the level of significance is 0.05 and the $p\text{-value}$ is $0.06$, what conclusion can you draw? S 12.5.1 We do not reject the null hypothesis. There is not sufficient evidence to conclude that there is a significant linear relationship between $x$ and $y$ because the correlation coefficient is not significantly different from zero. Q 12.5.2 If there are 15 data points in a set of data, what is the number of degree of freedom? 12.6: Prediction Q 12.6.1 Recently, the annual number of driver deaths per 100,000 for the selected age groups was as follows: Age Number of Driver Deaths per 100,000 17.5 38 22 36 29.5 24 44.5 20 64.5 18 80 28 1. For each age group, pick the midpoint of the interval for the x value. (For the 75+ group, use 80.) 2. Using “ages” as the independent variable and “Number of driver deaths per 100,000” as the dependent variable, make a scatter plot of the data. 3. Calculate the least squares (best–fit) line. Put the equation in the form of: ŷ = a + bx 4. Find the correlation coefficient. Is it significant? 5. Predict the number of deaths for ages 40 and 60. 6. Based on the given data, is there a linear relationship between age of a driver and driver fatality rate? 7. What is the slope of the least squares (best-fit) line? Interpret the slope. S 12.6.1 1. Age Number of Driver Deaths per 100,000 16–19 38 20–24 36 25–34 24 35–54 20 55–74 18 75+ 28 2. Check student’s solution. 3. $\hat{y} = 35.5818045 - 0.19182491x$ 4. $r = -0.57874$ For four $df$ and alpha $= 0.05$, the LinRegTTest gives $p\text{-value} = 0.2288$ so we do not reject the null hypothesis; there is not a significant linear relationship between deaths and age. Using the table of critical values for the correlation coefficient, with four df, the critical value is 0.811. The correlation coefficient $r = -0.57874$ is not less than –0.811, so we do not reject the null hypothesis. 5. if age = 40, $\hat{y}\text{ (deaths) }= 35.5818045 – 0.19182491(40) = 27.9$ if age = 60, $\hat{y}\text{ (deaths) }= 35.5818045 – 0.19182491(60) = 24.1$ 6. For entire dataset, there is a linear relationship for the ages up to age 74. The oldest age group shows an increase in deaths from the prior group, which is not consistent with the younger ages. 7. $\text{slope} = -0.19182491$ Q 12.6.2 Table shows the life expectancy for an individual born in the United States in certain years. Year of Birth Life Expectancy 1930 59.7 1940 62.9 1950 70.2 1965 69.7 1973 71.4 1982 74.5 1987 75 1992 75.7 2010 78.7 1. Decide which variable should be the independent variable and which should be the dependent variable. 2. Draw a scatter plot of the ordered pairs. 3. Calculate the least squares line. Put the equation in the form of: $\hat{y} = a + bx$ 4. Find the correlation coefficient. Is it significant? 5. Find the estimated life expectancy for an individual born in 1950 and for one born in 1982. 6. Why aren’t the answers to part e the same as the values in Table that correspond to those years? 7. Use the two points in part e to plot the least squares line on your graph from part b. 8. Based on the data, is there a linear relationship between the year of birth and life expectancy? 9. Are there any outliers in the data? 10. 
Using the least squares line, find the estimated life expectancy for an individual born in 1850. Does the least squares line give an accurate estimate for that year? Explain why or why not. 11. What is the slope of the least-squares (best-fit) line? Interpret the slope. Q 12.6.3 The maximum discount value of the Entertainment® card for the “Fine Dining” section, Edition ten, for various pages is given in Table Q 12.6.4 Table gives the gold medal times for every other Summer Olympics for the women’s 100-meter freestyle (swimming). Year Time (seconds) 1912 82.2 1924 72.4 1932 66.8 1952 66.8 1960 61.2 1968 60.0 1976 55.65 1984 55.92 1992 54.64 2000 53.8 2008 53.1 1. Decide which variable should be the independent variable and which should be the dependent variable. 2. Draw a scatter plot of the data. 3. Does it appear from inspection that there is a relationship between the variables? Why or why not? 4. Calculate the least squares line. Put the equation in the form of: $\hat{y} = a + bx$. 5. Find the correlation coefficient. Is the decrease in times significant? 6. Find the estimated gold medal time for 1932. Find the estimated time for 1984. 7. Why are the answers from part f different from the chart values? 8. Does it appear that a line is the best way to fit the data? Why or why not? 9. Use the least-squares line to estimate the gold medal time for the next Summer Olympics. Do you think that your answer is reasonable? Why or why not? Q 12.6.5 State # letters in name Year entered the Union Rank for entering the Union Area (square miles) Alabama 7 1819 22 52,423 Colorado 8 1876 38 104,100 Hawaii 6 1959 50 10,932 Iowa 4 1846 29 56,276 Maryland 8 1788 7 12,407 Missouri 8 1821 24 69,709 New Jersey 9 1787 3 8,722 Ohio 4 1803 17 44,828 South Carolina 13 1788 8 32,008 Utah 4 1896 45 84,904 Wisconsin 9 1848 30 65,499 We are interested in whether or not the number of letters in a state name depends upon the year the state entered the Union. 1. Decide which variable should be the independent variable and which should be the dependent variable. 2. Draw a scatter plot of the data. 3. Does it appear from inspection that there is a relationship between the variables? Why or why not? 4. Calculate the least-squares line. Put the equation in the form of: $\hat{y} = a + bx$. 5. Find the correlation coefficient. What does it imply about the significance of the relationship? 6. Find the estimated number of letters (to the nearest integer) a state would have if it entered the Union in 1900. Find the estimated number of letters a state would have if it entered the Union in 1940. 7. Does it appear that a line is the best way to fit the data? Why or why not? 8. Use the least-squares line to estimate the number of letters a new state that enters the Union this year would have. Can the least squares line be used to predict it? Why or why not? S 12.6.5 1. Year is the independent or $x$ variable; the number of letters is the dependent or $y$ variable. 2. Check student’s solution. 3. no 4. $\hat{y} = 47.03 - 0.0216x$ 5. $-0.4280$ 6. 6; 5 7. No, the relationship does not appear to be linear; the correlation is not significant. 8. current year: 2013: 3.55 or four letters; this is not an appropriate use of the least squares line. It is extrapolation. 12.7: Outliers Q 12.7.1 The height (sidewalk to roof) of notable tall buildings in America is compared to the number of stories of the building (beginning at street level). 
Height (in feet) Stories 1,050 57 428 28 362 26 529 40 790 60 401 22 380 38 1,454 110 1,127 100 700 46 1. Using “stories” as the independent variable and “height” as the dependent variable, make a scatter plot of the data. 2. Does it appear from inspection that there is a relationship between the variables? 3. Calculate the least squares line. Put the equation in the form of: $\hat{y} = a + bx$ 4. Find the correlation coefficient. Is it significant? 5. Find the estimated heights for 32 stories and for 94 stories. 6. Based on the data in Table, is there a linear relationship between the number of stories in tall buildings and the height of the buildings? 7. Are there any outliers in the data? If so, which point(s)? 8. What is the estimated height of a building with six stories? Does the least squares line give an accurate estimate of height? Explain why or why not. 9. Based on the least squares line, adding an extra story is predicted to add about how many feet to a building? 10. What is the slope of the least squares (best-fit) line? Interpret the slope. Q 12.7.2 Ornithologists, scientists who study birds, tag sparrow hawks in 13 different colonies to study their population. They gather data for the percent of new sparrow hawks in each colony and the percent of those that have returned from migration. Percent return: 74; 66; 81; 52; 73; 62; 52; 45; 62; 46; 60; 46; 38 Percent new: 5; 6; 8; 11; 12; 15; 16; 17; 18; 18; 19; 20; 20 1. Enter the data into your calculator and make a scatter plot. 2. Use your calculator’s regression function to find the equation of the least-squares regression line. Add this to your scatter plot from part a. 3. Explain in words what the slope and $y$-intercept of the regression line tell us. 4. How well does the regression line fit the data? Explain your response. 5. Which point has the largest residual? Explain what the residual means in context. Is this point an outlier? An influential point? Explain. 6. An ecologist wants to predict how many birds will join another colony of sparrow hawks to which 70% of the adults from the previous year have returned. What is the prediction? S 12.7.2 1. Check student’s solution. 2. Check student’s solution. 3. The slope of the regression line is -0.3179 with a $y$-intercept of 32.966. In context, the $y$-intercept indicates that when there are no returning sparrow hawks, there will be almost 31% new sparrow hawks, which doesn’t make sense since if there are no returning birds, then the new percentage would have to be 100% (this is an example of why we do not extrapolate). The slope tells us that for each percentage increase in returning birds, the percentage of new birds in the colony decreases by 0.3179%. 4. If we examine $r2$, we see that only 50.238% of the variation in the percent of new birds is explained by the model and the correlation coefficient, $r = 0.71$ only indicates a somewhat strong correlation between returning and new percentages. 5. The ordered pair $(66, 6)$ generates the largest residual of 6.0. This means that when the observed return percentage is 66%, our observed new percentage, 6%, is almost 6% less than the predicted new value of 11.98%. If we remove this data pair, we see only an adjusted slope of -0.2723 and an adjusted intercept of 30.606. In other words, even though this data generates the largest residual, it is not an outlier, nor is the data pair an influential point. 6. 
If there are 70% returning birds, we would expect to see $y = -0.2723(70) + 30.606 = 0.115$ or 11.5% new birds in the colony. Q 12.7.3 The following table shows data on average per capita wine consumption and heart disease rate in a random sample of 10 countries. Yearly wine consumption in liters 2.5 3.9 2.9 2.4 2.9 0.8 9.1 2.7 0.8 0.7 Death from heart diseases 221 167 131 191 220 297 71 172 211 300 1. Enter the data into your calculator and make a scatter plot. 2. Use your calculator’s regression function to find the equation of the least-squares regression line. Add this to your scatter plot from part a. 3. Explain in words what the slope and $y$-intercept of the regression line tell us. 4. How well does the regression line fit the data? Explain your response. 5. Which point has the largest residual? Explain what the residual means in context. Is this point an outlier? An influential point? Explain. 6. Do the data provide convincing evidence that there is a linear relationship between the amount of alcohol consumed and the heart disease death rate? Carry out an appropriate test at a significance level of 0.05 to help answer this question. Q 12.7.4 The following table consists of one student athlete’s time (in minutes) to swim 2000 yards and the student’s heart rate (beats per minute) after swimming on a random sample of 10 days: Swim Time Heart Rate 34.12 144 35.72 152 34.72 124 34.05 140 34.13 152 35.73 146 36.17 128 35.57 136 35.37 144 35.57 148 1. Enter the data into your calculator and make a scatter plot. 2. Use your calculator’s regression function to find the equation of the least-squares regression line. Add this to your scatter plot from part a. 3. Explain in words what the slope and $y$-intercept of the regression line tell us. 4. How well does the regression line fit the data? Explain your response. 5. Which point has the largest residual? Explain what the residual means in context. Is this point an outlier? An influential point? Explain. S 12.7.4 1. Check student’s solution. 2. Check student’s solution. 3. We have a slope of $-1.4946$ with a $y$-intercept of 193.88. The slope, in context, indicates that for each additional minute added to the swim time, the heart rate will decrease by 1.5 beats per minute. If the student is not swimming at all, the $y$-intercept indicates that his heart rate will be 193.88 beats per minute. While the slope has meaning (the longer it takes to swim 2,000 meters, the less effort the heart puts out), the $y$-intercept does not make sense. If the athlete is not swimming (resting), then his heart rate should be very low. 4. Since only 1.5% of the heart rate variation is explained by this regression equation, we must conclude that this association is not explained with a linear relationship. 5. The point $(34.72, 124)$ generates the largest residual of $-11.82$. This means that our observed heart rate is almost 12 beats less than our predicted rate of 136 beats per minute. When this point is removed, the slope becomes $1.6914$ with the $y$-intercept changing to $83.694$. While the linear association is still very weak, we see that the removed data pair can be considered an influential point in the sense that the $y$-intercept becomes more meaningful. Q 12.7.5 A researcher is investigating whether non-white minorities commit a disproportionate number of homicides. He uses demographic data from Detroit, MI to compare homicide rates and the number of the population that are white males. 
White Males Homicide rate per 100,000 people 558,724 8.6 538,584 8.9 519,171 8.52 500,457 8.89 482,418 13.07 465,029 14.57 448,267 21.36 432,109 28.03 416,533 31.49 401,518 37.39 387,046 46.26 373,095 47.24 359,647 52.33 1. Use your calculator to construct a scatter plot of the data. What should the independent variable be? Why? 2. Use your calculator’s regression function to find the equation of the least-squares regression line. Add this to your scatter plot. 3. Discuss what the following mean in context. 1. The slope of the regression equation 2. The $y$-intercept of the regression equation 3. The correlation $r$ 4. The coefficient of determination $r^{2}$. 4. Do the data provide convincing evidence that there is a linear relationship between the number of white males in the population and the homicide rate? Carry out an appropriate test at a significance level of 0.05 to help answer this question. Q 12.7.6 School Mid-Career Salary (in thousands) Yearly Tuition Princeton 137 28,540 Harvey Mudd 135 40,133 CalTech 127 39,900 US Naval Academy 122 0 West Point 120 0 MIT 118 42,050 Lehigh University 118 43,220 NYU-Poly 117 39,565 Babson College 117 40,400 Stanford 114 54,506 Use the data to determine the linear-regression line equation with the outliers removed. Is there a linear correlation for the data set with outliers removed? Justify your answer. S 12.7.6 If we remove the two service academies (the tuition is \$0.00), we construct a new regression equation of $y = -0.0009x + 160$ with a correlation coefficient of $0.71397$ and a coefficient of determination of $0.50976$. This allows us to say there is a fairly strong linear association between tuition costs and salaries if the service academies are removed from the data set.
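The regression exercises in this chapter can all be checked with any least-squares routine. The sketch below is not part of the original exercises; it assumes Python with NumPy and SciPy (the exercises themselves only call for a calculator) and applies scipy.stats.linregress to the tuition/salary table from Q 12.7.6, with salary in thousands of dollars and tuition in dollars.

# A minimal sketch (assumed tooling: Python with NumPy/SciPy) of the least-squares
# fit and correlation used in Sections 12.4-12.7, on the tuition/salary data above.
import numpy as np
from scipy import stats

tuition = np.array([28540, 40133, 39900, 0, 0, 42050, 43220, 39565, 40400, 54506])
salary = np.array([137, 135, 127, 122, 120, 118, 118, 117, 117, 114])

fit = stats.linregress(tuition, salary)  # slope, intercept, r value, p-value, std error
print(f"y-hat = {fit.intercept:.2f} + {fit.slope:.6f} x")
print(f"r = {fit.rvalue:.4f}, r^2 = {fit.rvalue ** 2:.4f}, p-value = {fit.pvalue:.4f}")

# Dropping the two service academies (tuition = 0) reruns the fit without the
# outliers, which is the comparison discussed in S 12.7.6.
keep = tuition > 0
fit2 = stats.linregress(tuition[keep], salary[keep])
print(f"outliers removed: y-hat = {fit2.intercept:.2f} + {fit2.slope:.6f} x, r = {fit2.rvalue:.4f}")

Note that linregress reports the signed correlation, so the second fit gives $r \approx -0.714$; its magnitude matches the $0.71397$ quoted in S 12.7.6, and the slope and intercept round to the $-0.0009$ and $160$ given there.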
textbooks/stats/Introductory_Statistics/Exercises_(Introductory_Statistics)/Exercises%3A_OpenStax/12.E%3A_Linear_Regression_and_Correlation_%28Exercises%29.txt
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax. 13.2: One-Way ANOVA Q 13.2.1 Three different traffic routes are tested for mean driving time. The entries in the table are the driving times in minutes on the three different routes. The one-way $ANOVA$ results are shown in Table. Route 1 Route 2 Route 3 30 27 16 32 29 41 27 28 22 35 36 31 State $SS_{\text{between}}$, $SS_{\text{within}}$, and the $F$ statistic. S 13.2.1 $SS_{\text{between}} = 26$ $SS_{\text{within}} = 441$ $F = 0.2653$ Q 13.2.2 Suppose a group is interested in determining whether teenagers obtain their drivers licenses at approximately the same average age across the country. Suppose that the following data are randomly collected from five teenagers in each region of the country. The numbers represent the age at which teenagers obtained their drivers licenses. Northeast South West Central East 16.3 16.9 16.4 16.2 17.1 16.1 16.5 16.5 16.6 17.2 16.4 16.4 16.6 16.5 16.6 16.5 16.2 16.1 16.4 16.8 $\bar{x} =$ ________ ________ ________ ________ ________ $s^{2} =$ ________ ________ ________ ________ ________ State the hypotheses. $H_{0}$: ____________ $H_{a}$: ____________ 13.3: The F-Distribution and the F-Ratio Use the following information to answer the next five exercises. There are five basic assumptions that must be fulfilled in order to perform a one-way $ANOVA$ test. What are they? Exercise 13.2.1 Write one assumption. Answer Each population from which a sample is taken is assumed to be normal. Exercise 13.2.2 Write another assumption. Exercise 13.2.3 Write a third assumption. Answer The populations are assumed to have equal standard deviations (or variances). Exercise 13.2.4 Write a fourth assumption. Exercise 13.2.5 Write the final assumption. Answer The response is a numerical value. Exercise 13.2.6 State the null hypothesis for a one-way $ANOVA$ test if there are four groups. Exercise 13.2.7 State the alternative hypothesis for a one-way $ANOVA$ test if there are three groups. Answer $H_{a}: \text{At least two of the group means } \mu_{1}, \mu_{2}, \mu_{3} \text{ are not equal.}$ Exercise 13.2.8 When do you use an $ANOVA$ test? Use the following information to answer the next three exercises. Suppose a group is interested in determining whether teenagers obtain their drivers licenses at approximately the same average age across the country. Suppose that the following data are randomly collected from five teenagers in each region of the country. The numbers represent the age at which teenagers obtained their drivers licenses. Northeast South West Central East 16.3 16.9 16.4 16.2 17.1 16.1 16.5 16.5 16.6 17.2 16.4 16.4 16.6 16.5 16.6 16.5 16.2 16.1 16.4 16.8 $\bar{x} =$ ________ ________ ________ ________ ________ $s^{2}$ ________ ________ ________ ________ ________ $H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4} = \mu_{5}$ $H_{a}$: At least any two of the group means $\mu_{1} , \mu_{2}, \dotso, \mu_{5}$ are not equal. Q 13.3.1 degrees of freedom – numerator: $df(\text{num}) =$ _________ Q 13.3.2 degrees of freedom – denominator: $df(\text{denom}) =$ ________ S 13.3.2 $df(\text{denom}) = 15$ Q 13.3.3 $F$ statistic = ________ 13.4: Facts About the F Distribution Exercise 13.4.4 An $F$ statistic can have what values? Exercise 13.4.5 What happens to the curves as the degrees of freedom for the numerator and the denominator get larger? Answer The curves approximate the normal distribution. Use the following information to answer the next seven exercise. 
Four basketball teams took a random sample of players regarding how high each player can jump (in inches). The results are shown in Table. Team 1 Team 2 Team 3 Team 4 Team 5 36 32 48 38 41 42 35 50 44 39 51 38 39 46 40 Exercise 13.4.6 What is the $df(\text{num})$? Exercise 13.4.7 What is the $df(\text{denom})$? Answer ten Exercise 13.4.8 What are the Sum of Squares and Mean Squares Factors? Exercise 13.4.9 What are the Sum of Squares and Mean Squares Errors? Answer $SS = 237.33; MS = 23.73$ Exercise 13.4.10 What is the $F$ statistic? Exercise 13.4.11 What is the $p\text{-value}$? Answer 0.1614 Exercise 13.4.12 At the 5% significance level, is there a difference in the mean jump heights among the teams? Use the following information to answer the next seven exercises. A video game developer is testing a new game on three different groups. Each group represents a different target market for the game. The developer collects scores from a random sample from each group. The results are shown in Table Group A Group B Group C 101 151 101 108 149 109 98 160 198 107 112 186 111 126 160 Exercise 13.4.13 What is the $df(\text{num})$? Answer two Exercise 13.4.14 What is the $df(\text{denom})$? Exercise 13.4.15 What are the $SS_{\text{between}}$ and $MS_{\text{between}}$? Answer $SS_{\text{between}} = 5,700.4$; $MS_{\text{between}} = 2,850.2$ Exercise 13.4.16 What are the $SS_{\text{within}}$ and $MS_{\text{within}}$? Exercise 13.4.17 What is the $F$ Statistic? Answer 3.6101 Exercise 13.4.18 What is the $p\text{-value}$? Exercise 13.4.19 At the 10% significance level, are the scores among the different groups different? Answer Yes, there is enough evidence to show that the scores among the groups are statistically significant at the 10% level. Use the following information to answer the next three exercises. Suppose a group is interested in determining whether teenagers obtain their drivers licenses at approximately the same average age across the country. Suppose that the following data are randomly collected from five teenagers in each region of the country. The numbers represent the age at which teenagers obtained their drivers licenses. Northeast South West Central East 16.3 16.9 16.4 16.2 17.1 16.1 16.5 16.5 16.6 17.2 16.4 16.4 16.6 16.5 16.6 16.5 16.2 16.1 16.4 16.8 $\bar{x} =$ ________ ________ ________ ________ ________ $s^{2} =$ ________ ________ ________ ________ ________ Enter the data into your calculator or computer. Exercise 13.4.20 $p\text{-value} =$ ______ State the decisions and conclusions (in complete sentences) for the following preconceived levels of $\alpha$. Exercise 13.4.21 $\alpha = 0.05$ 1. Decision: ____________________________ 2. Conclusion: ____________________________ Exercise 13.4.22 $\alpha = 0.01$ 1. Decision: ____________________________ 2. Conclusion: ____________________________ Use the following information to answer the next eight exercises. Groups of men from three different areas of the country are to be tested for mean weight. The entries in the table are the weights for the different groups. The one-way $ANOVA$ results are shown in Table. Group 1 Group 2 Group 3 216 202 170 198 213 165 240 284 182 187 228 197 176 210 201 Exercise 13.3.2 What is the Sum of Squares Factor? Answer 4,939.2 Exercise 13.3.3 What is the Sum of Squares Error? Exercise 13.3.4 What is the $df$ for the numerator? Answer 2 Exercise 13.3.5 What is the $df$ for the denominator? Exercise 13.3.6 What is the Mean Square Factor? Answer 2,469.6 Exercise 13.3.7 What is the Mean Square Error? 
Exercise 13.3.8 What is the $F$ statistic? Answer 3.7416 Use the following information to answer the next eight exercises. Girls from four different soccer teams are to be tested for mean goals scored per game. The entries in the table are the goals per game for the different teams. The one-way $ANOVA$ results are shown in Table. Team 1 Team 2 Team 3 Team 4 1 2 0 3 2 3 1 4 0 2 1 4 3 4 0 3 2 4 0 2 Exercise 13.3.9 What is $SS_{\text{between}}$? Exercise 13.3.10 What is the $df$ for the numerator? Answer 3 Exercise 13.3.11 What is $MS_{\text{between}}$? Exercise 13.3.12 What is $SS_{\text{within}}$? Answer 13.2 Exercise 13.3.13 What is the $df$ for the denominator? Exercise 13.3.14 What is $MS_{\text{within}}$? Answer 0.825 Exercise 13.3.15 What is the $F$ statistic? Exercise 13.3.16 Judging by the $F$ statistic, do you think it is likely or unlikely that you will reject the null hypothesis? Answer Because a one-way $ANOVA$ test is always right-tailed, a high $F$ statistic corresponds to a low $p\text{-value}$, so it is likely that we will reject the null hypothesis. DIRECTIONS Use a solution sheet to conduct the following hypothesis tests. The solution sheet can be found in [link]. Q 13.4.1 Three students, Linda, Tuan, and Javier, are given five laboratory rats each for a nutritional experiment. Each rat's weight is recorded in grams. Linda feeds her rats Formula A, Tuan feeds his rats Formula B, and Javier feeds his rats Formula C. At the end of a specified time period, each rat is weighed again, and the net gain in grams is recorded. Using a significance level of 10%, test the hypothesis that the three formulas produce the same mean weight gain. Weights of Student Lab Rats Linda's rats Tuan's rats Javier's rats 43.5 47.0 51.2 39.4 40.5 40.9 41.3 38.9 37.9 46.0 46.3 45.0 38.2 44.2 48.6 1. $H_{0}: \mu_{L} = \mu_{T} = \mu_{J}$ 2. at least any two of the means are different 3. $df(\text{num}) = 2; df(\text{denom}) = 12$ 4. $F$ distribution 5. 0.67 6. 0.5305 7. Check student’s solution. 8. Decision: Do not reject null hypothesis; Conclusion: There is insufficient evidence to conclude that the means are different. Q 13.4.2 A grassroots group opposed to a proposed increase in the gas tax claimed that the increase would hurt working-class people the most, since they commute the farthest to work. Suppose that the group randomly surveyed 24 individuals and asked them their daily one-way commuting mileage. The results are in Table. Using a 5% significance level, test the hypothesis that the three mean commuting mileages are the same. working-class professional (middle incomes) professional (wealthy) 17.8 16.5 8.5 26.7 17.4 6.3 49.4 22.0 4.6 9.4 7.4 12.6 65.4 9.4 11.0 47.1 2.1 28.6 19.5 6.4 15.4 51.2 13.9 9.3 Q 13.4.3 Examine the seven practice laps from [link]. Determine whether the mean lap time is statistically the same for the seven practice laps, or if there is at least one lap that has a different mean time from the others. S 13.4.3 1. $H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4} = \mu_{5} = \mu_{6} = \mu_{T}$ 2. At least two mean lap times are different. 3. $df(\text{num}) = 6; df(\text{denom}) = 98$ 4. $F$ distribution 5. 1.69 6. 0.1319 7. Check student’s solution. 8. Decision: Do not reject null hypothesis; Conclusion: There is insufficient evidence to conclude that the mean lap times are different. Use the following information to answer the next two exercises. Table lists the number of pages in four different types of magazines. 
home decorating news health computer 172 87 82 104 286 94 153 136 163 123 87 98 205 106 103 207 197 101 96 146 Q 13.4.4 Using a significance level of 5%, test the hypothesis that the four magazine types have the same mean length. Q 13.4.5 Eliminate one magazine type that you now feel has a mean length different from the others. Redo the hypothesis test, testing that the remaining three means are statistically the same. Use a new solution sheet. Based on this test, are the mean lengths for the remaining three magazines statistically the same? S 13.4.6 1. $H_{0}: \mu_{d} = \mu_{n} = \mu_{h}$ 2. At least any two of the magazines have different mean lengths. 3. $df(\text{num}) = 2, df(\text{denom}) = 12$ 4. $F$ distribution 5. $F = 15.28$ 6. $p\text{-value} = 0.001$ 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Reject the null hypothesis. 3. Reason for decision: $p\text{-value} < \alpha$ 4. Conclusion: There is sufficient evidence to conclude that the mean lengths of the magazines are different. Q 13.4.7 A researcher wants to know if the mean times (in minutes) that people watch their favorite news station are the same. Suppose that Table shows the results of a study. CNN FOX Local 45 15 72 12 43 37 18 68 56 38 50 60 23 31 51 35 22 Assume that all distributions are normal, the three population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05. Q 13.4.8 Are the means for the final exams the same for all statistics class delivery types? Table shows the scores on final exams from several randomly selected classes that used the different delivery types. Online Hybrid Face-to-Face 72 83 80 84 73 78 77 84 84 80 81 81 81   86 79 82 Assume that all distributions are normal, the three population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05. S 13.4.8 1. $H_{0}: \mu_{o} = \mu_{h} = \mu_{f}$ 2. At least two of the means are different. 3. $df(\text{n}) = 2, df(\text{d}) = 13$ 4. $F_{2,13}$ 5. 0.64 6. 0.5437 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: The mean scores for the different class delivery types are not significantly different. Q 13.4.9 Are the mean numbers of times a month a person eats out the same for whites, blacks, Hispanics and Asians? Suppose that Table shows the results of a study. White Black Hispanic Asian 6 4 7 8 8 1 3 3 2 5 5 5 4 2 4 1 6   6 7 Assume that all distributions are normal, the four population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05. Q 13.4.10 Are the mean numbers of daily visitors to a ski resort the same for the three types of snow conditions? Suppose that Table shows the results of a study. Powder Machine Made Hard Packed 1,210 2,107 2,846 1,080 1,149 1,638 1,537 862 2,019 941 1,870 1,178 1,528 2,233 1,382 Assume that all distributions are normal, the three population standard deviations are approximately the same, and the data were collected independently and randomly. Use a level of significance of 0.05. S 13.4.11 1. $H_{0}: \mu_{p} = \mu_{m} = \mu_{h}$ 2. At least any two of the means are different. 3. $df(\text{n}) = 2, df(\text{d}) = 12$ 4. $F_{2,12}$ 5. 3.13 6. 0.0807 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3.
Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: There is not sufficient evidence to conclude that the mean numbers of daily visitors are different. Q 13.4.12 Sanjay made identical paper airplanes out of three different weights of paper, light, medium and heavy. He made four airplanes from each of the weights, and launched them himself across the room. Here are the distances (in meters) that his planes flew. Paper Type/Trial Trial 1 Trial 2 Trial 3 Trial 4 Heavy 5.1 meters 3.1 meters 4.7 meters 5.3 meters Medium 4 meters 3.5 meters 4.5 meters 6.1 meters Light 3.1 meters 3.3 meters 2.1 meters 1.9 meters Figure 13.4.1. 1. Take a look at the data in the graph. Look at the spread of data for each group (light, medium, heavy). Does it seem reasonable to assume a normal distribution with the same variance for each group? Yes or No. 2. Why is this a balanced design? 3. Calculate the sample mean and sample standard deviation for each group. 4. Does the weight of the paper have an effect on how far the plane will travel? Use a 1% level of significance. Complete the test using the method shown in the bean plant example in Example. • variance of the group means __________ • $MS_{\text{between}} =$ ___________ • mean of the three sample variances ___________ • $MS_{\text{within}} =$ _____________ • $F$ statistic = ____________ • $df(\text{num}) =$ __________, $df(\text{denom}) =$ ___________ • number of groups _______ • number of observations _______ • $p\text{-value} =$ __________ ($P(F >$ _______$) =$ __________) • Graph the $p\text{-value}$. • decision: _______________________ • conclusion: _______________________________________________________________ Q 13.4.13 DDT is a pesticide that has been banned from use in the United States and most other areas of the world. It is quite effective, but persisted in the environment and over time became seen as harmful to higher-level organisms. Famously, egg shells of eagles and other raptors were believed to be thinner and prone to breakage in the nest because of ingestion of DDT in the food chain of the birds. An experiment was conducted on the number of eggs (fecundity) laid by female fruit flies. There are three groups of flies. One group was bred to be resistant to DDT (the RS group). Another was bred to be especially susceptible to DDT (SS). Finally there was a control line of non-selected or typical fruitflies (NS). Here are the data: RS SS NS RS SS NS 12.8 38.4 35.4 22.4 23.1 22.6 21.6 32.9 27.4 27.5 29.4 40.4 14.8 48.5 19.3 20.3 16 34.4 23.1 20.9 41.8 38.7 20.1 30.4 34.6 11.6 20.3 26.4 23.3 14.9 19.7 22.3 37.6 23.7 22.9 51.8 22.6 30.2 36.9 26.1 22.5 33.8 29.6 33.4 37.3 29.5 15.1 37.9 16.4 26.7 28.2 38.6 31 29.5 20.3 39 23.4 44.4 16.9 42.4 29.3 12.8 33.7 23.2 16.1 36.6 14.9 14.6 29.2 23.6 10.8 47.4 27.3 12.2 41.7 The values are the average number of eggs laid daily for each of 75 flies (25 in each group) over the first 14 days of their lives. Using a 1% level of significance, are the mean rates of egg production for the three strains of fruitfly different? If so, in what way? Specifically, the researchers were interested in whether or not the selectively bred strains were different from the nonselected line, and whether the two selected lines were different from each other. Here is a chart of the three groups: S 13.4.13 The data appear normally distributed from the chart and of similar spread.
There do not appear to be any serious outliers, so we may proceed with our ANOVA calculations, to see if we have good evidence of a difference between the three groups. $H_{0}: \mu_{1} = \mu_{2} = \mu_{3}$; $H_{a}: \mu_{i} \neq \mu_{j}$ some $i \neq j$. Define $\mu_{1}, \mu_{2}, \mu_{3}$, as the population mean number of eggs laid by the three groups of fruit flies. $F$ statistic $= 8.6657$; $p\text{-value} = 0.0004$ Decision: Since the $p\text{-value}$ is less than the level of significance of 0.01, we reject the null hypothesis. Conclusion: We have good evidence that the average number of eggs laid during the first 14 days of life for these three strains of fruitflies are different. Interestingly, if you perform a two sample $t$-test to compare the RS and NS groups they are significantly different ($p = 0.0013$). Similarly, SS and NS are significantly different ($p = 0.0006$). However, the two selected groups, RS and SS are not significantly different ($p = 0.5176$). Thus we appear to have good evidence that selection either for resistance or for susceptibility involves a reduced rate of egg production (for these specific strains) as compared to flies that were not selected for resistance or susceptibility to DDT. Here, genetic selection has apparently involved a loss of fecundity. Q 13.4.14 The data shown is the recorded body temperatures of 130 subjects as estimated from available histograms. Traditionally we are taught that the normal human body temperature is 98.6 F. This is not quite correct for everyone. Are the mean temperatures among the four groups different? Calculate 95% confidence intervals for the mean body temperature in each group and comment about the confidence intervals. FL FH ML MH FL FH ML MH 96.4 96.8 96.3 96.9 98.4 98.6 98.1 98.6 96.7 97.7 96.7 97 98.7 98.6 98.1 98.6 97.2 97.8 97.1 97.1 98.7 98.6 98.2 98.7 97.2 97.9 97.2 97.1 98.7 98.7 98.2 98.8 97.4 98 97.3 97.4 98.7 98.7 98.2 98.8 97.6 98 97.4 97.5 98.8 98.8 98.2 98.8 97.7 98 97.4 97.6 98.8 98.8 98.3 98.9 97.8 98 97.4 97.7 98.8 98.8 98.4 99 97.8 98.1 97.5 97.8 98.8 98.9 98.4 99 97.9 98.3 97.6 97.9 99.2 99 98.5 99 97.9 98.3 97.6 98 99.3 99 98.5 99.2 98 98.3 97.8 98   99.1 98.6 99.5 98.2 98.4 97.8 98   99.1 98.6 98.2 98.4 97.8 98.3   99.2 98.7 98.2 98.4 97.9 98.4   99.4 99.1 98.2 98.4 98 98.4   99.9 99.3 98.2 98.5 98 98.6   100 99.4 98.2 98.6 98 98.6   100.8 13.5: Test of Two Variances Use the following information to answer the next two exercises. There are two assumptions that must be true in order to perform an $F$ test of two variances. Exercise 13.5.2 Name one assumption that must be true. Answer The populations from which the two samples are drawn are normally distributed. Exercise 13.5.3 What is the other assumption that must be true? Use the following information to answer the next five exercises. Two coworkers commute from the same building. They are interested in whether or not there is any variation in the time it takes them to drive to work. They each record their times for 20 commutes. The first worker’s times have a variance of 12.1. The second worker’s times have a variance of 16.9. The first worker thinks that he is more consistent with his commute times and that his commute time is shorter. Test the claim at the 10% level. Exercise 13.5.4 State the null and alternative hypotheses. Answer $H_{0}: \sigma_{1} = \sigma_{2}$ $H_{a}: \sigma_{1} < \sigma_{2}$ or $H_{0}: \sigma^{2}_{1} = \sigma^{2}_{2}$ $H_{a}: \sigma^{2}_{1} < \sigma^{2}_{2}$ Exercise 13.5.5 What is $s_{1}$ in this problem? 
Exercise 13.5.6 What is $s_{2}$ in this problem? Answer 4.11 Exercise 13.5.7 What is $n$? Exercise 13.5.8 What is the $F$ statistic? Answer 0.7159 Exercise 13.5.9 What is the $p\text{-value}$? Exercise 13.5.10 Is the claim accurate? Answer No, at the 10% level of significance, we do not reject the null hypothesis and state that the data do not show that the variation in drive times for the first worker is less than the variation in drive times for the second worker. Use the following information to answer the next four exercises. Two students are interested in whether or not there is variation in their test scores for math class. There are 15 total math tests they have taken so far. The first student’s grades have a standard deviation of 38.1. The second student’s grades have a standard deviation of 22.5. The second student thinks his scores are lower. Exercise 13.5.11 State the null and alternative hypotheses. Exercise 13.5.12 What is the $F$ Statistic? Answer 2.8674 Exercise 13.5.13 What is the $p\text{-value}$? Exercise 13.5.14 At the 5% significance level, do we reject the null hypothesis? Answer Reject the null hypothesis. There is enough evidence to say that the variance of the grades for the first student is higher than the variance in the grades for the second student. Use the following information to answer the next three exercises. Two cyclists are comparing the variances of their overall paces going uphill. Each cyclist records his or her speeds going up 35 hills. The first cyclist has a variance of 23.8 and the second cyclist has a variance of 32.1. The cyclists want to see if their variances are the same or different. Exercise 13.5.15 State the null and alternative hypotheses. Exercise 13.5.16 What is the $F$ Statistic? Answer 0.7414 Exercise 13.5.17 At the 5% significance level, what can we say about the cyclists’ variances? Q 13.5.1 Three students, Linda, Tuan, and Javier, are given five laboratory rats each for a nutritional experiment. Each rat’s weight is recorded in grams. Linda feeds her rats Formula A, Tuan feeds his rats Formula B, and Javier feeds his rats Formula C. At the end of a specified time period, each rat is weighed again and the net gain in grams is recorded. Linda's rats Tuan's rats Javier's rats 43.5 47.0 51.2 39.4 40.5 40.9 41.3 38.9 37.9 46.0 46.3 45.0 38.2 44.2 48.6 Determine whether or not the variance in weight gain is statistically the same among Javier’s and Linda’s rats. Test at a significance level of 10%. S 13.5.1 1. $H_{0}: \sigma^{2}_{1} = \sigma^{2}_{2}$ 2. $H_{a}: \sigma^{2}_{1} \neq \sigma^{2}_{1}$ 3. $df(\text{num}) = 4; df(\text{denom}) = 4$ 4. $F_{4, 4}$ 5. 3.00 6. $2(0.1563) = 0.3126$. Using the TI-83+/84+ function 2-SampFtest, you get the test statistic as 2.9986 and p-value directly as 0.3127. If you input the lists in a different order, you get a test statistic of 0.3335 but the $p\text{-value}$ is the same because this is a two-tailed test. 7. Check student't solution. 8. Decision: Do not reject the null hypothesis; Conclusion: There is insufficient evidence to conclude that the variances are different. Q 13.5.2 A grassroots group opposed to a proposed increase in the gas tax claimed that the increase would hurt working-class people the most, since they commute the farthest to work. Suppose that the group randomly surveyed 24 individuals and asked them their daily one-way commuting mileage. The results are as follows. 
working-class professional (middle incomes) professional (wealthy) 17.8 16.5 8.5 26.7 17.4 6.3 49.4 22.0 4.6 9.4 7.4 12.6 65.4 9.4 11.0 47.1 2.1 28.6 19.5 6.4 15.4 51.2 13.9 9.3 Determine whether or not the variance in mileage driven is statistically the same among the working class and professional (middle income) groups. Use a 5% significance level. Q 13.5.3 Refer to the data from [link]. Examine practice laps 3 and 4. Determine whether or not the variance in lap time is statistically the same for those practice laps. Use the following information to answer the next two exercises. The following table lists the number of pages in four different types of magazines. home decorating news health computer 172 87 82 104 286 94 153 136 163 123 87 98 205 106 103 207 197 101 96 146 S 13.5.3 1. $H_{0}: \sigma^{2}_{1} = \sigma^{2}_{2}$ 2. $H_{a}: \sigma^{2}_{1} \neq \sigma^{2}_{1}$ 3. $df(\text{n}) = 19, df(\text{d}) = 19$ 4. $F_{19,19}$ 5. 1.13 6. 0.786 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: There is not sufficient evidence to conclude that the variances are different. Q 13.5.4 Which two magazine types do you think have the same variance in length? Q 13.5.5 Which two magazine types do you think have different variances in length? S 13.5.5 The answers may vary. Sample answer: Home decorating magazines and news magazines have different variances. Q 13.5.6 Is the variance for the amount of money, in dollars, that shoppers spend on Saturdays at the mall the same as the variance for the amount of money that shoppers spend on Sundays at the mall? Suppose that the Table shows the results of a study. Saturday Sunday Saturday Sunday 75 44 62 137 18 58 0 82 150 61 124 39 94 19 50 127 62 99 31 141 73 60 118 73 89 Q 13.5.7 Are the variances for incomes on the East Coast and the West Coast the same? Suppose that Table shows the results of a study. Income is shown in thousands of dollars. Assume that both distributions are normal. Use a level of significance of 0.05. East West 38 71 47 126 30 42 82 51 75 44 52 90 115 88 67 S 13.5.7 1. $H_{0}: \sigma^{2}_{1} = \sigma^{2}_{2}$ 2. $H_{a}: \sigma^{2}_{1} \neq \sigma^{2}_{1}$ 3. $df(\text{n}) = 7, df(\text{d}) = 6$ 4. $F_{7,6}$ 5. 0.8117 6. 0.7825 7. Check student’s solution. 1. $\alpha: 0.05$ 2. Decision: Do not reject the null hypothesis. 3. Reason for decision: $p\text{-value} > \alpha$ 4. Conclusion: There is not sufficient evidence to conclude that the variances are different. Q 13.5.8 Thirty men in college were taught a method of finger tapping. They were randomly assigned to three groups of ten, with each receiving one of three doses of caffeine: 0 mg, 100 mg, 200 mg. This is approximately the amount in no, one, or two cups of coffee. Two hours after ingesting the caffeine, the men had the rate of finger tapping per minute recorded. The experiment was double blind, so neither the recorders nor the students knew which group they were in. Does caffeine affect the rate of tapping, and if so how? Here are the data: 0 mg 100 mg 200 mg 0 mg 100 mg 200 mg 242 248 246 245 246 248 244 245 250 248 247 252 247 248 248 248 250 250 242 247 246 244 246 248 246 243 245 242 244 250 Q 13.5.9 King Manuel I, Komnenus ruled the Byzantine Empire from Constantinople (Istanbul) during the years 1145 to 1180 A.D. The empire was very powerful during his reign, but declined significantly afterwards. 
Coins minted during his era were found in Cyprus, an island in the eastern Mediterranean Sea. Nine coins were from his first coinage, seven from the second, four from the third, and seven from a fourth. These spanned most of his reign. We have data on the silver content of the coins: First Coinage Second Coinage Third Coinage Fourth Coinage 5.9 6.9 4.9 5.3 6.8 9.0 5.5 5.6 6.4 6.6 4.6 5.5 7.0 8.1 4.5 5.1 6.6 9.3   6.2 7.7 9.2   5.8 7.2 8.6   5.8 6.9 6.2 Did the silver content of the coins change over the course of Manuel’s reign? Here are the means and variances of each coinage. The data are unbalanced. First Second Third Fourth Mean 6.7444 8.2429 4.875 5.6143 Variance 0.2953 1.2095 0.2025 0.1314 S 13.5.9 Here is a strip chart of the silver content of the coins: While there are differences in spread, it is not unreasonable to use $ANOVA$ techniques. Here is the completed $ANOVA$ table: Source of Variation Sum of Squares ($SS$) Degrees of Freedom ($df$) Mean Square ($MS$) $F$ Factor (Between) 37.748 $4 - 1 = 3$ 12.5825 26.272 Error (Within) 11.015 $27 - 4 = 23$ 0.4789 Total 48.763 $27 - 1 = 26$ $P(F > 26.272) = 0$; Reject the null hypothesis for any alpha. There is sufficient evidence to conclude that the mean silver content among the four coinages is different. From the strip chart, it appears that the first and second coinages had higher silver contents than the third and fourth. Q 13.5.10 The American League and the National League of Major League Baseball are each divided into three divisions: East, Central, and West. Many years, fans talk about some divisions being stronger (having better teams) than other divisions. This may have consequences for the postseason. For instance, in 2012 Tampa Bay won 90 games and did not play in the postseason, while Detroit won only 88 and did play in the postseason. This may have been an oddity, but is there good evidence that in the 2012 season, the American League divisions were significantly different in overall records? Use the following data to test whether the mean number of wins per team in the three American League divisions were the same or not. Note that the data are not balanced, as two divisions had five teams, while one had only four. Division Team Wins East NY Yankees 95 East Baltimore 93 East Tampa Bay 90 East Toronto 73 East Boston 69 Division Team Wins Central Detroit 88 Central Chicago Sox 85 Central Kansas City 72 Central Cleveland 68 Central Minnesota 66 Division Team Wins West Oakland 94 West Texas 93 West LA Angels 89 West Seattle 75 S 13.5.10 Here is a stripchart of the number of wins for the 14 teams in the AL for the 2012 season. While the spread seems similar, there may be some question about the normality of the data, given the wide gaps in the middle near the 0.500 mark of 82 games (teams play 162 games each season in MLB). However, one-way $ANOVA$ is robust. Here is the $ANOVA$ table for the data: Source of Variation Sum of Squares ($SS$) Degrees of Freedom ($df$) Mean Square ($MS$) $F$ Factor (Between) 344.16 3 – 1 = 2 172.08 1.5521 Error (Within) 1,219.55 14 – 3 = 11 110.87 Total 1,563.71 14 – 1 = 13 $P(F > 1.5521) = 0.2548$ Since the $p\text{-value}$ is so large, there is not good evidence against the null hypothesis of equal means. We decline to reject the null hypothesis. Thus, for 2012, there is not any good evidence of a significant difference in mean number of wins between the divisions of the American League.
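The one-way ANOVA problems in this chapter can be checked numerically as well as by solution sheet. The sketch below is not part of the original exercises; it assumes Python with SciPy and applies scipy.stats.f_oneway to the unbalanced 2012 American League win totals from Q 13.5.10, which should reproduce the $F$ and $p$-value in the table above (about 1.55 and 0.25).

# A minimal sketch (assumed tooling: Python with SciPy) of a one-way ANOVA on the
# 2012 AL win totals; the groups may have unequal sizes, as here.
from scipy import stats

east = [95, 93, 90, 73, 69]
central = [88, 85, 72, 68, 66]
west = [94, 93, 89, 75]

# H0: the three division means are equal; Ha: at least two of them differ.
f_stat, p_value = stats.f_oneway(east, central, west)
print(f"F = {f_stat:.4f}, p-value = {p_value:.4f}")

Because the printed $p$-value is far above 0.05, the conclusion matches S 13.5.10: do not reject the hypothesis of equal division means.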
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. Exercises: Shafer and Zhang These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. Exercises 1. Explain what is meant by the term population. 2. Explain what is meant by the term sample. 3. Explain how a sample differs from a population. 4. Explain what is meant by the term sample data. 5. Explain what a parameter is. 6. Explain what a statistic is. 7. Give an example of a population and two different characteristics that may be of interest. 8. Describe the difference between descriptive statistics and inferential statistics. Illustrate with an example. 9. Identify each of the following data sets as either a population or a sample: 1. The grade point averages (GPAs) of all students at a college. 2. The GPAs of a randomly selected group of students on a college campus. 3. The ages of the nine Supreme Court Justices of the United States on $\text{January}\; 1,\; 1842$. 4. The gender of every second customer who enters a movie theater. 5. The lengths of Atlantic croakers caught on a fishing trip to the beach. 10. Identify the following measures as either quantitative or qualitative: 1. The $30$ high-temperature readings of the last $30$ days. 2. The scores of $40$ students on an English test. 3. The blood types of $120$ teachers in a middle school. 4. The last four digits of social security numbers of all students in a class. 5. The numbers on the jerseys of $53$ football players on a team. 11. Identify the following measures as either quantitative or qualitative: 1. The genders of the first $40$ newborns in a hospital one year. 2. The natural hair color of $20$ randomly selected fashion models. 3. The ages of $20$ randomly selected fashion models. 4. The fuel economy in miles per gallon of $20$ new cars purchased last month. 5. The political affiliation of $500$ randomly selected voters. 12. A researcher wishes to estimate the average amount spent per person by visitors to a theme park. He takes a random sample of forty visitors and obtains an average of $\28$ per person. 1. What is the population of interest? 2. What is the parameter of interest? 3. Based on this sample, do we know the average amount spent per person by visitors to the park? Explain fully. 13. A researcher wishes to estimate the average weight of newborns in South America in the last five years. He takes a random sample of $235$ newborns and obtains an average of $3.27$ kilograms. 1. What is the population of interest? 2. What is the parameter of interest? 3. Based on this sample, do we know the average weight of newborns in South America? Explain fully. 14. A researcher wishes to estimate the proportion of all adults who own a cell phone. He takes a random sample of $1,572$ adults; $1,298$ of them own a cell phone, hence $1298/1572 \approx 0.83$ or about $83\%$ own a cell phone. 1. What is the population of interest? 2. What is the parameter of interest? 3. What is the statistic involved? 4. Based on this sample, do we know the proportion of all adults who own a cell phone? Explain fully. 15. A sociologist wishes to estimate the proportion of all adults in a certain region who have never married. In a random sample of $1,320$ adults, $145$ have never married, hence $145/1320 \approx 0.11$ or about $11\%$ have never married. 1. What is the population of interest? 2. What is the parameter of interest? 3. What is the statistic involved? 4. 
Based on this sample, do we know the proportion of all adults who have never married? Explain fully. 1. What must be true of a sample if it is to give a reliable estimate of the value of a particular population parameter? 2. What must be true of a sample if it is to give certain knowledge of the value of a particular population parameter? Answers 1. A population is the total collection of objects that are of interest in a statistical study. 2. A sample, being a subset, is typically smaller than the population. In a statistical study, all elements of a sample are available for observation, which is not typically the case for a population. 3. A parameter is a value describing a characteristic of a population. In a statistical study the value of a parameter is typically unknown. 4. All currently registered students at a particular college form a population. Two population characteristics of interest could be the average GPA and the proportion of students over $23$ years. 1. Population. 2. Sample. 3. Population. 4. Sample. 5. Sample. 1. Qualitative. 2. Qualitative. 3. Quantitative. 4. Quantitative. 5. Qualitative. 1. All newborn babies in South America in the last five years. 2. The average birth weight of all newborn babies in South America in the last five years. 3. No, not exactly, but we know the approximate value of the average. 1. All adults in the region. 2. The proportion of the adults in the region who have never married. 3. The proportion computed from the sample, $0.1$. 4. No, not exactly, but we know the approximate value of the proportion. Exercises 1. List all the measurements for the data set represented by the following data frequency table. $\begin{array}{c|ccccc}x & 31 & 32 & 33 & 34 & 35 \ \hline f & 1 & 5 & 6 & 4 & 2\end{array}$ 2. List all the measurements for the data set represented by the following data frequency table $\begin{array}{c|ccccccc}x & 97 & 98 & 99 & 100 & 101 & 102 & 103 & 105 \ \hline f & 7 & 5 & 3 & 4 & 2 & 2 & 1 & 1\end{array}$ 3. Construct the data frequency table for the following data set. $\begin{array}22 & 25 & 22 & 27 & 24 & 23 \ 26 & 24 & 22 & 24 & 26 &\end{array}$ 4. Construct the data frequency table for the following data set. $\{1,\, 5,\, 2,\, 3,\, 5,\, 1,\, 4,\, 4,\, 4,\, 3,\, 2,\, 5,\, 1,\, 3,\, 2,\, 1,\, 1,\, 1,\, 2\}$ Answers 1. $\{31,\, 32,\, 32,\, 32,\, 32,\, 32,\, 33,\, 33,\, 33,\, 33,\, 33,\, 33,\, 34,\, 34,\, 34,\, 34,\, 35,\, 35\}$ 2. $\begin{array}{c|ccccc}x & 22 & 23 & 24 & 25 & 26 & 27 \ \hline f & 3 & 1 & 3 & 1 & 2 & 1\end{array}$ • Anonymous
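The data frequency tables asked for in the last group of exercises can be tallied with a few lines of code. This is only an illustrative sketch using the standard library's `collections.Counter`; the data are those of Exercise 3 above.

```python
# A sketch of tallying a data frequency table for Exercise 3.
from collections import Counter

data = [22, 25, 22, 27, 24, 23, 26, 24, 22, 24, 26]
freq = Counter(data)

for x in sorted(freq):
    # Prints x and f pairs: 22 3, 23 1, 24 3, 25 1, 26 2, 27 1,
    # matching the frequency table given in the Answers.
    print(x, freq[x])
```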
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 2.1: Three popular data displays Basic 1. Describe one difference between a frequency histogram and a relative frequency histogram. 2. Describe one advantage of a stem and leaf diagram over a frequency histogram. 3. Construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram for the following data set. For the histograms use classes $51-60$, $61-70$, and so on. $\begin{array}69 & 92 & 68 & 77 & 80 \ 70 & 85 & 88 & 85 & 96 \ 93 & 75 & 76 & 82 & 100 \ 53 & 70 & 70 & 82 & 85\end{array}$ 4. Construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram for the following data set. For the histograms use classes $6.0-6.9$, $7.0-7.9$, and so on. $\begin{array}8.5 & 8.2 & 7.0 & 7.0 & 4.9 \ 6.5 & 8.2 & 7.6 & 1.5 & 9.3 \ 9.6 & 8.5 & 8.8 & 8.5 & 8.7 \ 8.0 & 7.7 & 2.9 & 9.2 & 6.9\end{array}$ 5. A data set contains $n = 10$ observations. The values $x$ and their frequencies $f$ are summarized in the following data frequency table. $\begin{array}{c|cccc}x & -1 & 0 & 1 & 2 \ \hline f & 3 & 4 & 2 & 1\end{array}$Construct a frequency histogram and a relative frequency histogram for the data set. 6. A data set contains the $n=20$ observations The values $x$ and their frequencies $f$ are summarized in the following data frequency table. $\begin{array}{c|ccc}x & -1 & 0 & 1 & 2 \ \hline f & 3 & a & 2 & 1\end{array}$The frequency of the value $0$ is missing. Find a and then sketch a frequency histogram and a relative frequency histogram for the data set. 7. A data set has the following frequency distribution table: $\begin{array}{c|ccc}x & 1 & 2 & 3 & 4 \ \hline f & 3 & a & 2 & 1\end{array}$The number a is unknown. Can you construct a frequency histogram? If so, construct it. If not, say why not. 8. A table of some of the relative frequencies computed from a data set is $\begin{array}{c|ccc}x & 1 & 2 & 3 & 4 \ \hline f ∕ n & 0.3 & p & 0.2 & 0.1\end{array}$The number $p$ is yet to be computed. Finish the table and construct the relative frequency histogram for the data set. Applications 1. The IQ scores of ten students randomly selected from an elementary school are given. $\begin{array}108 & 100 & 99 & 125 & 87 \ 105 & 107 & 105 & 119 & 118\end{array}$Grouping the measures in the $80s$, the $90s$, and so on, construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram. 2. The IQ scores of ten students randomly selected from an elementary school for academically gifted students are given. $\begin{array}133 & 140 & 152 & 142 & 137 \ 145 & 160 & 138 & 139 & 138\end{array}$Grouping the measures by their common hundreds and tens digits, construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram. 3. During a one-day blood drive $300$ people donated blood at a mobile donation center. The blood types of these $300$ donors are summarized in the table. $\begin{array}{c|ccc}Blood\: Type\hspace{0.167em} & O & A & B & AB \ \hline Frequency & 136 & 120 & 32 & 12\end{array}$Construct a relative frequency histogram for the data set. 4. In a particular kitchen appliance store an electric automatic rice cooker is a popular item. The weekly sales for the last $20$weeks are shown. $\begin{array}20 & 15 & 14 & 14 & 18 \ 15 & 17 & 16 & 16 & 18 \ 15 & 19 & 12 & 13 & 9 \ 19 & 15 & 15 & 16 & 15\end{array}$Construct a relative frequency histogram with classes $6-10$, $11-15$, and $16-20$. 
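The class counts behind a relative frequency histogram such as the one requested in Application 4 can be tallied directly. The sketch below is plain Python; the class limits are those stated in the exercise, and the variable names are my own.

```python
# A sketch of the class tallies for the rice-cooker sales in Application 4.
sales = [20, 15, 14, 14, 18, 15, 17, 16, 16, 18,
         15, 19, 12, 13, 9, 19, 15, 15, 16, 15]
classes = [(6, 10), (11, 15), (16, 20)]   # inclusive class limits
n = len(sales)

for lo, hi in classes:
    count = sum(lo <= x <= hi for x in sales)
    print(f"{lo}-{hi}: frequency {count}, relative frequency {count / n:.2f}")
```

The resulting relative frequencies (0.05, 0.50, and 0.45) are the bar heights of the requested histogram.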
Additional Exercises 1. Random samples, each of size $n = 10$, were taken of the lengths in centimeters of three kinds of commercial fish, with the following results: $\begin {array}{lrcccccccc} Sample \hspace{0.167em}1 : & 108 & 100 & 99 & 125 & 87 & 105 & 107 & 105 & 119 & 118 \ Sample \hspace{0.167em} 2 : & 133 & 140 & 152 & 142 & 137 & 145 & 160 & 138 & 139 & 138 \ Sample \hspace{0.167em} 3 : & 82 & 60 & 83 & 82 & 82 & 74 & 79 & 82 & 80 & 80\end{array}$Grouping the measures by their common hundreds and tens digits, construct a stem and leaf diagram, a frequency histogram, and a relative frequency histogram for each of the samples. Compare the histograms and describe any patterns they exhibit. 2. During a one-day blood drive $300$ people donated blood at a mobile donation center. The blood types of these $300$ donors are summarized below. $\begin{array}{c|ccc}Blood\: Type\hspace{0.167em} & O & A & B & AB \ \hline Frequency & 136 & 120 & 32 & 12\end{array}$Identify the blood type that has the highest relative frequency for these $300$ people. Can you conclude that the blood type you identified is also most common for all people in the population at large? Explain. 3. In a particular kitchen appliance store, the weekly sales of an electric automatic rice cooker for the last $20$ weeks are as follows. $\begin{array}20 & 15 & 14 & 14 & 18 \ 15 & 17 & 16 & 16 & 18 \ 15 & 19 & 12 & 13 & 9 \ 19 & 15 & 15 & 16 & 15\end{array}$In retail sales, too large an inventory ties up capital, while too small an inventory costs lost sales and customer satisfaction. Using the relative frequency histogram for these data, find approximately how many rice cookers must be in stock at the beginning of each week if 1. the store is not to run out of stock by the end of a week for more than $15\%$ of the weeks; and 2. the store is not to run out of stock by the end of a week for more than $5\%$ of the weeks. 4. In retail sales, too large an inventory ties up capital, while too small an inventory costs lost sales and customer satisfaction. Using the relative frequency histogram for these data, find approximately how many rice cookers must be in stock at the beginning of each week if the store is not to run out of stock by the end of a week for more than $15\%$ of the weeks; and the store is not to run out of stock by the end of a week for more than $5\%$ of the weeks. Answers 1. The vertical scale on one is the frequencies and on the other is the relative frequencies. 2. $\begin{array}{r|cccccc}5 & 3 & & & & & & \ 6 & 8 & 9 & & & & & \ 7 & 0 & 0 & 0 & 5 & 6 & 7 & \ 8 & 0 & 2 & 3 & 5 & 5 & 5 & 8 \ 9 & 2 & 3 & 6 & & & & \ 10 & 0 & & & & & &\end{array}$ 3. Noting that $n = 10$ the relative frequency table is: $\begin{array}{c|cccc}x & -1 & 0 & 1 & 2 \ \hline f ∕ n & 0.3 & 0.4 & 0.2 & 0.1\end{array}$ 4. Since $n$ is unknown, $a$ is unknown, so the histogram cannot be constructed. 5. $\begin{array}{r|cccc}8 & 7 & & & & \ 9 & 9 & & & & \ 10 & 0 & 5 & 5 & 7 & 8 \ 11 & 8 & 9 & & \ 12 & 5 & & & &\end{array}$ Frequency and relative frequency histograms are similarly generated. 6. Noting $n = 300$, the relative frequency table is therefore: $\begin{array}{c|cccc}Blood\hspace{0.167em}Type & O & A & B & AB \ \hline f ∕ n & 0.4533 & 0.4 & 0.1067 & 0.04\end{array}$ A relative frequency histogram is then generated. 7. 
The stem and leaf diagrams listed for Samples $1,\, 2,\; \text{and}\; 3$ in that order: $\begin{array}{c|ccccc}6 & & & & & \ 7 & & & & & \ 8 & 7 & & & & \ 9 & 9 & & & & \ 10 & 0 & 5 & 5 & 7 & 8 \ 11 & 8 & 9 & & & \ 12 & 5 & & & & \ 13 & & & & & \ 14 & & & & & \ 15 & & & & & \ 16 & & & & &\end{array}$ $\begin{array}{c|ccccc}6 & & & & & \ 7 & & & & & \ 8 & & & & & \ 9 & & & & & \ 10 & & & & & \ 11 & & & & & \ 12 & & & & & \ 13 & 3 & 7 & 8 & 8 & 9 \ 14 & 0 & 2 & 5 & & \ 15 & 2 & & & & \ 16 & 0 & & & &\end{array}$ $\begin{array}{c|ccccccc}6 & 0 & & & & \ 7 & 4 & 9 & & & \ 8 & 0 & 0 & 2 & 2 & 2 & 2 & 3 \ 9 & & & & & \ 10 & & & & & \ 11 & & & & & \ 12 & & & & & \ 13 & & & & & \ 14 & & & & & \ 15 & & & & & \ 16 & & & & &\end{array}$ The frequency tables are given below in the same order: $\begin{array}{c|ccc}Length\hspace{0.167em} & 80 \sim 89 & 90 \sim 99 & 100 \sim 109 \ \hline f & 1 & 1 & 5\end{array}$ $\begin{array}{c|cc}Length\hspace{0.167em} & 110 \sim 119 & 120 \sim 129 \ \hline f & 2 & 1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 130 \sim 139 & 140 \sim 149 & 150 \sim 159 \ \hline f & 5 & 3 & 1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 160 \sim 169 \ \hline f & 1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 60 \sim 69 & 70 \sim 79 & 80 \sim 89 \ \hline f & 1 & 2 & 7\end{array}$ The relative frequency tables are also given below in the same order: $\begin{array}{c|ccc}Length\hspace{0.167em} & 80 \sim 89 & 90 \sim 99 & 100 \sim 109 \ \hline f ∕ n & 0.1 & 0.1 & 0.5\end{array}$ $\begin{array}{c|cc}Length\hspace{0.167em} & 110 \sim 119 & 120 \sim 129 \ \hline f ∕ n & 0.2 & 0.1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 130 \sim 139 & 140 \sim 149 & 150 \sim 159 \ \hline f ∕ n & 0.5 & 0.3 & 0.1\end{array}$ $\begin{array}{c|c}Length\hspace{0.167em} & 160 \sim 169 \ \hline f ∕ n & 0.1\end{array}$ $\begin{array}{c|ccc}Length\hspace{0.167em} & 60 \sim 69 & 70 \sim 79 & 80 \sim 89 \ \hline f ∕ n & 0.1 & 0.2 & 0.7\end{array}$ 1. 19 2. 20 2.2: Measures of Central Location Basic 1. For the sample data set $\{1,2,6\}$ find 1. $\sum x$ 2. $\sum x^2$ 3. $\sum (x-3)$ 4. $\sum (x-3)^2$ 2. For the sample data set $\{-1,0,1,4\}$ find 1. $\sum x$ 2. $\sum x^2$ 3. $\sum (x-1)$ 4. $\sum (x-1)^2$ 3. Find the mean, the median, and the mode for the sample $1\; 2\; 3\; 4$ 4. Find the mean, the median, and the mode for the sample $3\; 3\; 4\; 4$ 5. Find the mean, the median, and the mode for the sample $2\; 1\; 2\; 7$ 6. Find the mean, the median, and the mode for the sample $-1\; 0\; 1\; 4\; 1\; 1$ 7. Find the mean, the median, and the mode for the sample data represented by the table $\begin{array}{c|c c c}x & 1 & 2 & 7 \ \hline f & 1 & 2 & 1\ \end{array}$ 8. Find the mean, the median, and the mode for the sample data represented by the table $\begin{array}{c|c c c c}x & -1 & 0 & 1 & 4 \ \hline f & 1 & 1 & 3 & 1\ \end{array}$ 9. Create a sample data set of size $n=3$ for which the mean $\bar{x}$ is greater than the median $\tilde{x}$. 10. Create a sample data set of size $n=3$ for which the mean $\bar{x}$ is less than the median $\tilde{x}$. 11. Create a sample data set of size $n=4$ for which the mean $\bar{x}$, the median $\tilde{x}$, and the mode are all identical. 12. Create a sample data set of size $n=4$ for which the median $\tilde{x}$ and the mode are identical but the mean $\bar{x}$ is different. Applications 1. Find the mean and the median for the LDL cholesterol level in a sample of ten heart patients. 
$\begin{matrix} 132 & 162 & 133 & 145 & 148\ 139 & 147 & 160 & 150 & 153 \end{matrix}$ 2. Find the mean and the median, for the LDL cholesterol level in a sample of ten heart patients on a special diet. $\begin{matrix} 127 & 152 & 138 & 110 & 152\ 113 & 131 & 148 & 135 & 158 \end{matrix}$ 3. Find the mean, the median, and the mode for the number of vehicles owned in a survey of $52$ households. $\begin{array}{c|c c c c c c c c} x & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\ \hline f &2 &12 &15 &11 &6 &3 &1 &2\ \end{array}$ 4. The number of passengers in each of $120$ randomly observed vehicles during morning rush hour was recorded, with the following results. $\begin{array}{c|c c c c c } x & 1 & 2 & 3 & 4 & 5\ \hline f &84 &29 &3 &3 &1\ \end{array}$Find the mean, the median, and the mode of this data set. 5. Twenty-five $1-lb$ boxes of $16d$ nails were randomly selected and the number of nails in each box was counted, with the following results. $\begin{array}{c|c c c c c } x & 47 & 48 & 49 & 50 & 51\ \hline f &1 &3 &18 &2 &1\ \end{array}$Find the mean, the median, and the mode of this data set. Additional Exercises 1. Five laboratory mice with thymus leukemia are observed for a predetermined period of $500$ days. After $500$ days, four mice have died but the fifth one survives. The recorded survival times for the five mice are $\begin{matrix} 493 & 421 & 222 & 378 & 500^* \end{matrix}$where $500^*$ indicates that the fifth mouse survived for at least $500$ days but the survival time (i.e., the exact value of the observation) is unknown. 1. Can you find the sample mean for the data set? If so, find it. If not, why not? 2. Can you find the sample median for the data set? If so, find it. If not, why not? 2. Five laboratory mice with thymus leukemia are observed for a predetermined period of $500$ days. After $450$ days, three mice have died, and one of the remaining mice is sacrificed for analysis. By the end of the observational period, the last remaining mouse still survives. The recorded survival times for the five mice are $\begin{matrix} 222 & 421 & 378 & 450^* & 500^* \end{matrix}$where $^*$ indicates that the mouse survived for at least the given number of days but the exact value of the observation is unknown. 1. Can you find the sample mean for the data set? If so, find it. If not, explain why not. 2. Can you find the sample median for the data set? If so, find it. If not, explain why not. 3. A player keeps track of all the rolls of a pair of dice when playing a board game and obtains the following data. $\begin{array}{c|c c c c c c } x & 2 & 3 & 4 & 5 & 6 & 7\ \hline f &10 &29 &40 &56 &68 &77 \ \end{array}$ $\begin{array}{c|c c c c c } x & 8 & 9 & 10 & 11 & 12 \ \hline f &67 &55 &39 &28 &11 \ \end{array}$Find the mean, the median, and the mode. 4. Cordelia records her daily commute time to work each day, to the nearest minute, for two months, and obtains the following data. $\begin{array}{c|c c c c c c c } x & 26 & 27 & 28 & 29 & 30 & 31 & 32\ \hline f &3 &4 &16 &12 &6 &2 &1 \ \end{array}$ 1. Based on the frequencies, do you expect the mean and the median to be about the same or markedly different, and why? 2. Compute the mean, the median, and the mode. 5. An ordered stem and leaf diagram gives the scores of $71$ students on an exam. 
$\begin{array}{c|c c c c c c c c c c c c c c c c c c } 10 & 0 & 0 \ 9 &1 &1 &1 &1 &2 &3\ 8 &0 &1 &1 &2 &2 &3 &4 &5 &7 &8 &8 &9\ 7 &0 &0 &0 &1 &1 &2 &4 &4 &5 &6 &6 &6 &7 &7 &7 &8 &8 &9\ 6 &0 &1 &2 &2 &2 &3 &4 &4 &5 &7 &7 &7 &7 &8 &8\ 5 &0 &2 &3 &3 &4 &4 &6 &7 &7 &8 &9\ 4 &2 &5 &6 &8 &8\ 3 &9 &9 \end{array}$ 1. Based on the shape of the display, do you expect the mean and the median to be about the same or markedly different, and why? 2. Compute the mean, the median, and the mode. 6. A man tosses a coin repeatedly until it lands heads and records the number of tosses required. (For example, if it lands heads on the first toss he records a $1$; if it lands tails on the first two tosses and heads on the third he records a $3$.) The data are shown. $\begin{array}{c|c c c c c c c c c c } x & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \ \hline f &384 &208 &98 &56 &28 &12 &8 &2 &3 &1 \end{array}$ 1. Find the mean of the data. 2. Find the median of the data. 1. Construct a data set consisting of ten numbers, all but one of which is above average, where the average is the mean. 2. Is it possible to construct a data set as in part (a) when the average is the median? Explain. 7. Show that no matter what kind of average is used (mean, median, or mode) it is impossible for all members of a data set to be above average. 1. Twenty sacks of grain weigh a total of $1,003\; lb$. What is the mean weight per sack? 2. Can the median weight per sack be calculated based on the information given? If not, construct two data sets with the same total but different medians. 8. Begin with the following set of data, call it $\text{Data Set I}$. $\begin{matrix} 5 & -2 & 6 & 14 & -3 & 0 & 1 & 4 & 3 & 2 & 5 \end{matrix}$ 1. Compute the mean, median, and mode. 2. Form a new data set, $\text{Data Set II}$, by adding $3$ to each number in $\text{Data Set I}$. Calculate the mean, median, and mode of $\text{Data Set II}$. 3. Form a new data set, $\text{Data Set III}$, by subtracting $6$ from each number in $\text{Data Set I}$. Calculate the mean, median, and mode of $\text{Data Set III}$. 4. Comparing the answers to parts (a), (b), and (c), can you guess the pattern? State the general principle that you expect to be true. Large Data Set Exercises Note: For Large Data Set Exercises below, all of the data sets associated with these questions are missing, but the questions themselves are included here for reference. 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. 1. Compute the mean and median of the $1,000$ SAT scores. 2. Compute the mean and median of the $1,000$ GPAs. 2. Large $\text{Data Set 1}$ lists the SAT scores of $1,000$ students. 1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population mean $\mu$. 2. Regard the first $25$ observations as a random sample drawn from this population. Compute the sample mean $\bar{x}$ and compare it to $\mu$. 3. Regard the next $25$ observations as a random sample drawn from this population. Compute the sample mean $\bar{x}$ and compare it to $\mu$. 3. Large $\text{Data Set 1}$ lists the GPAs of $1,000$ students. 1. Regard the data as arising from a census of all freshman at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured. Compute the population mean $\mu$. 2. Regard the first $25$ observations as a random sample drawn from this population. 
Compute the sample mean $\bar{x}$ and compare it to $\mu$. 3. Regard the next $25$ observations as a random sample drawn from this population. Compute the sample mean $\bar{x}$ and compare it to $\mu$. 4. Large $\text{Data Sets}\: 7,\: 7A,\: \text{and}\: 7B$ list the survival times in days of $140$ laboratory mice with thymic leukemia from onset to death. 1. Compute the mean and median survival time for all mice, without regard to gender. 2. Compute the mean and median survival time for the $65$ male mice (separately recorded in Large $\text{Data Set 7A}$). 3. Compute the mean and median survival time for the $75$ female mice (separately recorded in Large $\text{Data Set 7B}$). Answers 1. 9 2. 41 3. 0 4. 14 1. $\bar x= 2.5,\; \tilde{x} = 2.5,\; \text{mode} = \{1,2,3,4\}$ 2. $\bar x= 3,\; \tilde{x} = 2,\; \text{mode} = 2$ 3. $\bar x= 3,\; \tilde{x} = 2,\; \text{mode} = 2$ 4. $\{0, 0, 3\}$ 5. $\{0, 1, 1, 2\}$ 6. $\bar x = 146.9,\; \tilde x = 147.5$ 7. $\bar x=2.6 ,\; \tilde{x} = 2,\; \text{mode} = 2$ 8. $\bar x= 48.96,\; \tilde{x} = 49,\; \text{mode} = 49$ 1. No, the survival times of the fourth and fifth mice are unknown. 2. Yes, $\tilde{x}=421$. 9. $\bar x= 28.55,\; \tilde{x} = 28,\; \text{mode} = 28$ 10. $\bar x= 2.05,\; \tilde{x} = 2,\; \text{mode} = 1$ 11. Mean: $nx_{min}\leq \sum x$ so dividing by $n$ yields $x_{min}\leq \bar{x}$, so the minimum value is not above average. Median: the middle measurement, or average of the two middle measurements, $\tilde{x}$, is at least as large as $x_{min}$, so the minimum value is not above average. Mode: the mode is one of the measurements, and is not greater than itself 1. $\bar x= 3.18,\; \tilde{x} = 3,\; \text{mode} = 5$ 2. $\bar x= 6.18,\; \tilde{x} = 6,\; \text{mode} = 8$ 3. $\bar x= -2.81,\; \tilde{x} = -3,\; \text{mode} = -1$ 4. If a number is added to every measurement in a data set, then the mean, median, and mode all change by that number. 1. $\mu = 1528.74$ 2. $\bar{x}=1502.8$ 3. $\bar{x}=1532.2$ 1. $\bar x= 553.4286,\; \tilde{x} = 552.5$ 2. $\bar x= 665.9692,\; \tilde{x} = 667$ 3. $\bar x= 455.8933,\; \tilde{x} = 448$ 2.3 Measures of Variability Basic 1. Find the range, the variance, and the standard deviation for the following sample. $1\; 2\; 3\; 4$ 2. Find the range, the variance, and the standard deviation for the following sample. $2\; -3\; 6\; 0\; 3\; 1$ 3. Find the range, the variance, and the standard deviation for the following sample. $2\; 1\; 2\; 7$ 4. Find the range, the variance, and the standard deviation for the following sample. $-1\; 0\; 1\; 4\; 1\; 1$ 5. Find the range, the variance, and the standard deviation for the sample represented by the data frequency table. $\begin{array}{c|c c c} x & 1 & 2 & 7 \ \hline f &1 &2 &1\ \end{array}$ 6. Find the range, the variance, and the standard deviation for the sample represented by the data frequency table. $\begin{array}{c|c c c c} x & -1 & 0 & 1 & 4 \ \hline f &1 &1 &3 &1\ \end{array}$ Applications 1. Find the range, the variance, and the standard deviation for the sample of ten IQ scores randomly selected from a school for academically gifted students. $\begin{matrix} 132 & 162 & 133 & 145 & 148\ 139 & 147 & 160 & 150 & 153 \end{matrix}$ 2. Find the range, the variance and the standard deviation for the sample of ten IQ scores randomly selected from a school for academically gifted students. $\begin{matrix} 142 & 152 & 138 & 145 & 148\ 139 & 147 & 155 & 150 & 153 \end{matrix}$ Additional Exercises 1. 
Consider the data set represented by the table $\begin{array}{c|c c c c c c c} x & 26 & 27 & 28 & 29 & 30 & 31 & 32 \ \hline f &3 &4 &16 &12 &6 &2 &1\ \end{array}$ 1. Use the frequency table to find that $\sum x=1256$ and $\sum x^2=35,926$. 2. Use the information in part (a) to compute the sample mean and the sample standard deviation. 2. Find the sample standard deviation for the data $\begin{array}{c|c c c c c} x & 1 & 2 & 3 & 4 & 5 \ \hline f &384 &208 &98 &56 &28 \ \end{array}$ $\begin{array}{c|c c c c c} x & 6 & 7 & 8 & 9 & 10 \ \hline f &12 &8 &2 &3 &1 \ \end{array}$ 3. A random sample of $49$ invoices for repairs at an automotive body shop is taken. The data are arrayed in the stem and leaf diagram shown. (Stems are thousands of dollars, leaves are hundreds, so that for example the largest observation is $3,800$.) $\begin{array}{c|c c c c c c c c c c c} 3 & 5 & 6 & 8 \ 3 &0 &0 &1 &1 &2 &4 \ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \ 2 &0 &0 &0 &0 &1 &2 &2 &4 \ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \ 1 &0 &0 &1 &3 &4 &4 &4 \ 0 &5 &6 &8 &8 \ 0 &4 \end{array}$ For these data, $\sum x=101$, $\sum x^2=244,830,000$. 1. Compute the mean, median, and mode. 2. Compute the range. 3. Compute the sample standard deviation. 4. What must be true of a data set if its standard deviation is $0$? 5. A data set consisting of $25$ measurements has standard deviation $0$. One of the measurements has value $17$. What are the other $24$ measurements? 6. Create a sample data set of size $n=3$ for which the range is $0$ and the sample mean is $2$. 7. Create a sample data set of size $n=3$ for which the sample variance is $0$ and the sample mean is $1$. 8. The sample $\{-1,0,1\}$ has mean $\bar{x}=0$ and standard deviation $\bar{x}=0$. Create a sample data set of size $n=3$ for which $\bar{x}=0$ and $s$ is greater than $1$. 9. The sample $\{-1,0,1\}$ has mean $\bar{x}=0$ and standard deviation $\bar{x}=0$. Create a sample data set of size $n=3$ for which $\bar{x}=0$ and the standard deviation $s$ is less than $1$. 10. Begin with the following set of data, call it $\text{Data Set I}$. $5\; -2\; 6\; 1\; 4\; -3\; 0\; 1\; 4\; 3\; 2\; 5$ 1. Compute the sample standard deviation of $\text{Data Set I}$. 2. Form a new data set, $\text{Data Set II}$, by adding $3$ to each number in $\text{Data Set I}$. Calculate the sample standard deviation of $\text{Data Set II}$. 3. Form a new data set, $\text{Data Set III}$, by subtracting $6$ from each number in $\text{Data Set I}$. Calculate the sample standard deviation of $\text{Data Set III}$. 4. Comparing the answers to parts (a), (b), and (c), can you guess the pattern? State the general principle that you expect to be true. Large Data Set Exercises Note: For Large Data Set Exercises below, all of the data sets associated with these questions are missing, but the questions themselves are included here for reference. 1. $\text{Large Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. 1. Compute the range and sample standard deviation of the $1,000$ SAT scores. 2. Compute the range and sample standard deviation of the $1,000$ GPAs. 2. $\text{Large Data Set 1}$ lists the SAT scores of $1,000$ students. 1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population range and population standard deviation $\sigma$. 2. Regard the first $25$ observations as a random sample drawn from this population. 
Compute the sample range and sample standard deviation $s$ and compare them to the population range and $\sigma$. 3. Regard the next $25$ observations as a random sample drawn from this population. Compute the sample range and sample standard deviation $s$ and compare them to the population range and $\sigma$. 3. $\text{Large Data Set 1}$ lists the GPAs of $1,000$ students. 1. Regard the data as arising from a census of all freshman at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured. Compute the population range and population standard deviation $\sigma$. 2. Regard the first $25$ observations as a random sample drawn from this population. Compute the sample range and sample standard deviation $s$ and compare them to the population range and $\sigma$. 3. Regard the next $25$ observations as a random sample drawn from this population. Compute the sample range and sample standard deviation $s$ and compare them to the population range and $\sigma$. 4. $\text{Large Data Set 7, 7A, and 7B }$ list the survival times in days of $140$ laboratory mice with thymic leukemia from onset to death. 1. Compute the range and sample standard deviation of survival time for all mice, without regard to gender. 2. Compute the range and sample standard deviation of survival time for the $65$ male mice (separately recorded in $\text{Large Data Set 7A}$). 3. Compute the range and sample standard deviation of survival time for the $75$ female mice (separately recorded in $\text{Large Data Set 7B}$). Do you see a difference in the results for male and female mice? Does it appear to be significant? Answers 1. $R = 3,\; s^2 = 1.7,\; s = 1.3$. 2. $R = 6,\; s^2=7.\bar{3},\; s = 2.7$. 3. $R = 6,\; s^2=7.3,\; s = 2.7$. 1. $R = 30,\; s^2 = 103.2,\; s = 10.2$. 1. $\bar{x}=28.55,\; s = 1.3$. 1. $\bar{x}=2063,\; \tilde{x} =2000,\; \text{mode}=2000$. 2. $R = 3400$. 3. $s = 869$. 2. All are $17$. 3. $\{1,1,1\}$ 4. One example is $\{-.5,0,.5\}$. 1. $R = 1350$ and $s = 212.5455$ 2. $R = 4.00$ and $s = 0.7407$ 1. $R = 4.00$ and $\sigma = 0.740375$ 2. $R = 3.04$ and $s = 0.808045$ 3. $R = 2.49$ and $s = 0.657843$ 2.4 Relative Position of Data Basic 1. Consider the data set $\begin{matrix} 69 & 92 & 68 & 77 & 80\ 93 & 75 & 76 & 82 & 100\ 70 & 85 & 88 & 85 & 96\ 53 & 70 & 70 & 82 & 85 \end{matrix}$ 1. Find the percentile rank of $82$. 2. Find the percentile rank of $68$. 2. Consider the data set $\begin{matrix} 8.5 & 8.2 & 7.0 & 7.0 & 4.9\ 9.6 & 8.5 & 8.8 & 8.5 & 8.7\ 6.5 & 8.2 & 7.6 & 1.5 & 9.3\ 8.0 & 7.7 & 2.9 & 9.2 & 6.9 \end{matrix}$ 1. Find the percentile rank of $6.5$. 2. Find the percentile rank of $7.7$. 3. Consider the data set represented by the ordered stem and leaf diagram $\begin{array}{c|c c c c c c c c c c c c c c c c c c} 10 & 0 & 0 \ 9 &1 &1 &1 &1 &2 &3\ 8 &0 &1 &1 &2 &2 &3 &4 &5 &7 &8 &8 &9\ 7 &0 &0 &0 &1 &1 &2 &4 &4 &5 &6 &6 &6 &7 &7 &7 &8 &8 &9\ 6 &0 &1 &2 &2 &2 &3 &4 &4 &5 &7 &7 &7 &7 &8 &8\ 5 &0 &2 &3 &3 &4 &4 &6 &7 &7 &8 &9\ 4 &2 &5 &6 &8 &8\ 3 &9 &9 \end{array}$ 1. Find the percentile rank of the grade $75$. 2. Find the percentile rank of the grade $57$. 4. Is the $90^{th}$ percentile of a data set always equal to $90\%$? Why or why not? 5. The $29^{th}$ percentile in a large data set is $5$. 1. Approximately what percentage of the observations are less than $5$? 2. Approximately what percentage of the observations are greater than $5$? 6. The $54^{th}$ percentile in a large data set is $98.6$. 1. 
Approximately what percentage of the observations are less than $98.6$? 2. Approximately what percentage of the observations are greater than $98.6$? 7. In a large data set the $29^{th}$ percentile is $5$ and the $79^{th}$ percentile is $10$. Approximately what percentage of observations lie between $5$ and $10$? 8. In a large data set the $40^{th}$ percentile is $125$ and the $82^{nd}$ percentile is $158$. Approximately what percentage of observations lie between $125$ and $158$? 9. Find the five-number summary and the IQR and sketch the box plot for the sample represented by the stem and leaf diagram in Figure 2.1.2 "Ordered Stem and Leaf Diagram". 10. Find the five-number summary and the IQR and sketch the box plot for the sample explicitly displayed in "Example 2.2.7" in Section 2.2. 11. Find the five-number summary and the IQR and sketch the box plot for the sample represented by the data frequency table $\begin{array}{c|c c c c c} x & 1 & 2 & 5 & 8 & 9 \ \hline f &5 &2 &3 &6 &4\ \end{array}$ 12. Find the five-number summary and the IQR and sketch the box plot for the sample represented by the data frequency table $\begin{array}{c|c c c c c c c c c} x & -5 & -3 & -2 & -1 & 0 & 1 & 3 & 4 & 5 \ \hline f &2 &1 &3 &2 &4 &1 &1 &2 &1\ \end{array}$ 13. Find the $z$-score of each measurement in the following sample data set. $-5\; \; 6\; \; 2\; \; -1\; \; 0$ 14. Find the $z$-score of each measurement in the following sample data set. $1.6\; \; 5.2\; \; 2.8\; \; 3.7\; \; 4.0$ 15. The sample with data frequency table $\begin{array}{c|c c c} x & 1 & 2 & 7 \ \hline f &1 &2 &1\ \end{array}$ has mean $\bar{x}=3$ and standard deviation $s\approx 2.71$. Find the $z$-score for every value in the sample. 16. The sample with data frequency table $\begin{array}{c|c c c c} x & -1 & 0 & 1 & 4 \ \hline f &1 &1 &3 &1\ \end{array}$ has mean $\bar{x}=1$ and standard deviation $s\approx 1.67$. Find the $z$-score for every value in the sample. 17. For the population $0\; \; 0\; \; 2\; \; 2$compute each of the following. 1. The population mean $\mu$. 2. The population variance $\sigma ^2$. 3. The population standard deviation $\sigma$. 4. The $z$-score for every value in the population data set. 18. For the population $0.5\; \; 2.1\; \; 4.4\; \; 1.0$compute each of the following. 1. The population mean $\mu$. 2. The population variance $\sigma ^2$. 3. The population standard deviation $\sigma$. 4. The $z$-score for every value in the population data set. 19. A measurement $x$ in a sample with mean $\bar{x}=10$ and standard deviation $s=3$ has $z$-score $z=2$. Find $x$. 20. A measurement $x$ in a sample with mean $\bar{x}=10$ and standard deviation $s=3$ has $z$-score $z=-1$. Find $x$. 21. A measurement $x$ in a population with mean $\mu =2.3$ and standard deviation $\sigma =1.3$ has $z$-score $z=2$. Find $x$. 22. A measurement $x$ in a sample with mean $\mu =2.3$ and standard deviation $\sigma =1.3$ has $z$-score $z=-1.2$. Find $x$. Applications 1. The weekly sales for the last $20$ weeks in a kitchen appliance store for an electric automatic rice cooker are $\begin{matrix} 20 & 15 & 14 & 14 & 18\ 15 & 19 & 12 & 13 & 9\ 15 & 17 & 16 & 16 & 18\ 19 & 15 & 15 & 16 & 15 \end{matrix}$ 1. Find the percentile rank of $15$. 2. If the sample accurately reflects the population, then what percentage of weeks would an inventory of $15$ rice cookers be adequate? 2. The table shows the number of vehicles owned in a survey of 52 households. 
$\begin{array}{c|c c c c c c c c} x & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \ \hline f &2 &12 &15 &11 &6 &3 &1 &2\ \end{array}$ 1. Find the percentile rank of $2$. 2. If the sample accurately reflects the population, then what percentage of households have at most two vehicles? 3. For two months Cordelia records her daily commute time to work each day to the nearest minute and obtains the following data: $\begin{array}{c|c c c c c c c} x & 26 & 27 & 28 & 29 & 30 & 31 & 32 \ \hline f &3 &4 &16 &12 &6 &2 &1 \ \end{array}$Cordelia is supposed to be at work at $8:00\; a.m$. but refuses to leave her house before $7:30\; a.m$. 1. Find the percentile rank of $30$, the time she has to get to work. 2. Assuming that the sample accurately reflects the population of all of Cordelia’s commute times, use your answer to part (a) to predict the proportion of the work days she is late for work. 4. The mean score on a standardized grammar exam is $49.6$; the standard deviation is $1.35$. Dromio is told that the $z$-score of his exam score is $-1.19$. 1. Is Dromio’s score above average or below average? 2. What was Dromio’s actual score on the exam? 5. A random sample of $49$ invoices for repairs at an automotive body shop is taken. The data are arrayed in the stem and leaf diagram shown. (Stems are thousands of dollars, leaves are hundreds, so that for example the largest observation is $3,800$.) $\begin{array}{c|c c c c c c c c c c c} 3 & 5 & 6 & 8 \ 3 &0 &0 &1 &1 &2 &4 \ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \ 2 &0 &0 &0 &0 &1 &2 &2 &4 \ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \ 1 &0 &0 &1 &3 &4 &4 &4 \ 0 &5 &6 &8 &8 \ 0 &4 \end{array}$For these data, $\sum x=101,100$, $\sum x^2=244,830,000$. 1. Find the $z$-score of the repair that cost $\1,100$. 2. Find the $z$-score of the repairs that cost $\2,700$. 6. The stem and leaf diagram shows the time in seconds that callers to a telephone-order center were on hold before their call was taken. $\begin{array}{c|c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c} 0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &1 &1 &1 &1 &1 &2 &2 &2 &2 &2 &3 &3 &3 &3 &3 &3 &3 &4 &4 &4 &4 &4 \ 0 &5 &5 &5 &5 &5 &5 &5 &5 &5 &6 &6 &6 &6 &6 &6 &6 &6 &6 &6 &7 &7 &7 &7 &7 &7 &8 &8 &8 &9 &9 \ 1 &0 &0 &1 &1 &1 &1 &2 &2 &2 &2 &4 &4 \ 1 &5 &6 &6 &8 &9 \ 2 &2 &4 \ 2 &5 \ 3 &0 \ \end{array}$ 1. Find the quartiles. 2. Give the five-number summary of the data. 3. Find the range and the IQR. Additional Exercises 1. Consider the data set represented by the ordered stem and leaf diagram $\begin{array}{c|c c c c c c c c c c c c c c c c c c} 10 &0 &0 \ 9 &1 &1 &1 &1 &2 &3\ 8 &0 &1 &1 &2 &2 &3 &4 &5 &7 &8 &8 &9\ 7 &0 &0 &0 &1 &1 &2 &4 &4 &5 &6 &6 &6 &7 &7 &7 &8 &8 &9\ 6 &0 &1 &2 &2 &2 &3 &4 &4 &5 &7 &7 &7 &7 &8 &8\ 5 &0 &2 &3 &3 &4 &4 &6 &7 &7 &8 &9\ 4 &2 &5 &6 &8 &8\ 3 &9 &9 \end{array}$ 1. Find the three quartiles. 2. Give the five-number summary of the data. 3. Find the range and the IQR. 2. For the following stem and leaf diagram the units on the stems are thousands and the units on the leaves are hundreds, so that for example the largest observation is $3,800$. $\begin{array}{c|c c c c c c c c c c c} 3 &5 &6 &8 \ 3 &0 &0 &1 &1 &2 &4\ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \ 2 &0 &0 &0 &0 &1 &2 &2 &4 \ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \ 1 &0 &0 &1 &3 &4 &4 &4 \ 0 &5 &6 &8 &8\ 0 &4 \end{array}$ 1. Find the percentile rank of $800$. 2. Find the percentile rank of $3,200$. 3. Find the five-number summary for the following sample data. 
$\begin{array}{c|c c c c c c c} x &26 &27 &28 &29 &30 &31 &32 \ \hline f &3 &4 &16 &12 &6 &2 &1\ \end{array}$ 4. Find the five-number summary for the following sample data. $\begin{array}{c|c c c c c c c c c c} x &1 &2 &3 &4 &5 &6 &7 &8 &9 &10 \ \hline f &384 &208 &98 &56 &28 &12 &8 &2 &3 &1\ \end{array}$ 5. For the following stem and leaf diagram the units on the stems are thousands and the units on the leaves are hundreds, so that for example the largest observation is $3,800$. $\begin{array}{c|c c c c c c c c c c c} 3 &5 &6 &8 \ 3 &0 &0 &1 &1 &2 &4\ 2 &5 &6 &6 &7 &7 &8 &8 &9 &9 \ 2 &0 &0 &0 &0 &1 &2 &2 &4 \ 1 &5 &5 &5 &6 &6 &7 &7 &7 &8 &8 &9 \ 1 &0 &0 &1 &3 &4 &4 &4 \ 0 &5 &6 &8 &8\ 0 &4 \end{array}$ 1. Find the three quartiles. 2. Find the IQR. 3. Give the five-number summary of the data. 6. Determine whether the following statement is true. “In any data set, if an observation $x_1$ is greater than another observation $x_2$, then the $z$-score of $x_1$ is greater than the $z$-score of $x_2$. 7. Emilia and Ferdinand took the same freshman chemistry course, Emilia in the fall, Ferdinand in the spring. Emilia made an $83$ on the common final exam that she took, on which the mean was $76$ and the standard deviation $8$. Ferdinand made a $79$ on the common final exam that he took, which was more difficult, since the mean was $65$ and the standard deviation $12$. The one who has a higher $z$-score did relatively better. Was it Emilia or Ferdinand? 8. Refer to the previous exercise. On the final exam in the same course the following semester, the mean is $68$ and the standard deviation is $9$. What grade on the exam matches Emilia’s performance? Ferdinand’s? 9. Rosencrantz and Guildenstern are on a weight-reducing diet. Rosencrantz, who weighs $178\; lb$, belongs to an age and body-type group for which the mean weight is $145\; lb$ and the standard deviation is $15\; lb$. Guildenstern, who weighs $204\; lb$, belongs to an age and body-type group for which the mean weight is $165\; lb$ and the standard deviation is $20\; lb$. Assuming z-scores are good measures for comparison in this context, who is more overweight for his age and body type? Large Data Set Exercises Note: For Large Data Set Exercises below, all of the data sets associated with these questions are missing, but the questions themselves are included here for reference. 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. 1. Compute the three quartiles and the interquartile range of the $1,000$ SAT scores. 2. Compute the three quartiles and the interquartile range of the $1,000$ GPAs. 2. Large $\text{Data Set 10}$ records the scores of $72$ students on a statistics exam. 1. Compute the five-number summary of the data. 2. Describe in words the performance of the class on the exam in the light of the result in part (a). 3. Large $\text{Data Sets 3 and 3A}$ list the heights of $174$ customers entering a shoe store. 1. Compute the five-number summary of the heights, without regard to gender. 2. Compute the five-number summary of the heights of the men in the sample. 3. Compute the five-number summary of the heights of the women in the sample. 4. Large $\text{Data Sets 7, 7A, and 3B}$ list the survival times in days of $140$ laboratory mice with thymic leukemia from onset to death. 1. Compute the three quartiles and the interquartile range of the survival times for all mice, without regard to gender. 2. 
Compute the three quartiles and the interquartile range of the survival times for the $65$ male mice (separately recorded in $\text{Data Set 7A}$). 3. Compute the three quartiles and the interquartile range of the survival times for the $75$ female mice (separately recorded in $\text{Data Sets 7B}$). Answer 1. 60 2. 10 1. 59 2. 23 1. 29 2. 71 1. $50\%$ 2. $x_{min}=25,\; \; Q_1=70,\; \; Q_2=77.5\; \; Q_3=90,\; \; x_{max}=100, \; \; IQR=20$ 3. $x_{min}=1,\; \; Q_1=1.5,\; \; Q_2=6.5\; \; Q_3=8,\; \; x_{max}=9, \; \; IQR=6.5$ 4. $-1.3,\; 1.39,\; 0.4,\; -0.35,\; -0.11$ 5. $z=-0.74\; \text{for}\; x = 1,\; z=-0.37\; \text{for}\; x = 2,\; z = 1.48\; \text{for}\; x = 7$ 1. 1 2. 1 3. 1 4. $z=-1\; \text{for}\; x = 0,\; z=1\; \text{for}\; x = 2$ 6. 16 7. 4.9 1. 55 2. 55 1. 93 2. 0.07 1. -1.11 2. 0.73 1. $Q_1=59,\; Q_2=70,\; Q_3=81$ 2. $x_{min}=39,\; Q_1=59,\; Q_2=70,\; Q_3=81,\; x_{max}=100$ 3. $R = 61,\; IQR=22$ 8. $x_{min}=26,\; Q_1=28,\; Q_2=28,\; Q_3=29,\; x_{max}=32$ 1. $Q_1=1450,\; Q_2=2000,\; Q_3=2800$ 2. $IQR=1350$ 3. $x_{min}=400,\; Q_1=1450,\; Q_2=2000,\; Q_3=2800,\; x_{max}=3800$ 9. Emilia: $z=0.875$, Ferdinand: $z=1.1\bar{6}$ 10. Rosencrantz: $z=2.2$, Guildenstern: $z=1.95$. Rosencrantz is more overweight for his age and body type. 1. $x_{min}=15,\; Q_1=51,\; Q_2=67,\; Q_3=82,\; x_{max}=97$ 2. The data set appears to be skewed to the left. 1. $Q_1=440,\; Q_2=552.5,\; Q_3=661\; \; \text{and}\; \; IQR=221$ 2. $Q_1=641,\; Q_2=667,\; Q_3=700\; \; \text{and}\; \; IQR=59$ 3. $Q_1=407,\; Q_2=448,\; Q_3=504\; \; \text{and}\; \; IQR=97$ 2.5 The Empirical Rule and Chebyshev's Theorem Basic 1. State the Empirical Rule. 2. Describe the conditions under which the Empirical Rule may be applied. 3. State Chebyshev’s Theorem. 4. Describe the conditions under which Chebyshev’s Theorem may be applied. 5. A sample data set with a bell-shaped distribution has mean $\bar{x}=6$ and standard deviation $s=2$. Find the approximate proportion of observations in the data set that lie: 1. between $4$ and $8$; 2. between $2$ and $10$; 3. between $0$ and $12$. 6. A population data set with a bell-shaped distribution has mean $\mu =6$ and standard deviation $\sigma =2$. Find the approximate proportion of observations in the data set that lie: 1. between $4$ and $8$; 2. between $2$ and $10$; 3. between $0$ and $12$. 7. A population data set with a bell-shaped distribution has mean $\mu =2$ and standard deviation $\sigma =1.1$. Find the approximate proportion of observations in the data set that lie: 1. above $2$; 2. above $3.1$; 3. between $2$ and $3.1$. 8. A sample data set with a bell-shaped distribution has mean $\bar{x}=2$ and standard deviation $s=1.1$. Find the approximate proportion of observations in the data set that lie: 1. below $-0.2$; 2. below $3.1$; 3. between $-1.3$ and $0.9$. 9. A population data set with a bell-shaped distribution and size $N=500$ has mean $\mu =2$ and standard deviation $\sigma =1.1$. Find the approximate number of observations in the data set that lie: 1. above $2$; 2. above $3.1$; 3. between $2$ and $3.1$. 10. A sample data set with a bell-shaped distribution and size $n=128$ has mean $\bar{x}=2$ and standard deviation $s=1.1$. Find the approximate number of observations in the data set that lie: 1. below $-0.2$; 2. below $3.1$; 3. between $-1.3$ and $0.9$. 11. A sample data set has mean $\bar{x}=6$ and standard deviation $s=2$. Find the minimum proportion of observations in the data set that must lie: 1. between $2$ and $10$; 2. between $0$ and $12$; 3. between $4$ and $8$. 12. 
A population data set has mean $\mu =2$ and standard deviation $\sigma =1.1$. Find the minimum proportion of observations in the data set that must lie: 1. between $-0.2$ and $4.2$; 2. between $-1.3$ and $5.3$. 13. A population data set of size $N=500$ has mean $\mu =5.2$ and standard deviation $\sigma =1.1$. Find the minimum number of observations in the data set that must lie: 1. between $3$ and $7.4$; 2. between $1.9$ and $8.5$. 14. A sample data set of size $n=128$ has mean $\bar{x}=2$ and standard deviation $s=2$. Find the minimum number of observations in the data set that must lie: 1. between $-2$ and $6$ (including $-2$ and $6$); 2. between $-4$ and $8$ (including $-4$ and $8$). 15. A sample data set of size $n=30$ has mean $\bar{x}=6$ and standard deviation $s=2$. 1. What is the maximum proportion of observations in the data set that can lie outside the interval $(2,10)$? 2. What can be said about the proportion of observations in the data set that are below $2$? 3. What can be said about the proportion of observations in the data set that are above $10$? 4. What can be said about the number of observations in the data set that are above $10$? 16. A population data set has mean $\mu =2$ and standard deviation $\sigma =1.1$. 1. What is the maximum proportion of observations in the data set that can lie outside the interval $(-1.3,5.3)$? 2. What can be said about the proportion of observations in the data set that are below $-1.3$? 3. What can be said about the proportion of observations in the data set that are above $5.3$? Applications 1. Scores on a final exam taken by $1,200$ students have a bell-shaped distribution with mean $72$ and standard deviation $9$. 1. What is the median score on the exam? 2. About how many students scored between $63$ and $81$? 3. About how many students scored between $72$ and $90$? 4. About how many students scored below $54$? 2. Lengths of fish caught by a commercial fishing boat have a bell-shaped distribution with mean $23$ inches and standard deviation $1.5$ inches. 1. About what proportion of all fish caught are between $20$ inches and $26$ inches long? 2. About what proportion of all fish caught are between $20$ inches and $23$ inches long? 3. About how long is the longest fish caught (only a small fraction of a percent are longer)? 3. Hockey pucks used in professional hockey games must weigh between $5.5$ and $6$ ounces. If the weight of pucks manufactured by a particular process is bell-shaped, has mean $5.75$ ounces and standard deviation $0.125$ ounce, what proportion of the pucks will be usable in professional games? 4. Hockey pucks used in professional hockey games must weigh between $5.5$ and $6$ ounces. If the weight of pucks manufactured by a particular process is bell-shaped and has mean $5.75$ ounces, how large can the standard deviation be if $99.7\%$ of the pucks are to be usable in professional games? 5. Speeds of vehicles on a section of highway have a bell-shaped distribution with mean $60\; mph$ and standard deviation $2.5\; mph$. 1. If the speed limit is $55\; mph$, about what proportion of vehicles are speeding? 2. What is the median speed for vehicles on this highway? 3. What is the percentile rank of the speed $65\; mph$? 4. What speed corresponds to the $16_{th}$ percentile? 6. Suppose that, as in the previous exercise, speeds of vehicles on a section of highway have mean $60\; mph$ and standard deviation $2.5\; mph$, but now the distribution of speeds is unknown. 1. 
If the speed limit is $55\; mph$, at least what proportion of vehicles must speeding? 2. What can be said about the proportion of vehicles going $65\; mph$ or faster? 7. An instructor announces to the class that the scores on a recent exam had a bell-shaped distribution with mean $75$ and standard deviation $5$. 1. What is the median score? 2. Approximately what proportion of students in the class scored between $70$ and $80$? 3. Approximately what proportion of students in the class scored above $85$? 4. What is the percentile rank of the score $85$? 8. The GPAs of all currently registered students at a large university have a bell-shaped distribution with mean $2.7$ and standard deviation $0.6$. Students with a GPA below $1.5$ are placed on academic probation. Approximately what percentage of currently registered students at the university are on academic probation? 9. Thirty-six students took an exam on which the average was $80$ and the standard deviation was $6$. A rumor says that five students had scores $61$ or below. Can the rumor be true? Why or why not? Additional Exercises 1. For the sample data $\begin{array}{c|c c c c c c c} x &26 &27 &28 &29 &30 &31 &32 \ \hline f &3 &4 &16 &12 &6 &2 &1\ \end{array}$ $\sum x=1,256\; \; \text{and}\; \; \sum x^2=35,926$ 1. Compute the mean and the standard deviation. 2. About how many of the measurements does the Empirical Rule predict will be in the interval $\left (\bar{x}-s,\bar{x}+s \right )$, the interval $\left (\bar{x}-2s,\bar{x}+2s \right )$, and the interval $\left (\bar{x}-3s,\bar{x}+3s \right )$? 3. Compute the number of measurements that are actually in each of the intervals listed in part (a), and compare to the predicted numbers. 2. A sample of size $n = 80$ has mean $139$ and standard deviation $13$, but nothing else is known about it. 1. What can be said about the number of observations that lie in the interval $(126,152)$? 2. What can be said about the number of observations that lie in the interval $(113,165)$? 3. What can be said about the number of observations that exceed $165$? 4. What can be said about the number of observations that either exceed $165$ or are less than $113$? 3. For the sample data $\begin{array}{c|c c c c c } x &1 &2 &3 &4 &5 \ \hline f &84 &29 &3 &3 &1\ \end{array}$ $\sum x=168\; \; \text{and}\; \; \sum x^2=300$ 1. Compute the sample mean and the sample standard deviation. 2. Considering the shape of the data set, do you expect the Empirical Rule to apply? Count the number of measurements within one standard deviation of the mean and compare it to the number predicted by the Empirical Rule. 3. What does Chebyshev’s Rule say about the number of measurements within one standard deviation of the mean? 4. Count the number of measurements within two standard deviations of the mean and compare it to the minimum number guaranteed by Chebyshev’s Theorem to lie in that interval. 4. For the sample data set $\begin{array}{c|c c c c c } x &47 &48 &49 &50 &51 \ \hline f &1 &3 &18 &2 &1\ \end{array}$ $\sum x=1224\; \; \text{and}\; \; \sum x^2=59,940$ 1. Compute the sample mean and the sample standard deviation. 2. Considering the shape of the data set, do you expect the Empirical Rule to apply? Count the number of measurements within one standard deviation of the mean and compare it to the number predicted by the Empirical Rule. 3. What does Chebyshev’s Rule say about the number of measurements within one standard deviation of the mean? 4. 
Count the number of measurements within two standard deviations of the mean and compare it to the minimum number guaranteed by Chebyshev’s Theorem to lie in that interval. Answers 1. See the displayed statement in the text. 2. See the displayed statement in the text. 1. $0.68$ 2. $0.95$ 3. $0.997$ 1. $0.5$ 2. $0.16$ 3. $0.34$ 1. $250$ 2. $80$ 3. $170$ 1. $3/4$ 2. $8/9$ 3. $0$ 1. $375$ 2. $445$ 1. At most $0.25$. 2. At most $0.25$. 3. At most $0.25$. 4. At most $7$. 1. $72$ 2. $816$ 3. $570$ 4. $30$ 3. $0.95$ 1. $0.975$ 2. $60$ 3. $97.5$ 4. $57.5$ 1. $75$ 2. $0.68$ 3. $0.025$ 4. $0.975$ 4. By Chebyshev’s Theorem at most $1/9$ of the scores can be below $62$, so the rumor is impossible. 1. Nothing. 2. It is at least $60$. 3. It is at most $20$. 4. It is at most $20$. 1. $\bar{x}=48.96$, $s = 0.7348$. 2. Roughly bell-shaped, the Empirical Rule should apply. True count: $18$, Predicted: $17$. 3. Nothing. 4. True count: $23$, Guaranteed: at least $18.75$, hence at least $19$. • Anonymous
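As a numerical check on the last Additional Exercise and its answer above (the 16d-nail counts), the sketch below, assuming NumPy is available, recomputes the mean, the sample standard deviation, and the Empirical Rule and Chebyshev comparisons quoted in the answer key; the variable names are my own.

```python
# A sketch checking the Empirical Rule / Chebyshev counts for the nail data.
import numpy as np

values = [47, 48, 49, 50, 51]
freqs  = [1, 3, 18, 2, 1]
data = np.repeat(values, freqs)   # expand the frequency table into 25 values
n = data.size

xbar = data.mean()
s = data.std(ddof=1)              # sample standard deviation
print(xbar, round(s, 4))          # 48.96 and ~0.7348

within1 = np.sum(np.abs(data - xbar) <= s)
within2 = np.sum(np.abs(data - xbar) <= 2 * s)
print(within1, 0.68 * n)          # 18 observed vs ~17 predicted by the Empirical Rule
print(within2, 0.75 * n)          # 23 observed vs at least 18.75 guaranteed by Chebyshev
```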
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 3.1: Sample Spaces, Events, and Their Probabilities Basic Q3.1.1 A box contains $10$ white and $10$ black marbles. Construct a sample space for the experiment of randomly drawing out, with replacement, two marbles in succession and noting the color each time. (To draw “with replacement” means that the first marble is put back before the second marble is drawn.) Q3.1.2 A box contains $16$ white and $16$ black marbles. Construct a sample space for the experiment of randomly drawing out, with replacement, three marbles in succession and noting the color each time. (To draw “with replacement” means that each marble is put back before the next marble is drawn.) Q3.1.3 A box contains $8$ red, $8$ yellow, and $8$ green marbles. Construct a sample space for the experiment of randomly drawing out, with replacement, two marbles in succession and noting the color each time. Q3.1.4 A box contains $6$ red, $6$ yellow, and $6$ green marbles. Construct a sample space for the experiment of randomly drawing out, with replacement, three marbles in succession and noting the color each time. Q3.1.5 In the situation of Exercise 1, list the outcomes that comprise each of the following events. 1. At least one marble of each color is drawn. 2. No white marble is drawn. Q3.1.6 In the situation of Exercise 2, list the outcomes that comprise each of the following events. 1. At least one marble of each color is drawn. 2. No white marble is drawn. 3. More black than white marbles are drawn. Q3.1.7 In the situation of Exercise 3, list the outcomes that comprise each of the following events. 1. No yellow marble is drawn. 2. The two marbles drawn have the same color. 3. At least one marble of each color is drawn. Q3.1.8 In the situation of Exercise 4, list the outcomes that comprise each of the following events. 1. No yellow marble is drawn. 2. The three marbles drawn have the same color. 3. At least one marble of each color is drawn. Q3.1.9 Assuming that each outcome is equally likely, find the probability of each event in Exercise 5. Q3.1.10 Assuming that each outcome is equally likely, find the probability of each event in Exercise 6. Q3.1.11 Assuming that each outcome is equally likely, find the probability of each event in Exercise 7. Q3.1.12 Assuming that each outcome is equally likely, find the probability of each event in Exercise 8. Q3.1.13 A sample space is $S=\{a,b,c,d,e\}$. Identify two events as $U=\{a,b,d\}$ and $V=\{b,c,d\}$. Suppose $P(a)$ and $P(b)$ are each $0.2$ and $P(c)$ and $P(d)$ are each $0.1$. 1. Determine what $P(e)$ must be. 2. Find $P(U)$. 3. Find $P(V)$ Q3.1.14 A sample space is $S=\{u,v,w,x\}$. Identify two events as $A=\{v,w\}$ and $B=\{u,w,x\}$. Suppose $P(u)=0.22$, $P(w)=0.36$, and $P(x)=0.27$. 1. Determine what $P(v)$ must be. 2. Find $P(A)$. 3. Find $P(B)$. Q3.1.15 A sample space is $S=\{m,n,q,r,s\}$. Identify two events as $U=\{m,q,s\}$ and $V=\{n,q,r\}$. The probabilities of some of the outcomes are given by the following table: $\begin{array}{c|c c c c c} Outcome &m &n &q &r &s \ \hline Probability &0.18 &0.16 & &0.24 &0.21\ \end{array}$ 1. Determine what $P(q)$ must be. 2. Find $P(U)$. 3. Find $P(V)$. Q3.1.16 A sample space is $S=\{d,e,f,g,h\}$. Identify two events as $M=\{e,f,g,h\}$ and $N=\{d,g\}$. 
The probabilities of some of the outcomes are given by the following table: $\begin{array}{c|c c c c c} Outcome &d &e &f &g &h \ \hline Probability &0.22 &0.13 &0.27 & &0.19\ \end{array}$ 1. Determine what $P(g)$ must be. 2. Find $P(M)$. 3. Find $P(N)$. Applications Q3.1.17 The sample space that describes all three-child families according to the genders of the children with respect to birth order was constructed in "Example 3.1.4". Identify the outcomes that comprise each of the following events in the experiment of selecting a three-child family at random. 1. At least one child is a girl. 2. At most one child is a girl. 3. All of the children are girls. 4. Exactly two of the children are girls. 5. The first born is a girl. Q3.1.18 The sample space that describes three tosses of a coin is the same as the one constructed in "Example 3.1.4" with “boy” replaced by “heads” and “girl” replaced by “tails.” Identify the outcomes that comprise each of the following events in the experiment of tossing a coin three times. 1. The coin lands heads more often than tails. 2. The coin lands heads the same number of times as it lands tails. 3. The coin lands heads at least twice. 4. The coin lands heads on the last toss. Q3.1.19 Assuming that the outcomes are equally likely, find the probability of each event in Exercise 17. Q3.1.20 Assuming that the outcomes are equally likely, find the probability of each event in Exercise 18. Additional Exercises Q3.1.21 The following two-way contingency table gives the breakdown of the population in a particular locale according to age and tobacco usage: Age Tobacco Use Smoker Non-smoker Under $30$ $0.05$ $0.20$ Over $30$ $0.20$ $0.55$ A person is selected at random. Find the probability of each of the following events. 1. The person is a smoker. 2. The person is under $30$. 3. The person is a smoker who is under $30$. Q3.1.22 The following two-way contingency table gives the breakdown of the population in a particular locale according to party affiliation ($A, B, C,\; \text{or None}$) and opinion on a bond issue: Affiliation Opinion Favors Opposes Undecided $A$ $0.12$ $0.09$ $0.07$ $B$ $0.16$ $0.12$ $0.14$ $C$ $0.04$ $0.03$ $0.06$ None $0.08$ $0.06$ $0.03$ A person is selected at random. Find the probability of each of the following events. 1. The person is affiliated with party $B$. 2. The person is affiliated with some party. 3. The person is in favor of the bond issue. 4. The person has no party affiliation and is undecided about the bond issue. Q3.1.23 The following two-way contingency table gives the breakdown of the population of married or previously married women beyond child-bearing age in a particular locale according to age at first marriage and number of children: Age Number of Children $0$ $1\; or\; 2$ $3\; \text{or More}$ $Under\; 20$ $0.02$ $0.14$ $0.08$ $20-29$ $0.07$ $0.37$ $0.11$ $30\; \text{and above}$ $0.10$ $0.10$ $0.01$ A woman is selected at random. Find the probability of each of the following events. 1. The woman was in her twenties at her first marriage. 2. The woman was $20$ or older at her first marriage. 3. The woman had no children. 4. The woman was in her twenties at her first marriage and had at least three children. 
Q3.1.24 The following two-way contingency table gives the breakdown of the population of adults in a particular locale according to highest level of education and whether or not the individual regularly takes dietary supplements: Education Use of Supplements Takes Does Not Take No High School Diploma $0.04$ $0.06$ High School Diploma $0.06$ $0.44$ Undergraduate Degree $0.09$ $0.28$ Graduate Degree $0.01$ $0.02$ An adult is selected at random. Find the probability of each of the following events. 1. The person has a high school diploma and takes dietary supplements regularly. 2. The person has an undergraduate degree and takes dietary supplements regularly. 3. The person takes dietary supplements regularly. 4. The person does not take dietary supplements regularly.
Large Data Set Exercises
Q3.1.25 Large Data Set 4 and Data Set 4A record the results of $500$ rolls of a die. Find the relative frequency of each outcome $1, 2, 3, 4, 5,\; and\; 6$. Does the die appear to be “balanced” or “fair”?
Q3.1.26 Large Data Set 6, Data Set 6A, and Data Set 6B record results of a random survey of $200$ voters in each of two regions, in which they were asked to express whether they prefer Candidate $A$ for a U.S. Senate seat or prefer some other candidate. 1. Find the probability that a randomly selected voter among these $400$ prefers Candidate $A$. 2. Find the probability that a randomly selected voter among the $200$ who live in Region $1$ prefers Candidate $A$ (separately recorded in $\text{Large Data Set 6A}$). 3. Find the probability that a randomly selected voter among the $200$ who live in Region $2$ prefers Candidate $A$ (separately recorded in $\text{Large Data Set 6B}$).
Answers
S3.1.1 $S=\{bb,bw,wb,ww\}$
S3.1.3 $S=\{rr,ry,rg,yr,yy,yg,gr,gy,gg\}$
S3.1.5 1. $\{bw,wb\}$ 2. $\{bb\}$
S3.1.7 1. $\{rr,rg,gr,gg\}$ 2. $\{rr,yy,gg\}$ 3. $\varnothing$
S3.1.9 1. $2/4$ 2. $1/4$
S3.1.11 1. $4/9$ 2. $3/9$ 3. $0$
S3.1.13 1. $0.4$ 2. $0.5$ 3. $0.4$
S3.1.15 1. $0.21$ 2. $0.6$ 3. $0.61$
S3.1.17 1. $\{bbg,bgb,bgg,gbb,gbg,ggb,ggg\}$ 2. $\{bbb,bbg,bgb,gbb\}$ 3. $\{ggg\}$ 4. $\{bgg,gbg,ggb\}$ 5. $\{gbb,gbg,ggb,ggg\}$
S3.1.19 1. $7/8$ 2. $4/8$ 3. $1/8$ 4. $3/8$ 5. $4/8$
S3.1.21 1. $0.25$ 2. $0.25$ 3. $0.05$
S3.1.23 1. $0.55$ 2. $0.76$ 3. $0.19$ 4. $0.11$
S3.1.25 The relative frequencies for $1$ through $6$ are $0.16, 0.194, 0.162, 0.164, 0.154\; and\; 0.166$. It would appear that the die is not balanced.
3.2: Complements, Intersections and Unions Basic 1. For the sample space $S=\{a,b,c,d,e\}$ identify the complement of each event given. 1. $A=\{a,d,e\}$ 2. $B=\{b,c,d,e\}$ 3. $S$ 2. For the sample space $S=\{r,s,t,u,v\}$ identify the complement of each event given. 1. $R=\{t,u\}$ 2. $T=\{r\}$ 3. $\varnothing$ (the “empty” set that has no elements) 3. The sample space for three tosses of a coin is $S=\{hhh,hht,hth,htt,thh,tht,tth,ttt\}$ Define events $\text{H:at least one head is observed}\ \text{M:more heads than tails are observed}$ 1. List the outcomes that comprise $H$ and $M$. 2. List the outcomes that comprise $H\cap M$, $H\cup M$, and $H^c$. 3. Assuming all outcomes are equally likely, find $P(H\cap M)$, $P(H\cup M)$, and $P(H^c)$. 4. Determine whether or not $H^c$ and $M$ are mutually exclusive. Explain why or why not. 4. For the experiment of rolling a single six-sided die once, define events $\text{T:the number rolled is three}\ \text{G:the number rolled is four or greater}$ 1. List the outcomes that comprise $T$ and $G$. 2. List the outcomes that comprise $T\cap G$, $T\cup G$, $T^c$, and $(T\cup G)^c$. 3.
Assuming all outcomes are equally likely, find $P(T\cap G)$, $P(T\cup G)$, and $P(T^c)$. 4. Determine whether or not $T$ and $G$ are mutually exclusive. Explain why or why not. 5. A special deck of $16$ cards has $4$ that are blue, $4$ yellow, $4$ green, and $4$ red. The four cards of each color are numbered from one to four. A single card is drawn at random. Define events $\text{B:the card is blue}\ \text{R:the card is red}\ \text{N:the number on the card is at most two}$ 1. List the outcomes that comprise $B$, $R$, and $N$. 2. List the outcomes that comprise $B\cap R$, $B\cup R$, $B\cap N$, $R\cup N$, $B^c$, and $(B\cup R)^c$. 3. Assuming all outcomes are equally likely, find the probabilities of the events in the previous part. 4. Determine whether or not $B$ and $N$ are mutually exclusive. Explain why or why not. 6. In the context of the previous problem, define events $\text{Y:the card is yellow}\ \text{I:the number on the card is not a one}\ \text{J:the number on the card is a two or a four}$ 1. List the outcomes that comprise $Y$, $I$, and $J$. 2. List the outcomes that comprise $Y\cap I$, $Y\cup J$, $I\cap J$, $I^c$, and $(Y\cup J)^c$. 3. Assuming all outcomes are equally likely, find the probabilities of the events in the previous part. 4. Determine whether or not $I^c$ and $J$ are mutually exclusive. Explain why or why not. 7. The Venn diagram provided shows a sample space and two events $A$ and $B$. Suppose $P(a)=0.13, P(b)=0.09, P(c)=0.27, P(d)=0.20,\; \text{and}\; P(e)=0.31$. Confirm that the probabilities of the outcomes add up to $1$, then compute the following probabilities. 1. $P(A)$. 2. $P(B)$. 3. $P(A^c)$. Two ways: (i) by finding the outcomes in $A^c$ and adding their probabilities, and (ii) using the Probability Rule for Complements. 4. $P(A\cap B)$. 5. $P(A\cup B)$ Two ways: (i) by finding the outcomes in $A\cup B$ and adding their probabilities, and (ii) using the Additive Rule of Probability. 1. The Venn diagram provided shows a sample space and two events $A$ and $B$. Suppose $P(a)=0.32, P(b)=0.17, P(c)=0.28,\; \text{and}\; P(d)=0.23$. Confirm that the probabilities of the outcomes add up to $1$, then compute the following probabilities. 1. $P(A)$. 2. $P(B)$. 3. $P(A^c)$. Two ways: (i) by finding the outcomes in $A^c$ and adding their probabilities, and (ii) using the Probability Rule for Complements. 4. $P(A\cap B)$. 5. $P(A\cup B)$ Two ways: (i) by finding the outcomes in $A\cup B$ and adding their probabilities, and (ii) using the Additive Rule of Probability. 1. Confirm that the probabilities in the two-way contingency table add up to $1$, then use it to find the probabilities of the events indicated. $U$ $V$ $W$ $A$ $0.15$ $0.00$ $0.23$ $B$ $0.22$ $0.30$ $0.10$ 1. $P(A), P(B), P(A\cap B)$. 2. $P(U), P(W), P(U\cap W)$. 3. $P(U\cup W)$. 4. $P(V^c)$. 5. Determine whether or not the events $A$ and $U$ are mutually exclusive; the events $A$ and $V$. 1. Confirm that the probabilities in the two-way contingency table add up to $1$, then use it to find the probabilities of the events indicated. $R$ $S$ $T$ $M$ $0.09$ $0.25$ $0.19$ $N$ $0.31$ $0.16$ $0.00$ 1. $P(R), P(S), P(R\cap S)$. 2. $P(M), P(N), P(M\cap N)$. 3. $P(R\cup S)$. 4. $P(R^c)$. 5. Determine whether or not the events $N$ and $S$ are mutually exclusive; the events $N$ and $T$. Applications 1. Make a statement in ordinary English that describes the complement of each event (do not simply insert the word “not”). 1. In the roll of a die: “five or more.” 2. In a roll of a die: “an even number.” 3. 
In two tosses of a coin: “at least one heads.” 4. In the random selection of a college student: “Not a freshman.” 2. Make a statement in ordinary English that describes the complement of each event (do not simply insert the word “not”). 1. In the roll of a die: “two or less.” 2. In the roll of a die: “one, three, or four.” 3. In two tosses of a coin: “at most one heads.” 4. In the random selection of a college student: “Neither a freshman nor a senior.” 3. The sample space that describes all three-child families according to the genders of the children with respect to birth order is $S=\{bbb,bbg,bgb,bgg,gbb,gbg,ggb,ggg\}$. For each of the following events in the experiment of selecting a three-child family at random, state the complement of the event in the simplest possible terms, then find the outcomes that comprise the event and its complement. 1. At least one child is a girl. 2. At most one child is a girl. 3. All of the children are girls. 4. Exactly two of the children are girls. 5. The first born is a girl. 4. The sample space that describes the two-way classification of citizens according to gender and opinion on a political issue is $S=\{mf,ma,mn,ff,fa,fn\}$, where the first letter denotes gender ($\text{m: male, f: female}$) and the second opinion ($\text{f: for, a: against, n: neutral}$). For each of the following events in the experiment of selecting a citizen at random, state the complement of the event in the simplest possible terms, then find the outcomes that comprise the event and its complement. 1. The person is male. 2. The person is not in favor. 3. The person is either male or in favor. 4. The person is female and neutral. 5. A tourist who speaks English and German but no other language visits a region of Slovenia. If $35\%$ of the residents speak English, $15\%$ speak German, and $3\%$ speak both English and German, what is the probability that the tourist will be able to talk with a randomly encountered resident of the region? 6. In a certain country $43\%$ of all automobiles have airbags, $27\%$ have anti-lock brakes, and $13\%$ have both. What is the probability that a randomly selected vehicle will have both airbags and anti-lock brakes? 7. A manufacturer examines its records over the last year on a component part received from outside suppliers. The breakdown on source (supplier $A$, supplier $B$) and quality ($\text{H: high, U: usable, D: defective}$) is shown in the two-way contingency table. $H$ $U$ $D$ $A$ $0.6937$ $0.0049$ $0.0014$ $B$ $0.2982$ $0.0009$ $0.0009$ The record of a part is selected at random. Find the probability of each of the following events. 1. The part was defective. 2. The part was either of high quality or was at least usable, in two ways: (i) by adding numbers in the table, and (ii) using the answer to (a) and the Probability Rule for Complements. 3. The part was defective and came from supplier $B$. 4. The part was defective or came from supplier $B$, in two ways: by finding the cells in the table that correspond to this event and adding their probabilities, and (ii) using the Additive Rule of Probability. 1. Individuals with a particular medical condition were classified according to the presence ($T$) or absence ($N$) of a potential toxin in their blood and the onset of the condition ($\text{E: early, M: midrange, L: late}$). The breakdown according to this classification is shown in the two-way contingency table. $E$ $M$ $L$ $T$ $0.012$ $0.124$ $0.013$ $N$ $0.170$ $0.638$ $0.043$ One of these individuals is selected at random. 
Find the probability of each of the following events. 1. The person experienced early onset of the condition. 2. The onset of the condition was either midrange or late, in two ways: (i) by adding numbers in the table, and (ii) using the answer to (a) and the Probability Rule for Complements. 3. The toxin is present in the person’s blood. 4. The person experienced early onset of the condition and the toxin is present in the person’s blood. 5. The person experienced early onset of the condition or the toxin is present in the person’s blood, in two ways: (i) by finding the cells in the table that correspond to this event and adding their probabilities, and (ii) using the Additive Rule of Probability. 1. The breakdown of the students enrolled in a university course by class ($\text{F: freshman, So: sophomore, J: junior, Se: senior}$) and academic major ($\text{S: science, mathematics, or engineering, L: liberal arts, O: other}$) is shown in the two-way classification table. Major Class $F$ $So$ $J$ $Se$ $S$ $92$ $42$ $20$ $13$ $L$ $368$ $167$ $80$ $53$ $O$ $460$ $209$ $100$ $67$ A student enrolled in the course is selected at random. Adjoin the row and column totals to the table and use the expanded table to find the probability of each of the following events. 1. The student is a freshman. 2. The student is a liberal arts major. 3. The student is a freshman liberal arts major. 4. The student is either a freshman or a liberal arts major. 5. The student is not a liberal arts major. 1. The table relates the response to a fund-raising appeal by a college to its alumni to the number of years since graduation. Response Years Since Graduation $0-5$ $6-20$ $21-35$ Over $35$ Positive $120$ $440$ $210$ $90$ None $1380$ $3560$ $3290$ $910$ An alumnus is selected at random. Adjoin the row and column totals to the table and use the expanded table to find the probability of each of the following events. 1. The alumnus responded. 2. The alumnus did not respond. 3. The alumnus graduated at least $21$ years ago. 4. The alumnus graduated at least $21$ years ago and responded. Additional Exercises 1. The sample space for tossing three coins is $S=\{hhh,hht,hth,htt,thh,tht,tth,ttt\}$ 1. List the outcomes that correspond to the statement “All the coins are heads.” 2. List the outcomes that correspond to the statement “Not all the coins are heads.” 3. List the outcomes that correspond to the statement “All the coins are not heads.” Answers 1. $\{b,c\}$ 2. $\{a\}$ 3. $\varnothing$ 1. $H=\{hhh,hht,hth,htt,thh,tht,tth\},\; M=\{hhh,hht,hth,thh\}$ 2. $H\cap M=\{hhh,hht,hth,thh\}, H\cup M=H, H^c=\{ttt\}$ 3. $P(H\cap M)=4/8, P(H\cup M)=7/8, P(H^c)=1/8$ 4. Mutually exclusive because they have no elements in common. 1. $B=\{b1,b2,b3,b4\},\; R=\{r1,r2,r3,r4\},\; N=\{b1,b2,y1,y2,g1,g2,r1,r2\}$ 2. $B\cap R=\varnothing , B\cup R=\{b1,b2,b3,b4,r1,r2,r3,r4\},\; B\cap N=\{b1,b2\},\ R\cup N=\{b1,b2,y1,y2,g1,g2,r1,r2,r3,r4\},\ B^c=\{y1,y2,y3,y4,g1,g2,g3,g4,r1,r2,r3,r4\},\; (B\cup R)^c=\{y1,y2,y3,y4,g1,g2,g3,g4\}$ 3. $P(B\cap R)=0,\; P(B\cup R)=8/16,\; P(B\cap N)=2/16,\; P(R\cup N)=10/16,\; P(B^c)=12/16,\; P((B\cup R)^c)=8/16$ 4. Not mutually exclusive because they have an element in common. 1. $0.36$ 2. $0.78$ 3. $0.64$ 4. $0.27$ 5. $0.87$ 1. $P(A)=0.38,\; P(B)=0.62,\; P(A\cap B)=0$ 2. $P(U)=0.37,\; P(W)=0.33,\; P(U\cap W)=0$ 3. $0.7$ 4. $0.7$ 5. $A$ and $U$ are not mutually exclusive because $P(A\cap U)$ is the nonzero number $0.15$. $A$ and $V$ are mutually exclusive because $P(A\cap V)=0$. 1. “four or less” 2. “an odd number” 3. 
“no heads” or “all tails” 4. “a freshman” 1. “All the children are boys.” Event: $\{bbg,bgb,bgg,gbb,gbg,ggb,ggg\}$, Complement: $\{bbb\}$ 2. “At least two of the children are girls” or “There are two or three girls.” Event: $\{bbb,bbg,bgb,gbb\}$, Complement: $\{bgg,gbg,ggb,ggg\}$ 3. “At least one child is a boy.” Event: $\{ggg\}$, Complement: $\{bbb,bbg,bgb,bgg,gbb,gbg,ggb\}$ 4. “There are either no girls, exactly one girl, or three girls.” Event: $\{bgg,gbg,ggb\}$, Complement: $\{bbb,bbg,bgb,gbb,ggg\}$ 5. “The first born is a boy.” Event: $\{gbb,gbg,ggb,ggg\}$, Complement: $\{bbb,bbg,bgb,bgg\}$ 1. $0.47$ 1. $0.0023$ 2. $0.9977$ 3. $0.0009$ 4. $0.3014$ 1. $920/1671$ 2. $668/1671$ 3. $368/1671$ 4. $1220/1671$ 5. $1003/1671$ 1. $\{hhh\}$ 2. $\{hht,hth,htt,thh,tht,tth,ttt\}$ 3. $\{ttt\}$ 3.3: Conditional Probability and Independent Events Basic 1. Q3.3.1For two events $A$ and $B$, $P(A)=0.73,\; P(B)=0.48\; \text{and}\; P(A\cap B)=0.29$. 1. Find $P(A\mid B)$. 2. Find $P(B\mid A)$. 3. Determine whether or not $A$ and $B$ are independent. 2. Q3.3.1For two events $A$ and $B$, $P(A)=0.26,\; P(B)=0.37\; \text{and}\; P(A\cap B)=0.11$. 1. Find $P(A\mid B)$. 2. Find $P(B\mid A)$. 3. Determine whether or not $A$ and $B$ are independent. 3. Q3.3.1For independent events $A$ and $B$, $P(A)=0.81$ and $P(B)=0.27$. 1. $P(A\cap B)$. 2. Find $P(A\mid B)$. 3. Find $P(B\mid A)$. 4. Q3.3.1For independent events $A$ and $B$, $P(A)=0.68$ and $P(B)=0.37$. 1. $P(A\cap B)$. 2. Find $P(A\mid B)$. 3. Find $P(B\mid A)$. 5. Q3.3.1For mutually exclusive events $A$ and $B$, $P(A)=0.17$ and $P(B)=0.32$. 1. Find $P(A\mid B)$. 2. Find $P(B\mid A)$. 6. Q3.3.1For mutually exclusive events $A$ and $B$, $P(A)=0.45$ and $P(B)=0.09$. 1. Find $P(A\mid B)$. 2. Find $P(B\mid A)$. 7. Q3.3.1Compute the following probabilities in connection with the roll of a single fair die. 1. The probability that the roll is even. 2. The probability that the roll is even, given that it is not a two. 3. The probability that the roll is even, given that it is not a one. 8. Q3.3.1Compute the following probabilities in connection with two tosses of a fair coin. 1. The probability that the second toss is heads. 2. The probability that the second toss is heads, given that the first toss is heads. 3. The probability that the second toss is heads, given that at least one of the two tosses is heads. 9. Q3.3.1A special deck of $16$ cards has $4$ that are blue, $4$ yellow, $4$ green, and $4$ red. The four cards of each color are numbered from one to four. A single card is drawn at random. Find the following probabilities. 1. The probability that the card drawn is red. 2. The probability that the card is red, given that it is not green. 3. The probability that the card is red, given that it is neither red nor yellow. 4. The probability that the card is red, given that it is not a four. 10. Q3.3.1A special deck of $16$ cards has $4$ that are blue, $4$ yellow, $4$ green, and $4$ red. The four cards of each color are numbered from one to four. A single card is drawn at random. Find the following probabilities. 1. The probability that the card drawn is a two or a four. 2. The probability that the card is a two or a four, given that it is not a one. 3. The probability that the card is a two or a four, given that it is either a two or a three. 4. The probability that the card is a two or a four, given that it is red or green. 11. Q3.3.1A random experiment gave rise to the two-way contingency table shown. Use it to compute the probabilities indicated. 
$R$ $S$ $A$ $0.12$ $0.18$ $B$ $0.28$ $0.42$ 1. $P(A),\; P(R),\; P(A\cap B)$. 2. Based on the answer to (a), determine whether or not the events $A$ and $R$ are independent. 3. Based on the answer to (b), determine whether or not $P(A\mid R)$ can be predicted without any computation. If so, make the prediction. In any case, compute $P(A\mid R)$ using the Rule for Conditional Probability. 12. Q3.3.1A random experiment gave rise to the two-way contingency table shown. Use it to compute the probabilities indicated. $R$ $S$ $A$ $0.13$ $0.07$ $B$ $0.61$ $0.19$ 1. $P(A),\; P(R),\; P(A\cap B)$. 2. Based on the answer to (a), determine whether or not the events $A$ and $R$ are independent. 3. Based on the answer to (b), determine whether or not $P(A\mid R)$ can be predicted without any computation. If so, make the prediction. In any case, compute $P(A\mid R)$ using the Rule for Conditional Probability. 13. Q3.3.1Suppose for events $A$ and $B$ in a random experiment $P(A)=0.70$ and $P(B)=0.30$.Compute the indicated probability, or explain why there is not enough information to do so. 1. $P(A\cap B)$. 2. $P(A\cap B)$, with the extra information that $A$ and $B$ are independent. 3. $P(A\cap B)$, with the extra information that $A$ and $B$ are mutually exclusive. 14. Q3.3.1Suppose for events $A$ and $B$ in a random experiment $P(A)=0.50$ and $P(B)=0.50$. Compute the indicated probability, or explain why there is not enough information to do so. 1. $P(A\cap B)$. 2. $P(A\cap B)$, with the extra information that $A$ and $B$ are independent. 3. $P(A\cap B)$, with the extra information that $A$ and $B$ are mutually exclusive. 15. Q3.3.1Suppose for events $A,\; B,\; and\; C$ connected to some random experiment, $A,\; B,\; and\; C$ are independent and $P(A)=0.50$, $P(B)=0.50\; \text{and}\; P(C)=0.44$. Compute the indicated probability, or explain why there is not enough information to do so. 1. $P(A\cap B\cap C)$. 2. $P(A^c\cap B^c\cap C^c)$. 16. Q3.3.1Suppose for events $A,\; B,\; and\; C$ connected to some random experiment, $A,\; B,\; and\; C$ are independent and $P(A)=0.95$, $P(B)=0.73\; \text{and}\; P(C)=0.62$. Compute the indicated probability, or explain why there is not enough information to do so. 1. $P(A\cap B\cap C)$. 2. $P(A^c\cap B^c\cap C^c)$. Applications Q3.3.17 The sample space that describes all three-child families according to the genders of the children with respect to birth order is $S=\{bbb,bbg,bgb,bgg,gbb,gbg,ggb,ggg\}$ In the experiment of selecting a three-child family at random, compute each of the following probabilities, assuming all outcomes are equally likely. 1. The probability that the family has at least two boys. 2. The probability that the family has at least two boys, given that not all of the children are girls. 3. The probability that at least one child is a boy. 4. The probability that at least one child is a boy, given that the first born is a girl. Q3.3.18 The following two-way contingency table gives the breakdown of the population in a particular locale according to age and number of vehicular moving violations in the past three years: Age Violations $0$ $1$ $2+$ Under $21$ $0.04$ $0.06$ $0.02$ $21-40$ $0.25$ $0.16$ $0.01$ $41-60$ $0.23$ $0.10$ $0.02$ $60+$ $0.08$ $0.03$ $0.00$ A person is selected at random. Find the following probabilities. 1. The person is under $21$. 2. The person has had at least two violations in the past three years. 3. The person has had at least two violations in the past three years, given that he is under $21$. 4. 
The person is under $21$, given that he has had at least two violations in the past three years. 5. Determine whether the events “the person is under $21$” and “the person has had at least two violations in the past three years” are independent or not. Q3.3.19 The following two-way contingency table gives the breakdown of the population in a particular locale according to party affiliation ($A, B, C, \text{or None}$) and opinion on a bond issue: Affiliation Opinion Favors Opposes Undecided $A$ $0.12$ $0.09$ $0.07$ $B$ $0.16$ $0.12$ $0.14$ $C$ $0.04$ $0.03$ $0.06$ None $0.08$ $0.06$ $0.03$ A person is selected at random. Find each of the following probabilities. 1. The person is in favor of the bond issue. 2. The person is in favor of the bond issue, given that he is affiliated with party $A$. 3. The person is in favor of the bond issue, given that he is affiliated with party $B$. Q3.3.20 The following two-way contingency table gives the breakdown of the population of patrons at a grocery store according to the number of items purchased and whether or not the patron made an impulse purchase at the checkout counter: Number of Items Impulse Purchase Made Not Made Few $0.01$ $0.19$ Many $0.04$ $0.76$ A patron is selected at random. Find each of the following probabilities. 1. The patron made an impulse purchase. 2. The patron made an impulse purchase, given that the total number of items purchased was many. 3. Determine whether or not the events “few purchases” and “made an impulse purchase at the checkout counter” are independent. Q3.3.21 The following two-way contingency table gives the breakdown of the population of adults in a particular locale according to employment type and level of life insurance: Employment Type Level of Insurance Low Medium High Unskilled $0.07$ $0.19$ $0.00$ Semi-skilled $0.04$ $0.28$ $0.08$ Skilled $0.03$ $0.18$ $0.05$ Professional $0.01$ $0.05$ $0.02$ An adult is selected at random. Find each of the following probabilities. 1. The person has a high level of life insurance. 2. The person has a high level of life insurance, given that he does not have a professional position. 3. The person has a high level of life insurance, given that he has a professional position. 4. Determine whether or not the events “has a high level of life insurance” and “has a professional position” are independent. Q3.3.22 The sample space of equally likely outcomes for the experiment of rolling two fair dice is $\begin{matrix} 11 & 12 & 13 & 14 & 15 & 16\ 21 & 22 & 23 & 24 & 25 & 26\ 31 & 32 & 33 & 34 & 35 & 36\ 41 & 42 & 43 & 44 & 45 & 46\ 51 & 52 & 53 & 54 & 55 & 56\ 61 & 62 & 63 & 64 & 65 & 66 \end{matrix}$ Identify the events $\text{N: the sum is at least nine, T: at least one of the dice is a two, and F: at least one of the dice is a five}$. 1. Find $P(N)$. 2. Find $P(N\mid F)$. 3. Find $P(N\mid T)$. 4. Determine from the previous answers whether or not the events $N$ and $F$ are independent; whether or not $N$ and $T$ are. Q3.3.23 The sensitivity of a drug test is the probability that the test will be positive when administered to a person who has actually taken the drug. Suppose that there are two independent tests to detect the presence of a certain type of banned drugs in athletes. One has sensitivity $0.75$; the other has sensitivity $0.85$. If both are applied to an athlete who has taken this type of drug, what is the chance that his usage will go undetected? Q3.3.24 A man has two lights in his well house to keep the pipes from freezing in winter. He checks the lights daily. 
Each light has probability $0.002$ of burning out before it is checked the next day (independently of the other light). 1. If the lights are wired in parallel one will continue to shine even if the other burns out. In this situation, compute the probability that at least one light will continue to shine for the full $24$ hours. Note the greatly increased reliability of the system of two bulbs over that of a single bulb. 2. If the lights are wired in series neither one will continue to shine even if only one of them burns out. In this situation, compute the probability that at least one light will continue to shine for the full $24$ hours. Note the slightly decreased reliability of the system of two bulbs over that of a single bulb. Q3.3.25 An accountant has observed that $5\%$ of all copies of a particular two-part form have an error in Part I, and $2\%$ have an error in Part II. If the errors occur independently, find the probability that a randomly selected form will be error-free. Q3.3.26 A box contains $20$ screws which are identical in size, but $12$ of which are zinc coated and $8$ of which are not. Two screws are selected at random, without replacement. 1. Find the probability that both are zinc coated. 2. Find the probability that at least one is zinc coated. Additional Exercises Q3.3.27 Events $A$ and $B$ are mutually exclusive. Find $P(A\mid B)$. Q3.3.28 The city council of a particular city is composed of five members of party $A$, four members of party $B$, and three independents. Two council members are randomly selected to form an investigative committee. 1. Find the probability that both are from party $A$. 2. Find the probability that at least one is an independent. 3. Find the probability that the two have different party affiliations (that is, not both $A$, not both $B$, and not both independent). Q3.3.29 A basketball player makes $60\%$ of the free throws that he attempts, except that if he has just tried and missed a free throw then his chances of making a second one go down to only $30\%$. Suppose he has just been awarded two free throws. 1. Find the probability that he makes both. 2. Find the probability that he makes at least one. (A tree diagram could help.) Q3.3.30 An economist wishes to ascertain the proportion $p$ of the population of individual taxpayers who have purposely submitted fraudulent information on an income tax return. To truly guarantee anonymity of the taxpayers in a random survey, taxpayers questioned are given the following instructions. 1. Flip a coin. 2. If the coin lands heads, answer “Yes” to the question “Have you ever submitted fraudulent information on a tax return?” even if you have not. 3. If the coin lands tails, give a truthful “Yes” or “No” answer to the question “Have you ever submitted fraudulent information on a tax return?” The questioner is not told how the coin landed, so he does not know if a “Yes” answer is the truth or is given only because of the coin toss. 1. Using the Probability Rule for Complements and the independence of the coin toss and the taxpayers’ status fill in the empty cells in the two-way contingency table shown. Assume that the coin is fair. Each cell except the two in the bottom row will contain the unknown proportion (or probability) $p$. Status Coin Probability $H$ $T$ Fraud $p$ No fraud Probability $1$ 2. 
The only information that the economist sees are the entries in the following table: $\begin{array}{c|c|c} Response & "Yes" & "No" \ \hline Proportion &r &s\ \end{array}$Equate the entry in the one cell in the table in (a) that corresponds to the answer “No” to the number s to obtain the formula that expresses the unknown number $p$ in terms of the known number $s$. 3. Equate the sum of the entries in the three cells in the table in (a) that together correspond to the answer “Yes” to the number r to obtain the formula that expresses the unknown number $p$ in terms of the known number $r$. 4. Use the fact that $r+s=1$(since they are the probabilities of complementary events) to verify that the formulas in (b) and (c) give the same value for $p$. (For example, insert $s=1-r$into the formula in (b) to obtain the formula in (c)). 5. Suppose a survey of $1,200$ taxpayers is conducted and $690$ respond “Yes” (truthfully or not) to the question “Have you ever submitted fraudulent information on a tax return?” Use the answer to either (b) or (c) to estimate the true proportion $p$ of all individual taxpayers who have purposely submitted fraudulent information on an income tax return. Answers 1. $0.6$ 2. $0.4$ 3. not independent 1. $0.22$ 2. $0.81$ 3. $0.27$ 1. $0$ 2. $0$ 1. $0.5$ 2. $0.4$ 3. $0.6$ 1. $0.25$ 2. $0.33$ 3. $0$ 4. $0.25$ 1. $P(A)=0.3,\; P(R)=0.4,\; P(A\cap R)=0.12$ 2. independent 3. without computation $0.3$ 1. Insufficient information. The events A and B are not known to be either independent or mutually exclusive. 2. $0.21$ 3. $0$ 1. $0.25$ 2. $0.02$ 1. $0.5$ 2. $0.57$ 3. $0.875$ 4. $0.75$ 1. $0.4$ 2. $0.43$ 3. $0.38$ 1. $0.15$ 2. $0.14$ 3. $0.25$ 4. not independent 1. $0.0375$ 2. $0.931$ 3. $0$ 1. $0.36$ 2. $0.72$ • Anonymous
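Most of the contingency-table exercises in Sections 3.1–3.3 (for example the age and tobacco-use table of Q3.1.21) come down to the same arithmetic: add cells for a marginal probability, use the Additive Rule for a union, divide a joint probability by a marginal for a conditional probability, and compare the joint probability with the product of the marginals to test independence. The Python sketch below reproduces that arithmetic for the Q3.1.21 table; the dictionary layout and the helper name p are mine, not anything defined in the text.

# Joint probabilities from the two-way table of Q3.1.21
# (rows: age group, columns: tobacco use).
joint = {
    ("under30", "smoker"): 0.05, ("under30", "nonsmoker"): 0.20,
    ("over30", "smoker"): 0.20, ("over30", "nonsmoker"): 0.55,
}

def p(event):
    """Probability of an event given as a set of (age, use) cells."""
    return sum(joint[cell] for cell in event)

smoker = {c for c in joint if c[1] == "smoker"}
under30 = {c for c in joint if c[0] == "under30"}

p_smoker = p(smoker)                      # marginal: 0.25
p_under30 = p(under30)                    # marginal: 0.25
p_both = p(smoker & under30)              # joint: 0.05
p_union = p(smoker | under30)             # 0.25 + 0.25 - 0.05 = 0.45
p_cond = p_both / p_under30               # P(smoker | under 30) = 0.20
independent = abs(p_both - p_smoker * p_under30) < 1e-9   # 0.05 vs 0.0625: False

print(p_smoker, p_under30, p_both, p_union, p_cond, independent)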
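The randomized-response survey in the last additional exercise of Section 3.3 has a simple closed form worth recording: with a fair coin, $P(\text{Yes})=\frac{1}{2}+\frac{1}{2}p$, so $p=2r-1$ where $r$ is the observed proportion of “Yes” answers (equivalently $p=1-2s$ with $s$ the proportion of “No” answers). A short Python sketch of that estimator follows; the function name is mine, not from the text.

# Randomized-response estimate of the proportion p who have submitted
# fraudulent information: P("Yes") = 1/2 (coin forces "Yes") + (1/2)*p,
# so p = 2*r - 1 where r is the observed "Yes" proportion.
def estimate_p(yes_count, n):
    r = yes_count / n
    return 2 * r - 1

# Survey described in the exercise: 690 "Yes" answers out of 1,200.
print(estimate_p(690, 1200))   # prints approximately 0.15

So the estimate asked for in part (e) of that exercise is $p\approx 2(0.575)-1=0.15$.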
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 4.1: Random Variables Basic 1. Classify each random variable as either discrete or continuous. 1. The number of arrivals at an emergency room between midnight and $6:00\; a.m$. 2. The weight of a box of cereal labeled “$18$ ounces.” 3. The duration of the next outgoing telephone call from a business office. 4. The number of kernels of popcorn in a $1$-pound container. 5. The number of applicants for a job. 2. Classify each random variable as either discrete or continuous. 1. The time between customers entering a checkout lane at a retail store. 2. The weight of refuse on a truck arriving at a landfill. 3. The number of passengers in a passenger vehicle on a highway at rush hour. 4. The number of clerical errors on a medical chart. 5. The number of accident-free days in one month at a factory. 3. Classify each random variable as either discrete or continuous. 1. The number of boys in a randomly selected three-child family. 2. The temperature of a cup of coffee served at a restaurant. 3. The number of no-shows for every $100$ reservations made with a commercial airline. 4. The number of vehicles owned by a randomly selected household. 5. The average amount spent on electricity each July by a randomly selected household in a certain state. 4. Classify each random variable as either discrete or continuous. 1. The number of patrons arriving at a restaurant between $5:00\; p.m$. and $6:00\; p.m$. 2. The number of new cases of influenza in a particular county in a coming month. 3. The air pressure of a tire on an automobile. 4. The amount of rain recorded at an airport one day. 5. The number of students who actually register for classes at a university next semester. 5. Identify the set of possible values for each random variable. (Make a reasonable estimate based on experience, where necessary.) 1. The number of heads in two tosses of a coin. 2. The average weight of newborn babies born in a particular county one month. 3. The amount of liquid in a $12$-ounce can of soft drink. 4. The number of games in the next World Series (best of up to seven games). 5. The number of coins that match when three coins are tossed at once. 6. Identify the set of possible values for each random variable. (Make a reasonable estimate based on experience, where necessary.) 1. The number of hearts in a five-card hand drawn from a deck of $52$ cards that contains $13$ hearts in all. 2. The number of pitches made by a starting pitcher in a major league baseball game. 3. The number of breakdowns of city buses in a large city in one week. 4. The distance a rental car rented on a daily rate is driven each day. 5. The amount of rainfall at an airport next month. Answers 1. discrete 2. continuous 3. continuous 4. discrete 5. discrete 1. discrete 2. continuous 3. discrete 4. discrete 5. continuous 1. $\{0.1.2\}$ 2. an interval $(a,b)$ (answers vary) 3. an interval $(a,b)$ (answers vary) 4. $\{4,5,6,7\}$ 5. $\{2,3\}$ 4.2: Probability Distributioins for Discrete Random Variables Basic 1. Determine whether or not the table is a valid probability distribution of a discrete random variable. Explain fully. 1. $\begin{array}{c|c c c c} x &-2 &0 &2 &4 \ \hline P(x) &0.3 &0.5 &0.2 &0.1\ \end{array}$ 2. $\begin{array}{c|c c c} x &0.5 &0.25 &0.25\ \hline P(x) &-0.4 &0.6 &0.8\ \end{array}$ 3. $\begin{array}{c|c c c c c} x &1.1 &2.5 &4.1 &4.6 &5.3\ \hline P(x) &0.16 &0.14 &0.11 &0.27 &0.22\ \end{array}$ 2. 
Determine whether or not the table is a valid probability distribution of a discrete random variable. Explain fully. 1. $\begin{array}{c|c c c c c} x &0 &1 &2 &3 &4\ \hline P(x) &-0.25 &0.50 &0.35 &0.10 &0.30\ \end{array}$ 2. $\begin{array}{c|c c c } x &1 &2 &3 \ \hline P(x) &0.325 &0.406 &0.164 \ \end{array}$ 3. $\begin{array}{c|c c c c c} x &25 &26 &27 &28 &29 \ \hline P(x) &0.13 &0.27 &0.28 &0.18 &0.14 \ \end{array}$ 3. A discrete random variable $X$ has the following probability distribution: $\begin{array}{c|c c c c c} x &77 &78 &79 &80 &81 \ \hline P(x) &0.15 &0.15 &0.20 &0.40 &0.10 \ \end{array}$Compute each of the following quantities. 1. $P(80)$. 2. $P(X>80)$. 3. $P(X\leq 80)$. 4. The mean $\mu$ of $X$. 5. The variance $\sigma ^2$ of $X$. 6. The standard deviation $\sigma$ of $X$. 4. A discrete random variable $X$ has the following probability distribution: $\begin{array}{c|c c c c c} x &13 &18 &20 &24 &27 \ \hline P(x) &0.22 &0.25 &0.20 &0.17 &0.16 \ \end{array}$Compute each of the following quantities. 1. $P(18)$. 2. $P(X>18)$. 3. $P(X\leq 18)$. 4. The mean $\mu$ of $X$. 5. The variance $\sigma ^2$ of $X$. 6. The standard deviation $\sigma$ of $X$. 5. If each die in a pair is “loaded” so that one comes up half as often as it should, six comes up half again as often as it should, and the probabilities of the other faces are unaltered, then the probability distribution for the sum X of the number of dots on the top faces when the two are rolled is $\begin{array}{c|c c c c c c} x &2 &3 &4 &5 &6 &7 \ \hline P(x) &\frac{1}{144} &\frac{4}{144} &\frac{8}{144} &\frac{12}{144} &\frac{16}{144} &\frac{22}{144}\ \end{array}$ $\begin{array}{c|c c c c c } x &8 &9 &10 &11 &12 \ \hline P(x) &\frac{24}{144} &\frac{20}{144} &\frac{16}{144} &\frac{12}{144} &\frac{9}{144} \ \end{array}$Compute each of the following. 1. $P(5\leq X\leq 9)$. 2. $P(X\geq 7)$. 3. The mean $\mu$ of $X$. (For fair dice this number is $7$). 4. The standard deviation $\sigma$ of $X$. (For fair dice this number is about $2.415$). Applications 1. Borachio works in an automotive tire factory. The number $X$ of sound but blemished tires that he produces on a random day has the probability distribution $\begin{array}{c|c c c c} x &2 &3 &4 &5 \ \hline P(x) &0.48 &0.36 &0.12 &0.04\ \end{array}$ 1. Find the probability that Borachio will produce more than three blemished tires tomorrow. 2. Find the probability that Borachio will produce at most two blemished tires tomorrow. 3. Compute the mean and standard deviation of $X$. Interpret the mean in the context of the problem. 2. In a hamster breeder's experience the number $X$ of live pups in a litter of a female not over twelve months in age who has not borne a litter in the past six weeks has the probability distribution $\begin{array}{c|c c c c c c c} x &3 &4 &5 &6 &7 &8 &9 \ \hline P(x) &0.04 &0.10 &0.26 &0.31 &0.22 &0.05 &0.02\ \end{array}$ 1. Find the probability that the next litter will produce five to seven live pups. 2. Find the probability that the next litter will produce at least six live pups. 3. Compute the mean and standard deviation of $X$. Interpret the mean in the context of the problem. 3. The number $X$ of days in the summer months that a construction crew cannot work because of the weather has the probability distribution $\begin{array}{c|c c c c c} x &6 &7 &8 &9 &10\ \hline P(x) &0.03 &0.08 &0.15 &0.20 &0.19 \ \end{array}$ $\begin{array}{c|c c c c } x &11 &12 &13 &14 \ \hline P(x) &0.16 &0.10 &0.07 &0.02 \ \end{array}$ 1. 
Find the probability that no more than ten days will be lost next summer. 2. Find the probability that from $8$ to $12$ days will be lost next summer. 3. Find the probability that no days at all will be lost next summer. 4. Compute the mean and standard deviation of $X$. Interpret the mean in the context of the problem. 4. Let $X$ denote the number of boys in a randomly selected three-child family. Assuming that boys and girls are equally likely, construct the probability distribution of $X$. 5. Let $X$ denote the number of times a fair coin lands heads in three tosses. Construct the probability distribution of $X$. 6. Five thousand lottery tickets are sold for $\1$ each. One ticket will win $\1,000$, two tickets will win $\500$ each, and ten tickets will win $\100$ each. Let $X$ denote the net gain from the purchase of a randomly selected ticket. 1. Construct the probability distribution of $X$. 2. Compute the expected value $E(X)$ of $X$. Interpret its meaning. 3. Compute the standard deviation $\sigma$ of $X$. 7. Seven thousand lottery tickets are sold for $\5$ each. One ticket will win $\2,000$, two tickets will win $\750$ each, and five tickets will win $\100$ each. Let $X$ denote the net gain from the purchase of a randomly selected ticket. 1. Construct the probability distribution of $X$. 2. Compute the expected value $E(X)$ of $X$. Interpret its meaning. 3. Compute the standard deviation $\sigma$ of $X$. 8. An insurance company will sell a $\90,000$ one-year term life insurance policy to an individual in a particular risk group for a premium of $\478$. Find the expected value to the company of a single policy if a person in this risk group has a $99.62\%$ chance of surviving one year. 9. An insurance company will sell a $\10,000$ one-year term life insurance policy to an individual in a particular risk group for a premium of $\368$. Find the expected value to the company of a single policy if a person in this risk group has a $97.25\%$ chance of surviving one year. 10. An insurance company estimates that the probability that an individual in a particular risk group will survive one year is $0.9825$. Such a person wishes to buy a $\150,000$ one-year term life insurance policy. Let $C$ denote how much the insurance company charges such a person for such a policy. 1. Construct the probability distribution of $X$. (Two entries in the table will contain $C$). 2. Compute the expected value $E(X)$ of $X$. 3. Determine the value $C$ must have in order for the company to break even on all such policies (that is, to average a net gain of zero per policy on such policies). 4. Determine the value $C$ must have in order for the company to average a net gain of $\250$ per policy on all such policies. 11. An insurance company estimates that the probability that an individual in a particular risk group will survive one year is $0.99$. Such a person wishes to buy a $\75,000$ one-year term life insurance policy. Let $C$ denote how much the insurance company charges such a person for such a policy. 1. Construct the probability distribution of $X$. (Two entries in the table will contain $C$). 2. Compute the expected value $E(X)$ of $X$. 3. Determine the value $C$ must have in order for the company to break even on all such policies (that is, to average a net gain of zero per policy on such policies). 4. Determine the value $C$ must have in order for the company to average a net gain of $\150$ per policy on all such policies. 12. A roulette wheel has $38$ slots. 
Thirty-six slots are numbered from $1$ to $36$; half of them are red and half are black. The remaining two slots are numbered $0$ and $00$ and are green. In a $\1$ bet on red, the bettor pays $\1$ to play. If the ball lands in a red slot, he receives back the dollar he bet plus an additional dollar. If the ball does not land on red he loses his dollar. Let $X$ denote the net gain to the bettor on one play of the game. 1. Construct the probability distribution of $X$. 2. Compute the expected value $E(X)$ of $X$, and interpret its meaning in the context of the problem. 3. Compute the standard deviation of $X$. 13. A roulette wheel has $38$ slots. Thirty-six slots are numbered from $1$ to $36$; the remaining two slots are numbered $0$ and $00$. Suppose the “number” $00$ is considered not to be even, but the number $0$ is still even. In a $\1$ bet on even, the bettor pays $\1$ to play. If the ball lands in an even numbered slot, he receives back the dollar he bet plus an additional dollar. If the ball does not land on an even numbered slot, he loses his dollar. Let $X$ denote the net gain to the bettor on one play of the game. 1. Construct the probability distribution of $X$. 2. Compute the expected value $E(X)$ of $X$, and explain why this game is not offered in a casino (where 0 is not considered even). 3. Compute the standard deviation of $X$. 14. The time, to the nearest whole minute, that a city bus takes to go from one end of its route to the other has the probability distribution shown. As sometimes happens with probabilities computed as empirical relative frequencies, probabilities in the table add up only to a value other than $1.00$ because of round-off error. $\begin{array}{c|c c c c c c} x &42 &43 &44 &45 &46 &47 \ \hline P(x) &0.10 &0.23 &0.34 &0.25 &0.05 &0.02\ \end{array}$ 1. Find the average time the bus takes to drive the length of its route. 2. Find the standard deviation of the length of time the bus takes to drive the length of its route. 15. Tybalt receives in the mail an offer to enter a national sweepstakes. The prizes and chances of winning are listed in the offer as: $\5$ million, one chance in $65$ million; $\150,000$, one chance in $6.5$ million; $\5,000$, one chance in $650,000$; and $\1,000$, one chance in $65,000$. If it costs Tybalt $44$ cents to mail his entry, what is the expected value of the sweepstakes to him? Additional Exercises 1. The number $X$ of nails in a randomly selected $1$-pound box has the probability distribution shown. Find the average number of nails per pound. $\begin{array}{c|c c c } x &100 &101 &102 \ \hline P(x) &0.01 &0.96 &0.03 \ \end{array}$ 2. Three fair dice are rolled at once. Let $X$ denote the number of dice that land with the same number of dots on top as at least one other die. The probability distribution for $X$ is $\begin{array}{c|c c c } x &0 &u &3 \ \hline P(x) &p &\frac{15}{36} &\frac{1}{36} \ \end{array}$ 1. Find the missing value $u$ of $X$. 2. Find the missing probability $p$. 3. Compute the mean of $X$. 4. Compute the standard deviation of $X$. 3. Two fair dice are rolled at once. Let $X$ denote the difference in the number of dots that appear on the top faces of the two dice. Thus for example if a one and a five are rolled, $X=4$, and if two sixes are rolled, $X=0$. 1. Construct the probability distribution for $X$. 2. Compute the mean $\mu$ of $X$. 3. Compute the standard deviation $\sigma$ of $X$. 4. 
A fair coin is tossed repeatedly until either it lands heads or a total of five tosses have been made, whichever comes first. Let $X$ denote the number of tosses made. 1. Construct the probability distribution for $X$. 2. Compute the mean $\mu$ of $X$. 3. Compute the standard deviation $\sigma$ of $X$. 5. A manufacturer receives a certain component from a supplier in shipments of $100$ units. Two units in each shipment are selected at random and tested. If either one of the units is defective the shipment is rejected. Suppose a shipment has $5$ defective units. 1. Construct the probability distribution for the number $X$ of defective units in such a sample. (A tree diagram is helpful). 2. Find the probability that such a shipment will be accepted. 6. Shylock enters a local branch bank at $4:30\; p.m$. every payday, at which time there are always two tellers on duty. The number $X$ of customers in the bank who are either at a teller window or are waiting in a single line for the next available teller has the following probability distribution. $\begin{array}{c|c c c c} x &0 &1 &2 &3 \ \hline P(x) &0.135 &0.192 &0.284 &0.230 \ \end{array}$ $\begin{array}{c|c c c } x &4 &5 &6 \ \hline P(x) &0.103 &0.051 &0.005 \ \end{array}$ 1. What number of customers does Shylock most often see in the bank the moment he enters? 2. What number of customers waiting in line does Shylock most often see the moment he enters? 3. What is the average number of customers who are waiting in line the moment Shylock enters? 7. The owner of a proposed outdoor theater must decide whether to include a cover that will allow shows to be performed in all weather conditions. Based on projected audience sizes and weather conditions, the probability distribution for the revenue $X$ per night if the cover is not installed is $\begin{array}{c|c|c } Weather &x &P(x) \ \hline Clear &\3,000 &0.61 \ Threatening &\2,800 &0.17 \ Light Rain &\1,975 &0.11 \ Show-cancelling\; rain &\0 &0.11 \ \end{array}$The additional cost of the cover is \$410,000. The owner will have it built if this cost can be recovered from the increased revenue the cover affords in the first ten 90-night seasons. 1. Compute the mean revenue per night if the cover is not installed. 2. Use the answer to (a) to compute the projected total revenue per $90$-night season if the cover is not installed. 3. Compute the projected total revenue per season when the cover is in place. To do so assume that if the cover were in place the revenue each night of the season would be the same as the revenue on a clear night. 4. Using the answers to (b) and (c), decide whether or not the additional cost of the installation of the cover will be recovered from the increased revenue over the first ten years. Will the owner have the cover installed? Answers 1. no: the sum of the probabilities exceeds $1$ 2. no: a negative probability 3. no: the sum of the probabilities is less than $1$ 1. $0.4$ 2. $0.1$ 3. $0.9$ 4. $79.15$ 5. $\sigma ^2=1.5275$ 6. $\sigma =1.2359$ 1. $0.6528$ 2. $0.7153$ 3. $\mu =7.8333$ 4. $\sigma ^2=5.4866$ 5. $\sigma =2.3424$ 1. $0.79$ 2. $0.60$ 3. $\mu =5.8$, $\sigma =1.2570$ 1. $\begin{array}{c|c c c c} x &0 &1 &2 &3 \ \hline P(x) &1/8 &3/8 &3/8 &1/8\ \end{array}$ 1. $\begin{array}{c|c c c c} x &-1 &999 &499 &99 \ \hline P(x) &\frac{4987}{5000} &\frac{1}{5000} &\frac{2}{5000} &\frac{10}{5000}\ \end{array}$ 2. $-0.4$ 3. $17.8785$ 2. $136$ 1. $\begin{array}{c|c c c } x &C &C &-150,000 \ \hline P(x) &0.9825 & &0.0175 \ \end{array}$ 2. $C-2625$ 3. $C \geq 2625$ 4. 
$C \geq 2875$ 1. $\begin{array}{c|c c } x &-1 &1 \ \hline P(x) &\frac{20}{38} &\frac{18}{38} \ \end{array}$ 2. $E(X)=-0.0526$. In many bets the bettor sustains an average loss of about $5.25$ cents per bet. 3. $0.9986$ 1. $43.54$ 2. $1.2046$ 3. $101.02$ 1. $\begin{array}{c|c c c c c c} x &0 &1 &2 &3 &4 &5 \ \hline P(x) &\frac{6}{36} &\frac{10}{36} &\frac{8}{36} &\frac{6}{36} &\frac{4}{36} &\frac{2}{36} \ \end{array}$ 2. $1.9444$ 3. $1.4326$ 1. $\begin{array}{c|c c c } x &0 &1 &2 \ \hline P(x) &0.902 &0.096 &0.002 \ \end{array}$ 2. $0.902$ 1. $2523.25$ 2. $227,092.5$ 3. $270,000$ 4. The owner will install the cover. 4.3: The Binomial Distribution Basic 1. Determine whether or not the random variable $X$ is a binomial random variable. If so, give the values of $n$ and$p$. If not, explain why not. 1. $X$ is the number of dots on the top face of fair die that is rolled. 2. $X$ is the number of hearts in a five-card hand drawn (without replacement) from a well-shuffled ordinary deck. 3. $X$ is the number of defective parts in a sample of ten randomly selected parts coming from a manufacturing process in which $0.02\%$ of all parts are defective. 4. $X$ is the number of times the number of dots on the top face of a fair die is even in six rolls of the die. 5. $X$ is the number of dice that show an even number of dots on the top face when six dice are rolled at once. 2. Determine whether or not the random variable $X$ is a binomial random variable. If so, give the values of $n$ and $p$. If not, explain why not. 1. $X$ is the number of black marbles in a sample of $5$ marbles drawn randomly and without replacement from a box that contains $25$ white marbles and $15$ black marbles. 2. $X$ is the number of black marbles in a sample of $5$ marbles drawn randomly and with replacement from a box that contains $25$ white marbles and $15$ black marbles. 3. $X$ is the number of voters in favor of proposed law in a sample $1,200$ randomly selected voters drawn from the entire electorate of a country in which $35\%$ of the voters favor the law. 4. $X$ is the number of fish of a particular species, among the next ten landed by a commercial fishing boat, that are more than $13$ inches in length, when $17\%$ of all such fish exceed $13$ inches in length. 5. $X$ is the number of coins that match at least one other coin when four coins are tossed at once. 3. $X$ is a binomial random variable with parameters $n=12$ and $p=0.82$. Compute the probability indicated. 1. $P(11)$ 2. $P(9)$ 3. $P(0)$ 4. $P(13)$ 4. $X$ is a binomial random variable with parameters $n=16$ and $p=0.74$. Compute the probability indicated. 1. $P(14)$ 2. $P(4)$ 3. $P(0)$ 4. $P(20)$ 5. $X$ is a binomial random variable with parameters $n=5$, $p=0.5$. Use the tables in 7.1: Large Sample Estimation of a Population Mean to compute the probability indicated. 1. $P(X \leq 3)$ 2. $P(X \geq 3)$ 3. $P(3)$ 4. $P(0)$ 5. $P(5)$ 6. $X$ is a binomial random variable with parameters $n=5$, $p=0.\bar{3}$. Use the tables in 7.1: Large Sample Estimation of a Population Mean to compute the probability indicated. 1. $P(X \leq 2)$ 2. $P(X \geq 2)$ 3. $P(2)$ 4. $P(0)$ 5. $P(5)$ 7. $X$ is a binomial random variable with the parameters shown. Use the tables in 7.1: Large Sample Estimation of a Population Mean to compute the probability indicated. 1. $n = 10, p = 0.25, P(X \leq 6)$ 2. $n = 10, p = 0.75, P(X \leq 6)$ 3. $n = 15, p = 0.75, P(X \leq 6)$ 4. $n = 15, p = 0.75, P(12)$ 5. $n = 15, p=0.\bar{6}, P(10\leq X\leq 12)$ 8. 
$X$ is a binomial random variable with the parameters shown. Use the tables in 7.1: Large Sample Estimation of a Population Mean to compute the probability indicated. 1. $n = 5, p = 0.05, P(X \leq 1)$ 2. $n = 5, p = 0.5, P(X \leq 1)$ 3. $n = 10, p = 0.75, P(X \leq 5)$ 4. $n = 10, p = 0.75, P(12)$ 5. $n = 10, p=0.\bar{6}, P(5\leq X\leq 8)$ 9. $X$ is a binomial random variable with the parameters shown. Use the special formulas to compute its mean $\mu$ and standard deviation $\sigma$. 1. $n = 8, p = 0.43$ 2. $n = 47, p = 0.82$ 3. $n = 1200, p = 0.44$ 4. $n = 2100, p = 0.62$ 10. $X$ is a binomial random variable with the parameters shown. Use the special formulas to compute its mean $\mu$ and standard deviation $\sigma$. 1. $n = 14, p = 0.55$ 2. $n = 83, p = 0.05$ 3. $n = 957, p = 0.35$ 4. $n = 1750, p = 0.79$ 11. $X$ is a binomial random variable with the parameters shown. Compute its mean $\mu$ and standard deviation $\sigma$ in two ways, first using the tables in 7.1: Large Sample Estimation of a Population Mean in conjunction with the general formulas $\mu =\sum xP(x)$ and $\sigma =\sqrt{\left [ \sum x^2P(x) \right ]-\mu ^2}$, then using the special formulas $\mu =np$ and $\sigma =\sqrt{npq}$. 1. $n = 5, p=0.\bar{3}$ 2. $n = 10, p = 0.75$ 12. $X$ is a binomial random variable with the parameters shown. Compute its mean $\mu$ and standard deviation $\sigma$ in two ways, first using the tables in 7.1: Large Sample Estimation of a Population Mean in conjunction with the general formulas $\mu =\sum xP(x)$ and $\sigma =\sqrt{\left [ \sum x^2P(x) \right ]-\mu ^2}$, then using the special formulas $\mu =np$ and $\sigma =\sqrt{npq}$. 1. $n = 10, p = 0.25$ 2. $n = 15, p = 0.1$ 13. $X$ is a binomial random variable with parameters $n=10$ and $p=1/3$. Use the cumulative probability distribution for $X$ that is given in 7.1: Large Sample Estimation of a Population Mean to construct the probability distribution of $X$. 14. $X$ is a binomial random variable with parameters $n=15$ and $p=1/2$. Use the cumulative probability distribution for $X$ that is given in 7.1: Large Sample Estimation of a Population Mean to construct the probability distribution of $X$. 15. In a certain board game a player's turn begins with three rolls of a pair of dice. If the player rolls doubles all three times there is a penalty. The probability of rolling doubles in a single roll of a pair of fair dice is $1/6$. Find the probability of rolling doubles all three times. 16. A coin is bent so that the probability that it lands heads up is $2/3$. The coin is tossed ten times. 1. Find the probability that it lands heads up at most five times. 2. Find the probability that it lands heads up more times than it lands tails up. Applications 1. An English-speaking tourist visits a country in which $30\%$ of the population speaks English. He needs to ask someone directions. 1. Find the probability that the first person he encounters will be able to speak English. 2. The tourist sees four local people standing at a bus stop. Find the probability that at least one of them will be able to speak English. 2. The probability that an egg in a retail package is cracked or broken is $0.025$. 1. Find the probability that a carton of one dozen eggs contains no eggs that are either cracked or broken. 2. Find the probability that a carton of one dozen eggs has (i) at least one that is either cracked or broken; (ii) at least two that are cracked or broken. 3. Find the average number of cracked or broken eggs in one dozen cartons. 3. 
An appliance store sells $20$ refrigerators each week. Ten percent of all purchasers of a refrigerator buy an extended warranty. Let $X$ denote the number of the next $20$ purchasers who do so. 1. Verify that $X$ satisfies the conditions for a binomial random variable, and find $n$ and $p$. 2. Find the probability that $X$ is zero. 3. Find the probability that $X$ is two, three, or four. 4. Find the probability that $X$ is at least five. 4. Adverse growing conditions have caused $5\%$ of grapefruit grown in a certain region to be of inferior quality. Grapefruit are sold by the dozen. 1. Find the average number of inferior quality grapefruit per box of a dozen. 2. A box that contains two or more grapefruit of inferior quality will cause a strong adverse customer reaction. Find the probability that a box of one dozen grapefruit will contain two or more grapefruit of inferior quality. 5. The probability that a $7$-ounce skein of a discount worsted weight knitting yarn contains a knot is $0.25$. Goneril buys ten skeins to crochet an afghan. 1. Find the probability that (i) none of the ten skeins will contain a knot; (ii) at most one will. 2. Find the expected number of skeins that contain knots. 3. Find the most likely number of skeins that contain knots. 6. One-third of all patients who undergo a non-invasive but unpleasant medical test require a sedative. A laboratory performs $20$ such tests daily. Let $X$ denote the number of patients on any given day who require a sedative. 1. Verify that $X$ satisfies the conditions for a binomial random variable, and find $n$ and $p$. 2. Find the probability that on any given day between five and nine patients will require a sedative (include five and nine). 3. Find the average number of patients each day who require a sedative. 4. Using the cumulative probability distribution for $X$ in 7.1: Large Sample Estimation of a Population Mean find the minimum number $x_{min}$ of doses of the sedative that should be on hand at the start of the day so that there is a $99\%$ chance that the laboratory will not run out. 7. About $2\%$ of alumni give money upon receiving a solicitation from the college or university from which they graduated. Find the average number monetary gifts a college can expect from every $2,000$ solicitations it sends. 8. Of all college students who are eligible to give blood, about $18\%$ do so on a regular basis. Each month a local blood bank sends an appeal to give blood to $250$ randomly selected students. Find the average number of appeals in such mailings that are made to students who already give blood. 9. About $12\%$ of all individuals write with their left hands. A class of $130$ students meets in a classroom with $130$ individual desks, exactly $14$ of which are constructed for people who write with their left hands. Find the probability that exactly $14$ of the students enrolled in the class write with their left hands. 10. A traveling salesman makes a sale on $65\%$ of his calls on regular customers. He makes four sales calls each day. 1. Construct the probability distribution of $X$, the number of sales made each day. 2. Find the probability that, on a randomly selected day, the salesman will make a sale. 3. Assuming that the salesman makes $20$ sales calls per week, find the mean and standard deviation of the number of sales made per week. 11. A corporation has advertised heavily to try to insure that over half the adult population recognizes the brand name of its products. 
In a random sample of $20$ adults, $14$ recognized its brand name. What is the probability that $14$ or more people in such a sample would recognize its brand name if the actual proportion $p$ of all adults who recognize the brand name were only $0.50$? Additional Exercises 1. When dropped on a hard surface a thumbtack lands with its sharp point touching the surface with probability $2/3$; it lands with its sharp point directed up into the air with probability $1/3$. The tack is dropped and its landing position observed $15$ times. 1. Find the probability that it lands with its point in the air at least $7$ times. 2. If the experiment of dropping the tack $15$ times is done repeatedly, what is the average number of times it lands with its point in the air? 2. A professional proofreader has a $98\%$ chance of detecting an error in a piece of written work (other than misspellings, double words, and similar errors that are machine detected). A work contains four errors. 1. Find the probability that the proofreader will miss at least one of them. 2. Show that two such proofreaders working independently have a $99.96\%$ chance of detecting an error in a piece of written work. 3. Find the probability that two such proofreaders working independently will miss at least one error in a work that contains four errors. 3. A multiple choice exam has $20$ questions; there are four choices for each question. 1. A student guesses the answer to every question. Find the chance that he guesses correctly between four and seven times. 2. Find the minimum score the instructor can set so that the probability that a student will pass just by guessing is $20\%$ or less. 4. In spite of the requirement that all dogs boarded in a kennel be inoculated, the chance that a healthy dog boarded in a clean, well-ventilated kennel will develop kennel cough from a carrier is $0.008$. 1. If a carrier (not known to be such, of course) is boarded with three other dogs, what is the probability that at least one of the three healthy dogs will develop kennel cough? 2. If a carrier is boarded with four other dogs, what is the probability that at least one of the four healthy dogs will develop kennel cough? 3. The pattern evident from parts (a) and (b) is that if $K+1$ dogs are boarded together, one a carrier and $K$ healthy dogs, then the probability that at least one of the healthy dogs will develop kennel cough is $P(X\geq 1)=1-(0.992)^K$, where $X$ is the binomial random variable that counts the number of healthy dogs that develop the condition. Experiment with different values of $K$ in this formula to find the maximum number $K+1$ of dogs that a kennel owner can board together so that if one of the dogs has the condition, the chance that another dog will be infected is less than $0.05$. 5. Investigators need to determine which of $600$ adults have a medical condition that affects $2\%$ of the adult population. A blood sample is taken from each of the individuals. 1. Show that the expected number of diseased individuals in the group of $600$ is $12$ individuals. 2. Instead of testing all $600$ blood samples to find the expected $12$ diseased individuals, investigators group the samples into $60$ groups of $10$ each, mix a little of the blood from each of the $10$ samples in each group, and test each of the $60$ mixtures. Show that the probability that any such mixture will contain the blood of at least one diseased person, hence test positive, is about $0.18$. 3. 
Based on the result in (b), show that the expected number of mixtures that test positive is about $11$. (Supposing that indeed $11$ of the $60$ mixtures test positive, then we know that none of the $490$ persons whose blood was in the remaining $49$ samples that tested negative has the disease. We have eliminated $490$ persons from our search while performing only $60$ tests.) Answers 1. not binomial; not success/failure. 2. not binomial; trials are not independent. 3. binomial; $n = 10, p = 0.0002$ 4. binomial; $n = 6, p = 0.5$ 5. binomial; $n = 6, p = 0.5$ 1. $0.2434$ 2. $0.2151$ 3. $0.18^{12}\approx 0$ 4. $0$ 1. $0.8125$ 2. $0.5000$ 3. $0.3125$ 4. $0.0313$ 5. $0.0312$ 1. $0.9965$ 2. $0.2241$ 3. $0.0042$ 4. $0.2252$ 5. $0.5390$ 1. $\mu = 3.44, \sigma = 1.4003$ 2. $\mu = 38.54, \sigma = 2.6339$ 3. $\mu = 528, \sigma = 17.1953$ 4. $\mu = 1302, \sigma = 22.2432$ 1. $\mu = 1.6667, \sigma = 1.0541$ 2. $\mu = 7.5, \sigma = 1.3693$ 1. $\begin{array}{c|c c c c} x &0 &1 &2 &3 \\ \hline P(x) &0.0173 &0.0867 &0.1951 &0.2602 \\ \end{array}$ $\begin{array}{c|c c c c} x &4 &5 &6 &7 \\ \hline P(x) &0.2276 &0.1365 &0.0569 &0.0163 \\ \end{array}$ $\begin{array}{c|c c c } x &8 &9 &10 \\ \hline P(x) &0.0030 &0.0004 &0.0000 \\ \end{array}$ 2. $0.0046$ 1. $0.3$ 2. $0.7599$ 1. $n = 20, p = 0.1$ 2. $0.1216$ 3. $0.5651$ 4. $0.0432$ 1. $0.0563$ and $0.2440$ 2. $2.5$ 3. $2$ 3. $40$ 4. $0.1019$ 5. $0.0577$ 1. $0.0776$ 2. $0.9996$ 3. $0.0016$ 1. $0.0238$ 2. $0.0316$ 3. $6$ • Anonymous
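A quick way to double-check several of the binomial answers above is to compute them directly. The following is a minimal sketch assuming Python with scipy.stats, which is not part of the text (the exercises themselves rely on the binomial probability formula and the printed cumulative tables); values computed this way may differ from the table-based answers in the last decimal place.

from scipy.stats import binom   # scipy is an added assumption, not used by the text

# Basic Exercise 3: n = 12, p = 0.82
n, p = 12, 0.82
print(binom.pmf(11, n, p))   # P(11), listed answer 0.2434
print(binom.pmf(9, n, p))    # P(9),  listed answer 0.2151
print(binom.pmf(0, n, p))    # P(0) = 0.18**12, essentially 0
print(binom.pmf(13, n, p))   # P(13) = 0: thirteen successes are impossible in twelve trials

# Basic Exercise 5: cumulative probabilities for n = 5, p = 0.5
n, p = 5, 0.5
print(binom.cdf(3, n, p))        # P(X <= 3) = 0.8125
print(1 - binom.cdf(2, n, p))    # P(X >= 3) = 0.5000
print(binom.pmf(3, n, p))        # P(3)      = 0.3125

# Basic Exercise 9: mean and standard deviation via mu = np, sigma = sqrt(npq)
for n, p in [(8, 0.43), (47, 0.82), (1200, 0.44), (2100, 0.62)]:
    print(n, p, binom.mean(n, p), binom.std(n, p))
# first line of output: mu = 3.44, sigma = 1.4003, matching the answer key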
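The "at least one" questions in Additional Exercises 4 and 5 above (kennel cough, pooled blood samples) both reduce to evaluating $P(X\geq 1)=1-(1-p)^K$. Here is a short sketch in plain Python, using only the numbers stated in those exercises; it is an illustration of that formula, not an additional requirement of the problems.

# Additional Exercise 4 (kennel cough): find the largest K with 1 - 0.992**K < 0.05,
# where K is the number of healthy dogs boarded with one carrier
K = 1
while 1 - 0.992 ** (K + 1) < 0.05:
    K += 1
print(K, 1 - 0.992 ** K)   # largest acceptable K; the kennel can board K + 1 dogs in total

# Additional Exercise 5 (pooled blood samples): prevalence 2%, groups of 10, 60 groups
p_positive = 1 - 0.98 ** 10    # probability a mixture contains at least one diseased sample
print(p_positive)              # about 0.18, as the exercise asks you to show
print(60 * p_positive)         # expected number of positive mixtures, about 11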
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 5.1: Continuous Random Variables Basic 1. A continuous random variable $X$ has a uniform distribution on the interval $[5,12]$. Sketch the graph of its density function. 2. A continuous random variable $X$ has a uniform distribution on the interval $[-3,3]$. Sketch the graph of its density function. 3. A continuous random variable $X$ has a normal distribution with mean $100$ and standard deviation $10$. Sketch a qualitatively accurate graph of its density function. 4. A continuous random variable $X$ has a normal distribution with mean $73$ and standard deviation $2.5$. Sketch a qualitatively accurate graph of its density function. 5. A continuous random variable $X$ has a normal distribution with mean $73$. The probability that $X$ takes a value greater than $80$ is $0.212$. Use this information and the symmetry of the density function to find the probability that $X$ takes a value less than $66$. Sketch the density curve with relevant regions shaded to illustrate the computation. 6. A continuous random variable $X$ has a normal distribution with mean $169$. The probability that $X$ takes a value greater than $180$ is $0.17$. Use this information and the symmetry of the density function to find the probability that $X$ takes a value less than $158$. Sketch the density curve with relevant regions shaded to illustrate the computation. 7. A continuous random variable $X$ has a normal distribution with mean $50.5$. The probability that $X$ takes a value less than $54$ is $0.76$. Use this information and the symmetry of the density function to find the probability that $X$ takes a value greater than $47$. Sketch the density curve with relevant regions shaded to illustrate the computation. 8. A continuous random variable $X$ has a normal distribution with mean $12.25$. The probability that $X$ takes a value less than $13$ is $0.82$. Use this information and the symmetry of the density function to find the probability that $X$ takes a value greater than $11.50$. Sketch the density curve with relevant regions shaded to illustrate the computation. 9. The figure provided shows the density curves of three normally distributed random variables $X_A,\; X_B\; \text{and}\; X_C$. Their standard deviations (in no particular order) are $15$, $7$, and $20$. Use the figure to identify the values of the means $\mu _A,\: \mu _B,\; \text{and}\; \mu _C$ and standard deviations $\sigma _A,\: \sigma _B,\; \text{and}\; \sigma _C$ of the three random variables. 1. The figure provided shows the density curves of three normally distributed random variables $X_A,\; X_B\; \text{and}\; X_C$. Their standard deviations (in no particular order) are $20$, $5$, and $10$. Use the figure to identify the values of the means $\mu _A,\: \mu _B,\; \text{and}\; \mu _C$ and standard deviations $\sigma _A,\: \sigma _B,\; \text{and}\; \sigma _C$ of the three random variables. Applications 1. Dogberry's alarm clock is battery operated. The battery could fail with equal probability at any time of the day or night. Every day Dogberry sets his alarm for $6:30\; a.m.$ and goes to bed at $10:00\; p.m.$. Find the probability that when the clock battery finally dies, it will do so at the most inconvenient time, between $10:00\; p.m.$ and $6:30\; a.m.$. 2. Buses running a bus line near Desdemona's house run every $15$ minutes. Without paying attention to the schedule she walks to the nearest stop to take the bus to town. 
Find the probability that she waits more than $10$ minutes. 3. The amount $X$ of orange juice in a randomly selected half-gallon container varies according to a normal distribution with mean $64$ ounces and standard deviation $0.25$ ounce. 1. Sketch the graph of the density function for $X$. 2. What proportion of all containers contain less than a half gallon ($64$ ounces)? Explain. 3. What is the median amount of orange juice in such containers? Explain. 4. The weight $X$ of grass seed in bags marked $50$ lb varies according to a normal distribution with mean $50$ lb and standard deviation $1$ ounce ($0.0625$ lb). 1. Sketch the graph of the density function for $X$. 2. What proportion of all bags weigh less than $50$ pounds? Explain. 3. What is the median weight of such bags? Explain. Answers 1. The graph is a horizontal line with height $1/7$ from $x = 5$ to $x = 12$ 2. 3. The graph is a bell-shaped curve centered at $100$ and extending from about $70$ to $130$. 4. 5. $0.212$ 6. 7. $0.76$ 8. 9. $\mu _A=100,\; \mu _B=200,\; \mu _C=300,\; \sigma _A=7,\; \sigma _B=20,\; \sigma _C=15$ 10. 11. $0.3542$ 12. 1. The graph is a bell-shaped curve centered at $64$ and extending from about $63.25$ to $64.75$. 2. $0.5$ 3. $64$ 5.2: The Standard Normal Distribution Basic 1. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated.$/**/$ 1. $P(Z < -1.72)$ 2. $P(Z < 2.05)$ 3. $P(Z < 0)$ 4. $P(Z > -2.11)$ 5. $P(Z > 1.63)$ 6. $P(Z > 2.36)$ 2. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated.$/**/$ 1. $P(Z < -1.17)$ 2. $P(Z < -0.05)$ 3. $P(Z < 0.66)$ 4. $P(Z > -2.43)$ 5. $P(Z > -1.00)$ 6. $P(Z > 2.19)$ 3. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated. 1. $P(-2.15 < Z < -1.09)$ 2. $P(-0.93 < Z < 0.55)$ 3. $P(0.68 < Z < 2.11)$ 4. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated. 1. $P(-1.99 < Z < -1.03)$ 2. $P(-0.87 < Z < 1.58)$ 3. $P(0.33 < Z < 0.96)$ 5. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated. 1. $P(-4.22 < Z < -1.39)$ 2. $P(-1.37 < Z < 5.11)$ 3. $P(Z < -4.31)$ 4. $P(Z < 5.02)$ 6. Use Figure 7.1.5: Cumulative Normal Probability to find the probability indicated. 1. $P(Z > -5.31)$ 2. $P(-4.08 < Z < 0.58)$ 3. $P(Z < -6.16)$ 4. $P(-0.51< Z < 5.63)$ 7. Use Figure 7.1.5: Cumulative Normal Probability to find the probability listed. Find the second probability without referring to the table, but using the symmetry of the standard normal density curve instead. Sketch the density curve with relevant regions shaded to illustrate the computation. 1. $P(Z < -1.08),\; P(Z > 1.08)$ 2. $P(Z < -0.36),\; P(Z > 0.36)$ 3. $P(Z < 1.25),\; P(Z > -1.25)$ 4. $P(Z < 2.03),\; P(Z > -2.03)$ 8. Use Figure 7.1.5: Cumulative Normal Probability to find the probability listed. Find the second probability without referring to the table, but using the symmetry of the standard normal density curve instead. Sketch the density curve with relevant regions shaded to illustrate the computation. 1. $P(Z < -2.11),\; P(Z > 2.11)$ 2. $P(Z < -0.88),\; P(Z > 0.88)$ 3. $P(Z < 2.44),\; P(Z > -2.44)$ 4. $P(Z < 3.07),\; P(Z > -3.07)$ 9. The probability that a standard normal random variable $Z$ takes a value in the union of intervals $(-\infty ,-\alpha ]\cup [\alpha ,\infty )$, which arises in applications, will be denoted $P(Z \leq -a\; or\; Z \geq a)$. Use Figure 7.1.5: Cumulative Normal Probability to find the following probabilities of this type. 
Sketch the density curve with relevant regions shaded to illustrate the computation. Because of the symmetry of the standard normal density curve you need to use Figure 7.1.5: Cumulative Normal Probability only one time for each part. 1. $P(Z < -1.29\; or\; Z > 1.29)$ 2. $P(Z < -2.33\; or\; Z > 2.33)$ 3. $P(Z < -1.96\; or\; Z > 1.96)$ 4. $P(Z < -3.09\; or\; Z > 3.09)$ 10. The probability that a standard normal random variable $Z$ takes a value in the union of intervals $(-\infty ,-\alpha ]\cup [\alpha ,\infty )$, which arises in applications, will be denoted $P(Z \leq -a\; or\; Z \geq a)$. Use Figure 7.1.5: Cumulative Normal Probability to find the following probabilities of this type. Sketch the density curve with relevant regions shaded to illustrate the computation. Because of the symmetry of the standard normal density curve you need to use Figure 7.1.5: Cumulative Normal Probability only one time for each part. 1. $P(Z < -2.58 \; or\; Z > 2.58 )$ 2. $P(Z < -2.81 \; or\; Z > 2.81 )$ 3. $P(Z < -1.65 \; or\; Z > 1.65 )$ 4. $P(Z < -2.43 \; or\; Z > 2.43 )$ Answers 1. $0.0427$ 2. $0.9798$ 3. $0.5$ 4. $0.9826$ 5. $0.0516$ 6. $0.0091$ 1. 1. $0.1221$ 2. $0.5326$ 3. $0.2309$ 2. 1. $0.0823$ 2. $0.9147$ 3. $0.0000$ 4. $1.0000$ 3. 1. $0.1401,\; 0.1401$ 2. $0.3594,\; 0.3594$ 3. $0.8944,\; 0.8944$ 4. $0.9788,\; 0.9788$ 4. 1. $0.1970$ 2. $0.01980$ 3. $0.0500$ 4. $0.0020$ 5.3: Probability Computations for General Normal Random Variables Basic 1. $X$ is a normally distributed random variable with mean $57$ and standard deviation $6$. Find the probability indicated. 1. $P(X < 59.5)$ 2. $P(X < 46.2)$ 3. $P(X > 52.2)$ 4. $P(X > 70)$ 2. $X$ is a normally distributed random variable with mean $-25$ and standard deviation $4$. Find the probability indicated. 1. $P(X < -27.2)$ 2. $P(X < -14.8)$ 3. $P(X > -33.1)$ 4. $P(X > -16.5)$ 3. $X$ is a normally distributed random variable with mean $112$ and standard deviation $15$. Find the probability indicated. 1. $P(100<X<125)$ 2. $P(91<X<107)$ 3. $P(118<X<160)$ 4. $X$ is a normally distributed random variable with mean $72$ and standard deviation $22$. Find the probability indicated. 1. $P(78<X<127)$ 2. $P(60<X<90)$ 3. $P(49<X<71)$ 5. $X$ is a normally distributed random variable with mean $500$ and standard deviation $25$. Find the probability indicated. 1. $P(X < 400)$ 2. $P(466<X<625)$ 6. $X$ is a normally distributed random variable with mean $0$ and standard deviation $0.75$. Find the probability indicated. 1. $P(-4.02 < X < 3.82)$ 2. $P(X > 4.11)$ 7. $X$ is a normally distributed random variable with mean $15$ and standard deviation $1$. Use Figure 7.1.5$/**/$: Cumulative Normal Probability to find the first probability listed. Find the second probability using the symmetry of the density curve. Sketch the density curve with relevant regions shaded to illustrate the computation. 1. $P(X < 12),\; P(X > 18)$ 2. $P(X < 14),\; P(X > 16)$ 3. $P(X < 11.25),\; P(X > 18.75)$ 4. $P(X < 12.67),\; P(X > 17.33)$ 8. $X$ is a normally distributed random variable with mean $100$ and standard deviation $10$. Use Figure 7.1.5$/**/$: Cumulative Normal Probability to find the first probability listed. Find the second probability using the symmetry of the density curve. Sketch the density curve with relevant regions shaded to illustrate the computation. 1. $P(X < 80),\; P(X > 120)$ 2. $P(X < 75),\; P(X > 125)$ 3. $P(X < 84.55),\; P(X > 115.45)$ 4. $P(X < 77.42),\; P(X > 122.58)$ 9. $X$ is a normally distributed random variable with mean $67$ and standard deviation $13$. 
The probability that $X$ takes a value in the union of intervals $(-\infty ,67-a]\cup [67+a,\infty )$ will be denoted $P(X\leq 67-a\; or\; X\geq 67+a)$. Use Figure 7.1.5$/**/$: Cumulative Normal Probability to find the following probabilities of this type. Sketch the density curve with relevant regions shaded to illustrate the computation. Because of the symmetry of the density curve you need to use Figure 7.1.5$/**/$: Cumulative Normal Probability only one time for each part. 1. $P(X<57\; or\; X>77)$ 2. $P(X<47\; or\; X>87)$ 3. $P(X<49\; or\; X>85)$ 4. $P(X<37\; or\; X>97)$ 10. $X$ is a normally distributed random variable with mean $288$ and standard deviation $6$. The probability that $X$ takes a value in the union of intervals $(-\infty ,288-a]\cup [288+a,\infty )$ will be denoted $P(X\leq 288-a\; or\; X\geq 288+a)$. Use Figure 7.1.5$/**/$: Cumulative Normal Probability to find the following probabilities of this type. Sketch the density curve with relevant regions shaded to illustrate the computation. Because of the symmetry of the density curve you need to use Figure 7.1.5$/**/$: Cumulative Normal Probability only one time for each part. 1. $P(X<278\; or\; X>298)$ 2. $P(X<268\; or\; X>308)$ 3. $P(X<273\; or\; X>303)$ 4. $P(X<280\; or\; X>296)$ Applications 1. The amount $X$ of beverage in a can labeled $12$ ounces is normally distributed with mean $12.1$ ounces and standard deviation $0.05$ ounce. A can is selected at random. 1. Find the probability that the can contains at least $12$ ounces. 2. Find the probability that the can contains between $11.9$ and $12.1$ ounces. 2. The length of gestation for swine is normally distributed with mean $114$ days and standard deviation $0.75$ day. Find the probability that a litter will be born within one day of the mean of $114$. 3. The systolic blood pressure $X$ of adults in a region is normally distributed with mean $112$ mm Hg and standard deviation $15$ mm Hg. A person is considered “prehypertensive” if his systolic blood pressure is between $120$ and $130$ mm Hg. Find the probability that the blood pressure of a randomly selected person is prehypertensive. 4. Heights $X$ of adult women are normally distributed with mean $63.7$ inches and standard deviation $2.71$ inches. Romeo, who is $69.25$ inches tall, wishes to date only women who are shorter than he but within $4$ inches of his height. Find the probability that the next woman he meets will have such a height. 5. Heights $X$ of adult men are normally distributed with mean $69.1$ inches and standard deviation $2.92$ inches. Juliet, who is $63.25$ inches tall, wishes to date only men who are taller than she but within 6 inches of her height. Find the probability that the next man she meets will have such a height. 6. A regulation hockey puck must weigh between $5.5$ and $6$ ounces. The weights $X$ of pucks made by a particular process are normally distributed with mean $5.75$ ounces and standard deviation $0.11$ ounce. Find the probability that a puck made by this process will meet the weight standard. 7. A regulation golf ball may not weigh more than $1.620$ ounces. The weights $X$ of golf balls made by a particular process are normally distributed with mean $1.361$ ounces and standard deviation $0.09$ ounce. Find the probability that a golf ball made by this process will meet the weight standard. 8. 
The length of time that the battery in Hippolyta's cell phone will hold enough charge to operate acceptably is normally distributed with mean $25.6$ hours and standard deviation $0.32$ hour. Hippolyta forgot to charge her phone yesterday, so that at the moment she first wishes to use it today it has been $26$ hours $18$ minutes since the phone was last fully charged. Find the probability that the phone will operate properly. 9. The amount of non-mortgage debt per household for households in a particular income bracket in one part of the country is normally distributed with mean $\$28,350$ and standard deviation $\$3,425$. Find the probability that a randomly selected such household has between $\$20,000$ and $\$30,000$ in non-mortgage debt. 10. Birth weights of full-term babies in a certain region are normally distributed with mean $7.125$ lb and standard deviation $1.290$ lb. Find the probability that a randomly selected newborn will weigh less than $5.5$ lb, the historic definition of prematurity. 11. The distance from the seat back to the front of the knees of seated adult males is normally distributed with mean $23.8$ inches and standard deviation $1.22$ inches. The distance from the seat back to the back of the next seat forward in all seats on aircraft flown by a budget airline is $26$ inches. Find the proportion of adult men flying with this airline whose knees will touch the back of the seat in front of them. 12. The distance from the seat to the top of the head of seated adult males is normally distributed with mean $36.5$ inches and standard deviation $1.39$ inches. The distance from the seat to the roof of a particular make and model car is $40.5$ inches. Find the proportion of adult men who when sitting in this car will have at least one inch of headroom (distance from the top of the head to the roof). Additional Exercises 1. The useful life of a particular make and type of automotive tire is normally distributed with mean $57,500$ miles and standard deviation $950$ miles. 1. Find the probability that such a tire will have a useful life of between $57,000$ and $58,000$ miles. 2. Hamlet buys four such tires. Assuming that their lifetimes are independent, find the probability that all four will last between $57,000$ and $58,000$ miles. (If so, the best tire will have no more than $1,000$ miles left on it when the first tire fails.) Hint: There is a binomial random variable here, whose value of $p$ comes from part (a). 2. A machine produces large fasteners whose length must be within $0.5$ inch of $22$ inches. The lengths are normally distributed with mean $22.0$ inches and standard deviation $0.17$ inch. 1. Find the probability that a randomly selected fastener produced by the machine will have an acceptable length. 2. The machine produces $20$ fasteners per hour. The length of each one is inspected. Assuming lengths of fasteners are independent, find the probability that all $20$ will have acceptable length. Hint: There is a binomial random variable here, whose value of $p$ comes from part (a). 3. The lengths of time taken by students on an algebra proficiency exam (if not forced to stop before completing it) are normally distributed with mean $28$ minutes and standard deviation $1.5$ minutes. 1. Find the proportion of students who will finish the exam if a $30$-minute time limit is set. 2. Six students are taking the exam today. Find the probability that all six will finish the exam within the $30$-minute limit, assuming that times taken by students are independent. 
Hint: There is a binomial random variable here, whose value of $p$ comes from part (a). 4. Heights of adult men between $18$ and $34$ years of age are normally distributed with mean $69.1$ inches and standard deviation $2.92$ inches. One requirement for enlistment in the military is that men must stand between $60$ and $80$ inches tall. 1. Find the probability that a randomly elected man meets the height requirement for military service. 2. Twenty-three men independently contact a recruiter this week. Find the probability that all of them meet the height requirement. Hint: There is a binomial random variable here, whose value of $p$ comes from part (a). 5. A regulation hockey puck must weigh between $5.5$ and $6$ ounces. In an alternative manufacturing process the mean weight of pucks produced is $5.75$ ounce. The weights of pucks have a normal distribution whose standard deviation can be decreased by increasingly stringent (and expensive) controls on the manufacturing process. Find the maximum allowable standard deviation so that at most $0.005$ of all pucks will fail to meet the weight standard. (Hint: The distribution is symmetric and is centered at the middle of the interval of acceptable weights.) 6. The amount of gasoline $X$ delivered by a metered pump when it registers $5$ gallons is a normally distributed random variable. The standard deviation $\sigma$ of $X$measures the precision of the pump; the smaller $\sigma$ is the smaller the variation from delivery to delivery. A typical standard for pumps is that when they show that $5$ gallons of fuel has been delivered the actual amount must be between $4.97$ and $5.03$ gallons (which corresponds to being off by at most about half a cup). Supposing that the mean of $X$ is $5$, find the largest that $\sigma$ can be so that $P(4.97 < X < 5.03)$ is $1.0000$ to four decimal places when computed using Figure 7.1.5: Cumulative Normal Probability which means that the pump is sufficiently accurate. (Hint: The $z$-score of $5.03$ will be the smallest value of $Z$ so that Figure 7.1.5: Cumulative Normal Probability gives $P(Z<z)=1.0000$). Answers 1. $0.6628$ 2. $0.0359$ 3. $0.7881$ 4. $0.0150$ 1. 1. $0.5959$ 2. $0.2899$ 3. $0.3439$ 2. 1. $0.0000$ 2. $0.9131$ 3. 1. $0.0013,\; 0.0013$ 2. $0.1587,\; 0.1587$ 3. $0.0001,\; 0.0001$ 4. $0.0099,\; 0.0099$ 4. 1. $0.4412$ 2. $0.1236$ 3. $0.1676$ 4. $0.0208$ 5. 1. $0.9772$ 2. $0.5000$ 6. 7. $0.1830$ 8. 9. $0.4971$ 10. 11. $0.9980$ 12. 13. $0.6771$ 14. 15. $0.0359$ 16. 1. $0.4038$ 2. $0.0266$ 17. 1. $0.9082$ 2. $0.5612$ 18. 19. $0.089$ 5.4: Areas of Tails of Distributions Basic 1. Find the value of $z\ast$ that yields the probability shown. 1. $P(Z<z*)=0.0075$ 2. $P(Z<z*)=0.9850$ 3. $P(Z>z*)=0.8997$ 4. $P(Z>z*)=0.0110$ 2. Find the value of $z\ast$ that yields the probability shown. 1. $P(Z<z*)=0.3300$ 2. $P(Z<z*)=0.9901$ 3. $P(Z>z*)=0.0055$ 4. $P(Z>z*)=0.7995$ 3. Find the value of $z\ast$ that yields the probability shown. 1. $P(Z<z*)=0.1500$ 2. $P(Z<z*)=0.7500$ 3. $P(Z>z*)=0.3333$ 4. $P(Z>z*)=0.8000$ 4. Find the value of $z\ast$ that yields the probability shown. 1. $P(Z<z*)=0.2200$ 2. $P(Z<z*)=0.6000$ 3. $P(Z>z*)=0.0750$ 4. $P(Z>z*)=0.8200$ 5. Find the indicated value of $Z$. (It is easier to find $-z_c$and negate it.) 1. $Z_{0.025}$ 2. $Z_{0.20}$ 6. Find the indicated value of $Z$. (It is easier to find $-z_c$and negate it.) 1. $Z_{0.002}$ 2. $Z_{0.02}$ 7. 
Find the value of $x\ast$ that yields the probability shown, where $X$ is a normally distributed random variable with mean $83$ and standard deviation $4$. 1. $P(X<x*)=0.8700$ 2. $P(X>x*)=0.0500$ 8. Find the value of $x\ast$ that yields the probability shown, where $X$ is a normally distributed random variable with mean $54$ and standard deviation $12$. 1. $P(X<x*)=0.0900$ 2. $P(X>x*)=0.6500$ 9. $X$ is a normally distributed random variable with mean $15$ and standard deviation $0.25$. Find the values $X_L$ and $X_R$ of $X$ that are symmetrically located with respect to the mean of $X$ and satisfy $P(X_L < X < X_R) = 0.80$. (Hint. First solve the corresponding problem for $Z$). 10. $X$ is a normally distributed random variable with mean $28$ and standard deviation $3.7$. Find the values $X_L$ and $X_R$ of $X$ that are symmetrically located with respect to the mean of $X$ and satisfy $P(X_L < X < X_R) = 0.65$. (Hint. First solve the corresponding problem for $Z$). Applications 1. Scores on a national exam are normally distributed with mean $382$ and standard deviation $26$. 1. Find the score that is the $50^{th}$ percentile. 2. Find the score that is the $90^{th}$ percentile. 2. Heights of women are normally distributed with mean $63.7$ inches and standard deviation $2.47$ inches. 1. Find the height that is the $10^{th}$ percentile. 2. Find the height that is the $80^{th}$ percentile. 3. The monthly amount of water used per household in a small community is normally distributed with mean $7,069$ gallons and standard deviation $58$ gallons. Find the three quartiles for the amount of water used. 4. The quantity of gasoline purchased in a single sale at a chain of filling stations in a certain region is normally distributed with mean $11.6$ gallons and standard deviation $2.78$ gallons. Find the three quartiles for the quantity of gasoline purchased in a single sale. 5. Scores on the common final exam given in a large enrollment multiple section course were normally distributed with mean $69.35$ and standard deviation $12.93$. The department has the rule that in order to receive an $A$ in the course a student's score must be in the top $10\%$ of all exam scores. Find the minimum exam score that meets this requirement. 6. The average finishing time among all high school boys in a particular track event in a certain state is $5$ minutes $17$ seconds. Times are normally distributed with standard deviation $12$ seconds. 1. The qualifying time in this event for participation in the state meet is to be set so that only the fastest $5\%$ of all runners qualify. Find the qualifying time. (Hint: Convert seconds to minutes.) 2. In the western region of the state the times of all boys running in this event are normally distributed with standard deviation $12$ seconds, but with mean $5$ minutes $22$ seconds. Find the proportion of boys from this region who qualify to run in this event in the state meet. 7. Tests of a new tire developed by a tire manufacturer led to an estimated mean tread life of $67,350$ miles and standard deviation of $1,120$ miles. The manufacturer will advertise the lifetime of the tire (for example, a “$50,000$ mile tire”) using the largest value for which it is expected that $98\%$ of the tires will last at least that long. Assuming tire life is normally distributed, find that advertised value. 8. Tests of a new light bulb led to an estimated mean life of $1,321$ hours and standard deviation of $106$ hours. 
The manufacturer will advertise the lifetime of the bulb using the largest value for which it is expected that $90\%$ of the bulbs will last at least that long. Assuming bulb life is normally distributed, find that advertised value. 9. The weights $X$ of eggs produced at a particular farm are normally distributed with mean $1.72$ ounces and standard deviation $0.12$ ounce. Eggs whose weights lie in the middle $75\%$ of the distribution of weights of all eggs are classified as “medium.” Find the maximum and minimum weights of such eggs. (These weights are endpoints of an interval that is symmetric about the mean and in which the weights of $75\%$ of the eggs produced at this farm lie.) 10. The lengths $X$ of hardwood flooring strips are normally distributed with mean $28.9$ inches and standard deviation $6.12$ inches. Strips whose lengths lie in the middle 80% of the distribution of lengths of all strips are classified as “average-length strips.” Find the maximum and minimum lengths of such strips. (These lengths are endpoints of an interval that is symmetric about the mean and in which the lengths of $80\%$ of the hardwood strips lie.) 11. All students in a large enrollment multiple section course take common in-class exams and a common final, and submit common homework assignments. Course grades are assigned based on students' final overall scores, which are approximately normally distributed. The department assigns a $C$ to students whose scores constitute the middle $2/3$ of all scores. If scores this semester had mean $72.5$ and standard deviation $6.14$, find the interval of scores that will be assigned a $C$. 12. Researchers wish to investigate the overall health of individuals with abnormally high or low levels of glucose in the blood stream. Suppose glucose levels are normally distributed with mean $96$ and standard deviation $8.5\; mg/dl$, and that “normal” is defined as the middle $90\%$ of the population. Find the interval of normal glucose levels, that is, the interval centered at $96$ that contains $90\%$ of all glucose levels in the population. Additional Exercises 1. A machine for filling $2$-liter bottles of soft drink delivers an amount to each bottle that varies from bottle to bottle according to a normal distribution with standard deviation $0.002$ liter and mean whatever amount the machine is set to deliver. 1. If the machine is set to deliver $2$ liters (so the mean amount delivered is $2$ liters) what proportion of the bottles will contain at least $2$ liters of soft drink? 2. Find the minimum setting of the mean amount delivered by the machine so that at least $99\%$ of all bottles will contain at least $2$ liters. 2. A nursery has observed that the mean number of days it must darken the environment of a species poinsettia plant daily in order to have it ready for market is $71$ days. Suppose the lengths of such periods of darkening are normally distributed with standard deviation $2$ days. Find the number of days in advance of the projected delivery dates of the plants to market that the nursery must begin the daily darkening process in order that at least $95\%$ of the plants will be ready on time. (Poinsettias are so long-lived that once ready for market the plant remains salable indefinitely.) Answers 1. $-2.43$ 2. $2.17$ 3. $-1.28$ 4. $2.29$ 1. 1. $-1.04$ 2. $0.67$ 3. $0.43$ 4. $-0.84$ 2. 1. $1.96$ 2. $0.84$ 3. 1. $87.52$ 2. $89.58$ 4. 5. $15.32$ 6. 1. $382$ 2. $415$ 7. 8. $7030.14,\; 7069,\; 7107.86$ 9. 10. $85.90$ 11. 12. $65,054$ 13. 14. $1.58,\; 1.86$ 15. 
16. $66.5,\; 78.5$ 17. 1. $0.5$ 2. $2.005$ • Anonymous
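For Sections 5.2 and 5.3, every probability above is a difference of values of the standard normal cumulative distribution function. As an illustration only, the sketch below assumes Python with scipy.stats.norm in place of Figure 7.1.5; the exercises themselves expect table lookups, so answers computed this way can differ by a few units in the fourth decimal place because the tables round $z$ to two decimals.

from scipy.stats import norm   # scipy is an added assumption, not the text's table method

# 5.2 Basic Exercise 1: standard normal probabilities
print(norm.cdf(-1.72))                    # P(Z < -1.72), answer key 0.0427
print(1 - norm.cdf(1.63))                 # P(Z > 1.63),  answer key 0.0516
print(norm.cdf(0.55) - norm.cdf(-0.93))   # P(-0.93 < Z < 0.55), answer key 0.5326

# 5.3 Basic Exercise 1: X normal with mean 57 and standard deviation 6
mu, sigma = 57, 6
print(norm.cdf(59.5, mu, sigma))          # P(X < 59.5), answer key 0.6628 (table rounds z to 0.42)
print(1 - norm.cdf(70, mu, sigma))        # P(X > 70),   answer key 0.0150

# 5.3 Basic Exercise 9: two-tailed probability for X normal with mean 67, sd 13
print(2 * norm.cdf(57, 67, 13))           # P(X < 57 or X > 77), answer key 0.4412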
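Section 5.4 runs the table in the other direction: given a tail area, find the cutoff $z\ast$ or $x\ast$. Under the same added assumption (Python with scipy.stats, rather than the printed table), norm.ppf is the inverse of the cumulative distribution function and plays the role of reading Figure 7.1.5 backwards.

from scipy.stats import norm   # added assumption; the text uses the printed table

# 5.4 Basic Exercise 1: find z* from a stated probability
print(norm.ppf(0.0075))        # P(Z < z*) = 0.0075, answer key -2.43
print(norm.ppf(0.9850))        # P(Z < z*) = 0.9850, answer key  2.17
print(norm.ppf(1 - 0.8997))    # P(Z > z*) = 0.8997 means P(Z < z*) = 0.1003; answer key -1.28
print(-norm.ppf(0.025))        # z_0.025, the cutoff leaving an upper tail of 0.025; answer key 1.96

# 5.4 Basic Exercise 7: x* for X normal with mean 83, standard deviation 4
mu, sigma = 83, 4
print(norm.ppf(0.87, mu, sigma))        # P(X < x*) = 0.87, answer key 87.52
print(norm.ppf(1 - 0.05, mu, sigma))    # P(X > x*) = 0.05, answer key 89.58

# 5.4 Applications Exercise 11: middle 2/3 of scores, mean 72.5, sd 6.14
print(norm.ppf(1 / 6, 72.5, 6.14), norm.ppf(5 / 6, 72.5, 6.14))   # answer key 66.5, 78.5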
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 6.1: The Mean and Standard Deviation of the Sample Mean Basic Q6.1.1 Random samples of size $225$ are drawn from a population with mean $100$ and standard deviation $20$. Find the mean and standard deviation of the sample mean. Q6.1.2 Random samples of size $64$ are drawn from a population with mean $32$ and standard deviation $5$. Find the mean and standard deviation of the sample mean. Q6.1.3 A population has mean $75$ and standard deviation $12$. 1. Random samples of size $121$ are taken. Find the mean and standard deviation of the sample mean. 2. How would the answers to part (a) change if the size of the samples were $400$ instead of $121$? Q6.1.4 A population has mean $5.75$ and standard deviation $1.02$. 1. Random samples of size $81$ are taken. Find the mean and standard deviation of the sample mean. 2. How would the answers to part (a) change if the size of the samples were $25$ instead of $81$? Answers S6.1.1 $\mu _{\bar{X}}=100,\; \sigma _{\bar{X}}=1.33$ S6.1.3 1. $\mu _{\bar{X}}=75,\; \sigma _{\bar{X}}=1.09$ 2. $\mu _{\bar{X}}$ stays the same but $\sigma _{\bar{X}}$ decreases to $0.6$ 6.2: The Sampling Distribution of the Sample Mean Basic 1. A population has mean $128$ and standard deviation $22$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $36$. 2. Find the probability that the mean of a sample of size $36$ will be within $10$ units of the population mean, that is, between $118$ and $138$. 2. A population has mean $1,542$ and standard deviation $246$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $100$. 2. Find the probability that the mean of a sample of size $100$ will be within $100$ units of the population mean, that is, between $1,442$ and $1,642$. 3. A population has mean $73.5$ and standard deviation $2.5$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $30$. 2. Find the probability that the mean of a sample of size $30$ will be less than $72$. 4. A population has mean $48.4$ and standard deviation $6.3$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $64$. 2. Find the probability that the mean of a sample of size $64$ will be less than $46.7$. 5. A normally distributed population has mean $25.6$ and standard deviation $3.3$. 1. Find the probability that a single randomly selected element $X$ of the population exceeds $30$. 2. Find the mean and standard deviation of $\overline{X}$ for samples of size $9$. 3. Find the probability that the mean of a sample of size $9$ drawn from this population exceeds $30$. 6. A normally distributed population has mean $57.7$ and standard deviation $12.1$. 1. Find the probability that a single randomly selected element $X$ of the population is less than $45$. 2. Find the mean and standard deviation of $\overline{X}$ for samples of size $16$. 3. Find the probability that the mean of a sample of size $16$ drawn from this population is less than $45$. 7. A population has mean $557$ and standard deviation $35$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $50$. 2. Find the probability that the mean of a sample of size $50$ will be more than $570$. 8. A population has mean $16$ and standard deviation $1.7$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $80$. 2. 
Find the probability that the mean of a sample of size $80$ will be more than $16.4$. 9. A normally distributed population has mean $1,214$ and standard deviation $122$. 1. Find the probability that a single randomly selected element $X$ of the population is between $1,100$ and $1,300$. 2. Find the mean and standard deviation of $\overline{X}$ for samples of size $25$. 3. Find the probability that the mean of a sample of size $25$ drawn from this population is between $1,100$ and $1,300$. 10. A normally distributed population has mean $57,800$ and standard deviation $750$. 1. Find the probability that a single randomly selected element $X$ of the population is between $57,000$ and $58,000$. 2. Find the mean and standard deviation of $\overline{X}$ for samples of size $100$. 3. Find the probability that the mean of a sample of size $100$ drawn from this population is between $57,000$ and $58,000$. 11. A population has mean $72$ and standard deviation $6$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $45$. 2. Find the probability that the mean of a sample of size $45$ will differ from the population mean $72$ by at least $2$ units, that is, is either less than $70$ or more than $74$. (Hint: One way to solve the problem is to first find the probability of the complementary event.) 12. A population has mean $12$ and standard deviation $1.5$. 1. Find the mean and standard deviation of $\overline{X}$ for samples of size $90$. 2. Find the probability that the mean of a sample of size $90$ will differ from the population mean $12$ by at least $0.3$ unit, that is, is either less than $11.7$ or more than $12.3$. (Hint: One way to solve the problem is to first find the probability of the complementary event.) Applications 1. Suppose the mean number of days to germination of a variety of seed is $22$, with standard deviation $2.3$ days. Find the probability that the mean germination time of a sample of $160$ seeds will be within $0.5$ day of the population mean. 2. Suppose the mean length of time that a caller is placed on hold when telephoning a customer service center is $23.8$ seconds, with standard deviation $4.6$ seconds. Find the probability that the mean length of time on hold in a sample of $1,200$ calls will be within $0.5$ second of the population mean. 3. Suppose the mean amount of cholesterol in eggs labeled “large” is $186$ milligrams, with standard deviation $7$ milligrams. Find the probability that the mean amount of cholesterol in a sample of $144$ eggs will be within $2$ milligrams of the population mean. 4. Suppose that in one region of the country the mean amount of credit card debt per household in households having credit card debt is $\$15,250$, with standard deviation $\$7,125$. Find the probability that the mean amount of credit card debt in a sample of $1,600$ such households will be within $\$300$ of the population mean. 5. Suppose speeds of vehicles on a particular stretch of roadway are normally distributed with mean $36.6$ mph and standard deviation $1.7$ mph. 1. Find the probability that the speed $X$ of a randomly selected vehicle is between $35$ and $40$ mph. 2. Find the probability that the mean speed $\overline{X}$ of $20$ randomly selected vehicles is between $35$ and $40$ mph. 6. Many sharks enter a state of tonic immobility when inverted. 
Suppose that in a particular species of sharks the time a shark remains in a state of tonic immobility when inverted is normally distributed with mean $11.2$ minutes and standard deviation $1.1$ minutes. 1. If a biologist induces a state of tonic immobility in such a shark in order to study it, find the probability that the shark will remain in this state for between $10$ and $13$ minutes. 2. When a biologist wishes to estimate the mean time that such sharks stay immobile by inducing tonic immobility in each of a sample of $12$ sharks, find the probability that the mean time of immobility in the sample will be between $10$ and $13$ minutes. 7. Suppose the mean cost across the country of a $30$-day supply of a generic drug is $\$46.58$, with standard deviation $\$4.84$. Find the probability that the mean of a sample of $100$ prices of $30$-day supplies of this drug will be between $\$45$ and $\$50$. 8. Suppose the mean length of time between submission of a state tax return requesting a refund and the issuance of the refund is $47$ days, with standard deviation $6$ days. Find the probability that in a sample of $50$ returns requesting a refund, the mean such time will be more than $50$ days. 9. Scores on a common final exam in a large enrollment, multiple-section freshman course are normally distributed with mean $72.7$ and standard deviation $13.1$. 1. Find the probability that the score $X$ on a randomly selected exam paper is between $70$ and $80$. 2. Find the probability that the mean score $\overline{X}$ of $38$ randomly selected exam papers is between $70$ and $80$. 10. Suppose the mean weight of school children’s bookbags is $17.4$ pounds, with standard deviation $2.2$ pounds. Find the probability that the mean weight of a sample of $30$ bookbags will exceed $17$ pounds. 11. Suppose that in a certain region of the country the mean duration of first marriages that end in divorce is $7.8$ years, standard deviation $1.2$ years. Find the probability that in a sample of $75$ divorces, the mean age of the marriages is at most $8$ years. 12. Borachio eats at the same fast food restaurant every day. Suppose the time $X$ between the moment Borachio enters the restaurant and the moment he is served his food is normally distributed with mean $4.2$ minutes and standard deviation $1.3$ minutes. 1. Find the probability that when he enters the restaurant today it will be at least $5$ minutes until he is served. 2. Find the probability that the average time until he is served in eight randomly selected visits to the restaurant will be at least $5$ minutes. Additional Exercises 1. A high-speed packing machine can be set to deliver between $11$ and $13$ ounces of a liquid. For any delivery setting in this range the amount delivered is normally distributed with mean some amount $\mu$ and with standard deviation $0.08$ ounce. To calibrate the machine it is set to deliver a particular amount, many containers are filled, and $25$ containers are randomly selected and the amount they contain is measured. Find the probability that the sample mean will be within $0.05$ ounce of the actual mean amount being delivered to all containers. 2. A tire manufacturer states that a certain type of tire has a mean lifetime of $60,000$ miles. Suppose lifetimes are normally distributed with standard deviation $\sigma =3,500$ miles. 1. Find the probability that if you buy one such tire, it will last only $57,000$ or fewer miles. If you had this experience, is it particularly strong evidence that the tire is not as good as claimed? 2. 
A consumer group buys five such tires and tests them. Find the probability that average lifetime of the five tires will be $57,000$ miles or less. If the mean is so low, is that particularly strong evidence that the tire is not as good as claimed? Answers 1. $\mu _{\overline{X}}=128,\; \sigma _{\overline{X}}=3.67$ 2. $0.9936$ 1. 1. $\mu _{\overline{X}}=73.5,\; \sigma _{\overline{X}}=0.456$ 2. $0.0005$ 2. 1. $0.0918$ 2. $\mu _{\overline{X}}=25.6,\; \sigma _{\overline{X}}=1.1$ 3. $0.0000$ 3. 1. $\mu _{\overline{X}}=557,\; \sigma _{\overline{X}}=4.9497$ 2. $0.0043$ 4. 1. $0.5818$ 2. $\mu _{\overline{X}}=1214\; \sigma _{\overline{X}}=24.4$ 3. $0.9998$ 5. 1. $\mu _{\overline{X}}=72\; \sigma _{\overline{X}}=0.8944$ 2. $0.0250$ 6. 7. $0.9940$ 8. 9. $0.9994$ 10. 1. $0.8036$ 2. $1.0000$ 11. 12. $0.9994$ 13. 1. $0.2955$ 2. $0.8977$ 14. 15. $0.9251$ 16. 17. $0.9982$ 6.3: The Sample Proportion Basic 1. The proportion of a population with a characteristic of interest is $p = 0.37$. Find the mean and standard deviation of the sample proportion $\widehat{P}$ obtained from random samples of size $1,600$. 2. The proportion of a population with a characteristic of interest is $p = 0.82$. Find the mean and standard deviation of the sample proportion $\widehat{P}$ obtained from random samples of size $900$. 3. The proportion of a population with a characteristic of interest is $p = 0.76$. Find the mean and standard deviation of the sample proportion $\widehat{P}$ obtained from random samples of size $1,200$. 4. The proportion of a population with a characteristic of interest is $p = 0.37$. Find the mean and standard deviation of the sample proportion $\widehat{P}$ obtained from random samples of size $125$. 5. Random samples of size $225$ are drawn from a population in which the proportion with the characteristic of interest is $0.25$. Decide whether or not the sample size is large enough to assume that the sample proportion $\widehat{P}$ is normally distributed. 6. Random samples of size $1,600$ are drawn from a population in which the proportion with the characteristic of interest is $0.05$. Decide whether or not the sample size is large enough to assume that the sample proportion $\widehat{P}$ is normally distributed. 7. Random samples of size $n$ produced sample proportions $\hat{p}$ as shown. In each case decide whether or not the sample size is large enough to assume that the sample proportion $\widehat{P}$ is normally distributed. 1. $n = 50,\; \hat{p}=0.48$ 2. $n = 50,\; \hat{p}=0.12$ 3. $n = 100,\; \hat{p}=0.12$ 8. Samples of size $n$ produced sample proportions $\hat{p}$ as shown. In each case decide whether or not the sample size is large enough to assume that the sample proportion $\widehat{P}$ is normally distributed. 1. $n = 30,\; \hat{p}=0.72$ 2. $n = 30,\; \hat{p}=0.84$ 3. $n = 75,\; \hat{p}=0.84$ 9. A random sample of size $121$ is taken from a population in which the proportion with the characteristic of interest is $p = 0.47$. Find the indicated probabilities. 1. $P(0.45\leq \widehat{P}\leq 0.50)$ 2. $P(\widehat{P}\geq 0.50)$ 10. A random sample of size $225$ is taken from a population in which the proportion with the characteristic of interest is $p = 0.34$. Find the indicated probabilities. 1. $P(0.25\leq \widehat{P}\leq 0.40)$ 2. $P(\widehat{P}\geq 0.35)$ 11. A random sample of size 900 is taken from a population in which the proportion with the characteristic of interest is p = 0.62. Find the indicated probabilities. 1. $P(0.60\leq \widehat{P}\leq 0.64)$ 2. 
$P(0.57\leq \widehat{P}\leq 0.67)$ 12. A random sample of size 1,100 is taken from a population in which the proportion with the characteristic of interest is p = 0.28. Find the indicated probabilities. 1. $P(0.27\leq \widehat{P}\leq 0.29)$ 2. $P(0.23\leq \widehat{P}\leq 0.33)$ Applications 1. Suppose that $8\%$ of all males suffer some form of color blindness. Find the probability that in a random sample of $250$ men at least $10\%$ will suffer some form of color blindness. First verify that the sample is sufficiently large to use the normal distribution. 2. Suppose that $29\%$ of all residents of a community favor annexation by a nearby municipality. Find the probability that in a random sample of $50$ residents at least $35\%$ will favor annexation. First verify that the sample is sufficiently large to use the normal distribution. 3. Suppose that $2\%$ of all cell phone connections by a certain provider are dropped. Find the probability that in a random sample of $1,500$ calls at most $40$ will be dropped. First verify that the sample is sufficiently large to use the normal distribution. 4. Suppose that in $20\%$ of all traffic accidents involving an injury, driver distraction in some form (for example, changing a radio station or texting) is a factor. Find the probability that in a random sample of $275$ such accidents between $15\%$ and $25\%$ involve driver distraction in some form. First verify that the sample is sufficiently large to use the normal distribution. 5. An airline claims that $72\%$ of all its flights to a certain region arrive on time. In a random sample of $30$ recent arrivals, $19$ were on time. You may assume that the normal distribution applies. 1. Compute the sample proportion. 2. Assuming the airline’s claim is true, find the probability of a sample of size $30$ producing a sample proportion so low as was observed in this sample. 6. A humane society reports that $19\%$ of all pet dogs were adopted from an animal shelter. Assuming the truth of this assertion, find the probability that in a random sample of $80$ pet dogs, between $15\%$ and $20\%$ were adopted from a shelter. You may assume that the normal distribution applies. 7. In one study it was found that $86\%$ of all homes have a functional smoke detector. Suppose this proportion is valid for all homes. Find the probability that in a random sample of $600$ homes, between $80\%$ and $90\%$ will have a functional smoke detector. You may assume that the normal distribution applies. 8. A state insurance commission estimates that $13\%$ of all motorists in its state are uninsured. Suppose this proportion is valid. Find the probability that in a random sample of $50$ motorists, at least $5$ will be uninsured. You may assume that the normal distribution applies. 9. An outside financial auditor has observed that about $4\%$ of all documents he examines contain an error of some sort. Assuming this proportion to be accurate, find the probability that a random sample of $700$ documents will contain at least $30$ with some sort of error. You may assume that the normal distribution applies. 10. Suppose $7\%$ of all households have no home telephone but depend completely on cell phones. Find the probability that in a random sample of $450$ households, between $25$ and $35$ will have no home telephone. You may assume that the normal distribution applies. Additional Exercises 1. 
Some countries allow individual packages of prepackaged goods to weigh less than what is stated on the package, subject to certain conditions, such as the average of all packages being the stated weight or greater. Suppose that one requirement is that at most $4\%$ of all packages marked $500$ grams can weigh less than $490$ grams. Assuming that a product actually meets this requirement, find the probability that in a random sample of $150$ such packages the proportion weighing less than $490$ grams is at least $3\%$. You may assume that the normal distribution applies. 2. An economist wishes to investigate whether people are keeping cars longer now than in the past. He knows that five years ago, $38\%$ of all passenger vehicles in operation were at least ten years old. He commissions a study in which $325$ automobiles are randomly sampled. Of them, $132$ are ten years old or older. 1. Find the sample proportion. 2. Find the probability that, when a sample of size $325$ is drawn from a population in which the true proportion is $0.38$, the sample proportion will be as large as the value you computed in part (a). You may assume that the normal distribution applies. 3. Give an interpretation of the result in part (b). Is there strong evidence that people are keeping their cars longer than was the case five years ago? 3. A state public health department wishes to investigate the effectiveness of a campaign against smoking. Historically $22\%$ of all adults in the state regularly smoked cigars or cigarettes. In a survey commissioned by the public health department, $279$ of $1,500$ randomly selected adults stated that they smoke regularly. 1. Find the sample proportion. 2. Find the probability that, when a sample of size $1,500$ is drawn from a population in which the true proportion is $0.22$, the sample proportion will be no larger than the value you computed in part (a). You may assume that the normal distribution applies. 3. Give an interpretation of the result in part (b). How strong is the evidence that the campaign to reduce smoking has been effective? 4. In an effort to reduce the population of unwanted cats and dogs, a group of veterinarians set up a low-cost spay/neuter clinic. At the inception of the clinic a survey of pet owners indicated that $78\%$ of all pet dogs and cats in the community were spayed or neutered. After the low-cost clinic had been in operation for three years, that figure had risen to $86\%$. 1. What information is missing that you would need to compute the probability that a sample drawn from a population in which the proportion is $78\%$ (corresponding to the assumption that the low-cost clinic had had no effect) is as high as $86\%$? 2. Knowing that the size of the original sample three years ago was $150$ and that the size of the recent sample was $125$, compute the probability mentioned in part (a). You may assume that the normal distribution applies. 3. Give an interpretation of the result in part (b). How strong is the evidence that the presence of the low-cost clinic has increased the proportion of pet dogs and cats that have been spayed or neutered? 5. An ordinary die is “fair” or “balanced” if each face has an equal chance of landing on top when the die is rolled. Thus the proportion of times a three is observed in a large number of tosses is expected to be close to $1/6$ or $0.1\bar{6}$. Suppose a die is rolled $240$ times and shows three on top $36$ times, for a sample proportion of $0.15$. 1. 
Find the probability that a fair die would produce a proportion of $0.15$ or less. You may assume that the normal distribution applies. 2. Give an interpretation of the result in part (a). How strong is the evidence that the die is not fair? 3. Suppose the sample proportion $0.15$ came from rolling the die $2,400$ times instead of only $240$ times. Rework part (a) under these circumstances. 4. Give an interpretation of the result in part (c). How strong is the evidence that the die is not fair? Answers 1. $\mu _{\widehat{P}}=0.37,\; \sigma _{\widehat{P}}=0.012$ 2. 3. $\mu _{\widehat{P}}=0.76,\; \sigma _{\widehat{P}}=0.012$ 4. 5. $p\pm 3\sqrt{\frac{pq}{n}}=0.25\pm 0.087,\; \text{yes}$ 6. 1. $\hat{p}\pm 3\sqrt{\frac{\hat{p}\hat{q}}{n}}=0.48\pm 0.21,\; \text{yes}$ 2. $\hat{p}\pm 3\sqrt{\frac{\hat{p}\hat{q}}{n}}=0.12\pm 0.14,\; \text{no}$ 3. $\hat{p}\pm 3\sqrt{\frac{\hat{p}\hat{q}}{n}}=0.12\pm 0.10,\; \text{yes}$ 7. 1. $0.4154$ 2. $0.2546$ 8. 1. $0.7850$ 2. $0.9980$ 9. 10. $p\pm 3\sqrt{\frac{pq}{n}}=0.08\pm 0.05$ and $[0.03,0.13]\subset [0,1],0.1210$ 11. 12. $p\pm 3\sqrt{\frac{pq}{n}}=0.02\pm 0.01$ and $[0.01,0.03]\subset [0,1],0.9671$ 13. 1. $0.63$ 2. $0.1446$ 14. 15. $0.9977$ 16. 17. $0.3483$ 18. 19. $0.7357$ 20. 1. $0.186$ 2. $0.0007$ 3. In a population in which the true proportion is $22\%$ the chance that a random sample of size $1,500$ would produce a sample proportion of $18.6\%$ or less is only $7/100$ of $1\%$. This is strong evidence that currently a smaller proportion than $22\%$ smoke. 21. 1. $0.2451$ 2. We would expect a sample proportion of $0.15$ or less in about $24.5\%$ of all samples of size $240$, so this is practically no evidence at all that the die is not fair. 3. $0.0139$ 4. We would expect a sample proportion of $0.15$ or less in only about $1.4\%$ of all samples of size $2,400$, so this is strong evidence that the die is not fair. • Anonymous
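All of the probabilities requested in this chapter come from the normal approximation to the sampling distribution of $\widehat{P}$, which has mean $p$ and standard deviation $\sqrt{pq/n}$. As an optional computational check (not part of the original exercise set), the short sketch below recomputes the fair-die probabilities from the last exercise. It assumes the scipy library is available; the helper name phat_prob_at_most is ours, and small differences from the answer key come from the key rounding $z$ to two decimal places.

```python
# Minimal sketch: normal approximation to the sampling distribution of p-hat.
# Assumes scipy is installed; phat_prob_at_most is a hypothetical helper name.
from math import sqrt
from scipy.stats import norm

def phat_prob_at_most(p, n, x):
    """P(p-hat <= x) when p-hat is approximately normal with mean p
    and standard deviation sqrt(p*(1-p)/n)."""
    sigma = sqrt(p * (1 - p) / n)
    # sample-size check used in the text: p +/- 3*sigma must lie inside [0, 1]
    assert p - 3 * sigma >= 0 and p + 3 * sigma <= 1, "normal approximation not justified"
    return norm.cdf((x - p) / sigma)

print(phat_prob_at_most(1 / 6, 240, 0.15))   # about 0.244 (answer key: 0.2451)
print(phat_prob_at_most(1 / 6, 2400, 0.15))  # about 0.014 (answer key: 0.0139)
```

A probability of the form $P(a\leq \widehat{P}\leq b)$, as in the Basic exercises, is obtained the same way, as the difference of two such cumulative values.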
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 7.1: Large Sample Estimation of a Population Mean Basic 1. A random sample is drawn from a population of known standard deviation $11.3$. Construct a $90\%$ confidence interval for the population mean based on the information given (not all of the information given need be used). 1. $n = 36,\; \bar{x}=105.2,\; s = 11.2$ 2. $n = 100,\; \bar{x}=105.2,\; s = 11.2$ 2. A random sample is drawn from a population of known standard deviation $22.1$. Construct a $95\%$ confidence interval for the population mean based on the information given (not all of the information given need be used). 1. $n =121 ,\; \bar{x}=82.4,\; s = 21.9$ 2. $n =81 ,\; \bar{x}=82.4,\; s = 21.9$ 3. A random sample is drawn from a population of unknown standard deviation. Construct a $99\%$ confidence interval for the population mean based on the information given. 1. $n =49 ,\; \bar{x}=17.1,\; s = 2.1$ 2. $n =169 ,\; \bar{x}=17.1,\; s = 2.1$ 4. A random sample is drawn from a population of unknown standard deviation. Construct a $98\%$ confidence interval for the population mean based on the information given. 1. $n =225 ,\; \bar{x}=92.0,\; s = 8.4$ 2. $n =64 ,\; \bar{x}=92.0,\; s = 8.4$ 5. A random sample of size $144$ is drawn from a population whose distribution, mean, and standard deviation are all unknown. The summary statistics are $\bar{x}=58.2$ and $s = 2.6$. 1. Construct an $80\%$ confidence interval for the population mean $\mu$. 2. Construct a $90\%$ confidence interval for the population mean $\mu$. 3. Comment on why one interval is longer than the other. 6. A random sample of size $256$ is drawn from a population whose distribution, mean, and standard deviation are all unknown. The summary statistics are $\bar{x}=1011$ and $s = 34$. 1. Construct a $90\%$ confidence interval for the population mean $\mu$. 2. Construct a $99\%$ confidence interval for the population mean $\mu$. 3. Comment on why one interval is longer than the other. Applications 1. A government agency was charged by the legislature with estimating the length of time it takes citizens to fill out various forms. Two hundred randomly selected adults were timed as they filled out a particular form. The times required had mean $12.8$ minutes with standard deviation $1.7$ minutes. Construct a $90\%$ confidence interval for the mean time taken for all adults to fill out this form. 2. Four hundred randomly selected working adults in a certain state, including those who worked at home, were asked the distance from their home to their workplace. The average distance was $8.84$ miles with standard deviation $2.70$ miles. Construct a $99\%$ confidence interval for the mean distance from home to work for all residents of this state. 3. On every passenger vehicle that it tests an automotive magazine measures, at true speed $55$ mph, the difference between the true speed of the vehicle and the speed indicated by the speedometer. For $36$ vehicles tested the mean difference was $-1.2$ mph with standard deviation $0.2$ mph. Construct a $90\%$ confidence interval for the mean difference between true speed and indicated speed for all vehicles. 4. A corporation monitors time spent by office workers browsing the web on their computers instead of working. In a sample of computer records of $50$ workers, the average amount of time spent browsing in an eight-hour work day was $27.8$ minutes with standard deviation $8.2$ minutes. 
Construct a $99.5\%$ confidence interval for the mean time spent by all office workers in browsing the web in an eight-hour day. 5. A sample of $250$ workers aged $16$ and older produced an average length of time with the current employer (“job tenure”) of $4.4$ years with standard deviation $3.8$ years. Construct a $99.9\%$ confidence interval for the mean job tenure of all workers aged $16$ or older. 6. The amount of a particular biochemical substance related to bone breakdown was measured in $30$ healthy women. The sample mean and standard deviation were $3.3$ nanograms per milliliter (ng/mL) and $1.4$ ng/mL. Construct an $80\%$ confidence interval for the mean level of this substance in all healthy women. 7. A corporation that owns apartment complexes wishes to estimate the average length of time residents remain in the same apartment before moving out. A sample of $150$ rental contracts gave a mean length of occupancy of $3.7$ years with standard deviation $1.2$ years. Construct a $95\%$ confidence interval for the mean length of occupancy of apartments owned by this corporation. 8. The designer of a garbage truck that lifts roll-out containers must estimate the mean weight the truck will lift at each collection point. A random sample of $325$ containers of garbage on current collection routes yielded $\bar{x}=75.3\; \text{lb},\; s = 12.8\; \text{lb}$. Construct a $99.8\%$ confidence interval for the mean weight the trucks must lift each time. 9. In order to estimate the mean amount of damage sustained by vehicles when a deer is struck, an insurance company examined the records of $50$ such occurrences, and obtained a sample mean of $\$2,785$ with sample standard deviation $\$221$. Construct a $95\%$ confidence interval for the mean amount of damage in all such accidents. 10. In order to estimate the mean FICO credit score of its members, a credit union samples the scores of $95$ members, and obtains a sample mean of $738.2$ with sample standard deviation $64.2$. Construct a $99\%$ confidence interval for the mean FICO score of all of its members. Additional Exercises 1. For all settings a packing machine delivers a precise amount of liquid; the amount dispensed always has standard deviation $0.07$ ounce. To calibrate the machine its setting is fixed and it is operated $50$ times. The mean amount delivered is $6.02$ ounces with sample standard deviation $0.04$ ounce. Construct a $99.5\%$ confidence interval for the mean amount delivered at this setting. Hint: Not all the information provided is needed. 2. A power wrench used on an assembly line applies a precise, preset amount of torque; the torque applied has standard deviation $0.73$ foot-pound at every torque setting. To check that the wrench is operating within specifications it is used to tighten $100$ fasteners. The mean torque applied is $36.95$ foot-pounds with sample standard deviation $0.62$ foot-pound. Construct a $99.9\%$ confidence interval for the mean amount of torque applied by the wrench at this setting. Hint: Not all the information provided is needed. 3. The number of trips to a grocery store per week was recorded for a randomly selected collection of households, with the results shown in the table. $\begin{matrix} 2 & 2 & 2 & 1 & 4 & 2 & 3 & 2 & 5 & 4\\ 2 & 3 & 5 & 0 & 3 & 2 & 3 & 1 & 4 & 3\\ 3 & 2 & 1 & 6 & 2 & 3 & 3 & 2 & 4 & 4 \end{matrix}$ Construct a $95\%$ confidence interval for the average number of trips to a grocery store per week of all households. 4. 
For each of $40$ high school students in one county the number of days absent from school in the previous year was counted, with the results shown in the frequency table. $\begin{array}{c|c c c c c c} x & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline f & 24 & 7 & 5 & 2 & 1 & 1 \end{array}$ Construct a $90\%$ confidence interval for the average number of days absent from school of all students in the county. 5. A town council commissioned a random sample of $85$ households to estimate the number of four-wheel vehicles per household in the town. The results are shown in the following frequency table. $\begin{array}{c|c c c c c c} x & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline f & 1 & 16 & 28 & 22 & 12 & 6 \end{array}$ Construct a $98\%$ confidence interval for the average number of four-wheel vehicles per household in the town. 6. The number of hours per day that a television set was operating was recorded for a randomly selected collection of households, with the results shown in the table. $\begin{matrix} 3.7 & 4.2 & 1.5 & 3.6 & 5.9\\ 4.7 & 8.2 & 3.9 & 2.5 & 4.4\\ 2.1 & 3.6 & 1.1 & 7.3 & 4.2\\ 3.0 & 3.8 & 2.2 & 4.2 & 3.8\\ 4.3 & 2.1 & 2.4 & 6.0 & 3.7\\ 2.5 & 1.3 & 2.8 & 3.0 & 5.6 \end{matrix}$ Construct a $99.8\%$ confidence interval for the mean number of hours that a television set is in operation in all households. Large Data Set Exercises Large Data Set missing from the original 1. Large $\text{Data Set 1}$ records the SAT scores of $1,000$ students. Regarding it as a random sample of all high school students, use it to construct a $99\%$ confidence interval for the mean SAT score of all students. 2. Large $\text{Data Set 1}$ records the GPAs of $1,000$ college students. Regarding it as a random sample of all college students, use it to construct a $95\%$ confidence interval for the mean GPA of all students. 3. Large $\text{Data Set 1}$ lists the SAT scores of $1,000$ students. 1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population mean $\mu$. 2. Regard the first $36$ students as a random sample and use it to construct a $99\%$ confidence interval for the mean $\mu$ of all $1,000$ SAT scores. Does it actually capture the mean $\mu$? 4. Large $\text{Data Set 1}$ lists the GPAs of $1,000$ students. 1. Regard the data as arising from a census of all freshmen at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured. Compute the population mean $\mu$. 2. Regard the first $36$ students as a random sample and use it to construct a $95\%$ confidence interval for the mean $\mu$ of all $1,000$ GPAs. Does it actually capture the mean $\mu$? Answers 1. $105.2\pm 3.10$ 2. $105.2\pm 1.86$ 1. $17.1\pm 0.77$ 2. $17.1\pm 0.42$ 1. $58.2\pm 0.28$ 2. $58.2\pm 0.36$ 3. Asking for greater confidence requires a longer interval. 1. $12.8\pm 0.20$ 2. $-1.2\pm 0.05$ 3. $4.4\pm 0.79$ 4. $3.7\pm 0.19$ 5. $2785\pm 61$ 6. $6.02\pm 0.03$ 7. $2.8\pm 0.48$ 8. $2.54\pm 0.30$ 9. $(1511.43,1546.05)$ 1. $\mu = 1528.74$ 2. $(1428.22,1602.89)$ 7.2: Small Sample Estimation of a Population Mean Basic 1. A random sample is drawn from a normally distributed population of known standard deviation $5$. Construct a $99.8\%$ confidence interval for the population mean based on the information given (not all of the information given need be used). 1. $n = 16,\; \bar{x}=98,\; s = 5.6$ 2. $n = 9,\; \bar{x}=98,\; s = 5.6$ 2. A random sample is drawn from a normally distributed population of known standard deviation $10.7$. 
Construct a $95\%$ confidence interval for the population mean based on the information given (not all of the information given need be used). 1. $n = 25,\; \bar{x}=103.3,\; s = 11.0$ 2. $n = 4,\; \bar{x}=103.3,\; s = 11.0$ 3. A random sample is drawn from a normally distributed population of unknown standard deviation. Construct a $99\%$ confidence interval for the population mean based on the information given. 1. $n = 18,\; \bar{x}=386,\; s = 24$ 2. $n = 7,\; \bar{x}=386,\; s = 24$ 4. A random sample is drawn from a normally distributed population of unknown standard deviation. Construct a $98\%$ confidence interval for the population mean based on the information given. 1. $n = 8,\; \bar{x}=58.3,\; s = 4.1$ 2. $n = 27,\; \bar{x}=58.3,\; s = 4.1$ 5. A random sample of size $14$ is drawn from a normal population. The summary statistics are $\bar{x}=933,\; and\; s = 18$. 1. Construct an $80\%$ confidence interval for the population mean $\mu$. 2. Construct a $90\%$ confidence interval for the population mean $\mu$. 3. Comment on why one interval is longer than the other. 6. A random sample of size $28$ is drawn from a normal population. The summary statistics are $\bar{x}=68.6,\; and\; s = 1.28$. 1. Construct a $95\%$ confidence interval for the population mean $\mu$. 2. Construct a $99.5\%$ confidence interval for the population mean $\mu$. 3. Comment on why one interval is longer than the other. ApplicationExercises 1. City planners wish to estimate the mean lifetime of the most commonly planted trees in urban settings. A sample of $16$ recently felled trees yielded mean age $32.7$ years with standard deviation $3.1$ years. Assuming the lifetimes of all such trees are normally distributed, construct a $99.8\%$ confidence interval for the mean lifetime of all such trees. 2. To estimate the number of calories in a cup of diced chicken breast meat, the number of calories in a sample of four separate cups of meat is measured. The sample mean is $211.8$ calories with sample standard deviation $0.9$ calorie. Assuming the caloric content of all such chicken meat is normally distributed, construct a $95\%$ confidence interval for the mean number of calories in one cup of meat. 3. A college athletic program wishes to estimate the average increase in the total weight an athlete can lift in three different lifts after following a particular training program for six weeks. Twenty-five randomly selected athletes when placed on the program exhibited a mean gain of $47.3$ lb with standard deviation $6.4$ lb. Construct a $90\%$ confidence interval for the mean increase in lifting capacity all athletes would experience if placed on the training program. Assume increases among all athletes are normally distributed. 4. To test a new tread design with respect to stopping distance, a tire manufacturer manufactures a set of prototype tires and measures the stopping distance from $70$ mph on a standard test car. A sample of $25$ stopping distances yielded a sample mean $173$ feet with sample standard deviation $8$ feet. Construct a $98\%$ confidence interval for the mean stopping distance for these tires. Assume a normal distribution of stopping distances. 5. A manufacturer of chokes for shotguns tests a choke by shooting $15$ patterns at targets $40$ yards away with a specified load of shot. The mean number of shot in a $30$-inch circle is $53.5$ with standard deviation $1.6$. 
Construct an $80\%$ confidence interval for the mean number of shot in a $30$-inch circle at $40$ yards for this choke with the specified load. Assume a normal distribution of the number of shot in a $30$-inch circle at $40$ yards for this choke. 6. In order to estimate the speaking vocabulary of three-year-old children in a particular socioeconomic class, a sociologist studies the speech of four children. The mean and standard deviation of the sample are $\bar{x}=1120$ and $s = 215$ words. Assuming that speaking vocabularies are normally distributed, construct an $80\%$ confidence interval for the mean speaking vocabulary of all three-year-old children in this socioeconomic group. 7. A thread manufacturer tests a sample of eight lengths of a certain type of thread made of blended materials and obtains a mean tensile strength of $8.2$ lb with standard deviation $0.06$ lb. Assuming tensile strengths are normally distributed, construct a $90\%$ confidence interval for the mean tensile strength of this thread. 8. An airline wishes to estimate the weight of the paint on a fully painted aircraft of the type it flies. In a sample of four repaintings the average weight of the paint applied was $239$ pounds, with sample standard deviation $8$ pounds. Assuming that weights of paint on aircraft are normally distributed, construct a $99.8\%$ confidence interval for the mean weight of paint on all such aircraft. 9. In a study of dummy foal syndrome, the average time between birth and onset of noticeable symptoms in a sample of six foals was $18.6$ hours, with standard deviation $1.7$ hours. Assuming that the time to onset of symptoms in all foals is normally distributed, construct a $90\%$ confidence interval for the mean time between birth and onset of noticeable symptoms. 10. A sample of $26$ women’s size $6$ dresses had mean waist measurement $25.25$ inches with sample standard deviation $0.375$ inch. Construct a $95\%$ confidence interval for the mean waist measurement of all size $6$ women’s dresses. Assume waist measurements are normally distributed. Additional Exercises 1. Botanists studying attrition among saplings in new growth areas of forests diligently counted stems in six plots in five-year-old new growth areas, obtaining the following counts of stems per acre: $\begin{matrix} 9,432 & 11,026 & 10,539\ 8,773 & 9,868 & 10,247 \end{matrix}$ Construct an $80\%$ confidence interval for the mean number of stems per acre in all five-year-old new growth areas of forests. Assume that the number of stems per acre is normally distributed. 2. Nutritionists are investigating the efficacy of a diet plan designed to increase the caloric intake of elderly people. The increase in daily caloric intake in $12$ individuals who are put on the plan is (a minus sign signifies that calories consumed went down): $\begin{matrix} 121 & 284 & -94 & 295 & 183 & 312\ 188 & -102 & 259 & 226 & 152 & 167 \end{matrix}$ Construct a $99.8\%$ confidence interval for the mean increase in caloric intake for all people who are put on this diet. Assume that population of differences in intake is normally distributed. 3. A machine for making precision cuts in dimension lumber produces studs with lengths that vary with standard deviation $0.003$ inch. Five trial cuts are made to check the machine’s calibration. The mean length of the studs produced is $104.998$ inches with sample standard deviation $0.004$ inch. Construct a $99.5\%$ confidence interval for the mean lengths of all studs cut by this machine. 
Assume lengths are normally distributed. Hint: Not all the numbers given in the problem are used. 4. The variation in time for a baked good to go through a conveyor oven at a large scale bakery has standard deviation $0.017$ minute at every time setting. To check the bake time of the oven periodically four batches of goods are carefully timed. The recent check gave a mean of $27.2$ minutes with sample standard deviation $0.012$ minute. Construct a $99.8\%$ confidence interval for the mean bake time of all batches baked in this oven. Assume bake times are normally distributed. Hint: Not all the numbers given in the problem are used. 5. Wildlife researchers tranquilized and weighed three adult male polar bears. The data (in pounds) are: $926, 742, 1109$. Assume the weights of all bears are normally distributed. 1. Construct an $80\%$ confidence interval for the mean weight of all adult male polar bears using these data. 2. Convert the three weights in pounds to weights in kilograms using the conversion $1\; \text{lb} = 0.453\; \text{kg}$ (so the first datum changes to $(926)(0.453)=419$). Use the converted data to construct an $80\%$ confidence interval for the mean weight of all adult male polar bears expressed in kilograms. 3. Convert your answer in part (a) into kilograms directly and compare it to your answer in (b). This illustrates that if you construct a confidence interval in one system of units you can convert it directly into another system of units without having to convert all the data to the new units. 6. Wildlife researchers trapped and measured six adult male collared lemmings. The data (in millimeters) are: $104, 99, 112, 115, 96, 109$. Assume the lengths of all lemmings are normally distributed. 1. Construct a $90\%$ confidence interval for the mean length of all adult male collared lemmings using these data. 2. Convert the six lengths in millimeters to lengths in inches using the conversion $1\; \text{mm} = 0.039\; \text{in}$ (so the first datum changes to $(104)(0.039) = 4.06$). Use the converted data to construct a $90\%$ confidence interval for the mean length of all adult male collared lemmings expressed in inches. 3. Convert your answer in part (a) into inches directly and compare it to your answer in (b). This illustrates that if you construct a confidence interval in one system of units you can convert it directly into another system of units without having to convert all the data to the new units. Answers 1. $98\pm 3.9$ 2. $98\pm 5.2$ 1. $386\pm 16.4$ 2. $386\pm 33.6$ 1. $933\pm 6.5$ 2. $933\pm 8.5$ 3. Asking for greater confidence requires a longer interval. 1. $32.7\pm 2.9$ 2. $47.3\pm 2.19$ 3. $53.5\pm 0.56$ 4. $8.2\pm 0.04$ 5. $18.6\pm 1.4$ 6. $9981\pm 486$ 7. $104.998\pm 0.004$ 1. $926\pm 200$ 2. $419\pm 90$ 3. $419\pm 91$ 7.3: Large Sample Estimation of a Population Proportion Basic 1. Information about a random sample is given. Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. Then construct a $90\%$ confidence interval for the population proportion. 1. $n = 25, \hat{p}=0.7$ 2. $n = 50, \hat{p}=0.7$ 2. Information about a random sample is given. Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. Then construct a $95\%$ confidence interval for the population proportion. 1. $n = 2500, \hat{p}=0.22$ 2. $n = 1200, \hat{p}=0.22$ 3. Information about a random sample is given. 
Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. Then construct a $98\%$ confidence interval for the population proportion. 1. $n = 80, \hat{p}=0.4$ 2. $n = 325, \hat{p}=0.4$ 4. Information about a random sample is given. Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. Then construct a $99.5\%$ confidence interval for the population proportion. 1. $n = 200, \hat{p}=0.85$ 2. $n = 75, \hat{p}=0.85$ 5. In a random sample of size $1,100$, $338$ have the characteristic of interest. 1. Compute the sample proportion $\hat{p}$ with the characteristic of interest. 2. Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. 3. Construct an $80\%$ confidence interval for the population proportion $p$. 4. Construct a $90\%$ confidence interval for the population proportion $p$. 5. Comment on why one interval is longer than the other. 6. In a random sample of size $2,400$, $420$ have the characteristic of interest. 1. Compute the sample proportion $\hat{p}$ with the characteristic of interest. 2. Verify that the sample is large enough to use it to construct a confidence interval for the population proportion. 3. Construct a $90\%$ confidence interval for the population proportion $p$. 4. Construct a $99\%$ confidence interval for the population proportion $p$. 5. Comment on why one interval is longer than the other. Q7.3.7 A security feature on some web pages is graphic representations of words that are readable by human beings but not machines. When a certain design format was tested on $450$ subjects, by having them attempt to read ten disguised words, $448$ subjects could read all the words. 1. Give a point estimate of the proportion $p$ of all people who could read words disguised in this way. 2. Show that the sample is not sufficiently large to construct a confidence interval for the proportion of all people who could read words disguised in this way. Q7.3.8 In a random sample of $900$ adults, $42$ defined themselves as vegetarians. 1. Give a point estimate of the proportion of all adults who would define themselves as vegetarians. 2. Verify that the sample is sufficiently large to use it to construct a confidence interval for that proportion. 3. Construct an $80\%$ confidence interval for the proportion of all adults who would define themselves as vegetarians. Q7.3.9 In a random sample of $250$ employed people, $61$ said that they bring work home with them at least occasionally. 1. Give a point estimate of the proportion of all employed people who bring work home with them at least occasionally. 2. Construct a $99\%$ confidence interval for that proportion. Q7.3.10 In a random sample of $1,250$ household moves, $822$ were moves to a location within the same county as the original residence. 1. Give a point estimate of the proportion of all household moves that are to a location within the same county as the original residence. 2. Construct a $98\%$ confidence interval for that proportion. Q7.3.11 In a random sample of $12,447$ hip replacement or revision surgery procedures nationwide, $162$ patients developed a surgical site infection. 1. Give a point estimate of the proportion of all patients undergoing a hip surgery procedure who develop a surgical site infection. 2. Verify that the sample is sufficiently large to use it to construct a confidence interval for that proportion. 3. 
Construct a $95\%$ confidence interval for the proportion of all patients undergoing a hip surgery procedure who develop a surgical site infection. Q7.3.12 In a certain region prepackaged products labeled $500$ g must contain on average at least $500$ grams of the product, and at least $90\%$ of all packages must weigh at least $490$ grams. In a random sample of $300$ packages, $288$ weighed at least $490$ grams. 1. Give a point estimate of the proportion of all packages that weigh at least $490$ grams. 2. Verify that the sample is sufficiently large to use it to construct a confidence interval for that proportion. 3. Construct a $99.8\%$ confidence interval for the proportion of all packages that weigh at least $490$ grams. Q7.3.13 A survey of $50$ randomly selected adults in a small town asked them if their opinion on a proposed “no cruising” restriction late at night. Responses were coded $1$ for in favor, $0$ for indifferent, and $2$ for opposed, with the results shown in the table.$\begin{matrix} 1 & 0 & 2 & 0 & 1 & 0 & 0 & 1 & 1 & 2\ 0 & 2 & 0 & 0 & 0 & 1 & 0 & 2 & 0 & 0 \ 0 & 2 & 1 & 2 & 0 & 0 & 0 & 2 & 0 & 1\ 0 & 2 & 0 & 2 & 0 & 1 & 0 & 0 & 2 & 0\ 1 & 0 & 0 & 1 & 2 & 0 & 0 & 2 & 1 & 2 \end{matrix}$ 1. Give a point estimate of the proportion of all adults in the community who are indifferent concerning the proposed restriction. 2. Assuming that the sample is sufficiently large, construct a $90\%$ confidence interval for the proportion of all adults in the community who are indifferent concerning the proposed restriction. Q7.3.14 To try to understand the reason for returned goods, the manager of a store examines the records on $40$ products that were returned in the last year. Reasons were coded by $1$ for “defective,” $2$ for “unsatisfactory,” and $0$ for all other reasons, with the results shown in the table. $\begin{matrix} 0 & 2 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2\ 0 & 0 & 2 & 0 & 0 & 0 & 0 & 2 & 0 & 0\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{matrix}$ 1. Give a point estimate of the proportion of all returns that are because of something wrong with the product, that is, either defective or performed unsatisfactorily. 2. Assuming that the sample is sufficiently large, construct an $80\%$ confidence interval for the proportion of all returns that are because of something wrong with the product. Q7.3.15 In order to estimate the proportion of entering students who graduate within six years, the administration at a state university examined the records of $600$ randomly selected students who entered the university six years ago, and found that $312$ had graduated. 1. Give a point estimate of the six-year graduation rate, the proportion of entering students who graduate within six years. 2. Assuming that the sample is sufficiently large, construct a $98\%$ confidence interval for the six-year graduation rate. Q7.3.16 In a random sample of $2,300$ mortgages taken out in a certain region last year, $187$ were adjustable-rate mortgages. 1. Give a point estimate of the proportion of all mortgages taken out in this region last year that were adjustable-rate mortgages. 2. Assuming that the sample is sufficiently large, construct a $99.9\%$ confidence interval for the proportion of all mortgages taken out in this region last year that were adjustable-rate mortgages. 
Q7.3.17 In a research study in cattle breeding, $159$ of $273$ cows in several herds that were in estrus were detected by means of an intensive once a day, one-hour observation of the herds in early morning. 1. Give a point estimate of the proportion of all cattle in estrus who are detected by this method. 2. Assuming that the sample is sufficiently large, construct a $90\%$ confidence interval for the proportion of all cattle in estrus who are detected by this method. Q7.3.18 A survey of $21,250$ households concerning telephone service gave the results shown in the table. $\begin{array}{c|c c} & \text{Landline} & \text{No Landline} \\ \hline \text{Cell phone} & 12,474 & 5,844 \\ \text{No cell phone} & 2,529 & 403 \end{array}$ 1. Give a point estimate for the proportion of all households in which there is a cell phone but no landline. 2. Assuming the sample is sufficiently large, construct a $99.9\%$ confidence interval for the proportion of all households in which there is a cell phone but no landline. 3. Give a point estimate for the proportion of all households in which there is no telephone service of either kind. 4. Assuming the sample is sufficiently large, construct a $99.9\%$ confidence interval for the proportion of all households in which there is no telephone service of either kind. Additional Exercises 1. In a random sample of $900$ adults, $42$ defined themselves as vegetarians. Of these $42$, $29$ were women. 1. Give a point estimate of the proportion of all self-described vegetarians who are women. 2. Verify that the sample is sufficiently large to use it to construct a confidence interval for that proportion. 3. Construct a $90\%$ confidence interval for the proportion of all self-described vegetarians who are women. 2. A random sample of $185$ college soccer players who had suffered injuries that resulted in loss of playing time was made with the results shown in the table. Injuries are classified according to severity of the injury and the condition under which it was sustained. $\begin{array}{c|c c c} & \text{Minor} & \text{Moderate} & \text{Serious} \\ \hline \text{Practice} & 48 & 20 & 6 \\ \text{Game} & 62 & 32 & 17 \end{array}$ 1. Give a point estimate for the proportion $p$ of all injuries to college soccer players that are sustained in practice. 2. Construct a $95\%$ confidence interval for the proportion $p$ of all injuries to college soccer players that are sustained in practice. 3. Give a point estimate for the proportion $p$ of all injuries to college soccer players that are either moderate or serious. 4. Construct a $95\%$ confidence interval for the proportion $p$ of all injuries to college soccer players that are either moderate or serious. 3. The body mass index (BMI) was measured in $1,200$ randomly selected adults, with the results shown in the table. $\begin{array}{c|c c c} \text{BMI} & \text{Under 18.5} & \text{18.5–25} & \text{Over 25} \\ \hline \text{Men} & 36 & 165 & 315 \\ \text{Women} & 75 & 274 & 335 \end{array}$ 1. Give a point estimate for the proportion of all men whose BMI is over $25$. 2. Assuming the sample is sufficiently large, construct a $99\%$ confidence interval for the proportion of all men whose BMI is over $25$. 3. Give a point estimate for the proportion of all adults, regardless of gender, whose BMI is over $25$. 4. Assuming the sample is sufficiently large, construct a $99\%$ confidence interval for the proportion of all adults, regardless of gender, whose BMI is over $25$. 4. Confidence intervals constructed using the formula in this section often do not do as well as expected unless $n$ is quite large, especially when the true population proportion is close to either $0$ or $1$. 
In such cases a better result is obtained by adding two successes and two failures to the actual data and then computing the confidence interval. This is the same as using the formula $\tilde{p}\pm z_{\alpha /2}\sqrt{\frac{\tilde{p}(1-\tilde{p})}{\tilde{n}}}\ \text{where}\ \tilde{p}=\frac{x+2}{n+4}\; \text{and}\; \tilde{n}=n+4$ Suppose that in a random sample of $600$ households, $12$ had no telephone service of any kind. Use the adjusted confidence interval procedure just described to form a $99.9\%$ confidence interval for the proportion of all households that have no telephone service of any kind. Large Data Set Exercises Large Data Set missing from the original 1. Large $\text{Data Sets 4 and 4A}$ list the results of $500$ tosses of a die. Let $p$ denote the proportion of all tosses of this die that would result in a four. Use the sample data to construct a $90\%$ confidence interval for $p$. 2. Large $\text{Data Set 6}$ records results of a random survey of $200$ voters in each of two regions, in which they were asked to express whether they prefer Candidate $A$ for a U.S. Senate seat or prefer some other candidate. Use the full data set ($400$ observations) to construct a $98\%$ confidence interval for the proportion $p$ of all voters who prefer Candidate $A$. 3. Lines $2$ through $536$ in $\text{Data Set 11}$ is a sample of $535$ real estate sales in a certain region in $2008$. Those that were foreclosure sales are identified with a $1$ in the second column. 1. Use these data to construct a point estimate $\hat{p}$ of the proportion $p$ of all real estate sales in this region in $2008$ that were foreclosure sales. 2. Use these data to construct a $90\%$ confidence for $p$. 4. Lines $537$ through $1106$ in Large $\text{Data Set 11}$ is a sample of $570$ real estate sales in a certain region in $2010$. Those that were foreclosure sales are identified with a $1$ in the second column. 1. Use these data to construct a point estimate $\hat{p}$ of the proportion $p$ of all real estate sales in this region in $2010$ that were foreclosure sales. 2. Use these data to construct a $90\%$ confidence for $p$. Answers 1. $(0.5492, 0.8508)$ 2. $(0.5934, 0.8066)$ 1. $(0.2726, 0.5274)$ 2. $(0.3368, 0.4632)$ 1. $0.3073$ 2. $\hat{p}\pm 3\sqrt{\frac{\hat{p}\hat{q}}{n}}=0.31\pm 0.04\; \text{and}\; [0.27,0.35]\subset [0,1]$ 3. $(0.2895, 0.3251)$ 4. $(0.2844, 0.3302)$ 5. Asking for greater confidence requires a longer interval. 1. $0.9956$ 2. $(0.9862, 1.005)$ 1. $0.244$ 2. $(0.1740, 0.3140)$ 1. $0.013$ 2. $(0.01, 0.016)$ 3. $(0.011, 0.015)$ 1. $0.52$ 2. $(0.4038, 0.6362)$ 1. $0.52$ 2. $(0.4726, 0.5674)$ 1. $0.5824$ 2. $(0.5333, 0.6315)$ 1. $0.69$ 2. $\hat{p}\pm 3\sqrt{\frac{\hat{p}\hat{q}}{n}}=0.69\pm 0.21\; \text{and}\; [0.48,0.90]\subset [0,1]$ 3. $0.69\pm 0.12$ 1. $0.6105$ 2. $(0.5552, 0.6658)$ 3. $0.5583$ 4. $(0.5214, 0.5952)$ 1. $(0.1368,0.1912)$ 1. $\hat{p}=0.2280$ 2. $(0.1982,0.2579)$ 7.4: Sample Size Considerations Basic 1. Estimate the minimum sample size needed to form a confidence interval for the mean of a population having the standard deviation shown, meeting the criteria given. 1. $\sigma = 30, 95\%$ confidence, $E = 10$ 2. $\sigma = 30, 99\%$ confidence, $E = 10$ 3. $\sigma = 30, 95\%$ confidence, $E = 5$ 2. Estimate the minimum sample size needed to form a confidence interval for the mean of a population having the standard deviation shown, meeting the criteria given. 1. $\sigma = 4, 95\%$ confidence, $E = 1$ 2. $\sigma = 4, 99\%$ confidence, $E = 1$ 3. 
$\sigma = 4, 95\%$ confidence, $E = 0.5$ 3. Estimate the minimum sample size needed to form a confidence interval for the proportion of a population that has a particular characteristic, meeting the criteria given. 1. $p\approx 0.37, 80\%$ confidence, $E = 0.05$ 2. $p\approx 0.37, 90\%$ confidence, $E = 0.05$ 3. $p\approx 0.37, 80\%$ confidence, $E = 0.01$ 4. Estimate the minimum sample size needed to form a confidence interval for the proportion of a population that has a particular characteristic, meeting the criteria given. 1. $p\approx 0.81, 95\%$ confidence, $E = 0.02$ 2. $p\approx 0.81, 99\%$ confidence, $E = 0.02$ 3. $p\approx 0.81, 95\%$ confidence, $E = 0.01$ 5. Estimate the minimum sample size needed to form a confidence interval for the proportion of a population that has a particular characteristic, meeting the criteria given. 1. $80\%$ confidence, $E = 0.05$ 2. $90\%$ confidence, $E = 0.05$ 3. $80\%$ confidence, $E = 0.01$ 6. Estimate the minimum sample size needed to form a confidence interval for the proportion of a population that has a particular characteristic, meeting the criteria given. 1. $95\%$ confidence, $E = 0.02$ 2. $99\%$ confidence, $E = 0.02$ 3. $95\%$ confidence, $E = 0.01$ Applications 1. A software engineer wishes to estimate, to within $5$ seconds, the mean time that a new application takes to start up, with $95\%$ confidence. Estimate the minimum size sample required if the standard deviation of start up times for similar software is $12$ seconds. 2. A real estate agent wishes to estimate, to within $\$2.50$, the mean retail cost per square foot of newly built homes, with $80\%$ confidence. He estimates the standard deviation of such costs at $\$5.00$. Estimate the minimum size sample required. 3. An economist wishes to estimate, to within $2$ minutes, the mean time that employed persons spend commuting each day, with $95\%$ confidence. On the assumption that the standard deviation of commuting times is $8$ minutes, estimate the minimum size sample required. 4. A motor club wishes to estimate, to within $1$ cent, the mean price of $1$ gallon of regular gasoline in a certain region, with $98\%$ confidence. Historically the variability of prices is measured by $\sigma =\$0.03$. Estimate the minimum size sample required. 5. A bank wishes to estimate, to within $\$25$, the mean average monthly balance in its checking accounts, with $99.8\%$ confidence. Assuming $\sigma =\$250$, estimate the minimum size sample required. 6. A retailer wishes to estimate, to within $15$ seconds, the mean duration of telephone orders taken at its call center, with $99.5\%$ confidence. In the past the standard deviation of call length has been about $1.25$ minutes. Estimate the minimum size sample required. (Be careful to express all the information in the same units.) 7. The administration at a college wishes to estimate, to within two percentage points, the proportion of all its entering freshmen who graduate within four years, with $90\%$ confidence. Estimate the minimum size sample required. 8. A chain of automotive repair stores wishes to estimate, to within five percentage points, the proportion of all passenger vehicles in operation that are at least five years old, with $98\%$ confidence. Estimate the minimum size sample required. 9. An internet service provider wishes to estimate, to within one percentage point, the current proportion of all email that is spam, with $99.9\%$ confidence. Last year the proportion that was spam was $71\%$. Estimate the minimum size sample required. 
10. An agronomist wishes to estimate, to within one percentage point, the proportion of a new variety of seed that will germinate when planted, with $95\%$ confidence. A typical germination rate is $97\%$. Estimate the minimum size sample required. 11. A charitable organization wishes to estimate, to within half a percentage point, the proportion of all telephone solicitations to its donors that result in a gift, with $90\%$ confidence. Estimate the minimum sample size required, using the information that in the past the response rate has been about $30\%$. 12. A government agency wishes to estimate the proportion of drivers aged $16-24$ who have been involved in a traffic accident in the last year. It wishes to make the estimate to within one percentage point and at $90\%$ confidence. Find the minimum sample size required, using the information that several years ago the proportion was $0.12$. Additional Exercises 1. An economist wishes to estimate, to within six months, the mean time between sales of existing homes, with $95\%$ confidence. Estimate the minimum size sample required. In his experience virtually all houses are re-sold within $40$ months, so using the Empirical Rule he will estimate $\sigma$ by one-sixth the range, or $40/6=6.7$. 2. A wildlife manager wishes to estimate the mean length of fish in a large lake, to within one inch, with $80\%$ confidence. Estimate the minimum size sample required. In his experience virtually no fish caught in the lake is over $23$ inches long, so using the Empirical Rule he will estimate $\sigma$ by one-sixth the range, or $23/6=3.8$. 3. You wish to estimate the current mean birth weight of all newborns in a certain region, to within $1$ ounce ($1/16$ pound) and with $95\%$ confidence. A sample will cost $\$400$ plus $\$1.50$ for every newborn weighed. You believe the standard deviation of weights to be no more than $1.25$ pounds. You have $\$2,500$ to spend on the study. 1. Can you afford the sample required? 2. If not, what are your options? 4. You wish to estimate a population proportion to within three percentage points, at $95\%$ confidence. A sample will cost $\$500$ plus $50$ cents for every sample element measured. You have $\$1,000$ to spend on the study. 1. Can you afford the sample required? 2. If not, what are your options? Answers 1. $35$ 2. $60$ 3. $139$ 1. $154$ 2. $253$ 3. $3832$ 1. $165$ 2. $271$ 3. $4109$ 1. $23$ 2. $62$ 3. $955$ 4. $1692$ 5. $22,301$ 6. $22,731$ 7. $5$ 1. no 2. decrease the confidence level
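The exercises in this chapter reduce to a few formulas: the large-sample interval $\bar{x}\pm z_{\alpha /2}\,\frac{s}{\sqrt{n}}$, the proportion interval $\hat{p}\pm z_{\alpha /2}\sqrt{\hat{p}\hat{q}/n}$, and the minimum sample sizes $n=\left(\frac{z_{\alpha /2}\sigma }{E}\right)^2$ and $n=\frac{z_{\alpha /2}^2\,\hat{p}\hat{q}}{E^2}$, always rounded up. The sketch below is an optional machine check of a few answer-key values, not part of the original exercise set; it assumes scipy is available, and the helper names are ours.

```python
# Minimal sketch of the chapter-7 formulas; assumes scipy is installed and
# the helper names (z_interval_mean, min_n_mean, min_n_proportion) are ours.
from math import sqrt, ceil
from scipy.stats import norm

def z_interval_mean(xbar, sigma, n, conf):
    """Large-sample interval xbar +/- z_{alpha/2} * sigma / sqrt(n)."""
    z = norm.ppf(1 - (1 - conf) / 2)
    e = z * sigma / sqrt(n)
    return xbar - e, xbar + e

def min_n_mean(sigma, conf, E):
    """Smallest n with z_{alpha/2} * sigma / sqrt(n) <= E (rounded up)."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil((z * sigma / E) ** 2)

def min_n_proportion(conf, E, p=0.5):
    """Sample size for estimating a proportion; use p = 0.5 when no estimate is available."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil(z ** 2 * p * (1 - p) / E ** 2)

print(z_interval_mean(105.2, 11.3, 36, 0.90))  # about 105.2 +/- 3.10 (7.1 Basic 1a)
print(min_n_mean(30, 0.95, 10))                # 35  (7.4 Basic 1a)
print(min_n_proportion(0.90, 0.05, p=0.37))    # 253 (7.4 Basic 3b)
```

Estimated minimum sample sizes are always rounded up to the next whole number, which is why the sketch uses the ceiling rather than ordinary rounding.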
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 8.1: The Elements of Hypothesis Testing Q8.1.1 State the null and alternative hypotheses for each of the following situations. (That is, identify the correct number $\mu _0$ and write $H_0:\mu =\mu _0$ and the appropriate analogous expression for $H_a$.) 1. The average July temperature in a region historically has been $74.5^{\circ}F$. Perhaps it is higher now. 2. The average weight of a female airline passenger with luggage was $145$ pounds ten years ago. The FAA believes it to be higher now. 3. The average stipend for doctoral students in a particular discipline at a state university is $\$14,756$. The department chairman believes that the national average is higher. 4. The average room rate in hotels in a certain region is $\$82.53$. A travel agent believes that the average in a particular resort area is different. 5. The average farm size in a predominately rural state was $69.4$ acres. The secretary of agriculture of that state asserts that it is less today. Q8.1.2 State the null and alternative hypotheses for each of the following situations. (That is, identify the correct number $\mu _0$ and write $H_0:\mu =\mu _0$ and the appropriate analogous expression for $H_a$.) 1. The average time workers spent commuting to work in Verona five years ago was $38.2$ minutes. The Verona Chamber of Commerce asserts that the average is less now. 2. The mean salary for all men in a certain profession is $\$58,291$. A special interest group thinks that the mean salary for women in the same profession is different. 3. The accepted figure for the caffeine content of an $8$-ounce cup of coffee is $133$ mg. A dietitian believes that the average for coffee served in local restaurants is higher. 4. The average yield per acre for all types of corn in a recent year was $161.9$ bushels. An economist believes that the average yield per acre is different this year. 5. An industry association asserts that the average age of all self-described fly fishermen is $42.8$ years. A sociologist suspects that it is higher. Q8.1.3 Describe the two types of errors that can be made in a test of hypotheses. Q8.1.4 Under what circumstance is a test of hypotheses certain to yield a correct decision? Answers 1. $H_0:\mu =74.5\; vs\; H_a:\mu >74.5$ 2. $H_0:\mu =145\; vs\; H_a:\mu >145$ 3. $H_0:\mu =14756\; vs\; H_a:\mu >14756$ 4. $H_0:\mu =82.53\; vs\; H_a:\mu \neq 82.53$ 5. $H_0:\mu =69.4\; vs\; H_a:\mu <69.4$ 1. A Type I error is made when a true $H_0$ is rejected. A Type II error is made when a false $H_0$ is not rejected. 8.2: Large Sample Tests for a Population Mean Basic 1. Find the rejection region (for the standardized test statistic) for each hypothesis test. 1. $H_0:\mu =27\; vs\; H_a:\mu <27\; @\; \alpha =0.05$ 2. $H_0:\mu =52\; vs\; H_a:\mu \neq 52\; @\; \alpha =0.05$ 3. $H_0:\mu =-105\; vs\; H_a:\mu >-105\; @\; \alpha =0.10$ 4. $H_0:\mu =78.8\; vs\; H_a:\mu \neq 78.8\; @\; \alpha =0.10$ 2. Find the rejection region (for the standardized test statistic) for each hypothesis test. 1. $H_0:\mu =17\; vs\; H_a:\mu <17\; @\; \alpha =0.01$ 2. $H_0:\mu =880\; vs\; H_a:\mu \neq 880\; @\; \alpha =0.01$ 3. $H_0:\mu =-12\; vs\; H_a:\mu >-12\; @\; \alpha =0.05$ 4. $H_0:\mu =21.1\; vs\; H_a:\mu \neq 21.1\; @\; \alpha =0.05$ 3. Find the rejection region (for the standardized test statistic) for each hypothesis test. Identify the test as left-tailed, right-tailed, or two-tailed. 1. 
$H_0:\mu =141\; vs\; H_a:\mu <141\; @\; \alpha =0.20$ 2. $H_0:\mu =-54\; vs\; H_a:\mu <-54\; @\; \alpha =0.05$ 3. $H_0:\mu =98.6\; vs\; H_a:\mu \neq 98.6\; @\; \alpha =0.05$ 4. $H_0:\mu =3.8\; vs\; H_a:\mu >3.8\; @\; \alpha =0.001$ 4. Find the rejection region (for the standardized test statistic) for each hypothesis test. Identify the test as left-tailed, right-tailed, or two-tailed. 1. $H_0:\mu =-62\; vs\; H_a:\mu \neq -62\; @\; \alpha =0.005$ 2. $H_0:\mu =73\; vs\; H_a:\mu >73\; @\; \alpha =0.001$ 3. $H_0:\mu =1124\; vs\; H_a:\mu <1124\; @\; \alpha =0.001$ 4. $H_0:\mu =0.12\; vs\; H_a:\mu \neq 0.12\; @\; \alpha =0.001$ 5. Compute the value of the test statistic for the indicated test, based on the information given. 1. Testing $H_0:\mu =72.2\; vs\; H_a:\mu >72.2,\; \sigma \; \text{unknown}\; n=55,\; \bar{x}=75.1,\; s=9.25$ 2. Testing $H_0:\mu =58\; vs\; H_a:\mu >58,\; \sigma =1.22\; n=40,\; \bar{x}=58.5,\; s=1.29$ 3. Testing $H_0:\mu =-19.5\; vs\; H_a:\mu <-19.5,\; \sigma \; \text{unknown}\; n=30,\; \bar{x}=-23.2,\; s=9.55$ 4. Testing $H_0:\mu =805\; vs\; H_a:\mu \neq 805,\; \sigma =37.5\; n=75,\; \bar{x}=818,\; s=36.2$ 6. Compute the value of the test statistic for the indicated test, based on the information given. 1. Testing $H_0:\mu =342\; vs\; H_a:\mu <342,\; \sigma =11.2\; n=40,\; \bar{x}=339,\; s=10.3$ 2. Testing $H_0:\mu =105\; vs\; H_a:\mu >105,\; \sigma =5.3\; n=80,\; \bar{x}=107,\; s=5.1$ 3. Testing $H_0:\mu =-13.5\; vs\; H_a:\mu \neq -13.5,\; \sigma \; \text{unknown}\; n=32,\; \bar{x}=-13.8,\; s=1.5$ 4. Testing $H_0:\mu =28\; vs\; H_a:\mu \neq 28,\; \sigma \; \text{unknown}\; n=68,\; \bar{x}=27.8,\; s=1.3$ 7. Perform the indicated test of hypotheses, based on the information given. 1. Test $H_0:\mu =212\; vs\; H_a:\mu <212\; @\; \alpha =0.10,\; \sigma \; \text{unknown}\; n=36,\; \bar{x}=211.2,\; s=2.2$ 2. Test $H_0:\mu =-18\; vs\; H_a:\mu >-18\; @\; \alpha =0.05,\; \sigma =3.3\; n=44,\; \bar{x}=-17.2,\; s=3.1$ 3. Test $H_0:\mu =24\; vs\; H_a:\mu \neq 24\; @\; \alpha =0.02,\; \sigma \; \text{unknown}\; n=50,\; \bar{x}=22.8,\; s=1.9$ 8. Perform the indicated test of hypotheses, based on the information given. 1. Test $H_0:\mu =105\; vs\; H_a:\mu >105\; @\; \alpha =0.05,\; \sigma \; \text{unknown}\; n=30,\; \bar{x}=108,\; s=7.2$ 2. Test $H_0:\mu =21.6\; vs\; H_a:\mu <21.6\; @\; \alpha =0.01,\; \sigma \; \text{unknown}\; n=78,\; \bar{x}=20.5,\; s=3.9$ 3. Test $H_0:\mu =-375\; vs\; H_a:\mu \neq -375\; @\; \alpha =0.01,\; \sigma =18.5\; n=31,\; \bar{x}=-388,\; s=18.0$ Applications 1. In the past the average length of an outgoing telephone call from a business office has been $143$ seconds. A manager wishes to check whether that average has decreased after the introduction of policy changes. A sample of $100$ telephone calls produced a mean of $133$ seconds, with a standard deviation of $35$ seconds. Perform the relevant test at the $1\%$ level of significance. 2. The government of an impoverished country reports the mean age at death among those who have survived to adulthood as $66.2$ years. A relief agency examines $30$ randomly selected deaths and obtains a mean of $62.3$ years with standard deviation $8.1$ years. Test whether the agency’s data support the alternative hypothesis, at the $1\%$ level of significance, that the population mean is less than $66.2$. 3. The average household size in a certain region several years ago was $3.14$ persons. A sociologist wishes to test, at the $5\%$ level of significance, whether it is different now. 
Perform the test using the information collected by the sociologist: in a random sample of $75$ households, the average size was $2.98$ persons, with sample standard deviation $0.82$ person. 4. The recommended daily calorie intake for teenage girls is $2,200$ calories/day. A nutritionist at a state university believes the average daily caloric intake of girls in that state to be lower. Test that hypothesis, at the $5\%$ level of significance, against the null hypothesis that the population average is $2,200$ calories/day using the following sample data: $n=36,\; \bar{x}=2,150,\; s=203$ 5. An automobile manufacturer recommends oil change intervals of $3,000$ miles. To compare actual intervals to the recommendation, the company randomly samples records of $50$ oil changes at service facilities and obtains sample mean $3,752$ miles with sample standard deviation $638$ miles. Determine whether the data provide sufficient evidence, at the $5\%$ level of significance, that the population mean interval between oil changes exceeds $3,000$ miles. 6. A medical laboratory claims that the mean turn-around time for performance of a battery of tests on blood samples is $1.88$ business days. The manager of a large medical practice believes that the actual mean is larger. A random sample of $45$ blood samples yielded mean $2.09$ and sample standard deviation $0.13$ day. Perform the relevant test at the $10\%$ level of significance, using these data. 7. A grocery store chain has as one standard of service that the mean time customers wait in line to begin checking out not exceed $2$ minutes. To verify the performance of a store the company measures the waiting time in $30$ instances, obtaining mean time $2.17$ minutes with standard deviation $0.46$ minute. Use these data to test the null hypothesis that the mean waiting time is $2$ minutes versus the alternative that it exceeds $2$ minutes, at the $10\%$ level of significance. 8. A magazine publisher tells potential advertisers that the mean household income of its regular readership is $\$61,500$. An advertising agency wishes to test this claim against the alternative that the mean is smaller. A sample of $40$ randomly selected regular readers yields mean income $\$59,800$ with standard deviation $\$5,850$. Perform the relevant test at the $1\%$ level of significance. 9. Authors of a computer algebra system wish to compare the speed of a new computational algorithm to the currently implemented algorithm. They apply the new algorithm to $50$ standard problems; it averages $8.16$ seconds with standard deviation $0.17$ second. The current algorithm averages $8.21$ seconds on such problems. Test, at the $1\%$ level of significance, the alternative hypothesis that the new algorithm has a lower average time than the current algorithm. 10. A random sample of the starting salaries of $35$ randomly selected graduates with bachelor’s degrees last year gave sample mean and standard deviation $\$41,202$ and $\$7,621$, respectively. Test whether the data provide sufficient evidence, at the $5\%$ level of significance, to conclude that the mean starting salary of all graduates last year is less than the mean of all graduates two years before, $\$43,589$. Additional Exercises 1. The mean household income in a region served by a chain of clothing stores is $\$48,750$. In a sample of $40$ customers taken at various stores the mean income of the customers was $\$51,505$ with standard deviation $\$6,852$. 1. 
Test at the $10\%$ level of significance the null hypothesis that the mean household income of customers of the chain is $\$48,750$ against the alternative that it is different from $\$48,750$. 2. The sample mean is greater than $\$48,750$, suggesting that the actual mean of people who patronize this store is greater than $\$48,750$. Perform this test, also at the $10\%$ level of significance. (The computation of the test statistic done in part (a) still applies here.) 2. The labor charge for repairs at an automobile service center is based on a standard time specified for each type of repair. The time specified for replacement of a universal joint in a drive shaft is one hour. The manager reviews a sample of $30$ such repairs. The average of the actual repair times is $0.86$ hour with standard deviation $0.32$ hour. 1. Test at the $1\%$ level of significance the null hypothesis that the actual mean time for this repair differs from one hour. 2. The sample mean is less than one hour, suggesting that the mean actual time for this repair is less than one hour. Perform this test, also at the $1\%$ level of significance. (The computation of the test statistic done in part (a) still applies here.) Large Data Set Exercises Large Data Set missing from the original 1. Large $\text{Data Set 1}$ records the SAT scores of $1,000$ students. Regarding it as a random sample of all high school students, use it to test the hypothesis that the population mean exceeds $1,510$, at the $1\%$ level of significance. (The null hypothesis is that $\mu =1510$). 2. Large $\text{Data Set 1}$ records the GPAs of $1,000$ college students. Regarding it as a random sample of all college students, use it to test the hypothesis that the population mean is less than $2.50$, at the $10\%$ level of significance. (The null hypothesis is that $\mu =2.50$). 3. Large $\text{Data Set 1}$ lists the SAT scores of $1,000$ students. 1. Regard the data as arising from a census of all students at a high school, in which the SAT score of every student was measured. Compute the population mean $\mu$. 2. Regard the first $50$ students in the data set as a random sample drawn from the population of part (a) and use it to test the hypothesis that the population mean exceeds $1,510$, at the $10\%$ level of significance. (The null hypothesis is that $\mu =1510$). 3. Is your conclusion in part (b) in agreement with the true state of nature (which by part (a) you know), or is your decision in error? If your decision is in error, is it a Type I error or a Type II error? 4. Large $\text{Data Set 1}$ lists the GPAs of $1,000$ students. 1. Regard the data as arising from a census of all freshmen at a small college at the end of their first academic year of college study, in which the GPA of every such person was measured. Compute the population mean $\mu$. 2. Regard the first $50$ students in the data set as a random sample drawn from the population of part (a) and use it to test the hypothesis that the population mean is less than $2.50$, at the $10\%$ level of significance. (The null hypothesis is that $\mu =2.50$). 3. Is your conclusion in part (b) in agreement with the true state of nature (which by part (a) you know), or is your decision in error? If your decision is in error, is it a Type I error or a Type II error? Answers 1. $Z\leq -1.645$ 2. $Z\leq -1.96\; or\; Z\geq 1.96$ 3. $Z\geq 1.28$ 4. $Z\leq -1.645\; or\; Z\geq 1.645$ 1. $Z\leq -0.84$ 2. $Z\leq -1.645$ 3. $Z\leq -1.96\; or\; Z\geq 1.96$ 4. $Z\geq 3.1$ 1. $Z = 2.325$ 2. $Z = 2.592$ 3. 
$Z = -2.122$ 4. $Z = 3.002$ 1. $Z = -2.18,\; -z_{0.10}=-1.28,\; \text{reject}\; H_0$ 2. $Z = 1.61,\; z_{0.05}=1.645,\; \text{do not reject}\; H_0$ 3. $Z = -4.47,\; -z_{0.01}=-2.33,\; \text{reject}\; H_0$ 1. $Z = -2.86,\; -z_{0.01}=-2.33,\; \text{reject}\; H_0$ 2. $Z = -1.69,\; -z_{0.025}=-1.96,\; \text{do not reject}\; H_0$ 3. $Z = 8.33,\; z_{0.05}=1.645,\; \text{reject}\; H_0$ 4. $Z = 2.02,\; z_{0.10}=1.28,\; \text{reject}\; H_0$ 5. $Z = -2.08,\; -z_{0.01}=-2.33,\; \text{do not reject}\; H_0$ 1. $Z =2.54,\; z_{0.05}=1.645,\; \text{reject}\; H_0$ 2. $Z = 2.54,\; z_{0.10}=1.28,\; \text{reject}\; H_0$ 6. $H_0:\mu =1510\; vs\; H_a:\mu >1510$. Test Statistic: $Z = 2.7882$. Rejection Region: $[2.33,\infty )$. Decision: Reject $H_0$. 1. $\mu _0=1528.74$ 2. $H_0:\mu =1510\; vs\; H_a:\mu >1510$. Test Statistic: $Z = -1.41$. Rejection Region: $[1.28,\infty )$. Decision: Fail to reject $H_0$. 3. No, it is a Type II error. 8.3: The Observed Significance of a Test Basic 1. Compute the observed significance of each test. 1. Testing $H_0:\mu =54.7\; vs\; H_a:\mu <54.7,\; \text{test statistic}\; z=-1.72$ 2. Testing $H_0:\mu =195\; vs\; H_a:\mu \neq 195,\; \text{test statistic}\; z=-2.07$ 3. Testing $H_0:\mu =-45\; vs\; H_a:\mu >-45,\; \text{test statistic}\; z=2.54$ 2. Compute the observed significance of each test. 1. Testing $H_0:\mu =0\; vs\; H_a:\mu \neq 0,\; \text{test statistic}\; z=2.82$ 2. Testing $H_0:\mu =18.4\; vs\; H_a:\mu <18.4,\; \text{test statistic}\; z=-1.74$ 3. Testing $H_0:\mu =63.85\; vs\; H_a:\mu >63.85,\; \text{test statistic}\; z=1.93$ 3. Compute the observed significance of each test. (Some of the information given might not be needed.) 1. Testing $H_0:\mu =27.5\; vs\; H_a:\mu >27.5,\; n=49,\; \bar{x}=28.9,\; s=3.14,\; \text{test statistic}\; z=3.12$ 2. Testing $H_0:\mu =581\; vs\; H_a:\mu <581,\; n=32,\; \bar{x}=560,\; s=47.8,\; \text{test statistic}\; z=-2.49$ 3. Testing $H_0:\mu =138.5\; vs\; H_a:\mu \neq 138.5,\; n=44,\; \bar{x}=137.6,\; s=2.45,\; \text{test statistic}\; z=-2.44$ 4. Compute the observed significance of each test. (Some of the information given might not be needed.) 1. Testing $H_0:\mu =-17.9\; vs\; H_a:\mu <-17.9,\; n=34,\; \bar{x}=-18.2,\; s=0.87,\; \text{test statistic}\; z=-2.01$ 2. Testing $H_0:\mu =5.5\; vs\; H_a:\mu \neq 5.5,\; n=56,\; \bar{x}=7.4,\; s=4.82,\; \text{test statistic}\; z=2.95$ 3. Testing $H_0:\mu =1255\; vs\; H_a:\mu >1255,\; n=152,\; \bar{x}=1257,\; s=7.5,\; \text{test statistic}\; z=3.29$ 5. Make the decision in each test, based on the information provided. 1. Testing $H_0:\mu =82.9\; vs\; H_a:\mu <82.9\; @\; \alpha =0.05$, observed significance $p=0.038$ 2. Testing $H_0:\mu =213.5\; vs\; H_a:\mu \neq 213.5\; @\; \alpha =0.01$, observed significance $p=0.038$ 6. Make the decision in each test, based on the information provided. 1. Testing $H_0:\mu =31.4\; vs\; H_a:\mu >31.4\; @\; \alpha =0.10$, observed significance $p=0.062$ 2. Testing $H_0:\mu =-75.5\; vs\; H_a:\mu <-75.5\; @\; \alpha =0.05$, observed significance $p=0.062$ Applications 1. A lawyer believes that a certain judge imposes prison sentences for property crimes that are longer than the state average $11.7$ months. He randomly selects $36$ of the judge’s sentences and obtains mean $13.8$ and standard deviation $3.9$ months. 1. Perform the test at the $1\%$ level of significance using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $1\%$ level of significance using the $p$-value approach. 
You need not repeat the first three steps, already done in part (a). 2. In a recent year the fuel economy of all passenger vehicles was $19.8$ mpg. A trade organization sampled $50$ passenger vehicles for fuel economy and obtained a sample mean of $20.1$ mpg with standard deviation $2.45$ mpg. The sample mean $20.1$ exceeds $19.8$, but perhaps the increase is only a result of sampling error. 1. Perform the relevant test of hypotheses at the $20\%$ level of significance using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $20\%$ level of significance using the $p$-value approach. You need not repeat the first three steps, already done in part (a). 3. The mean score on a $25$-point placement exam in mathematics used for the past two years at a large state university is $14.3$. The placement coordinator wishes to test whether the mean score on a revised version of the exam differs from $14.3$. She gives the revised exam to $30$ entering freshmen early in the summer; the mean score is $14.6$ with standard deviation $2.4$. 1. Perform the test at the $10\%$ level of significance using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $10\%$ level of significance using the $p$-value approach. You need not repeat the first three steps, already done in part (a). 4. The mean increase in word family vocabulary among students in a one-year foreign language course is $576$ word families. In order to estimate the effect of a new type of class scheduling, an instructor monitors the progress of $60$ students; the sample mean increase in word family vocabulary of these students is $542$ word families with sample standard deviation $18$ word families. 1. Test at the $5\%$ level of significance whether the mean increase with the new class scheduling is different from $576$ word families, using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $5\%$ level of significance using the $p$-value approach. You need not repeat the first three steps, already done in part (a). 5. The mean yield for hard red winter wheat in a certain state is $44.8$ bu/acre. In a pilot program a modified growing scheme was introduced on $35$ independent plots. The result was a sample mean yield of $45.4$ bu/acre with sample standard deviation $1.6$ bu/acre, an apparent increase in yield. 1. Test at the $5\%$ level of significance whether the mean yield under the new scheme is greater than $44.8$ bu/acre, using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $5\%$ level of significance using the $p$-value approach. You need not repeat the first three steps, already done in part (a). 6. The average amount of time that visitors spent looking at a retail company’s old home page on the world wide web was $23.6$ seconds. The company commissions a new home page. On its first day in place the mean time spent at the new page by $7,628$ visitors was $23.5$ seconds with standard deviation $5.1$ seconds. 1. Test at the $5\%$ level of significance whether the mean visit time for the new page is less than the former mean of $23.6$ seconds, using the critical value approach. 2. Compute the observed significance of the test. 3. Perform the test at the $5\%$ level of significance using the $p$-value approach. You need not repeat the first three steps, already done in part (a). Answers 1. $p\text{-value}=0.0427$ 2. 
$p\text{-value}=0.0384$ 3. $p\text{-value}=0.0055$ 1. $p\text{-value}=0.0009$ 2. $p\text{-value}=0.0064$ 3. $p\text{-value}=0.0146$ 1. reject $H_0$ 2. do not reject $H_0$ 1. $Z=3.23,\; z_{0.01}=2.33$, reject $H_0$ 2. $p\text{-value}=0.0006$ 3. reject $H_0$ 1. $Z=0.68,\; z_{0.05}=1.645$, do not reject $H_0$ 2. $p\text{-value}=0.4966$ 3. do not reject $H_0$ 1. $Z=2.22,\; z_{0.05}=1.645$, reject $H_0$ 2. $p\text{-value}=0.0132$ 3. reject $H_0$ 8.4: Small Sample Tests for a Population Mean Basic 1. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed. 1. $H_0: \mu =27\; vs\; H_a:\mu <27\; @\; \alpha =0.05,\; n=12,\; \sigma =2.2$ 2. $H_0: \mu =52\; vs\; H_a:\mu \neq 52\; @\; \alpha =0.05,\; n=6,\; \sigma \; \text{unknown}$ 3. $H_0: \mu =-105\; vs\; H_a:\mu >-105\; @\; \alpha =0.10,\; n=24,\; \sigma \; \text{unknown}$ 4. $H_0: \mu =78.8\; vs\; H_a:\mu \neq 78.8\; @\; \alpha =0.10,\; n=8,\; \sigma =1.7$ 2. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed. 1. $H_0: \mu =17\; vs\; H_a:\mu <17\; @\; \alpha =0.01,\; n=26,\; \sigma =0.94$ 2. $H_0: \mu =880\; vs\; H_a:\mu \neq 880\; @\; \alpha =0.01,\; n=4,\; \sigma \; \text{unknown}$ 3. $H_0: \mu =-12\; vs\; H_a:\mu >-12\; @\; \alpha =0.05,\; n=18,\; \sigma =1.1$ 4. $H_0: \mu =21.1\; vs\; H_a:\mu \neq 21.1\; @\; \alpha =0.05,\; n=23,\; \sigma \; \text{unknown}$ 3. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed. Identify the test as left-tailed, right-tailed, or two-tailed. 1. $H_0: \mu =141\; vs\; H_a:\mu <141\; @\; \alpha =0.20,\; n=29,\; \sigma \; \text{unknown}$ 2. $H_0: \mu =-54\; vs\; H_a:\mu <-54\; @\; \alpha =0.05,\; n=15,\; \sigma =1.9$ 3. $H_0: \mu =98.6\; vs\; H_a:\mu \neq 98.6\; @\; \alpha =0.05,\; n=12,\; \sigma \; \text{unknown}$ 4. $H_0: \mu =3.8\; vs\; H_a:\mu >3.8\; @\; \alpha =0.001,\; n=27,\; \sigma \; \text{unknown}$ 4. Find the rejection region (for the standardized test statistic) for each hypothesis test based on the information given. The population is normally distributed. Identify the test as left-tailed, right-tailed, or two-tailed. 1. $H_0: \mu =-62\; vs\; H_a:\mu \neq -62\; @\; \alpha =0.005,\; n=8,\; \sigma \; \text{unknown}$ 2. $H_0: \mu =73\; vs\; H_a:\mu >73\; @\; \alpha =0.001,\; n=22,\; \sigma \; \text{unknown}$ 3. $H_0: \mu =1124\; vs\; H_a:\mu <1124\; @\; \alpha =0.001,\; n=21,\; \sigma \; \text{unknown}$ 4. $H_0: \mu =0.12\; vs\; H_a:\mu \neq 0.12\; @\; \alpha =0.001,\; n=14,\; \sigma =0.026$ 5. A random sample of size 20 drawn from a normal population yielded the following results: $\bar{x}=49.2,\; s=1.33$ 1. Test $H_0: \mu =50\; vs\; H_a:\mu \neq 50\; @\; \alpha =0.01$. 2. Estimate the observed significance of the test in part (a) and state a decision based on the $p$-value approach to hypothesis testing. 6. A random sample of size 16 drawn from a normal population yielded the following results: $\bar{x}=-0.96,\; s=1.07$ 1. Test $H_0: \mu =0\; vs\; H_a:\mu <0\; @\; \alpha =0.001$. 2. Estimate the observed significance of the test in part (a) and state a decision based on the $p$-value approach to hypothesis testing. 7. A random sample of size 8 drawn from a normal population yielded the following results: $\bar{x}=289,\; s=46$ 1. 
Test $H_0: \mu =250\; vs\; H_a:\mu >250\; @\; \alpha =0.05$. 2. Estimate the observed significance of the test in part (a) and state a decision based on the $p$-value approach to hypothesis testing. 8. A random sample of size 12 drawn from a normal population yielded the following results: $\bar{x}=86.2,\; s=0.63$ 1. Test $H_0: \mu =85.5\; vs\; H_a:\mu \neq 85.5\; @\; \alpha =0.01$. 2. Estimate the observed significance of the test in part (a) and state a decision based on the $p$-value approach to hypothesis testing. Applications 1. Researchers wish to test the efficacy of a program intended to reduce the length of labor in childbirth. The accepted mean labor time in the birth of a first child is $15.3$ hours. The mean length of the labors of $13$ first-time mothers in a pilot program was $8.8$ hours with standard deviation $3.1$ hours. Assuming a normal distribution of times of labor, test at the $10\%$ level of significance whether the mean labor time for all women following this program is less than $15.3$ hours. 2. A dairy farm uses the somatic cell count (SCC) report on the milk it provides to a processor as one way to monitor the health of its herd. The mean SCC from five samples of raw milk was $250,000$ cells per milliliter with standard deviation $37,500$ cell/ml. Test whether these data provide sufficient evidence, at the $10\%$ level of significance, to conclude that the mean SCC of all milk produced at the dairy exceeds that in the previous report, $210,250$ cell/ml. Assume a normal distribution of SCC. 3. Six coins of the same type are discovered at an archaeological site. If their weights on average are significantly different from $5.25$ grams then it can be assumed that their provenance is not the site itself. The coins are weighed and have mean $4.73$ g with sample standard deviation $0.18$ g. Perform the relevant test at the $0.1\%$ ($\text{1/10th of}\; 1\%$) level of significance, assuming a normal distribution of weights of all such coins. 4. An economist wishes to determine whether people are driving less than in the past. In one region of the country the number of miles driven per household per year in the past was $18.59$ thousand miles. A sample of $15$ households produced a sample mean of $16.23$ thousand miles for the last year, with sample standard deviation $4.06$ thousand miles. Assuming a normal distribution of household driving distances per year, perform the relevant test at the $5\%$ level of significance. 5. The recommended daily allowance of iron for females aged $19-50$ is $18$ mg/day. A careful measurement of the daily iron intake of $15$ women yielded a mean daily intake of $16.2$ mg with sample standard deviation $4.7$ mg. 1. Assuming that daily iron intake in women is normally distributed, perform the test that the actual mean daily intake for all women is different from $18$ mg/day, at the $10\%$ level of significance. 2. The sample mean is less than $18$, suggesting that the actual population mean is less than $18$ mg/day. Perform this test, also at the $10\%$ level of significance. (The computation of the test statistic done in part (a) still applies here.) 6. The target temperature for a hot beverage the moment it is dispensed from a vending machine is $170^{\circ}F$. A sample of ten randomly selected servings from a new machine undergoing a pre-shipment inspection gave mean temperature $173^{\circ}F$ with sample standard deviation $6.3^{\circ}F$. 1.
Assuming that temperature is normally distributed, perform the test that the mean temperature of dispensed beverages is different from $170^{\circ}F$, at the $10\%$ level of significance. 2. The sample mean is greater than $170$, suggesting that the actual population mean is greater than $170^{\circ}F$. Perform this test, also at the $10\%$ level of significance. (The computation of the test statistic done in part (a) still applies here.) 7. The average number of days to complete recovery from a particular type of knee operation is $123.7$ days. From his experience a physician suspects that use of a topical pain medication might be lengthening the recovery time. He randomly selects the records of seven knee surgery patients who used the topical medication. The times to total recovery were:$\begin{matrix} 128 & 135 & 121 & 142 & 126 & 151 & 123 \end{matrix}$ 1. Assuming a normal distribution of recovery times, perform the relevant test of hypotheses at the $10\%$ level of significance. 2. Would the decision be the same at the $5\%$ level of significance? Answer either by constructing a new rejection region (critical value approach) or by estimating the $p$-value of the test in part (a) and comparing it to $\alpha$. 8. A 24-hour advance prediction of a day’s high temperature is “unbiased” if the long-term average of the error in prediction (true high temperature minus predicted high temperature) is zero. The errors in predictions made by one meteorological station for $20$ randomly selected days were:$\begin{matrix} 2 & 0 & -3 & 1 & -2\\ 1 & 0 & -1 & 1 & -1\\ -4 & 1 & 1 & -4 & 0\\ -4 & -3 & -4 & 2 & 2 \end{matrix}$ 1. Assuming a normal distribution of errors, test the null hypothesis that the predictions are unbiased (the mean of the population of all errors is $0$) versus the alternative that it is biased (the population mean is not $0$), at the $1\%$ level of significance. 2. Would the decision be the same at the $5\%$ level of significance? The $10\%$ level of significance? Answer either by constructing new rejection regions (critical value approach) or by estimating the $p$-value of the test in part (a) and comparing it to $\alpha$. 9. Pasteurized milk may not have a standardized plate count (SPC) above $20,000$ colony-forming bacteria per milliliter (cfu/ml). The mean SPC for five samples was $21,500$ cfu/ml with sample standard deviation $750$ cfu/ml. Test the null hypothesis that the mean SPC for this milk is $20,000$ versus the alternative that it is greater than $20,000$, at the $10\%$ level of significance. Assume that the SPC follows a normal distribution. 10. One water quality standard for water that is discharged into a particular type of stream or pond is that the average daily water temperature be at most $18^{\circ}F$. Six samples taken throughout the day gave the data: $\begin{matrix} 16.8 & 21.5 & 19.1 & 12.8 & 18.0 & 20.7 \end{matrix}$ The sample mean, $\bar{x}=18.15$, exceeds $18$, but perhaps this is only sampling error. Determine whether the data provide sufficient evidence, at the $10\%$ level of significance, to conclude that the mean temperature for the entire day exceeds $18^{\circ}F$. Additional Exercises 1. A calculator has a built-in algorithm for generating a random number according to the standard normal distribution. Twenty-five numbers thus generated have mean $0.15$ and sample standard deviation $0.94$. Test the null hypothesis that the mean of all numbers so generated is $0$ versus the alternative that it is different from $0$, at the $20\%$ level of significance.
Assume that the numbers do follow a normal distribution. 2. At every setting a high-speed packing machine delivers a product in amounts that vary from container to container with a normal distribution of standard deviation $0.12$ ounce. To compare the amount delivered at the current setting to the desired amount $64.1$ ounce, a quality inspector randomly selects five containers and measures the contents of each, obtaining sample mean $63.9$ ounces and sample standard deviation $0.10$ ounce. Test whether the data provide sufficient evidence, at the $5\%$ level of significance, to conclude that the mean of all containers at the current setting is less than $64.1$ ounces. 3. A manufacturing company receives a shipment of $1,000$ bolts of nominal shear strength $4,350$ lb. A quality control inspector selects five bolts at random and measures the shear strength of each. The data are:$\begin{matrix} 4,320 & 4,290 & 4,360 & 4,350 & 4,320 \end{matrix}$ 1. Assuming a normal distribution of shear strengths, test the null hypothesis that the mean shear strength of all bolts in the shipment is $4,350$ lb versus the alternative that it is less than $4,350$ lb, at the $10\%$ level of significance. 2. Estimate the $p$-value (observed significance) of the test of part (a). 3. Compare the $p$-value found in part (b) to $\alpha = 0.10$ and make a decision based on the $p$-value approach. Explain fully. 4. A literary historian examines a newly discovered document possibly written by Oberon Theseus. The mean average sentence length of the surviving undisputed works of Oberon Theseus is $48.72$ words. The historian counts words in sentences between five successive $101$ periods in the document in question to obtain a mean average sentence length of $39.46$ words with standard deviation $7.45$ words. (Thus the sample size is five.) 1. Determine if these data provide sufficient evidence, at the $1\%$ level of significance, to conclude that the mean average sentence length in the document is less than $48.72$. 2. Estimate the $p$-value of the test. 3. Based on the answers to parts (a) and (b), state whether or not it is likely that the document was written by Oberon Theseus. Answers 1. $Z\leq -1.645$ 2. $T\leq -2.571\; or\; T \geq 2.571$ 3. $T \geq 1.319$ 4. $Z\leq -1.645\; or\; Z \geq 1.645$ 1. $T\leq -0.855$ 2. $Z\leq -1.645$ 3. $T\leq -2.201\; or\; T \geq 2.201$ 4. $T \geq 3.435$ 1. $T=-2.690,\; df=19,\; -t_{0.005}=-2.861,\; \text{do not reject }H_0$ 2. $0.01<p-value<0.02,\; \alpha =0.01,\; \text{do not reject }H_0$ 1. $T=2.398,\; df=7,\; t_{0.05}=1.895,\; \text{reject }H_0$ 2. $0.01<p-value<0.025,\; \alpha =0.05,\; \text{reject }H_0$ 1. $T=-7.560,\; df=12,\; -t_{0.10}=-1.356,\; \text{reject }H_0$ 2. $T=-7.076,\; df=5,\; -t_{0.0005}=-6.869,\; \text{reject }H_0$ 1. $T=-1.483,\; df=14,\; -t_{0.05}=-1.761,\; \text{do not reject }H_0$ 2. $T=-1.483,\; df=14,\; -t_{0.10}=-1.345,\; \text{reject }H_0$ 1. $T=2.069,\; df=6,\; t_{0.10}=1.44,\; \text{reject }H_0$ 2. $T=2.069,\; df=6,\; t_{0.05}=1.943,\; \text{reject }H_0$ 3. $T=4.472,\; df=4,\; t_{0.10}=1.533,\; \text{reject }H_0$ 4. $T=0.798,\; df=24,\; t_{0.10}=1.318,\; \text{do not reject }H_0$ 1. $T=-1.773,\; df=4,\; -t_{0.05}=-2.132,\; \text{do not reject }H_0$ 2. $0.05<p-value<0.10$ 3. $\alpha =0.05,\; \text{do not reject }H_0$ 8.5: Large Sample Tests for a Population Proportion Basic On all exercises for this section you may assume that the sample is sufficiently large for the relevant test to be validly performed. 1.
Compute the value of the test statistic for each test using the information given. 1. Testing $H_0:p=0.50\; vs\; H_a:p>0.50,\; n=360,\; \hat{p}=0.56$. 2. Testing $H_0:p=0.50\; vs\; H_a:p\neq 0.50,\; n=360,\; \hat{p}=0.56$. 3. Testing $H_0:p=0.37\; vs\; H_a:p<0.37,\; n=1200,\; \hat{p}=0.35$. 2. Compute the value of the test statistic for each test using the information given. 1. Testing $H_0:p=0.72\; vs\; H_a:p<0.72,\; n=2100,\; \hat{p}=0.71$. 2. Testing $H_0:p=0.83\; vs\; H_a:p\neq 0.83,\; n=500,\; \hat{p}=0.86$. 3. Testing $H_0:p=0.22\; vs\; H_a:p<0.22,\; n=750,\; \hat{p}=0.18$. 3. For each part of Exercise 1 construct the rejection region for the test for $\alpha = 0.05$ and make the decision based on your answer to that part of the exercise. 4. For each part of Exercise 2 construct the rejection region for the test for $\alpha = 0.05$ and make the decision based on your answer to that part of the exercise. 5. For each part of Exercise 1 compute the observed significance ($p$-value) of the test and compare it to $\alpha = 0.05$ in order to make the decision by the $p$-value approach to hypothesis testing. 6. For each part of Exercise 2 compute the observed significance ($p$-value) of the test and compare it to $\alpha = 0.05$ in order to make the decision by the $p$-value approach to hypothesis testing. 7. Perform the indicated test of hypotheses using the critical value approach. 1. Testing $H_0:p=0.55\; vs\; H_a:p>0.55\; @\; \alpha =0.05,\; n=300,\; \hat{p}=0.60$. 2. Testing $H_0:p=0.47\; vs\; H_a:p\neq 0.47\; @\; \alpha =0.01,\; n=9750,\; \hat{p}=0.46$. 8. Perform the indicated test of hypotheses using the critical value approach. 1. Testing $H_0:p=0.15\; vs\; H_a:p\neq 0.15\; @\; \alpha =0.001,\; n=1600,\; \hat{p}=0.18$. 2. Testing $H_0:p=0.90\; vs\; H_a:p>0.90\; @\; \alpha =0.01,\; n=1100,\; \hat{p}=0.91$. 9. Perform the indicated test of hypotheses using the $p$-value approach. 1. Testing $H_0:p=0.37\; vs\; H_a:p\neq 0.37\; @\; \alpha =0.005,\; n=1300,\; \hat{p}=0.40$. 2. Testing $H_0:p=0.94\; vs\; H_a:p>0.94\; @\; \alpha =0.05,\; n=1200,\; \hat{p}=0.96$. 10. Perform the indicated test of hypotheses using the $p$-value approach. 1. Testing $H_0:p=0.25\; vs\; H_a:p<0.25\; @\; \alpha =0.10,\; n=850,\; \hat{p}=0.23$. 2. Testing $H_0:p=0.33\; vs\; H_a:p\neq 0.33\; @\; \alpha =0.05,\; n=1100,\; \hat{p}=0.30$. Applications 1. Five years ago $3.9\%$ of children in a certain region lived with someone other than a parent. A sociologist wishes to test whether the current proportion is different. Perform the relevant test at the $5\%$ level of significance using the following data: in a random sample of $2,759$ children, $119$ lived with someone other than a parent. 2. The government of a particular country reports its literacy rate as $52\%$. A nongovernmental organization believes it to be less. The organization takes a random sample of $600$ inhabitants and obtains a literacy rate of $42\%$. Perform the relevant test at the $0.5\%$ (one-half of $1\%$) level of significance. 3. Two years ago $72\%$ of household in a certain county regularly participated in recycling household waste. The county government wishes to investigate whether that proportion has increased after an intensive campaign promoting recycling. In a survey of $900$ households, $674$ regularly participate in recycling. Perform the relevant test at the $10\%$ level of significance. 4. Prior to a special advertising campaign, $23\%$ of all adults recognized a particular company’s logo. 
At the close of the campaign the marketing department commissioned a survey in which $311$ of $1,200$ randomly selected adults recognized the logo. Determine, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that more than $23\%$ of all adults now recognize the company’s logo. 5. A report five years ago stated that $35.5\%$ of all state-owned bridges in a particular state were “deficient.” An advocacy group took a random sample of $100$ state-owned bridges in the state and found $33$ to be currently rated as being “deficient.” Test whether the current proportion of bridges in such condition is $35.5\%$ versus the alternative that it is different from $35.5\%$, at the $10\%$ level of significance. 6. In the previous year the proportion of deposits in checking accounts at a certain bank that were made electronically was $45\%$. The bank wishes to determine if the proportion is higher this year. It examined $20,000$ deposit records and found that $9,217$ were electronic. Determine, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that more than $45\%$ of all deposits to checking accounts are now being made electronically. 7. According to the Federal Poverty Measure $12\%$ of the U.S. population lives in poverty. The governor of a certain state believes that the proportion there is lower. In a sample of size $1,550$, $163$ were impoverished according to the federal measure. 1. Test whether the true proportion of the state’s population that is impoverished is less than $12\%$, at the $5\%$ level of significance. 2. Compute the observed significance of the test. 8. An insurance company states that it settles $85\%$ of all life insurance claims within $30$ days. A consumer group asks the state insurance commission to investigate. In a sample of $250$ life insurance claims, $203$ were settled within $30$ days. 1. Test whether the true proportion of all life insurance claims made to this company that are settled within $30$ days is less than $85\%$, at the $5\%$ level of significance. 2. Compute the observed significance of the test. 9. A special interest group asserts that $90\%$ of all smokers began smoking before age $18$. In a sample of $850$ smokers, $687$ began smoking before age $18$. 1. Test whether the true proportion of all smokers who began smoking before age $18$ is less than $90\%$, at the $1\%$ level of significance. 2. Compute the observed significance of the test. 10. In the past, $68\%$ of a garage’s business was with former patrons. The owner of the garage samples $200$ repair invoices and finds that for only $114$ of them the patron was a repeat customer. 1. Test whether the true proportion of all current business that is with repeat customers is less than $68\%$, at the $1\%$ level of significance. 2. Compute the observed significance of the test. Additional Exercises 1. A rule of thumb is that for working individuals one-quarter of household income should be spent on housing. A financial advisor believes that the average proportion of income spent on housing is more than $0.25$. In a sample of $30$ households, the mean proportion of household income spent on housing was $0.285$ with a standard deviation of $0.063$. Perform the relevant test of hypotheses at the $1\%$ level of significance. Hint: This exercise could have been presented in an earlier section. 2. Ice cream is legally required to contain at least $10\%$ milk fat by weight.
The manufacturer of an economy ice cream wishes to be close to the legal limit, hence produces its ice cream with a target proportion of $0.106$ milk fat. A sample of five containers yielded a mean proportion of $0.094$ milk fat with standard deviation $0.002$. Test the null hypothesis that the mean proportion of milk fat in all containers is $0.106$ against the alternative that it is less than $0.106$, at the $10\%$ level of significance. Assume that the proportion of milk fat in containers is normally distributed. Hint: This exercise could have been presented in an earlier section. Large Data Set Exercises Large Data Sets missing 1. Large $\text{Data Sets 4 and 4A}$ list the results of $500$ tosses of a die. Let $p$ denote the proportion of all tosses of this die that would result in a five. Use the sample data to test the hypothesis that $p$ is different from $1/6$, at the $20\%$ level of significance. 2. Large $\text{Data Set 6}$ records results of a random survey of $200$ voters in each of two regions, in which they were asked to express whether they prefer Candidate $A$ for a U.S. Senate seat or prefer some other candidate. Use the full data set ($400$ observations) to test the hypothesis that the proportion $p$ of all voters who prefer Candidate $A$ exceeds $0.35$. Test at the $10\%$ level of significance. 3. Lines $2$ through $536$ in Large $\text{Data Set 11}$ is a sample of $535$ real estate sales in a certain region in 2008. Those that were foreclosure sales are identified with a $1$ in the second column. Use these data to test, at the $10\%$ level of significance, the hypothesis that the proportion $p$ of all real estate sales in this region in 2008 that were foreclosure sales was less than $25\%$. (The null hypothesis is $H_0:p=0.25$). 4. Lines $537$ through $1106$ in Large $\text{Data Set 11}$ is a sample of $570$ real estate sales in a certain region in 2010. Those that were foreclosure sales are identified with a $1$ in the second column. Use these data to test, at the $5\%$ level of significance, the hypothesis that the proportion $p$ of all real estate sales in this region in 2010 that were foreclosure sales was greater than $23\%$. (The null hypothesis is $H_0:p=0.25$). Answers 1. $Z = 2.277$ 2. $Z = 2.277$ 3. $Z = -1.435$ 1. $Z \geq 1.645$; reject $H_0$ 2. $Z\leq -1.96\; or\; Z \geq 1.96$; reject $H_0$ 3. $Z \leq -1.645$; do not reject $H_0$ 1. $p-value=0.0116,\; \alpha =0.05$; reject $H_0$ 2. $p-value=0.0232,\; \alpha =0.05$; reject $H_0$ 3. $p-value=0.0749,\; \alpha =0.05$; do not reject $H_0$ 1. $Z=1.74,\; z_{0.05}=1.645$; reject $H_0$ 2. $Z=-1.98,\; -z_{0.005}=-2.576$; do not reject $H_0$ 1. $Z=2.24,\; p-value=0.025,\alpha =0.005$; do not reject $H_0$ 2. $Z=2.92,\; p-value=0.0018,\alpha =0.05$; reject $H_0$ 1. $Z=1.11,\; z_{0.025}=1.96$; do not reject $H_0$ 2. $Z=1.93,\; z_{0.10}=1.28$; reject $H_0$ 3. $Z=-0.523,\; \pm z_{0.05}=\pm 1.645$; do not reject $H_0$ 1. $Z=-1.798,\; -z_{0.05}=-1.645$; reject $H_0$ 2. $p-value=0.0359$ 1. $Z=-8.92,\; -z_{0.01}=-2.33$; reject $H_0$ 2. $p-value\approx 0$ 4. $Z=3.04,\; z_{0.01}=2.33$; reject $H_0$ 5. $H_0:p=1/6\; vs\; H_a:p\neq 1/6$. Test Statistic: $Z = -0.76$. Rejection Region: $(-\infty ,-1.28]\cup [1.28,\infty )$. Decision: Fail to reject $H_0$. 6. $H_0:p=0.25\; vs\; H_a:p<0.25$. Test Statistic: $Z = -1.17$. Rejection Region: $(-\infty ,-1.28]$. Decision: Fail to reject $H_0$. • Anonymous
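Many of the answers above can be reproduced with a few lines of code. The following is a minimal sketch, not part of the original text, of the large-sample test for a population proportion using Python with SciPy (an assumption; any statistical software would do). It checks Applications Exercise 7 above, where $163$ impoverished persons were found in a sample of $1,550$ and the test is left-tailed with $H_0:p=0.12$.

```python
# Minimal sketch: large-sample z-test for a population proportion (Section 8.5).
# Checks Applications Exercise 7 above; assumes SciPy is installed.
from math import sqrt
from scipy.stats import norm

p0 = 0.12            # hypothesized proportion under H0
n, x = 1550, 163     # sample size and observed number of impoverished persons
p_hat = x / n

# The standard error uses the hypothesized proportion p0, not p_hat.
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

p_value = norm.cdf(z)       # left-tailed alternative Ha: p < 0.12
z_crit = norm.ppf(0.05)     # -z_0.05 = -1.645

print(round(z, 3), round(p_value, 4), round(z_crit, 3))
# -1.798  0.0361  -1.645  -> z falls in the rejection region, reject H0 at 5%
```

The printed $p$-value, about $0.036$, agrees with the answer $0.0359$ above; the small difference comes from rounding $z$ to $-1.80$ before reading the normal table.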
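For the two-sample exercises that follow in Section 9.1, the same approach works with the standard error $\sqrt{s_1^2/n_1+s_2^2/n_2}$. Below is another minimal sketch (again Python with SciPy, an assumption, not part of the text), checked against Exercise Q9.1.5 part 1 below, whose stated answer is $Z=8.753$.

```python
# Minimal sketch: large-sample z-test for mu1 - mu2 (Section 9.1).
# Checks Exercise Q9.1.5 part 1 below; assumes SciPy is installed.
from math import sqrt
from scipy.stats import norm

n1, xbar1, s1 = 35, 25, 1
n2, xbar2, s2 = 45, 19, 2
D0 = 3                                # H0: mu1 - mu2 = 3, Ha: mu1 - mu2 != 3

se = sqrt(s1**2 / n1 + s2**2 / n2)    # standard error of xbar1 - xbar2
z = (xbar1 - xbar2 - D0) / se

p_value = 2 * norm.sf(abs(z))         # two-tailed p-value
z_crit = norm.ppf(0.975)              # +/- 1.960 at alpha = 0.05

print(round(z, 3), round(p_value, 4), round(z_crit, 3))
# 8.753  0.0  1.96  -> reject H0
```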
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 9.1: Comparison of Two Population Means: Large, Independent Samples Q9.1.1 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $90\%$ confidence, $n_1=45, \bar{x_1}=27, s_1=2\ n_2=60, \bar{x_2}=22, s_2=3$ 2. $99\%$ confidence, $n_1=30, \bar{x_1}=-112, s_1=9\ n_2=40, \bar{x_2}=-98, s_2=4$ Q9.1.2 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $95\%$ confidence, $n_1=110, \bar{x_1}=77, s_1=15\ n_2=85, \bar{x_2}=79, s_2=21$ 2. $90\%$ confidence, $n_1=65, \bar{x_1}=-83, s_1=12\ n_2=65, \bar{x_2}=-74, s_2=8$ Q9.1.3 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $99.5\%$ confidence, $n_1=130, \bar{x_1}=27.2, s_1=2.5\ n_2=155, \bar{x_2}=38.8, s_2=4.6$ 2. $95\%$ confidence, $n_1=68, \bar{x_1}=215.5, s_1=12.3\ n_2=84, \bar{x_2}=287.8, s_2=14.1$ Q9.1.4 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $99.9\%$ confidence, $n_1=275, \bar{x_1}=70.2, s_1=1.5\ n_2=325, \bar{x_2}=63.4, s_2=1.1$ 2. $90\%$ confidence, $n_1=120, \bar{x_1}=35.5, s_1=0.75\ n_2=146, \bar{x_2}=29.6, s_2=0.80$ Q9.1.5 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. Compute the $p$-value of the test as well. 1. Test $H_0:\mu _1-\mu _2=3\; vs\; H_a:\mu _1-\mu _2\neq 3\; @\; \alpha =0.05$ $n_1=35, \bar{x_1}=25, s_1=1\ n_2=45, \bar{x_2}=19, s_2=2$ 2. Test $H_0:\mu _1-\mu _2=-25\; vs\; H_a:\mu _1-\mu _2<-25\; @\; \alpha =0.10$ $n_1=85, \bar{x_1}=188, s_1=15\ n_2=62, \bar{x_2}=215, s_2=19$ Q9.1.6 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. Compute the $p$-value of the test as well. 1. Test $H_0:\mu _1-\mu _2=45\; vs\; H_a:\mu _1-\mu _2>45\; @\; \alpha =0.001$ $n_1=200, \bar{x_1}=1312, s_1=35\ n_2=225, \bar{x_2}=1256, s_2=28$ 2. Test $H_0:\mu _1-\mu _2=-12\; vs\; H_a:\mu _1-\mu _2\neq -12\; @\; \alpha =0.10$ $n_1=35, \bar{x_1}=121, s_1=6\ n_2=40, \bar{x_2}=135 s_2=7$ Q9.1.7 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. Compute the $p$-value of the test as well. 1. Test $H_0:\mu _1-\mu _2=0\; vs\; H_a:\mu _1-\mu _2\neq 0\; @\; \alpha =0.01$ $n_1=125, \bar{x_1}=-46, s_1=10\ n_2=90, \bar{x_2}=-50, s_2=13$ 2. Test $H_0:\mu _1-\mu _2=20\; vs\; H_a:\mu _1-\mu _2>20\; @\; \alpha =0.05$ $n_1=40, \bar{x_1}=142, s_1=11\ n_2=40, \bar{x_2}=118 s_2=10$ Q9.1.8 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. Compute the $p$-value of the test as well. 1. Test $H_0:\mu _1-\mu _2=13\; vs\; H_a:\mu _1-\mu _2<13\; @\; \alpha =0.01$ $n_1=35, \bar{x_1}=100, s_1=2\ n_2=35, \bar{x_2}=88, s_2=2$ 2. Test $H_0:\mu _1-\mu _2=-10\; vs\; H_a:\mu _1-\mu _2\neq -10\; @\; \alpha =0.10$ $n_1=146, \bar{x_1}=62, s_1=4\ n_2=120, \bar{x_2}=73 s_2=7$ Q9.1.9 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. 1. Test $H_0:\mu _1-\mu _2=57\; vs\; H_a:\mu _1-\mu _2<57\; @\; \alpha =0.10$ $n_1=117, \bar{x_1}=1309, s_1=42\ n_2=133, \bar{x_2}=1258, s_2=37$ 2. 
Test $H_0:\mu _1-\mu _2=-1.5\; vs\; H_a:\mu _1-\mu _2\neq -1.5\; @\; \alpha =0.20$ $n_1=65, \bar{x_1}=16.9, s_1=1.3\ n_2=57, \bar{x_2}=18.6 s_2=1.1$ Q9.1.10 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. 1. Test $H_0:\mu _1-\mu _2=-10.5\; vs\; H_a:\mu _1-\mu _2>-10.5\; @\; \alpha =0.01$ $n_1=64, \bar{x_1}=85.6, s_1=2.4\ n_2=50, \bar{x_2}=95.3, s_2=3.1$ 2. Test $H_0:\mu _1-\mu _2=110\; vs\; H_a:\mu _1-\mu _2\neq 110\; @\; \alpha =0.02$ $n_1=176, \bar{x_1}=1918, s_1=68\ n_2=241, \bar{x_2}=1782 s_2=146$ Q9.1.11 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. 1. Test $H_0:\mu _1-\mu _2=50\; vs\; H_a:\mu _1-\mu _2>50\; @\; \alpha =0.005$ $n_1=72, \bar{x_1}=272, s_1=26\ n_2=103, \bar{x_2}=213, s_2=14$ 2. Test $H_0:\mu _1-\mu _2=7.5\; vs\; H_a:\mu _1-\mu _2\neq 7.5\; @\; \alpha =0.10$ $n_1=52, \bar{x_1}=94.3, s_1=2.6\ n_2=38, \bar{x_2}=88.6 s_2=8.0$ Q9.1.12 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. 1. Test $H_0:\mu _1-\mu _2=23\; vs\; H_a:\mu _1-\mu _2<23\; @\; \alpha =0.20$ $n_1=314, \bar{x_1}=198, s_1=12.2\ n_2=220, \bar{x_2}=176, s_2=11.5$ 2. Test $H_0:\mu _1-\mu _2=4.4\; vs\; H_a:\mu _1-\mu _2\neq 4.4\; @\; \alpha =0.05$ $n_1=32, \bar{x_1}=40.3, s_1=0.5\ n_2=30, \bar{x_2}=35.5 s_2=0.7$ Q9.1.13 In order to investigate the relationship between mean job tenure in years among workers who have a bachelor’s degree or higher and those who do not, random samples of each type of worker were taken, with the following results. n $\bar{x}$ s Bachelor’s degree or higher 155 5.2 1.3 No degree 210 5.0 1.5 1. Construct the $99\%$ confidence interval for the difference in the population means based on these data. 2. Test, at the $1\%$ level of significance, the claim that mean job tenure among those with higher education is greater than among those without, against the default that there is no difference in the means. 3. Compute the observed significance of the test. Q9.1.14 Records of $40$ used passenger cars and $40$ used pickup trucks (none used commercially) were randomly selected to investigate whether there was any difference in the mean time in years that they were kept by the original owner before being sold. For cars the mean was $5.3$ years with standard deviation $2.2$ years. For pickup trucks the mean was $7.1$ years with standard deviation $3.0$ years. 1. Construct the $95\%$ confidence interval for the difference in the means based on these data. 2. Test the hypothesis that there is a difference in the means against the null hypothesis that there is no difference. Use the $1\%$ level of significance. 3. Compute the observed significance of the test in part (b). Q9.1.15 In previous years the average number of patients per hour at a hospital emergency room on weekends exceeded the average on weekdays by $6.3$ visits per hour. A hospital administrator believes that the current weekend mean exceeds the weekday mean by fewer than $6.3$ hours. 1. Construct the $99\%$ confidence interval for the difference in the population means based on the following data, derived from a study in which $30$ weekend and $30$ weekday one-hour periods were randomly selected and the number of new patients in each recorded. n $\bar{x}$ s Weekends 30 13.8 3.1 Weekdays 30 8.6 2.7 1. 
Test at the $5\%$ level of significance whether the current weekend mean exceeds the weekday mean by fewer than $6.3$ patients per hour. 2. Compute the observed significance of the test. Q9.1.16 A sociologist surveys $50$ randomly selected citizens in each of two countries to compare the mean number of hours of volunteer work done by adults in each. Among the $50$ inhabitants of Lilliput, the mean hours of volunteer work per year was $52$, with standard deviation $11.8$. Among the $50$ inhabitants of Blefuscu, the mean number of hours of volunteer work per year was $37$, with standard deviation $7.2$. 1. Construct the $99\%$ confidence interval for the difference in mean number of hours volunteered by all residents of Lilliput and the mean number of hours volunteered by all residents of Blefuscu. 2. Test, at the $1\%$ level of significance, the claim that the mean number of hours volunteered by all residents of Lilliput is more than ten hours greater than the mean number of hours volunteered by all residents of Blefuscu. 3. Compute the observed significance of the test in part (b). Q9.1.17 A university administrator asserted that upperclassmen spend more time studying than underclassmen. 1. Test this claim against the default that the average number of hours of study per week by the two groups is the same, using the following information based on random samples from each group of students. Test at the $1\%$ level of significance. n $\bar{x}$ s Upperclassmen 35 15.6 2.9 Underclassmen 35 12.3 4.1 1. Compute the observed significance of the test. Q9.1.18 An kinesiologist claims that the resting heart rate of men aged $18$ to $25$ who exercise regularly is more than five beats per minute less than that of men who do not exercise regularly. Men in each category were selected at random and their resting heart rates were measured, with the results shown. n $\bar{x}$ s Regular exercise 40 63 1.0 No regular exercise 30 71 1.2 1. Perform the relevant test of hypotheses at the $1\%$ level of significance. 2. Compute the observed significance of the test. Q9.1.19 Children in two elementary school classrooms were given two versions of the same test, but with the order of questions arranged from easier to more difficult in Version $A$ and in reverse order in Version $B$. Randomly selected students from each class were given Version $A$ and the rest Version $B$. The results are shown in the table. n $\bar{x}$ s Version A 31 83 4.6 Version B 32 78 4.3 1. Construct the $90\%$ confidence interval for the difference in the means of the populations of all children taking Version $A$ of such a test and of all children taking Version $B$ of such a test. 2. Test at the $1\%$ level of significance the hypothesis that the $A$ version of the test is easier than the $B$ version (even though the questions are the same). 3. Compute the observed significance of the test. Q9.1.20 The Municipal Transit Authority wants to know if, on weekdays, more passengers ride the northbound blue line train towards the city center that departs at $8:15\; a.m.$ or the one that departs at $8:30\; a.m$. The following sample statistics are assembled by the Transit Authority. n $\bar{x}$ s 8:15 a.m. train 30 323 41 8:30 a.m. train 45 356 45 1. Construct the $90\%$ confidence interval for the difference in the mean number of daily travelers on the $8:15\; a.m.$ train and the mean number of daily travelers on the $8:30\; a.m.$ train. 2. 
Test at the $5\%$ level of significance whether the data provide sufficient evidence to conclude that more passengers ride the $8:30\; a.m.$ train. 3. Compute the observed significance of the test. Q9.1.21 In comparing the academic performance of college students who are affiliated with fraternities and those male students who are unaffiliated, a random sample of students was drawn from each of the two populations on a university campus. Summary statistics on the student GPAs are given below. n $\bar{x}$ s Fraternity 645 2.90 0.47 Unaffiliated 450 2.88 0.42 Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that there is a difference in average GPA between the population of fraternity students and the population of unaffiliated male students on this university campus. Q9.1.22 In comparing the academic performance of college students who are affiliated with sororities and those female students who are unaffiliated, a random sample of students was drawn from each of the two populations on a university campus. Summary statistics on the student GPAs are given below. n $\bar{x}$ s Sorority 330 3.18 0.37 Unaffiliated 550 3.12 0.41 Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that there is a difference in average GPA between the population of sorority students and the population of unaffiliated female students on this university campus. Q9.1.23 The owner of a professional football team believes that the league has become more offense oriented since five years ago. To check his belief, $32$ randomly selected games from one year’s schedule were compared to $32$ randomly selected games from the schedule five years later. Since more offense produces more points per game, the owner analyzed the following information on points per game (ppg). n $\bar{x}$ s ppg previously 32 20.62 4.17 ppg recently 32 22.05 4.01 Test, at the $10\%$ level of significance, whether the data on points per game provide sufficient evidence to conclude that the game has become more offense oriented. Q9.1.24 The owner of a professional football team believes that the league has become more offense oriented since five years ago. To check his belief, $32$ randomly selected games from one year’s schedule were compared to $32$ randomly selected games from the schedule five years later. Since more offense produces more offensive yards per game, the owner analyzed the following information on offensive yards per game (oypg). n $\bar{x}$ s oypg previously 32 316 40 oypg recently 32 336 35 Test, at the $10\%$ level of significance, whether the data on offensive yards per game provide sufficient evidence to conclude that the game has become more offense oriented. Large Data Set Exercises Large Data Sets are absent 1. Large $\text{Data Sets 1A and 1B}$ list the SAT scores for $1,000$ randomly selected students. Denote the population of all male students as $\text{Population 1}$ and the population of all female students as $\text{Population 2}$. 1. Restricting attention to just the males, find $n_1$, $\bar{x_1}$ and $s_1$. Restricting attention to just the females, find $n_2$, $\bar{x_2}$ and $s_2$. 2. Let $\mu _1$ denote the mean SAT score for all males and $\mu _2$ the mean SAT score for all females. Use the results of part (a) to construct a $90\%$ confidence interval for the difference $\mu _1-\mu _2$. 3. Test, at the $5\%$ level of significance, the hypothesis that the mean SAT scores among males exceeds that of females. 2. 
Large $\text{Data Sets 1A and 1B}$ list the SAT scores for $1,000$ randomly selected students. Denote the population of all male students as $\text{Population 1}$ and the population of all female students as $\text{Population 2}$. 1. Restricting attention to just the males, find $n_1$, $\bar{x_1}$ and $s_1$. Restricting attention to just the females, find $n_2$, $\bar{x_2}$ and $s_2$. 2. Let $\mu _1$ denote the mean SAT score for all males and $\mu _2$ the mean SAT score for all females. Use the results of part (a) to construct a $95\%$ confidence interval for the difference $\mu _1-\mu _2$. 3. Test, at the $10\%$ level of significance, the hypothesis that the mean SAT scores among males exceeds that of females. 3. Large $\text{Data Sets 7A and 7B}$ list the survival times for $65$ male and $75$ female laboratory mice with thymic leukemia. Denote the population of all such male mice as $\text{Population 1}$ and the population of all such female mice as $\text{Population 2}$. 1. Restricting attention to just the males, find $n_1$, $\bar{x_1}$ and $s_1$. Restricting attention to just the females, find $n_2$, $\bar{x_2}$ and $s_2$. 2. Let $\mu _1$ denote the mean survival time for all males and $\mu _2$ the mean survival time for all females. Use the results of part (a) to construct a $99\%$ confidence interval for the difference $\mu _1-\mu _2$. 3. Test, at the $1\%$ level of significance, the hypothesis that the mean survival time for males exceeds that for females by more than $182$ days (half a year). 4. Compute the observed significance of the test in part (c). Answers 1. $(4.20,5.80)$ 2. $(-18.54,-9.46)$ 1. $(-12.81,-10.39)$ 2. $(-76.50,-68.10)$ 1. $Z = 8.753, \pm z_{0.025}=\pm 1.960$, reject $H_0$, $p$-value=$0.0000$ 2. $Z = -0.687, -z_{0.10}=-1.282$, do not reject $H_0$, $p$-value=$0.2451$ 1. $Z = 2.444, \pm z_{0.005}=\pm 2.576$, do not reject $H_0$, $p$-value=$0.0146$ 2. $Z = 1.702, z_{0.05}=1.645$, reject $H_0$, $p$-value=$0.0446$ 1. $Z = -1.19$, $p$-value=$0.1170$, do not reject $H_0$ 2. $Z = -0.92$, $p$-value=$0.3576$, do not reject $H_0$ 1. $Z = 2.68$, $p$-value=$0.0037$, reject $H_0$ 2. $Z = -1.34$, $p$-value=$0.1802$, do not reject $H_0$ 1. $0.2\pm 0.4$ 2. $Z = 1.360, z_{0.01}=2.326$, do not reject $H_0$ 3. $p$-value=$0.0869$ 1. $5.2\pm 1.9$ 2. $Z = -1.466, -z_{0.050}=-1.645$, do not reject $H_0$ (exceeds by $6.3$ or more) 3. $p$-value=$0.0708$ 1. $Z = 3.888, z_{0.01}=2.326$, reject $H_0$ (upperclassmen study more) 2. $p$-value=$0.0001$ 1. $5\pm 1.8$ 2. $Z = 4.454, z_{0.01}=2.326$, reject $H_0$ (Test A is easier) 3. $p$-value=$0.0000$ 1. $Z = 0.738, \pm z_{0.025}=\pm 1.960$, do not reject $H_0$ (no difference) 2. $Z = -1.398, -z_{0.10}=-1.282$, reject $H_0$ (more offense oriented) 1. $n_1=419,\; \bar{x_1}=1540.33,\; s_1=205.40, \; n_2=581,\; \bar{x_2}=1520.38,\; s_2=217.34$ 2. $(-2.24,42.15)$ 3. $H_0:\mu _1-\mu _2=0\; vs\; H_a:\mu _1-\mu _2>0$. Test Statistic: $Z = 1.48$. Rejection Region: $[1.645,\infty )$. Decision: Fail to reject $H_0$. 1. $n_1=65,\; \bar{x_1}=665.97,\; s_1=41.60, \; n_2=75,\; \bar{x_2}=455.89,\; s_2=63.22$ 2. $(187.06,233.09)$ 3. $H_0:\mu _1-\mu _2=182\; vs\; H_a:\mu _1-\mu _2>182$. Test Statistic: $Z = 3.14$. Rejection Region: $[2.33,\infty )$. Decision: Reject $H_0$. 4. $p$-value=$0.0008$ 9.2: Comparison of Two Population Means: Small, Independent Samples Basic In all exercises for this section assume that the populations are normal and have equal standard deviations.
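Because equal population standard deviations are assumed, the tests and intervals in this section use the pooled sample variance with $n_1+n_2-2$ degrees of freedom. A minimal sketch follows (Python with SciPy, an assumption, not part of the text), checked against Exercise Q9.2.5 part 1 below, whose answer is $T=2.787$ with $t_{0.025}=2.131$.

```python
# Minimal sketch: pooled two-sample t-test for mu1 - mu2 (Section 9.2).
# Checks Exercise Q9.2.5 part 1 below; assumes SciPy is installed.
from math import sqrt
from scipy.stats import t

n1, xbar1, s1 = 6, 32, 2
n2, xbar2, s2 = 11, 19, 1
D0 = 11                                  # H0: mu1 - mu2 = 11, Ha: mu1 - mu2 > 11

df = n1 + n2 - 2
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df   # pooled sample variance
se = sqrt(sp2 * (1 / n1 + 1 / n2))
T = (xbar1 - xbar2 - D0) / se

t_crit = t.ppf(1 - 0.025, df)            # right-tailed test at alpha = 0.025

print(round(T, 3), df, round(t_crit, 3))
# 2.787  15  2.131  -> T > 2.131, so reject H0
```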
Q9.2.1 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $95\%$ confidence, $n_1=10,\; \bar{x_1}=120,\; s_1=2\ n_2=15,\; \bar{x_2}=101,\; s_1=4$ 2. $99\%$ confidence, $n_1=6,\; \bar{x_1}=25,\; s_1=1\ n_2=12,\; \bar{x_2}=17,\; s_1=3$ Q9.2.2 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $90\%$ confidence, $n_1=28,\; \bar{x_1}=212,\; s_1=6\ n_2=23,\; \bar{x_2}=198,\; s_1=5$ 2. $99\%$ confidence, $n_1=14,\; \bar{x_1}=68,\; s_1=8\ n_2=20,\; \bar{x_2}=43,\; s_1=3$ Q9.2.3 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $99.9\%$ confidence, $n_1=35,\; \bar{x_1}=6.5,\; s_1=0.2\ n_2=20,\; \bar{x_2}=6.2,\; s_1=0.1$ 2. $99\%$ confidence, $n_1=18,\; \bar{x_1}=77.3,\; s_1=1.2\ n_2=32,\; \bar{x_2}=75.0,\; s_1=1.6$ Q9.2.4 Construct the confidence interval for $\mu _1-\mu _2$ for the level of confidence and the data from independent samples given. 1. $99.5\%$ confidence, $n_1=40,\; \bar{x_1}=85.6,\; s_1=2.8\ n_2=20,\; \bar{x_2}=73.1,\; s_1=2.1$ 2. $99.9\%$ confidence, $n_1=25,\; \bar{x_1}=215,\; s_1=7\ n_2=35,\; \bar{x_2}=185,\; s_1=12$ Q9.2.5 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. 1. Test $H_0:\mu _1-\mu _2=11\; vs\; H_a:\mu _1-\mu _2>11\; @\; \alpha =0.025$ $n_1=6,\; \bar{x_1}=32,\; s_1=2\ n_2=11,\; \bar{x_2}=19,\; s_1=1$ 2. Test $H_0:\mu _1-\mu _2=26\; vs\; H_a:\mu _1-\mu _2\neq 26\; @\; \alpha =0.05$ $n_1=17,\; \bar{x_1}=166,\; s_1=4\ n_2=24,\; \bar{x_2}=138,\; s_1=3$ Q9.2.6 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. 1. Test $H_0:\mu _1-\mu _2=40\; vs\; H_a:\mu _1-\mu _2<40\; @\; \alpha =0.10$ $n_1=14,\; \bar{x_1}=289,\; s_1=11\ n_2=12,\; \bar{x_2}=254,\; s_1=9$ 2. Test $H_0:\mu _1-\mu _2=21\; vs\; H_a:\mu _1-\mu _2\neq 21\; @\; \alpha =0.05$ $n_1=23,\; \bar{x_1}=130,\; s_1=6\ n_2=27,\; \bar{x_2}=113,\; s_1=8$ Q9.2.7 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. 1. Test $H_0:\mu _1-\mu _2=-15\; vs\; H_a:\mu _1-\mu _2<-15\; @\; \alpha =0.10$ $n_1=30,\; \bar{x_1}=42,\; s_1=7\ n_2=12,\; \bar{x_2}=60,\; s_1=5$ 2. Test $H_0:\mu _1-\mu _2=103\; vs\; H_a:\mu _1-\mu _2\neq 103\; @\; \alpha =0.10$ $n_1=17,\; \bar{x_1}=711,\; s_1=28\ n_2=32,\; \bar{x_2}=598,\; s_1=21$ Q9.2.8 Perform the test of hypotheses indicated, using the data from independent samples given. Use the critical value approach. 1. Test $H_0:\mu _1-\mu _2=75\; vs\; H_a:\mu _1-\mu _2>75\; @\; \alpha =0.025$ $n_1=45,\; \bar{x_1}=674,\; s_1=18\ n_2=29,\; \bar{x_2}=591,\; s_1=13$ 2. Test $H_0:\mu _1-\mu _2=-20\; vs\; H_a:\mu _1-\mu _2\neq -20\; @\; \alpha =0.005$ $n_1=30,\; \bar{x_1}=137,\; s_1=8\ n_2=19,\; \bar{x_2}=166,\; s_1=11$ Q9.2.9 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. (The $p$-value can be only approximated.) 1. Test $H_0:\mu _1-\mu _2=12\; vs\; H_a:\mu _1-\mu _2>12\; @\; \alpha =0.01$ $n_1=20,\; \bar{x_1}=133,\; s_1=7\ n_2=10,\; \bar{x_2}=115,\; s_1=5$ 2. 
Test $H_0:\mu _1-\mu _2=46\; vs\; H_a:\mu _1-\mu _2\neq 46\; @\; \alpha =0.10$ $n_1=24,\; \bar{x_1}=586,\; s_1=11\ n_2=27,\; \bar{x_2}=535,\; s_1=13$ Q9.2.10 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. (The $p$-value can be only approximated.) 1. Test $H_0:\mu _1-\mu _2=38\; vs\; H_a:\mu _1-\mu _2<38\; @\; \alpha =0.01$ $n_1=12,\; \bar{x_1}=464,\; s_1=5\ n_2=10,\; \bar{x_2}=432,\; s_1=6$ 2. Test $H_0:\mu _1-\mu _2=4\; vs\; H_a:\mu _1-\mu _2\neq 4\; @\; \alpha =0.005$ $n_1=14,\; \bar{x_1}=68,\; s_1=2\ n_2=17,\; \bar{x_2}=67,\; s_1=3$ Q9.2.11 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. (The $p$-value can be only approximated.) 1. Test $H_0:\mu _1-\mu _2=50\; vs\; H_a:\mu _1-\mu _2>50\; @\; \alpha =0.01$ $n_1=30,\; \bar{x_1}=681,\; s_1=8\ n_2=27,\; \bar{x_2}=625,\; s_1=8$ 2. Test $H_0:\mu _1-\mu _2=35\; vs\; H_a:\mu _1-\mu _2\neq 35\; @\; \alpha =0.10$ $n_1=36,\; \bar{x_1}=325,\; s_1=11\ n_2=29,\; \bar{x_2}=286,\; s_1=7$ Q9.2.12 Perform the test of hypotheses indicated, using the data from independent samples given. Use the $p$-value approach. (The $p$-value can be only approximated.) 1. Test $H_0:\mu _1-\mu _2=-4\; vs\; H_a:\mu _1-\mu _2<-4\; @\; \alpha =0.05$ $n_1=40,\; \bar{x_1}=80,\; s_1=5\ n_2=25,\; \bar{x_2}=87,\; s_1=5$ 2. Test $H_0:\mu _1-\mu _2=21\; vs\; H_a:\mu _1-\mu _2\neq 21\; @\; \alpha =0.01$ $n_1=15,\; \bar{x_1}=192,\; s_1=12\ n_2=34,\; \bar{x_2}=180,\; s_1=8$ Q9.2.13 A county environmental agency suspects that the fish in a particular polluted lake have elevated mercury level. To confirm that suspicion, five striped bass in that lake were caught and their tissues were tested for mercury. For the purpose of comparison, four striped bass in an unpolluted lake were also caught and tested. The fish tissue mercury levels in mg/kg are given below. Sample 1 (from polluted lake) Sample 2(from unpolluted lake) 0.580 0.382 0.711 0.276 0.571 0.570 0.666 0.366 0.598 1. Construct the $95\%$ confidence interval for the difference in the population means based on these data. 2. Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that fish in the polluted lake have elevated levels of mercury in their tissue. Q9.2.14 A genetic engineering company claims that it has developed a genetically modified tomato plant that yields on average more tomatoes than other varieties. A farmer wants to test the claim on a small scale before committing to a full-scale planting. Ten genetically modified tomato plants are grown from seeds along with ten other tomato plants. At the season’s end, the resulting yields in pound are recorded as below. Sample 1(genetically modified) Sample 2(regular) 20 21 23 21 27 22 25 18 25 20 25 20 27 18 23 25 24 23 22 20 1. Construct the $99\%$ confidence interval for the difference in the population means based on these data. 2. Test, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that the mean yield of the genetically modified variety is greater than that for the standard variety. Q9.2.15 The coaching staff of a professional football team believes that the rushing offense has become increasingly potent in recent years. To investigate this belief, $20$ randomly selected games from one year’s schedule were compared to $11$ randomly selected games from the schedule five years later. 
The sample information on rushing yards per game (rypg) is summarized below. n $\bar{x}$ s rypg previously 20 112 24 rypg recently 11 114 21 1. Construct the $95\%$ confidence interval for the difference in the population means based on these data. 2. Test, at the $5\%$ level of significance, whether the data on rushing yards per game provide sufficient evidence to conclude that the rushing offense has become more potent in recent years. Q9.2.16 The coaching staff of professional football team believes that the rushing offense has become increasingly potent in recent years. To investigate this belief, $20$ randomly selected games from one year’s schedule were compared to $11$ randomly selected games from the schedule five years later. The sample information on passing yards per game (pypg) is summarized below. n $\bar{x}$ s pypg previously 20 203 38 pypg recently 11 232 33 1. Construct the $95\%$ confidence interval for the difference in the population means based on these data. 2. Test, at the $5\%$ level of significance, whether the data on passing yards per game provide sufficient evidence to conclude that the passing offense has become more potent in recent years. Q9.2.17 A university administrator wishes to know if there is a difference in average starting salary for graduates with master’s degrees in engineering and those with master’s degrees in business. Fifteen recent graduates with master’s degree in engineering and $11$ with master’s degrees in business are surveyed and the results are summarized below. n $\bar{x}$ s Engineering 15 68,535 1627 Business 11 63,230 2033 1. Construct the $90\%$ confidence interval for the difference in the population means based on these data. 2. Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that the average starting salaries are different. Q9.2.18 A gardener sets up a flower stand in a busy business district and sells bouquets of assorted fresh flowers on weekdays. To find a more profitable pricing, she sells bouquets for $15$ dollars each for ten days, then for $10$ dollars each for five days. Her average daily profit for the two different prices are given below. n $\bar{x}$ s $15 10 171 26$10 5 198 29 1. Construct the $90\%$ confidence interval for the difference in the population means based on these data. 2. Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude the gardener’s average daily profit will be higher if the bouquets are sold at $\10$ each. Answers 1. $(16.16,21.84)$ 2. $(4.28,11.72)$ 1. $(0.13,0.47)$ 2. (1.14,3.46)$$(1.14,3.46)$$ 1. $T = 2.787,\; t_{0.025}=2.131$, reject $H_0$ 2. $T = 1.831,\; \pm t_{0.025}=\pm 2.023$, do not reject $H_0$ 1. $T = -1.349,\; -t_{0.10}=-1.303$, reject $H_0$ 2. $T = 1.411,\; \pm t_{0.05}=\pm 1.678$, do not reject $H_0$ 1. $T = 2.411,\; df=28,\; \text{p-value}>0.01$, do not reject $H_0$ 2. $T = 1.473,\; df=49,\; \text{p-value}<0.10$, reject $H_0$ 1. $T = 2.827,\; df=55,\; \text{p-value}<0.01$, reject $H_0$ 2. $T = 1.699,\; df=63,\; \text{p-value}<0.10$, reject $H_0$ 1. $0.2267\pm 0.2182$ 2. $T = 1.699,\; df=63,\; t_{0.05}=1.895$, reject $H_0$ (elevated levels) 1. $-2\pm 17.7$ 2. $T = -0.232,\; df=29,\; -t_{0.05}=-1.699$, do not reject $H_0$ (not more potent) 1. $5305\pm 1227$ 2. $T = 7.395,\; df=24,\; \pm t_{0.05}=\pm 1.711$, reject $H_0$ (different) 9.3 Comparison of Two Population Means: Paired Samples Basic In all exercises for this section assume that the population of differences is normal. 1. 
Use the following paired sample data for this exercise. $\begin{matrix} Population\: 1 & 35 & 32 & 35 & 35 & 36 & 35 & 35\ Population\: 2 & 28 & 26 & 27 & 26 & 29 & 27 & 29 \end{matrix}$ 1. Compute $\bar{d}$ and $s_d$. 2. Give a point estimate for $\mu _1-\mu _2=\mu _d$. 3. Construct the $95\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data. 4. Test, at the $10\%$ level of significance, the hypothesis that $\mu _1-\mu _2>7$ as an alternative to the null hypothesis that $\mu _1-\mu _2=7$. 2. Use the following paired sample data for this exercise. $\begin{matrix} Population\: 1 & 103 & 127 & 96 & 110\ Population\: 2 & 81 & 106 & 73 & 88\ Population\: 1 & 90 & 118 & 130 & 106\ Population\: 2 & 70 & 95 & 109 & 83 \end{matrix}$ 1. Compute $\bar{d}$ and $s_d$. 2. Give a point estimate for $\mu _1-\mu _2=\mu _d$. 3. Construct the $90\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data. 4. Test, at the $1\%$ level of significance, the hypothesis that $\mu _1-\mu _2<24$ as an alternative to the null hypothesis that $\mu _1-\mu _2=24$. 3. Use the following paired sample data for this exercise. $\begin{matrix} Population\: 1 & 40 & 27 & 55 & 34\ Population\: 2 & 53 & 42 & 68 & 50 \end{matrix}$ 1. Compute $\bar{d}$ and $s_d$. 2. Give a point estimate for $\mu _1-\mu _2=\mu _d$. 3. Construct the $99\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data. 4. Test, at the $10\%$ level of significance, the hypothesis that $\mu _1-\mu _2 \neq -12$ as an alternative to the null hypothesis that $\mu _1-\mu _2=-12$. 4. Use the following paired sample data for this exercise. $\begin{matrix} Population\: 1 & 196 & 165 & 181 & 201 & 190\ Population\: 2 & 212 & 182 & 199 & 210 & 205 \end{matrix}$ 1. Compute $\bar{d}$ and $s_d$. 2. Give a point estimate for $\mu _1-\mu _2=\mu _d$. 3. Construct the $98\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data. 4. Test, at the $2\%$ level of significance, the hypothesis that $\mu _1-\mu _2 \neq -20$ as an alternative to the null hypothesis that $\mu _1-\mu _2=-20$. Applications 1. Each of five laboratory mice was released into a maze twice. The five pairs of times to escape were: $\begin{matrix} Mouse & 1 & 2 & 3 & 4 & 5\ First\: release & 129 & 89 & 136 & 163 & 118\ Second\: release & 113 & 97 & 139 & 85 & 75 \end{matrix}$ 1. Compute $\bar{d}$ and $s_d$. 2. Give a point estimate for $\mu _1-\mu _2=\mu _d$. 3. Construct the $90\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data. 4. Test, at the $10\%$ level of significance, the hypothesis that it takes mice less time to run the maze on the second trial, on average. 2. Eight golfers were asked to submit their latest scores on their favorite golf courses. These golfers were each given a set of newly designed clubs. After playing with the new clubs for a few months, the golfers were again asked to submit their latest scores on the same golf courses. The results are summarized below. $\begin{matrix} Golfer & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\ Own\: clubs & 77 & 80 & 69 & 73 & 73 & 72 & 75 & 77\ New\: clubs & 72 & 81 & 68 & 73 & 75 & 70 & 73 & 75 \end{matrix}$ 1. Compute $\bar{d}$ and $s_d$. 2. Give a point estimate for $\mu _1-\mu _2=\mu _d$. 3. Construct the $99\%$ confidence interval for $\mu _1-\mu _2=\mu _d$ from these data. 4. Test, at the $1\%$ level of significance, the hypothesis that on average golf scores are lower with the new clubs. 3. A neighborhood homeowners association suspects that the recent appraisal values of the houses in the neighborhood conducted by the county government for taxation purposes are too high.
It hired a private company to appraise the values of ten houses in the neighborhood. The results, in thousands of dollars, are House County Government Private Company 1 217 219 2 350 338 3 296 291 4 237 237 5 237 235 6 272 269 7 257 239 8 277 275 9 312 320 10 335 335 1. Give a point estimate for the difference between the mean private appraisal of all such homes and the government appraisal of all such homes. 2. Construct the $99\%$ confidence interval based on these data for the difference. 3. Test, at the $1\%$ level of significance, the hypothesis that appraised values by the county government of all such houses is greater than the appraised values by the private appraisal company. 4. In order to cut costs a wine producer is considering using duo or $1+1$ corks in place of full natural wood corks, but is concerned that it could affect buyers’s perception of the quality of the wine. The wine producer shipped eight pairs of bottles of its best young wines to eight wine experts. Each pair includes one bottle with a natural wood cork and one with a duo cork. The experts are asked to rate the wines on a one to ten scale, higher numbers corresponding to higher quality. The results are: Wine Expert Duo Cork Wood Cork 1 8.5 8.5 2 8.0 8.5 3 6.5 8.0 4 7.5 8.5 5 8.0 7.5 6 8.0 8.0 7 9.0 9.0 8 7.0 7.5 1. Give a point estimate for the difference between the mean ratings of the wine when bottled are sealed with different kinds of corks. 2. Construct the $90\%$ confidence interval based on these data for the difference. 3. Test, at the $10\%$ level of significance, the hypothesis that on the average duo corks decrease the rating of the wine. 5. Engineers at a tire manufacturing corporation wish to test a new tire material for increased durability. To test the tires under realistic road conditions, new front tires are mounted on each of $11$ company cars, one tire made with a production material and the other with the experimental material. After a fixed period the $11$ pairs were measured for wear. The amount of wear for each tire (in mm) is shown in the table: Car Production Experimental 1 5.1 5.0 2 6.5 6.5 3 3.6 3.1 4 3.5 3.7 5 5.7 4.5 6 5.0 4.1 7 6.4 5.3 8 4.7 2.6 9 3.2 3.0 10 3.5 3.5 11 6.4 5.1 1. Give a point estimate for the difference in mean wear. 2. Construct the $99\%$ confidence interval for the difference based on these data. 3. Test, at the $1\%$ level of significance, the hypothesis that the mean wear with the experimental material is less than that for the production material. 6. A marriage counselor administered a test designed to measure overall contentment to $30$ randomly selected married couples. The scores for each couple are given below. A higher number corresponds to greater contentment or happiness. Couple Husband Wife 1 47 44 2 44 46 3 49 44 4 53 44 5 42 43 6 45 45 7 48 47 8 45 44 9 52 44 10 47 42 11 40 34 12 45 42 13 40 43 14 46 41 15 47 45 16 46 45 17 46 41 18 46 41 19 44 45 20 45 43 21 48 38 22 42 46 23 50 44 24 46 51 25 43 45 26 50 40 27 46 46 28 42 41 29 51 41 30 46 47 1. Test, at the $1\%$ level of significance, the hypothesis that on average men and women are not equally happy in marriage. 2. Test, at the $1\%$ level of significance, the hypothesis that on average men are happier than women in marriage. Large Data Set Exercises Large Data Sets are absent 1. Large $\text{Data Set 5}$ lists the scores for $25$ randomly selected students on practice SAT reading tests before and after taking a two-week SAT preparation course. 
Denote the population of all students who have taken the course as $\text{Population 1}$ and the population of all students who have not taken the course as $\text{Population 2}$. 1. Compute the $25$ differences in the order after - before, their mean $\bar{d}$, and their sample standard deviation $s_d$. 2. Give a point estimate for $\mu _d=\mu _1-\mu _2$, the difference in the mean score of all students who have taken the course and the mean score of all who have not. 3. Construct a $98\%$ confidence interval for $\mu _d$. 4. Test, at the $1\%$ level of significance, the hypothesis that the mean SAT score increases by at least ten points by taking the two-week preparation course. 2. Large $\text{Data Set 12}$ lists the scores on one round for 75 randomly selected members at a golf course, first using their own original clubs, then two months later after using new clubs with an experimental design. Denote the population of all golfers using their own original clubs as $\text{Population 1}$ and the population of all golfers using the new style clubs as $\text{Population 2}$. 1. Compute the $75$ differences in the order original clubs - new clubs, their mean $\bar{d}$, and their sample standard deviation $s_d$. 2. Give a point estimate for $\mu _d=\mu _1-\mu _2$, the difference in the mean score of all students who have taken the course and the mean score of all who have not. 3. Construct a $90\%$ confidence interval for $\mu _d$. 4. Test, at the $1\%$ level of significance, the hypothesis that the mean SAT score increases by at least ten points by taking the two-week preparation course. 3. Consider the previous problem again. Since the data set is so large, it is reasonable to use the standard normal distribution instead of Student’s $t$-distribution with $74$ degrees of freedom. 1. Construct a 90% confidence interval for $\mu _d$ using the standard normal distribution, meaning that the formula is $\bar{d}\pm z_{\alpha /2}\frac{s_d}{\sqrt{n}}$. (The computations done in part (a) of the previous problem still apply and need not be redone.) How does the result obtained here compare to the result obtained in part (c) of the previous problem? 2. Test, at the $1\%$ level of significance, the hypothesis that the mean golf score decreases by at least one stroke by using the new kind of clubs, using the standard normal distribution. (All the work done in part (d) of the previous problem applies, except the critical value is now $z_\alpha$ instead of $z_\alpha$ (or the $p$-value can be computed exactly instead of only approximated, if you used the $p$-value approach).) How does the result obtained here compare to the result obtained in part (c) of the previous problem? 3. Construct the $99\%$ confidence intervals for $\mu _d$ using both the $t$- and $z$-distributions. How much difference is there in the results now? Answers 1. $\bar{d}=7.4286,\; s_d=0.9759$ 2. $\bar{d}=7.4286$ 3. $(6.53,8.33)$ 4. $T = 1.162,\; df=6,\; t_{0.10}=1.44$, do not reject $H_0$ 1. $\bar{d}=-14.25,\; s_d=1.5$ 2. $\bar{d}=-14.25$ 3. $(-18.63,-9.87)$ 4. $T = -3.000,\; df=3,\; \pm t_{0.05}=\pm 2.353$, reject $H_0$ 1. $\bar{d}=25.2,\; s_d=35.6609$ 2. $\bar{d}=25.2$ 3. $25.2\pm 34.0$ 4. $T = 1.580,\; df=4,\; t_{0.10}=1.533$, reject $H_0$ (takes less time) 1. $3.2$ 2. $3.2\pm 7.5$ 3. $T = 1.392,\; df=9,\; t_{0.10}=2.821$, do not reject $H_0$ (government appraisals not higher) 1. $0.65$ 2. $0.65\pm 0.69$ 3. $T = 3.014,\; df=10,\; t_{0.10}=2.764$, reject $H_0$ (experimental material wears less) 1. $\bar{d}=16.68,\; s_d=10.77$ 2. 
$\bar{d}=16.68$ 3. $(11.31,22.05)$ 4. $H_0:\mu _1-\mu _2=10\; vs\; H_a:\mu _1-\mu _2>10$. Test Statistic: $T = 3.1014,\; df=11$. Rejection Region: $[2.492,\infty )$. Decision: Reject $H_0$. 1. $(1.6266,2.6401)$. Endpoints change in the third decimal place. 2. $H_0:\mu _1-\mu _2=1\; vs\; H_a:\mu _1-\mu _2>1$. Test Statistic: $Z = 3.6791$. Rejection Region: $[2.33,\infty )$. Decision: Reject $H_0$. The decision is the same as in the previous problem. 3. Using the $t$-distribution, $(1.3188,2.9478)$. Using the $z$-distribution, $(1.3401,2.9266)$. There is a difference. 9.4: Comparison of Two Population Proportions Basic 1. Construct the confidence interval for $p_1-p_2$ for the level of confidence and the data given. (The samples are sufficiently large.) 1. $90\%$ confidence $n_1=1670,\; \hat{p_1}=0.42\ n_2=900,\; \hat{p_2}=0.38$ 2. $95\%$ confidence $n_1=600,\; \hat{p_1}=0.84\ n_2=420,\; \hat{p_2}=0.67$ 2. Construct the confidence interval for $p_1-p_2$ for the level of confidence and the data given. (The samples are sufficiently large.) 1. $98\%$ confidence $n_1=750,\; \hat{p_1}=0.64\ n_2=800,\; \hat{p_2}=0.51$ 2. $99.5\%$ confidence $n_1=250,\; \hat{p_1}=0.78\ n_2=250,\; \hat{p_2}=0.51$ 3. Construct the confidence interval for $p_1-p_2$ for the level of confidence and the data given. (The samples are sufficiently large.) 1. $80\%$ confidence $n_1=300,\; \hat{p_1}=0.255\ n_2=400,\; \hat{p_2}=0.193$ 2. $95\%$ confidence $n_1=3500,\; \hat{p_1}=0.147\ n_2=3750,\; \hat{p_2}=0.131$ 4. Construct the confidence interval for $p_1-p_2$ for the level of confidence and the data given. (The samples are sufficiently large.) 1. $99\%$ confidence $n_1=2250,\; \hat{p_1}=0.915\ n_2=2525,\; \hat{p_2}=0.858$ 2. $95\%$ confidence $n_1=120,\; \hat{p_1}=0.650\ n_2=200,\; \hat{p_2}=0.505$ 5. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.) 1. Test $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2>0\; @\; \alpha =0.10$ $n_1=1200,\; \hat{p_1}=0.42\ n_2=1200,\; \hat{p_2}=0.40$ 2. Test $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2\neq 0\; @\; \alpha =0.05$ $n_1=550,\; \hat{p_1}=0.61\ n_2=600,\; \hat{p_2}=0.67$ 6. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.) 1. Test $H_0:p_1-p_2=0.05\; vs\; H_a:p_1-p_2>0.05\; @\; \alpha =0.05$ $n_1=1100,\; \hat{p_1}=0.57\ n_2=1100,\; \hat{p_2}=0.48$ 2. Test $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2\neq 0\; @\; \alpha =0.05$ $n_1=800,\; \hat{p_1}=0.39\ n_2=900,\; \hat{p_2}=0.43$ 7. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.) 1. Test $H_0:p_1-p_2=0.25\; vs\; H_a:p_1-p_2<0.25\; @\; \alpha =0.005$ $n_1=1400,\; \hat{p_1}=0.57\ n_2=1200,\; \hat{p_2}=0.37$ 2. Test $H_0:p_1-p_2=0.16\; vs\; H_a:p_1-p_2\neq 0.16\; @\; \alpha =0.02$ $n_1=750,\; \hat{p_1}=0.43\ n_2=600,\; \hat{p_2}=0.22$ 8. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.) 1. Test $H_0:p_1-p_2=0.08\; vs\; H_a:p_1-p_2>0.08\; @\; \alpha =0.025$ $n_1=450,\; \hat{p_1}=0.67\ n_2=200,\; \hat{p_2}=0.52$ 2. Test $H_0:p_1-p_2=0.02\; vs\; H_a:p_1-p_2\neq 0.02\; @\; \alpha =0.001$ $n_1=2700,\; \hat{p_1}=0.837\ n_2=2900,\; \hat{p_2}=0.854$ 9. 
Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.) 1. Test $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2<0\; @\; \alpha =0.005$ $n_1=1100,\; \hat{p_1}=0.22\ n_2=1300,\; \hat{p_2}=0.27$ 2. Test $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2\neq 0\; @\; \alpha =0.01$ $n_1=650,\; \hat{p_1}=0.35\ n_2=650,\; \hat{p_2}=0.41$ 10. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.) 1. Test $H_0:p_1-p_2=0.15\; vs\; H_a:p_1-p_2>0.15\; @\; \alpha =0.10$ $n_1=950,\; \hat{p_1}=0.41\ n_2=500,\; \hat{p_2}=0.23$ 2. Test $H_0:p_1-p_2=0.10\; vs\; H_a:p_1-p_2\neq 0.10\; @\; \alpha =0.10$ $n_1=220,\; \hat{p_1}=0.92\ n_2=160,\; \hat{p_2}=0.78$ 11. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.) 1. Test $H_0:p_1-p_2=0.22\; vs\; H_a:p_1-p_2>0.22\; @\; \alpha =0.05$ $n_1=90,\; \hat{p_1}=0.72\ n_2=75,\; \hat{p_2}=0.40$ 2. Test $H_0:p_1-p_2=0.37\; vs\; H_a:p_1-p_2\neq 0.37\; @\; \alpha =0.02$ $n_1=425,\; \hat{p_1}=0.772\ n_2=425,\; \hat{p_2}=0.331$ 12. Perform the test of hypotheses indicated, using the data given. Use the critical value approach. Compute the $p$-value of the test as well. (The samples are sufficiently large.) 1. Test $H_0:p_1-p_2=0.50\; vs\; H_a:p_1-p_2<0.50\; @\; \alpha =0.10$ $n_1=40,\; \hat{p_1}=0.65\ n_2=55,\; \hat{p_2}=0.24$ 2. Test $H_0:p_1-p_2=0.30\; vs\; H_a:p_1-p_2\neq 0.30\; @\; \alpha =0.10$ $n_1=7500,\; \hat{p_1}=0.664\ n_2=1000,\; \hat{p_2}=0.319$ Applications In all the remaining exercsises the samples are sufficiently large (so this need not be checked). 1. Voters in a particular city who identify themselves with one or the other of two political parties were randomly selected and asked if they favor a proposal to allow citizens with proper license to carry a concealed handgun in city parks. The results are: Party A Party B Sample size, n 150 200 Number in favor, x 90 140 1. Give a point estimate for the difference in the proportion of all members of $\text{Party A}$ and all members of $\text{Party B}$ who favor the proposal. 2. Construct the $95\%$ confidence interval for the difference, based on these data. 3. Test, at the $5\%$ level of significance, the hypothesis that the proportion of all members of $\text{Party A}$ who favor the proposal is less than the proportion of all members of $\text{Party B}$ who do. 4. Compute the $p$-value of the test. 2. To investigate a possible relation between gender and handedness, a random sample of $320$ adults was taken, with the following results: Men Women Sample size, n 168 152 Number of left-handed, x 24 9 1. Give a point estimate for the difference in the proportion of all men who are left-handed and the proportion of all women who are left-handed. 2. Construct the $95\%$ confidence interval for the difference, based on these data. 3. Test, at the $5\%$ level of significance, the hypothesis that the proportion of men who are left-handed is greater than the proportion of women who are. 4. Compute the $p$-value of the test. 3. A local school board member randomly sampled private and public high school teachers in his district to compare the proportions of National Board Certified (NBC) teachers in the faculty. 
The results were: Private Schools Public Schools Sample size, n 80 520 Proportion of NBC teachers, pˆ$p^$ 0.175 0.150 1. Give a point estimate for the difference in the proportion of all teachers in area public schools and the proportion of all teachers in private schools who are National Board Certified. 2. Construct the $90\%$ confidence interval for the difference, based on these data. 3. Test, at the $10\%$ level of significance, the hypothesis that the proportion of all public school teachers who are National Board Certified is less than the proportion of private school teachers who are. 4. Compute the $p$-value of the test. 4. In professional basketball games, the fans of the home team always try to distract free throw shooters on the visiting team. To investigate whether this tactic is actually effective, the free throw statistics of a professional basketball player with a high free throw percentage were examined. During the entire last season, this player had $656$ free throws, $420$ in home games and $236$ in away games. The results are summarized below. Home Away Sample size, n 420 236 Free throw percent, pˆ$\hat{p}$ 81.5% 78.8% 1. Give a point estimate for the difference in the proportion of free throws made at home and away. 2. Construct the $90\%$ confidence interval for the difference, based on these data. 3. Test, at the $10\%$ level of significance, the hypothesis that there exists a home advantage in free throws. 4. Compute the $p$-value of the test. 5. Randomly selected middle-aged people in both China and the United States were asked if they believed that adults have an obligation to financially support their aged parents. The results are summarized below. China USA Sample size, n 1300 150 Number of yes, x 1170 110 Test, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that there exists a cultural difference in attitude regarding this question. 6. A manufacturer of walk-behind push mowers receives refurbished small engines from two new suppliers, $A$ and $B$. It is not uncommon that some of the refurbished engines need to be lightly serviced before they can be fitted into mowers. The mower manufacturer recently received $100$ engines from each supplier. In the shipment from $A$, $13$ needed further service. In the shipment from $B$, $10$ needed further service. Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that there exists a difference in the proportions of engines from the two suppliers needing service. Large Data Set Exercises Large Data Sets are absent 1. Large $\text{Data Sets 6A and 6B}$ record results of a random survey of $200$ voters in each of two regions, in which they were asked to express whether they prefer $\text{Candidate A}$ for a U.S. Senate seat or prefer some other candidate. Let the population of all voters in $\text{region 1}$ be denoted $\text{Population 1}$ and the population of all voters in $\text{region 2}$ be denoted $\text{Population 2}$. Let $p_1$ be the proportion of voters in $\text{Population 1}$ who prefer $\text{Candidate A}$, and $p_2$ the proportion in $\text{Population 2}$ who do. 1. Find the relevant sample proportions $\hat{p_1}$ and $\hat{p_2}$. 2. Construct a point estimate for $p_1-p_2$. 3. Construct a $95\%$ confidence interval for $p_1-p_2$. 4. 
Test, at the $5\%$ level of significance, the hypothesis that the same proportion of voters in the two regions favor $\text{Candidate A}$, against the alternative that a larger proportion in $\text{Population 2}$ do. 2. Large $\text{Data Set 11}$ records the results of samples of real estate sales in a certain region in the year $2008$ (lines $2$ through $536$) and in the year $2010$ (lines $537$ through $1106$). Foreclosure sales are identified with a 1 in the second column. Let all real estate sales in the region in 2008 be $\text{Population 1}$ and all real estate sales in the region in 2010 be $\text{Population 2}$. 1. Use the sample data to construct point estimates $\hat{p_1}$ and $\hat{p_2}$ of the proportions $p_1$ and $p_2$ of all real estate sales in this region in $2008$ and $2010$ that were foreclosure sales. Construct a point estimate of $p_1-p_2$. 2. Use the sample data to construct a $90\%$ confidence for $p_1-p_2$. 3. Test, at the $10\%$ level of significance, the hypothesis that the proportion of real estate sales in the region in $2010$ that were foreclosure sales was greater than the proportion of real estate sales in the region in $2008$ that were foreclosure sales. (The default is that the proportions were the same.) Answers 1. $(0.0068,0.0732)$ 2. $(0.1163,0.2237)$ 1. $(0.0210,0.1030)$ 2. (0.0001,0.0319)$(0.0001,0.0319)$ 1. $Z = 0.996,\; z_{0.10}=1.282,\; \text{p-value}=0.1587$, do not reject $H_0$ 2. $Z = -2.120,\; \pm z_{0.025}=\pm 1.960,\; \text{p-value}=0.0340$, reject $H_0$ 1. $Z = -2.602,\; -z_{0.005}=-2.576,\; \text{p-value}=0.0047$, reject $H_0$ 2. $Z = 2.020,\; \pm z_{0.01}=\pm 2.326,\; \text{p-value}=0.0434$, do not reject $H_0$ 1. $Z = -2.85,\; \text{p-value}=0.0022$, reject $H_0$ 2. $Z = -2.23,\; \text{p-value}=0.0258$, do not reject $H_0$ 1. $Z =1.36,\; \text{p-value}=0.0869$, do not reject $H_0$ 2. $Z = 2.32,\; \text{p-value}=0.0204$, do not reject $H_0$ 1. $-0.10$ 2. $-0.10\pm 0.101$ 3. $Z = -1.943,\; -z_{0.05}=-1.645$, reject $H_0$ (fewer in $\text{Party A}$ favor) 4. $\text{p-value}=0.0262$ 1. $0.025$ 2. $0.025\pm 0.0745$ 3. $Z = 0.552,\; z_{0.10}=1.282$, do not reject $H_0$ (as many public school teachers are certified) 4. $\text{p-value}=0.2912$ 1. $Z = 4.498,\; \pm z_{0.005}=\pm 2.576$, reject $H_0$ (different) 1. $\hat{p_1}=0.355$ and $\hat{p_2}=0.41$ 2. $\hat{p_1}-\hat{p_2}=-0.055$ 3. $(-0.1501,0.0401)$ 4. $H_0:p_1-p_2=0\; vs\; H_a:p_1-p_2<0$. Test Statistic: $Z=-1.1335$. Rejection Region: $(-\infty ,-1.645 ]$. Decision: Fail to reject $H_0$. 9.5 Sample Size Considerations Basic 1. Estimate the common sample size $n$ of equally sized independent samples needed to estimate $\mu _1-\mu _2$ as specified when the population standard deviations are as shown. 1. $90\%$ confidence, to within $3$ units, $\sigma _1=10$ and $\sigma _2=7$ 2. $99\%$ confidence, to within $4$ units, $\sigma _1=6.8$ and $\sigma _2=9.3$ 3. $95\%$ confidence, to within $5$ units, $\sigma _1=22.6$ and $\sigma _2=31.8$ 2. Estimate the common sample size $n$ of equally sized independent samples needed to estimate $\mu _1-\mu _2$ as specified when the population standard deviations are as shown. 1. $80\%$ confidence, to within $2$ units, $\sigma _1=14$ and $\sigma _2=23$ 2. $90\%$ confidence, to within $0.3$ units, $\sigma _1=1.3$ and $\sigma _2=0.8$ 3. $99\%$ confidence, to within $11$ units, $\sigma _1=42$ and $\sigma _2=37$ 3. 
Estimate the number $n$ of pairs that must be sampled in order to estimate $\mu _d=\mu _1-\mu _2$ as specified when the standard deviation $s_d$ of the population of differences is as shown. 1. $80\%$ confidence, to within $6$ units, $\sigma _d=26.5$ 2. $95\%$ confidence, to within $4$ units, $\sigma _d=12$ 3. $90\%$ confidence, to within $5.2$ units, $\sigma _d=11.3$ 4. Estimate the number $n$ of pairs that must be sampled in order to estimate $\mu _d=\mu _1-\mu _2$ as specified when the standard deviation $s_d$ of the population of differences is as shown. 1. $90\%$ confidence, to within $20$ units, $\sigma _d=75.5$ 2. $95\%$ confidence, to within $11$ units, $\sigma _d=31.4$ 3. $99\%$ confidence, to within $1.8$ units, $\sigma _d=4$ 5. Estimate the minimum equal sample sizes $n _1=n_2$ necessary in order to estimate $p _1-p _2$ as specified. 1. $80\%$ confidence, to within $0.05$ (five percentage points) 1. when no prior knowledge of $p _1$ or $p _2$ is available 2. when prior studies indicate that $p_1\approx 0.20$ and $p_2\approx 0.65$ 2. $90\%$ confidence, to within $0.02$ (two percentage points) 1. when no prior knowledge of $p _1$ or $p _2$ is available 2. when prior studies indicate that $p_1\approx 0.75$ and $p_2\approx 0.63$ 3. $95\%$ confidence, to within $0.10$ (ten percentage points) 1. when no prior knowledge of $p _1$ or $p _2$ is available 2. when prior studies indicate that $p_1\approx 0.11$ and $p_2\approx 0.37$ 6. Estimate the minimum equal sample sizes $n _1=n_2$ necessary in order to estimate $p _1-p _2$ as specified. 1. $80\%$ confidence, to within $0.02$ (two percentage points) 1. when no prior knowledge of $p _1$ or $p _2$ is available 2. when prior studies indicate that $p_1\approx 0.78$ and $p_2\approx 0.65$ 2. $90\%$ confidence, to within $0.05$ (five percentage points) 1. when no prior knowledge of $p _1$ or $p _2$ is available 2. when prior studies indicate that $p_1\approx 0.12$ and $p_2\approx 0.24$ 3. $95\%$ confidence, to within $0.10$ (ten percentage points) 1. when no prior knowledge of $p _1$ or $p _2$ is available 2. when prior studies indicate that $p_1\approx 0.14$ and $p_2\approx 0.21$ Applications 1. An educational researcher wishes to estimate the difference in average scores of elementary school children on two versions of a $100$-point standardized test, at $99\%$ confidence and to within two points. Estimate the minimum equal sample sizes necessary if it is known that the standard deviation of scores on different versions of such tests is $4.9$. 2. A university administrator wishes to estimate the difference in mean grade point averages among all men affiliated with fraternities and all unaffiliated men, with $95\%$ confidence and to within $0.15$. It is known from prior studies that the standard deviations of grade point averages in the two groups have common value $0.4$. Estimate the minimum equal sample sizes necessary to meet these criteria. 3. An automotive tire manufacturer wishes to estimate the difference in mean wear of tires manufactured with an experimental material and ordinary production tire, with $90\%$ confidence and to within $0.5$ mm. To eliminate extraneous factors arising from different driving conditions the tires will be tested in pairs on the same vehicles. It is known from prior studies that the standard deviations of the differences of wear of tires constructed with the two kinds of materials is $1.75$ mm. Estimate the minimum number of pairs in the sample necessary to meet these criteria. 4. 
To assess to the relative happiness of men and women in their marriages, a marriage counselor plans to administer a test measuring happiness in marriage to $n$ randomly selected married couples, record the their test scores, find the differences, and then draw inferences on the possible difference. Let $\mu _1$ and $\mu _2$ be the true average levels of happiness in marriage for men and women respectively as measured by this test. Suppose it is desired to find a $90\%$ confidence interval for estimating $\mu _d=\mu _1-\mu _2$ to within two test points. Suppose further that, from prior studies, it is known that the standard deviation of the differences in test scores is $\sigma _d\approx 10$. What is the minimum number of married couples that must be included in this study? 5. A journalist plans to interview an equal number of members of two political parties to compare the proportions in each party who favor a proposal to allow citizens with a proper license to carry a concealed handgun in public parks. Let $p_1$ and $p_2$ be the true proportions of members of the two parties who are in favor of the proposal. Suppose it is desired to find a $95\%$ confidence interval for estimating $p_1-p_2$ to within $0.05$. Estimate the minimum equal number of members of each party that must be sampled to meet these criteria. 6. A member of the state board of education wants to compare the proportions of National Board Certified (NBC) teachers in private high schools and in public high schools in the state. His study plan calls for an equal number of private school teachers and public school teachers to be included in the study. Let $p_1$ and $p_2$ be these proportions. Suppose it is desired to find a $99\%$ confidence interval that estimates $p_1-p_2$ to within $0.05$. 1. Supposing that both proportions are known, from a prior study, to be approximately $0.15$, compute the minimum common sample size needed. 2. Compute the minimum common sample size needed on the supposition that nothing is known about the values of $p_1$ and $p_2$. Answers 1. $n_1=n_2=45$ 2. $n_1=n_2=56$ 3. $n_1=n_2=234$ 1. $n_1=n_2=33$ 2. $n_1=n_2=35$ 3. $n_1=n_2=13$ 1. $n_1=n_2=329$ 2. $n_1=n_2=255$ 1. $n_1=n_2=3383$ 2. $n_1=n_2=2846$ 1. $n_1=n_2=193$ 2. $n_1=n_2=128$ 1. $n_1=n_2\approx 80$ 2. $n_1=n_2\approx 34$ 3. $n_1=n_2\approx 769$
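The sample size answers for Section 9.5 above can be reproduced from the usual normal-approximation formulas: for means, $n=\left \lceil z_{\alpha /2}^2(\sigma _1^2+\sigma _2^2)/E^2 \right \rceil$, and for proportions, $n=\left \lceil z_{\alpha /2}^2\left ( p_1(1-p_1)+p_2(1-p_2) \right )/E^2 \right \rceil$, with $p_1=p_2=0.5$ used when no prior estimates are available. The short Python sketch below is one way to check such answers; the helper names are ours, not part of the text.

```python
import math
from scipy.stats import norm

def n_for_mean_difference(conf, E, sigma1, sigma2):
    """Equal sample sizes needed to estimate mu1 - mu2 to within E."""
    z = norm.ppf(1 - (1 - conf) / 2)      # z_{alpha/2}
    return math.ceil(z**2 * (sigma1**2 + sigma2**2) / E**2)

def n_for_proportion_difference(conf, E, p1=0.5, p2=0.5):
    """Equal sample sizes needed to estimate p1 - p2 to within E.
    With no prior information use p1 = p2 = 0.5 (the conservative choice)."""
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil(z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / E**2)

# 90% confidence, to within 3 units, sigma1 = 10, sigma2 = 7
print(n_for_mean_difference(0.90, 3, 10, 7))                  # 45
# 80% confidence, to within 0.05, no prior knowledge of p1, p2
print(n_for_proportion_difference(0.80, 0.05))                # 329
# 80% confidence, to within 0.05, prior estimates p1 = 0.20, p2 = 0.65
print(n_for_proportion_difference(0.80, 0.05, 0.20, 0.65))    # 255
```

The three printed values agree with the corresponding entries in the answer list above.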
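Similarly, the pooled two-sample $t$ statistics asked for in Section 9.2 above can be recomputed from the summary data alone. A minimal sketch follows (the helper `pooled_t` is ours, not the text's); with $n_1=30,\; \bar{x_1}=681,\; s_1=8,\; n_2=27,\; \bar{x_2}=625,\; s_2=8$ and hypothesized difference $D_0=50$ it reproduces the answer $T=2.827$ with $df=55$ given above.

```python
from scipy.stats import t

def pooled_t(n1, xbar1, s1, n2, xbar2, s2, D0=0.0):
    """Pooled two-sample t statistic for H0: mu1 - mu2 = D0
    (independent small samples, equal population variances assumed)."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df   # pooled sample variance
    se = (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return (xbar1 - xbar2 - D0) / se, df

# Test of H0: mu1 - mu2 = 50 vs Ha: mu1 - mu2 > 50 from Section 9.2
T, df = pooled_t(30, 681, 8, 27, 625, 8, D0=50)
print(round(T, 3), df)          # 2.827 55
print(t.sf(T, df))              # right-tail p-value, about 0.003, so p < 0.01
```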
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 10.1 Linear Relationships Between Variables Basic 1. A line has equation $y=0.5x+2$. 1. Pick five distinct $x$-values, use the equation to compute the corresponding $y$-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the $y$-intercept. 2. A line has equation $y=x-0.5$. 1. Pick five distinct $x$-values, use the equation to compute the corresponding $y$-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the $y$-intercept. 3. A line has equation $y=-2x+4$. 1. Pick five distinct $x$-values, use the equation to compute the corresponding $y$-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the $y$-intercept. 4. A line has equation $y=-1.5x+1$. 1. Pick five distinct $x$-values, use the equation to compute the corresponding $y$-values, and plot the five points obtained. 2. Give the value of the slope of the line; give the value of the $y$-intercept. 5. Based on the information given about a line, determine how $y$ will change (increase, decrease, or stay the same) when $x$ is increased, and explain. In some cases it might be impossible to tell from the information given. 1. The slope is positive. 2. The $y$-intercept is positive. 3. The slope is zero. 6. Based on the information given about a line, determine how $y$ will change (increase, decrease, or stay the same) when $x$ is increased, and explain. In some cases it might be impossible to tell from the information given. 1. The $y$-intercept is negative. 2. The $y$-intercept is zero. 3. The slope is negative. 7. A data set consists of eight $(x,y)$ pairs of numbers: $\begin{matrix} (0,12) & (4,16) & (8,22) & (15,28)\ (2,15) & (5,14) & (13,24) & (20,30) \end{matrix}$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be linear or not linear. 8. A data set consists of ten $(x,y)$ pairs of numbers: $\begin{matrix} (3,20) & (6,9) & (11,0) & (14,1) & (18,9)\ (5,13) & (8,4) & (12,0) & (17,6) & (20,16) \end{matrix}$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be linear or not linear. 9. A data set consists of nine $(x,y)$ pairs of numbers: $\begin{matrix} (8,16) & (10,4) & (12,0) & (14,4) & (16,16)\ (9,9) & (11,1) & (13,1) & (15,9) & \end{matrix}$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be linear or not linear. 10. A data set consists of five $(x,y)$ pairs of numbers: $\begin{matrix} (0,1) & (2,5) & (3,7) & (5,11) & (8,17) \end{matrix}$ 1. Plot the data in a scatter diagram. 2. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be deterministic or to involve randomness. 3. Based on the plot, explain whether the relationship between $x$ and $y$ appears to be linear or not linear. Applications 1. 
At $60^{\circ}F$ a particular blend of automotive gasoline weighs $6.17$ lb/gal. The weight $y$ of gasoline on a tank truck that is loaded with $x$ gallons of gasoline is given by the linear equation $y=6.17x$ 1. Explain whether the relationship between the weight $y$ and the amount $x$ of gasoline is deterministic or contains an element of randomness. 2. Predict the weight of gasoline on a tank truck that has just been loaded with $6,750$ gallons of gasoline. 2. The rate for renting a motor scooter for one day at a beach resort area is $25$ dollars plus $30$ cents for each mile the scooter is driven. The total cost $y$ in dollars for renting a scooter and driving it $x$ miles is $y=0.30x+25$ 1. Explain whether the relationship between the cost $y$ of renting the scooter for a day and the distance $x$ that the scooter is driven that day is deterministic or contains an element of randomness. 2. A person intends to rent a scooter one day for a trip to an attraction $17$ miles away. Assuming that the total distance the scooter is driven is $34$ miles, predict the cost of the rental. 3. The pricing schedule for labor on a service call by an elevator repair company is $150$ dollars plus $50$ dollars per hour on site. 1. Write down the linear equation that relates the labor cost $y$ to the number of hours $x$ that the repairman is on site. 2. Calculate the labor cost for a service call that lasts $2.5$ hours. 4. The cost of a telephone call made through a leased line service is $2.5$ cents per minute. 1. Write down the linear equation that relates the cost $y$ (in cents) of a call to its length $x$. 2. Calculate the cost of a call that lasts $23$ minutes. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. Plot the scatter diagram with SAT score as the independent variable ($x$) and GPA as the dependent variable ($y$). Comment on the appearance and strength of any linear trend. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Plot the scatter diagram with golf score using the original clubs as the independent variable ($x$) and golf score using the new clubs as the dependent variable ($y$). Comment on the appearance and strength of any linear trend. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. Plot the scatter diagram with the number of bidders at the auction as the independent variable ($x$) and the sales price as the dependent variable ($y$). Comment on the appearance and strength of any linear trend. Answers 1. Answers vary. 2. Slope $m=0.5$; $y$-intercept $b=2$. 1. Answers vary. 2. Slope $m=-2$; $y$-intercept $b=4$. 1. $y$ increases. 2. Impossible to tell. 3. $y$ does not change. 1. Scatter diagram needed. 2. Involves randomness. 3. Linear. 1. Scatter diagram needed. 2. Deterministic. 3. Not linear. 1. Deterministic. 2. $41,647.5$ pounds. 1. $y=50x+150$. 2. $275$ dollars. 1. There appears to be a hint of some positive correlation. 2. There appears to be clear positive correlation.
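The correlation and regression exercises in Sections 10.2 through 10.5 below all rest on the same few quantities: $SS_{xx}$, $SS_{yy}$ and $SS_{xy}$, from which the correlation coefficient is $r=SS_{xy}/\sqrt{SS_{xx}SS_{yy}}$, the least squares slope and intercept are $\widehat{\beta _1}=SS_{xy}/SS_{xx}$ and $\widehat{\beta _0}=\bar{y}-\widehat{\beta _1}\bar{x}$, and $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. The Python sketch below (variable names are ours) reproduces the printed answers $r=0.921$ and $\hat{y}=0.743x+2.675$ for the first sample data set of Section 10.2.

```python
import numpy as np
from scipy.stats import linregress

x = np.array([0, 1, 3, 5, 8], dtype=float)   # first sample data set of Section 10.2
y = np.array([2, 4, 6, 5, 9], dtype=float)

n = len(x)
SSxx = np.sum(x**2) - n * x.mean()**2
SSyy = np.sum(y**2) - n * y.mean()**2
SSxy = np.sum(x * y) - n * x.mean() * y.mean()

r = SSxy / np.sqrt(SSxx * SSyy)     # linear correlation coefficient
b1 = SSxy / SSxx                    # least squares slope
b0 = y.mean() - b1 * x.mean()       # least squares intercept
SSE = SSyy - b1 * SSxy              # sum of squared errors

print(round(r, 3), round(b1, 3), round(b0, 3), round(SSE, 3))
# 0.921 0.743 2.675 4.073

# scipy's linregress returns the same slope, intercept, and r in one call
res = linregress(x, y)
print(round(res.rvalue, 3), round(res.slope, 3), round(res.intercept, 3))
```

The same few lines, with the data arrays changed, can be used to check the remaining answers in these sections.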
10.2 The Linear Correlation Coefficient Basic With the exception of the exercises at the end of Section 10.3, the first Basic exercise in each of the following sections through Section 10.7 uses the data from the first exercise here, the second Basic exercise uses the data from the second exercise here, and so on, and similarly for the Application exercises. Save your computations done on these exercises so that you do not need to repeat them later. 1. For the sample data $\begin{array}{c|c c c c c} x &0 &1 &3 &5 &8 \ \hline y &2 &4 &6 &5 &9\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 2. For the sample data $\begin{array}{c|c c c c c} x &0 &2 &3 &6 &9 \ \hline y &0 &3 &3 &4 &8\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 3. For the sample data $\begin{array}{c|c c c c c} x &1 &3 &4 &6 &8 \ \hline y &4 &1 &3 &-1 &0\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 4. For the sample data $\begin{array}{c|c c c c c} x &1 &2 &4 &7 &9 \ \hline y &5 &5 &6 &-3 &0\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 5. For the sample data $\begin{array}{c|c c c c c} x &1 &1 &3 &4 &5 \ \hline y &2 &1 &5 &3 &4\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 6. For the sample data $\begin{array}{c|c c c c c} x &1 &3 &5 &5 &8 \ \hline y &5 &-2 &2 &-1 &-3\ \end{array}$ 1. Draw the scatter plot. 2. Based on the scatter plot, predict the sign of the linear correlation coefficient. Explain your answer. 3. Compute the linear correlation coefficient and compare its sign to your answer to part (b). 7. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=5\; \; \sum x=25\; \; \sum x^2=165\ \sum y=24\; \; \sum y^2=134\; \; \sum xy=144\ 1\leq x\leq 9$ 8. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=5\; \; \sum x=31\; \; \sum x^2=253\ \sum y=18\; \; \sum y^2=90\; \; \sum xy=148\ 2\leq x\leq 12$ 9. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=10\; \; \sum x=0\; \; \sum x^2=60\ \sum y=24\; \; \sum y^2=234\; \; \sum xy=-87\ -4\leq x\leq 4$ 10. Compute the linear correlation coefficient for the sample data summarized by the following information: $n=10\; \; \sum x=-3\; \; \sum x^2=263\ \sum y=55\; \; \sum y^2=917\; \; \sum xy=-355\ -10\leq x\leq 10$ Applications 1. The age $x$ in months and vocabulary $y$ were measured for six children, with the results shown in the table. 
$\begin{array}{c|c c c c c c c} x &13 &14 &15 &16 &16 &18 \ \hline y &8 &10 &15 &20 &27 &30\ \end{array}$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 2. The curb weight $x$ in hundreds of pounds and braking distance $y$ in feet, at $50$ miles per hour on dry pavement, were measured for five vehicles, with the results shown in the table. $\begin{array}{c|c c c c c c } x &25 &27.5 &32.5 &35 &45 \ \hline y &105 &125 &140 &140 &150 \ \end{array}$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 3. The age $x$ and resting heart rate $y$ were measured for ten men, with the results shown in the table. $\begin{array}{c|c c c c c c } x &20 &23 &30 &37 &35 \ \hline y &72 &71 &73 &74 &74 \ \end{array}\ \begin{array}{c|c c c c c c } x &45 &51 &55 &60 &63 \ \hline y &73 &72 &79 &75 &77 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 4. The wind speed $x$ in miles per hour and wave height $y$ in feet were measured under various conditions on an enclosed deep water sea, with the results shown in the table, $\begin{array}{c|c c c c c c } x &0 &0 &2 &7 &7 \ \hline y &2.0 &0.0 &0.3 &0.7 &3.3 \ \end{array}\ \begin{array}{c|c c c c c c } x &9 &13 &20 &22 &31 \ \hline y &4.9 &4.9 &3.0 &6.9 &5.9 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 5. The advertising expenditure $x$ and sales $y$ in thousands of dollars for a small retail business in its first eight years in operation are shown in the table. $\begin{array}{c|c c c c c } x &1.4 &1.6 &1.6 &2.0 \ \hline y &180 &184 &190 &220 \ \end{array}\ \begin{array}{c|c c c c c c } x &2.0 &2.2 &2.4 &2.6 \ \hline y &186 &215 &205 &240 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 6. The height $x$ at age $2$ and $y$ at age $20$, both in inches, for ten women are tabulated in the table. $\begin{array}{c|c c c c c } x &31.3 &31.7 &32.5 &33.5 &34.4\ \hline y &60.7 &61.0 &63.1 &64.2 &65.9 \ \end{array}\ \begin{array}{c|c c c c c } x &35.2 &35.8 &32.7 &33.6 &34.8 \ \hline y &68.2 &67.6 &62.3 &64.9 &66.8 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 7. The course average $x$ just before a final exam and the score $y$ on the final exam were recorded for $15$ randomly selected students in a large physics class, with the results shown in the table. $\begin{array}{c|c c c c c } x &69.3 &87.7 &50.5 &51.9 &82.7\ \hline y &56 &89 &55 &49 &61 \ \end{array}\ \begin{array}{c|c c c c c } x &70.5 &72.4 &91.7 &83.3 &86.5 \ \hline y &66 &72 &83 &73 &82 \ \end{array}\ \begin{array}{c|c c c c c } x &79.3 &78.5 &75.7 &52.3 &62.2 \ \hline y &92 &80 &64 &18 &76 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 8. The table shows the acres $x$ of corn planted and acres $y$ of corn harvested, in millions of acres, in a particular country in ten successive years. 
$\begin{array}{c|c c c c c } x &75.7 &78.9 &78.6 &80.9 &81.8\ \hline y &68.8 &69.3 &70.9 &73.6 &75.1 \ \end{array}\ \begin{array}{c|c c c c c } x &78.3 &93.5 &85.9 &86.4 &88.2 \ \hline y &70.6 &86.5 &78.6 &79.5 &81.4 \ \end{array}\$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 9. Fifty male subjects drank a measured amount $x$ (in ounces) of a medication and the concentration $y$ (in percent) in their blood of the active ingredient was measured $30$ minutes later. The sample data are summarized by the following information. $n=50\; \; \sum x=112.5\; \; \sum y=4.83\ \sum xy=15.255\; \; 0\leq x\leq 4.5\ \sum x^2=356.25\; \; \sum y^2=0.667$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 10. In an effort to produce a formula for estimating the age of large free-standing oak trees non-invasively, the girth $x$ (in inches) five feet off the ground of $15$ such trees of known age $y$ (in years) was measured. The sample data are summarized by the following information. $n=15\; \; \sum x=3368\; \; \sum y=6496\ \sum xy=1,933,219\; \; 74\leq x\leq 395\ \sum x^2=917,780\; \; \sum y^2=4,260,666$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 11. Construction standards specify the strength of concrete $28$ days after it is poured. For $30$ samples of various types of concrete the strength $x$ after $3$ days and the strength $y$ after $28$ days (both in hundreds of pounds per square inch) were measured. The sample data are summarized by the following information. $n=30\; \; \sum x=501.6\; \; \sum y=1338.8\ \sum xy=23,246.55\; \; 11\leq x\leq 22\ \sum x^2=8724.74\; \; \sum y^2=61,980.14$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. 12. Power-generating facilities used forecasts of temperature to forecast energy demand. The average temperature $x$ (degrees Fahrenheit) and the day’s energy demand $y$ (million watt-hours) were recorded on $40$ randomly selected winter days in the region served by a power company. The sample data are summarized by the following information. $n=40\; \; \sum x=2000\; \; \sum y=2969\ \sum xy=143,042\; \; 40\leq x\leq 60\ \sum x^2=101,340\; \; \sum y^2=243,027$ Compute the linear correlation coefficient for these sample data and interpret its meaning in the context of the problem. Additional Exercises 1. In each case state whether you expect the two variables $x$ and $y$ indicated to have positive, negative, or zero correlation. 1. the number $x$ of pages in a book and the age $y$ of the author 2. the number $x$ of pages in a book and the age $y$ of the intended reader 3. the weight $x$ of an automobile and the fuel economy $y$ in miles per gallon 4. the weight $x$ of an automobile and the reading $y$ on its odometer 5. the amount $x$ of a sedative a person took an hour ago and the time $y$ it takes him to respond to a stimulus 2. In each case state whether you expect the two variables $x$ and $y$ indicated to have positive, negative, or zero correlation. 1. the length $x$ of time an emergency flare will burn and the length $y$ of time the match used to light it burned 2. the average length $x$ of time that calls to a retail call center are on hold one day and the number $y$ of calls received that day 3. 
the length $x$ of a regularly scheduled commercial flight between two cities and the headwind $y$ encountered by the aircraft 4. the value $x$ of a house and the its size $y$ in square feet 5. the average temperature $x$ on a winter day and the energy consumption $y$ of the furnace 3. Changing the units of measurement on two variables $x$ and $y$ should not change the linear correlation coefficient. Moreover, most change of units amount to simply multiplying one unit by the other (for example, $1$ foot = $12$ inches). Multiply each $x$ value in the table in Exercise 1 by two and compute the linear correlation coefficient for the new data set. Compare the new value of $r$ to the one for the original data. 4. Refer to the previous exercise. Multiply each $x$ value in the table in Exercise 2 by two, multiply each $y$ value by three, and compute the linear correlation coefficient for the new data set. Compare the new value of $r$ to the one for the original data. 5. Reversing the roles of $x$ and $y$ in the data set of Exercise 1 produces the data set $\begin{array}{c|c c c c c} x &2 &4 &6 &5 &9 \ \hline y &0 &1 &3 &5 &8\ \end{array}$ Compute the linear correlation coefficient of the new set of data and compare it to what you got in Exercise 1. 6. In the context of the previous problem, look at the formula for $r$ and see if you can tell why what you observed there must be true for every data set. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. Compute the linear correlation coefficient $r$. Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the first large data set problem for Section 10.1. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Compute the linear correlation coefficient $r$. Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the second large data set problem for Section 10.1. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. Compute the linear correlation coefficient $r$. Compare its value to your comments on the appearance and strength of any linear trend in the scatter diagram that you constructed in the third large data set problem for Section 10.1. Answers 1. $r=0.921$ 2. $r=-0.794$ 3. $r=0.707$ 4. $0.875$ 5. $-0.846$ 6. $0.948$ 7. $0.709$ 8. $0.832$ 9. $0.751$ 10. $0.965$ 11. $0.992$ 12. .921 1. zero 2. positive 3. negative 4. zero 5. positive 13. same value 14. same value 15. $r=0.4601$ 16. $r=0.9002$ 10.3 Modelling Linear Relationships with Randomness Present Basic 1. State the three assumptions that are the basis for the Simple Linear Regression Model. 2. The Simple Linear Regression Model is summarized by the equation $y=\beta _1x+\beta _0+\varepsilon$ Identify the deterministic part and the random part. 3. Is the number $\beta _1$ in the equation $y=\beta _1x+\beta _0$ a statistic or a population parameter? Explain. 4. Is the number $\sigma$ in the Simple Linear Regression Model a statistic or a population parameter? Explain. 5. 
Describe what to look for in a scatter diagram in order to check that the assumptions of the Simple Linear Regression Model are true. 6. True or false: the assumptions of the Simple Linear Regression Model must hold exactly in order for the procedures and analysis developed in this chapter to be useful. Answers 1. The mean of $y$ is linearly related to $x$. 2. For each given $x$, $y$ is a normal random variable with mean $\beta _1x+\beta _0$ and a standard deviation $\sigma$. 3. All the observations of $y$ in the sample are independent. 1. $\beta _1$ is a population parameter. 2. A linear trend. 10.4 The Least Squares Regression Line Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2. 1. Compute the least squares regression line for the data in Exercise 1 of Section 10.2. 2. Compute the least squares regression line for the data in Exercise 2 of Section 10.2. 3. Compute the least squares regression line for the data in Exercise 3 of Section 10.2. 4. Compute the least squares regression line for the data in Exercise 4 of Section 10.2. 5. For the data in Exercise 5 of Section 10.2 1. Compute the least squares regression line. 2. Compute the sum of the squared errors $\text{SSE}$ using the definition $\sum (y-\hat{y})^2$. 3. Compute the sum of the squared errors $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. 6. For the data in Exercise 6 of Section 10.2 1. Compute the least squares regression line. 2. Compute the sum of the squared errors $\text{SSE}$ using the definition $\sum (y-\hat{y})^2$. 3. Compute the sum of the squared errors $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. 7. Compute the least squares regression line for the data in Exercise 7 of Section 10.2. 8. Compute the least squares regression line for the data in Exercise 8 of Section 10.2. 9. For the data in Exercise 9 of Section 10.2 1. Compute the least squares regression line. 2. Can you compute the sum of the squared errors $\text{SSE}$ using the definition $\sum (y-\hat{y})^2$? Explain. 3. Compute the sum of the squared errors $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. 10. For the data in Exercise 10 of Section 10.2 1. Compute the least squares regression line. 2. Can you compute the sum of the squared errors $\text{SSE}$ using the definition $\sum (y-\hat{y})^2$? Explain. 3. Compute the sum of the squared errors $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. Applications 1. For the data in Exercise 11 of Section 10.2 1. Compute the least squares regression line. 2. On average, how many new words does a child from $13$ to $18$ months old learn each month? Explain. 3. Estimate the average vocabulary of all $16$-month-old children. 2. For the data in Exercise 12 of Section 10.2 1. Compute the least squares regression line. 2. On average, how many additional feet are added to the braking distance for each additional $100$ pounds of weight? Explain. 3. Estimate the average braking distance of all cars weighing $3,000$ pounds. 3. For the data in Exercise 13 of Section 10.2 1. Compute the least squares regression line. 2. Estimate the average resting heart rate of all $40$-year-old men. 3. Estimate the average resting heart rate of all newborn baby boys. Comment on the validity of the estimate. 4. For the data in Exercise 14 of Section 10.2 1. Compute the least squares regression line. 2. 
Estimate the average wave height when the wind is blowing at $10$ miles per hour. 3. Estimate the average wave height when there is no wind blowing. Comment on the validity of the estimate. 5. For the data in Exercise 15 of Section 10.2 1. Compute the least squares regression line. 2. On average, for each additional thousand dollars spent on advertising, how does revenue change? Explain. 3. Estimate the revenue if $\2,500$ is spent on advertising next year. 6. For the data in Exercise 16 of Section 10.2 1. Compute the least squares regression line. 2. On average, for each additional inch of height of two-year-old girl, what is the change in the adult height? Explain. 3. Predict the adult height of a two-year-old girl who is $33$ inches tall. 7. For the data in Exercise 17 of Section 10.2 1. Compute the least squares regression line. 2. Compute $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. 3. Estimate the average final exam score of all students whose course average just before the exam is $85$. 8. For the data in Exercise 18 of Section 10.2 1. Compute the least squares regression line. 2. Compute $\text{SSE}$ using the formula $SSE=SS_{yy}-\widehat{\beta _1}SS_{xy}$. 3. Estimate the number of acres that would be harvested if $90$ million acres of corn were planted. 9. For the data in Exercise 19 of Section 10.2 1. Compute the least squares regression line. 2. Interpret the value of the slope of the least squares regression line in the context of the problem. 3. Estimate the average concentration of the active ingredient in the blood in men after consuming $1$ ounce of the medication. 10. For the data in Exercise 20 of Section 10.2 1. Compute the least squares regression line. 2. Interpret the value of the slope of the least squares regression line in the context of the problem. 3. Estimate the age of an oak tree whose girth five feet off the ground is $92$ inches. 11. For the data in Exercise 21 of Section 10.2 1. Compute the least squares regression line. 2. The $28$-day strength of concrete used on a certain job must be at least $3,200$ psi. If the $3$-day strength is $1,300$ psi, would we anticipate that the concrete will be sufficiently strong on the $28^{th}$ day? Explain fully. 12. For the data in Exercise 22 of Section 10.2 1. Compute the least squares regression line. 2. If the power facility is called upon to provide more than $95$ million watt-hours tomorrow then energy will have to be purchased from elsewhere at a premium. The forecast is for an average temperature of $42$ degrees. Should the company plan on purchasing power at a premium? Additional Exercises 1. Verify that no matter what the data are, the least squares regression line always passes through the point with coordinates $(\bar{x},\bar{y})$. Hint: Find the predicted value of $y$ when $x=\bar{x}$. 2. In Exercise 1 you computed the least squares regression line for the data in Exercise 1 of Section 10.2. 1. Reverse the roles of x and y and compute the least squares regression line for the new data set $\begin{array}{c|c c c c c c} x &2 &4 &6 &5 &9 \ \hline y &0 &1 &3 &5 &8\ \end{array}$ 2. Interchanging x and y corresponds geometrically to reflecting the scatter plot in a 45-degree line. Reflecting the regression line for the original data the same way gives a line with the equation $\bar{y}=1.346x-3.600$. Is this the equation that you got in part (a)? Can you figure out why not? Hint: Think about how x and y are treated differently geometrically in the computation of the goodness of fit. 3. 
Compute $\text{SSE}$ for each line and see if they fit the same, or if one fits the data better than the other. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. 1. Compute the least squares regression line with SAT score as the independent variable ($x$) and GPA as the dependent variable ($y$). 2. Interpret the meaning of the slope $\widehat{\beta _1}$ of regression line in the context of problem. 3. Compute $\text{SSE}$ the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the GPA of a student whose SAT score is $1350$. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). 1. Compute the least squares regression line with scores using the original clubs as the independent variable ($x$) and scores using the new clubs as the dependent variable ($y$). 2. Interpret the meaning of the slope $\widehat{\beta _1}$ of regression line in the context of problem. 3. Compute $\text{SSE}$ the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the score with the new clubs of a golfer whose score with the old clubs is $73$. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. 1. Compute the least squares regression line with the number of bidders present at the auction as the independent variable ($x$) and sales price as the dependent variable ($y$). 2. Interpret the meaning of the slope $\widehat{\beta _1}$ of regression line in the context of problem. 3. Compute $\text{SSE}$ the measure of the goodness of fit of the regression line to the sample data. 4. Estimate the sales price of a clock at an auction at which the number of bidders is seven. Answers 1. $\hat{y}=0.743x+2.675$ 2. $\hat{y}=-0.610x+4.082$ 3. $\hat{y}=0.625x+1.25,\; SSE=5$ 4. $\hat{y}=0.6x+1.8$ 5. $\hat{y}=-1.45x+2.4,\; SSE=50.25$ (cannot use the definition to compute) 1. $\hat{y}=4.848x-56$ 2. $4.8$ 3. $21.6$ 1. $\hat{y}=0.114x+69.222$ 2. $73.8$ 3. $69.2$, invalid extrapolation 1. $\hat{y}=42.024x+119.502$ 2. increases by $\42,024$ 3. $\224,562$ 1. $\hat{y}=1.045x-8.527$ 2. $2151.93367$ 3. $80.3$ 1. $\hat{y}=0.043x+0.001$ 2. For each additional ounce of medication consumed blood concentration of the active ingredient increases by $0.043\%$ 3. $0.044\%$ 1. $\hat{y}=2.550x+1.993$ 2. Predicted $28$-day strength is $3,514$ psi; sufficiently strong 1. $\hat{y}=0.0016x+0.022$ 2. On average, every $100$ point increase in SAT score adds $0.16$ point to the GPA. 3. $SSE=432.10$ 4. $\hat{y}=2.182$ 1. $\hat{y}=116.62x+6955.1$ 2. On average, every $1$ additional bidder at an auction raises the price by $116.62$ dollars. 3. $SSE=1850314.08$ 4. $\hat{y}=7771.44$ 10.5 Statistical Inferences About β1 Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2 and Section 10.4. 1. Construct the $95\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 1 of Section 10.2. 2. Construct the $90\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 2 of Section 10.2. 3. 
Construct the $90\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 3 of Section 10.2. 4. Construct the $99\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 4 of Section 10.2. 5. For the data in Exercise 5 of Section 10.2 test, at the $10\%$ level of significance, whether $x$ is useful for predicting $y$ (that is, whether $\beta _1\neq 0$). 6. For the data in Exercise 6 of Section 10.2 test, at the $5\%$ level of significance, whether $x$ is useful for predicting $y$ (that is, whether $\beta _1\neq 0$). 7. Construct the $90\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 7 of Section 10.2. 8. Construct the $95\%$ confidence interval for the slope $\beta _1$ of the population regression line based on the sample data set of Exercise 8 of Section 10.2. 9. For the data in Exercise 9 of Section 10.2 test, at the $1\%$ level of significance, whether $x$ is useful for predicting $y$ (that is, whether $\beta _1\neq 0$). 10. For the data in Exercise 10 of Section 10.2 test, at the $1\%$ level of significance, whether $x$ is useful for predicting $y$ (that is, whether $\beta _1\neq 0$). Applications 1. For the data in Exercise 11 of Section 10.2 construct a $90\%$ confidence interval for the mean number of new words acquired per month by children between $13$ and $18$ months of age. 2. For the data in Exercise 12 of Section 10.2 construct a $90\%$ confidence interval for the mean increased braking distance for each additional $100$ pounds of vehicle weight. 3. For the data in Exercise 13 of Section 10.2 test, at the $10\%$ level of significance, whether age is useful for predicting resting heart rate. 4. For the data in Exercise 14 of Section 10.2 test, at the $10\%$ level of significance, whether wind speed is useful for predicting wave height. 5. For the situation described in Exercise 15 of Section 10.2 1. Construct the $95\%$ confidence interval for the mean increase in revenue per additional thousand dollars spent on advertising. 2. An advertising agency tells the business owner that for every additional thousand dollars spent on advertising, revenue will increase by over $\25,000$. Test this claim (which is the alternative hypothesis) at the $5\%$ level of significance. 3. Perform the test of part (b) at the $10\%$ level of significance. 4. Based on the results in (b) and (c), how believable is the ad agency’s claim? (This is a subjective judgement.) 6. For the situation described in Exercise 16 of Section 10.2 1. Construct the $90\%$ confidence interval for the mean increase in height per additional inch of length at age two. 2. It is claimed that for girls each additional inch of length at age two means more than an additional inch of height at maturity. Test this claim (which is the alternative hypothesis) at the $10\%$ level of significance. 7. For the data in Exercise 17 of Section 10.2 test, at the $10\%$ level of significance, whether course average before the final exam is useful for predicting the final exam grade. 8. For the situation described in Exercise 18 of Section 10.2, an agronomist claims that each additional million acres planted results in more than $750,000$ additional acres harvested. Test this claim at the $1\%$ level of significance. 9. 
For the data in Exercise 19 of Section 10.2 test, at the $1/10$th of $1\%$ level of significance, whether, ignoring all other facts such as age and body mass, the amount of the medication consumed is a useful predictor of blood concentration of the active ingredient. 10. For the data in Exercise 20 of Section 10.2 test, at the $1\%$ level of significance, whether for each additional inch of girth the age of the tree increases by at least two and one-half years. 11. For the data in Exercise 21 of Section 10.2 1. Construct the $95\%$ confidence interval for the mean increase in strength at $28$ days for each additional hundred psi increase in strength at $3$ days. 2. Test, at the $1/10$th of $1\%$ level of significance, whether the $3$-day strength is useful for predicting $28$-day strength. 12. For the situation described in Exercise 22 of Section 10.2 1. Construct the $99\%$ confidence interval for the mean decrease in energy demand for each one-degree drop in temperature. 2. An engineer with the power company believes that for each one-degree increase in temperature, daily energy demand will decrease by more than $3.6$ million watt-hours. Test this claim at the $1\%$ level of significance. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. 1. Compute the $90\%$ confidence interval for the slope $\beta _1$ of the population regression line with SAT score as the independent variable ($x$) and GPA as the dependent variable ($y$). 2. Test, at the $10\%$ level of significance, the hypothesis that the slope of the population regression line is greater than $0.001$, against the null hypothesis that it is exactly $0.001$. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). 1. Compute the $95\%$ confidence interval for the slope $\beta _1$ of the population regression line with scores using the original clubs as the independent variable ($x$) and scores using the new clubs as the dependent variable ($y$). 2. Test, at the $10\%$ level of significance, the hypothesis that the slope of the population regression line is different from $1$, against the null hypothesis that it is exactly $1$. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. 1. Compute the $95\%$ confidence interval for the slope $\beta _1$ of the population regression line with the number of bidders present at the auction as the independent variable($x$) and sales price as the dependent variable ($y$). 2. Test, at the $10\%$ level of significance, the hypothesis that the average sales price increases by more than $\90$ for each additional bidder at an auction, against the default that it increases by exactly $\90$. Answers 1. $0.743\pm 0.578$ 2. $-0.610\pm 0.633$ 3. $T=1.732,\; \pm t_{0.05}=\pm 2.353$, do not reject $H_0$ 4. $0.6\pm 0.451$ 5. $T=-4.481,\; \pm t_{0.005}=\pm 3.355$, reject $H_0$ 6. $4.8\pm 1.7$ words 7. $T=2.843,\; \pm t_{0.05}=\pm 1.860$, reject $H_0$ 1. $42.024\pm 28.011$ thousand dollars 2. $T=1.487,\; \pm t_{0.05}=\pm 1.943$, do not reject $H_0$ 3. $t_{0.10}=1.440$, reject $H_0$ 8. $T=4.096,\; \pm t_{0.05}=\pm 1.771$, reject $H_0$ 9. $T=25.524,\; \pm t_{0.0005}=\pm 3.505$, reject $H_0$ 1. $2.550\pm 0.127$ hundred psi 2. 
$T=41.072,\; \pm t_{0.005}=\pm 3.674$, reject $H_0$ 1. $(0.0014,0.0018)$ 2. $H_0:\beta _1=0.001\; vs\; H_a:\beta _1>0.001$. Test Statistic: $Z=6.1625$. Rejection Region: $[1.28,+\infty )$. Decision: Reject $H_0$ 1. $(101.789,131.4435)$ 2. $H_0:\beta _1=90\; vs\; H_a:\beta _1>90$. Test Statistic: $T=3.5938,\; d.f.=58$. Rejection Region: $[1.296,+\infty )$. Decision: Reject $H_0$ 10.6 The Coefficient of Determination Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in Section 10.2, Section 10.4, and Section 10.5. 1. For the sample data set of Exercise 1 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 2. For the sample data set of Exercise 2 of Section 10.2" find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 3. For the sample data set of Exercise 3 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 4. For the sample data set of Exercise 4 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 5. For the sample data set of Exercise 5 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 6. For the sample data set of Exercise 6 of Section 10.2 find the coefficient of determination using the formula $r^2=\widehat{\beta _1}SS_{xy}/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 7. For the sample data set of Exercise 7 of Section 10.2 find the coefficient of determination using the formula $r^2=(SS_{yy}-SSE)/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 8. For the sample data set of Exercise 7 of Section 10.2 find the coefficient of determination using the formula $r^2=(SS_{yy}-SSE)/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 9. For the sample data set of Exercise 7 of Section 10.2 find the coefficient of determination using the formula $r^2=(SS_{yy}-SSE)/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. 10. For the sample data set of Exercise 7 of Section 10.2 find the coefficient of determination using the formula $r^2=(SS_{yy}-SSE)/SS_{yy}$. Confirm your answer by squaring $r$ as computed in that exercise. Applications 1. For the data in Exercise 11 of Section 10.2 compute the coefficient of determination and interpret its value in the context of age and vocabulary. 2. For the data in Exercise 12 of Section 10.2" compute the coefficient of determination and interpret its value in the context of vehicle weight and braking distance. 3. For the data in Exercise 13 of Section 10.2 compute the coefficient of determination and interpret its value in the context of age and resting heart rate. In the age range of the data, does age seem to be a very important factor with regard to heart rate? 4. For the data in Exercise 14 of Section 10.2 compute the coefficient of determination and interpret its value in the context of wind speed and wave height. 
Does wind speed seem to be a very important factor with regard to wave height? 5. For the data in Exercise 15 of Section 10.2 find the proportion of the variability in revenue that is explained by level of advertising. 6. For the data in Exercise 16 of Section 10.2 find the proportion of the variability in adult height that is explained by the variation in length at age two. 7. For the data in Exercise 17 of Section 10.2 compute the coefficient of determination and interpret its value in the context of course average before the final exam and score on the final exam. 8. For the data in Exercise 18 of Section 10.2 compute the coefficient of determination and interpret its value in the context of acres planted and acres harvested. 9. For the data in Exercise 19 of Section 10.2 compute the coefficient of determination and interpret its value in the context of the amount of the medication consumed and blood concentration of the active ingredient. 10. For the data in Exercise 20 of Section 10.2 compute the coefficient of determination and interpret its value in the context of tree size and age. 11. For the data in Exercise 21 of Section 10.2 find the proportion of the variability in $28$-day strength of concrete that is accounted for by variation in $3$-day strength. 12. For the data in Exercise 22 of Section 10.2 find the proportion of the variability in energy demand that is accounted for by variation in average temperature. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. Compute the coefficient of determination and interpret its value in the context of SAT scores and GPAs. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). Compute the coefficient of determination and interpret its value in the context of golf scores with the two kinds of golf clubs. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. Compute the coefficient of determination and interpret its value in the context of the number of bidders at an auction and the price of this type of antique grandfather clock. Answers 1. $0.848$ 2. $0.631$ 3. $0.5$ 4. $0.766$ 5. $0.715$ 6. $0.898$; about $90\%$ of the variability in vocabulary is explained by age 7. $0.503$; about $50\%$ of the variability in heart rate is explained by age. Age is a significant but not dominant factor in explaining heart rate. 8. The proportion is $r^2=0.692$ 9. $0.563$; about $56\%$ of the variability in final exam scores is explained by course average before the final exam 10. $0.931$; about $93\%$ of the variability in the blood concentration of the active ingredient is explained by the amount of the medication consumed 11. The proportion is $r^2=0.984$ 12. $r^2=21.17\%$ 13. $r^2=81.04\%$ 10.7 Estimation and Prediction Basic For the Basic and Application exercises in this section use the computations that were done for the exercises with the same number in previous sections. 1. For the sample data set of Exercise 1 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 4$. 2. Construct the $90\%$ confidence interval for that mean value. 2. For the sample data set of Exercise 2 of Section 10.2 1. 
Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 4$. 2. Construct the $90\%$ confidence interval for that mean value. 3. For the sample data set of Exercise 3 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 7$. 2. Construct the $95\%$ confidence interval for that mean value. 4. For the sample data set of Exercise 4 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 2$. 2. Construct the $80\%$ confidence interval for that mean value. 5. For the sample data set of Exercise 5 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 1$. 2. Construct the $80\%$ confidence interval for that mean value. 6. For the sample data set of Exercise 6 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 5$. 2. Construct the $95\%$ confidence interval for that mean value. 7. For the sample data set of Exercise 7 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 6$. 2. Construct the $99\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for $x = 12$? Explain. 8. For the sample data set of Exercise 8 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 12$. 2. Construct the $80\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for $x = 0$? Explain. 9. For the sample data set of Exercise 9 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 0$. 2. Construct the $90\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for $x = -1$? Explain. 10. For the sample data set of Exercise 9 of Section 10.2 1. Give a point estimate for the mean value of $y$ in the sub-population determined by the condition $x = 8$. 2. Construct the $95\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for $x = 0$? Explain. Applications 1. For the data in Exercise 11 of Section 10.2 1. Give a point estimate for the average number of words in the vocabulary of $18$-month-old children. 2. Construct the $95\%$ confidence interval for that mean value. 3. Construct the $95\%$ confidence interval for that mean value. 2. For the data in Exercise 12 of Section 10.2 1. Give a point estimate for the average braking distance of automobiles that weigh $3,250$ pounds. 2. Construct the $80\%$ confidence interval for that mean value. 3. Is it valid to make the same estimates for $5,000$-pound automobiles? Explain. 3. For the data in Exercise 13 of Section 10.2 1. Give a point estimate for the resting heart rate of a man who is $35$ years old. 2. One of the men in the sample is $35$ years old, but his resting heart rate is not what you computed in part (a). Explain why this is not a contradiction. 3. Construct the $90\%$ confidence interval for the mean resting heart rate of all $35$-year-old men. 4. For the data in Exercise 14 of Section 10.2 1. Give a point estimate for the wave height when the wind speed is $13$ miles per hour. 2. One of the wind speeds in the sample is $13$ miles per hour, but the height of waves that day is not what you computed in part (a). 
Explain why this is not a contradiction. 3. Construct the $95\%$ confidence interval for the mean wave height on days when the wind speed is $13$ miles per hour. 5. For the data in Exercise 15 of Section 10.2 1. The business owner intends to spend $\2,500$ on advertising next year. Give an estimate of next year’s revenue based on this fact. 2. Construct the $90\%$ prediction interval for next year’s revenue, based on the intent to spend $\2,500$ on advertising. 6. For the data in Exercise 16 of Section 10.2 1. A two-year-old girl is $32.3$ inches long. Predict her adult height. 2. Construct the $95\%$ prediction interval for the girl’s adult height. 7. For the data in Exercise 17 of Section 10.2 1. Lodovico has a $78.6$ average in his physics class just before the final. Give a point estimate of what his final exam grade will be. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for Lodovico’s final exam grade at the $90\%$ level of confidence. 8. For the data in Exercise 18 of Section 10.2 1. This year $86.2$ million acres of corn were planted. Give a point estimate of the number of acres that will be harvested this year. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for the number of acres that will be harvested this year, at the $99\%$ level of confidence. 9. For the data in Exercise 19 of Section 10.2 1. Give a point estimate for the blood concentration of the active ingredient of this medication in a man who has consumed $1.5$ ounces of the medication just recently. 2. Gratiano just consumed $1.5$ ounces of this medication $30$ minutes ago. Construct a $95\%$ prediction interval for the concentration of the active ingredient in his blood right now. 10. For the data in Exercise 20 of Section 10.2 1. You measure the girth of a free-standing oak tree five feet off the ground and obtain the value $127$ inches. How old do you estimate the tree to be? 2. Construct a $90\%$ prediction interval for the age of this tree. 11. For the data in Exercise 21 of Section 10.2 1. A test cylinder of concrete three days old fails at $1,750$ psi. Predict what the $28$-day strength of the concrete will be. 2. Construct a $99\%$ prediction interval for the $28$-day strength of this concrete. 3. Based on your answer to (b), what would be the minimum $28$-day strength you could expect this concrete to exhibit? 12. For the data in Exercise 22 of Section 10.2 1. Tomorrow’s average temperature is forecast to be $53$ degrees. Estimate the energy demand tomorrow. 2. Construct a $99\%$ prediction interval for the energy demand tomorrow. 3. Based on your answer to (b), what would be the minimum demand you could expect? Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Set 1}$ lists the SAT scores and GPAs of $1,000$ students. 1. Give a point estimate of the mean GPA of all students who score $1350$ on the SAT. 2. Construct a $90\%$ confidence interval for the mean GPA of all students who score $1350$ on the SAT. 2. Large $\text{Data Set 12}$ lists the golf scores on one round of golf for $75$ golfers first using their own original clubs, then using clubs of a new, experimental design (after two months of familiarization with the new clubs). 1. Thurio averages $72$ strokes per round with his own clubs. 
Give a point estimate for his score on one round if he switches to the new clubs. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for Thurio’s score on one round if he switches to the new clubs, at $90\%$ confidence. 3. Large $\text{Data Set 13}$ records the number of bidders and sales price of a particular type of antique grandfather clock at $60$ auctions. 1. There are seven likely bidders at the Verona auction today. Give a point estimate for the price of such a clock at today’s auction. 2. Explain whether an interval estimate for this problem is a confidence interval or a prediction interval. 3. Based on your answer to (b), construct an interval estimate for the likely sale price of such a clock at today’s sale, at $95\%$ confidence. Answers 1. $5.647$ 2. $5.647\pm 1.253$ 1. $-0.188$ 2. $-0.188\pm 3.041$ 1. $1.875$ 2. $1.875\pm 1.423$ 1. $5.4$ 2. $5.4\pm 3.355$ 3. invalid (extrapolation) 1. $2.4$ 2. $2.4\pm 1.474$ 3. valid ($-1$ is in the range of the $x$-values in the data set) 1. $31.3$ words 2. $31.3\pm 7.1$ words 3. not valid, since two years is $24$ months, hence this is extrapolation 1. $73.2$ beats/min 2. The man’s heart rate is not the predicted average for all men his age. 3. $73.2\pm 1.2$ beats/min 1. $\$224,562$ 2. $\$224,562\pm \$28,699$ 1. $74$ 2. Prediction (one person, not an average for all who have average $78.6$ before the final exam) 3. $74\pm 24$ 1. $0.066\%$ 2. $0.066\pm 0.034\%$ 1. $4,656$ psi 2. $4,656\pm 321$ psi 3. $4,656-321=4,335$ psi 1. $2.19$ 2. $(2.1421,2.2316)$ 1. $7771.39$ 2. A prediction interval. 3. $(7410.41,8132.38)$ 10.8 A Complete Example Basic The exercises in this section are unrelated to those in previous sections. 1. The data give the amount $x$ of silicofluoride in the water (mg/L) and the amount $y$ of lead in the bloodstream (μg/dL) of ten children in various communities with and without municipal water. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find $SSE,\; s_\varepsilon$ and $r$, and so on). In the hypothesis test use as the alternative hypothesis $\beta _1>0$, and test at the $5\%$ level of significance. Use confidence level $95\%$ for the confidence interval for $\beta _1$. Construct $95\%$ confidence and prediction intervals at $x_p=2$ at the end. $\begin{array}{c|c c c c c} x &0.0 &0.0 &1.1 &1.4 &1.6 \\ \hline y &0.3 &0.1 &4.7 &3.2 &5.1\\ \end{array}\qquad \begin{array}{c|c c c c c} x &1.7 &2.0 &2.0 &2.2 &2.2 \\ \hline y &7.0 &5.0 &6.1 &8.6 &9.5\\ \end{array}$ 2. The table gives the weight $x$ (thousands of pounds) and available heat energy $y$ (million BTU) of a standard cord of various species of wood typically used for heating. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find $SSE,\; s_\varepsilon$ and $r$, and so on). In the hypothesis test use as the alternative hypothesis $\beta _1>0$, and test at the $5\%$ level of significance. Use confidence level $95\%$ for the confidence interval for $\beta _1$. Construct $95\%$ confidence and prediction intervals at $x_p=5$ at the end. 
$\begin{array}{c|c c c c c} x &3.37 &3.50 &4.29 &4.00 &4.64 \\ \hline y &23.6 &17.5 &20.1 &21.6 &28.1\\ \end{array}\qquad \begin{array}{c|c c c c c} x &4.99 &4.94 &5.48 &3.26 &4.16 \\ \hline y &25.3 &27.0 &30.7 &18.9 &20.7\\ \end{array}$ Large Data Set Exercises Large Data Sets not available 1. Large Data Sets 3 and 3A list the shoe sizes and heights of $174$ customers entering a shoe store. The gender of the customer is not indicated in Large Data Set 3. However, men’s and women’s shoes are not measured on the same scale; for example, a size $8$ shoe for men is not the same size as a size $8$ shoe for women. Thus it would not be meaningful to apply regression analysis to Large Data Set 3. Nevertheless, compute the scatter diagrams, with shoe size as the independent variable ($x$) and height as the dependent variable ($y$), for (i) just the data on men, (ii) just the data on women, and (iii) the full mixed data set with both men and women. Does the third, invalid scatter diagram look markedly different from the other two? 2. Separate out from Large Data Set 3A just the data on men and do a complete analysis, with shoe size as the independent variable ($x$) and height as the dependent variable ($y$). Use $\alpha =0.05$ and $x_p=10$ whenever appropriate. 3. Separate out from Large Data Set 3A just the data on women and do a complete analysis, with shoe size as the independent variable ($x$) and height as the dependent variable ($y$). Use $\alpha =0.05$ and $x_p=10$ whenever appropriate. Answers 1. $\sum x=14.2,\; \sum y=49.6,\; \sum xy=91.73,\; \sum x^2=26.3,\; \sum y^2=333.86\\ SS_{xx}=6.136,\; SS_{xy}=21.298,\; SS_{yy}=87.844\\ \bar{x}=1.42,\; \bar{y}=4.96\\ \widehat{\beta _1}=3.47,\; \widehat{\beta _0}=0.03\\ SSE=13.92\\ s_\varepsilon =1.32\\ r = 0.9174,\; r^2 = 0.8416\\ df=8,\; T = 6.518$ The $95\%$ confidence interval for $\beta _1$ is $(2.24,4.70)$. At $x_p=2$ the $95\%$ confidence interval for $E(y)$ is $(5.77,8.17)$. At $x_p=2$ the $95\%$ prediction interval for an individual value of $y$ is $(3.73,10.21)$. 2. The positively correlated trend seems less pronounced than that in each of the previous plots. 3. The regression line: $\hat{y}=3.3426x+138.7692$. Coefficient of Correlation: $r = 0.9431$. Coefficient of Determination: $r^2 = 0.8894$. $SSE=283.2473$. $s_\varepsilon =1.9305$. A $95\%$ confidence interval for $\beta _1$: $(3.0733,3.6120)$. Test Statistic for $H_0: \beta _1=0$: $T=24.7209$. At $x_p=10$, $\hat{y}=172.1956$; a $95\%$ confidence interval for the mean value of $y$ is $(171.5577,172.8335)$; and a $95\%$ prediction interval for an individual value of $y$ is $(168.2974,176.0938)$.
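The answers to Exercise 1 of Section 10.8 can be checked numerically. The following is a minimal Python sketch (an addition, not part of the original exercise set) that reproduces the preliminary computations, the test statistic, the confidence interval for $\beta_1$, and the interval estimates at $x_p=2$, up to rounding; the critical value $t_{0.025}=2.306$ for $8$ degrees of freedom is taken from a t-table rather than computed.

```python
# Check of Exercise 1, Section 10.8 (silicofluoride x vs. blood lead y), standard library only.
from math import sqrt

x = [0.0, 0.0, 1.1, 1.4, 1.6, 1.7, 2.0, 2.0, 2.2, 2.2]
y = [0.3, 0.1, 4.7, 3.2, 5.1, 7.0, 5.0, 6.1, 8.6, 9.5]
n = len(x)

Sx, Sy = sum(x), sum(y)
Sxy = sum(a * b for a, b in zip(x, y))
Sxx = sum(a * a for a in x)
Syy = sum(b * b for b in y)

SS_xx = Sxx - Sx ** 2 / n           # about 6.136
SS_xy = Sxy - Sx * Sy / n           # about 21.298
SS_yy = Syy - Sy ** 2 / n           # about 87.844

b1 = SS_xy / SS_xx                  # slope, about 3.47
b0 = Sy / n - b1 * Sx / n           # intercept, about 0.03
SSE = SS_yy - b1 * SS_xy            # about 13.92
s_eps = sqrt(SSE / (n - 2))         # about 1.32
r = SS_xy / sqrt(SS_xx * SS_yy)     # about 0.9174

t_crit = 2.306                      # t_{0.025} with n - 2 = 8 degrees of freedom (from a t-table)
se_b1 = s_eps / sqrt(SS_xx)
T = b1 / se_b1                      # test statistic for H0: beta1 = 0, about 6.5
ci_b1 = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)   # roughly (2.24, 4.70)

xp = 2.0
y_hat = b0 + b1 * xp
half_ci = t_crit * s_eps * sqrt(1 / n + (xp - Sx / n) ** 2 / SS_xx)      # mean response
half_pi = t_crit * s_eps * sqrt(1 + 1 / n + (xp - Sx / n) ** 2 / SS_xx)  # individual value

print("slope, intercept:", round(b1, 2), round(b0, 2))
print("SSE, s_eps, r:", round(SSE, 2), round(s_eps, 2), round(r, 4))
print("T and 95% CI for beta1:", round(T, 3), tuple(round(v, 2) for v in ci_b1))
print("95% CI for mean y at x=2:", (round(y_hat - half_ci, 2), round(y_hat + half_ci, 2)))
print("95% PI for y at x=2:", (round(y_hat - half_pi, 2), round(y_hat + half_pi, 2)))
```

Small differences in the last decimal place of the intervals are due to rounding of intermediate values in the printed answers.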
These are homework exercises to accompany the Textmap created for "Introductory Statistics" by Shafer and Zhang. 11.1: Chi-Square Tests for Independence Q11.1.1 Find $\chi _{0.01}^{2}$ for each of the following number of degrees of freedom. 1. $df=5$ 2. $df=11$ 3. $df=25$ Q11.1.2 Find $\chi _{0.05}^{2}$ for each of the following number of degrees of freedom. 1. $df=6$ 2. $df=12$ 3. $df=30$ Q11.1.3 Find $\chi _{0.10}^{2}$ for each of the following number of degrees of freedom. 1. $df=6$ 2. $df=12$ 3. $df=30$ Q11.1.4 Find $\chi _{0.01}^{2}$ for each of the following number of degrees of freedom. 1. $df=7$ 2. $df=10$ 3. $df=20$ Q11.1.5 For $df=7$ and $\alpha =0.05$ 1. $\chi _{\alpha }^{2}$ 2. $\chi _{\frac{\alpha }{2}}^{2}$ Q11.1.6 For $df=17$ and $\alpha =0.01$ 1. $\chi _{\alpha }^{2}$ 2. $\chi _{\frac{\alpha }{2}}^{2}$ Q11.1.7 A data sample is sorted into a $2 \times 2$ contingency table based on two factors, each of which has two levels. Factor 1 Level 1 Level 2 Row Total Factor 2 Level 1 $20$ $10$ R Level 2 $15$ 5 R Column Total C C n 1. Find the column totals, the row totals, and the grand total, $n$, of the table. 2. Find the expected number $E$ of observations for each cell based on the assumption that the two factors are independent (that is, just use the formula $E=(R\times C)/n$). 3. Find the value of the chi-square test statistic $\chi ^2$. 4. Find the number of degrees of freedom of the chi-square test statistic. Q11.1.8 A data sample is sorted into a $3 \times 2$ contingency table based on two factors, one of which has three levels and the other of which has two levels. Factor 1 Level 1 Level 2 Row Total Factor 2 Level 1 $20$ $10$ R Level 2 $15$ 5 R Level 3 $10$ $20$ R Column Total C C n 1. Find the column totals, the row totals, and the grand total, $n$, of the table. 2. Find the expected number $E$ of observations for each cell based on the assumption that the two factors are independent (that is, just use the formula $E=(R\times C)/n$). 3. Find the value of the chi-square test statistic $\chi ^2$. 4. Find the number of degrees of freedom of the chi-square test statistic. Q11.1.9 A child psychologist believes that children perform better on tests when they are given perceived freedom of choice. To test this belief, the psychologist carried out an experiment in which $200$ third graders were randomly assigned to two groups, $A$ and $B$. Each child was given the same simple logic test. However in group $B$, each child was given the freedom to choose a text booklet from many with various drawings on the covers. The performance of each child was rated as Very Good, Good, and Fair. The results are summarized in the table provided. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to support the psychologist’s belief. Group A B Performance Very Good 32 29 Good 55 61 Fair 10 13 Q11.1.10 In regard to wine tasting competitions, many experts claim that the first glass of wine served sets a reference taste and that a different reference wine may alter the relative ranking of the other wines in competition. To test this claim, three wines, $A$, $B$ and $C$, were served at a wine tasting event. Each person was served a single glass of each wine, but in different orders for different guests. At the close, each person was asked to name the best of the three. One hundred seventy-two people were at the event and their top picks are given in the table provided. 
Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to support the claim that wine experts’ preference is dependent on the first served wine. Top Pick A B C First Glass A 12 31 27 B 15 40 21 C 10 9 7 1. Is being left-handed hereditary? To answer this question, $250$ adults are randomly selected and their handedness and their parents’ handedness are noted. The results are summarized in the table provided. Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to conclude that there is a hereditary element in handedness. Number of Parents Left-Handed 0 1 2 Handedness Left 8 10 12 Right 178 21 21 2. Some geneticists claim that the genes that determine left-handedness also govern development of the language centers of the brain. If this claim is true, then it would be reasonable to expect that left-handed people tend to have stronger language abilities. A study designed to text this claim randomly selected $807$ students who took the Graduate Record Examination (GRE). Their scores on the language portion of the examination were classified into three categories: low, average, and high, and their handedness was also noted. The results are given in the table provided. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that left-handed people tend to have stronger language abilities. GRE English Scores Low Average High Handedness Left 18 40 22 Right 201 360 166 3. It is generally believed that children brought up in stable families tend to do well in school. To verify such a belief, a social scientist examined $290$ randomly selected students’ records in a public high school and noted each student’s family structure and academic status four years after entering high school. The data were then sorted into a $2 \times 3$ contingency table with two factors. $\text{Factor 1}$ has two levels: graduated and did not graduate. $\text{Factor 2}$ has three levels: no parent, one parent, and two parents. The results are given in the table provided. Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to conclude that family structure matters in school performance of the students. Academic Status Graduated Did Not Graduate Family No parent 18 31 One parent 101 44 Two parents 70 26 4. A large middle school administrator wishes to use celebrity influence to encourage students to make healthier choices in the school cafeteria. The cafeteria is situated at the center of an open space. Everyday at lunch time students get their lunch and a drink in three separate lines leading to three separate serving stations. As an experiment, the school administrator displayed a poster of a popular teen pop star drinking milk at each of the three areas where drinks are provided, except the milk in the poster is different at each location: one shows white milk, one shows strawberry-flavored pink milk, and one shows chocolate milk. After the first day of the experiment the administrator noted the students’ milk choices separately for the three lines. The data are given in the table provided. Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to conclude that the posters had some impact on the students’ drink choices. Student Choice Regular Strawberry Chocolate Poster Choice Regular 38 28 40 Strawberry 18 51 24 Chocolate 32 32 53 Large Data Set Exercise Large Data Sets not available 1. 
Large $\text{Data Set 8}$ records the result of a survey of $300$ randomly selected adults who go to movie theaters regularly. For each person the gender and preferred type of movie were recorded. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that the factors “gender” and “preferred type of movie” are dependent. Answers 1. $15.09$ 2. $24.72$ 3. $44.31$ 1. $10.64$ 2. $18.55$ 3. $40.26$ 1. $14.07$ 2. $16.01$ 1. $C_1=35,\; C_2=15,\; R_1=30,\; R_2=20,\; n=50$ 2. $E_{11}=21,\; E_{12}=9,\; E_{21}=14,\; E_{22}=6$ 3. $\chi ^2=0.3968$ 4. $df=1$ 1. $\chi ^2=0.6698,\; \chi _{0.05}^{2}=5.99$, do not reject $H_0$ 2. $\chi ^2=72.35,\; \chi _{0.01}^{2}=9.21$, reject $H_0$ 3. $\chi ^2=21.2784,\; \chi _{0.01}^{2}=9.21$, reject $H_0$ 4. $\chi ^2=28.4539$, $df=3$, Rejection Region: $[7.815,\infty )$, Decision: reject $H_0$ of independence 11.2: Chi-Square One-Sample Goodness-of-Fit Tests Basic 1. A data sample is sorted into five categories with an assumed probability distribution. Factor Levels Assumed Distribution Observed Frequency 1 $p_1=0.1$ 10 2 $p_2=0.4$ 35 3 $p_3=0.4$ 45 4 $p_4=0.1$ 10 1. Find the size $n$ of the sample. 2. Find the expected number $E$ of observations for each level, if the sampled population has a probability distribution as assumed (that is, just use the formula $E_i=n\times p_i$). 3. Find the chi-square test statistic $\chi ^2$. 4. Find the number of degrees of freedom of the chi-square test statistic. 2. A data sample is sorted into five categories with an assumed probability distribution. Factor Levels Assumed Distribution Observed Frequency 1 $p_1=0.3$ 23 2 $p_2=0.3$ 30 3 $p_3=0.2$ 19 4 $p_4=0.1$ 8 5 $p_5=0.1$ 10 1. Find the size $n$ of the sample. 2. Find the expected number $E$ of observations for each level, if the sampled population has a probability distribution as assumed (that is, just use the formula $E_i=n\times p_i$). 3. Find the chi-square test statistic $\chi ^2$. 4. Find the number of degrees of freedom of the chi-square test statistic. Applications 1. Retailers of collectible postage stamps often buy their stamps in large quantities by weight at auctions. The prices the retailers are willing to pay depend on how old the postage stamps are. Many collectible postage stamps at auctions are described by the proportions of stamps issued at various periods in the past. Generally the older the stamps the higher the value. At one particular auction, a lot of collectible stamps is advertised to have the age distribution given in the table provided. A retail buyer took a sample of $73$ stamps from the lot and sorted them by age. The results are given in the table provided. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that the age distribution of the lot is different from what was claimed by the seller. Year Claimed Distribution Observed Frequency Before 1940 0.10 6 1940 to 1959 0.25 15 1960 to 1979 0.45 30 After 1979 0.20 22 2. The litter size of Bengal tigers is typically two or three cubs, but it can vary between one and four. Based on long-term observations, the litter size of Bengal tigers in the wild has the distribution given in the table provided. A zoologist believes that Bengal tigers in captivity tend to have different (possibly smaller) litter sizes from those in the wild. To verify this belief, the zoologist searched all data sources and found $316$ litter size records of Bengal tigers in captivity. The results are given in the table provided. 
Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that the distribution of litter sizes in captivity differs from that in the wild. Litter Size Wild Litter Distribution Observed Frequency 1 0.11 41 2 0.69 243 3 0.18 27 4 0.02 5 3. An online shoe retailer sells men’s shoes in sizes $8$ to $13$. In the past orders for the different shoe sizes have followed the distribution given in the table provided. The management believes that recent marketing efforts may have expanded their customer base and, as a result, there may be a shift in the size distribution for future orders. To have a better understanding of its future sales, the shoe seller examined $1,040$ sales records of recent orders and noted the sizes of the shoes ordered. The results are given in the table provided. Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to conclude that the shoe size distribution of future sales will differ from the historic one. Shoe Size Past Size Distribution Recent Size Frequency 8.0 0.03 25 8.5 0.06 43 9.0 0.09 88 9.5 0.19 221 10.0 0.23 272 10.5 0.14 150 11.0 0.10 107 11.5 0.06 51 12.0 0.05 37 12.5 0.03 35 13.0 0.02 11 4. An online shoe retailer sells women’s shoes in sizes $5$ to $10$. In the past orders for the different shoe sizes have followed the distribution given in the table provided. The management believes that recent marketing efforts may have expanded their customer base and, as a result, there may be a shift in the size distribution for future orders. To have a better understanding of its future sales, the shoe seller examined $1,174$ sales records of recent orders and noted the sizes of the shoes ordered. The results are given in the table provided. Test, at the $1\%$ level of significance, whether there is sufficient evidence in the data to conclude that the shoe size distribution of future sales will differ from the historic one. Shoe Size Past Size Distribution Recent Size Frequency 5.0 0.02 20 5.5 0.03 23 6.0 0.07 88 6.5 0.08 90 7.0 0.20 222 7.5 0.20 258 8.0 0.15 177 8.5 0.11 121 9.0 0.08 91 9.5 0.04 53 10.0 0.02 31 5. A chess opening is a sequence of moves at the beginning of a chess game. There are many well-studied named openings in chess literature. French Defense is one of the most popular openings for black, although it is considered a relatively weak opening since it gives black probability $0.344$ of winning, probability $0.405$ of losing, and probability $0.251$ of drawing. A chess master believes that he has discovered a new variation of French Defense that may alter the probability distribution of the outcome of the game. In his many Internet chess games in the last two years, he was able to apply the new variation in $77$ games. The wins, losses, and draws in the $77$ games are given in the table provided. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that the newly discovered variation of French Defense alters the probability distribution of the result of the game. Result for Black Probability Distribution New Variation Wins Win 0.344 31 Loss 0.405 25 Draw 0.251 21 6. The Department of Parks and Wildlife stocks a large lake with fish every six years. It is determined that a healthy diversity of fish in the lake should consist of $10\%$ largemouth bass, $15\%$ smallmouth bass, $10\%$ striped bass, $10\%$ trout, and $20\%$ catfish. 
Therefore each time the lake is stocked, the fish population in the lake is restored to maintain that particular distribution. Every three years, the department conducts a study to see whether the distribution of the fish in the lake has shifted away from the target proportions. In one particular year, a research group from the department observed a sample of $292$ fish from the lake with the results given in the table provided. Test, at the $5\%$ level of significance, whether there is sufficient evidence in the data to conclude that the fish population distribution has shifted since the last stocking. Fish Target Distribution Fish in Sample Largemouth Bass 0.10 14 Smallmouth Bass 0.15 49 Striped Bass 0.10 21 Trout 0.10 22 Catfish 0.20 75 Other 0.35 111 Large Data Set Exercise Large Data Sets not available 1. Large $\text{Data Set 4}$ records the result of $500$ tosses of six-sided die. Test, at the $10\%$ level of significance, whether there is sufficient evidence in the data to conclude that the die is not “fair” (or “balanced”), that is, that the probability distribution differs from probability $1/6$ for each of the six faces on the die. S11.2.1 1. $n=100$ 2. $E=10,E=40,E=40,E=10$ 3. $\chi^2=1.25$ 4. $df=3$ S11.2.3 $\chi ^2=4.8082,\; \chi _{0.05}^{2}=7.81,\; \text{do not reject } H_0$ S11.2.5 $\chi ^2=26.5765,\; \chi _{0.01}^{2}=23.21,\; \text{reject } H_0$ S11.2.7 $\chi ^2=2.1401,\; \chi _{0.05}^{2}=5.99,\; \text{do not reject } H_0$ S11.2.9 $\chi ^2=2.944,\; df=5,\; \text{Rejection Region: }[9.236,\infty ),\; \text{Decision: Fail to reject }H_0\; \text{of balance}$ 11.3 F-tests for Equality of Two Variances Basic 1. Find $F_{0.01}$ for each of the following degrees of freedom. 1. $df_1=5$ and $df_2=5$ 2. $df_1=5$ and $df_2=12$ 3. $df_1=12$ and $df_2=20$ 2. Find $F_{0.05}$ for each of the following degrees of freedom. 1. $df_1=6$ and $df_2=6$ 2. $df_1=6$ and $df_2=12$ 3. $df_1=12$ and $df_2=30$ 3. Find $F_{0.95}$ for each of the following degrees of freedom. 1. $df_1=6$ and $df_2=6$ 2. $df_1=6$ and $df_2=12$ 3. $df_1=12$ and $df_2=30$ 4. Find $F_{0.90}$ for each of the following degrees of freedom. 1. $df_1=5$ and $df_2=5$ 2. $df_1=5$ and $df_2=12$ 3. $df_1=12$ and $df_2=20$ 5. For $df_1=7$, $df_2=10$ and $\alpha =0.05$, find 1. $F_{\alpha }$ 2. $F_{1-\alpha }$ 3. $F_{\alpha /2}$ 4. $F_{1-\alpha /2}$ 6. For $df_1=15$, $df_2=8$ and $\alpha =0.01$, find 1. $F_{\alpha }$ 2. $F_{1-\alpha }$ 3. $F_{\alpha /2}$ 4. $F_{1-\alpha /2}$ 7. For each of the two samples $\text{Sample 1}:\{8,2,11,0,-2\}\ \text{Sample 2}:\{-2,0,0,0,2,4,-1\}$ find 1. the sample size 2. the sample mean 3. the sample variance 8. For each of the two samples $\text{Sample 1}:\{0.8,1.2,1.1,0.8,-2.0\}\ \text{Sample 2}:\{-2.0,0.0,0.7,0.8,2.2,4.1,-1.9\}$ find 1. the sample size 2. the sample mean 3. the sample variance 9. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=16$ $s_{1}^{2}=53$ 2 $n_2=21$ $s_{2}^{2}=32$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$ 2. Find the degrees of freedom $df_1$ and $df_2$. 3. Find $F_{0.05}$ using $df_1$ and $df_2$ computed above. 4. Perform the test the hypotheses $H_0:\alpha _{1}^{2}=\alpha _{2}^{2}\; vs\; H_a:\alpha _{1}^{2}>\alpha _{2}^{2}$ at the $5\%$ level of significance. 10. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=11$ $s_{1}^{2}=61$ 2 $n_2=8$ $s_{2}^{2}=44$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$. 2. 
Find the degrees of freedom $df_1$ and $df_2$. 3. Find $F_{0.05}$ using $df_1$ and $df_2$computed above. 4. Perform the test the hypotheses $H_0:\alpha _{1}^{2}=\alpha _{2}^{2}\; vs\; H_a:\alpha _{1}^{2}>\alpha _{2}^{2}$ at the $5\%$ level of significance. 11. Two random samples taken from two normal populations yielded the following information: 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$. 2. Find the degrees of freedom $df_1$ and $df_2$. 3. For $\alpha =0.05$ find $F_{1-\alpha }$ using $df_1$ and $df_2$ computed above. 4. Perform the test the hypotheses $H_0:\alpha _{1}^{2}=\alpha _{2}^{2}\; vs\; H_a:\alpha _{1}^{2}<\alpha _{2}^{2}$ at the $5\%$ level of significance. 12. Sample Sample Size Sample Variance 1 $n_1=10$ $s_{1}^{2}=12$ 2 $n_2=13$ $s_{2}^{2}=23$ 13. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=8$ $s_{1}^{2}=102$ 2 $n_2=8$ $s_{2}^{2}=603$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$ 2. Find the degrees of freedom $df_1$ and $df_2$. 3. For $\alpha =0.05$ find $F_{1-\alpha }$using $df_1$ and $df_2$computed above. 4. Perform the test the hypotheses $H_0:\alpha _{1}^{2}=\alpha _{2}^{2}\; vs\; H_a:\alpha _{1}^{2}<\alpha _{2}^{2}$at the $5\%$ level of significance. 14. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=9$ $s_{1}^{2}=123$ 2 $n_2=31$ $s_{2}^{2}=543$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$ 2. Find the degrees of freedom $df_1$ and $df_2$. 3. For $\alpha =0.05$ find $F_{1-\alpha /2}$ and $F_{\alpha /2}$ using $df_1$ and $df_2$computed above. 4. Perform the test the hypotheses $H_0:\alpha _{1}^{2}=\alpha _{2}^{2}\; vs\; H_a:\alpha _{1}^{2}\neq \alpha _{2}^{2}$at the $5\%$ level of significance. 15. Two random samples taken from two normal populations yielded the following information: Sample Sample Size Sample Variance 1 $n_1=21$ $s_{1}^{2}=199$ 2 $n_2=21$ $s_{2}^{2}=66$ 1. Find the statistic $F=s_{1}^{2}/s_{2}^{2}$ 2. Find the degrees of freedom $df_1$ and $df_2$. 3. For $\alpha =0.05$ find $df_1$ and $df_2$computed above. 4. Perform the test the hypotheses $H_0:\alpha _{1}^{2}=\alpha _{2}^{2}\; vs\; H_a:\alpha _{1}^{2}\neq \alpha _{2}^{2}$at the $5\%$ level of significance. Applications 1. Japanese sturgeon is a subspecies of the sturgeon family indigenous to Japan and the Northwest Pacific. In a particular fish hatchery newly hatched baby Japanese sturgeon are kept in tanks for several weeks before being transferred to larger ponds. Dissolved oxygen in tank water is very tightly monitored by an electronic system and rigorously maintained at a target level of $6.5$ milligrams per liter (mg/l). The fish hatchery looks to upgrade their water monitoring systems for tighter control of dissolved oxygen. A new system is evaluated against the old one currently being used in terms of the variance in measured dissolved oxygen. Thirty-one water samples from a tank operated with the new system were collected and $16$ water samples from a tank operated with the old system were collected, all during the course of a day. The samples yield the following information:$\text{New Sample 1: }n_1=31\; s_{1}^{2}=0.0121\ \text{Old Sample 2: }n_1=16\; s_{2}^{2}=0.0319$Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that the new system will provide a tighter control of dissolved oxygen in the tanks. 2. 
The risk of investing in a stock is measured by the volatility, or the variance, in changes in the price of that stock. Mutual funds are baskets of stocks and offer generally lower risk to investors. Different mutual funds have different focuses and offer different levels of risk. Hippolyta is deciding between two mutual funds, $A$ and $B$, with similar expected returns. To make a final decision, she examined the annual returns of the two funds during the last ten years and obtained the following information: $\text{Mutual Fund A Sample 1: }n_1=10\; s_{1}^{2}=0.012\ \text{Mutual Fund B Sample 2: }n_1=10\; s_{2}^{2}=0.005$Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that the two mutual funds offer different levels of risk. 3. It is commonly acknowledged that grading of the writing part of a college entrance examination is subject to inconsistency. Every year a large number of potential graders are put through a rigorous training program before being given grading assignments. In order to gauge whether such a training program really enhances consistency in grading, a statistician conducted an experiment in which a reference essay was given to $61$ trained graders and $31$ untrained graders. Information on the scores given by these graders is summarized below:$\text{Trained Sample 1: }n_1=61\; s_{1}^{2}=2.15\ \text{Untrained Sample 2: }n_1=31\; s_{2}^{2}=3.91$Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that the training program enhances the consistency in essay grading. 4. A common problem encountered by many classical music radio stations is that their listeners belong to an increasingly narrow band of ages in the population. The new general manager of a classical music radio station believed that a new playlist offered by a professional programming agency would attract listeners from a wider range of ages. The new list was used for a year. Two random samples were taken before and after the new playlist was adopted. Information on the ages of the listeners in the sample are summarized below:$\text{Before Sample 1: }n_1=21\; s_{1}^{2}=56.25\ \text{After Sample 2: }n_1=16\; s_{2}^{2}=76.56$Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that the new playlist has expanded the range of listener ages. 5. A laptop computer maker uses battery packs supplied by two companies, $A$ and $B$. While both brands have the same average battery life between charges (LBC), the computer maker seems to receive more complaints about shorter LBC than expected for battery packs supplied by company $B$. The computer maker suspects that this could be caused by higher variance in LBC for Brand $B$. To check that, ten new battery packs from each brand are selected, installed on the same models of laptops, and the laptops are allowed to run until the battery packs are completely discharged. The following are the observed LBCs in hours. Brand $A$ Brand $B$ 3.2 3.0 3.4 3.5 2.8 2.9 3.0 3.1 3.0 2.3 3.0 2.0 2.8 3.0 2.9 2.9 3.0 3.0 3.0 4.1 Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that the LBCs of Brand $B$ have a larger variance that those of Brand $A$. 6. A manufacturer of a blood-pressure measuring device for home use claims that its device is more consistent than that produced by a leading competitor. 
During a visit to a medical store a potential buyer tried both devices on himself repeatedly during a short period of time. The following are readings of systolic pressure. Manufacturer Competitor 132 129 134 132 129 129 129 138 130 132 1. Test, at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that the manufacturer’s claim is true. 2. Repeat the test at the $10\%$ level of significance. Quote as many computations from part (a) as possible. Large Data Set Exercises Large Data Sets not available 1. Large $\text{Data Sets 1A and 1B }$record SAT scores for $419$ male and $581$ female students. Test, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that the variances of scores of male and female students differ. 2. Large $\text{Data Sets 7, 7A, and 7B }$record the survival times of $140$ laboratory mice with thymic leukemia. Test, at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that the variances of survival times of male mice and female mice differ. Answers 1. $11.0$ 2. $5.06$ 3. $3.23$ 1. $0.23$ 2. $0.25$ 3. $0.40$ 1. $3.14$ 2. $0.27$ 3. $3.95$ 4. $0.21$ 1. $\text{Sample 1}$ 1. $n_1=5$ 2. $\bar{x_1}=3.8$ 3. $s_{1}^{2}=30.2$ 2. $\text{Sample 1}$ 1. $n_2=7$ 2. $\bar{x_1}=0.4286$ 3. $s_{2}^{2}=3.95$ 1. $1.6563$ 2. $df_1=15,\; df_2=20$ 3. $F_{0.05}=2.2$ 4. do not reject $H_0$ 1. $0.5217$ 2. $df_1=9,\; df_2=12$ 3. $F_{0.95}=0.3254$ 4. do not reject $H_0$ 1. $0.1692$ 2. $df_1=8,\; df_2=30$ 3. $F_{0.975}=0.26,\; F_{0.025}=2.65$ 4. reject $H_0$ 1. $F = 0.3793,\; F_{0.90}=0.58$, reject $H_0$ 2. $F = 0.5499,\; F_{0.95}=0.61$, reject $H_0$ 3. $F = 0.0971,\; F_{0.95}=0.31$, reject $H_0$ 4. $F = 0.893131, df_1=418,\; df_2=580$. Rejection Region: $(0,0.7897]\cup [1.2614,\infty )$. Decision: Fail to reject $H_0$ of equal variances. 11.4 F-Tests in One-Way ANOVA Basic 1. The following three random samples are taken from three normal populations with respective means $\mu _1$, $\mu _2$ and $\mu _3$, and the same variance $\sigma ^2$. Sample 1 Sample 2 Sample 3 2 3 0 2 5 1 3 7 2 5 1 3 1. Find the combined sample size $n$. 2. Find the combined sample mean $\bar{x}$. 3. Find the sample mean for each of the three samples. 4. Find the sample variance for each of the three samples. 5. Find $MST$. 6. Find $MSE$. 7. Find $F=MST/MSE$. 2. The following three random samples are taken from three normal populations with respective means $\mu _1$, $\mu _2$ and $\mu _3$, and the same variance $\sigma ^2$. Sample 1 Sample 2 Sample 3 0.0 1.3 0.2 0.1 1.5 0.2 0.2 1.7 0.3 0.1 0.5 0.0 1. Find the combined sample size $n$. 2. Find the combined sample mean $\bar{x}$. 3. Find the sample mean for each of the three samples. 4. Find the sample variance for each of the three samples. 5. Find $MST$. 6. Find $MSE$. 7. Find $F=MST/MSE$. 3. Refer to Exercise 1. 1. Find the number of populations under consideration $K$. 2. Find the degrees of freedom $df_1=K-1$ and $df_2=n-K$ 3. For $\alpha =0.05$, find $F_{\alpha }$ with the degrees of freedom computed above. 4. At $\alpha =0.05$, test hypotheses $H_0: \mu _1=\mu _2=\mu _3\ vs\; H_a: \text{at least one pair of the population means are not equal}$ 4. Refer to Exercise 2. 1. Find the number of populations under consideration $K$. 2. Find the degrees of freedom $df_1=K-1$ and $df_2=n-K$ 3. For $\alpha =0.01$, find $F_{\alpha }$ with the degrees of freedom computed above. 4. 
Applications

1. The Mozart effect refers to a boost of average performance on tests for elementary school students if the students listen to Mozart’s chamber music for a period of time immediately before the test. In order to test whether the Mozart effect actually exists, an elementary school teacher conducted an experiment by dividing her third-grade class of $15$ students into three groups of $5$. The first group was given an end-of-grade test without music; the second group listened to Mozart’s chamber music for $10$ minutes; and the third group listened to Mozart’s chamber music for $20$ minutes before the test. The scores of the $15$ students are given below:

Group 1: 80, 63, 74, 71, 70
Group 2: 79, 73, 74, 77, 81
Group 3: 73, 82, 79, 82, 84

Using the ANOVA $F$-test at $\alpha =0.10$, is there sufficient evidence in the data to suggest that the Mozart effect exists?
2. The Mozart effect refers to a boost of average performance on tests for elementary school students if the students listen to Mozart’s chamber music for a period of time immediately before the test. Many educators believe that such an effect is not necessarily due to Mozart’s music per se but rather to a relaxation period before the test. To support this belief, an elementary school teacher conducted an experiment by dividing her third-grade class of $15$ students into three groups of $5$. Students in the first group were asked to give themselves a self-administered facial massage; students in the second group listened to Mozart’s chamber music for $15$ minutes; students in the third group listened to Schubert’s chamber music for $15$ minutes before the test. The scores of the $15$ students are given below:

Group 1: 79, 81, 80, 89, 86
Group 2: 82, 84, 86, 91, 82
Group 3: 80, 81, 71, 90, 86

Test, using the ANOVA $F$-test at the $10\%$ level of significance, whether the data provide sufficient evidence to conclude that any of the three relaxation methods does better than the others.
3. Precision weighing devices are sensitive to environmental conditions. Temperature and humidity in a laboratory room where such a device is installed are tightly controlled to ensure high precision in weighing. A newly designed weighing device is claimed to be more robust against small variations of temperature and humidity. To verify such a claim, a laboratory tests the new device under four settings of temperature-humidity conditions. First, two levels of high and low temperature and two levels of high and low humidity are identified. Let $T$ stand for temperature and $H$ for humidity. The four experimental settings are defined and noted as $\text{(T, H): (high, high), (high, low), (low, high), and (low, low)}$. A pre-calibrated standard weight of $1$ kg was weighed by the new device four times in each setting. The results in terms of error (in micrograms, mcg) are given below:

(high, high): −1.50, −6.73, 11.69, −5.72
(high, low): 11.47, 9.28, 5.58, 10.80
(low, high): −14.29, −18.11, −11.16, −10.41
(low, low): 5.54, 10.34, 15.23, −5.69

Test, using the ANOVA $F$-test at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that the mean weight readings by the newly designed device vary among the four settings.
4. To investigate the real cost of owning different makes and models of new automobiles, a consumer protection agency followed $16$ owners of new vehicles of four popular makes and models, call them $\text{TC, HA, NA, and FT}$, and kept a record of each owner’s real cost in dollars for the first five years. The five-year costs of the $16$ car owners are given below:

TC: 8423, 7889, 8665, 7129
HA: 7776, 7211, 6870, 9747
NA: 8907, 9077, 8732, 7359
FT: 10333, 9217, 10540, 8677

Test, using the ANOVA $F$-test at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that there are differences among the mean real costs of ownership for these four models.
5. Helping people to lose weight has become a huge industry in the United States, with annual revenue in the hundreds of billions of dollars. Recently each of the three market-leading weight-reducing programs claimed to be the most effective. A consumer research company recruited $33$ people who wished to lose weight and sent them to the three leading programs. After six months their weight losses were recorded. The results are summarized below:

Sample Mean: $\bar{x_1}=10.65$ (Prog. 1), $\bar{x_2}=8.90$ (Prog. 2), $\bar{x_3}=9.33$ (Prog. 3)
Sample Variance: $s_{1}^{2}=27.20$, $s_{2}^{2}=16.86$, $s_{3}^{2}=32.40$
Sample Size: $n_1=11$, $n_2=11$, $n_3=11$

The mean weight loss of the combined sample of all $33$ people was $\bar{x}=9.63$. Test, using the ANOVA $F$-test at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that some program is more effective than the others. (A sketch of the computation from summary statistics follows the Large Data Set Exercise below.)
6. A leading pharmaceutical company in the disposable contact lenses market has always taken for granted that the sales of certain peripheral products such as contact lens solutions would automatically go with the established brands. The long-standing culture in the company has been that lens solutions would not make a significant difference in user experience. Recent market research surveys, however, suggest otherwise. To gain a better understanding of the effects of contact lens solutions on user experience, the company conducted a comparative study in which $63$ contact lens users were randomly divided into three groups, each of which received one of three top-selling lens solutions on the market, including one of the company’s own. After using the assigned solution for two weeks, each participant was asked to rate the solution on a scale of $1$ to $5$ for satisfaction, with $5$ being the highest level of satisfaction. The results of the study are summarized below:

Sample Mean: $\bar{x_1}=3.28$ (Sol. 1), $\bar{x_2}=3.96$ (Sol. 2), $\bar{x_3}=4.10$ (Sol. 3)
Sample Variance: $s_{1}^{2}=0.15$, $s_{2}^{2}=0.32$, $s_{3}^{2}=0.36$
Sample Size: $n_1=18$, $n_2=23$, $n_3=22$

The mean satisfaction level of the combined sample of all $63$ participants was $\bar{x}=3.81$. Test, using the ANOVA $F$-test at the $5\%$ level of significance, whether the data provide sufficient evidence to conclude that not all three average satisfaction levels are the same.

Large Data Set Exercise

Large Data Set not available

1. Large Data Set 9 records the costs of materials (textbook, solution manual, laboratory fees, and so on) in each of ten different courses in each of three different subjects, chemistry, computer science, and mathematics. Test, at the $1\%$ level of significance, whether the data provide sufficient evidence to conclude that the mean costs in the three disciplines are not all the same.
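Exercises 5 and 6 above supply only summary statistics, so the ANOVA quantities must be assembled directly from the group sizes, means, and variances: $MST=\sum_j n_j(\bar{x}_j-\bar{x})^2/(K-1)$ and $MSE=\sum_j (n_j-1)s_j^2/(n-K)$. The sketch below is illustrative only (Python and SciPy are assumed here and are not part of this text); it uses the summary statistics of Exercise 5.

# Illustrative sketch only; Python/SciPy are assumed here, not part of the text.
from scipy.stats import f

sizes     = [11, 11, 11]            # n_1, n_2, n_3 from Exercise 5
means     = [10.65, 8.90, 9.33]     # group sample means
variances = [27.20, 16.86, 32.40]   # group sample variances

n, K = sum(sizes), len(sizes)
grand_mean = sum(ni * xi for ni, xi in zip(sizes, means)) / n

MST = sum(ni * (xi - grand_mean) ** 2 for ni, xi in zip(sizes, means)) / (K - 1)
MSE = sum((ni - 1) * si for ni, si in zip(sizes, variances)) / (n - K)
F_stat = MST / MSE                           # approximately 0.36 for these data
F_crit = f.ppf(1 - 0.05, K - 1, n - K)       # upper 5% critical value with df (2, 30)

print(round(F_stat, 4), round(F_crit, 2))    # reject H_0 only if F_stat > F_crit

Swapping in the sizes, means, and variances of Exercise 6 (so that $K-1=2$ and $n-K=60$) runs the same test for the lens-solution data.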
Answers

1. $n=12$  2. $\bar{x}=2.8333$  3. $\bar{x_1}=3,\; \bar{x_2}=5,\; \bar{x_3}=1$  4. $s_{1}^{2}=1.5,\; s_{2}^{2}=4,\; s_{3}^{2}=0.6667$  5. $MST=13.83$  6. $MSE=1.78$  7. $F = 7.7812$

1. $K=3$  2. $df_1=2,\; df_2=9$  3. $F_{0.05}=4.26$  4. $F = 7.78$, reject $H_0$

1. $F = 3.9647,\; F_{0.10}=2.81$, reject $H_0$
2. $F = 9.6018,\; F_{0.01}=5.95$, reject $H_0$
3. $F = 0.3589,\; F_{0.05}=3.32$, do not reject $H_0$
4. $F = 1.418$, $df_1=2,\; df_2=27$. Rejection Region: $[5.4881,\infty )$. Decision: Fail to reject $H_0$ of equal means.
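The critical values and rejection regions quoted in these answers (for example $F_{0.05}=4.26$ with $df_1=2$, $df_2=9$, or the two-sided region in the large-data-set variance test) come from tables of the $F$ distribution. They can also be reproduced with any statistical package; the sketch below assumes SciPy (not part of this text) and follows the convention used in these answers, where $F_{\alpha}$ denotes the critical value cutting off an area $\alpha$ in the right tail.

# Illustrative sketch only; Python/SciPy are assumed here, not part of the text.
from scipy.stats import f

def F_upper(alpha, df1, df2):
    # Right-tail critical value, written F_alpha in this chapter.
    return f.ppf(1 - alpha, df1, df2)

print(round(F_upper(0.05, 2, 9), 2))     # approximately 4.26, as in the answer to Basic Exercise 3
print(round(F_upper(0.01, 2, 27), 2))    # approximately 5.49, as in the Large Data Set ANOVA answer

# Two-sided rejection region at the 1% level for the SAT-score variance test
# with df_1 = 418 and df_2 = 580: reject H_0 when F < lower or F > upper.
lower = f.ppf(0.005, 418, 580)
upper = f.ppf(0.995, 418, 580)
print(round(lower, 2), round(upper, 2))  # approximately 0.79 and 1.26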