Dataset columns (type, value/length range):
idx                int64     1 - 56k
question           string    15 - 155
answer             string    2 - 29.2k
question_cut       string    15 - 100
answer_cut         string    2 - 200
conversation       string    47 - 29.3k
conversation_cut   string    47 - 301
2,801
Performance metrics to evaluate unsupervised learning
The most-voted answer is very helpful; I just want to add something here. Evaluation metrics for unsupervised learning algorithms by Palacio-Niño & Berzal (2019) gives an overview of some common metrics for evaluating unsupervised learning tasks. Both internal validation methods (which need no ground-truth labels) and external validation methods (which compare against ground-truth labels) are listed in the paper. Hope this helps!
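For readers who want something concrete, here is a minimal sketch (my addition, not part of the paper or the answer) of the internal vs. external distinction using scikit-learn; the blob data, the use of k-means, and k = 3 are arbitrary choices for illustration.

```python
# Minimal sketch: internal vs. external cluster-validation metrics.
# Data, algorithm, and k are illustrative assumptions, not from the paper.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, adjusted_rand_score

X, y_true = make_blobs(n_samples=500, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Internal validation: uses only the data and the cluster assignments.
print("silhouette:", silhouette_score(X, labels))

# External validation: compares against known labels, when they exist.
print("adjusted Rand index:", adjusted_rand_score(y_true, labels))
```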
2,802
Unified view on shrinkage: what is the relation (if any) between Stein's paradox, ridge regression, and random effects in mixed models?
Connection between James–Stein estimator and ridge regression

Let $\mathbf y$ be a vector of observations of $\boldsymbol \theta$ of length $m$, with ${\mathbf y} \sim N({\boldsymbol \theta}, \sigma^2 I)$. The James–Stein estimator is
$$\widehat{\boldsymbol \theta}_{JS} = \left( 1 - \frac{(m-2) \sigma^2}{\|{\mathbf y}\|^2} \right) {\mathbf y}.$$
In terms of ridge regression, we can estimate $\boldsymbol \theta$ via $\min_{\boldsymbol{\theta}} \|\mathbf{y}-\boldsymbol{\theta}\|^2 + \lambda\|\boldsymbol{\theta}\|^2,$ whose solution is
$$\widehat{\boldsymbol \theta}_{\mathrm{ridge}} = \frac{1}{1+\lambda}\mathbf y.$$
The two estimators have the same form, but we need to estimate $\sigma^2$ in the James–Stein estimator, and to choose $\lambda$ in ridge regression, e.g. via cross-validation.

Connection between James–Stein estimator and random effects models

Let us discuss the mixed/random effects models used in genetics first. The model is
$$\mathbf {y}=\mathbf {X}\boldsymbol{\beta} + \mathbf{Z}\boldsymbol{\theta}+\mathbf {e}, \qquad \boldsymbol{\theta}\sim N(\mathbf{0},\sigma^2_{\theta} I), \quad \mathbf{e}\sim N(\mathbf{0},\sigma^2 I).$$
If there are no fixed effects and $\mathbf {Z}=I$, the model becomes
$$\mathbf {y}=\boldsymbol{\theta}+\mathbf {e}, \qquad \boldsymbol{\theta}\sim N(\mathbf{0},\sigma^2_{\theta} I), \quad \mathbf{e}\sim N(\mathbf{0},\sigma^2 I),$$
which is equivalent to the setting of the James–Stein estimator, with some Bayesian flavour.

Connection between random effects models and ridge regression

If we focus on the random effects model above,
$$\mathbf {y}=\mathbf {Z}\boldsymbol{\theta}+\mathbf {e}, \qquad \boldsymbol{\theta}\sim N(\mathbf{0},\sigma^2_{\theta} I), \quad \mathbf{e}\sim N(\mathbf{0},\sigma^2 I),$$
then estimating $\boldsymbol\theta$ is equivalent to solving
$$\min_{\boldsymbol{\theta}} \|\mathbf{y}-\mathbf {Z}\boldsymbol{\theta}\|^2 + \lambda\|\boldsymbol{\theta}\|^2$$
with $\lambda=\sigma^2/\sigma_{\theta}^2$. A proof can be found in Chapter 3 of Pattern Recognition and Machine Learning.

Connection between (multilevel) random effects models and those used in genetics

In the random effects model above, the dimension of $\mathbf y$ is $m\times 1$ and that of $\mathbf Z$ is $m \times p$. If we vectorize $\mathbf Z$ as $(mp)\times 1$ and repeat $\mathbf y$ correspondingly, then we have a hierarchical/clustered structure: $p$ clusters, each with $m$ units. If we regress $\mathrm{vec}(\mathbf Z)$ on the repeated $\mathbf y$, then we can obtain the random effect of $Z$ on $y$ for each cluster, though it is something like a reverse regression.

Acknowledgement: the first three points are largely learned from these two Chinese articles, 1, 2.
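As a quick numerical illustration of the first connection (my addition, assuming a known $\sigma^2$ and an arbitrary $\lambda$ that would normally be chosen by cross-validation): with the identity design, both estimators shrink $\mathbf y$ toward zero by a scalar factor, data-driven for James–Stein and fixed for ridge.

```python
# Minimal numerical sketch (an illustration, not from the cited articles).
import numpy as np

rng = np.random.default_rng(0)
m, sigma2 = 10, 1.0
theta = rng.normal(size=m)                          # unknown mean vector
y = theta + rng.normal(scale=np.sqrt(sigma2), size=m)

# James-Stein: data-driven shrinkage factor 1 - (m-2)*sigma^2 / ||y||^2
theta_js = (1 - (m - 2) * sigma2 / np.sum(y**2)) * y

# Ridge with identity design: fixed shrinkage factor 1 / (1 + lambda)
lam = 0.5                                           # arbitrary; cross-validate in practice
theta_ridge = y / (1 + lam)

print(theta_js[:3], theta_ridge[:3])
```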
2,803
Unified view on shrinkage: what is the relation (if any) between Stein's paradox, ridge regression, and random effects in mixed models?
I'm going to leave it as an exercise for the community to flesh this answer out, but in general the reason why shrinkage estimators will *dominate*$^1$ unbiased estimators in finite samples is because Bayes$^2$ estimators cannot be dominated$^3$, and many shrinkage estimators can be derived as being Bayes.$^4$ All of this falls under the aegis of decision theory. An exhaustive, but rather unfriendly, reference is "Theory of Point Estimation" by Lehmann and Casella. Maybe others can chime in with friendlier references?

$^1$ An estimator $\delta_1(X)$ of a parameter $\theta \in \Omega$ on data $X$ is dominated by another estimator $\delta_2(X)$ if for every $\theta \in \Omega$ the risk (e.g., mean squared error) of $\delta_1$ is equal to or larger than that of $\delta_2$, and $\delta_2$ beats $\delta_1$ for at least one $\theta$. In other words, $\delta_2$ gives equal or better performance everywhere in the parameter space.

$^2$ An estimator is Bayes (under squared-error loss, anyway) if it is the posterior expectation of $\theta$ given the data, under some prior $\pi$, e.g., $\delta(X) = E(\theta | X)$, where the expectation is taken with respect to the posterior. Naturally, different priors lead to different risks over different subsets of $\Omega$. An important toy example is the prior $$\pi_{\theta_0} = \begin{cases} 1 & \mbox{if } \theta = \theta_0 \\ 0 & \mbox{if } \theta \neq \theta_0 \end{cases} $$ that puts all prior mass at the point $\theta_0$. Then you can show that the Bayes estimator is the constant function $\delta(X) = \theta_0$, which of course has extremely good performance at and near $\theta_0$, and very bad performance elsewhere. But nonetheless, it cannot be dominated, because only that estimator achieves zero risk at $\theta_0$.

$^3$ A natural question is whether any estimator that cannot be dominated (called admissible, though wouldn't indomitable be snazzier?) must be Bayes. The answer is almost. See "complete class theorems."

$^4$ For example, ridge regression arises as a Bayesian procedure when you place a Normal(0, $1/\lambda^2$) prior on $\beta$, and random-effects models arise as an empirical Bayes procedure in a similar framework. These arguments are complicated by the fact that the vanilla versions of the Bayesian admissibility theorems assume that every parameter has a proper prior placed on it. Even in ridge regression, that is not true, because the "prior" being placed on the variance $\sigma^2$ of the error term is the constant function (Lebesgue measure), which is not a proper (integrable) probability distribution. But nonetheless, many such "partially" Bayes estimators can be shown to be admissible by demonstrating that they are the "limit" of a sequence of estimators that are proper Bayes. But the proofs here get rather convoluted and delicate. See "generalized Bayes estimators".
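To make footnote 1 concrete, here is a hedged Monte Carlo sketch (my addition) comparing the risk of the MLE and the James–Stein estimator at a single, arbitrarily chosen $\theta$ with $\sigma^2 = 1$; domination means the same inequality holds at every $\theta$, which a single simulation of course cannot prove.

```python
# Monte Carlo estimate of risk (mean squared error) at one point theta.
# m = 10, sigma^2 = 1, and theta itself are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
m, n_rep = 10, 20000
theta = np.full(m, 2.0)                          # one point in the parameter space

y = theta + rng.normal(size=(n_rep, m))          # sigma^2 = 1
shrink = 1 - (m - 2) / np.sum(y**2, axis=1, keepdims=True)
theta_js = shrink * y

risk_mle = np.mean(np.sum((y - theta) ** 2, axis=1))
risk_js = np.mean(np.sum((theta_js - theta) ** 2, axis=1))
print(f"risk(MLE) ~ {risk_mle:.2f}, risk(JS) ~ {risk_js:.2f}")  # JS should come out lower
```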
2,804
Unified view on shrinkage: what is the relation (if any) between Stein's paradox, ridge regression, and random effects in mixed models?
James–Stein assumes that the dimension of the response is at least 3. In standard ridge regression the response is one-dimensional; you are confusing the number of predictors with the dimension of the response. That being said, I see the similarity among those situations, but exactly what to do, e.g. whether a factor should be fixed or random, or how much shrinkage to apply, if any, depends on the particular dataset. For example, the more orthogonal the predictors are, the less it makes sense to pick ridge regression over standard regression (a rough simulation of this point follows below). The larger the number of parameters, the more it makes sense to extract the prior from the dataset itself via empirical Bayes and then use it to shrink the parameter estimates. The higher the signal-to-noise ratio, the smaller the benefits of shrinkage, and so on.
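A rough simulation sketch of the orthogonality point (my illustration, not the answerer's; the sample sizes, the equicorrelated design, and the value of $\lambda$ are arbitrary assumptions): the gap between least squares and ridge is typically much larger when the predictors are strongly correlated than when they are nearly orthogonal.

```python
# Compare out-of-sample error of OLS and ridge under orthogonal vs. correlated designs.
import numpy as np

rng = np.random.default_rng(2)
n, p, lam, sigma = 50, 10, 5.0, 2.0
beta = rng.normal(size=p)                              # fixed "true" coefficients

def test_mse(rho):
    cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)   # equicorrelated predictors
    L = np.linalg.cholesky(cov)
    X, Xtest = rng.normal(size=(n, p)) @ L.T, rng.normal(size=(2000, p)) @ L.T
    y = X @ beta + sigma * rng.normal(size=n)
    ols = np.linalg.solve(X.T @ X, X.T @ y)
    ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    ytest = Xtest @ beta                                   # noise-free test targets
    return [np.mean((Xtest @ b - ytest) ** 2) for b in (ols, ridge)]

for rho in (0.0, 0.9):
    mse_ols, mse_ridge = np.mean([test_mse(rho) for _ in range(200)], axis=0)
    print(f"rho={rho}: OLS {mse_ols:.2f}  ridge {mse_ridge:.2f}")
```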
2,805
Unified view on shrinkage: what is the relation (if any) between Stein's paradox, ridge regression, and random effects in mixed models?
As others have said, the connection between the three is how you incorporate prior information into the measurement.

In the case of Stein's paradox, you know that the true correlation between the input variables should be zero (and all possible correlation measures, since you want to imply independence, not just uncorrelatedness), hence you can construct an estimator better than the simple sample mean by suppressing the various correlation measures. In the Bayesian framework, you can construct a prior that literally down-weights the events that lead to correlation between the sample means and up-weights the others.

In the case of ridge regression, you want to find a good estimate of the conditional expectation E(y|x). In principle this is an infinite-dimensional problem and is ill-defined, since we have only a finite number of measurements. However, the prior knowledge is that we are looking for a continuous function that models the data. This is still ill-defined, since there are still infinitely many ways to model continuous functions, but the set is somewhat smaller. Ridge regression is just one simple way to order the possible continuous functions, test them, and stop at a final number of degrees of freedom. One interpretation is the VC-dimension picture: during ridge regression, you check how well a model f(x, p1, p2, ...) with a given number of degrees of freedom describes the uncertainty inherent in the data. Practically, it measures how well f(x, p1, p2, ...) and the empirical P(p1, p2, ...) can reconstruct the full P(y|x) distribution, and not just E(y|x). In this way models with too many degrees of freedom (which usually overfit) are weighted down, since beyond a certain number of degrees of freedom additional parameters give larger correlations between the parameters and consequently much wider P(f(x, p1, p2, ...)) distributions. Another interpretation is that the original loss function is a measured value as well, and its evaluation on a given sample comes with an uncertainty, so the real task is not minimizing the loss function but finding a minimum that is significantly lower than the others (practically, changing from one number of degrees of freedom to another is a Bayesian decision, so one changes the number of parameters only if it gives a significant decrease in the loss function). Ridge regression can be interpreted as an approximation to these two pictures (VC dimension, expected loss). In some cases you want to prefer higher numbers of degrees of freedom; for example, in particle physics you study particle collisions where you expect the number of produced particles to follow a Poisson distribution, so you reconstruct the particle tracks from an image (a photo, for example) in a way that prefers a given number of tracks and suppresses interpretations of the image with a smaller or larger number of tracks.

The third case also tries to build prior information into the measurement, namely that it is known from previous measurements that students' heights can be modeled very well by a Gaussian distribution and not, say, a Cauchy.

So in short, the answer is that you can shrink the uncertainty of a measurement if you know what to expect and can categorize the data using some previous data (the prior information). This previous data is what constrains the modeling function that you use to fit the measurements. In simple cases you can write down your model in the Bayesian framework, but sometimes that is impractical, as in integrating over all possible continuous functions to find the one that attains the Bayesian maximum a posteriori value.
2,806
Unified view on shrinkage: what is the relation (if any) between Stein's paradox, ridge regression, and random effects in mixed models?
James–Stein estimator and ridge regression

Consider $\mathbf y=\mathbf{X}\beta+\boldsymbol{\epsilon}$ with $\boldsymbol{\epsilon}\sim N(0,\sigma^2I)$. The least squares solution is $\hat \beta= \mathbf S^{-1}\mathbf{X}'\mathbf{y}$, where $\mathbf S= \mathbf X'\mathbf X$. $\hat \beta$ is unbiased for $\beta$ and has covariance matrix $\sigma^2 \mathbf S^{-1}$, so we can write $\hat \beta \sim N(\beta, \sigma^2\mathbf S^{-1})$. Note that $\hat \beta$ is the maximum likelihood estimate (MLE).

James–Stein

For simplicity, for James–Stein we will assume $\mathbf S=\mathbf I$. James and Stein then add a prior on $\beta$ of the form $\beta \sim N(0,a\mathbf I)$ and obtain the posterior mean $\frac{a}{a+\sigma^2}\hat \beta=\left(1-\frac{\sigma^2}{a+\sigma^2}\right)\hat \beta$. They then estimate $\frac{1}{a+\sigma^2}$ by $\frac{p-2}{\|\hat \beta\|^2}$ and get the James–Stein estimator
$$\hat \beta_{JS}=\left(1-\frac{(p-2)\,\sigma^2}{\|\hat \beta\|^2}\right)\hat \beta.$$

Ridge regression

In ridge regression $\mathbf X$ is usually standardised (mean 0, variance 1 for each column of $\mathbf X$) so that the regression parameters $\beta=(\beta_1,\beta_2,\ldots, \beta_p)$ are comparable; in that case $S_{ii}=1$ for $i=1,2,\ldots,p$. A ridge regression estimate of $\beta$ is defined, for $\lambda\geq0$, as
$$\hat \beta (\lambda) =(\mathbf S+\lambda \mathbf I)^{-1}\mathbf X'\mathbf y=(\mathbf S +\lambda\mathbf I)^{-1}\mathbf S \hat \beta,$$
where $\hat \beta$ is the MLE. How is $\hat \beta (\lambda)$ derived? Recall $\hat \beta \sim N(\beta, \sigma^2\mathbf S^{-1})$; if we add the Bayesian prior $\beta\sim N(0,\frac{\sigma^2}{\lambda}\mathbf I)$, then we get
$$\text{E}\left(\beta\mid\hat \beta\right)=(\mathbf S +\lambda\mathbf I)^{-1}\mathbf S \hat \beta,$$
the same as the ridge regression estimate $\hat \beta (\lambda)$. So the original form of James–Stein given here takes $\mathbf S=\mathbf I$ and $a=\frac{\sigma^2}{\lambda}$.
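A quick numerical sanity check (my addition) that the two expressions for $\hat\beta(\lambda)$ above coincide; the design, the value of $\lambda$, and the choice to scale columns to unit length so that $S_{ii}=1$ are arbitrary assumptions made for this sketch.

```python
# Verify numerically that (S + lam*I)^{-1} X'y equals (S + lam*I)^{-1} S beta_hat.
import numpy as np

rng = np.random.default_rng(3)
n, p, lam = 100, 5, 2.0
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)
X = X / np.linalg.norm(X, axis=0)               # unit-length columns, so S_ii = 1
y = X @ rng.normal(size=p) + rng.normal(size=n)

S = X.T @ X
beta_hat = np.linalg.solve(S, X.T @ y)          # MLE / least squares
ridge_1 = np.linalg.solve(S + lam * np.eye(p), X.T @ y)
ridge_2 = np.linalg.solve(S + lam * np.eye(p), S @ beta_hat)
print(np.allclose(ridge_1, ridge_2))            # True
```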
2,807
Practical thoughts on explanatory vs. predictive modeling
In one sentence

Predictive modelling is all about "what is likely to happen?", whereas explanatory modelling is all about "what can we do about it?"

In many sentences

I think the main difference is what is intended to be done with the analysis. I would suggest explanation is much more important for intervention than prediction. If you want to do something to alter an outcome, then you had best be looking to explain why it is the way it is. Explanatory modelling, if done well, will tell you how to intervene (which input should be adjusted). However, if you simply want to understand what the future will be like, without any intention (or ability) to intervene, then predictive modelling is more likely to be appropriate.

As an incredibly loose example, consider "cancer data". Predictive modelling using "cancer data" would be appropriate (or at least useful) if you were funding the cancer wards of different hospitals. You don't really need to explain why people get cancer; rather, you only need an accurate estimate of how much service will be required. Explanatory modelling probably wouldn't help much here. For example, knowing that smoking leads to a higher risk of cancer doesn't on its own tell you whether to give more funding to ward A or ward B.

Explanatory modelling of "cancer data" would be appropriate if you wanted to decrease the national cancer rate; predictive modelling would be fairly obsolete here. The ability to accurately predict cancer rates is hardly likely to help you decide how to reduce them. However, knowing that smoking leads to a higher risk of cancer is valuable information, because if you decrease smoking rates (e.g. by making cigarettes more expensive), more people are at lower risk, which (hopefully) leads to an expected decrease in cancer rates.

Looking at the problem this way, I would think that explanatory modelling would mainly focus on variables that are under the control of the user, either directly or indirectly. There may be a need to collect other variables, but if you can't change any of the variables in the analysis, then I doubt that explanatory modelling will be useful, except maybe to give you the desire to gain control or influence over those variables which are important. Predictive modelling, crudely, just looks for associations between variables, whether controlled by the user or not. You only need to know the inputs/features/independent variables/etc. to make a prediction, but you need to be able to modify or influence the inputs/features/independent variables/etc. in order to intervene and change an outcome.
2,808
Practical thoughts on explanatory vs. predictive modeling
In my view the differences are as follows:

Explanatory/Descriptive

When seeking an explanatory/descriptive answer, the primary focus is on the data we have, and we seek to discover the underlying relationships in the data after noise has been accounted for.

Example: Is it true that exercising regularly (say 30 minutes per day) leads to lower blood pressure? To answer this question we may collect data from patients about their exercise regimen and their blood pressure values over time. The goal is to see if we can explain variations in blood pressure by variations in exercise regimen. Blood pressure is affected not only by exercise but by a wide variety of other factors as well, such as the amount of sodium a person eats. These other factors would be considered noise in the above example, as the focus is on teasing out the relationship between exercise regimen and blood pressure.

Prediction

When doing a predictive exercise, we are extrapolating into the unknown using the known relationships in the data we have at hand. The known relationship may emerge from an explanatory/descriptive analysis or from some other technique.

Example: If I exercise 1 hour per day, to what extent is my blood pressure likely to drop? To answer this question, we may use a previously uncovered relationship between blood pressure and exercise regimen to perform the prediction.

In the above context, the focus is not on explanation, although an explanatory model can help with the prediction process. There are also non-explanatory approaches (e.g., neural nets) which are good at predicting the unknown without necessarily adding to our knowledge of the underlying relationship between the variables.
2,809
Practical thoughts on explanatory vs. predictive modeling
One practical issue that arises here is variable selection in modelling. A variable can be an important explanatory variable (e.g., is statistically significant) but may not be useful for predictive purposes (i.e., its inclusion in the model leads to worse predictive accuracy). I see this mistake almost every day in published papers. Another difference is in the distinction between principal components analysis and factor analysis. PCA is often used in prediction, but is not so useful for explanation. FA involves the additional step of rotation which is done to improve interpretation (and hence explanation). There is a nice post today on Galit Shmueli's blog about this. Update: a third case arises in time series when a variable may be an important explanatory variable but it just isn't available for the future. For example, home loans may be strongly related to GDP but that isn't much use for predicting future home loans unless we also have good predictions of GDP.
2,810
Practical thoughts on explanatory vs. predictive modeling
Although some people find it easiest to think of the distinction in terms of the model/algorithm used (e.g., neural nets = predictive), that is only one particular aspect of the explain/predict distinction. Here is a deck of slides that I use in my data mining course to teach linear regression from both angles. Even with linear regression alone, and with this tiny example, various issues emerge that lead to different models for explanatory vs. predictive goals (choice of variables, variable selection, performance measures, etc.). Galit
2,811
Practical thoughts on explanatory vs. predictive modeling
Example: A classic example that I have seen is in the context of predicting human performance. Self-efficacy (i.e., the degree to which a person thinks that they can perform a task well) is often a strong predictor of task performance. Thus, if you put self-efficacy into a multiple regression along with other variables such as intelligence and degree of prior experience, you often find that self-efficacy is a strong predictor. This has led some researchers to suggest that self-efficacy causes task performance, and that effective interventions are those which focus on increasing a person's sense of self-efficacy. However, the alternative theoretical model sees self-efficacy largely as a consequence of task performance, i.e., if you are good, you'll know it. In this framework, interventions should focus on increasing actual competence rather than perceived competence. Thus, including a variable like self-efficacy might increase prediction, but assuming you adopt the self-efficacy-as-consequence model, it should not be included as a predictor if the aim of the model is to elucidate the causal processes influencing performance. This of course raises the issue of how to develop and validate a causal theoretical model. This clearly relies on multiple studies, ideally with some experimental manipulation, and a coherent argument about dynamic processes.

Proximal versus distal: I've seen similar issues when researchers are interested in the effects of distal and proximal causes. Proximal causes tend to predict better than distal causes. However, theoretical interest may lie in understanding the ways in which distal and proximal causes operate.

Variable selection issue: Finally, a huge issue in social science research is the variable selection issue. In any given study, there are infinitely many variables that could have been measured but weren't. Thus, the interpretation of models needs to consider the implications of this when making theoretical interpretations.
2,812
Practical thoughts on explanatory vs. predictive modeling
Statistical Modeling: The Two Cultures (2001) by L. Breiman is, perhaps, the best paper on this point. His main conclusions (see also the replies from other prominent statisticians at the end of the paper) are as follows:

"Higher predictive accuracy is associated with more reliable information about the underlying data mechanism. Weak predictive accuracy can lead to questionable conclusions."

"Algorithmic models can give better predictive accuracy than data models, and provide better information about the underlying mechanism."
2,813
Practical thoughts on explanatory vs. predictive modeling
I haven't read her work beyond the abstract of the linked paper, but my sense is that the distinction between "explanation" and "prediction" should be thrown away and replaced with the distinction between the aims of the practitioner, which are either "causal" or "predictive". In general, I think "explanation" is such a vague word that it means nearly nothing. For example, is Hooke's Law explanatory or predictive? On the other end of the spectrum, are predictively accurate recommendation systems good causal models of explicit item ratings? I think we all share the intuition that the goal of science is explanation, while the goal of technology is prediction; and this intuition somehow gets lost in consideration of the tools we use, like supervised learning algorithms, that can be employed for both causal inference and predictive modeling, but are really purely mathematical devices that are not intrinsically linked to "prediction" or "explanation". Having said all of that, maybe the only word that I would apply to a model is interpretable. Regressions are usually interpretable; neural nets with many layers are often not so. I think people sometimes naively assume that a model that is interpretable is providing causal information, while uninterpretable models only provide predictive information. This attitude seems simply confused to me.
2,814
Practical thoughts on explanatory vs. predictive modeling
I am still a bit unclear as to what the question is. Having said that, to my mind the fundamental difference between predictive and explanatory models is the difference in their focus.

Explanatory Models

By definition, explanatory models have as their primary focus the goal of explaining something in the real world. In most instances, we seek to offer simple and clean explanations. By simple I mean that we prefer parsimony (explain the phenomena with as few parameters as possible) and by clean I mean that we would like to make statements of the following form: "the effect of changing $x$ by one unit changes $y$ by $\beta$, holding everything else constant". Given these goals of simple and clear explanations, explanatory models seek to penalize complex models (by using appropriate criteria such as AIC) and prefer to obtain orthogonal independent variables (either via controlled experiments or via suitable data transformations).

Predictive Models

The goal of predictive models is to predict something. Thus, they tend to focus less on parsimony or simplicity but more on their ability to predict the dependent variable.

However, the above is somewhat of an artificial distinction, as explanatory models can be used for prediction and sometimes predictive models can explain something.
2,815
Practical thoughts on explanatory vs. predictive modeling
As others have already said, the distinction is somewhat meaningless, except insofar as the aims of the researcher are concerned. Brad Efron, one of the commentators on The Two Cultures paper, made the following observation (as discussed in my earlier question):

"Prediction by itself is only occasionally sufficient. The post office is happy with any method that predicts correct addresses from hand-written scrawls. Peter Gregory undertook his study for prediction purposes, but also to better understand the medical basis of hepatitis. Most statistical surveys have the identification of causal factors as their ultimate goal."

Certain fields (e.g., medicine) place a heavy weight on model fitting as an explanatory process (the distribution, etc.), as a means of understanding the underlying process that generates the data. Other fields are less concerned with this and will be happy with a "black box" model that has very high predictive success. This can work its way into the model-building process as well.
2,816
Practical thoughts on explanatory vs. predictive modeling
With respect, this question could be better focused. Have people ever used one term when the other was more appropriate? Yes, of course. Sometimes it's clear enough from context, or you don't want to be pedantic. Sometimes people are just sloppy or lazy in their terminology. This is true of many people, and I'm certainly no better. What's of potential value here (discussing explanation vs. prediction on CV), is to clarify the distinction between the two approaches. In short, the distinction centers on the role of causality. If you want to understand some dynamic in the world, and explain why something happens the way it does, you need to identify the causal relationships amongst the relevant variables. To predict, you can ignore causality. For example, you can predict an effect from knowledge about its cause; you can predict the existence of the cause from knowledge that the effect occurred; and you can predict the approximate level of one effect by knowledge of another effect that is driven by the same cause. Why would someone want to be able to do this? To increase their knowledge of what might happen in the future, so that they can plan accordingly. For example, a parole board may want to be able to predict the probability that a convict will recidivate if paroled. However, this is not sufficient for explanation. Of course, estimating the true causal relationship between two variables can be extremely difficult. In addition, models that do capture (what are thought to be) the real causal relationships are often worse for making predictions. So why do it, then? First, most of this is done in science, where understanding is pursued for its own sake. Second, if we can reliably pick out true causes, and can develop the ability to affect them, we can exert some influence over the effects. With regard to the statistical modeling strategy, there isn't a large difference. Primarily the difference lies in how to conduct the study. If your goal is to be able to predict, find out what information will be available to users of the model when they will need to make the prediction. Information they won't have access to is of no value. If they will most likely want to be able to predict at a certain level (or within a narrow range) of the predictors, try to center the sampled range of the predictor on that level and oversample there. For instance, if a parole board will mostly want to know about criminals with 2 major convictions, you might gather info about criminals with 1, 2, and 3 convictions. On the other hand, assessing the causal status of a variable basically requires an experiment. That is, experimental units need to be assigned at random to prespecified levels of the explanatory variables. If there is concern about whether or not the nature of the causal effect is contingent on some other variable, that variable must be included in the experiment. If it is not possible to conduct a true experiment, then you face a much more difficult situation, one that is too complex to go into here.
2,817
Practical thoughts on explanatory vs. predictive modeling
Most of the answers have helped clarify what modeling for explanation and modeling for prediction are and why they differ. What is not clear, thus far, is how they differ. So, I thought I would offer an example that might be useful. Suppose we are interested in modeling College GPA as a function of academic preparation. As measures of academic preparation, we have: Aptitude Test Scores; HS GPA; and Number of AP Tests passed. Strategy for Prediction: If the goal is prediction, I might use all of these variables simultaneously in a linear model and my primary concern would be predictive accuracy. Whichever of the variables prove most useful for predicting College GPA would be included in the final model. Strategy for Explanation: If the goal is explanation, I might be more concerned about data reduction and think carefully about the correlations among the independent variables. My primary concern would be interpreting the coefficients. Example: In a typical multivariate problem with correlated predictors, it would not be uncommon to observe regression coefficients that are "unexpected". Given the interrelationships among the independent variables, it would not be surprising to see partial coefficients for some of these variables that are not in the same direction as their zero-order relationships and which may seem counterintuitive and tough to explain. For example, suppose the model suggests that (with Aptitude Test Scores and Number of AP Tests Successfully Completed taken into account) higher High School GPAs are associated with lower College GPAs. This is not a problem for prediction, but it does pose problems for an explanatory model where such a relationship is difficult to interpret. This model might provide the best out-of-sample predictions but it does little to help us understand the relationship between academic preparation and College GPA. Instead, an explanatory strategy might seek some form of variable reduction, such as principal components, factor analysis, or SEM to: focus on the variable that is the best measure of "academic performance" and model College GPA on that one variable; or use factor scores/latent variables derived from the combination of the three measures of academic preparation rather than the original variables. Strategies such as these might reduce the predictive power of the model, but they may yield a better understanding of how Academic Preparation is related to College GPA.
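A minimal R sketch (not part of the answer above) contrasting the two strategies in the GPA example on simulated data; all names (prep, apt, hsgpa, ap, gpa) are invented for illustration.
set.seed(42)
n     <- 300
prep  <- rnorm(n)                    # latent "academic preparation"
apt   <- prep + rnorm(n, sd = 0.5)   # aptitude test score
hsgpa <- prep + rnorm(n, sd = 0.5)   # high-school GPA
ap    <- prep + rnorm(n, sd = 0.5)   # number of AP tests passed
gpa   <- 0.7*prep + rnorm(n, sd = 0.6)

# Prediction strategy: use all correlated measures and judge by accuracy.
pred_model <- lm(gpa ~ apt + hsgpa + ap)
coef(pred_model)                     # individual coefficients can look "unexpected"

# Explanation strategy: reduce the three measures to a single composite first.
pc1 <- prcomp(cbind(apt, hsgpa, ap), scale. = TRUE)$x[, 1]
expl_model <- lm(gpa ~ pc1)
summary(expl_model)$coef             # one interpretable "preparation" effect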
2,818
Practical thoughts on explanatory vs. predictive modeling
I would like to offer a model-centered view on the matter. Predictive modeling is what happens in most analyses. For example, a researcher sets up a regression model with a bunch of predictors. The regression coefficients then represent predictive comparisons between groups. The predictive aspect comes from the probability model: the inference is done with regard to a superpopulation model which may have produced the observed population or sample. The purpose of this model is to predict new outcomes for units emerging from this superpopulation. Often, this is a vain objective because things are always changing, especially in the social world. Or because your model is about rare units such as countries and you cannot draw a new sample. The usefulness of the model in this case is left to the appreciation of the analyst. When you try to generalize the results to other groups or future units, this is still prediction but of a different kind. We may call it forecasting, for example. The key point is that the predictive power of estimated models is, by default, of a descriptive nature. You compare an outcome across groups and hypothesize a probability model for these comparisons, but you cannot conclude that these comparisons constitute causal effects. The reason is that these groups may suffer from selection bias. That is, they may naturally have a higher score in the outcome of interest, irrespective of the treatment (the hypothetical causal intervention). Or they may be subject to a different treatment effect size than other groups. This is why, especially for observational data, the estimated models are generally about predictive comparisons and not explanation. Explanation is about the identification and estimation of causal effects and requires well-designed experiments or thoughtful use of instrumental variables. In this case, the predictive comparisons are free of selection bias and represent causal effects. The model may thus be regarded as explanatory. I found that thinking in these terms has often clarified what I was really doing when setting up a model for some data.
2,819
Practical thoughts on explanatory vs. predictive modeling
We can learn a lot more than we think from black-box "predictive" models. The key is in running different types of sensitivity analyses and simulations to really understand how model OUTPUT is affected by changes in the INPUT space. In this sense even a purely predictive model can provide explanatory insights. This is a point that is often overlooked or misunderstood by the research community. Just because we do not understand why an algorithm is working doesn't mean the algorithm lacks explanatory power... Overall, from a mainstream point of view, probabilityislogic's succinct reply is absolutely correct...
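A minimal R sketch (not part of the answer above) of the kind of one-at-a-time sensitivity analysis it describes: sweep one input across its range while holding the other fixed, and watch how the fitted model's predictions respond. The loess fit below is only a stand-in for a "black box"; the data and names are invented.
set.seed(7)
n  <- 500
x1 <- runif(n); x2 <- runif(n)
y  <- sin(2*pi*x1) + 0.3*x2 + rnorm(n, sd = 0.2)

black_box <- loess(y ~ x1 + x2)              # any model with a predict() method would do

grid <- data.frame(x1 = seq(0.05, 0.95, length.out = 50),
                   x2 = mean(x2))            # hold x2 at its mean
effect_x1 <- predict(black_box, newdata = grid)

plot(grid$x1, effect_x1, type = "l",
     xlab = "x1 (swept)", ylab = "predicted y",
     main = "One-at-a-time sensitivity of the fitted model")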
2,820
Practical thoughts on explanatory vs. predictive modeling
There is a distinction between what she calls explanatory and predictive applications in statistics. She says we should know, every time we use one or the other, which one exactly is being used. She says we often mix them up, hence the conflation. I agree that in social science applications the distinction is sensible, but in the natural sciences they are and should be the same. Also, I call them inference vs. forecasting, and agree that in the social sciences one should not mix them up. I'll start with the natural sciences. In physics we're focused on explaining; we're trying to understand how the world works, what causes what, etc. So the focus is on causality, inference and such. On the other hand, the predictive aspect is also a part of the scientific process. In fact, the way you prove a theory, which already explained observations well (think of in-sample), is to predict new observations and then check how the prediction worked. Any theory that lacks predictive abilities will have big trouble gaining acceptance in physics. That's why experiments such as Michelson-Morley's are so important. In the social sciences, unfortunately, the underlying phenomena are unstable, unrepeatable, unreproducible. If you watch nuclei decay you'll get the same results every time you observe them, and the same results that I or a dude one hundred years ago got. Not in economics or finance. Also, the ability to conduct experiments is very limited, almost non-existent for all practical purposes; we only observe and draw random samples of observations. I could keep going, but the idea is that the phenomena we deal with are very unstable, hence our theories are not of the same quality as in physics. Therefore, one of the ways we deal with the situation is to focus on either inference (when you try to understand what causes or impacts what) or forecasting (just say what you think will happen to this or that, ignoring the structure).
2,821
Practical thoughts on explanatory vs. predictive modeling
A structural model would give explanation and a predictive model would give prediction. A structural model would have latent variables. A structural model is a simultaneous combination of regression and factor analysis. The latent variables are manifested in the form of multicollinearity in predictive models (regression).
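A minimal R sketch (not part of the answer above) of the claim that a latent variable shows up as multicollinearity among its observed indicators in an ordinary regression. Everything is simulated and the names are invented.
set.seed(3)
n      <- 200
latent <- rnorm(n)                     # the unobserved construct
ind1   <- latent + rnorm(n, sd = 0.3)  # two noisy indicators of the same construct
ind2   <- latent + rnorm(n, sd = 0.3)
y      <- latent + rnorm(n)

cor(ind1, ind2)                        # the indicators are strongly correlated

fit <- lm(y ~ ind1 + ind2)
summary(fit)$coef                      # the latent effect is split; standard errors inflate

# Variance inflation factor for ind1, computed by hand:
1 / (1 - summary(lm(ind1 ~ ind2))$r.squared)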
2,822
Practical thoughts on explanatory vs. predictive modeling
"Explanatory model" has also been used in medicine and the health sciences, with a very different meaning. Basically, what people hold as internal beliefs or meanings can be quite different from accepted explanations. For example, a religious person may have an explanatory model that an illness was due to punishment or karma for a past behaviour, while also accepting the biological reasons. https://thehealthcareblog.com/blog/2013/06/11/the-patient-explanatory-model/ https://pdfs.semanticscholar.org/0b69/ffd5cc4c7bb2f401be6819c946a955344880.pdf
2,823
How can adding a 2nd IV make the 1st IV significant?
Although collinearity (of predictor variables) is a possible explanation, I would like to suggest it is not an illuminating explanation because we know collinearity is related to "common information" among the predictors, so there is nothing mysterious or counter-intuitive about the side effect of introducing a second correlated predictor into the model. Let us then consider the case of two predictors that are truly orthogonal: there is absolutely no collinearity among them. A remarkable change in significance can still happen. Designate the predictor variables $X_1$ and $X_2$ and let $Y$ name the predictand. The regression of $Y$ against $X_1$ will fail to be significant when the variation in $Y$ around its mean is not appreciably reduced when $X_1$ is used as the independent variable. When that variation is strongly associated with a second variable $X_2$, however, the situation changes. Recall that multiple regression of $Y$ against $X_1$ and $X_2$ is equivalent to:
1. Separately regress $Y$ and $X_1$ against $X_2$.
2. Regress the $Y$ residuals against the $X_1$ residuals.
The residuals from the first step have removed the effect of $X_2$. When $X_2$ is closely correlated with $Y$, this can expose a relatively small amount of variation that had previously been masked. If this variation is associated with $X_1$, we obtain a significant result. All this might perhaps be clarified with a concrete example. To begin, let's use R to generate two orthogonal independent variables along with some independent random error $\varepsilon$:
n <- 32
set.seed(182)
u <- matrix(rnorm(2*n), ncol=2)
u0 <- cbind(u[,1] - mean(u[,1]), u[,2] - mean(u[,2]))
x <- svd(u0)$u
eps <- rnorm(n)
(The svd step assures the two columns of matrix x (representing $X_1$ and $X_2$) are orthogonal, ruling out collinearity as a possible explanation of any subsequent results.) Next, create $Y$ as a linear combination of the $X$'s and the error. I have adjusted the coefficients to produce the counter-intuitive behavior:
y <- x %*% c(0.05, 1) + eps * 0.01
This is a realization of the model $Y \sim_{iid} N(0.05 X_1 + 1.00 X_2, 0.01^2)$ with $n=32$ cases. Look at the two regressions in question. First, regress $Y$ against $X_1$ only:
> summary(lm(y ~ x[,1]))
...
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.002576   0.032423  -0.079    0.937
x[, 1]       0.068950   0.183410   0.376    0.710
The high p-value of 0.710 shows that $X_1$ is completely non-significant. Next, regress $Y$ against $X_1$ and $X_2$:
> summary(lm(y ~ x))
...
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.002576   0.001678  -1.535    0.136
x1           0.068950   0.009490   7.265 5.32e-08 ***
x2           1.003276   0.009490 105.718  < 2e-16 ***
Suddenly, in the presence of $X_2$, $X_1$ is strongly significant, as indicated by the near-zero p-values for both variables. We can visualize this behavior by means of a scatterplot matrix of the variables $X_1$, $X_2$, and $Y$ along with the residuals used in the two-step characterization of multiple regression above. Because $X_1$ and $X_2$ are orthogonal, the $X_1$ residuals will be the same as $X_1$ and therefore need not be redrawn. We will include the residuals of $Y$ against $X_2$ in the scatterplot matrix, giving this figure:
lmy <- lm(y ~ x[,2])
d <- data.frame(X1=x[,1], X2=x[,2], Y=y, RY=residuals(lmy))
plot(d)
Here is a rendering of it (with a little prettification). [Figure: scatterplot matrix of $X_1$, $X_2$, $Y$, and the residuals RY.] This matrix of graphics has four rows and four columns, which I will count down from the top and from left to right. Notice:
- The $(X_1, X_2)$ scatterplot in the second row and first column confirms the orthogonality of these predictors: the least squares line is horizontal and correlation is zero.
- The $(X_1, Y)$ scatterplot in the third row and first column exhibits the slight but completely insignificant relationship reported by the first regression of $Y$ against $X_1$. (The correlation coefficient, $\rho$, is only $0.07$.)
- The $(X_2, Y)$ scatterplot in the third row and second column shows the strong relationship between $Y$ and the second independent variable. (The correlation coefficient is $0.996$.)
- The fourth row examines the relationships between the residuals of $Y$ (regressed against $X_2$) and other variables:
  - The vertical scale shows that the residuals are (relatively) quite small: we couldn't easily see them in the scatterplot of $Y$ against $X_2$.
  - The residuals are strongly correlated with $X_1$ ($\rho = 0.80$). The regression against $X_2$ has unmasked this previously hidden behavior.
  - By construction, there is no remaining correlation between the residuals and $X_2$.
  - There is little correlation between $Y$ and these residuals ($\rho = 0.09$). This shows how the residuals can behave entirely differently than $Y$ itself. That's how $X_1$ can suddenly be revealed as a significant contributor to the regression.
Finally, it is worth remarking that the two estimates of the $X_1$ coefficient (both equal to $0.06895$, not far from the intended value of $0.05$) agree only because $X_1$ and $X_2$ are orthogonal. Except in designed experiments, it is rare for orthogonality to hold exactly. A departure from orthogonality usually causes the coefficient estimates to change.
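A minimal R sketch (not part of the answer above) verifying the two-step characterization of multiple regression used in it: the coefficient of $X_1$ in the full regression equals the coefficient from regressing the residuals of $Y$-on-$X_2$ against the residuals of $X_1$-on-$X_2$. The data here are freshly simulated, with deliberately correlated predictors so the equality is not an artifact of orthogonality.
set.seed(123)
n  <- 100
x2 <- rnorm(n)
x1 <- 0.6*x2 + rnorm(n)                   # x1 correlated with x2
y  <- 0.05*x1 + 1.0*x2 + rnorm(n, sd = 0.5)

b_multiple <- coef(lm(y ~ x1 + x2))["x1"]

ry  <- residuals(lm(y  ~ x2))             # step 1: remove x2 from y ...
rx1 <- residuals(lm(x1 ~ x2))             # ... and from x1
b_twostep <- coef(lm(ry ~ rx1))["rx1"]    # step 2: residuals on residuals

c(multiple = unname(b_multiple), two_step = unname(b_twostep))  # the two estimates coincide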
2,824
How can adding a 2nd IV make the 1st IV significant?
It feels like the OP's question can be interpreted in two different ways: (1) Mathematically, how does OLS work, such that adding an independent variable can change results in an unexpected way? (2) How can modifying my model by adding one variable change the effect of another, independent variable in the model? There are several good answers already for question #1. And question #2 may be so obvious to the experts that they assume the OP must be asking question #1 instead. But I think question #2 deserves an answer, which would be something like: Let's start with an example. Say that you had the heights, age, gender, etc., of a number of children, and you wanted to do a regression to predict their height. You start with a naive model that uses gender as the independent variable. And it's not statistically significant. (How could it be? You're mixing 3-year-olds and teenagers.) Then you add in age and suddenly not only is age significant, but so is gender. How could that be? Of course, in my example, you can clearly see that age is an important factor in the height of a child/teen. Probably the most important factor that you have data on. Gender can matter, too, especially for older children and adults, but gender alone is a poor model of how tall a child is. Age plus gender is a reasonable (though, of course, simplified) model that's adequate for the task. If you add other data -- interaction of age and gender, diet, height of parents, etc. -- you could make an even better model, which would of course still be simplified compared to the host of factors that actually determine a child's height, but then again all models are simplified versions of reality. (A map of the world that's 1:1 scale isn't too useful for a traveler.) Your original model (gender only) is too simplified -- so simplified that it's essentially broken. But that doesn't mean that gender is not useful in a better model. EDIT: added gung's suggestion re: the interaction term of age and gender.
2,825
How can adding a 2nd IV make the 1st IV significant?
I think this issue has been discussed before on this site fairly thoroughly, if you just knew where to look. So I will probably add a comment later with some links to other questions, or may edit this to provide a fuller explanation if I can't find any. There are two basic possibilities: First, the other IV may absorb some of the residual variability and thus increase the power of the statistical test of the initial IV. The second possibility is that you have a suppressor variable. This is a very counter-intuitive topic, but you can find some info here*, here or this excellent CV thread. * Note that you need to read all the way through to the bottom to get to the part that explains suppressor variables; you could just skip ahead to there, but you will be best served by reading the whole thing. Edit: as promised, I'm adding a fuller explanation of my point regarding how the other IV can absorb some of the residual variability and thus increase the power of the statistical test of the initial IV. @whuber added an impressive example, but I thought I might add a complementary example that explains this phenomenon in a different way, which may help some people understand the phenomenon more clearly. In addition, I demonstrate that the second IV does not have to be more strongly associated (although, in practice, it almost always will be for this phenomenon to occur). Covariates in a regression model can be tested with $t$-tests by dividing the parameter estimate by its standard error, or they can be tested with $F$-tests by partitioning the sums of squares. When type III SS are used, these two testing methods will be equivalent (for more on types of SS and associated tests, it may help to read my answer here: How to interpret type I SS). For those just starting to learn about regression methods, the $t$-tests are often the focus because they seem easier for people to understand. However, this is a case where I think looking at the ANOVA table is more helpful. Let's recall the basic ANOVA table for a simple regression model:
\begin{array}{lllll} &\text{Source} &\text{SS} &\text{df} &\text{MS} &\text{F} \\ \hline &x_1 &\sum(\hat y_i-\bar y)^2 &1 &\frac{\text{SS}_{x_1}}{\text{df}_{x_1}} &\frac{\text{MS}_{x_1}}{\text{MS}_{\rm res}} \\ &\text{Residual} &\sum(y_i-\hat y_i)^2 &N-(1+1) &\frac{\text{SS}_{\rm res}}{\text{df}_{\rm res}} \\ &\text{Total} &\sum(y_i-\bar y)^2 &N-1 \end{array}
Here $\bar y$ is the mean of $y$, $y_i$ is the observed value of $y$ for unit (e.g., patient) $i$, $\hat y_i$ is the model's predicted value for unit $i$, and $N$ is the total number of units in the study. If you have a multiple regression model with two orthogonal covariates, the ANOVA table might be constructed like so:
\begin{array}{lllll} &\text{Source} &\text{SS} &\text{df} &\text{MS} &\text{F} \\ \hline &x_1 &\sum(\hat y_{x_{1i}\bar x_2}-\bar y)^2 &1 &\frac{\text{SS}_{x_1}}{\text{df}_{x_1}} &\frac{\text{MS}_{x_1}}{\text{MS}_{\rm res}} \\ &x_2 &\sum(\hat y_{\bar x_1x_{2i}}-\bar y)^2 &1 &\frac{\text{SS}_{x_2}}{\text{df}_{x_2}} &\frac{\text{MS}_{x_2}}{\text{MS}_{\rm res}} \\ &\text{Residual} &\sum(y_i-\hat y_i)^2 &N-(2+1) &\frac{\text{SS}_{\rm res}}{\text{df}_{\rm res}} \\ &\text{Total} &\sum(y_i-\bar y)^2 &N-1 \end{array}
Here $\hat y_{x_{1i}\bar x_2}$, for example, is the predicted value for unit $i$ if its observed value for $x_1$ was its actual observed value, but its observed value for $x_2$ was the mean of $x_2$. Of course, it is possible that $\bar x_2$ is the observed value of $x_2$ for some observation, in which case there are no adjustments to be made, but this won't typically be the case. Note that this method for creating the ANOVA table is only valid if all variables are orthogonal; this is a highly simplified case created for expository purposes. If we are considering the situation where the same data are used to fit a model both with and without $x_2$, then the observed $y$ values and $\bar y$ will be the same. Thus, the total SS must be the same in both ANOVA tables. In addition, if $x_1$ and $x_2$ are orthogonal to each other, then $SS_{x_1}$ will be identical in both ANOVA tables as well. So, how is it that there can be sums of squares associated with $x_2$ in the table? Where did they come from if the total SS and $SS_{x_1}$ are the same? The answer is that they came from $SS_\text{res}$. The $\text{df}_{x_2}$ are also taken from $\text{df}_\text{res}$. Now the $F$-test of $x_1$ is the $MS_{x_1}$ divided by $MS_\text{res}$ in both cases. Since $MS_{x_1}$ is the same, the difference in the significance of this test comes from the change in $MS_\text{res}$, which has changed in two ways: It started with fewer SS, because some were allotted to $x_2$, but those are divided by fewer df, since some degrees of freedom were allotted to $x_2$, as well. The change in the significance / power of the $F$-test (and equivalently the $t$-test, in this case) is due to how those two changes trade off. If more SS are given to $x_2$, relative to the df that are given to $x_2$, then the $MS_\text{res}$ will decrease, causing the $F$ associated with $x_1$ to increase and $p$ to become more significant. The effect of $x_2$ does not have to be larger than $x_1$ for this to occur, but if it is not, then the shifts in $p$-values will be quite small. The only way it will end up switching between non-significance and significance is if the $p$-values happen to be just slightly on both sides of alpha. Here is an example, coded in R:
x1 = rep(1:3, times=15)
x2 = rep(1:3, each=15)
cor(x1, x2)
# [1] 0
set.seed(11628)
y = 0 + 0.3*x1 + 0.3*x2 + rnorm(45, mean=0, sd=1)
model1  = lm(y~x1)
model12 = lm(y~x1+x2)
anova(model1)
# ...
#           Df Sum Sq Mean Sq F value  Pr(>F)
# x1         1  5.314  5.3136  3.9568 0.05307 .
# Residuals 43 57.745  1.3429
# ...
anova(model12)
# ...
#           Df Sum Sq Mean Sq F value  Pr(>F)
# x1         1  5.314  5.3136  4.2471 0.04555 *
# x2         1  5.198  5.1979  4.1546 0.04785 *
# Residuals 42 52.547  1.2511
# ...
In fact, $x_2$ doesn't have to be significant at all. Consider:
set.seed(1201)
y = 0 + 0.3*x1 + 0.3*x2 + rnorm(45, mean=0, sd=1)
model1  = lm(y~x1)      # refit with the new y
model12 = lm(y~x1+x2)   # refit with the new y
anova(model1)
# ...
#           Df Sum Sq Mean Sq F value  Pr(>F)
# x1         1  3.631  3.6310  3.8461 0.05636 .
# ...
anova(model12)
# ...
#           Df Sum Sq Mean Sq F value  Pr(>F)
# x1         1  3.631  3.6310  4.0740 0.04996 *
# x2         1  3.162  3.1620  3.5478 0.06656 .
# ...
These are admittedly nothing like the dramatic example in @whuber's post, but they may help people understand what is going on here.
2,826
How can adding a 2nd IV make the 1st IV significant?
This thread has already three excellent answers (+1 to each). My answer is an extended comment and illustration to the point made by @gung (which took me some time to understand): There are two basic possibilities: First, the other IV may absorb some of the residual variability and thus increase the power of the statistical test of the initial IV. The second possibility is that you have a suppressor variable. For me, the clearest conceptual way to think about multiple regression is geometric. Consider two IVs $x_1$ and $x_2$, and a DV $y$. Let them be centered, so that we do not need to care about intercept. Then if we have $n$ data points in the dataset, all three variables can be imagined as vectors in $\mathbb R^n$; the length of each vector corresponds to the variance and the angle between any two of them corresponds to the correlation. Crucially, performing multiple OLS regression is nothing else than projecting dependent variable $\mathbf y$ onto the plane spanned by $\mathbf x_1$ and $\mathbf x_2$ (with the "hat matrix" simply being a projector). Readers unfamiliar with this approach can look e.g. in The Elements of Statistical Learning, Section 3.2, or in many other books. "Enhancement" The following Figure shows both possibilities listed by @gung. Consider only the blue part at first (i.e. ignore all the red lines): Here $\mathbf x_1$ and $\mathbf x_2$ are orthogonal predictors spanning a plane (called "plane $X$"). Dependent variable $\mathbf y$ is projected onto this plane, and its projection OD is what is usually called $\hat y$. Then OD is decomposed into OF (contribution of IV1) and OE (contribution of IV2). Note that OE is much longer than OF. Now imagine that there is no second predictor $\mathbf x_2$. Regressing $\mathbf y$ onto $\mathbf x_1$ would result in projecting it onto OF as well. But the angle AOC ($\alpha$) is close to $90^\circ$; an appropriate statistical test would conclude that there is almost no association between $y$ and $x_1$ and that $x_1$ is hence not significant. When $x_2$ is added, the projection OF does not change (because $\mathbf x_1$ and $\mathbf x_2$ are orthogonal). However, to test whether $x_1$ is significant, we now need to look at what is left unexplained after $x_2$. The second predictor $x_2$ explains a large portion of $y$, OE, with only a smaller part EC remaining unexplained. For clarity, I copied this vector to the origin and called it OG: notice that the angle GOF ($\beta$) is much smaller than $\alpha$. It can easily be small enough for the test to conclude that it is "significantly smaller than $90^\circ$", i.e. that $x_1$ is now a significant predictor. Another way to put it is that the test is now comparing the length of OF to OG, and not to OC as before; OF is tiny and "insignificant" compared to OC, but big enough to be "significant" compared to OG. This is exactly the situation presented by @whuber, @gung, and @Wayne in their answers. I don't know if this effect has a standard name in the regression literature, so I will call it "enhancement". Suppression Notice that in the above, if $\alpha=90^\circ$ then $\beta=90^\circ$ as well; in other words, "enhancement" can only enhance the power to detect significant predictor, but if the effect of $x_1$ alone was exactly zero, it will stay exactly zero. Not so in suppression. Imagine that we add $x_3$ to $x_1$ (instead of $x_2$) -- please consider the red part of the drawing. 
The vector $\mathbf x_3$ lies in the same plane $X$, but is not orthogonal to $\mathbf x_1$ (meaning that $x_3$ is correlated with $x_1$). Since the plane $X$ is the same as before, projection OD of $\mathbf y$ also stays the same. However, the decomposition of OD into contributions of both predictors changes drastically: now OD is decomposed into OF' and OE'. Notice how OF' is much longer than OF used to be. A statistical test would compare the length of OF' to E'C and conclude that the contribution of $x_1$ is significant. This means that a predictor $x_1$ that has exactly zero correlation with $y$ turns out to be a significant predictor. This situation is (very confusingly, in my opinion!) known as "suppression"; see here as to why: Suppression effect in regression: definition and visual explanation/depiction -- @ttnphns illustrates his great answer with a lot of figures similar to mine here (only better done).
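To make the suppression geometry concrete, here is a small R sketch (an editorial addition, not part of the original answer; the helper variables z and e1 are purely illustrative). y is built to be uncorrelated with x1 on its own, yet x1 becomes clearly significant once the correlated predictor x3 enters the model:
set.seed(1)
n  <- 200
z  <- rnorm(n)          # the part of x3 that actually drives y
e1 <- rnorm(n)          # "noise" shared by x1 and x3
x1 <- e1                # x1 is uncorrelated with y in the population
x3 <- z + e1            # x3 is correlated with both y and x1
y  <- z + rnorm(n)
summary(lm(y ~ x1))$coefficients       # x1 alone: slope near zero, typically not significant
summary(lm(y ~ x1 + x3))$coefficients  # jointly: x1 gets a clearly significant (negative) slope
The joint fit recovers a negative coefficient for x1 because x1 "suppresses" the part of x3 that is unrelated to y.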
How can adding a 2nd IV make the 1st IV significant?
This thread has already three excellent answers (+1 to each). My answer is an extended comment and illustration to the point made by @gung (which took me some time to understand): There are two basic
How can adding a 2nd IV make the 1st IV significant? This thread has already three excellent answers (+1 to each). My answer is an extended comment and illustration to the point made by @gung (which took me some time to understand): There are two basic possibilities: First, the other IV may absorb some of the residual variability and thus increase the power of the statistical test of the initial IV. The second possibility is that you have a suppressor variable. For me, the clearest conceptual way to think about multiple regression is geometric. Consider two IVs $x_1$ and $x_2$, and a DV $y$. Let them be centered, so that we do not need to care about intercept. Then if we have $n$ data points in the dataset, all three variables can be imagined as vectors in $\mathbb R^n$; the length of each vector corresponds to the variance and the angle between any two of them corresponds to the correlation. Crucially, performing multiple OLS regression is nothing else than projecting dependent variable $\mathbf y$ onto the plane spanned by $\mathbf x_1$ and $\mathbf x_2$ (with the "hat matrix" simply being a projector). Readers unfamiliar with this approach can look e.g. in The Elements of Statistical Learning, Section 3.2, or in many other books. "Enhancement" The following Figure shows both possibilities listed by @gung. Consider only the blue part at first (i.e. ignore all the red lines): Here $\mathbf x_1$ and $\mathbf x_2$ are orthogonal predictors spanning a plane (called "plane $X$"). Dependent variable $\mathbf y$ is projected onto this plane, and its projection OD is what is usually called $\hat y$. Then OD is decomposed into OF (contribution of IV1) and OE (contribution of IV2). Note that OE is much longer than OF. Now imagine that there is no second predictor $\mathbf x_2$. Regressing $\mathbf y$ onto $\mathbf x_1$ would result in projecting it onto OF as well. But the angle AOC ($\alpha$) is close to $90^\circ$; an appropriate statistical test would conclude that there is almost no association between $y$ and $x_1$ and that $x_1$ is hence not significant. When $x_2$ is added, the projection OF does not change (because $\mathbf x_1$ and $\mathbf x_2$ are orthogonal). However, to test whether $x_1$ is significant, we now need to look at what is left unexplained after $x_2$. The second predictor $x_2$ explains a large portion of $y$, OE, with only a smaller part EC remaining unexplained. For clarity, I copied this vector to the origin and called it OG: notice that the angle GOF ($\beta$) is much smaller than $\alpha$. It can easily be small enough for the test to conclude that it is "significantly smaller than $90^\circ$", i.e. that $x_1$ is now a significant predictor. Another way to put it is that the test is now comparing the length of OF to OG, and not to OC as before; OF is tiny and "insignificant" compared to OC, but big enough to be "significant" compared to OG. This is exactly the situation presented by @whuber, @gung, and @Wayne in their answers. I don't know if this effect has a standard name in the regression literature, so I will call it "enhancement". Suppression Notice that in the above, if $\alpha=90^\circ$ then $\beta=90^\circ$ as well; in other words, "enhancement" can only enhance the power to detect significant predictor, but if the effect of $x_1$ alone was exactly zero, it will stay exactly zero. Not so in suppression. Imagine that we add $x_3$ to $x_1$ (instead of $x_2$) -- please consider the red part of the drawing. 
The vector $\mathbf x_3$ lies in the same plane $X$, but is not orthogonal to $\mathbf x_1$ (meaning that $x_3$ is correlated with $x_1$). Since the plane $X$ is the same as before, projection OD of $\mathbf y$ also stays the same. However, the decomposition of OD into contributions of both predictors changes drastically: now OD is decomposed into OF' and OE'. Notice how OF' is much longer than OF used to be. A statistical test would compare the length of OF' to E'C and conclude that the contribution of $x_1$ is significant. This means that a predictor $x_1$ that has exactly zero correlation with $y$ turns out to be a significant predictor. This situation is (very confusingly, in my opinion!) known as "suppression"; see here as to why: Suppression effect in regression: definition and visual explanation/depiction -- @ttnphns illustrates his great answer with a lot of figures similar to mine here (only better done).
How can adding a 2nd IV make the 1st IV significant? This thread has already three excellent answers (+1 to each). My answer is an extended comment and illustration to the point made by @gung (which took me some time to understand): There are two basic
2,827
How can adding a 2nd IV make the 1st IV significant?
I don't think any of the answers have explicitly mentioned the mathematical intuition for the orthogonal/uncorrelated case, so I will show this here, but I don't believe this answer will be 100% complete. Suppose that $x_1$ and $x_2$ are uncorrelated, which implies that their centered versions are orthogonal, i.e., $(x_1 - \bar{x}_1 ) \perp (x_2 - \bar{x}_2)$. Now consider the estimators: $$ \hat{\beta} = (X^TX)^{-1}X^Ty $$ Without loss of generality (constant shifts don't affect $\hat{\beta}$), suppose that $X$ here consists of the centered versions of $x_1, x_2$. We can also assume that $X$ here does not include the intercept, which is fine since it's centered and we can simply compute the intercept as $\hat{\beta}_0 = \bar{y}$, so $X \in \mathbb{R}^{n \times 2}$ and $(X^TX)^{-1}$ is diagonal, which will result in the estimators from multiple linear regression being the same as those from the separate regressions of $y$ on $x_1$ and of $y$ on $x_2$. Now consider the t score, which is used to compute p values and measure significance. We have $$ t = \frac{\hat{\beta}_j - \beta_j}{SE(\hat{\beta}_j)} $$ We want to test for $\beta_j \neq 0$, so our null hypothesis is $\beta_j = 0$, and we have $$ t = \frac{\hat{\beta}_j}{SE(\hat{\beta}_j)} $$ As we saw earlier, $\hat{\beta}_j$ doesn't change when a predictor that is orthogonal to $x_j$ is added. So for this situation, the only thing that would affect the t score/significance is $SE(\hat{\beta}_j)$, which is the square root of the $j$th diagonal entry of $$ \operatorname{var}\left(\hat{\beta} \right) = \sigma^2 (X^TX)^{-1} $$ Again note that $(X^TX)^{-1}$ is a diagonal matrix whose entry for $x_j$ remains the same in the simple linear regression and multiple linear regression cases, so the only thing that could change the t score is $\sigma^2$. If we knew the population variance, the t score wouldn't change, but we typically estimate the population variance with $$ \hat{\sigma}^2 = \frac{1}{n - p - 1}\sum_{i=1}^n (y_i - \hat{y}_i)^2 $$ The residual sum of squares is non-increasing when additional predictors are added (orthogonal or not) -- this is equivalent to the fact that the plain vanilla $R^2$ cannot decrease when additional predictors are added, because the sum of squares of residuals can only remain the same (which occurs when the added predictor can be written as a linear combination of the current predictors) or decrease. An intuitive way to think about this: if having $p+1$ predictors with non-zero coefficients gave you a worse fit than some $p$-subset of these $p+1$ predictors, then least squares would just return the smaller model with $p$ non-zero predictors and 1 zeroed predictor. So when the added predictor absorbs enough residual variability, $\hat{\sigma}^2$ and hence $SE(\hat{\beta}_j)$ decrease, the t score increases, and this contributes to a decrease in the p-value; however, the degrees of freedom of the t-distribution (and the denominator $n - p - 1$) decrease with each added predictor, which pushes in the opposite direction. So there are competing effects going on here.
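As a quick numerical check of this argument (an editorial addition that reuses gung's simulated design from earlier in the thread), the slope of x1 is unchanged when the orthogonal x2 is added, while the residual variance estimate and hence the standard error shrink:
x1 <- rep(1:3, times = 15)
x2 <- rep(1:3, each = 15)                # cor(x1, x2) is exactly 0
set.seed(11628)
y  <- 0 + 0.3*x1 + 0.3*x2 + rnorm(45, mean = 0, sd = 1)
coef(summary(lm(y ~ x1)))["x1", ]        # estimate, SE, t, p with x1 alone
coef(summary(lm(y ~ x1 + x2)))["x1", ]   # same estimate, smaller SE, larger |t|
c(sigma(lm(y ~ x1))^2, sigma(lm(y ~ x1 + x2))^2)  # the estimate of sigma^2 drops when x2 is added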
How can adding a 2nd IV make the 1st IV significant?
I don't think any of the answers have explicitly mentioned the mathematical intuition for the orthogonal/uncorrelated case, so I will show this here, but I don't believe this answer will be 100% compl
How can adding a 2nd IV make the 1st IV significant? I don't think any of the answers have explicitly mentioned the mathematical intuition for the orthogonal/uncorrelated case, so I will show this here, but I don't believe this answer will be 100% complete. Suppose that $x_1$ and $x_2$ are uncorrelated, which implies that their centered versions are orthogonal, i.e., $(x_1 - \bar{x}_1 ) \perp (x_2 - \bar{x}_2)$. Now consider the estimators: $$ \hat{\beta} = (X^TX)^{-1}X^Ty $$ Without loss of generality (constant shifts doesn't affect $\hat{\beta}$), suppose that $X$ here consist of the centered versions of $x_1, x_2$. We can also assume that $X$ here does not include the intercept, which is fine since it's centered and we can simply compute the intercept as $\hat{\beta}_0 = \bar{y}$, so $X \in \mathbb{R}^{n \times 2}$ and $(X^TX)^{-1}$ is diagonal, which will result in the estimators from multiple linear regression being the same as that of separate regression $y$ on $x_1$ and $y$ on $x_2$. Now consider the t score, which is used to compute p values and measure significance. We have $$ t = \frac{\hat{\beta}_j - \beta_j}{SE(\hat{\beta}_j)} $$ We want to test for $\beta_j \neq 0$, so our null hypothesis is $\beta_j = 0$, and we have $$ t = \frac{\hat{\beta}_j}{SE(\hat{\beta}_j)} $$ As we saw earlier, $\hat{\beta}_j$ doesn't change when a predictor that is orthogonal to $x_j$ is added. So for this situation, the only thing that would affect the t score/significance is $SE(\hat{\beta})$, which we know to be $$ SE(\beta) = \operatorname{var}\left(\hat{\beta} \right) = \sigma^2 (X^TX)^{-1} $$ Again note that $(X^TX)^{-1}$ is a diagonal matrix, so this component remains the same for the simple linear regression and multiple linear regression case, so the only thing that could change the t score is $\sigma^2$. If we know the population variance, then the t score wouldn't change, but we typically estimate the population variance with $$ \hat{\sigma}^2 = \frac{1}{n - p - 1}\sum_{i=1}^n (y_i - \hat{y}_i)^2 $$ The estimator of the population variance is non-decreasing when additional predictors are added (orthogonal or not) -- this is equivalent to that the plain vanilla $R^2$ cannot decrease when additional predictors are added, because the sum of squares of residuals can only remain the same (which occurs when the added predictor can be written as a linear combination of the current predictors) or increase. An intuitive way to think about this is if having $p+1$ predictors with non-zero coefficients would get you a worse fit than a p-subset of these $p+1$ predictors, then least squares would just return the smaller model with p non-zero predictors and 1 zeroed predictor So we see that $SE(\hat{\beta})$ is non-decreasing, which means that the t score increases monotonically, which would contribute to a decrease in the p-value; however, the degrees of freedom of the t-distribution decreases with increasing predictors, and this results in an increase in p-value. So there are competing effects going on here.
How can adding a 2nd IV make the 1st IV significant? I don't think any of the answers have explicitly mentioned the mathematical intuition for the orthogonal/uncorrelated case, so I will show this here, but I don't believe this answer will be 100% compl
2,828
F1/Dice-Score vs IoU
You're on the right track. So a few things right off the bat. From the definition of the two metrics, we have that IoU and F score are always within a factor of 2 of each other: $$ F/2 \leq IoU \leq F $$ and also that they meet at the extremes of one and zero under the conditions that you would expect (perfect match and completely disjoint). Note also that the ratio between them can be related explicitly to the IoU: $$ IoU/F = 1/2 + IoU/2 $$ so that the ratio approaches 1/2 as both metrics approach zero. But there's a stronger statement that can be made for the typical application of classification à la machine learning. For any fixed "ground truth", the two metrics are always positively correlated. That is to say that if classifier A is better than B under one metric, it is also better than B under the other metric. It is tempting then to conclude that the two metrics are functionally equivalent so the choice between them is arbitrary, but not so fast! The problem comes when taking the average score over a set of inferences. Then the difference emerges when quantifying how much worse classifier B is than A for any given case. In general, the IoU metric tends to penalize single instances of bad classification more than the F score quantitatively, even when they both agree that this one instance is bad. Similarly to how L2 can penalize the largest mistakes more than L1, the IoU metric tends to have a "squaring" effect on the errors relative to the F score. So the F score tends to measure something closer to average performance, while the IoU score measures something closer to worst-case performance. Suppose for example that the vast majority of the inferences are moderately better with classifier A than B, but some of them are significantly worse using classifier A. It may be the case then that the F metric favors classifier A while the IoU metric favors classifier B. To be sure, both of these metrics are much more alike than they are different. But both of them suffer from another disadvantage from the standpoint of taking averages of these scores over many inferences: they both overstate the importance of cases with little to no actual ground-truth positives. In the common example of image segmentation, if an image only has a single pixel of some detectable class, and the classifier detects that pixel and one other pixel, its F score is a lowly 2/3 and the IoU is even worse at 1/2. Trivial mistakes like these can seriously dominate the average score taken over a set of images. In short, each metric weights each pixel error in inverse proportion to the size of the selected/relevant set, rather than treating all pixel errors equally. There is a far simpler metric that avoids this problem. Simply use the total error: FN + FP (e.g. 5% of the image's pixels were miscategorized). In the case where one is more important than the other, a weighted average may be used: $c_0$FP + $c_1$FN.
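A tiny numerical check of these relations (an editorial sketch, not from the original answer; F_score and IoU are hypothetical helper names):
F_score <- function(TP, FP, FN) 2*TP / (2*TP + FP + FN)
IoU     <- function(TP, FP, FN) TP / (TP + FP + FN)
c(F = F_score(1, 1, 0), IoU = IoU(1, 1, 0))   # the single-pixel example: 2/3 and 1/2
grid <- expand.grid(TP = 1:5, FP = 0:5, FN = 0:5)
f <- with(grid, F_score(TP, FP, FN))
i <- with(grid, IoU(TP, FP, FN))
all(f/2 <= i & i <= f)      # TRUE: IoU always lies between F/2 and F
all.equal(i, f / (2 - f))   # TRUE: for a single case each metric determines the other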
F1/Dice-Score vs IoU
You're on the right track. So a few things right off the bat. From the definition of the two metrics, we have that IoU and F score are always within a factor of 2 of each other: $$ F/2 \leq IoU \leq F
F1/Dice-Score vs IoU You're on the right track. So a few things right off the bat. From the definition of the two metrics, we have that IoU and F score are always within a factor of 2 of each other: $$ F/2 \leq IoU \leq F $$ and also that they meet at the extremes of one and zero under the conditions that you would expect (perfect match and completely disjoint). Note also that the ratio between them can be related explicitly to the IoU: $$ IoU/F = 1/2 + IoU/2 $$ so that the ratio approaches 1/2 as both metrics approach zero. But there's a stronger statement that can be made for the typical application of classification a la machine learning. For any fixed "ground truth", the two metrics are always positively correlated. That is to say that if classifier A is better than B under one metric, it is also better than classifier B under the other metric. It is tempting then to conclude that the two metrics are functionally equivalent so the choice between them is arbitrary, but not so fast! The problem comes when taking the average score over a set of inferences. Then the difference emerges when quantifying how much worse classifier B is than A for any given case. In general, the IoU metric tends to penalize single instances of bad classification more than the F score quantitatively even when they can both agree that this one instance is bad. Similarly to how L2 can penalize the largest mistakes more than L1, the IoU metric tends to have a "squaring" effect on the errors relative to the F score. So the F score tends to measure something closer to average performance, while the IoU score measures something closer to the worst case performance. Suppose for example that the vast majority of the inferences are moderately better with classifier A than B, but some of them of them are significantly worse using classifier A. It may be the case then that the F metric favors classifier A while the IoU metric favors classifier B. To be sure, both of these metrics are much more alike than they are different. But both of them suffer from another disadvantage from the standpoint of taking averages of these scores over many inferences: they both overstate the importance of sets with little-to-no actual ground truth positive sets. In the common example of image segmentation, if an image only has a single pixel of some detectable class, and the classifier detects that pixel and one other pixel, its F score is a lowly 2/3 and the IoU is even worse at 1/2. Trivial mistakes like these can seriously dominate the average score taken over a set of images. In short, it weights each pixel error inversely proportionally to the size of the selected/relevant set rather than treating them equally. There is a far simpler metric that avoids this problem. Simply use the total error: FN + FP (e.g. 5% of the image's pixels were miscategorized). In the case where one is more important than the other, a weighted average may be used: $c_0$FP + $c_1$FN.
F1/Dice-Score vs IoU You're on the right track. So a few things right off the bat. From the definition of the two metrics, we have that IoU and F score are always within a factor of 2 of each other: $$ F/2 \leq IoU \leq F
2,829
F1/Dice-Score vs IoU
Yes, they indeed represent different things and have different meanings when you look at the formulas. However, when you use them as an evaluation measure to compare the performance of different models, you only need to choose one of them. The reason can be seen as follows: First, let $$ a = TP,\quad b=TP+FP+FN $$ Then, we have $$ IoU = \frac{TP}{TP+FP+FN} = \frac{a}{b} $$ $$ Dice = \frac{TP+TP}{TP+TP+FP+FN} = \frac{2a}{a+b} $$ Hence, $$ Dice = \frac{\frac{2a}{b}}{\frac{a+b}{b}}= \frac{2 \cdot \frac{a}{b}}{\frac{a}{b}+1} = \frac{2 \cdot IoU}{IoU + 1} $$ Considering the line plot of $y=2x/(x+1)$ on the range $[0,1]$, we find that Dice is a monotonically increasing function of IoU. So the following situation will not happen: $Dice_1 < Dice_2$ while $IoU_1 > IoU_2$ (the subscripts index different models). That is, the Dice score is just a monotone transformation of the IoU in the numerical sense. It's enough to use only one of them for model comparison.
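A one-line check of the monotone relation (an editorial addition, not part of the original answer):
iou  <- seq(0, 1, by = 0.01)
dice <- 2*iou / (iou + 1)
all(diff(dice) > 0)    # TRUE: Dice is a strictly increasing function of IoU
plot(iou, dice, type = "l", xlab = "IoU", ylab = "Dice")   # the curve y = 2x/(x + 1)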
F1/Dice-Score vs IoU
Yes, they indeed represent different things and have different meaning when looking at the formulas. However, when you use them as a evaluation measure to compare the performance of different model, y
F1/Dice-Score vs IoU Yes, they indeed represent different things and have different meaning when looking at the formulas. However, when you use them as a evaluation measure to compare the performance of different model, you only need to choose one of them. The reason can be explained by the following evidence: First, let $$ a = TP,\quad b=TP+FP+TN $$ Then, we have $$ IoU = \frac{TP}{TP+FP+TN} = \frac{a}{b} $$ $$ Dice = \frac{TP+TP}{TP+TP+FP+TN} = \frac{2a}{a+b} $$ Hence, $$ Dice = \frac{\frac{2a}{b}}{\frac{a+b}{b}}= \frac{2 \cdot \frac{a}{b}}{\frac{a}{b}+1} = \frac{2 \cdot IoU}{IoU + 1} $$ Considering the line plot of $y=2x/(x+1)$ in the range of [0,1], we find out that Dice has a monotonic increasing relation to IoU. Then the following situation will not happen: $Dice_1 < Dice_2$ while $IoU_1 > IoU_2$ (the subscript represents different model). That is, Dice score is just similar representation of IoU under the numerical sense. It's enough to only using one of them for model comparison.
F1/Dice-Score vs IoU Yes, they indeed represent different things and have different meaning when looking at the formulas. However, when you use them as a evaluation measure to compare the performance of different model, y
2,830
F1/Dice-Score vs IoU
For Nico's answer above, I'm wondering shouldn't IoU be TP/(TP+FP+FN) instead of TP/(TP+FP+TN)? Also shouldn't the Dice score be (TP+TP)/(TP+TP+FP+FN)?
F1/Dice-Score vs IoU
For Nico's answer above, I'm wondering shouldn't IoU be TP/(TP+FP+FN) instead of TP/(TP+FP+TN)? Also shouldn't the Dice score be (TP+TP)/(TP+TP+FP+FN)?
F1/Dice-Score vs IoU For Nico's answer above, I'm wondering shouldn't IoU be TP/(TP+FP+FN) instead of TP/(TP+FP+TN)? Also shouldn't the Dice score be (TP+TP)/(TP+TP+FP+FN)?
F1/Dice-Score vs IoU For Nico's answer above, I'm wondering shouldn't IoU be TP/(TP+FP+FN) instead of TP/(TP+FP+TN)? Also shouldn't the Dice score be (TP+TP)/(TP+TP+FP+FN)?
2,831
Understanding stratified cross-validation
Stratification seeks to ensure that each fold is representative of all strata of the data. Generally this is done in a supervised way for classification and aims to ensure each class is (approximately) equally represented across each test fold (which are of course combined in a complementary way to form training folds). The intuition behind this relates to the bias of most classification algorithms. They tend to weight each instance equally, which means overrepresented classes get too much weight (e.g. optimizing F-measure, Accuracy or a complementary form of error). Stratification is not so important for an algorithm that weights each class equally (e.g. optimizing Kappa, Informedness or ROC AUC) or according to a cost matrix (e.g. one that gives a value to correctly classifying each class and/or a cost to each way of misclassifying). See, e.g. D. M. W. Powers (2014), What the F-measure doesn't measure: Features, Flaws, Fallacies and Fixes. http://arxiv.org/pdf/1503.06410 One specific issue that is important across even unbiased or balanced algorithms is that they tend not to be able to learn or test a class that isn't represented at all in a fold, and furthermore even the case where only one of a class is represented in a fold doesn't allow generalization to be performed resp. evaluated. However even this consideration isn't universal and for example doesn't apply so much to one-class learning, which tries to determine what is normal for an individual class, and effectively identifies outliers as being a different class, given that cross-validation is about determining statistics not generating a specific classifier. On the other hand, supervised stratification compromises the technical purity of the evaluation, as the labels of the test data shouldn't affect training, but in stratification they are used in the selection of the training instances. Unsupervised stratification is also possible, based on spreading similar data around by looking only at the attributes of the data, not the true class. See, e.g. N. A. Diamantidis, D. Karlis, E. A. Giakoumakis (1997), Unsupervised stratification of cross-validation for accuracy estimation. https://doi.org/10.1016/S0004-3702(99)00094-6 Stratification can also be applied to regression rather than classification, in which case, as with unsupervised stratification, similarity rather than identity is used, but the supervised version uses the known true function value. Further complications are rare classes and multilabel classification, where classifications are being done on multiple (independent) dimensions. Here tuples of the true labels across all dimensions can be treated as classes for the purpose of cross-validation. However, not all combinations necessarily occur, and some combinations may be rare. Rare classes and rare combinations are a problem in that a class/combination that occurs at least once but less than K times (in K-CV) cannot be represented in all test folds. In such cases, one could instead consider a form of stratified bootstrapping (sampling with replacement to generate a full-size training fold, with repetitions expected and 36.8% expected unselected for testing, and with one instance of each class selected initially without replacement for the test fold). Another approach to multilabel stratification is to try to stratify or bootstrap each class dimension separately without seeking to ensure representative selection of combinations.
With $L$ labels and $N$ instances and $K_{kl}$ instances of class $k$ for label $l$, we can randomly choose (without replacement) from the corresponding set of labeled instances $D_{kl}$ approximately $N/LK_{kl}$ instances. This does not ensure optimal balance but rather seeks balance heuristically. This can be improved by barring selection of labels at or over quota unless there is no choice (as some combinations do not occur or are rare). Problems tend to mean either that there is too little data or that the dimensions are not independent.
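For the basic single-label classification case, the effect of stratification is easy to see in base R; this is an editorial sketch with a made-up 80/20 class split, not code from the original answer:
set.seed(1)
y <- factor(rep(c("A", "B"), times = c(80, 20)))   # imbalanced outcome
k <- 5
plain_folds <- sample(rep(1:k, length.out = length(y)))   # ordinary k-fold assignment
strat_folds <- integer(length(y))
for (cl in levels(y)) {                                   # assign folds within each class
  idx <- which(y == cl)
  strat_folds[idx] <- sample(rep(1:k, length.out = length(idx)))
}
table(plain_folds, y)   # per-fold class counts can drift under plain CV
table(strat_folds, y)   # exactly 16 A's and 4 B's in every stratified fold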
Understanding stratified cross-validation
Stratification seeks to ensure that each fold is representative of all strata of the data. Generally this is done in a supervised way for classification and aims to ensure each class is (approximately
Understanding stratified cross-validation Stratification seeks to ensure that each fold is representative of all strata of the data. Generally this is done in a supervised way for classification and aims to ensure each class is (approximately) equally represented across each test fold (which are of course combined in a complementary way to form training folds). The intuition behind this relates to the bias of most classification algorithms. They tend to weight each instance equally which means overrepresented classes get too much weight (e.g. optimizing F-measure, Accuracy or a complementary form of error). Stratification is not so important for an algorithm that weights each class equally (e.g. optimizing Kappa, Informedness or ROC AUC) or according to a cost matrix (e.g. that is giving a value to each class correctly weighted and/or a cost to each way of misclassifying). See, e.g. D. M. W. Powers (2014), What the F-measure doesn't measure: Features, Flaws, Fallacies and Fixes. http://arxiv.org/pdf/1503.06410 One specific issue that is important across even unbiased or balanced algorithms, is that they tend not to be able to learn or test a class that isn't represented at all in a fold, and furthermore even the case where only one of a class is represented in a fold doesn't allow generalization to performed resp. evaluated. However even this consideration isn't universal and for example doesn't apply so much to one-class learning, which tries to determine what is normal for an individual class, and effectively identifies outliers as being a different class, given that cross-validation is about determining statistics not generating a specific classifier. On the other hand, supervised stratification compromises the technical purity of the evaluation as the labels of the test data shouldn't affect training, but in stratification are used in the selection of the training instances. Unsupervised stratification is also possible based on spreading similar data around looking only at the attributes of the data, not the true class. See, e.g. https://doi.org/10.1016/S0004-3702(99)00094-6 N. A. Diamantidis, D. Karlis, E. A. Giakoumakis (1997), Unsupervised stratification of cross-validation for accuracy estimation. Stratification can also be applied to regression rather than classification, in which case like the unsupervised stratification, similarity rather than identity is used, but the supervised version uses the known true function value. Further complications are rare classes and multilabel classification, where classifications are being done on multiple (independent) dimensions. Here tuples of the true labels across all dimensions can be treated as classes for the purpose of cross-validation. However, not all combinations necessarily occur, and some combinations may be rare. Rare classes and rare combinations are a problem in that a class/combination that occurs at least once but less than K times (in K-CV) cannot be represented in all test folds. In such cases, one could instead consider a form of stratified boostrapping (sampling with replacement to generate a full size training fold with repetitions expected and 36.8% expected unselected for testing, with one instance of each class selected initially without replacement for the test fold). Another approach to multilabel stratification is to try to stratify or bootstrap each class dimension separately without seeking to ensure representative selection of combinations. 
With L labels and N instances and Kkl instances of class k for label l, we can randomly choose (without replacement) from the corresponding set of labeled instances Dkl approximately N/LKkl instances. This does not ensure optimal balance but rather seeks balance heuristically. This can be improved by barring selection of labels at or over quota unless there is no choice (as some combinations do not occur or are rare). Problems tend to mean either that there is too little data or that the dimensions are not independent.
Understanding stratified cross-validation Stratification seeks to ensure that each fold is representative of all strata of the data. Generally this is done in a supervised way for classification and aims to ensure each class is (approximately
2,832
Understanding stratified cross-validation
The cross-validation article in the Encyclopedia of Database Systems says: Stratification is the process of rearranging the data as to ensure each fold is a good representative of the whole. For example in a binary classification problem where each class comprises 50% of the data, it is best to arrange the data such that in every fold, each class comprises around half the instances. About the importance of stratification, Kohavi (A study of cross-validation and bootstrap for accuracy estimation and model selection) concludes that: stratification is generally a better scheme, both in terms of bias and variance, when compared to regular cross-validation.
Understanding stratified cross-validation
Cross-validation article in Encyclopedia of Database Systems says: Stratification is the process of rearranging the data as to ensure each fold is a good representative of the whole. For example in
Understanding stratified cross-validation Cross-validation article in Encyclopedia of Database Systems says: Stratification is the process of rearranging the data as to ensure each fold is a good representative of the whole. For example in a binary classification problem where each class comprises 50% of the data, it is best to arrange the data such that in every fold, each class comprises around half the instances. About the importance of the stratification, Kohavi (A study of cross-validation and bootstrap for accuracy estimation and model selection) concludes that: stratification is generally a better scheme, both in terms of bias and variance, when compared to regular cross-validation.
Understanding stratified cross-validation Cross-validation article in Encyclopedia of Database Systems says: Stratification is the process of rearranging the data as to ensure each fold is a good representative of the whole. For example in
2,833
Understanding stratified cross-validation
A quick and dirty explanation as follows: Cross Validation: splits the data into k "random" folds. Stratified Cross Validation: splits the data into k folds, making sure each fold is an appropriate representative of the original data (class distribution, mean, variance, etc). Example of 5-fold Cross Validation: [figure omitted]. Example of 5-fold Stratified Cross Validation: [figure omitted].
Understanding stratified cross-validation
A quick and dirty explanation as follows: Cross Validation: Splits the data into k "random" folds Stratified Cross Valiadtion: Splits the data into k folds, making sure each fold is an appropriate rep
Understanding stratified cross-validation A quick and dirty explanation as follows: Cross Validation: Splits the data into k "random" folds Stratified Cross Valiadtion: Splits the data into k folds, making sure each fold is an appropriate representative of the original data. (class distribution, mean, variance, etc) Example of 5 fold Cross Validation: Example of 5 folds Stratified Cross Validation:
Understanding stratified cross-validation A quick and dirty explanation as follows: Cross Validation: Splits the data into k "random" folds Stratified Cross Valiadtion: Splits the data into k folds, making sure each fold is an appropriate rep
2,834
Understanding stratified cross-validation
Saying that the mean response value is approximately equal in all the folds is another way of saying that the proportion of each class in all the folds is approximately equal. For example, if we have a dataset with 80 class 0 records and 20 class 1 records, we get a mean response value of (80*0 + 20*1)/100 = 0.2, and we want 0.2 to be the mean response value in every fold. This is also a quick way in EDA to check whether the given dataset is imbalanced, instead of counting class frequencies.
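Working through that 80/20 example in code (an editorial sketch): the overall mean response is 0.2, and stratified folds reproduce it exactly.
y <- rep(c(0, 1), times = c(80, 20))   # 80 class-0 and 20 class-1 records
mean(y)                                # 0.2 overall
set.seed(1)
folds <- integer(length(y))
for (cl in c(0, 1)) {                  # stratified 5-fold assignment
  idx <- which(y == cl)
  folds[idx] <- sample(rep(1:5, length.out = length(idx)))
}
tapply(y, folds, mean)                 # 0.2 in every fold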
Understanding stratified cross-validation
The mean response value is approximately equal in all the folds is another way of saying the proportion of each class in all the folds are approximately equal. For example, we have a dataset with 80 c
Understanding stratified cross-validation The mean response value is approximately equal in all the folds is another way of saying the proportion of each class in all the folds are approximately equal. For example, we have a dataset with 80 class 0 records and 20 class 1 records. We may gain a mean response value of (80*0+20*1)/100 = 0.2 and we want 0.2 to be the mean response value of all folds. This is also a quick way in EDA to measure if the dataset given is imbalanced instead of counting.
Understanding stratified cross-validation The mean response value is approximately equal in all the folds is another way of saying the proportion of each class in all the folds are approximately equal. For example, we have a dataset with 80 c
2,835
Understanding stratified cross-validation
This page of the scikit-learn documentation has a pretty nice visual explanation of the differences between cross-validation sampling approaches. Here are some images, taken from that page, for the methods you asked about. As you can see, with KFold CV you divide the data into equal parts and pick train and test sets. For this method, I suggest including a shuffling step to avoid any bias in this division. For stratified KFold CV, you divide train and test sets within each stratum, since there is an imbalance in sample sizes. This is essential for classification problems, but you may consider using it when doing regression if you can divide the data into clusters.
Understanding stratified cross-validation
This page of the documentation of scikit-learn has a pretty nice visual explanation of what are the differences between cross-validation sampling approaches. Here are some images for the methods you a
Understanding stratified cross-validation This page of the documentation of scikit-learn has a pretty nice visual explanation of what are the differences between cross-validation sampling approaches. Here are some images for the methods you asked taken from the mentioned page. As you can see, with KFold CV you divide the data in equal parts and pick train and test sets. For this method, I suggest you to include a sample shuffling process to avoid any eventual bias on this division. For stratified KFold CV, you consider dividing train and test sets for each strata, since there is a imbalance on sample sizes. This is essential for classification problems, but you may consider using it when doing regression if you can divide data into clusters.
Understanding stratified cross-validation This page of the documentation of scikit-learn has a pretty nice visual explanation of what are the differences between cross-validation sampling approaches. Here are some images for the methods you a
2,836
Why does including latitude and longitude in a GAM account for spatial autocorrelation?
The main issue in any statistical model is the assumptions that underlie any inference procedure. In the sort of model you describe, the residuals are assumed independent. If the data have some spatial dependence and this is not modelled in the systematic part of the model, the residuals from that model will also exhibit spatial dependence, or in other words they will be spatially autocorrelated. Such dependence would invalidate the theory that produces p-values from test statistics in the GAM, for example; you can't trust the p-values because they were computed assuming independence. You have two main options for handling such data: i) model the spatial dependence in the systematic part of the model, or ii) relax the assumption of independence and estimate the correlation between residuals. i) is what is being attempted by including a smooth of the spatial locations in the model. ii) requires estimation of the correlation matrix of the residuals, often during model fitting, using a procedure like generalised least squares. How well either of these approaches deals with the spatial dependence will depend upon the nature & complexity of the spatial dependence and how easily it can be modelled. In summary, if you can model the spatial dependence between observations then the residuals are more likely to be independent random variables and therefore not violate the assumptions of any inferential procedure.
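As an illustration of option i) (an editorial sketch, assuming the GAM is fitted with the mgcv package; lon, lat and x are made-up, simulated variables):
library(mgcv)
set.seed(1)
n   <- 300
dat <- data.frame(lon = runif(n), lat = runif(n), x = runif(n))
dat$y <- 2*dat$x + sin(3*dat$lon) + cos(3*dat$lat) + rnorm(n, sd = 0.3)
m0 <- gam(y ~ s(x), data = dat)                # ignores space: residuals stay spatially patterned
m1 <- gam(y ~ s(x) + s(lon, lat), data = dat)  # adds a 2-d thin-plate smooth of location
summary(m1)   # the effect of x is now assessed after the smooth spatial surface is modelled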
Why does including latitude and longitude in a GAM account for spatial autocorrelation?
The main issue in any statistical model is the assumptions that underlay any inference procedure. In the sort of model you describe, the residuals are assumed independent. If they have some spatial de
Why does including latitude and longitude in a GAM account for spatial autocorrelation? The main issue in any statistical model is the assumptions that underlay any inference procedure. In the sort of model you describe, the residuals are assumed independent. If they have some spatial dependence and this is not modelled in the sytematic part of the model, the residuals from that model will also exhibit spatial dependence, or in other words they will be spatially autocorrelated. Such dependence would invalidate the theory that produces p-values from test statistics in the GAM for example; you can't trust the p-values because they were computed assuming independence. You have two main options for handling such data; i) model the spatial dependence in the systematic part of the model, or ii) relax the assumption of independence and estimate the correlation between residuals. i) is what is being attempted by including a smooth of the spatial locations in the model. ii) requires estimation of the correlation matrix of the residuals often during model fitting using a procedure like generalised least squares. How well either of these approaches deal with the spatial dependence will depend upon the nature & complexity of the spatial dependence and how easily it can be modelled. In summary, if you can model the spatial dependence between observations then the residuals are more likely to be independent random variables and therefore not violate the assumptions of any inferential procedure.
Why does including latitude and longitude in a GAM account for spatial autocorrelation? The main issue in any statistical model is the assumptions that underlay any inference procedure. In the sort of model you describe, the residuals are assumed independent. If they have some spatial de
2,837
Why does including latitude and longitude in a GAM account for spatial autocorrelation?
"Spatial autocorrelation" means various things to various people. An overarching concept, though, is that a phenomenon observed at locations $\mathbf{z}$ may depend in some definite way on (a) covariates, (b) location, and (c) its values at nearby locations. (Where the technical definitions vary lie in the kind of data being considered, what "definite way" is postulated, and what "nearby" means: all of these have to be made quantitative in order to proceed.) To see what might be going on, let's consider a simple example of such a spatial model to describe the topography of a region. Let the measured elevation at a point $\mathbf{z}$ be $y(\mathbf{z})$. One possible model is that $y$ depends in some definite mathematical way on the coordinates of $\mathbf{z}$, which I will write $(z_1,z_2)$ in this two-dimensional situation. Letting $\varepsilon$ represent (hypothetically independent) deviations between the observations and the model (which as usual are assumed to have zero expectation), we may write $$y(\mathbf{z}) = \beta_0 + \beta_1 z_1 + \beta_2 z_2 + \varepsilon(\mathbf{z})$$ for a linear trend model. The linear trend (represented by the $\beta_1$ and $\beta_2$ coefficients) is one way to capture the idea that nearby values $y(\mathbf{z})$ and $y(\mathbf{z}')$, for $\mathbf{z}$ close to $\mathbf{z}'$, should tend to be close to one another. We can even calculate this by considering the expected value of the size of the difference between $y(\mathbf{z})$ and $y(\mathbf{z}')$, $E[|y(\mathbf{z}) - y(\mathbf{z}')|]$. It turns out the mathematics is much simpler if we use a slightly different measure of difference: instead, we compute the expected squared difference: $$\eqalign{ E[\left(y(\mathbf{z}) - y(\mathbf{z}')\right)^2] &= E[\left(\beta_0 + \beta_1 z_1 + \beta_2 z_2 + \varepsilon(\mathbf{z}) - \left(\beta_0 + \beta_1 z_1' + \beta_2 z_2' + \varepsilon(\mathbf{z}')\right)\right)^2] \\ &=E[\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)' + \varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)^2] \\ &=E[\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)'\right)^2 \\ &\quad+ 2\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)'\right)\left(\varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)\\ &\quad+ \left(\varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)^2] \\ &=\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)'\right)^2 + E[\left(\varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)^2] }$$ This model is free of any explicit spatial autocorrelation, because there is no term in it directly relating $y(\mathbf{z})$ to nearby values $y(\mathbf{z}')$. An alternative, different, model ignores the linear trend and supposes only that there is autocorrelation. One way to do that is through the structure of the deviations $\varepsilon(\mathbf{z})$. We might posit that $$y(\mathbf{z}) = \beta_0 + \varepsilon(\mathbf{z})$$ and, to account for our anticipation of correlation, we will assume some kind of "covariance structure" for the $\varepsilon$. For this to be spatially meaningful, we will assume the covariance between $\varepsilon(\mathbf{z})$ and $\varepsilon(\mathbf{z}')$, equal to $E[\varepsilon(\mathbf{z})\varepsilon(\mathbf{z}')]$ because the $\varepsilon$ have zero means, tends to decrease as $\mathbf{z}$ and $\mathbf{z}'$ become more and more distant. Because the details do not matter, let's just call this covariance $C(\mathbf{z}, \mathbf{z}')$. This is spatial autocorrelation. 
Indeed, the (usual Pearson) correlation between $y(\mathbf{z})$ and $y(\mathbf{z}')$ is $$\rho(y(\mathbf{z}), y(\mathbf{z}')) = \frac{C(\mathbf{z}, \mathbf{z}')}{\sqrt{C(\mathbf{z}, \mathbf{z})C(\mathbf{z}', \mathbf{z}')}}.$$ In this notation, the previous expected squared difference of $y$'s for the first model is $$\eqalign{ E[\left(y(\mathbf{z}) - y(\mathbf{z}')\right)^2] &= \left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2')\right)^2 + E[\left(\varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)^2] \\ &=\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2')\right)^2 + C_1(\mathbf{z}, \mathbf{z}) + C_1(\mathbf{z}', \mathbf{z}') }$$ (assuming $\mathbf{z} \ne \mathbf{z}'$) because the $\varepsilon$ at different locations have been assumed to be independent. I have written $C_1$ instead of $C$ to indicate this is the covariance function for the first model. When the covariances of the $\varepsilon$ do not vary dramatically from one location to another (indeed, they are usually assumed to be constant), this equation shows that the expected squared difference in $y$'s increases quadratically with the separation between $\mathbf{z}$ and $\mathbf{z}'$. The actual amount of increase is determined by the trend coefficients $\beta_1$ and $\beta_2$. Let's see what the expected squared difference in the $y$'s is for the new model, model 2: $$\eqalign{ E[\left(y(\mathbf{z}) - y(\mathbf{z}')\right)^2] &= E[\left(\beta_0 + \varepsilon(\mathbf{z}) - \left(\beta_0 + \varepsilon(\mathbf{z}')\right)\right)^2] \\ &=E[\left(\varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)^2] \\ &=E[\varepsilon(\mathbf{z})^2 - 2 \varepsilon(\mathbf{z})\varepsilon(\mathbf{z}') + \varepsilon(\mathbf{z}')^2] \\ &=C_2(\mathbf{z}, \mathbf{z}) - 2C_2(\mathbf{z}, \mathbf{z}') + C_2(\mathbf{z}', \mathbf{z}'). }$$ Again this behaves in the right way: because we figured $C_2(\mathbf{z}, \mathbf{z}')$ should decrease as $\mathbf{z}$ and $\mathbf{z}'$ become more separated, the expected squared difference in $y$'s indeed goes up with increasing separation of the locations. Comparing the two expressions for $E[\left(y(\mathbf{z}) - y(\mathbf{z}')\right)^2]$ in the two models shows us that $\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2')\right)^2$ in the first model is playing a role mathematically identical to $-2C_2(\mathbf{z}, \mathbf{z}')$ in the second model. (There's an additive constant lurking there, buried in the different meanings of the $C_i(\mathbf{z}, \mathbf{z})$, but it doesn't matter in this analysis.) Ergo, depending on the model, spatial correlation is typically represented as some combination of a trend and a stipulated correlation structure on random errors. We now have, I hope, a clear answer to the question: one can represent the idea behind Tobler's Law of Geography ("everything is related to everything else, but nearer things are more related") in different ways. In some models, Tobler's Law is adequately represented by including trends (or "drift" terms) that are functions of spatial coordinates like longitude and latitude. In others, Tobler's Law is captured by means of a nontrivial covariance structure among additive random terms (the $\varepsilon$). In practice, models incorporate both methods. Which one you choose depends on what you want to accomplish with the model and on your view of how spatial autocorrelation arises--whether it is implied by underlying trends or reflects variations you wish to consider random. 
Neither one is always right and, in any given problem, it's often possible to use both kinds of models to analyze the data, understand the phenomenon, and predict its values at other locations (interpolation).
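The two modelling strategies can be sketched in R (an editorial addition; the simulation and model choices are illustrative only). Model 1 puts the spatial structure in a coordinate trend with independent errors; model 2 keeps a constant mean and puts the structure in spatially correlated errors via nlme's corExp():
library(nlme)
set.seed(2)
n <- 100
d <- data.frame(z1 = runif(n, 0, 10), z2 = runif(n, 0, 10))
d$y <- 1 + 0.5*d$z1 - 0.3*d$z2 + rnorm(n)            # data generated from a pure trend
trend_fit <- lm(y ~ z1 + z2, data = d)               # model 1: drift in the coordinates
corr_fit  <- gls(y ~ 1, data = d, method = "ML",
                 correlation = corExp(form = ~ z1 + z2))  # model 2: correlated errors
AIC(trend_fit, corr_fit)   # here the trend model should win; simulate a correlated surface instead and the comparison can flip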
Why does including latitude and longitude in a GAM account for spatial autocorrelation?
"Spatial autocorrelation" means various things to various people. An overarching concept, though, is that a phenomenon observed at locations $\mathbf{z}$ may depend in some definite way on (a) covaria
Why does including latitude and longitude in a GAM account for spatial autocorrelation? "Spatial autocorrelation" means various things to various people. An overarching concept, though, is that a phenomenon observed at locations $\mathbf{z}$ may depend in some definite way on (a) covariates, (b) location, and (c) its values at nearby locations. (Where the technical definitions vary lie in the kind of data being considered, what "definite way" is postulated, and what "nearby" means: all of these have to be made quantitative in order to proceed.) To see what might be going on, let's consider a simple example of such a spatial model to describe the topography of a region. Let the measured elevation at a point $\mathbf{z}$ be $y(\mathbf{z})$. One possible model is that $y$ depends in some definite mathematical way on the coordinates of $\mathbf{z}$, which I will write $(z_1,z_2)$ in this two-dimensional situation. Letting $\varepsilon$ represent (hypothetically independent) deviations between the observations and the model (which as usual are assumed to have zero expectation), we may write $$y(\mathbf{z}) = \beta_0 + \beta_1 z_1 + \beta_2 z_2 + \varepsilon(\mathbf{z})$$ for a linear trend model. The linear trend (represented by the $\beta_1$ and $\beta_2$ coefficients) is one way to capture the idea that nearby values $y(\mathbf{z})$ and $y(\mathbf{z}')$, for $\mathbf{z}$ close to $\mathbf{z}'$, should tend to be close to one another. We can even calculate this by considering the expected value of the size of the difference between $y(\mathbf{z})$ and $y(\mathbf{z}')$, $E[|y(\mathbf{z}) - y(\mathbf{z}')|]$. It turns out the mathematics is much simpler if we use a slightly different measure of difference: instead, we compute the expected squared difference: $$\eqalign{ E[\left(y(\mathbf{z}) - y(\mathbf{z}')\right)^2] &= E[\left(\beta_0 + \beta_1 z_1 + \beta_2 z_2 + \varepsilon(\mathbf{z}) - \left(\beta_0 + \beta_1 z_1' + \beta_2 z_2' + \varepsilon(\mathbf{z}')\right)\right)^2] \\ &=E[\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)' + \varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)^2] \\ &=E[\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)'\right)^2 \\ &\quad+ 2\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)'\right)\left(\varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)\\ &\quad+ \left(\varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)^2] \\ &=\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)'\right)^2 + E[\left(\varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)^2] }$$ This model is free of any explicit spatial autocorrelation, because there is no term in it directly relating $y(\mathbf{z})$ to nearby values $y(\mathbf{z}')$. An alternative, different, model ignores the linear trend and supposes only that there is autocorrelation. One way to do that is through the structure of the deviations $\varepsilon(\mathbf{z})$. We might posit that $$y(\mathbf{z}) = \beta_0 + \varepsilon(\mathbf{z})$$ and, to account for our anticipation of correlation, we will assume some kind of "covariance structure" for the $\varepsilon$. For this to be spatially meaningful, we will assume the covariance between $\varepsilon(\mathbf{z})$ and $\varepsilon(\mathbf{z}')$, equal to $E[\varepsilon(\mathbf{z})\varepsilon(\mathbf{z}')]$ because the $\varepsilon$ have zero means, tends to decrease as $\mathbf{z}$ and $\mathbf{z}'$ become more and more distant. Because the details do not matter, let's just call this covariance $C(\mathbf{z}, \mathbf{z}')$. This is spatial autocorrelation. 
Indeed, the (usual Pearson) correlation between $y(\mathbf{z})$ and $y(\mathbf{z}')$ is $$\rho(y(\mathbf{z}), y(\mathbf{z}')) = \frac{C(\mathbf{z}, \mathbf{z}')}{\sqrt{C(\mathbf{z}, \mathbf{z})C(\mathbf{z}', \mathbf{z}')}}.$$ In this notation, the previous expected squared difference of $y$'s for the first model is $$\eqalign{ E[\left(y(\mathbf{z}) - y(\mathbf{z}')\right)^2] &= \left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)'\right)^2 + E[\left(\varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)^2] \\ &=\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)'\right)^2 + C_1(\mathbf{z}, \mathbf{z}) + C_1(\mathbf{z}', \mathbf{z}') }$$ (assuming $\mathbf{z} \ne \mathbf{z}'$) because the $\varepsilon$ at different locations have been assumed to be independent. I have written $C_1$ instead of $C$ to indicate this is the covariance function for the first model. When the covariances of the $\varepsilon$ do not vary dramatically from one location to another (indeed, they are usually assumed to be constant), this equation shows that the expected squared difference in $y$'s increases quadratically with the separation between $\mathbf{z}$ and $\mathbf{z}'$. The actual amount of increase is determined by the trend coefficients $\beta_0$ and $\beta_1$. Let's see what the expected squared differences in the $y$'s is for the new model, model 2: $$\eqalign{ E[\left(y(\mathbf{z}) - y(\mathbf{z}')\right)^2] &= E[\left(\beta_0 + \varepsilon(\mathbf{z}) - \left(\beta_0 + \varepsilon(\mathbf{z}')\right)\right)^2] \\ &=E[\left(\varepsilon(\mathbf{z}) - \varepsilon(\mathbf{z}')\right)^2] \\ &=E[\varepsilon(\mathbf{z})^2 - 2 \varepsilon(\mathbf{z})\varepsilon(\mathbf{z}') + \varepsilon(\mathbf{z}')^2] \\ &=C_2(\mathbf{z}, \mathbf{z}) - 2C_2(\mathbf{z}, \mathbf{z}') + C_2(\mathbf{z}', \mathbf{z}'). }$$ Again this behaves in the right way: because we figured $C_2(\mathbf{z}, \mathbf{z}')$ should decrease as $\mathbf{z}$ and $\mathbf{z}'$ become more separated, the expected squared difference in $y$'s indeed goes up with increasing separation of the locations. Comparing the two expressions for $E[\left(y(\mathbf{z}) - y(\mathbf{z}')\right)^2]$ in the two models shows us that $\left(\beta_1 (z_1-z_1') + \beta_2 (z_2-z_2)'\right)^2$ in the first model is playing a role mathematically identical to $-2C_2(\mathbf{z}, \mathbf{z}')$ in the second model. (There's an additive constant lurking there, buried in the different meanings of the $C_i(\mathbf{z}, \mathbf{z})$, but it doesn't matter in this analysis.) Ergo, depending on the model, spatial correlation is typically represented as some combination of a trend and a stipulated correlation structure on random errors. We now have, I hope, a clear answer to the question: one can represent the idea behind Tobler's Law of Geography ("everything is related to everything else, but nearer things are more related") in different ways. In some models, Tobler's Law is adequately represented by including trends (or "drift" terms) that are functions of spatial coordinates like longitude and latitude. In others, Tobler's Law is captured by means of a nontrivial covariance structure among additive random terms (the $\varepsilon$). In practice, models incorporate both methods. Which one you choose depends on what you want to accomplish with the model and on your view of how spatial autocorrelation arises--whether it is implied by underlying trends or reflects variations you wish to consider random. 
Neither one is always right and, in any given problem, it's often possible to use both kinds of models to analyze the data, understand the phenomenon, and predict its values at other locations (interpolation).
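To make the two representations concrete, here is a minimal R sketch (my own illustration, not part of the original answer). It simulates a small surface and fits it once with a coordinate trend (model 1) and once with an intercept plus spatially correlated errors (model 2); nlme::gls with an exponential covariance is just one common choice for the latter, and the simulated data and settings below are made up.

    # Model 1 (drift in the coordinates) vs. model 2 (correlated errors), sketch only
    set.seed(1)
    n  <- 200
    d  <- data.frame(z1 = runif(n), z2 = runif(n))
    d$y <- 2 + 1.5 * d$z1 - 0.8 * d$z2 + rnorm(n, sd = 0.3)   # a trend-plus-noise truth

    fit_trend <- lm(y ~ z1 + z2, data = d)                    # model 1: linear trend only

    library(nlme)                                             # model 2: intercept + spatial covariance
    fit_corr <- gls(y ~ 1, data = d, method = "ML",
                    correlation = corExp(form = ~ z1 + z2, nugget = TRUE))

    AIC(fit_trend, fit_corr)   # compare how each representation fits these particular data

For these simulated data the trend model should do well, because that is how the data were generated; with data generated from correlated errors the comparison would typically go the other way.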
2,838
Why does including latitude and longitude in a GAM account for spatial autocorrelation?
The other answers are good; I just wanted to add something about 'accounting for' spatial autocorrelation. Sometimes this claim is made more strongly, along the lines of "accounting for spatial autocorrelation not explained by the covariates". This can present a misleading picture of what the spatial smooth does. It is not as if there is some orderly queue in the likelihood where the smooth patiently waits for the covariates to go first and then the smooth mops up the 'unexplained' parts. In reality they all get a chance to explain the data. This paper with an aptly named title presents the issue really clearly; although it is written from the point of view of a CAR model, the principles apply to GAM smooths: Adding Spatially-Correlated Errors Can Mess Up the Fixed Effect You Love. The 'solution' in the paper is to smooth the residuals instead of smoothing on space. That would have the effect of allowing your covariates to explain what they can. Of course, there are many applications in which this would not be a desirable solution.
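A minimal mgcv sketch of the contrast being described (my own illustration, not from the original answer or the cited paper): the data frame d, the covariate x, and the coordinates lon and lat are hypothetical names, and smoothing the residuals of a covariate-only fit is only a crude stand-in for the paper's procedure.

    library(mgcv)
    # d: response y, covariate of interest x, coordinates lon and lat (made-up names)

    # Joint fit: the smooth and the covariate compete to explain the same variation
    m_joint <- gam(y ~ x + s(lon, lat), data = d, method = "REML")

    # "Smooth the residuals" instead: the covariate explains what it can first
    m_cov   <- gam(y ~ x, data = d, method = "REML")
    m_resid <- gam(resid(m_cov) ~ s(lon, lat), data = d, method = "REML")

    summary(m_joint)$p.table   # coefficient of x when space competes with it
    summary(m_cov)$p.table     # coefficient of x when it goes first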
2,839
Why does including latitude and longitude in a GAM account for spatial autocorrelation?
Spatial correlation is simply how the x and y coordinates relate to the magnitude of the resulting surface in that space, so the autocorrelation of the surface can be expressed in terms of a functional relationship between neighboring points.
2,840
Variance of product of multiple independent random variables
I will assume that the random variables $X_1, X_2, \cdots , X_n$ are independent, which condition the OP has not included in the problem statement. With this assumption, we have that $$\begin{align} \operatorname{var}(X_1\cdots X_n) &= E[(X_1\cdots X_n)^2]-\left(E[X_1\cdots X_n]\right)^2\\ &= E[X_1^2\cdots X_n^2]-\left(E[X_1]\cdots E[X_n]\right)^2\\ &= E[X_1^2]\cdots E[X_n^2] - (E[X_1])^2\cdots (E[X_n])^2\\ &= \prod_{i=1}^n \left(\operatorname{var}(X_i)+(E[X_i])^2\right) - \prod_{i=1}^n \left(E[X_i]\right)^2 \end{align}$$ If the first product term above is multiplied out, one of the terms in the expansion cancels out the second product term above. Thus, for the case $n=2$, we have the result stated by the OP. As @Macro points out, for $n=2$, we need not assume that $X_1$ and $X_2$ are independent: the weaker condition that $X_1$ and $X_2$ are uncorrelated and $X_1^2$ and $X_2^2$ are uncorrelated as well suffices. But for $n \geq 3$, lack of correlation is not enough. Independence suffices, but is not necessary. What is required is the factoring of the expectation of the products shown above into products of expectations, which independence guarantees.
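A quick Monte Carlo check of the final formula (my own sketch, not part of the original answer; the three distributions and their moments are chosen arbitrarily):

    set.seed(42)
    n  <- 1e6
    x1 <- rnorm(n, mean = 1, sd = 2)   # mean 1,   var 4
    x2 <- rexp(n, rate = 0.5)          # mean 2,   var 4
    x3 <- runif(n, 0, 3)               # mean 1.5, var 0.75

    empirical <- var(x1 * x2 * x3)
    theory    <- prod(c(4, 4, 0.75) + c(1, 2, 1.5)^2) - prod(c(1, 2, 1.5)^2)
    c(empirical = empirical, theory = theory)   # should agree up to simulation error (theory = 111)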
2,841
Covariance and independence?
Easy example: Let $X$ be a random variable that is $-1$ or $+1$ with probability 0.5. Then let $Y$ be a random variable such that $Y=0$ if $X=-1$, and $Y$ is randomly $-1$ or $+1$ with probability 0.5 if $X=1$. Clearly $X$ and $Y$ are highly dependent (since knowing $Y$ allows me to perfectly know $X$), but their covariance is zero: They both have zero mean, and $$\eqalign{ \mathbb{E}[XY] &=&(-1) &\cdot &0 &\cdot &P(X=-1) \\ &+& 1 &\cdot &1 &\cdot &P(X=1,Y=1) \\ &+& 1 &\cdot &(-1)&\cdot &P(X=1,Y=-1) \\ &=&0. }$$ Or more generally, take any distribution $P(X)$ and any $P(Y|X)$ such that $P(Y=a|X) = P(Y=-a|X)$ for all $X$ (i.e., a joint distribution that is symmetric around the $x$ axis), and you will always have zero covariance. But you will have non-independence whenever $P(Y|X) \neq P(Y)$; i.e., the conditionals are not all equal to the marginal. Or ditto for symmetry around the $y$ axis.
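A short simulation of this construction (my own sketch, not part of the original answer) confirms both halves of the point: the sample covariance is essentially zero, yet the conditional distribution of $Y$ clearly changes with $X$.

    set.seed(7)
    n <- 1e5
    x <- sample(c(-1, 1), n, replace = TRUE)
    y <- ifelse(x == -1, 0, sample(c(-1, 1), n, replace = TRUE))
    cov(x, y)                 # approximately 0
    table(y = y, x = x) / n   # the distribution of y given x = -1 differs from that given x = 1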
2,842
Covariance and independence?
Here is the example I always give to the students. Take a random variable $X$ with $E[X]=0$ and $E[X^3]=0$, e.g. normal random variable with zero mean. Take $Y=X^2$. It is clear that $X$ and $Y$ are related, but $$Cov(X,Y)=E[XY]-E[X]\cdot E[Y]=E[X^3]=0.$$
2,843
Covariance and independence?
The image below (source Wikipedia) has a number of examples on the third row, in particular the first and the fourth example have a strong dependent relationship, but 0 correlation (and 0 covariance).
2,844
Covariance and independence?
Some other examples: consider datapoints that form a circle or ellipse; the covariance is 0, but knowing x narrows y down to 2 values. Or data in a square or rectangle. Also, data that forms an X, a V, a ^, a <, or a > will all give covariance 0 but are not independent. If y = sin(x) (or cos(x)) and x covers an integer multiple of periods, then cov will equal 0, yet knowing x you know y exactly; in the ellipse, X, <, and > cases, knowing x at least pins y down to a couple of values.
2,845
Covariance and independence?
Inspired by mpiktas's answer. Consider $X$ to be a uniformly distributed random variable, i.e. $X \sim U(-1,1)$, with density $f(x) = 1/2$ on $(-1,1)$. Here, $$E[X] = (a+b)/2 = 0.$$ $$E[X^2] = \int_{-1}^{1} \tfrac{1}{2}\, x^2\, dx = 1/3$$ $$E[X^3] = \int_{-1}^{1} \tfrac{1}{2}\, x^3\, dx = 0$$ Since $Cov(X, Y) = E[XY] - E[X] \cdot E[Y]$, $$ Cov(X^2, X) = E[X^3] - E[X] \cdot E[X^2] \\ = 0 - 0 \cdot 1/3 = 0 $$ Clearly $X$ and $X^2$ are not independent. But their covariance is computed to be zero. Since a counter example has been found, the proposition is false in general.
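A numeric sanity check of these moments (my own sketch, not part of the original answer):

    set.seed(3)
    x <- runif(1e6, -1, 1)
    mean(x)       # ~ 0
    mean(x^2)     # ~ 1/3
    mean(x^3)     # ~ 0
    cov(x, x^2)   # ~ 0, even though x^2 is a deterministic function of x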
2,846
How should tiny $p$-values be reported? (and why does R put a minimum on 2.22e-16?)
There's a good reason for it. The value can be found via

    noquote(unlist(format(.Machine)))

              double.eps        double.neg.eps           double.xmin
            2.220446e-16          1.110223e-16         2.225074e-308
             double.xmax           double.base         double.digits
           1.797693e+308                     2                    53
         double.rounding          double.guard     double.ulp.digits
                       5                     0                   -52
   double.neg.ulp.digits       double.exponent        double.min.exp
                     -53                    11                 -1022
          double.max.exp           integer.max           sizeof.long
                    1024            2147483647                     4
         sizeof.longlong     sizeof.longdouble        sizeof.pointer
                       8                    12                     4

If you look at the help (?".Machine"):

    double.eps   the smallest positive floating-point number x such that 1 + x != 1. It equals double.base ^ ulp.digits if either double.base is 2 or double.rounding is 0; otherwise, it is (double.base ^ double.ulp.digits) / 2. Normally 2.220446e-16.

It's essentially a value below which you can be quite confident the value will be pretty numerically meaningless - in that any smaller value isn't likely to be an accurate calculation of the value we were attempting to compute. (Having studied a little numerical analysis, depending on what computations were performed by the specific procedure, there's a good chance numerical meaninglessness comes in a fair way above that.) But statistical meaning will have been lost far earlier. Note that p-values depend on assumptions, and the further out into the extreme tail you go the more heavily the true p-value (rather than the nominal value we calculate) will be affected by the mistaken assumptions, in some cases even when they're only a little bit wrong. Since the assumptions are simply not going to be all exactly satisfied, middling p-values may be reasonably accurate (in terms of relative accuracy, perhaps only out by a modest fraction), but extremely tiny p-values may be out by many orders of magnitude.

Which is to say that usual practice (something like the "<0.0001" that you say is common in packages, or the APA rule that Jaap mentions in his answer) is probably not so far from sensible practice, but the approximate point at which things lose meaning beyond saying 'it's very very small' will of course vary quite a lot depending on circumstances. This is one reason why I can't suggest a general rule - there can't be a single rule that's even remotely suitable for everyone in all circumstances - change the circumstances a little and the broad grey line marking the change from somewhat meaningful to relatively meaningless will change, sometimes by a long way. If you were to specify sufficient information about the exact circumstances (e.g. it's a regression, with this much nonlinearity, that amount of variation in this independent variable, this kind and amount of dependence in the error term, that kind and amount of heteroskedasticity, this shape of error distribution), I could simulate 'true' p-values for you to compare with the nominal p-values, so you could see when they were too different for the nominal value to carry any meaning. But that leads us to the second reason why - even if you specified enough information to simulate the true p-values - I still couldn't responsibly state a cut-off for even those circumstances. What you report depends on people's preferences - yours, and your audience. Imagine you told me enough about the circumstances for me to decide that I wanted to draw the line at a nominal $p$ of $10^{-6}$.
All well and good, we might think - except your own preference function (what looks right to you, were you to look at the difference between nominal p-values given by stats packages and the ones resulting from simulation when you suppose a particular set of failures of assumptions) might put it at $10^{-5}$ and the editors of the journal you want to submit to might have a blanket rule to cut off at $10^{-4}$, while the next journal might put it at $10^{-3}$ and the next may have no general rule and the specific editor you got might accept even lower values than I gave ... but one of the referees may then have a specific cut off! In the absence of knowledge of their preference functions and rules, and the absence of knowledge of your own utilities, how do I responsibly suggest any general choice of what actions to take? I can at least tell you the sorts of things that I do (and I don't suggest this is a good choice for you at all): There are few circumstances (outside of simulating p-values) in which I would make much of a p less than $10^{-6}$ (I may or may not mention the value reported by the package, but I wouldn't make anything of it other than it was very small, and I would usually emphasize the meaninglessness of the exact number). Sometimes I take a value somewhere in the region of $10^{-5}$ to $10^{-4}$ and say that p was much less than that. On occasion I do actually do as suggested above - perform some simulations to see how sensitive the p-value is in the far tail to various violations of the assumptions, particularly if there's a specific kind of violation I am worried about. That's certainly helpful in informing a choice - but I am as likely to discuss the results of the simulation as to use them to choose a cut-off value, giving others a chance to choose their own. An alternative to simulation is to look at some procedures that are more robust* to the various potential failures of assumption and see how much difference to the p-value that might make. Their p-values will also not be particularly meaningful, but they do at least give some sense of how much impact there might be. If some are very different from the nominal one, it also gives more of an idea which violations of assumptions to investigate the impact of. Even if you don't report any of those alternatives, it gives a better picture of how meaningful your small p-value is. * Note that here we don't really need procedures that are robust to gross violations of some assumption; ones that are less affected by relatively mild deviations of the relevant assumption should be fine for this exercise. I will say that when/if you do come to do such simulations, even with quite mild violations, in some cases it can be surprising how far off even not-that-small p-values can be. That has done more to change the way I personally interpret a p-value than it has shifted the specific cut-offs I might use. When submitting the results of an actual hypothesis test to a journal, I try to find out if they have any rule. If they don't, I tend to please myself, and then wait for the referees to complain.
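As a concrete illustration of the purely numerical side of this (my own sketch, not part of the original answer): once a nominal p-value drops below roughly 1e-308 it underflows to zero in double precision, but R's distribution functions can return it on the log scale instead.

    2 * pnorm(-40)                   # underflows: prints 0
    .Machine$double.xmin             # smallest normalized double, about 2.2e-308
    lp <- pnorm(-40, log.p = TRUE)   # natural log of the lower tail, about -804.6
    lp / log(10)                     # log10 of the p-value, about -349, i.e. "p ~ 10^-349"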
2,847
How should tiny $p$-values be reported? (and why does R put a minimum on 2.22e-16?)
What common practice is might depend on your field of research. The manual of the American Psychological Association (APA), which is one of the most often used citation styles, states (p. 139, 6th edition): Do not use any value smaller than p < 0.001
2,848
How should tiny $p$-values be reported? (and why does R put a minimum on 2.22e-16?)
Such extreme p-values occur more often in fields with very large amounts of data, such as genomics and process monitoring. In those cases, it's sometimes reported as -log10(p-value). See for example, this figure from Nature, where the p-values go down to 1e-26. -log10(p-value) is called "LogWorth" by statisticians I work with at JMP.
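For reporting on that scale, -log10(p) can be computed directly from the log of the tail area, so the tiny p-value itself never has to be represented (my own sketch, not from the original answer; the chi-squared statistic below is made up):

    stat <- 120                                      # hypothetical chi-squared statistic, 1 df
    logp <- pchisq(stat, df = 1, lower.tail = FALSE, log.p = TRUE)
    logworth <- -logp / log(10)                      # -log10(p), i.e. the "LogWorth"
    logworth                                         # about 27.2, meaning p is about 10^-27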
2,849
How should tiny $p$-values be reported? (and why does R put a minimum on 2.22e-16?)
I'm surprised no one mentioned this term explicitly, but @Glen_b alluded to it. The formal terminology for this issue is "machine epsilon." https://en.wikipedia.org/wiki/Machine_epsilon For 64-bit double precision, machine epsilon is $2.22 \times 10^{-16}$ (or $1.11 \times 10^{-16}$, depending on whether the software defines it as the gap between 1 and the next larger representable number, or as half that gap). Note that this is the relative spacing of floating-point numbers near 1, not the smallest representable double, which is far smaller (around $10^{-308}$).
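A short R demonstration of the distinction (my own sketch, not part of the original answer):

    .Machine$double.eps   # 2.220446e-16: the gap between 1 and the next larger double
    1 + 1e-16 == 1        # TRUE: an increment smaller than eps/2 is rounded away
    1 + 3e-16 == 1        # FALSE: an increment larger than eps/2 survives
    2^-1074               # about 4.9e-324: the smallest positive (denormal) double, a very different quantity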
2,850
Why is multicollinearity not checked in modern statistics/machine learning
Considering multicollinearity is important in regression analysis because, in extrema, it directly bears on whether or not your coefficients are uniquely identified in the data. In less severe cases, it can still mess with your coefficient estimates; small changes in the data used for estimation may cause wild swings in estimated coefficients. These can be problematic from an inferential standpoint: If two variables are highly correlated, increases in one may be offset by decreases in another so the combined effect is to negate each other. With more than two variables, the effect can be even more subtle, but if the predictions are stable, that is often enough for machine learning applications. Consider why we regularize in a regression context: We need to constrain the model from being too flexible. Applying the correct amount of regularization will slightly increase the bias for a larger reduction in variance. The classic example of this is adding polynomial terms and interaction effects to a regression: In the degenerate case, the prediction equation will interpolate data points, but probably be terrible when attempting to predict the values of unseen data points. Shrinking the coefficients will likely minimize or entirely eliminate some of them and improve generalization. A random forest, however, could be seen to have a regularization parameter through the number of variables sampled at each split: you get better splits the larger the mtry (more features to choose from; some of them are better than others), but that also makes each tree more highly correlated with each other tree, somewhat mitigating the diversifying effect of estimating multiple trees in the first place. This dilemma compels one to find the right balance, usually achieved using cross-validation. Importantly, and in contrast to a regression analysis, the predictions of the random forest model are not harmed by highly collinear variables: even if two of the variables provide the same child node purity, you can just pick one. Likewise, for something like an SVM, you can include more predictors than observations because the kernel trick lets you operate solely on the inner product of those feature vectors. Having more features than observations would be a problem in regressions, but the kernel trick means we only estimate a coefficient for each exemplar, while the regularization parameter $C$ reduces the flexibility of the solution -- which is decidedly a good thing, since estimating $N$ parameters for $N$ observations in an unrestricted way will always produce a perfect fit to the training data -- and we come full circle, back to the ridge/LASSO/elastic net regression scenario where we have the model flexibility constrained as a check against an overly optimistic model. A review of the KKT conditions of the SVM problem reveals that the SVM solution is unique, so we don't have to worry about the identification problems which arose in the regression case. Finally, consider the actual impact of multicollinearity. It doesn't change the predictive power of the model (at least, on the training data) but it does screw with our coefficient estimates. In most ML applications, we don't care about coefficients themselves, just the loss of our model predictions, so in that sense, checking VIF doesn't actually answer a consequential question.
(But if a slight change in the data causes a huge fluctuation in coefficients [a classic symptom of multicollinearity], it may also change predictions, in which case we do care -- but all of this [we hope!] is characterized when we perform cross-validation, which is a part of the modeling process anyway.) A regression is more easily interpreted, but interpretation might not be the most important goal for some tasks.
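A small simulation of exactly this point (my own sketch, not part of the original answer; the data-generating process is made up): with two nearly collinear predictors, coefficient estimates swing wildly across resamples while predictions barely move.

    set.seed(1)
    n  <- 200
    x1 <- rnorm(n)
    x2 <- x1 + rnorm(n, sd = 0.01)          # nearly collinear with x1
    y  <- 1 + x1 + x2 + rnorm(n)

    one_fit <- function() {
      i <- sample(n, n, replace = TRUE)     # bootstrap resample of the rows
      f <- lm(y ~ x1 + x2, data = data.frame(y, x1, x2)[i, ])
      c(coef(f)[2:3], pred = predict(f, newdata = data.frame(x1 = 1, x2 = 1)))
    }
    res <- t(replicate(200, one_fit()))
    apply(res, 2, sd)   # large sd for each coefficient, small sd for the prediction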
2,851
Why is multicollinearity not checked in modern statistics/machine learning
The reason is that the goals of "traditional statistics" are different from those of many Machine Learning techniques. By "traditional statistics", I assume you mean regression and its variants. In regression, we are trying to understand the impact the independent variables have on the dependent variable. If there is strong multicollinearity, this is simply not possible. No algorithm is going to fix this. If studiousness is correlated with class attendance and grades, we cannot know what is truly causing the grades to go up - attendance or studiousness. However, in Machine Learning techniques that focus on predictive accuracy, all we care about is how we can use a set of variables to predict another set. We don't care about the impact these variables have on each other. Basically, the fact that we don't check for multicollinearity in Machine Learning techniques isn't a consequence of the algorithm, it's a consequence of the goal. You can see this by noticing that strong collinearity between variables doesn't hurt the predictive accuracy of regression methods.
2,852
Why is multicollinearity not checked in modern statistics/machine learning
There appears to be an underlying assumption here that not checking for collinearity is a reasonable or even best practice. This seems flawed. For example, checking for perfect collinearity in a dataset with many predictors will reveal whether two variables are actually the same thing, e.g. birth date and age (example taken from Dormann et al. (2013), Ecography, 36, 1, pp 27–46). I have also sometimes seen the issue of perfectly correlated predictors arise in Kaggle competitions, where competitors on the forum attempt to eliminate potential predictors which have been anonymised (i.e. the predictor label is hidden, a common problem in Kaggle and Kaggle-like competitions). Selecting predictors is also still an active part of machine learning: identifying highly correlated predictors may allow the analyst to find predictors which are proxies for another underlying (hidden) variable and ultimately find one variable which does the best job of representing the latent variable, or alternatively suggest variables which may be combined (e.g. via PCA). Hence, I would suggest that although machine learning methods have usually (or at least often) been designed to be robust in the face of correlated predictors, understanding the degree to which predictors are correlated is often a useful step in producing a robust and accurate model, and is a useful aid for obtaining an optimised model.
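A quick base-R sketch of such a check (my own illustration, not part of the original answer; the toy data frame and its values are made up): flag predictor pairs whose absolute correlation exceeds some threshold, here 0.95.

    X  <- data.frame(age        = c(31, 45, 52, 28, 60),
                     birth_year = c(1992, 1978, 1971, 1995, 1963),   # exactly 2023 - age
                     income     = c(40, 90, 55, 80, 35))
    cc   <- cor(X)
    high <- which(abs(cc) > 0.95 & upper.tri(cc), arr.ind = TRUE)
    data.frame(var1 = rownames(cc)[high[, 1]],
               var2 = colnames(cc)[high[, 2]],
               r    = cc[high])   # flags the age / birth_year pair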
2,853
Why is multicollinearity not checked in modern statistics/machine learning
The main issue with multicollinearity is that it messes up the coefficients (betas) of independent variables. That's why it's a serious issue when you're studying the relationships between variables, establishing causality etc. However, if you're not interested in understanding the phenomenon so much, but are solely focused on prediction and forecasting, then multicollinearity is less of an issue. Or at least that's what people think about it. I'm not talking about perfect multicollinearity here, which is a technical issue or identification issue. Technically, it simply means that the design matrix leads to singularity, and the solution is not defined.
2,854
Why is multicollinearity not checked in modern statistics/machine learning
The regularization in those machine learning methods stabilizes the regression coefficients, so at least that effect of multicollinearity is tamed. But more importantly, if you're going for prediction (which machine learners often are), then the multicollinearity "problem" wasn't that big of a problem in the first place. It's a problem when you need to estimate a particular coefficient and you don't have the information. Also, my answer to "When does LASSO select correlated predictors" might be helpful to you.
2,855
Why is multicollinearity not checked in modern statistics/machine learning
I think that multicollinearity should be checked in machine learning. Here is why: Suppose that you have two highly correlated features X and Y in your dataset. This means that the fitted response plane is not reliable (a small change in the data can have drastic effects on its orientation), which implies that the model's predictions for data points far away from the line where X and Y tend to fall are not reliable. If you use your model to predict such points, the predictions will probably be very bad. To put it another way, when you have two highly correlated features, the model is learning a plane even though the data actually mostly fall along a line. So it is important to remove highly correlated features from your data to prevent unreliable models and erroneous predictions.
2,856
Why would parametric statistics ever be preferred over nonparametric?
Rarely if ever do a parametric test and a non-parametric test actually have the same null. The parametric $t$-test is testing the mean of the distribution, assuming the first two moments exist. The Wilcoxon rank sum test does not assume any moments, and tests equality of distributions instead. Its implied parameter is a weird functional of distributions, the probability that the observation from one sample is lower than the observation from the other. You can sort of talk about comparisons between the two tests under the completely specified null of identical distributions... but you have to recognize that the two tests are testing different hypotheses. The information that parametric tests bring in along with their assumptions helps improve the power of the tests. Of course that information had better be right, but there are few if any domains of human knowledge these days where such preliminary information does not exist. An interesting exception that explicitly says "I don't want to assume anything" is the courtroom, where non-parametric methods continue to be widely popular -- and it makes perfect sense for the application. There's probably a good reason, pun intended, that Phillip Good authored good books on both non-parametric statistics and courtroom statistics. There are also testing situations where you don't have access to the microdata necessary for the nonparametric test. Suppose you were asked to compare two groups of people to gauge whether one is more obese than the other. In an ideal world, you will have height and weight measurements for everybody, and you could form a permutation test stratifying by height. In a less than ideal (i.e., real) world, you may only have the mean height and mean weight in each group (or maybe some ranges or variances of these characteristics on top of the sample means). Your best bet is then to compute the mean BMI for each group and compare them if you only have the means; or assume a bivariate normal for height and weight if you have means and variances (you'd probably have to take a correlation from some external data if it did not come with your samples), form some sort of regression lines of weight on height within each group, and check whether one line is above the other.
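A small simulated illustration of the "different nulls" point (my own sketch, not part of the original answer; the two distributions are chosen arbitrarily): both groups have the same mean, but one is skewed, so P(observation from one sample is below an observation from the other) is not 1/2.

    set.seed(10)
    a <- rnorm(60, mean = 0, sd = 1)
    b <- exp(rnorm(60)) - exp(0.5)      # lognormal, shifted so its mean is also 0
    t.test(a, b)       # Welch test of equal means: its null is (approximately) true here
    wilcox.test(a, b)  # tests a different functional, so its p-value can look quite different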
2,857
Why would parametric statistics ever be preferred over nonparametric?
As others have written: if the preconditions are met, your parametric test will be more powerful than the nonparametric one. In your watch analogy, the non-water-resistant one would be far more accurate unless it got wet. For instance, your water-resistant watch might be off by one hour either way, whereas the non-water-resistant one would be accurate... and you need to catch a bus after your rafting trip. In such a case it might make sense to take the non-water-resistant watch along with you and make sure it doesn't get wet. Bonus point: nonparametric methods are not always easy. Yes, a permutation test alternative to a t test is simple. But a nonparametric alternative to a mixed linear model with multiple two-way interactions and nested random effects is quite a bit harder to set up than a simple call to nlme(). I have done so, using permutation tests, and in my experience, the p values of parametric and permutation tests have always been pretty close together, even if residuals from the parametric model were quite non-normal. Parametric tests are often surprisingly resilient against departures from their preconditions.
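To illustrate the "simple" end of that spectrum, here is a rough sketch (assuming a reasonably recent SciPy that provides permutation_test; the data and effect size are made up) comparing a two-sample t test with a permutation test of the difference in means; in tame situations like this the two p values tend to land close together, echoing the experience described above.

import numpy as np
from scipy.stats import ttest_ind, permutation_test

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=30)          # clearly non-normal data
y = rng.exponential(scale=1.0, size=30) + 0.5

def mean_diff(a, b, axis):
    return np.mean(a, axis=axis) - np.mean(b, axis=axis)

perm = permutation_test((x, y), mean_diff, n_resamples=9999, alternative='two-sided')
print(ttest_ind(x, y).pvalue, perm.pvalue)       # usually in the same ballpark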
2,858
Why would parametric statistics ever be preferred over nonparametric?
While I agree that in many cases non-parametric techniques are favourable, there are also situations in which parametric methods are more useful. Let's focus on the "two-sample t-test versus Wilcoxon's rank sum test" discussion (otherwise we would have to write a whole book).

With tiny group sizes of 2-3, only the t-test can theoretically achieve p values under 5%; the sketch below illustrates this. In biology and chemistry, group sizes like this are not uncommon. Of course it is delicate to use a t-test in such a setting, but maybe it is better than nothing. (This point is linked to the fact that, under ideal circumstances, the t-test has more power than the Wilcoxon test.)

With huge group sizes, the t-test, too, can be viewed as essentially non-parametric thanks to the Central Limit Theorem. The results of the t-test are in line with the Student confidence interval for the mean difference.

If variances vary heavily across groups, then Welch's version of the t-test tries to take this into account, while Wilcoxon's rank sum test can fail badly if means are to be compared (e.g., a type I error probability far from the nominal level).
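A quick numeric check of the small-sample point (a sketch with made-up numbers, assuming a recent SciPy): with three observations per group, even the most extreme possible ranking gives an exact two-sided Wilcoxon/Mann-Whitney p value of 2/20 = 0.10, whereas the t-test can go far below 0.05.

from scipy.stats import mannwhitneyu, ttest_ind

g1 = [1.0, 2.0, 3.0]       # complete separation, n = 3 per group
g2 = [10.0, 11.0, 12.0]

print(mannwhitneyu(g1, g2, alternative='two-sided', method='exact').pvalue)  # 0.1 is the floor
print(ttest_ind(g1, g2).pvalue)                                              # roughly 4e-4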
2,859
Why would parametric statistics ever be preferred over nonparametric?
In hypothesis testing nonparametric tests are often testing different hypotheses, which is one reason why one can't always just substitute a nonparametric test for a parametric one. More generally, parametric procedures provide a way of imposing structure on otherwise unstructured problems. This is very useful and can be viewed as a kind of simplifying heuristic rather than a belief that the model is literally true. Take for instance the problem of predicting a continuous response $y$ based on a vector of predictors $x$ using some regression function $f$ (even assuming that such a function exists is a kind of parametric restriction). If we assume absolutely nothing about $f$ then it's not at all clear how we might proceed in estimating this function. The set of possible answers that we need to search is just too large. But if we restrict the space of possible answers to (for instance) the set of linear functions $f(x) = \sum_{j=1}^{p} \beta_j x_j$, then we can actually start making progress. We don't need to believe that the model holds exactly, we are just making an approximation due to the need to arrive at some answer, however imperfect.
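As a tiny sketch of what "restricting the space of possible answers" buys you (the data-generating function, noise level, and seed here are invented for illustration): once we commit to the linear family, estimating $f$ reduces to a least-squares problem, even when the true $f$ is not linear.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)   # the true f is not linear

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # search only over f(x) = sum_j beta_j x_j
print(beta)                                    # an imperfect but usable approximation to f
print(np.mean((X @ beta - y) ** 2))            # and a measurable notion of how good it is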
2,860
Why would parametric statistics ever be preferred over nonparametric?
Semiparametric models have many advantages. They offer tests such as the Wilcoxon test as a special case, but allow estimation of effect ratios, quantiles, means, and exceedance probabilities. They extend to longitudinal and censored data. They are robust in the $Y$-space and are transformation invariant except for estimating means. See the course handouts linked at http://biostat.mc.vanderbilt.edu/rms for a detailed example/case study.

In contrast to fully parametric methods ($t$-test, ordinary multiple regression, mixed effect models, parametric survival models, etc.), semiparametric methods for ordinal or continuous $Y$ assume nothing about the distribution of $Y$ for a given $X$, not even that the distribution is unimodal or smooth. The distribution may even have severe spikes inside it or at the boundaries. Semiparametric models assume only a connection (e.g., exponentiation in the case of a Cox model) between the distributions for two different covariate settings $X_{1}$ and $X_{2}$. Examples include the proportional odds model (special case: Wilcoxon and Kruskal-Wallis) and the proportional hazards model (special case: log-rank and stratified log-rank test).

In effect, semiparametric models have lots of intercepts. These intercepts encode the distribution of $Y$ nonparametrically. This doesn't, however, create any problem with overparameterization.
2,861
Why would parametric statistics ever be preferred over nonparametric?
Among the host of answers supplied, I would also call attention to Bayesian statistics. Some problems cannot be answered by likelihoods alone. A frequentist uses counterfactual reasoning in which the "probability" refers to alternate universes, and an alternate-universe framework makes no sense for inferring the state of an individual, such as the guilt or innocence of a defendant, or whether bottlenecking of gene frequency in a species exposed to a massive environmental shift led to its extinction. In the Bayesian context, probability is "belief", not frequency, and it can be applied to things that have already happened.

Now, the majority of Bayesian methods require fully specifying probability models for the prior and the outcome, and most of these probability models are parametric. Consistent with what others are saying, these need not be exactly correct to produce meaningful summaries of the data: "All models are wrong, some models are useful."

There are, of course, nonparametric Bayesian methods. These have a lot of statistical wrinkles and, generally speaking, require nearly comprehensive population data to be used meaningfully.
2,862
Why would parametric statistics ever be preferred over nonparametric?
The only reason I am answering, despite all the fine answers above, is that no one has called attention to the #1 reason we use parametric tests (at least in particle physics data analysis): because we know the parametrization of the data. Duh! That's such a big advantage. You're boiling down your hundreds, thousands, or millions of data points into the few parameters that you care about and that describe your distribution. These tell you the underlying physics (or whatever science gives you your data). Of course, if you don't have any idea of the underlying probability density, then you have no choice: use non-parametric tests. Non-parametric tests do have the virtue of lacking any preconceived biases, but they can be harder to implement - sometimes much harder.
2,863
Why would parametric statistics ever be preferred over nonparametric?
Nonparametric statistics has its own problems! One of them is the emphasis on hypothesis testing; often we need estimation and confidence intervals, and getting them in complicated models with nonparametric methods is --- complicated. There is a very good blog post about this, with discussion, at http://andrewgelman.com/2015/07/13/dont-do-the-wilcoxon/ The discussion leads to this other post, http://notstatschat.tumblr.com/post/63237480043/rock-paper-scissors-wilcoxon-test, which is recommended for a very different viewpoint on Wilcoxon. The short version is: the Wilcoxon (and other rank tests) can lead to nontransitivity: sample A can tend to beat B, B tend to beat C, and yet C tend to beat A.
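A small self-contained illustration of that rock-paper-scissors behaviour (the three value sets are classic intransitive "Efron-style" dice, chosen here for illustration; they are not taken from either linked post): each distribution beats the next with probability 5/9, so a criterion based on P(X > Y) cannot rank them consistently.

from itertools import product

A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def p_greater(x, y):
    # P(X > Y) for independent uniform draws from the two lists
    return sum(a > b for a, b in product(x, y)) / (len(x) * len(y))

print(p_greater(A, B), p_greater(B, C), p_greater(C, A))   # each is 5/9, i.e. > 0.5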
2,864
Why would parametric statistics ever be preferred over nonparametric?
I would say that non-parametric statistics are more generally applicable in the sense that they make fewer assumptions than parametric statistics. Nevertheless, if one uses a parametric statistic and the underlying assumptions are fulfilled, then the parametric statistic will be more powerful than the non-parametric one.
2,865
Why would parametric statistics ever be preferred over nonparametric?
Parametric statistics are often a way to incorporate knowledge that is external to the data. For instance, you know that the error distribution is normal, and this knowledge came from prior experience or some other consideration, not from the data set at hand. In this case, by assuming a normal distribution you are incorporating this external knowledge into your parameter estimates, which improves your estimates (provided the knowledge is right).

On your watch analogy: these days almost all watches are water resistant, except for specialty pieces with jewelry or unusual materials like wood. The reason to wear them is precisely that: they're special. If you meant waterproof, then many dress watches are not waterproof. The reason to wear them is again their function: you wouldn't wear a diver's watch with a suit and tie. Also, these days many watches have an open back so you can enjoy looking at the movement through the crystal. Naturally, these watches are usually not waterproof.
2,866
Why would parametric statistics ever be preferred over nonparametric?
This is not a hypothesis-testing scenario, but it may be a good example for answering your question: let's consider cluster analysis. There are many "non-parametric" clustering methods like hierarchical clustering, k-means, etc., but the problem is always how to assess whether your clustering solution is "better" than another possible solution (and often there are multiple possible solutions). Each algorithm gives you the best it can get, but how do you know whether there isn't anything better..?

Now, there are also parametric approaches to clustering, so-called model-based clustering, such as Finite Mixture Models. With FMMs you build a statistical model describing the distribution of your data and fit it to the data. When you have your model, you can assess how likely your data are given this model, you can use likelihood ratio tests, compare AICs, and use multiple other methods for checking model fit and model comparison.

Non-parametric clustering algorithms just group data using some similarity criterion, while FMMs enable you to describe and try to understand your data, check how well the model fits, make predictions... In practice non-parametric approaches are simple, work out of the box, and are pretty good, while FMMs can be problematic; still, model-based approaches often provide you with richer output.
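A minimal sketch of the model-comparison idea with scikit-learn (the simulated blobs, the candidate range of components, and the seed are all assumptions made for illustration): fit a Gaussian mixture for each candidate number of components and compare information criteria.

from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=600, centers=3, cluster_std=1.0, random_state=0)

for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    print(k, round(gmm.aic(X), 1), round(gmm.bic(X), 1))   # lower AIC/BIC is better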
2,867
Why would parametric statistics ever be preferred over nonparametric?
Prediction and forecasting for new data are often very difficult or impossible with non-parametric models. For example, I can forecast the number of warranty claims for the next 10 years using a Weibull or lognormal survival model, but this is not possible using the Cox model or Kaplan-Meier. Edit: Let me be a little clearer. If a company has a defective product, then it is often interested in projecting the future warranty claim rate and CDF based on current warranty claims and sales data. This can help it decide whether or not a recall is needed. I don't know how you would do this using a non-parametric model.
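A rough sketch of the parametric side of that workflow (all numbers are hypothetical, and for simplicity this ignores the right-censoring of units that have not failed yet, which a real warranty analysis must handle): fit a Weibull to observed failure ages and project the claim fraction out to a future horizon.

import numpy as np
from scipy.stats import weibull_min

fail_months = np.array([3., 5., 6., 8., 9., 11., 14., 15., 18., 22.])   # hypothetical failure ages

shape, loc, scale = weibull_min.fit(fail_months, floc=0)   # two-parameter Weibull, location fixed at 0

n_sold = 10_000                                            # hypothetical number of units in the field
p_fail_by_60 = weibull_min.cdf(60, shape, loc, scale)      # probability of a unit failing within 5 years
print("expected claims within 5 years:", round(n_sold * p_fail_by_60))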
2,868
Why would parametric statistics ever be preferred over nonparametric?
I honestly believe that there is no right answer to this question. Judging from the given answers, the consensus is that parametric tests are more powerful than their nonparametric equivalents. I won't contest this view, but I see it more as a hypothetical than a factual viewpoint, since it is not something explicitly taught in schools and no peer reviewer will ever tell you "your paper was rejected because you used non-parametric tests". This question is about something that the world of statistics is unable to answer clearly but has taken for granted.

My personal opinion is that the preference for either parametric or nonparametric has more to do with tradition than anything else (for lack of a better term). Parametric techniques for testing and prediction were there first and have a long history, so it's not easy to ignore them completely. Prediction, in particular, has some impressive nonparametric solutions which are widely used as a first-choice tool nowadays. I think this is one of the reasons that machine learning techniques such as neural networks and decision trees, which are nonparametric by nature, have gained widespread popularity over recent years.
2,869
Why would parametric statistics ever be preferred over nonparametric?
Lots of good answers already, but there are some reasons I haven't seen mentioned:

Familiarity. Depending on your audience, the parametric result may be much more familiar than a roughly equivalent non-parametric one. If the two give similar conclusions, then familiarity is good.

Simplicity. Sometimes the parametric test is simpler to perform and to report.

Some nonparametric methods are very computer intensive. Of course, computers have gotten a lot faster and algorithms have improved as well, but... data has gotten "bigger".

Sometimes what is usually a disadvantage of the parametric test is actually an advantage, although this is specific to particular pairs of tests. For instance, I am, generally, a fan of quantile regression, as it makes fewer assumptions than the usual methods. But sometimes you really need to estimate the mean rather than the median.
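A small sketch of that last contrast (simulated data with skewed, heteroscedastic errors; the formulas, sample size, and noise model are illustrative assumptions): the same formula interface fits either the conditional mean (OLS) or the conditional median (quantile regression), and the two estimands genuinely differ.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({'x': rng.uniform(0, 10, 300)})
df['y'] = 2 + 0.5 * df['x'] + rng.exponential(scale=1 + 0.3 * df['x'])   # skewed, heteroscedastic noise

ols    = smf.ols('y ~ x', df).fit()              # targets the conditional mean
median = smf.quantreg('y ~ x', df).fit(q=0.5)    # targets the conditional median
print(ols.params)
print(median.params)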
2,870
Why would parametric statistics ever be preferred over nonparametric?
It is an issue of statistical power. Non-parametric tests generally have lower statistical power than their parametric counterparts.
2,871
Impractical question: is it possible to find the regression line using a ruler and compass?
Loosely speaking, it's possible to construct any quantity which can be expressed "using only the integers 0 and 1 and the operations for addition, subtraction, multiplication, division, and square roots" with only a compass and ruler -- the Wikipedia article on constructible numbers has more details. Since the slope of the OLS line definitely has such a closed form, we can deduce that it's possible to construct the line.

As someone who isn't an expert in compass and ruler constructions, I found this a bit unbelievable, so I gave it a try myself: the green line is the OLS fit for the three blue points, not fitting an intercept for simplicity. You can play around with it here for yourself and drag the blue points around a bit.

Here's roughly how the construction went: it turns out you can multiply two numbers by constructing similar triangles. So for each of the three (x, y) points, I constructed x^2 on the x-axis and xy on the y-axis (shown in red). Then I simply added up all the x^2's and xy's to get the green point in the top right, which defines the OLS line.
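A tiny numeric sanity check of the closed form being used (the three points are arbitrary examples): the no-intercept OLS slope is just sum(x*y) / sum(x^2), built from additions, multiplications, and one division of coordinates, all of which are constructible operations.

import numpy as np

x = np.array([1.0, 2.0, 4.0])
y = np.array([2.0, 1.5, 5.0])

slope = np.sum(x * y) / np.sum(x ** 2)                          # the "ratio of sums" the construction builds
slope_lstsq, *_ = np.linalg.lstsq(x.reshape(-1, 1), y, rcond=None)
print(slope, slope_lstsq[0])                                    # the two agree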
2,872
Where to cut a dendrogram?
There is no definitive answer, since cluster analysis is essentially an exploratory approach; the interpretation of the resulting hierarchical structure is context-dependent, and often several solutions are equally good from a theoretical point of view. Several clues were given in a related question, What stop-criteria for agglomerative hierarchical clustering are used in practice?

I generally use visual criteria, e.g. silhouette plots, and some kind of numerical criterion, like Dunn's validity index, Hubert's gamma, the G2/G3 coefficient, or the corrected Rand index. Basically, we want to know how well the original distance matrix is approximated in the cluster space, so a measure of the cophenetic correlation is also useful. I also use k-means, with several starting values, and the gap statistic to determine the number of clusters that minimizes the within-SS. The concordance with Ward hierarchical clustering gives an idea of the stability of the cluster solution (you can use matchClasses() in the e1071 package for that).

You will find useful resources in the CRAN Task View Cluster, including pvclust, fpc, and clv, among others. Also worth a try is the clValid package (described in the Journal of Statistical Software).

Now, if your clusters change over time, this is a bit trickier; why choose the first cluster solution rather than another? Do you expect that some individuals move from one cluster to another as a result of an underlying process evolving over time? There are some measures that try to match clusters that have a maximum absolute or relative overlap, as was suggested to you in your preceding question. Look at Comparing Clusterings - An Overview by Wagner and Wagner.
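A minimal sketch of the "numerical criterion" idea in Python rather than R (simulated blobs, Ward linkage, and the candidate range of k are all assumptions for illustration): cut the same dendrogram into k clusters for several values of k and compare average silhouette widths.

from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
Z = linkage(X, method='ward')

for k in range(2, 8):
    labels = fcluster(Z, t=k, criterion='maxclust')     # cut the tree into k clusters
    print(k, round(silhouette_score(X, labels), 3))     # higher average silhouette is better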
2,873
Where to cut a dendrogram?
There isn't really an answer. It's somewhere between 1 and N. However, you can think about it from a profit perspective.

For example, in marketing one uses segmentation, which is much like clustering. A message (an advertisement or letter, say) that is tailored for each individual will have the highest response rate. A generic message tailored to the average will have the lowest response rate. Having say three messages tailored to three segments will be somewhere in between. This is the revenue side.

A message that is tailored to each individual will have the highest cost. A generic message tailored to the average will have the lowest cost. Three messages tailored to three segments will be somewhere in between. Say paying a writer to write a custom message costs 1000, two cost 2000 and so on.

Say by using one message, your revenue will be 5000. If you segmented your customers into 2 segments, and wrote tailored messages to each segment, your response rate will be higher. Say revenues are now 7500. With three segments, a slightly higher response rate, and your revenues are 9000. One more segment, and you're at 9500.

To maximize profit, keep segmenting until the marginal revenue from segmenting equals the marginal cost of segmenting. In this example, you would use three segments to maximize profit.

Segments  Revenue  Cost  Profit
1         5000     1000  4000
2         7500     2000  5500
3         9000     3000  6000
4         9500     4000  5500
2,874
Where to cut a dendrogram?
Perhaps one of the simplest methods would be a graphical representation in which the x-axis is the number of groups and the y-axis is any evaluation metric, such as the distance or the similarity. In that plot you can usually observe two differentiated regions, with the x-axis value at the 'knee' of the line being the 'optimal' number of clusters. There are also some statistics that could help with this task: Hubert's gamma, pseudo-t², pseudo-F, or the cubic clustering criterion (CCC), among others.
2,875
Where to cut a dendrogram?
There is also "Clustergram: visualization and diagnostics for cluster analysis" (with R code). Not really an answer, but another interesting idea for the toolbox.
2,876
Where to cut a dendrogram?
In hierarchical clustering the number of output partitions is determined not just by horizontal cuts, but also by non-horizontal cuts, which decide the final clustering. Thus this can be seen as a third criterion alongside (1) the distance metric and (2) the linkage criterion. http://en.wikipedia.org/wiki/Hierarchical_clustering The criterion you have mentioned is a third kind, which is a sort of optimization constraint on the set of partitions in the hierarchy. This is formally presented in this paper, and examples of segmentation are given! http://www.esiee.fr/~kiranr/ClimbingECCV2012_Preprint.pdf
2,877
Where to cut a dendrogram?
There is an academic paper that gives a precise answer to that problem, under some separation assumptions (stability/noise resilience) on the clusters of the flat partition. The rough idea of the paper's solution is to extract the flat partition by cutting at different levels in the dendrogram. Say you want to minimize intra-cluster variance (that is your optimization objective); then you can formulate the problem as a dynamic programming problem: to minimize the objective function, should I cut here or not? You traverse the tree recursively, looking for the best cuts giving you k clusters with the smallest objective function value (intra-cluster variance or something else). The paper (for more details): Awasthi P., Blum A., Sheffet O., 2012. Center-based Clustering under Perturbation Stability, published in Information Processing Letters, Elsevier. I have implemented its approach (essentially dynamic programming) on my blog and detailed the steps, if it can help: http://marti.ai/ml/2017/05/12/cut-a-dendrogram.html
2,878
Where to cut a dendrogram?
As the other answers said, it is definitely subjective and dependent on what type of granularity you are trying to study. For a general approach, I cut this one to give me 2 clusters and 1 outlier. I would then focus on the two clusters to see if there was anything significant between them.

# Init
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()

# Load data
from sklearn.datasets import load_diabetes

# Clustering
from scipy.cluster.hierarchy import dendrogram, fcluster, leaves_list
from scipy.spatial import distance
from fastcluster import linkage   # You can use the SciPy one too

%matplotlib inline

# Dataset
A_data = load_diabetes().data
DF_diabetes = pd.DataFrame(A_data, columns=["attr_%d" % j for j in range(A_data.shape[1])])

# Absolute value of the correlation matrix, then subtract from 1 for dissimilarity
DF_dism = 1 - np.abs(DF_diabetes.corr())

# Compute average linkage
A_dist = distance.squareform(DF_dism.to_numpy())   # .as_matrix() is deprecated in pandas; use .to_numpy()
Z = linkage(A_dist, method="average")

# Dendrogram
D = dendrogram(Z=Z, labels=DF_dism.index, color_threshold=0.7,
               leaf_font_size=12, leaf_rotation=45)
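fcluster is imported above but never used; as a possible follow-up (reusing the same 0.7 threshold as color_threshold, purely as an assumed choice), you can pull out the flat cluster membership implied by that cut:

labels = fcluster(Z, t=0.7, criterion='distance')   # flat clusters from cutting the tree at height 0.7
print(dict(zip(DF_dism.index, labels)))             # which attribute falls in which cluster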
2,879
KL divergence between two multivariate Gaussians
Starting with where you began with some slight corrections, we can write $$ \begin{aligned} KL &= \int \left[ \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} (x-\mu_1)^T\Sigma_1^{-1}(x-\mu_1) + \frac{1}{2} (x-\mu_2)^T\Sigma_2^{-1}(x-\mu_2) \right] \times p(x) dx \\ &= \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} \text{tr}\ \left\{E[(x - \mu_1)(x - \mu_1)^T] \ \Sigma_1^{-1} \right\} + \frac{1}{2} E[(x - \mu_2)^T \Sigma_2^{-1} (x - \mu_2)] \\ &= \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} \text{tr}\ \{I_d \} + \frac{1}{2} (\mu_1 - \mu_2)^T \Sigma_2^{-1} (\mu_1 - \mu_2) + \frac{1}{2} \text{tr} \{ \Sigma_2^{-1} \Sigma_1 \} \\ &= \frac{1}{2}\left[\log\frac{|\Sigma_2|}{|\Sigma_1|} - d + \text{tr} \{ \Sigma_2^{-1}\Sigma_1 \} + (\mu_2 - \mu_1)^T \Sigma_2^{-1}(\mu_2 - \mu_1)\right]. \end{aligned} $$ Note that I have used a couple of properties from Section 8.2 of the Matrix Cookbook.
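As a numeric sanity check of the closed form (the particular means, covariances, sample size, and seed below are arbitrary choices for illustration), one can compare it with a Monte Carlo estimate of $E_{p_1}[\log p_1(x) - \log p_2(x)]$:

import numpy as np
from scipy.stats import multivariate_normal

def kl_mvn(m1, S1, m2, S2):
    # closed-form KL( N(m1, S1) || N(m2, S2) )
    d = m1.shape[0]
    S2_inv = np.linalg.inv(S2)
    diff = m2 - m1
    return 0.5 * (np.log(np.linalg.det(S2) / np.linalg.det(S1)) - d
                  + np.trace(S2_inv @ S1) + diff @ S2_inv @ diff)

m1, S1 = np.array([0., 0.]), np.array([[1.0, 0.3], [0.3, 2.0]])
m2, S2 = np.array([1., -1.]), np.array([[2.0, 0.0], [0.0, 1.0]])

rng = np.random.default_rng(0)
x = rng.multivariate_normal(m1, S1, size=200_000)
mc = np.mean(multivariate_normal(m1, S1).logpdf(x) - multivariate_normal(m2, S2).logpdf(x))
print(kl_mvn(m1, S1, m2, S2), mc)   # the two numbers should agree closely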
KL divergence between two multivariate Gaussians
Starting with where you began with some slight corrections, we can write $$ \begin{aligned} KL &= \int \left[ \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} (x-\mu_1)^T\Sigma_1^{-1}(x-\mu
KL divergence between two multivariate Gaussians Starting with where you began with some slight corrections, we can write $$ \begin{aligned} KL &= \int \left[ \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} (x-\mu_1)^T\Sigma_1^{-1}(x-\mu_1) + \frac{1}{2} (x-\mu_2)^T\Sigma_2^{-1}(x-\mu_2) \right] \times p(x) dx \\ &= \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} \text{tr}\ \left\{E[(x - \mu_1)(x - \mu_1)^T] \ \Sigma_1^{-1} \right\} + \frac{1}{2} E[(x - \mu_2)^T \Sigma_2^{-1} (x - \mu_2)] \\ &= \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} \text{tr}\ \{I_d \} + \frac{1}{2} (\mu_1 - \mu_2)^T \Sigma_2^{-1} (\mu_1 - \mu_2) + \frac{1}{2} \text{tr} \{ \Sigma_2^{-1} \Sigma_1 \} \\ &= \frac{1}{2}\left[\log\frac{|\Sigma_2|}{|\Sigma_1|} - d + \text{tr} \{ \Sigma_2^{-1}\Sigma_1 \} + (\mu_2 - \mu_1)^T \Sigma_2^{-1}(\mu_2 - \mu_1)\right]. \end{aligned} $$ Note that I have used a couple of properties from Section 8.2 of the Matrix Cookbook.
KL divergence between two multivariate Gaussians Starting with where you began with some slight corrections, we can write $$ \begin{aligned} KL &= \int \left[ \frac{1}{2} \log\frac{|\Sigma_2|}{|\Sigma_1|} - \frac{1}{2} (x-\mu_1)^T\Sigma_1^{-1}(x-\mu
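To make the closed-form expression above concrete, here is a minimal NumPy/SciPy sketch (the helper name kl_mvn and the example parameters are made up for illustration); it also checks the formula against a Monte Carlo estimate of the expectation of log p1(x) - log p2(x) under p1.

import numpy as np
from scipy.stats import multivariate_normal

def kl_mvn(mu1, Sigma1, mu2, Sigma2):
    # KL( N(mu1, Sigma1) || N(mu2, Sigma2) ) using the closed form derived above
    d = mu1.shape[0]
    Sigma2_inv = np.linalg.inv(Sigma2)
    diff = mu2 - mu1
    return 0.5 * (np.log(np.linalg.det(Sigma2) / np.linalg.det(Sigma1)) - d
                  + np.trace(Sigma2_inv @ Sigma1) + diff @ Sigma2_inv @ diff)

rng = np.random.default_rng(0)
mu1, mu2 = np.array([0.0, 1.0]), np.array([1.0, -1.0])
Sigma1 = np.array([[2.0, 0.3], [0.3, 1.0]])
Sigma2 = np.array([[1.0, -0.2], [-0.2, 1.5]])

# Monte Carlo check: KL is the expectation of log p1(x) - log p2(x) under p1
x = rng.multivariate_normal(mu1, Sigma1, size=200_000)
mc = np.mean(multivariate_normal.logpdf(x, mu1, Sigma1)
             - multivariate_normal.logpdf(x, mu2, Sigma2))
print(kl_mvn(mu1, Sigma1, mu2, Sigma2), mc)   # the two numbers should agree closely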
2,880
Practical questions on tuning Random Forests
I'm not an authoritative figure, so consider these brief practitioner notes: More trees are always better, with diminishing returns. Deeper trees are almost always better, subject to requiring more trees for similar performance. The above two points are directly a result of the bias-variance tradeoff. Deeper trees reduce the bias; more trees reduce the variance. The most important hyper-parameter is how many features to test for each split. The more useless features there are, the more features you should try. This needs to be tuned. You can sort of tune it via OOB estimates if you just want to know your performance on your training data and there is no twinning (~repeated measures). Even though this is the most important parameter, its optimum is still usually fairly close to the originally suggested defaults (sqrt(p) or (p/3) for classification/regression). Fairly recent research shows you don't even need to do exhaustive split searches inside a feature to get good performance. Just try a few cut points for each selected feature and move on. This makes training even faster. (~Extremely Randomized Trees / Extra-Trees).
Practical questions on tuning Random Forests
I'm not an authoritative figure, so consider these brief practitioner notes: More trees is always better with diminishing returns. Deeper trees are almost always better subject to requiring more trees
Practical questions on tuning Random Forests I'm not an authoritative figure, so consider these brief practitioner notes: More trees is always better with diminishing returns. Deeper trees are almost always better subject to requiring more trees for similar performance. The above two points are directly a result of the bias-variance tradeoff. Deeper trees reduces the bias; more trees reduces the variance. The most important hyper-parameter is how many features to test for each split. The more useless features there are, the more features you should try. This needs tuned. You can sort of tune it via OOB estimates if you just want to know your performance on your training data and there is no twinning (~repeated measures). Even though this is the most important parameter, it's optimum is still usually fairly close to the original suggest defaults (sqrt(p) or (p/3) for classification/regression). Fairly recent research shows you don't even need to do exhaustive split searches inside a feature to get good performance. Just try a few cut points for each selected feature and move on. This makes training even faster. (~Extremely Random Forests/Trees).
Practical questions on tuning Random Forests I'm not an authoritative figure, so consider these brief practitioner notes: More trees is always better with diminishing returns. Deeper trees are almost always better subject to requiring more trees
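As a small illustration of the OOB-based tuning of the features-per-split parameter described in this answer, here is a hedged scikit-learn sketch on synthetic data (the dataset, the grid of max_features values and the random seeds are arbitrary choices, not recommendations):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data with many uninformative features, so max_features actually matters
X, y = make_classification(n_samples=1000, n_features=40, n_informative=5,
                           n_redundant=0, random_state=0)

for m in [2, int(np.sqrt(40)), 10, 20, 40]:
    rf = RandomForestClassifier(n_estimators=500, max_features=m,
                                oob_score=True, random_state=0, n_jobs=-1)
    rf.fit(X, y)
    print(f"max_features={m:2d}  OOB accuracy={rf.oob_score_:.3f}")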
2,881
Practical questions on tuning Random Forests
Number of trees: the bigger the better: yes. One way to evaluate and know when to stop is to monitor your error rate while building your forest (or any other evaluation criterion you could use) and detect when it converges. You could do that on the learning set itself or, if available, on an independent test set. Also, it has to be noted that the number of test nodes in your trees is upper bounded by the number of objects, so if you have lots of variables and not so many training objects, a larger forest is highly recommended in order to increase the chances of evaluating all the descriptors at least once in your forest. Tree depth: there are several ways to control how deep your trees are (limit the maximum depth, limit the number of nodes, limit the number of objects required to split, stop splitting if the split does not sufficiently improve the fit, ...). Most of the time, it is recommended to prune (limit the depth of) the trees if you are dealing with noisy data. Finally, you can use your fully developed trees to compute the performance of shorter trees, as these are a "subset" of the fully developed ones. How many features to test at each node: cross-validate your experiments over a wide range of values (including the recommended ones); you should obtain a performance curve and be able to identify a maximum pointing out the best value for this parameter; see also Shea Parkes' answer. Shea Parkes mentioned the Extra-Trees; here is the original paper describing the method in detail: http://orbi.ulg.ac.be/bitstream/2268/9357/1/geurts-mlj-advance.pdf
Practical questions on tuning Random Forests
Number of trees: the bigger the better: yes. One way to evaluate and know when to stop is to monitor your error rate while building your forest (or any other evaluation criteria you could use) and det
Practical questions on tuning Random Forests Number of trees: the bigger the better: yes. One way to evaluate and know when to stop is to monitor your error rate while building your forest (or any other evaluation criteria you could use) and detect when it converges. You could do that on the learning set itself or, if available, on an independent test set. Also, it has to be noted that the number of test nodes in your trees is upper bounded by the number of objects, so if you have lots of variables and not so many training objects, larger forest will be highly recommended in order to increase the chances of evaluating all the descriptors at least once in your forest. Tree depth: there are several ways to control how deep your trees are (limit the maximum depth, limit the number of nodes, limit the number of objects required to split, stop splitting if the split does not sufficiently improves the fit,...). Most of the time, it is recommended to prune (limit the depth of) the trees if you are dealing with noisy data. Finally, you can use your fully developed trees to compute performance of shorter trees as these are a "subset" of the fully developed ones. How many features to test at each node: cross-validate your experiences with a wide range of values (including the recommended ones), you should obtain a performance curve and be able to identify a maximum pointing out what is the best value for this parameter + Shea Parkes answer. Shea Parkes mentionned the Extra-Trees, here is the original paper describing in details the method: http://orbi.ulg.ac.be/bitstream/2268/9357/1/geurts-mlj-advance.pdf
Practical questions on tuning Random Forests Number of trees: the bigger the better: yes. One way to evaluate and know when to stop is to monitor your error rate while building your forest (or any other evaluation criteria you could use) and det
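To illustrate the "monitor the error rate and stop when it converges" advice, here is a rough scikit-learn sketch that grows one forest incrementally and tracks its out-of-bag error (synthetic data; the tree counts and seeds are arbitrary):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=25, n_informative=8,
                           random_state=0)

# warm_start lets us keep adding trees to the same forest and re-check the OOB error
rf = RandomForestClassifier(warm_start=True, oob_score=True,
                            random_state=0, n_jobs=-1)
for n in [25, 50, 100, 200, 400, 800]:
    rf.set_params(n_estimators=n)
    rf.fit(X, y)
    print(f"{n:4d} trees  OOB error = {1 - rf.oob_score_:.4f}")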
2,882
Why is sample standard deviation a biased estimator of $\sigma$?
@NRH's answer to this question gives a nice, simple proof of the biasedness of the sample standard deviation. Here I will explicitly calculate the expectation of the sample standard deviation (the original poster's second question) from a normally distributed sample, at which point the bias is clear. The unbiased sample variance of a set of points $x_1, ..., x_n$ is $$ s^{2} = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \overline{x})^2 $$ If the $x_i$'s are normally distributed, it is a fact that $$ \frac{(n-1)s^2}{\sigma^2} \sim \chi^{2}_{n-1} $$ where $\sigma^2$ is the true variance. The $\chi^2_{k}$ distribution has probability density $$ p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1}e^{-x/2} $$ Using this we can derive the expected value of $s$: $$ \begin{align} E(s) &= \sqrt{\frac{\sigma^2}{n-1}} E \left( \sqrt{\frac{s^2(n-1)}{\sigma^2}} \right) \\ &= \sqrt{\frac{\sigma^2}{n-1}} \int_{0}^{\infty} \sqrt{x} \frac{(1/2)^{(n-1)/2}}{\Gamma((n-1)/2)} x^{((n-1)/2) - 1}e^{-x/2} \ dx \end{align} $$ which follows from the definition of expected value and the fact that $ \sqrt{\frac{s^2(n-1)}{\sigma^2}}$ is the square root of a $\chi^2$ distributed variable. The trick now is to rearrange terms so that the integrand becomes another $\chi^2$ density: $$ \begin{align} E(s) &= \sqrt{\frac{\sigma^2}{n-1}} \int_{0}^{\infty} \frac{(1/2)^{(n-1)/2}}{\Gamma(\frac{n-1}{2})} x^{(n/2) - 1}e^{-x/2} \ dx \\ &= \sqrt{\frac{\sigma^2}{n-1}} \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \int_{0}^{\infty} \frac{(1/2)^{(n-1)/2}}{\Gamma(n/2)} x^{(n/2) - 1}e^{-x/2} \ dx \\ &= \sqrt{\frac{\sigma^2}{n-1}} \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \cdot \frac{ (1/2)^{(n-1)/2} }{ (1/2)^{n/2} } \underbrace{ \int_{0}^{\infty} \frac{(1/2)^{n/2}}{\Gamma(n/2)} x^{(n/2) - 1}e^{-x/2} \ dx}_{\chi^2_n \ {\rm density} } \end{align} $$ We now know that the integrand in the last line is equal to 1, since it is a $\chi^2_{n}$ density. Simplifying constants a bit gives $$ E(s) = \sigma \cdot \sqrt{ \frac{2}{n-1} } \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } $$ Therefore the bias of $s$ is $$ \sigma - E(s) = \sigma \bigg(1 - \sqrt{ \frac{2}{n-1} } \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \bigg) \sim \frac{\sigma}{4 n} \>$$ as $n \to \infty$. It's not hard to see that this bias is not 0 for any finite $n$, thus proving the sample standard deviation is biased. Below, the bias is plotted as a function of $n$ for $\sigma=1$ in red, along with $1/(4n)$ in blue.
Why is sample standard deviation a biased estimator of $\sigma$?
@NRH's answer to this question gives a nice, simple proof of the biasedness of the sample standard deviation. Here I will explicitly calculate the expectation of the sample standard deviation (the ori
Why is sample standard deviation a biased estimator of $\sigma$? @NRH's answer to this question gives a nice, simple proof of the biasedness of the sample standard deviation. Here I will explicitly calculate the expectation of the sample standard deviation (the original poster's second question) from a normally distributed sample, at which point the bias is clear. The unbiased sample variance of a set of points $x_1, ..., x_n$ is $$ s^{2} = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \overline{x})^2 $$ If the $x_i$'s are normally distributed, it is a fact that $$ \frac{(n-1)s^2}{\sigma^2} \sim \chi^{2}_{n-1} $$ where $\sigma^2$ is the true variance. The $\chi^2_{k}$ distribution has probability density $$ p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1}e^{-x/2} $$ using this we can derive the expected value of $s$; $$ \begin{align} E(s) &= \sqrt{\frac{\sigma^2}{n-1}} E \left( \sqrt{\frac{s^2(n-1)}{\sigma^2}} \right) \\ &= \sqrt{\frac{\sigma^2}{n-1}} \int_{0}^{\infty} \sqrt{x} \frac{(1/2)^{(n-1)/2}}{\Gamma((n-1)/2)} x^{((n-1)/2) - 1}e^{-x/2} \ dx \end{align} $$ which follows from the definition of expected value and fact that $ \sqrt{\frac{s^2(n-1)}{\sigma^2}}$ is the square root of a $\chi^2$ distributed variable. The trick now is to rearrange terms so that the integrand becomes another $\chi^2$ density: $$ \begin{align} E(s) &= \sqrt{\frac{\sigma^2}{n-1}} \int_{0}^{\infty} \frac{(1/2)^{(n-1)/2}}{\Gamma(\frac{n-1}{2})} x^{(n/2) - 1}e^{-x/2} \ dx \\ &= \sqrt{\frac{\sigma^2}{n-1}} \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \int_{0}^{\infty} \frac{(1/2)^{(n-1)/2}}{\Gamma(n/2)} x^{(n/2) - 1}e^{-x/2} \ dx \\ &= \sqrt{\frac{\sigma^2}{n-1}} \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \cdot \frac{ (1/2)^{(n-1)/2} }{ (1/2)^{n/2} } \underbrace{ \int_{0}^{\infty} \frac{(1/2)^{n/2}}{\Gamma(n/2)} x^{(n/2) - 1}e^{-x/2} \ dx}_{\chi^2_n \ {\rm density} } \end{align} $$ now we know the integrand the last line is equal to 1, since it is a $\chi^2_{n}$ density. Simplifying constants a bit gives $$ E(s) = \sigma \cdot \sqrt{ \frac{2}{n-1} } \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } $$ Therefore the bias of $s$ is $$ \sigma - E(s) = \sigma \bigg(1 - \sqrt{ \frac{2}{n-1} } \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \bigg) \sim \frac{\sigma}{4 n} \>$$ as $n \to \infty$. It's not hard to see that this bias is not 0 for any finite $n$, thus proving the sample standard deviation is biased. Below the bias is plot as a function of $n$ for $\sigma=1$ in red along with $1/4n$ in blue:
Why is sample standard deviation a biased estimator of $\sigma$? @NRH's answer to this question gives a nice, simple proof of the biasedness of the sample standard deviation. Here I will explicitly calculate the expectation of the sample standard deviation (the ori
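A quick numerical check of the expression E(s) = sigma * sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2) derived above, written as a small NumPy/SciPy sketch (the sample sizes and replication counts are arbitrary):

import numpy as np
from scipy.special import gammaln

def expected_s(n, sigma=1.0):
    # E(s) = sigma * sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2), via log-gamma for stability
    return sigma * np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

rng = np.random.default_rng(1)
for n in [5, 10, 30, 100]:
    s = np.std(rng.normal(size=(100_000, n)), axis=1, ddof=1)
    print(f"n={n:3d}  simulated E(s)={s.mean():.4f}  exact={expected_s(n):.4f}  "
          f"exact bias={1 - expected_s(n):.4f}  approx sigma/(4n)={1 / (4 * n):.4f}")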
2,883
Why is sample standard deviation a biased estimator of $\sigma$?
You don't need normality. All you need is that $$s^2 = \frac{1}{n-1} \sum_{i=1}^n(x_i - \bar{x})^2$$ is an unbiased estimator of the variance $\sigma^2$. Then use that the square root function is strictly concave such that (by a strong form of Jensen's inequality) $$E(\sqrt{s^2}) < \sqrt{E(s^2)} = \sigma$$ unless the distribution of $s^2$ is degenerate at $\sigma^2$.
Why is sample standard deviation a biased estimator of $\sigma$?
You don't need normality. All you need is that $$s^2 = \frac{1}{n-1} \sum_{i=1}^n(x_i - \bar{x})^2$$ is an unbiased estimator of the variance $\sigma^2$. Then use that the square root function is str
Why is sample standard deviation a biased estimator of $\sigma$? You don't need normality. All you need is that $$s^2 = \frac{1}{n-1} \sum_{i=1}^n(x_i - \bar{x})^2$$ is an unbiased estimator of the variance $\sigma^2$. Then use that the square root function is strictly concave such that (by a strong form of Jensen's inequality) $$E(\sqrt{s^2}) < \sqrt{E(s^2)} = \sigma$$ unless the distribution of $s^2$ is degenerate at $\sigma^2$.
Why is sample standard deviation a biased estimator of $\sigma$? You don't need normality. All you need is that $$s^2 = \frac{1}{n-1} \sum_{i=1}^n(x_i - \bar{x})^2$$ is an unbiased estimator of the variance $\sigma^2$. Then use that the square root function is str
2,884
Why is sample standard deviation a biased estimator of $\sigma$?
Complementing NRH's answer, if someone is teaching this to a group of students who haven't studied Jensen's inequality yet, one way to go is to define the sample standard deviation $$ S_n = \sqrt{\sum_{i=1}^n\frac{(X_i-\bar{X}_n)^2}{n-1}} , $$ suppose that $S_n$ is non degenerate (therefore, $\mathrm{Var}[S_n]\ne0$), and notice the equivalences $$ 0 < \mathrm{Var}[S_n] = \mathrm{E}[S_n^2] - \mathrm{E}^2[S_n] \;\;\Leftrightarrow\;\; \mathrm{E}^2[S_n] < \mathrm{E}[S_n^2] \;\;\Leftrightarrow\;\; \mathrm{E}[S_n] < \sqrt{\mathrm{E}[S_n^2]} =\sigma. $$
Why is sample standard deviation a biased estimator of $\sigma$?
Complementing NRH's answer, if someone is teaching this to a group of students who haven't studied Jensen's inequality yet, one way to go is to define the sample standard deviation $$ S_n = \sqrt{\s
Why is sample standard deviation a biased estimator of $\sigma$? Complementing NRH's answer, if someone is teaching this to a group of students who haven't studied Jensen's inequality yet, one way to go is to define the sample standard deviation $$ S_n = \sqrt{\sum_{i=1}^n\frac{(X_i-\bar{X}_n)^2}{n-1}} , $$ suppose that $S_n$ is non degenerate (therefore, $\mathrm{Var}[S_n]\ne0$), and notice the equivalences $$ 0 < \mathrm{Var}[S_n] = \mathrm{E}[S_n^2] - \mathrm{E}^2[S_n] \;\;\Leftrightarrow\;\; \mathrm{E}^2[S_n] < \mathrm{E}[S_n^2] \;\;\Leftrightarrow\;\; \mathrm{E}[S_n] < \sqrt{\mathrm{E}[S_n^2]} =\sigma. $$
Why is sample standard deviation a biased estimator of $\sigma$? Complementing NRH's answer, if someone is teaching this to a group of students who haven't studied Jensen's inequality yet, one way to go is to define the sample standard deviation $$ S_n = \sqrt{\s
2,885
Why is sample standard deviation a biased estimator of $\sigma$?
This is a more general result that does not assume a normal distribution. The proof goes along the lines of this paper by David E. Giles. First, consider the Taylor expansion of $g(x) = \sqrt{x}$ about $x=\sigma^2$: $$ g(x) = \sigma + \frac{1}{2 \sigma}(x-\sigma^2) - \frac{1}{8 \sigma^3}(x-\sigma^2)^2 + R(x), $$ where $R(x) =- \left(\frac{1}{8 \tilde \sigma^3} - \frac{1}{8 \sigma^3}\right)(x-\sigma^2)^2$ for some $\tilde \sigma$ between $\sqrt{x}$ and $\sigma$. Let $\kappa = E(X - \mu)^4 / \sigma^4$ be the kurtosis. It can be shown that $E\left[\sqrt{n}(S_n^2 - \sigma^2)\right]^2 \rightarrow \sigma^4(\kappa-1)$ and $n ER(S_n^2) \rightarrow 0$ (the proofs are beyond the scope of this thread; see for example the CLT result that $\sqrt{n}(S_n^2 - \sigma^2)$ converges to $N(0, \sigma^4(\kappa-1))$). Thus, $$ E(S_n) = Eg(S_n^2) =\sigma + \frac{1}{2 \sigma} E(S_n^2 - \sigma^2) - \frac{1}{8\sigma^3} E(S_n^2 - \sigma^2)^2 + o(n^{-1}) $$ $$ = \sigma - \frac{\sigma}{8}\left[ \frac{\kappa - 1}{n}\right] + o(n^{-1}). $$ For the normal distribution, setting $\kappa = 3$ gives the first-order bias $-\frac{\sigma}{4n}$, as shown above.
Why is sample standard deviation a biased estimator of $\sigma$?
This is a more general result without assuming of Normal distribution. The proof goes along the lines of this paper by David E. Giles. First, we consider Taylor's expanding $g(x) = \sqrt{x}$ about $x=
Why is sample standard deviation a biased estimator of $\sigma$? This is a more general result without assuming of Normal distribution. The proof goes along the lines of this paper by David E. Giles. First, we consider Taylor's expanding $g(x) = \sqrt{x}$ about $x=\sigma^2$, we have $$ g(x) = \sigma + \frac{1}{2 \sigma}(x-\sigma^2) - \frac{1}{8 \sigma^3}(x-\sigma^2)^2 + R(x), $$ where $R(x) =- \left(\frac{1}{8 \tilde \sigma^3} - \frac{1}{8 \sigma^3}\right)(x-\sigma^2)^2$ for some $\tilde \sigma$ between $\sqrt{x}$ and $\sigma$. Let $\kappa = E(X - \mu)^4 / \sigma^4$ be the kurtosis. It could be shown that $E\left[\sqrt{n}(S_n^2 - \sigma^2)\right]^2 \rightarrow \sigma^4(\kappa-1)$ and $n ER(S_n^2) \rightarrow 0$ (and the proofs are beyond the discussion of this thread. See for example CLT that states that $\sqrt{n}(S_n^2 - \sigma^2)$ converges to $N(0, \sigma^4(\kappa-1))$). Thus, $$ E(S_n) = Eg(S_n^2) =\sigma + \frac{1}{2 \sigma} E(S_n^2 - \sigma^2) - \frac{1}{8\sigma^3} E(S_n^2 - \sigma^2)^2 + o(n^{-1}). $$ $$ = \sigma - \frac{\sigma}{8}\left[ \frac{\kappa - 1}{n}\right] + o(n^{-1}). $$ For normal distribution, setting $\kappa = 3$ gives the first order bias $-\frac{\sigma}{4n}$ as shown above.
Why is sample standard deviation a biased estimator of $\sigma$? This is a more general result without assuming of Normal distribution. The proof goes along the lines of this paper by David E. Giles. First, we consider Taylor's expanding $g(x) = \sqrt{x}$ about $x=
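The first-order bias -sigma*(kappa-1)/(8n) can be checked numerically on a non-normal example. Here is a minimal NumPy sketch using Exponential(1) data, for which sigma = 1 and kappa = 9 (the sample sizes and replication counts are arbitrary):

import numpy as np

rng = np.random.default_rng(2)
sigma, kappa = 1.0, 9.0            # Exponential(1): variance 1, kurtosis E(X-mu)^4/sigma^4 = 9
for n in [30, 100, 200]:
    s = np.std(rng.exponential(scale=1.0, size=(100_000, n)), axis=1, ddof=1)
    print(f"n={n:3d}  simulated bias={sigma - s.mean():.5f}  "
          f"first-order sigma*(kappa-1)/(8n)={sigma * (kappa - 1) / (8 * n):.5f}")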
2,886
What are some of the most common misconceptions about linear regression?
False premise: A $\boldsymbol{\hat{\beta} \approx 0}$ means that there is no strong relationship between DV and IV. Non-linear functional relationships abound, and yet data produced by many such relationships would often produce nearly zero slopes if one assumes the relationship must be linear, or even approximately linear. Relatedly, in another false premise researchers often assume—possibly because many introductory regression textbooks teach it—that one "tests for non-linearity" by building a series of regressions of the DV onto polynomial expansions of the IV (e.g., $Y \sim \beta_{0} + \beta_{X}X + \varepsilon$, followed by $Y \sim \beta_{0} + \beta_{X}X + \beta_{X^{2}}X^{2} + \varepsilon$, followed by $Y \sim \beta_{0} + \beta_{X}X + \beta_{X^{2}}X^{2} + \beta_{X^{3}}X^{3} + \varepsilon$, etc.). Just as a straight line cannot well represent a non-linear functional relationship between DV and IV, a parabola cannot well represent literally an infinite number of nonlinear relationships (e.g., sinusoids, cycloids, step functions, saturation effects, s-curves, etc. ad infinitum). One may instead take a regression approach that does not assume any particular functional form (e.g., running line smoothers, GAMs, etc.). A third false premise is that increasing the number of estimated parameters necessarily results in a loss of statistical power. This may be false when the true relationship is non-linear and requires multiple parameters to estimate (e.g., a "broken stick" function requires not only the intercept and slope terms of a straight line, but also estimates of the point at which the slope changes and of how much the slope changes): the residuals of a misspecified model (e.g., a straight line) may grow quite large (relative to a properly specified functional relation), resulting in a lower rejection probability and wider confidence intervals and prediction intervals (in addition to estimates being biased).
What are some of the most common misconceptions about linear regression?
False premise: A $\boldsymbol{\hat{\beta} \approx 0}$ means that there is no strong relationship between DV and IV.Non-linear functional relationships abound, and yet data produced by many such relati
What are some of the most common misconceptions about linear regression? False premise: A $\boldsymbol{\hat{\beta} \approx 0}$ means that there is no strong relationship between DV and IV.Non-linear functional relationships abound, and yet data produced by many such relationships would often produce nearly zero slopes if one assumes the relationship must be linear, or even approximately linear. Relatedly, in another false premise researchers often assume—possibly because many introductory regression textbooks teach—that one "tests for non-linearity" by building a series of regressions of the DV onto polynomial expansions of the IV (e.g., $Y \sim \beta_{0} + \beta_{X}X + \varepsilon$, followed by $Y \sim \beta_{0} + \beta_{X}X + \beta_{X^{2}}X^{2} + \varepsilon$, followed by $Y \sim \beta_{0} + \beta_{X}X + \beta_{X^{2}}X^{2} + \beta_{X^{3}}X^{3} + \varepsilon$, etc.). Just as straight line cannot well represent a non-linear functional relationship between DV and IV, a parabola cannot well represent literally an infinite number of nonlinear relationships (e.g., sinusoids, cycloids, step functions, saturation effects, s-curves, etc. ad infinitum). One may instead take a regression approach that does not assume any particular functional form (e.g., running line smoothers, GAMs, etc.). A third false premise is that increasing the number of estimated parameters necessarily results in a loss of statistical power. This may be false when the true relationship is non-linear and requires multiple parameters to estimate (e.g., a "broken stick" function requires not only the intercept and slope terms of a straight line, but requires point at which slope changes and a how much slope changes by estimates also): the residuals of a misspecified model (e.g., a straight line) may grow quite large (relative to a properly specified functional relation) resulting in a lower rejection probability and wider confidence intervals and prediction intervals (in addition to estimates being biased).
What are some of the most common misconceptions about linear regression? False premise: A $\boldsymbol{\hat{\beta} \approx 0}$ means that there is no strong relationship between DV and IV.Non-linear functional relationships abound, and yet data produced by many such relati
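A tiny NumPy illustration of the first false premise: below, y is almost a deterministic function of x, yet both the fitted slope and the R^2 of a straight line (and even of a parabola) are close to zero. The particular cosine relationship is just a convenient made-up example.

import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-np.pi, np.pi, 1000)
y = np.cos(2 * x) + rng.normal(scale=0.1, size=x.size)   # strong but non-linear relationship

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

line = np.polyval(np.polyfit(x, y, 1), x)
quad = np.polyval(np.polyfit(x, y, 2), x)
print(f"slope ~ {np.polyfit(x, y, 1)[0]:+.3f}   R^2 line ~ {r2(y, line):.3f}   "
      f"R^2 quadratic ~ {r2(y, quad):.3f}")
# A smoother that assumes no functional form (e.g. lowess or a GAM) recovers the
# cosine shape easily, while the straight line and the parabola both miss it.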
2,887
What are some of the most common misconceptions about linear regression?
It's very common to assume that only $y$ data are subject to measurement error (or at least, that this is the only error that we shall consider). But this ignores the possibility - and consequences - of error in the $x$ measurements. This might be particularly acute in observational studies where the $x$ variables are not under experimental control. Regression dilution or regression attenuation is the phenomenon recognised by Spearman (1904) whereby the estimated regression slope in simple linear regression is biased towards zero by the presence of measurement error in the independent variable. Suppose the true slope is positive — the effect of jittering the points' $x$ co-ordinates (perhaps most easily visualised as "smudging" the points horizontally) is to render the regression line less steep. Intuitively, points with a large $x$ are now more likely to be so because of positive measurement error, while the $y$ value is more likely to reflect the true (error-free) value of $x$, and hence be lower than the true line would be for the observed $x$. In more complex models, measurement error in $x$ variables can produce more complicated effects on the parameter estimates. There are errors in variables models that take such error into account. Spearman suggested a correction factor for disattenuating bivariate correlation coefficients and other correction factors have been developed for more sophisticated situations. However, such corrections can be difficult — particularly in the multivariate case and in the presence of confounders — and it may be controversial whether the correction is a genuine improvement, see e.g. Smith and Phillips (1996). So I suppose this is two misconceptions for the price of one — on the one hand it is a mistake to think that the way we write $y = X\beta + \varepsilon$ means "all the error is in the $y$" and ignore the very physically real possibility of measurement errors in the independent variables. On the other hand, it may be inadvisable to blindly apply "corrections" for measurement error in all such situations as a knee-jerk response (though it may well be a good idea to take steps to reduce the measurement error in the first place). (I should probably also link to some other common error-in-variables models, in increasingly general order: orthogonal regression, Deming regression, and total least squares.) References Smith, G. D., & Phillips, A. N. (1996). "Inflation in epidemiology: 'the proof and measurement of association between two things' revisited". British Medical Journal, 312(7047), 1659–1661. Spearman, C. (1904). "The proof and measurement of association between two things." American Journal of Psychology 15: 72–101.
What are some of the most common misconceptions about linear regression?
It's very common to assume that only $y$ data are subject to measurement error (or at least, that this is the only error that we shall consider). But this ignores the possibility - and consequences -
What are some of the most common misconceptions about linear regression? It's very common to assume that only $y$ data are subject to measurement error (or at least, that this is the only error that we shall consider). But this ignores the possibility - and consequences - of error in the $x$ measurements. This might be particularly acute in observational studies where the $x$ variables are not under experimental control. Regression dilution or regression attenuation is the phenomenon recognised by Spearman (1904) whereby the estimated regression slope in simple linear regression is biased towards zero by the presence of measurement error in the independent variable. Suppose the true slope is positive — the effect of jittering the points' $x$ co-ordinates (perhaps most easily visualised as "smudging" the points horizontally) is to render the regression line less steep. Intuitively, points with a large $x$ are now more likely to be so because of positive measurement error, while the $y$ value is more likely to reflect the true (error-free) value of $x$, and hence be lower than the true line would be for the observed $x$. In more complex models, measurement error in $x$ variables can produce more complicated effects on the parameter estimates. There are errors in variables models that take such error into account. Spearman suggested a correction factor for disattenuating bivariate correlation coefficients and other correction factors have been developed for more sophisticated situations. However, such corrections can be difficult — particularly in the multivariate case and in the presence of confounders — and it may be controversial whether the correction is a genuine improvement, see e.g. Smith and Phillips (1996). So I suppose this is two misconceptions for the price of one — on the one hand it is a mistake to think that the way we write $y = X\beta + \varepsilon$ means "all the error is in the $y$" and ignore the very physically real possibility of measurement errors in the independent variables. On the other hand, it may be inadvisable to blindly apply "corrections" for measurement error in all such situations as a knee-jerk response (though it may well be a good idea to take steps to reduce the measurement error in the first place). (I should probably also link to some other common error-in-variables models, in increasingly general order: orthogonal regression, Deming regression, and total least squares.) References Smith, G. D., & Phillips, A. N. (1996). "Inflation in epidemiology: 'the proof and measurement of association between two things' revisited". British Medical Journal, 312(7047), 1659–1661. Spearman, C. (1904). "The proof and measurement of association between two things." American Journal of Psychology 15: 72–101.
What are some of the most common misconceptions about linear regression? It's very common to assume that only $y$ data are subject to measurement error (or at least, that this is the only error that we shall consider). But this ignores the possibility - and consequences -
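Regression dilution is easy to see in a simulation. Here is a minimal NumPy sketch in which the true slope is 2 and the measurement error halves the reliability of x (all numbers are made up):

import numpy as np

rng = np.random.default_rng(4)
n, beta = 100_000, 2.0
x_true = rng.normal(size=n)                      # error-free predictor, variance 1
y = beta * x_true + rng.normal(size=n)           # true slope = 2
x_obs = x_true + rng.normal(size=n)              # x measured with error of variance 1

slope_clean = np.polyfit(x_true, y, 1)[0]
slope_noisy = np.polyfit(x_obs, y, 1)[0]
reliability = 1.0 / (1.0 + 1.0)                  # var(x_true) / (var(x_true) + var(error))

print(f"slope using error-free x: {slope_clean:.3f}")
print(f"slope using noisy x     : {slope_noisy:.3f}  (about beta * reliability = {beta * reliability:.3f})")
print(f"disattenuated estimate  : {slope_noisy / reliability:.3f}")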
2,888
What are some of the most common misconceptions about linear regression?
There are some standard misunderstandings that apply in this context as well as other statistical contexts: e.g., the meaning of $p$-values, incorrectly inferring causality, etc. A couple of misunderstandings that I think are specific to multiple regression are: Thinking that the variable with the larger estimated coefficient and/or lower $p$-value is 'more important'. Thinking that adding more variables to the model gets you 'closer to the truth'. For example, the slope from a simple regression of $Y$ on $X$ may not be the true direct relationship between $X$ and $Y$, but if I add variables $Z_1, \ldots, Z_5$, that coefficient will be a better representation of the true relationship, and if I add $Z_6, \ldots, Z_{20}$, it will be even better than that.
What are some of the most common misconceptions about linear regression?
There are some standard misunderstandings that apply in this context as well as other statistical contexts: e.g., the meaning of $p$-values, incorrectly inferring causality, etc. A couple of misunde
What are some of the most common misconceptions about linear regression? There are some standard misunderstandings that apply in this context as well as other statistical contexts: e.g., the meaning of $p$-values, incorrectly inferring causality, etc. A couple of misunderstandings that I think are specific to multiple regression are: Thinking that the variable with the larger estimated coefficient and/or lower $p$-value is 'more important'. Thinking that adding more variables to the model gets you 'closer to the truth'. For example, the slope from a simple regression of $Y$ on $X$ may not be the true direct relationship between $X$ and $Y$, but if I add variables $Z_1, \ldots, Z_5$, that coefficient will be a better representation of the true relationship, and if I add $Z_6, \ldots, Z_{20}$, it will be even better than that.
What are some of the most common misconceptions about linear regression? There are some standard misunderstandings that apply in this context as well as other statistical contexts: e.g., the meaning of $p$-values, incorrectly inferring causality, etc. A couple of misunde
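The first point, that a larger coefficient does not make a variable "more important", can be seen by merely changing the units of a predictor. A minimal NumPy sketch with invented variables (height, age) follows:

import numpy as np

rng = np.random.default_rng(5)
n = 10_000
height_m = rng.normal(1.7, 0.1, n)        # metres
age_yr = rng.normal(40, 12, n)            # years
y = 5.0 * height_m + 0.05 * age_yr + rng.normal(size=n)

X = np.column_stack([np.ones(n), height_m, age_yr])
print("coefficients (height in m) :", np.round(np.linalg.lstsq(X, y, rcond=None)[0][1:], 3))

# Express height in centimetres instead: its coefficient shrinks by a factor of 100,
# although nothing about the underlying relationship has changed.
X_cm = np.column_stack([np.ones(n), height_m * 100, age_yr])
print("coefficients (height in cm):", np.round(np.linalg.lstsq(X_cm, y, rcond=None)[0][1:], 3))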
2,889
What are some of the most common misconceptions about linear regression?
I'd say the first one you list is probably the most common -- and perhaps the most widely taught that way -- of the things that are plainly seen to be wrong, but here are some others that are less clear in some situations (whether they really apply) but may impact even more analyses, and perhaps more seriously. These are often simply never mentioned when the subject of regression is introduced. Treating sets of observations that cannot possibly be close to representative (let alone randomly sampled) as random samples from the population of interest. [Some studies could instead be seen as something nearer to convenience samples.] With observational data, simply ignoring the consequences of leaving out important drivers of the process that would certainly bias the estimates of the coefficients of the included variables (in many cases, even to the point of likely changing their sign), with no attempt to consider ways of dealing with them (whether out of ignorance of the problem or simply being unaware that anything can be done). [Some research areas have this problem more than others, whether because of the kinds of data that are collected or because people in some application areas are more likely to have been taught about the issue.] Spurious regression (mostly with data collected over time). [Even when people are aware it happens, there's another common misconception that simply differencing to supposed stationarity is sufficient to completely avoid the problem.] There are many others one could mention, of course (treating data that will almost certainly be serially correlated or even integrated as independent may be about as common, for example). You may notice that observational studies of data collected over time may be hit by all of these at once... yet that kind of study is very common in many areas of research where regression is a standard tool. How they can get to publication without a single reviewer or editor knowing about at least one of them and at least requiring some level of disclaimer in the conclusions continues to worry me. Statistics is fraught with problems of irreproducible results even when dealing with fairly carefully controlled experiments (combined with perhaps not so carefully controlled analyses), so as soon as one steps outside those bounds, how much worse must the reproducibility situation be?
What are some of the most common misconceptions about linear regression?
I'd say the first one you list is probably the most common -- and perhaps the most widely taught that way -- of the things that are plainly seen to be wrong, but here are some others that are less cle
What are some of the most common misconceptions about linear regression? I'd say the first one you list is probably the most common -- and perhaps the most widely taught that way -- of the things that are plainly seen to be wrong, but here are some others that are less clear in some situations (whether they really apply) but may impact even more analyses, and perhaps more seriously. These are often simply never mentioned when the subject of regression is introduced. Treating as random samples from the population of interest sets of observations that cannot possibly be close to representative (let alone randomly sampled). [Some studies could instead be seen as something nearer to convenience samples] With observational data, simply ignoring the consequences of leaving out important drivers of the process that would certainly bias the estimates of the coefficients of the included variables (in many cases, even to likely changing their sign), with no attempt to consider ways of dealing with them (whether out of ignorance of the problem or simply being unaware that anything can be done). [Some research areas have this problem more than others, whether because of the kinds of data that are collected or because people in some application areas are more likely to have been taught about the issue.] Spurious regression (mostly with data collected over time). [Even when people are aware it happens, there's another common misconception that simply differencing to supposed stationary is sufficient to completely avoid the problem.] There are many others one could mention of course (treating as independent data that will almost certainly be serially correlated or even integrated may be about as common, for example). You may notice that observational studies of data collected over time may be hit by all of these at once... yet that kind of study is very common in many areas of research where regression is a standard tool. How they can get to publication without a single reviewer or editor knowing about at least one of them and at least requiring some level of disclaimer in the conclusions continues to worry me. Statistics is fraught with problems of irreproducible results when dealing with fairly carefully controlled experiments (when combined with perhaps not so carefully controlled analyses), so as soon as one steps outside those bounds, how much worse must the reproducibility situation be?
What are some of the most common misconceptions about linear regression? I'd say the first one you list is probably the most common -- and perhaps the most widely taught that way -- of the things that are plainly seen to be wrong, but here are some others that are less cle
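The spurious-regression point is easy to reproduce. The following small NumPy/SciPy sketch regresses one independent random walk on another many times and counts how often the slope comes out "significant" at the 5% level (the series length and simulation count are arbitrary):

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(6)
n_sim, T, rejections = 1000, 200, 0
for _ in range(n_sim):
    x = np.cumsum(rng.normal(size=T))    # two *independent* random walks
    y = np.cumsum(rng.normal(size=T))
    if linregress(x, y).pvalue < 0.05:
        rejections += 1
print(f"'significant' slope in {rejections / n_sim:.0%} of regressions "
      f"of one independent random walk on another (nominal rate: 5%)")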
2,890
What are some of the most common misconceptions about linear regression?
I probably wouldn't call these misconceptions, but maybe common points of confusion/hang-ups and, in some cases, issues that researchers may not be aware of. Multicollinearity (including the case of more variables than data points) Heteroskedasticity Whether values of the independent variables are subject to noise How scaling (or not scaling) affects interpretation of the coefficients How to treat data from multiple subjects How to deal with serial correlations (e.g. time series) On the misconception side of things: What linearity means (e.g. $y = ax^2 + bx + c$ is nonlinear w.r.t. $x$, but linear w.r.t. the weights). That 'regression' means ordinary least squares or linear regression That low/high weights necessarily imply weak/strong relationships with the dependent variable That dependence between the dependent and independent variables can necessarily be reduced to pairwise dependencies. That high goodness-of fit on the training set implies a good model (i.e. neglecting overfitting)
What are some of the most common misconceptions about linear regression?
I probably wouldn't call these misconceptions, but maybe common points of confusion/hang-ups and, in some cases, issues that researchers may not be aware of. Multicollinearity (including the case of
What are some of the most common misconceptions about linear regression? I probably wouldn't call these misconceptions, but maybe common points of confusion/hang-ups and, in some cases, issues that researchers may not be aware of. Multicollinearity (including the case of more variables than data points) Heteroskedasticity Whether values of the independent variables are subject to noise How scaling (or not scaling) affects interpretation of the coefficients How to treat data from multiple subjects How to deal with serial correlations (e.g. time series) On the misconception side of things: What linearity means (e.g. $y = ax^2 + bx + c$ is nonlinear w.r.t. $x$, but linear w.r.t. the weights). That 'regression' means ordinary least squares or linear regression That low/high weights necessarily imply weak/strong relationships with the dependent variable That dependence between the dependent and independent variables can necessarily be reduced to pairwise dependencies. That high goodness-of fit on the training set implies a good model (i.e. neglecting overfitting)
What are some of the most common misconceptions about linear regression? I probably wouldn't call these misconceptions, but maybe common points of confusion/hang-ups and, in some cases, issues that researchers may not be aware of. Multicollinearity (including the case of
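A short illustration of the linearity point: y = a*x^2 + b*x + c is non-linear in x but linear in the weights (a, b, c), so it is fit exactly by ordinary linear least squares on the design matrix [x^2, x, 1] (the coefficients below are made up):

import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(-2, 2, 200)
y = 1.5 * x**2 - 0.7 * x + 3.0 + rng.normal(scale=0.3, size=x.size)

X = np.column_stack([x**2, x, np.ones_like(x)])    # linear in the parameters
a, b, c = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}   (true values 1.5, -0.7, 3.0)")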
2,891
What are some of the most common misconceptions about linear regression?
In my experience, students frequently adopt the view that squared errors (or OLS regression) are an inherently appropriate, accurate, and overall good thing to use, or are even without alternative. I have frequently seen OLS advertised along with remarks that it "gives greater weight to more extreme/deviant observations", and most of the time it is at least implied that this is a desirable property. This notion may be modified later, when the treatment of outliers and robust approaches is introduced, but at that point the damage is done. Arguably, the widespread use of squared errors has historically more to do with their mathematical convenience than with some natural law of real-world error costs. Overall, greater emphasis could be placed on the understanding that the choice of error function is somewhat arbitrary. Ideally, any choice of penalty within an algorithm should be guided by the corresponding real-world cost function associated with potential error (i.e., using a decision-making framework). Why not establish this principle first, and then see how well we can do?
What are some of the most common misconceptions about linear regression?
In my experience, students frequently adopt the view the that squared errors (or OLS regression) are an inherently appropriate, accurate, and overall good thing to use, or are even without alternative
What are some of the most common misconceptions about linear regression? In my experience, students frequently adopt the view the that squared errors (or OLS regression) are an inherently appropriate, accurate, and overall good thing to use, or are even without alternative. I have frequently seen OLS advertised along with remarks that it "gives greater weight to more extreme/deviant observations", and most of the time it is at least implied that this is a desirable property. This notion may be modified later, when the treatment of outliers and robust approaches are introduced, but at that point the damage is done. Arguably, the widespread use of squared errors has historically more to do with their mathematical convenience than with some natural law of real-world error costs. Overall, greater emphasis could be placed on the understanding that the choice of error function is somewhat arbitrary. Ideally, any choice of penalty within an algorithm should be guided by the corresponding real-world cost function associated with potential error (i.e., using a decision-making framework). Why not establish this principle first, and then see how well we can do?
What are some of the most common misconceptions about linear regression? In my experience, students frequently adopt the view the that squared errors (or OLS regression) are an inherently appropriate, accurate, and overall good thing to use, or are even without alternative
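To make the "choice of error function is somewhat arbitrary" point concrete, here is a small NumPy sketch: on data with a few gross outliers, the squared-error-optimal constant is the mean while the absolute-error-optimal constant is the median, so the penalty you pick really is a modelling decision (the contamination setup is made up):

import numpy as np

rng = np.random.default_rng(8)
data = np.concatenate([rng.normal(0, 1, 95), rng.normal(50, 1, 5)])   # 5% gross outliers

grid = np.linspace(-5, 60, 2001)
sse = [np.sum((data - c) ** 2) for c in grid]      # squared-error cost
sae = [np.sum(np.abs(data - c)) for c in grid]     # absolute-error cost

print(f"squared-error optimum  ~ {grid[np.argmin(sse)]:.2f}   (mean   = {data.mean():.2f})")
print(f"absolute-error optimum ~ {grid[np.argmin(sae)]:.2f}   (median = {np.median(data):.2f})")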
2,892
What are some of the most common misconceptions about linear regression?
Another common misconception is that the error term (or disturbance in econometrics parlance) and the residuals are the same thing. The error term is a random variable in the true model or data generating process, and is often assumed to follow a certain distribution, whereas the residuals are the deviations of the observed data from the fitted model. As such, the residuals can be considered to be estimates of the errors.
What are some of the most common misconceptions about linear regression?
Another common misconception is that the error term (or disturbance in econometrics parlance) and the residuals are the same thing. The error term is a random variable in the true model or data genera
What are some of the most common misconceptions about linear regression? Another common misconception is that the error term (or disturbance in econometrics parlance) and the residuals are the same thing. The error term is a random variable in the true model or data generating process, and is often assumed to follow a certain distribution, whereas the residuals are the deviations of the observed data from the fitted model. As such, the residuals can be considered to be estimates of the errors.
What are some of the most common misconceptions about linear regression? Another common misconception is that the error term (or disturbance in econometrics parlance) and the residuals are the same thing. The error term is a random variable in the true model or data genera
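A tiny simulation makes the error/residual distinction tangible: the errors below are drawn (and therefore known), the residuals are computed from the fitted line, and the two are close but not identical (all numbers are invented):

import numpy as np

rng = np.random.default_rng(9)
n = 50
x = rng.uniform(0, 10, n)
errors = rng.normal(scale=2.0, size=n)       # the (normally unobservable) error terms
y = 1.0 + 0.5 * x + errors                   # the data generating process

b0, b1 = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
residuals = y - (b0 + b1 * x)                # deviations from the *fitted* line

print("corr(errors, residuals) :", round(np.corrcoef(errors, residuals)[0, 1], 3))
print("max |error - residual|  :", round(np.max(np.abs(errors - residuals)), 3))
print("sum of residuals        :", round(residuals.sum(), 8),
      "(about 0 by construction; the errors need not sum to 0)")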
2,893
What are some of the most common misconceptions about linear regression?
The most common misconception I encounter is that linear regression assumes normality of errors. It doesn't. Normality is useful in connection with some aspects of linear regression e.g. small sample properties such as confidence limits of coefficients. Even for these things there are asymptotic values available for non-normal distributions. The second most common is a cluster of confusion with regards to endogeneity, e.g. not being careful with feedback loops. If there's a feedback loop from Y back to X it's an issue.
What are some of the most common misconceptions about linear regression?
The most common misconception I encounter is that linear regression assumes normality of errors. It doesn't. Normality is useful in connection with some aspects of linear regression e.g. small sample
What are some of the most common misconceptions about linear regression? The most common misconception I encounter is that linear regression assumes normality of errors. It doesn't. Normality is useful in connection with some aspects of linear regression e.g. small sample properties such as confidence limits of coefficients. Even for these things there are asymptotic values available for non-normal distributions. The second most common is a cluster of confusion with regards to endogeneity, e.g. not being careful with feedback loops. If there's a feedback loop from Y back to X it's an issue.
What are some of the most common misconceptions about linear regression? The most common misconception I encounter is that linear regression assumes normality of errors. It doesn't. Normality is useful in connection with some aspects of linear regression e.g. small sample
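A quick simulation in support of the first point: with heavily skewed (exponential) errors, the OLS slope is still unbiased; normality mainly matters for exact small-sample inference. The sample sizes and true coefficients below are arbitrary.

import numpy as np

rng = np.random.default_rng(10)
n, true_slope, n_sim = 200, 1.5, 2000
slopes = np.empty(n_sim)
for i in range(n_sim):
    x = rng.uniform(0, 1, n)
    e = rng.exponential(1.0, n) - 1.0        # mean-zero but heavily skewed errors
    y = 2.0 + true_slope * x + e
    slopes[i] = np.polyfit(x, y, 1)[0]

print(f"mean OLS slope over {n_sim} simulations: {slopes.mean():.3f}  (true value {true_slope})")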
2,894
What are some of the most common misconceptions about linear regression?
The one I've often seen is a misconception on applicability of linear regression in certain use cases, in practice. For example, let us say that the variable that we are interested in is count of something (example: visitors on website) or ratio of something (example: conversion rates). In such cases, the variable can be better modeled by using link functions like Poisson (counts), Beta (ratios) etc. So using generalized model with more appropriate link function is more suitable. But just because the variable is not categorical, I've seen people starting with simple linear regression (link function = identity). Even if we disregard the accuracy implications, the modeling assumptions are a problem here.
What are some of the most common misconceptions about linear regression?
The one I've often seen is a misconception on applicability of linear regression in certain use cases, in practice. For example, let us say that the variable that we are interested in is count of some
What are some of the most common misconceptions about linear regression? The one I've often seen is a misconception on applicability of linear regression in certain use cases, in practice. For example, let us say that the variable that we are interested in is count of something (example: visitors on website) or ratio of something (example: conversion rates). In such cases, the variable can be better modeled by using link functions like Poisson (counts), Beta (ratios) etc. So using generalized model with more appropriate link function is more suitable. But just because the variable is not categorical, I've seen people starting with simple linear regression (link function = identity). Even if we disregard the accuracy implications, the modeling assumptions are a problem here.
What are some of the most common misconceptions about linear regression? The one I've often seen is a misconception on applicability of linear regression in certain use cases, in practice. For example, let us say that the variable that we are interested in is count of some
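A hedged statsmodels sketch of the count example: the OLS line can happily predict negative visitor counts, while a Poisson GLM with a log link cannot (the "advertising spend"/"visitor count" framing and all coefficients are invented):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 500
x = rng.uniform(0, 3, n)                        # e.g. advertising spend
counts = rng.poisson(np.exp(0.2 + 0.8 * x))     # e.g. visitors on a website

X = sm.add_constant(x)
ols = sm.OLS(counts, X).fit()
poisson = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

print("OLS produces negative predicted counts :", bool(ols.predict(X).min() < 0))
print("Poisson GLM predictions all positive   :", bool((poisson.predict(X) > 0).all()))
print("Poisson coefficients (log link)        :", np.round(poisson.params, 2), " true: (0.2, 0.8)")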
2,895
What are some of the most common misconceptions about linear regression?
An error that I made was to assume symmetry of X and Y in OLS. For instance, if I assume a linear relation $$ Y = a \, X + b$$ with $a$ and $b$ estimated by my software using OLS, it is tempting to believe that regressing X on Y with OLS will give the coefficients $$ X = \frac{1}{a} \, Y - \frac{b}{a},$$ which is wrong. Maybe this is also related to the difference between OLS and total least squares or the first principal component.
What are some of the most common misconceptions about linear regression?
An error that I made is to assume a symmetry of X and Y in the OLS. For instance, if I assume a linear relation $$ Y = a \, X + b$$ with a and b given by my software using OLS, then I believe that ass
What are some of the most common misconceptions about linear regression? An error that I made is to assume a symmetry of X and Y in the OLS. For instance, if I assume a linear relation $$ Y = a \, X + b$$ with a and b given by my software using OLS, then I believe that assuming X as a function of Y will give using OLS the coefficients: $$ X = \frac{1}{a} \, Y - \frac{b}{a}$$ that is wrong. Maybe this is also related to the difference between OLS and total least square or first principal component.
What are some of the most common misconceptions about linear regression? An error that I made is to assume a symmetry of X and Y in the OLS. For instance, if I assume a linear relation $$ Y = a \, X + b$$ with a and b given by my software using OLS, then I believe that ass
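A short NumPy check of exactly this point: the slope from regressing X on Y is not the reciprocal of the slope from regressing Y on X; the product of the two OLS slopes is R^2, not 1 (the true slope of 2 is arbitrary):

import numpy as np

rng = np.random.default_rng(12)
x = rng.normal(size=10_000)
y = 2.0 * x + rng.normal(size=10_000)      # Y = a*X + noise with a = 2

a_yx = np.polyfit(x, y, 1)[0]              # slope of Y regressed on X
a_xy = np.polyfit(y, x, 1)[0]              # slope of X regressed on Y
r2 = np.corrcoef(x, y)[0, 1] ** 2

print(f"slope(Y~X) = {a_yx:.3f}   slope(X~Y) = {a_xy:.3f}   1/slope(Y~X) = {1 / a_yx:.3f}")
print(f"slope(Y~X) * slope(X~Y) = {a_yx * a_xy:.3f} = R^2 = {r2:.3f}, not 1")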
2,896
What are some of the most common misconceptions about linear regression?
Here is one I think is frequently overlooked by researchers: variable interaction. Researchers often look at isolated betas of individual predictors, and often don't even specify interaction terms. But in the real world things interact. Without proper specification of the relevant interaction terms, you don't know how your "predictors" combine to form the outcome. And if you want to be diligent and specify all interactions, the number of model terms explodes. From my calculations, you can investigate only about 4 variables and their interactions with 100 subjects. If you add one more variable you can overfit very easily.
What are some of the most common misconceptions about linear regression?
Here is one I think is frequently overlooked by researchers: Variable interaction: researchers often look at isolated betas of individual predictors, and often don't even specify interaction terms. B
What are some of the most common misconceptions about linear regression? Here is one I think is frequently overlooked by researchers: Variable interaction: researchers often look at isolated betas of individual predictors, and often don't even specify interaction terms. But in real world things interact. Without proper specification of all possible interaction terms, you don't know how your "predictors" engage together into forming an outcome. And if you want to be diligent and specify all interactions, the number of predictors will explode. From my calculations you can investigate only 4 variables and their interactions with 100 subjects. If you add one more variable you can overfit very easily.
What are some of the most common misconceptions about linear regression? Here is one I think is frequently overlooked by researchers: Variable interaction: researchers often look at isolated betas of individual predictors, and often don't even specify interaction terms. B
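To put a number on the "explosion", here is a minimal sketch counting model terms when every interaction (of every order) among p single-column predictors is included; with a rough 5-10 observations-per-term rule of thumb, 100 subjects indeed support only about 4 such variables. (The rule of thumb is an assumption, not part of the original answer.)

from itertools import combinations

def n_terms(p):
    # main effects plus all interaction terms of every order = 2**p - 1
    return sum(len(list(combinations(range(p), k))) for k in range(1, p + 1))

for p in range(2, 9):
    print(f"{p} predictors -> {n_terms(p):3d} model terms "
          f"(~{100 // n_terms(p)} observations per term with n = 100)")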
2,897
What are some of the most common misconceptions about linear regression?
Another common misconception is that the estimates (fitted values) are invariant to transformations. In general they are not: $$f(\hat{y}_i) \neq \widehat{f(y_i)},$$ where $\hat{y}_i = \vec{x}_i ^T \hat{\beta}$ is the fitted regression value based on your estimated regression coefficients. If invariance under monotonic (not necessarily linear) functions $f(\cdot)$ is what you want, then what you want is quantile regression. In linear regression the equality holds for linear functions $f$, but for non-linear functions (e.g. $\log(\cdot)$) it does not; in quantile regression, by contrast, it holds for any monotonic function. This comes up all the time when you log-transform your data, fit a linear regression, then exponentiate the fitted value and read that as the regression prediction. That isn't the mean; it is the median (if things are truly log-normally distributed).
What are some of the most common misconceptions about linear regression?
Another common misconception is that the estimates (fitted values) are not invariant to transformations, e.g. $$f(\hat{y}_i) \neq \widehat{f(y_i)}$$ in general, where $\hat{y}_i = \vec{x}_i ^T \hat{\
What are some of the most common misconceptions about linear regression? Another common misconception is that the estimates (fitted values) are not invariant to transformations, e.g. $$f(\hat{y}_i) \neq \widehat{f(y_i)}$$ in general, where $\hat{y}_i = \vec{x}_i ^T \hat{\beta}$, the fitted regression value based on your estimated regression coefficients. If this is what you want for monotonic functions $f(\cdot)$ not necessarily linear, then what you want is a quantile regression. The equality above holds in linear regression for linear functions but non-linear functions (e.g. $log(\cdot)$) this will not hold. However, this will hold for any monotonic function in quantile regression. This comes up all the time when you do a log transform of your data, fit a linear regression, then exponentiate the fitted value and people read that as the regression. This isn't the mean, this is the median (if things are truly log-normally distributed).
What are some of the most common misconceptions about linear regression? Another common misconception is that the estimates (fitted values) are not invariant to transformations, e.g. $$f(\hat{y}_i) \neq \widehat{f(y_i)}$$ in general, where $\hat{y}_i = \vec{x}_i ^T \hat{\
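The log-transform example is easy to simulate. Here is a rough NumPy sketch (the coefficients and the evaluation point x = 0.5 are arbitrary) showing that exponentiating the fitted log-scale regression recovers the conditional median, while the conditional mean needs the extra exp(sigma^2/2) factor:

import numpy as np

rng = np.random.default_rng(13)
n, sigma = 200_000, 1.0
x = rng.uniform(0, 1, n)
log_y = 0.5 + 1.0 * x + rng.normal(scale=sigma, size=n)   # linear model on the log scale
y = np.exp(log_y)                                         # so y | x is log-normal

slope, intercept = np.polyfit(x, log_y, 1)                # fit on the log scale
x0 = 0.5
near_x0 = np.abs(x - x0) < 0.01                           # observations with x close to 0.5

print(f"exp(fitted log-regression) at x=0.5 : {np.exp(intercept + slope * x0):7.2f}")
print(f"empirical median of y near x=0.5    : {np.median(y[near_x0]):7.2f}")
print(f"empirical mean of y near x=0.5      : {y[near_x0].mean():7.2f}"
      f"   (about exp(intercept + slope*x0 + sigma**2/2) = {np.exp(intercept + slope * x0 + sigma**2 / 2):.2f})")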
2,898
Is standardization needed before fitting logistic regression?
Standardization isn't required for logistic regression. The main goal of standardizing features is to help convergence of the technique used for optimization. For example, if you use Newton-Raphson to maximize the likelihood, standardizing the features makes the convergence faster. Otherwise, you can run your logistic regression without any standardization treatment on the features.
Is standardization needed before fitting logistic regression?
Standardization isn't required for logistic regression. The main goal of standardizing features is to help convergence of the technique used for optimization. For example, if you use Newton-Raphson to
Is standardization needed before fitting logistic regression? Standardization isn't required for logistic regression. The main goal of standardizing features is to help convergence of the technique used for optimization. For example, if you use Newton-Raphson to maximize the likelihood, standardizing the features makes the convergence faster. Otherwise, you can run your logistic regression without any standardization treatment on the features.
Is standardization needed before fitting logistic regression? Standardization isn't required for logistic regression. The main goal of standardizing features is to help convergence of the technique used for optimization. For example, if you use Newton-Raphson to
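A small statsmodels sketch of this answer's point (the "income" variable and its scale are invented): fitting the logistic regression on the raw and on the standardized feature gives identical fitted probabilities; only the per-unit interpretation of the coefficient changes.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(14)
n = 5000
income = rng.normal(50_000, 15_000, n)                 # a feature on a large scale
p = 1 / (1 + np.exp(-(-5 + 1e-4 * income)))
y = rng.binomial(1, p)

fit_raw = sm.Logit(y, sm.add_constant(income)).fit(disp=0)
z = (income - income.mean()) / income.std()
fit_std = sm.Logit(y, sm.add_constant(z)).fit(disp=0)

print("raw-scale slope    :", fit_raw.params[1])
print("standardized slope :", fit_std.params[1], "about raw slope * sd =", fit_raw.params[1] * income.std())
print("identical fitted probabilities:",
      np.allclose(fit_raw.predict(sm.add_constant(income)), fit_std.predict(sm.add_constant(z))))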
2,899
Is standardization needed before fitting logistic regression?
If you use logistic regression with a LASSO or ridge penalty (as the Weka Logistic class does), you should. As Hastie, Tibshirani and Friedman point out (page 63 of the book, page 82 of the PDF): The ridge solutions are not equivariant under scaling of the inputs, and so one normally standardizes the inputs before solving. This thread makes the same point.
Is standardization needed before fitting logistic regression?
If you use logistic regression with LASSO or ridge regression (as Weka Logistic class does) you should. As Hastie,Tibshirani and Friedman points out (page 82 of the pdf or at page 63 of the book):
Is standardization needed before fitting logistic regression? If you use logistic regression with LASSO or ridge regression (as Weka Logistic class does) you should. As Hastie,Tibshirani and Friedman points out (page 82 of the pdf or at page 63 of the book): The ridge solutions are not equivariant under scaling of the inputs, and so one normally standardizes the inputs before solving. Also this thread does.
Is standardization needed before fitting logistic regression? If you use logistic regression with LASSO or ridge regression (as Weka Logistic class does) you should. As Hastie,Tibshirani and Friedman points out (page 82 of the pdf or at page 63 of the book):
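The quoted non-equivariance is easy to demonstrate; a hedged scikit-learn sketch (the data and the alpha value are arbitrary): rescaling one column leaves OLS predictions untouched but changes the ridge fit, which is why inputs are normally standardized before penalized fits.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(15)
n = 200
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.5, size=n)

X_rescaled = X.copy()
X_rescaled[:, 0] *= 1000.0      # express the first feature in different units

for name, model in [("OLS", LinearRegression()), ("ridge(alpha=10)", Ridge(alpha=10.0))]:
    pred_orig = model.fit(X, y).predict(X)
    pred_resc = model.fit(X_rescaled, y).predict(X_rescaled)
    print(f"{name:15s} predictions unchanged by rescaling: {np.allclose(pred_orig, pred_resc)}")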
2,900
Is standardization needed before fitting logistic regression?
@Aymen is right, you don't need to normalize your data for logistic regression. (For more general information, it may help to read through this CV thread: When should you center your data & when should you standardize?; you might also note that your transformation is more commonly called 'normalizing', see: How to verify a distribution is normalized?) Let me address some other points in the question. It is worth noting here that in logistic regression your coefficients indicate the effect of a one-unit change in your predictor variable on the log odds of 'success'. The effect of transforming a variable (such as by standardizing or normalizing) is to change what we are calling a 'unit' in the context of our model. Your raw $x$ data varied across some number of units in the original metric. After you normalized, your data ranged from $0$ to $1$. That is, a change of one unit now means going from the lowest valued observation to the highest valued observation. The amount of increase in the log odds of success has not changed. From these facts, I suspect that your first variable (store1) spanned $133/37\approx 3.6$ original units, and your second variable (store2) spanned only $11/13\approx 0.85$ original units.
Is standardization needed before fitting logistic regression?
@Aymen is right, you don't need to normalize your data for logistic regression. (For more general information, it may help to read through this CV thread: When should you center your data & when shou
Is standardization needed before fitting logistic regression? @Aymen is right, you don't need to normalize your data for logistic regression. (For more general information, it may help to read through this CV thread: When should you center your data & when should you standardize?; you might also note that your transformation is more commonly called 'normalizing', see: How to verify a distribution is normalized?) Let me address some other points in the question. It is worth noting here that in logistic regression your coefficients indicate the effect of a one-unit change in your predictor variable on the log odds of 'success'. The effect of transforming a variable (such as by standardizing or normalizing) is to change what we are calling a 'unit' in the context of our model. Your raw $x$ data varied across some number of units in the original metric. After you normalized, your data ranged from $0$ to $1$. That is, a change of one unit now means going from the lowest valued observation to the highest valued observation. The amount of increase in the log odds of success has not changed. From these facts, I suspect that your first variable (store1) spanned $133/37\approx 3.6$ original units, and your second variable (store2) spanned only $11/13\approx 0.85$ original units.
Is standardization needed before fitting logistic regression? @Aymen is right, you don't need to normalize your data for logistic regression. (For more general information, it may help to read through this CV thread: When should you center your data & when shou
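The "a unit is now the whole observed range" point can be verified directly. In this made-up example (the variable name store1 and all coefficients are invented), min-max normalizing the predictor multiplies the fitted log-odds coefficient by the observed range while the fitted model itself is unchanged.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(16)
n = 2000
store1 = rng.uniform(10, 50, n)                              # hypothetical raw predictor
p = 1 / (1 + np.exp(-(-3 + 0.1 * store1)))
y = rng.binomial(1, p)

fit_raw = sm.Logit(y, sm.add_constant(store1)).fit(disp=0)
store1_01 = (store1 - store1.min()) / (store1.max() - store1.min())   # normalized to [0, 1]
fit_01 = sm.Logit(y, sm.add_constant(store1_01)).fit(disp=0)

print("log-odds change per raw unit      :", fit_raw.params[1])
print("log-odds change per [0, 1] 'unit' :", fit_01.params[1])
print("ratio, about the observed range   :", fit_01.params[1] / fit_raw.params[1],
      "vs", store1.max() - store1.min())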