1,901
What's the difference between correlation and simple linear regression?
All of the given answers so far provide important insights, but it should not be forgotten that you can transform the parameters of one into the other:
Regression: $y = mx + b$
Connection between regression parameters and correlation, covariance, variance, standard deviation and means:
$$m = \frac{Cov(y, x)}{Var(x)} = \frac{Cor(y, x) \cdot Sd(y)}{Sd(x)}$$
$$b = \bar{y} - m\bar{x}$$
So you can transform both into each other by scaling and shifting their parameters. An example in R:
y <- c(4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14)
x <- c(4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69)
lm(y ~ x)
##
## Call:
## lm(formula = y ~ x)
##
## Coefficients:
## (Intercept)            x
##      6.5992      -0.3362
(m <- cov(y, x) / var(x))    # slope of regression
## [1] -0.3362361
cor(y, x) * sd(y) / sd(x)    # the same with correlation
## [1] -0.3362361
mean(y) - m * mean(x)        # intercept
## [1] 6.599196
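A small additional sketch (not part of the original answer): the identity also runs in the other direction, i.e. the correlation can be recovered from the fitted slope, since $Cor(y, x) = m \cdot Sd(x) / Sd(y)$:
m <- unname(coef(lm(y ~ x))[2])           # fitted slope from the regression above
all.equal(m * sd(x) / sd(y), cor(y, x))   # recovers the correlation; TRUE up to floating point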
1,902
What's the difference between correlation and simple linear regression?
Correlation analysis only quantifies the relation between two variables, ignoring which is the dependent variable and which is the independent one. But before applying regression you have to clarify which variable's impact on the other you want to check.
1,903
What's the difference between correlation and simple linear regression?
From correlation we can only get an index describing the linear relationship between two variables; in regression we can predict the relationship between more than two variables and can use it to identify which variables x can predict the outcome variable y.
1,904
What's the difference between correlation and simple linear regression?
Quoting Altman DG, "Practical statistics for medical research" Chapman & Hall, 1991, page 321: "Correlation reduces a set of data to a single number that bears no direct relation to the actual data. Regression is a much more useful method, with results which are clearly related to the measurement obtained. The strength of the relation is explicit, and uncertainty can be seen clearly from confidence intervals or prediction intervals"
1,905
What's the difference between correlation and simple linear regression?
Regression analysis is a technique to study the cause-and-effect relation between two variables, whereas correlation analysis is a technique to quantify the relation between two variables.
1,906
What's the difference between correlation and simple linear regression?
Correlation is an index (just one number) of the strength of a relationship. Regression is an analysis (estimation of parameters of a model and statistical test of their significance) of the adequacy of a particular functional relationship. The size of the correlation is related to how accurate the predictions of the regression will be.
1,907
Is this really how p-values work? Can a million research papers per year be based on pure randomness?
This is certainly a valid concern, but this isn't quite right. If 1,000,000 studies are done and all the null hypotheses are true then approximately 50,000 will have significant results at p < 0.05. That's what a p value means. However, the null is essentially never strictly true. But even if we loosen it to "almost true" or "about right" or some such, that would mean that the 1,000,000 studies would all have to be about things like "The relationship between social security number and IQ" or "Is the length of your toes related to the state of your birth?", and so on. Nonsense. One trouble is, of course, that we don't know which nulls are true. Another problem is the one @Glen_b mentioned in his comment - the file drawer problem. This is why I so much like Robert Abelson's ideas that he puts forth in Statistics as Principled Argument. That is, statistical evidence should be part of a principled argument as to why something is the case and should be judged on the MAGIC criteria: Magnitude: How big is the effect? Articulation: Is it full of "ifs", "ands" and "buts"? (that's bad) Generality: How widely does it apply? Interestingness. Credibility: Incredible claims require a lot of evidence.
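To make the 5% figure concrete, here is a small simulation sketch (my own illustration, not part of the original answer): when the null hypothesis is exactly true, roughly $\alpha$ of all tests come out "significant", whatever the studies are about.
set.seed(1)
p_values <- replicate(10000, t.test(rnorm(30), rnorm(30))$p.value)  # two groups with no true difference
mean(p_values < 0.05)                                               # close to 0.05 by construction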
1,908
Is this really how p-values work? Can a million research papers per year be based on pure randomness?
Aren't all researches around the world somewhat like the "infinite monkey theorem" monkeys? Remember, scientists are critically NOT like infinite monkeys, because their research behavior--particularly experimentation--is anything but random. Experiments are (at least supposed to be) incredibly carefully controlled manipulations and measurements that are based on mechanistically informed hypotheses that build on a large body of previous research. They are not just random shots in the dark (or monkey fingers on typewriters). Consider that there are 23887 universities in the world. If each university has 1000 students, that's 23 millions of students each year. Let's say that each year, each student does at least one research, That estimate for the number of published research findings has got to be way, way off. I don't know if there are 23 million "university students" (does that just include universities, or colleges too?) in the world, but I know that the vast majority of them never publish any scientific findings. I mean, most of them are not science majors, and even most science majors never publish findings. A more likely estimate (some discussion) for the number of scientific publications each year is about 1-2 million. Doesn't that mean that even if all the research samples were pulled from random population, about 5% of them would "reject the null hypothesis as invalid". Wow. Think of that. That's about a million research papers per year getting published due to "significant" results. Keep in mind, not all published research has statistics where significance is right at the p = 0.05 value. Often one sees p values like p<0.01 or even p<0.001. I don't know what the "mean" p value is over a million papers, of course. If this is how it works, this is scary. It means that a lot of the "scientific truth" we take for granted is based on pure randomness. Also keep in mind, scientists are really not supposed to take a small number of results at p around 0.05 as "scientific truth". Not even close. Scientists are supposed to integrate over many studies, each of which has appropriate statistical power, plausible mechanism, reproducibility, magnitude of effect, etc., and incorporate that into a tentative model of how some phenomenon works. But, does this mean that almost all of science is correct? No way. Scientists are human, and fall prey to biases, bad research methodology (including improper statistical approaches), fraud, simple human error, and bad luck. These factors, rather than the p<0.05 convention, are probably the dominant reasons why a healthy portion of published science is wrong. In fact, let's just cut right to the chase, and make an even "scarier" statement than what you have put forth: Why Most Published Research Findings Are False
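Putting the answer's own numbers together in a back-of-the-envelope sketch (the 1.5 million figure is a mid-range reading of the 1-2 million estimate above, and the assumption that every tested null is true is purely illustrative):
n_papers <- 1.5e6                 # assumed annual publication count
alphas   <- c(0.05, 0.01, 0.001)
n_papers * alphas                 # expected false positives per year IF every tested null were true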
1,909
Is this really how p-values work? Can a million research papers per year be based on pure randomness?
Your understanding of $p$-values seems to be correct. Similar concerns are voiced quite often. What makes sense to compute in your example is not only the number of studies out of 23 mln that arrive at false positives, but also the proportion of studies that obtained a significant effect and are nevertheless false. This is called the "false discovery rate". It is not equal to $\alpha$ and depends on various other things such as e.g. the proportion of nulls across your 23 mln studies. This is of course impossible to know, but one can make guesses. Some people say that the false discovery rate is at least 30%. See e.g. this recent discussion of a 2014 paper by David Colquhoun: Confusion with false discovery rate and multiple testing (on Colquhoun 2014). I have been arguing there against this "at least 30%" estimate, but I do agree that in some fields of research the false discovery rate can be a lot higher than 5%. This is indeed worrisome. I don't think that saying that the null is almost never true helps here; Type S and Type M errors (as introduced by Andrew Gelman) are not much better than type I/II errors. I think what it really means is that one should never trust an isolated "significant" result. This is even true in high energy physics with their super-stringent $\alpha\approx 10^{-7}$ criterion; we believe the discovery of the Higgs boson partially because it fits so well to the theory prediction. This is of course much much MUCH more so in some other disciplines with much lower conventional significance criteria ($\alpha=0.05$) and a lack of very specific theoretical predictions. Good studies, at least in my field, do not report an isolated $p<0.05$ result. Such a finding would need to be confirmed by another (at least partially independent) analysis, and by a couple of other independent experiments. If I look at the best studies in my field, I always see a whole bunch of experiments that together point at a particular result; their "cumulative" $p$-value (that is never explicitly computed) is very low. To put it differently, I think that if a researcher gets some $p<0.05$ finding, it only means that he or she should go and investigate it further. It definitely does not mean that it should be regarded as "scientific truth".
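As a rough sketch of the false-discovery-rate logic (my own illustration; the 0.9, 0.05 and 0.8 figures are arbitrary assumptions, not values from this answer), the proportion of "significant" results that are false depends on the share of true nulls and on power, not just on $\alpha$:
prop_true_nulls <- 0.9   # assumed share of studies where the null really is true
alpha <- 0.05            # significance criterion
power <- 0.8             # assumed power when the null is false
false_pos <- prop_true_nulls * alpha
true_pos  <- (1 - prop_true_nulls) * power
false_pos / (false_pos + true_pos)   # false discovery rate; here 0.045 / 0.125 = 0.36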
1,910
Is this really how p-values work? Can a million research papers per year be based on pure randomness?
Your concern is exactly the concern that underlies a great deal of the current discussion in science about reproducibility. However, the true state of affairs is a bit more complicated than you suggest. First, let's establish some terminology. Null hypothesis significance testing can be understood as a signal detection problem -- the null hypothesis is either true or false, and you can either choose to reject or retain it. The combination of two decisions and two possible "true" states of affairs results in the familiar two-by-two table that most people see at some point when they're first learning statistics: rejecting a true null hypothesis is a Type I error, retaining a false one is a Type II error, and the other two cells are correct decisions. Scientists who use null hypothesis significance testing are attempting to maximize the number of correct decisions and minimize the number of incorrect decisions. Working scientists are also trying to publish their results so that they can get jobs and advance their careers. Of course, bear in mind that, as many other answerers have already mentioned, the null hypothesis is not chosen at random -- instead, it is usually chosen specifically because, based on prior theory, the scientist believes it to be false. Unfortunately, it is hard to quantify the proportion of times that scientists are correct in their predictions, but bear in mind that, when scientists are dealing with the "$H_0$ is false" column, they should be worried about false negatives rather than false positives. You, however, seem to be concerned about false positives, so let's focus on the "$H_0$ is true" column. In this situation, what is the probability of a scientist publishing a false result?
Publication bias
As long as the probability of publication does not depend on whether the result is "significant", then the probability is precisely $\alpha$ -- .05, and sometimes lower depending on the field. The problem is that there is good evidence that the probability of publication does depend on whether the result is significant (see, for example, Stern & Simes, 1997; Dwan et al., 2008), either because scientists only submit significant results for publication (the so-called file-drawer problem; Rosenthal, 1979) or because non-significant results are submitted for publication but don't make it through peer review. The general issue of the probability of publication depending on the observed $p$-value is what is meant by publication bias. If we take a step back and think about the implications of publication bias for a broader research literature, a research literature affected by publication bias will still contain true results -- sometimes the null hypothesis that a scientist claims to be false really will be false, and, depending on the degree of publication bias, sometimes a scientist will correctly claim that a given null hypothesis is true. However, the research literature will also be cluttered up by too large a proportion of false positives (i.e., studies in which the researcher claims that the null hypothesis is false when really it's true).
Researcher degrees of freedom
Publication bias is not the only way that, under the null hypothesis, the probability of publishing a significant result will be greater than $\alpha$. When used improperly, certain areas of flexibility in the design of studies and analysis of data, which are sometimes labeled researcher degrees of freedom (Simmons, Nelson, & Simonsohn, 2011), can increase the rate of false positives, even when there is no publication bias.
For example, if we assume that, upon obtaining a non-significant result, all (or some) scientists will exclude one outlying data point if this exclusion will change the non-significant result into a significant one, the rate of false positives will be greater than $\alpha$. Given the presence of a large enough number of questionable research practices, the rate of false positives can go as high as .60 even if the nominal rate was set at .05 (Simmons, Nelson, & Simonsohn, 2011). It's important to note that the improper use of researcher degrees of freedom (which is sometimes known as a questionable research practice; Martinson, Anderson, & de Vries, 2005) is not the same as making up data. In some cases, excluding outliers is the right thing to do, either because equipment fails or for some other reason. The key issue is that, in the presence of researcher degrees of freedom, the decisions made during analysis often depend on how the data turn out (Gelman & Loken, 2014), even if the researchers in question are not aware of this fact. As long as researchers use researcher degrees of freedom (consciously or unconsciously) to increase the probability of a significant result (perhaps because significant results are more "publishable"), the presence of researcher degrees of freedom will overpopulate a research literature with false positives in the same way as publication bias. An important caveat to the above discussion is that scientific papers (at least in psychology, which is my field) seldom consist of single results. More common are multiple studies, each of which involves multiple tests -- the emphasis is on building a larger argument and ruling out alternative explanations for the presented evidence. However, the selective presentation of results (or the presence of researcher degrees of freedom) can produce bias in a set of results just as easily as in a single result. There is evidence that the results presented in multi-study papers are often much cleaner and stronger than one would expect even if all the predictions of these studies were true (Francis, 2013).
Conclusion
Fundamentally, I agree with your intuition that null hypothesis significance testing can go wrong. However, I would argue that the true culprits producing a high rate of false positives are processes like publication bias and the presence of researcher degrees of freedom. Indeed, many scientists are well aware of these problems, and improving scientific reproducibility is a very active current topic of discussion (e.g., Nosek & Bar-Anan, 2012; Nosek, Spies, & Motyl, 2012). So you are in good company with your concerns, but I also think there are reasons for some cautious optimism.
References
Stern, J. M., & Simes, R. J. (1997). Publication bias: Evidence of delayed publication in a cohort study of clinical research projects. BMJ, 315(7109), 640–645. http://doi.org/10.1136/bmj.315.7109.640
Dwan, K., Altman, D. G., Arnaiz, J. A., Bloom, J., Chan, A., Cronin, E., … Williamson, P. R. (2008). Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE, 3(8), e3081. http://doi.org/10.1371/journal.pone.0003081
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641. http://doi.org/10.1037/0033-2909.86.3.638
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. http://doi.org/10.1177/0956797611417632
Martinson, B. C., Anderson, M. S., & de Vries, R. (2005). Scientists behaving badly. Nature, 435, 737–738. http://doi.org/10.1038/435737a
Gelman, A., & Loken, E. (2014). The statistical crisis in science. American Scientist, 102, 460–465.
Francis, G. (2013). Replication, statistical consistency, and publication bias. Journal of Mathematical Psychology, 57(5), 153–169. http://doi.org/10.1016/j.jmp.2013.02.003
Nosek, B. A., & Bar-Anan, Y. (2012). Scientific utopia: I. Opening scientific communication. Psychological Inquiry, 23(3), 217–243. http://doi.org/10.1080/1047840X.2012.692215
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615–631. http://doi.org/10.1177/1745691612459058
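To make the researcher-degrees-of-freedom point above concrete, here is a small simulation sketch (my own, not from the answer or the papers it cites): if a non-significant result triggers dropping the single most extreme observation and re-testing, the false positive rate ends up above the nominal $\alpha$ even though the null is true in every simulated study.
set.seed(42)
one_flexible_test <- function(n = 30) {
  x <- rnorm(n)                        # data generated under a true null (mean 0)
  p <- t.test(x)$p.value
  if (p >= 0.05) {                     # "flexible" analysis: drop the most extreme point and retry
    x <- x[-which.max(abs(x))]
    p <- t.test(x)$p.value
  }
  p < 0.05
}
mean(replicate(10000, one_flexible_test()))  # empirical false positive rate, above the nominal 0.05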
1,911
Is this really how p-values work? Can a million research papers per year be based on pure randomness?
A substantial check on the important issue raised in this question is that "scientific truth" is not based on individual, isolated publications. If a result is sufficiently interesting it will prompt other scientists to pursue the implications of the result. That work will tend to confirm or refute the original finding. There might be a 1/20 chance of rejecting a true null hypothesis in an individual study, but only a 1/400 chance of doing so twice in a row. If scientists simply repeated experiments until they found "significance" and then published their results, the problem might be as large as the OP suggests. But that's not how science works, at least in my nearly 50 years of experience in biomedical research. Furthermore, a publication is seldom about a single "significant" experiment but rather is based on a set of inter-related experiments (each required to be "significant" on its own) that together provide support for a broader, substantive hypothesis. A much larger problem comes from scientists who are too committed to their own hypotheses. They then may over-interpret the implications of individual experiments to support their hypotheses, engage in dubious data editing (like arbitrarily removing outliers), or (as I have seen and helped catch) just make up the data. Science, however, is a highly social process, regardless of the mythology about mad scientists hiding high up in ivory towers. The give and take among thousands of scientists pursuing their interests, based on what they have learned from others' work, is the ultimate institutional protection from false positives. False findings can sometimes be perpetuated for years, but if an issue is sufficiently important the process will eventually identify the erroneous conclusions.
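The 1/400 figure is just the product rule for independent replications; as a tiny sketch (assuming independent studies and $\alpha = 0.05$, as in the answer):
alpha <- 0.05
alpha^2   # false positive twice in a row: 0.0025, i.e. 1/400
alpha^3   # three times in a row: 0.000125, i.e. 1/8000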
1,912
Is this really how p-values work? Can a million research papers per year be based on pure randomness?
Just to add to the discussion, here is an interesting post and subsequent discussion about how people commonly misunderstand p-values. What should be retained in any case is that a p-value is just a measure of the strength of evidence against a given hypothesis. A p-value is definitely not a hard threshold below which something is "true" and above which it is only due to chance. As explained in the post referenced above: results are a combination of real effects and chance, it’s not either/or
1,913
Is this really how p-values work? Can a million research papers per year be based on pure randomness?
As also pointed out in the other answers, this will only cause problems if you selectively consider the positive results where the null hypothesis is ruled out. This is why scientists write review articles in which they consider previously published research results and try to develop a better understanding of the subject based on that. However, there still remains a problem due to the so-called "publication bias": scientists are more likely to write up an article about a positive result than about a negative one, and a paper on a negative result is more likely to get rejected for publication than a paper on a positive result. This is an especially big problem in fields where statistical tests are very important; the field of medicine is a notorious example. This is why it was made compulsory to register clinical trials before they are conducted (e.g. here). So, you must explain the setup, how the statistical analysis is going to be performed, etc., before the trial gets underway. The leading medical journals will refuse to publish papers if the trials they report on were not registered. Unfortunately, despite this measure, the system isn't working all that well.
1,914
Is this really how p-values work? Can a million research papers per year be based on pure randomness?
This is close to a very important fact about the scientific method: it emphasizes falsifiability. The philosophy of science which is most popular today has Karl Popper's concept of falsifiability as a cornerstone. The basic scientific process is thus: Anyone can claim any theory they want, at any time. Science will admit any theory which is "falsifiable." The most literal sense of that word is that, if anyone else doesn't like the claim, that person is free to spend the resources to disprove the claim. If you don't think argyle socks cure cancer, you are free to use your own medical ward to disprove it. Because this bar for entry is monumentally low, it is traditional that "Science" as a cultural group will not really entertain any idea until you have done a "good effort" to falsify your own theory. Acceptance of ideas tends to go in stages. You can get your concept into a journal article with one study and a rather low p-value. What that does buy you is publicity and some credibility. If someone is interested in your idea, such as if your science has engineering applications, they may want to use it. At that time, they are more likely to fund an additional round of falsification. This process goes forward, always with the same attitude: believe what you want, but to call it science, I need to be able to disprove it later. This low bar for entry is what allows it to be so innovative. So yes, there are a large number of theoretically "wrong" journal articles out there. However, the key is that every published article is in theory falsifiable, so at any point in time, someone could spend the money to test it. This is the key: journals contain not only things which pass a reasonable p-test, but they also contain the keys for others to dismantle it if the results turn out to be false.
1,915
Is this really how p-values work? Can a million research papers per year be based on pure randomness?
Is this how "science" is supposed to work? That's how a lot of social sciences work. No so much with physical sciences. Think of this: you typed your question on a computer. People were able to build these complicated beasts called computers using the knowledge of physics, chemistry and other fields of physical sciences. If the situation was as bad as you describe, none of the electronics would work. Or think of the things like a mass of an electron, which is known with insane precision. They pass through billions of logic gates in a computer over an over, and your computer still works and works for years. UPDATE: To respond to the down votes I received, I felt inspired to give you a couple of examples. The first one is from physics: Bystritsky, V. M., et al. "Measuring the astrophysical S factors and the cross sections of the p (d, γ) 3He reaction in the ultralow energy region using a zirconium deuteride target." Physics of Particles and Nuclei Letters 10.7 (2013): 717-722. As I wrote before, these physicist don't even pretend doing any statistics beyond computing the standard errors. There's a bunch of graphs and tables, not a single p-value or even confidence interval. The only evidence of statistics is the standard errors notes as $0.237 \pm 0.061$, for instance. My next example is from... psychology: Paustian-Underdahl, Samantha C., Lisa Slattery Walker, and David J. Woehr. "Gender and perceptions of leadership effectiveness: A meta-analysis of contextual moderators." Journal of Applied Psychology, 2014, Vol. 99, No. 6, 1129 –1145. These researchers have all the usual suspects: confidence intervals, p-values, $\chi^2$ etc. Now, look at some tables from papers and guess which papers they are from: That's the answer why in one case you need "cool" statistics and in another you don't: because the data is either crappy or not. When you have good data, you don't need much stats beyond standard errors. UPDATE2: @PatrickS.Forscher made an interesting statement in the comment: It is also true that social science theories are "softer" (less formal) than physics theories. I must disagree. In Economics and Finance the theories are not "soft" at all. You can randomly lookup a paper in these fields and get something like this: and so on. It's from Schervish, Mark J., Teddy Seidenfeld, and Joseph B. Kadane. "Extensions of expected utility theory and some limitations of pairwise comparisons." (2003). Does this look soft to you? I'm re-iterating my point here that when your theories are not good and the data is crappy, you can use the hardest math and still get a crappy result. In this paper they're talking about utilities, the concept like happiness and satisfaction - absolutely unobservable. It's like what is a utility of having a house vs. eating a cheeseburger? Presumably there's this function, where you can plug "eat cheeseburger" or "live in own house" and the function will spit out the answer in some units. As crazy as it sounds this is what modern ecnomics is built on, thank to von Neuman.
1,916
What is the best way to identify outliers in multivariate data?
Have a look at the mvoutlier package, which relies on ordered robust Mahalanobis distances, as suggested by @drknexus.
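For illustration, a minimal sketch of a call (aq.plot is the mvoutlier function as I recall it; X stands for a numeric data matrix, and the exact interface may differ, so check the package documentation):
library(mvoutlier)
aq.plot(X)   # adjusted quantile plot of robust Mahalanobis distances; flagged points are highlighted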
1,917
What is the best way to identify outliers in multivariate data?
I think Robin Girard's answer would work pretty well for 3 and possibly 4 dimensions, but the curse of dimensionality would prevent it working beyond that. However, his suggestion led me to a related approach which is to apply the cross-validated kernel density estimate to the first three principal component scores. Then a very high-dimensional data set can still be handled ok. In summary, for i=1 to n Compute a density estimate of the first three principal component scores obtained from the data set without Xi. Calculate the likelihood of Xi for the density estimated in step 1. call it Li. end for Sort the Li (for i=1,..,n) and the outliers are those with likelihood below some threshold. I'm not sure what would be a good threshold -- I'll leave that for whoever writes the paper on this! One possibility is to do a boxplot of the log(Li) values and see what outliers are detected at the negative end.
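A rough sketch of this recipe in R, assuming the ks package for the three-dimensional kernel density estimate (the package choice and the boxplot cut-off are my assumptions, not part of the original suggestion; X is a numeric data matrix):
library(ks)   # multivariate kernel density estimation
loo_pc_likelihood <- function(X, n_pc = 3) {
  n <- nrow(X)
  L <- numeric(n)
  for (i in 1:n) {
    pc <- prcomp(X[-i, , drop = FALSE], scale. = TRUE)      # PCA without observation i
    tr <- pc$x[, 1:n_pc, drop = FALSE]                      # scores of the remaining points
    te <- predict(pc, newdata = X[i, , drop = FALSE])[, 1:n_pc, drop = FALSE]
    L[i] <- kde(x = tr, eval.points = te)$estimate          # density of X_i under that fit
  }
  L
}
# boxplot(log(loo_pc_likelihood(X)))  # outlier candidates appear at the low end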
1,918
What is the best way to identify outliers in multivariate data?
You can find a pedagogical summary of the various methods available in (1). For some recent numerical comparisons of the various methods listed there, you can check (2) and (3). There are many older (and less exhaustive) numerical comparisons, typically found in books; you will find one on pages 142-143 of (4), for example. Note that all the methods discussed here have an open source R implementation, mainly through the rrcov package. (1) P. Rousseeuw and M. Hubert (2013). High-Breakdown Estimators of Multivariate Location and Scatter. (2) M. Hubert, P. Rousseeuw, K. Vakili (2013). Shape bias of robust covariance estimators: an empirical study. Statistical Papers. (3) K. Vakili and E. Schmitt (2014). Finding multivariate outliers with FastPCS. Computational Statistics & Data Analysis. (4) Maronna R. A., Martin R. D. and Yohai V. J. (2006). Robust Statistics: Theory and Methods. Wiley, New York.
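As a small, hedged illustration of the kind of call involved (CovMcd and its plot method are the rrcov functions as I recall them; X stands for a numeric data matrix):
library(rrcov)
fit <- CovMcd(X)   # Minimum Covariance Determinant estimate of location and scatter
plot(fit)          # distance plots based on robust Mahalanobis distances highlight outlying points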
1,919
What is the best way to identify outliers in multivariate data?
I didn't see anybody mention influence functions. I first saw this idea in Gnanadesikan's multivariate book. In one dimension an outlier is either an extremely large or an extremely small value. In multivariate analysis it is an observation removed from the bulk of the data. But what metric should we use to define extreme for the outlier? There are many choices. The Mahalanobis distance is just one. I think that looking for every type of outlier is futile and counterproductive. I would ask why do you care about the outlier? In estimating a mean they can have a great deal of influence on that estimate. Robust estimators downweight and accommodate outliers but they do not formally test for them. Now in regression, the outliers--like leverage points--could have large effects on the slope parameters in the model. With bivariate data they can unduly influence the estimated correlation coefficient and in three or more dimensions the multiple correlation coefficient. Influence functions were introduced by Hampel as a tool in robust estimation and Mallows wrote a nice unpublished paper advocating their use. The influence function is a function of the point you are at in n-dimensional space and the parameter. It essentially measures the difference between the parameter estimate with the point in the calculation and with the point left out. Rather than go to the trouble of doing the calculation of the two estimates and taking the difference, often you can derive a formula for it. Then the contours of constant influence tell you the direction that is extreme with respect to the estimate of this parameter and hence tell you where in the n-dimensional space to look for the outlier. For more you can look at my 1983 paper in the American Journal of Mathematical and Management Sciences titled "The influence function and its application to data validation." In data validation we wanted to look for outliers that affected the intended use of the data. My feeling is that you should direct your attention to outliers that greatly affect the parameters you are interested in estimating and not care so much about others that don't.
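As a toy illustration of the leave-one-out idea behind the empirical influence, here is a sketch for the correlation coefficient (my choice of parameter, purely for illustration):
# Leave-one-out influence of each observation on the correlation coefficient
influence_cor <- function(x, y) {
  r_full <- cor(x, y)
  sapply(seq_along(x), function(i) r_full - cor(x[-i], y[-i]))
}
# Observations with large absolute values move the estimate the most.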
1,920
What is the best way to identify outliers in multivariate data?
I would do some sort of "leave one out" testing algorithm (n is the number of data points): for i = 1 to n, compute a density estimate of the data set obtained by throwing $X_i$ away. (This density estimate should be done with some assumption if the dimension is high, for example a Gaussian assumption for which the density estimate is easy: mean and covariance.) Calculate the likelihood of $X_i$ for the density estimated in step 1; call it $L_i$. End for. Sort the $L_i$ (for i = 1, ..., n) and use a multiple hypothesis testing procedure to say which are not good ... This will work if n is sufficiently large ... You can also use a "leave k out" strategy, which can be more relevant when you have "groups" of outliers ...
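A small sketch of the Gaussian version of this recipe (the mvtnorm package and the plain sample mean/covariance are my assumptions; X is the n x p data matrix):
library(mvtnorm)
loo_gaussian_lik <- function(X) {
  n <- nrow(X)
  sapply(1:n, function(i) {
    Xi <- X[-i, , drop = FALSE]                      # data with observation i left out
    dmvnorm(X[i, ], mean = colMeans(Xi), sigma = cov(Xi))
  })
}
# Sort the returned likelihoods and apply your preferred multiple testing procedure.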
1,921
What is the best way to identify outliers in multivariate data?
You can find candidates for "outliers" among the support points of the minimum volume bounding ellipsoid. (Efficient algorithms to find these points in fairly high dimensions, both exactly and approximately, were invented in a spate of papers in the 1970's because this problem is intimately connected with a question in experimental design.)
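As a rough sketch of the idea (a Khachiyan-style multiplicative-weights iteration; the tolerance and the ranking by weight are my own simplifications, not a claim about the papers mentioned above; X is a numeric data matrix):
mvee_weights <- function(X, tol = 1e-4, max_iter = 1000) {
  n <- nrow(X); d <- ncol(X)
  Q <- t(cbind(X, 1))                 # lift the points to (d+1) dimensions
  u <- rep(1 / n, n)
  for (k in 1:max_iter) {
    M <- Q %*% (u * t(Q))             # Q diag(u) Q'
    g <- colSums(Q * solve(M, Q))     # q_i' M^{-1} q_i for each point
    j <- which.max(g)
    step <- (g[j] - d - 1) / ((d + 1) * (g[j] - 1))
    if (step < tol) break
    u <- (1 - step) * u
    u[j] <- u[j] + step
  }
  u                                   # large weights correspond to support points of the ellipsoid
}
# order(mvee_weights(X), decreasing = TRUE)[1:5]  # top candidate "outliers"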
1,922
What is the best way to identify outliers in multivariate data?
A novel approach I saw was in I. T. Jolliffe's Principal Component Analysis. You run a PCA on your data (note: PCA can be quite a useful data exploration tool in its own right), but instead of looking at the first few principal components (PCs), you plot the last few PCs. These PCs are the linear relationships between your variables with the smallest variance possible. Thus they detect "exact" or close-to-exact multivariate relationships in your data. A plot of the PC scores for the last PC will show outliers not easily detectable by looking individually at each variable. One example is for height and weight - someone who has "above average" height and "below average" weight would be detected by the last PC of height and weight (assuming these are positively correlated), even if their height and weight were not "extreme" individually (e.g. someone who was 180 cm and 60 kg).
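A minimal sketch of what this looks like in R (scaling the variables first is my assumption; X is a numeric data matrix):
pc <- prcomp(X, scale. = TRUE)
last <- pc$x[, ncol(pc$x)]  # scores on the last, smallest-variance component
boxplot(last)               # large |scores| break the near-exact linear relationship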
1,923
What is the best way to identify outliers in multivariate data?
It may be overkill, but you can train an unsupervised Random Forest on the data and use the object proximity measure to detect outliers. More details here.
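A hedged sketch with the randomForest package (the outlier() helper and the rough cut-off are my recollection of that package, not part of the answer; X is a numeric data matrix):
library(randomForest)
rf <- randomForest(x = X, proximity = TRUE)  # omitting y runs the forest in unsupervised mode
score <- outlier(rf$proximity)               # outlyingness computed from the proximity matrix
which(score > 10)                            # a rough conventional cut-off; adjust to taste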
1,924
What is the best way to identify outliers in multivariate data?
For moderate dimensions, like 3, then some sort of kernel cross-validation technique as suggested elsewhere seems reasonable and is the best I can come up with. For higher dimensions, I'm not sure that the problem is solvable; it lands pretty squarely into 'curse-of-dimensionality' territory. The issue is that distance functions tend to converge to very large values very quickly as you increase dimensionality, including distances derived from distributions. If you're defining an outlier as "a point with a comparatively large distance function relative to the others", and all your distance functions are beginning to converge because you're in a high-dimensional space, well, you're in trouble. Without some sort of distributional assumption that will let you turn it into a probabilistic classification problem, or at least some rotation that lets you separate your space into "noise dimensions" and "informative dimensions", I think that the geometry of high-dimensional spaces is going to prohibit any easy -- or at least robust -- identification of outliers.
1,925
What is the best way to identify outliers in multivariate data?
I'm not sure what you mean when you say you aren't thinking of a regression problem but of "true multivariate data". My initial response would be to calculate the Mahalanobis distance since it doesn't require that you specify a particular IV or DV, but at its core (as far as I understand it) it is related to a leverage statistic.
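For reference, the classical (non-robust) version is a one-liner in base R; the chi-squared cut-off assumes approximate multivariate normality, and X stands for a numeric data matrix:
d2 <- mahalanobis(X, center = colMeans(X), cov = cov(X))  # squared Mahalanobis distances
which(d2 > qchisq(0.975, df = ncol(X)))                   # conventional flagging rule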
1,926
What is the best way to identify outliers in multivariate data?
I'm not aware that anyone is doing this, but I generally like to try dimensionality reduction when I have a problem like this. You might look into a method from manifold learning or non-linear dimensionality reduction. An example would be a Kohonen map. A good reference for R is "Self- and Super-organizing Maps in R: The kohonen Package".
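A hedged sketch with the kohonen package (the grid size and the use of the per-observation distance to its mapped unit are my assumptions; X is a numeric data matrix):
library(kohonen)
sm <- som(scale(X), grid = somgrid(5, 5, "hexagonal"))
boxplot(sm$distances)   # observations far from their best-matching unit are candidate outliers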
1,927
What is the best way to identify outliers in multivariate data?
My first response would be that if you can do multivariate regression on the data, then to use the residuals from that regression to spot outliers. (I know you said it's not a regression problem, so this might not help you, sorry !) I'm copying some of this from a Stackoverflow question I've previously answered which has some example R code First, we'll create some data, and then taint it with an outlier; > testout<-data.frame(X1=rnorm(50,mean=50,sd=10),X2=rnorm(50,mean=5,sd=1.5),Y=rnorm(50,mean=200,sd=25)) > #Taint the Data > testout$X1[10]<-5 > testout$X2[10]<-5 > testout$Y[10]<-530 > testout X1 X2 Y 1 44.20043 1.5259458 169.3296 2 40.46721 5.8437076 200.9038 3 48.20571 3.8243373 189.4652 4 60.09808 4.6609190 177.5159 5 50.23627 2.6193455 210.4360 6 43.50972 5.8212863 203.8361 7 44.95626 7.8368405 236.5821 8 66.14391 3.6828843 171.9624 9 45.53040 4.8311616 187.0553 10 5.00000 5.0000000 530.0000 11 64.71719 6.4007245 164.8052 12 54.43665 7.8695891 192.8824 13 45.78278 4.9921489 182.2957 14 49.59998 4.7716099 146.3090 <snip> 48 26.55487 5.8082497 189.7901 49 45.28317 5.0219647 208.1318 50 44.84145 3.6252663 251.5620 It's often most usefull to examine the data graphically (you're brain is much better at spotting outliers than maths is) > #Use Boxplot to Review the Data > boxplot(testout$X1, ylab="X1") > boxplot(testout$X2, ylab="X2") > boxplot(testout$Y, ylab="Y") You can then use stats to calculate critical cut off values, here using the Lund Test (See Lund, R. E. 1975, "Tables for An Approximate Test for Outliers in Linear Models", Technometrics, vol. 17, no. 4, pp. 473-476. and Prescott, P. 1975, "An Approximate Test for Outliers in Linear Models", Technometrics, vol. 17, no. 1, pp. 129-132.) > #Alternative approach using Lund Test > lundcrit<-function(a, n, q) { + # Calculates a Critical value for Outlier Test according to Lund + # See Lund, R. E. 1975, "Tables for An Approximate Test for Outliers in Linear Models", Technometrics, vol. 17, no. 4, pp. 473-476. + # and Prescott, P. 1975, "An Approximate Test for Outliers in Linear Models", Technometrics, vol. 17, no. 1, pp. 129-132. 
+ # a = alpha + # n = Number of data elements + # q = Number of independent Variables (including intercept) + F<-qf(c(1-(a/n)),df1=1,df2=n-q-1,lower.tail=TRUE) + crit<-((n-q)*F/(n-q-1+F))^0.5 + crit + } > testoutlm<-lm(Y~X1+X2,data=testout) > testout$fitted<-fitted(testoutlm) > testout$residual<-residuals(testoutlm) > testout$standardresid<-rstandard(testoutlm) > n<-nrow(testout) > q<-length(testoutlm$coefficients) > crit<-lundcrit(0.1,n,q) > testout$Ynew<-ifelse(testout$standardresid>crit,NA,testout$Y) > testout X1 X2 Y newX1 fitted residual standardresid 1 44.20043 1.5259458 169.3296 44.20043 209.8467 -40.5171222 -1.009507695 2 40.46721 5.8437076 200.9038 40.46721 231.9221 -31.0183107 -0.747624895 3 48.20571 3.8243373 189.4652 48.20571 203.4786 -14.0134646 -0.335955648 4 60.09808 4.6609190 177.5159 60.09808 169.6108 7.9050960 0.190908291 5 50.23627 2.6193455 210.4360 50.23627 194.3285 16.1075799 0.391537883 6 43.50972 5.8212863 203.8361 43.50972 222.6667 -18.8306252 -0.452070155 7 44.95626 7.8368405 236.5821 44.95626 223.3287 13.2534226 0.326339981 8 66.14391 3.6828843 171.9624 66.14391 148.8870 23.0754677 0.568829360 9 45.53040 4.8311616 187.0553 45.53040 214.0832 -27.0279262 -0.646090667 10 5.00000 5.0000000 530.0000 NA 337.0535 192.9465135 5.714275585 11 64.71719 6.4007245 164.8052 64.71719 159.9911 4.8141018 0.118618011 12 54.43665 7.8695891 192.8824 54.43665 194.7454 -1.8630426 -0.046004311 13 45.78278 4.9921489 182.2957 45.78278 213.7223 -31.4266180 -0.751115595 14 49.59998 4.7716099 146.3090 49.59998 201.6296 -55.3205552 -1.321042392 15 45.07720 4.2355525 192.9041 45.07720 213.9655 -21.0613819 -0.504406009 16 62.27717 7.1518606 186.6482 62.27717 169.2455 17.4027250 0.430262983 17 48.50446 3.0712422 228.3253 48.50446 200.6938 27.6314695 0.667366651 18 65.49983 5.4609713 184.8983 65.49983 155.2768 29.6214506 0.726319931 19 44.38387 4.9305222 213.9378 44.38387 217.7981 -3.8603382 -0.092354925 20 43.52883 8.3777627 203.5657 43.52883 228.9961 -25.4303732 -0.634725264 <snip> 49 45.28317 5.0219647 208.1318 45.28317 215.3075 -7.1756966 -0.171560291 50 44.84145 3.6252663 251.5620 44.84145 213.1535 38.4084869 0.923804784 Ynew 1 169.3296 2 200.9038 3 189.4652 4 177.5159 5 210.4360 6 203.8361 7 236.5821 8 171.9624 9 187.0553 10 NA 11 164.8052 12 192.8824 13 182.2957 14 146.3090 15 192.9041 16 186.6482 17 228.3253 18 184.8983 19 213.9378 20 203.5657 <snip> 49 208.1318 50 251.5620 Obviosuly there are other outlier tests than the Lund test (Grubbs springs to mind), but I'm not sure which are better suited to multivariate data.
1,928
What is the best way to identify outliers in multivariate data?
One of the above answers touched on Mahalanobis distances... perhaps another step further, calculating simultaneous confidence intervals, would help detect outliers!
1,929
Why is ANOVA taught / used as if it is a different research methodology compared to linear regression?
For economists, the analysis of variance (ANOVA) is taught and usually understood in relation to linear regression (e.g. in Arthur Goldberger's A Course in Econometrics). Economists/econometricians typically view ANOVA as uninteresting and prefer to move straight to regression models. From the perspective of linear (or even generalised linear) models, ANOVA assigns coefficients into batches, with each batch corresponding to a "source of variation" in ANOVA terminology. Generally you can replicate the inferences you would obtain from ANOVA using regression, but not always OLS regression. Multilevel models are needed for analysing hierarchical data structures such as "split-plot designs," where between-group effects are compared to group-level errors and within-group effects are compared to data-level errors. Gelman's paper [1] goes into great detail about this problem and effectively argues that ANOVA is an important statistical tool that should still be taught for its own sake. In particular, Gelman argues that ANOVA is a way of understanding and structuring multilevel models. Therefore ANOVA is not an alternative to regression but a tool for summarizing complex high-dimensional inferences and for exploratory data analysis. Gelman is a well-respected statistician and some credence should be given to his view. However, almost all of the empirical work that I do would be equally well served by linear regression, so I firmly fall into the camp of viewing ANOVA as a little bit pointless. Some disciplines with complex study designs (e.g. psychology) may find ANOVA useful. [1] Gelman, A. (2005). Analysis of variance: why it is more important than ever (with discussion). Annals of Statistics 33, 1–53. doi:10.1214/009053604000001048
1,930
Why is ANOVA taught / used as if it is a different research methodology compared to linear regression?
I think Graham's second paragraph gets at the heart of the matter. I suspect it's not so much technical as historical, probably due to the influence of "Statistical Methods for Research Workers" and the ease of teaching/applying the tool for non-statisticians in experimental analysis involving discrete factors, rather than delving into model building and associated tools. In statistics, ANOVA is usually taught as a special case of regression. (I think this is similar to why biostatistics is filled with a myriad of eponymous "tests" rather than emphasizing model building.)
1,931
Why is ANOVA taught / used as if it is a different research methodology compared to linear regression?
I would say that some of you are using the term regression when you should be using general linear model. I think of regression as a glm that involves continuous covariates. When continuous covariates are combined with dummy variables that should be called analysis of covariance. If only dummy variables are used we refer to that special form of glm as analysis of variance. I think analysis of variance has a distinct second meaning as the procedure for testing for significant coefficients in a glm using the decomposition of variance into model term components and the error term component.
1,932
Why is ANOVA taught / used as if it is a different research methodology compared to linear regression?
ANOVA can be used with categorical explanatory variables (factors) that take more than 2 values (levels), and gives a basic test that the mean response is the same for every level. This avoids the problem of carrying out multiple pairwise t-tests between those levels: with multiple t-tests at a fixed 5% significance level, roughly 5% of them would give wrong results even when all the null hypotheses hold. Moreover, these tests are not independent of each other: comparing level A with level B is connected with comparing A with C, as A's data are used in both tests. It is better to use contrasts for the different combinations of the factor levels you want to test.
1,933
Why is ANOVA taught / used as if it is a different research methodology compared to linear regression?
In ANOVA you are testing whether there are significant differences between the population means; assuming you are comparing more than two population means, you are going to use an F test. In regression analysis you build a model between the independent variables and a dependent variable. If you have one independent variable with four levels, you can use three dummy variables and run a regression model. The F test for the regression model, which is used to test the significance of the regression model, is the same F that you get when testing for the difference between the population means. If you run a stepwise regression then some of the dummy variables might be dropped from the model, and your F value will differ from the one you get when you perform the ANOVA test.
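A quick illustration in R with a built-in data set: the one-way ANOVA F test and the overall F test of the corresponding dummy-variable regression coincide.
summary(aov(weight ~ group, data = PlantGrowth))  # classical ANOVA table
summary(lm(weight ~ group, data = PlantGrowth))   # same overall F statistic and p-value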
1,934
On the importance of the i.i.d. assumption in statistical learning
The i.i.d. assumption about the pairs $(\mathbf{X}_i, y_i)$, $i = 1, \ldots, N$, is often made in statistics and in machine learning. Sometimes for a good reason, sometimes out of convenience and sometimes just because we usually make this assumption. To satisfactorily answer if the assumption is really necessary, and what the consequences are of not making this assumption, I would easily end up writing a book (if you ever easily end up doing something like that). Here I will try to give a brief overview of what I find to be the most important aspects. A fundamental assumption Let's assume that we want to learn a probability model of $y$ given $\mathbf{X}$, which we call $p(y \mid \mathbf{X})$. We do not make any assumptions about this model a priory, but we will make the minimal assumption that such a model exists such that the conditional distribution of $y_i$ given $\mathbf{X}_i$ is $p(y_i \mid \mathbf{X}_i)$. What is worth noting about this assumption is that the conditional distribution of $y_i$ depends on $i$ only through $\mathbf{X}_i$. This is what makes the model useful, e.g. for prediction. The assumption holds as a consequence of the identically distributed part under the i.i.d. assumption, but it is weaker because we don't make any assumptions about the $\mathbf{X}_i$'s. In the following the focus will mostly be on the role of independence. Modelling There are two major approaches to learning a model of $y$ given $\mathbf{X}$. One approach is known as discriminative modelling and the other as generative modelling. Discriminative modelling: We model $p(y \mid \mathbf{X})$ directly, e.g. a logistic regression model, a neural network, a tree or a random forest. The working modelling assumption will typically be that the $y_i$'s are conditionally independent given the $\mathbf{X}_i$'s, though estimation techniques relying on subsampling or bootstrapping make most sense under the i.i.d. or the weaker exchangeability assumption (see below). But generally, for discriminative modelling we don't need to make distributional assumptions about the $\mathbf{X}_i$'s. Generative modelling: We model the joint distribution, $p(\mathbf{X}, y)$, of $(\mathbf{X}, y)$ typically by modelling the conditional distribution $p(\mathbf{X} \mid y)$ and the marginal distribution $p(y)$. Then we use Bayes's formula for computing $p(y \mid \mathbf{X})$. Linear discriminant analysis and naive Bayes methods are examples. The working modelling assumption will typically be the i.i.d. assumption. For both modelling approaches the working modelling assumption is used to derive or propose learning methods (or estimators). That could be by maximising the (penalised) log-likelihood, minimising the empirical risk or by using Bayesian methods. Even if the working modelling assumption is wrong, the resulting method can still provide a sensible fit of $p(y \mid \mathbf{X})$. Some techniques used together with discriminative modelling, such as bagging (bootstrap aggregation), work by fitting many models to data sampled randomly from the dataset. Without the i.i.d. assumption (or exchangeability) the resampled datasets will not have a joint distribution similar to that of the original dataset. Any dependence structure has become "messed up" by the resampling. I have not thought deeply about this, but I don't see why that should necessarily break the method as a method for learning $p(y \mid \mathbf{X})$. At least not for methods based on the working independence assumptions. I am happy to be proved wrong here. 
Consistency and error bounds

A central question for all learning methods is whether they result in models close to $p(y \mid \mathbf{X})$. There is a vast theoretical literature in statistics and machine learning dealing with consistency and error bounds. A main goal of this literature is to prove that the learned model is close to $p(y \mid \mathbf{X})$ when $N$ is large. Consistency is a qualitative assurance, while error bounds provide (semi-)explicit quantitative control of the closeness and give rates of convergence.

The theoretical results all rely on assumptions about the joint distribution of the observations in the dataset. Often the working modelling assumptions mentioned above are made (that is, conditional independence for discriminative modelling and i.i.d. for generative modelling). For discriminative modelling, consistency and error bounds will require that the $\mathbf{X}_i$'s fulfil certain conditions. In classical regression one such condition is that $\frac{1}{N} \mathbb{X}^T \mathbb{X} \to \Sigma$ for $N \to \infty$, where $\mathbb{X}$ denotes the design matrix with rows $\mathbf{X}_i^T$. Weaker conditions may be enough for consistency. In sparse learning another such condition is the restricted eigenvalue condition, see e.g. On the conditions used to prove oracle results for the Lasso. The i.i.d. assumption together with some technical distributional assumptions implies that some such sufficient conditions are fulfilled with large probability, and thus the i.i.d. assumption may prove to be a sufficient but not a necessary assumption for obtaining consistency and error bounds for discriminative modelling.

The working modelling assumption of independence may be wrong for either of the modelling approaches. As a rough rule of thumb one can still expect consistency if the data come from an ergodic process, and one can still expect some error bounds if the process is sufficiently fast mixing. A precise mathematical definition of these concepts would take us too far away from the main question. It is enough to note that there exist dependence structures besides the i.i.d. assumption for which the learning methods can be proved to work as $N$ tends to infinity.

If we have more detailed knowledge about the dependence structure, we may choose to replace the working independence assumption used for modelling with a model that captures the dependence structure as well. This is often done for time series. A better working model may result in a more efficient method.

Model assessment

Rather than proving that the learning method gives a model close to $p(y \mid \mathbf{X})$, it is of great practical value to obtain a (relative) assessment of "how good a learned model is". Such assessment scores are comparable for two or more learned models, but they will not provide an absolute assessment of how close a learned model is to $p(y \mid \mathbf{X})$. Estimates of assessment scores are typically computed empirically based on splitting the dataset into a training and a test dataset or by using cross-validation.

As with bagging, a random splitting of the dataset will "mess up" any dependence structure. However, for methods based on the working independence assumptions, ergodicity assumptions weaker than i.i.d. should be sufficient for the assessment estimates to be reasonable, though standard errors on these estimates will be very difficult to come up with.

[Edit: Dependence among the variables will result in a distribution of the learned model that differs from the distribution under the i.i.d. assumption. The estimate produced by cross-validation is then not obviously related to the generalization error. If the dependence is strong, it will most likely be a poor estimate.]

Summary (tl;dr)

All of the above is under the assumption that there is a fixed conditional probability model, $p(y \mid \mathbf{X})$. Thus there cannot be trends or sudden changes in the conditional distribution that are not captured by $\mathbf{X}$.

When learning a model of $y$ given $\mathbf{X}$, independence plays a role as
- a useful working modelling assumption that allows us to derive learning methods,
- a sufficient but not necessary assumption for proving consistency and providing error bounds,
- a sufficient but not necessary assumption for using random data-splitting techniques such as bagging for learning and cross-validation for assessment.

To understand precisely which alternatives to i.i.d. are also sufficient is non-trivial and to some extent a research subject.
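To make the cross-validation caveat concrete, here is a small simulated sketch added to the answer (everything in it, from the AR(1) noise to the polynomial fit, is an invented illustration): with strongly autocorrelated errors, random K-fold cross-validation tends to report a more optimistic error than a blocked split that respects the serial dependence.

set.seed(1)
N <- 500
x <- seq(0, 10, length.out = N)
y <- sin(x) + as.numeric(arima.sim(list(ar = 0.95), n = N, sd = 0.3))

random_folds  <- sample(rep(1:5, length.out = N))   # ignores the dependence
blocked_folds <- rep(1:5, each = N / 5)             # contiguous blocks respect it

cv_mse <- function(folds) {
  mean(sapply(1:5, function(k) {
    fit <- lm(y ~ poly(x, 5), subset = folds != k)
    mean((y[folds == k] - predict(fit, data.frame(x = x[folds == k])))^2)
  }))
}

cv_mse(random_folds)    # typically optimistic under strong serial dependence
cv_mse(blocked_folds)   # typically closer to an honest out-of-sample error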
1,935
On the importance of the i.i.d. assumption in statistical learning
What the i.i.d. assumption states is that random variables are independent and identically distributed. You can formally define what that means, but informally it says that all the variables provide the same kind of information independently of each other (you can also read about the related notion of exchangeability).

From the abstract ideas let's jump for a moment to a concrete example: in most cases your data can be stored in a matrix, with observations row-wise and variables column-wise. If you assume your data to be i.i.d., then it means that you need to bother only about relations between columns and do not have to bother about relations between rows. If you bothered about both, then you would model dependence of columns on columns and rows on rows, i.e. everything on everything. It is very hard to make simplifications and build a statistical model of everything depending on everything.

You correctly noticed that exchangeability makes it possible for us to use methods such as cross-validation or the bootstrap, but it also makes it possible to use the central limit theorem and it enables us to make simplifications helpful for modeling (thinking in column-wise terms). As you noticed in the LASSO example, the independence assumption is often softened to conditional independence. Even in such a case we need independent and identically distributed "parts". A similar, softer assumption is often made for the time-series models that you mentioned, which assume stationarity (so there is dependence, but there is also a common distribution and the series stabilizes over time -- again "i.i.d." parts). It is a matter of observing a number of similar things that carry the same idea about some general phenomenon. If we have a number of distinct and dependent things, we cannot make any generalizations.

What you have to remember is that this is only an assumption -- we are not strict about it. It is about having enough things that all, independently, convey similar information about some common phenomenon. If the things influenced each other they would obviously convey similar information, so they wouldn't be that useful.

Imagine that you wanted to learn about the abilities of children in a classroom, so you give them some tests. You could use the test results as an indicator of the abilities of the kids only if they did them by themselves, independently of each other. If they interacted, then you'd probably measure the abilities of the most clever kid, or of the most influential one. It does not mean that you need to assume that there was no interaction or dependence between the kids whatsoever, but simply that they did the tests by themselves. The kids also need to be "identically distributed", so they cannot come from different countries, speak different languages, or be of different ages, since that will make it hard to interpret the results (maybe they did not understand the questions and answered randomly). If you can assume that your data are i.i.d., then you can focus on building a general model. You can deal with non-i.i.d. data, but then you have to worry about the "noise" in your data much more.

Besides your main question, you are also asking about cross-validation with non-i.i.d. data. While you seem to understate the importance of the i.i.d. assumption, at the same time you overstate the problems that not meeting this assumption poses for cross-validation. There are multiple ways in which we can deal with such data when using resampling methods like the bootstrap or cross-validation.
If you are dealing with time series, you cannot assume that the values are independent, so taking a random fraction of values would be a bad idea because it would ignore the autocorrelated structure of the data. Because of that, with time series we commonly use one-step-ahead cross-validation, i.e. you take part of the series to predict the next value (which is not used for modeling). Similarly, if your data has a clustered structure, you sample whole clusters to preserve the nature of the data. So as with modeling, we can deal with non-i.i.d.-ness also when doing cross-validation, but we need to adapt our methods to the nature of the data, since methods designed for i.i.d. data do not apply in such cases.
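A minimal sketch of the one-step-ahead scheme described above (a simulated AR(1) series; the model order and the evaluation window are arbitrary choices made for this illustration only):

set.seed(2)
series <- as.numeric(arima.sim(list(ar = 0.8), n = 300))

errors <- sapply(200:299, function(t) {
  fit  <- arima(series[1:t], order = c(1, 0, 0))   # fit on the past only
  pred <- predict(fit, n.ahead = 1)$pred           # forecast the next value
  series[t + 1] - as.numeric(pred)
})
sqrt(mean(errors^2))    # honest one-step-ahead RMSE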
1,936
On the importance of the i.i.d. assumption in statistical learning
In my opinion there are two rather mundane reasons why the i.i.d. assumption is important in statistical learning (or statistics in general).

Lots of behind-the-scenes mathematics depends on this assumption. If you want to prove that your learning method actually works for more than one data set, the i.i.d. assumption will crop up eventually. It is possible to avoid it, but the mathematics becomes several times harder.

If you want to learn something from data, you need to assume that there is something to learn. Learning is impossible if every data point is generated by a different mechanism. So it is essential to assume that something unifies the given data set. If we assume that the data are random, then this something is naturally a probability distribution, because a probability distribution encompasses all the information about a random variable. So if we have data $x_1,...,x_n$ ($x_i$ can be either a vector or a scalar), we assume that they come from a distribution $F_n$: $$(x_1,...,x_n)\sim F_n.$$ Here we have a problem. We need to ensure that $F_n$ is related to $F_m$ for different $n$ and $m$, otherwise we have the initial problem that every data point is generated differently. The second problem is that although we have $n$ data points, we basically have only one data point for estimating $F_n$, because $F_n$ is an $n$-variate probability distribution. The simplest solution to these two problems is the i.i.d. assumption. With it, $F_n=F^n$, where $x_i\sim F$. We get a very clear relationship between $F_n$ and $F_m$, and we have $n$ data points to estimate the single $F$. There are other ways these two problems are solved, but it is essential to note that every statistical learning method needs to solve this problem, and it so happens that the i.i.d. assumption is by far the most uncomplicated way to do it.
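A small numerical illustration of that last point (a simulation added here, not from the answer): under i.i.d. sampling all $n$ observations inform the same $F$, so the empirical CDF gets uniformly close to the true CDF as $n$ grows.

set.seed(5)
for (n in c(20, 200, 2000)) {
  x <- rnorm(n)
  cat(n, max(abs(ecdf(x)(x) - pnorm(x))), "\n")   # approximate sup-distance to the true CDF
}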
1,937
On the importance of the i.i.d. assumption in statistical learning
The only place where one can safely ignore i.i.d. is in undergraduate statistics and machine learning courses. You have written that:

one can work around the i.i.d. assumption and obtain robust results. Actually the results will usually stay the same, it is rather the inferences that one can draw that will change...

This is only true if the functional form of the models is assumed to be basically correct. But such an assumption is even less plausible than i.i.d.

There are at least two ways in which i.i.d. is critically important in terms of applied modeling:

1. It is an explicit assumption in most statistical inference, as you note in your question. In most real-world modeling, at some stage we need to use inference to test the specification, such as during variable selection and model comparison. So, while each particular model fit may be OK despite i.i.d. violations, you can end up choosing the wrong model anyway.

2. I find that thinking through violations of i.i.d. is a useful way to think about the data-generating mechanism, which in turn helps me think about the appropriate specification of a model a priori. Two examples: If the data are clustered, this is a violation of i.i.d. A remedy to this may be a mixture model. The inference I will draw from a mixture model is generally completely different from that which I draw from OLS. Non-linear relationships between the dependent and independent variables often show up when inspecting residuals as a part of investigating i.i.d.

Of course, in pretty much every model that I have ever built, I have failed in my quest to reduce the distribution of the residuals to anything close to a truly normal distribution. But, nevertheless, I always gain a lot by trying really, really hard to do it.
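As a sketch of the kind of residual inspection mentioned in the second point, using R's built-in cars data purely as a stand-in for any OLS fit (this example is added here and is not the author's own):

fit <- lm(dist ~ speed, data = cars)   # any OLS fit would do here
plot(fitted(fit), resid(fit))          # curvature suggests a misspecified mean
acf(resid(fit))                        # serial correlation suggests dependent errors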
1,938
On the importance of the i.i.d. assumption in statistical learning
I would like to stress that in some circumstances the data are not i.i.d. and statistical learning is still possible. It is crucial to have an identifiable model for the joint distribution of all observations; if the observations are i.i.d. then this joint distribution is easily obtained from the marginal distribution of single observations. But in some cases, the joint distribution is given directly, without resorting to a marginal distribution. A widely used model in which the observations are not i.i.d. is the linear mixed model: $$\let\epsilon\varepsilon Y = X \alpha + Z u + \epsilon $$ with $\def\R{\mathbb{R}}Y \in \R^n$, $X \in \R^{n\times p}$, $\alpha \in \R^p$, $Z \in \R^{n\times q}$, $u \in \R^q$, and $\epsilon\in\R^n$. The (design) matrices $X$ and $Z$ are considered fixed, $\alpha$ is a vector of parameters, $u$ is a random vector $\def\N{\mathcal{N}} u\sim \N(0,\tau I_q)$ and $\epsilon \sim \N(0,\sigma^2 I_n)$, with $\tau$ and $\sigma^2$ being parameters of the model. This model is best expressed by giving the distribution of $Y$: $$Y \sim \N(X\alpha, \tau ZZ' + \sigma^2 I_n).$$ The parameters to be learned are $\alpha$, $\tau$ and $\sigma^2$. A single vector $Y$ of dimension $n$ is observed; its components are not i.i.d.
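A minimal simulation-and-fit sketch of this model (an example added here; it assumes the widely used lme4 package is installed, and the group sizes and parameter values are invented):

library(lme4)                        # assumed available; provides lmer()
set.seed(3)
dat <- data.frame(g = factor(rep(1:20, each = 10)), x = rnorm(200))
u <- rnorm(20, sd = 2)               # one random effect per group, so tau = 4
dat$y <- 1 + 0.5 * dat$x + u[dat$g] + rnorm(200)    # sigma^2 = 1
fit <- lmer(y ~ x + (1 | g), data = dat)            # estimates alpha, tau and sigma^2
summary(fit)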
1,939
On the importance of the i.i.d. assumption in statistical learning
One area where the i.i.d. assumption is critical in practice (other than inference) is data collection. If you do not collect data in a random manner then you will have a sampling bias and your data will not be a good representation of the underlying model.
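A small invented example of how non-random collection distorts what is learned (selection on the outcome truncates the sample; nothing here comes from the original answer):

set.seed(4)
pop_x <- rnorm(1e5)
pop_y <- 2 * pop_x + rnorm(1e5)
kept  <- pop_y > 0                          # only "positive" outcomes get recorded
coef(lm(pop_y ~ pop_x))                     # slope close to the true value of 2
coef(lm(pop_y[kept] ~ pop_x[kept]))         # typically attenuated by the selection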
1,940
Is there a way to remember the definitions of Type I and Type II Errors?
Since type two means "False negative" or sort of "false false", I remember it as the number of falses. Type I: "I falsely think the alternate hypothesis is true" (one false) Type II: "I falsely think the alternate hypothesis is false" (two falses)
1,941
Is there a way to remember the definitions of Type I and Type II Errors?
When the boy cried wolf ... The first error the villagers made (when they believed him) was a type 1 error. The second error the villagers made (when they didn't believe him) was a type 2 error. The boy's cry was an alternative hypothesis because the null hypothesis is no wolf ;)
1,942
Is there a way to remember the definitions of Type I and Type II Errors?
I make no apologies for posting such a ridiculous image, because that's exactly why it's easy to remember. Null hypothesis: Patient is not pregnant. Image source: Ellis, P.D. (2010), “Effect Size FAQs,” website http://www.effectsizefaq.com, accessed on 12/18/2014.
1,943
Is there a way to remember the definitions of Type I and Type II Errors?
Here's a handy way that happens to have some truth to it. Young scientists commit Type I errors because they want to find effects and jump the gun, while old scientists commit Type II errors because they refuse to change their beliefs. (Someone comment with a funnier version of that :) )
1,944
Is there a way to remember the definitions of Type I and Type II Errors?
I was talking to a friend of mine about this and he kicked me a link to the Wikipedia article on type I and type II errors, where they apparently now provide a (somewhat unhelpful, in my opinion) mnemonic. I did, however, want to add it here just for the sake of completion. Although I didn't think it helped me, it might help someone else: For those experiencing difficulty correctly identifying the two error types, the following mnemonic is based on the fact that (a) an "error" is false, and (b) the Initial letters of "Positive" and "Negative" are written with a different number of vertical lines: A Type I error is a false POSITIVE; and P has a single vertical line. A Type II error is a false NEGATIVE; and N has two vertical lines. With this, you need to remember that a false positive means rejecting a true null hypothesis and a false negative is failing to reject a false null hypothesis. This is by no means the best answer here, but I did want to throw it out there in the event someone finds this question and this can help them.
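For readers who prefer numbers to mnemonics, here is a small simulation sketch added alongside this answer (a one-sample t-test at the 5% level; the sample size and effect size are arbitrary choices):

set.seed(7)
p_null <- replicate(5000, t.test(rnorm(20, mean = 0))$p.value)     # null hypothesis true
p_alt  <- replicate(5000, t.test(rnorm(20, mean = 0.5))$p.value)   # null hypothesis false
mean(p_null < 0.05)    # Type I error rate (false positives), close to 0.05
mean(p_alt >= 0.05)    # Type II error rate (false negatives)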
1,945
Is there a way to remember the definitions of Type I and Type II Errors?
You could reject the idea entirely. Some authors (Andrew Gelman is one) are shifting to discussing Type S (sign) and Type M (magnitude) errors. You can infer the wrong effect direction (e.g., you believe the treatment group does better but actually does worse) or the wrong magnitude (e.g., you find a massive effect where there is only a tiny, or essentially no effect, or vice versa). See more at Gelman's blog.
1,946
Is there a way to remember the definitions of Type I and Type II Errors?
I'll try not to be redundant with other responses (although it seems a little bit what J. M. already suggested), but I generally like showing the following two pictures:
1,947
Is there a way to remember the definitions of Type I and Type II Errors?
I use the "judicial" approach for remembering the difference between type I and type II: a judge committing a type I error sends an innocent man to jail, while a judge committing a type II error lets a guilty man walk free.
1,948
Is there a way to remember the definitions of Type I and Type II Errors?
Based on the principle of Occam's razor, Type I errors (rejecting the null hypothesis when it is true) are "arguably" worse than Type II errors (not rejecting the null hypothesis when it is false). If you believe such an argument: Type I errors are of primary concern Type II errors are of secondary concern Note: I'm not endorsing this value judgement, but it does help me remember Type I from Type II.
1,949
Is there a way to remember the definitions of Type I and Type II Errors?
Here is one explanation that might help you remember the difference. TYPE I ERROR: An alarm without a fire. TYPE II ERROR: A fire without an alarm. Every cook knows how to avoid Type I Error - just remove the batteries. Unfortunately, this increases the incidences of Type II error. :) Reducing the chances of Type II error would mean making the alarm hypersensitive, which in turn would increase the chances of Type I error. Source: A Cartoon Guide to Statistics
1,950
Is there a way to remember the definitions of Type I and Type II Errors?
Hurrah, a question non-technical enough that I can answer it! "Type one is a con" [rhyming] -- i.e. it fools you into thinking that a difference exists when it doesn't. Always works for me.
1,951
Is there a way to remember the definitions of Type I and Type II Errors?
(a bit joke answer I invented just a minute ago) A first class person thinks he is always right. A second class person thinks he is always wrong. The first class person can only make a type I error (because sometimes he will be wrong). The second class person can only make a type II error (because sometimes he will be right).
1,952
Is there a way to remember the definitions of Type I and Type II Errors?
I used to think of it in terms of the usual picture of two Normal distributions (or bell curves). Going left to right, distribution 1 is the Null, and the distribution 2 is the Alternative. Type I (erroneously) rejects the first (Null) and Type II "rejects" the second (Alternative). (Now you just need to remember that you're not actually rejecting the alternative, but erroneously accepting (or failing to reject) the Null -- i.e. restate everything in the form of the Null. Hey, it worked for me!)
1,953
Is there a way to remember the definitions of Type I and Type II Errors?
My friend came up with this and I thought it was rather brilliant. She said that during the last two presidencies Republicans have committed both errors: President ONE was Bush, who committed a type ONE error by saying there were weapons of mass destruction in Iraq when in fact..... Under president TWO, Obama, (some) Republicans are committing a type TWO error, arguing that climate change is a myth when in fact.... Whatever your views on politics or climate change, it's a pretty easy way to remember!!
1,954
Is there a way to remember the definitions of Type I and Type II Errors?
I am surprised that no one has suggested the 'art/baf' mnemonic. Basically, remember that $\alpha$ is the probability of the type I error and $\beta$ is the probability of a type II error (this is easy to remember because $\alpha$ is the 1st letter in the Greek alphabet, so it goes with the 1st error, and $\beta$ is the 2nd letter and goes with the 2nd error). Now remember that the word "art" or "$\alpha$rt" says that $\alpha$ is the probability of Rejecting a True null hypothesis, and the pseudo-word "baf" or "$\beta$af" says that $\beta$ is the probability of Accepting a False null hypothesis. The "art" portion is fairly acceptable; the "baf" portion suffers from the fact that 1) it is not a real word, and 2) we are not supposed to accept the null, just fail to reject it. But if you can remember "art/baf" and the idea that Reject True is the R and T in art, and the a/$\alpha$ links it to the type I error, then it is a pretty good mnemonic.
1,955
Is there a way to remember the definitions of Type I and Type II Errors?
Type 1 = Reject : this is a ONE-word expression Type 2 = Do not : this is a TWO-word expression
1,956
Is there a way to remember the definitions of Type I and Type II Errors?
RAAR 'like a lion'= first part is *R*eject when we should *A*ccept (type I error) second part is *A*ccept when we should *R*eject (type II error) This is the easiest way to remember it for me :) Good LUCK!
1,957
Is there a way to remember the definitions of Type I and Type II Errors?
I remember it by thinking: What's the first thing I do when I do a null-hypothesis significance test? I set the criterion for the probability that I will make a false rejection. Thus, type 1 is this criterion and type 2 is the other probability of interest: the probability that I will fail to reject the null when the null is false. So, 1=first probability I set, 2=the other one.
1,958
Is there a way to remember the definitions of Type I and Type II Errors?
Here's how I do it: Type I is an Optimistic error. Type II is a Pessimistic error. O, P: 1, 2. They're alphabetical.
1,959
Is there a way to remember the definitions of Type I and Type II Errors?
Memorize “It’s Type I, not II, where the null is true” (it rhymes) and figure the rest out while you are looking at the problem. Since you are making an error: Type I means the null is true but you say it isn’t (you reject it), i.e. a false positive; Type II is where the null is not true but you say it is (you fail to reject it), i.e. a false negative. Also, it helps to state what your null and alternative hypotheses are BEFORE doing anything else.
1,960
Is there a way to remember the definitions of Type I and Type II Errors?
This is how I remember the difference between Type I and Type II errors. Type I is a false POSITIVE; Type II is a false NEGATIVE. Type I is so POSITIVE it jumps out of bed first, runs downstairs, and finds a significant breakfast, while Type II is so NEGATIVE it stays in bed all day, so when it eventually crawls out all the food is gone. It can never find anything!
1,961
Is there a way to remember the definitions of Type I and Type II Errors?
Type One Error: Reject the null hypothesis when it is true (T.O.E.R.N.H.W.I.I.T.): Tiny Overly Eager Raccoons Never Hide When It Is Teatime.
Type Two Error: Accept the null hypothesis when it is false (T.T.E.A.N.H.W.I.I.F.): Twelve Tan Elvises Ate Nine Hams With Intelligent Irish Farmers.
1,962
Is there a way to remember the definitions of Type I and Type II Errors?
To a software engineer: How about associating Type I error (the first of the two) with the term "S"erial "N"umber -- you find something "significant" but it's actually "not." Type II error is just the opposite once you know what Type I error is.
1,963
Is there a way to remember the definitions of Type I and Type II Errors?
Sometimes reading really old scientific papers helps me to understand some ideas behind statistics. ...they identified "two sources of error", namely: (a) the error of rejecting a hypothesis that should have been accepted, and (b) the error of accepting a hypothesis that should have been rejected. (wiki)
Original source: Neyman, J.; Pearson, E.S. (1967) [1928]. "On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference, Part I". Joint Statistical Papers. Cambridge University Press. pp. 1–66. http://biomet.oxfordjournals.org/content/20A/1-2/175.full.pdf+html
1,964
Is there a way to remember the definitions of Type I and Type II Errors?
I think that the usual table is confusing because it concatenates negation verbs. I found the following "verdict table" easier to remember and generalize:

Decision             H0 (fair) is True                H0 (fair) is False
Positive (Guilty)    False positive (Type I error)    True positive
Negative             True negative                    False negative (Type II error)

Note that:
- the decision (positive/negative) matches the verdict name
- the verdicts with "false" are the errors
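To make the four cells of such a table concrete, here is a minimal simulation sketch (the one-sample z-test, the sample size, the effect size of 0.5 under the false null, and alpha = 0.05 are arbitrary illustrative choices, not taken from any answer above): among tests run on a true null, the fraction of rejections approximates the Type I error rate alpha, and among tests run on a false null, the fraction of non-rejections approximates the Type II error rate beta.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, n_sims = 0.05, 30, 20000

def reject_null(mu_true):
    """Run one two-sided z-test of H0: mu = 0 (sigma known to be 1) and report whether H0 is rejected."""
    x = rng.normal(mu_true, 1, size=n)
    z = x.mean() * np.sqrt(n)               # standard error of the mean is 1 / sqrt(n)
    return 2 * stats.norm.sf(abs(z)) < alpha

# H0 true (mu = 0): any rejection is a false positive, i.e. a Type I error
type_1_rate = np.mean([reject_null(0.0) for _ in range(n_sims)])

# H0 false (mu = 0.5): any non-rejection is a false negative, i.e. a Type II error
type_2_rate = np.mean([not reject_null(0.5) for _ in range(n_sims)])

print(f"Type I error rate  (false positives): {type_1_rate:.3f}")   # close to alpha = 0.05
print(f"Type II error rate (false negatives): {type_2_rate:.3f}")   # this is beta; power = 1 - beta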
1,965
Is there a way to remember the definitions of Type I and Type II Errors?
RouTiNe FoR FuN:
Type I error is RTN: Reject The Null.
Type II error is FRFN: Fail to Reject a False Null (hypothesis).
1,966
Is there a way to remember the definitions of Type I and Type II Errors?
My mnemonic for Type II errors is: TWO: This Was Opposing [our chance of getting published/funding/famous], i.e., the experimental hypothesis was rejected (albeit in error). Or TWO: This Was Out-and-out failure (but it's an error so it's not). Type I is what is left (i.e., false positive).
1,967
Is there a way to remember the definitions of Type I and Type II Errors?
After reading through all of these I came up with my own to remember about type I (making the opposite apply to type II.) [A]lpha is first and is an error when you [A]ccept the [A]lternate. AAA.
1,968
Is there a way to remember the definitions of Type I and Type II Errors?
RAT / !RAF. RAT denotes Type I errors and !RAF denotes Type II.
Type I error (RAT): Rejecting H0 when it's Actually True.
Type II error (!RAF): !Rejecting H0 when it's Actually False ≡ not Rejecting H0 when it's Actually False.
! denotes the "not" operator, so replace ! with the word "not". NB: H0 = the null hypothesis.
1,969
Is there a way to remember the definitions of Type I and Type II Errors?
If the error is type I, the true null is done. If the error is type II, a false null gets through.
1,970
The Book of Why by Judea Pearl: Why is he bashing statistics?
I fully agree that Pearl's tone is arrogant, and his characterisation of "statisticians" is simplistic and monolithic. Also, I don't find his writing particularly clear. However, I think he has a point. Causal reasoning was not part of my formal training (MSc): the closest I got to the topic was an elective course in experimental design, i.e. any causality claims required me to physically control the environment. Pearl's book Causality was my first exposure to a refutation of this idea. Obviously I can't speak for all statisticians and curricula, but from my own perspective I subscribe to Pearl's observation that causal reasoning is not a priority in statistics. It is true that statisticians sometimes control for more variables than is strictly necessary, but this rarely leads to error (at least in my experience). This is also a belief that I held after graduating with an MSc in statistics in 2010. However, it is deeply incorrect. When you control for a common effect (called "collider" in the book), you can introduce selection bias. This realization was quite astonishing to me, and really convinced me of the usefulness of representing my causal hypotheses as graphs. EDIT: I was asked to elaborate on selection bias. This topic is quite subtle, I highly recommend perusing the edX MOOC on Causal Diagrams, a very nice introduction to graphs which has a chapter dedicated to selection bias. For a toy example, to paraphrase this paper cited in the book: Consider the variables A=attractiveness, B=beauty, C=competence. Suppose that B and C are causally unrelated in the general population (i.e., beauty does not cause competence, competence does not cause beauty, and beauty and competence do not share a common cause). Suppose also that any one of B or C is sufficient for being attractive, i.e. A is a collider. Conditioning on A creates a spurious association between B and C. A more serious example is the "birth weight paradox", according to which a mother's smoking (S) during pregnancy seems to decrease the mortality (M) of the baby, if the baby is underweight (U). The proposed explanation is that birth defects (D) also cause low birth weight, and also contribute to mortality. The corresponding causal diagram is { S -> U, D -> U, U -> M, S -> M, D -> M } in which U is a collider; conditioning on it introduces the spurious association. The intuition behind this is that if the mother is a smoker, the low birth weight is less likely to be due to a defect.
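The beauty/competence example is easy to reproduce numerically. Here is a minimal sketch (the standard-normal scores and the threshold of 1 used to define "attractive" are hypothetical choices made only for illustration): beauty and competence are generated independently, attractiveness holds when either score is high, and conditioning on that collider induces a clearly negative correlation that is absent in the full population.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

beauty = rng.normal(size=n)                     # B
competence = rng.normal(size=n)                 # C, independent of B by construction
attractive = (beauty > 1) | (competence > 1)    # A: collider, either trait is sufficient

# Near zero: B and C are unrelated in the general population
print("cor(B, C), everyone   :", round(np.corrcoef(beauty, competence)[0, 1], 3))

# Clearly negative: conditioning on the collider A creates a spurious association
print("cor(B, C), attractive :",
      round(np.corrcoef(beauty[attractive], competence[attractive])[0, 1], 3))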
1,971
The Book of Why by Judea Pearl: Why is he bashing statistics?
Your very question reflects what Pearl is saying!

a simple linear regression is essentially a causal model

No, a linear regression is a statistical model, not a causal model. Let's assume $Y, X, Z$ are random variables with a multivariate normal distribution. Then you can correctly estimate the linear expectations $E[Y\mid X]$, $E[X\mid Y]$, $E[Y\mid X,Z]$, $E[Z\mid Y, X]$ etc. using linear regression, but there's nothing here that says whether any of those quantities are causal. A linear structural equation, on the other hand, is a causal model. But the first step is to understand the difference between statistical assumptions (constraints on the observed joint probability distribution) and causal assumptions (constraints on the causal model).

do you think that Judea Pearl is misrepresenting statistics, and if yes, why?

No, I don't think so, because we see these misconceptions daily. Of course, Pearl is making some generalizations, since some statisticians do work with causal inference (Don Rubin was a pioneer in promoting potential outcomes... also, I am a statistician!). But he is correct in saying that the bulk of traditional statistics education shuns causality, even when it comes to formally defining what a causal effect is. To make this clear, if we ask a statistician/econometrician with just a regular training to define mathematically what the expected value of $Y$ is if we intervene on $X$, he would probably write $E[Y|X]$ (see an example here)! But that's an observational quantity; that's not how you define a causal effect! In other words, a student with just a traditional statistics course currently lacks even the ability to properly define this quantity mathematically ($E[Y_{x}]$ or $E[Y|do(x)]$), because that requires familiarity with the structural/counterfactual theory of causation!

The quote you bring from the book is also a great example. You will not find in traditional statistics books a correct definition of what a confounder is, nor guidance about when you should (or should not) adjust for a covariate in observational studies. In general, you see “correlational criteria”, such as “if the covariate is associated with the treatment and with the outcome, you should adjust for it”. One of the most notable examples of this confusion shows up in Simpson’s Paradox: when faced with two estimates of opposite signs, which one should you use, the adjusted or the unadjusted? The answer, of course, depends on the causal model.

And what does Pearl mean when he says that this question was brought to an end? In the case of simple adjustment via regression, he is referring to the backdoor criterion (see more here). And for identification in general, beyond simple adjustment, he means that we now have complete algorithms for identification of causal effects for any given semi-Markovian DAG.

Another remark is worth making here. Even in experimental studies, where traditional statistics has surely done a lot of important work with the design of experiments, at the end of the day you still need a causal model. Experiments can suffer from lack of compliance, from loss to follow-up, from selection bias... also, most of the time you don't want to confine the results of your experiments to the specific population you analyzed; you want to generalize your experimental results to a broader/different population. Here, again, one may ask: what should you adjust for? Are the data and substantive knowledge you have enough to allow such extrapolation?
All of these are causal concepts, so you need a language to formally express causal assumptions and check whether they are enough to allow you to do what you want!

In sum, these misconceptions are widespread in statistics and econometrics; there are several examples here on Cross Validated, such as:

- understanding what a confounder is
- understanding which variables you should include in a regression (for estimating causal effects)
- understanding that even if you include all variables, a regression may not be causal
- understanding the difference between (lack of) statistical associations and causation
- defining a causal model and causal effects

And many more.

Do you think that causal inference is a Revolution with a big R which really changes all our thinking?

Considering the current state of affairs in many sciences, how much we have advanced and how fast things are changing, and how much we can still do, I would say this is indeed a revolution.

PS: Pearl suggested two of his posts on UCLA's causality blog that will be of interest to this discussion; you can find the posts here and here.

PS 2: As January has mentioned in his new edit, Andrew Gelman has a new post on his blog. In addition to the debate on Gelman's blog, Pearl has also answered on Twitter (below):

Gelman's review of #Bookofwhy should be of interest because it represents an attitude that paralyzes wide circles of statistical researchers. My initial reaction is now posted on https://t.co/mRyDcgQtEc Related posts: https://t.co/xUwR6eCGrZ and https://t.co/qwqV3oyGUy — Judea Pearl (@yudapearl) January 9, 2019
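As a toy numerical illustration of the distinction between the observational $E[Y|X]$ and the interventional $E[Y|do(X)]$, here is a minimal sketch (the linear model, the single confounder $Z$, and the coefficient values are hypothetical, chosen only for this example): in the graph Z -> X, Z -> Y, X -> Y with a true effect of X on Y equal to 2, the regression of Y on X alone is biased by the open backdoor path through Z, while adjusting for Z, which satisfies the backdoor criterion in this graph, recovers the causal coefficient.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical structural model: Z -> X, Z -> Y, X -> Y, causal effect of X on Y = 2
z = rng.normal(size=n)
x = 1.5 * z + rng.normal(size=n)
y = 2.0 * x + 3.0 * z + rng.normal(size=n)

# Naive slope of Y on X: estimates the observational E[Y | X], not E[Y | do(X)]
c = np.cov(x, y)
beta_naive = c[0, 1] / c[0, 0]

# Adjusting for Z (backdoor criterion satisfied here): recovers the causal effect
design = np.column_stack([np.ones(n), x, z])
beta_adjusted = np.linalg.lstsq(design, y, rcond=None)[0][1]

print("naive slope   :", round(beta_naive, 2))     # about 3.4, biased by confounding
print("adjusted slope:", round(beta_adjusted, 2))  # about 2.0, the causal effect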
1,972
The Book of Why by Judea Pearl: Why is he bashing statistics?
I'm a fan of Judea's writing, and I've read Causality (love) and Book of Why (like). I do not feel that Judea is bashing statistics. It's hard to hear criticism. But what can we say about any person or field that doesn't take criticism? They tend from greatness to complacency. You must ask: is the criticism correct, needed, useful, and does it propose alternatives? The answer to all of those is an emphatic "Yes".

Correct? I've reviewed and collaborated on a few dozen papers, mostly analyses of observational data, and I rarely feel there is a sufficient discussion of causality. The "adjustment" approach involves selecting variables because they were hand-picked from the DD as being "useful", "relevant", "important", or other nonsense.$^1$

Needed? The media is awash with seemingly contradictory statements about the health effects of major exposures. Inconsistency in data analysis has stagnated evidence, which leaves us lacking useful policy, healthcare procedures, and recommendations for better living.

Useful? Judea's comment is pertinent and specific enough to give pause. It is directly relevant to any data analysis any statistician or data expert might encounter.

Does it propose alternatives? Yes, Judea in fact discusses the possibility of advanced statistical methods, and even how they reduce to known statistical frameworks (like Structural Equation Modeling) and their connection to regression models. It all boils down to requiring an explicit statement of the content knowledge that has guided the modeling approach. Judea isn't simply suggesting we defenestrate all statistical methods (e.g. regression). Rather, he is saying that we need to embrace some causal theory to justify models.

$^1$ The complaint here is about the use of convincing but imprecise language to justify what is ultimately the wrong approach to modeling. There can be overlap, serendipitously, but Pearl is clear about the purpose of a causal diagram (DAG) and how variables can be classified as "confounders".
1,973
The Book of Why by Judea Pearl: Why is he bashing statistics?
I haven't read this book, so I can only judge the particular quote you give. However, even on this basis, I agree with you that this seems extremely unfair to the statistical profession. I actually think that statisticians have always done a remarkably good job at stressing the distinction between statistical associations (correlation, etc.) and causality, and warning against the conflation of the two. Indeed, in my experience, statisticians have generally been the primary professional force fighting against the ubiquitous confusion between cause and correlation. It is outright false (and virtually slander) to claim that statisticians are "...loath to talk about causality at all." I can see why you are annoyed reading arrogant horseshit like this. I would say that it is reasonably common for non-statisticians who use statistical models to have a poor understanding of the relationship between statistical association and causality. Some have good scientific training from other fields, in which case they may also be well aware of the issue, but there are certainly some people who use statistical models who have a poor grasp of these issues. This is true in many applied scientific fields where practitioners have basic training in statistics, but do not learn it at a deep level. In these cases it is often professional statisticians who alert other researchers to the distinctions between these concepts and their proper relationship. Statisticians are often the key designers of RCTs and other experiments involving controls used to isolate causality. They are often called on to explain protocols such as randomisation, placebos, and other protocols that are used to try to sever relationships with potential confounding variables. It is true that statisticians sometimes control for more variables than is strictly necessary, but this rarely leads to error (at least in my experience). I think most statisticians are aware of the difference between confounding variables and collider variables when they do regression analysis with a view to causal inferences, and even if they are not always building perfect models, the notion that they somehow eschew consideration of causality is simply ridiculous. I think Judea Pearl has made a very valuable contribution to statistics with his work on causality, and I am grateful to him for this wonderful contribution. He has constructed and examined some very useful formalisms that help to isolate causal relationships, and his work has become a staple of a good statistical education. I read his book Causality while I was a grad student, and it is on my shelf, and on the shelves of many other statisticians. Much of this formalism echoes things that have been known intuitively to statisticians since before they were formalised into an algebraic system, but it is very valuable in any case, and goes beyond that which is obvious. (I actually think in the future we will see a merging of the "do" operation with probability algebra occurring at an axiomatic level, and this will probably eventually become the core of probability theory. I would love to see this built directly into statistical education, so that you learn about causal models and the "do" operation when you learn about probability measures.) One final thing to bear in mind here is that there are many applications of statistics where the goal is predictive, where the practitioner is not seeking to infer causality. 
These types of applications are extremely common in statistics, and in such cases, it is important not to restrict oneself to causal relationships. This is true in most applications of statistics in finance, HR, workforce modelling, and many other fields. One should not underestimate the number of contexts where one cannot or should not seek to control variables.

Update: I notice that my answer disagrees with the one provided by Carlos. Perhaps we disagree on what constitutes "a statistician/econometrician with just a regular training". Anyone who I would call a "statistician" usually has at least a graduate-level education, and usually has substantial professional training/experience. (For example, in Australia, becoming an "Accredited Statistician" with our national professional body requires a minimum of four years of experience after an honours degree, or six years of experience after a regular bachelor's degree.) In any case, a student studying statistics is not a statistician.

I notice that as evidence of the alleged lack of understanding of causality by statisticians, Carlos's answer points to several questions on CV.SE which ask about causality in regression. In every one of these cases, the question is asked by someone who is obviously a novice (not a statistician) and the answers given by Carlos and others (which reflect the correct explanation) are highly-upvoted answers. Indeed, in several of the cases Carlos has given a detailed account of the causality and his answers are the most highly up-voted. This surely proves that statisticians do understand causality.

Some other posters have pointed out that analysis of causality is often not included in the statistics curriculum. That is true, and it is a great shame, but most professional statisticians are not recent graduates, and they have learned far beyond what is included in a standard masters program. Again, in this respect, it appears that I have a higher view of the average level of knowledge of statisticians than other posters.
1,974
The Book of Why by Judea Pearl: Why is he bashing statistics?
a simple linear regression is essentially a causal model

Here's an example I came up with where a linear regression model fails to be causal. Let's say a priori that a drug was taken at time 0 (t=0) and that it has no effect on the rate of heart attacks at t=1. Heart attacks at t=1 affect heart attacks at t=2 (i.e. previous damage makes the heart more susceptible to damage). Survival at t=3 only depends on whether or not people had a heart attack at t=2 -- heart attack at t=1 realistically would affect survival at t=3, but we won't have an arrow, for the sake of simplicity.

Here's the legend:

Here's the true causal graph:

Let's pretend that we don't know that heart attacks at t=1 are independent of taking the drug at t=0, so we construct a simple linear regression model to estimate the effect of the drug (taken at t=0) on heart attacks at t=1. Here our predictor would be Drug t=0 and our outcome variable would be Heart Attack t=1. The only data we have is people who survive at t=3, so we'll run our regression on that data.

Here's the 95% Bayesian credible interval for the coefficient of Drug t=0:

Much of the probability, as we can see, is greater than 0, so it looks like there's an effect! However, we know a priori that there is 0 effect. The mathematics of causation as developed by Judea Pearl and others makes it much easier to see that there will be bias in this example (due to conditioning on a descendant of a collider). Judea's work implies that in this situation, we should use the full data set (i.e. not look only at the people who survived), which will remove the biased paths.

Here's the 95% credible interval when looking at the full data set (i.e. not conditioning on those who survived). It is densely centered at 0, which essentially shows no association at all.

In real-life examples, things might not be so simple. There may be many more variables that might cause systematic bias (confounding, selection bias, etc.). What to adjust for in analyses has been mathematized by Pearl; algorithms can suggest which variables to adjust for, or even tell us when adjusting is not enough to remove systematic bias. With this formal theory in place, we don't need to spend so much time arguing about what to adjust for and what not to adjust for; we can quickly reach conclusions as to whether or not our results are sound. We can design our experiments better, and we can analyze observational data more easily.

Here's a freely available online course on causal DAGs by Miguel Hernán. It has a bunch of real-life case studies where professors / scientists / statisticians have come to opposite conclusions about the question at hand. Some of them might seem like paradoxes. However, you can easily solve them via Judea Pearl's d-separation and backdoor criterion.

For reference, here's the code for the data-generating process and for the credible intervals shown above:

import numpy as np
import pandas as pd
import statsmodels as sm
import pymc3 as pm
from sklearn.linear_model import LinearRegression

%matplotlib inline

# notice that taking the drug is independent of heart attack at time 1.
# heart_attack_time_1 doesn't "listen" to take_drug_t_0
take_drug_t_0 = np.random.binomial(n=1, p=0.7, size=10000)
heart_attack_time_1 = np.random.binomial(n=1, p=0.4, size=10000)

proba_heart_attack_time_2 = []

# heart_attack_time_1 increases the probability of heart_attack_time_2. Let's say
# it's because it weakens the heart and makes it more susceptible to further
# injuries
#
# Yet, take_drug_t_0 decreases the probability of heart attacks happening at
# time 2
for drug_t_0, heart_attack_t_1 in zip(take_drug_t_0, heart_attack_time_1):
    if drug_t_0 == 0 and heart_attack_t_1 == 0:
        proba_heart_attack_time_2.append(0.1)
    elif drug_t_0 == 1 and heart_attack_t_1 == 0:
        proba_heart_attack_time_2.append(0.1)
    elif drug_t_0 == 0 and heart_attack_t_1 == 1:
        proba_heart_attack_time_2.append(0.5)
    elif drug_t_0 == 1 and heart_attack_t_1 == 1:
        proba_heart_attack_time_2.append(0.05)

heart_attack_time_2 = np.random.binomial(
    n=2, p=proba_heart_attack_time_2, size=10000
)

# people who've had a heart attack at time 2 are more likely to die by time 3
proba_survive_t_3 = []
for heart_attack_t_2 in heart_attack_time_2:
    if heart_attack_t_2 == 0:
        proba_survive_t_3.append(0.95)
    else:
        proba_survive_t_3.append(0.6)

survive_t_3 = np.random.binomial(
    n=1, p=proba_survive_t_3, size=10000
)

df = pd.DataFrame(
    {
        'survive_t_3': survive_t_3,
        'take_drug_t_0': take_drug_t_0,
        'heart_attack_time_1': heart_attack_time_1,
        'heart_attack_time_2': heart_attack_time_2
    }
)

# we only have access to data of the people who survived
survive_t_3_data = df[
    df['survive_t_3'] == 1
]

survive_t_3_X = survive_t_3_data[['take_drug_t_0']]

lr = LinearRegression()
lr.fit(survive_t_3_X, survive_t_3_data['heart_attack_time_1'])
lr.coef_

with pm.Model() as collider_bias_model_normal:
    alpha = pm.Normal(name='alpha', mu=0, sd=1)
    take_drug_t_0 = pm.Normal(name='take_drug_t_0', mu=0, sd=1)
    summation = alpha + take_drug_t_0 * survive_t_3_data['take_drug_t_0']
    sigma = pm.Exponential('sigma', lam=1)
    pm.Normal(
        name='observed',
        mu=summation,
        sd=sigma,
        observed=survive_t_3_data['heart_attack_time_1']
    )

    collider_bias_normal_trace = pm.sample(2000, tune=1000)

pm.plot_posterior(collider_bias_normal_trace['take_drug_t_0'])

with pm.Model() as no_collider_bias_model_normal:
    alpha = pm.Normal(name='alpha', mu=0, sd=1)
    take_drug_t_0 = pm.Normal(name='take_drug_t_0', mu=0, sd=1)
    summation = alpha + take_drug_t_0 * df['take_drug_t_0']
    sigma = pm.Exponential('sigma', lam=1)
    pm.Normal(
        name='observed',
        mu=summation,
        sd=sigma,
        observed=df['heart_attack_time_1']
    )

    no_collider_bias_normal_trace = pm.sample(2000, tune=2000)

pm.plot_posterior(no_collider_bias_normal_trace['take_drug_t_0'])
The Book of Why by Judea Pearl: Why is he bashing statistics?
a simple linear regression is essentially a causal model Here's an example I came up with where a linear regression model fails to be causal. Let's say a priori that a drug was taken at time 0 (t=0)
1,975
The Book of Why by Judea Pearl: Why is he bashing statistics?
Two papers, the second a classic, that help (I think) shed additional light on Judea's points and this topic more generally. This comes from someone who has used SEM (which is correlation and regression) repeatedly and resonates with his critiques: https://www.sciencedirect.com/science/article/pii/S0022103111001466 http://psycnet.apa.org/record/1973-20037-001 Essentially the papers describe why correlational models (regression) cannot ordinarily be taken as implying any strong causal inference. Any pattern of associations can fit a given covariance matrix (i.e., non-specification of direction and/or relationship among the variables). Hence the need for such things as experimental design, counterfactual propositions, etc. This even applies when one has a temporal structure to the data where the putative cause occurs in time before the putative effect.
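A minimal sketch of that non-identifiability point, using only base R and made-up variable names: the same simulated data are fit in both causal directions, and goodness of fit alone cannot distinguish the two stories.

set.seed(1)
x <- rnorm(1000)
y <- 0.6 * x + rnorm(1000)      # data actually generated as x -> y

summary(lm(y ~ x))$r.squared    # model assuming x causes y
summary(lm(x ~ y))$r.squared    # model assuming y causes x -- identical R^2
cor(x, y)^2                     # both equal the squared correlation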
1,976
The Book of Why by Judea Pearl: Why is he bashing statistics?
"...since we are essentially assuming that one variable is the cause and another is the effect (hence correlation is different approach from regression modelling)..." Regression modeling most definitely does NOT make this assumption. "... and testing whether this causal relationship explains the observed patterns." If you are assuming causality and validating it against observations, your are doing SEM modeling, or what Pearl would call SCM modeling. Whether or not you want to call that part of the domain of stats is debatable. But I think most wouldn't call it classical stats. Rather than dumping on stats in general, I believe Pearl is just criticizing statistician's reticence to address causal semantics. He considers this a serious problem because of what Carl Sagan calls the "get in and get out" phenomenon, where you drop a study that says "meat consumption 'strongly associated' with increased libido, p < .05" and then bows out knowing full well the two outcomes are going to be causally linked in the mind of the public.
1,977
Solving for regression parameters in closed-form vs gradient descent
Unless the closed-form solution is extremely expensive to compute, it generally is the way to go when it is available. However, for most nonlinear regression problems there is no closed-form solution. Even in linear regression (one of the few cases where a closed-form solution is available), it may be impractical to use the formula. The following example shows one way in which this can happen. For linear regression on a model of the form $y=X\beta$, where $X$ is a matrix with full column rank, the least squares solution, $\hat{\beta} = \arg \min \| X \beta -y \|_{2}$ is given by $\hat{\beta}=(X^{T}X)^{-1}X^{T}y$ Now, imagine that $X$ is a very large but sparse matrix. e.g. $X$ might have 100,000 columns and 1,000,000 rows, but only 0.001% of the entries in $X$ are nonzero. There are specialized data structures for storing only the nonzero entries of such sparse matrices. Also imagine that we're unlucky, and $X^{T}X$ is a fairly dense matrix with a much higher percentage of nonzero entries. Storing a dense 100,000 by 100,000 element $X^{T}X$ matrix would then require $1 \times 10^{10}$ floating point numbers (at 8 bytes per number, this comes to 80 gigabytes). This would be impractical to store on anything but a supercomputer. Furthermore, the inverse of this matrix (or more commonly a Cholesky factor) would also tend to have mostly nonzero entries. However, there are iterative methods for solving the least squares problem that require no more storage than $X$, $y$, and $\hat{\beta}$ and never explicitly form the matrix product $X^{T}X$. In this situation, using an iterative method is much more computationally efficient than using the closed-form solution to the least squares problem. This example might seem absurdly large. However, large sparse least squares problems of this size are routinely solved by iterative methods on desktop computers in seismic tomography research.
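To make the "never form $X^{T}X$" point concrete, here is a minimal conjugate-gradient least-squares (CGLS) sketch in R, assuming the Matrix package for sparse storage; the function name cgls and the toy problem sizes are invented for illustration, and production code would use a tested iterative solver instead.

library(Matrix)   # sparse matrix classes and rsparsematrix()

# CGLS: solves min || X b - y ||_2 using only products with X and t(X);
# the cross-product matrix t(X) %*% X is never formed.
cgls <- function(X, y, iters = 200, tol = 1e-8) {
  beta  <- rep(0, ncol(X))
  r     <- as.vector(y - X %*% beta)    # residual in data space
  s     <- as.vector(crossprod(X, r))   # t(X) %*% r
  p     <- s
  gamma <- sum(s^2)
  for (k in seq_len(iters)) {
    q     <- as.vector(X %*% p)
    alpha <- gamma / sum(q^2)
    beta  <- beta + alpha * p
    r     <- r - alpha * q
    s     <- as.vector(crossprod(X, r))
    gamma_new <- sum(s^2)
    if (sqrt(gamma_new) < tol) break
    p     <- s + (gamma_new / gamma) * p
    gamma <- gamma_new
  }
  beta
}

# Tiny sparse example; the same loop runs unchanged on much larger problems.
set.seed(1)
X <- rsparsematrix(2000, 100, density = 0.01)
b <- rnorm(100)
y <- as.vector(X %*% b) + rnorm(2000, sd = 0.01)
beta_hat <- cgls(X, y)

# Sanity check against the normal equations (formed here only because this
# example is small enough to afford it).
max(abs(beta_hat - as.vector(solve(crossprod(X), crossprod(X, y)))))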
1,978
Solving for regression parameters in closed-form vs gradient descent
UPDATE For linear regression, it's a one-step procedure, so iteration of any kind is not needed. For logistic regression, the Newton-Raphson iterative approach uses the second partial derivatives of the objective function w.r.t. each coefficient, as well as the first partial derivatives, so it converges much faster than gradient descent, which only uses the first partial derivatives. OP There have been several posts on machine learning (ML) and regression. ML is not needed for solving ordinary least squares (OLS), since it involves a one-step matrix sandwiching operation for solving a system of linear equations -- i.e., $\boldsymbol{\beta}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$. The fact that everything is linear means that only a one-step operation is needed to solve for the coefficients. Logistic regression is based on maximizing the likelihood function $L=\prod_i{p_i}$, which can be solved using Newton-Raphson, or other ML gradient ascent methods, or metaheuristics (hill climbing, genetic algorithms, swarm intelligence, ant colony optimization, etc.). Regarding parsimony, use of ML for OLS would be wasteful because iterative learning is inefficient for solving OLS. Now, back to your real question on derivatives vs. ML approaches to solving gradient-based problems. Specifically, for logistic regression, the Newton-Raphson (derivative-based) approach is commonly used. Newton-Raphson requires that you know the objective function and its partial derivatives w.r.t. each parameter (continuous in the limit and differentiable). ML is mostly used when the objective function is too complex ("gnarly") and you don't know the derivatives. For example, an artificial neural network (ANN) can be used to solve either a function approximation problem or a supervised classification problem when the function is not known. In this case, the ANN is the function. Don't make the mistake of using ML methods to solve a logistic regression problem just because you can. For logistic regression, Newton-Raphson is extremely fast and is the appropriate technique for solving the problem. ML is commonly used when you don't know what the function is. (By the way, ANNs are from the field of computational intelligence, not ML.)
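As a hedged sketch of the Newton-Raphson idea for logistic regression, here is a minimal R implementation; the function name logistic_newton and the simulated data are invented for illustration, and in practice glm() already does this via iteratively reweighted least squares.

# Each Newton-Raphson step uses the score (first derivatives) and the
# observed information (second derivatives) of the log-likelihood.
logistic_newton <- function(X, y, iters = 25, tol = 1e-10) {
  beta <- rep(0, ncol(X))
  for (k in seq_len(iters)) {
    eta  <- as.vector(X %*% beta)
    p    <- plogis(eta)                 # fitted probabilities
    grad <- crossprod(X, y - p)         # first partial derivatives
    W    <- p * (1 - p)                 # weights from the second partial derivatives
    hess <- crossprod(X, X * W)         # t(X) %*% diag(W) %*% X
    step <- solve(hess, grad)
    beta <- beta + as.vector(step)
    if (max(abs(step)) < tol) break
  }
  beta
}

set.seed(42)
X <- cbind(1, matrix(rnorm(500 * 2), ncol = 2))
y <- rbinom(500, 1, plogis(X %*% c(-0.5, 1, 2)))

cbind(newton = logistic_newton(X, y),
      glm    = coef(glm(y ~ X[, -1], family = binomial)))   # the two should agree to many digits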
1,979
If mean is so sensitive, why use it in the first place?
In a sense, the mean is used because it is sensitive to the data. If the distribution happens to be symmetric and the tails are about like the normal distribution, the mean is a very efficient summary of central tendency. The median, while being robust and well-defined for any continuous distribution, is only $\frac{2}{\pi}$ as efficient as the mean if the data happened to come from a normal distribution. It is this relative inefficiency of the median that keeps us from using it even more than we do. The relative inefficiency translates into a minor absolute inefficiency as the sample size gets large, so for large $n$ we can be more guilt-free about using the median. It is interesting to note that for a measure of variation (spread, dispersion), there is a very robust estimator that is 0.98 as efficient as the standard deviation, namely Gini's mean difference. This is the mean absolute difference between any two observations. [You have to multiply the sample standard deviation by a constant to estimate the same quantity estimated by Gini's mean difference.] An efficient measure of central tendency is the Hodges-Lehmann estimator, i.e., the median of all pairwise means. We would use it more if its interpretation were simpler.
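A small simulation sketch in R of these efficiency claims; the sample size, replication count, and the use of all pairwise means (including self-pairs) for the Hodges-Lehmann estimator are choices made here purely for illustration.

set.seed(123)
n <- 100; reps <- 5000
sims <- replicate(reps, {
  x  <- rnorm(n)
  hl <- median(outer(x, x, "+") / 2)   # median of pairwise means
  c(mean = mean(x), median = median(x), hodges_lehmann = hl)
})

apply(sims, 1, var)                                  # sampling variance of each estimator
var(sims["mean", ]) / var(sims["median", ])          # roughly 2/pi for normal data
var(sims["mean", ]) / var(sims["hodges_lehmann", ])  # much closer to 1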
1,980
If mean is so sensitive, why use it in the first place?
Lots of great answers already, but, taking a step back and getting a little more basic, I'd say it's because the answer you get depends on the question you ask. The mean and median answer different questions - sometimes one is appropriate, sometimes the other. It's simple to say that the median should be used when there are outliers, or for skewed distributions, or whatever. But that's not always the case. Take income - nearly always reported with median, and usually that's right. But if you are looking at the spending power of a whole community, it may not be right. And in some cases, even the mode might be best (esp. if the data are grouped).
1,981
If mean is so sensitive, why use it in the first place?
When a value is garbage for us we call it an "outlier" and want the analysis to be robust to it (and prefer the median); when that same value is attractive we call it "extreme" and want the analysis to be sensitive to it (and prefer the mean). Dialectics... The mean reacts equally to a shift of a value irrespective of where in the distribution the shift takes place. For example, in 1 2 3 4 5 you may increase any value by 2 - the increase of the mean will be the same. The median's reaction is less "consistent": add 2 to data points 4 or 5, and the median won't increase; but add 2 to point 2 - so that the shift passes over the median - and the median changes dramatically (by more than the mean will change). The mean is always exactly located. The median is not; for example, in the set 1 2 3 4 any value between 2 and 3 can be called the median. Thus, analyses based on medians do not always have a unique solution. The mean is the locus of the minimal sum of squared deviations. Many optimization tasks based on linear algebra (including the famous OLS regression) minimize this squared error and therefore imply the concept of the mean. The median is the locus of the minimal sum of absolute deviations. Optimization techniques to minimize such an error are non-linear and are more complex / less well known.
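A quick numerical check in R of the two "locus" statements (the data vector here is arbitrary):

x <- c(1, 2, 3, 4, 10)

sse <- function(m) sum((x - m)^2)    # sum of squared deviations
sad <- function(m) sum(abs(x - m))   # sum of absolute deviations

optimize(sse, range(x))$minimum      # ~4, i.e. mean(x)
optimize(sad, range(x))$minimum      # ~3, i.e. median(x)
c(mean = mean(x), median = median(x))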
1,982
If mean is so sensitive, why use it in the first place?
There are a lot of answers to this question. Here's one that you probably won't see elsewhere, so I'm including it here because I believe it's pertinent to the topic. People often believe that because the median is considered a robust measure with respect to outliers, it's also robust to almost everything. In fact, it's also considered robust to bias in skewed distributions. These two robust properties of the median are often taught together. One might note that underlying skewed distributions also tend to generate small samples that look like they have outliers, and conventional wisdom is that one should use medians in such situations.

# function to generate random values from a skewed distribution
rexg <- function (n, m, sig, tau) {
  rexp(n, rate = 1/tau) + rnorm(n, mean = m, sd = sig)
}

(just a demonstration that this is skewed and the basic shape)

hist(rexg(1e4, 0, 1, 1))

Now, let's see what happens if we sample from this distribution with various sample sizes and calculate the median and mean to see what the differences between them are.

# generate values with various n's
N <- 1e4
ns <- 2:30

y <- sapply(ns, function(x) mean(apply(matrix(rexg(x*N, 0, 1, 1), ncol = N), 2, median)))
plot(ns, y, type = 'l', ylim = c(0.85, 1.03), col = 'red')

y <- sapply(ns, function(x) mean(colMeans(matrix(rexg(x*N, 0, 1, 1), ncol = N))))
lines(ns, y)

As can be seen from the above plot, the median (in red) is much more sensitive to the n than the mean. This is contrary to some conventional wisdom regarding using medians with low ns, especially if the distribution might be skewed. And it reinforces the point that what the mean estimates does not depend on n, while the median is sensitive to other properties, one of which is the n. This analysis is similar to Miller, J. (1988). A warning about median reaction time. Journal of Experimental Psychology: Human Perception and Performance, 14(3):539–543.

REVISION

Upon thinking about the skew issue, I considered that the impact on the median might just be because in small samples you have a greater probability that the median is in the tail of the distribution, whereas the mean will almost always be weighted by values closer to the mode. Therefore, perhaps if one was just sampling with a probability of outliers then maybe the same results would occur. So I thought about situations where outliers may occur and experimenters may attempt to eliminate them. If outliers happened consistently, such as one in every single sampling of data, then medians would be robust against the effect of this outlier and the conventional story about the use of medians would hold. But that's not usually how things go. One might find an outlier in very few cells of an experiment and decide to use the median instead of the mean in this case. Again, the median is more robust, but its actual impact is relatively small because there are very few outliers. This would definitely be a more common case than the one above, but the effect of using a median would probably be so small that it wouldn't matter much. Perhaps more commonly, outliers might be a random component of the data. For example, the true mean and standard deviation of the population may be about 0, but there's a percentage of the time we sample from an outlier population where the mean is 4. Consider the following simulation, where just such a population is sampled, varying the sample size.

# generate n samples N times with an outp probability
# of an outlier
rout <- function (n, N, outp) {
  outPos <- sample(0:1, n*N, replace = TRUE, prob = c(1-outp, outp))
  numOutliers <- sum(outPos)
  y <- matrix(rnorm(N*n), ncol = N)
  y[which(outPos == 1)] <- rnorm(numOutliers, 4)
  return(y)
}

outp <- 0.1
N <- 1e4
ns <- 3:30

yMed <- sapply(ns, function(x) mean(apply(rout(x, N, outp), 2, median)))
var(yMed)
yM <- sapply(ns, function(x) mean(colMeans(rout(x, N, outp))))
var(yM)

plot(ns, yMed, type = 'l', ylim = range(c(yMed, yM)), ylab = 'Y', xlab = 'n', col = 'red')
lines(ns, yM)

The median is in red and the mean in black. This is a similar finding to that of the skewed distribution. In a relatively practical example of the use of medians to avoid the effects of outliers, one can come up with situations where the estimate is affected by n much more when the median is used than when the mean is used.
1,983
If mean is so sensitive, why use it in the first place?
From the mean it's easy to calculate the sum over all items, e.g. if you know the average income of the population and the size of the population, you can immediately calculate the total income of the entire population. The mean is straightforward to calculate in O(n) time complexity. Calculating the median in linear time is possible but requires more thought; the obvious solution, which requires sorting, has worse time complexity (O(n log n)). And I speculate that there is another reason for the mean being more popular than the median: the mean is taught to more people at school, and it is probably taught before the median.
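A brief sketch of the expected-linear-time idea mentioned above, written in R for illustration only (the function quickselect is invented here; in practice you would just call median()):

# expected linear-time selection of the k-th smallest value (quickselect)
quickselect <- function(x, k) {
  repeat {
    if (length(x) == 1) return(x[1])
    pivot  <- x[sample(length(x), 1)]
    lows   <- x[x < pivot]
    pivots <- x[x == pivot]
    highs  <- x[x > pivot]
    if (k <= length(lows)) {
      x <- lows
    } else if (k <= length(lows) + length(pivots)) {
      return(pivot)
    } else {
      k <- k - length(lows) - length(pivots)
      x <- highs
    }
  }
}

x <- rnorm(10001)                                   # odd length: the median is a single order statistic
quickselect(x, (length(x) + 1) / 2) == median(x)    # TRUE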
1,984
If mean is so sensitive, why use it in the first place?
"It is a known that median is resistant to outliers. If that is the case, when and why would we use the mean in the first place?" In cases one knows there are no outliers, for example when one knows the data-generating process (for example in mathematical statistics). One should point out the trivial, that, these two quantities (mean and median) are actually not measuring the same thing and that most users ask for the former when what they really ought to be interested in the latter (this point is well illustrated by the median-based Wilcoxon tests which are more readily interpreted than the t-tests). Then, there are the cases where for some happenstance reason or another, some regulation imposes the use of he mean.
1,985
If mean is so sensitive, why use it in the first place?
If the concern is over the presence of outliers, there are some straightforward ways to check your data. Outliers, almost by definition, come into our data when something changes either in the process generating the data or in the process collecting the data, i.e., the data ceases to be homogeneous. If your data is not homogeneous then neither the mean nor the median make much sense, since you are trying to estimate the central tendency of two separate data sets that have been mixed together. The best method to ensure homogeneity is to examine the data-generating and -collection processes to ensure that all of your data is coming from a single set of processes. Nothing beats a little brain-power here. As a secondary check, you can turn to one of several statistical tests: chi-squared, Dixon's Q-test, Grubbs' test, or the control chart / process behavior chart (typically X-bar R or XmR). My experience is that, when your data can be ordered as it was collected, the process behavior charts are better at detecting outliers than the outlier tests. This use for the charts may be somewhat controversial, but I believe it is entirely consistent with Shewhart's original intent and it is a use that is explicitly advocated by Donald Wheeler. Whether you use the outlier tests or the process behavior charts, remember that a detected "outlier" is merely signalling potential non-homogeneity that needs to be further examined. It rarely makes sense to throw out data points if you don't have some explanation for why they were outliers. If you are using R, the outliers package provides the outlier tests, and for process behavior charts there are the qcc, IQCC and qAnalyst packages. I have a personal preference for the usage and output of the qcc package.
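For concreteness, here is a hedged sketch of how the tools named above might be called, assuming the outliers and qcc packages are installed; the data are invented, and the chart type "xbar.one" is used because the toy observations arrive one at a time (an individuals / XmR-style chart).

library(outliers)
library(qcc)

set.seed(7)
x <- c(rnorm(30, mean = 10), 14)   # 30 ordinary observations plus one suspicious value

grubbs.test(x)                     # Grubbs' test for a single outlier
qcc(x, type = "xbar.one")          # individuals chart; the last point should signal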
1,986
If mean is so sensitive, why use it in the first place?
When might you want the mean? Examples from finance:

Bond returns: The median bond return will generally be a few percentage points. The mean bond return might be low or high depending on the default rate and recovery in default. The median will ignore all this! Good luck explaining to your investors, "I know our fund is down 40% this year because almost half our bonds went bust with no recovery, but our median bond returned 1%!"

Venture capital returns: Same thing in reverse. The median VC or angel investment is a bust, and all the return comes from a few winners! (Side note/warning: estimates of venture capital or private equity returns are highly problematic... be careful!)

When forming a diversified portfolio, deciding what to invest in and how much, the mean and covariance of returns are likely to factor prominently into your optimization problem.
1,987
If mean is so sensitive, why use it in the first place?
We use the mean more than the median because it is additive, in two senses. (I am surprised that in 11 years, no one has really said this!) If data on a population is broken down into data about men and data about women, then for example: \begin{align} \text{average overall height =} &\text{average height for men} * \text{fraction of men}\\ & + \text{average height of women} * \text{fraction of women} \end{align} So means are additive over merged populations: the mean of a large population is the average of means over subpopulations, weighted by their sizes. No analog holds for medians. If we have data on paired variables for a single population, then for example: \begin{align} \text{average income after tax =} &\text{ average income before tax}\\ & - \text{average taxes paid} \end{align} So means are additive over paired variables; the mean of a sum or difference is the sum or difference of the means. No analog holds for medians. If we have data broken down by day of the week, then for example: \begin{align} \text{average sales for a week = } &\text{average sales for a Sunday} + \\ &\cdots + \text{average sales for a Saturday} \end{align} This could be considered an example of either of the above situations. Again, no analog holds for medians. Because means are additive, you can calculate a mean from smaller pieces of data. But to calculate a median usually requires having one comprehensive data source for everything. This is a much bigger practical issue than a little loss of efficiency in a summary statistic. The geometric mean also has these same properties, with addition replaced by multiplication, additive replaced by multiplicative, etc. In general, the only statistics with both of these aggregative properties are transformed versions of the mean, of the form $f^{-1}(E[f(X)])$, for some increasing function $f$. E.g., the geometric mean is the case with $f(x)=\ln(x)$, $f^{-1}(x)=e^x$. The general result can be seen as a corollary of the von Neumann-Morgenstern theorem on expected utility.
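A quick numerical illustration of the first aggregation property, sketched in R with made-up subgroup sizes and height distributions:

set.seed(1)
men      <- rnorm(300, mean = 175, sd = 7)   # hypothetical heights (cm)
women    <- rnorm(700, mean = 162, sd = 6)
everyone <- c(men, women)

# means aggregate exactly from subgroup means and subgroup sizes...
all.equal(weighted.mean(c(mean(men), mean(women)), c(300, 700)), mean(everyone))   # TRUE

# ...but the same recipe applied to medians does not recover the overall median
weighted.mean(c(median(men), median(women)), c(300, 700))
median(everyone)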
1,988
Interpreting plot.lm()
As stated in the documentation, plot.lm() can return 6 different plots:

[1] a plot of residuals against fitted values,
[2] a Scale-Location plot of sqrt(| residuals |) against fitted values,
[3] a Normal Q-Q plot,
[4] a plot of Cook's distances versus row labels,
[5] a plot of residuals against leverages, and
[6] a plot of Cook's distances against leverage/(1-leverage).

By default, the first three and 5 are provided. (my numbering)

Plots [1], [2], [3] & [5] are returned by default. Interpreting [1] is discussed on CV here: Interpreting residuals vs. fitted plot for verifying the assumptions of a linear model. I explained the assumption of homoscedasticity and the plots that can help you assess it (including scale-location plots [2]) on CV here: What does having constant variance in a linear regression model mean? I have discussed qq-plots [3] on CV here: QQ plot does not match histogram and here: PP-plots vs. QQ-plots. There is also a very good overview here: How to interpret a QQ-plot?

So, what's left is primarily just understanding [5], the residual-leverage plot. To understand this, we need to understand three things: leverage, standardized residuals, and Cook's distance.

To understand leverage, recognize that Ordinary Least Squares regression fits a line that will pass through the center of your data, $(\bar X,~\bar Y)$. The line can be shallowly or steeply sloped, but it will pivot around that point like a lever on a fulcrum. We can take this analogy fairly literally: because OLS seeks to minimize the vertical distances between the data and the line*, the data points that are further out towards the extremes of $X$ will push / pull harder on the lever (i.e., the regression line); they have more leverage. One result of this could be that the results you get are driven by a few data points; that's what this plot is intended to help you determine.

Another result of the fact that points further out on $X$ have more leverage is that they tend to be closer to the regression line (or more accurately: the regression line is fit so as to be closer to them) than points that are near $\bar X$. In other words, the residual standard deviation can differ at different points on $X$ (even if the error standard deviation is constant). To correct for this, residuals are often standardized so that they have constant variance (assuming the underlying data generating process is homoscedastic, of course).

One way to think about whether or not the results you have were driven by a given data point is to calculate how far the predicted values for your data would move if your model were fit without the data point in question. This calculated total distance is called Cook's distance. Fortunately, you don't have to rerun your regression model $N$ times to find out how far the predicted values will move; Cook's D is a function of the leverage and standardized residual associated with each data point.

With these facts in mind, consider the plots associated with four different situations:

a dataset where everything is fine
a dataset with a high-leverage, but low-standardized residual point
a dataset with a low-leverage, but high-standardized residual point
a dataset with a high-leverage, high-standardized residual point

The plots on the left show the data, the center of the data $(\bar X,~\bar Y)$ with a blue dot, the underlying data generating process with a dashed gray line, the model fit with a blue line, and the special point with a red dot. On the right are the corresponding residual-leverage plots; the special point is 21. The model is badly distorted primarily in the fourth case where there is a point with high leverage and a large (negative) standardized residual. For reference, here are the values associated with the special points:

                                leverage   std.residual     cooks.d
high leverage, low residual    0.3814234      0.0014559   0.0000007
low leverage, high residual    0.0476191      3.4456341   0.2968102
high leverage, high residual   0.3814234     -3.8086475   4.4722437

Below is the code I used to generate these plots:

set.seed(20)
x1 = rnorm(20, mean=20, sd=3)
y1 = 5 + .5*x1 + rnorm(20)

x2 = c(x1, 30);    y2 = c(y1, 20.8)
x3 = c(x1, 19.44); y3 = c(y1, 20.8)
x4 = c(x1, 30);    y4 = c(y1, 10)

* For help understanding how OLS regression seeks to find the line that minimizes the vertical distances between the data and the line, see my answer here: What is the difference between linear regression on y with x and x with y?
Interpreting plot.lm()
As stated in the documentation, plot.lm() can return 6 different plots: [1] a plot of residuals against fitted values, [2] a Scale-Location plot of sqrt(| residuals |) against fitted values, [3] a
Interpreting plot.lm() As stated in the documentation, plot.lm() can return 6 different plots: [1] a plot of residuals against fitted values, [2] a Scale-Location plot of sqrt(| residuals |) against fitted values, [3] a Normal Q-Q plot, [4] a plot of Cook's distances versus row labels, [5] a plot of residuals against leverages, and [6] a plot of Cook's distances against leverage/(1-leverage). By default, the first three and 5 are provided. (my numbering) Plots [1], [2], [3] & [5] are returned by default. Interpreting [1] is discussed on CV here: Interpreting residuals vs. fitted plot for verifying the assumptions of a linear model. I explained the assumption of homoscedasticity and the plots that can help you assess it (including scale-location plots [2]) on CV here: What does having constant variance in a linear regression model mean? I have discussed qq-plots [3] on CV here: QQ plot does not match histogram and here: PP-plots vs. QQ-plots. There is also a very good overview here: How to interpret a QQ-plot? So, what's left is primarily just understanding [5], the residual-leverage plot. To understand this, we need to understand three things: leverage, standardized residuals, and Cook's distance. To understand leverage, recognize that Ordinary Least Squares regression fits a line that will pass through the center of your data, $(\bar X,~\bar Y)$. The line can be shallowly or steeply sloped, but it will pivot around that point like a lever on a fulcrum. We can take this analogy fairly literally: because OLS seeks to minimize the vertical distances between the data and the line*, the data points that are further out towards the extremes of $X$ will push / pull harder on the lever (i.e., the regression line); they have more leverage. One result of this could be that the results you get are driven by a few data points; that's what this plot is intended to help you determine. Another result of the fact that points further out on $X$ have more leverage is that they tend to be closer to the regression line (or more accurately: the regression line is fit so as to be closer to them) than points that are near $\bar X$. In other words, the residual standard deviation can differ at different points on $X$ (even if the error standard deviation is constant). To correct for this, residuals are often standardized so that they have constant variance (assuming the underlying data generating process is homoscedastic, of course). One way to think about whether or not the results you have were driven by a given data point is to calculate how far the predicted values for your data would move if your model were fit without the data point in question. This calculated total distance is called Cook's distance. Fortunately, you don't have to rerun your regression model $N$ times to find out how far the predicted values will move, Cook's D is a function of the leverage and standardized residual associated with each data point. With these facts in mind, consider the plots associated with four different situations: a dataset where everything is fine a dataset with a high-leverage, but low-standardized residual point a dataset with a low-leverage, but high-standardized residual point a dataset with a high-leverage, high-standardized residual point The plots on the left show the data, the center of the data $(\bar X,~\bar Y)$ with a blue dot, the underlying data generating process with a dashed gray line, the model fit with a blue line, and the special point with a red dot. 
On the right are the corresponding residual-leverage plots; the special point is 21. The model is badly distorted primarily in the fourth case where there is a point with high leverage and a large (negative) standardized residual. For reference, here are the values associated with the special points:

                               leverage   std.residual     cooks.d
 high leverage, low residual   0.3814234      0.0014559   0.0000007
 low leverage, high residual   0.0476191      3.4456341   0.2968102
 high leverage, high residual  0.3814234     -3.8086475   4.4722437

Below is the code I used to generate these plots:

set.seed(20)
x1 = rnorm(20, mean=20, sd=3)
y1 = 5 + .5*x1 + rnorm(20)

x2 = c(x1, 30);    y2 = c(y1, 20.8)
x3 = c(x1, 19.44); y3 = c(y1, 20.8)
x4 = c(x1, 30);    y4 = c(y1, 10)

* For help understanding how OLS regression seeks to find the line that minimizes the vertical distances between the data and the line, see my answer here: What is the difference between linear regression on y with x and x with y?
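Not part of the original answer, but assuming the data-generating code above has been run, the fitted models, the residual-leverage panel [5], and the influence quantities tabulated above can be obtained directly in base R:

fit1 <- lm(y1 ~ x1); fit2 <- lm(y2 ~ x2)
fit3 <- lm(y3 ~ x3); fit4 <- lm(y4 ~ x4)

# Residuals vs. leverage (plot [5]) for the high-leverage, high-residual case
plot(fit4, which = 5)

# Influence quantities for the added 21st point (compare with the table above)
tail(hatvalues(fit4), 1)        # leverage
tail(rstandard(fit4), 1)        # standardized residual
tail(cooks.distance(fit4), 1)   # Cook's distance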
Interpreting plot.lm() As stated in the documentation, plot.lm() can return 6 different plots: [1] a plot of residuals against fitted values, [2] a Scale-Location plot of sqrt(| residuals |) against fitted values, [3] a
1,989
When is unbalanced data really a problem in Machine Learning?
Not a direct answer, but it's worth noting that in the statistical literature, some of the prejudice against unbalanced data has historical roots. Many classical models simplify neatly under the assumption of balanced data, especially for methods like ANOVA that are closely related to experimental design—a traditional / original motivation for developing statistical methods. But the statistical / probabilistic arithmetic gets quite ugly, quite quickly, with unbalanced data. Prior to the widespread adoption of computers, the by-hand calculations were so extensive that estimating models on unbalanced data was practically impossible. Of course, computers have basically rendered this a non-issue. Likewise, we can estimate models on massive datasets, solve high-dimensional optimization problems, and draw samples from analytically intractable joint probability distributions, all of which were functionally impossible like, fifty years ago. It's an old problem, and academics sank a lot of time into working on the problem...meanwhile, many applied problems outpaced / obviated that research, but old habits die hard... Edit to add: I realize I didn't come out and just say it: there isn't a low level problem with using unbalanced data. In my experience, the advice to "avoid unbalanced data" is either algorithm-specific, or inherited wisdom. I agree with AdamO that in general, unbalanced data poses no conceptual problem to a well-specified model.
When is unbalanced data really a problem in Machine Learning?
Not a direct answer, but it's worth noting that in the statistical literature, some of the prejudice against unbalanced data has historical roots. Many classical models simplify neatly under the assum
When is unbalanced data really a problem in Machine Learning? Not a direct answer, but it's worth noting that in the statistical literature, some of the prejudice against unbalanced data has historical roots. Many classical models simplify neatly under the assumption of balanced data, especially for methods like ANOVA that are closely related to experimental design—a traditional / original motivation for developing statistical methods. But the statistical / probabilistic arithmetic gets quite ugly, quite quickly, with unbalanced data. Prior to the widespread adoption of computers, the by-hand calculations were so extensive that estimating models on unbalanced data was practically impossible. Of course, computers have basically rendered this a non-issue. Likewise, we can estimate models on massive datasets, solve high-dimensional optimization problems, and draw samples from analytically intractable joint probability distributions, all of which were functionally impossible like, fifty years ago. It's an old problem, and academics sank a lot of time into working on the problem...meanwhile, many applied problems outpaced / obviated that research, but old habits die hard... Edit to add: I realize I didn't come out and just say it: there isn't a low level problem with using unbalanced data. In my experience, the advice to "avoid unbalanced data" is either algorithm-specific, or inherited wisdom. I agree with AdamO that in general, unbalanced data poses no conceptual problem to a well-specified model.
When is unbalanced data really a problem in Machine Learning? Not a direct answer, but it's worth noting that in the statistical literature, some of the prejudice against unbalanced data has historical roots. Many classical models simplify neatly under the assum
1,990
When is unbalanced data really a problem in Machine Learning?
Whether unbalanced data is a problem depends on your application. If, for example, your data indicate that A happens 99.99% of the time and B happens 0.01% of the time, and you try to predict a certain result, your algorithm will probably always say A. This is of course correct! It is unlikely for your method to get better prediction accuracy than 99.99%. However, in many applications we are not interested in just the correctness of the prediction but also in why B happens sometimes. This is where unbalanced data becomes a problem, because it is hard to convince your method that it can do better than being 99.99% correct. The method is correct, but not for your question. So dealing with unbalanced data is basically intentionally biasing your data to get interesting results instead of accurate results. All methods are vulnerable, although SVM and logistic regression tend to be a little less vulnerable while decision trees are very vulnerable. In general there are three cases: (1) You are purely interested in accurate prediction and you think your data is representative. In this case you do not have to correct at all. Bask in the glory of your 99.99% accurate predictions :). (2) You are interested in prediction, your data come from a fair sample, but somehow you lost a number of observations. If you lost observations in a completely random way, you're still fine. If you lost them in a biased way but you don't know how biased, you will need new data. However, if these observations were lost only on the basis of one characteristic (for example, you sorted the results into A and B but lost half of B), you can bootstrap your data. (3) You are not interested in accurate global prediction, but only in the rare case. In this case you can inflate the data of that case by bootstrapping it or, if you have enough data, by throwing away data of the other cases. Notice that this does bias your data and results, so estimated chances (probabilities) and that kind of result will be wrong! In general it mostly depends on what the goal is. Some goals suffer from unbalanced data, others don't. All general prediction methods suffer from it, because otherwise they would give terrible results in general.
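As a quick illustration of the "inflate the rare case" option in case (3), here is a minimal base-R sketch (with made-up data and illustrative variable names, not from the original answer) that oversamples the rare class B by resampling it with replacement:

set.seed(42)
n <- 10000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-4 + x))            # y == 1 is the rare class B
table(y)                                     # heavily imbalanced

idx_B  <- which(y == 1)
idx_up <- sample(idx_B, size = sum(y == 0), replace = TRUE)   # bootstrap-inflate B
x_bal  <- c(x[y == 0], x[idx_up])
y_bal  <- c(y[y == 0], y[idx_up])
table(y_bal)                                 # now balanced

# Any model fit to (x_bal, y_bal) now "sees" B half the time; as noted above,
# its estimated probabilities for B will be biased upward relative to reality.
fit <- glm(y_bal ~ x_bal, family = binomial)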
When is unbalanced data really a problem in Machine Learning?
Unbalanced data is only a problem depending on your application. If for example your data indicates that A happens 99.99% of the time and 0.01% of the time B happens and you try to predict a certain r
When is unbalanced data really a problem in Machine Learning? Unbalanced data is only a problem depending on your application. If for example your data indicates that A happens 99.99% of the time and 0.01% of the time B happens and you try to predict a certain result your algorithm will probably always say A. This is of course correct! It is unlikely for your method to get better prediction accuracy than 99.99%. However in many applications we are not interested in just the correctness of the prediction but also in why B happens sometimes. This is where unbalanced data becomes a problem. Because it is hard to convince your method that it can predict better than 99.99% correct. The method is correct but not for your question. So solving unbalanced data is basically intentionally biasing your data to get interesting results instead of accurate results. All methods are vulnerable although SVM and logistic regressions tend to be a little less vulnerable while decision trees are very vulnerable. In general there are three cases: You are purely interested in accurate prediction and you think your data is representative. In this case you do not have to correct at all. Bask in the glory of your 99.99% accurate predictions :). You are interested in prediction but your data is from a fair sample but somehow you lost a number of observations. If you lost observations in a completely random way you're still fine. If you lost them in a biased way but you don't know how biased, you will need new data. However if these observations are lost only on the basis of one charateristic. (for example you sorted results in A and B but not in any other way but lost half of B) Ypu can bootstrap your data. You are not interested in accurate global prediction, but only in a rare case. In this case you can inflate the data of that case by bootstrapping the data or if you have enough data throwing a way data of the other cases. Notice that this does bias your data and results and so chances and that kind of results are wrong! In general it mostly depends on what the goal is. Some goals suffer from unbalanced data others don't. All general prediction methods suffer from it because otherwise they would give terrible results in general.
When is unbalanced data really a problem in Machine Learning? Unbalanced data is only a problem depending on your application. If for example your data indicates that A happens 99.99% of the time and 0.01% of the time B happens and you try to predict a certain r
1,991
When is unbalanced data really a problem in Machine Learning?
WLOG you can focus on imbalance in a single factor, rather than a more nuanced concept of "data sparsity", or small cell counts. In statistical analyses not focused on learning, we are faced with the issue of providing adequate inference while controlling for one or more effects through adjustment, matching, or weighting. All of these have similar power and yield similar estimates to propensity score matching. Propensity score matching will balance the covariates in the analysis set. They all end up being "the same" in terms of reducing bias, maintaining efficiency because they block confounding effects. With imbalanced data, you may naively believe that your data are sufficiently large, but with a sparse number of people having the rarer condition: variance inflation diminishes power substantially, and it can be difficult to "control" for effects when those effects are strongly associated with the predictor and outcome. Therefore, at least in regression (but I suspect in all circumstances), the only problem with imbalanced data is that you effectively have smaller sample size than the $N$ might represent. If any method is suitable for the number of people in the rarer class, there should be no issue if their proportion membership is imbalanced.
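One way to see the "effectively smaller sample size" point is through the approximate standard error of the estimated log-odds of the rare class, which depends on the two class counts rather than on the total N. This is a back-of-the-envelope sketch, not from the original answer:

# Approximate SE of the estimated log-odds: sqrt(1/n1 + 1/n0),
# with n1 rare-class cases and n0 everyone else
se_logodds <- function(n, p) {
  n1 <- n * p
  n0 <- n * (1 - p)
  sqrt(1 / n1 + 1 / n0)
}
se_logodds(100000, 0.001)   # 100 rare cases among 100,000 observations -> ~0.10
se_logodds(200,    0.5)     # 100 cases in a balanced sample of 200     -> ~0.14

# No matter how large n0 gets, the SE cannot drop below sqrt(1/n1) = 0.10 here:
# the information is limited mainly by the 100 rare cases, not by the total N.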
When is unbalanced data really a problem in Machine Learning?
WLOG you can focus on imbalance in a single factor, rather than a more nuanced concept of "data sparsity", or small cell counts. In statistical analyses not focused on learning, we are faced with the
When is unbalanced data really a problem in Machine Learning? WLOG you can focus on imbalance in a single factor, rather than a more nuanced concept of "data sparsity", or small cell counts. In statistical analyses not focused on learning, we are faced with the issue of providing adequate inference while controlling for one or more effects through adjustment, matching, or weighting. All of these have similar power and yield similar estimates to propensity score matching. Propensity score matching will balance the covariates in the analysis set. They all end up being "the same" in terms of reducing bias, maintaining efficiency because they block confounding effects. With imbalanced data, you may naively believe that your data are sufficiently large, but with a sparse number of people having the rarer condition: variance inflation diminishes power substantially, and it can be difficult to "control" for effects when those effects are strongly associated with the predictor and outcome. Therefore, at least in regression (but I suspect in all circumstances), the only problem with imbalanced data is that you effectively have smaller sample size than the $N$ might represent. If any method is suitable for the number of people in the rarer class, there should be no issue if their proportion membership is imbalanced.
When is unbalanced data really a problem in Machine Learning? WLOG you can focus on imbalance in a single factor, rather than a more nuanced concept of "data sparsity", or small cell counts. In statistical analyses not focused on learning, we are faced with the
1,992
When is unbalanced data really a problem in Machine Learning?
I know I'm late to the party, but: the theory behind the data imbalance problem has been beautifully worked out by Sugiyama (2000) and a huge number of highly cited papers following that, under the keyword "covariate shift adaptation". There is also a whole book devoted to this subject by Sugiyama / Kawanabe from 2012, called "Machine Learning in Non-Stationary Environments". For some reason, this branch of research is only rarely mentioned in discussions about learning from imbalanced datasets, possibly because people are unaware of it? The gist of it is this: data imbalance is a problem if a) your model is misspecified, and b) you're either interested in good performance on a minority class or you're interested in the model itself. The reason can be illustrated very simply: if the model does not describe reality correctly, it will minimize the deviation from the most frequently observed type of samples (figure taken from Berk et al. (2018)): I will try to give a very brief summary of the technical main idea of Sugiyama. Suppose your training data are drawn from a distribution $p_{\mathrm{train}}(x)$, but you would like the model to perform well on data drawn from another distribution $p_{\mathrm{target}}(x)$. This is what's called "covariate shift", and it can also simply mean that you would like the model to work equally well on all regions of the data space, i.e. $p_{\mathrm{target}}(x)$ may be a uniform distribution. Then, instead of minimizing the expected loss over the training distribution $$ \theta^* = \arg \min_\theta E[\ell(x, \theta)]_{p_{\text{train}}} \approx \arg \min_\theta \frac{1}{N}\sum_{i=1}^N \ell(x_i, \theta)$$ as one would usually do, one minimizes the expected loss over the target distribution: $$ \theta^* = \arg \min_\theta E[\ell(x, \theta)]_{p_{\text{target}}} \\ = \arg \min_\theta E\left[\frac{p_{\text{target}}(x)}{p_{\text{train}}(x)}\ell(x, \theta)\right]_{p_{\text{train}}} \\ \approx \arg \min_\theta \frac{1}{N}\sum_{i=1}^N \underbrace{\frac{p_{\text{target}}(x_i)}{p_{\text{train}}(x_i)}}_{=w_i} \ell(x_i, \theta)$$ In practice, this amounts to simply weighting individual samples by their importance $w_i$. The key to practically implementing this is an efficient method for estimating the importance, which is generally nontrivial. This is one of the main topics of papers on this subject, and many methods can be found in the literature (keyword "Direct importance estimation"). All the oversampling / undersampling / SMOTE techniques people use are essentially just different hacks for implementing importance weighting, I believe.
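Not from the papers cited above, but a minimal sketch of the importance-weighting idea for the simplest (label-shift) special case, where only the class frequencies differ between the training and target distributions. Here the target is assumed to be a 50/50 class mix, and the weighted negative log-likelihood of a logistic model is minimized directly:

set.seed(1)
n <- 5000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-3 + 2 * x))        # imbalanced training sample

p1 <- mean(y)                                # training frequency of class 1
w  <- ifelse(y == 1, 0.5 / p1, 0.5 / (1 - p1))   # w_i = p_target / p_train for each case

# Importance-weighted empirical risk (negative log-likelihood), minimized numerically
nll <- function(beta) {
  p <- plogis(beta[1] + beta[2] * x)
  -sum(w * dbinom(y, size = 1, prob = p, log = TRUE))
}
fit_w <- optim(c(0, 0), nll)
fit_w$par                                    # coefficients under the target weighting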
When is unbalanced data really a problem in Machine Learning?
I know I'm late to the party, but: the theory behind the data imbalance problem has been beautifully worked out by Sugiyama (2000) and a huge number of highly cited papers following that, under the ke
When is unbalanced data really a problem in Machine Learning? I know I'm late to the party, but: the theory behind the data imbalance problem has been beautifully worked out by Sugiyama (2000) and a huge number of highly cited papers following that, under the keyword "covariate shift adaptation". There is also a whole book devoted to this subject by Sugiyama / Kawanabe from 2012, called "Machine Learning in Non-Stationary Environments". For some reason, this branch of research is only rarely mentioned in discussions about learning from imbalanced datasets, possibly because people are unaware of it? The gist of it is this: data imbalance is a problem if a) your model is misspecified, and b) you're either interested in good performance on a minority class or you're interested in the model itself. The reason can be illustrated very simply: if the model does not describe reality correctly, it will minimize the deviation from the most frequently observed type of samples (figure taken from Berk et al. (2018)): I will try to give a very brief summary of the technical main idea of Sugiyama. Suppose your training data are drawn from a distribution $p_{\mathrm{train}}(x)$, but you would like the model to perform well on data drawn from another distribution $p_{\mathrm{target}}(x)$. This is what's called "covariate shift", and it can also simply mean that you would like the model to work equally well on all regions of the data space, i.e. $p_{\mathrm{target}}(x)$ may be a uniform distribution. Then, instead of minimizing the expected loss over the training distribution $$ \theta^* = \arg \min_\theta E[\ell(x, \theta)]_{p_{\text{train}}} \approx \arg \min_\theta \frac{1}{N}\sum_{i=1}^N \ell(x_i, \theta)$$ as one would usually do, one minimizes the expected loss over the target distribution: $$ \theta^* = \arg \min_\theta E[\ell(x, \theta)]_{p_{\text{target}}} \\ = \arg \min_\theta E\left[\frac{p_{\text{target}}(x)}{p_{\text{train}}(x)}\ell(x, \theta)\right]_{p_{\text{train}}} \\ \approx \arg \min_\theta \frac{1}{N}\sum_{i=1}^N \underbrace{\frac{p_{\text{target}}(x_i)}{p_{\text{train}}(x_i)}}_{=w_i} \ell(x_i, \theta)$$ In practice, this amounts to simply weighting individual samples by their importance $w_i$. The key to practically implementing this is an efficient method for estimating the importance, which is generally nontrivial. This is one of the main topics of papers on this subject, and many methods can be found in the literature (keyword "Direct importance estimation"). All the oversampling / undersampling / SMOTE techniques people use are essentially just different hacks for implementing importance weighting, I believe.
When is unbalanced data really a problem in Machine Learning? I know I'm late to the party, but: the theory behind the data imbalance problem has been beautifully worked out by Sugiyama (2000) and a huge number of highly cited papers following that, under the ke
1,993
When is unbalanced data really a problem in Machine Learning?
Let's assume we have two classes: A, representing 99.99% of the population, and B, representing 0.01% of the population. Let's assume we are interested in identifying class B elements, which could be individuals affected by a rare disease, or fraudsters. Just by guessing A, learners would score well on their loss functions, and the very few incorrectly classified elements might not move the needle, numerically (a needle in a haystack, in this case). This example brings the intuition behind one of the "tricks" to mitigate the class imbalance problem: tweaking the cost function. I feel that unbalanced data is a problem when models show near-zero sensitivity and near-one specificity. See the example in this article under the section "ignoring the problem". Problems often have a solution. Alongside the aforementioned trick, there are other options. However, they come at a price: an increase in model and computational complexity. The question asks which models are more likely to settle on near-zero sensitivity and near-one specificity. I feel that it depends on a few dimensions: less capacity makes it more likely, as usual, and some cost functions might struggle more than others: mean squared error (MSE) is less exposed than Huber loss, since MSE is less benign towards incorrectly classified class B elements.
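As a small sketch of the cost-function trick (illustrative numbers and simulated data, not from the original answer): with a likelihood-based model, misclassifying B can be made more costly simply by up-weighting B cases, for example via the weights argument of glm in R:

set.seed(7)
x <- rnorm(5000)
y <- rbinom(5000, 1, plogis(-4 + 1.5 * x))   # y == 1 is the rare class B
w <- ifelse(y == 1, 99, 1)                   # errors on B weigh 99x more in the likelihood

fit_plain    <- glm(y ~ x, family = binomial)
fit_weighted <- glm(y ~ x, family = binomial, weights = w)

# The weighted fit pushes predicted probabilities for B upward,
# so many more cases cross a 0.5 threshold:
mean(predict(fit_plain,    type = "response") > 0.5)
mean(predict(fit_weighted, type = "response") > 0.5)

A closely related option is to keep the unweighted model and simply move the classification threshold instead.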
When is unbalanced data really a problem in Machine Learning?
Let's assume we have two classes: A, representing 99.99% of the population B, representing 0.01% of the population Let's assume we are interested in identifying class B elements, that could be indiv
When is unbalanced data really a problem in Machine Learning? Let's assume we have two classes: A, representing 99.99% of the population B, representing 0.01% of the population Let's assume we are interested in identifying class B elements, that could be individuals affected by a rare disease or fraudster. Just by guessing A learners would score high on their loss-functions and the very few incorrectly classified elements might not move, numerically, the needle (in a haystack, in this case). This example brings the intuition behind one of the "tricks" to mitigate the class imbalance problem: tweaking the cost function. I feel that unbalanced data is a problem when models show near-zero sensitivity and near-one specificity. See the example in this article under the section "ignoring the problem". Problems have often a solution. Alongside the aforementioned trick, there are other options. However, they come at a price: an increase in model and computational complexity. The question asks which models are more likely to settle on near-zero sensitivity and near-one specificity. I feel that it depends on a few dimensions: Less capacity, as usual. Some cost functions might struggle more than others: mean squared error (MSE) is less exposed than Huber - MSE should be less benign towards incorrectly classified B class elements.
When is unbalanced data really a problem in Machine Learning? Let's assume we have two classes: A, representing 99.99% of the population B, representing 0.01% of the population Let's assume we are interested in identifying class B elements, that could be indiv
1,994
When is unbalanced data really a problem in Machine Learning?
If you think about it: On a perfectly separable highly imbalanced data set, almost any algorithm will perform without errors. Hence, it is more a problem of noise in data and less tied to a particular algorithm. And you don't know beforehand which algorithm compensates for one particular type of noise best. In the end you just have to try different methods and decide by cross validation.
When is unbalanced data really a problem in Machine Learning?
If you think about it: On a perfectly separable highly imbalanced data set, almost any algorithm will perform without errors. Hence, it is more a problem of noise in data and less tied to a particular
When is unbalanced data really a problem in Machine Learning? If you think about it: On a perfectly separable highly imbalanced data set, almost any algorithm will perform without errors. Hence, it is more a problem of noise in data and less tied to a particular algorithm. And you don't know beforehand which algorithm compensates for one particular type of noise best. In the end you just have to try different methods and decide by cross validation.
When is unbalanced data really a problem in Machine Learning? If you think about it: On a perfectly separable highly imbalanced data set, almost any algorithm will perform without errors. Hence, it is more a problem of noise in data and less tied to a particular
1,995
When is unbalanced data really a problem in Machine Learning?
Great answers above, and not sure how much I can add here, but I feel there are three things to consider with imbalanced data, and new trade-offs you'll have to consider when rebalancing. I'd like to frame this in the context of predicting a minority outcome (a common task with imbalanced classes): 1. By resampling, you may improve overall accuracy, but it's usually the case that with severe class imbalance, you're actually trying to predict or otherwise describe features of the minority class. The best evaluation metrics here would then be F1 scores, precision, recall and the like. 2. The resampling process (whether by SMOTE, or undersampling the majority class, etc.) disrupts the naturally occurring distribution of your data, and training performed on these artificially created classes will usually perform poorly when applied back to the natural distribution. This creates sampling bias, in a sense. 3. In my own work, I've found that a Random Forest Classifier does a somewhat better job than Logistic Regression, although it's pretty "data hungry" and will require enough minority samples (see point #2). It really depends on the question you're trying to answer and the types of data you have available. It's quite possible you may already have enough of the minority class to make useful predictions. Consider a class imbalance of 100:1. Would 1,000 majority samples and 10 minority samples make for a useful classifier? Of course not. But what about 1,000,000 majority and 10,000 minority samples? The model you select may have enough of the minority outcome to make useful predictions. It's the minority count - not the proportion relative to the majority class - that is ultimately important. A more general point is that we've become obsessed with correcting class imbalances, as if it were the central problem. Left by the wayside are the far more important tasks of feature engineering and proper model selection that are necessary for predicting minority outcomes. I wrote a piece about this here: Why Balancing Classes is Over-Hyped
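For the metrics mentioned in point 1, a tiny base-R helper (the labels below are made up for illustration; 1 is the minority class of interest) makes the definitions concrete:

prf <- function(truth, pred) {
  tp <- sum(pred == 1 & truth == 1)
  fp <- sum(pred == 1 & truth == 0)
  fn <- sum(pred == 0 & truth == 1)
  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)
  f1        <- 2 * precision * recall / (precision + recall)
  c(precision = precision, recall = recall, F1 = f1)
}

truth <- c(1, 1, 1, 0, 0, 0, 0, 0, 0, 0)
pred  <- c(1, 0, 1, 0, 0, 0, 1, 0, 0, 0)
prf(truth, pred)   # precision 0.667, recall 0.667, F1 0.667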
When is unbalanced data really a problem in Machine Learning?
Great answers above, and not sure how much I can add here, but I feel there are three things to consider with imbalanced data, and new trade-offs you'll have to consider when rebalancing. I'd like to
When is unbalanced data really a problem in Machine Learning? Great answers above, and not sure how much I can add here, but I feel there are three things to consider with imbalanced data, and new trade-offs you'll have to consider when rebalancing. I'd like to frame this in the context of predicting a minority outcome (a common task with imbalanced classes): By resampling, you may improve overall accuracy, but usually its the case that with severe class imbalance, you're actually trying to predict or otherwise describe features of the minority class. The best evaluation metrics here would then be F1 scores, precision, recall and the like. The resampling process (whether by SMOTE, or undersampling the majority class, etc.) disrupts the distribution of your data which naturally occurs, and training performed on these artificially created classes will usually perform poorly when applied to back to natural distributions. This creates sampling bias, in a sense. In my own work, I've found Random Forest Classifier does a bit better job than Logistic Regression, although it's pretty "data hungry" and will require enough minority samples (see point #2). It really depends on the question you're trying to answer and the types of data you have available. It's quite possible you may already have enough of the minority class to make useful predictions. Consider a class imbalance of 100:1. Would 1000 majority samples and 10 minority samples make for a useful classifier? Of course not. But what about 1,000,000 majority and 10,000 minority samples? The model you select may have enough of the minority outcome to make useful predictions. It's minority count - not relative proportion to the majority class - that is ultimately important. More of a general point is that we've become obsessed with correcting class imbalances, as if its a central problem. Left by the wayside are the far more important tasks of feature engineering and proper model selection as is necessary for predicting minority outcomes. I wrote a piece about this here: Why Balancing Classes is Over-Hyped
When is unbalanced data really a problem in Machine Learning? Great answers above, and not sure how much I can add here, but I feel there are three things to consider with imbalanced data, and new trade-offs you'll have to consider when rebalancing. I'd like to
1,996
When is unbalanced data really a problem in Machine Learning?
For me the most important problem with unbalanced data is the baseline estimator. For example, suppose you have two classes with a 90% / 10% sample distribution. But what does this imply for a dummy or naive classifier? You can find out by comparing your model with the baseline's performance. You can always predict the most frequent label in the training set, so your model has to do better than, or at least as well as, 90% accuracy (this is the baseline)! Typical baselines include those supported by scikit-learn's "dummy" estimators: Classification baselines: "stratified": generates predictions by respecting the training set's class distribution. "most_frequent": always predicts the most frequent label in the training set. "prior": always predicts the class that maximizes the class prior.
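A base-R analogue of the "most_frequent" strategy (scikit-learn itself is Python; the code below is only an illustration with simulated labels):

set.seed(3)
y_train <- rbinom(1000, 1, 0.10)                     # roughly 90/10 class split
y_test  <- rbinom(1000, 1, 0.10)

majority      <- as.integer(names(which.max(table(y_train))))
baseline_pred <- rep(majority, length(y_test))

mean(baseline_pred == y_test)   # ~0.90 accuracy without learning anything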
When is unbalanced data really a problem in Machine Learning?
For me the most important problem with unbalanced data is the baseline estimator. For example, you have two classes with 90% and 10% sample distribution. But what does this mean for a dummy or naive c
When is unbalanced data really a problem in Machine Learning? For me the most important problem with unbalanced data is the baseline estimator. For example, you have two classes with 90% and 10% sample distribution. But what does this mean for a dummy or naive classifier? You can infer this meaning by comparing it with a baseline’s performance. You can always predict the most frequent label in the training set, so the model has to be better or at least than 90% (this is the baseline)! Typical baselines include those supported by scikit-learn’s "dummy" estimators: Classification baselines: "stratified": generates predictions by respecting the training set’s class distribution. "most_frequent": always predicts the most frequent label in the training set. "prior": always predicts the class that maximizes the class prior.
When is unbalanced data really a problem in Machine Learning? For me the most important problem with unbalanced data is the baseline estimator. For example, you have two classes with 90% and 10% sample distribution. But what does this mean for a dummy or naive c
1,997
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
Personally I remember the difference between precision and recall (a.k.a. sensitivity) by thinking about information retrieval: Recall is the fraction of the documents that are relevant to the query that are successfully retrieved, hence its name (in English recall = the action of remembering something). Precision is the fraction of the documents retrieved that are relevant to the user's information need. Somehow you take a few shots and if most of them got their target (relevant documents) then you have a high precision, regardless of how many shots you fired (number of documents that got retrieved).
What is the best way to remember the difference between sensitivity, specificity, precision, accurac
Personally I remember the difference between precision and recall (a.k.a. sensitivity) by thinking about information retrieval: Recall is the fraction of the documents that are relevant to the query
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall? Personally I remember the difference between precision and recall (a.k.a. sensitivity) by thinking about information retrieval: Recall is the fraction of the documents that are relevant to the query that are successfully retrieved, hence its name (in English recall = the action of remembering something). Precision is the fraction of the documents retrieved that are relevant to the user's information need. Somehow you take a few shots and if most of them got their target (relevant documents) then you have a high precision, regardless of how many shots you fired (number of documents that got retrieved).
What is the best way to remember the difference between sensitivity, specificity, precision, accurac Personally I remember the difference between precision and recall (a.k.a. sensitivity) by thinking about information retrieval: Recall is the fraction of the documents that are relevant to the query
1,998
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
For precision and recall, each is the true positive (TP) as the numerator divided by a different denominator. Precision: TP / Predicted positive Recall: TP / Real positive
What is the best way to remember the difference between sensitivity, specificity, precision, accurac
For precision and recall, each is the true positive (TP) as the numerator divided by a different denominator. Precision: TP / Predicted positive Recall: TP / Real positive
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall? For precision and recall, each is the true positive (TP) as the numerator divided by a different denominator. Precision: TP / Predicted positive Recall: TP / Real positive
What is the best way to remember the difference between sensitivity, specificity, precision, accurac For precision and recall, each is the true positive (TP) as the numerator divided by a different denominator. Precision: TP / Predicted positive Recall: TP / Real positive
1,999
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
Mnemonics neatly eliminate man’s only nemesis: insufficient cerebral storage. There is SNOUT SPIN: A Sensitive test, when Negative rules OUT disease A Specific test, when Positive, rules IN a disease. I imagine a pig spinning around in a centrifuge, perhaps in preparation for going into space, to help me remember this mnemonic. Humming the theme to Tail Spin with the words appropriately changed can help the musically inclined from a certain generation. I am not aware of any others.
What is the best way to remember the difference between sensitivity, specificity, precision, accurac
Mnemonics neatly eliminate man’s only nemesis: insufficient cerebral storage. There is SNOUT SPIN: A Sensitive test, when Negative rules OUT disease A Specific test, when Positive, rules IN a disease
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall? Mnemonics neatly eliminate man’s only nemesis: insufficient cerebral storage. There is SNOUT SPIN: A Sensitive test, when Negative rules OUT disease A Specific test, when Positive, rules IN a disease. I imagine a pig spinning around in a centrifuge, perhaps in preparation for going into space, to help me remember this mnemonic. Humming the theme to Tail Spin with the words appropriately changed can help the musically inclined from a certain generation. I am not aware of any others.
What is the best way to remember the difference between sensitivity, specificity, precision, accurac Mnemonics neatly eliminate man’s only nemesis: insufficient cerebral storage. There is SNOUT SPIN: A Sensitive test, when Negative rules OUT disease A Specific test, when Positive, rules IN a disease
2,000
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
I agree that the terms are very non-intuitive and hard to correspond to their formulas. Here are some diagrams with the mnemonic tricks that I have developed, and now I have them all solidly memorized. Classification matrix First, here are the mnemonics summarized in two common variations of the classification matrix (also called confusion matrix). My mnemonics work for either variation, so just focus on the variation of the matrix with which you are more familiar. Note: TP = true positives TN = true negatives FP = false positives FN = false negatives Short version of mnemonics Here is the short form of my mnemonics; the details below explain the logic underlying them, which should help in memorizing what they mean: Accuracy: correct predictions divided by all predictions: (TP+TN)/(TP+TN+FP+FN) Precision and Recall: focus on true positives PREcision is TP divided by PREdicted positive: TP/(TP+FP) REcAll is TP divided by REAl positive: TP/(TP+FN) Sensitivity and Specificity: focus on correct predictions SNIP (SeNsitivity Is Positive): TP/(TP+FN) SPIN (SPecificity Is Negative): TN/(TN+FP) Detailed explanation of logic underlying the mnemonics, corresponding to their intrinsic meaning Accuracy: overall results Accuracy is actually quite intuitive and usually presents no difficulty in memorization. It is simply the correct (true) predictions divided by all predictions, whether true or false. $$ Accuracy = \frac{correct\:predictions}{all\:predictions} = \frac{TP + TN}{TP + TN + FP + FN} $$ Precision and Recall: focus on true positives The essence of precision and recall is that they both consider the proportion of true positive results; that is, they are two different ways of measuring how many times the model correctly guessed the class of interest (that is, the positive class). So, TP is always the numerator. The difference between them is in the denominator: whereas precision considers all the values that were predicted to be positive (whether correctly or not), recall considers all the values that actually are positive (whether correctly predicted or not). Precision is the proportion of positive predictions that were correct: $$ Precision = \frac{true\:positive}{\pmb P\pmb R\pmb E dicted\:positive} = \frac{TP}{TP + FP} $$ Recall is the proportion of real or actual positives that were predicted correctly: $$ Recall = \frac{true\:positive}{\pmb R\pmb E\pmb A l\:positive} = \frac{TP}{TP + FN} $$ To differentiate them, you can remember: PREcision is TP divided by PREdicted positive REcAll is TP divided by REAl positive Sensitivity and Specificity: focus on correct predictions The essence of sensitivity and specificity is that they both focus on the proportion of correct predictions. So, the numerator is always a measure of true predictions and the denominator is always the total number of actual cases of the corresponding class. Whereas sensitivity measures the proportion of correctly predicted positives out of all actual positive values, specificity measures the proportion of correctly predicted negatives out of all actual negative values. Sensitivity is the proportion of actual positives that were correctly predicted: $$ Sensitivity = \frac{true\:\pmb Positive}{real\:\pmb Positive} = \frac{TP}{TP + FN} $$ (Note that although the formulas for Recall and Sensitivity are mathematically identical, when recall is paired with precision and sensitivity is paired with specificity, the interpretations and applications of the two measures are rather different.)
Specificity is the proportion of actual negatives that were correctly predicted: $$ Specificity = \frac{true\:\pmb Negative}{real\:\pmb Negative} = \frac{TN}{TN + FP} $$ To remember which is which, remember "snip and spin", but note that the letters P and N are swapped (that is, they spin around and the ends of the long names are snipped): SNIP (SeNsitivity Is Positive) SPIN (SPecificity Is Negative)
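To tie the mnemonics to something executable, here is a small R illustration with made-up cell counts (not part of the original answer), following the formulas above:

TP <- 40; TN <- 50; FP <- 5; FN <- 5

accuracy    <- (TP + TN) / (TP + TN + FP + FN)   # 0.90
precision   <- TP / (TP + FP)                    # TP / predicted positive = 0.889
recall      <- TP / (TP + FN)                    # TP / real positive      = 0.889
sensitivity <- TP / (TP + FN)                    # same formula as recall  = 0.889
specificity <- TN / (TN + FP)                    # TN / real negative      = 0.909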
What is the best way to remember the difference between sensitivity, specificity, precision, accurac
I agree that the terms are very non-intuitive and hard to correspond to their formulas. Here are some diagrams with the mnemonic tricks that I have developed, and now I have them all solidly memorized
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall? I agree that the terms are very non-intuitive and hard to correspond to their formulas. Here are some diagrams with the mnemonic tricks that I have developed, and now I have them all solidly memorized. Classification matrix First, here are the mnemonics summarized in two common variations of the classification matrix (also called confusion matrix). My mnemonics work for either variation, so just focus on the variation of the matrix with which you are more familiar. Note: TP = true positives TN = true negatives FP = false positives FN = false negatives Short version of mnemonics Here is the short form of my mnemonics; the details below explain the logic underlying them, which should help in memorizing what they mean: Accuracy: correct predictions divided by all predictions: TP+TN/(TP+FP+FP+FN) Precision and Recall: focus on true positives PREcision is TP divided by PREdicted positive: TP/(TP+FP) REcAll is TP divided by REAl positive: TP/(TP+FN) Sensitivity and Specificity: focus on correct predictions SNIP (SeNsitivity Is Positive): TP/(TP+FN) SPIN (SPecificity Is Negative): TN/(TN+FP) Detailed explanation of logic underlying the mnemonics, corresponding to their intrinsic meaning Accuracy: overall results Accuracy is actually quite intuitive and usually presents no difficulty in memorization. It is simply the correct (true) predictions divided by all predictions, whether true or false. $$ Accuracy = \frac{correct\:predictions}{all\:predictions} = \frac{TP + TN}{TP + TN + FP + FN} $$ Precision and Recall: focus on true positives The essence of precision and recall is that they both consider the proportion of true positive results; that is, they are two different ways of measuring how many times the model correctly guessed the class of interest (that is, the positive class). So, TP is always the numerator. The difference between them is in the denominator: whereas precision considers all the values that were predicted to be positive (whether correctly or not), recall considers all the values that actually are positive (whether correctly predicted or not). Precision is the proportion of positive predictions that were correct: $$ Precision = \frac{true\:positive}{\pmb P\pmb R\pmb E dicted\:positive} = \frac{TP}{TP + FP} $$ Recall is the proportion of real or actual positives that were predicted correctly: $$ Recall = \frac{true\:positive}{\pmb R\pmb E\pmb A l\:positive} = \frac{TP}{TP + FN} $$ To differentiate them, you can remember: PREcision is TP divided by PREdicted positive REcAll is TP divided by REAl positive Sensitivity and Specificity: focus on correct predictions The essence of sensitivity and specificity is that they both focus on the proportion of correct predictions. So, the numerator is always a measure of true predictions and the denominator is always all the total of corresponding predictions of that class. Whereas sensitivity measures the proportion of correctly predicted positives out of all actual positive values, specificity measures the proportion of correctly predicted negatives out of all actual negative values. 
Sensitivity is the proportion of actual positives that were correctly predicted: $$ Sensitivity = \frac{true\:\pmb Positive}{real\:\pmb Positive} = \frac{TP}{TP + FN} $$ (Note that the although the formulas for Recall and Sensitivity are mathematically identical, when recall is paired with precision and sensitivity is paired with specificity, the interpretations and applications of the two measures are rather different.) Specificity is the proportion of actual negatives that were correctly predicted: $$ Specificity = \frac{true\:\pmb Negative}{real\:\pmb Negative} = \frac{TN}{TN + FP} $$ To remember which is which, remember "snip and spin", but note that the letters P and N are swapped (that is, they spin around and the ends of the long names are snipped): SNIP (SeNsitivity Is Positive) SPIN (SPecificity Is Negative)
What is the best way to remember the difference between sensitivity, specificity, precision, accurac I agree that the terms are very non-intuitive and hard to correspond to their formulas. Here are some diagrams with the mnemonic tricks that I have developed, and now I have them all solidly memorized