Nested cross validation for model selection
In addition to cebeleites' excellent answer (+1), the basic idea is that cross-validation is used to assess the performance of a method for fitting a model, not of the model itself. If you need to perform model selection, then you need to perform it independently in each fold of the cross-validation procedure, as it is an integral part of the model-fitting procedure. If you use a cross-validation-based model selection procedure, this means you end up with nested cross-validation. It is helpful to consider the purpose of each cross-validation: one is for model selection, the other for performance estimation. I would make my final model by fitting the model (including model selection) to the whole dataset, after using nested cross-validation to get an idea of the performance I could reasonably expect from that model.
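As a minimal sketch of this idea, assuming scikit-learn; the estimator, parameter grid, fold counts, and toy dataset are illustrative placeholders, not part of the original answer:

```python
# Hedged sketch: nested cross-validation with scikit-learn.
# The dataset, estimator, and parameter grid below are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Inner CV: model selection (hyperparameter tuning) is part of the fitting procedure.
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)

# Outer CV: estimates the performance of the whole procedure (fitting + selection).
outer_scores = cross_val_score(inner, X, y, cv=5)
print("Estimated generalisation accuracy:", outer_scores.mean())

# Final model: repeat the full procedure (including selection) on all the data.
final_model = inner.fit(X, y).best_estimator_
```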
Nested cross validation for model selection
I don't think anyone really answered the first question. By "nested cross-validation" I think he meant combining it with GridSearch. Usually GridSearch has CV built in and takes a parameter for how many folds we wish to use. Combining the two is, I think, good practice, but the model from GridSearch and cross-validation is not your final model. You should eventually pick the best parameters and train a new model with all your data, or even run another cross-validation here on unseen data, and then, if the model really is that good, train it on all your data. That is your final model.
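A rough sketch of that workflow, assuming scikit-learn; the classifier, parameter grid, and toy data are placeholder choices:

```python
# Hedged sketch of the workflow described above, using scikit-learn
# (estimator, grid, and data are illustrative placeholders).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, random_state=1)

# GridSearch with its built-in CV: picks the best hyperparameters.
search = GridSearchCV(
    RandomForestClassifier(random_state=1),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=5,
)
search.fit(X, y)

# The searched models are not the final model: refit with the chosen
# parameters on all available data to obtain it.
final_model = RandomForestClassifier(random_state=1, **search.best_params_).fit(X, y)
```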
Nested cross validation for model selection
As was already pointed out in cebeleites' answer, the inner and outer CV loops have different purposes: the inner CV loop is used to get the best model, while the outer CV loop can serve several purposes. It can help you estimate, in a less biased way, the generalization error of your top-performing model. Additionally, it gives you insight into the "stability" of your inner CV loop: are the best-performing hyperparameters consistent across the different outer folds? For this information you pay a high price, because you are repeating the optimization procedure k times (k-fold outer CV). If your goal is only to estimate the generalization performance, I would consider another approach, described below, following the paper by Bergstra and Bengio, Random Search for Hyper-Parameter Optimization (roughly 4,000 citations as of 2019).

Goal: run a hyperparameter optimization to get the best model and report, or at least get an idea of, its generalization error. Your available data is only a small portion of a generally unknown distribution. CV can help by giving you a mean of expectations rather than a single expectation, and it can help you choose the best model (the best hyperparameters). You could also skip CV here at the cost of less information (the mean of the expectation over different datasets, and its variance). At the end you would choose the top-performing model out of your inner loop (for example, random search over hyperparameters, with or without CV).

Now you have your "best" model: it is the winner of the hyperparameter-optimization loop. In practice there will be several different models that perform nearly equally well, so when it comes to reporting your test error, you must be careful: "However, when different trials have nearly optimal validation means, then it is not clear which test score to report, and a slightly different choice of λ [single fixed hyperparameter set] could have yielded a different test error. To resolve the difficulty of choosing a winner, we report a weighted average of all the test set scores, in which each one is weighted by the probability that its particular λ(s) is in fact the best." For details, see the paper. It involves calculating the test error of each model you evaluated in the hyperparameter-optimization loop, which should be cheaper than a nested CV. So this technique is an alternative way to estimate the generalization error of a model selected by a hyperparameter-optimization loop (a rough sketch is given below).

NB: in practice, most people just do a single hyperparameter optimization (often with CV) and report the performance on the test set. This can be too optimistic.
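A rough numerical sketch of the weighted-average reporting idea, under the simplifying assumption that each trial's validation score is Gaussian; the validation means, standard errors, and test scores below are made-up numbers:

```python
# Hedged sketch of the weighted-average reporting idea from Bergstra & Bengio.
# Validation means/standard errors and test scores below are made-up numbers;
# the probability of each trial being best is approximated by Monte Carlo,
# treating each trial's validation score as Gaussian.
import numpy as np

rng = np.random.default_rng(0)

val_mean = np.array([0.91, 0.90, 0.89])   # validation means per trial (hyperparameter set)
val_sem  = np.array([0.01, 0.01, 0.02])   # standard errors of those means
test_acc = np.array([0.88, 0.90, 0.86])   # corresponding test-set scores

draws = rng.normal(val_mean, val_sem, size=(100_000, len(val_mean)))
p_best = np.bincount(draws.argmax(axis=1), minlength=len(val_mean)) / draws.shape[0]

reported = np.sum(p_best * test_acc)
print("P(each trial is best):", p_best)
print("Weighted test score to report:", reported)
```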
What's wrong with XKCD's Frequentists vs. Bayesians comic?
The main issue is that the first experiment (the Sun going nova) is not repeatable, which makes it highly unsuitable for frequentist methodology, which interprets probability as an estimate of how frequent an event is, given that we can repeat the experiment many times. In contrast, Bayesian probability is interpreted as our degree of belief given all available prior knowledge, making it suitable for common-sense reasoning about one-time events. The dice-throw experiment is repeatable, but I find it very unlikely that any frequentist would intentionally ignore the influence of the first experiment and be so confident in the significance of the obtained results. Although the author seems to mock frequentist reliance on repeatable experiments and distrust of priors, given how unsuitable the experimental setup is for frequentist methodology, I would say that the real theme of this comic is not frequentist methodology but the blind following of an unsuitable methodology in general. Whether it's funny or not is up to you (for me it is), but I think it misleads more than it clarifies the differences between the two approaches.
What's wrong with XKCD's Frequentists vs. Bayesians comic?
Why does this result seem "wrong"? A Bayesian would say that the result seems counter-intuitive because we have "prior" beliefs about when the sun will explode, and the evidence provided by this machine isn't enough to wash out those beliefs (mostly because of its uncertainty due to the coin flipping). But a frequentist is able to make such an assessment; he simply must do so in the context of data, as opposed to belief.

The real source of the paradox is the fact that the frequentist statistical test performed doesn't take into account all of the data available. There's no problem with the analysis in the comic, but the result seems strange because we know that the sun most likely won't explode for a long time. But HOW do we know this? Because we've made measurements, observations, and simulations that can constrain when the sun will explode. So our full knowledge should take those measurements and data points into account. In a Bayesian analysis, this is done by using those measurements to construct a prior (although the procedure to turn measurements into a prior isn't well defined: at some point there must be an initial prior, or else it's "turtles all the way down"). So, when the Bayesian uses his prior, he's really taking into account a lot of additional information that the frequentist's p-value analysis isn't privy to.

So, to remain on an equal footing, a full frequentist analysis of the problem should include the same additional data about the sun exploding that is used to construct the Bayesian prior. But instead of using priors, a frequentist would simply expand the likelihood that he's using to incorporate those other measurements, and his p-value would be calculated using that full likelihood:

$$L = L(\text{Machine Said Yes} \mid \text{Sun Has Exploded}) \times L(\text{All other data about the sun} \mid \text{Sun Has Exploded})$$

A full frequentist analysis would most likely show that the second part of the likelihood will be much more constraining and will be the dominant contribution to the p-value calculation (because we have a wealth of information about the sun, and the errors on this information are small, hopefully). Practically, one need not go out and collect all the data points obtained over the last 500 years to do a frequentist calculation; one can approximate them as some simple likelihood term that encodes the uncertainty as to whether the sun has exploded or not. This then becomes similar to the Bayesian's prior, but it is slightly different philosophically because it's a likelihood, meaning that it encodes some previous measurement (as opposed to a prior, which encodes some a priori belief). This new term will become part of the likelihood and will be used to build confidence intervals (or p-values or whatever), as opposed to the Bayesian prior, which is integrated over to form credible intervals or posteriors.
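As a toy illustration (a likelihood comparison, not a full p-value calculation) of why the second factor would dominate: only the 35/36 and 1/36 machine terms come from the comic; the "all other data" likelihood values below are invented placeholders.

```python
# Hedged toy illustration of the "full likelihood" idea above. The machine terms
# follow the comic (dice lie with probability 1/36); the "all other data" terms
# are invented placeholders standing in for solar observations.
machine_given_exploded     = 35 / 36   # machine says YES and tells the truth
machine_given_not_exploded = 1 / 36    # machine says YES only by lying
other_data_given_exploded     = 1e-9   # invented: sky/satellite data this consistent with a nova
other_data_given_not_exploded = 0.99   # invented

L_exploded     = machine_given_exploded * other_data_given_exploded
L_not_exploded = machine_given_not_exploded * other_data_given_not_exploded
print("Likelihood ratio (not exploded / exploded):", L_not_exploded / L_exploded)
```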
What's wrong with XKCD's Frequentists vs. Bayesians comic?
As far as I can see, the frequentist bit is reasonable this far: let $H_0$ be the hypothesis that the sun has not exploded and $H_1$ the hypothesis that it has. The p-value is thus the probability of observing the result (the machine saying "yes") under $H_0$. Assuming that the machine correctly detects the presence or absence of neutrinos, then if the machine says "yes" under $H_0$ it is because the machine is lying to us as a result of rolling two sixes. Thus the p-value is 1/36, so following normal quasi-Fisher scientific practice, a frequentist would reject the null hypothesis at the usual 5% significance level.

But rejecting the null hypothesis does not mean that you are entitled to accept the alternative hypothesis, so the frequentist's conclusion is not justified by the analysis. Frequentist hypothesis tests embody the idea of falsificationism (sort of): you can't prove anything is true, only disprove it. So if you want to assert $H_1$, you assume $H_0$ is true and only proceed if you can show that $H_0$ is inconsistent with the data. However, that doesn't mean $H_1$ is true, just that it survives the test and continues as a viable hypothesis, at least as far as the next test.

The Bayesian is also merely using common sense, noting that there is nothing to lose by making the bet. I'm sure frequentist approaches, once the false-positive and false-negative costs are taken into account (Neyman-Pearson?), would draw the same conclusion about the best strategy in terms of long-run gain.

To summarise: both the frequentist and the Bayesian are being sloppy here. The frequentist is sloppy for blindly following a recipe without considering the appropriate significance level, the false-positive/false-negative costs, or the physics of the problem (i.e. not using his common sense). The Bayesian is sloppy for not stating his priors explicitly; but then again, using common sense, the priors he is using are obviously correct (it is much more likely that the machine is lying than that the sun has actually exploded), so the sloppiness is perhaps excusable.
What's wrong with XKCD's Frequentists vs. Bayesians comic?
The greatest problem that I see is that no test statistic is derived. The $p$-value (with all the criticisms that Bayesian statisticians mount against it) for a value $t$ of a test statistic $T$ is defined as ${\rm Prob}[T \ge t \mid H_0]$ (assuming that the null is rejected for greater values of $T$, as would be the case with $\chi^2$ statistics, say). If you need to reach a decision of greater importance, you can increase the critical value and push the rejection region further up. Effectively, that's what multiple-testing corrections like Bonferroni do, instructing you to use a much lower threshold for $p$-values. Instead, the frequentist statistician is stuck here with tests whose sizes lie on the grid $0, 1/36, 2/36, \ldots$.

Of course, this "frequentist" approach is unscientific, as the result will hardly be reproducible. Once the Sun goes supernova, it stays supernova, so the detector should keep saying "yes" again and again. However, a repeated run of this machine is unlikely to yield the "yes" result again. This is recognized in areas that want to present themselves as rigorous and try to reproduce their experimental results... which, as far as I understand, happens with probability anywhere between 5% (publishing the original paper was a pure type I error) and somewhere around 30-40% in some medical fields. Meta-analysis folks can fill you in with better numbers; this is just the buzz that comes across me from time to time through the statistics grapevine.

One other problem from the "proper" frequentist perspective is that rolling a die is the least powerful test, with power equal to the significance level (if not lower; 2.7% power for the 5% significance level is nothing to boast about). Neyman-Pearson theory for t-tests agonizes over demonstrating that a given test is the UMPT (uniformly most powerful test), and a lot of highbrow statistical theory (which I barely understand, I have to admit) is devoted to deriving power curves and finding the conditions under which a given test is the most powerful one in a given class. (Credits: @Dikran Marsupial mentioned the issue of power in one of the comments.)

I don't know if this troubles you, but the Bayesian statistician is shown here as the guy who knows no math and has a gambling problem. A proper Bayesian statistician would postulate the prior, discuss its degree of objectivity, derive the posterior, and demonstrate how much they learned from the data. None of that was done, so the Bayesian process has been oversimplified just as much as the frequentist one has.

This situation demonstrates the classic screening-for-cancer issue (and I am sure biostatisticians can describe it better than I could). When screening for a rare disease with an imperfect instrument, most of the positives come out to be false positives. Smart statisticians know that, and know to follow up cheap-and-dirty screening tests with more expensive and more accurate biopsies.
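To make the screening analogy concrete, here is a small sketch with made-up numbers: the prevalence and sensitivity are invented, and the 1/36 false-positive rate mirrors the comic's lying dice.

```python
# Hedged illustration of the rare-disease screening point above.
# Prevalence and sensitivity are invented for illustration only.
prevalence = 1e-4            # P(disease)
sensitivity = 0.99           # P(test positive | disease)
false_positive_rate = 1 / 36 # P(test positive | no disease), mirroring the comic's dice

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive   # P(disease | test positive), by Bayes' rule
print(f"P(disease | positive) = {ppv:.4f}")   # ~0.0036: most positives are false positives
```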
What's wrong with XKCD's Frequentists vs. Bayesians comic?
I agree with @GeorgeLewis that it may be premature to conclude the Frequentist approach is wrong - let's just rerun the neutrino detector several more times to collect more data. No need to mess around with priors.
What's wrong with XKCD's Frequentists vs. Bayesians comic?
There's nothing wrong with this comic, and the reason has nothing to do with statistics. It's economics. If the frequentist is correct, the Earth will be all but uninhabitable within 48 hours, and the value of \$50 will be effectively null. The Bayesian, recognizing this, can make the bet knowing that his payoff is \$50 in the normal case and that his loss is effectively nothing in the sun-exploded case.
What's wrong with XKCD's Frequentists vs. Bayesians comic?
Now that CERN has decided that neutrinos are not faster than light, the electromagnetic radiation shock front would hit the Earth before the change in neutrinos was noticed. This would have, at the very least (in the very short term), spectacular auroral effects. Thus the fact that it is dark would not prevent the skies from being lit up, the Moon from shining excessively brightly (cf. Larry Niven's "Inconstant Moon"), or spectacular flashes as artificial satellites were vapourised and self-combusted. All in all, perhaps the wrong test? (And whilst there may have been a prior, there would be insufficient time for a realistic determination of the posterior.)
What's wrong with XKCD's Frequentists vs. Bayesians comic?
The answer to your question "does he correctly apply the frequentist methodology?" is no: he does not apply the frequentist approach precisely, and the p-value for this problem is not exactly 1/36.

We first must note that the hypotheses involved are H0: the Sun has not exploded, and H1: the Sun has exploded. Then p-value = P("the machine returns yes" | the Sun hasn't exploded). To compute this probability, note that "the machine returns yes" is equivalent to "the neutrino detector measures the Sun exploding AND tells the true result, OR the neutrino detector does not measure the Sun exploding AND lies to us". Assuming that the dice throw is independent of the neutrino detector measurement, we can compute the p-value by defining

$p_0$ = P("the neutrino detector measures the Sun exploding" | the Sun hasn't exploded).

Then the p-value is

$$\text{p-value} = p_0 \times \tfrac{35}{36} + (1-p_0)\times\tfrac{1}{36} = \tfrac{1}{36}\,(1+34\,p_0).$$

For this problem, the p-value is a number between 1/36 and 35/36, and it equals 1/36 if and only if $p_0=0$. That is, a hidden assumption in this cartoon is that the detector machine will never measure the Sun exploding if the Sun hasn't exploded. Moreover, much more information should be inserted into the likelihood about external evidence of a nova explosion going on. All the best.
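A quick numerical check of that formula for a few arbitrary values of $p_0$:

```python
# Hedged numerical check of the p-value formula derived above,
#   p-value = p0*(35/36) + (1 - p0)*(1/36) = (1 + 34*p0)/36,
# for a few arbitrary values of p0.
for p0 in (0.0, 0.01, 0.5, 1.0):
    p_value = p0 * 35/36 + (1 - p0) * 1/36
    print(f"p0 = {p0:<5}  p-value = {p_value:.4f}")
```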
What's wrong with XKCD's Frequentists vs. Bayesians comic?
A simpler point that may be lost among all the verbose answers here is that the frequentist is depicted drawing his conclusion based upon a single sample. In practice you would never do this. Reaching a valid conclusion requires a statistically significant sample size (or in other words, science needs to be repeatable). So in practice the frequentist would run the machine multiple times and then come to a conclusion about the resulting data. Presumably this would entail asking the machine the same question several more times. And presumably if the machine is only wrong 1 out of every 36 times a clear pattern will emerge. And from that pattern (rather then from one single reading) the frequentist will draw a (fairly accurate, I would say) conclusion regarding whether or not the sun has exploded.
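As a rough sketch of why repetition settles the matter, assuming the die rolls on different runs are independent, the probability that every one of $k$ "yes" answers is a lie shrinks geometrically:

```python
# Hedged sketch: probability that the machine says "YES" on every one of k runs
# purely by lying (i.e., the Sun has not exploded), assuming independent die rolls.
for k in range(1, 6):
    print(f"k = {k}: P(all {k} answers are lies) = {(1/36)**k:.2e}")
```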
What's wrong with XKCD's Frequentists vs. Bayesians comic?
This is of course a frequentist 0.05-level test: the null hypothesis is rejected less than 5% of the time under the null hypothesis, and the power under the alternative is even high. On the other hand, prior information tells us that the sun going supernova at any particular point in time is pretty unlikely, while getting a lie by chance is more likely. Bottom line: there's not really anything wrong with the comic, and it shows that testing implausible hypotheses leads to a high false discovery rate. Additionally, you probably want to take prior information into account in your assessment of offered bets; that's why a Bayesian posterior in combination with decision analysis is so popular.
What's wrong with XKCD's Frequentists vs. Bayesians comic?
I don't see any problem with the frequentist's approach. A type 1 error is rejecting a true null hypothesis, and rejecting whenever the p-value is at most 0.028 (as here, with p = 1/36 ≈ 0.028) means that, among all such tests conducted on true null hypotheses, roughly 3 out of a hundred will reject. By construction, this would be one of those cases. Frequentists accept that sometimes they'll reject a true null hypothesis or retain a false one (a type 2 error); they've never claimed otherwise. Moreover, they precisely quantify the frequency of their erroneous inferences in the long run.

Perhaps a less confusing way of looking at this result is to exchange the roles of the hypotheses. Since the two hypotheses are simple, this is easy to do. If the null is that the sun went nova, then the p-value is 35/36 ≈ 0.972. This means there is no evidence against the hypothesis that the sun went nova, so we can't reject it based on this result. That seems more reasonable.

If you are thinking, "Why would anybody assume the sun went nova?", I would ask you: why would anybody carry out such an experiment if the very thought of the sun exploding seems ridiculous? I think this just shows that one has to assess the usefulness of an experiment beforehand. This experiment, for example, would be completely useless, because it tests something we already know simply from looking up at the sky (which, I'm sure, produces a p-value that is effectively zero). Designing a good experiment is a requirement for producing good science: if your experiment is poorly designed, then no matter what statistical inference tool you use, your results are unlikely to be useful.
What's wrong with XKCD's Frequentists vs. Bayesians comic?
How do we integrate "prior knowledge" about the stability of the sun into the frequentist methodology? A very interesting topic. Here are just some thoughts, not a perfect analysis...

Using the Bayesian approach with a noninformative prior typically provides statistical inference comparable to the frequentist one. Why does the Bayesian have a strong prior belief that the sun has not exploded? Because he knows, as everyone does, that the sun has never exploded since its beginning. On some simple statistical models with conjugate priors, one can see that using a prior distribution is equivalent to using the posterior distribution derived from a noninformative prior and preliminary experiments. This suggests that the frequentist should reach the same conclusion as the Bayesian by including the results of those preliminary experiments in his model. And this is what the Bayesian actually does: his prior comes from his knowledge of the preliminary experiments!

Let $N$ be the age of the sun in days, and $x_i$ the status of the sun (0 = exploded / 1 = not exploded) on day $i$. Assume the $x_i$ are i.i.d. Bernoulli variates with probability of success $\theta$. The realizations of the $x_i$ have been observed: $x_i=1$ for all $i =1,\ldots,N$. In the current problem we have $N+1$ observations: the $x_i$ and the result $y=\{\text{Yes}\}$ of the detector. The natural question is: what is the probability that the sun has exploded, that is, what is $\Pr(x_{N+1}=0) = 1-\theta$? Estimating $\theta$ from the available observations $x_1, \ldots, x_N$ and $y$ yields an estimate very close to $1$, because $N$ is huge and the "unexpected" value $y=\{\text{Yes}\}$ has a negligible impact on the estimate of $\theta$; the estimated probability of explosion is therefore tiny. The Bayesian intends to reflect exactly this information through his prior distribution on $\theta$ (a small numerical sketch follows below).

From this perspective I don't see how to rephrase the question in terms of hypothesis testing. Taking $H_0 =\{\text{the sun has not exploded}\}$ makes no sense, because in my interpretation it is a possible outcome of the experiment, not a true/false hypothesis. Maybe this is the error of the frequentist?
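A small numerical sketch of that estimate, using a uniform Beta(1, 1) prior to stand in for the noninformative prior and a rough figure for the Sun's age; treating the detector's noisy "yes" as if it were a direct observation of a failure is a deliberate simplification.

```python
# Hedged sketch: estimating theta = P(Sun still there on a given day) from
# N days of "not exploded" plus the single detector reading, using a
# Beta(1, 1) (uniform) prior as the noninformative prior. Treating the noisy
# "yes" as one direct "failure" observation is a deliberately crude simplification.
N = int(4.6e9 * 365.25)   # rough age of the Sun in days (an approximation)
successes = N             # every past day: not exploded
failures = 1              # the detector's "YES", taken at face value

# Beta-Bernoulli posterior mean of theta:
theta_hat = (1 + successes) / (2 + successes + failures)
print(f"Estimated P(not exploded) = {theta_hat:.15f}")
print(f"Estimated P(exploded)     = {1 - theta_hat:.2e}")   # ~1.2e-12
```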
What's wrong with XKCD's Frequentists vs. Bayesians comic?
If the frequentist is set on using a p-value to determine whether the sun has exploded, his mistake is that he should be testing at a much lower significance level $\alpha$ than 0.05, since the claim that the sun has gone nova, and that the frequentist is still alive even though photons from the nova have already reached Earth, is so unlikely. In fact, $\alpha$ should be so small that even if the machine were designed to always tell the truth, the frequentist still wouldn't reject the null hypothesis $H_0$ that the sun hasn't gone nova. Let $q$ be the probability that the frequentist hallucinates that the machine reports that the sun has gone nova; $\alpha$ should be less than $q$. $H_0$ should only be rejected if the probability of the observation under $H_0$ is less than $\alpha$, but this probability is bounded below by $q$, so $H_0$ cannot be rejected. ($q$ could also have been defined as the probability that the machine is broken and always reports that the sun has gone nova.)
What's wrong with XKCD's Frequentists vs. Bayesians comic?
In my view, a more correct frequentist analysis would be as follows. H0: the Sun has exploded and the machine is telling the truth. H1: the Sun has not exploded and the machine is lying. The p-value here is P(Sun exploded) × P(machine is telling the truth) = (35/36) × P(Sun exploded) ≈ 0.97 × P(Sun exploded). The statistician cannot conclude anything without knowing that second probability, P(Sun exploded), although we know it is essentially 0, because Sun-like stars do not explode as supernovae.
How does a Support Vector Machine (SVM) work?
Support vector machines focus only on the points that are the most difficult to tell apart, whereas other classifiers pay attention to all of the points. The intuition behind the support vector machine approach is that if a classifier is good at the most challenging comparisons (the points in B and A that are closest to each other in Figure 2), then it will be even better at the easy comparisons (comparing points in B and A that are far away from each other).

Perceptrons and other classifiers: Perceptrons are built by taking one point at a time and adjusting the dividing line accordingly. As soon as all of the points are separated, the perceptron algorithm stops; but it could stop anywhere. Figure 1 shows that there are many different dividing lines that separate the data. The perceptron's stopping criterion is simple: "separate the points and stop improving the line when you get 100% separation". The perceptron is not explicitly told to find the best separating line. Logistic regression and linear discriminant models are built similarly to perceptrons. The best dividing line maximizes the distance between the B points closest to A and the A points closest to B. It's not necessary to look at all of the points to do this; in fact, incorporating feedback from points that are far away can bump the line a little too far, as seen below.

Support Vector Machines: Unlike other classifiers, the support vector machine is explicitly told to find the best separating line. How? The support vector machine searches for the closest points (Figure 2), which it calls the "support vectors" (the name "support vector machine" comes from the fact that points are like vectors and that the best line "depends on", or is "supported by", the closest points). Once it has found the closest points, the SVM draws a line connecting them (see the line labeled 'w' in Figure 2); it draws this connecting line by doing vector subtraction (point A minus point B). The support vector machine then declares the best separating line to be the line that bisects, and is perpendicular to, the connecting line. The support vector machine is better because when you get a new sample (new points), you will already have made a line that keeps B and A as far away from each other as possible, so it is less likely that one will spill over across the line into the other's territory.

I consider myself a visual learner, and I struggled with the intuition behind support vector machines for a long time. The paper Duality and Geometry in SVM Classifiers finally helped me see the light; that's where I got the images from.
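A minimal scikit-learn sketch of these ideas on made-up two-dimensional data; the fitted model exposes exactly those closest points as support_vectors_:

```python
# Hedged sketch: a linear SVM on toy 2-D data; the "support vectors" are the
# closest points from each class, and they alone determine the separating line.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
A = rng.normal(loc=[2, 2], scale=0.5, size=(20, 2))    # class A cloud
B = rng.normal(loc=[-2, -2], scale=0.5, size=(20, 2))  # class B cloud
X = np.vstack([A, B])
y = np.array([1] * 20 + [-1] * 20)

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C behaves like a hard margin
print("Support vectors (the hardest points to tell apart):")
print(clf.support_vectors_)
print("Line normal w:", clf.coef_[0], "intercept b:", clf.intercept_[0])
```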
1,219
How does a Support Vector Machine (SVM) work?
Ryan Zotti's answer explains the motivation behind the maximization of the decision boundaries, and carlosdc's answer gives some similarities and differences with respect to other classifiers. In this answer I'll give a brief mathematical overview of how SVMs are trained and used. Notations In the following, scalars are denoted with italic lowercases (e.g., $y,\, b$), vectors with bold lowercases (e.g., $\mathbf{w},\, \mathbf{x}$), and matrices with italic uppercases (e.g., $W$). $\mathbf{w^T}$ is the transpose of $\mathbf{w}$, and $\|\mathbf{w}\| = \sqrt{\mathbf{w}^T\mathbf{w}}$. Let: $\mathbf{x}$ be a feature vector (i.e., the input of the SVM). $\mathbf{x} \in \mathbb{R}^n$, where $n$ is the dimension of the feature vector. $y$ be the class (i.e., the output of the SVM). $y \in \{ -1,1\}$, i.e. the classification task is binary. $\mathbf{w}$ and $b$ be the parameters of the SVM: we need to learn them using the training set. $(\mathbf{x}^{(i)}, y^{(i)})$ be the $i^ {\text {th}}$ sample in the dataset. Let's assume we have $N$ samples in the training set. With $n=2$, one can represent the SVM's decision boundaries as follows: The class $y$ is determined as follows: $$ y^{(i)}=\left\{ \begin{array}{ll} -1 &\text{ if } \mathbf{w^T}\mathbf{x}^{(i)}+b \leq -1 \\ 1 &\text{ if } \mathbf{w^T}\mathbf{x}^{(i)}+b \ge 1 \\ \end{array} \right. $$ which can be more concisely written as $y^{(i)} (\mathbf{w^T}\mathbf{x}^{(i)}+b) \ge 1$. Goal The SVM aims at satisfying two requirements: The SVM should maximize the distance between the two decision boundaries. Mathematically, this means we want to maximize the distance between the hyperplane defined by $\mathbf{w^T}\mathbf{x}+b = -1$ and the hyperplane defined by $\mathbf{w^T}\mathbf{x}+b = 1$. This distance is equal to $\frac{2}{\|\mathbf{w}\|}$. This means we want to solve $\underset{\mathbf{w}}{\operatorname{max}} \frac{2}{\|\mathbf{w}\|}$. Equivalently we want $\underset{\mathbf{w}}{\operatorname{min}} \frac{\|\mathbf{w}\|}{2}$. The SVM should also correctly classify all $\mathbf{x}^{(i)}$, which means $y^{(i)} (\mathbf{w^T}\mathbf{x}^{(i)}+b) \ge 1, \forall i \in \{1,\dots,N\}$ Which leads us to the following quadratic optimization problem: $$\begin{align} \min_{\mathbf{w},b}\quad &\frac{\|\mathbf{w}\|}{2}, \\ s.t.\quad&y^{(i)} (\mathbf{w^T}\mathbf{x}^{(i)}+b) \ge 1 &\forall i \in \{1,\dots,N\} \end{align}$$ This is the hard-margin SVM, as this quadratic optimization problem admits a solution iff the data is linearly separable. One can relax the constraints by introducing so-called slack variables $\xi^{(i)}$. Note that each sample of the training set has its own slack variable. This gives us the following quadratic optimization problem: $$\begin{align} \min_{\mathbf{w},b}\quad &\frac{\|\mathbf{w}\|}{2}+ C \sum_{i=1}^{N} \xi^{(i)}, \\ s.t.\quad&y^{(i)} (\mathbf{w^T}\mathbf{x}^{(i)}+b) \ge 1 - \xi^{(i)},&\forall i \in \{1,\dots,N\} \\ \quad&\xi^{(i)}\ge0, &\forall i \in \{1,\dots,N\} \end{align}$$ This is the soft-margin SVM. $C$ is a hyperparameter called the penalty of the error term. (What is the influence of C in SVMs with linear kernel? and Which search range for determining SVM optimal parameters?). One can add even more flexibility by introducing a function $\phi$ that maps the original feature space to a higher dimensional feature space. This allows non-linear decision boundaries. 
The quadratic optimization problem becomes: $$\begin{align} \min_{\mathbf{w},b}\quad &\frac{\|\mathbf{w}\|}{2}+ C \sum_{i=1}^{N} \xi^{(i)}, \\ s.t.\quad&y^{(i)} (\mathbf{w^T}\phi \left(\mathbf{x}^{(i)}\right)+b) \ge 1 - \xi^{(i)},&\forall i \in \{1,\dots,N\} \\ \quad&\xi^{(i)}\ge0, &\forall i \in \{1,\dots,N\} \end{align}$$ Optimization The quadratic optimization problem can be transformed into another optimization problem named the Lagrangian dual problem (the previous problem is called the primal): $$\begin{align} \max_{\mathbf{\alpha}} \quad &\min_{\mathbf{w},b} \frac{\|\mathbf{w}\|}{2}+ C \sum_{i=1}^{N} \alpha^{(i)} \left(1-y^{(i)}\left(\mathbf{w^T}\phi \left(\mathbf{x}^{(i)}\right)+b\right)\right), \\ s.t. \quad&0 \leq \alpha^{(i)} \leq C, &\forall i \in \{1,\dots,N\} \end{align}$$ This optimization problem can be simplified (by setting some gradients to $0$) to: $$\begin{align} \max_{\mathbf{\alpha}} \quad & \sum_{i=1}^{N} \alpha^{(i)} - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} \left( y^{(i)}\alpha^{(i)}\phi\left(\mathbf{x}^{(i)}\right)^T \phi\left(\mathbf{x}^{(j)}\right) y^{(j)}\alpha^{(j)} \right), \\ s.t. \quad&0 \leq \alpha^{(i)} \leq C, &\forall i \in \{1,\dots,N\} \end{align}$$ $\mathbf{w}$ doesn't appear, as $\mathbf{w}=\sum_{i =1}^{N}\alpha^{(i)}y^{(i)}\phi\left(x^{(i)}\right)$ (as stated by the representer theorem). We therefore learn the $\alpha^{(i)}$ using the $(\mathbf{x}^{(i)}, y^{(i)})$ of the training set. (FYI: Why bother with the dual problem when fitting SVM? short answer: faster computation + allows one to use the kernel trick, though there exist some good methods to train SVM in the primal e.g. see {1}) Making a prediction Once the $\alpha^{(i)}$ are learned, one can predict the class of a new sample with the feature vector $\mathbf{x}^{\text {test}}$ as follows: \begin{align*} y^{\text {test}}&=\text {sign}\left(\mathbf{w^T}\phi\left(\mathbf{x}^{\text {test}}\right)+b\right) \\ &= \text {sign}\left(\sum_{i =1}^{N}\alpha^{(i)}y^{(i)}\phi\left(x^{(i)}\right)^T\phi\left(\mathbf{x}^{\text {test}}\right)+b \right) \end{align*} The summation $\sum_{i =1}^{N}$ could seem overwhelming, since it means one has to sum over all the training samples, but the vast majority of $\alpha^{(i)}$ are $0$ (see Why are the Lagrange multipliers sparse for SVMs?) so in practice it isn't an issue. (note that one can construct special cases where all $\alpha^{{(i)}} > 0$.) $\alpha^{{(i)}} > 0$ iff $x^{{(i)}}$ is a support vector. The illustration above has 3 support vectors. Kernel trick One can observe that the optimization problem uses the $\phi\left(\mathbf{x}^{(i)}\right)$ only in the inner product $\phi\left(\mathbf{x}^{(i)}\right)^T \phi\left(\mathbf{x}^{(j)}\right)$. The function that maps $\left(\mathbf{x}^{(i)},\mathbf{x}^{(j)}\right)$ to the inner product $\phi\left(\mathbf{x}^{(i)}\right)^T \phi\left(\mathbf{x}^{(j)}\right)$ is called a kernel, a.k.a. kernel function, often denoted by $k$. One can choose $k$ so that the inner product is efficient to compute. This allows one to use a potentially high-dimensional feature space at a low computational cost. That is called the kernel trick. For a kernel function to be valid, i.e. usable with the kernel trick, it should satisfy two key properties. There exist many kernel functions to choose from. As a side note, the kernel trick may be applied to other machine learning models, in which case they are referred to as kernelized. 
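Here is a minimal numerical sketch of the dual problem above for a linear kernel (assuming the quadprog package; the toy data, the value of $C$ and the tiny ridge added to keep the matrix positive definite are arbitrary choices, not part of the derivation):
library(quadprog)
set.seed(0)
C <- 1
X <- rbind(matrix(rnorm(40, mean =  1.5), ncol = 2),
           matrix(rnorm(40, mean = -1.5), ncol = 2))
y <- rep(c(1, -1), each = 20)
N <- nrow(X)
K <- X %*% t(X)                              # linear kernel k(x_i, x_j) = x_i . x_j
Q <- (y %o% y) * K                           # Q_ij = y_i y_j k(x_i, x_j)
# solve.QP minimises 1/2 a'Da - d'a  s.t.  A'a >= b0 (first meq rows are equalities)
sol <- solve.QP(Dmat = Q + 1e-8 * diag(N),   # tiny ridge: solve.QP needs D strictly positive definite
                dvec = rep(1, N),
                Amat = cbind(y, diag(N), -diag(N)),
                bvec = c(0, rep(0, N), rep(-C, N)),
                meq  = 1)
alpha <- sol$solution
w <- colSums(alpha * y * X)                  # w = sum_i alpha_i y_i x_i
margin_sv <- which(alpha > 1e-5 & alpha < C - 1e-5)
b <- mean(y[margin_sv] - X[margin_sv, , drop = FALSE] %*% w)
sum(alpha > 1e-5)                            # only a handful of the alphas are non-zero
table(sign(X %*% w + b) == y)                # training predictions
Only a few $\alpha^{(i)}$ come out non-zero, and those index the support vectors; $\mathbf{w}$ and $b$ are then recovered exactly as in the prediction formula above.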
Going further Some interesting QAs on SVMs: Best way to perform multiclass SVM Support vector machines and regression Understanding the different formulations for SVM What's the difference between $\ell_1$-SVM, $\ell_2$-SVM and LS-SVM loss functions? To deal with an unbalanced dataset: Best way to handle unbalanced multiclass dataset with SVM A priori selection of SVM class weights How does one interpret SVM feature weights? Interpretating the C value in a linear SVM Generalization bounds on SVM General formula for the VC Dimension of a SVM What does the "machine" in "support vector machine" and "restricted Boltzmann machine" mean? How are SVMs = Template Matching? Single layer NeuralNetwork with ReLU activation equal to SVM? Comparing SVM and logistic regression Other links: Least squares support vector machine References: {1} Chapelle, Olivier. "Training a support vector machine in the primal." Neural computation 19, no. 5 (2007): 1155-1178. https://scholar.google.com/scholar?cluster=469291847682573606&hl=en&as_sdt=0,22 ; http://www.chapelle.cc/olivier/pub/neco07.pdf
1,220
How does a Support Vector Machine (SVM) work?
The technique is predicated upon drawing a decision boundary line leaving as ample a margin to the first positive and negative examples as possible: As in the illustration above, if we select an orthogonal vector such that $ \lVert w \rVert=1$ we can establish a decision criterion for any unknown example $\mathbf u$ to be catalogued as positive of the form: $$ \color{blue}{\mathbf w} \cdot {\mathbf u} \geq C$$ corresponding to a value that would place the projection beyond the decision line in the middle of the street. Notice that $\color{blue}{\mathbf w} \cdot {\mathbf u} = {\mathbf u} \cdot \color{blue}{\mathbf w}$. An equivalent condition for a positive sample would be: $$\color{blue}{\mathbf w}\cdot \mathbf u + b \geq 0 \tag 1$$ with $C = - b.$ We need $b$ and $\color{blue}{\mathbf w}$ to have a decision rule, and to get there we need constraints. The first constraint we are going to impose is that for any positive sample $\mathbf x_+$, $\color{blue}{\mathbf w}\cdot \mathbf x_+ + b \geq 1$; and for negative samples, $\color{blue}{\mathbf w}\cdot \mathbf x_- + b \leq -1$. In the division boundary or hyperplane (median) the value would be $0$, while the values at the gutters will be $1$ and $-1$: The vector $\bf w$ is the weights vector, whereas $b$ is the bias. To bring these two inequalities together, we can introduce the variable $y_i$ so that $y_i=+1$ for positive examples, and $y_i=-1$ if the examples are negative, and conclude $$ y_i (x_i\cdot \color{blue}{\mathbf w} + b) -1\geq 0.$$ So we establish that this has to be greater than or equal to zero, but if the example is on the hyperplanes (the "gutters") that maximize the margin of separation between the decision hyperplane and the tips of the support vectors (in this case, lines), then: $$ y_i \,(x_i\cdot \color{blue}{\mathbf w} + b) -1 = 0\tag 2$$ Notice that this is equivalent to requiring that $y_i \,(x_i\cdot \color{blue}{\mathbf w} + b) = 1.$ Second constraint: the distance of the decision hyperplane to the tips of the support vectors will be maximized. In other words the margin of separation ("street") will be maximized: Assuming a unit vector perpendicular to the decision boundary, $\mathbf w$, the dot product with the difference between two "bordering" plus and minus examples is the width of "the street": $$ \text{width}= (x_+ \,{\bf -}\, x_-) \cdot \frac{w}{\lVert w \rVert}$$ In the equation above $x_+$ and $x_-$ are in the gutter (on hyperplanes maximizing the separation). Therefore, for the positive example: $ ({\mathbf x_i}\cdot \color{blue}{\mathbf w} + b) -1 = 0$, or $ {\mathbf x_+}\cdot \color{blue}{\mathbf w} = 1 - b$; and for the negative example: $ {\mathbf x_-}\cdot \color{blue}{\mathbf w} = -1 - b$. So, reformulating the width of the street: $$\begin{align}\text{width}&=(x_+ \,{\bf -}\, x_-) \cdot \frac{w}{\lVert w \rVert}\\[1.5ex] &= \frac{x_+\cdot w \,{\bf -}\, x_-\cdot w}{\lVert w \rVert}\\[1.5ex] &=\frac{1-b-(-1-b)}{\lVert w \rVert}\\[1.5ex] &= \frac{2}{\lVert w \rVert}\tag 3 \end{align}$$ So now we just have to maximize the width of the street - i.e. maximize $ \frac{2}{\lVert w \rVert},$ minimize $\lVert w \rVert$, or minimize: $$\frac{1}{2}\;\lVert w \rVert^2 \tag 4$$ which is mathematically convenient. 
So we want to: Minimise $\lVert w\rVert^2$ with the constraint: $y_i(\mathbf w \cdot \mathbf x_i + b )-1=0$ Since we want to minimise this expression based on some constraints, we need Lagrange multipliers (going back to equations 2 and 4): $$ \mathscr{L} = \frac{1}{2} \lVert \mathbf w \rVert^2 - \sum \lambda_i \Big[y_i \, \left( \mathbf x_i\cdot \color{blue}{\mathbf w} + b \right) -1\Big]\tag 5$$ Differentiating, $$ \frac{\partial \mathscr{L}}{\partial \color{blue}{\mathbf w} }= \color{blue}{\mathbf w} - \sum \lambda_i \; y_i \; \mathbf x_i = 0.$$ Therefore, $$\color{blue}{\mathbf w} = \sum \lambda_i \; y_i \; \mathbf x_i\tag 6$$ And differentiating with respect to $b:$ $$ \frac{\partial \mathscr{L}}{\partial b}=-\sum \lambda_i y_i = 0,$$ which means that the sum of the products of the multipliers and labels is zero: $$ \sum \lambda_i \, y_i = 0\tag 7$$ Plugging Eq (6) back into Eq (5), $$ \mathscr{L} = \frac{1}{2} \color{purple}{\left(\sum \lambda_i y_i \mathbf x_i \right) \cdot \left(\sum \lambda_j y_j \mathbf x_j \right)}- \color{green}{\left(\sum \lambda_i y_i \mathbf x_i\right)\cdot \left(\sum \lambda_j y_j \mathbf x_j \right)} - \sum \lambda_i y_i b +\sum \lambda_i$$ The penultimate term is zero as per Eq (7). Therefore, $$ \mathscr{L} = \sum \lambda_i - \frac{1}{2}\displaystyle \sum_i \sum_j \lambda_i \lambda_j\,\, y_i y_j \,\, \mathbf x_i \cdot \mathbf x_j\tag 8$$ Eq (8) being the final Lagrangian. Hence, the optimization depends on the dot product of pairs of examples. Going back to the "decision rule" in Eq (1) above, and using Eq (6): $$ \sum\; \lambda_i \; y_i \; \mathbf x_i\cdot \mathbf u + b \geq 0\tag 9$$ will be the final decision rule for a new vector $\mathbf u.$
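As a quick numerical check of Eq (3) (the numbers here are made up): if the gutters pass through $x_+=(3,3)$ and $x_-=(1,1)$ with $\mathbf w$ proportional to $(1,1)$, solving $\mathbf w\cdot x_+ + b = 1$ and $\mathbf w\cdot x_- + b = -1$ gives $\mathbf w = (1/2, 1/2)$ and $b = -2$, and both expressions for the width of the street agree:
w <- c(0.5, 0.5); b <- -2
x_plus <- c(3, 3); x_minus <- c(1, 1)
c(sum(w * x_plus) + b, sum(w * x_minus) + b)   # 1 and -1: both points lie on the gutters
sum((x_plus - x_minus) * w) / sqrt(sum(w^2))   # width computed as in Eq (3): 2.828...
2 / sqrt(sum(w^2))                             # width computed as 2 / ||w||: also 2.828...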
1,221
How does a Support Vector Machine (SVM) work?
I'm going to focus on the similarities and differences between the SVM and other classifiers: From a perceptron: SVM uses hinge loss and L2 regularization, the perceptron uses the perceptron loss and could use early stopping (among other techniques) for regularization; there is really no regularization term in the perceptron. As it doesn't have a regularization term, the perceptron is bound to be overtrained, so the generalization capabilities can be arbitrarily bad. The optimization is done using stochastic gradient descent and is therefore very fast. On the positive side, this paper shows that by doing early stopping with a slightly modified loss function the performance could be on par with an SVM. From logistic regression: logistic regression uses the logistic loss and could use L1 or L2 regularization. You can think of logistic regression as the discriminative brother of the generative naive Bayes. From LDA: LDA can also be seen as a generative algorithm; it assumes that the probability density functions $p(x|y=0)$ and $p(x|y=1)$ are normally distributed. This is ideal when the data is in fact normally distributed. It has, however, the downside that "training" requires the inversion of a matrix that can be large (when you have many features). If you drop the homoscedasticity assumption, LDA becomes QDA, which is Bayes optimal for normally distributed data. Meaning that if the assumptions are satisfied you really cannot do better than this. At runtime (test time), once the model has been trained, the complexity of all these methods is the same: it is just a dot product between the hyperplane the training procedure found and the datapoint.
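The loss functions mentioned above are easy to compare side by side; the short base-R sketch below (an illustration of my own choosing) plots hinge, logistic and perceptron loss as functions of the margin $y \cdot f(x)$. The hinge loss is exactly zero once a point is comfortably on the correct side, which is why points far from the boundary stop influencing the SVM.
m <- seq(-3, 3, by = 0.01)               # margin y * f(x)
hinge      <- pmax(0, 1 - m)             # SVM
logistic   <- log(1 + exp(-m))           # logistic regression
perceptron <- pmax(0, -m)                # perceptron
matplot(m, cbind(hinge, logistic, perceptron), type = "l", lty = 1, lwd = 2,
        xlab = "margin y * f(x)", ylab = "loss")
legend("topright", c("hinge", "logistic", "perceptron"), col = 1:3, lty = 1, lwd = 2)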
1,222
How does a Support Vector Machine (SVM) work?
Some comments on Duality and KKT conditions Primal problem Picking up from @Antoni's post in between equations $(4)$ and $(5)$, recall that our original, or primal, optimization problem is of the form: \begin{aligned} \min_{w, b} f(w,b) & = \min_{w, b} \ \frac{1}{2} ||w||^2 \\ s.t. \ \ g_i(w,b) &= - y^{(i)} (w^T x^{(i)} + b) + 1 \leq 0 \end{aligned} Lagrange method The method of Lagrange multipliers allows us to turn a constrained optimization problem into an unconstrained one of the form: $$\mathcal{L}(w, b, \alpha) = \frac{1}{2} ||w||^2 - \sum_i^m \alpha_i [y^{(i)} (w^T x^{(i)} + b) - 1]$$ Where $\mathcal{L}(w, b, \alpha)$ is called the Lagrangian and $\alpha_i$ are called the Lagrangian multipliers. Our primal optimization problem with the Lagrangian becomes the following: (note that the use of $min$, $max$ is not the most rigorous as we should also be using $\inf$ and $\sup$ here...) $$ \min_{w,b} \left( \max_\alpha \mathcal{L}(w, b, \alpha)\right)$$ Dual problem What @Antoni and Prof. Patrick Winston have done in their derivation is assume that the optimization function and the constraints meet some technical conditions such that we can do the following: $$ \min_{w,b} \left( \max_\alpha \mathcal{L}(w, b, \alpha)\right) = \max_\alpha \left( \min_{w,b} \mathcal{L}(w, b, \alpha)\right)$$ This allows us to take the partial derivatives of $\mathcal{L}(w, b, \alpha)$ with respect to $w$ and $b$, equate to zero and then plug the results back into the original equation of the Lagrangian, hence generating an equivalent dual optimization problem of the form \begin{aligned} &\max_{\alpha} \min_{w,b} \mathcal{L}(w,b,\alpha) \\ & \max_{\alpha} \sum_i^m \alpha_i - \frac{1}{2} \sum_{i,j}^m y^{(i)}y^{(j)} \alpha_i \alpha_j \langle x^{(i)}, x^{(j)}\rangle \\ & s.t. \ \alpha_i \geq 0 \\ & s.t. \ \sum_i^m \alpha_i y^{(i)} = 0 \end{aligned} Duality and KKT Without going into excessive mathematical technicalities, these conditions are a combination of the Duality and the Karush-Kuhn-Tucker (KKT) conditions and allow us to solve the dual problem instead of the primal one, while ensuring that the optimal solution is the same. In our case the conditions are the following: The primal objective and inequality constraint functions must be convex The equality constraint function must be affine The constraints must be strictly feasible Then there exists $w^*, \alpha^*$ which are solutions to the primal and dual problems. Moreover, the parameters $w^*, \alpha^*$ satisfy the KKT conditions below: \begin{aligned} &\frac{\partial}{\partial w_i} \mathcal{L}(w^*, \alpha^*, \beta^*) = 0 &(A) \\ &\frac{\partial}{\partial \beta_i} \mathcal{L}(w^*, \alpha^*, \beta^*) = 0 &(B) \\ &\alpha_i^* g_i(w^*) = 0 &(C) \\ &g_i(w^*) \leq 0 &(D) \\ &\alpha_i^* \geq 0 &(E) \end{aligned} Moreover, if some $w^*, \alpha^*$ satisfy the KKT conditions then they are also solutions to the primal and dual problem. Equation $(C)$ above is of particular importance and is called the dual complementarity condition. It implies that if $\alpha_i^* > 0$ then $g_i(w^*) = 0$ which means that the constraint $g_i(w) \leq 0$ is active, i.e. it holds with equality rather than inequality. This is the explanation behind equation $(2)$ in Antoni's derivation where the inequality constraint is turned into an equality constraint. An intuitive but informal diagram Sources Andrew Ng 1 and 2 MIT
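To see these conditions at work on a problem small enough to solve by hand (a toy example of my own, with two training points $x_1=(1,1)$, $y_1=+1$ and $x_2=(-1,-1)$, $y_2=-1$): the hard-margin solution is $w^*=(1/2,1/2)$, $b^*=0$ with $\alpha_1^*=\alpha_2^*=1/4$, and conditions (A)-(E) can be verified directly in R:
x1 <- c(1, 1);   y1 <- +1
x2 <- c(-1, -1); y2 <- -1
alpha <- c(0.25, 0.25)
b <- 0
w <- alpha[1] * y1 * x1 + alpha[2] * y2 * x2   # stationarity (A) gives w = sum_i alpha_i y_i x_i
w                                              # (0.5, 0.5)
alpha[1] * y1 + alpha[2] * y2                  # (B): sum_i alpha_i y_i = 0
g <- c(-y1 * (sum(w * x1) + b) + 1,            # g_i(w*) = -y_i (w' x_i + b) + 1
       -y2 * (sum(w * x2) + b) + 1)
g                                              # (D): both <= 0, here exactly 0 (both constraints active)
alpha * g                                      # (C) dual complementarity: alpha_i * g_i(w*) = 0
alpha >= 0                                     # (E)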
1,223
Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian?
The bivariate normal distribution is the exception, not the rule! It is important to recognize that "almost all" joint distributions with normal marginals are not the bivariate normal distribution. That is, the common viewpoint that joint distributions with normal marginals that are not the bivariate normal are somehow "pathological" is a bit misguided. Certainly, the multivariate normal is extremely important due to its stability under linear transformations, and so receives the bulk of attention in applications. Examples It is useful to start with some examples. The figure below contains heatmaps of six bivariate distributions, all of which have standard normal marginals. The left and middle ones in the top row are bivariate normals, the remaining ones are not (as should be apparent). They're described further below. The bare bones of copulas Properties of dependence are often efficiently analyzed using copulas. A bivariate copula is just a fancy name for a probability distribution on the unit square $[0,1]^2$ with uniform marginals. Suppose $C(u,v)$ is a bivariate copula. Then, immediately from the above, we know that $C(u,v) \geq 0$, $C(u,1) = u$ and $C(1,v) = v$, for example. We can construct bivariate random variables on the Euclidean plane with prespecified marginals by a simple transformation of a bivariate copula. Let $F_1$ and $F_2$ be prescribed marginal distributions for a pair of random variables $(X,Y)$. Then, if $C(u,v)$ is a bivariate copula, $$ F(x,y) = C(F_1(x), F_2(y)) $$ is a bivariate distribution function with marginals $F_1$ and $F_2$. To see this last fact, just note that $$ \renewcommand{\Pr}{\mathbb P} \Pr(X \leq x) = \Pr(X \leq x, Y < \infty) = C(F_1(x), F_2(\infty)) = C(F_1(x),1) = F_1(x) \>. $$ The same argument works for $F_2$. For continuous $F_1$ and $F_2$, Sklar's theorem asserts a converse implying uniqueness. That is, given a bivariate distribution $F(x,y)$ with continuous marginals $F_1$, $F_2$, the corresponding copula is unique (on the appropriate range space). The bivariate normal is exceptional Sklar's theorem tells us (essentially) that there is only one copula that produces the bivariate normal distribution. This is, aptly named, the Gaussian copula which has density on $[0,1]^2$ $$ c_\rho(u,v) := \frac{\partial^2}{\partial u \, \partial v} C_\rho(u,v) = \frac{\varphi_{2,\rho}(\Phi^{-1}(u),\Phi^{-1}(v))}{\varphi(\Phi^{-1}(u)) \varphi(\Phi^{-1}(v))} \>, $$ where the numerator is the bivariate normal density with correlation $\rho$ evaluated at $\Phi^{-1}(u)$ and $\Phi^{-1}(v)$. But, there are lots of other copulas and all of them will give a bivariate distribution with normal marginals which is not the bivariate normal by using the transformation described in the previous section. Some details on the examples Note that if $C(u,v)$ is an arbitrary copula with density $c(u,v)$, the corresponding bivariate density with standard normal marginals under the transformation $F(x,y) = C(\Phi(x),\Phi(y))$ is $$ f(x,y) = \varphi(x) \varphi(y) c(\Phi(x), \Phi(y)) \> . $$ Note that by applying the Gaussian copula in the above equation, we recover the bivariate normal density. But, for any other choice of $c(u,v)$, we will not. The examples in the figure were constructed as follows (going across each row, one column at a time): Bivariate normal with independent components. Bivariate normal with $\rho = -0.4$. The example given in this answer of Dilip Sarwate. 
It can easily be seen to be induced by the copula $C(u,v)$ with density $c(u,v) = 2 (\mathbf 1_{(0 \leq u \leq 1/2, 0 \leq v \leq 1/2)} + \mathbf 1_{(1/2 < u \leq 1, 1/2 < v \leq 1)})$. Generated from the Frank copula with parameter $\theta = 2$. Generated from the Clayton copula with parameter $\theta = 1$. Generated from an asymmetric modification of the Clayton copula with parameter $\theta = 3$.
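To make the construction concrete, a small base-R sketch (my own illustration) samples from the third example above, the copula with density $c(u,v) = 2 (\mathbf 1_{(0 \leq u \leq 1/2,\, 0 \leq v \leq 1/2)} + \mathbf 1_{(1/2 < u \leq 1,\, 1/2 < v \leq 1)})$, and then maps the uniform marginals through $\Phi^{-1}$; the resulting pair has standard normal marginals but is visibly not bivariate normal:
set.seed(1)
n <- 1e5
B <- rbinom(n, 1, 0.5)                  # choose the lower-left or upper-right quarter-square
U <- runif(n, 0, 0.5) + 0.5 * B         # (U, V) then has the copula density
V <- runif(n, 0, 0.5) + 0.5 * B         #   c(u, v) = 2 on [0,1/2]^2 and (1/2,1]^2
x <- qnorm(U)                           # transform the uniform marginals to N(0,1)
y <- qnorm(V)
qqnorm(x); qqnorm(y)                    # each marginal looks standard normal
plot(x, y, pch = ".", col = rgb(0, 0, 0, 0.2))   # but X and Y always share the same sign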
1,224
Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian?
It is true that each element of a multivariate normal vector is itself normally distributed, and you can deduce their means and variances. However, it is not true that any two Gaussian random variables are jointly normally distributed. Here is an example: Edit: In response to the consensus that a random variable that is a point mass can be thought of as a normally distributed variable with $\sigma^2=0$, I'm changing my example. Let $X \sim N(0,1)$ and let $Y = X \cdot (2B-1)$ where $B$ is a ${\rm Bernoulli}(1/2)$ random variable. That is, $Y = \pm X$ each with probability $1/2$. We first show that $Y$ has a standard normal distribution. By the law of total probability, $$ P(Y \leq y) = \frac{1}{2} \Big( P(Y \leq y | B = 1) + P(Y \leq y | B = 0) \Big) $$ Next, $$ P(Y \leq y | B = 0) = P(-X \leq y) = 1-P(X \leq -y) = 1-\Phi(-y) = \Phi(y) $$ where $\Phi$ is the standard normal CDF. Similarly, $$ P(Y \leq y | B = 1) = P(X \leq y) = \Phi(y) $$ Therefore, $$ P(Y \leq y) = \frac{1}{2} \Big( \Phi(y) + \Phi(y) \Big) = \Phi(y) $$ so, the CDF of $Y$ is $\Phi(\cdot)$, thus $Y \sim N(0,1)$. Now we show that $X,Y$ are not jointly normally distributed. As @cardinal points out, one characterization of the multivariate normal is that every linear combination of its elements is normally distributed. $X,Y$ do not have this property, since $$ Y+X = \begin{cases} 2X &\mbox{if } B = 1 \\ 0 & \mbox{if } B = 0. \end{cases} $$ Therefore $Y+X$ is a $50/50$ mixture of a $N(0,4)$ random variable and a point mass at 0, so it cannot be normally distributed.
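A quick base-R simulation (my own check) shows both halves of the argument: $Y$ passes a normal Q-Q check, while $X+Y$ puts roughly half its mass exactly at zero and so cannot be normal:
set.seed(1)
n <- 1e5
x <- rnorm(n)
b <- rbinom(n, 1, 0.5)
y <- x * (2 * b - 1)        # Y = +X or -X, each with probability 1/2
qqnorm(y)                   # Y looks exactly standard normal
mean(x + y == 0)            # about 0.5: half of X + Y sits exactly at 0
hist(x + y, breaks = 100)   # the other half looks N(0, 4); the spike at 0 rules out normality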
1,225
Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian?
The following post contains an outline of a proof, just to give the main ideas and get you started. Let $z = (Z_1, Z_2)$ be two independent Gaussian random variables and let $x = (X_1, X_2)$ be $$ x = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} \alpha_{11} Z_1 + \alpha_{12} Z_2\\ \alpha_{21} Z_1 + \alpha_{22} Z_2 \end{pmatrix} = \begin{pmatrix} \alpha_{11} & \alpha_{12}\\ \alpha_{21} & \alpha_{22} \end{pmatrix} \begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} = A z. $$ Each $X_i \sim N(\mu_i, \sigma_i^2)$, but as they are both linear combinations of the same independent r.vs, they are in general dependent. Definition A pair of r.vs $x = (X_1, X_2)$ are said to be bivariate normally distributed iff they can be written as a linear combination $x = Az$ of independent normal r.vs $z = (Z_1, Z_2)$. Lemma If $ x = (X_1, X_2)$ is a bivariate Gaussian, then any other linear combination of them is again a normal random variable. Proof. Trivial, skipped to not offend anyone. Property If $X_1, X_2$ are jointly Gaussian and uncorrelated, then they are independent, and vice-versa. Distribution of $X_1 | X_2$ Assume $X_1, X_2$ are the same Gaussian r.vs as before but let's suppose they have positive variance and zero mean for simplicity. If $\mathbf S$ is the subspace spanned by $X_2$, let $ X_1^{\mathbf S} = \frac{\rho \sigma_{X_1}}{\sigma_{X_2}} X_2 $ and $ X_1^{\mathbf S^\perp} = X_1 - X_1^{\mathbf S} $. $X_1$ and $X_2$ are linear combinations of $z$, so $ X_2, X_1^{\mathbf S^\perp}$ are too. They are jointly Gaussian, uncorrelated (prove it) and independent. The decomposition $$ X_1 = X_1^{\mathbf S} + X_1^{\mathbf S^\perp} $$ holds with $\mathbf{E}[X_1 | X_2] = \frac{\rho \sigma_{X_1}}{\sigma_{X_2}} X_2 = X_1^{\mathbf S}$ $$ \begin{split} \mathbf{V}[X_1 | X_2] &= \mathbf{V}[X_1^{\mathbf S^\perp}] \\ &= \mathbf{E} \left[ X_1 - \frac{\rho \sigma_{X_1}}{\sigma_{X_2}} X_2 \right]^2 \\ &= (1 - \rho^2) \sigma^2_{X_1}. \end{split} $$ Then $$ X_1 | X_2 \sim N\left( X_1^{\mathbf S}, (1 - \rho^2) \sigma^2_{X_1} \right).$$ Two univariate Gaussian random variables $X, Y$ are jointly Gaussian if the conditionals $X | Y$ and $Y|X$ are Gaussian too.
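A short base-R simulation (my own check, with $\rho = 0.6$ chosen arbitrarily and $\sigma_{X_1}=\sigma_{X_2}=1$) confirms the decomposition: the residual $X_1 - \frac{\rho \sigma_{X_1}}{\sigma_{X_2}} X_2$ is uncorrelated with $X_2$ and has variance $(1-\rho^2)\sigma^2_{X_1}$:
set.seed(1)
n <- 1e6
rho <- 0.6
z1 <- rnorm(n); z2 <- rnorm(n)
x2 <- z1                                  # X1 and X2 are both linear combinations of (Z1, Z2)
x1 <- rho * z1 + sqrt(1 - rho^2) * z2     # so (X1, X2) is bivariate normal with cor(X1, X2) = rho
resid <- x1 - rho * x2                    # X1 minus its projection E[X1 | X2]
cor(resid, x2)                            # ~ 0: projection and residual are uncorrelated
var(resid)                                # ~ 1 - rho^2 = 0.64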
1,226
Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian?
I thought it might be worth pointing out a couple of nice examples; one I've mentioned in a couple of older answers here on Cross Validated (e.g. this one) and one rather pretty one which occurred to me the other day.

Here we have two variables, $Y$ and $Z$, that have (uncorrelated) normal distributions, where $Y$ is functionally (though nonlinearly) related to $Z$. There are any number of possible examples of this type:

- Start with $Z\sim N(0,1)$.
- Let $U = F(Z^{2})$ where $F$ is the cdf of a chi-squared variate with $1$ d.f. Note that $U$ is then standard uniform.
- Let $Y = \Phi^{-1}(U)$.
- Then $(Y,Z)$ are marginally normal (and in this case uncorrelated) but are not jointly normal.

You can generate samples from the joint distribution of Y and Z as follows (in R):

    y <- qnorm(pchisq((z=rnorm(100000L))^2,1)) # if plots are too slow, try 10000L
    # let's take a look at it:
    par(mfrow=c(2,2))
    hist(z,n=50)
    hist(y,n=50)
    qqnorm(y,pch=16,cex=.2,col=rgb(.2,.2,.2,.2))
    plot(z,y,pch=16,cex=.2,col=rgb(.2,.2,.2,.2))
    par(mfrow=c(1,1))

In particular, the joint distribution lies on a continuous curve with a cusp in it.

Here's another, which gives a rather nice bivariate density with heart-shaped contours. It relies on the fact that if $Z_1$, $Z_2$, $Z_3$, and $Z_4$ are iid standard normal, then $L=Z_1Z_2+Z_3Z_4$ is Laplace (double exponential). There's a variety of ways to convert $L$ to a normal, but one is to take $Y=\Phi^{-1}(1-\exp(-|L|))$. Then $Y$ is standard normal but (by symmetry) the relationship between $Z_i$ and $Y$ (for any $i$ in $\{1,2,3,4\}$) is the same; $Y$ and $Z_i$ are not jointly normal but are marginally normal. See the display below (the R code for this may be a little slow, but I think it's worth the wait; if you want a faster version, cut the sample size down):

    n=100000L
    z1=rnorm(n); z2=rnorm(n); z3=rnorm(n); z4=rnorm(n)
    L=z1*z2+z3*z4
    y = qnorm(pexp(abs(L)))
    par(mfrow=c(2,2))
    hist(z1,n=100)
    hist(y,n=100)
    qqnorm(y)
    plot(z1,y,cex=.6,col=rgb(.1,.2,.3,.2))
    points(z1,y,cex=.5,col=rgb(.35,.3,.0,.1)) # this helps visualize
    points(z1,y,cex=.4,col=rgb(.4,.1,.1,.05)) # the contours
    par(mfrow=c(1,1))
1,227
Bias and variance in leave-one-out vs K-fold cross validation
why would models learned with leave-one-out CV have higher variance?

[TL;DR] A summary of recent posts and debates (July 2018)

This topic has been widely discussed both on this site and in the scientific literature, with conflicting views, intuitions and conclusions. Back in 2013 when this question was first asked, the dominant view was that LOOCV leads to larger variance of the expected generalization error of a training algorithm producing models out of samples of size $n(K-1)/K$. This view, however, appears to be an incorrect generalization of a special case and I would argue that the correct answer is: "it depends..."

Paraphrasing Yves Grandvalet, the author of a 2004 paper on the topic, I would summarize the intuitive argument as follows:

- If cross-validation were averaging independent estimates: then with leave-one-out CV one should see relatively lower variance between models, since we are only shifting one data point across folds and therefore the training sets between folds overlap substantially.
- This is not true when training sets are highly correlated: correlation may increase with K, and this increase is responsible for the overall increase of variance in the second scenario. Intuitively, in that situation, leave-one-out CV may be blind to instabilities that exist but are not triggered by changing a single point in the training data, which makes the estimate highly sensitive to the realization of the training set.

Experimental simulations from myself and others on this site, as well as those of researchers in the papers linked below, will show you that there is no universal truth on the topic. Most experiments have monotonically decreasing or constant variance with $K$, but some special cases show increasing variance with $K$. The rest of this answer proposes a simulation on a toy example and an informal literature review. [Update] You can find here an alternative simulation for an unstable model in the presence of outliers.

Simulations from a toy example showing decreasing / constant variance

Consider the following toy example, where we are fitting a degree-4 polynomial to a noisy sine curve. We expect this model to fare poorly for small datasets due to overfitting, as shown by the learning curve. Note that we plot 1 - MSE here to reproduce the illustration from ESLII page 243.

Methodology

You can find the code for this simulation here; a minimal sketch of the loop is given after this answer. The approach was the following:

- Generate 10,000 points from the distribution $\sin(x) + \epsilon$, where the true variance of $\epsilon$ is known
- Iterate $i$ times (e.g. 100 or 200 times). At each iteration, change the dataset by resampling $N$ points from the original distribution
- For each data set $i$: perform K-fold cross-validation for one value of $K$ and store the average Mean Square Error (MSE) across the K folds
- Once the loop over $i$ is complete, calculate the mean and standard deviation of the MSE across the $i$ datasets for the same value of $K$
- Repeat the above steps for all $K$ in the range $\{5, ..., N\}$, all the way to Leave One Out CV (LOOCV)

[Figures: impact of $K$ on the bias and variance of the MSE across the $i$ datasets, with K-folds for 200 data points on the left-hand side and K-folds for 40 data points on the right-hand side; and the standard deviation of the MSE (across datasets $i$) vs K-folds.]

From this simulation, it seems that:

- For a small number $N = 40$ of data points, increasing $K$ until $K = 10$ or so significantly improves both the bias and the variance. For larger $K$ there is no effect on either bias or variance. The intuition is that for too small an effective training size, the polynomial model is very unstable, especially for $K \leq 5$.
- For larger $N = 200$, increasing $K$ has no particular impact on either the bias or the variance.

An informal literature review

The following three papers investigate the bias and variance of cross-validation: Ron Kohavi (1995); Yves Grandvalet and Yoshua Bengio (2004); Zhang and Yang (2015).

Kohavi 1995

This paper is often referred to as the source for the argument that LOOCV has higher variance. In section 1: "For example, leave-one-out is almost unbiased, but it has high variance, leading to unreliable estimates (Efron 1983)". This statement is a source of much confusion, because it seems to be from Efron in 1983, not Kohavi. Both Kohavi's theoretical arguments and experimental results go against this statement:

Corollary 2 (Variance in CV): Given a dataset and an inducer. If the inducer is stable under the perturbations caused by deleting the test instances for the folds in k-fold CV for various values of $k$, then the variance of the estimate will be the same.

Experiment: In his experiment, Kohavi compares two algorithms, a C4.5 decision tree and a Naive Bayes classifier, across multiple datasets from the UC Irvine repository. His results are shown in a figure with accuracy vs folds (i.e. bias) on the left-hand side and standard deviation vs folds on the right-hand side. In fact, only the decision tree on three data sets clearly has higher variance for increasing K. The other results show decreasing or constant variance. Finally, although the conclusion could be worded more strongly, there is no argument for LOO having higher variance, quite the opposite. From section 6, Summary: "k-fold cross validation with moderate k values (10-20) reduces the variance... As k decreases (2-5) and the samples get smaller, there is variance due to instability of the training sets themselves."

Zhang and Yang

The authors take a strong view on this topic and clearly state in Section 7.1: "In fact, in least squares linear regression, Burman (1989) shows that among the k-fold CVs, in estimating the prediction error, LOO (i.e., n-fold CV) has the smallest asymptotic bias and variance. ... Then a theoretical calculation (Lu, 2007) shows that LOO has the smallest bias and variance at the same time among all delete-n CVs with all possible $n_v$ deletions considered."

Experimental results: Similarly, Zhang's experiments point in the direction of decreasing variance with K, as shown for the true model and the wrong model in Figure 3 and Figure 5. The only experiment for which variance increases with $K$ is for the Lasso and SCAD models. This is explained as follows on page 31: "However, if model selection is involved, the performance of LOO worsens in variability as the model selection uncertainty gets higher due to large model space, small penalty coefficients and/or the use of data-driven penalty coefficients."
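The author links to the full simulation code; as a rough, self-contained illustration of the loop described in the methodology above (my own sketch under stated assumptions, not the author's code: the sample size, noise level, seed and set of $K$ values are arbitrary choices), it could look like this in R:

    ## Toy reproduction: SD of the K-fold CV estimate of MSE across resampled datasets
    set.seed(42)
    N <- 40; n_rep <- 100; Ks <- c(5, 10, 20, N)           # K = N is LOOCV

    cv_mse <- function(x, y, K) {
      fold <- sample(rep(1:K, length.out = length(x)))     # balanced random folds
      errs <- sapply(1:K, function(k) {
        fit <- lm(y ~ poly(x, 4), subset = (fold != k))    # degree-4 polynomial, as in the text
        mean((y[fold == k] - predict(fit, newdata = data.frame(x = x[fold == k])))^2)
      })
      mean(errs)                                           # average MSE over the K folds
    }

    res <- sapply(Ks, function(K) {
      replicate(n_rep, {
        x <- runif(N, 0, 2 * pi)
        y <- sin(x) + rnorm(N, sd = 0.5)                   # sin(x) + eps with known noise sd
        cv_mse(x, y, K)
      })
    })
    colnames(res) <- paste0("K=", Ks)
    rbind(mean_MSE = colMeans(res), sd_MSE = apply(res, 2, sd))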
1,228
Bias and variance in leave-one-out vs K-fold cross validation
In $k$-fold cross-validation we partition a dataset $S$ into $k$ equally sized non-overlapping subsets $S_1, \dots, S_k$. For each fold $S_i$, a model is trained on $S \setminus S_i$, which is then evaluated on $S_i$. The cross-validation estimator of, for example, the prediction error is defined as the average of the prediction errors obtained on each fold.

While there is no overlap between the test sets on which the models are evaluated, there is overlap between the training sets for all $k>2$. The overlap is largest for leave-one-out cross-validation. This means that the learned models are correlated, i.e. dependent, and the variance of the sum of correlated variables increases with the amount of covariance (see wikipedia): \begin{equation} \operatorname{Var}\left(\sum_{i=1}^NX_i\right)=\sum_{i=1}^N \sum_{j=1}^N \operatorname{Cov}\left(X_i,X_j\right) \end{equation} Therefore, leave-one-out cross-validation has large variance in comparison to CV with smaller $k$. However, note that while two-fold cross-validation doesn't have the problem of overlapping training sets, it often also has large variance because the training sets are only half the size of the original sample. A good compromise is ten-fold cross-validation.

Some interesting papers that touch upon this subject (out of many more):
- A study of cross-validation and bootstrap for accuracy estimation and model selection, by Ron Kohavi
- No unbiased estimator of the variance of k-fold cross-validation, by Yoshua Bengio and Yves Grandvalet
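To see how the covariance term drives up the variance of the averaged estimate, here is a small R illustration (my own addition, not part of the answer): the per-fold estimates are mimicked by equicorrelated unit-variance normals, and the variance of their average is compared with the closed form $(1+(k-1)\rho)/k$.

    ## Variance of the mean of k correlated estimates, as a function of their correlation rho
    library(MASS)   # for mvrnorm

    var_of_mean <- function(k, rho, n_sim = 20000) {
      Sigma <- matrix(rho, k, k); diag(Sigma) <- 1             # equicorrelation matrix
      est   <- MASS::mvrnorm(n_sim, mu = rep(0, k), Sigma = Sigma)
      var(rowMeans(est))                                       # empirical Var of the average
    }

    k <- 10
    sapply(c(0, 0.3, 0.6, 0.9), function(rho)
      c(rho       = rho,
        empirical = var_of_mean(k, rho),
        theory    = (1 + (k - 1) * rho) / k))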
1,229
Bias and variance in leave-one-out vs K-fold cross validation
[...] my intuition tells me that in leave-one-out CV one should see relatively lower variance between models than in the $K$-fold CV, since we are only shifting one data point across folds and therefore the training sets between folds overlap substantially.

I think your intuition is sensible if you are thinking about the predictions made by the models on each leave-one-out fold. They are based on correlated/very similar data (the full dataset minus one data point) and will therefore make similar predictions - i.e., low variability.

The source of confusion though is that when people talk about LOOCV leading to high variability, they aren't talking about the predictions made by the many models built during that loop of cross-validation on the holdout sets. Instead, they are talking about how much variability your final chosen model (the one chosen via LOOCV) would have if you trained that exact model/parameters on new training sets - training sets your model hasn't seen before. In this case, variability would be high.

Why would variability be high? Let's simplify this a bit. Imagine that instead of using LOOCV to pick a model, you just had one training set and then you tested a model built using that training data, say, 100 times on 100 single test data points (data points not part of the training set). If you pick the model and parameter set that does best across those 100 tests, then you'll select one that allows this particular training set to be really good at predicting the test data. You could potentially choose a model that captures 100% of the associations between that particular training dataset and the holdout data. Unfortunately, some part of those associations between the training and test data sets will be noise or spurious associations, because although the test set changes and you can identify noise on this side, the training dataset doesn't and you can't determine what explained variance is due to noise. In other words, you have overfit your predictions to this particular training dataset.

Now, if you were to re-train this model with the same parameters multiple times on new training sets, what would happen? Well, a model that is overfit to a particular set of training data will lead to variability in its predictions when the training set changes (i.e., change the training set slightly and the model will change its predictions substantially).

Because all of the folds in LOOCV are highly correlated, it is similar to the case above (same training set; different test points). In other words, if that particular training set has some spurious correlation with those test points, your model will have difficulty determining which correlations are real and which are spurious, because even though the test set changes, the training set doesn't. In contrast, less correlated training folds mean that the model will be fit to multiple unique datasets. So, in this situation, if you retrain the model on another new dataset, it'll lead to similar predictions (i.e., small variability).
1,230
Bias and variance in leave-one-out vs K-fold cross validation
Although this question is rather old, I would like to add an additional answer because I think it is worth clarifying this a bit more.

My question is partly motivated by this thread: Optimal number of folds in K-fold cross-validation: is leave-one-out CV always the best choice?. The answer there suggests that models learned with leave-one-out cross-validation have higher variance than those learned with regular K-fold cross-validation, making leave-one-out CV a worse choice.

That answer does not suggest that, and it should not. Let's review the answer provided there:

Leave-one-out cross-validation does not generally lead to better performance than K-fold, and is more likely to be worse, as it has a relatively high variance (i.e. its value changes more for different samples of data than the value for k-fold cross-validation).

It is talking about performance. Here performance must be understood as the performance of the model error estimator. What you are estimating with k-fold or LOOCV is model performance, both when using these techniques for choosing the model and for providing an error estimate in itself. This is NOT the model variance, it is the variance of the estimator of the error (of the model). See the example (*) below.

However, my intuition tells me that in leave-one-out CV one should see relatively lower variance between models than in the K-fold CV, since we are only shifting one data point across folds and therefore the training sets between folds overlap substantially.

Indeed, there is lower variance between models: they are trained with datasets that have $n-2$ observations in common! As $n$ increases, they become virtually the same model (assuming no stochasticity). It is precisely this lower variance and higher correlation between models that makes the estimator I talk about above have more variance, because that estimator is the mean of these correlated quantities, and the variance of the mean of correlated data is higher than that of uncorrelated data. Here it is shown why: variance of the mean of correlated and uncorrelated data.

Or going in the other direction, if K is low in the K-fold CV, the training sets would be quite different across folds, and the resulting models are more likely to be different (hence higher variance).

Indeed. If the above argument is right, why would models learned with leave-one-out CV have higher variance?

The above argument is right. Now, the question is wrong. The variance of the model is a whole different topic. There is a variance wherever there is a random variable. In machine learning you deal with lots of random variables, in particular and not restricted to: each observation is a random variable; the sample is a random variable; the model, since it is trained from a random variable, is a random variable; the estimator of the error that your model will produce when faced with the population is a random variable; and last but not least, the error of the model is a random variable, since there is likely to be noise in the population (this is called irreducible error). There can also be more randomness if there is stochasticity involved in the model learning process. It is of paramount importance to distinguish between all these variables.

(*) Example: Suppose you have a model with a real error $err$, where you should understand $err$ as the error that the model produces over the entire population. Since you have a sample drawn from this population, you use cross-validation techniques over that sample to compute an estimate of $err$, which we can name $\tilde{err}$. As every estimator, $\tilde{err}$ is a random variable, meaning it has its own variance, $var(\tilde{err})$, and its own bias, $E(\tilde{err}-err)$. $var(\tilde{err})$ is precisely what is higher when employing LOOCV. While LOOCV is a less biased estimator than $k$-fold with $k < n$, it has more variance. To further understand why a compromise between bias and variance is desired, suppose $err = 10$ and that you have two estimators, $\tilde{err}_1$ and $\tilde{err}_2$. The first one is producing this output $$\tilde{err}_1 = 0,5,10,20,15,5,20,0,10,15...$$ whereas the second one is producing $$ \tilde{err}_2 = 8.5,9.5,8.5,9.5,8.75,9.25,8.8,9.2...$$ The last one, although it has more bias, should be preferred, as it has a lot less variance and an acceptable bias, i.e. a compromise (bias-variance trade-off). Please do note that you don't want very low variance either if that entails a high bias!

Additional note: In this answer I try to clarify (what I think are) the misconceptions that surround this topic and, in particular, to answer point by point and precisely the doubts the asker has. In particular, I try to make clear which variance we are talking about, which is what is essentially asked here, i.e. I explain the answer which is linked by the OP. That being said, while I provide the theoretical reasoning behind the claim, we have not yet found conclusive empirical evidence that supports it. So please be very careful. Ideally, you should read this post first and then refer to the answer by Xavier Bourret Sicotte, which provides an insightful discussion about the empirical aspects.

Last but not least, something else must be taken into account: even if the variance remains flat as you increase $k$ (as we haven't empirically proved otherwise), $k$-fold with $k$ small enough allows for repetition (repeated k-fold), which definitely should be done, e.g. $10 \times 10$-fold. This effectively reduces variance, and is not an option when performing LOOCV.
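As a quick numerical companion to the example (*) above (my own addition, using the values quoted in the answer), one can compute the bias, variance and mean squared error of the two estimators in R and see why the second, more biased one is preferable:

    ## Bias-variance trade-off for the two toy estimators of err = 10
    err  <- 10
    err1 <- c(0, 5, 10, 20, 15, 5, 20, 0, 10, 15)
    err2 <- c(8.5, 9.5, 8.5, 9.5, 8.75, 9.25, 8.8, 9.2)

    summarise <- function(e) c(bias     = mean(e) - err,
                               variance = var(e),
                               mse      = mean((e - err)^2))
    rbind(estimator1 = summarise(err1), estimator2 = summarise(err2))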
1,231
Bias and variance in leave-one-out vs K-fold cross validation
The issues are indeed subtle. But it is definitely not true that LOOCV has larger variance in general. A recent paper discusses some key aspects and addresses several seemingly widespread misconceptions on cross-validation:

Yongli Zhang and Yuhong Yang (2015). Cross-validation for selecting a model selection procedure. Journal of Econometrics, vol. 187, 95-112.

The following misconceptions are frequently seen in the literature, even up to now:

"Leave-one-out (LOO) CV has smaller bias but larger variance than leave-more-out CV"

This view is quite popular. For instance, Kohavi (1995, Section 1) states: "For example, leave-one-out is almost unbiased, but it has high variance, leading to unreliable estimates". The statement, however, is not generally true.

In more detail:

In the literature, even including recent publications, there are overly taken recommendations. The general suggestion of Kohavi (1995) to use 10-fold CV has been widely accepted. For instance, Krstajic et al (2014, page 11) state: "Kohavi [6] and Hastie et al [4] empirically show that V-fold cross-validation compared to leave-one-out cross-validation has lower variance". They consequently take the recommendation of 10-fold CV (with repetition) for all their numerical investigations. In our view, such a practice may be misleading. First, there should not be any general recommendation that does not take into account of the goal of the use of CV. In particular, examination of bias and variance of CV accuracy estimation of a candidate model/modeling procedure can be a very different matter from optimal model selection (with either of the two goals of model selection stated earlier). Second, even limited to the accuracy estimation context, the statement is not generally correct. For models/modeling procedures with low instability, LOO often has the smallest variability. We have also demonstrated that for highly unstable procedures (e.g., LASSO with $p_n$ much larger than $n$), the 10-fold or 5-fold CVs, while reducing variability, can have significantly larger MSE than LOO due to even worse bias increase.

Overall, from Figures 3-4, LOO and repeated 50- and 20-fold CVs are the best here, 10-fold is significantly worse, and $k \leq 5$ is clearly poor. For predictive performance estimation, we tend to believe that LOO is typically the best or among the best for a fixed model or a very stable modeling procedure (such as BIC in our context) in both bias and variance, or quite close to the best in MSE for a more unstable procedure (such as AIC or even LASSO with $p \gg n$). While 10-fold CV (with repetitions) certainly can be the best sometimes, but more frequently, it is in an awkward position: it is riskier than LOO (due to the bias problem) for prediction error estimation and it is usually worse than delete-$n/2$ CV for identifying the best candidate.
1,232
Bias and variance in leave-one-out vs K-fold cross validation
Before discussing bias and variance, the first question is: What is estimated by cross-validation?

In our 2004 JMLR paper, we argue that, without any further assumption, $K$-fold cross-validation estimates the expected generalization error of a training algorithm producing models out of samples of size $n(K-1)/K$. Here, the expectation is with respect to training samples. With this view, changing $K$ means changing the estimated quantity: the comparison of bias and variance for different values of $K$ should then be treated with caution.

That being said, we provide experimental results showing that the variance may decrease monotonically with $K$, or that it may be minimal for an intermediate value. We conjecture that the first scenario should be encountered for stable algorithms (for the current data distribution), and the second one for unstable algorithms.

my intuition tells me that in leave-one-out CV one should see relatively lower variance between models than in the $K$-fold CV, since we are only shifting one data point across folds and therefore the training sets between folds overlap substantially.

This intuition would be correct if cross-validation were averaging independent estimates, but the estimates can be highly correlated, and this correlation may increase with $K$. This increase is responsible for the overall increase of variance in the second scenario mentioned above. Intuitively, in that situation, leave-one-out CV may be blind to instabilities that exist but are not triggered by changing a single point in the training data, which makes the estimate highly sensitive to the realization of the training set.
1,233
Bias and variance in leave-one-out vs K-fold cross validation
I think there is a more straightforward answer. If you increase k, the test sets get smaller and smaller. Since the folds are randomly sampled, small test sets can easily fail to be representative of the data as a whole, whereas this is much less likely for bigger ones: one test set could contain all the difficult-to-predict records and another all the easy ones. Therefore, variance is high when you predict on very small test sets per fold.
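This per-fold effect is easy to see directly (my own sketch, not part of the answer; the chi-squared stand-in for squared errors is an arbitrary choice): the error computed on a single held-out point varies far more from fold to fold than the error computed on a 20-point fold.

    ## Spread of per-fold error estimates for tiny vs. moderate fold sizes
    set.seed(7)
    fold_err <- function(m) mean(rnorm(m)^2)   # stand-in for the error measured on an m-point fold

    err_folds_of_1  <- replicate(10000, fold_err(1))    # LOOCV-style folds
    err_folds_of_20 <- replicate(10000, fold_err(20))   # k-fold-style folds
    c(sd_fold_of_1 = sd(err_folds_of_1), sd_fold_of_20 = sd(err_folds_of_20))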
1,234
Bias and variance in leave-one-out vs K-fold cross validation
I see in most machine learning courses that a model is validated on smaller training sets and prediction scores are evaluated to give a measure of prediction power; these models are then discarded and the model is fit to the whole dataset for the final model.

I have a couple of bugbears with the terminology often used when talking about validation. For a start, each model fit on each fold of data (using k folds) or in each LOOCV iteration is a different model. No two will be the same (they have different coefficient values), therefore I think the term 'model validation' is misleading; I prefer to say method validation in this case.

Secondly, and more directly related to this topic, if the end product is a fit to the whole dataset for the final model, then LOOCV will always give a better idea of how the final model will predict, as each model will likely be very similar to the final model. Personally, I see little point in validating a model using models that are quite different from the model being validated. The more you stray from the model being validated, the less valid and relevant the validation actually is. Unless I am missing something, which I may well be, computational expense should be the only thing that would persuade one to move toward lower k folds.
1,235
Bias and variance in leave-one-out vs K-fold cross validation
Let's say you have a dataset of 100 observations. You choose 80 for training and 20 for test data. You will use the dataset of 80 observations to train the model, i.e. you further split it into K folds.

Method 1: LOOCV. The holdout data contain only one data point. All the models across the different holdout sets will be quite similar due to the highly correlated training data. Any model you pick here will have poor generalization ability because it is evaluated on just one data point in the holdout. The test set data (20 points) will be very different from the holdout data. Since the model did not learn enough variation from the holdout set, it won't generalize well to the test data.

Method 2: K-fold (K, say 5, << N). Here your training data sizes are smaller and the k-th fold used for validation will be larger than 1 observation (as in LOOCV). If you fit a number of models, say, and decide on the best model using a larger holdout dataset, the model will be less complex (or less likely to overfit), because otherwise it won't generalize to the holdout dataset (unless the holdout has the same distribution as the K-1 folds of training data).

Now we know that a model which does not generalize to the test data will have high variance because it is typically overfitting. Also, using the bias-variance trade-off principle, models based on LOOCV will have low bias (but high variance). On the other hand, models using K-fold may have low variance (but higher bias). Hope this helps.
1,236
Bias and variance in leave-one-out vs K-fold cross validation
I think the standard error that we are talking about here is the standard error of the $MSE$s or $\operatorname{Err}_k$s generated across the different folds within a single cross-validation run. But the simulation done by Xavier Bourret Sicotte calculated the standard error of $MSE$ or $\operatorname{Err}$ based on repeatedly drawing different sample data from the population. If you refer to the lecture on cross-validation by Rob Tibshirani and Trevor Hastie (free online), they give the formula for the standard error of $CV_K$ as $$ \widehat{\mathrm{SE}}\left(\mathrm{CV}_{K}\right)=\sqrt{\sum_{k=1}^{K}\left(\operatorname{Err}_{k}-\overline{\operatorname{Err}_{k}}\right)^{2} /(K-1)} $$ where $$ \mathrm{CV}_{K}=\sum_{k=1}^{K} \frac{n_{k}}{n} \operatorname{Err}_{k} $$ and $\operatorname{Err}_{k}=\sum_{i \in C_{k}} I\left(y_{i} \neq \hat{y}_{i}\right) / n_{k}$. So in my opinion the simulation was not correctly done:

- Generate 10,000 points from the distribution $\sin(x)+\epsilon$ where the true variance of $\epsilon$ is known
- Iterate $i$ times (e.g. 100 or 200 times); at each iteration, change the dataset by resampling N points from the original distribution
- For each data set $i$: perform K-fold cross validation for one value of K, and store the average mean square error (MSE) across the K folds
- Once the loop over $i$ is complete, calculate the mean and standard deviation of the MSE across the $i$ datasets for the same value of K <<this is where I think it is not right>>
- Repeat the above steps for all K in range {5,...,N} all the way to leave-one-out CV (LOOCV)
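For concreteness, here is a hedged R sketch of the quoted formula, computing $\mathrm{CV}_K$ and $\widehat{\mathrm{SE}}(\mathrm{CV}_K)$ across the folds of one cross-validation run; the data, model and value of K are invented for illustration.

```r
## Hedged sketch of the per-fold standard error formula above, for a
## classification-style error rate; data, model and K are invented.
set.seed(7)
n <- 200
x <- rnorm(n)
y <- rbinom(n, 1, plogis(1.5 * x))
dat <- data.frame(x, y)

K <- 10
fold <- sample(rep(1:K, length.out = n))

err_k <- sapply(1:K, function(k) {
  fit  <- glm(y ~ x, family = binomial, data = dat[fold != k, ])
  phat <- predict(fit, dat[fold == k, ], type = "response")
  mean((phat > 0.5) != dat$y[fold == k])   # Err_k: misclassification rate in fold k
})

n_k  <- as.vector(table(fold))
cv_K <- sum(n_k / n * err_k)                          # CV_K, weighted by fold size
se_K <- sqrt(sum((err_k - mean(err_k))^2) / (K - 1))  # SE across the K folds, as quoted
c(CV_K = cv_K, SE = se_K)
```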
1,237
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
I am going to change the order of the questions around. I've found textbooks and lecture notes frequently disagree, and would like a system to work through the choice that can safely be recommended as best practice, and especially a textbook or paper this can be cited to. Unfortunately, some discussions of this issue in books and so on rely on received wisdom. Sometimes that received wisdom is reasonable, sometimes it is less so (at least in the sense that it tends to focus on a smaller issue when a larger problem is ignored); we should examine the justifications offered for the advice (if any justification is offered at all) with care. Most guides to choosing a t-test or non-parametric test focus on the normality issue. That's true, but it's somewhat misguided for several reasons that I address in this answer. If performing an "unrelated samples" or "unpaired" t-test, whether to use a Welch correction? This (to use it unless you have reason to think variances should be equal) is the advice of numerous references; a short R illustration follows below. I point to some in this answer. Some people use a hypothesis test for equality of variances, but here it would have low power. Generally I just eyeball whether the sample SDs are "reasonably" close or not (which is somewhat subjective, so there must be a more principled way of doing it), but again, with low n it may well be that the population SDs are rather further apart than the sample ones. Is it safer simply to always use the Welch correction for small samples, unless there is some good reason to believe population variances are equal? That's what the advice is. The properties of the tests are affected by basing the choice on the outcome of an assumption test. Some references on this can be seen here and here, though there are more that say similar things. The equal-variances issue has many similar characteristics to the normality issue – people want to test it, but conditioning the choice of tests on the results of assumption tests can adversely affect the results of both kinds of subsequent test – it's better simply not to assume what you can't adequately justify (by reasoning about the data, using information from other studies relating to the same variables and so on). However, there are differences. One is that – at least in terms of the distribution of the test statistic under the null hypothesis (and hence, its level-robustness) – non-normality is less important in large samples (at least in respect of significance level, though power might still be an issue if you need to find small effects), while the effect of unequal variances under the equal variance assumption doesn't really go away with large sample size. What principled method can be recommended for choosing which is the most appropriate test when the sample size is "small"? With hypothesis tests, what matters (under some set of conditions) is primarily two things: What is the actual type I error rate? What is the power behaviour like? We also need to keep in mind that if we're comparing two procedures, changing the first will change the second (that is, if they're not conducted at the same actual significance level, you would expect that higher $\alpha$ is associated with higher power). (Of course we're usually not so confident we know what distributions we're dealing with, so the sensitivity of those behaviors to changes in circumstances also matters.) With these small-sample issues in mind, is there a good - hopefully citable - checklist to work through when deciding between t and non-parametric tests?
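Returning briefly to the Welch point above: a tiny, hedged R illustration (the sample sizes and SDs are made up). R's t.test applies the Welch correction by default; var.equal = TRUE requests the classical pooled test.

```r
## Welch vs pooled two-sample t-test in R; samples are invented.
set.seed(5)
x <- rnorm(12, mean = 0, sd = 1)
y <- rnorm(12, mean = 0, sd = 3)   # same mean, very different spread

t.test(x, y)                       # Welch correction (the default, var.equal = FALSE)
t.test(x, y, var.equal = TRUE)     # classical equal-variance (pooled) t-test
```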
I will consider a number of situations in which I'll make some recommendations, considering both the possibility of non-normality and unequal variances. In every case, take mention of the t-test to imply the Welch-test:

n medium-large

Non-normal (or unknown), likely to have near-equal variance: If the distribution is heavy-tailed, you will generally be better off with a Mann-Whitney, though if it's only slightly heavy, the t-test should do okay. With light tails the t-test may (often) be preferred. Permutation tests are a good option (you can even do a permutation test using a t-statistic if you're so inclined). Bootstrap tests are also suitable.

Non-normal (or unknown), unequal variance (or variance relationship unknown): If the distribution is heavy-tailed, you will generally be better off with a Mann-Whitney if inequality of variance is only related to inequality of mean - i.e. if H0 is true the difference in spread should also be absent. GLMs are often a good option, especially if there's skewness and spread is related to the mean. A permutation test is another option, with a similar caveat as for the rank-based tests. Bootstrap tests are a good possibility here. Zimmerman and Zumbo (1993)$^{[1]}$ suggest a Welch-t-test on the ranks which they say performs better than the Wilcoxon-Mann-Whitney in cases where the variances are unequal.

n moderately small

Rank tests are reasonable defaults here if you expect non-normality (again with the above caveat). If you have external information about shape or variance, you might consider GLMs. If you expect things not to be too far from normal, t-tests may be fine.

n very small

Because of the problem with getting suitable significance levels, neither permutation tests nor rank tests may be suitable, and at the smallest sizes, a t-test may be the best option (there's some possibility of slightly robustifying it). However, there's a good argument for using higher type I error rates with small samples (otherwise you're letting type II error rates inflate while holding type I error rates constant). Also see de Winter (2013)$^{[2]}$. The advice must be modified somewhat when the distributions are both strongly skewed and very discrete, such as Likert scale items where most of the observations are in one of the end categories. Then the Wilcoxon-Mann-Whitney isn't necessarily a better choice than the t-test. Simulation can help guide choices further when you have some information about likely circumstances.

I appreciate this is something of a perennial topic, but most questions concern the questioner's particular data set, sometimes a more general discussion of power, and occasionally what to do if two tests disagree, but I would like a procedure to pick the correct test in the first place! The main problem is how hard it is to check the normality assumption in a small data set: It is difficult to check normality in a small data set, and to some extent that's an important issue, but I think there's another issue of importance that we need to consider. A basic problem is that trying to assess normality as the basis of choosing between tests adversely impacts the properties of the tests you're choosing between. Any formal test for normality would have low power so violations may well not be detected. (Personally I wouldn't test for this purpose, and I'm clearly not alone, but I've found this to be of little use when clients demand a normality test be performed because that's what their textbook or old lecture notes or some website they found once declare should be done.
This is one point where a weightier-looking citation would be welcome.) Here's an example of a reference (there are others) which is unequivocal (Fay and Proschan, 2010$^{[3]}$): The choice between t- and WMW DRs should not be based on a test of normality. They are similarly unequivocal about not testing for equality of variance. To make matters worse, it is unsafe to use the Central Limit Theorem as a safety net: for small n we can't rely on the convenient asymptotic normality of the test statistic and t distribution. Nor even in large samples -- asymptotic normality of the numerator doesn't imply that the t-statistic will have a t-distribution. However, that may not matter so much, since you should still have asymptotic normality (e.g. the CLT for the numerator and Slutsky's theorem suggest that eventually the t-statistic should begin to look normal, if the conditions for both hold). One principled response to this is "safety first": as there's no way to reliably verify the normality assumption on a small sample, run an equivalent non-parametric test instead. That's actually the advice that the references I mention (or link to mentions of) give. Another approach I've seen, but feel less comfortable with, is to perform a visual check and proceed with a t-test if nothing untoward is observed ("no reason to reject normality", ignoring the low power of this check). My personal inclination is to consider whether there are any grounds for assuming normality, theoretical (e.g. variable is sum of several random components and CLT applies) or empirical (e.g. previous studies with larger n suggest variable is normal). Both those are good arguments, especially when backed up with the fact that the t-test is reasonably robust against moderate deviations from normality. (One should keep in mind, however, that "moderate deviations" is a tricky phrase; certain kinds of deviations from normality may impact the power performance of the t-test quite a bit even though those deviations are visually very small - the t-test is less robust to some deviations than others. We should keep this in mind whenever we're discussing small deviations from normality.) Beware, however, the phrasing "suggest the variable is normal". Being reasonably consistent with normality is not the same thing as normality. We can often reject actual normality with no need even to see the data – for example, if the data cannot be negative, the distribution cannot be normal. Fortunately, what matters is closer to what we might actually have from previous studies or reasoning about how the data are composed, which is that the deviations from normality should be small. If so, I would use a t-test if data passed visual inspection, and otherwise stick to non-parametrics. But any theoretical or empirical grounds usually only justify assuming approximate normality, and on low degrees of freedom it's hard to judge how near normal it needs to be to avoid invalidating a t-test. Well, that's something we can assess the impact of fairly readily (such as via simulations, as I mentioned earlier). From what I've seen, skewness seems to matter more than heavy tails (but on the other hand I have seen some claims of the opposite - though I don't know what that's based on). For people who see the choice of methods as a trade-off between power and robustness, claims about the asymptotic efficiency of the non-parametric methods are unhelpful.
For instance, the rule of thumb that "Wilcoxon tests have about 95% of the power of a t-test if the data really are normal, and are often far more powerful if the data is not, so just use a Wilcoxon" is sometimes heard, but if the 95% only applies to large n, this is flawed reasoning for smaller samples. But we can check small-sample power quite easily! It's easy enough to simulate to obtain power curves as here. (Again, also see de Winter (2013)$^{[2]}$.) Having done such simulations under a variety of circumstances, both for the two-sample and one-sample/paired-difference cases, the small-sample efficiency at the normal in both cases seems to be a little lower than the asymptotic efficiency, but the efficiency of the signed rank and Wilcoxon-Mann-Whitney tests is still very high even at very small sample sizes. At least that's if the tests are done at the same actual significance level; you can't do a 5% test with very small samples (at least not without randomized tests, for example), but if you're prepared to perhaps do (say) a 5.5% or a 3.2% test instead, then the rank tests hold up very well indeed compared with a t-test at that significance level. Small samples may make it very difficult, or impossible, to assess whether a transformation is appropriate for the data since it's hard to tell whether the transformed data belong to a (sufficiently) normal distribution. So if a QQ plot reveals very positively skewed data, which look more reasonable after taking logs, is it safe to use a t-test on the logged data? On larger samples this would be very tempting, but with small n I'd probably hold off unless there had been grounds to expect a log-normal distribution in the first place. There's another alternative: make a different parametric assumption. For example, if there's skewed data, one might in some situations reasonably consider a gamma distribution, or some other skewed family, as a better approximation - in moderately large samples, we might just use a GLM, but in very small samples it may be necessary to look to a small-sample test - in many cases simulation can be useful. Alternative 2: robustify the t-test (but taking care about the choice of robust procedure so as not to heavily discretize the resulting distribution of the test statistic) - this has some advantages over a very-small-sample nonparametric procedure, such as the ability to consider tests with low type I error rate. Here I'm thinking along the lines of using, say, M-estimators of location (and related estimators of scale) in the t-statistic to smoothly robustify against deviations from normality. Something akin to the Welch, like: $$\frac{\stackrel{\sim}{x}-\stackrel{\sim}{y}}{\stackrel{\sim}{S}_p}$$ where $\stackrel{\sim}{S}_p^2=\frac{\stackrel{\sim}{s}_x^2}{n_x}+\frac{\stackrel{\sim}{s}_y^2}{n_y}$ and $\stackrel{\sim}{x}$, $\stackrel{\sim}{s}_x$ etc. are robust estimates of location and scale respectively. I'd aim to reduce any tendency of the statistic to discreteness - so I'd avoid things like trimming and Winsorizing, since if the original data were discrete, trimming etc. will exacerbate this; by using M-estimation type approaches with a smooth $\psi$-function you achieve similar effects without contributing to the discreteness. Keep in mind we're trying to deal with the situation where $n$ is very small indeed (around 3-5 in each sample, say), so even M-estimation potentially has its issues.
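A very rough sketch of the kind of statistic just described (not a finished procedure, and not the author's exact construction): the assumption here is that MASS::huber supplies suitable M-estimates of location and robust scale, the tiny samples are invented, and the scaling-factor and d.f. refinements discussed in the next paragraph are simply omitted in favour of a plain simulation p-value at the normal.

```r
## Rough sketch of a robustified Welch-like statistic using Huber
## M-estimates of location and robust scale (MASS::huber). Not a
## finished test; reference distribution is obtained by simulation.
library(MASS)

robust_welch_stat <- function(x, y) {
  hx <- huber(x); hy <- huber(y)   # $mu: M-estimate of location, $s: robust scale
  (hx$mu - hy$mu) / sqrt(hx$s^2 / length(x) + hy$s^2 / length(y))
}

set.seed(9)
x <- c(2.1, 2.6, 1.9, 2.4)                 # tiny invented samples
y <- c(3.0, 3.4, 2.8, 3.1, 3.3)
obs <- robust_welch_stat(x, y)

# Null reference distribution by simulation at the normal
null_stats <- replicate(20000,
  robust_welch_stat(rnorm(length(x)), rnorm(length(y))))
mean(abs(null_stats) >= abs(obs))          # two-sided simulation p-value
```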
You could, for example, use simulation at the normal to get p-values (if sample sizes are very small, I'd suggest that over bootstrapping - if sample sizes aren't so small, a carefully-implemented bootstrap may do quite well, but then we might as well go back to Wilcoxon-Mann-Whitney). There'd be a scaling factor as well as a d.f. adjustment to get to what I'd imagine would then be a reasonable t-approximation. This means we should get the kind of properties we seek very close to the normal, and should have reasonable robustness in the broad vicinity of the normal. There are a number of issues that come up that would be outside the scope of the present question, but I think in very small samples the benefits should outweigh the costs and the extra effort required. [I haven't read the literature on this stuff for a very long time, so I don't have suitable references to offer on that score.] Of course if you didn't expect the distribution to be somewhat normal-like, but rather similar to some other distribution, you could undertake a suitable robustification of a different parametric test. What if you want to check assumptions for the non-parametrics? Some sources recommend verifying a symmetric distribution before applying a Wilcoxon test, which brings up similar problems to checking normality. Indeed. I assume you mean the signed rank test*. In the case of using it on paired data, if you are prepared to assume that the two distributions are the same shape apart from location shift you are safe, since the differences should then be symmetric. Actually, we don't even need that much; for the test to work you need symmetry under the null; it's not required under the alternative (e.g. consider a paired situation with identically-shaped right-skewed continuous distributions on the positive half-line, where the scales differ under the alternative but not under the null; the signed rank test should work essentially as expected in that case). The interpretation of the test is easier if the alternative is a location shift though. *(Wilcoxon's name is associated with both the one- and two-sample rank tests – signed rank and rank sum; with their U test, Mann and Whitney generalized the situation studied by Wilcoxon, and introduced important new ideas for evaluating the null distribution, but the priority between the two sets of authors on the Wilcoxon-Mann-Whitney is clearly Wilcoxon's -- so at least if we only consider Wilcoxon vs Mann & Whitney, Wilcoxon goes first in my book. However, it seems Stigler's Law beats me yet again, and Wilcoxon should perhaps share some of that priority with a number of earlier contributors, and (besides Mann and Whitney) should share credit with several discoverers of an equivalent test.$^{[4][5]}$)

References

[1]: Zimmerman DW and Zumbo BN (1993), "Rank transformations and the power of the Student t-test and Welch t′-test for non-normal populations," Canadian Journal of Experimental Psychology, 47: 523–539.

[2]: de Winter, J.C.F. (2013), "Using the Student's t-test with extremely small sample sizes," Practical Assessment, Research and Evaluation, 18:10, August, ISSN 1531-7714. http://pareonline.net/getvn.asp?v=18&n=10

[3]: Fay, Michael P. and Proschan, Michael A. (2010), "Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules," Statistics Surveys, 4: 1–39. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2857732/

[4]: Berry, K.J., Mielke, P.W. and Johnston, J.E. (2012), "The Two-sample Rank-sum Test: Early Development," Electronic Journal for History of Probability and Statistics, Vol. 8, December.

[5]: Kruskal, W. H. (1957), "Historical notes on the Wilcoxon unpaired two-sample test," Journal of the American Statistical Association, 52: 356–360.
1,238
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
In my view the principled approach recognizes that (1) tests and graphical assessments of normality have insufficient sensitivity and graph interpretation is frequently not objective, (2) multi-step procedures have uncertain operating characteristics, (3) many nonparametric tests have excellent operating characteristics under situations in which parametric tests have optimum power, and (4) the proper transformation of $Y$ is not generally the identity function, and nonparametric $k$-sample tests are invariant to the transformation chosen (not so for one-sample tests such as the Wilcoxon signed rank test). Regarding (2), multi-step procedures are particularly problematic in areas such as drug development where oversight agencies such as the FDA are rightfully concerned about possible manipulation of results. For example, an unscrupulous researcher might conveniently forget to report the test of normality if the $t$-test results in a low $P$-value. Putting all this together, some suggested guidance is as follows: If there is not a compelling reason to assume a Gaussian distribution before examining the data, and no covariate adjustment is needed, use a nonparametric test. If covariate adjustment is needed, use the semiparametric regression generalization of the rank test you prefer. For the Wilcoxon test this is the proportional odds model and for a normal scores test this is probit ordinal regression. These recommendations are fairly general, although your mileage may vary for certain small sample sizes. But we know that for larger samples the relative efficiency of the Wilcoxon 2-sample test and signed rank test compared to the $t$-test (if equal variance holds in the 2-sample case) is $\frac{3}{\pi}$ and that the relative efficiency of rank tests is frequently much greater than 1.0 when the Gaussian distribution does not hold. To me, the information loss in using rank tests is very small compared to the possible gains, robustness, and freedom from having to specify the transformation of $Y$. Nonparametric tests can perform well even if their optimality assumptions are not satisfied. For the $k$-sample problem, rank tests make no assumptions about the distribution for a given group; they only make assumptions about how the distributions of the $k$ groups are connected to each other, if you require the test to be optimal. For a $-\log-\log$ link cumulative probability ordinal model the distributions are assumed to be in proportional hazards. For a logit link cumulative probability model (proportional odds model), the distributions are assumed to be connected by the proportional odds assumption, i.e., the logits of the cumulative distribution functions are parallel. The shape of one of the distributions is irrelevant. Details may be found here in Chapter 15 of Handouts. There are two types of assumptions of a frequentist statistical method that are frequently considered. The first is the assumptions required to make the method preserve type I error. The second relates to preserving type II error (optimality; sensitivity). I believe that the best way to expose the assumptions needed for the second is to embed a nonparametric test in a semiparametric model as done above. The actual connection between the two is from Rao efficient score tests arising from the semiparametric model. The numerator of the score test from a proportional odds model for the two-sample case is exactly the rank-sum statistic. For background information on ordinal models see this. For equivalence of Wilcoxon and proportional odds tests see this.
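To see the Wilcoxon/proportional-odds correspondence in action, here is a small hedged sketch: the data are simulated purely for illustration, and the rms package is assumed to be available for its orm function. The p-values from the two approaches should come out close, reflecting the score-test connection described above.

```r
## Hedged illustration of the Wilcoxon / proportional-odds connection:
## compare the Wilcoxon rank-sum test with a proportional odds model
## (rms::orm) on simulated two-group data. Data are invented here.
library(rms)   # assumed available; provides orm()

set.seed(3)
g <- factor(rep(c("A", "B"), each = 30))
y <- rexp(60, rate = ifelse(g == "A", 1, 0.6))   # skewed outcome, shifted in group B

wilcox.test(y ~ g)      # Wilcoxon-Mann-Whitney

fit <- orm(y ~ g)       # proportional odds model treating y as ordinal
fit                     # the group-effect test is closely related to the rank-sum statistic
```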
1,239
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
Rand Wilcox in his publications and books makes some very important points, many of which were listed by Frank Harrell and Glen_b in earlier posts. The mean is not necessarily the quantity we want to make inferences about. There may be other quantities that better exemplify a typical observation. For t-tests, power can be low even for small departures from normality. For t-tests, observed probability coverage can be substantially different than nominal. Some key suggestions are:

A robust alternative is to compare trimmed means or M-estimators using the t-test. Wilcox suggests 20% trimmed means.

Empirical likelihood methods are theoretically more advantageous (Owen, 2001) but not necessarily so for medium to small n.

Permutation tests are great if one needs to control the Type I error, but one cannot get a CI.

For many situations Wilcox proposes the bootstrap-t to compare trimmed means. In R, this is implemented in the functions yuenbt and yhbt in the WRS package.

The percentile bootstrap may be better than the percentile-t when the amount of trimming is >= 20%. In R this is implemented in the function pb2gen in the aforementioned WRS package (a rough base-R sketch of the idea follows below).

Two good references are Wilcox (2010) and Wilcox (2012).
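As a rough base-R sketch of the flavour of these ideas (not the WRS implementation, which handles standard errors and small-sample details more carefully), a percentile bootstrap for the difference in 20% trimmed means might look like this; the samples are invented:

```r
## Minimal base-R sketch (not the WRS implementation): percentile
## bootstrap for the difference in 20% trimmed means.
set.seed(11)
x <- rexp(25)
y <- rexp(25) + 0.5                      # invented skewed samples

B <- 5000
boot_diff <- replicate(B, {
  mean(sample(x, replace = TRUE), trim = 0.2) -
  mean(sample(y, replace = TRUE), trim = 0.2)
})

quantile(boot_diff, c(0.025, 0.975))     # 95% percentile bootstrap CI
mean(x, trim = 0.2) - mean(y, trim = 0.2)  # observed difference in trimmed means
```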
1,240
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
Bradley, in his work Distribution-Free Statistical Tests (1968, pp. 17–24), brings thirteen contrasts between what he calls "classical" and "distribution-free" tests. Note that Bradley differentiates between "non-parametric" and "distribution-free," but for the purposes of your question this difference is not relevant. Included in those thirteen are elements that relate not just to the derivations of the tests, but their applications. These include:

Choice of significance level: Classical tests have continuous significance levels; distribution-free tests usually have a discrete set of attainable significance levels, so the classical tests offer more flexibility in setting said level (see the short sketch after this list for a small-sample illustration).

Logical validity of rejection region: Distribution-free test rejection regions can be less intuitively understandable (neither necessarily smooth nor continuous) and may cause confusion as to when the test should be considered to have rejected the null hypothesis.

Type of statistics which are testable: To quote Bradley directly: "Statistics defined in terms of arithmetical operations upon observation magnitudes can be tested by classical techniques, whereas those defined by order relationships (rank) or category-frequencies, etc. can be tested by distribution-free methods. Means and variances are examples of the former and medians and interquartile ranges, of the latter." Especially when dealing with non-normal distributions, the ability to test other statistics becomes valuable, lending weight to the distribution-free tests.

Testability of higher-order interactions: Much easier under classical tests than distribution-free tests.

Influence of sample size: This is a rather important one in my opinion. When sample sizes are small (Bradley says around n = 10), it may be very difficult to determine if the parametric assumptions underlying the classical tests have been violated or not. Distribution-free tests do not have these assumptions to be violated. Moreover, even when the assumptions have not been violated, the distribution-free tests are often almost as easy to apply and almost as efficient a test. So for small sample sizes (less than 10, possibly up to 30) Bradley favors an almost routine application of distribution-free tests. For large sample sizes, the Central Limit Theorem tends to overwhelm parametric violations in that the sample mean and sample variance will tend to the normal, and the parametric tests may be superior in terms of efficiency.

Scope of application: By being distribution-free, such tests are applicable to a much larger class of populations than classical tests assuming a specific distribution.

Detectability of violation of the assumption of a continuous distribution: Easy to see in distribution-free tests (e.g. existence of tied scores), harder in parametric tests.

Effect of violation of the assumption of a continuous distribution: If the assumption is violated the test becomes inexact. Bradley spends time explaining how the bounds of the inexactitude can be estimated for distribution-free tests, but there is no analogous routine for classical tests.
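A hedged sketch of the discrete-significance-level point, using R's exact null distribution of the Wilcoxon-Mann-Whitney statistic; the sample sizes m = n = 5 are chosen only for illustration, and the two-sided levels are formed by the usual doubling of the lower-tail probability.

```r
## Attainable two-sided significance levels of the exact
## Wilcoxon-Mann-Whitney test with m = n = 5 (illustrative sizes).
m <- 5; n <- 5
w <- 0:(m * n)                     # possible values of the Mann-Whitney statistic
probs <- dwilcox(w, m, n)          # exact null distribution

lower <- cumsum(probs)[w <= m * n / 2]           # lower-tail probabilities
attainable <- sort(unique(pmin(2 * lower, 1)))   # doubled for two-sided levels
round(attainable, 4)               # note: no cut-off gives exactly 0.05
```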
1,241
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
Starting to answer this very interesting question. For non-paired data: Performance of five two-sample location tests for skewed distributions with unequal variances by Morten W. Fagerland and Leiv Sandvik (behind paywall) performs a series of experiments with 5 different tests (t-test, Welch U, Yuen-Welch, Wilcoxon-Mann-Whitney and Brunner-Munzel) for different combinations of sample size, sample ratio, departure from normality, and so on. The paper ends up suggesting Welch U in general, but Appendix A of the paper lists the results for each combination of sample sizes. And for small sample sizes (m = 10, n = 10 or 25) the results are more confusing (as expected): in my reading of the results (not the authors'), Welch U and Brunner-Munzel seem to perform equally well, and the t-test also does well in the m = 10, n = 10 case. This is what I know so far. For a "fast" solution, I used to cite Increasing Physicians’ Awareness of the Impact of Statistics on Research Outcomes: Comparative Power of the t-test and Wilcoxon Rank-Sum Test in Small Samples Applied Research by Patrick D. Bridge and Shlomo S. Sawilowsky (also behind paywall) and go straight to Wilcoxon no matter the sample size, but caveat emptor: see, for example, Should we always choose a nonparametric test when comparing two apparently nonnormal distributions? by Eva Skovlund and Grete U. Fenstad. I have not yet found any similar results for paired data.
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
Starting to answer this very interesting question. For non-paired data: Performance of five two-sample location tests for skewed distributions with unequal variances by Morten W. Fagerland, Leiv Sand
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples Starting to answer this very interesting question. For non-paired data: Performance of five two-sample location tests for skewed distributions with unequal variances by Morten W. Fagerland and Leiv Sandvik (behind paywall) performs a series of experiments with 5 different tests (t-test, Welch U, Yuen-Welch, Wilcoxon-Mann-Whitney and Brunner-Munzel) for different combinations of sample size, sample ratio, departure from normality, and so on. The paper ends up suggesting Welch U in general, but Appendix A of the paper lists the results for each combination of sample sizes. And for small sample sizes (m = 10, n = 10 or 25) the results are more confusing (as expected): in my reading of the results (not the authors'), Welch U and Brunner-Munzel seem to perform equally well, and the t-test also does well in the m = 10, n = 10 case. This is what I know so far. For a "fast" solution, I used to cite Increasing Physicians’ Awareness of the Impact of Statistics on Research Outcomes: Comparative Power of the t-test and Wilcoxon Rank-Sum Test in Small Samples Applied Research by Patrick D. Bridge and Shlomo S. Sawilowsky (also behind paywall) and go straight to Wilcoxon no matter the sample size, but caveat emptor: see, for example, Should we always choose a nonparametric test when comparing two apparently nonnormal distributions? by Eva Skovlund and Grete U. Fenstad. I have not yet found any similar results for paired data.
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples Starting to answer this very interesting question. For non-paired data: Performance of five two-sample location tests for skewed distributions with unequal variances by Morten W. Fagerland, Leiv Sand
1,242
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
Considering the following links: Is normality testing 'essentially useless'? Need and best way to determine normality of data To simplify things: since non-parametric tests are reasonably good even for normal data, why not always use them for small samples?
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
Considering the following links: Is normality testing 'essentially useless'? Need and best way to determine normality of data To simplify things: since non-parametric tests are reasonably good even for n
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples Considering the following links: Is normality testing 'essentially useless'? Need and best way to determine normality of data To simplify things: since non-parametric tests are reasonably good even for normal data, why not always use them for small samples?
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples Considering the following links: Is normality testing 'essentially useless'? Need and best way to determine normality of data To simplify things: since non-parametric tests are reasonably good even for n
1,243
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
Simulating the difference of means of Gamma populations, comparing the t-test and the Mann-Whitney test. Summary of results: When the variance of the two populations is the same, the Mann-Whitney test has greater true power but also greater true type I error than the t-test. For a large sample (N = 1000), the minimum true type I error for the Mann-Whitney test is 9%, whereas the t-test has a true type I error of 5% as required by the experiment setup (reject $H_0$ for p-values below 5%). When the variances of the two populations are different, the Mann-Whitney test leads to a large type I error even when the means are the same. This is expected, since the Mann-Whitney test is for a difference in distributions, not in means. The t-test is robust to differences in variance when the means are identical. Experiment 1: Different means, same variance. Consider two gamma distributions parametrized by shape $k$ and scale $\theta$, with parameters $X_1$: gamma with $k = 0.5$ and $\theta = 1$, hence mean $E[X_1] = k\theta = 0.5$ and variance $Var[X_1] = k\theta^2 = 0.5$; $X_2$: gamma with $k = 1.445$ and $\theta = 0.588235$, hence $E[X_2] = .85$ and variance $Var[X_2] = .5$. We will be testing for a difference in means of samples from $X_1$ and $X_2$. Here the setup is chosen such that $X_1$ and $X_2$ have the same variance, hence the true Cohen's d is 0.5: $$ d = (.85 - .5) / \sqrt{.5} = 0.5$$ We will compare two testing methods, the two-sample t-test and the Mann-Whitney non-parametric test, and simulate the true type I error and power of these tests for different sample sizes (assuming we reject the null hypothesis for $p$ value < 0.05). $H_0: \mu_{X_1} = \mu_{X_2} = 0.5$; $H_1: \mu_{X_1} \neq \mu_{X_2}$. The true type I error is calculated as $P(\text{reject} | H_0)$ and the true power is calculated as $P(\text{reject} | H_1)$. We simulate thousands of experiments using the true distributions of $H_0$ and $H_1$. Sources: https://en.wikipedia.org/wiki/Gamma_distribution https://en.wikipedia.org/wiki/Effect_size#Cohen's_d Population distributions Simulation results Discussion As expected, the sample mean is not normally distributed for small sample size ($N = 10$), as shown by the distribution skew and kurtosis; for larger sample sizes, the distribution is approximately normal. For all sample sizes, the Mann-Whitney test has more power than the t-test, in some cases by a factor of 2. For all sample sizes, the Mann-Whitney test has greater type I error, by a factor of 2 to 3. The t-test has low power for small sample size. Discussion: when the variance of the two populations is indeed the same, the Mann-Whitney test greatly outperforms the t-test in terms of power for small sample sizes, but has a higher type I error rate. Experiment 2: Different variances, same mean. $X_1$: gamma with $k = 0.5$ and $\theta = 1$, hence mean $E[X_1] = k\theta = .5$ and variance $Var[X_1] = k\theta^2 = .5$; $X_2$: gamma with $k = 0.25$ and $\theta = 2$, hence $E[X_2] = .5$ and variance $Var[X_2] = 1$. Here we won't be able to compute the power because the simulation does not contain the true $H_1$ scenario, but we can compute the type I error when $Var[X_1] = Var[X_2]$ and when $Var[X_1] \neq Var[X_2]$. Discussion: Results from the simulation show that the t-test is very robust to different variances, and its type I error is close to 5% for all sample sizes. As expected, the Mann-Whitney test performs poorly in this case since it is not testing for a difference in means but for a difference in distributions.
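Below is a minimal re-implementation sketch of Experiment 1 (my own code using numpy and scipy, not the code behind the figures referenced above); calling rejection_rate(0.25, 2.0) additionally reproduces the unequal-variance type I error check of Experiment 2, since that population also has mean 0.5 but variance 1.

```python
# Sketch: estimate the true type I error and power of the t-test vs the
# Mann-Whitney test on gamma samples, following Experiment 1 above.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
n, n_sim, alpha = 10, 5000, 0.05

def rejection_rate(k2, theta2):
    """Fraction of simulated datasets in which each test rejects at level alpha."""
    rej_t = rej_mw = 0
    for _ in range(n_sim):
        x1 = rng.gamma(shape=0.5, scale=1.0, size=n)   # E = 0.5, Var = 0.5
        x2 = rng.gamma(shape=k2, scale=theta2, size=n)
        rej_t += ttest_ind(x1, x2).pvalue < alpha
        rej_mw += mannwhitneyu(x1, x2, alternative="two-sided").pvalue < alpha
    return rej_t / n_sim, rej_mw / n_sim

# Type I error: both samples drawn from the same H0 population
print("type I error (t, MW):", rejection_rate(0.5, 1.0))
# Power: second sample drawn from the H1 population (E = 0.85, Var = 0.5)
print("power        (t, MW):", rejection_rate(1.445, 0.588235))
```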
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
Simulating the difference of means of Gamma populations Comparing the t-test and the Mann Whitney test Summary of results When the variance of the two populations is the same, the Mann Whitney test h
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples Simulating the difference of means of Gamma populations, comparing the t-test and the Mann-Whitney test. Summary of results: When the variance of the two populations is the same, the Mann-Whitney test has greater true power but also greater true type I error than the t-test. For a large sample (N = 1000), the minimum true type I error for the Mann-Whitney test is 9%, whereas the t-test has a true type I error of 5% as required by the experiment setup (reject $H_0$ for p-values below 5%). When the variances of the two populations are different, the Mann-Whitney test leads to a large type I error even when the means are the same. This is expected, since the Mann-Whitney test is for a difference in distributions, not in means. The t-test is robust to differences in variance when the means are identical. Experiment 1: Different means, same variance. Consider two gamma distributions parametrized by shape $k$ and scale $\theta$, with parameters $X_1$: gamma with $k = 0.5$ and $\theta = 1$, hence mean $E[X_1] = k\theta = 0.5$ and variance $Var[X_1] = k\theta^2 = 0.5$; $X_2$: gamma with $k = 1.445$ and $\theta = 0.588235$, hence $E[X_2] = .85$ and variance $Var[X_2] = .5$. We will be testing for a difference in means of samples from $X_1$ and $X_2$. Here the setup is chosen such that $X_1$ and $X_2$ have the same variance, hence the true Cohen's d is 0.5: $$ d = (.85 - .5) / \sqrt{.5} = 0.5$$ We will compare two testing methods, the two-sample t-test and the Mann-Whitney non-parametric test, and simulate the true type I error and power of these tests for different sample sizes (assuming we reject the null hypothesis for $p$ value < 0.05). $H_0: \mu_{X_1} = \mu_{X_2} = 0.5$; $H_1: \mu_{X_1} \neq \mu_{X_2}$. The true type I error is calculated as $P(\text{reject} | H_0)$ and the true power is calculated as $P(\text{reject} | H_1)$. We simulate thousands of experiments using the true distributions of $H_0$ and $H_1$. Sources: https://en.wikipedia.org/wiki/Gamma_distribution https://en.wikipedia.org/wiki/Effect_size#Cohen's_d Population distributions Simulation results Discussion As expected, the sample mean is not normally distributed for small sample size ($N = 10$), as shown by the distribution skew and kurtosis; for larger sample sizes, the distribution is approximately normal. For all sample sizes, the Mann-Whitney test has more power than the t-test, in some cases by a factor of 2. For all sample sizes, the Mann-Whitney test has greater type I error, by a factor of 2 to 3. The t-test has low power for small sample size. Discussion: when the variance of the two populations is indeed the same, the Mann-Whitney test greatly outperforms the t-test in terms of power for small sample sizes, but has a higher type I error rate. Experiment 2: Different variances, same mean. $X_1$: gamma with $k = 0.5$ and $\theta = 1$, hence mean $E[X_1] = k\theta = .5$ and variance $Var[X_1] = k\theta^2 = .5$; $X_2$: gamma with $k = 0.25$ and $\theta = 2$, hence $E[X_2] = .5$ and variance $Var[X_2] = 1$. Here we won't be able to compute the power because the simulation does not contain the true $H_1$ scenario, but we can compute the type I error when $Var[X_1] = Var[X_2]$ and when $Var[X_1] \neq Var[X_2]$. Discussion: Results from the simulation show that the t-test is very robust to different variances, and its type I error is close to 5% for all sample sizes. As expected, the Mann-Whitney test performs poorly in this case since it is not testing for a difference in means but for a difference in distributions.
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples Simulating the difference of means of Gamma populations Comparing the t-test and the Mann Whitney test Summary of results When the variance of the two populations is the same, the Mann Whitney test h
1,244
Free statistical textbooks
Online books include http://davidmlane.com/hyperstat/ http://vassarstats.net/textbook/ https://dwstockburger.com/Multibook/mbk.htm https://web.archive.org/web/20180122061046/http://bookboon.com/en/statistics-ebooks http://www.freebookcentre.net/SpecialCat/Free-Statistics-Books-Download.html Update: I can now add my own forecasting textbook Forecasting: principles and practice (Hyndman & Athanasopoulos, 2012)
Free statistical textbooks
Online books include http://davidmlane.com/hyperstat/ http://vassarstats.net/textbook/ https://dwstockburger.com/Multibook/mbk.htm https://web.archive.org/web/20180122061046/http://bookboon.com/en/st
Free statistical textbooks Online books include http://davidmlane.com/hyperstat/ http://vassarstats.net/textbook/ https://dwstockburger.com/Multibook/mbk.htm https://web.archive.org/web/20180122061046/http://bookboon.com/en/statistics-ebooks http://www.freebookcentre.net/SpecialCat/Free-Statistics-Books-Download.html Update: I can now add my own forecasting textbook Forecasting: principles and practice (Hyndman & Athanasopoulos, 2012)
Free statistical textbooks Online books include http://davidmlane.com/hyperstat/ http://vassarstats.net/textbook/ https://dwstockburger.com/Multibook/mbk.htm https://web.archive.org/web/20180122061046/http://bookboon.com/en/st
1,245
Free statistical textbooks
The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is a standard text for statistics and data mining, and is now free: https://web.stanford.edu/~hastie/ElemStatLearn/ Also available here.
Free statistical textbooks
The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is a standard text for statistics and data mining, and is now free: https://web.stanford.edu/~hastie/ElemStatLearn/ Also Availa
Free statistical textbooks The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is a standard text for statistics and data mining, and is now free: https://web.stanford.edu/~hastie/ElemStatLearn/ Also available here.
Free statistical textbooks The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is a standard text for statistics and data mining, and is now free: https://web.stanford.edu/~hastie/ElemStatLearn/ Also Availa
1,246
Free statistical textbooks
Introduction to Statistical Thought
Free statistical textbooks
Introduction to Statistical Thought
Free statistical textbooks Introduction to Statistical Thought
Free statistical textbooks Introduction to Statistical Thought
1,247
Free statistical textbooks
There's a superb Probability book here: https://web.archive.org/web/20100102085337/http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/book.html which you can also buy in hardcopy.
Free statistical textbooks
There's a superb Probability book here: https://web.archive.org/web/20100102085337/http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/book.html which you can also buy in ha
Free statistical textbooks There's a superb Probability book here: https://web.archive.org/web/20100102085337/http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/book.html which you can also buy in hardcopy.
Free statistical textbooks There's a superb Probability book here: https://web.archive.org/web/20100102085337/http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/book.html which you can also buy in ha
1,248
Free statistical textbooks
I've often found the Engineering Statistics Handbook useful. It can be found here. Although I've never read it myself, I hear Introduction to Probability and Statistics Using R is very good. It's a full ~400 page ebook (also available as an actual book). As a bonus, it also teaches you R, which of course you want to learn anyways.
Free statistical textbooks
I've often found the Engineering Statistics Handbook useful. It can be found here. Although I've never read it myself, I hear Introduction to Probability and Statistics Using R is very good. It's a fu
Free statistical textbooks I've often found the Engineering Statistics Handbook useful. It can be found here. Although I've never read it myself, I hear Introduction to Probability and Statistics Using R is very good. It's a full ~400 page ebook (also available as an actual book). As a bonus, it also teaches you R, which of course you want to learn anyways.
Free statistical textbooks I've often found the Engineering Statistics Handbook useful. It can be found here. Although I've never read it myself, I hear Introduction to Probability and Statistics Using R is very good. It's a fu
1,249
Free statistical textbooks
Machine Learning One of the most, if not the most, popular textbooks on machine learning is Hastie, Tibshirani, and Friedman, The Elements of Statistical Learning, which is fully available online (currently 10th printing). It is comparable in scope e.g. to Bishop's Pattern Recognition and ML or Murphy's ML, but those books are not free, while ESL is. Hastie & Tibshirani also co-wrote the freely available An Introduction to Statistical Learning, With Applications in R, which is basically a simpler version of The Elements and focuses on R. In 2015, Hastie & Tibshirani co-authored a new textbook Statistical Learning with Sparsity: The Lasso and Generalizations, also available online. This one is quite a bit shorter and focuses specifically on the lasso. Another freely available all-encompassing machine learning textbook is David Barber's Bayesian Reasoning and Machine Learning. I did not use it myself, but it is widely considered to be an excellent book. $\hskip{5em}$ Switching now to more specialized topics, there are: Rasmussen & Williams Gaussian Processes for Machine Learning, which is the book on Gaussian processes. The much-awaited Goodfellow, Bengio and Courville Deep Learning textbook that is about to be published by MIT Press. It isn't published yet, but the book is already available online. On the official website one can view it in the browser but cannot download it (as per agreement with the publisher), but it is easy to find a combined PDF e.g. here on github. Csaba Szepesvári, Algorithms for Reinforcement Learning, a concise book on RL. A classical, much more detailed but a bit dated textbook is Sutton & Barto, Reinforcement Learning: An Introduction, which is also freely available online but only in a cumbersome HTML format. Boyd and Vandenberghe, Convex Optimization. $\hskip{4em}$
Free statistical textbooks
Machine Learning One of the most, if not the most, popular textbooks on machine learning is Hastie, Tibshirani, and Friedman, The Elements of Statistical Learning, which is fully available online (curr
Free statistical textbooks Machine Learning One of the most, if not the most, popular textbooks on machine learning is Hastie, Tibshirani, and Friedman, The Elements of Statistical Learning, which is fully available online (currently 10th printing). It is comparable in scope e.g. to Bishop's Pattern Recognition and ML or Murphy's ML, but those books are not free, while ESL is. Hastie & Tibshirani also co-wrote the freely available An Introduction to Statistical Learning, With Applications in R, which is basically a simpler version of The Elements and focuses on R. In 2015, Hastie & Tibshirani co-authored a new textbook Statistical Learning with Sparsity: The Lasso and Generalizations, also available online. This one is quite a bit shorter and focuses specifically on the lasso. Another freely available all-encompassing machine learning textbook is David Barber's Bayesian Reasoning and Machine Learning. I did not use it myself, but it is widely considered to be an excellent book. $\hskip{5em}$ Switching now to more specialized topics, there are: Rasmussen & Williams Gaussian Processes for Machine Learning, which is the book on Gaussian processes. The much-awaited Goodfellow, Bengio and Courville Deep Learning textbook that is about to be published by MIT Press. It isn't published yet, but the book is already available online. On the official website one can view it in the browser but cannot download it (as per agreement with the publisher), but it is easy to find a combined PDF e.g. here on github. Csaba Szepesvári, Algorithms for Reinforcement Learning, a concise book on RL. A classical, much more detailed but a bit dated textbook is Sutton & Barto, Reinforcement Learning: An Introduction, which is also freely available online but only in a cumbersome HTML format. Boyd and Vandenberghe, Convex Optimization. $\hskip{4em}$
Free statistical textbooks Machine Learning One of the most, if not the most, popular textbooks on machine learning is Hastie, Tibshirani, and Friedman, The Elements of Statistical Learning, which is fully available online (curr
1,250
Free statistical textbooks
I really like The Little Handbook of Statistical Practice by Gerard E. Dallal
Free statistical textbooks
I really like The Little Handbook of Statistical Practice by Gerard E. Dallal
Free statistical textbooks I really like The Little Handbook of Statistical Practice by Gerard E. Dallal
Free statistical textbooks I really like The Little Handbook of Statistical Practice by Gerard E. Dallal
1,251
Free statistical textbooks
Here's a fresh one: Introduction to Probability and Statistics Using R. It's R-specific, but it's a great one. I haven't read it yet, but it seems fine so far...
Free statistical textbooks
Here's a fresh one: Introduction to Probability and Statistics Using R. It's R-specific, but it's a great one. I haven't read it yet, but it seems fine so far...
Free statistical textbooks Here's a fresh one: Introduction to Probability and Statistics Using R. It's R-specific, but it's a great one. I haven't read it yet, but it seems fine so far...
Free statistical textbooks Here's a fresh one: Introduction to Probability and Statistics Using R. It's R-specific, but it's a great one. I haven't read it yet, but it seems fine so far...
1,252
Free statistical textbooks
Norman Matloff has written a mathematical statistics textbook for computer science students that's free. Kind of a niche market, I suppose. For what it's worth, I haven't read it, but Matloff has a Ph.D. in mathematical statistics, works for a computer science department, and wrote a really good R book that I recommend for people who want to go to the next stage and program R better (as opposed to just fitting models with canned functions).
Free statistical textbooks
Norman Matloff has written a mathematical statistics textbook for computer science students that's free. Kind of a niche market, I suppose. For what it's worth, I haven't read it, but Matloff has a
Free statistical textbooks Norman Matloff has written a mathematical statistics textbook for computer science students that's free. Kind of a niche market, I suppose. For what it's worth, I haven't read it, but Matloff has a Ph.D. in mathematical statistics, works for a computer science department, and wrote a really good R book that I recommend for people who want to go to the next stage and program R better (as opposed to just fitting models with canned functions).
Free statistical textbooks Norman Matloff has written a mathematical statistics textbook for computer science students that's free. Kind of a niche market, I suppose. For what it's worth, I haven't read it, but Matloff has a
1,253
Free statistical textbooks
OpenIntro Statistics http://www.openintro.org/stat/textbook.php Inexpensive paperback copies are also available on Amazon.
Free statistical textbooks
OpenIntro Statistics http://www.openintro.org/stat/textbook.php Inexpensive paperback copies are also available on Amazon.
Free statistical textbooks OpenIntro Statistics http://www.openintro.org/stat/textbook.php Inexpensive paperback copies are also available on Amazon.
Free statistical textbooks OpenIntro Statistics http://www.openintro.org/stat/textbook.php Inexpensive paperback copies are also available on Amazon.
1,254
Free statistical textbooks
A New View of Statistics by Will G. Hopkins is great! It is designed to help you understand how to understand the results of statistical analyses, not how to prove statistical theorems.
Free statistical textbooks
A New View of Statistics by Will G. Hopkins is great! It is designed to help you understand how to understand the results of statistical analyses, not how to prove statistical theorems.
Free statistical textbooks A New View of Statistics by Will G. Hopkins is great! It is designed to help you understand how to understand the results of statistical analyses, not how to prove statistical theorems.
Free statistical textbooks A New View of Statistics by Will G. Hopkins is great! It is designed to help you understand how to understand the results of statistical analyses, not how to prove statistical theorems.
1,255
Free statistical textbooks
Not Statistics specific, but a good resource is: http://www.reddit.com/r/mathbooks Also, George Cain at Georgia Tech maintains a list of freely available maths texts that includes some statistical texts. http://people.math.gatech.edu/~cain/textbooks/onlinebooks.html
Free statistical textbooks
Not Statistics specific, but a good resource is: http://www.reddit.com/r/mathbooks Also, George Cain at Georgia Tech maintains a list of freely available maths texts that includes some statistical te
Free statistical textbooks Not Statistics specific, but a good resource is: http://www.reddit.com/r/mathbooks Also, George Cain at Georgia Tech maintains a list of freely available maths texts that includes some statistical texts. http://people.math.gatech.edu/~cain/textbooks/onlinebooks.html
Free statistical textbooks Not Statistics specific, but a good resource is: http://www.reddit.com/r/mathbooks Also, George Cain at Georgia Tech maintains a list of freely available maths texts that includes some statistical te
1,256
Free statistical textbooks
"An Introduction to Statistical Learning with Applications in R" https://www.statlearning.com/ by two of the 3 authors of the well-known "The Elements of Statistical Learning" plus 2 other authors. An Introduction to Statistical Learning with Applications in R is written at a more introductory level with less mathematical background required than The Elements of Statistical Learning, makes use of R (unlike The Elements of Statistical Learning), and was first published in 2013, some years after this thread was started.
Free statistical textbooks
"An Introduction to Statistical Learning with Applications in R" https://www.statlearning.com/ by two of the 3 authors of the well-known "The Elements of Statistical Learning" plus 2 other authors. A
Free statistical textbooks "An Introduction to Statistical Learning with Applications in R" https://www.statlearning.com/ by two of the 3 authors of the well-known "The Elements of Statistical Learning" plus 2 other authors. An Introduction to Statistical Learning with Applications in R is written at a more introductory level with less mathematical background required than The Elements of Statistical Learning, makes use of R (unlike The Elements of Statistical Learning), and was first published in 2013, some years after this thread was started.
Free statistical textbooks "An Introduction to Statistical Learning with Applications in R" https://www.statlearning.com/ by two of the 3 authors of the well-known "The Elements of Statistical Learning" plus 2 other authors. A
1,257
Free statistical textbooks
For getting into stochastic processes and SDEs, Tom Kurtz's lecture notes are hard to beat. It starts with a decent review of probability and some convergence results, and then dives right into continuous time stochastic processes in fairly clear, comprehensible language. In general it's one of the best books on the topic -- free or otherwise -- I've found.
Free statistical textbooks
For getting into stochastic processes and SDEs, Tom Kurtz's lecture notes are hard to beat. It starts with a decent review of probability and some convergence results, and then dives right into conti
Free statistical textbooks For getting into stochastic processes and SDEs, Tom Kurtz's lecture notes are hard to beat. It starts with a decent review of probability and some convergence results, and then dives right into continuous time stochastic processes in fairly clear, comprehensible language. In general it's one of the best books on the topic -- free or otherwise -- I've found.
Free statistical textbooks For getting into stochastic processes and SDEs, Tom Kurtz's lecture notes are hard to beat. It starts with a decent review of probability and some convergence results, and then dives right into conti
1,258
Free statistical textbooks
I really like these two books by Daniel McFadden of Berkeley: Lecture Notes: Econometric Tools http://elsa.berkeley.edu/users/mcfadden/e240a_sp98/e240a.html Lecture Notes: Econometrics/Statistics http://elsa.berkeley.edu/users/mcfadden/e240b_f01/e240b.html
Free statistical textbooks
I really like these two books by Daniel McFadden of Berkeley: Lecture Notes: Econometric Tools http://elsa.berkeley.edu/users/mcfadden/e240a_sp98/e240a.html Lecture Notes: Econometrics/Statistics http://elsa.berkeley.edu/users/mcfadden/e240b_f01/e240b.html
Free statistical textbooks I really like these two books by Daniel McFadden of Berkeley: Lecture Notes: Econometric Tools http://elsa.berkeley.edu/users/mcfadden/e240a_sp98/e240a.html Lecture Notes: Econometrics/Statistics http://elsa.berkeley.edu/users/mcfadden/e240b_f01/e240b.html
Free statistical textbooks I really like these two books by Daniel McFadden of Berkeley: Lecture Notes: Econometric Tools http://elsa.berkeley.edu/users/mcfadden/e240a_sp98/e240a.html Lecture Notes: Econometrics/Statistics http://elsa.berkeley.edu/users/mcfadden/e240b_f01/e240b.html
1,259
Free statistical textbooks
Cosma Shalizi, CMUs ML guru, occasionally updates a draft of a stats book soon to be published by Cambridge Press titled Advanced Data Analysis from an Elementary Point of View. Can't recommend it highly enough... Here's the Table of contents: I. Regression and Its Generalizations Regression Basics The Truth about Linear Regression Model Evaluation Smoothing in Regression Simulation The Bootstrap Weighting and Variance Splines Additive Models Testing Regression Specifications Logistic Regression Generalized Linear Models and Generalized Additive Models Classification and Regression Trees II. Distributions and Latent Structure Density Estimation Relative Distributions and Smooth Tests of Goodness-of-Fit Principal Components Analysis Factor Models Nonlinear Dimensionality Reduction Mixture Models Graphical Models III. Dependent Data Time Series Spatial and Network Data Simulation-Based Inference IV. Causal Inference Graphical Causal Models Identifying Causal Effects Causal Inference from Experiments Estimating Causal Effects Discovering Causal Structure Appendices Data-Analysis Problem Sets Reminders from Linear Algebra Big O and Little o Notation Taylor Expansions Multivariate Distributions Algebra with Expectations and Variances Propagation of Error, and Standard Errors for Derived Quantities Optimization chi-squared and the Likelihood Ratio Test Proof of the Gauss-Markov Theorem Rudimentary Graph Theory Information Theory Hypothesis Testing Writing R Functions Random Variable Generation
Free statistical textbooks
Cosma Shalizi, CMUs ML guru, occasionally updates a draft of a stats book soon to be published by Cambridge Press titled Advanced Data Analysis from an Elementary Point of View. Can't recommend it hig
Free statistical textbooks Cosma Shalizi, CMUs ML guru, occasionally updates a draft of a stats book soon to be published by Cambridge Press titled Advanced Data Analysis from an Elementary Point of View. Can't recommend it highly enough... Here's the Table of contents: I. Regression and Its Generalizations Regression Basics The Truth about Linear Regression Model Evaluation Smoothing in Regression Simulation The Bootstrap Weighting and Variance Splines Additive Models Testing Regression Specifications Logistic Regression Generalized Linear Models and Generalized Additive Models Classification and Regression Trees II. Distributions and Latent Structure Density Estimation Relative Distributions and Smooth Tests of Goodness-of-Fit Principal Components Analysis Factor Models Nonlinear Dimensionality Reduction Mixture Models Graphical Models III. Dependent Data Time Series Spatial and Network Data Simulation-Based Inference IV. Causal Inference Graphical Causal Models Identifying Causal Effects Causal Inference from Experiments Estimating Causal Effects Discovering Causal Structure Appendices Data-Analysis Problem Sets Reminders from Linear Algebra Big O and Little o Notation Taylor Expansions Multivariate Distributions Algebra with Expectations and Variances Propagation of Error, and Standard Errors for Derived Quantities Optimization chi-squared and the Likelihood Ratio Test Proof of the Gauss-Markov Theorem Rudimentary Graph Theory Information Theory Hypothesis Testing Writing R Functions Random Variable Generation
Free statistical textbooks Cosma Shalizi, CMUs ML guru, occasionally updates a draft of a stats book soon to be published by Cambridge Press titled Advanced Data Analysis from an Elementary Point of View. Can't recommend it hig
1,260
Free statistical textbooks
Some free Stats textbooks are also available here.
Free statistical textbooks
Some free Stats textbooks are also available here.
Free statistical textbooks Some free Stats textbooks are also available here.
Free statistical textbooks Some free Stats textbooks are also available here.
1,261
Free statistical textbooks
I know other authors have gone to some trouble to make their books available here on stack exchange ... The printed version of our 2002 edition was printed 3 times and sold out 3 times; Springer and Google recently started selling it (book only) as a PDF eBook (no software) on the Springer and Google sites for $79. We are delighted to be able to make the PDF eBook version (2002 edition) available for FREE to stackexchange users at: http://www.mathstatica.com/book/bookcontents.html This is a complete PDF version of the original 2002 printed edition. Although no software is included (neither Mathematica nor mathStatica), the methods, theorems, summary tables, examples, exercises, theorems etc are all useful and relevant ... even as a reference text for people who do not even have Mathematica. One can either download: the entire book as a single download file ... with live clickable Table of Contents etc, ... or chapter by chapter. iBooks installation To install as an iBook: Download the entire book as a single PDF file Then drag it into iBooks (under the section: PDF files). iPad installation To install on an iPad: First install it as an iBook (as above) Open iTunes; select your iPad; click on Books: select the book and sync it over to your iPad.
Free statistical textbooks
I know other authors have gone to some trouble to make their books available here on stack exchange ... The printed version of our 2002 edition was printed 3 times and sold out 3 times; Springer and
Free statistical textbooks I know other authors have gone to some trouble to make their books available here on stack exchange ... The printed version of our 2002 edition was printed 3 times and sold out 3 times; Springer and Google recently started selling it (book only) as a PDF eBook (no software) on the Springer and Google sites for $79. We are delighted to be able to make the PDF eBook version (2002 edition) available for FREE to stackexchange users at: http://www.mathstatica.com/book/bookcontents.html This is a complete PDF version of the original 2002 printed edition. Although no software is included (neither Mathematica nor mathStatica), the methods, theorems, summary tables, examples, exercises, theorems etc are all useful and relevant ... even as a reference text for people who do not even have Mathematica. One can either download: the entire book as a single download file ... with live clickable Table of Contents etc, ... or chapter by chapter. iBooks installation To install as an iBook: Download the entire book as a single PDF file Then drag it into iBooks (under the section: PDF files). iPad installation To install on an iPad: First install it as an iBook (as above) Open iTunes; select your iPad; click on Books: select the book and sync it over to your iPad.
Free statistical textbooks I know other authors have gone to some trouble to make their books available here on stack exchange ... The printed version of our 2002 edition was printed 3 times and sold out 3 times; Springer and
1,262
Free statistical textbooks
It's nice to see academics freely distribute their works. Here is a trove of free ML / Stats books in PDF: Machine Learning Elements of Statistical Learning Hastie, Tibshirani, Friedman All of Statistics Larry Wasserman Machine Learning and Bayesian Reasoning David Barber Gaussian Processes for Machine Learning Rasmussen and Williams Information Theory, Inference, and Learning Algorithms David MacKay Introduction to Machine Learning Smola and Vishwanathan A Probabilistic Theory of Pattern Recognition Devroye, Gyorfi, Lugosi Introduction to Information Retrieval Manning, Raghavan, Schütze Forecasting: principles and practice Hyndman, Athanasopoulos (Online Book) Probability / Stats Introduction to statistical thought Lavine Basic Probability Theory Robert Ash Introduction to probability Grinstead and Snell Principles of Uncertainty Kadane Linear Algebra / Optimization Linear Algebra, Theory, and Applications Kuttler Linear Algebra Done Wrong Treil Applied Numerical Computing Vandenberghe Applied Numerical Linear Algebra James Demmel Convex Optimization Boyd and Vandenberghe Genetic Algorithm A Field Guide to Genetic Programming Poli, Langdon, McPhee Evolved To Win Sipper Essentials of Metaheuristics Luke
Free statistical textbooks
It's nice to see academics freely distribute their works. Here is a trove of free ML / Stats books in PDF: Machine Learning Elements of Statistical Learning Hastie, Tibshirani, Friedman All of Statisti
Free statistical textbooks It's nice to see academics freely distribute their works. Here is a trove of free ML / Stats books in PDF: Machine Learning Elements of Statistical Learning Hastie, Tibshirani, Friedman All of Statistics Larry Wasserman Machine Learning and Bayesian Reasoning David Barber Gaussian Processes for Machine Learning Rasmussen and Williams Information Theory, Inference, and Learning Algorithms David MacKay Introduction to Machine Learning Smola and Vishwanathan A Probabilistic Theory of Pattern Recognition Devroye, Gyorfi, Lugosi Introduction to Information Retrieval Manning, Raghavan, Schütze Forecasting: principles and practice Hyndman, Athanasopoulos (Online Book) Probability / Stats Introduction to statistical thought Lavine Basic Probability Theory Robert Ash Introduction to probability Grinstead and Snell Principles of Uncertainty Kadane Linear Algebra / Optimization Linear Algebra, Theory, and Applications Kuttler Linear Algebra Done Wrong Treil Applied Numerical Computing Vandenberghe Applied Numerical Linear Algebra James Demmel Convex Optimization Boyd and Vandenberghe Genetic Algorithm A Field Guide to Genetic Programming Poli, Langdon, McPhee Evolved To Win Sipper Essentials of Metaheuristics Luke
Free statistical textbooks It's nice to see academics freely distribute their works. Here is a trove of free ML / Stats books in PDF: Machine Learning Elements of Statistical Learning Hastie, Tibshirani, Friedman All of Statisti
1,263
Free statistical textbooks
Statsoft's Electronic Statistics Handbook ('The only Internet Resource about Statistics Recommended by Encyclopedia Britannica') is worth checking out.
Free statistical textbooks
Statsoft's Electronic Statistics Handbook ('The only Internet Resource about Statistics Recommended by Encyclopedia Britannica') is worth checking out.
Free statistical textbooks Statsoft's Electronic Statistics Handbook ('The only Internet Resource about Statistics Recommended by Encyclopedia Britannica') is worth checking out.
Free statistical textbooks Statsoft's Electronic Statistics Handbook ('The only Internet Resource about Statistics Recommended by Encyclopedia Britannica') is worth checking out.
1,264
Free statistical textbooks
Not properly an entire textbook, but the part IV of Mathematics for Computer Science is about probability and random variables.
Free statistical textbooks
Not properly an entire textbook, but the part IV of Mathematics for Computer Science is about probability and random variables.
Free statistical textbooks Not properly an entire textbook, but the part IV of Mathematics for Computer Science is about probability and random variables.
Free statistical textbooks Not properly an entire textbook, but the part IV of Mathematics for Computer Science is about probability and random variables.
1,265
Free statistical textbooks
Some downloadable notes on probability, which seems interesting: http://www.math.harvard.edu/~knill/teaching/math19b_2011/handouts/chapters1-19.pdf Applied probability: http://www.acsu.buffalo.edu/~bialas/EAS305/docs/EAS305%20NOTES%202005.pdf http://www.ma.huji.ac.il/~razk/Teaching/LectureNotes/LectureNotesProbability.pdf
Free statistical textbooks
Some downloadable notes on probability, which seems interesting: http://www.math.harvard.edu/~knill/teaching/math19b_2011/handouts/chapters1-19.pdf Applied probability: http://www.acsu.buffalo.edu/~bi
Free statistical textbooks Some downloadable notes on probability, which seems interesting: http://www.math.harvard.edu/~knill/teaching/math19b_2011/handouts/chapters1-19.pdf Applied probability: http://www.acsu.buffalo.edu/~bialas/EAS305/docs/EAS305%20NOTES%202005.pdf http://www.ma.huji.ac.il/~razk/Teaching/LectureNotes/LectureNotesProbability.pdf
Free statistical textbooks Some downloadable notes on probability, which seems interesting: http://www.math.harvard.edu/~knill/teaching/math19b_2011/handouts/chapters1-19.pdf Applied probability: http://www.acsu.buffalo.edu/~bi
1,266
Free statistical textbooks
A write up of probability tutorials and related puzzles along with R code for learning. Hope it helps
Free statistical textbooks
A write up of probability tutorials and related puzzles along with R code for learning. Hope it helps
Free statistical textbooks A write up of probability tutorials and related puzzles along with R code for learning. Hope it helps
Free statistical textbooks A write up of probability tutorials and related puzzles along with R code for learning. Hope it helps
1,267
Free statistical textbooks
http://www.probabilitycourse.com/ is a website hosting free online-based Probability and Statistics textbook. It also has extra features such as graphing tools and lecture videos
Free statistical textbooks
http://www.probabilitycourse.com/ is a website hosting free online-based Probability and Statistics textbook. It also has extra features such as graphing tools and lecture videos
Free statistical textbooks http://www.probabilitycourse.com/ is a website hosting free online-based Probability and Statistics textbook. It also has extra features such as graphing tools and lecture videos
Free statistical textbooks http://www.probabilitycourse.com/ is a website hosting free online-based Probability and Statistics textbook. It also has extra features such as graphing tools and lecture videos
1,268
Free statistical textbooks
Here is also a great free book on multivariate statistics by Marden, primarily concerned with the normal linear model linked on this page: https://people.stat.sc.edu/hansont/stat730/Marden2013.pdf
Free statistical textbooks
Here is also a great free book on multivariate statistics by Marden, primarily concerned with the normal linear model linked on this page: https://people.stat.sc.edu/hansont/stat730/Marden2013.pdf
Free statistical textbooks Here is also a great free book on multivariate statistics by Marden, primarily concerned with the normal linear model linked on this page: https://people.stat.sc.edu/hansont/stat730/Marden2013.pdf
Free statistical textbooks Here is also a great free book on multivariate statistics by Marden, primarily concerned with the normal linear model linked on this page: https://people.stat.sc.edu/hansont/stat730/Marden2013.pdf
1,269
Free statistical textbooks
Gelman et al. "Bayesian Data Analysis" (3rd edition).
Free statistical textbooks
Gelman et al. "Bayesian Data Analysis" (3rd edition).
Free statistical textbooks Gelman et al. "Bayesian Data Analysis" (3rd edition).
Free statistical textbooks Gelman et al. "Bayesian Data Analysis" (3rd edition).
1,270
Free statistical textbooks
It's not a textbook but Bayesian Methods in the Search for the MH370 is a great introduction to particle filters.
Free statistical textbooks
It's not a textbook but Bayesian Methods in the Search for the MH370 is a great introduction to particle filters.
Free statistical textbooks It's not a textbook but Bayesian Methods in the Search for the MH370 is a great introduction to particle filters.
Free statistical textbooks It's not a textbook but Bayesian Methods in the Search for the MH370 is a great introduction to particle filters.
1,271
Free statistical textbooks
A digital textbook on probability and statistics by M. Taboga can be found at https://www.statlect.com The level is intermediate. It has hundreds of solved exercises and examples, as well as step-by-step proofs of all the results presented.
Free statistical textbooks
A digital textbook on probability and statistics by M. Taboga can be found at https://www.statlect.com The level is intermediate. It has hundreds of solved exercises and examples, as well as step-by-s
Free statistical textbooks A digital textbook on probability and statistics by M. Taboga can be found at https://www.statlect.com The level is intermediate. It has hundreds of solved exercises and examples, as well as step-by-step proofs of all the results presented.
Free statistical textbooks A digital textbook on probability and statistics by M. Taboga can be found at https://www.statlect.com The level is intermediate. It has hundreds of solved exercises and examples, as well as step-by-s
1,272
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
Geometrically, matrix $\bf A'A$ is called the matrix of scalar products (= dot products, = inner products). Algebraically, it is called the sum-of-squares-and-cross-products (SSCP) matrix. Its $i$-th diagonal element is equal to $\sum a_{(i)}^2$, where $a_{(i)}$ denotes values in the $i$-th column of $\bf A$ and $\sum$ is the sum across rows. The $ij$-th off-diagonal element therein is $\sum a_{(i)}a_{(j)}$. There are a number of important association coefficients whose square matrices are called angular similarities or SSCP-type similarities: Dividing the SSCP matrix by $n$, the sample size or number of rows of $\bf A$, you get the MSCP (mean-square-and-cross-product) matrix. The pairwise formula of this association measure is hence $\frac{\sum xy}{n}$ (with vectors $x$ and $y$ being a pair of columns from $\bf A$). If you center the columns (variables) of $\bf A$, then $\bf A'A$ is the scatter (or co-scatter, if to be rigorous) matrix and $\mathbf {A'A}/(n-1)$ is the covariance matrix. The pairwise formula of covariance is $\frac{\sum c_xc_y}{n-1}$ with $c_x$ and $c_y$ denoting centered columns. If you z-standardize the columns of $\bf A$ (subtract the column mean and divide by the standard deviation), then $\mathbf {A'A}/(n-1)$ is the Pearson correlation matrix: correlation is covariance for standardized variables. The pairwise formula of correlation is $\frac{\sum z_xz_y}{n-1}$ with $z_x$ and $z_y$ denoting standardized columns. The correlation is also called the coefficient of linearity. If you unit-scale the columns of $\bf A$ (bring their SS, sum-of-squares, to 1), then $\bf A'A$ is the cosine similarity matrix. The equivalent pairwise formula thus appears to be $\sum u_xu_y = \frac{\sum{xy}}{\sqrt{\sum x^2}\sqrt{\sum y^2}}$ with $u_x$ and $u_y$ denoting L2-normalized columns. Cosine similarity is also called the coefficient of proportionality. If you center and then unit-scale the columns of $\bf A$, then $\bf A'A$ is again the Pearson correlation matrix, because correlation is cosine for centered variables$^{1,2}$: $\sum cu_xcu_y = \frac{\sum{c_xc_y}}{\sqrt{\sum c_x^2}\sqrt{\sum c_y^2}}$ Alongside these four principal association measures, let us also mention some others, also based on $\bf A'A$, to top it off. They can be seen as alternatives to cosine similarity because they adopt a different normalization, i.e. a different denominator in the formula: The coefficient of identity [Zegers & ten Berge, 1985] has its denominator in the form of an arithmetic mean rather than a geometric mean: $\frac{\sum{xy}}{(\sum x^2+\sum y^2)/2}$. It can be 1 if and only if the compared columns of $\bf A$ are identical. Another usable coefficient like it is called the similarity ratio: $\frac{\sum{xy}}{\sum x^2 + \sum y^2 -\sum {xy}} = \frac{\sum{xy}}{\sum {xy} + \sum {(x-y)^2}}$. Finally, if values in $\bf A$ are nonnegative and their sum within the columns is 1 (e.g. they are proportions), then $\bf \sqrt {A}'\sqrt A$ is the matrix of fidelity, or the Bhattacharyya coefficient. $^1$ One way to compute the correlation or covariance matrix, used by many statistical packages, bypasses centering the data and departs straight from the SSCP matrix $\bf A'A$ this way. Let $\bf s$ be the row vector of column sums of the data $\bf A$ while $n$ is the number of rows in the data. Then (1) compute the scatter matrix as $\bf C = A'A-s's/ \it n$ [thence, $\mathbf C/(n-1)$ will be the covariance matrix]; (2) the diagonal of $\bf C$ is the sums of squared deviations, row vector $\bf d$; (3) compute the correlation matrix $\bf R=C/\sqrt{d'd}$.
$^2$ An acute but statistically novice reader might find it difficult to reconcile the two definitions of correlation - as "covariance" (which includes averaging by sample size, the division by df = "n-1") and as "cosine" (which implies no such averaging). But in fact no real averaging takes place in the first formula of correlation. The thing is that the st. deviation, by which z-standardization was achieved, had been in turn computed with the division by that same df; and so the denominator "n-1" in the formula of correlation-as-covariance entirely cancels if you unwrap the formula: the formula turns into the formula of cosine. To compute an empirical correlation value you really need not know $n$ (except when computing the mean, to center).
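A quick numerical check of several of these identities (my own NumPy sketch; random data, with the column-wise operations described above and the footnote-1 shortcut that avoids explicit centering):

```python
# Verify: scatter/(n-1) = covariance; A'A of z-standardized columns /(n-1) =
# correlation; A'A of unit-scaled columns = cosine similarity; and the
# footnote-1 identity A'A - s's/n = scatter matrix.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))            # 50 rows (observations), 3 columns (variables)
n = A.shape[0]

C = A - A.mean(axis=0)                  # centered columns
Z = C / A.std(axis=0, ddof=1)           # z-standardized columns (sd with df = n-1)
U = A / np.sqrt((A**2).sum(axis=0))     # unit-scaled columns (sum of squares = 1)
s = A.sum(axis=0)                       # row vector of column sums

assert np.allclose(C.T @ C / (n - 1), np.cov(A, rowvar=False))       # covariance
assert np.allclose(Z.T @ Z / (n - 1), np.corrcoef(A, rowvar=False))  # correlation
assert np.allclose(A.T @ A - np.outer(s, s) / n, C.T @ C)            # footnote 1
print(np.round(U.T @ U, 3))             # cosine similarity matrix
```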
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
Geometrically, matrix $\bf A'A$ is called matrix of scalar products (= dot products, = inner products). Algebraically, it is called sum-of-squares-and-cross-products matrix (SSCP). Its $i$-th diagonal
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? Geometrically, matrix $\bf A'A$ is called the matrix of scalar products (= dot products, = inner products). Algebraically, it is called the sum-of-squares-and-cross-products (SSCP) matrix. Its $i$-th diagonal element is equal to $\sum a_{(i)}^2$, where $a_{(i)}$ denotes values in the $i$-th column of $\bf A$ and $\sum$ is the sum across rows. The $ij$-th off-diagonal element therein is $\sum a_{(i)}a_{(j)}$. There are a number of important association coefficients whose square matrices are called angular similarities or SSCP-type similarities: Dividing the SSCP matrix by $n$, the sample size or number of rows of $\bf A$, you get the MSCP (mean-square-and-cross-product) matrix. The pairwise formula of this association measure is hence $\frac{\sum xy}{n}$ (with vectors $x$ and $y$ being a pair of columns from $\bf A$). If you center the columns (variables) of $\bf A$, then $\bf A'A$ is the scatter (or co-scatter, if to be rigorous) matrix and $\mathbf {A'A}/(n-1)$ is the covariance matrix. The pairwise formula of covariance is $\frac{\sum c_xc_y}{n-1}$ with $c_x$ and $c_y$ denoting centered columns. If you z-standardize the columns of $\bf A$ (subtract the column mean and divide by the standard deviation), then $\mathbf {A'A}/(n-1)$ is the Pearson correlation matrix: correlation is covariance for standardized variables. The pairwise formula of correlation is $\frac{\sum z_xz_y}{n-1}$ with $z_x$ and $z_y$ denoting standardized columns. The correlation is also called the coefficient of linearity. If you unit-scale the columns of $\bf A$ (bring their SS, sum-of-squares, to 1), then $\bf A'A$ is the cosine similarity matrix. The equivalent pairwise formula thus appears to be $\sum u_xu_y = \frac{\sum{xy}}{\sqrt{\sum x^2}\sqrt{\sum y^2}}$ with $u_x$ and $u_y$ denoting L2-normalized columns. Cosine similarity is also called the coefficient of proportionality. If you center and then unit-scale the columns of $\bf A$, then $\bf A'A$ is again the Pearson correlation matrix, because correlation is cosine for centered variables$^{1,2}$: $\sum cu_xcu_y = \frac{\sum{c_xc_y}}{\sqrt{\sum c_x^2}\sqrt{\sum c_y^2}}$ Alongside these four principal association measures, let us also mention some others, also based on $\bf A'A$, to top it off. They can be seen as alternatives to cosine similarity because they adopt a different normalization, i.e. a different denominator in the formula: The coefficient of identity [Zegers & ten Berge, 1985] has its denominator in the form of an arithmetic mean rather than a geometric mean: $\frac{\sum{xy}}{(\sum x^2+\sum y^2)/2}$. It can be 1 if and only if the compared columns of $\bf A$ are identical. Another usable coefficient like it is called the similarity ratio: $\frac{\sum{xy}}{\sum x^2 + \sum y^2 -\sum {xy}} = \frac{\sum{xy}}{\sum {xy} + \sum {(x-y)^2}}$. Finally, if values in $\bf A$ are nonnegative and their sum within the columns is 1 (e.g. they are proportions), then $\bf \sqrt {A}'\sqrt A$ is the matrix of fidelity, or the Bhattacharyya coefficient. $^1$ One way to compute the correlation or covariance matrix, used by many statistical packages, bypasses centering the data and departs straight from the SSCP matrix $\bf A'A$ this way. Let $\bf s$ be the row vector of column sums of the data $\bf A$ while $n$ is the number of rows in the data. Then (1) compute the scatter matrix as $\bf C = A'A-s's/ \it n$ [thence, $\mathbf C/(n-1)$ will be the covariance matrix]; (2) the diagonal of $\bf C$ is the sums of squared deviations, row vector $\bf d$; (3) compute the correlation matrix $\bf R=C/\sqrt{d'd}$. $^2$ An acute but statistically novice reader might find it difficult to reconcile the two definitions of correlation - as "covariance" (which includes averaging by sample size, the division by df = "n-1") and as "cosine" (which implies no such averaging). But in fact no real averaging takes place in the first formula of correlation. The thing is that the st. deviation, by which z-standardization was achieved, had been in turn computed with the division by that same df; and so the denominator "n-1" in the formula of correlation-as-covariance entirely cancels if you unwrap the formula: the formula turns into the formula of cosine. To compute an empirical correlation value you really need not know $n$ (except when computing the mean, to center).
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? Geometrically, matrix $\bf A'A$ is called matrix of scalar products (= dot products, = inner products). Algebraically, it is called sum-of-squares-and-cross-products matrix (SSCP). Its $i$-th diagonal
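A minimal NumPy sketch of the relationships described in the answer above. The data matrix A is made up for illustration; only standard NumPy calls are used, and the formulas are exactly the SSCP / MSCP / covariance / correlation / cosine recipes given above.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 3))          # hypothetical data: 50 rows (cases), 3 columns (variables)
    n = A.shape[0]

    sscp = A.T @ A                        # sum-of-squares-and-cross-products matrix
    mscp = sscp / n                       # mean-square-and-cross-product matrix

    C = A - A.mean(axis=0)                # centered columns
    cov = (C.T @ C) / (n - 1)             # covariance matrix
    assert np.allclose(cov, np.cov(A, rowvar=False))

    Z = C / A.std(axis=0, ddof=1)         # z-standardized columns
    corr = (Z.T @ Z) / (n - 1)            # Pearson correlation matrix
    assert np.allclose(corr, np.corrcoef(A, rowvar=False))

    U = A / np.sqrt((A**2).sum(axis=0))   # unit-scaled columns (sum of squares = 1)
    cosine = U.T @ U                      # cosine similarity matrix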
1,273
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
The matrix $A^TA$ contains all the inner products of all columns in $A$. The diagonal thus contains the squared norms of columns. If you think about geometry and orthogonal projections onto the column space spanned by the columns in $A$ you may recall that norms and inner products of the vectors spanning this space play a central role in the computation of the projection. Least squares regression as well as principal components can be understood in terms of orthogonal projections. Also note that if the columns of $A$ are orthonormal, thus forming an orthonormal basis for the column space, then $A^TA = I$ $-$ the identity matrix.
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
The matrix $A^TA$ contains all the inner products of all columns in $A$. The diagonal thus contains the squared norms of columns. If you think about geometry and orthogonal projections onto the column
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? The matrix $A^TA$ contains all the inner products of all columns in $A$. The diagonal thus contains the squared norms of columns. If you think about geometry and orthogonal projections onto the column space spanned by the columns in $A$ you may recall that norms and inner products of the vectors spanning this space play a central role in the computation of the projection. Least squares regression as well as principal components can be understood in terms of orthogonal projections. Also note that if the columns of $A$ are orthonormal, thus forming an orthonormal basis for the column space, then $A^TA = I$ $-$ the identity matrix.
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? The matrix $A^TA$ contains all the inner products of all columns in $A$. The diagonal thus contains the squared norms of columns. If you think about geometry and orthogonal projections onto the column
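A small sketch of the point made in the answer above, using a made-up matrix; the orthonormal columns are obtained here via NumPy's QR factorization, which is an assumption of this illustration rather than part of the original answer.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(6, 3))             # hypothetical 6x3 data matrix

    G = A.T @ A                             # Gram matrix: all pairwise inner products of columns
    print(np.allclose(np.diag(G), [A[:, j] @ A[:, j] for j in range(3)]))   # True: diagonal = squared norms

    Q, _ = np.linalg.qr(A)                  # Q has orthonormal columns spanning Col(A)
    print(np.allclose(Q.T @ Q, np.eye(3)))  # True: A^T A = I when the columns are orthonormal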
1,274
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
@NRH gave a good technical answer. If you want something really basic, you can think of $A^TA$ as the matrix equivalent of $A^2$ for a scalar.
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
@NRH gave a good technical answer. If you want something really basic, you can think of $A^TA$ as the matrix equivalent of $A^2$ for a scalar.
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? @NRH gave a good technical answer. If you want something really basic, you can think of $A^TA$ as the matrix equivalent of $A^2$ for a scalar.
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? @NRH gave a good technical answer. If you want something really basic, you can think of $A^TA$ as the matrix equivalent of $A^2$ for a scalar.
1,275
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
Although it has already been discussed that $\textbf{A}^T\textbf{A}$ has the meaning of taking dot products, I would only add a graphical representation of this multiplication. Indeed, while rows of the matrix $\textbf{A}^T$ (and columns of the matrix $\textbf{A}$) represent variables, we treat each variable's measurements as a multidimensional vector. Multiplying the row $row_p$ of $\textbf{A}^T$ with the column $col_p$ of $\textbf{A}$ is equivalent to taking the dot product of two vectors: $dot(row_p, col_p)$ - the result being the entry at position $(p,p)$ inside the matrix $\textbf{A}^T \textbf{A}$. Similarly, multiplying the row $p$ of $\textbf{A}^T$ with the column $k$ of $\textbf{A}$ is equivalent to the dot product: $dot(row_p, col_k)$, with the result at position $(p,k)$. The entry $(p, k)$ of the resulting matrix $\textbf{A}^T\textbf{A}$ tells us how much the vector $row_p$ points in the direction of the vector $col_k$. If the dot product of two vectors $row_i$ and $col_j$ is nonzero, some information about the vector $row_i$ is carried by the vector $col_j$, and vice versa. This idea plays an important role in Principal Component Analysis, where we want to find a new representation of our initial data matrix $\textbf{A}$ such that no information about any column $i$ is carried by any other column $j \neq i$. Studying PCA more deeply, you will see that a "new version" of the covariance matrix is computed, and it becomes a diagonal matrix - which, as you can verify, means exactly what was expressed in the previous sentence.
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
Although it has been already discussed that $\textbf{A}^T\textbf{A}$ has the meaning of taking dot products, I would only add a graphical representation of this multiplication. Indeed, while rows of t
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? Although it has been already discussed that $\textbf{A}^T\textbf{A}$ has the meaning of taking dot products, I would only add a graphical representation of this multiplication. Indeed, while rows of the matrix $\textbf{A}^T$ (and columns of the matrix $\textbf{A}$) represent variables, we treat each variable measurements as a multidimensional vector. Multiplying the row $row_p$ of $\textbf{A}^T$ with the column $col_p$ of $\textbf{A}$ is equivalent to taking the dot product of two vectors: $dot(row_p, col_p)$ - the result being the entry at position $(p,p)$ inside the matrix $\textbf{A}^T \textbf{A}$. Similarly, multiplying the row $p$ of $\textbf{A}^T$ with the column $k$ of $\textbf{A}$ is equivalent to the dot product: $dot(row_p, col_k)$, with the result at position $(p,k)$. The entry $(p, k)$ of the resulting matrix $\textbf{A}^T\textbf{A}$ has the meaning of how much the vector $row_p$ is in the direction of the vector $col_k$. If the dot product of two vectors $row_i$ and $col_j$ is other than zero, some information about a vector $row_i$ is carried by a vector $col_j$, and vice versa. This idea plays an important role in Principal Component Analysis, where we want to find a new representation of our initial data matrix $\textbf{A}$ such that, there is no more information carried about any column $i$ in any other column $j \neq i$. Studying PCA deeper, you will see that a "new version" of the covariance matrix is computed and it becomes a diagonal matrix which I leave to you to realize that... indeed it means what I expressed in the previous sentence.
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? Although it has been already discussed that $\textbf{A}^T\textbf{A}$ has the meaning of taking dot products, I would only add a graphical representation of this multiplication. Indeed, while rows of t
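A short sketch of the claim at the end of the answer above, with made-up correlated data: after rotating the centered data onto the eigenvectors of its covariance matrix, the "new version" of the covariance matrix is (numerically) diagonal.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))   # hypothetical correlated data
    C = A - A.mean(axis=0)                                    # center the columns

    cov = C.T @ C / (len(C) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigen-decomposition of the covariance matrix

    scores = C @ eigvecs                       # data expressed in the principal-component basis
    new_cov = scores.T @ scores / (len(C) - 1)
    print(np.round(new_cov, 10))               # off-diagonal entries are ~0: no shared information left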
1,276
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
An important view of the geometry of $A'A$ is this (the viewpoint strongly stressed in Strang's book "Linear Algebra and Its Applications"): Suppose $A$ is an $m \times n$ matrix of rank $k$, representing a linear map $A: R^n \rightarrow R^m$. Let Col(A) and Row(A) be the column and row spaces of $A$. Then (a) As a real symmetric matrix, $(A'A): R^n \rightarrow R^n$ has a basis $\{e_1,\ldots, e_n\}$ of eigenvectors, of which $e_1,\ldots,e_k$ have non-zero eigenvalues $d_1,\ldots,d_k$. Thus: $(A'A)(x_1e_1 + \ldots + x_ne_n) = d_1x_1e_1 + \ldots + d_kx_ke_k$. (b) Range(A) = Col(A), by definition of Col(A). So $A|\text{Row}(A)$ maps Row(A) into Col(A). (c) Kernel(A) is the orthogonal complement of Row(A). This is because matrix multiplication is defined in terms of the dot products (row $i$)$\cdot$(col $j$). (So $Av = 0 \iff v \in \text{Kernel}(A) \iff v$ is in the orthogonal complement of Row(A).) (d) $A(R^n)=A(\text{Row}(A))$ and $A|\text{Row}(A):\text{Row}(A) \rightarrow \text{Col}(A)$ is an isomorphism. Reason: if $v = r+k$ (with $r \in \text{Row}(A)$, $k \in \text{Kernel}(A)$, from (c)) then $A(v) = A(r) + 0 = A(r)$, where $A(r) = 0 \iff r = 0$. [Incidentally this gives a proof that Row rank = Column rank!] (e) Applying (d) to $A'$: $A'|\text{Col}(A): \text{Col}(A)=\text{Row}(A') \rightarrow \text{Col}(A')=\text{Row}(A)$ is an isomorphism. (f) By (d) and (e): $A'A(R^n) = \text{Row}(A)$ and $A'A$ maps Row(A) isomorphically onto Row(A).
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
An important view of the geometry of $A'A$ is this (the viewpoint strongly stressed in Strang's book on "Linear Algebra and Its Applications"): Suppose A is an $m \times n$-matrix of rank k, represen
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? An important view of the geometry of $A'A$ is this (the viewpoint strongly stressed in Strang's book on "Linear Algebra and Its Applications"): Suppose A is an $m \times n$-matrix of rank k, representing a linear map $A: R^n \rightarrow R^m$. Let Col(A) and Row(A) be the column and row spaces of $A$. Then (a) As a real symmetric matrix, $(A'A): R^n \rightarrow R^n$ has a basis $\{e_1,..., e_n\}$ of eigenvectors with non-zero eigenvalues $d_1,\ldots,d_k$. Thus: $(A'A)(x_1e_1 + \ldots + x_ne_n) = d_1x_1e_1 + ... + d_kx_ke_k$. (b) Range(A) = Col(A), by definition of Col(A). So A|Row(A) maps Row(A) into Col(A). (c) Kernel(A) is the orthogonal complement of Row(A). This is because matrix multiplication is defined in terms of the dot products (row i)*(col j). (So $Av'= 0 \iff \text{v is in Kernel(A)} \iff v \text{is in orthogonal complement of Row(A)}$ (d) $A(R^n)=A(\text{Row}(A))$ and $A|\text{Row(A)}:\text{Row(A)} \rightarrow Col(A)$ is an isomorphism. Reason: If v = r+k (r \in Row(A), k \in Kernel(A),from (c)) then A(v) = A(r) + 0 = A(r) where A(r) = 0 <==> r = 0$. [Incidentally gives a proof that Row rank = Column rank!] (e) Applying (d), $A'|:Col(A)=\text{Row(A)} \rightarrow \text{Col(A')}=\text{Row(A)}$ is an isomorphism (f)By (d) and (e): $A'A(R^n) = \text{Row(A)}$ and A'A maps Row(A) isomorphically onto Row(A).
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? An important view of the geometry of $A'A$ is this (the viewpoint strongly stressed in Strang's book on "Linear Algebra and Its Applications"): Suppose A is an $m \times n$-matrix of rank k, represen
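A numerical sketch of two of the facts above, using a made-up low-rank matrix: Kernel(A) = Kernel(A'A), and hence rank(A'A) = rank(A). The SVD is used here only as a convenient way to produce a vector orthogonal to Row(A).

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(5, 2)) @ rng.normal(size=(2, 4))   # hypothetical 5x4 matrix of rank 2

    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T @ A))   # 2 2

    # a vector in the kernel of A is also in the kernel of A'A (and vice versa)
    _, _, Vt = np.linalg.svd(A)
    v = Vt[-1]                               # direction orthogonal to Row(A), i.e. in Kernel(A)
    print(np.allclose(A @ v, 0), np.allclose(A.T @ A @ v, 0))         # True True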
1,277
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
There are levels of intuition. For those familiar with matrix notation in statistics, the intuition is to think of it as the square of the random variable: $x\to E[x^2]$ vs $A\to A^TA$. In matrix notation, a sample of observations $x_i$ of the random variable $x$ (or a population) is represented by a column vector: $$a=\begin{bmatrix} x_1 \\ x_2 \\ \dots \\ x_n \end{bmatrix}$$ So, if you want to get the sample mean of the square of the variable $x$, you simply take a dot product $$\bar{x^2}=\frac{a\cdot a} n,$$ which, in matrix notation and up to the factor $1/n$, is $A^TA$. Notice that if the sample mean of the variable is ZERO, then the variance is equal to the mean of the square: $\sigma^2=E[x^2]$, which is analogous to $A^TA$. This is the reason why in PCA you need the zero mean, and why $A^TA$ shows up; after all, PCA decomposes the covariance matrix of the data set.
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$?
There are levels of intuition. For those familiar with matrix notation instatistics the intuition is to think of it as a square of the random variable: $x\to E[x^2]$ vs $A\to A^TA$ In matrix notation
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? There are levels of intuition. For those familiar with matrix notation instatistics the intuition is to think of it as a square of the random variable: $x\to E[x^2]$ vs $A\to A^TA$ In matrix notation a sample of the random variable $x$ observations $x_i$ or a population are represented by a column vector: $$a=\begin{bmatrix} x_1 \\ x_2 \\ \dots \\ x_n \end{bmatrix}$$ So, if you want to get a sample mean of the square of the variable $x$, you simply get a dot product $$\bar{x^2}=\frac{a\cdot a} n$$, which is the same in matrix notation as $A^TA$. Notice, that if the sample mean of the variable is ZERO, then the variance is equal to the mean of the square: $\sigma^2=E[x^2]$ which is analogous to $A^TA$. This is the reason why in PCA you need the zero mean, and why $A^TA$ shows up, after all PCA is to decompose the variance matrix of the data set.
Is there an intuitive interpretation of $A^TA$ for a data matrix $A$? There are levels of intuition. For those familiar with matrix notation instatistics the intuition is to think of it as a square of the random variable: $x\to E[x^2]$ vs $A\to A^TA$ In matrix notation
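A tiny sketch of the analogy in the answer above, using a made-up sample stored as a column vector.

    import numpy as np

    x = np.array([2.0, -1.0, 3.0, 0.5])       # hypothetical sample
    a = x.reshape(-1, 1)                      # as a column vector
    n = len(x)

    mean_square = (a.T @ a / n).item()        # a'a / n = mean of x^2
    print(np.isclose(mean_square, np.mean(x**2)))        # True

    c = a - a.mean()                          # zero-mean (centered) version
    print(np.isclose((c.T @ c / n).item(), np.var(x)))   # True: variance = mean square once centered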
1,278
Is it necessary to scale the target value in addition to scaling features for regression analysis?
Let's first analyse why feature scaling is performed. Feature scaling improves the convergence of steepest descent algorithms, which do not possess the property of scale invariance. In stochastic gradient descent training examples inform the weight updates iteratively like so, $$w_{t+1} = w_t - \gamma\nabla_w \ell(f_w(x),y)$$ Where $w$ are the weights, $\gamma$ is a stepsize, $\nabla_w$ is the gradient wrt weights, $\ell$ is a loss function, $f_w$ is the function parameterized by $w$, $x$ is a training example, and $y$ is the response/label. Compare the following convex functions, representing proper scaling and improper scaling. A step through one weight update of size $\gamma$ will yield much better reduction in the error in the properly scaled case than the improperly scaled case. Shown below is the direction of $\nabla_w \ell(f_w(x),y)$ of length $\gamma$. Normalizing the output will not affect shape of $f$, so it's generally not necessary. The only situation I can imagine scaling the outputs has an impact, is if your response variable is very large and/or you're using f32 variables (which is common with GPU linear algebra). In this case it is possible to get a floating point overflow of an element of the weights. The symptom is either an Inf value or it will wrap-around to the other extreme representation.
Is it necessary to scale the target value in addition to scaling features for regression analysis?
Let's first analyse why feature scaling is performed. Feature scaling improves the convergence of steepest descent algorithms, which do not possess the property of scale invariance. In stochastic grad
Is it necessary to scale the target value in addition to scaling features for regression analysis? Let's first analyse why feature scaling is performed. Feature scaling improves the convergence of steepest descent algorithms, which do not possess the property of scale invariance. In stochastic gradient descent training examples inform the weight updates iteratively like so, $$w_{t+1} = w_t - \gamma\nabla_w \ell(f_w(x),y)$$ Where $w$ are the weights, $\gamma$ is a stepsize, $\nabla_w$ is the gradient wrt weights, $\ell$ is a loss function, $f_w$ is the function parameterized by $w$, $x$ is a training example, and $y$ is the response/label. Compare the following convex functions, representing proper scaling and improper scaling. A step through one weight update of size $\gamma$ will yield much better reduction in the error in the properly scaled case than the improperly scaled case. Shown below is the direction of $\nabla_w \ell(f_w(x),y)$ of length $\gamma$. Normalizing the output will not affect shape of $f$, so it's generally not necessary. The only situation I can imagine scaling the outputs has an impact, is if your response variable is very large and/or you're using f32 variables (which is common with GPU linear algebra). In this case it is possible to get a floating point overflow of an element of the weights. The symptom is either an Inf value or it will wrap-around to the other extreme representation.
Is it necessary to scale the target value in addition to scaling features for regression analysis? Let's first analyse why feature scaling is performed. Feature scaling improves the convergence of steepest descent algorithms, which do not possess the property of scale invariance. In stochastic grad
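A rough sketch of the effect described in the answer above, using made-up data and plain batch gradient descent on squared error; it is not the exact experiment from the answer, just an illustration of why a badly scaled feature forces a tiny step size.

    import numpy as np

    rng = np.random.default_rng(4)
    x1 = rng.uniform(0, 1, 200)
    x2 = rng.uniform(0, 1000, 200)                 # second feature on a much larger scale
    X_raw = np.column_stack([x1, x2])
    y = 3 * x1 + 0.01 * x2 + rng.normal(0, 0.1, 200)

    def add_bias(X):
        return np.column_stack([np.ones(len(X)), X])

    def gd_mse(X, y, lr, steps=500):
        # plain batch gradient descent on mean squared error
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return np.mean((X @ w - y) ** 2)

    X_std = (X_raw - X_raw.mean(axis=0)) / X_raw.std(axis=0)
    print(gd_mse(add_bias(X_raw), y, lr=1e-6))   # raw scales force a tiny step; still far from converged
    print(gd_mse(add_bias(X_std), y, lr=1e-1))   # scaled features tolerate a big step and converge fast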
1,279
Is it necessary to scale the target value in addition to scaling features for regression analysis?
Yes, you do need to scale the target variable. I will quote this reference: A target variable with a large spread of values, in turn, may result in large error gradient values causing weight values to change dramatically, making the learning process unstable. In the reference, there's also a demonstration in code where the model weights exploded during training given the very large errors and, in turn, the error gradients calculated for the weight updates also exploded. In short, if you don't scale the data and you have very large values, make sure to use very small learning rate values. This was mentioned by @drSpacy as well.
Is it necessary to scale the target value in addition to scaling features for regression analysis?
Yes, you do need to scale the target variable. I will quote this reference: A target variable with a large spread of values, in turn, may result in large error gradient values causing weight values t
Is it necessary to scale the target value in addition to scaling features for regression analysis? Yes, you do need to scale the target variable. I will quote this reference: A target variable with a large spread of values, in turn, may result in large error gradient values causing weight values to change dramatically, making the learning process unstable. In the reference, there's also a demonstration on code where the model weights exploded during training given the very large errors and, in turn, error gradients calculated for weight updates also exploded. In short, if you don't scale the data and you have very large values, make sure to use very small learning rate values. This was mentioned by @drSpacy as well.
Is it necessary to scale the target value in addition to scaling features for regression analysis? Yes, you do need to scale the target variable. I will quote this reference: A target variable with a large spread of values, in turn, may result in large error gradient values causing weight values t
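A brief sketch (toy numbers) of the mechanism quoted above: with a target in the millions the error gradient is itself enormous, so either the target is scaled or the learning rate must be made very small.

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.uniform(0, 1, 100)
    y = 2_000_000 * x + rng.normal(0, 1000, 100)   # hypothetical target in the millions

    w = 0.0
    grad = 2 * np.mean((w * x - y) * x)            # gradient of the MSE w.r.t. w at w = 0
    print(grad)                                    # on the order of -1e6: an ordinary step overshoots wildly

    y_scaled = (y - y.mean()) / y.std()            # standardized target
    grad_scaled = 2 * np.mean((w * x - y_scaled) * x)
    print(grad_scaled)                             # order of magnitude ~1: ordinary learning rates work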
1,280
Is it necessary to scale the target value in addition to scaling features for regression analysis?
Generally, it is not necessary. Scaling inputs helps to avoid the situation where one or several features dominate the others in magnitude; as a result, the model hardly picks up the contribution of the smaller-scale variables, even if they are strong. But if you scale the target, your mean squared error (MSE) is automatically scaled. Additionally, you can look at the mean absolute scaled error (MASE): MASE > 1 automatically means that you are doing worse than a constant (naive) prediction.
Is it necessary to scale the target value in addition to scaling features for regression analysis?
Generally, It is not necessary. Scaling inputs helps to avoid the situation, when one or several features dominate others in magnitude, as a result, the model hardly picks up the contribution of the s
Is it necessary to scale the target value in addition to scaling features for regression analysis? Generally, It is not necessary. Scaling inputs helps to avoid the situation, when one or several features dominate others in magnitude, as a result, the model hardly picks up the contribution of the smaller scale variables, even if they are strong. But if you scale the target, your mean squared error (MSE) is automatically scaled. Additionally, you need to look at the mean absolute scaled error (MASE). MASE>1 automatically means that you are doing worse than a constant (naive) prediction.
Is it necessary to scale the target value in addition to scaling features for regression analysis? Generally, It is not necessary. Scaling inputs helps to avoid the situation, when one or several features dominate others in magnitude, as a result, the model hardly picks up the contribution of the s
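A small sketch of how MASE can be computed. Here the naive benchmark is taken to be the in-sample mean of the training target, which is one common convention for non-time-series regression; the classical MASE for time series scales by the in-sample one-step naive forecast instead.

    import numpy as np

    def mase(y_true, y_pred, y_train):
        # scale the MAE by the in-sample MAE of a naive "predict the training mean" benchmark
        naive_mae = np.mean(np.abs(y_train - y_train.mean()))
        return np.mean(np.abs(y_true - y_pred)) / naive_mae

    y_train = np.array([10., 12., 9., 11., 13.])
    y_true  = np.array([10.5, 12.5, 9.5])
    y_pred  = np.array([10.0, 12.0, 10.0])
    print(mase(y_true, y_pred, y_train))   # < 1 means better than the naive constant prediction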
1,281
Is it necessary to scale the target value in addition to scaling features for regression analysis?
No, linear transformations of the response are never necessary. They may, however, be helpful for interpretation of your model. For example, if your response is given in meters but is typically very small, it may be helpful to rescale to, e.g., millimeters. Note also that centering and/or scaling the inputs can be useful for the same reason. For instance, you can roughly interpret a coefficient as the effect on the response per unit change in the predictor when all other predictors are set to 0. But 0 often won't be a valid or interesting value for those variables. Centering the inputs lets you interpret the coefficient as the effect per unit change when the other predictors assume their average values. Other transformations (e.g. log or square root) may be helpful if the response is not linear in the predictors on the original scale. If this is the case, you can read about generalized linear models to see if they're suitable for you.
Is it necessary to scale the target value in addition to scaling features for regression analysis?
No, linear transformations of the response are never necessary. They may, however, be helpful to aid in interpretation of your model. For example, if your response is given in meters but is typically
Is it necessary to scale the target value in addition to scaling features for regression analysis? No, linear transformations of the response are never necessary. They may, however, be helpful to aid in interpretation of your model. For example, if your response is given in meters but is typically very small, it may be helpful to rescale to i.e. millimeters. Note also that centering and/or scaling the inputs can be useful for the same reason. For instance, you can roughly interpret a coefficient as the effect on the response per unit change in the predictor when all other predictors are set to 0. But 0 often won't be a valid or interesting value for those variables. Centering the inputs lets you interpret the coefficient as the effect per unit change when the other predictors assume their average values. Other transformations (i.e. log or square root) may be helpful if the response is not linear in the predictors on the original scale. If this is the case, you can read about generalized linear models to see if they're suitable for you.
Is it necessary to scale the target value in addition to scaling features for regression analysis? No, linear transformations of the response are never necessary. They may, however, be helpful to aid in interpretation of your model. For example, if your response is given in meters but is typically
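A quick sketch of the centering point above, with made-up data and plain least squares via NumPy: centering the inputs leaves the slopes unchanged, while the intercept becomes the fitted response at the predictors' average values.

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.normal(loc=[50, 200], scale=[5, 30], size=(100, 2))   # hypothetical predictors far from 0
    y = 3 + 0.5 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(0, 1, 100)

    def ols(X, y):
        Xd = np.column_stack([np.ones(len(X)), X])
        return np.linalg.lstsq(Xd, y, rcond=None)[0]   # [intercept, slope1, slope2]

    b_raw = ols(X, y)
    b_ctr = ols(X - X.mean(axis=0), y)

    print(np.allclose(b_raw[1:], b_ctr[1:]))   # True: slopes are unchanged by centering
    print(b_ctr[0], y.mean())                  # centered intercept = mean response (fit at average X)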
1,282
Is it necessary to scale the target value in addition to scaling features for regression analysis?
It may be useful in some cases. Even though it is not a common error function, when the L1 error is used to calculate the loss, rather slow learning may occur. Assume that we have a linear regression model and a constant learning rate $n$. Say, $ y = b_1x + b_0 $ and $ n = 0.1 $. Then $b_1$ and $b_0$ are updated as follows: $b_1^{new} = b_1^{old} - n \frac{\hat{y}-y} {|\hat{y}-y|} x$ and $b_0^{new} = b_0^{old} - n \frac{\hat{y}-y} {|\hat{y}-y|} $. The term $\frac{\hat{y}-y} {|\hat{y}-y|}$ evaluates to -1 or 1. Hence, $b_0$ will be incremented/decremented by $n$, and $b_1$ will be incremented/decremented by $n x$. Now, if the output value is in the millions or billions, obviously $b_0$ will require very many iterations to drive the cost toward zero. If the input is normalized (or standardized), $b_1$ will also change by values similar and close to those of $b_0$ (e.g. 0.1), and it will require too many iterations too. Actually, this is why a factor proportional to the actual loss (such as $\hat{y}-y$) is desirable in the derivative of the cost at a given point.
Is it necessary to scale the target value in addition to scaling features for regression analysis?
It may be useful for some cases. Even though not being a common error function, when L1 error used to calculate loss, a rather slow learning may occur. Assume that we have a linear regression model, a
Is it necessary to scale the target value in addition to scaling features for regression analysis? It may be useful for some cases. Even though not being a common error function, when L1 error used to calculate loss, a rather slow learning may occur. Assume that we have a linear regression model, and also have a constant learning rate $n$. Say, $ y = b_1x + b_0 $ $ n = 0.1 $ $b_1$ and $b_0$ are updated as follows: $b_1{new} = b_1{old} - n* \frac{\hat{y}-y} {|\hat{y}-y|} *x$ $b_0{new} = b_0{old} - n* \frac{\hat{y}-y} {|\hat{y}-y|} $ $\frac{\hat{y}-y} {|\hat{y}-y|}$ evaluates to -1 or 1. Hence, $b_0$ will be incremented/decremented by $n$, and $b_1$ will be incremented/decremented by $n*x$. Now, if the output value is in millions or billions, obviosuly $b_0$ will require so much iteration to approach the cost to zero. If the input is normalized (or standardized), $b_1$ will also be changed by similar and close values to $b_0$ (e.g. 0.1), and it will require too much iteration too. Actually this is why a factor of the actual loss is desired in the derivative of the cost at a certain point (such as $\hat{y}-y$).
Is it necessary to scale the target value in addition to scaling features for regression analysis? It may be useful for some cases. Even though not being a common error function, when L1 error used to calculate loss, a rather slow learning may occur. Assume that we have a linear regression model, a
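A tiny simulation of the sign-gradient updates described above, with hypothetical numbers: under an L1 loss the intercept moves by a fixed +/- n per step, so a target of 100,000 needs on the order of half a million updates of size 0.1.

    import numpy as np

    y_true = 100_000.0        # hypothetical single observation with a huge target value
    x = 1.0
    b1, b0, n = 0.0, 0.0, 0.1

    steps = 0
    while abs((b1 * x + b0) - y_true) > 1 and steps < 10**7:
        sign = np.sign((b1 * x + b0) - y_true)   # derivative of |error| w.r.t. the prediction
        b1 -= n * sign * x                       # each update changes the prediction by only 0.2
        b0 -= n * sign
        steps += 1

    print(steps)   # roughly 500,000 iterations just to close a gap of 10^5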
1,283
Is it necessary to scale the target value in addition to scaling features for regression analysis?
It does affect gradient descent in a bad way. Check the formula for gradient descent: $$ x_{n+1} = x_{n} - \gamma\nabla F(x_n). $$ Let's say that $x_2$ is a feature that is 1000 times greater than $x_1$. For $ F(\vec{x})=\|\vec{x}\|^2 $ we have $ \nabla F(\vec{x})=2\vec{x} $. The optimal way to reach (0,0), which is the global optimum, is to move across the diagonal, but if one of the features dominates the other in terms of scale that won't happen. To illustrate: if you do the transformation $\vec{z}= (x_1,1000\,x_1)$, assume a uniform learning rate $ \gamma $ for both coordinates and calculate the gradient, then $$ \vec{z}_{n+1} = \vec{z}_{n} - \gamma\nabla F(z_1,z_2). $$ The functional form is the same, but the learning rate for the second coordinate has to be adjusted to 1/1000 of that for the first coordinate to match it. If not, coordinate two will dominate and the gradient vector will point more towards that direction. As a result it biases the update to point along that direction only and makes the convergence slower.
Is it necessary to scale the target value in addition to scaling features for regression analysis?
It does affect gradient descent in a bad way. check the formula for gradient descent: $$ x_{n+1} = x_{n} - \gamma\Delta F(x_n) $$ lets say that $x_2$ is a feature that is 1000 times greater than $x_1
Is it necessary to scale the target value in addition to scaling features for regression analysis? It does affect gradient descent in a bad way. check the formula for gradient descent: $$ x_{n+1} = x_{n} - \gamma\Delta F(x_n) $$ lets say that $x_2$ is a feature that is 1000 times greater than $x_1$ for $ F(\vec{x})=\vec{x}^2 $ we have $ \Delta F(\vec{x})=2*\vec{x} $. The optimal way to reach (0,0) which is the global optimum is to move across the diagonal but if one of the features dominates the other in terms of scale that wont happen. To illustrate: If you do the transformation $\vec{z}= (x_1,1000*x_1)$, assume a uniform learning rate $ \gamma $ for both coordinates and calculate the gradient then $$ \vec{z_{n+1}} = \vec{z_{n}} - \gamma\Delta F(z_1,z_2) .$$ The functional form is the same but the learning rate for the second coordinate has to be adjusted to 1/1000 of that for the first coordinate to match it. If not coordinate two will dominate and the $\Delta$ vector will point more towards that direction. As a result it biases the delta to point across that direction only and makes the converge slower.
Is it necessary to scale the target value in addition to scaling features for regression analysis? It does affect gradient descent in a bad way. check the formula for gradient descent: $$ x_{n+1} = x_{n} - \gamma\Delta F(x_n) $$ lets say that $x_2$ is a feature that is 1000 times greater than $x_1
1,284
Is it necessary to scale the target value in addition to scaling features for regression analysis?
I think the best way to know whether we should scale the output is to try both ways, using scaler.inverse_transform in sklearn. Neural networks are not robust to transformations, in general. Therefore, if you scale the output variables and train, then the MSE produced is for the scaled version. However, if you use that model to predict, apply scaler.inverse_transform, and recompute the MSE, it may be a different story.
Is it necessary to scale the target value in addition to scaling features for regression analysis?
I think the best way to know whether we should scale the output is to try both way, using scaler.inverse_transform in sklearn. Neural network is not robust to transformation, in general. Therefore, i
Is it necessary to scale the target value in addition to scaling features for regression analysis? I think the best way to know whether we should scale the output is to try both way, using scaler.inverse_transform in sklearn. Neural network is not robust to transformation, in general. Therefore, if you scale the output variables, train,then the MSE produced is for the scaled version. However, if you use that model to predict and use scaler.inverse_transform, and recompute MSE, it may be a different scence.
Is it necessary to scale the target value in addition to scaling features for regression analysis? I think the best way to know whether we should scale the output is to try both way, using scaler.inverse_transform in sklearn. Neural network is not robust to transformation, in general. Therefore, i
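A short scikit-learn sketch of the point above, with made-up data and a simple linear model standing in for the neural network: train on the scaled target, then invert the scaling before computing the error you actually care about.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)
    X = rng.uniform(0, 1, (200, 2))
    y = 5000 * X[:, 0] - 2000 * X[:, 1] + rng.normal(0, 100, 200)   # hypothetical large-valued target

    scaler = StandardScaler()
    y_scaled = scaler.fit_transform(y.reshape(-1, 1)).ravel()

    model = LinearRegression().fit(X, y_scaled)

    mse_scaled = np.mean((model.predict(X) - y_scaled) ** 2)                    # MSE in scaled units
    y_back = scaler.inverse_transform(model.predict(X).reshape(-1, 1)).ravel()  # back to original units
    mse_original = np.mean((y_back - y) ** 2)                                   # MSE in original units
    print(mse_scaled, mse_original)   # very different numbers describing the same fit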
1,285
Differences between cross validation and bootstrapping to estimate the prediction error
It comes down to variance and bias (as usual). CV tends to be less biased but K-fold CV has fairly large variance. On the other hand, bootstrapping tends to drastically reduce the variance but gives more biased results (they tend to be pessimistic). Other bootstrapping methods have been adapted to deal with the bootstrap bias (such as the .632 and .632+ rules). Two other approaches would be "Monte Carlo CV" aka "leave-group-out CV", which does many random splits of the data (sort of like mini training and test splits); variance is very low for this method and the bias isn't too bad if the percentage of data in the hold-out is low. Also, repeated CV does K-fold several times and averages the results, similar to regular K-fold. I'm most partial to this since it keeps the low bias and reduces the variance. Edit: For large sample sizes, the variance issues become less important and the computational part is more of an issue. I would still stick with repeated CV for small and large sample sizes. Some relevant research is below (esp. Kim and Molinaro). References Bengio, Y., & Grandvalet, Y. (2005). Bias in estimating the variance of k-fold cross-validation. Statistical modeling and analysis for complex data problems, 75–95. Braga-Neto, U. M. (2004). Is cross-validation valid for small-sample microarray classification? Bioinformatics, 20(3), 374–380. doi:10.1093/bioinformatics/btg419 Efron, B. (1983). Estimating the error rate of a prediction rule: improvement on cross-validation. Journal of the American Statistical Association, 316–331. Efron, B., & Tibshirani, R. (1997). Improvements on cross-validation: The .632+ bootstrap method. Journal of the American Statistical Association, 548–560. Furlanello, C., Merler, S., Chemini, C., & Rizzoli, A. (1997). An application of the bootstrap 632+ rule to ecological data. WIRN 97. Jiang, W., & Simon, R. (2007). A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification. Statistics in Medicine, 26(29), 5320–5334. Jonathan, P., Krzanowski, W., & McCarthy, W. (2000). On the use of cross-validation to assess performance in multivariate prediction. Statistics and Computing, 10(3), 209–229. Kim, J.-H. (2009). Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap. Computational Statistics and Data Analysis, 53(11), 3735–3745. doi:10.1016/j.csda.2009.04.009 Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection. International Joint Conference on Artificial Intelligence, 14, 1137–1145. Martin, J., & Hirschberg, D. (1996). Small sample statistics for classification error rates I: Error rate measurements. Molinaro, A. M. (2005). Prediction error estimation: a comparison of resampling methods. Bioinformatics, 21(15), 3301–3307. doi:10.1093/bioinformatics/bti499 Sauerbrei, W., & Schumacher, M. (2000). Bootstrap and Cross-Validation to Assess Complexity of Data-Driven Regression Models. Medical Data Analysis, 26–28. Tibshirani, R. J., & Tibshirani, R. (2009). A bias correction for the minimum error rate in cross-validation. Arxiv preprint arXiv:0908.2904.
Differences between cross validation and bootstrapping to estimate the prediction error
It comes down to variance and bias (as usual). CV tends to be less biased but K-fold CV has fairly large variance. On the other hand, bootstrapping tends to drastically reduce the variance but gives m
Differences between cross validation and bootstrapping to estimate the prediction error It comes down to variance and bias (as usual). CV tends to be less biased but K-fold CV has fairly large variance. On the other hand, bootstrapping tends to drastically reduce the variance but gives more biased results (they tend to be pessimistic). Other bootstrapping methods have been adapted to deal with the bootstrap bias (such as the 632 and 632+ rules). Two other approaches would be "Monte Carlo CV" aka "leave-group-out CV" which does many random splits of the data (sort of like mini-training and test splits). Variance is very low for this method and the bias isn't too bad if the percentage of data in the hold-out is low. Also, repeated CV does K-fold several times and averages the results similar to regular K-fold. I'm most partial to this since it keeps the low bias and reduces the variance. Edit For large sample sizes, the variance issues become less important and the computational part is more of an issues. I still would stick by repeated CV for small and large sample sizes. Some relevant research is below (esp Kim and Molinaro). References Bengio, Y., & Grandvalet, Y. (2005). Bias in estimating the variance of k-fold cross-validation. Statistical modeling and analysis for complex data problems, 75–95. Braga-Neto, U. M. (2004). Is cross-validation valid for small-sample microarray classification Bioinformatics, 20(3), 374–380. doi:10.1093/bioinformatics/btg419 Efron, B. (1983). Estimating the error rate of a prediction rule: improvement on cross-validation. Journal of the American Statistical Association, 316–331. Efron, B., & Tibshirani, R. (1997). Improvements on cross-validation: The. 632+ bootstrap method. Journal of the American Statistical Association, 548–560. Furlanello, C., Merler, S., Chemini, C., & Rizzoli, A. (1997). An application of the bootstrap 632+ rule to ecological data. WIRN 97. Jiang, W., & Simon, R. (2007). A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification. Statistics in Medicine, 26(29), 5320–5334. Jonathan, P., Krzanowski, W., & McCarthy, W. (2000). On the use of cross-validation to assess performance in multivariate prediction. Statistics and Computing, 10(3), 209–229. Kim, J.-H. (2009). Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap. Computational Statistics and Data Analysis, 53(11), 3735–3745. doi:10.1016/j.csda.2009.04.009 Kohavi, R. (1995). A study of cross-validation and bootstrap for accuracy estimation and model selection. International Joint Conference on Artificial Intelligence, 14, 1137–1145. Martin, J., & Hirschberg, D. (1996). Small sample statistics for classification error rates I: Error rate measurements. Molinaro, A. M. (2005). Prediction error estimation: a comparison of resampling methods. Bioinformatics, 21(15), 3301–3307. doi:10.1093/bioinformatics/bti499 Sauerbrei, W., & Schumacher1, M. (2000). Bootstrap and Cross-Validation to Assess Complexity of Data-Driven Regression Models. Medical Data Analysis, 26–28. Tibshirani, RJ, & Tibshirani, R. (2009). A bias correction for the minimum error rate in cross-validation. Arxiv preprint arXiv:0908.2904.
Differences between cross validation and bootstrapping to estimate the prediction error It comes down to variance and bias (as usual). CV tends to be less biased but K-fold CV has fairly large variance. On the other hand, bootstrapping tends to drastically reduce the variance but gives m
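A brief scikit-learn sketch of the resampling schemes discussed above, on made-up classification data; the out-of-bag bootstrap estimate shown here is the plain version, not the .632/.632+ corrections mentioned in the answer.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedKFold, cross_val_score
    from sklearn.utils import resample

    X, y = make_classification(n_samples=200, random_state=0)
    clf = LogisticRegression(max_iter=1000)

    # repeated K-fold CV: K-fold run several times and averaged
    cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
    print(cross_val_score(clf, X, y, cv=cv).mean())

    # simple out-of-bag bootstrap estimate of accuracy
    scores = []
    for b in range(50):
        idx = resample(np.arange(len(y)), random_state=b)   # draw rows with replacement
        oob = np.setdiff1d(np.arange(len(y)), idx)           # rows not drawn form the test set
        clf.fit(X[idx], y[idx])
        scores.append(clf.score(X[oob], y[oob]))
    print(np.mean(scores))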
1,286
Differences between cross validation and bootstrapping to estimate the prediction error
@Frank Harrell has done a lot of work on this question. I don't know of specific references. But I rather see the two techniques as being for different purposes. Cross validation is a good tool when deciding on the model -- it helps you avoid fooling yourself into thinking that you have a good model when in fact you are overfitting. When your model is fixed, then using the bootstrap makes more sense (to me at least). There is an introduction to these concepts (plus permutation tests) using R at http://www.burns-stat.com/pages/Tutor/bootstrap_resampling.html
Differences between cross validation and bootstrapping to estimate the prediction error
@Frank Harrell has done a lot of work on this question. I don't know of specific references. But I rather see the two techniques as being for different purposes. Cross validation is a good tool when
Differences between cross validation and bootstrapping to estimate the prediction error @Frank Harrell has done a lot of work on this question. I don't know of specific references. But I rather see the two techniques as being for different purposes. Cross validation is a good tool when deciding on the model -- it helps you avoid fooling yourself into thinking that you have a good model when in fact you are overfitting. When your model is fixed, then using the bootstrap makes more sense (to me at least). There is an introduction to these concepts (plus permutation tests) using R at http://www.burns-stat.com/pages/Tutor/bootstrap_resampling.html
Differences between cross validation and bootstrapping to estimate the prediction error @Frank Harrell has done a lot of work on this question. I don't know of specific references. But I rather see the two techniques as being for different purposes. Cross validation is a good tool when
1,287
Differences between cross validation and bootstrapping to estimate the prediction error
My understanding is that bootstrapping is a way to quantify the uncertainty in your model while cross validation is used for model selection and measuring predictive accuracy.
Differences between cross validation and bootstrapping to estimate the prediction error
My understanding is that bootstrapping is a way to quantify the uncertainty in your model while cross validation is used for model selection and measuring predictive accuracy.
Differences between cross validation and bootstrapping to estimate the prediction error My understanding is that bootstrapping is a way to quantify the uncertainty in your model while cross validation is used for model selection and measuring predictive accuracy.
Differences between cross validation and bootstrapping to estimate the prediction error My understanding is that bootstrapping is a way to quantify the uncertainty in your model while cross validation is used for model selection and measuring predictive accuracy.
1,288
Differences between cross validation and bootstrapping to estimate the prediction error
These are two techniques of resampling: In cross validation we divide the data randomly into k folds, and it helps against overfitting, but this approach has its drawbacks. Because it uses random splits, some folds can produce a large error. CV has techniques to minimize this, but it is not so powerful with classification problems. The bootstrap helps here; it improves on the error estimate through its own resampling check. For details please refer to https://lagunita.stanford.edu/c4x/HumanitiesScience/StatLearning/asset/cv_boot.pdf
Differences between cross validation and bootstrapping to estimate the prediction error
These are two techniques of resampling: In cross validation we divide the data randomly into kfold and it helps in overfitting, but this approach has its drawback. As it uses random samples so some sa
Differences between cross validation and bootstrapping to estimate the prediction error These are two techniques of resampling: In cross validation we divide the data randomly into kfold and it helps in overfitting, but this approach has its drawback. As it uses random samples so some sample produces major error. In order to minimize CV has techniques but its not so powerful with classification problems. Bootstrap helps in this, it improves the error from its own sample check..for detail please refer.. https://lagunita.stanford.edu/c4x/HumanitiesScience/StatLearning/asset/cv_boot.pdf
Differences between cross validation and bootstrapping to estimate the prediction error These are two techniques of resampling: In cross validation we divide the data randomly into kfold and it helps in overfitting, but this approach has its drawback. As it uses random samples so some sa
1,289
Numerical example to understand Expectation-Maximization
This is a recipe to learn EM with a practical and (in my opinion) very intuitive 'Coin-Toss' example: Read this short EM tutorial paper by Do and Batzoglou. This is the schema where the coin toss example is explained: You may have question marks in your head, especially regarding where the probabilities in the Expectation step come from. Please have a look at the explanations on this maths stack exchange page. Look at/run this code that I wrote in Python that simulates the solution to the coin-toss problem in the EM tutorial paper of item 1:

    import numpy as np
    import math
    import matplotlib.pyplot as plt

    ## E-M Coin Toss Example as given in the EM tutorial paper by Do and Batzoglou* ##

    def get_binomial_log_likelihood(obs, probs):
        """Return the log-likelihood of obs, given the probs"""
        # Binomial distribution log pdf:
        # ln[f(k|N,p)] = ln[comb(N,k)] + k*ln(p) + (N-k)*ln(1-p)
        N = sum(obs)   # number of trials
        k = obs[0]     # number of heads
        binomial_coeff = math.factorial(N) / (math.factorial(N-k) * math.factorial(k))
        prod_probs = obs[0]*math.log(probs[0]) + obs[1]*math.log(1-probs[0])
        log_lik = math.log(binomial_coeff) + prod_probs
        return log_lik

    # 1st:  Coin B, {HTTTHHTHTH}, 5H,5T
    # 2nd:  Coin A, {HHHHTHHHHH}, 9H,1T
    # 3rd:  Coin A, {HTHHHHHTHH}, 8H,2T
    # 4th:  Coin B, {HTHTTTHHTT}, 4H,6T
    # 5th:  Coin A, {THHHTHHHTH}, 7H,3T
    # so, from MLE: pA(heads) = 0.80 and pB(heads) = 0.45

    # represent the experiments
    head_counts = np.array([5,9,8,4,7])
    tail_counts = 10 - head_counts
    experiments = list(zip(head_counts, tail_counts))   # list() so len() and indexing work in Python 3

    # initialise the pA(heads) and pB(heads)
    pA_heads = np.zeros(100); pA_heads[0] = 0.60
    pB_heads = np.zeros(100); pB_heads[0] = 0.50

    # E-M begins!
    delta = 0.001
    j = 0   # iteration counter
    improvement = float('inf')
    while improvement > delta:
        expectation_A = np.zeros((len(experiments), 2), dtype=float)
        expectation_B = np.zeros((len(experiments), 2), dtype=float)
        for i in range(0, len(experiments)):
            e = experiments[i]   # i'th experiment
            # loglikelihood of e given coin A:
            ll_A = get_binomial_log_likelihood(e, np.array([pA_heads[j], 1-pA_heads[j]]))
            # loglikelihood of e given coin B:
            ll_B = get_binomial_log_likelihood(e, np.array([pB_heads[j], 1-pB_heads[j]]))
            # corresponding weight of A proportional to likelihood of A
            weightA = math.exp(ll_A) / (math.exp(ll_A) + math.exp(ll_B))
            # corresponding weight of B proportional to likelihood of B
            weightB = math.exp(ll_B) / (math.exp(ll_A) + math.exp(ll_B))
            expectation_A[i] = np.dot(weightA, e)
            expectation_B[i] = np.dot(weightB, e)
        pA_heads[j+1] = sum(expectation_A)[0] / sum(sum(expectation_A))
        pB_heads[j+1] = sum(expectation_B)[0] / sum(sum(expectation_B))
        improvement = max(abs(np.array([pA_heads[j+1], pB_heads[j+1]]) -
                              np.array([pA_heads[j], pB_heads[j]])))
        j = j + 1

    plt.figure()
    plt.plot(range(0, j), pA_heads[0:j], 'r--')
    plt.plot(range(0, j), pB_heads[0:j])
    plt.show()
Numerical example to understand Expectation-Maximization
This is a recipe to learn EM with a practical and (in my opinion) very intuitive 'Coin-Toss' example: Read this short EM tutorial paper by Do and Batzoglou. This is the schema where the coin toss e
Numerical example to understand Expectation-Maximization This is a recipe to learn EM with a practical and (in my opinion) very intuitive 'Coin-Toss' example: Read this short EM tutorial paper by Do and Batzoglou. This is the schema where the coin toss example is explained: You may have question marks in your head, especially regarding where the probabilities in the Expectation step come from. Please have a look at the explanations on this maths stack exchange page. Look at/run this code that I wrote in Python that simulates the solution to the coin-toss problem in the EM tutorial paper of item 1: import numpy as np import math import matplotlib.pyplot as plt ## E-M Coin Toss Example as given in the EM tutorial paper by Do and Batzoglou* ## def get_binomial_log_likelihood(obs,probs): """ Return the (log)likelihood of obs, given the probs""" # Binomial Distribution Log PDF # ln (pdf) = Binomial Coeff * product of probabilities # ln[f(x|n, p)] = comb(N,k) * num_heads*ln(pH) + (N-num_heads) * ln(1-pH) N = sum(obs);#number of trials k = obs[0] # number of heads binomial_coeff = math.factorial(N) / (math.factorial(N-k) * math.factorial(k)) prod_probs = obs[0]*math.log(probs[0]) + obs[1]*math.log(1-probs[0]) log_lik = binomial_coeff + prod_probs return log_lik # 1st: Coin B, {HTTTHHTHTH}, 5H,5T # 2nd: Coin A, {HHHHTHHHHH}, 9H,1T # 3rd: Coin A, {HTHHHHHTHH}, 8H,2T # 4th: Coin B, {HTHTTTHHTT}, 4H,6T # 5th: Coin A, {THHHTHHHTH}, 7H,3T # so, from MLE: pA(heads) = 0.80 and pB(heads)=0.45 # represent the experiments head_counts = np.array([5,9,8,4,7]) tail_counts = 10-head_counts experiments = zip(head_counts,tail_counts) # initialise the pA(heads) and pB(heads) pA_heads = np.zeros(100); pA_heads[0] = 0.60 pB_heads = np.zeros(100); pB_heads[0] = 0.50 # E-M begins! delta = 0.001 j = 0 # iteration counter improvement = float('inf') while (improvement>delta): expectation_A = np.zeros((len(experiments),2), dtype=float) expectation_B = np.zeros((len(experiments),2), dtype=float) for i in range(0,len(experiments)): e = experiments[i] # i'th experiment # loglikelihood of e given coin A: ll_A = get_binomial_log_likelihood(e,np.array([pA_heads[j],1-pA_heads[j]])) # loglikelihood of e given coin B ll_B = get_binomial_log_likelihood(e,np.array([pB_heads[j],1-pB_heads[j]])) # corresponding weight of A proportional to likelihood of A weightA = math.exp(ll_A) / ( math.exp(ll_A) + math.exp(ll_B) ) # corresponding weight of B proportional to likelihood of B weightB = math.exp(ll_B) / ( math.exp(ll_A) + math.exp(ll_B) ) expectation_A[i] = np.dot(weightA, e) expectation_B[i] = np.dot(weightB, e) pA_heads[j+1] = sum(expectation_A)[0] / sum(sum(expectation_A)); pB_heads[j+1] = sum(expectation_B)[0] / sum(sum(expectation_B)); improvement = ( max( abs(np.array([pA_heads[j+1],pB_heads[j+1]]) - np.array([pA_heads[j],pB_heads[j]]) )) ) j = j+1 plt.figure(); plt.plot(range(0,j),pA_heads[0:j], 'r--') plt.plot(range(0,j),pB_heads[0:j]) plt.show()
Numerical example to understand Expectation-Maximization This is a recipe to learn EM with a practical and (in my opinion) very intuitive 'Coin-Toss' example: Read this short EM tutorial paper by Do and Batzoglou. This is the schema where the coin toss e
1,290
Numerical example to understand Expectation-Maximization
It sounds like your question has two parts: the underlying idea and a concrete example. I'll start with the underlying idea, then link to an example at the bottom. EM is useful in Catch-22 situations where it seems like you need to know $A$ before you can calculate $B$ and you need to know $B$ before you can calculate $A$. The most common case people deal with is probably mixture distributions. For our example, let's look at a simple Gaussian mixture model: You have two different univariate Gaussian distributions with different means and unit variance. You have a bunch of data points, but you're not sure which points came from which distribution, and you're also not sure about the means of the two distributions. And now you're stuck: If you knew the true means, you could figure out which data points came from which Gaussian. For example, if a data point had a very high value, it probably came from the distribution with the higher mean. But you don't know what the means are, so this won't work. If you knew which distribution each point came from, then you could estimate the two distributions' means using the sample means of the relevant points. But you don't actually know which points to assign to which distribution, so this won't work either. So neither approach seems like it works: you'd need to know the answer before you can find the answer, and you're stuck. What EM lets you do is alternate between these two tractable steps instead of tackling the whole process at once. You'll need to start with a guess about the two means (although your guess doesn't necessarily have to be very accurate, you do need to start somewhere). If your guess about the means was accurate, then you'd have enough information to carry out the step in my first bullet point above, and you could (probabilistically) assign each data point to one of the two Gaussians. Even though we know our guess is wrong, let's try this anyway. And then, given each point's assigned distributions, you could get new estimates for the means using the second bullet point. It turns out that, each time you do loop through these two steps, you're improving a lower bound on the model's likelihood. That's already pretty cool: even though the two suggestions in the bullet points above didn't seem like they'd work individually, you can still use them together to improve the model. The real magic of EM is that, after enough iterations, the lower bound will be so high that there won't be any space between it and the local maximum. As a result, and you've locally optimized the likelihood. So you haven't just improved the model, you've found the best possible model one can find with incremental updates. This page from Wikipedia shows a slightly more complicated example (two-dimensional Gaussians and unknown covariance), but the basic idea is the same. It also includes well-commented R code for implementing the example. In the code, the "Expectation" step (E-step) corresponds to my first bullet point: figuring out which Gaussian gets responsibility for each data point, given the current parameters for each Gaussian. The "Maximization" step (M-step) updates the means and covariances, given these assignments, as in my second bullet point. As you can see in the animation, these updates quickly allow the algorithm to go from a set of terrible estimates to a set of very good ones: there really do seem to be two clouds of points centered on the two Gaussian distributions that EM finds.
Numerical example to understand Expectation-Maximization
It sounds like your question has two parts: the underlying idea and a concrete example. I'll start with the underlying idea, then link to an example at the bottom. EM is useful in Catch-22 situation
Numerical example to understand Expectation-Maximization It sounds like your question has two parts: the underlying idea and a concrete example. I'll start with the underlying idea, then link to an example at the bottom. EM is useful in Catch-22 situations where it seems like you need to know $A$ before you can calculate $B$ and you need to know $B$ before you can calculate $A$. The most common case people deal with is probably mixture distributions. For our example, let's look at a simple Gaussian mixture model: You have two different univariate Gaussian distributions with different means and unit variance. You have a bunch of data points, but you're not sure which points came from which distribution, and you're also not sure about the means of the two distributions. And now you're stuck: If you knew the true means, you could figure out which data points came from which Gaussian. For example, if a data point had a very high value, it probably came from the distribution with the higher mean. But you don't know what the means are, so this won't work. If you knew which distribution each point came from, then you could estimate the two distributions' means using the sample means of the relevant points. But you don't actually know which points to assign to which distribution, so this won't work either. So neither approach seems like it works: you'd need to know the answer before you can find the answer, and you're stuck. What EM lets you do is alternate between these two tractable steps instead of tackling the whole process at once. You'll need to start with a guess about the two means (although your guess doesn't necessarily have to be very accurate, you do need to start somewhere). If your guess about the means was accurate, then you'd have enough information to carry out the step in my first bullet point above, and you could (probabilistically) assign each data point to one of the two Gaussians. Even though we know our guess is wrong, let's try this anyway. And then, given each point's assigned distributions, you could get new estimates for the means using the second bullet point. It turns out that, each time you do loop through these two steps, you're improving a lower bound on the model's likelihood. That's already pretty cool: even though the two suggestions in the bullet points above didn't seem like they'd work individually, you can still use them together to improve the model. The real magic of EM is that, after enough iterations, the lower bound will be so high that there won't be any space between it and the local maximum. As a result, and you've locally optimized the likelihood. So you haven't just improved the model, you've found the best possible model one can find with incremental updates. This page from Wikipedia shows a slightly more complicated example (two-dimensional Gaussians and unknown covariance), but the basic idea is the same. It also includes well-commented R code for implementing the example. In the code, the "Expectation" step (E-step) corresponds to my first bullet point: figuring out which Gaussian gets responsibility for each data point, given the current parameters for each Gaussian. The "Maximization" step (M-step) updates the means and covariances, given these assignments, as in my second bullet point. As you can see in the animation, these updates quickly allow the algorithm to go from a set of terrible estimates to a set of very good ones: there really do seem to be two clouds of points centered on the two Gaussian distributions that EM finds.
Numerical example to understand Expectation-Maximization It sounds like your question has two parts: the underlying idea and a concrete example. I'll start with the underlying idea, then link to an example at the bottom. EM is useful in Catch-22 situation
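A compact sketch of the E-step / M-step alternation described above, for the simplest case in the answer: two univariate Gaussians with unknown means, unit variance, and equal mixing weights. The data are made up for illustration.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(8)
    data = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 1, 150)])   # hidden two-group data

    mu1, mu2 = -1.0, 1.0                  # deliberately poor initial guesses for the means
    for _ in range(50):
        # E-step: responsibility of component 1 for each point (unit variance, equal weights)
        p1 = norm.pdf(data, mu1, 1)
        p2 = norm.pdf(data, mu2, 1)
        r1 = p1 / (p1 + p2)
        # M-step: responsibility-weighted means
        mu1 = np.sum(r1 * data) / np.sum(r1)
        mu2 = np.sum((1 - r1) * data) / np.sum(1 - r1)

    print(mu1, mu2)                       # close to the true means 0 and 4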
1,291
Numerical example to understand Expectation-Maximization
Here's an example of Expectation Maximisation (EM) used to estimate the mean and standard deviation. The code is in Python, but it should be easy to follow even if you're not familiar with the language.

The motivation for EM

The red and blue points shown below are drawn from two different normal distributions, each with a particular mean and standard deviation. To compute reasonable approximations of the "true" mean and standard deviation parameters for the red distribution, we could very easily look at the red points and record the position of each one, and then use the familiar formulae (and similarly for the blue group).

Now consider the case where we know that there are two groups of points, but we cannot see which point belongs to which group. In other words, the colours are hidden. It's not at all obvious how to divide the points into two groups. We are now unable to just look at the positions and compute estimates for the parameters of the red distribution or the blue distribution. This is where EM can be used to solve the problem.

Using EM to estimate parameters

Here is the code used to generate the points shown above. You can see the actual means and standard deviations of the normal distributions that the points were drawn from. The variables red and blue hold the positions of each point in the red and blue groups respectively:

import numpy as np
from scipy import stats

np.random.seed(110)  # for reproducible random results

# set parameters
red_mean = 3
red_std = 0.8
blue_mean = 7
blue_std = 2

# draw 20 samples from normal distributions with red/blue parameters
red = np.random.normal(red_mean, red_std, size=20)
blue = np.random.normal(blue_mean, blue_std, size=20)

both_colours = np.sort(np.concatenate((red, blue)))

If we could see the colour of each point, we would try and recover means and standard deviations using library functions:

>>> np.mean(red)
2.802
>>> np.std(red)
0.871
>>> np.mean(blue)
6.932
>>> np.std(blue)
2.195

But since the colours are hidden from us, we'll start the EM process...

First, we just guess at the values for the parameters of each group (step 1). These guesses don't have to be good:

# estimates for the mean
red_mean_guess = 1.1
blue_mean_guess = 9

# estimates for the standard deviation
red_std_guess = 2
blue_std_guess = 1.7

Pretty bad guesses - the means look like they are a long way from any "middle" of a group of points.

To continue with EM and improve these guesses, we compute the likelihood of each data point (regardless of its secret colour) appearing under these guesses for the mean and standard deviation (step 2). The variable both_colours holds each data point. The function stats.norm computes the probability of the point under a normal distribution with the given parameters:

likelihood_of_red = stats.norm(red_mean_guess, red_std_guess).pdf(both_colours)
likelihood_of_blue = stats.norm(blue_mean_guess, blue_std_guess).pdf(both_colours)

This tells us, for example, that with our current guesses the data point at 1.761 is much more likely to be red (0.189) than blue (0.00003).

We can turn these two likelihood values into weights (step 3) so that they sum to 1 as follows:

likelihood_total = likelihood_of_red + likelihood_of_blue

red_weight = likelihood_of_red / likelihood_total
blue_weight = likelihood_of_blue / likelihood_total

With our current estimates and our newly-computed weights, we can now compute new, probably better, estimates for the parameters (step 4).
We need a function for the mean and a function for the standard deviation:

def estimate_mean(data, weight):
    return np.sum(data * weight) / np.sum(weight)

def estimate_std(data, weight, mean):
    variance = np.sum(weight * (data - mean)**2) / np.sum(weight)
    return np.sqrt(variance)

These look very similar to the usual functions for the mean and standard deviation of data. The difference is the use of a weight parameter which assigns a weight to each data point. This weighting is the key to EM. The greater the weight of a colour on a data point, the more the data point influences the next estimates for that colour's parameters. Ultimately, this has the effect of pulling each parameter in the right direction.

The new guesses are computed with these functions:

# new estimates for standard deviation
blue_std_guess = estimate_std(both_colours, blue_weight, blue_mean_guess)
red_std_guess = estimate_std(both_colours, red_weight, red_mean_guess)

# new estimates for mean
red_mean_guess = estimate_mean(both_colours, red_weight)
blue_mean_guess = estimate_mean(both_colours, blue_weight)

The EM process is then repeated with these new guesses from step 2 onward. We can repeat the steps for a given number of iterations (say 20), or until we see the parameters converge. After five iterations, we see our initial bad guesses start to get better. After 20 iterations, the EM process has more or less converged. For comparison, here are the results of the EM process compared with the values computed where colour information is not hidden:

          | EM guess | Actual
----------+----------+--------
Red mean  |    2.910 |  2.802
Red std   |    0.854 |  0.871
Blue mean |    6.838 |  6.932
Blue std  |    2.227 |  2.195

Note: this answer was adapted from my answer on Stack Overflow here.
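The loop itself isn't shown in the answer above, so here is a minimal sketch of how the repetition of steps 2-4 might look, reusing only the variables and functions already defined (the 20-iteration count follows the text; the wrapper is mine, not the original author's):

for _ in range(20):
    # step 2: likelihood of each point under the current guesses
    likelihood_of_red = stats.norm(red_mean_guess, red_std_guess).pdf(both_colours)
    likelihood_of_blue = stats.norm(blue_mean_guess, blue_std_guess).pdf(both_colours)

    # step 3: normalise the likelihoods into weights that sum to 1 for each point
    likelihood_total = likelihood_of_red + likelihood_of_blue
    red_weight = likelihood_of_red / likelihood_total
    blue_weight = likelihood_of_blue / likelihood_total

    # step 4: weighted re-estimates of the parameters (same order as the answer:
    # standard deviations first, using the old mean guesses, then the means)
    blue_std_guess = estimate_std(both_colours, blue_weight, blue_mean_guess)
    red_std_guess = estimate_std(both_colours, red_weight, red_mean_guess)
    red_mean_guess = estimate_mean(both_colours, red_weight)
    blue_mean_guess = estimate_mean(both_colours, blue_weight)

Alternatively, one could stop early once the guesses change by less than some small tolerance, which is the convergence check the answer alludes to.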
1,292
Numerical example to understand Expectation-Maximization
Following Zhubarb's answer, I implemented the Do and Batzoglou "coin tossing" E-M example in GNU R. Note that I use the mle function of the stats4 package - this helped me to understand more clearly how E-M and MLE are related.

require("stats4");

## sample data from Do and Batzoglou
ds <- data.frame(heads=c(5,9,8,4,7), n=c(10,10,10,10,10),
                 coin=c("B","A","A","B","A"), weight_A=1:5*0)

## "baby likelihood" for a single observation
llf <- function(heads, n, theta) {
  comb <- function(n, x) { # nCr function
    return(factorial(n) / (factorial(x) * factorial(n-x)))
  }
  if (theta<0 || theta>1) { # probabilities should be in [0,1]
    return(-Inf);
  }
  z <- comb(n,heads) * theta^heads * (1-theta)^(n-heads);
  return(log(z))
}

## the "E-M" likelihood function
em <- function(theta_A, theta_B) {
  # expectation step: given current parameters, what is the likelihood
  # an observation is the result of tossing coin A (vs coin B)?
  ds$weight_A <<- by(ds, 1:nrow(ds), function(row) {
    llf_A <- llf(row$heads, row$n, theta_A);
    llf_B <- llf(row$heads, row$n, theta_B);
    return(exp(llf_A) / (exp(llf_A) + exp(llf_B)));
  })

  # maximisation step: given params and weights, calculate likelihood of the sample
  return(- sum(by(ds, 1:nrow(ds), function(row) {
    llf_A <- llf(row$heads, row$n, theta_A);
    llf_B <- llf(row$heads, row$n, theta_B);
    return(row$weight_A*llf_A + (1-row$weight_A)*llf_B);
  })))
}

est <- mle(em, start = list(theta_A=0.6, theta_B=0.5), nobs=NROW(ds))
1,293
Numerical example to understand Expectation-Maximization
All of the above look like great resources, but I must link to this great example. It presents a very simple explanation of how to find the parameters of two lines fitted to a set of points. The tutorial was written by Yair Weiss while at MIT.
http://www.cs.huji.ac.il/~yweiss/emTutorial.pdf
http://www.cs.huji.ac.il/~yweiss/tutorials.html
1,294
Numerical example to understand Expectation-Maximization
The answer given by Zhubarb is great, but unfortunately it is in Python. Below is a Java implementation of the EM algorithm executed on the same problem (posed in the article by Do and Batzoglou, 2008). I've added some printf's to the standard output to see how the parameters converge. thetaA = 0.71301, thetaB = 0.58134 thetaA = 0.74529, thetaB = 0.56926 thetaA = 0.76810, thetaB = 0.54954 thetaA = 0.78316, thetaB = 0.53462 thetaA = 0.79106, thetaB = 0.52628 thetaA = 0.79453, thetaB = 0.52239 thetaA = 0.79593, thetaB = 0.52073 thetaA = 0.79647, thetaB = 0.52005 thetaA = 0.79667, thetaB = 0.51977 thetaA = 0.79674, thetaB = 0.51966 thetaA = 0.79677, thetaB = 0.51961 thetaA = 0.79678, thetaB = 0.51960 thetaA = 0.79679, thetaB = 0.51959 Final result: thetaA = 0.79678, thetaB = 0.51960 Java code follows below: import java.util.*; /***************************************************************************** This class encapsulates the parameters of the problem. For this problem posed in the article by (Do and Batzoglou, 2008), the parameters are thetaA and thetaB, the probability of a coin coming up heads for the two coins A and B. *****************************************************************************/ class Parameters { double _thetaA = 0.0; // Probability of heads for coin A. double _thetaB = 0.0; // Probability of heads for coin B. double _delta = 0.00001; public Parameters(double thetaA, double thetaB) { _thetaA = thetaA; _thetaB = thetaB; } /************************************************************************* Returns true if this parameter is close enough to another parameter (typically the estimated parameter coming from the maximization step). *************************************************************************/ public boolean converged(Parameters other) { if (Math.abs(_thetaA - other._thetaA) < _delta && Math.abs(_thetaB - other._thetaB) < _delta) { return true; } return false; } public double getThetaA() { return _thetaA; } public double getThetaB() { return _thetaB; } public String toString() { return String.format("thetaA = %.5f, thetaB = %.5f", _thetaA, _thetaB); } } /***************************************************************************** This class encapsulates an observation, that is the number of heads and tails in a trial. The observation can be either (1) one of the observed observations, or (2) an estimated observation resulting from the expectation step. *****************************************************************************/ class Observation { double _numHeads = 0; double _numTails = 0; public Observation(String s) { for (int i = 0; i < s.length(); i++) { char c = s.charAt(i); if (c == 'H') { _numHeads++; } else if (c == 'T') { _numTails++; } else { throw new RuntimeException("Unknown character: " + c); } } } public Observation(double numHeads, double numTails) { _numHeads = numHeads; _numTails = numTails; } public double getNumHeads() { return _numHeads; } public double getNumTails() { return _numTails; } public String toString() { return String.format("heads: %.1f, tails: %.1f", _numHeads, _numTails); } } /***************************************************************************** This class runs expectation-maximization for the problem posed by the article from (Do and Batzoglou, 2008). *****************************************************************************/ public class EM { // Current estimated parameters. private Parameters _parameters; // Observations from the trials. These observations are set once. 
private final List<Observation> _observations; // Estimated observations per coin. These observations are the output // of the expectation step. private List<Observation> _expectedObservationsForCoinA; private List<Observation> _expectedObservationsForCoinB; private static java.io.PrintStream o = System.out; /************************************************************************* Principal constructor. @param observations The observations from the trial. @param parameters The initial guessed parameters. *************************************************************************/ public EM(List<Observation> observations, Parameters parameters) { _observations = observations; _parameters = parameters; } /************************************************************************* Run EM until parameters converge. *************************************************************************/ public Parameters run() { while (true) { expectation(); Parameters estimatedParameters = maximization(); o.printf("%s\n", estimatedParameters); if (_parameters.converged(estimatedParameters)) { break; } _parameters = estimatedParameters; } return _parameters; } /************************************************************************* Given the observations and current estimated parameters, compute new estimated completions (distribution over the classes) and observations. *************************************************************************/ private void expectation() { _expectedObservationsForCoinA = new ArrayList<Observation>(); _expectedObservationsForCoinB = new ArrayList<Observation>(); for (Observation observation : _observations) { int numHeads = (int)observation.getNumHeads(); int numTails = (int)observation.getNumTails(); double probabilityOfObservationForCoinA= binomialProbability(10, numHeads, _parameters.getThetaA()); double probabilityOfObservationForCoinB= binomialProbability(10, numHeads, _parameters.getThetaB()); double normalizer = probabilityOfObservationForCoinA + probabilityOfObservationForCoinB; // Compute the completions for coin A and B (i.e. the probability // distribution of the two classes, summed to 1.0). double completionCoinA = probabilityOfObservationForCoinA / normalizer; double completionCoinB = probabilityOfObservationForCoinB / normalizer; // Compute new expected observations for the two coins. Observation expectedObservationForCoinA = new Observation(numHeads * completionCoinA, numTails * completionCoinA); Observation expectedObservationForCoinB = new Observation(numHeads * completionCoinB, numTails * completionCoinB); _expectedObservationsForCoinA.add(expectedObservationForCoinA); _expectedObservationsForCoinB.add(expectedObservationForCoinB); } } /************************************************************************* Given new estimated observations, compute new estimated parameters. 
*************************************************************************/ private Parameters maximization() { double sumCoinAHeads = 0.0; double sumCoinATails = 0.0; double sumCoinBHeads = 0.0; double sumCoinBTails = 0.0; for (Observation observation : _expectedObservationsForCoinA) { sumCoinAHeads += observation.getNumHeads(); sumCoinATails += observation.getNumTails(); } for (Observation observation : _expectedObservationsForCoinB) { sumCoinBHeads += observation.getNumHeads(); sumCoinBTails += observation.getNumTails(); } return new Parameters(sumCoinAHeads / (sumCoinAHeads + sumCoinATails), sumCoinBHeads / (sumCoinBHeads + sumCoinBTails)); //o.printf("parameters: %s\n", _parameters); } /************************************************************************* Since the coin-toss experiment posed in this article is a Bernoulli trial, use a binomial probability Pr(X=k; n,p) = (n choose k) * p^k * (1-p)^(n-k). *************************************************************************/ private static double binomialProbability(int n, int k, double p) { double q = 1.0 - p; return nChooseK(n, k) * Math.pow(p, k) * Math.pow(q, n-k); } private static long nChooseK(int n, int k) { long numerator = 1; for (int i = 0; i < k; i++) { numerator = numerator * n; n--; } long denominator = factorial(k); return (long)(numerator / denominator); } private static long factorial(int n) { long result = 1; for (; n >0; n--) { result = result * n; } return result; } /************************************************************************* Entry point into the program. *************************************************************************/ public static void main(String argv[]) { // Create the observations and initial parameter guess // from the (Do and Batzoglou, 2008) article. List<Observation> observations = new ArrayList<Observation>(); observations.add(new Observation("HTTTHHTHTH")); observations.add(new Observation("HHHHTHHHHH")); observations.add(new Observation("HTHHHHHTHH")); observations.add(new Observation("HTHTTTHHTT")); observations.add(new Observation("THHHTHHHTH")); Parameters initialParameters = new Parameters(0.6, 0.5); EM em = new EM(observations, initialParameters); Parameters finalParameters = em.run(); o.printf("Final result:\n%s\n", finalParameters); } }
1,295
Numerical example to understand Expectation-Maximization
% Implementation of the EM (Expectation-Maximization) algorithm example exposed on:
% Motion Segmentation using EM - a short tutorial, Yair Weiss,
% http://www.cs.huji.ac.il/~yweiss/emTutorial.pdf
% Juan Andrade, [email protected]

clear all
clc

%% Setup parameters
m1 = 2;           % slope line 1
m2 = 6;           % slope line 2
b1 = 3;           % vertical crossing line 1
b2 = -2;          % vertical crossing line 2
x = [-1:0.1:5];   % x axis values
sigma1 = 1;       % Standard Deviation of Noise added to line 1
sigma2 = 2;       % Standard Deviation of Noise added to line 2

%% Clean lines
l1 = m1*x+b1;     % line 1
l2 = m2*x+b2;     % line 2

%% Adding noise to lines
p1 = l1 + sigma1*randn(size(l1));
p2 = l2 + sigma2*randn(size(l2));

%% showing ideal and noise values
figure,plot(x,l1,'r'),hold,plot(x,l2,'b'), plot(x,p1,'r.'),plot(x,p2,'b.'),grid

%% initial guess
m11(1) = -1;      % slope line 1
m22(1) = 1;       % slope line 2
b11(1) = 2;       % vertical crossing line 1
b22(1) = 2;       % vertical crossing line 2

%% EM algorithm loop
iterations = 10;  % number of iterations (a stop based on a threshold may be used too)

for i=1:iterations

    %% expectation step (equations 2 and 3)
    res1 = m11(i)*x + b11(i) - p1;
    res2 = m22(i)*x + b22(i) - p2;
    % line 1
    w1 = (exp((-res1.^2)./sigma1))./((exp((-res1.^2)./sigma1)) + (exp((-res2.^2)./sigma2)));
    % line 2
    w2 = (exp((-res2.^2)./sigma2))./((exp((-res1.^2)./sigma1)) + (exp((-res2.^2)./sigma2)));

    %% maximization step (equation 4)
    % line 1
    A(1,1) = sum(w1.*(x.^2));
    A(1,2) = sum(w1.*x);
    A(2,1) = sum(w1.*x);
    A(2,2) = sum(w1);
    bb = [sum(w1.*x.*p1) ; sum(w1.*p1)];
    temp = A\bb;
    m11(i+1) = temp(1);
    b11(i+1) = temp(2);

    % line 2
    A(1,1) = sum(w2.*(x.^2));
    A(1,2) = sum(w2.*x);
    A(2,1) = sum(w2.*x);
    A(2,2) = sum(w2);
    bb = [sum(w2.*x.*p2) ; sum(w2.*p2)];
    temp = A\bb;
    m22(i+1) = temp(1);
    b22(i+1) = temp(2);

    %% plotting evolution of results
    l1temp = m11(i+1)*x+b11(i+1);
    l2temp = m22(i+1)*x+b22(i+1);
    figure,plot(x,l1temp,'r'),hold,plot(x,l2temp,'b'), plot(x,p1,'r.'),plot(x,p2,'b.'),grid

end
1,296
Numerical example to understand Expectation-Maximization
Well, I would suggest you go through a book on R by Maria L. Rizzo. One of the chapters contains the use of the EM algorithm with a numerical example; I remember going through the code for better understanding. Also, try to view it from a clustering point of view in the beginning: work out by hand a clustering problem where 10 observations are taken from two different normal densities. This should help. Take help from R :)
1,297
Numerical example to understand Expectation-Maximization
Just in case, I have written a Ruby implementation of the above mentioned coin toss example by Do & Batzoglou and it produces exactly the same numbers as they do w.r.t. the same input parameters $\theta_A = 0.6$ and $\theta_B = 0.5$.

# gem install distribution
require 'distribution'

# error bound
EPS = 10**-6

# number of coin tosses
N = 10

# observations
X = [5, 9, 8, 4, 7]

# randomly initialized thetas
theta_a, theta_b = 0.6, 0.5
p [theta_a, theta_b]

loop do
  expectation = X.map do |h|
    like_a = Distribution::Binomial.pdf(h, N, theta_a)
    like_b = Distribution::Binomial.pdf(h, N, theta_b)

    norm_a = like_a / (like_a + like_b)
    norm_b = like_b / (like_a + like_b)

    [norm_a, norm_b, h]
  end

  maximization = expectation.each_with_object([0.0, 0.0, 0.0, 0.0]) do |(norm_a, norm_b, h), r|
    r[0] += norm_a * h; r[1] += norm_a * (N - h)
    r[2] += norm_b * h; r[3] += norm_b * (N - h)
  end

  theta_a_hat = maximization[0] / (maximization[0] + maximization[1])
  theta_b_hat = maximization[2] / (maximization[2] + maximization[3])

  error_a = (theta_a_hat - theta_a).abs / theta_a
  error_b = (theta_b_hat - theta_b).abs / theta_b

  theta_a, theta_b = theta_a_hat, theta_b_hat
  p [theta_a, theta_b]

  break if error_a < EPS && error_b < EPS
end
1,298
When to use gamma GLMs?
The gamma has a property shared by the lognormal: namely, that when the shape parameter is held constant while the scale parameter is varied (as is usually done when using either for models), the variance is proportional to the mean squared (constant coefficient of variation). Something approximating this occurs fairly often with financial data, and indeed with many other kinds of data. As a result, the gamma is often suitable for data that are continuous, positive, right-skew and where the variance is near-constant on the log scale, though there are a number of other well-known (and often fairly readily available) choices with those properties.

Further, it's common to fit a log link with the gamma GLM (it's relatively rare to use the natural link). What makes it slightly different from fitting a normal linear model to the logs of the data is that on the log scale the gamma is left skew to varying degrees, while the normal (the log of a lognormal) is symmetric. This makes it (the gamma) useful in a variety of situations.

I've seen practical uses for gamma GLMs discussed (with real data examples) in, off the top of my head, de Jong & Heller and Frees, as well as numerous papers; I've also seen applications in other areas. And if I remember right, Venables and Ripley's MASS uses it on school absenteeism (the quine data; Edit: it turns out it's actually in Statistics Complements to MASS, see p11, the 14th page of the pdf; it has a log link but there's a small shift of the DV). McCullagh and Nelder did a blood clotting example, though perhaps that one used the natural link. Then there's Faraway's book, where he did a car insurance example and a semiconductor manufacturing data example.

There are some advantages and some disadvantages to choosing either of the two options. Since these days both are easy to fit, it's generally a matter of choosing what's most suitable. They're also far from the only options; for example, there are also inverse Gaussian GLMs, which are more skew/heavier tailed (and even more heteroskedastic) than either the gamma or the lognormal.

As for drawbacks of the gamma GLM: it's harder to do prediction intervals, some diagnostic displays are harder to interpret, computing expectations on the scale of the linear predictor (generally the log scale) is harder than for the equivalent lognormal model, and hypothesis tests and intervals are generally asymptotic. These are often relatively minor issues. It also has some advantages over log-link lognormal regression (taking logs and fitting an ordinary linear regression model); one is that mean prediction is easy. This is often a telling advantage for me.
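To make the log-link gamma fit concrete, here is a minimal Python sketch (my own illustration, not from the answer) comparing a gamma GLM with a log link to the lognormal alternative of regressing log(y) on the same predictor. It assumes statsmodels is available; the exact name of the log-link class (Log vs log) varies a little between statsmodels versions, and the data and coefficients are made up.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 2, n)
X = sm.add_constant(x)

# gamma data with log link: log(mu) = 0.5 + 1.2*x, constant shape -> constant CV
shape = 5.0
mu = np.exp(0.5 + 1.2 * x)
y = rng.gamma(shape, mu / shape)  # mean mu, variance mu^2 / shape

# gamma GLM with log link
gamma_fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# lognormal alternative: ordinary least squares on log(y)
lognormal_fit = sm.OLS(np.log(y), X).fit()

print(gamma_fit.params)      # coefficients on the log scale
print(lognormal_fit.params)  # comparable scale, different distributional assumption

Both models put the linear predictor on the log scale, so the coefficients are directly comparable; the difference described above lies in the assumed shape of the distribution around that line (left-skew gamma on the log scale versus symmetric normal).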
1,299
When to use gamma GLMs?
That's a good question. In fact, why people don't use generalised linear models (GLMs) more is also a good question. Warning note: some people use GLM for the general linear model, which is not what is in mind here.

It does depend where you look. For example, gamma distributions have been popular in several of the environmental sciences for some decades, and so modelling with predictor variables too is a natural extension. There are many examples in hydrology and geomorphology, to name some fields in which I have strayed. It is hard to pin down quite when to use it beyond the rather empty answer of "whenever it works best". Given skewed positive data, I will often find myself trying gamma and lognormal models (in a GLM context, log link, normal or Gaussian family) and choosing which works better.

Gamma modelling remained quite difficult to do until fairly recently without writing a lot of code yourself, certainly as compared with, say, taking logs and applying linear regression. Even now, I'd guess that it isn't equally easy across all the major statistical software environments.

In explaining what is used and what is not used, despite merits and demerits, I think you always come down to precisely the kind of factors you identify: what is taught, what is in the literature that people read, what people hear talked about at work and at conferences. So, you need a kind of amateur sociology of science to explain it. Most people seem to follow straight and narrow paths within their own fields. Loosely, the larger the internal literature in any field on modelling techniques, the less inclined people in that field seem to be to try something different.
1,300
When to use gamma GLMs?
Gamma regression fits within the GLM framework, and so you can get many useful quantities for diagnostic purposes, such as deviance residuals, leverages, Cook's distance, and so on. They are perhaps not as nice as the corresponding quantities for log-transformed data.

One thing that gamma regression avoids compared to the lognormal is transformation bias. Jensen's inequality implies that the predictions from lognormal regression will be systematically biased because it's modeling transformed data rather than the transformed expected value.

Also, gamma regression (like other models for nonnegative data) can cope with a broader array of data than the lognormal, because it can have a mode at 0 (as with the exponential distribution, which is in the gamma family), something that is impossible for the lognormal.

I have read suggestions that using the Poisson likelihood as a quasi-likelihood is more stable (the gamma and the Poisson are conjugates of each other). The quasi-Poisson also has the substantial benefit of being able to cope with exact 0 values, which trouble both the gamma and, especially, the lognormal.
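The transformation-bias point is easy to see in a small simulation; the sketch below is my own illustration (made-up coefficients, statsmodels assumed, and the log-link class name may differ slightly by version), not code from the answer. Naively exponentiating the fitted values from a regression on log(y) recovers something closer to the conditional median and so under-predicts the mean, while the gamma GLM models the mean of y directly.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
x = rng.uniform(0, 1, n)
X = sm.add_constant(x)

# lognormal data: log(y) = 1 + x + noise, so E[y | x] = exp(1 + x + sigma^2 / 2)
sigma = 0.8
y = np.exp(1.0 + x + rng.normal(0, sigma, n))

# lognormal approach: regress log(y), then back-transform the fitted values
ols_fit = sm.OLS(np.log(y), X).fit()
naive_pred = np.exp(ols_fit.fittedvalues)  # biased low for E[y | x] (Jensen's inequality)

# gamma GLM with log link models E[y | x] directly
glm_fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
glm_pred = glm_fit.fittedvalues

print(y.mean())           # sample mean of y
print(naive_pred.mean())  # systematically below the sample mean
print(glm_pred.mean())    # typically much closer to the sample mean

Because the gamma GLM's estimating equations only require the mean model to be right, its fitted means track E[y | x] here even though the data were generated as lognormal rather than gamma.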