Dataset columns:
idx: int64 (1 to 56k)
question: string (lengths 15 to 155)
answer: string (lengths 2 to 29.2k)
question_cut: string (lengths 15 to 100)
answer_cut: string (lengths 2 to 200)
conversation: string (lengths 47 to 29.3k)
conversation_cut: string (lengths 47 to 301)
2,001
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
In the context of binary classification: Accuracy - How many instances did the model label correctly? Recall - How often was the model able to find positives? Precision - How believable is the model when it says an instance is a positive?
2,002
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
The following article helps me a lot: https://medium.com/swlh/how-to-remember-all-these-classification-concepts-forever-761c065be33 (accuracy: the Double-A rule; precision: the Triple-P rule).
2,003
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
I created an interactive confusion table to help me understand the difference between these terms: http://zyxue.github.io/2018/05/15/on-the-p-value.html#interactive-confusion-table. I post the link here in case someone else finds it helpful, too.
2,004
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
I had a similar problem and came across Andrew Ng's slide, which I found helpful, although there are good answers here as well. As highlighted by other answers, the key is remembering the confusion matrix: positives are on the first row and negatives are on the bottom row. Andrew Ng's explanation: for both precision and recall, True Positive is on top of the division symbol (i.e. in the numerator). For Precision we divide this by the Predicted positives (the first row), and for reCAll we divide this by the ACtual positives (the first column). High precision would mean that if a patient is diagnosed with that rare disease, the patient probably does have it and it's an accurate diagnosis. High recall means that if there's a patient with that rare disease, the algorithm will probably correctly identify that they do have the disease. Microsoft's explanation: precision is the ability of a model to avoid labeling negative samples as positive (looking at the precision formula, we need the false positives to be zero, meaning do not tell people that they have the rare disease when they don't); recall is the ability of a model to detect all positive samples (looking at the recall formula, we need the false negatives to be zero, meaning do not miss patients who do have the rare disease).
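A minimal sketch of the formulas described above (not part of the original answer; the confusion-matrix counts are made up for illustration):

```python
# Minimal sketch: accuracy, precision and recall from confusion-matrix counts.
# TP/FP/FN/TN values below are made up for illustration.
TP, FP, FN, TN = 30, 10, 5, 55

accuracy = (TP + TN) / (TP + FP + FN + TN)   # all correct labels
precision = TP / (TP + FP)                   # TP over PREdicted positives (first row)
recall = TP / (TP + FN)                      # TP over ACtual positives (first column)

print(f"accuracy={accuracy:.2f}, precision={precision:.2f}, recall={recall:.2f}")
```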
2,005
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
I'll try and explain how I remember what recall is. Definition: Recall = true positives / all real-world positives, or Recall = true positives / (true positives + false negatives). Imagine an automobile company that wants to recall some of its cars for a manufacturing defect (hard to imagine, right?). This company obviously wants to get in all the cars that have the issue. That's our denominator: the total number of faulty cars. It may indeed get hold of all of them, by calling in every single car it ever manufactured. So here its recall would be perfect, a value of 1. There cannot be a false negative (part two of the denominator) since we labelled everything as positive! In this case, the owner is obviously a multi-billionaire who doesn't care about the cost of the recall exercise. But what if a corporate entity wanted to cut costs (again, just go with me on this) by getting only the faulty cars in? Well, then, they would want to figure out something like: let's only call in cars that were manufactured in January this year, as they have the maximum chance of this problem. This creates our false negatives, that is, cars that have the problem but do not meet the January criterion. Therefore the second part of the denominator (FN) now becomes non-zero, which then reduces the overall fraction. Key takeaway: it is the false negatives that fiddle with the recall metric. The mnemonic, if you really need it, is that cars get recalled. Hope this helps, somewhat.
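A quick made-up illustration of the point above: if 1,000 cars actually have the defect and the January-only rule brings in 700 of them, then TP = 700, FN = 300, and recall = 700/(700+300) = 0.7; calling in every car ever made drives FN to 0 and recall to 1, at a much higher cost.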
2,006
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
I use the word TARP to remember the difference between accuracy and precision. TARP: True=Accuracy, Relative=Precision. Accuracy measures how close a measurement is to the TRUE value, as the standard/accepted value is the TRUTH. Precision measures how close measurements are RELATIVE to each other, or how low the spread between various measurements is. Accuracy is truth, precision is relativity. Hope this helps.
2,007
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
Specificity tackles false positives: high specificity means a low false-positive rate (specificity = 1 - false-positive rate). Sensitivity tackles false negatives: high sensitivity means a low false-negative rate (sensitivity = 1 - false-negative rate). That's why specificity is also called the true negative rate, and sensitivity is also called the true positive rate. The reason why we don't usually call them by these names is probably that the term "true negative rate" can be misleading to laymen, as the denominator can be confusing: true negative rate = true negatives/actual negatives, NOT predicted negatives. The same goes for "true positive rate". P.S. The answer above is tightly connected with the mnemonics "spout" and "spin", but I think it makes the mnemonics more understandable, plus I don't need to remember two extra words.
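A small sketch of the complement relations stated above, with made-up confusion-matrix counts (not part of the original answer):

```python
# Minimal sketch (illustrative counts): specificity and sensitivity as
# complements of the false-positive and false-negative rates.
TP, FP, FN, TN = 30, 10, 5, 55

fpr = FP / (FP + TN)          # false-positive rate, denominator = actual negatives
fnr = FN / (FN + TP)          # false-negative rate, denominator = actual positives

specificity = TN / (TN + FP)  # true negative rate
sensitivity = TP / (TP + FN)  # true positive rate (= recall)

assert abs(specificity - (1 - fpr)) < 1e-12
assert abs(sensitivity - (1 - fnr)) < 1e-12
```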
2,008
What is the best way to remember the difference between sensitivity, specificity, precision, accuracy, and recall?
For further study, see this link: https://newbiettn.github.io/2016/08/30/precision-recall-sensitivity-specificity/
2,009
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
I said earlier that I would have a go at answering the question, so here goes... Jaynes was being a little naughty in his paper, in that a frequentist confidence interval isn't defined as an interval where we might expect the true value of the statistic to lie with high (specified) probability, so it isn't unduly surprising that contradictions arise if they are interpreted as if they were. The problem is that this is often the way confidence intervals are used in practice, as an interval highly likely to contain the true value (given what we can infer from our sample of data) is what we often want. The key issue for me is that when a question is posed, it is best to have a direct answer to that question. Whether Bayesian credible intervals are worse than frequentist confidence intervals depends on what question was actually asked. If the question asked was: (a) "Give me an interval where the true value of the statistic lies with probability p", then it appears a frequentist cannot actually answer that question directly (and this introduces the kind of problems that Jaynes discusses in his paper), but a Bayesian can, which is why a Bayesian credible interval is superior to the frequentist confidence interval in the examples given by Jaynes. But this is only because it is the "wrong question" for the frequentist. (b) "Give me an interval where, were the experiment repeated a large number of times, the true value of the statistic would lie within p*100% of such intervals", then the frequentist answer is just what you want. The Bayesian may also be able to give a direct answer to this question (although it may not simply be the obvious credible interval). Whuber's comment on the question suggests this is the case. So essentially, it is a matter of correctly specifying the question and properly interpreting the answer. If you want to ask question (a), then use a Bayesian credible interval; if you want to ask question (b), then use a frequentist confidence interval.
2,010
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
This is a "fleshed out" example given in Larry Wasserman's book All of Statistics, page 216 (12.8, Strengths and Weaknesses of Bayesian Inference). I basically provide what Wasserman doesn't in his book: 1) an explanation of what is actually happening, rather than a throwaway line; 2) the frequentist answer to the question, which Wasserman conveniently does not give; and 3) a demonstration that the equivalent confidence interval calculated using the same information suffers from the same problem. In this example, he states the following situation: an observation $X$ with sampling distribution $(X|\theta)\sim N(\theta,1)$ and prior distribution $\theta\sim N(0,1)$ (he actually uses a general $\tau^2$ for the prior variance, but his diagram specialises to $\tau^2=1$). He then goes on to show that the Bayesian 95% credible interval in this set-up eventually has 0% frequentist coverage as the true value of $\theta$ becomes arbitrarily large. For instance, he provides a graph of the coverage (p. 218), and checking by eye, when the true value of $\theta$ is 3 the coverage is about 35%. He then goes on to say: ...What should we conclude from all this? The important thing is to understand that frequentist and Bayesian methods are answering different questions. To combine prior beliefs with data in a principled way, use Bayesian inference. To construct procedures with guaranteed long run performance, such as confidence intervals, use frequentist methods... (p. 217) And then he moves on without any dissection or explanation of why the Bayesian method performed apparently so badly. Further, he does not give an answer from the frequentist approach, just a broad-brush statement about "the long run", a classical political tactic (emphasise your strengths and the other's weaknesses, but never compare like for like). I will show how the problem as stated ($\tau=1$) can be formulated in frequentist/orthodox terms, and then show that the resulting confidence interval gives precisely the same answer as the Bayesian one. Thus any defect in the Bayesian interval (real or perceived) is not corrected by using confidence intervals. Okay, so here goes. The first question I ask is: what state of knowledge is described by the prior $\theta\sim N(0,1)$? If one were "ignorant" about $\theta$, then the appropriate way to express this is $p(\theta)\propto 1$. Now suppose that we were ignorant and we observed $Y\sim N(\theta,1)$, independently of $X$. What would our posterior for $\theta$ be? $$p(\theta|Y)\propto p(\theta)p(Y|\theta)\propto \exp\Big(-\frac{1}{2}(Y-\theta)^2\Big)$$ Thus $(\theta|Y)\sim N(Y,1)$. This means that the prior distribution given in Wasserman's example is equivalent to having observed an iid copy of $X$ equal to $0$. Frequentist methods cannot deal with a prior, but the problem can be thought of as having made two observations from the sampling distribution, one equal to $0$ and one equal to $X$. Both problems are entirely equivalent, and we can actually give the frequentist answer to the question. Because we are dealing with a normal distribution with known variance, the mean is a sufficient statistic for constructing a confidence interval for $\theta$. The mean is equal to $\overline{x}=\frac{0+X}{2}=\frac{X}{2}$ and has sampling distribution $$(\overline{x}|\theta)\sim N\Big(\theta,\frac{1}{2}\Big)$$ Thus a $(1-\alpha)\%$ CI is given by $$\frac{1}{2}X\pm Z_{\alpha/2}\frac{1}{\sqrt{2}}$$ But, using the results of example 12.8 from Wasserman, the posterior $(1-\alpha)\%$ credible interval for $\theta$ is given by $$cX\pm \sqrt{c}Z_{\alpha/2},$$ where $c=\frac{\tau^{2}}{1+\tau^{2}}$. Thus, plugging in $\tau^{2}=1$ gives $c=\frac{1}{2}$ and the credible interval becomes $$\frac{1}{2}X\pm Z_{\alpha/2}\frac{1}{\sqrt{2}}$$ which is exactly the same as the confidence interval! So any defect in the coverage exhibited by the Bayesian method is not corrected by using the frequentist confidence interval! [If the frequentist chooses to ignore the prior, then to be a fair comparison the Bayesian should also ignore this prior and use the ignorance prior $p(\theta)\propto 1$, and the two intervals will still be equal: both $X \pm Z_{\alpha/2}$.] So what the hell is going on here? The problem is basically one of non-robustness of the normal sampling distribution, because the problem is equivalent to having already observed an iid copy $X=0$. If you have observed $0$, this is extremely unlikely to have occurred if the true value is $\theta=4$ (the probability that $X\leq 0$ when $\theta=4$ is 0.000032). This explains why the coverage is so bad for large "true values": they effectively make the implicit observation contained in the prior an outlier. In fact you can show that this example is basically equivalent to showing that the arithmetic mean has an unbounded influence function. Generalisation. Now some people may say, "but you only considered $\tau=1$, which may be a special case". This is not true: any value of $\tau^2=\frac{1}{N}$ $(N=0,1,2,3,\dots)$ can be interpreted as observing $N$ iid copies of $X$ which were all equal to $0$, in addition to the $X$ of the question. The confidence interval will have the same "bad" coverage properties for large $\theta$. But this becomes increasingly unlikely if you keep observing values of $0$ (and no rational person would continue to worry about large $\theta$ when you keep seeing $0$).
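A quick numerical sketch of the claim above (my own check, not from the original answer): with $\tau^2=1$ the 95% credible interval $cX \pm \sqrt{c}\,Z_{\alpha/2}$ has $c=1/2$, coincides with the two-observation confidence interval, and its coverage at a fixed large true $\theta$ falls far below 95%.

```python
# Sketch: coverage of the tau^2 = 1 credible interval, which equals the
# confidence interval based on the two observations (0, X).
import numpy as np

rng = np.random.default_rng(0)
z = 1.959963984540054          # Z_{alpha/2} for a 95% interval
c = 0.5                        # tau^2 / (1 + tau^2) with tau^2 = 1

for theta in [0.0, 1.0, 3.0, 5.0]:
    X = rng.normal(theta, 1.0, size=200_000)
    lower, upper = c * X - np.sqrt(c) * z, c * X + np.sqrt(c) * z
    coverage = np.mean((lower <= theta) & (theta <= upper))
    print(f"true theta = {theta}: coverage = {coverage:.3f}")
```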
2,011
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
The problem starts with your sentence: Examples based on incorrect prior assumptions are not acceptable as they say nothing about the internal consistency of the different approaches. Yeah, well, how do you know your prior is correct? Take the case of Bayesian inference in phylogeny. The probability of at least one change is related to evolutionary time (branch length $t$) by the formula $$P=1-e^{-\frac{4}{3}ut}$$ with $u$ being the rate of substitution. Now you want to make a model of the evolution based on a comparison of DNA sequences. In essence, you try to estimate a tree that models the amount of change between the DNA sequences as closely as possible. The $P$ above is the chance of at least one change on a given branch. Evolutionary models describe the chances of change between any two nucleotides, and from these evolutionary models the estimation function is derived, either with $p$ as a parameter or with $t$ as a parameter. You have no sensible knowledge, so you choose a flat prior for $p$. This inherently implies an exponentially decreasing prior for $t$. (It becomes even more problematic if you want to set a flat prior on $t$: the implied prior on $p$ is strongly dependent on where you cut off the range of $t$.) In theory, $t$ can be infinite, but when you allow an infinite range the area under its density function equals infinity as well, so you have to define a truncation point for the prior. Now when you choose the truncation point sufficiently large, it is not difficult to prove that both ends of the credible interval rise, and at a certain point the true value is not contained in the credible interval any more. Unless you have a very good idea about the prior, Bayesian methods are not guaranteed to be equal to or superior to other methods. Reference: Joseph Felsenstein, Inferring Phylogenies, chapter 18. On a side note, I'm getting sick of this Bayesian/frequentist quarrel. They're both different frameworks, and neither is the Absolute Truth. The classical examples pro Bayesian methods invariably come from probability calculation, and not one frequentist will contradict them. The classical argument against Bayesian methods invariably involves the arbitrary choice of a prior. And sensible priors are definitely possible. It all boils down to the correct use of either method at the right time. I've seen very few arguments/comparisons where both methods were applied correctly. Assumptions of any method are very much underrated and far too often ignored. EDIT: to clarify, the problem lies in the fact that the estimate based on $p$ differs from the estimate based on $t$ in the Bayesian framework when working with uninformative priors (which is in a number of cases the only possible solution). This is not true in the ML framework for phylogenetic inference. It is not a matter of a wrong prior; it is inherent to the method.
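As a hedged aside (my own working, not spelled out in the original answer), the change-of-variables step behind "a flat prior on $p$ implies an exponentially decreasing prior on $t$" is: with $p = 1-e^{-\frac{4}{3}ut}$ and a flat prior $\pi(p)=1$ on $[0,1)$, the induced density on $t$ is $$\pi(t) = \pi(p)\left|\frac{dp}{dt}\right| = \frac{4}{3}u\,e^{-\frac{4}{3}ut}, \qquad t \geq 0,$$ i.e. an exponential prior with rate $\frac{4}{3}u$, which puts rapidly vanishing mass on large branch lengths.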
2,012
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
Keith Winstein, EDIT: Just to clarify, this answer describes the example given in Keith Winstein's answer on the king with the cruel statistical game. The Bayesian and frequentist answers both use the same information, which is to ignore the information on the number of fair and unfair coins when constructing the intervals. If this information is not ignored, the frequentist should use the integrated Beta-Binomial likelihood as the sampling distribution in constructing the confidence interval, in which case the Clopper-Pearson confidence interval is not appropriate and needs to be modified. A similar adjustment should occur in the Bayesian solution. EDIT: I have also clarified the initial use of the Clopper-Pearson interval. EDIT: alas, my alpha is the wrong way around, and my Clopper-Pearson interval is incorrect. My humblest apologies to @whuber, who correctly pointed this out, but with whom I initially disagreed and whom I ignored. The CI using the Clopper-Pearson method is very good. If you only get one observation, then the Clopper-Pearson interval can be evaluated analytically. Suppose the coin comes up as a "success" (heads); then you need to choose $\theta$ such that $$[Pr(Bi(1,\theta)\geq X)\geq\frac{\alpha}{2}] \cap [Pr(Bi(1,\theta)\leq X)\geq\frac{\alpha}{2}]$$ When $X=1$ these probabilities are $Pr(Bi(1,\theta)\geq 1)=\theta$ and $Pr(Bi(1,\theta)\leq 1)=1$, so the Clopper-Pearson CI implies $\theta\geq\frac{\alpha}{2}$ (and the trivially always true $1\geq\frac{\alpha}{2}$) when $X=1$. When $X=0$ these probabilities are $Pr(Bi(1,\theta)\geq 0)=1$ and $Pr(Bi(1,\theta)\leq 0)=1-\theta$, so the Clopper-Pearson CI implies $1-\theta \geq\frac{\alpha}{2}$, or $\theta\leq 1-\frac{\alpha}{2}$, when $X=0$. So for a 95% CI we get $[0.025,1]$ when $X=1$, and $[0,0.975]$ when $X=0$. Thus, one who uses the Clopper-Pearson confidence interval will never ever be beheaded: upon observing the interval, it is basically the whole parameter space. But the C-P interval is doing this by giving 100% coverage to a supposedly 95% interval! Basically, the frequentist "cheats" by giving a 95% confidence interval more coverage than he/she was asked to give (although who wouldn't cheat in such a situation? If it were me, I'd give the whole [0,1] interval). If the king asked for an exact 95% CI, this frequentist method would fail regardless of what actually happened (perhaps a better one exists?). What about the Bayesian interval (specifically the highest posterior density (HPD) Bayesian interval)? Because we know a priori that both heads and tails can come up, the uniform prior is a reasonable choice. This gives a posterior distribution of $(\theta|X)\sim Beta(1+X,2-X)$. Now all we need to do is create an interval with 95% posterior probability. Similar to the Clopper-Pearson CI, the cumulative Beta distribution is analytic here as well, so that $Pr(\theta \geq \theta^{e} | x=1) = 1-(\theta^{e})^{2}$ and $Pr(\theta \leq \theta^{e} | x=0) = 1-(1-\theta^{e})^{2}$. Setting these to 0.95 gives $\theta^{e}=\sqrt{0.05}\approx 0.224$ when $X=1$ and $\theta^{e}= 1-\sqrt{0.05}\approx 0.776$ when $X=0$. So the two credible intervals are $(0,0.776)$ when $X=0$ and $(0.224,1)$ when $X=1$. Thus the Bayesian will be beheaded for his HPD credible interval only in the case when he gets the bad coin and the bad coin comes up heads, which will occur with a chance of $\frac{1}{10^{12}+1}\times\frac{1}{10}\approx 0$. First observation: the Bayesian interval is smaller than the confidence interval. Another thing is that the Bayesian would be closer to the actual coverage stated, 95%, than the frequentist. In fact, the Bayesian is just about as close to the 95% coverage as one can get in this problem. And contrary to Keith's statement, if the bad coin is chosen, 10 Bayesians out of 100 will on average lose their heads (not all of them, because the bad coin must come up heads for the interval not to contain $0.1$). Interestingly, if the C-P interval for one observation were used repeatedly (so we have $N$ such intervals, each based on one observation), and the true proportion were anything between $0.025$ and $0.975$, then the coverage of the 95% CI would always be 100%, and not 95%! This clearly depends on the true value of the parameter! So this is at least one case where repeated use of a confidence interval does not lead to the desired level of confidence. To quote a genuine 95% confidence interval, then by definition there should be some cases (i.e. at least one) in which the observed interval does not contain the true value of the parameter. Otherwise, how can one justify the 95% tag? Would it not be just as valid or invalid to call it a 90%, 50%, 20%, or even 0% interval? I do not see how simply stating "it actually means 95% or more" without a complementary restriction is satisfactory, because the obvious mathematical solution is then the whole parameter space and the problem is trivial. Suppose I want a 50% CI? If it only bounds the false negatives, then the whole parameter space is a valid CI using only this criterion. Perhaps a better criterion is (and this is what I believe is implicit in Keith's definition) "as close to 95% as possible, without going below 95%". The Bayesian interval would have a coverage closer to 95% than the frequentist one (although not by much), and would not go under 95% in coverage ($100\%$ coverage when $X=0$, and $100\times\frac{10^{12}+\frac{9}{10}}{10^{12}+1}\% > 95\%$ coverage when $X=1$). In closing, it does seem a bit odd to ask for an interval of uncertainty, and then to evaluate that interval using the true value, which is what we were uncertain about. A "fairer" comparison, for both confidence and credible intervals, seems to me to be the truth of the statement of uncertainty given with the interval.
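A small numeric sanity check of the intervals quoted above (my own sketch; it just evaluates the closed forms given in this answer):

```python
# Sketch: the one-observation intervals quoted above.
# Frequentist (Clopper-Pearson style, alpha = 0.05, one Bernoulli draw):
alpha = 0.05
ci_x1 = (alpha / 2, 1.0)        # X = 1 (heads): theta >= alpha/2
ci_x0 = (0.0, 1 - alpha / 2)    # X = 0 (tails): theta <= 1 - alpha/2

# Bayesian one-sided 95% intervals under a uniform prior,
# posterior Beta(1+X, 2-X), using the closed forms in the answer:
cred_x1 = (0.05 ** 0.5, 1.0)        # X = 1: (sqrt(0.05), 1) ~ (0.224, 1)
cred_x0 = (0.0, 1 - 0.05 ** 0.5)    # X = 0: (0, 1 - sqrt(0.05)) ~ (0, 0.776)

print(ci_x1, ci_x0)       # (0.025, 1.0) (0.0, 0.975)
print(cred_x1, cred_x0)   # (~0.2236, 1.0) (0.0, ~0.7764)
```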
2,013
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
Frequentist confidence intervals bound the rate of false positives (Type I errors), and guarantee their coverage will be bounded below by the confidence parameter, even in the worst case. Bayesian credibility intervals don't. So if the thing you care about is false positives and you need to bound them, confidence intervals are the approach that you'll want to use. For example, let's say you have an evil king with a court of 100 courtiers and courtesans, and he wants to play a cruel statistical game with them. The king has a bag of a trillion fair coins, plus one unfair coin whose heads probability is 10%. He's going to perform the following game. First, he'll draw a coin uniformly at random from the bag. Then the coin will be passed around a room of 100 people, and each one will be forced to do an experiment on it, privately (say, flipping it ten times), and then each person will state a 95% uncertainty interval on what they think the coin's heads probability is. Anybody who gives an interval that represents a false positive -- i.e. an interval that doesn't cover the true value of the heads probability -- will be beheaded. If we wanted to express the /a posteriori/ probability distribution function of the coin's weight, then of course a credibility interval is what does that. The answer will always be the interval [0.5, 0.5] irrespective of outcome. Even if you flip zero heads or one head, you'll still say [0.5, 0.5], because it's a heck of a lot more probable that the king drew a fair coin and you had a one-in-1024 day getting ten tails in a row than that the king drew the unfair coin. So this is not a good idea for the courtiers and courtesans to use! Because when the unfair coin is drawn, the whole room (all 100 people) will be wrong and they'll all get beheaded. In this world where the most important thing is false positives, what we need is an absolute guarantee that the rate of false positives will be less than 5%, no matter which coin is drawn. Then we need to use a confidence interval, like Blyth-Still-Casella or Clopper-Pearson, that works and provides at least 95% coverage irrespective of the true value of the parameter, even in the worst case. If everybody uses this method instead, then no matter which coin is drawn, at the end of the day we can guarantee that the expected number of wrong people will be no more than five. So the point is: if your criterion requires bounding false positives (or equivalently, guaranteeing coverage), you gotta go with a confidence interval. That's what they do. Credibility intervals may be a more intuitive way of expressing uncertainty, and they may perform pretty well under a frequentist analysis, but they are not going to provide the guaranteed bound on false positives that you get when you ask for it. (Of course if you also care about false negatives, you'll need a method that makes guarantees about those too...)
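To put a number on "a heck of a lot more probable" (a back-of-the-envelope sketch, assuming the private experiment is ten flips, which is what the 1/1024 figure implies):

```python
# Sketch: posterior probability the drawn coin is fair after seeing
# zero heads in ten flips (assumes the experiment is ten flips).
n_fair, n_unfair = 10**12, 1
p_fair_prior = n_fair / (n_fair + n_unfair)
p_unfair_prior = n_unfair / (n_fair + n_unfair)

lik_fair = 0.5 ** 10          # zero heads from a fair coin
lik_unfair = 0.9 ** 10        # zero heads from the 10%-heads coin

posterior_fair = (p_fair_prior * lik_fair) / (
    p_fair_prior * lik_fair + p_unfair_prior * lik_unfair
)
print(posterior_fair)  # ~0.9999999996, so reporting [0.5, 0.5] is the rational bet
```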
2,014
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
In this answer I aim to describe the difference between confidence intervals and credible intervals in an intuitive way. I hope that this may help to understand (1) why/how credible intervals can be better than confidence intervals, and (2) on which conditions the credible interval depends and when it is not always better.

Credible intervals and confidence intervals are constructed in different ways and can be different; see also: The basic logic of constructing a confidence interval and If a credible interval has a flat prior, is a 95% confidence interval equal to a 95% credible interval?

In the question by probabilityislogic an example is given from Larry Wasserman, which was mentioned in the comments by suncoolsu: $$X \sim N(\theta,1) \quad \text{where} \quad \theta \sim N(0,\tau^2)$$ We could see each experiment with random values for $\theta$ and $X$ as a joint variable. This is plotted below for 20k simulated cases with $\tau=1$. The experiment can be considered as a joint random variable where the observation $X$ and the underlying unobserved parameter $\theta$ have the joint (bivariate) normal density $$f(x,\theta) = \frac{1}{2 \pi \tau} e^{-\frac{1}{2} \left((x-\theta)^2+ \frac{1}{\tau^2}\theta^2\right)}$$

Both the $\alpha \%$ confidence interval and the $\alpha \%$ credible interval draw boundaries in such a way that $\alpha \%$ of the mass of the density $f(\theta,X)$ falls inside the boundaries. How do they differ?

The credible interval draws boundaries by evaluating the $\alpha \%$ mass in a horizontal direction, such that for every fixed $X$ an $\alpha \%$ of the mass of the conditional density $$\theta_X \sim N(cX,c) \quad \text{with} \quad c=\frac{\tau^2}{\tau^2+1}$$ falls in between the boundaries.

The confidence interval draws boundaries by evaluating the $\alpha \%$ mass in a vertical direction, such that for every fixed $\theta$ an $\alpha \%$ of the mass of the conditional density $$X_\theta \sim N(\theta,1)$$ falls in between the boundaries.

What is different? The confidence interval is restricted in the way it draws the boundaries. It places these boundaries by considering the conditional distribution $X_\theta$, and it will cover $\alpha \%$ independently of what the true value of $\theta$ is (this independence is both the strength and the weakness of the confidence interval). The credible interval makes an improvement by including information about the marginal distribution of $\theta$, and in this way it is able to make smaller intervals without giving up the average coverage, which is still $\alpha \%$. (But it becomes less reliable, or fails, when the additional assumption about the prior is not true.) In the example the credible interval is narrower (its width is scaled by $\sqrt{c}$, since the posterior variance is $c = \frac{\tau^2}{\tau^2+1}$ instead of $1$), and the coverage is maintained, despite the smaller intervals, by shifting the intervals a bit towards $\theta = 0$, which has a larger probability of occurring (that is where the prior density concentrates).

Conclusion

We can say that*, if the assumptions are true, then for a given observation $X$ the credible interval will always perform better (or at least the same). But yes, the exception is the disadvantage of the credible interval (and the advantage of the confidence interval): the conditional coverage probability is not $\alpha \%$ for every true value of the parameter $\theta$; it depends on $\theta$. This is especially detrimental when the assumptions about the prior distribution of $\theta$ are not trustworthy.

*See also the two methods in the question The basic logic of constructing a confidence interval. In the image of my answer there it is illustrated that the confidence interval can place the boundaries, with respect to the posterior distribution for a given observation $X$, at different 'heights'. So it may not always select the shortest interval, and for each observation $X$ it may be possible to decrease the length of the interval by shifting the boundaries while enclosing the same $\alpha \%$ amount of probability mass.

For a given underlying parameter $\theta$ the roles are reversed, and it is the confidence interval that performs better (smaller interval in the vertical direction) than the credible interval. (Although this is not the performance we seek, because we are interested in intervals in the other direction: intervals of $\theta$ given $X$, not intervals of $X$ given $\theta$.)

About the exception

"Examples based on incorrect prior assumptions are not acceptable"

This exclusion of incorrect assumptions makes it a bit of a loaded question. Yes, given certain conditions, the credible interval is better than the confidence interval. But are those conditions practical? Both credible intervals and confidence intervals make statements about some probability, e.g. that in $\alpha \%$ of the cases the parameter is correctly covered. However, that "probability" is only a probability in the mathematical sense and relates to the specific case in which the underlying assumptions of the model are very trustworthy. If the assumptions are uncertain, then this uncertainty should propagate into the computed uncertainty/probability $\alpha \%$. So credible intervals and confidence intervals are in practice only appropriate when the assumptions are sufficiently trustworthy that the propagation of errors can be neglected. Credible intervals might in some cases be easier to compute, but the additional assumptions make credible intervals (in some way) more difficult to apply than confidence intervals, because more assumptions are being made, and this will influence the 'true' value of $\alpha \%$.

Additional

This question relates a bit to Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? See in the image below the conditional probability/chance of containing the parameter for this particular example.

The $\alpha \%$ confidence interval will correctly contain the true parameter $\alpha \%$ of the time, for each parameter $\theta$. But for a given observation $X$ the $\alpha \%$ confidence interval will not contain the true parameter $\alpha \%$ of the time. (Type I errors occur at the same rate, $(100-\alpha) \%$, for every value of the underlying parameter $\theta$; but for different observations $X$ the type I error rate will be different. For some observations the confidence interval is more/less often wrong than for other observations.)

The $\alpha \%$ credible interval will correctly contain the true parameter $\alpha \%$ of the time, for each observation $X$. But for a given parameter $\theta$ the $\alpha \%$ credible interval will not contain the true parameter $\alpha \%$ of the time. (Type I errors occur at the same rate, $(100-\alpha) \%$, for every value of the observation $X$; but for different underlying parameters $\theta$ the type I error rate will be different. For some underlying parameters the credible interval is more/less often wrong than for other underlying parameters.)

Code for computing both images:

    # parameters
    set.seed(1)
    n <- 2*10^4
    perc = 0.95
    za <- qnorm(0.5+perc/2,0,1)

    # model
    tau <- 1
    theta <- rnorm(n,0,tau)
    X <- rnorm(n,theta,1)

    # plot scatterdiagram of distribution
    plot(theta,X, xlab=expression(theta), ylab = "observed X",
         pch=21,col=rgb(0,0,0,0.05),bg=rgb(0,0,0,0.05),cex=0.25,
         xlim = c(-5,5),ylim=c(-5,5) )

    # confidence interval
    t <- seq(-6,6,0.01)
    lines(t,t-za*1,col=2)
    lines(t,t+za*1,col=2)

    # credible interval
    obsX <- seq(-6,6,0.01)
    lines(obsX*tau^2/(tau^2+1)+za*sqrt(tau^2/(tau^2+1)),obsX,col=3)
    lines(obsX*tau^2/(tau^2+1)-za*sqrt(tau^2/(tau^2+1)),obsX,col=3)

    # adding contours for joint density
    conX <- seq(-5,5,0.1)
    conT <- seq(-5,5,0.1)
    ln <- length(conX)
    z <- matrix(rep(0,ln^2),ln)
    for (i in 1:ln) {
      for (j in 1:ln) {
        z[i,j] <- dnorm(conT[i],0,tau)*dnorm(conX[j],conT[i],1)
      }
    }
    contour(conT,conX,-log(z), add=TRUE, levels = 1:10 )
    legend(-5,5,c("confidence interval","credible interval","log joint density"),
           lty=1, col=c(2,3,1), lwd=c(1,1,0.5),cex=0.7)
    title(expression(atop("scatterplot and contourplot of",
          paste("X ~ N(",theta,",1) and ",theta," ~ N(0,",tau^2,")"))))

    # express success rate as function of X and theta
    # Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
    layout(matrix(c(1:2),1))
    par(mar=c(4,4,2,2),mgp=c(2.5,1,0))

    pX <- seq(-5,5,0.1)
    pt <- seq(-5,5,0.1)
    cc <- tau^2/(tau^2+1)

    plot(-10,-10, xlim=c(-5,5),ylim = c(0,1),
         xlab = expression(theta), ylab = "chance of containing the parameter")
    lines(pt,pnorm(pt/cc+za/sqrt(cc),pt,1)-pnorm(pt/cc-za/sqrt(cc),pt,1),col=3)
    lines(pt,pnorm(pt+za,pt,1)-pnorm(pt-za,pt,1),col=2)
    title(expression(paste("for different values ", theta)))
    legend(-3.8,0.15, c("confidence interval","credible interval"),
           lty=1, col=c(2,3),cex=0.7, box.col="white")

    plot(-10,-10, xlim=c(-5,5),ylim = c(0,1),
         xlab = expression(X), ylab = "chance of containing the parameter")
    lines(pX,pnorm(pX*cc+za*sqrt(cc),pX*cc,sqrt(cc))-pnorm(pX*cc-za*sqrt(cc),pX*cc,sqrt(cc)),col=3)
    lines(pX,pnorm(pX+za,pX*cc,sqrt(cc))-pnorm(pX-za,pX*cc,sqrt(cc)),col=2)
    title(expression(paste("for different values ", X)))

    text(0,0.3, c("95% Confidence Interval\ndoes not imply\n95% chance of containing the parameter"),
         cex= 0.7,pos=1)
    library(shape)
    Arrows(-3,0.3,-3.9,0.38,arr.length=0.2)
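As a quick numerical check (my own addition, not part of the original code), the marginal coverage of both intervals can be computed directly from the theta and X simulated above; both are close to 95%, while conditioning on an extreme theta shows the difference:

    # uses theta, X, za and cc from the code above
    mean(abs(X - theta) <= za)                  # confidence interval X ± za: about 0.95
    mean(abs(theta - cc*X) <= za*sqrt(cc))      # credible interval cc*X ± za*sqrt(cc): about 0.95
    # but conditional on an extreme theta the credible interval under-covers:
    mean((abs(theta - cc*X) <= za*sqrt(cc))[abs(theta) > 2])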
2,015
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
"Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals?" I'm going to say "any paper in experimental science". There's an XKCD cartoon that has made the rounds here before, which I've edited slightly:

Okay, the stick figure on the left is nuts, and the one on the right is saner. But I want to focus on a different question: if this experiment were published, what would you want to see in the paper? You don't want the opinion of either of these guys. What you want is the information in the first panel, so you can form your own opinion. That's what the confidence interval tells you: the Universe—which we expect to lie to us about 5% of the time—just told us that the answer is somewhere in here. That isn't what you really want to know; what you really want to know is something like the credible interval. But the confidence interval is what you want the paper to tell you: it's a concise summary of the result of this particular experiment.

The calculation of the confidence interval still incorporates assumptions that may be wrong, invalidating it. But they're assumptions about the reliability of the equipment, the quality of the randomization, and other things that the experimenter can be expected to know better than you. Human bias can still creep in, but it's unavoidable that you have to trust the experimenter about these sorts of things.

If you want to make a decision on the basis of this data, then you shouldn't treat the confidence interval as a credible interval, as the guy on the left does. You probably should do a Bayesian analysis. Proponents of Bayesianism often talk about winning bets, because Bayesian inference is good for that. But not everything is about winning bets.
2,016
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
"Are there examples where the frequentist confidence interval is clearly superior to the Bayesian credible interval" (as per the challenge implicitly made by Jaynes)? Here is an example: the true $\theta$ equals $10$, but the prior on $\theta$ is concentrated about $1$. I am doing statistics for a clinical trial, and $\theta$ measures the risk of death, so the Bayesian result is a disaster, isn't it? More seriously, what is "the" Bayesian credible interval? In other words: what is the selected prior? Maybe Jaynes proposed an automatic way to select a prior, I don't know! Bernardo proposed a "reference prior" to be used as a standard for scientific communication [and even a "reference credible interval" (Bernardo - objective credible regions)]. Assuming this is "the" Bayesian approach, the question now is: when is one interval superior to another? The frequentist properties of the Bayesian interval are not always optimal, but neither are the Bayesian properties of "the" frequentist interval (and by the way, what is "the" frequentist interval?)
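A hypothetical numerical sketch of this scenario (the normal-normal model, the prior standard deviation and the sample size are my own illustrative choices, not taken from the answer): the strongly informative prior pulls the credible interval toward 1, while the usual confidence interval covers the true value 10.

    # true theta = 10, prior concentrated about 1, normal data with known sd = 1
    set.seed(1)
    theta_true <- 10
    n <- 5
    x <- rnorm(n, theta_true, 1)

    # frequentist 95% CI for the mean (known sd = 1)
    ci <- mean(x) + c(-1, 1) * qnorm(0.975) / sqrt(n)

    # conjugate posterior under a strongly informative prior N(1, 0.1^2)
    m0 <- 1; s0 <- 0.1
    post_var  <- 1 / (1/s0^2 + n/1)
    post_mean <- post_var * (m0/s0^2 + sum(x)/1)
    cri <- post_mean + c(-1, 1) * qnorm(0.975) * sqrt(post_var)

    ci    # covers 10
    cri   # pulled almost all the way to the prior mean, nowhere near 10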
2,017
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
The second example in this thread compares a frequentist confidence interval to two different posterior intervals based on two different non-informative priors. Despite using all the information in the likelihood, both credible intervals can be considered inferior because: (i) neither credible interval provides a long-run guarantee of covering the unknown fixed true parameter; (ii) it is not obvious which non-informative prior one should choose when constructing the posterior if the experimenter truly has no prior knowledge; (iii) the posterior probability statements are not verifiable statements about the actual fixed parameter, the hypothesis, or the experiment.

Both the credible interval and the confidence interval attempt to address the request, "Give me a set of plausible true values of the parameter, given the observed data." In his answer to the original post, Dikran Marsupial provides the following:

(a) "Give me an interval where the true value of the statistic lies with probability p", then it appears a frequentist cannot actually answer that question directly (and this introduces the kind of problems that Jaynes discusses in his paper), but a Bayesian can, which is why a Bayesian credible interval is superior to the frequentist confidence interval in the examples given by Jaynes. But this is only because it is the "wrong question" for the frequentist.

(b) "Give me an interval where, were the experiment repeated a large number of times, the true value of the statistic would lie within p*100% of such intervals", then the frequentist answer is just what you want. The Bayesian may also be able to give a direct answer to this question (although it may not simply be the obvious credible interval). Whuber's comment on the question suggests this is the case.

Dikran Marsupial's response is wrong for two reasons. The first is that neither the credible interval nor the confidence interval is a set of statistic values; each is a set in the parameter space. Secondly, if we ignore this mistake and consider both the confidence and the credible interval as residing in the parameter space, it is misleading in (a) to suggest that a Bayesian approach can provide "an interval where the true parameter lies with 100p% probability." Under a Bayesian approach it is more transparent to say "a set of values that has 100p% belief units (Bayesian probability)." We must make it clear that this is not a verifiable statement about the actual fixed parameter, the hypothesis, or the experiment.

The confidence interval for a single observed experimental result is considered plausible due to its long-run performance over repeated experiments. This coverage probability is a statement about the experiment in relation to the unknown fixed true parameter. If the prior distribution is chosen in such a way that the posterior is dominated by the likelihood, Bayesian belief is more objectively viewed as a type of confidence based on the frequency probability of the experiment.
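Point (i) can be probed numerically. Below is a small sketch of my own, in a simpler setting than the example the answer refers to (a binomial proportion with n = 20): it computes the exact frequentist coverage of equal-tailed 95% posterior intervals under a uniform Beta(1,1) prior and a Jeffreys Beta(1/2,1/2) prior. The coverage varies with the true p and falls below 95% for some values, so there is no worst-case guarantee.

    # exact coverage of equal-tailed 95% credible intervals for a binomial proportion
    coverage <- function(p, n, a, b) {
      x  <- 0:n
      lo <- qbeta(0.025, a + x, b + n - x)
      hi <- qbeta(0.975, a + x, b + n - x)
      sum(dbinom(x, n, p) * (lo <= p & p <= hi))
    }
    p_grid <- seq(0.01, 0.2, by = 0.01)
    round(cbind(p        = p_grid,
                uniform  = sapply(p_grid, coverage, n = 20, a = 1,   b = 1),
                jeffreys = sapply(p_grid, coverage, n = 20, a = 0.5, b = 0.5)), 3)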
2,018
What is the reason that a likelihood function is not a pdf?
We'll start with two definitions: A probability density function (pdf) is a non-negative function that integrates to $1$. The likelihood is defined as the joint density of the observed data as a function of the parameter. But, as pointed out by the reference to Lehmann made by @whuber in a comment below, the likelihood function is a function of the parameter only, with the data held as a fixed constant. So the fact that it is a density as a function of the data is irrelevant. Therefore, the likelihood function is not a pdf because its integral with respect to the parameter does not necessarily equal 1 (and may not be integrable at all, actually, as pointed out by another comment from @whuber). To see this, we'll use a simple example. Suppose you have a single observation, $x$, from a ${\rm Bernoulli}(\theta)$ distribution. Then the likelihood function is $$ L(\theta) = \theta^{x} (1 - \theta)^{1-x} $$ It is a fact that $\int_{0}^{1} L(\theta) d \theta = 1/2$. Specifically, if $x = 1$, then $L(\theta) = \theta$, so $$\int_{0}^{1} L(\theta) d \theta = \int_{0}^{1} \theta \ d \theta = 1/2$$ and a similar calculation applies when $x = 0$. Therefore, $L(\theta)$ cannot be a density function. Perhaps even more important than this technical example showing why the likelihood isn't a probability density is to point out that the likelihood is not the probability of the parameter value being correct or anything like that - it is the probability (density) of the data given the parameter value, which is a completely different thing. Therefore one should not expect the likelihood function to behave like a probability density.
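As a quick numerical confirmation of the two integrals (a small addition of mine, using base R's integrate()):

    # integral of the Bernoulli likelihood over theta, for each possible observation
    integrate(function(theta) theta^1 * (1 - theta)^0, 0, 1)$value   # x = 1: 0.5
    integrate(function(theta) theta^0 * (1 - theta)^1, 0, 1)$value   # x = 0: 0.5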
2,019
What is the reason that a likelihood function is not a pdf?
Okay, but the likelihood function is the joint probability density of the observed data given the parameter $\theta$. As a function of $\theta$ it can be normalized to form a probability density function, provided that its integral over $\theta$ is finite. So it is essentially like a pdf, up to a normalizing constant.
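To make the normalization explicit for the Bernoulli example above (my own small sketch; with a flat prior on [0, 1], the normalized likelihood is exactly the Beta(2, 1) posterior):

    # normalize L(theta) = theta (the x = 1 case) into a density on [0, 1]
    L     <- function(theta) theta
    const <- integrate(L, 0, 1)$value           # = 1/2
    post  <- function(theta) L(theta) / const   # = 2*theta, i.e. a Beta(2, 1) density
    integrate(post, 0, 1)$value                 # integrates to 1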
2,020
What is the reason that a likelihood function is not a pdf?
The likelihood is defined as $\mathcal{L}(\theta; x_1,\dots,x_n) = f(x_1,\dots,x_n; \theta)$. If $f(x; \theta)$ is a probability mass function, the likelihood is always at most one; but if $f(x; \theta)$ is a probability density function, the likelihood can be greater than one, since densities can be greater than one. When the observations are treated as iid, $$\mathcal{L}(\theta; x_1,\dots,x_n) = f(x_1,\dots,x_n; \theta) = \prod_{j} f(x_j; \theta)$$ Let's see its original form: by Bayes' theorem, $$f(x_1,\dots,x_n \mid \theta) = \frac{f(\theta \mid x_1,\dots,x_n)\, f(x_1,\dots,x_n)}{f(\theta)}$$ that is, $\text{likelihood} = \frac{\text{posterior} \times \text{evidence}}{\text{prior}}$. Notice that the maximum likelihood estimate treats the ratio of evidence to prior as a constant (see the answers to this question), which omits the prior beliefs. The likelihood is positively related to the posterior, which is based on the estimated parameters. The posterior may be a pdf in $\theta$, but the likelihood $\mathcal{L}$ alone is not, since it lacks the normalizing factor involving the prior and the evidence, which is often intractable. For example, suppose I don't know the mean and standard deviation of a Gaussian distribution and want to estimate them using many observations from that distribution. I first initialize the mean and standard deviation randomly (which defines a Gaussian distribution), then I take one case, plug it into the estimated distribution, and get a density value from it. I continue to put cases in, get many density values, and multiply these values to get a score. This kind of score is the likelihood; it is hardly a probability from any particular pdf.
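The "densities can be greater than one" point is easy to verify numerically (my own two-line example):

    # a likelihood value above 1: density of a single observation x = 0 under N(0, 0.1^2)
    dnorm(0, mean = 0, sd = 0.1)      # about 3.99, a perfectly valid likelihood value
    # whereas for a pmf the likelihood never exceeds 1:
    dbinom(3, size = 10, prob = 0.3)  # about 0.267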
2,021
What is the reason that a likelihood function is not a pdf?
I'm not a statistician, but my understanding is that while the likelihood function itself is not a pdf with respect to the parameter(s), it is directly related to the posterior density by Bayes' rule. The likelihood function, $P(X \mid \theta)$, and the posterior distribution, $f(\theta \mid X)$, are tightly linked; not "a completely different thing" at all.
2,022
What is the reason that a likelihood function is not a pdf?
Let's make something clear: likelihood is completely different from probability! When we want to calculate the probability of, for example, getting x = 0 when x comes from a normal distribution with mu = 0 and sigma = 1, we need to define a bin, like 0.01, and integrate the probability density function (the pdf, here the normal distribution) over it. So we compute the integral of the normal density and use, for instance, -0.01 and 0.01 as the limits of the integral. But for the likelihood, we just calculate the point value of the pdf. This is totally different from the probability of x being (near) 0: we just plug the input x = 0 into the function and take the output. For instance, by definition, the function y = 1 for x in (0, 1) integrates to 1, so it can be a pdf, but the value at each x in (0, 1) equals 1, which is not the probability of those points, just the value of the pdf at those points. Now we get to why the likelihood is used in modelling: when we maximize the likelihood function for a set of observed data with respect to an assumed model (a function with parameters to be found), we are in effect making those data as probable as possible under that model. We work with the likelihood, but in the end the probability of the observed data under the fitted model is made as large as possible as well. Edit 1: for the second comment, you're right, my bad; I meant the probability of the data that we collected under the fitted model. The whole purpose of modelling is to fit a chosen model to a set of data in such a way that the probability of those data under that model is maximized, and maximizing the likelihood is how we do that.
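The point-value-versus-bin distinction above, in two lines of R (my own illustration):

    # density value at a point vs. probability of a small bin around it, for N(0, 1)
    dnorm(0)                    # about 0.3989: value of the pdf at x = 0 (a likelihood, not a probability)
    pnorm(0.01) - pnorm(-0.01)  # about 0.0080: probability that x falls in the bin (-0.01, 0.01)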
2,023
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand
I would recommend Hanley’s & McNeil’s 1982 paper ‘The meaning and use of the area under a receiver operating characteristic (ROC) curve’.

Example

They have the following table of disease status and test result (corresponding to, for example, the estimated risk from a logistic model). The first number on the right is the number of patients with true disease status ‘normal’ and the second number is the number of patients with true disease status ‘abnormal’:

(1) Definitely normal: 33/3
(2) Probably normal: 6/2
(3) Questionable: 6/2
(4) Probably abnormal: 11/11
(5) Definitely abnormal: 2/33

So there are in total 58 ‘normal’ patients and 51 ‘abnormal’ ones. We see that when the predictor is 1, ‘Definitely normal’, the patient is usually normal (true for 33 of the 36 patients), and when it is 5, ‘Definitely abnormal’, the patient is usually abnormal (true for 33 of the 35 patients), so the predictor makes sense. But how should we judge a patient with a score of 2, 3, or 4? Where we set our cutoff for judging a patient as abnormal or normal determines the sensitivity and specificity of the resulting test.

Sensitivity and specificity

We can calculate the estimated sensitivity and specificity for different cutoffs. (I’ll just write ‘sensitivity’ and ‘specificity’ from now on, letting the estimated nature of the values be implicit.) If we choose our cutoff so that we classify all the patients as abnormal, no matter what their test result says (i.e., we choose the cutoff 1+), we will get a sensitivity of 51/51 = 1. The specificity will be 0/58 = 0. Doesn’t sound so good.

OK, so let’s choose a less strict cutoff. We only classify patients as abnormal if they have a test result of 2 or higher. We then miss 3 abnormal patients, and have a sensitivity of 48/51 = 0.94. But we have a much increased specificity, of 33/58 = 0.57. We can now continue this, choosing various cutoffs (3, 4, 5, >5). (In the last case, we won’t classify any patients as abnormal, even if they have the highest possible test score of 5.)

The ROC curve

If we do this for all possible cutoffs, and then plot the sensitivity against 1 minus the specificity, we get the ROC curve.
We can use the following R code:

    # Data
    norm = rep(1:5, times=c(33,6,6,11,2))
    abnorm = rep(1:5, times=c(3,2,2,11,33))
    testres = c(abnorm,norm)
    truestat = c(rep(1,length(abnorm)), rep(0,length(norm)))

    # Summary table (Table I in the paper)
    ( tab=as.matrix(table(truestat, testres)) )

The output is:

            testres
    truestat  1  2  3  4  5
           0 33  6  6 11  2
           1  3  2  2 11 33

We can calculate various statistics:

    ( tot=colSums(tab) )                            # Number of patients w/ each test result
    ( truepos=unname(rev(cumsum(rev(tab[2,])))) )   # Number of true positives
    ( falsepos=unname(rev(cumsum(rev(tab[1,])))) )  # Number of false positives
    ( totpos=sum(tab[2,]) )                         # The total number of positives (one number)
    ( totneg=sum(tab[1,]) )                         # The total number of negatives (one number)
    (sens=truepos/totpos)                           # Sensitivity (fraction true positives)
    (omspec=falsepos/totneg)                        # 1 − specificity (false positives)
    sens=c(sens,0); omspec=c(omspec,0)              # Numbers when we classify all as normal

And using this, we can plot the (estimated) ROC curve:

    plot(omspec, sens, type="b", xlim=c(0,1), ylim=c(0,1), lwd=2,
         xlab="1 − specificity", ylab="Sensitivity")   # perhaps with xaxs="i"
    grid()
    abline(0,1, col="red", lty=2)

Manually calculating the AUC

We can very easily calculate the area under the ROC curve, using the formula for the area of a trapezoid:

    height = (sens[-1]+sens[-length(sens)])/2
    width = -diff(omspec)   # = diff(rev(omspec))
    sum(height*width)

The result is 0.8931711.

A concordance measure

The AUC can also be seen as a concordance measure. If we take all possible pairs of patients where one is normal and the other is abnormal, we can calculate how frequently it’s the abnormal one that has the highest (most ‘abnormal-looking’) test result (if they have the same value, we count this as ‘half a victory’):

    o = outer(abnorm, norm, "-")
    mean((o>0) + .5*(o==0))

The answer is again 0.8931711, the area under the ROC curve. This will always be the case.

A graphical view of concordance

As pointed out by Harrell in his answer, this also has a graphical interpretation. Let’s plot test score (risk estimate) on the y-axis and true disease status on the x-axis (here with some jittering, to show overlapping points):

    plot(jitter(truestat,.2), jitter(testres,.8), las=1,
         xlab="True disease status", ylab="Test score")

Let us now draw a line between each point on the left (a ‘normal’ patient) and each point on the right (an ‘abnormal’ patient). The proportion of lines with a positive slope (i.e., the proportion of concordant pairs) is the concordance index (flat lines count as ‘50% concordance’). It’s a bit difficult to visualise the actual lines for this example, due to the number of ties (equal risk scores), but with some jittering and transparency we can get a reasonable plot:

    d = cbind(x_norm=0, x_abnorm=1, expand.grid(y_norm=norm, y_abnorm=abnorm))
    library(ggplot2)
    ggplot(d, aes(x=x_norm, xend=x_abnorm, y=y_norm, yend=y_abnorm)) +
      geom_segment(colour="#ff000006", position=position_jitter(width=0, height=.1)) +
      xlab("True disease status") + ylab("Test\nscore") +
      theme_light() + theme(axis.title.y=element_text(angle=0))

We see that most of the lines slope upwards, so the concordance index will be high. We also see the contribution to the index from each type of observation pair. Most of it comes from normal patients with a risk score of 1 paired with abnormal patients with a risk score of 5 (1–5 pairs), but quite a lot also comes from 1–4 pairs and 4–5 pairs.
And it’s very easy to calculate the actual concordance index based on the slope definition:

    d = transform(d, slope=(y_norm-y_abnorm)/(x_norm-x_abnorm))
    mean((d$slope > 0) + .5*(d$slope==0))

The answer is again 0.8931711, i.e., the AUC.

The Wilcoxon–Mann–Whitney test

There is a close connection between the concordance measure and the Wilcoxon–Mann–Whitney test. Actually, the latter tests if the probability of concordance (i.e., that it’s the abnormal patient in a random normal–abnormal pair that will have the most ‘abnormal-looking’ test result) is exactly 0.5. And its test statistic is just a simple transformation of the estimated concordance probability:

    > ( wi = wilcox.test(abnorm,norm) )
            Wilcoxon rank sum test with continuity correction

    data:  abnorm and norm
    W = 2642, p-value = 1.944e-13
    alternative hypothesis: true location shift is not equal to 0

The test statistic (W = 2642) counts the number of concordant pairs. If we divide it by the number of possible pairs, we get a familiar number:

    w = wi$statistic
    w/(length(abnorm)*length(norm))

Yes, it’s 0.8931711, the area under the ROC curve.

Easier ways to calculate the AUC (in R)

But let’s make life easier for ourselves. There are various packages that calculate the AUC for us automatically.

The Epi package

The Epi package creates a nice ROC curve with various statistics (including the AUC) embedded:

    library(Epi)
    ROC(testres, truestat)   # also try adding plot="sp"

The pROC package

I also like the pROC package, since it can smooth the ROC estimate (and calculate an AUC estimate based on the smoothed ROC). (The red line is the original ROC, and the black line is the smoothed ROC. Also note the default 1:1 aspect ratio. It makes sense to use this, since both the sensitivity and the specificity have a 0–1 range.) The estimated AUC from the smoothed ROC is 0.9107, similar to, but slightly larger than, the AUC from the unsmoothed ROC (if you look at the figure, you can easily see why it’s larger). (Though we really have too few possible distinct test result values to calculate a smooth AUC.)

The rms package

Harrell’s rms package can calculate various related concordance statistics using the rcorr.cens() function. The C Index in its output is the AUC:

    > library(rms)
    > rcorr.cens(testres,truestat)[1]
      C Index
    0.8931711

The caTools package

Finally, we have the caTools package and its colAUC() function. It has a few advantages over other packages (mainly speed and the ability to work with multi-dimensional data – see ?colAUC) that can sometimes be helpful. But of course it gives the same answer as we have calculated over and over:

    library(caTools)
    colAUC(testres, truestat, plotROC=TRUE)
                 [,1]
    0 vs. 1 0.8931711

Final words

Many people seem to think that the AUC tells us how ‘good’ a test is. And some people think that the AUC is the probability that the test will correctly classify a patient. It is not. As you can see from the above example and calculations, the AUC tells us something about a family of tests, one test for each possible cutoff. And the AUC is calculated based on cutoffs one would never use in practice. Why should we care about the sensitivity and specificity of ‘nonsensical’ cutoff values? Still, that’s what the AUC is (partially) based on. (Of course, if the AUC is very close to 1, almost every possible test will have great discriminatory power, and we would all be very happy.)
The ‘random normal–abnormal’ pair interpretation of the AUC is nice (and can be extended, for instance to survival models, where we see if its the person with the highest (relative) hazard that dies the earliest). But one would never use it in practice. It’s a rare case where one knows one has one healthy and one ill person, doesn’t know which person is the ill one, and must decide which of them to treat. (In any case, the decision is easy; treat the one with the highest estimated risk.) So I think studying the actual ROC curve will be more useful than just looking at the AUC summary measure. And if you use the ROC together with (estimates of the) costs of false positives and false negatives, along with base rates of what you’re studying, you can get somewhere. Also note that the AUC only measures discrimination, not calibration. That is, it measures whether you can discriminate between two persons (one ill and one healthy), based on the risk score. For this, it only looks at relative risk values (or ranks, if you will, cf. the Wilcoxon–Mann–Whitney test interpretation), not the absolute ones, which you should be interested in. For example, if you divide each risk estimate from your logistic model by 2, you will get exactly the same AUC (and ROC). When evaluating a risk model, calibration is also very important. To examine this, you will look at all patients with a risk score of around, e.g., 0.7, and see if approximately 70% of these actually were ill. Do this for each possible risk score (possibly using some sort of smoothing / local regression). Plot the results, and you’ll get a graphical measure of calibration. If have a model with both good calibration and good discrimination, then you start to have good model. :)
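To make the calibration check described above concrete, here is a minimal R sketch (my own illustration, not part of the original answer): it simulates risks and outcomes and smooths the observed outcome against the predicted risk with loess; the variable names and the simulation settings are arbitrary assumptions.

# Hypothetical illustration: a simple calibration curve
set.seed(1)
n    <- 2000
x    <- rnorm(n)
risk <- plogis(-0.5 + 1.2*x)   # predicted risks (here equal to the true risks)
ill  <- rbinom(n, 1, risk)     # observed disease status

cal <- loess(ill ~ risk)       # local regression of outcome on predicted risk
ord <- order(risk)
plot(risk[ord], predict(cal)[ord], type="l", lwd=2, xlim=c(0,1), ylim=c(0,1),
     xlab="Predicted risk", ylab="Observed proportion ill")
abline(0, 1, col="red", lty=2) # perfect calibration

A well-calibrated model gives a curve close to the diagonal; for a model fitted with the rms package, calibrate() gives an optimism-corrected version of the same idea.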
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand
I would recommend Hanley’s & McNeil’s 1982 paper ‘The meaning and use of the area under a receiver operating characteristic (ROC) curve’. Example They have the following table of disease status and te
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand I would recommend Hanley’s & McNeil’s 1982 paper ‘The meaning and use of the area under a receiver operating characteristic (ROC) curve’. Example They have the following table of disease status and test result (corresponding to, for example, the estimated risk from a logistic model). The first number on the right is the number of patients with true disease status ‘normal’ and the second number is the number of patients with true disease status ‘abnormal’: (1) Definitely normal: 33/3 (2) Probably normal: 6/2 (3) Questionable: 6/2 (4) Probably abnormal: 11/11 (5) Definitely abnormal: 2/33 So there are in total 58 ‘normal’ patients and ‘51’ abnormal ones. We see that when the predictor is 1, ‘Definitely normal’, the patient is usually normal (true for 33 of the 36 patients), and when it is 5, ‘Definitely abnormal’ the patients is usually abnormal (true for 33 of the 35 patients), so the predictor makes sense. But how should we judge a patient with a score of 2, 3, or 4? What we set our cutoff for judging a patients as abnormal or normal to determines the sensitivity and specificity of the resulting test. Sensitivity and specificity We can calculate the estimated sensitivity and specificity for different cutoffs. (I’ll just write ‘sensitivity’ and ‘specificity’ from now on, letting the estimated nature of the values be implicit.) If we choose our cutoff so that we classify all the patients as abnormal, no matter what their test results says (i.e., we choose the cutoff 1+), we will get a sensitivity of 51/51 = 1. The specificity will be 0/58 = 0. Doesn’t sound so good. OK, so let’s choose a less strict cutoff. We only classify patients as abnormal if they have a test result of 2 or higher. We then miss 3 abnormal patients, and have a sensitivity of 48/51 = 0.94. But we have a much increased specificity, of 33/58 = 0.57. We can now continue this, choosing various cutoffs (3, 4, 5, >5). (In the last case, we won’t classify any patients as abnormal, even if they have the highest possible test score of 5.) The ROC curve If we do this for all possible cutoffs, and the plot the sensitivity against 1 minus the specificity, we get the ROC curve. 
We can use the following R code: # Data norm = rep(1:5, times=c(33,6,6,11,2)) abnorm = rep(1:5, times=c(3,2,2,11,33)) testres = c(abnorm,norm) truestat = c(rep(1,length(abnorm)), rep(0,length(norm))) # Summary table (Table I in the paper) ( tab=as.matrix(table(truestat, testres)) ) The output is: testres truestat 1 2 3 4 5 0 33 6 6 11 2 1 3 2 2 11 33 We can calculate various statistics: ( tot=colSums(tab) ) # Number of patients w/ each test result ( truepos=unname(rev(cumsum(rev(tab[2,])))) ) # Number of true positives ( falsepos=unname(rev(cumsum(rev(tab[1,])))) ) # Number of false positives ( totpos=sum(tab[2,]) ) # The total number of positives (one number) ( totneg=sum(tab[1,]) ) # The total number of negatives (one number) (sens=truepos/totpos) # Sensitivity (fraction true positives) (omspec=falsepos/totneg) # 1 − specificity (false positives) sens=c(sens,0); omspec=c(omspec,0) # Numbers when we classify all as normal And using this, we can plot the (estimated) ROC curve: plot(omspec, sens, type="b", xlim=c(0,1), ylim=c(0,1), lwd=2, xlab="1 − specificity", ylab="Sensitivity") # perhaps with xaxs="i" grid() abline(0,1, col="red", lty=2) Manually calculating the AUC We can very easily calculate the area under the ROC curve, using the formula for the area of a trapezoid: height = (sens[-1]+sens[-length(sens)])/2 width = -diff(omspec) # = diff(rev(omspec)) sum(height*width) The result is 0.8931711. A concordance measure The AUC can also be seen as a concordance measure. If we take all possible pairs of patients where one is normal and the other is abnormal, we can calculate how frequently it’s the abnormal one that has the highest (most ‘abnormal-looking’) test result (if they have the same value, we count that this as ‘half a victory’): o = outer(abnorm, norm, "-") mean((o>0) + .5*(o==0)) The answer is again 0.8931711, the area under the ROC curve. This will always be the case. A graphical view of concordance As pointed out by Harrell in his answer, this also has a graphical interpretation. Let’s plot test score (risk estimate) on the y-axis and true disease status on the x-axis (here with some jittering, to show overlapping points): plot(jitter(truestat,.2), jitter(testres,.8), las=1, xlab="True disease status", ylab="Test score") Let us now draw a line between each point on the left (a ‘normal’ patient) and each point on the right (an ‘abnormal’ patient). The proportion of lines with a positive slope (i.e., the proportion of concordant pairs) is the concordance index (flat lines count as ‘50% concordance’). It’s a bit difficult to visualise the actual lines for this example, due to the number of ties (equal risk score), but with some jittering and transparency we can get a reasonable plot: d = cbind(x_norm=0, x_abnorm=1, expand.grid(y_norm=norm, y_abnorm=abnorm)) library(ggplot2) ggplot(d, aes(x=x_norm, xend=x_abnorm, y=y_norm, yend=y_abnorm)) + geom_segment(colour="#ff000006", position=position_jitter(width=0, height=.1)) + xlab("True disease status") + ylab("Test\nscore") + theme_light() + theme(axis.title.y=element_text(angle=0)) We see that most of the lines slope upwards, so the concordance index will be high. We also see the contribution to the index from each type of observation pair. Most of it comes from normal patients with a risk score of 1 paired with abnormal patients with a risk score of 5 (1–5 pairs), but quite a lot also comes from 1–4 pairs and 4–5 pairs. 
And it’s very easy to calculate the actual concordance index based on the slope definition: d = transform(d, slope=(y_norm-y_abnorm)/(x_norm-x_abnorm)) mean((d$slope > 0) + .5*(d$slope==0)) The answer is again 0.8931711, i.e., the AUC. The Wilcoxon–Mann–Whitney test There is a close connection between the concordance measure and the Wilcoxon–Mann–Whitney test. Actually, the latter tests if the probability of concordance (i.e., that it’s the abnormal patient in a random normal–abnormal pair that will have the most ‘abnormal-looking’ test result) is exactly 0.5. And its test statistic is just a simple transformation of the estimated concordance probability: > ( wi = wilcox.test(abnorm,norm) ) Wilcoxon rank sum test with continuity correction data: abnorm and norm W = 2642, p-value = 1.944e-13 alternative hypothesis: true location shift is not equal to 0 The test statistic (W = 2642) counts the number of concordant pairs. If we divide it by the number of possible pairs, we get a familar number: w = wi$statistic w/(length(abnorm)*length(norm)) Yes, it’s 0.8931711, the area under the ROC curve. Easier ways to calculate the AUC (in R) But let’s make life easier for ourselves. There are various packages that calculate the AUC for us automatically. The Epi package The Epi package creates a nice ROC curve with various statistics (including the AUC) embedded: library(Epi) ROC(testres, truestat) # also try adding plot="sp" The pROC package I also like the pROC package, since it can smooth the ROC estimate (and calculate an AUC estimate based on the smoothed ROC): (The red line is the original ROC, and the black line is the smoothed ROC. Also note the default 1:1 aspect ratio. It makes sense to use this, since both the sensitivity and specificity has a 0–1 range.) The estimated AUC from the smoothed ROC is 0.9107, similar to, but slightly larger than, the AUC from the unsmoothed ROC (if you look at the figure, you can easily see why it’s larger). (Though we really have too few possible distinct test result values to calculate a smooth AUC). The rms package Harrell’s rms package can calculate various related concordance statistics using the rcorr.cens() function. The C Index in its output is the AUC: > library(rms) > rcorr.cens(testres,truestat)[1] C Index 0.8931711 The caTools package Finally, we have the caTools package and its colAUC() function. It has a few advantages over other packages (mainly speed and the ability to work with multi-dimensional data – see ?colAUC) that can sometimes be helpful. But of course it gives the same answer as we have calculated over and over: library(caTools) colAUC(testres, truestat, plotROC=TRUE) [,1] 0 vs. 1 0.8931711 Final words Many people seem to think that the AUC tells us how ‘good’ a test is. And some people think that the AUC is the probability that the test will correctly classify a patient. It is not. As you can see from the above example and calculations, the AUC tells us something about a family of tests, one test for each possible cutoff. And the AUC is calculated based on cutoffs one would never use in practice. Why should we care about the sensitivity and specificity of ‘nonsensical’ cutoff values? Still, that’s what the AUC is (partially) based on. (Of course, if the AUC is very close to 1, almost every possible test will have great discriminatory power, and we would all be very happy.) 
The ‘random normal–abnormal’ pair interpretation of the AUC is nice (and can be extended, for instance to survival models, where we see if its the person with the highest (relative) hazard that dies the earliest). But one would never use it in practice. It’s a rare case where one knows one has one healthy and one ill person, doesn’t know which person is the ill one, and must decide which of them to treat. (In any case, the decision is easy; treat the one with the highest estimated risk.) So I think studying the actual ROC curve will be more useful than just looking at the AUC summary measure. And if you use the ROC together with (estimates of the) costs of false positives and false negatives, along with base rates of what you’re studying, you can get somewhere. Also note that the AUC only measures discrimination, not calibration. That is, it measures whether you can discriminate between two persons (one ill and one healthy), based on the risk score. For this, it only looks at relative risk values (or ranks, if you will, cf. the Wilcoxon–Mann–Whitney test interpretation), not the absolute ones, which you should be interested in. For example, if you divide each risk estimate from your logistic model by 2, you will get exactly the same AUC (and ROC). When evaluating a risk model, calibration is also very important. To examine this, you will look at all patients with a risk score of around, e.g., 0.7, and see if approximately 70% of these actually were ill. Do this for each possible risk score (possibly using some sort of smoothing / local regression). Plot the results, and you’ll get a graphical measure of calibration. If have a model with both good calibration and good discrimination, then you start to have good model. :)
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand I would recommend Hanley’s & McNeil’s 1982 paper ‘The meaning and use of the area under a receiver operating characteristic (ROC) curve’. Example They have the following table of disease status and te
2,024
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand
Have a look at this question: Understanding ROC curve. Here's how to build a ROC curve (from that question): Drawing a ROC curve given a data set processed by your ranking classifier: rank the test examples by decreasing score; start in $(0, 0)$; for each example $x$ (in decreasing order of score), if $x$ is positive, move $1/\text{pos}$ up, and if $x$ is negative, move $1/\text{neg}$ right, where $\text{pos}$ and $\text{neg}$ are the numbers of positive and negative examples respectively. You can use the same idea to calculate the AUC ROC manually with the following algorithm, in which tpr $=1/\text{pos}$ and fpr $=1/\text{neg}$ are the vertical and horizontal step sizes: auc = 0.0; height = 0.0; for each example x_i, y_i (in decreasing order of score): if y_i = 1.0: height = height + tpr; else: auc = auc + height * fpr; return auc. This nice gif-animated picture should illustrate the process more clearly
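As a concrete illustration of the pseudocode above, here is a minimal R sketch (my own, with made-up scores and labels); the step sizes 1/pos and 1/neg play the roles of tpr and fpr, and ties in the scores are ignored for simplicity.

# Toy data: classifier scores and true labels (illustrative only)
score <- c(0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3)
y     <- c(1,   1,   0,   1,   0,    0,   1,   0)

y   <- y[order(score, decreasing = TRUE)]  # rank examples by decreasing score
pos <- sum(y == 1); neg <- sum(y == 0)

auc <- 0; height <- 0
for (yi in y) {
  if (yi == 1) {
    height <- height + 1/pos            # move up
  } else {
    auc <- auc + height * (1/neg)       # move right; add the strip's area
  }
}
auc   # 0.75 for this toy example, matching the pairwise concordance count

With tied scores one would need to process the tied block jointly (adding a trapezoid rather than a rectangle), which is why the packages mentioned elsewhere in this thread are more convenient in practice.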
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand
Have a look at this question: Understanding ROC curve Here's how to build a ROC curve (from that question): Drawing ROC curve given a data set processed by your ranking classifier rank test examples
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand Have a look at this question: Understanding ROC curve Here's how to build a ROC curve (from that question): Drawing ROC curve given a data set processed by your ranking classifier rank test examples on decreasing score start in $(0, 0)$ for each example $x$ (in the decreasing order) if $x$ is positive, move $1/\text{pos}$ up if $x$ is negative, move $1/\text{neg}$ right where $\text{pos}$ and $\text{neg}$ are the fractions of positive and negative examples respectively. You can use this idea for manually calculating AUC ROC using the following algorithm: auc = 0.0 height = 0.0 for each training example x_i, y_i if y_i = 1.0: height = height + tpr else auc = auc + height * fpr return auc This nice gif-animated picture should illustrate this process clearer
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand Have a look at this question: Understanding ROC curve Here's how to build a ROC curve (from that question): Drawing ROC curve given a data set processed by your ranking classifier rank test examples
2,025
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand
Karl's post has a lot of excellent information. But I have not yet seen in the past 20 years an example of an ROC curve that changed anyone's thinking in a good direction. The only value of an ROC curve in my humble opinion is that its area happens to equal a very useful concordance probability. The ROC curve itself tempts the reader to use cutoffs, which is bad statistical practice. As far as manually calculating the $c$-index, make a plot with $Y=0,1$ on the $x$-axis and the continuous predictor or predicted probability that $Y=1$ on the $y$-axis. If you connect every point with $Y=0$ with every point with $Y=1$, the proportion of the lines that have a positive slope is the concordance probability. Any measures that have a denominator of $n$ in this setting are improper accuracy scoring rules and should be avoided. This includes proportion classified correctly, sensitivity, and specificity. For the R Hmisc package rcorr.cens function, print the entire result to see more information, especially a standard error.
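Following the suggestion to print the entire rcorr.cens output, here is a minimal sketch (reusing the data from Karl's answer above); the element names are those used by current Hmisc versions, so check names(full) if yours differ.

library(Hmisc)
norm     <- rep(1:5, times=c(33,6,6,11,2))
abnorm   <- rep(1:5, times=c(3,2,2,11,33))
testres  <- c(abnorm, norm)
truestat <- c(rep(1, length(abnorm)), rep(0, length(norm)))

full <- rcorr.cens(testres, truestat)
print(full)   # C Index, Dxy (Somers' rank correlation), its S.D., n, ...

# Assuming the reported S.D. is the standard error of Dxy, an approximate
# 95% interval for the c-index follows from C = Dxy/2 + 1/2:
(full["Dxy"] + c(-1.96, 1.96) * full["S.D."]) / 2 + 0.5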
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand
Karl's post has a lot of excellent information. But I have not yet seen in the past 20 years an example of an ROC curve that changed anyone's thinking in a good direction. The only value of an ROC c
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand Karl's post has a lot of excellent information. But I have not yet seen in the past 20 years an example of an ROC curve that changed anyone's thinking in a good direction. The only value of an ROC curve in my humble opinion is that its area happens to equal a very useful concordance probability. The ROC curve itself tempts the reader to use cutoffs, which is bad statistical practice. As far as manually calculating the $c$-index, make a plot with $Y=0,1$ on the $x$-axis and the continuous predictor or predicted probability that $Y=1$ on the $y$-axis. If you connect every point with $Y=0$ with every point with $Y=1$, the proportion of the lines that have a positive slope is the concordance probability. Any measures that have a denominator of $n$ in this setting are improper accuracy scoring rules and should be avoided. This includes proportion classified correctly, sensitivity, and specificity. For the R Hmisc package rcorr.cens function, print the entire result to see more information, especially a standard error.
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand Karl's post has a lot of excellent information. But I have not yet seen in the past 20 years an example of an ROC curve that changed anyone's thinking in a good direction. The only value of an ROC c
2,026
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand
Here is an alternative to the natural way of calculating the AUC (simply applying the trapezoidal rule to get the area under the ROC curve). The AUC is equal to the probability that a randomly sampled positive observation has a predicted probability (of being positive) greater than a randomly sampled negative observation. You can use this to calculate the AUC quite easily in any programming language by going through all the pairwise combinations of positive and negative observations. You could also randomly sample observations if the sample size is too large. If you want to calculate the AUC using pen and paper, this might not be the best approach unless you have a very small sample / a lot of time. For example in R: n <- 100L x1 <- rnorm(n, 2.0, 0.5) x2 <- rnorm(n, -1.0, 2) y <- rbinom(n, 1L, plogis(-0.4 + 0.5 * x1 + 0.1 * x2)) mod <- glm(y ~ x1 + x2, "binomial") probs <- predict(mod, type = "response") combinations <- expand.grid(positiveProbs = probs[y == 1L], negativeProbs = probs[y == 0L]) mean(combinations$positiveProbs > combinations$negativeProbs) [1] 0.628723 We can verify using the pROC package: library(pROC) auc(y, probs) Area under the curve: 0.6287 Using random sampling: mean(sample(probs[y == 1L], 100000L, TRUE) > sample(probs[y == 0L], 100000L, TRUE)) [1] 0.62896
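One small refinement, not in the original code: if the predicted values can be tied (e.g., a discrete 1–5 score like the one used elsewhere in this thread), tied pairs are conventionally counted as half-concordant. Reusing the combinations data frame from above:

# Count ties as half a concordant pair; with continuous predicted
# probabilities this makes essentially no difference
with(combinations,
     mean((positiveProbs > negativeProbs) + 0.5 * (positiveProbs == negativeProbs)))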
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand
Here is an alternative to the natural way of calculating AUC by simply using the trapezoidal rule to get the area under the ROC curve. The AUC is equal to the probability that a randomly sampled posi
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand Here is an alternative to the natural way of calculating AUC by simply using the trapezoidal rule to get the area under the ROC curve. The AUC is equal to the probability that a randomly sampled positive observation has a predicted probability (of being positive) greater than a randomly sampled negative observation. You can use this to calculate the AUC quite easily in any programming language by going through all the pairwise combinations of positive and negative observations. You could also randomly sample observations if the sample size was too large. If you want to calculate AUC using pen and paper, this might not be the best approach unless you have a very small sample/a lot of time. For example in R: n <- 100L x1 <- rnorm(n, 2.0, 0.5) x2 <- rnorm(n, -1.0, 2) y <- rbinom(n, 1L, plogis(-0.4 + 0.5 * x1 + 0.1 * x2)) mod <- glm(y ~ x1 + x2, "binomial") probs <- predict(mod, type = "response") combinations <- expand.grid(positiveProbs = probs[y == 1L], negativeProbs = probs[y == 0L]) mean(combinations$positiveProbs > combinations$negativeProbs) [1] 0.628723 We can verify using the pROC package: library(pROC) auc(y, probs) Area under the curve: 0.6287 Using random sampling: mean(sample(probs[y == 1L], 100000L, TRUE) > sample(probs[y == 0L], 100000L, TRUE)) [1] 0.62896
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand Here is an alternative to the natural way of calculating AUC by simply using the trapezoidal rule to get the area under the ROC curve. The AUC is equal to the probability that a randomly sampled posi
2,027
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand
You have the true class for each observation. Calculate the posterior probability for each observation and then rank all observations by this probability. With $N$ observations in total, of which a proportion $P$ are truly positive (so that $PN$ is the number of positives), the AUC is $$\frac{\text{Sum of the ranks of the true positives}-0.5PN(PN+1)}{PN(N-PN)}$$
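A minimal R sketch of this rank-based formula (my own illustration, reusing the data from Karl's answer earlier in the thread; here $PN$ is simply the number of positive observations):

norm     <- rep(1:5, times=c(33,6,6,11,2))
abnorm   <- rep(1:5, times=c(3,2,2,11,33))
testres  <- c(abnorm, norm)
truestat <- c(rep(1, length(abnorm)), rep(0, length(norm)))

r  <- rank(testres)        # midranks handle ties
n1 <- sum(truestat == 1)   # number of positives (PN)
n0 <- sum(truestat == 0)   # number of negatives (N - PN)
(sum(r[truestat == 1]) - 0.5 * n1 * (n1 + 1)) / (n1 * n0)
# should reproduce 0.8931711, the AUC found in the other answers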
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand
You have true value for observations. Calculate posterior probability and then rank observations by this probability. Assuming cut-off probability of $P$ and number of observations $N$: $$\frac{\t
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand You have the true class for each observation. Calculate the posterior probability for each observation and then rank all observations by this probability. With $N$ observations in total, of which a proportion $P$ are truly positive (so that $PN$ is the number of positives), the AUC is $$\frac{\text{Sum of the ranks of the true positives}-0.5PN(PN+1)}{PN(N-PN)}$$
How to calculate Area Under the Curve (AUC), or the c-statistic, by hand You have true value for observations. Calculate posterior probability and then rank observations by this probability. Assuming cut-off probability of $P$ and number of observations $N$: $$\frac{\t
2,028
How much do we know about p-hacking "in the wild"?
EXECUTIVE SUMMARY: if "p-hacking" is to be understood broadly a la Gelman's forking paths, the answer to how prevalent it is, is that it is almost universal. Andrew Gelman likes to write about this topic and has been posting extensively about it lately on his blog. I don't always agree with him but I like his perspective on $p$-hacking. Here is an excerpt from the Introduction to his Garden of Forking Paths paper (Gelman & Loken 2013; a version appeared in American Scientist 2014; see also Gelman's brief comment on the ASA's statement), emphasis mine: This problem is sometimes called “p-hacking” or “researcher degrees of freedom” (Simmons, Nelson, and Simonsohn, 2011). In a recent article, we spoke of “fishing expeditions [...]”. But we are starting to feel that the term “fishing” was unfortunate, in that it invokes an image of a researcher trying out comparison after comparison, throwing the line into the lake repeatedly until a fish is snagged. We have no reason to think that researchers regularly do that. We think the real story is that researchers can perform a reasonable analysis given their assumptions and their data, but had the data turned out differently, they could have done other analyses that were just as reasonable in those circumstances. We regret the spread of the terms “fishing” and “p-hacking” (and even “researcher degrees of freedom”) for two reasons: first, because when such terms are used to describe a study, there is the misleading implication that researchers were consciously trying out many different analyses on a single data set; and, second, because it can lead researchers who know they did not try out many different analyses to mistakenly think they are not so strongly subject to problems of researcher degrees of freedom. [...] Our key point here is that it is possible to have multiple potential comparisons, in the sense of a data analysis whose details are highly contingent on data, without the researcher performing any conscious procedure of fishing or examining multiple p-values. So: Gelman does not like the term p-hacking because it implies that the researches were actively cheating. Whereas the problems can occur simply because the researchers choose what test to perform/report after looking at the data, i.e. after doing some exploratory analysis. With some experience of working in biology, I can safely say that everybody does that. Everybody (myself included) collects some data with only vague a priori hypotheses, does extensive exploratory analysis, runs various significance tests, collects some more data, runs and re-runs the tests, and finally reports some $p$-values in the final manuscript. All of this is happening without actively cheating, doing dumb xkcd-jelly-beans-style cherry-picking, or consciously hacking anything. So if "p-hacking" is to be understood broadly a la Gelman's forking paths, the answer to how prevalent it is, is that it is almost universal. The only exceptions that come to mind are fully pre-registered replication studies in psychology or fully pre-registered medical trials. Specific evidence Amusingly, some people polled researchers to find that many admit doing some sort of hacking (John et al. 2012, Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling): Apart from that, everybody heard about the so called "replication crisis" in psychology: more than one half of the recent studies published in the top psychology journals do not replicate (Nosek et al. 
2015, Estimating the reproducibility of psychological science). (This study has recently been all over the blogs again, because the March 2016 issue of Science published a Comment attempting to refute Nosek et al. and also a reply by Nosek et al. The discussion continued elsewhere, see post by Andrew Gelman and the RetractionWatch post that he links to. To put it politely, the critique is unconvincing.) Update Nov 2018: Kaplan and Irvin, 2017, Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time show that the fraction of clinical trials reporting null results increased from 43% to 92% after pre-registration became required: $P$-value distributions in the literature Head et al. 2015 I have not heard about Head et al. study before, but have now spent some time looking through the surrounding literature. I have also taken a brief look at their raw data. Head et al. downloaded all Open Access papers from PubMed and extracted all p-values reported in the text, getting 2.7 mln p-values. Out of these, 1.1 mln was reported as $p=a$ and not as $p<a$. Out of these, Head et al. randomly took one p-value per paper but this does not seem to change the distribution, so here is how the distribution of all 1.1 mln values looks like (between $0$ and $0.06$): I used $0.0001$ bin width, and one can clearly see a lot of predictable rounding in the reported $p$-values. Now, Head et al. do the following: they compare the number of $p$-values in the $(0.045, 0.5)$ interval and in the $(0.04, 0.045)$ interval; the former number turns out to be (significantly) larger and they take it as an evidence of $p$-hacking. If one squints, one can see it on my figure. I find this hugely unconvincing for one simple reason. Who wants to report their findings with $p=0.05$? Actually, many people seem to be doing exactly that, but still it appears natural to try to avoid this unsatisfactory border-line value and rather to report another significant digit, e.g. $p=0.048$ (unless of course it's $p=0.052$). So some excess of $p$-values close but not equal to $0.05$ can be explained by researcher's rounding preferences. And apart from that, the effect is tiny. (The only strong effect that I can see on this figure is a pronounced drop of the $p$-value density right after $0.05$. This is clearly due to the publication bias.) Unless I missed something, Head et al. do not even discuss this potential alternative explanation. They do not present any histogram of the $p$-values either. There is a bunch of papers criticizing Head et al. In this unpublished manuscript Hartgerink argues that Head et al. should have included $p=0.04$ and $p=0.05$ in their comparison (and if they had, they would not have found their effect). I am not sure about that; it does not sound very convincing. It would be much better if we could somehow inspect the distribution of the "raw" $p$-values without any rounding. Distributions of $p$-values without rounding In this 2016 PeerJ paper (preprint posted in 2015) the same Hartgerink et al. extract p-values from lots of papers in top psychology journals and do exactly that: they recompute exact $p$-values from the reported $t$-, $F$-, $\chi^2$- etc. statistic values; this distribution is free from any rounding artifacts and does not exhibit any increase towards 0.05 whatsoever (Figure 4): $\hspace{5em}$ A very similar approach is taken by Krawczyk 2015 in PLoS One, who extracts 135k $p$-values from the top experimental psychology journals. 
Here is how the distribution looks for the reported (left) and recomputed (right) $p$-values: The difference is striking. The left histogram shows some weird stuff going on around $p=0.05$, but on the right one it is gone. This means that this weird stuff is due to people's preferences of reporting values around $p\approx 0.05$ and not due to $p$-hacking. Mascicampo and Lalande It seems that the first to observe the alleged excess of $p$-values just below 0.05 were Masicampo & Lalande 2012, looking at three top journals in psychology: This does look impressive, but Lakens 2015 (preprint) in a published Comment argues that this only appears impressive thanks to the misleading exponential fit. See also Lakens 2015, On the challenges of drawing conclusions from p-values just below 0.05 and references therein. Economics Brodeur et al. 2016 (the link goes to the 2013 preprint) do the same thing for economics literature. The look at the three economics journals, extract 50k test results, convert all of them into $z$-scores (using reported coefficients and standard errors whenever possible and using $p$-values if only they were reported), and get the following: This is a bit confusing because small $p$-values are on the right and large $p$-values are on the left. As authors write in the abstract, "The distribution of p-values exhibits a camel shape with abundant p-values above .25" and "a valley between .25 and .10". They argue that this valley is a sign of something fishy, but this is only an indirect evidence. Also, it might be simply due to selective reporting, when large p-values above .25 are reported as some evidence of a lack of effect but p-values between .1 and .25 are felt to be neither here nor there and tend to be omitted. (I am not sure if this effect is present in biological literature or not because the plots above focus on $p<0.05$ interval.) Falsely reassuring? Based on all of the above, my conclusion is that I don't see any strong evidence of $p$-hacking in $p$-value distributions across biological/psychological literature as a whole. There is plenty of evidence of selective reporting, publication bias, rounding $p$-values down to $0.05$ and other funny rounding effects, but I disagree with conclusions of Head et al.: there is no suspicious bump below $0.05$. Uri Simonsohn argues that this is "falsely reassuring". Well, actually he cites these papers un-critically but then remarks that "most p-values are way smaller" than 0.05. Then he says: "That’s reassuring, but falsely reassuring". And here is why: If we want to know if researchers p-hack their results, we need to examine the p-values associated with their results, those they may want to p-hack in the first place. Samples, to be unbiased, must only include observations from the population of interest. Most p-values reported in most papers are irrelevant for the strategic behavior of interest. Covariates, manipulation checks, main effects in studies testing interactions, etc. Including them we underestimate p-hacking and we overestimate the evidential value of data. Analyzing all p-values asks a different question, a less sensible one. Instead of “Do researchers p-hack what they study?” we ask “Do researchers p-hack everything?” This makes total sense. Looking at all reported $p$-values is way too noisy. Uri's $p$-curve paper (Simonsohn et al. 2013) nicely demonstrates what one can see if one looks at carefully selected $p$-values. 
They selected 20 psychology papers based on some suspicious keywords (namely, authors of these papers reported tests controlling for a covariate and did not report what happens without controlling for it) and then took only $p$-values that are testing the main findings. Here is how the distribution looks like (left): Strong left skew suggests strong $p$-hacking. Conclusions I would say that we know that there must be a lot of $p$-hacking going on, mostly of the Forking-Paths type that Gelman describes; probably to the extent that published $p$-values cannot really be taken at face value and should be "discounted" by the reader by some substantial fraction. However, this attitude seems to produce much more subtle effects than simply a bump in the overall $p$-values distribution just below $0.05$ and cannot really be detected by such a blunt analysis.
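As a toy illustration of the 'drop right after 0.05' pattern attributed above to publication bias, here is a small simulation sketch (entirely made up, not taken from any of the cited papers): half of the simulated studies have no effect, half a modest one, and non-significant results are published with lower probability.

set.seed(1)
nstud  <- 1e5
effect <- ifelse(runif(nstud) < 0.5, 0, 0.3)        # null vs modest true effects
z      <- rnorm(nstud, mean = effect * sqrt(50))    # roughly 50 observations per study
p      <- 2 * pnorm(-abs(z))                        # two-sided p-values

published <- runif(nstud) < ifelse(p < 0.05, 1, 0.3)  # assumed publication probabilities
hist(p[published & p < 0.2], breaks = seq(0, 0.2, by = 0.005),
     main = "", xlab = "Reported p-value")
abline(v = 0.05, col = "red", lty = 2)

Even without any p-hacking in the simulation, the histogram shows a sharp step at 0.05, which is the point made above about publication bias versus hacking.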
How much do we know about p-hacking "in the wild"?
EXECUTIVE SUMMARY: if "p-hacking" is to be understood broadly a la Gelman's forking paths, the answer to how prevalent it is, is that it is almost universal. Andrew Gelman likes to write about this t
How much do we know about p-hacking "in the wild"? EXECUTIVE SUMMARY: if "p-hacking" is to be understood broadly a la Gelman's forking paths, the answer to how prevalent it is, is that it is almost universal. Andrew Gelman likes to write about this topic and has been posting extensively about it lately on his blog. I don't always agree with him but I like his perspective on $p$-hacking. Here is an excerpt from the Introduction to his Garden of Forking Paths paper (Gelman & Loken 2013; a version appeared in American Scientist 2014; see also Gelman's brief comment on the ASA's statement), emphasis mine: This problem is sometimes called “p-hacking” or “researcher degrees of freedom” (Simmons, Nelson, and Simonsohn, 2011). In a recent article, we spoke of “fishing expeditions [...]”. But we are starting to feel that the term “fishing” was unfortunate, in that it invokes an image of a researcher trying out comparison after comparison, throwing the line into the lake repeatedly until a fish is snagged. We have no reason to think that researchers regularly do that. We think the real story is that researchers can perform a reasonable analysis given their assumptions and their data, but had the data turned out differently, they could have done other analyses that were just as reasonable in those circumstances. We regret the spread of the terms “fishing” and “p-hacking” (and even “researcher degrees of freedom”) for two reasons: first, because when such terms are used to describe a study, there is the misleading implication that researchers were consciously trying out many different analyses on a single data set; and, second, because it can lead researchers who know they did not try out many different analyses to mistakenly think they are not so strongly subject to problems of researcher degrees of freedom. [...] Our key point here is that it is possible to have multiple potential comparisons, in the sense of a data analysis whose details are highly contingent on data, without the researcher performing any conscious procedure of fishing or examining multiple p-values. So: Gelman does not like the term p-hacking because it implies that the researches were actively cheating. Whereas the problems can occur simply because the researchers choose what test to perform/report after looking at the data, i.e. after doing some exploratory analysis. With some experience of working in biology, I can safely say that everybody does that. Everybody (myself included) collects some data with only vague a priori hypotheses, does extensive exploratory analysis, runs various significance tests, collects some more data, runs and re-runs the tests, and finally reports some $p$-values in the final manuscript. All of this is happening without actively cheating, doing dumb xkcd-jelly-beans-style cherry-picking, or consciously hacking anything. So if "p-hacking" is to be understood broadly a la Gelman's forking paths, the answer to how prevalent it is, is that it is almost universal. The only exceptions that come to mind are fully pre-registered replication studies in psychology or fully pre-registered medical trials. Specific evidence Amusingly, some people polled researchers to find that many admit doing some sort of hacking (John et al. 
2012, Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling): Apart from that, everybody heard about the so called "replication crisis" in psychology: more than one half of the recent studies published in the top psychology journals do not replicate (Nosek et al. 2015, Estimating the reproducibility of psychological science). (This study has recently been all over the blogs again, because the March 2016 issue of Science published a Comment attempting to refute Nosek et al. and also a reply by Nosek et al. The discussion continued elsewhere, see post by Andrew Gelman and the RetractionWatch post that he links to. To put it politely, the critique is unconvincing.) Update Nov 2018: Kaplan and Irvin, 2017, Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time show that the fraction of clinical trials reporting null results increased from 43% to 92% after pre-registration became required: $P$-value distributions in the literature Head et al. 2015 I have not heard about Head et al. study before, but have now spent some time looking through the surrounding literature. I have also taken a brief look at their raw data. Head et al. downloaded all Open Access papers from PubMed and extracted all p-values reported in the text, getting 2.7 mln p-values. Out of these, 1.1 mln was reported as $p=a$ and not as $p<a$. Out of these, Head et al. randomly took one p-value per paper but this does not seem to change the distribution, so here is how the distribution of all 1.1 mln values looks like (between $0$ and $0.06$): I used $0.0001$ bin width, and one can clearly see a lot of predictable rounding in the reported $p$-values. Now, Head et al. do the following: they compare the number of $p$-values in the $(0.045, 0.5)$ interval and in the $(0.04, 0.045)$ interval; the former number turns out to be (significantly) larger and they take it as an evidence of $p$-hacking. If one squints, one can see it on my figure. I find this hugely unconvincing for one simple reason. Who wants to report their findings with $p=0.05$? Actually, many people seem to be doing exactly that, but still it appears natural to try to avoid this unsatisfactory border-line value and rather to report another significant digit, e.g. $p=0.048$ (unless of course it's $p=0.052$). So some excess of $p$-values close but not equal to $0.05$ can be explained by researcher's rounding preferences. And apart from that, the effect is tiny. (The only strong effect that I can see on this figure is a pronounced drop of the $p$-value density right after $0.05$. This is clearly due to the publication bias.) Unless I missed something, Head et al. do not even discuss this potential alternative explanation. They do not present any histogram of the $p$-values either. There is a bunch of papers criticizing Head et al. In this unpublished manuscript Hartgerink argues that Head et al. should have included $p=0.04$ and $p=0.05$ in their comparison (and if they had, they would not have found their effect). I am not sure about that; it does not sound very convincing. It would be much better if we could somehow inspect the distribution of the "raw" $p$-values without any rounding. Distributions of $p$-values without rounding In this 2016 PeerJ paper (preprint posted in 2015) the same Hartgerink et al. extract p-values from lots of papers in top psychology journals and do exactly that: they recompute exact $p$-values from the reported $t$-, $F$-, $\chi^2$- etc. 
statistic values; this distribution is free from any rounding artifacts and does not exhibit any increase towards 0.05 whatsoever (Figure 4): $\hspace{5em}$ A very similar approach is taken by Krawczyk 2015 in PLoS One, who extracts 135k $p$-values from the top experimental psychology journals. Here is how the distribution looks for the reported (left) and recomputed (right) $p$-values: The difference is striking. The left histogram shows some weird stuff going on around $p=0.05$, but on the right one it is gone. This means that this weird stuff is due to people's preferences of reporting values around $p\approx 0.05$ and not due to $p$-hacking. Mascicampo and Lalande It seems that the first to observe the alleged excess of $p$-values just below 0.05 were Masicampo & Lalande 2012, looking at three top journals in psychology: This does look impressive, but Lakens 2015 (preprint) in a published Comment argues that this only appears impressive thanks to the misleading exponential fit. See also Lakens 2015, On the challenges of drawing conclusions from p-values just below 0.05 and references therein. Economics Brodeur et al. 2016 (the link goes to the 2013 preprint) do the same thing for economics literature. The look at the three economics journals, extract 50k test results, convert all of them into $z$-scores (using reported coefficients and standard errors whenever possible and using $p$-values if only they were reported), and get the following: This is a bit confusing because small $p$-values are on the right and large $p$-values are on the left. As authors write in the abstract, "The distribution of p-values exhibits a camel shape with abundant p-values above .25" and "a valley between .25 and .10". They argue that this valley is a sign of something fishy, but this is only an indirect evidence. Also, it might be simply due to selective reporting, when large p-values above .25 are reported as some evidence of a lack of effect but p-values between .1 and .25 are felt to be neither here nor there and tend to be omitted. (I am not sure if this effect is present in biological literature or not because the plots above focus on $p<0.05$ interval.) Falsely reassuring? Based on all of the above, my conclusion is that I don't see any strong evidence of $p$-hacking in $p$-value distributions across biological/psychological literature as a whole. There is plenty of evidence of selective reporting, publication bias, rounding $p$-values down to $0.05$ and other funny rounding effects, but I disagree with conclusions of Head et al.: there is no suspicious bump below $0.05$. Uri Simonsohn argues that this is "falsely reassuring". Well, actually he cites these papers un-critically but then remarks that "most p-values are way smaller" than 0.05. Then he says: "That’s reassuring, but falsely reassuring". And here is why: If we want to know if researchers p-hack their results, we need to examine the p-values associated with their results, those they may want to p-hack in the first place. Samples, to be unbiased, must only include observations from the population of interest. Most p-values reported in most papers are irrelevant for the strategic behavior of interest. Covariates, manipulation checks, main effects in studies testing interactions, etc. Including them we underestimate p-hacking and we overestimate the evidential value of data. Analyzing all p-values asks a different question, a less sensible one. 
Instead of “Do researchers p-hack what they study?” we ask “Do researchers p-hack everything?” This makes total sense. Looking at all reported $p$-values is way too noisy. Uri's $p$-curve paper (Simonsohn et al. 2013) nicely demonstrates what one can see if one looks at carefully selected $p$-values. They selected 20 psychology papers based on some suspicious keywords (namely, authors of these papers reported tests controlling for a covariate and did not report what happens without controlling for it) and then took only $p$-values that are testing the main findings. Here is how the distribution looks like (left): Strong left skew suggests strong $p$-hacking. Conclusions I would say that we know that there must be a lot of $p$-hacking going on, mostly of the Forking-Paths type that Gelman describes; probably to the extent that published $p$-values cannot really be taken at face value and should be "discounted" by the reader by some substantial fraction. However, this attitude seems to produce much more subtle effects than simply a bump in the overall $p$-values distribution just below $0.05$ and cannot really be detected by such a blunt analysis.
How much do we know about p-hacking "in the wild"? EXECUTIVE SUMMARY: if "p-hacking" is to be understood broadly a la Gelman's forking paths, the answer to how prevalent it is, is that it is almost universal. Andrew Gelman likes to write about this t
2,029
How much do we know about p-hacking "in the wild"?
Funnel plots have been a tremendous statistical innovation that turned meta-analysis on its head. Basically, a funnel plot displays each study's effect size against its precision, so clinical and statistical significance are visible on the same plot. Ideally, the points would form a funnel shape. However, several meta-analyses have produced funnel plots with a strong bimodal shape, where investigators (or publishers) selectively withheld results that were null. The result is that the triangle becomes wider, because smaller, less powered studies used more drastic methods to "encourage" results to reach statistical significance. The Cochrane Handbook has this to say about them: If there is bias, for example because smaller studies without statistically significant effects (shown as open circles in Figure 10.4.a, Panel A) remain unpublished, this will lead to an asymmetrical appearance of the funnel plot with a gap in a bottom corner of the graph (Panel B). In this situation the effect calculated in a meta-analysis will tend to overestimate the intervention effect (Egger 1997a, Villar 1997). The more pronounced the asymmetry, the more likely it is that the amount of bias will be substantial. The first plot shows a symmetrical plot in the absence of bias. The second shows an asymmetrical plot in the presence of reporting bias. The third shows an asymmetrical plot in the presence of bias because some smaller studies (open circles) are of lower methodological quality and therefore produce exaggerated intervention effect estimates. I suspect most authors are unaware of the ways in which they p-hack. They don't keep track of the overall number of models they fit, applying different exclusion criteria or opting for different adjustment variables each time. However, if I had to mandate a simple safeguard, I would love to see the total number of models fit reported. That's not to say there aren't legitimate reasons to rerun models; for instance, we just ran through an Alzheimer's analysis not knowing ApoE had been collected in the sample. Egg on my face; we reran the models.
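For readers who want to draw a funnel plot themselves, here is a minimal R sketch (my own illustration; the metafor package is one common choice, and the simulated effects and the crude selection rule are pure assumptions).

library(metafor)
set.seed(2)
k   <- 60
sei <- runif(k, 0.05, 0.5)              # study standard errors (precision)
yi  <- rnorm(k, mean = 0.2, sd = sei)   # observed effects around a true effect of 0.2

# Crude selective reporting: small non-significant studies often go unpublished
keep <- abs(yi / sei) > 1.96 | sei < 0.2 | runif(k) < 0.4

fit <- rma(yi = yi[keep], sei = sei[keep])  # random-effects meta-analysis
funnel(fit)                                 # asymmetry hints at reporting bias

With the selection step removed, the same code produces the symmetric funnel one hopes to see.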
How much do we know about p-hacking "in the wild"?
Funnel plots have been a tremendous statistical innovation that turned meta analysis on its head. Basically, a funnel plot shows the clinical and statistical significance on the same plot. Ideally, th
How much do we know about p-hacking "in the wild"? Funnel plots have been a tremendous statistical innovation that turned meta analysis on its head. Basically, a funnel plot shows the clinical and statistical significance on the same plot. Ideally, they would form a funnel shape. However, several meta-analyses have produced funnel plots that show a strong bimodal shape, where investigators (or publishers) selectively withheld results that were null. The result is that the triangle becomes wider, because smaller, less powered studies used more drastic methods to "encourage" results to reach statistical significance. The Cochrane Report team has this to say about them. If there is bias, for example because smaller studies without statistically significant effects (shown as open circles in Figure 10.4.a, Panel A) remain unpublished, this will lead to an asymmetrical appearance of the funnel plot with a gap in a bottom corner of the graph (Panel B). In this situation the effect calculated in a meta-analysis will tend to overestimate the intervention effect (Egger 1997a, Villar 1997). The more pronounced the asymmetry, the more likely it is that the amount of bias will be substantial. The first plot shows a symmetrical plot in the absence of bias. The second shows an asymmetrical plot in the presence of reporting bias. The third shows an asymmetrical plot in the presence of bias because some smaller studies (open circles) are of lower methodological quality and therefore produce exaggerated intervention effect estimates. I suspect most authors are unaware of the methods they use to p-hack. They don't keep track of the overall number of models they fit, applying different exclusion criteria or opting for different adjustment variables each time. However, if I had to mandate a simple process, I would love to see the total number of models fit. That's not to say there might be legitimate reasons to rerun models, for instance we just ran through a Alzheimer's analysis not knowing ApoE had been collected in the sample. Egg on my face, we reran the models.
How much do we know about p-hacking "in the wild"? Funnel plots have been a tremendous statistical innovation that turned meta analysis on its head. Basically, a funnel plot shows the clinical and statistical significance on the same plot. Ideally, th
2,030
Can someone explain Gibbs sampling in very simple words? [duplicate]
You are a dungeon master hosting Dungeons & Dragons, and a player casts the 'Spell of Eldritch Chaotic Weather' (SECW). You've never heard of this spell before, but it turns out it is quite involved. The player hands you a dense book and says, 'the effect of this spell is that one of the events in this book occurs.' The book contains a whopping 1000 different effects, and what's more, the events have different 'relative probabilities.' The book tells you that the most likely event is 'fireball'; all the probabilities of the other events are described relative to the probability of 'fireball'; for example, on page 155 it says that 'duck storm' is half as likely as 'fireball.' How are you, the Dungeon Master, to sample a random event from this book? Here's how you can do it: The accept-reject algorithm: 1) Roll a d1000 to decide a 'candidate' event. 2) Suppose the candidate event is 44% as likely as the most likely event, 'fireball'. Then accept the candidate with probability 44%. (Roll a d100, and accept if the roll is 44 or lower. Otherwise, go back to step 1 until you accept an event.) 3) The accepted event is your random sample. The accept-reject algorithm is guaranteed to sample from the distribution with the specified relative probabilities. After much dice rolling you finally end up accepting a candidate: 'summon frog'. You breathe a sigh of relief, as now you can get back to the (routine in comparison) business of handling the battle between the troll-orcs and dragon-elves. However, not to be outdone, another player decides to cast 'Level 2 Arcane Cyber-Effect Storm.' For this spell, two different random effects occur: a randomly generated attack, and a randomly generated character buff. The manual for this spell is so huge that it can only fit on a CD. The player boots it up and shows you a page. Your jaw drops: the entry for each attack is about as large as the manual for the previous spell, because it lists a relative probability for each possible accompanying buff. For example, under 'Cupric Blade': the most likely buff accompanying this attack is 'Hotelling aura'; 'Jackal Vision' is 33% as likely to accompany this attack as 'Hotelling aura'; 'Toaster Ears' is 20% as likely to accompany this attack as 'Hotelling aura'; and so on. Similarly, the probability of each particular attack occurring depends on which buff occurs. It would be justified to wonder whether a proper probability distribution can even be defined given this information. Well, it turns out that if there is one, it is uniquely specified by the conditional probabilities given in the manual. But how to sample from it? Luckily for you, the CD comes with an automated Gibbs sampler, because you would have to spend an eternity doing the following by hand. The Gibbs sampler algorithm: 1) Choose an attack spell randomly. 2) Use the accept-reject algorithm to choose the buff conditional on the attack. 3) Forget the attack spell you chose in step 1, and choose a new attack spell using the accept-reject algorithm conditional on the buff from step 2. 4) Go to step 2 and repeat (in principle forever, though usually 10000 iterations will be enough). 5) Whatever your algorithm has at the last iteration is your sample. You see, in general, MCMC samplers are only asymptotically guaranteed to generate samples from the distribution with the specified conditional probabilities. But in many cases, MCMC samplers are the only practical solution available.
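To make both algorithms concrete, here is a toy R sketch (entirely made up, not part of the original answer): a small joint distribution over 'attacks' and 'buffs' is written down as a matrix of relative weights, each conditional draw is done by accept-reject, and alternating the two conditionals is the Gibbs sampler.

set.seed(3)
# Made-up relative weights for 3 attacks x 4 buffs
w <- matrix(c(5, 2, 1, 1,
              1, 4, 2, 1,
              1, 1, 3, 5), nrow = 3, byrow = TRUE)

# Accept-reject draw from a discrete distribution given relative weights
rdiscrete <- function(weights) {
  repeat {
    cand <- sample(length(weights), 1)                  # roll the "d1000"
    if (runif(1) < weights[cand] / max(weights)) return(cand)
  }
}

n_iter <- 10000
attack <- sample(nrow(w), 1)                 # arbitrary starting attack
draws  <- matrix(NA_integer_, n_iter, 2)
for (i in 1:n_iter) {
  buff   <- rdiscrete(w[attack, ])           # buff | attack
  attack <- rdiscrete(w[, buff])             # attack | buff
  draws[i, ] <- c(attack, buff)
}

round(prop.table(table(draws[, 1], draws[, 2])), 3)  # empirical joint from the chain
round(prop.table(w), 3)                              # true joint, for comparison

The two tables agree closely, even though the sampler only ever used the conditional distributions, which is exactly the point of the story above.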
Can someone explain Gibbs sampling in very simple words? [duplicate]
You are a dungeonmaster hosting Dungeons & Dragons and a player casts 'Spell of Eldritch Chaotic Weather (SECW). You've never heard of this spell before, but it turns out it is quite involved. The
Can someone explain Gibbs sampling in very simple words? [duplicate] You are a dungeonmaster hosting Dungeons & Dragons and a player casts 'Spell of Eldritch Chaotic Weather (SECW). You've never heard of this spell before, but it turns out it is quite involved. The player hands you a dense book and says, 'the effect of this spell is that one of the events in this book occurs.' The book contains a whopping 1000 different effects, and what's more, the events have different 'relative probabilities.' The book tells you that the most likely event is 'fireball'; all the probabilities of the other events are described relative to the probability of 'fireball'; for example: on page 155, it says that 'duck storm' is half as likely as 'fireball.' How are you, the Dungeon Master, to sample a random event from this book? Here's how you can do it: The accept-reject algorithm: 1) Roll a d1000 to decide a 'candidate' event. 2) Suppose the candidate event is 44% as likely as the most likely event, 'fireball'. Then accept the candidate with probability 44%. (Roll a d100, and accept if the roll is 44 or lower. Otherwise, go back to step 1 until you accept an event.) 3) The accepted event is your random sample. The accept-reject algorithm is guaranteed to sample from the distribution with the specified relative probabilities. After much dice rolling you finally end up accepting a candidate: 'summon frog'. You breathe a sigh of relief as you now you can get back to the (routine in comparison) business of handling the battle between the troll-orcs and dragon-elves. However, not to be outdone, another player decides to cast 'Level. 2 arcane cyber-effect storm.' For this spell, two different random effects occur: a randomly generated attack, and a randomly generated character buff. The manual for this spell is so huge that it can only fit on a CD. The player boots you up and shows you a page. Your jaw drops: the entry for each attack is about as large a the manual for the previous spell, because it lists a relative probability for each possible accompanying buff 'Cupric Blade' The most likely buff accompanying this attack is 'Hotelling aura' 'Jackal Vision' is 33% as likely to accompany this attack as 'Hotelling aura' 'Toaster Ears' is 20% as likely to accompany this attack as 'Hotelling aura' ... Similarly, the probability of a particular attack spell occurring depends on the probability of the buff occurring. It would be justified to wonder if a proper probability distribution can even be defined given this information. Well, it turns out that if there is one, it is uniquely specified by the conditional probabilities given in the manual. But how to sample from it? Luckily for you, the CD comes with an automated Gibbs' sampler, because you would have to spend an eternity doing the following by hand. Gibbs' sampler algorithm 1) Choose an attack spell randomly 2) Use the accept-reject algorithm to choose the buff conditional on the attack 3) Forget the attack spell you chose in step 1. Choose a new attack spell using the accept-reject algorithm conditional on the buff in step 2 4) Go to step 2, repeat forever (though usually 10000 iterations will be enough) 5) Whatever your algorithm has at the last iteration, is your sample. You see, in general, MCMC samplers are only asymptotically guaranteed to generate samples from a distribution with the specified conditional probabilities. But in many cases, MCMC samplers are the only practical solution available.
Can someone explain Gibbs sampling in very simple words? [duplicate] You are a dungeonmaster hosting Dungeons & Dragons and a player casts 'Spell of Eldritch Chaotic Weather (SECW). You've never heard of this spell before, but it turns out it is quite involved. The
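The accept-reject step described in this answer is small enough to sketch in code. Below is a minimal Python sketch, assuming the book's relative probabilities are stored in a dictionary scaled so the most likely event has weight 1.0; the event names and weights other than 'fireball' and 'duck storm' are invented for illustration.

import random

# Relative probabilities from the (hypothetical) spell book, scaled so the
# most likely event ('fireball') has weight 1.0. Weights other than those
# mentioned in the answer are made up.
relative_prob = {'fireball': 1.00, 'duck storm': 0.50, 'summon frog': 0.44}
events = list(relative_prob)

def accept_reject_sample():
    """Sample one event using the accept-reject scheme from the answer."""
    while True:
        candidate = random.choice(events)                 # step 1: pick a candidate uniformly (the d1000 roll)
        if random.random() < relative_prob[candidate]:    # step 2: accept with probability relative to 'fireball'
            return candidate                              # step 3: the accepted candidate is the sample

print(accept_reject_sample())

Because the proposal is uniform over the events and acceptance is proportional to the relative weight, the accepted events follow the distribution with the specified relative probabilities.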
2,031
Can someone explain Gibbs sampling in very simple words? [duplicate]
I find this document GIBBS SAMPLING FOR THE UNINITIATED by Resnik & Hardisty very useful for folks without a statistics background. It explains why and how to use Gibbs sampling, and has examples demonstrating the algorithm. It seems I cannot comment yet. Gibbs sampling is not a self-contained concept; it requires some prerequisite knowledge. Below is the knowledge chain I summarized from my own study, for your reference (my major was applied physics): Monte Carlo (high-level understanding), then Markov models (high level), then Bayes' theorem, then Gibbs sampling. The document I named here roughly follows this chain. If the link is broken, google the document name and you will find it. Some thoughts: I don't think Gibbs sampling can be understood solely from a few abstract summaries. There is no shortcut for it; you need to understand the math behind it. And since it's more like a "technique", my criterion for "understanding it" is "you can edit its code and understand what you're doing (not necessarily from scratch)". Those who think they have understood it by looking at some quick notes probably just understand what "Markov Chain Monte Carlo" is at a high level and think they have got it all (I had this illusion myself).
Can someone explain Gibbs sampling in very simple words? [duplicate]
I find this document GIBBS SAMPLING FOR THE UNINITIATED by Resnik & Hardisty very useful for non-statistics background folks. It explains why & how to use Gibbs sampling, and has examples demonstratin
Can someone explain Gibbs sampling in very simple words? [duplicate] I find this document GIBBS SAMPLING FOR THE UNINITIATED by Resnik & Hardisty very useful for non-statistics background folks. It explains why & how to use Gibbs sampling, and has examples demonstrating the algo. Seems I cannot comment yet. Gibbs sampling is not a self-contained concept. It requires some prerequisite knowledge. Below is the knowledge chain i summarized from my own study, as for your reference (My major was applied physics): Monte Carlo (high level understanding) Markov model (high level) Bayes theorem Gibbs sampling The document I named here is roughly following the chain. If the link is broken, google the document name. You will find it. Some thoughts: I don't think Gibbs sampling can be understood solely by some abstracts. There is no shortcut for it. You need to understand the math behind it. And since it's more like a "technique", my criterion of "understanding it" is "you can edit its code and understand what you're doing (not necessarily from scratch)". For those who think they have understood it by looking at some quick notes, they probably just understand what is "Markov Chain Monte Carlo" in a high level and think they have got it all (I made this illusion myself).
Can someone explain Gibbs sampling in very simple words? [duplicate] I find this document GIBBS SAMPLING FOR THE UNINITIATED by Resnik & Hardisty very useful for non-statistics background folks. It explains why & how to use Gibbs sampling, and has examples demonstratin
2,032
Can someone explain Gibbs sampling in very simple words? [duplicate]
From wikipedia: "The goal of Gibbs Sampling here is to approximate the distribution of $P(\mathbf{Z}|\mathbf{W};\alpha,\beta)$". Notation can be found on the wiki site or in the original paper here. One "scan" of Gibbs sampling targeting the above distribution will give you draws from the following probability distributions: $P(\mathbf{Z}_{(1,1)}|\mathbf{Z}_{-(1,1)},\mathbf{W};\alpha,\beta)$, $P(\mathbf{Z}_{(1,2)}|\mathbf{Z}_{-(1,2)},\mathbf{W};\alpha,\beta)$, $P(\mathbf{Z}_{(1,3)}|\mathbf{Z}_{-(1,3)},\mathbf{W};\alpha,\beta),\ldots, P(\mathbf{Z}_{(N,K)}|\mathbf{Z}_{-(N,K)},\mathbf{W};\alpha,\beta)$. You can either run through them in sequence, or you can randomly choose which of these to sample from. But you keep doing scans over and over to get a lot of samples. Whichever option you choose, you get a sequence of $\mathbf{Z}$s: $$ \mathbf{Z}^1, \mathbf{Z}^2, \mathbf{Z}^3\ldots $$ Each $\mathbf{Z}^i$ is an $N\times K$ matrix. Also, for two consecutive $\mathbf{Z}$ matrices, only one element will be different. That's because you're sampling from a distribution $P(\mathbf{Z}_{(m,n)}|\mathbf{Z}_{-(m,n)},\mathbf{W};\alpha,\beta)$ when you go from one sample to the next. Why would you want this? Don't we want independent and identically distributed draws from $P(\mathbf{Z}|\mathbf{W};\alpha,\beta)$? That way we could use the law of large numbers and central limit theorems to approximate expectations, and we would have some idea of the error. But I doubt these $\mathbf{Z}$ draws are independent. And are they even identical (are they even coming from the same distribution)? Gibbs sampling can still give you a law of large numbers and a central limit theorem. $\mathbf{Z}^1, \mathbf{Z}^2, \mathbf{Z}^3\ldots$ is a Markov chain with stationary/invariant distribution $P(\mathbf{Z}|\mathbf{W};\alpha,\beta)$. That means the marginal distribution of each draw is the distribution you're targeting (so they're identically distributed draws). However, they are not independent. In practice this means you run the chain for longer or you subsample the chain (only take every 100th sample, say). Everything can still "work," though. For more information I would click the link underneath the question. There are some good references posted in that thread. This answer just attempts to give you the gist using the notation in common LDA references.
Can someone explain Gibbs sampling in very simple words? [duplicate]
From wikipedia: "The goal of Gibbs Sampling here is to approximate the distribution of $P(\mathbf{Z}|\mathbf{W};\alpha,\beta)$" Notation can be found on the wiki site or from the original paper here.
Can someone explain Gibbs sampling in very simple words? [duplicate] From wikipedia: "The goal of Gibbs Sampling here is to approximate the distribution of $P(\mathbf{Z}|\mathbf{W};\alpha,\beta)$" Notation can be found on the wiki site or from the original paper here. One "scan" of Gibbs sampling targeting the above distribution will give you draws from the following probability distributions: $P(\mathbf{Z}_{(1,1)}|\mathbf{Z}_{-(1,1)}\mathbf{W};\alpha,\beta)$, $P(\mathbf{Z}_{(1,2)}|\mathbf{Z}_{-(1,2)}\mathbf{W};\alpha,\beta)$, $P(\mathbf{Z}_{(1,3)}|\mathbf{Z}_{-(1,3)}\mathbf{W};\alpha,\beta),\ldots, P(\mathbf{Z}_{(N,K)}|\mathbf{Z}_{-(N,K)}\mathbf{W};\alpha,\beta)$. You can either run through them in a sequence, or you can randomly chose which of these to sample form. But you keep doing scans over and over to get a lot of samples. Whatever option you choose, you get a sequence of $\mathbf{Z}$s. $$ \mathbf{Z}^1, \mathbf{Z}^2, \mathbf{Z}^3\ldots $$ Each $\mathbf{Z}^i$ is an $N\times K$ matrix. Also, for two consecutive $\mathbf{Z}$ matrices, only one element will be different. That's because you're sampling from a distribution $P(\mathbf{Z}_{(m,n)}|\mathbf{Z}_{-(m,n)}\mathbf{W};\alpha,\beta)$ when you go from one sample to the next. Why would you want this? Don't we want independent and identical draws from $P(\mathbf{Z}|\mathbf{W};\alpha,\beta)$? That way we could use the law of large numbers and central limit theorems to approximate expectations, and we would have some idea of the error. But I doubt these $\mathbf{Z}$ draws are independent. And are they even identical (are they even coming from the same distribution)? Gibbs sampling can still give you a law of large numbers and a central limit theorem. $\mathbf{Z}^1, \mathbf{Z}^2, \mathbf{Z}^3\ldots $ is a Markov Chain with stationary/invariant distribution $P(\mathbf{Z}|\mathbf{W};\alpha,\beta)$. That means the marginal distribution of each draw is from the distribution you're targetting (so they're identical draws). However, they are not independent. In practice this means you run the chain for longer or you subsample the chain (only take every 100th sample, say). Everything can still "work," though. For more information I would click the link underneath the question. There are some good references posted in that thread. This answer just attempts to give you the jist using the notation in common LDA references.
Can someone explain Gibbs sampling in very simple words? [duplicate] From wikipedia: "The goal of Gibbs Sampling here is to approximate the distribution of $P(\mathbf{Z}|\mathbf{W};\alpha,\beta)$" Notation can be found on the wiki site or from the original paper here.
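The scan structure described above is easier to see on a toy example that is not LDA. The following Python sketch runs a Gibbs sampler for a standard bivariate normal with correlation rho = 0.8, where both full conditionals are known in closed form; the burn-in of 1000 draws and thinning to every 10th sample are arbitrary choices for illustration.

import numpy as np

# Illustrative Gibbs sampler for a bivariate normal with correlation rho --
# NOT the LDA model above. It only shows the "scan" structure: each step
# redraws one coordinate from its conditional given the current value of the other.
rho, n_iter = 0.8, 10000
rng = np.random.default_rng(0)
x, y = 0.0, 0.0
samples = np.empty((n_iter, 2))
for i in range(n_iter):
    # conditional of x given y is N(rho*y, 1 - rho^2); similarly for y given x
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples[i] = x, y

# consecutive draws are correlated (a Markov chain), so one might discard an
# initial burn-in and/or thin the chain, e.g. keep only every 10th sample
kept = samples[1000::10]
print(np.corrcoef(kept.T)[0, 1])   # should be close to rho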
2,033
What correlation makes a matrix singular and what are implications of singularity or near-singularity?
What is a singular matrix?
A square matrix is singular, that is, its determinant is zero, if it contains rows or columns which are proportionally interrelated; in other words, one or more of its rows (columns) is exactly expressible as a linear combination of all or some of its other rows (columns), the combination being without a constant term. Imagine, for example, a $3 \times 3$ matrix $A$ - symmetric, like a correlation matrix, or asymmetric. If in terms of its entries it appears that $\text{col}_3 = 2.15 \cdot \text{col}_1$, for example, then the matrix $A$ is singular. If, as another example, its $\text{row}_2 = 1.6 \cdot \text{row}_1 - 4 \cdot \text{row}_3$, then $A$ is again singular. As a particular case, if any row contains just zeros, the matrix is also singular because any column then is a linear combination of the other columns. In general, if any row (column) of a square matrix is a weighted sum of the other rows (columns), then any of the latter is also a weighted sum of the other rows (columns). A singular or near-singular matrix is often referred to as an "ill-conditioned" matrix because it causes problems in many statistical data analyses.

What data produce a singular correlation matrix of variables?
What must multivariate data look like in order for its correlation or covariance matrix to be a singular matrix as described above? It is when there are linear interdependencies among the variables. If some variable is an exact linear combination of the other variables, with a constant term allowed, the correlation and covariance matrices of the variables will be singular. The dependency observed in such a matrix between its columns is actually the same dependency as the dependency between the variables in the data observed after the variables have been centered (their means brought to 0) or standardized (if we mean a correlation rather than covariance matrix). Some frequent particular situations when the correlation/covariance matrix of variables is singular: (1) the number of variables is equal to or greater than the number of cases; (2) two or more variables sum up to a constant; (3) two variables are identical or differ merely in mean (level) or variance (scale). Also, duplicating observations in a dataset will lead the matrix towards singularity: the more times you clone a case, the closer singularity is. So, when doing some sort of imputation of missing values it is always beneficial (from both a statistical and a mathematical view) to add some noise to the imputed data.

Singularity as geometric collinearity
From a geometrical viewpoint, singularity is (multi)collinearity (or "complanarity"): variables displayed as vectors (arrows) in space lie in a space of dimensionality lower than the number of variables - in a reduced space. (That dimensionality is known as the rank of the matrix; it is equal to the number of non-zero eigenvalues of the matrix.) In a more distant or "transcendental" geometrical view, singularity or zero-definiteness (presence of a zero eigenvalue) is the bending point between positive definiteness and non-positive definiteness of a matrix. When some of the vectors-variables (which the correlation/covariance matrix represents) "go beyond" lying even in the reduced euclidean space - so that they cannot "converge in" or "perfectly span" euclidean space anymore - non-positive definiteness appears, i.e. some eigenvalues of the correlation matrix become negative. (See about non-positive definite, aka non-gramian, matrices here.) 
A non-positive definite matrix is also "ill-conditioned" for some kinds of statistical analysis.

Collinearity in regression: a geometric explanation and implications
The first picture below shows a normal regression situation with two predictors (we'll speak of linear regression). The picture is copied from here, where it is explained in more detail. In short, moderately correlated (= having an acute angle between them) predictors $X_1$ and $X_2$ span the 2-dimensional space "plane X". The dependent variable $Y$ is projected onto it orthogonally, leaving the predicted variable $Y'$ and the residuals with st. deviation equal to the length of $e$. The R-square of the regression is determined by the angle between $Y$ and $Y'$ (it equals the squared cosine of that angle), and the two regression coefficients are directly related to the skew coordinates $b_1$ and $b_2$, respectively.

The picture below shows the regression situation with completely collinear predictors. $X_1$ and $X_2$ correlate perfectly and therefore these two vectors coincide and form a line, a 1-dimensional space. This is a reduced space. Mathematically, though, plane X must exist in order to solve a regression with two predictors - but the plane is not defined anymore, alas. Fortunately, if we drop either one of the two collinear predictors from the analysis, the regression is then simply solved, because a one-predictor regression needs a one-dimensional predictor space. We see the prediction $Y'$ and error $e$ of that (one-predictor) regression drawn on the picture. There exist other approaches as well, besides dropping variables, to get rid of collinearity.

The final picture below displays a situation with nearly collinear predictors. This situation is different and a bit more complex and nasty. $X_1$ and $X_2$ (both shown again in blue) correlate tightly and thence almost coincide. But there is still a tiny angle between them, and because of the non-zero angle, plane X is defined (this plane on the picture looks like the plane on the first picture). So, mathematically there is no problem solving the regression. The problem that arises here is a statistical one. Usually we do regression to infer about the R-square and the coefficients in the population. From sample to sample, the data vary a bit. So, if we took another sample, the juxtaposition of the two predictor vectors would change slightly, which is normal. What is not "normal" is that under near collinearity this leads to devastating consequences. Imagine that $X_1$ deviated just a little down, beyond plane X - as shown by the grey vector. Because the angle between the two predictors was so small, plane X, which will come through $X_2$ and through that drifted $X_1$, will drastically diverge from the old plane X. Thus, because $X_1$ and $X_2$ are so highly correlated, we expect a very different plane X in different samples from the same population. As plane X is different, the predictions, R-square, residuals, coefficients - everything becomes different, too. This is clearly seen in the picture, where plane X swung by some 40 degrees. In a situation like that, estimates (coefficients, R-square etc.) are very unreliable, a fact expressed by their huge standard errors. And in contrast, with predictors far from collinear, estimates are reliable because the space spanned by the predictors is robust to those sampling fluctuations of the data.

Collinearity as a function of the whole matrix
Even a high correlation between two variables, if it is below 1, doesn't necessarily make the whole correlation matrix singular; it depends on the rest of the correlations as well. 
For example, this correlation matrix:

1.000  .990  .200
 .990 1.000  .100
 .200  .100 1.000

has determinant .00950, which is still different enough from 0 to be considered acceptable in many statistical analyses. But this matrix:

1.000  .990  .239
 .990 1.000  .100
 .239  .100 1.000

has determinant .00010, much closer to 0.

Collinearity diagnostics: further reading
Statistical data analyses, such as regressions, incorporate special indices and tools to detect collinearity strong enough to consider dropping some of the variables or cases from the analysis, or to undertake other remedial measures. Please search (including this site) for "collinearity diagnostics", "multicollinearity", "singularity/collinearity tolerance", "condition indices", "variance decomposition proportions", "variance inflation factors (VIF)".
What correlation makes a matrix singular and what are implications of singularity or near-singularit
What is singular matrix? A square matrix is singular, that is, its determinant is zero, if it contains rows or columns which are proportionally interrelated; in other words, one or more of its rows (c
What correlation makes a matrix singular and what are implications of singularity or near-singularity? What is singular matrix? A square matrix is singular, that is, its determinant is zero, if it contains rows or columns which are proportionally interrelated; in other words, one or more of its rows (columns) is exactly expressible as a linear combination of all or some other its rows (columns), the combination being without a constant term. Imagine, for example, a $3 \times 3$ matrix $A$ - symmetric, like correlaton matrix, or asymmetric. If in terms of its entries it appears that $\text {col}_3 = 2.15 \cdot \text {col}_1$ for example, then the matrix $A$ is singular. If, as another example, its $\text{row}_2 = 1.6 \cdot \text{row}_1 - 4 \cdot \text{row}_3$, then $A$ is again singular. As a particular case, if any row contains just zeros, the matrix is also singular because any column then is a linear combination of the other columns. In general, if any row (column) of a square matrix is a weighted sum of the other rows (columns), then any of the latter is also a weighted sum of the other rows (columns). Singular or near-singular matrix is often referred to as "ill-conditioned" matrix because it delivers problems in many statistical data analyses. What data produce singular correlation matrix of variables? What must multivariate data look like in order for its correlation or covariance matrix to be a singular matrix as described above? It is when there is linear interdependances among the variables. If some variable is an exact linear combination of the other variables, with constant term allowed, the correlation and covariance matrces of the variables will be singular. The dependency observed in such matrix between its columns is actually that same dependency as the dependency between the variables in the data observed after the variables have been centered (their means brought to 0) or standardized (if we mean correlation rather than covariance matrix). Some frequent particular situations when the correlation/covariance matrix of variables is singular: (1) Number of variables is equal or greater than the number of cases; (2) Two or more variables sum up to a constant; (3) Two variables are identical or differ merely in mean (level) or variance (scale). Also, duplicating observations in a dataset will lead the matrix towards singularity. The more times you clone a case the closer is singularity. So, when doing some sort of imputation of missing values it is always beneficial (from both statistical and mathematical view) to add some noise to the imputed data. Singularity as geometric collinearity In geometrical viewpoint, singularity is (multi)collinearity (or "complanarity"): variables displayed as vectors (arrows) in space lie in the space of dimentionality lesser than the number of variables - in a reduced space. (That dimensionality is known as the rank of the matrix; it is equal to the number of non-zero eigenvalues of the matrix.) In a more distant or "transcendental" geometrical view, singularity or zero-definiteness (presense of zero eigenvalue) is the bending point between positive definiteness and non-positive definiteness of a matrix. When some of the vectors-variables (which is the correlation/covariance matrix) "go beyond" lying even in the reduced euclidean space - so that they cannot "converge in" or "perfectly span" euclidean space anymore, non-positive definiteness appears, i.e. some eigenvalues of the correlation matrix become negative. 
(See about non-positive definite matrix, aka non-gramian here.) Non-positive definite matrix is also "ill-conditioned" for some kinds of statistical analysis. Collinearity in regression: a geometric explanation and implications The first picture below shows a normal regression situation with two predictors (we'll speek of linear regression). The picture is copied from here where it is explained in more details. In short, moderately correlated (= having acute angle between them) predictors $X_1$ and $X_2$ span 2-dimesional space "plane X". The dependent variable $Y$ is projected onto it orthogonally, leaving the predicted variable $Y'$ and the residuals with st. deviation equal to the length of $e$. R-square of the regression is the angle between $Y$ and $Y'$, and the two regression coefficients are directly related to the skew coordinates $b_1$ and $b_2$, respectively. The picture below shows regression situation with completely collinear predictors. $X_1$ and $X_2$ correlate perfectly and therefore these two vectors coincide and form the line, a 1-dimensional space. This is a reduced space. Mathematically though, plane X must exist in order to solve regression with two predictors, - but the plane is not defined anymore, alas. Fortunately, if we drop any one of the two collinear predictors out of analysis the regression is then simply solved because one-predictor regression needs one-dimensional predictor space. We see prediction $Y'$ and error $e$ of that (one-predictor) regression, drawn on the picture. There exist other approaches as well, besides dropping variables, to get rid of collinearity. The final picture below displays a situation with nearly collinear predictors. This situation is different and a bit more complex and nasty. $X_1$ and $X_2$ (both shown again in blue) tightly correlate and thence almost coincide. But there is still a tiny angle between, and because of the non-zero angle, plane X is defined (this plane on the picture looks like the plane on the first picture). So, mathematically there is no problem to solve the regression. The problem which arises here is a statistical one. Usually we do regression to infer about the R-square and the coefficients in the population. From sample to sample, data varies a bit. So, if we took another sample, the juxtaposition of the two predictor vectors would change slightly, which is normal. Not "normal" is that under near collinearity it leads to devastating consequences. Imagine that $X_1$ deviated just a little down, beyond plane X - as shown by grey vector. Because the angle between the two predictors was so small, plane X which will come through $X_2$ and through that drifted $X_1$ will drastically diverge from old plane X. Thus, because $X_1$ and $X_2$ are so much correlated we expect very different plane X in different samples from the same population. As plane X is different, predictions, R-square, residuals, coefficients - everything become different, too. It is well seen on the picture, where plane X swung somewhere 40 degrees. In a situation like that, estimates (coefficients, R-square etc.) are very unreliable which fact is expressed by their huge standard errors. And in contrast, with predictors far from collinear, estimates are reliable because the space spanned by the predictors is robust to those sampling fluctuations of data. 
Collinearity as a function of the whole matrix Even a high correlation between two variables, if it is below 1, doesn't necessarily make the whole correlation matrix singular; it depends on the rest correlations as well. For example this correlation matrix: 1.000 .990 .200 .990 1.000 .100 .200 .100 1.000 has determinant .00950 which is yet enough different from 0 to be considered eligible in many statistical analyses. But this matrix: 1.000 .990 .239 .990 1.000 .100 .239 .100 1.000 has determinant .00010, a degree closer to 0. Collinearity diagnostics: further reading Statistical data analyses, such as regressions, incorporate special indices and tools to detect collinearity strong enough to consider dropping some of the variables or cases from the analysis, or to undertake other healing means. Please search (including this site) for "collinearity diagnostics", "multicollinearity", "singularity/collinearity tolerance", "condition indices", "variance decomposition proportions", "variance inflation factors (VIF)".
What correlation makes a matrix singular and what are implications of singularity or near-singularit What is singular matrix? A square matrix is singular, that is, its determinant is zero, if it contains rows or columns which are proportionally interrelated; in other words, one or more of its rows (c
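The two correlation matrices quoted in the answer can be checked numerically. Below is a short numpy sketch; the determinant values come from the answer, while the eigenvalues and condition numbers are simply whatever numpy reports.

import numpy as np

A = np.array([[1.000, .990, .200],
              [ .990, 1.000, .100],
              [ .200, .100, 1.000]])
B = np.array([[1.000, .990, .239],
              [ .990, 1.000, .100],
              [ .239, .100, 1.000]])

for name, M in [("A", A), ("B", B)]:
    eig = np.linalg.eigvalsh(M)                       # eigenvalues of the symmetric matrix
    print(name,
          "det = %.5f" % np.linalg.det(M),            # ~.00950 for A, ~.00010 for B
          "smallest eigenvalue = %.5f" % eig.min(),
          "condition number = %.1f" % (eig.max() / eig.min()))

The smaller the smallest eigenvalue (and the larger the condition number), the closer the matrix is to singular, which is what the collinearity diagnostics mentioned in the answer quantify.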
2,034
Who Are The Bayesians?
I'm going to take your questions in order: The question is, Who are the Bayesians today? Anybody who does Bayesian data analysis and self-identifies as "Bayesian". Just like a programmer is someone who programs and self-identifies as a "programmer". A slight difference is that for historical reasons Bayesian has ideological connotations, because of the often heated argument between proponents of "frequentist" interpretations of probability and proponents of "Bayesian" interpretations of probability. Are they some select academic institutions, where you know that if you go there you will become a Bayesian? No, just like other parts of statistics you just need a good book (and perhaps a good teacher). If so, are they specially sought after? Bayesian data analysis is a very useful tool when doing statistical modeling, which I imagine is a pretty sought-after skill (even if companies perhaps aren't specifically looking for "Bayesians"). Are we referring to just a few respected statisticians and mathematicians, and if so who are they? There are many respected statisticians that I believe would call themselves Bayesians, but those are not the Bayesians. Do they even exist as such, these pure "Bayesians"? That's a bit like asking "Do these pure programmers exist"? There is an amusing article called 46656 Varieties of Bayesians, and sure there is a healthy argument among "Bayesians" regarding many foundational issues. Just like programmers can argue over the merits of different programming techniques. (BTW, pure programmers program in Haskell). Would they happily accept the label? Some do, some don't. When I discovered Bayesian data analysis I thought it was the best thing since sliced bread (I still do) and I was happy to call myself a "Bayesian" (not least to irritate the p-value people at my department). Nowadays I don't like the term; I think it might alienate people, as it makes Bayesian data analysis sound like some kind of cult, which it isn't, rather than a useful method to have in your statistical toolbox. Is it always a flattering distinction? Nope! As far as I know, the term "Bayesian" was introduced by the famous statistician Fisher as a derogatory term. Before that it was called "inverse probability" or just "probability". Are they mathematicians with peculiar slides in meetings, deprived of any p values and confidence intervals, easily spotted on the brochure? Well, there are conferences in Bayesian statistics, and I don't think they include that many p-values. Whether you'll find the slides peculiar will depend on your background... How much of a niche is being a "Bayesian"? Are we referring to a minority of statisticians? I still think a minority of statisticians deal with Bayesian statistics, but I also think the proportion is growing. Or is current Bayesian-ism equated with machine learning applications? Nope, but Bayesian models are used a lot in machine learning. Here is a great machine learning book that presents machine learning from a Bayesian/probabilistic perspective: http://www.cs.ubc.ca/~murphyk/MLbook/ Hope that answered most of the questions :) Update: [C]ould you please consider adding a list of specific techniques or premises that distinguish Bayesian statistics? 
What distinguishes Bayesian statistics is the use of Bayesian models :) Here is my spin on what a Bayesian model is: A Bayesian model is a statistical model where you use probability to represent all uncertainty within the model, both the uncertainty regarding the output and the uncertainty regarding the input (aka the parameters) of the model. The whole prior/posterior/Bayes theorem thing follows from this, but in my opinion, using probability for everything is what makes it Bayesian (and indeed a better word would perhaps just be something like probabilistic model). Now, Bayesian models can be tricky to fit, and there is a host of different computational techniques that are used for this. But these techniques are not Bayesian in themselves. To namedrop some computational techniques: Markov chain Monte Carlo, Metropolis-Hastings, Gibbs sampling, Hamiltonian Monte Carlo, variational Bayes, approximate Bayesian computation, particle filters, Laplace approximation, and so on... Who was the famous statistician who introduced the term 'Bayesian' as derogatory? It was supposedly Ronald Fisher. The paper When did Bayesian inference become "Bayesian"? gives the history of the term "Bayesian".
Who Are The Bayesians?
I'm going to take your questions in order: The question is, Who are the Bayesians today? Anybody who does Bayesian data analysis and self-identifies as "Bayesian". Just like a programmer is someone
Who Are The Bayesians? I'm going to take your questions in order: The question is, Who are the Bayesians today? Anybody who does Bayesian data analysis and self-identifies as "Bayesian". Just like a programmer is someone who programs and self-identifies as a "programmer". A slight difference is that for historical reasons Bayesian has ideological connotations, because of the often heated argument between proponents of "frequentist" interpretations of probability and proponents of "Bayesian" interpretations of probability. Are they some select academic institutions, where you know that if you go there you will become a Bayesian? No, just like other parts of statistics you just need a good book (and perhaps a good teacher). If so, are they specially sought after? Bayesian data analysis is a very useful tool when doing statistical modeling, which I imagine is a pretty sought-after skill, (even if companies perhaps aren't specifically looking for "Bayesians"). Are we referring to just a few respected statisticians and mathematicians, and if so who are they? There are many respected statisticians that I believe would call themselves Bayesians, but those are not the Bayesians. Do they even exist as such, these pure "Bayesians"? That's a bit like asking "Do these pure programmers exist"? There is an amusing article called 46656 Varieties of Bayesians, and sure there is a healthy argument among "Bayesians" regarding many foundational issues. Just like programmers can argue over the merits of different programming techniques. (BTW, pure programmers program in Haskell). Would they happily accept the label? Some do, some don't. When I discovered Bayesian data analysis I thought it was the best since sliced bread (I still do) and I was happy to call myself a "Bayesian" (not least to irritate the p-value people at my department). Nowadays I don't like the term, I think it might alienate people as it makes Bayesian data analysis sound like some kind of cult, which it isn't, rather than a useful method to have in your statistical toolbox. Is it always a flattering distinction? Nope! As far as I know, the term "Bayesian" was introduced by the famous statistician Fisher as a derogatory term. Before that it was called "inverse probability" or just "probability". Are they mathematicians with peculiar slides in meetings, deprived of any p values and confidence intervals, easily spotted on the brochure? Well, there are conferences in Bayesian statistics, and I don't think they include that many p-values. Whether you'll find the slides peculiar will depend on your background... How much of a niche is being a "Bayesian"? Are we referring to a minority of statisticians? I still think a minority of statisticians deal with Bayesian statistics, but I also think the proportion is growing. Or is current Bayesian-ism equated with machine learning applications? Nope, but Bayesian models are used a lot in machine learning. Here is a great machine learning book that presents machine learning from a Bayesian/probibalistic perspective: http://www.cs.ubc.ca/~murphyk/MLbook/ Hope that answered most of the questions :) Update: [C]ould you please consider adding a list of specific techniques or premises that distinguish Bayesian statistics? 
What distinguish Bayesian statistics is the use of Bayesian models :) Here is my spin on what a Bayesian model is: A Bayesian model is a statistical model where you use probability to represent all uncertainty within the model, both the uncertainty regarding the output but also the uncertainty regarding the input (aka parameters) to the model. The whole prior/posterior/Bayes theorem thing follows on this, but in my opinion, using probability for everything is what makes it Bayesian (and indeed a better word would perhaps just be something like probabilistic model). Now, Bayesian models can be tricky to fit, and there is a host of different computational techniques that are used for this. But these techniques are not Bayesian in themselves. To namedrop some computational techniques: Markov chain Monte Carlo Metropolis-Hastings Gibbs sampling Hamiltonian Monte Carlo Variational Bayes Approximate Bayesian computation Particle filters Laplace approximation And so on... Who was the famous statistician who introduced the term 'Bayesian' as derogatory? It was supposedly Ronald Fisher. The paper When did Bayesian inference become "Bayesian"? gives the history of the term "Bayesian".
Who Are The Bayesians? I'm going to take your questions in order: The question is, Who are the Bayesians today? Anybody who does Bayesian data analysis and self-identifies as "Bayesian". Just like a programmer is someone
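To make concrete the point that the listed techniques are general-purpose tools for fitting Bayesian models (and not Bayesian in themselves), here is a minimal Metropolis-Hastings sketch in Python for a toy model: a coin's heads-probability theta with a uniform prior and made-up data of 7 heads in 10 flips. The exact posterior is Beta(8, 4), so the chain's output can be sanity-checked; the proposal scale and iteration counts are arbitrary.

import numpy as np

heads, flips = 7, 10                 # invented data for illustration
rng = np.random.default_rng(1)

def log_post(theta):
    """Log posterior under a uniform prior: binomial log-likelihood (up to a constant)."""
    if not 0 < theta < 1:
        return -np.inf
    return heads * np.log(theta) + (flips - heads) * np.log(1 - theta)

theta, chain = 0.5, []
for _ in range(20000):
    proposal = theta + rng.normal(0, 0.1)                         # symmetric random-walk proposal
    if np.log(rng.random()) < log_post(proposal) - log_post(theta):
        theta = proposal                                          # accept the move
    chain.append(theta)

print(np.mean(chain[2000:]))   # should be close to the Beta(8, 4) mean, 8/12 = 0.667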
2,035
Who Are The Bayesians?
Bayesians are people who define probabilities as a numerical representation of the plausibility of some proposition. Frequentists are people who define probabilities as representing long-run frequencies. If you are only happy with one or the other of these definitions then you are either a Bayesian or a frequentist. If you are happy with either, and use the most appropriate definition for the task at hand, then you are a statistician! ;o) Basically, it boils down to the definition of a probability, and I would hope that most working statisticians would be able to see the benefits and disadvantages of both approaches. Regarding the question's "hint of skepticism regarding the chasm between lofty objectives, and arbitrariness in the selection of the prior distribution, or eventual use of frequentist maths after all": the skepticism also goes in the other direction. Frequentism was invented with the lofty objective of eliminating the subjectivity of existing thought on probability and statistics. However, the subjectivity is still there (for example in determining the appropriate level of significance in hypothesis testing), but it is just not made explicit, or is often simply ignored.
Who Are The Bayesians?
Bayesians are people who define probabilities as a numerical representation of the plausibility of some proposition. Frequentists are people who define probabilities as representing long run frequenc
Who Are The Bayesians? Bayesians are people who define probabilities as a numerical representation of the plausibility of some proposition. Frequentists are people who define probabilities as representing long run frequencies. If you are only happy with one or other of these definitions then you are either a Bayesian or a frequentist. If you are happy with either, and use the most appropriate definition for the task at hand, then you are a statistician! ;o) Basically, it boils down to the definition of a probability, and I would hope that most working statisticians would be able to see the benefits and disadvantages of both approaches. hint of skepticism regarding the chasm between lofty objectives, and arbitrariness in the selection of the prior distribution, or eventual use of frequentist maths after all. The skepticism also goes in the other direction. Frequentism was invented with the lofty objective of eliminating the subjectivity of existing thought on probability and statistics. However, the subjectivity is still there (for example in determining the appropriate level of significance in hypothesis testing), but it is just not made explicit, or often just ignored.
Who Are The Bayesians? Bayesians are people who define probabilities as a numerical representation of the plausibility of some proposition. Frequentists are people who define probabilities as representing long run frequenc
2,036
Who Are The Bayesians?
Andrew Gelman, for example, a professor of statistics and political science at Columbia University, is a prominent Bayesian. I suspect that most of the ISBA fellows would consider themselves Bayesians as well. In general, the following research topics typically reflect a Bayesian approach; if you read papers about them, it is likely the authors would describe themselves as "Bayesian": Markov-Chain Monte Carlo, Variational Bayesian Methods (the name gives that one away), Particle Filtering, and Probabilistic Programming.
Who Are The Bayesians?
Andrew Gelman, for example, a professor of statistics and political science at Columbia University, is a prominent Bayesian. I suspect the most of ISBA fellows would probably consider themselves Bayes
Who Are The Bayesians? Andrew Gelman, for example, a professor of statistics and political science at Columbia University, is a prominent Bayesian. I suspect the most of ISBA fellows would probably consider themselves Bayesians as well. In general, the following research topics typically reflect a Bayesian approach. If you read papers about them, it is likely the authors would describe themselves as "Bayesian" Markov-Chain Monte Carlo Variational Bayesian Methods (the name gives that one away) Particle Filtering Probabilistic programming
Who Are The Bayesians? Andrew Gelman, for example, a professor of statistics and political science at Columbia University, is a prominent Bayesian. I suspect the most of ISBA fellows would probably consider themselves Bayes
2,037
Who Are The Bayesians?
Today, we're all Bayesians, but there's a world beyond these two camps: algorithmic probability. I'm not sure what the standard reference on this subject is, but there's this beautiful paper by Kolmogorov on algorithmic complexity: A. N. Kolmogorov, Three approaches to the definition of the concept “quantity of information”, Probl. Peredachi Inf., 1965, Volume 1, Issue 1, 3–11. I'm sure there's an English translation. In this paper he defines the quantity of information in three ways: combinatorial, probabilistic and (new) algorithmic. The combinatorial approach directly maps to the frequentist view; the probabilistic one doesn't directly correspond to the Bayesian view, but it's compatible with it. UPDATE: If you're interested in the philosophy of probability then I want to point to a very interesting work, "The origins and legacy of Kolmogorov’s Grundbegriffe" by Glenn Shafer and Vladimir Vovk. We sort of forgot everything before Kolmogorov, and there was a lot going on before his seminal work. On the other hand, we don't know much about his philosophical views. It's generally thought that he was a frequentist, for instance. The reality is that he lived in the Soviet Union in the 1930s, where it was quite dangerous to venture into philosophy; literally, you could get in existential trouble, which some scientists did (they ended up in GULAG prisons). So, he was sort of forced to implicitly indicate that he was a frequentist. I think that in reality he was not just a mathematician but a scientist, and had a complex view of the applicability of probability theory to reality. There's also another paper by Vovk on Kolmogorov's algorithmic approach to randomness: Kolmogorov’s contributions to the foundations of probability. Vovk has created a game-theoretic approach to probabilities - also very interesting. UPDATE 2: Here's a Bayesian, actually, a professor from one of the universities in Washington, DC. He was trying to make a point that we should elect politicians who update their beliefs based on experiences, new observations. Here $P(B|E)$ is the posterior probability of belief $B$ after the new experience $E$, $P(B)$ is the prior, and $P(E|B)$ is the likelihood. He was trying to explain this to Colbert/Stewart "Rally for Fear" participants. UPDATE 3: I also wanted to point to something in Kolmogorov's original work that's not commonly known for some reason (or is easily forgotten) by practitioners. He had a section about connecting the theory to reality. In particular, he set two conditions for using the theory: A. If you repeat the experiment many times, then the frequency of occurrence will differ by only a small amount from the probability, practically certainly. B. If the probability is very small, then if you conduct the experiment only once you can be practically certain that the event will not occur. There are different interpretations of these conditions, but most people would agree that these are not purely frequentist views. Kolmogorov declared that he follows von Mises' approach to a certain extent, but he seemed to indicate that things are not as simple as they may appear. I often think of condition B, and can't come to a stable conclusion; it looks slightly different every time I think about it.
Who Are The Bayesians?
Today, we're all Bayesians, but there's a world beyond these two camps: algorithmic probability. I'm not sure what's the standard reference on this subject, but there's this beautiful paper by Kolmogo
Who Are The Bayesians? Today, we're all Bayesians, but there's a world beyond these two camps: algorithmic probability. I'm not sure what's the standard reference on this subject, but there's this beautiful paper by Kolmogorov on algorithmic complexity: A. N. Kolmogorov, Three approaches to the definition of the concept “quantity of information”, Probl. Peredachi Inf., 1965, Volume 1, Issue 1, 3–11. I'm sure there's an English translation. In this paper he defines the quantity of information in three ways: combinatorial, probabilistic and (new) algorithmic. Combinatorial directly maps to frequentist, Probabilist doesn't directly correspond to Bayesian, but it's compatible with it. UPDATE: If you're interested in the philosophy of the probability then I want to point to a very interesting work "The origins and legacy of Kolmogorov’s Grundbegriffe" by Glenn Shafer and Vladimir Vovk. We sort of forgot everything before Kolmogorov, and there was a lot going on before his seminal work. On the other hand, we don't know much about his philosophical views. It's generally thought that he was a frequentist, for instance. The reality's that he lived in Soviet Union in 1930', where it was quite dangerous to venture into philosophy, literally, you could get in existential trouble, which some scientist did (ended up in GULAG prisons). So, he was sort of forced to implicitly indicate that he was a frequentist. I think that in reality he was not just a mathematician, but he was a scientist, and had a complex view of applicability of probability theory to reality. There's also another paper by Vovk on Kolmogorov's algorithmic approach to randomness: Kolmogorov’s contributions to the foundations of probability Vovk has created a game-theoretic approach to probabilities - also very interesting. UPDATE 2: Here's a Bayesian, actually, a professor from one of the universities in Washington, DC. He was trying to make a point that we should elect politicians who update their beliefs based on experiences, new observations. Here $P(B|E)$ is the posterior belief $B$, after the new experience $E$; $P(E|B)$ is the prior. He was trying to explain this to Colbert/Stuart "Rally for Fear" participants. UPDATE 3: I also wanted to point to something in Kolmogorov's original work that's not commonly known for some reason (or easily forgotten) by practitioners. He had a section about connecting the theory to reality. In particular, he set two conditions for using the theory: A. if you repeat the experiment many times then the frequency of occurrence will differ by only a small amount from the probability, practically certainly B. If probability is very small, then if you conduct the experiment only once then you can be practically certain that the event will not occur There are different interpretations of these conditions, but most people would agree that these are not the pure frequentist's views. Kolmogorov declared that he follows von Mises' approach to certain extent, but he seemed to indicate that things are not as simple as it may appear. I often think of condition B, and can't come to a stable conclusion, it looks slightly different every time I think about it.
Who Are The Bayesians? Today, we're all Bayesians, but there's a world beyond these two camps: algorithmic probability. I'm not sure what's the standard reference on this subject, but there's this beautiful paper by Kolmogo
2,038
Who Are The Bayesians?
The most "hard core" Bayesian that I know of is Edwin Jaynes, who died in 1998. I'd expect further "hard core" Bayesians to be found among his pupils, especially Larry Bretthorst, who completed and edited his main work Probability Theory: The Logic of Science for posthumous publication. Other notable historic Bayesians include Harold Jeffreys and Leonard Savage. While I don't have a complete overview of the field, my impression is that the more recent popularity of Bayesian methods (especially in machine learning) is not due to deep philosophical conviction, but to the pragmatic position that Bayesian methods have proved useful in many applications. I think Andrew Gelman is typical of this position.
Who Are The Bayesians?
The most "hard core" Bayesian that I know of is Edwin Jaynes, deceased in 1998. I'd expect further "hard core" Bayesians to be found among his pupils, especially the posthumous co-author of his main w
Who Are The Bayesians? The most "hard core" Bayesian that I know of is Edwin Jaynes, deceased in 1998. I'd expect further "hard core" Bayesians to be found among his pupils, especially the posthumous co-author of his main work Probability Theory: The Logic of Science, Larry Bretthorst. Other notable historic Bayesians include Harold Jeffreys and and Leonard Savage. While I don't have a complete overview of the field, my impression is that the more recent popularity of Bayesian methods (especially in machine learning) is not due to deep philosophical conviction, but the pragmatic position that Bayesian methods have proved useful in many applications. I think typical for this position is Andrew Gelman.
Who Are The Bayesians? The most "hard core" Bayesian that I know of is Edwin Jaynes, deceased in 1998. I'd expect further "hard core" Bayesians to be found among his pupils, especially the posthumous co-author of his main w
2,039
Who Are The Bayesians?
I don't know who the Bayesians are (although I suppose I should have a prior distribution for that), but I do know who they are not. To quote the eminent, now departed Bayesian D.V. Lindley, "there is no one less Bayesian than an empirical Bayesian" (quoted in the Empirical Bayes section of Bayesian Methods: A Social and Behavioral Sciences Approach, Second Edition, by Jeff Gill). Meaning, I suppose, that even "Frequentists" think about what model makes sense (the choice of a model form in some sense constitutes a prior), as opposed to empirical Bayesians who are totally mechanical about everything. I think that in practice there is not that much difference in the results of statistical analysis performed by top-echelon Bayesians and Frequentists. What is scary is when you see a low-quality statistician who tries to rigidly pattern himself (never observed it with a female) after his ideological role model with absolute ideological purity, and approach analysis exactly as he thinks his role model would, but without the quality of thought and judgment the role model has. That can result in very bad analysis and recommendations. I think ultra-hard-core, but low-quality, ideologues are much more common among Bayesians than Frequentists. This particularly applies in Decision Analysis.
Who Are The Bayesians?
I don't know who the Bayesians are (although I suppose I should have a prior distribution for that), but I do know who they are not. To quote the eminent, now departed Bayesian, D.V. Lindley, "there i
Who Are The Bayesians? I don't know who the Bayesians are (although I suppose I should have a prior distribution for that), but I do know who they are not. To quote the eminent, now departed Bayesian, D.V. Lindley, "there is no one less Bayesian than an empirical Bayesian". Empirical Bayes section of Bayesian Methods: A Social and Behavioral Sciences Approach, Second Edition by Jeff Gill. Meaning I suppose that even "Frequentists" think about what model makes sense (choice of a model form in some sense constitutes a prior), as opposed to empirical Bayesians who are totally mechanical about everything. I think that in practice there is not that much difference in the results of statistical analysis performed by top echelon Bayesians and Frequentists. What is scary is when you see a low quality statistician who tries to rigidly pattern himself (never observed it with a female) after his ideological role model with absolute ideological purity, and approach analysis exactly as he thinks his role model would, but without the quality of thought and judgment the role model has. That can result in very bad analysis and recommendations. I think ultra-hard core, but low quality, ideologs are much more common among Bayesians than Frequentists. This particularly applies in Decision Analysis.
Who Are The Bayesians? I don't know who the Bayesians are (although I suppose I should have a prior distribution for that), but I do know who they are not. To quote the eminent, now departed Bayesian, D.V. Lindley, "there i
2,040
Who Are The Bayesians?
I'm probably too late to this discussion for anyone to notice this, but I think it is a shame that no one has pointed out the fact that the most important difference between Bayesian and Frequentist approaches is that the Bayesians (mostly) use methods that respect the likelihood principle whereas Frequentists almost invariably do not. The likelihood principle says that the evidence relevant to the statistical model parameter of interest is entirely contained in the relevant likelihood function. Frequentists who care about statistical theory or philosophy should be far more concerned by arguments about the validity of the likelihood principle than about arguments over the distinction between frequency and partial-belief interpretations of probability and about the desirability of prior probabilities. While it is possible for different interpretations of probability to coexist without conflict, and for some people to choose to supply a prior without requiring others to do so, if the likelihood principle is true in either a positive or normative sense then many Frequentist methods lose their claims to optimality. Frequentist attacks on the likelihood principle are vehement because that principle undermines their statistical world-view, but mostly those attacks miss their mark (http://arxiv.org/abs/1507.08394).
Who Are The Bayesians?
I'm probably too late to this discussion for anyone to notice this, but I think it is a shame that no-one has pointed out the fact that the most important difference between Bayesian and Frequentist a
Who Are The Bayesians? I'm probably too late to this discussion for anyone to notice this, but I think it is a shame that no-one has pointed out the fact that the most important difference between Bayesian and Frequentist approaches is that the Bayesians (mostly) use methods that respect the likelihood principle whereas Frequentists almost invariably do not. The likelihood principle says that the evidence relevant to the statistical model parameter of interest in entirely contained in the relevant likelihood function. Frequentists who care about statistical theory or philosophy should be far more concerned by arguments about the validity of the likelihood principle than about arguments over the distinction between frequency and partial belief interpretations of probability and about the desirability of prior probabilities. While it is possible for different interpretations of probability to coexist without conflict and for some people to choose to supply a prior without requiring others to do so, if the likelihood principle is true in either a positive or normative sense then many Frequentist methods lose their claims to optimality. Frequentist attacks on the likelihood principle are vehement because that principle undermines their statistical world-view, but mostly those attacks miss their mark (http://arxiv.org/abs/1507.08394).
Who Are The Bayesians? I'm probably too late to this discussion for anyone to notice this, but I think it is a shame that no-one has pointed out the fact that the most important difference between Bayesian and Frequentist a
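A classic concrete illustration of the conflict (not part of the original answer): suppose 9 successes and 3 failures are observed. If the design was "run 12 trials", the likelihood is binomial, $L_1(\theta) \propto \binom{12}{9}\theta^9(1-\theta)^3$; if the design was "run trials until the 3rd failure", it is negative binomial, $L_2(\theta) \propto \binom{11}{9}\theta^9(1-\theta)^3$. The two likelihoods are proportional as functions of $\theta$, so by the likelihood principle the evidence about $\theta$ is identical, yet a one-sided frequentist test of $\theta = 1/2$ gives different p-values under the two designs (about 0.073 versus 0.033), because the sampling distributions differ.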
2,041
Who Are The Bayesians?
You may believe you're a Bayesian, but you're probably wrong... http://www.rmm-journal.de/downloads/Article_Senn.pdf Bayesians derive the probability distribution of outcomes of interest given prior belief / prior information. To a Bayesian, this distribution (and its summaries) is what most people will be interested in. Contrast this with typical "frequentist" results that tell you the chance of seeing results as or more extreme than those observed, given that the null hypothesis is true (the p-value), or interval estimates for the parameter of interest, 95% of which would contain the true value if you could do repeated sampling (the confidence interval). Bayesian prior distributions are contentious because they are YOUR prior. There is no "correct" prior. Most pragmatic Bayesians look for external evidence that can be used for priors and then discount or modify this based on what is expected to be "reasonable" for the particular case. For example, sceptical priors may have a "lump" of probability on a null case - "How good would the data need to be to make me change my mind / change current practice?" Most will also look at the robustness of inferences to different priors. There is a group of Bayesians who look into "reference" priors that allow them to construct inferences that are not "influenced" by prior belief, so that they get probabilistic statements and interval estimates that have "frequentist" properties. There is also a group of "hardcore Bayesians" who might advocate not choosing a model (all models are wrong), and who might argue that exploratory analysis is bound to influence your priors and so shouldn't be done. Few are that radical, though... In most fields of statistics you'll find Bayesian analyses and practitioners, just as you'll find some folks who prefer non-parametrics...
2,042
Who Are The Bayesians?
Just to take up your last question (so I'm not after a prize!), about a link between a Bayesian/Frequentist approach and one's epistemological position, the most interesting author I've come upon is Deborah Mayo. A good starting point is this 2010 exchange between Mayo and Andrew Gelman (who emerges here as a somewhat heretical Bayesian). Mayo later published a detailed response to the Gelman & Shalizi paper here.
2,043
Who Are The Bayesians?
A subset of all Bayesians, i.e. those Bayesians who bothered to send an email, is listed here.
2,044
Who Are The Bayesians?
I would call Bruno de Finetti and L. J. Savage Bayesians. They worked on its philosophical foundations.
2,045
Who Are The Bayesians?
For understanding the foundational debate between frequentists and Bayesians, it would be hard to find a more authoritative voice than Bradley Efron. This topic has been a theme he has touched on numerous times in his career, but personally I found one of his older papers helpful: Controversies in the Foundations of Statistics (this one won an award for expository excellence).
2,046
How to choose nlme or lme4 R library for mixed effects models?
Both packages use Lattice as the backend, but nlme has some nice features like groupedData() and lmList() that are lacking in lme4 (IMO). From a practical perspective, the two most important differences seem to be that:

- lme4 extends nlme with other link functions: in nlme you cannot fit outcomes whose distribution is not Gaussian, whereas lme4 can be used to fit, for example, a mixed-effects logistic regression;
- in nlme it is possible to specify the variance-covariance matrix for the random effects (e.g. an AR(1)); this is not possible in lme4.

Now, lme4 can easily handle a very large number of random effects (hence, of individuals in a given study) thanks to its C code and the use of sparse matrices. The nlme package has somewhat been superseded by lme4, so I don't expect people to spend much time developing add-ons on top of nlme. Personally, when I have a continuous response in my model I tend to use both packages, but I'm now used to the lme4 way of fitting GLMMs. Rather than buying a book, take a look first at Doug Bates' draft book on R-forge: lme4: Mixed-effects Modeling with R.
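To make the link-function point concrete, here is a minimal sketch on simulated data (the data, variable names and effect sizes are my own, chosen only for illustration): lme4::glmer() fits a logistic mixed model, which nlme::lme() cannot, while both handle the Gaussian case.

    # simulated grouped data with a binary and a continuous outcome
    library(lme4)
    set.seed(1)
    d <- data.frame(g = factor(rep(1:30, each = 10)), x = rnorm(300))
    u <- rnorm(30)                                          # group-level random intercepts
    d$ybin  <- rbinom(300, 1, plogis(0.5 * d$x + u[d$g]))   # binary outcome
    d$ycont <- 0.5 * d$x + u[d$g] + rnorm(300)              # continuous outcome

    m_glmm <- glmer(ybin ~ x + (1 | g), data = d, family = binomial)   # lme4 only
    m_lmm  <- nlme::lme(ycont ~ x, random = ~ 1 | g, data = d)         # nlme (Gaussian)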
2,047
How to choose nlme or lme4 R library for mixed effects models?
As chl pointed out, the main difference is what kind of variance-covariance structure you can specify for the random effects. In lme4 you can specify:

- diagonal covariance structures (i.e., enforce mutually uncorrelated random effects via syntax like ~ (1 | group) + (0 + x1 | group) + (0 + x2 | group)),
- unstructured covariance matrices (i.e., all correlations are estimated, ~ (1 + x1 + x2 | group)), or
- partially diagonal, partially unstructured covariance (y ~ (1 + x1 | group) + (0 + x2 | group), where you would estimate a correlation between the random intercept and the random slope for x1, but no correlations between the random slope for x2 and the random intercept, or between the random slope for x2 and the random slope for x1).

nlme offers a much broader class of covariance structures for the random effects. My experience is that the flexibility of lme4 is sufficient for most applications, however. I'd also add a third difference in capabilities that may be more relevant for many longitudinal data situations: nlme lets you specify variance-covariance structures for the residuals, i.e. temporal or spatial autocorrelation (via the correlation argument, e.g. corAR1) and heteroskedasticity or covariate-dependent variability (via the weights argument, cf. ?varFunc), while lme4 only allows fixed prior weights for the observations. A fourth difference is that it can be difficult to get nlme to fit (partially) crossed random effects, while that's a non-issue in lme4. You'll probably be fine if you stick with lme4.
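A minimal sketch of the three random-effects specifications above, on simulated data (the data-generating values and variable names are mine and purely illustrative):

    library(lme4)
    set.seed(42)
    d <- data.frame(group = factor(rep(1:50, each = 20)),
                    x1 = rnorm(1000), x2 = rnorm(1000))
    b <- MASS::mvrnorm(50, mu = c(0, 0, 0), Sigma = diag(c(1, 0.5, 0.5)))  # random effects
    d$y <- with(d, b[group, 1] + (0.3 + b[group, 2]) * x1 +
                   (-0.2 + b[group, 3]) * x2 + rnorm(1000))

    # 1) diagonal: uncorrelated random intercept and slopes
    m_diag <- lmer(y ~ x1 + x2 + (1 | group) + (0 + x1 | group) + (0 + x2 | group), data = d)
    # 2) unstructured: all random-effect correlations estimated
    m_full <- lmer(y ~ x1 + x2 + (1 + x1 + x2 | group), data = d)
    # 3) partially diagonal, partially unstructured
    m_part <- lmer(y ~ x1 + x2 + (1 + x1 | group) + (0 + x2 | group), data = d)

    VarCorr(m_diag)   # compare the estimated covariance structures across the three fits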
2,048
How to choose nlme or lme4 R library for mixed effects models?
Others have summarized the differences very well. My impression is that lme4 is better suited for clustered data sets, especially when you need crossed random effects. For repeated-measures designs (including many longitudinal designs), however, nlme is the tool, since only nlme supports specifying a correlation structure for the residuals. You do this using the correlation argument with a corStruct object. It also allows you to model heteroscedasticity via the weights argument with a varFunc object.
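As a hedged illustration of those two nlme features, here is a sketch using the Orthodont data that ships with nlme (the particular model is my choice and may need tweaking for a real analysis):

    library(nlme)
    fit <- lme(distance ~ age * Sex, random = ~ 1 | Subject, data = Orthodont,
               correlation = corAR1(form = ~ 1 | Subject),  # AR(1) residual correlation within subject
               weights = varIdent(form = ~ 1 | Sex))        # different residual variance per Sex
    summary(fit)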
2,049
How to choose nlme or lme4 R library for mixed effects models?
There are actually a number of packages in R for fitting mixed effects models beyond lme4 and nlme. There's a nice wiki run by the R special interest group for mixed models, which has a very nice FAQ and a page comparing the different packages. As for my opinions on actually using lme4 and nlme: I found lme4 to be generally easier to use due to its rather direct extension of the basic R formula syntax. (If you need to work with generalized additive models, then the gamm4 package extends this syntax one further step and so you have a nice smooth learning curve.) As others have mentioned, lme4 can handle generalized models (other link functions and error distributions), while nlme's focus on the Gaussian case allows it to do some things that are very hard in the general case (specifying covariance structure and certain things dependent on degrees-of-freedom calculations, like p-values, the latter of which I encourage you to move away from!).
2,050
Mutual information versus correlation
Let's consider one fundamental concept of (linear) correlation, covariance (which is Pearson's correlation coefficient "un-standardized"). For two discrete random variables $X$ and $Y$ with probability mass functions $p(x)$, $p(y)$ and joint pmf $p(x,y)$ we have $$\operatorname{Cov}(X,Y) = E(XY) - E(X)E(Y) = \sum_{x,y}p(x,y)xy - \left(\sum_xp(x)x\right)\cdot \left(\sum_yp(y)y\right)$$ $$\Rightarrow \operatorname{Cov}(X,Y) = \sum_{x,y}\left[p(x,y)-p(x)p(y)\right]xy$$

The Mutual Information between the two is defined as $$I(X,Y) = E\left (\ln \frac{p(x,y)}{p(x)p(y)}\right)=\sum_{x,y}p(x,y)\left[\ln p(x,y)-\ln p(x)p(y)\right]$$

Compare the two: each contains a point-wise "measure" of "the distance of the two rv's from independence", as expressed by the distance of the joint pmf from the product of the marginal pmf's: $\operatorname{Cov}(X,Y)$ has it as a difference of levels, while $I(X,Y)$ has it as a difference of logarithms. And what do these measures do? In $\operatorname{Cov}(X,Y)$ they create a weighted sum of the product of the two random variables. In $I(X,Y)$ they create a weighted sum of their joint probabilities. So with $\operatorname{Cov}(X,Y)$ we look at what non-independence does to their product, while in $I(X,Y)$ we look at what non-independence does to their joint probability distribution. Conversely, $I(X,Y)$ is the average value of the logarithmic measure of distance from independence, while $\operatorname{Cov}(X,Y)$ is the weighted value of the levels-measure of distance from independence, weighted by the product of the two rv's.

So the two are not antagonistic; they are complementary, describing different aspects of the association between two random variables. One could comment that Mutual Information "is not concerned" with whether the association is linear or not, while Covariance may be zero even though the variables are still stochastically dependent. On the other hand, Covariance can be calculated directly from a data sample without needing to know the probability distributions involved (since it is an expression involving moments of the distribution), while Mutual Information requires knowledge of the distributions, whose estimation, if they are unknown, is a much more delicate and uncertain task than the estimation of Covariance.
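As a small numeric illustration of the two formulas (the toy joint pmf below is my own construction, not taken from the answer), both quantities can be computed directly from a discrete joint distribution; this particular pmf happens to have zero covariance but positive mutual information:

    x <- c(-1, 0, 1); y <- c(-1, 0, 1)
    p <- matrix(c(0.10, 0.05, 0.10,
                  0.05, 0.40, 0.05,
                  0.10, 0.05, 0.10), nrow = 3, byrow = TRUE)   # joint pmf p(x, y)
    px <- rowSums(p); py <- colSums(p)                         # marginal pmfs

    cov_xy <- sum(outer(x, y) * (p - outer(px, py)))           # sum [p(x,y) - p(x)p(y)] x y
    mi_xy  <- sum(p * (log(p) - log(outer(px, py))))           # sum p(x,y) log{p(x,y)/(p(x)p(y))}
    c(covariance = cov_xy, mutual_information = mi_xy)         # covariance 0, MI > 0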
2,051
Mutual information versus correlation
Here's an example. In these two plots the correlation coefficient is zero. But we can get high shared mutual information even when the correlation is zero. In the first, I see that if I have a high or low value of X then I'm likely to get a high value of Y. But if the value of X is moderate then I have a low value of Y. The first plot holds information about the mutual information shared by X and Y. In the second plot, X tells me nothing about Y.
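A sketch reproducing this idea with simulated data (the quadratic shape and the binning choices are mine; the original plots are not reproduced here): the correlation is close to zero, yet a crude plug-in estimate of the mutual information from a 2-D histogram is clearly positive.

    set.seed(7)
    x <- runif(10000, -1, 1)
    y <- x^2 + rnorm(10000, sd = 0.05)    # high or low x -> high y, moderate x -> low y
    cor(x, y)                             # close to 0

    # crude plug-in estimate of mutual information from a binned joint distribution
    tab <- table(cut(x, 20), cut(y, 20))
    p  <- tab / sum(tab)
    px <- rowSums(p); py <- colSums(p)
    mi <- sum(p * log(p / outer(px, py)), na.rm = TRUE)   # empty cells contribute 0
    mi                                    # clearly positive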
2,052
Mutual information versus correlation
Mutual information is a distance between two probability distributions. Correlation is a linear distance between two random variables. You can have a mutual information between any two probabilities defined for a set of symbols, while you cannot have a correlation between symbols that cannot naturally be mapped into an R^N space. On the other hand, the mutual information does not make assumptions about some properties of the variables... If you are working with variables that are smooth, correlation may tell you more about them, for instance if their relationship is monotonic. If you have some prior information, then you may be able to switch from one to the other; in medical records you can map the symbol "has genotype A" to 1 and "does not have genotype A" to 0 and see if this has some form of correlation with one sickness or another. Similarly, you can take a variable that is continuous (e.g. salary), convert it into discrete categories and compute the mutual information between those categories and another set of symbols.
2,053
Mutual information versus correlation
Although both of them are measures of the relationship between features, the MI is more general than the correlation coefficient (CC), since the CC can only take into account linear relationships, whereas the MI can also handle non-linear relationships.
2,054
Mutual information versus correlation
Mutual Information (MI) uses the concept of entropy to specify how much common information there is in two random variables $X$ and $Y$ with distribution functions $p_{x}(x)$ and $p_y(y)$. Considering this interpretation of MI: $$I(X;Y) = H(X) + H(Y) - H(X,Y)$$ we see that the last part captures the dependency between the variables. In the case of independence the MI is zero, and in the case of complete agreement between $X$ and $Y$ the MI is equal to the entropy of $X$ or $Y$. The covariance, though, measures only the distance of every data point $(x,y)$ from the averages $(\mu_X, \mu_Y)$. Therefore, Cov captures only one aspect of the dependence that MI measures. Another difference is the extra information that Cov delivers through its sign, i.e. the direction of the association. This type of knowledge cannot be extracted from the MI because of the log function.
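A quick check of the identity $I(X;Y) = H(X) + H(Y) - H(X,Y)$ on a toy joint distribution (the table below is my own example, not from the answer):

    H <- function(p) { p <- p[p > 0]; -sum(p * log(p)) }   # entropy in nats

    p_xy <- matrix(c(0.3, 0.1,
                     0.1, 0.5), nrow = 2, byrow = TRUE)    # joint pmf
    p_x <- rowSums(p_xy); p_y <- colSums(p_xy)

    mi_direct   <- sum(p_xy * log(p_xy / outer(p_x, p_y)))
    mi_identity <- H(p_x) + H(p_y) - H(p_xy)
    all.equal(mi_direct, mi_identity)                      # TRUE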
2,055
Mutual information versus correlation
Note that correlation (Pearson, Spearman, or Kendall) takes values in $[-1,1]$ while mutual information takes values in $[0,\infty)$. This makes a big difference: a correlation score is a stronger description of the association between the two RVs than the mutual information. On the other hand, although mutual information is a weaker description, it can capture a more general association between two RVs. A non-zero correlation score not only tells you that the two RVs are related but also in which direction they are related in the value space. This could be very useful in some situations. Consider the two RVs weight and height of a human. If I am told that they are positively correlated, then I can guess the height of a person based on his weight, i.e., a heavy person is likely to be tall (n.b. correlation is not equal to causation). However, mutual information only tells you whether there is some relationship between the two RVs, without specifying how exactly the two RVs are related in values. In the same example as above, if I am told that weight and height have a non-zero mutual information, then all I know is that weight is somehow related to height, but I don't know how exactly they are related. It could be negatively related or positively related. (Indeed, correlations of -1 and +1 both yield the same, maximal mutual information, so the two scenarios are indistinguishable based solely on the MI!) However, the advantage of MI is that it can capture more general (or subtle) associations. Pearson, Spearman, and Kendall are all trying to capture a sort of increasing or decreasing relationship between the RVs in the value space. But if the association between the two RVs is quadratic, such as the zero-correlation example shown earlier in this thread, then none of them would capture this relationship, and they give 0 as the correlation. The choice of MI versus a correlation coefficient really depends on what you want to capture. If you want to investigate whether there is any sort of association between the two RVs, then choose MI. If you want to find out whether the values of the two RVs are mutually indicative, and in which direction, then choose a correlation coefficient.
2,056
Calculating the parameters of a Beta distribution using the mean and variance
I set$$\mu=\frac{\alpha}{\alpha+\beta}$$and$$\sigma^2=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$$and solved for $\alpha$ and $\beta$. My results show that$$\alpha=\left(\frac{1-\mu}{\sigma^2}-\frac{1}{\mu}\right)\mu^2$$and$$\beta=\alpha\left(\frac{1}{\mu}-1\right)$$ I've written up some R code to estimate the parameters of the Beta distribution from a given mean, mu, and variance, var:

    estBetaParams <- function(mu, var) {
      alpha <- ((1 - mu) / var - 1 / mu) * mu ^ 2
      beta <- alpha * (1 / mu - 1)
      return(params = list(alpha = alpha, beta = beta))
    }

There's been some confusion around the bounds of $\mu$ and $\sigma^2$ for any given Beta distribution, so let's make that clear here. $\mu=\frac{\alpha}{\alpha+\beta}\in\left(0, 1\right)$ $\sigma^2=\frac{\alpha\beta}{\left(\alpha+\beta\right)^2\left(\alpha+\beta+1\right)}=\frac{\mu\left(1-\mu\right)}{\alpha+\beta+1}<\frac{\mu\left(1-\mu\right)}{1}=\mu\left(1-\mu\right)\in\left(0,0.25\right]$
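For example, as a quick sanity check of these formulas (values chosen arbitrarily):

    estBetaParams(0.2, 0.01)
    # $alpha
    # [1] 3
    # $beta
    # [1] 12

Plugging $\alpha=3$, $\beta=12$ back in gives $\mu = 3/15 = 0.2$ and $\sigma^2 = 36/(225 \cdot 16) = 0.01$, as required.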
2,057
Calculating the parameters of a Beta distribution using the mean and variance
Here's a generic way to solve these types of problems, using Maple instead of R. This works for other distributions as well:

    with(Statistics):
    eq1 := mu = Mean(BetaDistribution(alpha, beta)):
    eq2 := sigma^2 = Variance(BetaDistribution(alpha, beta)):
    solve([eq1, eq2], [alpha, beta]);

which leads to the solution $$ \begin{align*} \alpha &= - \frac{\mu (\sigma^2 + \mu^2 - \mu)}{\sigma^2} \\ \beta &= \frac{(\sigma^2 + \mu^2 - \mu) (\mu - 1)}{\sigma^2}. \end{align*} $$ This is equivalent to Max's solution.
2,058
Calculating the parameters of a Beta distribution using the mean and variance
In R, the beta distribution with parameters $\textbf{shape1} = a$ and $\textbf{shape2} = b$ has density $f(x) = \frac{\Gamma(a+b)}{\Gamma(a) \Gamma(b)} x^{a-1}(1-x)^{b-1}$, for $a > 0$, $b >0$, and $0 < x < 1$. You can compute it by

    dbeta(x, shape1=a, shape2=b)

In that parametrisation, the mean is $E(X) = \frac{a}{a+b}$ and the variance is $V(X) = \frac{ab}{(a + b)^2 (a + b + 1)}$. So, you can now follow Nick Sabbe's answer. Good work!

Edit: I find $a = \left( \frac{1 - \mu}{V} - \frac{1}{\mu} \right) \mu^2$ and $b = \left( \frac{1 - \mu}{V} - \frac{1}{\mu} \right) \mu (1 - \mu)$, where $\mu=E(X)$ and $V=V(X)$.
2,059
Calculating the parameters of a Beta distribution using the mean and variance
On Wikipedia, for example, you can find the following formulas for the mean and variance of a beta distribution given alpha and beta: $$ \mu=\frac{\alpha}{\alpha+\beta} $$ and $$ \sigma^2=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)} $$ Inverting these (substitute $\beta=\alpha(\frac{1}{\mu}-1)$ into the bottom equation) should give you the result you want (though it may take some work).
2,060
Calculating the parameters of a Beta distribution using the mean and variance
For a generalized Beta distribution defined on the interval $[a,b]$, you have the relations: $$\mu=\frac{a\beta+b\alpha}{\alpha+\beta},\quad\sigma^{2}=\frac{\alpha\beta\left(b-a\right)^{2}}{\left(\alpha+\beta\right)^{2}\left(1+\alpha+\beta\right)}$$ which can be inverted to give: $$\alpha=\lambda\frac{\mu-a}{b-a},\quad\beta=\lambda\frac{b-\mu}{b-a}$$ where $$\lambda=\frac{\left(\mu-a\right)\left(b-\mu\right)}{\sigma^{2}}-1$$
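A small R sketch of this inversion (the function and argument names are mine, chosen for illustration):

    estBetaParamsAB <- function(mu, var, a = 0, b = 1) {
      lambda <- (mu - a) * (b - mu) / var - 1
      list(alpha = lambda * (mu - a) / (b - a),
           beta  = lambda * (b - mu) / (b - a))
    }
    # with a = 0, b = 1 this reduces to the standard Beta case discussed above
    estBetaParamsAB(0.2, 0.01)   # alpha = 3, beta = 12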
2,061
Calculating the parameters of a Beta distribution using the mean and variance
Solve the $\mu$ equation for either $\alpha$ or $\beta$; solving for $\beta$, you get $$\beta=\frac{\alpha(1-\mu)}{\mu}$$ Then plug this into the second equation and solve for $\alpha$. So you get $$\sigma^2=\frac{\frac{\alpha^2(1-\mu)}{\mu}}{(\alpha+\frac{\alpha(1-\mu)}{\mu})^2(\alpha+\frac{\alpha(1-\mu)}{\mu}+1)}$$ which simplifies to $$\sigma^2=\frac{\frac{\alpha^2(1-\mu)}{\mu}}{(\frac{\alpha}{\mu})^2\frac{\alpha+\mu}{\mu}}$$ $$\sigma^2=\frac{(1-\mu)\mu^2}{\alpha+\mu}$$ Then finish solving for $\alpha$: $$\alpha=\frac{(1-\mu)\mu^2}{\sigma^2}-\mu=\left(\frac{1-\mu}{\sigma^2}-\frac{1}{\mu}\right)\mu^2,$$ which matches the formula given earlier in this thread.
2,062
Calculating the parameters of a Beta distribution using the mean and variance
I was looking for Python, but stumbled upon this. So this would be useful for others like me. Here is Python code to estimate the beta parameters (according to the equations given above):

    # estimate parameters of the beta distribution from the mean (mu)
    # and the standard deviation (sigma)
    def getAlphaBeta(mu, sigma):
        alpha = mu**2 * ((1 - mu) / sigma**2 - 1 / mu)
        beta = alpha * (1 / mu - 1)
        return {"alpha": alpha, "beta": beta}

    print(getAlphaBeta(0.5, 0.1))  # {'alpha': 12.0, 'beta': 12.0}

You can verify the parameters $\alpha$ and $\beta$ using the scipy.stats.beta distribution.
2,063
What is a complete list of the usual assumptions for linear regression?
The answer depends heavily on how you define complete and usual. Suppose we write the linear regression model in the following way:$ \newcommand{\x}{\mathbf{x}} \newcommand{\bet}{\boldsymbol\beta} \DeclareMathOperator{\E}{\mathbb{E}} \DeclareMathOperator{\Var}{Var} \DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\Tr}{Tr} $ $$y_i = \x_i'\bet + u_i$$ where $\x_i$ is the vector of predictor variables, $\bet$ is the parameter of interest, $y_i$ is the response variable, and $u_i$ are the disturbances. One of the possible estimates of $\bet$ is the least squares estimate: $$ \hat\bet = \textrm{argmin}_{\bet}\sum(y_i-\x_i'\bet)^2 = \left(\sum \x_i \x_i'\right)^{-1} \sum \x_i y_i .$$ Now practically all of the textbooks deal with the assumptions under which this estimate $\hat\bet$ has desirable properties, such as unbiasedness, consistency, efficiency, some distributional properties, etc. Each of these properties requires certain assumptions, which are not the same. So the better question would be to ask which assumptions are needed for the desired properties of the LS estimate.

The properties I mention above require some probability model for regression. And here we have the situation where different models are used in different applied fields. The simple case is to treat the $y_i$ as independent random variables, with the $\x_i$ being non-random. I do not like the word usual, but we can say that this is the usual case in most applied fields (as far as I know). Here is a list of some of the desirable properties of statistical estimates:

- The estimate exists.
- Unbiasedness: $\E\hat\bet=\bet$.
- Consistency: $\hat\bet \to \bet$ as $n\to\infty$ ($n$ here is the sample size).
- Efficiency: $\Var(\hat\bet)$ is smaller than $\Var(\tilde\bet)$ for alternative estimates $\tilde\bet$ of $\bet$.
- The ability to either approximate or calculate the distribution function of $\hat\bet$.

Existence

The existence property might seem weird, but it is very important. In the definition of $\hat\bet$ we invert the matrix $\sum \x_i \x_i'.$ It is not guaranteed that the inverse of this matrix exists for all possible variants of $\x_i$. So we immediately get our first assumption: the matrix $\sum \x_i \x_i'$ should be of full rank, i.e. invertible.

Unbiasedness

We have $$ \E\hat\bet = \left(\sum \x_i \x_i' \right)^{-1}\left(\sum \x_i \E y_i \right) = \bet, $$ if $$\E y_i = \x_i'\bet.$$ We may number this as the second assumption, but we might as well have stated it outright, since this is one of the natural ways to define a linear relationship. Note that to get unbiasedness we only require that $\E y_i = \x_i'\bet$ for all $i$, and that the $\x_i$ are constants. Independence is not required.

Consistency

To get the assumptions for consistency we need to state more clearly what we mean by $\to$. For sequences of random variables we have different modes of convergence: in probability, almost surely, in distribution, and in the $p$-th moment sense. Suppose we want convergence in probability. We can use either a law of large numbers, or directly use the multivariate Chebyshev inequality (employing the fact that $\E \hat\bet = \bet$): $$\Pr(\lVert \hat\bet - \bet \rVert >\varepsilon)\le \frac{\Tr(\Var(\hat\bet))}{\varepsilon^2}.$$ (This variant of the inequality comes directly from applying Markov's inequality to $\lVert \hat\bet - \bet\rVert^2$, noting that $\E \lVert \hat\bet - \bet\rVert^2 = \Tr \Var(\hat\bet)$.)
Since convergence in probability means that the left-hand side must vanish for any $\varepsilon>0$ as $n\to\infty$, we need that $\Var(\hat\bet)\to 0$ as $n\to\infty$. This is perfectly reasonable since with more data the precision with which we estimate $\bet$ should increase. We have that $$ \Var(\hat\bet) =\left( \sum \x_i \x_i' \right)^{-1} \left( \sum_i \sum_j \x_i \x_j' \Cov(y_i, y_j) \right) \left(\sum \x_i\x_i'\right)^{-1}.$$ Independence ensures that $\Cov(y_i, y_j) = 0$ for $i \ne j$, hence the expression simplifies to $$ \Var(\hat\bet) = \left( \sum \x_i \x_i' \right)^{-1} \left( \sum_i \x_i \x_i' \Var(y_i) \right) \left( \sum \x_i \x_i' \right)^{-1} .$$ Now assume $\Var(y_i) = \text{const}$; then $$ \Var(\hat\bet) = \left(\sum \x_i \x_i' \right)^{-1} \Var(y_i) .$$ Now if we additionally require that $\frac{1}{n} \sum \x_i \x_i'$ stays bounded away from singularity for large $n$ (so that $\left(\sum \x_i \x_i'\right)^{-1}$ shrinks at rate $1/n$), we immediately get $$\Var(\hat\bet) \to 0 \text{ as } n \to \infty.$$ So to get consistency we assumed that there is no autocorrelation ($\Cov(y_i, y_j) = 0$), that the variance $\Var(y_i)$ is constant, and that the $\x_i$ provide enough variation. The first assumption is satisfied if the $y_i$ come from independent samples.

Efficiency

The classic result is the Gauss-Markov theorem. The conditions for it are exactly the first two conditions for consistency and the condition for unbiasedness.

Distributional properties

If the $y_i$ are normal we immediately get that $\hat\bet$ is normal, since it is a linear combination of normal random variables. If we assume the previous assumptions of independence, uncorrelatedness and constant variance, we get that $$ \hat\bet \sim \mathcal{N}\left(\bet, \sigma^2\left(\sum \x_i \x_i' \right)^{-1} \right)$$ where $\Var(y_i)=\sigma^2$. If the $y_i$ are not normal, but independent, we can get the approximate distribution of $\hat\bet$ thanks to the central limit theorem. For this we need to assume that $$\lim_{n \to \infty} \frac{1}{n} \sum \x_i \x_i' = A$$ for some matrix $A$. Constant variance is not required for asymptotic normality if we assume instead that $$\lim_{n \to \infty} \frac{1}{n} \sum \x_i \x_i' \Var(y_i) = B.$$ Note that with constant variance of $y$, we have $B = \sigma^2 A$. The central limit theorem then gives us the following result: $$\sqrt{n}(\hat\bet - \bet) \to \mathcal{N}\left(0, A^{-1} B A^{-1} \right).$$

So from this we see that independence and constant variance for the $y_i$ and certain assumptions for the $\x_i$ give us a lot of useful properties of the LS estimate $\hat\bet$. The thing is that these assumptions can be relaxed. For example, we required that the $\x_i$ are not random variables. This assumption is not realistic in econometric applications. If we let the $\x_i$ be random, we can get similar results if we use conditional expectations and take the randomness of the $\x_i$ into account. The independence assumption can also be relaxed. We already demonstrated that sometimes only uncorrelatedness is needed. Even this can be relaxed further, and it is still possible to show that the LS estimate will be consistent and asymptotically normal. See for example White's book for more details.
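As a small numerical illustration of unbiasedness and consistency (a simulation sketch of my own, with arbitrary parameter values):

    set.seed(1)
    beta <- c(1, 2)                                  # true parameter
    one_fit <- function(n) {
      X <- cbind(1, rnorm(n))                        # design with intercept, treated as fixed
      y <- X %*% beta + rnorm(n)                     # E(y_i) = x_i' beta, constant variance
      drop(solve(t(X) %*% X, t(X) %*% y))            # the LS estimate (X'X)^{-1} X'y
    }
    est50   <- replicate(1000, one_fit(50))
    est5000 <- replicate(1000, one_fit(5000))
    rowMeans(est50); rowMeans(est5000)               # both close to (1, 2): unbiasedness
    apply(est50, 1, sd); apply(est5000, 1, sd)       # sampling sd shrinks with n: consistency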
What is a complete list of the usual assumptions for linear regression?
The answer depends heavily on how do you define complete and usual. Suppose we write linear regression model in the following way:$ \newcommand{\x}{\mathbf{x}} \newcommand{\bet}{\boldsymbol\beta} \Dec
What is a complete list of the usual assumptions for linear regression? The answer depends heavily on how do you define complete and usual. Suppose we write linear regression model in the following way:$ \newcommand{\x}{\mathbf{x}} \newcommand{\bet}{\boldsymbol\beta} \DeclareMathOperator{\E}{\mathbb{E}} \DeclareMathOperator{\Var}{Var} \DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\Tr}{Tr} $ $$y_i = \x_i'\bet + u_i$$ where $\mathbf{x}_i$ is the vector of predictor variables, $\beta$ is the parameter of interest, $y_i$ is the response variable, and $u_i$ are the disturbance. One of the possible estimates of $\beta$ is the least squares estimate: $$ \hat\bet = \textrm{argmin}_{\bet}\sum(y_i-\x_i\bet)^2 = \left(\sum \x_i \x_i'\right)^{-1} \sum \x_i y_i .$$ Now practically all of the textbooks deal with the assumptions when this estimate $\hat\bet$ has desirable properties, such as unbiasedness, consistency, efficiency, some distributional properties, etc. Each of these properties requires certain assumptions, which are not the same. So the better question would be to ask which assumptions are needed for wanted properties of the LS estimate. The properties I mention above require some probability model for regression. And here we have the situation where different models are used in different applied fields. The simple case is to treat $y_i$ as an independent random variables, with $\x_i$ being non-random. I do not like the word usual, but we can say that this is the usual case in most applied fields (as far as I know). Here is the list of some of the desirable properties of statistical estimates: The estimate exists. Unbiasedness: $E\hat\bet=\bet$. Consistency: $\hat\bet \to \bet$ as $n\to\infty$ ($n$ here is the size of a data sample). Efficiency: $\Var(\hat\bet)$ is smaller than $\Var(\tilde\bet)$ for alternative estimates $\tilde\bet$ of $\bet$. The ability to either approximate or calculate the distribution function of $\hat\bet$. Existence Existence property might seem weird, but it is very important. In the definition of $\hat\beta$ we invert the matrix $\sum \x_i \x_i'.$ It is not guaranteed that the inverse of this matrix exists for all possible variants of $\x_i$. So we immediately get our first assumption: Matrix $\sum \x_i \x_i'$ should be of full rank, i.e. invertible. Unbiasedness We have $$ \E\hat\bet = \left(\sum \x_i \x_i' \right)^{-1}\left(\sum \x_i \E y_i \right) = \bet, $$ if $$\E y_i = \x_i \bet.$$ We may number it the second assumption, but we may have stated it outright, since this is one of the natural ways to define linear relationship. Note that to get unbiasedness we only require that $\E y_i = \x_i \bet$ for all $i$, and $\x_i$ are constants. Independence property is not required. Consistency For getting the assumptions for consistency we need to state more clearly what do we mean by $\to$. For sequences of random variables we have different modes of convergence: in probability, almost surely, in distribution and $p$-th moment sense. Suppose we want to get the convergence in probability. We can use either law of large numbers, or directly use the multivariate Chebyshev inequality (employing the fact that $\E \hat\bet = \bet$): $$\Pr(\lVert \hat\bet - \bet \rVert >\varepsilon)\le \frac{\Tr(\Var(\hat\bet))}{\varepsilon^2}.$$ (This variant of the inequality comes directly from applying Markov's inequality to $\lVert \hat\bet - \bet\rVert^2$, noting that $\E \lVert \hat\bet - \bet\rVert^2 = \Tr \Var(\hat\bet)$.) 
Since convergence in probability means that the left hand term must vanish for any $\varepsilon>0$ as $n\to\infty$, we need that $\Var(\hat\bet)\to 0$ as $n\to\infty$. This is perfectly reasonable since with more data the precision with which we estimate $\bet$ should increase. We have that $$ \Var(\hat\bet) =\left( \sum \x_i \x_i' \right)^{-1} \left( \sum_i \sum_j \x_i \x_j' \Cov(y_i, y_j) \right) \left(\sum \mathbf{x}_i\mathbf{x}_i'\right)^{-1}.$$ Independence ensures that $\Cov(y_i, y_j) = 0$, hence the expression simplifies to $$ \Var(\hat\bet) = \left( \sum \x_i \x_i' \right)^{-1} \left( \sum_i \x_i \x_i' \Var(y_i) \right) \left( \sum \x_i \x_i' \right)^{-1} .$$ Now assume $\Var(y_i) = \text{const}$, then $$ \Var(\hat\beta) = \left(\sum \x_i \x_i' \right)^{-1} \Var(y_i) .$$ Now if we additionally require that $\frac{1}{n} \sum \x_i \x_i'$ is bounded for each $n$, we immediately get $$\Var(\bet) \to 0 \text{ as } n \to \infty.$$ So to get the consistency we assumed that there is no autocorrelation ($\Cov(y_i, y_j) = 0$), the variance $\Var(y_i)$ is constant, and the $\x_i$ do not grow too much. The first assumption is satisfied if $y_i$ comes from independent samples. Efficiency The classic result is the Gauss-Markov theorem. The conditions for it is exactly the first two conditions for consistency and the condition for unbiasedness. Distributional properties If $y_i$ are normal we immediately get that $\hat\bet$ is normal, since it is a linear combination of normal random variables. If we assume previous assumptions of independence, uncorrelatedness and constant variance we get that $$ \hat\bet \sim \mathcal{N}\left(\bet, \sigma^2\left(\sum \x_i \x_i' \right)^{-1} \right)$$ where $\Var(y_i)=\sigma^2$. If $y_i$ are not normal, but independent, we can get approximate distribution of $\hat\bet$ thanks to the central limit theorem. For this we need to assume that $$\lim_{n \to \infty} \frac{1}{n} \sum \x_i \x_i' \to A$$ for some matrix $A$. The constant variance for asymptotic normality is not required if we assume that $$\lim_{n \to \infty} \frac{1}{n} \sum \x_i \x_i' \Var(y_i) \to B.$$ Note that with constant variance of $y$, we have that $B = \sigma^2 A$. The central limit theorem then gives us the following result: $$\sqrt{n}(\hat\bet - \bet) \to \mathcal{N}\left(0, A^{-1} B A^{-1} \right).$$ So from this we see that independence and constant variance for $y_i$ and certain assumptions for $\mathbf{x}_i$ gives us a lot of useful properties for LS estimate $\hat\bet$. The thing is that these assumptions can be relaxed. For example we required that $\x_i$ are not random variables. This assumption is not feasible in econometric applications. If we let $\x_i$ be random, we can get similar results if use conditional expectations and take into account the randomness of $\x_i$. The independence assumption also can be relaxed. We already demonstrated that sometimes only uncorrelatedness is needed. Even this can be further relaxed and it is still possible to show that the LS estimate will be consistent and asymptoticaly normal. See for example White's book for more details.
2,064
What is a complete list of the usual assumptions for linear regression?
There are a number of good answers here. It occurs to me that there is one assumption that has not been stated however (at least not explicitly). Specifically, a regression model assumes that $\mathbf X$ (the values of your explanatory / predictor variables) is fixed and known, and that all of the uncertainty in the situation exists within the $Y$ variable. In addition, this uncertainty is assumed to be sampling error only. Here are two ways to think about this: If you are building an explanatory model (modeling experimental results), you know exactly what the levels of the independent variables are, because you manipulated / administered them. Moreover, you decided what those levels would be before you ever started gathering data. So you are conceptualizing all of the uncertainty in the relationship as existing within the response. On the other hand, if you are building a predictive model, it is true that the situation differs, but you still treat the predictors as though they were fixed and known, because, in the future, when you use the model to make a prediction about the likely value of $y$, you will have a vector, $\mathbf x$, and the model is designed to treat those values as though they are correct. That is, you will be conceiving of the uncertainty as being the unknown value of $y$. These assumptions can be seen in the equation for a prototypical regression model: $$ y_i = \beta_0 + \beta_1x_i + \varepsilon_i $$ A model with uncertainty (perhaps due to measurement error) in $x$ as well might have the same data generating process, but the model that's estimated would look like this: $$ y_i = \hat\beta_0 + \hat\beta_1(x_i + \eta_i) + \hat\varepsilon_i, $$ where $\eta$ represents random measurement error. (Situations like the latter have led to work on errors in variables models; a basic result is that if there is measurement error in $x$, the naive $\hat\beta_1$ would be attenuated--closer to 0 than its true value, and that if there is measurement error in $y$, statistical tests of the $\hat\beta$'s would be underpowered, but otherwise unbiased.) One practical consequence of the asymmetry intrinsic in the typical assumption is that regressing $y$ on $x$ is different from regressing $x$ on $y$. (See my answer here: What is the difference between doing linear regression on y with x versus x with y? for a more detailed discussion of this fact.)
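As a rough illustration of the attenuation result mentioned in this answer, here is a small hedged Python/numpy simulation (the slopes, noise levels and sample size are invented for the demo):

    import numpy as np

    rng = np.random.default_rng(1)
    n, beta1 = 100_000, 2.0
    x_true = rng.normal(0.0, 1.0, n)
    y = 1.0 + beta1 * x_true + rng.normal(0.0, 1.0, n)   # all "model" uncertainty lives in y

    def slope(x, y):
        c = np.cov(x, y)
        return c[0, 1] / c[0, 0]

    x_noisy = x_true + rng.normal(0.0, 1.0, n)           # now add measurement error to x
    print(slope(x_true, y))    # ~2.0: the usual estimator recovers beta1
    print(slope(x_noisy, y))   # ~1.0: attenuated toward 0, here by the factor 1/(1+1)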
2,065
What is a complete list of the usual assumptions for linear regression?
The following diagrams show which assumptions are required to get which implications in the finite and asymptotic scenarios. Linear Regression Assumptions: Key Points Generally the assumptions can be broken down into (a) what we need for our coefficient estimators to be right on average (unbiased) or right with infinite data (consistent), and (b) what we need for them to follow a known distribution, so that we can know how precisely we are measuring them. Unbiasedness / Consistency We want our coefficients to be right on average (unbiased) or at least right if we have a lot of data (consistent). If you want unbiased coefficients, the key assumption is strict exogeneity. This means that the average value of the error term in the regression is 0 given the covariates used in the regression. For consistent coefficients, the key assumption is “predetermined regressors”, which is implied by "there is no correlation between the error term and any of the covariates of the regression" if a constant is included in the regression. Strictly speaking, there is no way to confirm these assumptions are right without randomly assigning the covariate whose coefficient you want to get right. Without random assignment, you have to make a qualitative argument that the assumptions are met. However, if you make a scatter plot of residuals on the y axis and the predicted outcome value on the x axis and there is a systematic trend away from 0, that’s a sign this assumption (or the linearity assumption) is not met. Assumptions also matter for understanding the precision of coefficient estimates. Understanding the Precision of the Coefficients Homoskedasticity and normality are not needed for unbiased/consistent coefficients. You only need these additional assumptions if you want to get a sense of the precision with which you are measuring your coefficients with shortcut methods (e.g. F tests). However, you can always use heteroskedasticity robust standard errors, bootstrapping, or randomization inference to understand precision instead (descriptions and examples of these latter procedures can be found in my post here).
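Since this answer points to heteroskedasticity-robust standard errors as an alternative to the classical precision formulas, here is a small hedged Python/numpy sketch of the HC0 sandwich estimator next to the classical standard errors (the data-generating process is invented for the demo and is not from the original post):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 2_000
    x = rng.uniform(0, 10, n)
    X = np.column_stack([np.ones(n), x])
    y = 1.0 + 0.5 * x + rng.normal(0.0, 0.2 + 0.3 * x)   # error variance grows with x

    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat

    # classical (homoskedastic) standard errors
    s2 = resid @ resid / (n - 2)
    se_classic = np.sqrt(np.diag(s2 * XtX_inv))

    # HC0 heteroskedasticity-robust (sandwich) standard errors
    meat = X.T @ (X * resid[:, None] ** 2)               # sum of e_i^2 * x_i x_i'
    se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

    print(beta_hat)
    print(se_classic, se_robust)   # the two sets of standard errors differ under heteroskedasticity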
2,066
What is a complete list of the usual assumptions for linear regression?
The assumptions of the classical linear regression model include: linearity in the parameters and correct model specification; full rank of the X matrix; exogenous explanatory variables; independent and identically distributed error terms; normally distributed error terms in the population. Although the answers here already provide a good overview of the classical OLS assumptions, you can find a more comprehensive description of the assumptions of the classical linear regression model here: https://economictheoryblog.com/2015/04/01/ols_assumptions/ In addition, the article describes the consequences if one violates certain assumptions.
2,067
What is a complete list of the usual assumptions for linear regression?
Different assumptions can be used to justify OLS In some situations, an author tests the residuals for normality. But in other situations, the residuals aren't normal and the author uses OLS anyway! You'll see texts saying that homoscedasticity is an assumption. But you see researchers using OLS when homoscedasticity is violated. What gives?! An answer is that somewhat different sets of assumptions can be used to justify the use of ordinary least squares (OLS) estimation. OLS is a tool like a hammer: you can use a hammer on nails but you can also use it on pegs, to break apart ice, etc... Two broad categories of assumptions are those that apply to small samples and those that rely on large samples so that the central limit theorem can be applied. 1. Small sample assumptions Small sample assumptions as discussed in Hayashi (2000) are: Linearity Strict exogeneity No multicollinearity Spherical errors (homoscedasticity) Under (1)-(4), the Gauss-Markov theorem applies, and the ordinary least squares estimator is the best linear unbiased estimator. Normality of error terms Further assuming normal error terms allows hypothesis testing. If the error terms are conditionally normal, the distribution of the OLS estimator is also conditionally normal. Another noteworthy point is that with normality, the OLS estimator is also the maximum likelihood estimator. 2. Large sample assumptions These assumptions can be modified/relaxed if we have a large enough sample so that we can lean on the law of large numbers (for consistency of the OLS estimator) and the central limit theorem (so that the sampling distribution of the OLS estimator converges to the normal distribution and we can do hypothesis testing, talk about p-values etc...). Hayashi is a macroeconomics guy and his large sample assumptions are formulated with the time series context in mind: linearity ergodic stationarity predetermined regressors: the regressors are orthogonal to their contemporaneous error terms. $\operatorname{E}[\mathbf{x}_i\mathbf{x}_i']$ is full rank $\mathbf{x}_i \epsilon_i$ is a martingale difference sequence with finite second moments. Finite 4th moments of regressors You may encounter stronger versions of these assumptions, for example, that error terms are independent. Proper large sample assumptions get you to a sampling distribution of the OLS estimator that is asymptotically normal. References Hayashi, Fumio, 2000, Econometrics
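To illustrate why the large-sample route works even without normal errors, here is a hedged Python/numpy simulation (the design, the error law and the sample sizes are arbitrary demo choices, not from Hayashi): the sampling distribution of the OLS slope comes out close to normal although the errors are strongly skewed.

    import numpy as np

    rng = np.random.default_rng(3)
    n, reps = 200, 5_000
    x = rng.uniform(0, 1, n)                      # regressor, held fixed across replications
    X = np.column_stack([np.ones(n), x])
    slopes = np.empty(reps)
    for r in range(reps):
        # heavily skewed, mean-zero errors: normality is clearly violated
        y = 1.0 + 2.0 * x + (rng.exponential(1.0, n) - 1.0)
        slopes[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]

    z = (slopes - slopes.mean()) / slopes.std()
    print(slopes.mean())       # close to the true slope 2.0
    print((z ** 3).mean())     # skewness of the sampling distribution: close to 0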
2,068
What is a complete list of the usual assumptions for linear regression?
It's all about what you want to do with your model. Imagine your errors were positively skewed/non-normal: if you wanted to make a prediction interval, you could do better than using the t-distribution. If your variance is smaller at smaller predicted values, a constant-variance prediction interval would again be too wide there (and too narrow where the variance is larger). It's better to understand why the assumptions are there.
2,069
What is a complete list of the usual assumptions for linear regression?
The least squares regression coefficient provides a way to summarize the first-order trend in any kind of data. @mpiktas's answer is a thorough treatment of the conditions under which least squares is increasingly optimal. I'd like to go the other way and show the most general case in which least squares works. Let's see the most general formulation of the least-squares equation: $$E[Y|X] = \alpha + \beta X$$ It's just a linear model for the conditional mean of the response. Note that I've omitted the error term. If you'd like to summarize the uncertainty of $\beta$, then you must appeal to the central limit theorem. The most general class of least squares estimators converges to normal when the Lindeberg condition is met: boiled down, the Lindeberg condition for least squares requires that the ratio of the largest squared residual to the sum of squared residuals must go to 0 as $n \rightarrow \infty$. If your design will keep sampling larger and larger residuals, then the experiment is "dead in the water". When the Lindeberg condition is met, the regression parameter $\beta$ is well defined, and the estimator $\hat{\beta}$ is an unbiased estimator that has a known approximating distribution. More efficient estimators may exist. In other cases of heteroscedasticity, or correlated data, usually a weighted estimator is more efficient. That's why I would never advocate using the naïve methods when better ones are available. But they often are not!
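A hedged Python/numpy sketch of the "boiled down" ratio described above; this is only a heuristic check of that ratio on simulated data, not the formal Lindeberg condition, and the data-generating process is invented for the demo:

    import numpy as np

    rng = np.random.default_rng(4)
    for n in (100, 1_000, 10_000):
        x = rng.uniform(0, 1, n)
        y = 2.0 + 3.0 * x + rng.standard_t(df=5, size=n)   # heavy-ish tails, finite variance
        X = np.column_stack([np.ones(n), x])
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta_hat
        # largest squared residual relative to the total sum of squared residuals
        print(n, (r ** 2).max() / (r ** 2).sum())          # the ratio shrinks as n grows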
2,070
What is a complete list of the usual assumptions for linear regression?
There is no such thing as a single list of assumptions; there are at least two: one for a fixed and one for a random design matrix. Plus you may want to look at the assumptions for time series regressions (see p. 13). The case where the design matrix $X$ is fixed could be the most common one, and its assumptions are often expressed as the Gauss-Markov theorem. The fixed design means that you truly control the regressors. For instance, you conduct an experiment and can set parameters such as temperature, pressure etc. See also p. 13 here. Unfortunately, in social sciences such as economics you can rarely control the parameters of the experiment. Usually, you observe what happens in the economy, record the environment metrics, then regress on them. It turns out that this is a very different and more difficult situation, called a random design. In this case the Gauss-Markov theorem is modified; see also p. 12 here. You can see how the conditions are now expressed in terms of conditional probabilities, which is not an innocuous change. In econometrics the assumptions have names: linearity, strict exogeneity, no multicollinearity, and spherical error variance (which includes homoscedasticity and no correlation). Notice that I never mentioned normality. It's not a standard assumption. It's often used in intro regression courses because it makes some derivations easier, but it's not required for regression to work and have nice properties.
2,071
What is a complete list of the usual assumptions for linear regression?
The assumption of linearity is that the model is linear in the parameters. It is fine to have a regression model with quadratic or higher order effects as long as the power function of the independent variable is part of a linear additive model. If the model does not contain higher order terms when it should, then the lack of fit will be evident in the plot of the residuals. However, standard regression models do not incorporate models in which the independent variable is raised to the power of a parameter (although there are other approaches that can be used to evaluate such models). Such models contain non-linear parameters.
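To make the "linear in the parameters" point concrete, here is a small hedged Python/numpy sketch (the coefficients and noise level are invented for the demo): a quadratic effect is handled by ordinary linear least squares simply by adding an $x^2$ column to the design matrix.

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.uniform(-2, 2, 200)
    y = 1.0 + 0.5 * x - 2.0 * x ** 2 + rng.normal(0, 0.3, 200)

    # still a *linear* model: the design matrix just gains an x**2 column
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(coef)   # close to [1.0, 0.5, -2.0]

    # by contrast, a model like y = a * x**b, with the parameter b in the exponent,
    # is not linear in its parameters and falls outside standard linear regression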
2,072
What is a complete list of the usual assumptions for linear regression?
The following are the assumptions of linear regression analysis. Correct specification: the linear functional form is correctly specified. Strict exogeneity: the errors in the regression have conditional mean zero. No multicollinearity: the regressors in $X$ must all be linearly independent. Homoscedasticity: the error term has the same variance in each observation. No autocorrelation: the errors are uncorrelated between observations. Normality: it is sometimes additionally assumed that the errors have a normal distribution conditional on the regressors. i.i.d. observations: $(x_i, y_i)$ is independent from, and has the same distribution as, $(x_j, y_j)$ for all $i\neq j$. For more information, see the Wikipedia article on linear regression.
2,073
What is the difference between Cross-entropy and KL divergence?
You will need some conditions to claim the equivalence between minimizing cross entropy and minimizing KL divergence. I will put your question in the context of classification problems using cross entropy as the loss function. Let us first recall that entropy is used to measure the uncertainty of a system, which is defined as \begin{equation} S(v)=-\sum_ip(v_i)\log p(v_i)\label{eq:entropy}, \end{equation} for $p(v_i)$ the probabilities of the different states $v_i$ of the system. From an information theory point of view, $S(v)$ is the amount of information needed to remove the uncertainty. For instance, event $I$, I will die within 200 years, is almost certain (the almost is there because we may solve the aging problem), therefore it has low uncertainty: only the information that the aging problem cannot be solved is needed to make it certain. However, event $II$, I will die within 50 years, is more uncertain than event $I$, thus it needs more information to remove the uncertainty. Here entropy can be used to quantify the uncertainty of the distribution When will I die?, which can be regarded as the expectation of the uncertainties of individual events like $I$ and $II$. Now look at the definition of KL divergence between distributions A and B \begin{equation} D_{KL}(A\parallel B) = \sum_i\left[p_A(v_i)\log p_A(v_i) - p_A(v_i)\log p_B(v_i)\right]\label{eq:kld}, \end{equation} where the first part of the sum is the negative entropy of distribution A, and the second part (including its minus sign) is the cross entropy between A and B, defined below. So $D_{KL}$ describes how different B is from A from the perspective of A. It's worth noting that $A$ usually stands for the data, i.e. the measured distribution, and $B$ is the theoretical or hypothetical distribution. That means, you always start from what you observed. To relate cross entropy to entropy and KL divergence, we formalize the cross entropy in terms of distributions $A$ and $B$ as \begin{equation} H(A, B) = -\sum_ip_A(v_i)\log p_B(v_i)\label{eq:crossentropy}. \end{equation} From the definitions, we can easily see \begin{equation} H(A, B) = D_{KL}(A\parallel B)+S_A\label{eq:entropyrelation}. \end{equation} If $S_A$ is a constant, then minimizing $H(A, B)$ is equivalent to minimizing $D_{KL}(A\parallel B)$. A natural further question is how the entropy can be a constant. In a machine learning task, we start with a dataset (denoted as $P(\mathcal D)$) which represents the problem to be solved, and the purpose of learning is to make the model's estimated distribution (denoted as $P(model)$) as close as possible to the true distribution of the problem (denoted as $P(truth)$). $P(truth)$ is unknown and represented by $P(\mathcal D)$. Therefore in an ideal world, we expect \begin{equation} P(model)\approx P(\mathcal D) \approx P(truth) \end{equation} and minimize $D_{KL}(P(\mathcal D)\parallel P(model))$. And luckily, in practice $\mathcal D$ is given, which means its entropy $S(\mathcal D)$ is fixed as a constant.
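A quick numerical check of the identity $H(A,B)=D_{KL}(A\parallel B)+S_A$ above, written as a hedged Python/numpy sketch (the two three-state distributions are made up for the demo):

    import numpy as np

    A = np.array([0.1, 0.2, 0.7])          # "data" distribution
    B = np.array([0.3, 0.3, 0.4])          # model distribution

    entropy_A = -np.sum(A * np.log(A))
    cross_ent = -np.sum(A * np.log(B))
    kl_A_B    =  np.sum(A * (np.log(A) - np.log(B)))

    print(cross_ent, kl_A_B + entropy_A)   # identical: H(A,B) = D_KL(A||B) + S_A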
2,074
What is the difference between Cross-entropy and KL divergence?
I suppose it is because models usually work with samples packed in mini-batches. For KL divergence and cross-entropy, the relation can be written as $$H(p, q) = D_{KL}(p\parallel q)+H(p) = -\sum_i{p_i\log(q_i)}$$ so we have $$D_{KL}(p\parallel q) = H(p, q) - H(p).$$ From the equation, we can see that KL divergence decomposes into the cross-entropy of p and q (the first part) and the global entropy of the ground truth p (the second part). In many machine learning projects, mini-batches are used to expedite training, and the $p'$ of a mini-batch may differ from the global $p$. In such a case, cross-entropy is relatively more robust in practice, while KL divergence needs a more stable $H(p)$ to finish its job.
2,075
What is the difference between Cross-entropy and KL divergence?
This is how I think about it: $$ D_{KL}(p(y_i | x_i) \:||\: q(y_i | x_i, \theta)) = H(p(y_i | x_i), q(y_i | x_i, \theta)) - H(p(y_i | x_i)) \tag{1}\label{eq:kl} $$ where $p$ and $q$ are two probability distributions. In machine learning, we typically know $p$, which is the distribution of the target. For example, in a binary classification problem, $\mathcal{Y} = \{0, 1\}$, so if $y_i = 1$, $p(y_i = 1 | x) = 1$ and $p(y_i = 0 | x) = 0$, and vice versa. Given each $y_i \: \forall \: i = 1, 2, \ldots, N$, where $N$ is the total number of points in the dataset, we typically want to minimize the KL divergence $D_{KL}(p,q)$ between the distribution of the target $p(y_i | x)$ and our predicted distribution $q(y_i | x, \theta)$, averaged over all $i$. (We do so by tuning our model parameters $\theta$. Thus, for each training example, the model is spitting out a distribution over the class labels $0$ and $1$.) For each example, since the target is fixed, its distribution never changes. Thus, $H(p(y_i | x_i))$ is constant for each $i$, regardless of what our current model parameters $\theta$ are. Thus, the minimizer of $D_{KL}(p,q)$ is equal to the minimizer of $H(p, q)$. If you had a situation where $p$ and $q$ were both variable (say, in which $x_1\sim p$ and $x_2\sim q$ were two latent variables) and wanted to match the two distributions, then you would have to choose between minimizing $D_{KL}$ and minimizing $H(p, q)$. This is because minimizing $D_{KL}$ implies maximizing $H(p)$ while minimizing $H(p, q)$ implies minimizing $H(p)$. To see the latter, we can solve equation (\ref{eq:kl}) for $H(p,q)$: $$ H(p,q) = D_{KL}(p,q) + H(p) \tag{2}\label{eq:hpq} $$ The former would yield a broad distribution for $p$ while the latter would yield one that is concentrated in one or a few modes. Note that it is your choice as an ML practitioner whether you want to minimize $D_{KL}(p, q)$ or $D_{KL}(q, p)$. A small discussion of this is given in the context of variational inference (VI) below. In VI, you must choose between minimizing $D_{KL}(p,q)$ and $D_{KL}(q,p)$, which are not equal since KL divergence is not symmetric. If we once again treat $p$ as known, then minimizing the forward divergence $D_{KL}(p, q)$ tends to produce a $q$ that is wide and covers the whole support of $p$ (it is mass-covering, because $q$ is penalized heavily wherever $p$ has mass and $q$ does not), while minimizing the reverse divergence $D_{KL}(q, p)$ (the form typically used in VI) tends to produce a $q$ that is sharp and concentrated on one or a few modes of $p$ (it is zero-forcing, or mode-seeking, because $q$ is penalized heavily wherever it puts mass and $p$ does not).
2,076
What is the difference between Cross-entropy and KL divergence?
@zewen's answer can be misleading, as he claims that in mini-batch training CE can be more robust than KL. In most standard mini-batch training we use a gradient-based approach, and the gradient of $H(p)$ with respect to the model parameters (on which only $q$ depends) is zero. So in these cases CE and KL are identical as loss functions. I actually wanted to add a comment below @zewen's answer, but I can't because I do not have enough reputation...
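As a hedged numerical illustration of this point, the following Python/numpy sketch compares finite-difference gradients of the cross-entropy and KL losses with respect to the logits of a softmax classifier (the label, the logits and the softmax parameterization are invented for the demo):

    import numpy as np

    p = np.array([0.0, 1.0, 0.0])        # fixed target distribution (one-hot label)
    z = np.array([0.2, -0.5, 1.3])       # model logits, i.e. the parameters being trained

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    def cross_entropy(logits):
        return -np.sum(p * np.log(softmax(logits)))

    def kl(logits):
        q = softmax(logits)
        safe_p = np.where(p > 0, p, 1.0)                 # treat 0*log(0/q) as 0
        return np.sum(np.where(p > 0, p * np.log(safe_p / q), 0.0))

    eps = 1e-6
    g_ce = np.array([(cross_entropy(z + eps * e) - cross_entropy(z - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
    g_kl = np.array([(kl(z + eps * e) - kl(z - eps * e)) / (2 * eps) for e in np.eye(3)])
    print(g_ce)
    print(g_kl)   # same numbers: H(p) does not depend on the logits, so the gradients match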
2,077
What is the difference between Cross-entropy and KL divergence?
Some answers are already provided, but I would like to point out, regarding the phrase in the question itself, measure the distance between two probability distributions, that neither cross-entropy nor KL divergence measures a distance between two distributions; instead they measure the difference of two distributions [1]. It is not a distance because of the asymmetry, i.e. $\textrm{CE}(P,Q) \ne \textrm{CE}(Q,P)$ and $\textrm{KL}(P,Q) \ne \textrm{KL}(Q,P).$ Reference: [1] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, vol. 1 (MIT Press, Cambridge, 2016).
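A hedged Python/numpy sketch of the asymmetry (the two distributions are made up for the demo):

    import numpy as np

    P = np.array([0.8, 0.1, 0.1])
    Q = np.array([0.4, 0.4, 0.2])

    def kl(a, b):
        return np.sum(a * np.log(a / b))

    def cross_entropy(a, b):
        return -np.sum(a * np.log(b))

    print(kl(P, Q), kl(Q, P))                        # two different numbers
    print(cross_entropy(P, Q), cross_entropy(Q, P))  # also two different numbers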
2,078
What is the difference between Cross-entropy and KL divergence?
Minimizing an importance sampling estimate of the KL divergence is equivalent to minimizing the cross entropy loss of these importance samples.
2,079
How to 'sum' a standard deviation?
Short answer: You average the variances; then you can take the square root to get the average standard deviation.

Example

    Month       MWh    StdDev  Variance
    ==========  =====  ======  ========
    January     927    333     110889
    February    1234   250     62500
    March       1032   301     90601
    April       876    204     41616
    May         865    165     27225
    June        750    263     69169
    July        780    280     78400
    August      690    98      9604
    September   730    76      5776
    October     821    240     57600
    November    803    178     31684
    December    850    250     62500
    ==========  =====  ======  ========
    Total       10358  805     647564
    ÷12         863    232     53964

And then the average standard deviation is sqrt(53,964) = 232. From Sum of normally distributed random variables: If $X$ and $Y$ are independent random variables that are normally distributed (and therefore also jointly so), then their sum is also normally distributed ... the sum of two independent normally distributed random variables is normal, with its mean being the sum of the two means, and its variance being the sum of the two variances. And from Wolfram Alpha's Normal Sum Distribution: Amazingly, the distribution of a sum of two normally distributed independent variates $X$ and $Y$ with means and variances $(\mu_X,\sigma_X^2)$ and $(\mu_Y,\sigma_Y^2)$, respectively, is another normal distribution $$ P_{X+Y}(u) = \frac{1}{\sqrt{2\pi (\sigma_X^2 + \sigma_Y^2)}} e^{-[u-(\mu_X+\mu_Y)]^2/[2(\sigma_X^2 + \sigma_Y^2)]} $$ which has mean $$\mu_{X+Y} = \mu_X+\mu_Y$$ and variance $$ \sigma_{X+Y}^2 = \sigma_X^2 + \sigma_Y^2.$$ For your data: sum: 10,358 MWh; variance: 647,564; standard deviation: 804.71 ( sqrt(647564) ). So to answer your question: How to 'sum' a standard deviation? You sum them quadratically: s = sqrt(s1^2 + s2^2 + ... + s12^2). Conceptually you sum the variances, then take the square root to get the standard deviation. Because I was curious, I wanted to know the average monthly mean power, and its standard deviation. Through induction, we need 12 normal distributions which: sum to a mean of 10,358; sum to a variance of 647,564. That would be 12 average monthly distributions with: mean of 10,358/12 = 863.16; variance of 647,564/12 = 53,963.6; standard deviation of sqrt(53,963.6) = 232.3. We can check our monthly average distributions by adding them up 12 times, to see that they equal the yearly distribution: Mean: 863.16*12 = 10,358 (correct). Variance: 53,963.6*12 = 647,564 (correct). Edit: I moved the short, to-the-point answer up top, because I needed to do this again today but wanted to double-check that I average the variances.
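A hedged Python/numpy cross-check of the quadrature rule above, using the monthly numbers from the table and assuming, as the answer does, that the months are independent and normally distributed:

    import numpy as np

    rng = np.random.default_rng(6)
    means = np.array([927, 1234, 1032, 876, 865, 750, 780, 690, 730, 821, 803, 850], float)
    sds   = np.array([333, 250, 301, 204, 165, 263, 280, 98, 76, 240, 178, 250], float)

    # simulate many "years": each month drawn independently, then summed
    years = rng.normal(means, sds, size=(200_000, 12)).sum(axis=1)

    print(years.mean(), means.sum())               # both ~10,358
    print(years.std(), np.sqrt((sds ** 2).sum()))  # both ~804.7, the quadrature sum of the SDs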
2,080
How to 'sum' a standard deviation?
This is an old question but the accepted answer is incorrect or at least incomplete. The user wants to calculate the standard deviation over 12-month data where the mean and standard deviation are already calculated over each month. Assuming that the number of samples in each month is the same, it is possible to calculate the sample mean and variance over the year from each month's data. For simplicity assume that we have two sets of data: $X=\{x_1,\ldots,x_N\}$ $Y=\{y_1,\ldots,y_N\}$ with known values of sample mean and sample variance, $\mu_x$, $\mu_y$, $\sigma^2_x$, $\sigma^2_y$. Now we want to calculate the same estimates for $Z=\{x_1,\ldots,x_N, y_1,\ldots,y_N\}$. Consider that $\mu_x$, $\sigma^2_x$ are calculated as: $\mu_x = \frac{\sum^N_{i=1} x_i}{N}$ $\sigma^2_x = \frac{\sum^N_{i=1} x^2_i}{N}-\mu^2_x$ To estimate the mean and variance over the total set we need to calculate: $\mu_z = \frac{\sum^N_{i=1} x_i +\sum^N_{i=1} y_i }{2N}= (\mu_x+\mu_y)/2$ which is given in the accepted answer. For the variance, however, the story is different: $\sigma^2_z = \frac{\sum^N_{i=1} x^2_i +\sum^N_{i=1} y^2_i }{2N}-\mu^2_z$ $\sigma^2_z = \frac{1 }{2}(\frac{\sum^N_{i=1} x^2_i}{N}-\mu^2_x + \frac{\sum^N_{i=1} y^2_i}{N}-\mu^2_y )+\frac{1 }{2}(\mu^2_x+\mu^2_y) -(\frac{\mu_x+\mu_y}{2})^2$ $\sigma^2_z = \frac{1 }{2}(\sigma^2_x+\sigma^2_y )+(\frac{\mu_x-\mu_y}{2})^2$ So if you have the variance over each subset and you want the variance over the whole set, then you can average the variances of each subset if they all have the same mean. Otherwise, you need to add the variance of the subset means. As an example, assume that over the first half of the year we produce exactly 1000 MWh per day and in the second half we produce 2000 MWh per day. Then the means of energy production in the first and second halves are 1000 and 2000, and the variance within each half year is 0. Now we want to calculate the variance of energy production over the whole year. If we average the two variances we arrive at zero, which is not correct since the energy per day over the whole year is not constant. Hence we need to add the variance of the subset means. This has a close connection to the law of total variance: $$\operatorname{Var}(Y) = \operatorname{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\operatorname{E}[Y \mid X])$$ To use the above theorem in this case, we can interpret the conditioning variable $X$ as the indicator of the group that $Y$ belongs to. In the context of the original question, $X$ is the random variable indicating the month of the year and $Y$ is energy production per day.
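A small hedged Python/numpy check of the two-subset formula derived above (the group means, standard deviations and the common size $N$ are invented for the demo; the variances are the population-style ones used in this answer):

    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.normal(5.0, 2.0, 1_000)     # two subsets of equal size N
    y = rng.normal(9.0, 3.0, 1_000)

    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()     # population-style variances, as in the answer

    mu_z  = (mu_x + mu_y) / 2
    var_z = (var_x + var_y) / 2 + ((mu_x - mu_y) / 2) ** 2

    z = np.concatenate([x, y])
    print(mu_z, z.mean())               # the pooled mean matches
    print(var_z, z.var())               # the formula matches the variance of the pooled data
    print((var_x + var_y) / 2)          # simply averaging the variances underestimates it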
2,081
How to 'sum' a standard deviation?
TL;DR

Given several days, where for each day we have its average, sample StdDev and number of samples, denoted as

$$ \mu_d,\ \sigma_d,\ N_d $$

we would like to compute the average and sample StdDev across all days.

The average is simply a weighted average:

$$ \mu = \frac{\sum{\mu_dN_d}}{\sum{N_d}} = \frac{\sum{\mu_dN_d}}{N} $$

The sample StdDev is this thing:

$$ \sigma=\sqrt{\frac{\sum_{d}{(\sigma_d^2(N_d-1)+N_d(\mu-\mu_d)^2})}{N-1}} $$

where subscript $d$ denotes a day for which we collected the average, sample StdDev and number of samples.

Details

We had a similar problem, in which a process computes a daily average and sample StdDev and saves them alongside the number of daily samples. Using this input we had to compute a weekly/monthly average and StdDev. The number of samples per day was not constant in our case.

Denote the average, sample StdDev and number of samples of the entire set as

$$ \mu,\ \sigma,\ N $$

and for day $d$ denote the average, sample StdDev and number of samples as

$$ \mu_d,\ \sigma_d,\ N_d $$

Computing the entire set's average is simply a weighted average of the days' averages in question:

$$ \mu = \frac{\sum{\mu_dN_d}}{\sum{N_d}} = \frac{\sum{\mu_dN_d}}{N} $$

But things are much more involved for the sample StdDev. For a day's sample StdDev we have:

$$ \sigma_d=\sqrt{\frac{\sum_{N_d}(x_j-\mu_d)^2}{N_d-1}} $$

First a bit of clean-up:

$$ \sigma_d^2(N_d-1)=\sum_{N_d}(x_j-\mu_d)^2 $$

Let's look at the right-hand side of the equation above. If we can get from this sum to the following sum for each day:

$$ \sum_{N_d}{(x_j-\mu)^2} $$

then summing over the days will give us what we are looking for, since the days are disjoint and cover the entire set:

$$ \sum_{d}{\sum_{N_d}{(x_j-\mu)^2}} = \sum_{N}{(x_j-\mu)^2} $$

The insight for getting from the daily StdDev to the entire set's StdDev is to notice that while we don't have the daily samples, we do have their sum through the daily average. Given this insight, let's work on the right-hand side of the equation above:

$$ \sum_{N_d}(x_j-\mu_d)^2=\sum_{N_d}{(x_j^2-2x_j\mu_d+\mu_d^2)}=\\ =\sum_{N_d}{(x_j^2-2x_j\mu_d+\mu_d^2)}+(\sum_{N_d}{\mu^2}-\sum_{N_d}{\mu^2})+(2\sum_{N_d}{x_j(\mu-\mu_d)}-2\sum_{N_d}{x_j(\mu-\mu_d)}) $$

At this point we did nothing but add and subtract terms that cancel out, keeping the equation the same. Since all summations run over the same $N_d$ terms, let's regroup them for fun and profit:

$$ \require{cancel} =\sum_{N_d}{(x_j^2-2x_j(\cancel{\mu_d}+\mu-\cancel{\mu_d})+\mu^2)}+\sum_{N_d}{\mu_d^2}-\sum_{N_d}{\mu^2}+2\sum_{N_d}{x_j(\mu-\mu_d)} $$

The summations are over $j$, so terms that do not depend on $j$ can simply be multiplied by $N_d$:

$$ =\sum_{N_d}{(x_j^2-2x_j\mu+\mu^2)}+N_d\mu_d^2-N_d\mu^2+2\sum_{N_d}{x_j(\mu-\mu_d)} $$

And we are getting close:

$$ =\sum_{N_d}{(x_j-\mu)^2}+N_d\mu_d^2-N_d\mu^2+2\sum_{N_d}{x_j(\mu-\mu_d)} $$

Now let's handle the rightmost term: we can't use the $x_j$ directly, but we can use their sum, since we have that day's average. Simply multiply and divide by $N_d$ to get the average:

$$ =\sum_{N_d}{(x_j-\mu)^2}+N_d\mu_d^2-N_d\mu^2+2(\mu-\mu_d){N_d}(\frac{1}{N_d}\sum_{N_d}{x_j})\\ =\sum_{N_d}{(x_j-\mu)^2}+N_d\mu_d^2-N_d\mu^2+2(\mu-\mu_d){N_d}\mu_d $$

At this point we have the summation we need to compute the entire set's sample StdDev, and all the other terms are quantities we know, namely each day's statistics and number of samples.
Let's plug this back into the clean-up step above:

$$ \sigma_d^2(N_d-1)=\sum_{N_d}{(x_j-\mu)^2}+N_d\mu_d^2-N_d\mu^2+2(\mu-\mu_d){N_d}\mu_d\\ \leftrightarrow\ \sigma_d^2(N_d-1)-N_d\mu_d^2+N_d\mu^2-2N_d\mu_d(\mu-\mu_d)=\sum_{N_d}{(x_j-\mu)^2}\\ \leftrightarrow\ \sigma_d^2(N_d-1)+N_d(\mu-\mu_d)^2=\sum_{N_d}{(x_j-\mu)^2} $$

We are now ready to compute the set's sample StdDev:

$$ \sigma=\sqrt{\frac{\sum_{N}(x_j-\mu)^2}{N-1}}\\ =\sqrt{\frac{\sum_{d}{\sum_{N_d}(x_j-\mu)^2}}{N-1}}\\ =\sqrt{\frac{\sum_{d}{(\sigma_d^2(N_d-1)+N_d(\mu-\mu_d)^2})}{N-1}} $$
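For anyone who wants to sanity-check these formulas, here is a minimal Python sketch (the data are simulated and the group sizes are arbitrary): it pools the per-day summaries using the TL;DR formulas and compares the result against computing directly on the raw samples.

    import numpy as np

    rng = np.random.default_rng(1)
    # Simulated "days" with different numbers of samples
    days = [rng.normal(860, 230, size=n) for n in (24, 31, 28, 30)]

    n_d  = np.array([len(d) for d in days])
    mu_d = np.array([d.mean() for d in days])
    sd_d = np.array([d.std(ddof=1) for d in days])   # sample StdDev per day

    N  = n_d.sum()
    mu = (mu_d * n_d).sum() / N                      # weighted average
    sd = np.sqrt((sd_d**2 * (n_d - 1) + n_d * (mu - mu_d)**2).sum() / (N - 1))

    all_samples = np.concatenate(days)
    print(mu, all_samples.mean())          # should agree
    print(sd, all_samples.std(ddof=1))     # should agree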
2,082
How to 'sum' a standard deviation?
I'd like to stress again the incorrectness of part of the accepted answer. The wording of the question led to confusion. The question gives the average and StdDev of each month, but it's unclear what kind of subset is used. Is it the average of one wind turbine in the farm, or the daily average of the whole farm? If it's the daily average for each month, you can't add up the monthly averages to get the annual average because they do not have the same denominator. If it's the per-unit average, the question should state "We can say that in the average year each turbine in the wind farm produces 10,358 MWh,..." instead of "We can say that in the average year the wind farm produces 10,358 MWh,...".

Furthermore, the standard deviation or variance is a comparison against the set's own average. It does NOT contain any information regarding the average of its parent set (the bigger set of which the computed set is a component).

The image is not necessarily very precise, but it conveys the general idea. Let's imagine the output of one wind farm as in the image. As you can see, the "local" variance has nothing to do with the "global" variance, no matter how you add or multiply those. If you add the "local" variances together, the result will be very small compared to the "global" variance. You cannot recover the variance of the year from the variances of the two half-years alone.

So, in the accepted answer, while the sum calculation is correct, the division by 12 to get the monthly number means nothing. Of the three sections, the first and last sections are wrong, the second is right.

Again, it is an incorrect application; please do not follow it, or it will get you into trouble. Just calculate over the whole data set, using the total yearly/monthly output of each unit as the data points, depending on whether you want the yearly or the monthly figure; that should give the correct answer. You probably want something like this. These are my randomly generated numbers. If you have the data, the result in cell O2 should be your answer.
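A quick numerical illustration of the point above, using the constant-output example from the earlier answer (the series lengths are made up):

    import numpy as np

    # Constant 1000 MWh/day in the first half-year, constant 2000 MWh/day in the second
    first_half  = np.full(182, 1000.0)
    second_half = np.full(183, 2000.0)
    year = np.concatenate([first_half, second_half])

    print(first_half.var(), second_half.var())              # both 0.0 -- the "local" variances
    print(np.mean([first_half.var(), second_half.var()]))   # 0.0, not the yearly variance
    print(year.var())                                        # large, driven entirely by the difference in means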
2,083
How to 'sum' a standard deviation?
I believe what you may really be interested in, though, is the standard error rather than the standard deviation. The standard error of the mean (SEM) is the standard deviation of the sample-mean's estimate of a population mean, and that will give you a measure of how good your yearly MWh estimate is. It's very easy to compute: if you used $n$ samples to obtain your monthly MWh averages and standard deviations, you would just compute the standard deviation as @IanBoyd suggested and normalize it by the total size of your sample. That is, $$ s = \frac{\sqrt{s_1^2 + s_2^2 + \ldots + s_{12}^2}}{\sqrt{12 \times n}} $$
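As a small illustrative sketch of this formula (the per-month sample size n = 30 is a made-up assumption; plug in your own), the SEM could be computed as:

    import numpy as np

    monthly_sd = np.array([333, 250, 301, 204, 165, 263, 280, 98, 76, 240, 178, 250])
    n = 30  # hypothetical number of samples behind each monthly figure

    combined_sd = np.sqrt((monthly_sd**2).sum())   # quadrature sum of the monthly StdDevs
    sem = combined_sd / np.sqrt(12 * n)            # the formula in this answer
    print(combined_sd, sem)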
2,084
How to 'sum' a standard deviation?
If you know the number of samples used for the calculation of the monthly mean and standard deviation, you can use the "batch extension" by Chan et al. of Welford's algorithm to combine the variances (squares of standard deviations) and means of data subsets. The algorithm is numerically robust and exact. See this Wiki page. I have implemented it in Python here. For your example, and additionally an assumed sample size of 30 per month, the usage and result would be

    mean_array   = [927, 1234, 1032, 876, 865, 750, 780, 690, 730, 821, 803, 850]
    stddev_array = [333, 250, 301, 204, 165, 263, 280, 98, 76, 240, 178, 250]
    # number of samples for the monthly mean and standard dev
    n_array      = [30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30]

    sa = StatisticsAggregator()
    for n, mean, stddev in zip(n_array, mean_array, stddev_array):
        sa.add(n, mean, stddev)

    print('global mean', sa.mean)
    print('global std. dev.', np.sqrt(sa.var))
    #>> global mean 863.1666666666666
    #>> global std. dev. 143.15424858832827
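In case the linked StatisticsAggregator is not at hand, here is a minimal, self-contained sketch of the pairwise batch-combination step in the spirit of Chan et al.; it is not the author's implementation, just an illustration on simulated data that merges two (count, mean, M2) summaries and checks the result against a direct computation:

    import numpy as np

    def combine(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
        # Merge two batches summarized by (count, mean, M2 = sum of squared deviations)
        n = n_a + n_b
        delta = mean_b - mean_a
        mean = mean_a + delta * n_b / n
        m2 = m2_a + m2_b + delta**2 * n_a * n_b / n
        return n, mean, m2

    rng = np.random.default_rng(0)
    a, b = rng.normal(900, 300, 30), rng.normal(1200, 250, 30)

    # Summaries as they might be stored per month: count, mean, M2 (from the sample StdDev)
    summ = lambda x: (len(x), x.mean(), x.std(ddof=1) ** 2 * (len(x) - 1))
    n, mean, m2 = combine(*summ(a), *summ(b))

    both = np.concatenate([a, b])
    print(mean, both.mean())                        # should agree
    print(np.sqrt(m2 / (n - 1)), both.std(ddof=1))  # should agree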
2,085
When to use regularization methods for regression?
Short answer: Whenever you are facing one of these situations: a large number of variables or a low ratio of number of observations to number of variables (including the $n\ll p$ case), high collinearity, seeking a sparse solution (i.e., embedding feature selection in the estimation of the model parameters), or accounting for the grouping of variables in a high-dimensional data set.

Ridge regression generally yields better predictions than the OLS solution, through a better compromise between bias and variance. Its main drawback is that all predictors are kept in the model, so it is not very interesting if you seek a parsimonious model or want to apply some kind of feature selection.

To achieve sparsity, the lasso is more appropriate, but it will not necessarily yield good results in the presence of high collinearity (it has been observed that if predictors are highly correlated, the prediction performance of the lasso is dominated by ridge regression). The second problem with the L1 penalty is that the lasso solution is not uniquely determined when the number of variables is greater than the number of subjects (this is not the case for ridge regression). The last drawback of the lasso is that it tends to select only one variable among a group of predictors with high pairwise correlations. In this case, there are alternative solutions like the group lasso (i.e., shrinkage on blocks of covariates, so that some blocks of regression coefficients are exactly zero) or the fused lasso. The graphical lasso also offers promising features for GGMs (see the R glasso package).

But, definitely, the elastic net criterion, which is a combination of L1 and L2 penalties, achieves both shrinkage and automatic variable selection, and it allows keeping $m>n$ variables in the case where $n\ll p$. Following Zou and Hastie (2005), it is defined as the argument that minimizes (over $\beta$)

$$ L(\lambda_1,\lambda_2,\mathbf{\beta}) = \|Y-X\beta\|^2 + \lambda_2\|\beta\|^2 + \lambda_1\|\beta\|_1 $$

where $\|\beta\|^2=\sum_{j=1}^p\beta_j^2$ and $\|\beta\|_1=\sum_{j=1}^p|\beta_j|$.

The lasso can be computed with an algorithm based on coordinate descent, as described in the paper by Friedman and colleagues, Regularization Paths for Generalized Linear Models via Coordinate Descent (JSS, 2010), or with the LARS algorithm. In R, the penalized, lars (or biglars), and glmnet packages are useful; in Python, there's the scikit.learn toolkit, with extensive documentation on the algorithms used to apply all three kinds of regularization schemes.

As for general references, the Lasso page contains most of what is needed to get started with lasso regression and technical details about the L1 penalty, and this related question features essential references: When should I use lasso vs ridge?
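To make this concrete, here is a hedged scikit-learn sketch (synthetic data, arbitrary settings, not tied to any particular application) fitting ridge, the lasso and the elastic net with cross-validated penalties on an n ≪ p problem; note how the L1-based methods produce sparse coefficient vectors:

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV

    # Synthetic p > n problem with only a few informative predictors
    X, y = make_regression(n_samples=50, n_features=200, n_informative=10,
                           noise=5.0, random_state=0)

    ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X, y)
    lasso = LassoCV(cv=5, random_state=0).fit(X, y)
    enet  = ElasticNetCV(l1_ratio=[.1, .5, .9, 1], cv=5, random_state=0).fit(X, y)

    for name, model in [("ridge", ridge), ("lasso", lasso), ("elastic net", enet)]:
        nonzero = np.sum(np.abs(model.coef_) > 1e-8)
        print(f"{name}: {nonzero} non-zero coefficients out of {X.shape[1]}")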
2,086
When to use regularization methods for regression?
A theoretical justification for the use of ridge regression is that its solution is the posterior mean given a normal prior on the coefficients. That is, if you care about squared error and you believe in a normal prior, the ridge estimates are optimal.

Similarly, the lasso estimate is the posterior mode under a double-exponential prior on your coefficients. This is optimal under a zero-one loss function.

In practice, these techniques typically improve predictive accuracy in situations where you have many correlated variables and not a lot of data. While the OLS estimator is best linear unbiased, it has high variance in these situations. If you look at the bias-variance trade-off, prediction accuracy improves because the small increase in bias is more than offset by the large reduction in variance.
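A small simulation can illustrate the bias-variance point. This is only a sketch: the design, noise level and penalty are arbitrary, and the exact numbers depend on the random seed, but with strongly correlated predictors and few observations ridge typically has lower test error than OLS.

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    n, p, rho = 40, 20, 0.9
    # Equicorrelated Gaussian predictors
    cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
    beta = rng.normal(0, 1, p)

    def draw(m):
        X = rng.multivariate_normal(np.zeros(p), cov, size=m)
        return X, X @ beta + rng.normal(0, 3, m)

    X_tr, y_tr = draw(n)
    X_te, y_te = draw(2000)

    ols   = LinearRegression().fit(X_tr, y_tr)
    ridge = Ridge(alpha=10.0).fit(X_tr, y_tr)

    print("OLS   test MSE:", mean_squared_error(y_te, ols.predict(X_te)))
    print("ridge test MSE:", mean_squared_error(y_te, ridge.predict(X_te)))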
2,087
How to plot ROC curves in multiclass classification?
It seems you are looking for multi-class ROC analysis, which is a kind of multi-objective optimization covered in a tutorial at ICML'04. As in several multi-class problems, the idea is generally to carry out pairwise comparisons (one class vs. all other classes, one class vs. another class; see (1) or the Elements of Statistical Learning), and there is a paper by Landgrebe and Duin on that topic, Approximating the multiclass ROC by pairwise analysis, Pattern Recognition Letters 2007 28: 1747-1758.

Now, for visualization purposes, I saw some papers a while ago, most of them revolving around the volume under the ROC surface (VUS) or cobweb diagrams. I don't know, however, if there exists an R implementation of these methods, although I think the stars() function might be used for cobweb plots. I just ran across a Matlab toolbox that seems to offer multi-class ROC analysis, PRSD Studio.

Other papers that may also be useful as a first start for visualization/computation:

Visualisation of multi-class ROC surfaces
A simplified extension of the Area under the ROC to the multiclass domain

References: 1. Allwein, E.L., Schapire, R.E. and Singer, Y. (2000). Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141.
2,088
How to plot ROC curves in multiclass classification?
I recently found this pROC package in R which plots a multiclass ROC using the technique specified by Hand and Till (2001). You can use the multiclass.roc function.
2,089
How to plot ROC curves in multiclass classification?
You need to specify your classifier to act as one-vs-rest, and then you can plot individual ROC curves. There's a handy library for doing it without much work in Python called yellowbrick. Check out the docs, which have a minimal reproducible example. The result looks like this (source):
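If you prefer to stay with plain scikit-learn rather than yellowbrick, a one-vs-rest ROC plot can be sketched as follows (iris and logistic regression are just placeholders for your own data and classifier):

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_curve, auc
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import label_binarize

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)                 # one column of scores per class
    y_bin = label_binarize(y_te, classes=[0, 1, 2])  # one-vs-rest ground truth

    for k in range(3):
        fpr, tpr, _ = roc_curve(y_bin[:, k], scores[:, k])
        plt.plot(fpr, tpr, label=f"class {k} vs rest (AUC = {auc(fpr, tpr):.2f})")
    plt.plot([0, 1], [0, 1], "k--")
    plt.xlabel("False positive rate"); plt.ylabel("True positive rate"); plt.legend()
    plt.show()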
2,090
How to plot ROC curves in multiclass classification?
The answers here are pretty complete, but I still would like to add my five cents. In this question you can find an example of R code for producing ROC curves using a one-vs-all approach and the ROCR R library. This is the plot from that answer:
2,091
How to plot ROC curves in multiclass classification?
While the math is beyond me, this general review article has some references you will likely be interested in, and a brief description of multi-class ROC graphs.

An introduction to ROC analysis, by Tom Fawcett. Pattern Recognition Letters, Volume 27, Issue 8, June 2006, Pages 861-874.

Link to the PDF as provided by gd047 - thanks.
2,092
How to compute precision/recall for multiclass-multilabel classification?
For multi-label classification you have two ways to go. First consider the following.

$n$ is the number of examples. $Y_i$ is the ground-truth label assignment of the $i^{th}$ example. $x_i$ is the $i^{th}$ example. $h(x_i)$ is the set of predicted labels for the $i^{th}$ example.

Example-based

The metrics are computed per datapoint and then aggregated over all the datapoints.

Precision = $\frac{1}{n}\sum_{i=1}^{n}\frac{|Y_{i}\cap h(x_{i})|}{|h(x_{i})|}$, the ratio of how much of the predicted label set is correct. The numerator counts how many of the predicted labels appear in the ground truth, and the ratio tells you what fraction of the predicted labels are actually correct.

Recall = $\frac{1}{n}\sum_{i=1}^{n}\frac{|Y_{i}\cap h(x_{i})|}{|Y_{i}|}$, the ratio of how many of the actual labels were predicted. The numerator again counts the labels common to the prediction and the ground truth, but the ratio is taken against the number of actual labels, i.e., the fraction of the true labels that were predicted.

There are other metrics as well.

Label-based

Here things are done label-wise. For each label the metrics (e.g., precision, recall) are computed and then aggregated. Hence, in this case you end up computing the precision/recall for each label over the entire dataset, as you would for a binary classification (since each label has a binary assignment), and then aggregating.

The easy way is to present the general form; this is just an extension of the standard multi-class equivalent.

Macro averaged: $\frac{1}{q}\sum_{j=1}^{q}B(TP_{j},FP_{j},TN_{j},FN_{j})$

Micro averaged: $B(\sum_{j=1}^{q}TP_{j},\sum_{j=1}^{q}FP_{j},\sum_{j=1}^{q}TN_{j},\sum_{j=1}^{q}FN_{j})$

Here $TP_{j},FP_{j},TN_{j},FN_{j}$ are the true positive, false positive, true negative and false negative counts for the $j^{th}$ label only, and $B$ stands for any confusion-matrix based metric; in your case you would plug in the standard precision and recall formulas. For the macro average you compute the metric for each label and then average over labels; for the micro average you sum the counts over all labels first and then apply the metric.

You might be interested to have a look at the code for the multi-label metrics here, which is part of the mldr package in R. You might also want to look at the Java multi-label library MULAN.

This is a nice paper to get into the different metrics: A Review on Multi-Label Learning Algorithms
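As an illustration of the formulas above, here is a small NumPy sketch (the toy indicator matrices are made up) computing the example-based precision/recall and the label-based macro/micro averages:

    import numpy as np

    # Toy multi-label data: rows = examples, columns = labels (1 = label assigned)
    Y_true = np.array([[1, 0, 1, 0],
                       [0, 1, 0, 0],
                       [1, 1, 0, 1]])
    Y_pred = np.array([[1, 0, 0, 0],
                       [0, 1, 1, 0],
                       [1, 0, 0, 1]])

    # Example-based precision/recall (average over examples)
    inter = (Y_true & Y_pred).sum(axis=1)
    prec_ex = np.mean(inter / np.maximum(Y_pred.sum(axis=1), 1))
    rec_ex  = np.mean(inter / np.maximum(Y_true.sum(axis=1), 1))

    # Label-based counts, per label
    tp = ((Y_true == 1) & (Y_pred == 1)).sum(axis=0)
    fp = ((Y_true == 0) & (Y_pred == 1)).sum(axis=0)
    fn = ((Y_true == 1) & (Y_pred == 0)).sum(axis=0)

    # Macro: metric per label, then average; micro: sum the counts, then apply the metric
    prec_macro = np.mean(tp / np.maximum(tp + fp, 1))
    prec_micro = tp.sum() / (tp.sum() + fp.sum())
    rec_macro  = np.mean(tp / np.maximum(tp + fn, 1))
    rec_micro  = tp.sum() / (tp.sum() + fn.sum())

    print(prec_ex, rec_ex, prec_macro, prec_micro, rec_macro, rec_micro)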
2,093
How to compute precision/recall for multiclass-multilabel classification?
Another popular tool for measuring classifier performance is ROC/AUC; this one too has a multi-class / multi-label extension: see [Hand 2001]

[Hand 2001]: A simple generalization of the area under the ROC curve to multiple class classification problems
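If you work in Python, scikit-learn's roc_auc_score exposes a one-vs-one multi-class AUC (multi_class='ovo') that averages pairwise AUCs in the spirit of Hand & Till (2001). A minimal sketch, with iris as a placeholder dataset:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)

    # Average AUC over all pairs of classes (one-vs-one), macro-averaged
    print(roc_auc_score(y_te, proba, multi_class="ovo", average="macro"))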
2,094
How to compute precision/recall for multiclass-multilabel classification?
Here is some discussion from a Coursera forum thread about the confusion matrix and multi-class precision/recall measurement.

The basic idea is to compute the precision and recall of all the classes, then average them to get a single real-number measurement. The confusion matrix makes it easy to compute the precision and recall of a class.

Below is a basic explanation of the confusion matrix, copied from that thread:

A confusion matrix is a way of classifying true positives, true negatives, false positives, and false negatives when there are more than 2 classes. It's used for computing the precision and recall, and hence the f1-score, for multi-class problems.

The actual values are represented by columns. The predicted values are represented by rows.

Examples:

10 training examples that are actually 8 are classified (predicted) incorrectly as 5
13 training examples that are actually 4 are classified incorrectly as 9

Confusion Matrix

    cm =
            1    2    3    4    5    6    7    8    9   10
       1  298    2    1    0    1    1    3    1    1    0
       2    0  293    7    4    1    0    5    2    0    0
       3    1    3  263    0    8    0    0    3    0    2
       4    1    5    0  261    4    0    3    2    0    1
       5    0    0   10    0  254    3    0   10    2    1
       6    0    4    1    1    4  300    0    1    0    0
       7    1    3    2    0    0    0  264    0    7    1
       8    3    5    3    1    7    1    0  289    1    0
       9    0    1    3   13    1    0   11    1  289    0
      10    0    6    0    1    6    1    2    1    4  304

For class x:

True positive: diagonal position, cm(x, x).
False positive: sum of row x (without the main diagonal), sum(cm(x, :), 2) - cm(x, x), i.e. everything predicted as x that is not actually x.
False negative: sum of column x (without the main diagonal), sum(cm(:, x)) - cm(x, x), i.e. everything that is actually x but predicted as something else.

(With actual values in columns and predicted values in rows, false positives live in the row of a class and false negatives in its column.)

You can compute precision, recall and the F1 score following the course formulas. Averaging over all classes (with or without weighting) gives values for the entire model.
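For a compact way to turn such a confusion matrix into per-class and macro-averaged metrics, here is a small NumPy sketch (the 3x3 matrix is made up; it follows the same convention as above, rows = predicted, columns = actual):

    import numpy as np

    # Toy confusion matrix: rows = predicted class, columns = actual class
    cm = np.array([[50,  3,  2],
                   [ 4, 40,  6],
                   [ 1,  2, 45]])

    tp = np.diag(cm)
    fp = cm.sum(axis=1) - tp   # predicted as the class but actually something else
    fn = cm.sum(axis=0) - tp   # actually the class but predicted as something else

    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)

    print("per-class precision:", precision)
    print("per-class recall:   ", recall)
    print("macro precision/recall/F1:", precision.mean(), recall.mean(), f1.mean())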
2,095
How to compute precision/recall for multiclass-multilabel classification?
I don't know about the multi-label part, but for multi-class classification these links will help you.

This link explains how to build the confusion matrix that you can use to calculate the precision and recall for each category.

And this link explains how to calculate the micro-F1 and macro-F1 measures to evaluate the classifier as a whole.

Hope you find this useful.
2,096
How to compute precision/recall for multiclass-multilabel classification?
Exactly the same way you would do it in the general case, with sets:

http://en.wikipedia.org/wiki/F1_score
http://en.wikipedia.org/wiki/Precision_and_recall

Here are simple Python functions that do exactly that:

    def precision(y_true, y_pred):
        i = set(y_true).intersection(y_pred)
        len1 = len(y_pred)
        if len1 == 0:
            return 0
        else:
            return len(i) / len1

    def recall(y_true, y_pred):
        i = set(y_true).intersection(y_pred)
        return len(i) / len(y_true)

    def f1(y_true, y_pred):
        p = precision(y_true, y_pred)
        r = recall(y_true, y_pred)
        if p + r == 0:
            return 0
        else:
            return 2 * (p * r) / (p + r)

    if __name__ == '__main__':
        print(f1(['A', 'B', 'C'], ['A', 'B']))
2,097
How to compute precision/recall for multiclass-multilabel classification?
This link helped me: https://www.youtube.com/watch?v=HBi-P5j0Kec. I am hoping it will help you as well.

Say the confusion matrix is as below, with rows as the actual classes and columns as the predicted classes:

         A    B    C    D
    A  100   80   10   10
    B    0    9    0    1
    C    0    1    8    1
    D    0    1    0    9

The precision would be

    P(A) = 100 / (100 + 0 + 0 + 0) = 1
    P(B) = 9 / (9 + 80 + 1 + 1) = 9/91

i.e., take the true positives of the class and divide by the sum of its column (everything predicted as that class).

The recall would be

    R(A) = 100 / (100 + 80 + 10 + 10) = 0.5
    R(B) = 9 / (9 + 0 + 0 + 1) = 0.9

i.e., take the true positives of the class and divide by the sum of its row (everything actually belonging to that class).

Once you get all the values, take the macro averages

    avg(P) = (P(A) + P(B) + P(C) + P(D)) / 4
    avg(R) = (R(A) + R(B) + R(C) + R(D)) / 4

and then

    F1 = 2 * avg(P) * avg(R) / (avg(P) + avg(R))
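If you want to verify this arithmetic, here is a tiny NumPy sketch of the same computation (rows = actual, columns = predicted, matching the table above):

    import numpy as np

    cm = np.array([[100, 80, 10, 10],
                   [  0,  9,  0,  1],
                   [  0,  1,  8,  1],
                   [  0,  1,  0,  9]])

    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # divide by column sums (predicted totals)
    recall    = tp / cm.sum(axis=1)   # divide by row sums (actual totals)

    avg_p, avg_r = precision.mean(), recall.mean()
    print(precision, recall)
    print(2 * avg_p * avg_r / (avg_p + avg_r))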
2,098
How to compute precision/recall for multiclass-multilabel classification?
Check out these slides from cs205.org at Harvard. Once you get to the section on Error Measures, there is a discussion of precision and recall in multi-class settings (e.g., one-vs-all or one-vs-one) and of confusion matrices. Confusion matrices are what you really want here.

FYI, in the Python software package scikits.learn, there are built-in methods to automatically compute things like the confusion matrix from classifiers trained on multi-class data. It can probably directly compute precision-recall plots for you too. Worth a look.
2,099
How to compute precision/recall for multiclass-multilabel classification?
From Ozgur et al. (2005) it is possible to see that you should compute precision and recall following the usual expressions, but instead of averaging over the total of N instances in your dataset, you should use N = [number of instances that have at least one label of the class in question assigned to them].

Here is the reference mentioned: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.8244&rep=rep1&type=pdf
2,100
How to compute precision/recall for multiclass-multilabel classification?
In case you want to see the results directly:

    from sklearn.metrics import classification_report, confusion_matrix

    print(classification_report(y_test, y_pred))

This would work in case you want the average precision, recall and F1 score:

    from sklearn.metrics import precision_recall_fscore_support as score

    precision, recall, fscore, support = score(y_test, y_pred, average='weighted')
    print('Precision : {}'.format(precision))
    print('Recall    : {}'.format(recall))
    print('F-score   : {}'.format(fscore))