What do you call an average that does not include outliers?
It's called the trimmed mean. Basically what you do is compute the mean of the middle 80% of your data, ignoring the top and bottom 10%. Of course, these numbers can vary, but that's the general idea.
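For illustration, here is a minimal Python sketch of a trimmed mean using scipy.stats.trim_mean. Because the example set in this thread has only five values, a 20% trim is used so that exactly one value is cut from each end; on larger samples the 10% trim described above would be the direct analogue.

import numpy as np
from scipy import stats

data = np.array([90, 89, 92, 91, 5])   # the thread's example scores, 5 being the outlier

print(data.mean())                  # ordinary mean: 73.4, dragged down by the outlier
print(stats.trim_mean(data, 0.2))   # 20% trimmed mean: drops 5 and 92 here, giving 90.0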
What do you call an average that does not include outliers?
A statistically sensible approach is to use a standard deviation cut-off. For example, remove any results more than 3 standard deviations from the mean. Using a rule like "biggest 10%" doesn't make sense. What if there are no outliers? The 10% rule would eliminate some data anyway. Unacceptable.
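A minimal Python sketch of this standard-deviation cut-off (illustrative only). Note that on the thread's tiny five-value example the single outlier inflates the SD so much that a 3 SD rule removes nothing, which is exactly the weakness of SD-based cut-offs raised in a later answer; on larger samples it behaves more sensibly.

import numpy as np

data = np.array([90, 89, 92, 91, 5], dtype=float)
m, s = data.mean(), data.std(ddof=1)      # sample mean and standard deviation
kept = data[np.abs(data - m) <= 3 * s]    # keep only values within +/- 3 SD of the mean

# Here s is about 38, so even the 5 survives the cut and the mean is unchanged (73.4).
print(kept.mean())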
What do you call an average that does not include outliers?
Another standard test for identifying outliers is to use the fences LQ $-$ 1.5$\times$IQR and UQ $+$ 1.5$\times$IQR, where LQ and UQ are the lower and upper quartiles and IQR is the interquartile range. This is somewhat easier than computing the standard deviation, and more general, since it doesn't make any assumptions about the underlying data being from a normal distribution.
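A minimal Python sketch of these IQR fences (illustrative; numpy's default quartile interpolation may differ slightly from hand methods):

import numpy as np

data = np.array([90, 89, 92, 91, 5], dtype=float)
lq, uq = np.percentile(data, [25, 75])      # lower and upper quartiles
iqr = uq - lq
lo, hi = lq - 1.5 * iqr, uq + 1.5 * iqr     # the fences described above
kept = data[(data >= lo) & (data <= hi)]

# For the thread's example the 5 falls below the lower fence, leaving a mean of 90.5.
print(kept.mean())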
What do you call an average that does not include outliers?
For a very specific name, you'll need to specify the mechanism for outlier rejection. One general term is "robust". dsimcha mentions one approach: trimming. Another is clipping: all values outside a known-good range are discarded.
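A minimal sketch of the clipping idea as described here (discarding values outside a known-good range); the bounds below are made up purely for illustration:

def clipped_mean(values, lo=0.0, hi=100.0):
    """Mean of only the values inside a known-good range; everything else is discarded."""
    kept = [v for v in values if lo <= v <= hi]
    return sum(kept) / len(kept)

# 150 and -7 fall outside the assumed valid range of 0-100, so only the four scores remain.
print(clipped_mean([90, 89, 92, 91, 150, -7]))   # 90.5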
What do you call an average that does not include outliers?
The "average" you're talking about is actually called the "mean". It's not exactly answering your question, but a different statistic which is not affected by outliers is the median, that is, the middle number. {90,89,92,91,5} mean: 73.4 {90,89,92,91,5} median: 90 This might be useful to you, I dunno.
What do you call an average that does not include outliers?
There is no official name, because of the various mechanisms, such as the Q test, used to get rid of outliers. Removing outliers is called trimming. No program I have ever used has average() with an integrated trim().
What do you call an average that does not include outliers?
I don't know if it has a name, but you could easily come up with a number of algorithms to reject outliers:
- Find all numbers between the 10th and 90th percentiles (do this by sorting, then rejecting the first $N/10$ and last $N/10$ numbers) and take the mean of the remaining values.
- Sort the values, and reject high and low values as long as doing so changes the mean/standard deviation by more than $X\%$.
- Sort the values, and reject high and low values as long as the values in question are more than $K$ standard deviations from the mean, as sketched below.
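A rough Python sketch of the last algorithm above, iteratively rejecting the most extreme value while it lies more than K sample standard deviations from the mean of the remaining data (ad hoc and illustrative, not a standard routine):

import numpy as np

def iterative_sd_reject(values, k=3.0):
    """Mean after repeatedly dropping the most extreme value that is more than
    k sample SDs from the mean of the data that remain."""
    data = np.asarray(values, dtype=float)
    while data.size > 2:
        m, s = data.mean(), data.std(ddof=1)
        dev = np.abs(data - m)
        i = int(np.argmax(dev))            # index of the most extreme remaining value
        if s == 0 or dev[i] <= k * s:
            break
        data = np.delete(data, i)
    return data.mean()

print(iterative_sd_reject([90, 89, 92, 91, 5], k=1.5))   # 90.5 for the thread's example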
What do you call an average that does not include outliers?
The most common way of getting a robust (the usual word meaning resistant to bad data) average is to use the median. This is just the middle value in the sorted list, or halfway between the middle two values, so for your example it would be 90.5 = halfway between 90 and 91. If you want to get really into robust statistics (such as robust estimates of standard deviation, etc.), I would recommend a lot of the code at the AGORAS group, but this may be too advanced for your purposes.
What do you call an average that does not include outliers?
... {90,89,92,91(,5)} avg = 90.5 How do you describe this average in statistics? ... There's no special designation for that method. Call it any name you want, provided that you always tell the audience how you arrived at your result, and you have the outliers in hand to show them if they request (and believe me: they will request).
What do you call an average that does not include outliers?
If all you have is one variable (as you imply), I think some of the respondents above are being overly critical of your approach. Certainly other methods that look at things like leverage are more statistically sound; however, that implies you are doing modeling of some sort. If you just have, for example, scores on a test or ages of senior citizens (plausible cases of your example), I think it is practical and reasonable to be suspicious of the outlier you bring up. You could look at the overall mean and the trimmed mean and see how much it changes, but that will be a function of your sample size and the deviation from the mean for your outliers. With egregious outliers like that, you would certainly want to look into the data-generating process to figure out why that's the case. Is it a data entry or administrative fluke? If so, and it is likely unrelated to the actual true value (which is unobserved), it seems to me perfectly fine to trim. If it is a true value as far as you can tell, you may not be able to remove it unless you are explicit in your analysis about it.
What do you call an average that does not include outliers?
I love the discussion here - the trimmed mean is a powerful tool to get a central tendency estimate concentrated around the middle of the data. The one thing I would add is that there is a choice to be made about which "metric" to use in the cases of small and large sample sizes. In some cases we talk about means in the context of large samples because of the central limit theorem, medians as robust small-sample alternatives, and trimmed means as robust to outliers. Obviously the above is a gross generalization, but there are interesting papers that talk about the families and classes of estimators in large- and small-sample settings and their properties. I work in bioinformatics, where you usually deal with small samples (3-10), often in mouse models and the like, and this paper gives a good technical overview of what alternatives exist and what properties these estimators have: Rousseeuw, P. J., & Verboven, S. (2002). Robust estimation in very small samples. Computational Statistics & Data Analysis, 40(4), 741-758. Link: https://www.sciencedirect.com/science/article/pii/S0167947302000786 This is of course one paper, but there are plenty of others that discuss these types of estimators. Hope this helps.
What do you call an average that does not include outliers?
There are methods superior to the IQR- or SD-based methods. With outliers present, the distribution likely has issues with normality already (unless the outliers are evenly distributed at both ends of the distribution). This inflates the SD a lot, making use of the SD less than desirable; however, the SD method has some desirable aspects over the IQR method, namely that 1.5 times the IQR is a relatively subjective cutoff. While subjectivity in these matters is unavoidable, it is preferable to reduce it. A Hampel identifier, on the other hand, uses robust methods to flag outliers. Essentially it's the same as the SD method, but you replace the mean with the median and the SD with the median absolute deviation (MAD). The MAD is just the median distance from the median. To make it comparable to an SD for roughly normal data, it is rescaled by the constant 0.6745, so the statistic comes out to $0.6745(X - \text{median})/\text{MAD}$. The resulting statistic is treated identically to a Z-score. This bypasses the issue of the non-normality that is likely present if you have outliers. As for what to call it: "trimmed mean" is normally reserved for the method of trimming the bottom and top ten percent mentioned by @dsimcha. If the data have been completely cleaned you may refer to it as the cleaned mean, or just the mean; just be sure to be clear in your write-up about what you did. Hampel, F. R., Ronchetti, E. M., Rousseeuw, P. J., & Stahel, W. A. (1986). Robust Statistics. John Wiley & Sons, New York.
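A minimal Python sketch of a MAD-based (Hampel-style) screen as described above; the 0.6745 constant rescales the MAD so the statistic can be read like an ordinary Z-score for roughly normal data (illustrative, not a reference implementation, and it assumes MAD > 0):

import numpy as np

def hampel_outliers(values, threshold=3.0):
    """Flag values whose robust Z-score 0.6745*(x - median)/MAD exceeds the threshold."""
    x = np.asarray(values, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))        # median absolute deviation from the median
    robust_z = 0.6745 * (x - med) / mad
    return np.abs(robust_z) > threshold

print(hampel_outliers([90, 89, 92, 91, 5]))   # only the 5 is flagged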
What do you call an average that does not include outliers?
Disclaimer - this method is ad hoc and without rigorous study. Use at your own risk :) What I found to be quite good was to reduce the relevance of a point's contribution to the mean by the square of its number of standard deviations from the mean, but only if the point is more than one standard deviation from the mean. Steps:
1. Calculate the mean and standard deviation as usual.
2. Recalculate the mean, but this time, for each value that is more than one standard deviation from the mean, reduce its contribution: divide the value by the square of its number of deviations before adding it to the total. Because it contributes less, also reduce N by subtracting 1 - 1/(square of the value's deviations) from N.
3. Recalculate the standard deviation, but use this new mean rather than the old mean.

Example: stddev = 0.5, mean = 10, value = 11. Then deviations = distance from mean / stddev = |10 - 11|/0.5 = 2, so the value's contribution changes from 11 to 11/2^2 = 11/4, and N is reduced to N - 3/4.

code:

def mean(data):
    """Return the sample arithmetic mean of data."""
    n = len(data)
    if n < 1:
        raise ValueError('mean requires at least one data point')
    return sum(data) / n

def _ss(data):
    """Return the sum of squared deviations of the sequence data, and its mean."""
    c = mean(data)
    ss = sum((x - c) ** 2 for x in data)
    return ss, c

def stddev(data, ddof=0):
    """Population standard deviation by default; use ddof=1 for the sample SD."""
    n = len(data)
    if n < 2:
        raise ValueError('variance requires at least two data points')
    ss, c = _ss(data)
    pvar = ss / (n - ddof)
    return pvar ** 0.5, c

def rob_adjusted_mean(values, s, m):
    """Weighted mean that down-weights points more than one SD from the mean m."""
    n = 0.0
    tot = 0.0
    for v in values:
        deviations = abs(v - m) / s
        if deviations > 1:
            # it's an outlier, so reduce its weight by the square of its number of deviations
            n += 1.0 / deviations ** 2
            tot += v / deviations ** 2
        else:
            n += 1
            tot += v
    return tot / n

def rob_adjusted_ss(values, s, m):
    """Return the sum of squared deviations about the adjusted mean, and that mean."""
    c = rob_adjusted_mean(values, s, m)
    ss = sum((x - c) ** 2 for x in values)
    return ss, c

def rob_adjusted_stddev(data, s, m, ddof=0):
    """Standard deviation computed about the adjusted mean (ddof=1 for the sample SD)."""
    n = len(data)
    if n < 2:
        raise ValueError('variance requires at least two data points')
    ss, c = rob_adjusted_ss(data, s, m)
    pvar = ss / (n - ddof)
    return pvar ** 0.5, c

# values = your list of measurements (the 50 measurements used below are not shown)
s, m = stddev(values, ddof=1)
print(s, m)
s, m = rob_adjusted_stddev(values, s, m, ddof=1)
print(s, m)

Output before and after adjustment of my 50 measurements:

0.0409789841609 139.04222
0.0425867309757 139.030745443
What do you call an average that does not include outliers?
It can be the median. Not always, but sometimes. I have no idea what it is called on other occasions. Hope this helped. (At least a little.)
What do you call an average that does not include outliers?
My statistics textbook refers to this as a Sample Mean as opposed to a Population Mean. Sample implies there was a restriction applied to the full dataset, though no modification (removal) to the dataset was made.
Is there a minimum sample size required for the t-test to be valid?
There is no minimum sample size for the t-test to be valid other than it be large enough to calculate the test statistic. Validity requires that the assumptions for the test statistic hold approximately. In the one-sample case, those assumptions are that the data are iid normal (or approximately normal) with mean 0 under the null hypothesis and a variance that is unknown but estimated from the sample. In the two-sample case, they are that both samples are independent of each other and that each sample consists of iid normal variables, with the two samples having the same mean and a common unknown variance under the null hypothesis; a pooled estimate of variance is used for the statistic.

In the one-sample case, the distribution under the null hypothesis is a central t with n-1 degrees of freedom. In the two-sample case, with sample sizes n and m not necessarily equal, the null distribution of the test statistic is t with n+m-2 degrees of freedom. The increased variability due to low sample size is accounted for in the distribution, which has heavier tails when the degrees of freedom are low, which corresponds to a low sample size. So critical values can be found for the test statistic to have a given significance level for any sample size (well, at least of size 2 or larger).

The problem with low sample size is with regard to the power of the test. The reviewer may have felt that 15 per group was not a large enough sample size to have high power of detecting a meaningful difference, say delta, between the two means (or a mean greater than delta in absolute value for a one-sample problem). Needing 40 would require a specification of a certain power at a particular delta that would be achieved with n equal to 40 but not lower than 40. I should add that for the t-test to be performed, the sample must be large enough to estimate the variance or variances.
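A short Python check of the mechanics described above (the two samples are simulated purely for illustration): the pooled-variance statistic with n + m - 2 degrees of freedom agrees with scipy.stats.ttest_ind when equal_var=True.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=15)   # two hypothetical groups of 15
b = rng.normal(loc=0.5, scale=1.0, size=15)

n, m = len(a), len(b)
sp2 = ((n - 1) * a.var(ddof=1) + (m - 1) * b.var(ddof=1)) / (n + m - 2)   # pooled variance
t_manual = (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / n + 1.0 / m))
p_manual = 2 * stats.t.sf(abs(t_manual), df=n + m - 2)

t_scipy, p_scipy = stats.ttest_ind(a, b, equal_var=True)
print(t_manual, p_manual)   # matches scipy's result below
print(t_scipy, p_scipy)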
Is there a minimum sample size required for the t-test to be valid?
With all deference to him, he doesn't know what he's talking about. The t-test was designed for working with small samples. There isn't really a minimum (maybe you could say a minimum of 3 for a one-sample t-test, IDK), but you do have a concern regarding adequate power with small samples. You may be interested in reading about the ideas behind compromise power analysis when the possible sample size is highly restricted, as in your case. As for a reference that proves you can use the t-test with small samples, I don't know of one, and I doubt that one exists. Why would anyone try to prove that? The idea is just silly.
Is there a minimum sample size required for the t-test to be valid?
As mentioned in existing answers, the main issue with a small sample size is low statistical power. There are various rules of thumb regarding what is acceptable statistical power. Some people say 80% statistical power is reasonable, but ultimately, more is better. There is also generally a trade-off between the cost of getting more participants and the benefit of getting more statistical power.

You can assess the statistical power of a t-test using a simple function in R, power.t.test. The following code provides the statistical power for a sample size of 15, a one-sample t-test, standard $\alpha=.05$, and three different effect sizes of .2, .5, and .8, which have sometimes been referred to as small, medium, and large effects respectively.

p.2 <- power.t.test(n=15, delta=.2, sd=1, sig.level=.05, type='one.sample')
p.5 <- power.t.test(n=15, delta=.5, sd=1, sig.level=.05, type='one.sample')
p.8 <- power.t.test(n=15, delta=.8, sd=1, sig.level=.05, type='one.sample')
round(rbind(p.2=p.2$power, p.5=p.5$power, p.8=p.8$power), 2)

     [,1]
p.2  0.11
p.5  0.44
p.8  0.82

Thus, we can see that if the population effect size was "small" or "medium", you would have low statistical power (i.e., 11% and 44% respectively). However, if the effect size is large in the population, you would have what some would describe as "reasonable" power (i.e., 82%). The Quick-R website provides further information on power analysis using R.
Is there a minimum sample size required for the t-test to be valid?
Consider the following from pp. 254-256 of Sauro, J., & Lewis, J. R. (2016). Quantifying the User Experience: Practical Statistics for User Research, 2nd Ed. Cambridge, MA: Morgan-Kaufmann (you can look inside at https://www.amazon.com/Quantifying-User-Experience-Second-Statistics/dp/0128023082/).

DO YOU NEED TO TEST AT LEAST 30 USERS?

ON ONE HAND

Probably most of us who have taken an introductory statistics class (or know someone who took such a class) have heard the rule of thumb that to estimate or compare means, your sample size should be at least 30. According to the central limit theorem, as the sample size increases, the distribution of the mean becomes more and more normal, regardless of the normality of the underlying distribution. Some simulation studies have shown that for a wide variety of distributions (but not all—see Bradley, 1978), the distribution of the mean becomes near normal when n = 30. Another consideration is that it is slightly simpler to use z-scores rather than t-scores because z-scores do not require the use of degrees of freedom. As shown in Table 9.1 and Fig. 9.2, by the time you have about 30 degrees of freedom the value of t gets pretty close to the value of z. Consequently, there can be a feeling that you don’t have to deal with small samples that require small-sample statistics (Cohen, 1990). ...

ON THE OTHER HAND

When the cost of a sample is expensive, as it typically is in many types of user research (e.g., moderated usability testing), it is important to estimate the needed sample size as accurately as possible, with the understanding that it is an estimate. The likelihood that 30 is exactly the right sample for a given set of circumstances is very low. As shown in our chapters on sample size estimation, a more appropriate approach is to take the formulas for computing the significance levels of a statistical test and, using algebra to solve for n, convert them to sample size estimation formulas. Those formulas then provide specific guidance on what you have to know or estimate for a given situation to estimate the required sample size. The idea that even with the t-distribution (as opposed to the z-distribution) you need to have a sample size of at least 30 is inconsistent with the history of the development of the distribution. In 1899, William S. Gossett, a recent graduate of New College in Oxford with degrees in chemistry and mathematics, became one of the first scientists to join the Guinness brewery. “Compared with the giants of his day, he published very little, but his contribution is of critical importance. … The nature of the process of brewing, with its variability in temperature and ingredients, means that it is not possible to take large samples over a long run” (Cowles, 1989, p. 108–109). This meant that Gossett could not use z-scores in his work—they just don’t work well with small samples. After analyzing the deficiencies of the z-distribution for statistical tests with small samples, he worked out the necessary adjustments as a function of degrees of freedom to produce his t tables, published under the pseudonym “Student” due to the policies of Guinness prohibiting publication by employees (Salsburg, 2001). In the work that led to the publication of the tables, Gossett performed an early version of Monte Carlo simulations (Stigler, 1999). He prepared 3000 cards labeled with physical measurements taken on criminals, shuffled them, then dealt them out into 750 groups of size 4—a sample size much smaller than 30.

OUR RECOMMENDATION

This controversy is similar to the “five is enough” versus “eight is not enough” argument covered in Chapter 6, but applied to summative rather than formative research. For any research, the number of users to test depends on the purpose of the test and the type of data you plan to collect. The “magic number” 30 has some empirical rationale, but in our opinion, it’s very weak. As you can see from the numerous examples in this book that have sample sizes not equal to 30 (sometimes less, sometimes more), we do not hold this rule of thumb in very high regard. As described in our sample size chapter for summative research, the appropriate sample size for a study depends on the type of distribution, the expected variability of the data, the desired levels of confidence and power, and the minimum size of the effect that you need to be able to reliably detect. As illustrated in Fig. 9.2, when using the t-distribution with very small samples (e.g., with degrees of freedom less than 5), the very large values of t compensate for small sample sizes with regard to the control of Type I errors (claiming a difference is significant when it really is not). With sample sizes these small, your confidence intervals will be much wider than what you would get with larger samples. But once you’re dealing with more than 5 degrees of freedom, there is very little absolute difference between the value of z and the value of t. From the perspective of the approach of t to z, there is very little gain past 10 degrees of freedom. It isn’t much more complicated to use the t-distribution than the z-distribution (you just need to be sure to use the right value for the degrees of freedom), and the reason for the development of the t-distribution was to enable the analysis of small samples. This is just one of the less obvious ways in which usability practitioners benefit from the science and practice of beer brewing. Historians of statistics widely regard Gossett’s publication of Student’s t-test as a landmark event (Box, 1984; Cowles, 1989; Stigler, 1999). In a letter to Ronald A. Fisher (one of the fathers of modern statistics) containing an early copy of the t tables, Gossett wrote, “You are probably the only man who will ever use them” (Box, 1978). Gossett got a lot of things right, but he certainly got that wrong.

REFERENCES

Box, G. E. P. (1984). The importance of practice in the development of statistics. Technometrics, 26(1), 1-8.
Box, J. F. (1978). Fisher, the life of a scientist. New York, NY: John Wiley.
Bradley, J. V. (1978). Robustness? British Journal of Mathematical and Statistical Psychology, 31, 144-152.
Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304-1312.
Cowles, M. (1989). Statistics in psychology: An historical perspective. Hillsdale, NJ: Lawrence Erlbaum.
Salsburg, D. (2001). The lady tasting tea: How statistics revolutionized science in the twentieth century. New York, NY: W. H. Freeman.
Stigler, S. M. (1999). Statistics on the table: The history of statistical concepts and methods. Cambridge, MA: Harvard University Press.
Is there a minimum sample size required for the t-test to be valid?
The two-sample t-test is valid if the two samples are independent simple random samples from Normal distributions with the same variance and each of the sample sizes is at least two (so that the population variance can be estimated). Considerations of power are irrelevant to the question of the validity of the test. Depending upon the size of the effect that one wishes to detect, a small sample size may be imprudent, but a small sample size does not invalidate the test. Note also that for any sample size, the sampling distribution of the mean is Normal if the parent distribution is Normal. Of course, larger sample sizes are always better because they provide more precise estimates of parameters. The Central Limit Theorem tells us that sample means are more Normally distributed than individual values, but, as pointed out by Casella and Berger, it is of limited usefulness since the rate of approach to Normality must be checked for any particular case. Relying on rules of thumb is unwise. See the results reported in Rand Wilcox's books.
Is there a minimum sample size required for the t-test to be valid?
While it is true that the t-distribution takes into account the small sample size, I would assume that your referee was thinking about the difficulty of establishing that the population is normally distributed when the only information you have is a relatively small sample? This may not be a huge issue with a sample of size 15, since the sample hopefully is large enough to show some signs of being vaguely normally distributed. If this is true, then hopefully the population is somewhere near normal too, and, combined with the Central Limit Theorem, that ought to give you sample means that are well enough behaved. But I'm dubious about recommendations to use t-tests for tiny samples (such as size four) unless the normality of the population can be established by some external information or mechanical understanding. There surely cannot be anywhere near enough information in a sample of size four to have any clue as to the shape of the population distribution.
Is there a minimum sample size required for the t-test to be valid?
There are two different ways to justify the use of the t-test: (1) your data are normally distributed and you have at least two observations per group, or (2) you have large sample sizes in each group. If either of these cases holds, then the t-test is considered a valid test. So if you are willing to assume that your data are normally distributed (which many researchers who collect small samples are), then you have nothing to worry about. However, someone might reasonably object that you are relying on this assumption to get your results, especially if your data are known to be skewed. Then the question of the sample size required for valid inference is a very reasonable one. As for how large a sample size is required, unfortunately there is no solid answer to that; the more skewed your data, the bigger the sample size required to make the approximation reasonable. 15-20 per group is usually considered reasonably large, but as with most rules of thumb, there exist counterexamples: for example, in lottery ticket returns (where 1 in, say, 10,000,000 observations is an extreme outlier), you would literally need somewhere around 100,000,000 observations before these tests would be appropriate.
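As a rough illustration of the "more skew, bigger n" point, here is a Python sketch (my own, using a one-sample t-test on lognormal data rather than the lottery example): the achieved type I error drifts from the nominal 5% for small n and improves as n grows.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_mean = np.exp(0.5)                  # mean of a standard lognormal
for n in (15, 50, 500):
    rejections = 0
    for _ in range(10_000):
        x = rng.lognormal(mean=0.0, sigma=1.0, size=n)
        if stats.ttest_1samp(x, popmean=true_mean).pvalue < 0.05:
            rejections += 1
    print(n, rejections / 10_000)        # compare with the nominal 0.05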
2,123
Is there a minimum sample size required for the t-test to be valid?
Czarina may find it interesting to compare the results of her parametric t-test with those obtained by a bootstrap t-test. The following code for Stata 13.1 mimics a fictitious example concerning a two-sample t-test with unequal variances (parametric t-test: p-value = 0.1493; bootstrap t-test: p-value = 0.1543).
set obs 15
g A = 2*runiform()
g B = 2.5*runiform()
ttest A == B, unpaired unequal
scalar t = r(t)
sum A, meanonly
replace A = A - r(mean) + 1.110498   // 1.110498 = combined mean of A and B
sum B, meanonly
replace B = B - r(mean) + 1.110498
bootstrap r(t), reps(10000) nodots ///
    saving(C:\Users\user\Desktop\Czarina.dta, every(1) double replace) : ///
    ttest A == B, unpaired unequal
use "C:\Users\user\Desktop\Czarina.dta", clear
count if _bs_1 <= -1.4857   // -1.4857 = t-value from the parametric ttest
count if _bs_1 >= 1.4857
display (811+732)/10000     // bootstrap p-value, for comparison with the parametric p-value
2,124
Is there a minimum sample size required for the t-test to be valid?
I concur regarding the usefulness of a bootstrapped t-test. I would also recommend, as a comparison, a look at the Bayesian method offered by Kruschke at http://www.indiana.edu/~kruschke/BEST/BEST.pdf. In general, questions of "How many subjects?" can't be answered unless you have in hand an idea of what a significant effect size would be in terms of the problem being solved. That is, for instance, if the test were a hypothetical study regarding the efficacy of a new drug, the effect size might be the minimum size needed to justify the new drug over the old one for the U.S. Food and Drug Administration. What's odd in this and many other discussions is the wholesale willingness to posit that some data just have some theoretical distribution, like being Gaussian. First, we don't need to posit, we can check, even with small samples. Second, why posit any specific theoretical distribution at all? Why not just take the data as an empirical distribution unto itself? Sure, in the case of small sample sizes, positing that the data come from some distribution is highly useful for analysis. But, to paraphrase Bradley Efron, in doing so you've just made up an infinite amount of data. Sometimes that can be okay if your problem is appropriate. Sometimes it isn't.
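For readers who prefer Python to Stata, here is a minimal sketch of a bootstrap two-sample t-test in the same spirit as the code above (the data are simulated and the details are my own, not a drop-in replacement for any particular package).

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.uniform(0, 2.0, size=15)
b = rng.uniform(0, 2.5, size=15)

t_obs = stats.ttest_ind(a, b, equal_var=False).statistic
pooled_mean = np.concatenate([a, b]).mean()
a0 = a - a.mean() + pooled_mean          # recentre both groups to enforce H0
b0 = b - b.mean() + pooled_mean

reps = 10_000
t_boot = np.empty(reps)
for i in range(reps):
    ra = rng.choice(a0, size=a0.size, replace=True)
    rb = rng.choice(b0, size=b0.size, replace=True)
    t_boot[i] = stats.ttest_ind(ra, rb, equal_var=False).statistic

print(np.mean(np.abs(t_boot) >= abs(t_obs)))   # two-sided bootstrap p-value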
2,125
Is there a minimum sample size required for the t-test to be valid?
As far as assumptions go for the two-sample case: the two samples are independent of each other, and each sample consists of iid Normal variables, with the two samples having the same mean and a common unknown variance under the null hypothesis. There is also the Welch t-test, which uses the Satterthwaite approximation for the standard error; this is a two-sample t-test that allows unequal variances (Welch's t-test).
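In SciPy the two versions differ only by a flag; a quick sketch with illustrative data only:

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
a = rng.normal(0.0, 1.0, size=20)
b = rng.normal(0.3, 3.0, size=20)        # similar means, very different variances

print(stats.ttest_ind(a, b, equal_var=True))    # pooled-variance (Student) t-test
print(stats.ttest_ind(a, b, equal_var=False))   # Welch t-test, Satterthwaite df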
2,126
Given the power of computers these days, is there ever a reason to do a chi-squared test rather than Fisher's exact test?
You can turn the question around. Since the ordinary Pearson $\chi^2$ test is almost always more accurate than Fisher's exact test and is much quicker to compute, why does anyone use Fisher's test? Note that it is a fallacy that the expected cell frequencies have to exceed 5 for Pearson's $\chi^2$ to yield accurate $P$-values. The test is accurate as long as expected cell frequencies exceed 1.0, provided that a very simple $\frac{N-1}{N}$ correction is applied to the test statistic. From R-help, 2009:
Campbell, I. Chi-squared and Fisher-Irwin tests of two-by-two tables with small sample recommendations. Statistics in Medicine 2007; 26:3661-3675. (abstract) ...the latest edition of Armitage's book recommends that continuity adjustments never be used for contingency table chi-square tests; E. Pearson's modification of the Pearson chi-square test, differing from the original by a factor of (N-1)/N; Cochran noted that the number 5 in "expected frequency less than 5" was arbitrary; the findings of published studies may be summarized as follows, for comparative trials: Yates' chi-squared test has type I error rates less than the nominal, often less than half the nominal; the Fisher-Irwin test has type I error rates less than the nominal; K. Pearson's version of the chi-squared test has type I error rates closer to the nominal than Yates' chi-squared test and the Fisher-Irwin test, but in some situations gives type I errors appreciably larger than the nominal value; the 'N-1' chi-squared test behaves like K. Pearson's 'N' version, but the tendency for higher than nominal values is reduced; the two-sided Fisher-Irwin test using Irwin's rule is less conservative than the method doubling the one-sided probability; the mid-P Fisher-Irwin test by doubling the one-sided probability performs better than standard versions of the Fisher-Irwin test, and the mid-P method by Irwin's rule performs better still in having actual type I errors closer to nominal levels; strong support for the 'N-1' test provided expected frequencies exceed 1; a flaw in the Fisher test, which was based on Fisher's premise that marginal totals carry no useful information; a demonstration that they carry useful information in very small sample sizes; Yates' continuity adjustment of N/2 is a large over-correction and is inappropriate; counter-arguments exist to the use of randomization tests in randomized trials; calculations of worst cases; overall recommendation: use the 'N-1' chi-square test when all expected frequencies are at least 1; otherwise use the Fisher-Irwin test using Irwin's rule for two-sided tests, taking tables from either tail that are as likely as, or less likely than, the one observed; see the letter to the editor by Antonio Andres and the author's reply in 27:1791-1796; 2008.
Crans GG, Shuster JJ. How conservative is Fisher's exact test? A quantitative evaluation of the two-sample comparative binomial trial. Statistics in Medicine 2008; 27:3598-3611. (abstract) ...the first paper to truly quantify the conservativeness of Fisher's test; "the test size of FET was less than 0.035 for nearly all sample sizes before 50 and did not approach 0.05 even for sample sizes over 100."; conservativeness of "exact" methods; see Stat in Med 28:173-179, 2009 for a criticism which was unanswered.
Lydersen S, Fagerland MW, Laake P. Recommended tests for association in $2\times 2$ tables. Statistics in Medicine 2009; 28:1159-1175. (abstract) ...Fisher's exact test should never be used unless the mid-$P$ correction is applied; value of unconditional tests; see the letter to the editor 30:890-891; 2011.
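The 'N-1' correction is easy to apply by hand; here is a Python sketch (with a made-up 2x2 table) comparing the ordinary Pearson test, the 'N-1' version, and Fisher's exact test.

import numpy as np
from scipy import stats

table = np.array([[7, 2],
                  [3, 8]])

chi2, p_pearson, dof, expected = stats.chi2_contingency(table, correction=False)
n = table.sum()
chi2_nm1 = chi2 * (n - 1) / n            # the (N-1)/N-corrected statistic
p_nm1 = stats.chi2.sf(chi2_nm1, dof)

print("Pearson chi-square p :", p_pearson)
print("'N-1' chi-square p   :", p_nm1)
print("Fisher exact p       :", stats.fisher_exact(table)[1])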
2,127
Given the power of computers these days, is there ever a reason to do a chi-squared test rather than Fisher's exact test?
This is a great question. Fisher's exact test is one of the great examples of Fisher's clever use of experimental design, together with conditioning on the data (basically on tables with the observed row and column totals) and his ingenuity at finding probability distributions (though this isn't the best example; for a better example see here). The use of computers to calculate "exact" p-values has definitely helped to obtain accurate answers. However, it is hard to justify the assumptions of Fisher's exact test in practice. The so-called "exact" comes from the fact that in the "tea tasting experiment", or in the 2x2 contingency table case, the row totals and column totals, that is, the marginal totals, are fixed by design. This assumption is rarely justified in practice. For nice references see here. The name "exact" leads one to believe that the p-values given by this test are exact, which in most cases is unfortunately not correct, for these reasons: If the marginals are not fixed by design (which happens almost every time in practice), the p-values will be conservative. Since the test uses a discrete probability distribution (specifically, the hypergeometric distribution), for certain cutoffs it is impossible to calculate the "exact null probabilities", that is, the p-value. In most practical cases, using a likelihood ratio test or a chi-square test should not give very different answers (p-values) from Fisher's exact test. Yes, when the marginals are fixed, Fisher's exact test is a better choice, but this will happen rarely. Therefore, using the chi-square test or the likelihood ratio test is always recommended for consistency checks. Similar ideas apply when Fisher's exact test is generalized to any table, which is basically equivalent to calculating multivariate hypergeometric probabilities. Therefore one must always try to calculate chi-square and likelihood-ratio-based p-values, in addition to "exact" p-values.
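The consistency check suggested above takes only a few lines; a sketch in Python with an arbitrary 2x2 table:

import numpy as np
from scipy import stats

table = np.array([[12, 5],
                  [ 6, 9]])

p_fisher = stats.fisher_exact(table)[1]
p_pearson = stats.chi2_contingency(table, correction=False)[1]
p_lrt = stats.chi2_contingency(table, correction=False,
                               lambda_="log-likelihood")[1]   # likelihood ratio (G-test)

print(p_fisher, p_pearson, p_lrt)        # usually close; large gaps merit a second look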
2,128
What is an "uninformative prior"? Can we ever have one with truly no information?
[Warning: as a card-carrying member of the Objective Bayes Section of ISBA, my views are not exactly representative of all Bayesian statisticians! Quite the opposite...] In summary, there is no such thing as a prior with "truly no information". Indeed, the concept of an "uninformative" prior is sadly a misnomer. Any prior distribution contains some specification that is akin to some amount of information. Even (or especially) the uniform prior. For one thing, the uniform prior is only flat for one given parameterisation of the problem. If one changes to another parameterisation (even a bounded one), the Jacobian change of variable comes into the picture and the density, and therefore the prior, is no longer flat. As pointed out by Elvis, maximum entropy is one approach advocated to select so-called "uninformative" priors. It however requires (a) some degree of information on some moments $h(\theta)$ of the prior distribution $\pi(\cdot)$ to specify the constraints $$\int_{\Theta} h(\theta)\,\text{d}\pi(\theta) = \mathfrak{h}_0$$ that lead to the MaxEnt prior $$\pi^*(\theta)\propto \exp\{ \lambda^\text{T}h(\theta) \}$$ and (b) the preliminary choice of a reference measure $\text{d}\mu(\theta)$ [in continuous settings], a choice that brings the debate back to its initial stage! (In addition, the parametrisation of the constraints (i.e., the choice of $h$) impacts the shape of the resulting MaxEnt prior.) José Bernardo has produced an original theory of reference priors where he chooses the prior in order to maximise the information brought by the data, by maximising the Kullback distance between prior and posterior. In the simplest cases with no nuisance parameters, the solution is Jeffreys' prior. In more complex problems, (a) a choice of the parameters of interest (or even a ranking of their order of interest) must be made; (b) the computation of the prior is fairly involved and requires a sequence of embedded compact sets to avoid improperness issues. (See e.g. The Bayesian Choice for details.) In an interesting twist, some researchers outside the Bayesian perspective have been developing procedures called confidence distributions that are probability distributions on the parameter space, constructed by inversion from frequency-based procedures without an explicit prior structure or even a dominating measure on this parameter space. They argue that this absence of a well-defined prior is a plus, although the result definitely depends on the choice of the initialising frequency-based procedure. In short, there is no "best" (or even "better") choice for "the" "uninformative" prior. And I consider this is how things should be, because the very nature of Bayesian analysis implies that the choice of the prior distribution matters. And there is no comparison of priors: one cannot be "better" than another. (At least before observing the data: once they are observed, comparison of priors becomes model choice.) The conclusion of José Bernardo, Jim Berger, Dongchu Sun, and many other "objective" Bayesians is that there are roughly equivalent reference priors one can use when being unsure about one's prior information or seeking a benchmark Bayesian inference, some of those priors being partly supported by information-theoretic arguments, others by non-Bayesian frequentist properties (like matching priors), and all resulting in rather similar inferences.
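The reparameterisation point in particular is easy to check numerically; a Python sketch (my own illustration): a prior that is flat for a probability p is far from flat for the log-odds.

import numpy as np

rng = np.random.default_rng(5)
p = rng.uniform(0.0, 1.0, size=1_000_000)     # "flat" prior on p
eta = np.log(p / (1.0 - p))                   # the same prior on the log-odds scale

dens, edges = np.histogram(eta, bins=np.linspace(-4, 4, 9), density=True)
print(np.round(dens, 3))                      # clearly not constant in eta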
What is an "uninformative prior"? Can we ever have one with truly no information?
[Warning: as a card-carrying member of the Objective Bayes Section of ISBA, my views are not exactly representative of all Bayesian statisticians!, quite the opposite...] In summary, there is no such
What is an "uninformative prior"? Can we ever have one with truly no information? [Warning: as a card-carrying member of the Objective Bayes Section of ISBA, my views are not exactly representative of all Bayesian statisticians!, quite the opposite...] In summary, there is no such thing as a prior with "truly no information". Indeed, the concept of "uninformative" prior is sadly a misnomer. Any prior distribution contains some specification that is akin to some amount of information. Even (or especially) the uniform prior. For one thing, the uniform prior is only flat for one given parameterisation of the problem. If one changes to another parameterisation (even a bounded one), the Jacobian change of variable comes into the picture and the density and therefore the prior is no longer flat. As pointed out by Elvis, maximum entropy is one approach advocated to select so-called "uninformative" priors. It however requires (a) some degree of information on some moments $h(\theta)$ of the prior distribution $\pi(\cdot)$ to specify the constraints$$\int_{\Theta} h(\theta)\,\text{d}\pi(\theta) = \mathfrak{h}_0$$ that lead to the MaxEnt prior $$\pi^*(\theta)\propto \exp\{ \lambda^\text{T}h(\theta) \}$$ and (b) the preliminary choice of a reference measure $\text{d}\mu(\theta)$ [in continuous settings], a choice that brings the debate back to its initial stage! (In addition, the parametrisation of the constraints (i.e., the choice of $h$) impacts the shape of the resulting MaxEnt prior.) José Bernardo has produced an original theory of reference priors where he chooses the prior in order to maximise the information brought by the data by maximising the Kullback distance between prior and posterior. In the simplest cases with no nuisance parameters, the solution is Jeffreys' prior. In more complex problems, (a) a choice of the parameters of interest (or even a ranking of their order of interest) must be made; (b) the computation of the prior is fairly involved and requires a sequence of embedded compact sets to avoid improperness issues. (See e.g. The Bayesian Choice for details.) In an interesting twist, some researchers outside the Bayesian perspective have been developing procedures called confidence distributions that are probability distributions on the parameter space, constructed by inversion from frequency-based procedures without an explicit prior structure or even a dominating measure on this parameter space. They argue that this absence of well-defined prior is a plus, although the result definitely depends on the choice of the initialising frequency-based procedure In short, there is no "best" (or even "better") choice for "the" "uninformative" prior. And I consider this is how things should be because the very nature of Bayesian analysis implies that the choice of the prior distribution matters. And that there is no comparison of priors: one cannot be "better" than another. (At least before observing the data: once it is observed, comparison of priors becomes model choice.) The conclusion of José Bernardo, Jim Berger, Dongchu Sun, and many other "objective" Bayesians is that there are roughly equivalent reference priors one can use when being unsure about one's prior information or seeking a benchmark Bayesian inference, some of those priors being partly supported by information theory arguments, others by non-Bayesian frequentist properties (like matching priors), and all resulting in rather similar inferences.
What is an "uninformative prior"? Can we ever have one with truly no information? [Warning: as a card-carrying member of the Objective Bayes Section of ISBA, my views are not exactly representative of all Bayesian statisticians!, quite the opposite...] In summary, there is no such
2,129
What is an "uninformative prior"? Can we ever have one with truly no information?
An appealing property of formal noninformative priors is the "frequentist-matching property": it means that a posterior 95%-credibility interval is also (at least approximately) a 95% confidence interval in the frequentist sense. This property holds for Bernardo's reference prior, although the foundations of these noninformative priors are not oriented towards the achievement of a good frequentist-matching property. If you use a "naive" ("flat") noninformative prior such as the uniform distribution or a Gaussian distribution with a huge variance, then there is no guarantee that the frequentist-matching property holds. Maybe Bernardo's reference prior cannot be considered the "best" choice of a noninformative prior, but it can be considered the most successful one. Theoretically it overcomes many paradoxes of other candidates.
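A quick simulation sketch (mine, for a binomial proportion under the Jeffreys/reference prior Beta(1/2, 1/2)) of what frequentist matching means in practice: the 95% equal-tailed credible interval covers the true value close to 95% of the time.

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, p_true, n_sim = 30, 0.3, 20_000
covered = 0
for _ in range(n_sim):
    x = rng.binomial(n, p_true)
    lo = stats.beta.ppf(0.025, x + 0.5, n - x + 0.5)   # posterior is Beta(x+1/2, n-x+1/2)
    hi = stats.beta.ppf(0.975, x + 0.5, n - x + 0.5)
    covered += (lo <= p_true <= hi)
print(covered / n_sim)                                  # close to (though not exactly) 0.95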
What is an "uninformative prior"? Can we ever have one with truly no information?
An appealing property of formal noninformative priors is the "frequentist-matching property" : it means that a posterior 95%-credibility interval is also (at least, approximately) a 95%-confidence int
What is an "uninformative prior"? Can we ever have one with truly no information? An appealing property of formal noninformative priors is the "frequentist-matching property" : it means that a posterior 95%-credibility interval is also (at least, approximately) a 95%-confidence interval in the frequentist sense. This property holds for Bernardo's reference prior although the fundations of these noninformative priors are not oriented towards the achievement of a good frequentist-matching property, If you use a "naive" ("flat") noninformative prior such as the uniform distribution or a Gaussian distribution with a huge variance then there is no guarantee that the frequentist-matching property holds. Maybe Bernardo's reference prior could not be considered as the "best" choice of a noninformative prior but could be considered as the most successful one. Theoretically it overcomes many paradoxes of other candidates.
What is an "uninformative prior"? Can we ever have one with truly no information? An appealing property of formal noninformative priors is the "frequentist-matching property" : it means that a posterior 95%-credibility interval is also (at least, approximately) a 95%-confidence int
2,130
What is an "uninformative prior"? Can we ever have one with truly no information?
Jeffreys distributions also suffer from inconsistencies: the Jeffreys priors for a variable over $(-\infty,\infty)$ or over $(0,\infty)$ are improper, which is not the case for the Jeffreys prior of a probability parameter $p$: the measure $\text{d}p/\sqrt{p(1-p)}$ has a mass of $\pi$ over $(0,1)$. Renyi has shown that a non-informative distribution must be associated with an improper integral. See instead Lhoste's distributions, which avoid this difficulty and are invariant under changes of variables (e.g., for $p$, the measure is $\text{d}p/p(1-p)$). References: E. Lhoste, "Le calcul des probabilités appliqué à l'artillerie", Revue d'artillerie, tome 91, mai à août 1923. A. Renyi, "On a new axiomatic theory of probability", Acta Mathematica, Académie des Sciences hongroises, tome VI, fasc. 3-4, 1955. M. Dumas, "Lois de probabilité a priori de Lhoste", Sciences et techniques de l'armement, 56, 4ème fascicule, 1982, pp 687-715.
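The mass-of-$\pi$ claim can be checked with one line of numerical integration (a quick sketch, nothing more):

import numpy as np
from scipy import integrate

value, err = integrate.quad(lambda p: 1.0 / np.sqrt(p * (1.0 - p)), 0.0, 1.0)
print(value, np.pi)      # ~3.14159 in both cases; dp/(p(1-p)), by contrast, diverges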
What is an "uninformative prior"? Can we ever have one with truly no information?
Jeffreys distributions also suffer from inconsistencies: the Jeffreys priors for a variable over $(-\infty,\infty)$ or over $(0,\infty)$ are improper, which is not the case for the Jeffreys prior of a
What is an "uninformative prior"? Can we ever have one with truly no information? Jeffreys distributions also suffer from inconsistencies: the Jeffreys priors for a variable over $(-\infty,\infty)$ or over $(0,\infty)$ are improper, which is not the case for the Jeffreys prior of a probability parameter $p$: the measure $\text{d}p/\sqrt{p(1-p)}$ has a mass of $\pi$ over $(0,1)$. Renyi has shown that a non-informative distribution must be associated with an improper integral. See instead Lhoste's distributions which avoid this difficulty and are invariant under changes of variables (e.g., for $p$, the measure is $\text{d}p/p(1-p)$). References E. LHOSTE : "Le calcul des probabilités appliqué à l'artillerie", Revue d'artillerie, tome 91, mai à août 1923 A. RENYI : "On a new axiomatic theory of probability" Acta Mathematica, Académie des Sciences hongroises, tome VI, fasc.3-4, 1955 M. DUMAS : "Lois de probabilité a priori de Lhoste", Sciences et techniques de l'armement, 56, 4ème fascicule, 1982, pp 687-715
What is an "uninformative prior"? Can we ever have one with truly no information? Jeffreys distributions also suffer from inconsistencies: the Jeffreys priors for a variable over $(-\infty,\infty)$ or over $(0,\infty)$ are improper, which is not the case for the Jeffreys prior of a
2,131
What is an "uninformative prior"? Can we ever have one with truly no information?
I agree with the excellent answer by Xi'an, pointing out that there is no single prior that is "uninformative" in the sense of carrying no information. To expand on this topic, I wanted to point out that one alternative is to undertake Bayesian analysis within the imprecise probability framework (see esp. Walley 1991, Walley 2000). Within this framework the prior belief is represented by a set of probability distributions, and this leads to a corresponding set of posterior distributions. That might sound like it would not be very helpful, but it actually is quite amazing. Even with a very broad set of prior distributions (where certain moments can range over all possible values) you often still get posterior convergence to a single posterior as $n \rightarrow \infty$. This analytical framework has been axiomatised by Walley as its own special form of probabilistic analysis, but is essentially equivalent to robust Bayesian analysis using a set of priors, yielding a corresponding set of posteriors. In many models it is possible to set an "uninformative" set of priors that allows some moments (e.g., the prior mean) to vary over the entire possible range of values, and this nonetheless produces valuable posterior results, where the posterior moments are bounded more tightly. This form of analysis arguably has a better claim to being called "uninformative", at least with respect to moments that are able to vary over their entire allowable range. A simple example - Bernoulli model: Suppose we observe data $X_1,...,X_n | \theta \sim \text{IID Bern}(\theta)$ where $\theta$ is the unknown parameter of interest. Usually we would use a beta density as the prior (both the Jeffreys prior and the reference prior are of this form). We can specify this form of prior density in terms of the prior mean $\mu$ and another parameter $\kappa > 1$ as: $$\begin{equation} \begin{aligned} \pi_0(\theta | \mu, \kappa) = \text{Beta}(\theta | \mu, \kappa) = \text{Beta} \Big( \theta \Big| \alpha = \mu (\kappa - 1), \beta = (1-\mu) (\kappa - 1) \Big). \end{aligned} \end{equation}$$ (This form gives prior moments $\mathbb{E}(\theta) = \mu$ and $\mathbb{V}(\theta) = \mu(1-\mu) / \kappa$.) Now, in an imprecise model we could set the prior to consist of the set of all these prior distributions over all possible expected values, but with the other parameter fixed to control the precision over the range of mean values. For example, we might use the set of priors: $$\mathscr{P}_0 \equiv \Big\{ \text{Beta}(\mu, \kappa) \Big| 0 \leqslant \mu \leqslant 1 \Big\}.$$ Suppose we observe $s = \sum_{i=1}^n x_i$ positive indicators in the data. Then, using the updating rule for the Bernoulli-beta model, the corresponding posterior set is: $$\mathscr{P}_\mathbf{x} = \Big\{ \text{Beta}\Big( \tfrac{s + \mu(\kappa-1)}{n + \kappa -1}, n+\kappa \Big) \Big| 0 \leqslant \mu \leqslant 1 \Big\}.$$ The range of possible values for the posterior expectation is: $$\frac{s}{n + \kappa-1} \leqslant \mathbb{E}(\theta | \mathbf{x}) \leqslant \frac{s + \kappa-1}{n + \kappa-1}.$$ What is important here is that even though we started with a model that was "uninformative" with respect to the expected value of the parameter (the prior expectation ranged over all possible values), we nonetheless end up with posterior inferences that are informative with respect to the posterior expectation of the parameter (they now range over a narrower set of values).
As $n \rightarrow \infty$ this range of values is squeezed down to a single point, which is the true value of $\theta$.
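Numerically, the bounds above are straightforward to evaluate; a small Python sketch with made-up values of s, n, and kappa shows the interval of posterior means shrinking towards s/n as n grows.

def posterior_mean_bounds(s, n, kappa=5.0):
    """Bounds on E(theta | x) from the imprecise Beta(mu, kappa) model above."""
    lower = s / (n + kappa - 1)
    upper = (s + kappa - 1) / (n + kappa - 1)
    return lower, upper

for n, s in [(10, 6), (100, 60), (10_000, 6_000)]:
    print(n, posterior_mean_bounds(s, n))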
What is an "uninformative prior"? Can we ever have one with truly no information?
I agree with the excellent answer by Xi'an, pointing out that there is no single prior that is "uninformative" in the sense of carrying no information. To expand on this topic, I wanted to point out
What is an "uninformative prior"? Can we ever have one with truly no information? I agree with the excellent answer by Xi'an, pointing out that there is no single prior that is "uninformative" in the sense of carrying no information. To expand on this topic, I wanted to point out that one alternative is to undertake Bayesian analysis within the imprecise probability framework (see esp. Walley 1991, Walley 2000). Within this framework the prior belief is represented by a set of probability distributions, and this leads to a corresponding set of posterior distributions. That might sound like it would not be very helpful, but it actually is quite amazing. Even with a very broad set of prior distributions (where certain moments can range over all possible values) you often still get posterior convergence to a single posterior as $n \rightarrow \infty$. This analytical framework has been axiomatised by Walley as its own special form of probabilistic analysis, but is essentially equivalent to robust Bayesian analysis using a set of priors, yielding a corresponding set of posteriors. In many models it is possible to set an "uninformative" set of priors that allows some moments (e.g., the prior mean) to vary over the entire possible range of values, and this nonetheless produces valuable posterior results, where the posterior moments are bounded more tightly. This form of analysis arguably has a better claim to being called "uninformative", at least with respect to moments that are able to vary over their entire allowable range. A simple example - Bernoulli model: Suppose we observe data $X_1,...,X_n | \theta \sim \text{IID Bern}(\theta)$ where $\theta$ is the unknown parameter of interest. Usually we would use a beta density as the prior (both the Jeffrey's prior and reference prior are of this form). We can specify this form of prior density in terms of the prior mean $\mu$ and another parameter $\kappa > 1$ as: $$\begin{equation} \begin{aligned} \pi_0(\theta | \mu, \kappa) = \text{Beta}(\theta | \mu, \kappa) = \text{Beta} \Big( \theta \Big| \alpha = \mu (\kappa - 1), \beta = (1-\mu) (\kappa - 1) \Big). \end{aligned} \end{equation}$$ (This form gives prior moments $\mathbb{E}(\theta) = \mu$ and $\mathbb{V}(\theta) = \mu(1-\mu) / \kappa$.) Now, in an imprecise model we could set the prior to consist of the set of all these prior distributions over all possible expected values, but with the other parameter fixed to control the precision over the range of mean values. For example, we might use the set of priors: $$\mathscr{P}_0 \equiv \Big\{ \text{Beta}(\mu, \kappa) \Big| 0 \leqslant \mu \leqslant 1 \Big\}. \quad \quad \quad \quad \quad$$ Suppose we observe $s = \sum_{i=1}^n x_i$ positive indicators in the data. Then, using the updating rule for the Bernoulli-beta model, the corresponding posterior set is: $$\mathscr{P}_\mathbf{x} = \Big\{ \text{Beta}\Big( \tfrac{s + \mu(\kappa-1)}{n + \kappa -1}, n+\kappa \Big) \Big| 0 \leqslant \mu \leqslant 1 \Big\}.$$ The range of possible values for the posterior expectation is: $$\frac{s}{n + \kappa-1} \leqslant \mathbb{E}(\theta | \mathbb{x}) \leqslant \frac{s + \kappa-1}{n + \kappa-1}.$$ What is important here is that even though we started with a model that was "uninformative" with respect to the expected value of the parameter (the prior expectation ranged over all possible values), we nonetheless end up with posterior inferences that are informative with respect to the posterior expectation of the parameter (they now range over a narrower set of values). 
As $n \rightarrow \infty$ this range of values is squeezed down to a single point, which is the true value of $\theta$.
What is an "uninformative prior"? Can we ever have one with truly no information? I agree with the excellent answer by Xi'an, pointing out that there is no single prior that is "uninformative" in the sense of carrying no information. To expand on this topic, I wanted to point out
2,132
Essential data checking tests
It helps to understand how the data were recorded. Let me share a story. Once, long ago, many datasets were stored only in fading hardcopy. In those dark days I contracted with an organization (of great pedigree and size; many of you probably own its stock) to computerize about 10^5 records of environmental monitoring data at one of its manufacturing plants. To do this, I personally marked up a shelf of laboratory reports (to show where the data were), created data entry forms, and contracted with a temp agency for literate workers to type the data into the forms. (Yes, you had to pay extra for people who could read.) Due to the value and sensitivity of the data, I conducted this process in parallel with two workers at a time (who usually changed from day to day). It took a couple of weeks. I wrote software to compare the two sets of entries, systematically identifying and correcting all the errors that showed up. Boy were there errors! What can go wrong? A good way to describe and measure errors is at the level of the basic record, which in this situation was a description of a single analytical result (the concentration of some chemical, often) for a particular sample obtained at a given monitoring point on a given date. In comparing the two datasets, I found: Errors of omission: one dataset would include a record, another would not. This usually happened because either (a) a line or two would be overlooked at the bottom of a page or (b) an entire page would be skipped. Apparent errors of omission that were really data-entry mistakes. A record is identified by a monitoring point name, a date, and the "analyte" (usually a chemical name). If any of these has a typographical error, it will not be matched to the other records with which it is related. In effect, the correct record disappears and an incorrect record appears. Fake duplication. The same results can appear in multiple sources, be transcribed multiple times, and seem to be true repeated measures when they are not. Duplicates are straightforward to detect, but deciding whether they are erroneous depends on knowing whether duplicates should even appear in the dataset. Sometimes you just can't know. Frank data-entry errors. The "good" ones are easy to catch because they change the type of the datum: using the letter "O" for the digit "0", for instance, turns a number into a non-number. Other good errors change the value so much it can readily be detected with statistical tests. (In one case, the leading digit in "1,000,010 mg/Kg" was cut off, leaving a value of 10. That's an enormous change when you're talking about a pesticide concentration!) The bad errors are hard to catch because they change a value into one that fits (sort of) with the rest of the data, such as typing "80" for "50". (This kind of mistake happens with OCR software all the time.) Transpositions. The right values can be entered but associated with the wrong record keys. This is insidious, because the global statistical characteristics of the dataset might remain unaltered, but spurious differences can be created between groups. Probably only a mechanism like double-entry is even capable of detecting these errors. Once you are aware of these errors and know, or have a theory, of how they occur, you can write scripts to troll your datasets for the possible presence of such errors and flag them for further attention. You cannot always resolve them, but at least you can include a "comment" or "quality flag" field to accompany the data throughout their later analysis. 
Since that time I have paid attention to data quality issues and have had many more opportunities to make comprehensive checks of large statistical datasets. None is perfect; they all benefit from quality checks. Some of the principles I have developed over the years for doing this include:
Whenever possible, create redundancy in data entry and data transcription procedures: checksums, totals, repeated entries: anything to support automatic internal checks of consistency.
If possible, create and exploit another database which describes what the data should look like: that is, computer-readable metadata. For instance, in a drug experiment you might know in advance that every patient will be seen three times. This enables you to create a database with all the correct records and their identifiers, with the values just waiting to be filled in. Fill them in with the data given you and then check for duplicates, omissions, and unexpected data.
Always normalize your data (specifically, get them into at least fourth normal form), regardless of how you plan to format the dataset for analysis. This forces you to create tables of every conceptually distinct entity you are modeling. (In the environmental case, this would include tables of monitoring locations, samples, chemicals (properties, typical ranges, etc.), tests of those samples (a test usually covers a suite of chemicals), and the individual results of those tests.) In so doing you create many effective checks of data quality and consistency and identify many potentially missing or duplicate or inconsistent values. This effort (which requires good data processing skills but is straightforward) is astonishingly effective. If you aspire to analyze large or complex datasets and do not have a good working knowledge of relational databases and their theory, add that to your list of things to learn as soon as possible. It will pay dividends throughout your career.
Always perform as many "stupid" checks as you possibly can. These are automated verifications of obvious things, such as that dates fall into their expected periods, that the counts of patients (or chemicals or whatever) always add up correctly, and that values are always reasonable (e.g., a pH must be between 0 and 14, and maybe in a much narrower range for, say, blood pH readings), etc. This is where domain expertise can be the most help: the statistician can fearlessly ask stupid questions of the experts and exploit the answers to check the data.
Much more can be said of course--the subject is worth a book--but this should be enough to stimulate ideas.
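A minimal pandas sketch of such "stupid" checks (the file and column names here are hypothetical; adapt them to the dataset at hand):

import pandas as pd

df = pd.read_csv("monitoring.csv", parse_dates=["date"])   # hypothetical file and columns

checks = {
    "duplicate (location, date, analyte) keys":
        df.duplicated(subset=["location", "date", "analyte"]).sum(),
    "dates outside the monitoring period":
        (~df["date"].between("1985-01-01", "1995-12-31")).sum(),
    "negative concentrations": (df["concentration"] < 0).sum(),
    "missing analyte names": df["analyte"].isna().sum(),
}
for name, count in checks.items():
    print(name, ":", count)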
Essential data checking tests
It helps to understand how the data were recorded. Let me share a story. Once, long ago, many datasets were stored only in fading hardcopy. In those dark days I contracted with an organization (of g
Essential data checking tests It helps to understand how the data were recorded. Let me share a story. Once, long ago, many datasets were stored only in fading hardcopy. In those dark days I contracted with an organization (of great pedigree and size; many of you probably own its stock) to computerize about 10^5 records of environmental monitoring data at one of its manufacturing plants. To do this, I personally marked up a shelf of laboratory reports (to show where the data were), created data entry forms, and contracted with a temp agency for literate workers to type the data into the forms. (Yes, you had to pay extra for people who could read.) Due to the value and sensitivity of the data, I conducted this process in parallel with two workers at a time (who usually changed from day to day). It took a couple of weeks. I wrote software to compare the two sets of entries, systematically identifying and correcting all the errors that showed up. Boy were there errors! What can go wrong? A good way to describe and measure errors is at the level of the basic record, which in this situation was a description of a single analytical result (the concentration of some chemical, often) for a particular sample obtained at a given monitoring point on a given date. In comparing the two datasets, I found: Errors of omission: one dataset would include a record, another would not. This usually happened because either (a) a line or two would be overlooked at the bottom of a page or (b) an entire page would be skipped. Apparent errors of omission that were really data-entry mistakes. A record is identified by a monitoring point name, a date, and the "analyte" (usually a chemical name). If any of these has a typographical error, it will not be matched to the other records with which it is related. In effect, the correct record disappears and an incorrect record appears. Fake duplication. The same results can appear in multiple sources, be transcribed multiple times, and seem to be true repeated measures when they are not. Duplicates are straightforward to detect, but deciding whether they are erroneous depends on knowing whether duplicates should even appear in the dataset. Sometimes you just can't know. Frank data-entry errors. The "good" ones are easy to catch because they change the type of the datum: using the letter "O" for the digit "0", for instance, turns a number into a non-number. Other good errors change the value so much it can readily be detected with statistical tests. (In one case, the leading digit in "1,000,010 mg/Kg" was cut off, leaving a value of 10. That's an enormous change when you're talking about a pesticide concentration!) The bad errors are hard to catch because they change a value into one that fits (sort of) with the rest of the data, such as typing "80" for "50". (This kind of mistake happens with OCR software all the time.) Transpositions. The right values can be entered but associated with the wrong record keys. This is insidious, because the global statistical characteristics of the dataset might remain unaltered, but spurious differences can be created between groups. Probably only a mechanism like double-entry is even capable of detecting these errors. Once you are aware of these errors and know, or have a theory, of how they occur, you can write scripts to troll your datasets for the possible presence of such errors and flag them for further attention. 
You cannot always resolve them, but at least you can include a "comment" or "quality flag" field to accompany the data throughout their later analysis. Since that time I have paid attention to data quality issues and have had many more opportunities to make comprehensive checks of large statistical datasets. None is perfect; they all benefit from quality checks. Some of the principles I have developed over the years for doing this include Whenever possible, create redundancy in data entry and data transcription procedures: checksums, totals, repeated entries: anything to support automatic internal checks of consistency. If possible, create and exploit another database which describes what the data should look like: that is, computer-readable metadata. For instance, in a drug experiment you might know in advance that every patient will be seen three times. This enables you to create a database with all the correct records and their identifiers with the values just waiting to be filled in. Fill them in with the data given you and then check for duplicates, omissions, and unexpected data. Always normalize your data (specifically, get them into at least fourth normal form), regardless of how you plan to format the dataset for analysis. This forces you to create tables of every conceptually distinct entity you are modeling. (In the environmental case, this would include tables of monitoring locations, samples, chemicals (properties, typical ranges, etc.), tests of those samples (a test usually covers a suite of chemicals), and the individual results of those tests. In so doing you create many effective checks of data quality and consistency and identify many potentially missing or duplicate or inconsistent values. This effort (which requires good data processing skills but is straightforward) is astonishingly effective. If you aspire to analyze large or complex datasets and do not have good working knowledge of relational databases and their theory, add that to your list of things to be learned as soon as possible. It will pay dividends throughout your career. Always perform as many "stupid" checks as you possibly can. These are automated verification of obvious things such that dates fall into their expected periods, the counts of patients (or chemicals or whatever) always add up correctly, that values are always reasonable (e.g., a pH must be between 0 and 14 and maybe in a much narrower range for, say, blood pH readings), etc. This is where domain expertise can be the most help: the statistician can fearlessly ask stupid questions of the experts and exploit the answers to check the data. Much more can be said of course--the subject is worth a book--but this should be enough to stimulate ideas.
2,133
Essential data checking tests
@whuber makes great suggestions; I would only add this: Plots, plots, plots, plots. Scatterplots, histograms, boxplots, lineplots, heatmaps and anything else you can think of. Of course, as you've found, there are errors that won't be apparent on any plot, but they're a good place to start. Just make sure you're clear on how your software handles missing data, etc. Depending on the context you can get creative. One thing I like to do with multivariate data is to fit some kind of factor model / probabilistic PCA (something that will do multiple imputation for missing data) and look at the scores for as many components as possible. Data points that score highly on the less important components/factors are often outliers you might not see otherwise.
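A rough sketch of the multivariate idea above. The probabilistic PCA with imputation that the answer suggests would need a dedicated package; this simplified stand-in just runs ordinary PCA on scaled complete cases (dat and its columns are hypothetical) and inspects scores on the least important component.

# Simplified stand-in for the probabilistic-PCA idea: plain PCA on complete cases.
num_cols <- sapply(dat, is.numeric)          # dat is a hypothetical data frame
x <- na.omit(scale(dat[, num_cols]))
fit <- prcomp(x)

# Scores on the *last* (least important) component often expose odd records.
last_pc <- ncol(fit$x)
suspect <- order(abs(fit$x[, last_pc]), decreasing = TRUE)[1:10]
x[suspect, ]                                  # rows worth a closer look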
2,134
Essential data checking tests
Big things I tend to check:

Variable type: check that a number is stored as numeric and not as factor/character (which might indicate some problem with the data as entered).

Consistent value levels: check that a level named "t1" didn't find itself duplicated as "t1 " or "t 1".

Outliers: check that the ranges of values make sense. (Did you get a blood pressure value of 0? Or a negative one?) Here we sometimes find out that someone encoded -5 as a missing value, or something like that.

Linear restrictions. I don't use these myself, but some people wish to impose restrictions on the dependencies between columns (columns A and B must add up to C, or something like that). For this you can have a look at the deducorrect package (I met the speaker, Mark van der Loo, at the last useR conference and was very impressed with his package).

Too little randomness. Sometimes values get rounded to certain values, or truncated at some point. These kinds of things are often clearer in scatter plots.

Missing values: make sure the missingness is not related to some other variable (missing at random). But I don't have a rule of thumb to give here.

Empty rows, or rows with mostly no values. These should (usually) be found and omitted.

Great question BTW - I hope to read other people's experience on the matter.
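A small R sketch of the first few checks in that list; dat, its columns (t_level, bp) and the -5 missing-value code are hypothetical stand-ins.

str(dat)                                 # variable types: numeric vs. factor/character
table(dat$t_level)                       # reveals near-duplicate levels such as "t1 " or "t 1"
summary(dat$bp)                          # range check: a blood pressure of 0 or below is suspect
dat$bp[dat$bp == -5] <- NA               # hypothetical: -5 was used as a missing-value code
colSums(is.na(dat))                      # how much is missing, per column
dat <- dat[rowSums(!is.na(dat)) > 0, ]   # drop rows that contain no values at all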
2,135
Essential data checking tests
When you have measures along time ("longitudinal data") it is often useful to check the gradients as well as the marginal distributions. This gradient can be calculated at different scales. More generally, you can apply meaningful transformations to your data (Fourier, wavelet) and check the marginal distributions of the transformed data.
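A tiny sketch of the "gradients at different scales" idea, assuming a numeric series y measured at regular time points:

d1 <- diff(y, lag = 1)    # point-to-point changes
d5 <- diff(y, lag = 5)    # coarser-scale changes
hist(d1); hist(d5)        # sudden jumps or implausible rates of change stand out here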
2,136
Essential data checking tests
A few I always go through:

Are there the number of records there are supposed to be? For example, if you pulled your data from another source, or it's a subset of someone else's data, do your numbers look reasonable? You'd think this would be covered, but you'd... be surprised.

Are all your variables there?

Do the values of those variables make sense? For example, if a Yes/No/Missing variable is coded "1,2,3" - what does that mean?

Where are your missing values? Are there some variables that seem overburdened with missing information? Are there certain subjects with massive numbers of missing values?

Those are the first steps I go through to make sure a dataset is even ready for something like exploratory data analysis. Just sitting down, roaming about the data some, going "Does that... seem right?"
2,137
Essential data checking tests
I would apply an acceptance sampling method to each column (it gives the cut-off number where you can draw the line between high quality and low quality); there is an online calculator for that.
2,138
When to use an offset in a Poisson regression? [duplicate]
Here is an example of application. Poisson regression is typically used to model count data. But, sometimes, it is more relevant to model rates instead of counts. This is relevant when, e.g., individuals are not followed for the same amount of time. For example, six cases over 1 year should not amount to the same as six cases over 10 years. So, instead of having $\log \mu_x = \beta_0 + \beta_1 x$ (where $\mu_x$ is the expected count for those with covariate $x$), you have $\log \tfrac{\mu_x}{t_x} = \beta'_0 + \beta'_1 x$ (where $t_x$ is the exposure time for those with covariate $x$). Now, the last equation could be rewritten $\log \mu_x = \log t_x + \beta'_0 + \beta'_1 x$ and $\log t_x$ plays the role of an offset.
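In R, such an offset can be supplied directly in the model formula; a minimal sketch, where d, cases, x and time are hypothetical names:

# Rate model: log(E[cases]) = log(time) + b0 + b1 * x
fit <- glm(cases ~ x + offset(log(time)), family = poisson, data = d)
summary(fit)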
2,139
Why to optimize max log probability instead of probability
Gradient methods generally work better optimizing $\log p(x)$ than $p(x)$ because the gradient of $\log p(x)$ is generally more well-scaled. That is, it has a size that consistently and helpfully reflects the objective function's geometry, making it easier to select an appropriate step size and get to the optimum in fewer steps. To see what I mean, compare the gradient optimization process for $p(x) = \exp(-x^2)$ and $f(x) = \log p(x) = -x^2$. At any point $x$, the gradient of $f(x)$ is $$f'(x) = -2x.$$ If we multiply that by $1/2$, we get the exact step size needed to get to the global optimum at the origin, no matter what $x$ is. This means that we don't have to work too hard to get a good step size (or "learning rate" in ML jargon). No matter where our initial point is, we just set our step to half the gradient and we'll be at the origin in one step. And if we don't know the exact factor that is needed, we can just pick a step size around 1, do a bit of line search, and we'll find a great step size very quickly, one that works well no matter where $x$ is. This property is robust to translation and scaling of $f(x)$. While scaling $f(x)$ will cause the optimal step scaling to differ from 1/2, at least the step scaling will be the same no matter what $x$ is, so we only have to find one parameter to get an efficient gradient-based optimization scheme. In contrast, the gradient of $p(x)$ has very poor global properties for optimization. We have $$p'(x) = f'(x) p(x)= -2x \exp(-x^2).$$ This multiplies the perfectly nice, well-behaved gradient $-2x$ with a factor $\exp(-x^2)$ which decays (faster than) exponentially as $x$ increases. At $x = 5$, we already have $\exp(-x^2) = 1.4 \cdot 10^{-11}$, so a step along the gradient vector is about $10^{-11}$ times too small. To get a reasonable step size toward the optimum, we'd have to scale the gradient by the reciprocal of that, an enormous constant $\sim 10^{11}$. Such a badly-scaled gradient is worse than useless for optimization purposes - we'd be better off just attempting a unit step in the uphill direction than setting our step by scaling against $p'(x)$! (In many variables $p'(x)$ becomes a bit more useful since we at least get directional information from the gradient, but the scaling issue remains.) In general there is no guarantee that $\log p(x)$ will have such great gradient scaling properties as this toy example, especially when we have more than one variable. However, for pretty much any nontrivial problem, $\log p(x)$ is going to be way, way better than $p(x)$. This is because the likelihood is a big product with a bunch of terms, and the log turns that product into a sum, as noted in several other answers. Provided the terms in the likelihood are well-behaved from an optimization standpoint, their log is generally well-behaved, and the sum of well-behaved functions is well-behaved. By well-behaved I mean $f''(x)$ doesn't change too much or too rapidly, leading to a nearly quadratic function that is easy to optimize by gradient methods. The sum of a derivative is the derivative of the sum, no matter what the derivative's order, which helps to ensure that that big pile of sum terms has a very reasonable second derivative!
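A quick numerical check of the scaling argument, using nothing beyond the toy functions already defined:

x <- 5
grad_log <- -2 * x                  # gradient of log p(x) = -x^2   ->  -10
grad_p   <- -2 * x * exp(-x^2)      # gradient of p(x) = exp(-x^2)  ->  roughly -1.4e-10
x + 0.5 * grad_log                  # one gradient step with learning rate 1/2 lands at the optimum, 0
x + 0.5 * grad_p                    # the same learning rate barely moves x at all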
2,140
Why to optimize max log probability instead of probability
Underflow: the computer uses a limited-precision floating point representation of fractions, so multiplying many probabilities together is guaranteed to end up very, very close to zero. With $\log$, we don't have this issue.
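A two-line R illustration of the underflow point (the 0.001 probabilities are just made-up numbers):

prod(rep(0.001, 200))        # underflows to exactly 0 in double precision
sum(log(rep(0.001, 200)))    # about -1381.6, perfectly representable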
2,141
Why to optimize max log probability instead of probability
The logarithm of a joint probability of independent observations simplifies to the sum of the logarithms of the individual probabilities (and the sum rule is easier than the product rule for differentiation): $\log \left(\prod_i P(x_i)\right) = \sum_i \log \left( P(x_i)\right)$. The logarithm of a member of the exponential family of distributions (which includes the ubiquitous normal) is polynomial in the parameters (i.e., maximum likelihood reduces to least squares for normal distributions): $\log\left(\exp\left(-\frac{1}{2}x^2\right)\right) = -\frac{1}{2}x^2$. The latter form is both more numerically stable and symbolically easier to differentiate than the former. Last but not least, the logarithm is a monotonic transformation that preserves the locations of the extrema (in particular, the estimated parameters in maximum likelihood are identical for the original and the log-transformed formulation).
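To spell out the "reduces to least squares" remark with one line of standard normal-likelihood algebra (not part of the original answer): for independent $x_1,\dots,x_n \sim \mathrm{N}(\mu,\sigma^2)$, $$ -\log \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x_i-\mu)^2}{2\sigma^2}\right) = \frac{n}{2}\log(2\pi\sigma^2) + \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2 , $$ so maximizing over $\mu$ is exactly minimizing the sum of squared deviations.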
2,142
Why to optimize max log probability instead of probability
It is much easier to take the derivative of a sum of logarithms than the derivative of a product that contains, say, 100 factors.
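Concretely (a standard calculus identity, added here for emphasis): for a likelihood $L(\theta)=\prod_{i=1}^{100} f_i(\theta)$, $$ \frac{d}{d\theta}\log L(\theta) = \sum_{i=1}^{100} \frac{f_i'(\theta)}{f_i(\theta)} , $$ whereas differentiating $L(\theta)$ itself by the product rule produces 100 terms, each of which is a product of 100 factors.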
2,143
Why to optimize max log probability instead of probability
As a general rule, the most basic and easy optimization problem is to optimize a quadratic function. You can easily find the optimum of such a function no matter where you start. How this manifests depends on the specific method, but the closer your function is to a quadratic, the better. As noted by TemplateRex, in a wide variety of problems the probabilities that go into calculating the likelihood function come from the normal distribution, or are approximated by it. So if you work on the log, you get a nice quadratic function. Whereas if you work on the probabilities, you have a function that (1) is not convex (the bane of optimization algorithms everywhere) and (2) crosses multiple scales rapidly, and therefore has a very narrow range where the function values are indicative of where to direct your search. Which function would you rather optimize: the sharply peaked likelihood itself, or its nearly quadratic logarithm? (This was actually an easy one; in practical applications your search can start so far off the optimum that the function values and gradients, even if you were able to compute them numerically, will be indistinguishable from 0 and useless for the purposes of the optimization algorithm. But transforming to a quadratic function makes this a piece of cake.) Note that this is completely consistent with the numerical stability issues already mentioned. The reason a log scale is needed even to plot such a likelihood is exactly the same reason that the log probability is much better behaved (for optimization and other purposes) than the original. You could also approach this another way. Even if there were no advantage to the log (which there is), we're going to use the log scale anyway for derivations and calculation, so what reason is there to apply the exp transformation just for computing the gradient? We may as well remain consistent with the log.
2,144
Why to optimize max log probability instead of probability
By using $\ln p$ we increase the dynamic range of the optimization algorithm. The $p$ in applications is usually a product of functions. For instance, in maximum likelihood estimation it is a product of the form $L(x|\theta)=\prod_{i=1}^n f(x_i|\theta)$, where $f(.)$ is the density function, which can be greater or less than 1, by the way. So, when $n$ is very large, i.e., in a large sample, your likelihood function $L(.)$ is usually far away from 1: it is either very small or very large, because it behaves like a power, $L\sim f(.)^n$. By taking a log we simply improve the dynamic range of any optimization algorithm, allowing it to work with extremely large or small values in the same way.
2,145
Why to optimize max log probability instead of probability
Some nice answers have been given already, but I recently encountered a new one: Often, you are given a huge training data set $\mathcal{X}$, and you define some probabilistic model $p(x|\theta)$, and you want to maximize the likelihood for $x \in \mathcal{X}$. They are assumed to be independent, i.e. you have $$ p(\mathcal{X}|\theta) = \prod_{x\in\mathcal{X}} p(x|\theta) . $$ Now, you often do some sort of stochastic (mini-batch) gradient-based training, i.e. in each step, for your loss $L$, you optimize $L(\mathcal{X'}|\theta)$ for $\mathcal{X'} \subset \mathcal{X}$, i.e. $$ \theta' := \theta - \frac{\partial \sum_{x\in\mathcal{X'}} L(x|\theta)}{\partial \theta} . $$ Now, these stochastic steps are accumulated additively. Because of that, you want the property that in general $$ L(\mathcal{X}|\theta) = \sum_{x\in\mathcal{X}} L(x|\theta) . $$ This is the case for $$ L(x|\theta) = -\log p(x|\theta) . $$
2,146
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
There is a lot of information on this topic at the GLMM FAQ. However, in your particular case, I would suggest using

library(nlme)
m1 <- lme(value ~ status, random = ~1|experiment, data = mydata)
anova(m1)

because you don't need any of the stuff that lmer offers (higher speed, handling of crossed random effects, GLMMs ...). lme should give you exactly the same coefficient and variance estimates but will also compute df and p-values for you (which do make sense in a "classical" design such as you appear to have). You may also want to consider the random term ~status|experiment (allowing for variation of status effects across blocks, or equivalently including a status-by-experiment interaction). Posters above are also correct that your t statistics are so large that your p-value will definitely be <0.05, but I can imagine you would like "real" p-values.
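A sketch of the random-slope variant mentioned above, keeping the answer's hypothetical names (value, status, experiment, mydata):

# allow the status effect to vary across experiments (status-by-experiment interaction)
m2 <- lme(value ~ status, random = ~ status | experiment, data = mydata)
anova(m1, m2)   # likelihood comparison against the random-intercept model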
2,147
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
You could use the package lmerTest. You just install/load it and the lmer models get extended. So, e.g.,

library(lmerTest)
lmm <- lmer(value ~ status + (1|experiment))
summary(lmm)
anova(lmm)

would give you results with p-values. Whether p-values are the right indication is a little bit disputed, but if you want to have them, this is the way to get them.
2,148
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
If you can handle abandoning p-values (and you should), you can compute a likelihood ratio that would represent the weight of evidence for the effect of status via:

#compute a model where the effect of status is estimated
unrestricted_fit = lmer(
    formula = value ~ (1|experiment) + status
    , REML = F #because we want to compare models on likelihood
)

#next, compute a model where the effect of status is not estimated
restricted_fit = lmer(
    formula = value ~ (1|experiment)
    , REML = F #because we want to compare models on likelihood
)

#compute the AIC-corrected log-base-2 likelihood ratio (a.k.a. "bits" of evidence)
(AIC(restricted_fit)-AIC(unrestricted_fit))*log2(exp(1))
2,149
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
The issue is that the calculation of p-values for these models is not trivial (see the discussion here), so the authors of the lme4 package have purposely chosen not to include p-values in the output. You may find a method of calculating these, but they will not necessarily be correct.
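One workaround that is at least widely used (not endorsed by this answer, but consistent with others in this thread) is a likelihood-ratio test between nested lmer fits; a sketch with the thread's hypothetical variables:

m0 <- lmer(value ~ 1 + (1|experiment), data = mydata, REML = FALSE)
m1 <- lmer(value ~ status + (1|experiment), data = mydata, REML = FALSE)
anova(m0, m1)   # chi-square test for the fixed effect of status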
2,150
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
Consider what you're asking. If you just want to know whether the overall p-value for the effect of status passes some sort of arbitrary cutoff value, like 0.05, then that's easy. First, you want to find the overall effect. You could get that from anova:

m <- lmer(...)  #just run your lmer command but save the model
anova(m)

Now you have an F value. You can take that and look it up in some F tables. Just pick the lowest possible denominator degrees of freedom; the critical value there is going to be around 20. Your F may be larger than that, but I could be wrong. Even if it's not, look at the number of degrees of freedom from a conventional ANOVA calculation using the number of experiments you have. Sticking that value in, you're down to a critical value of about 5. Now you easily pass it in your study. The 'true' df for your model will be something higher than that, because you're modelling every data point as opposed to the aggregate values that an ANOVA would model. If you actually want an exact p-value, there's no such thing unless you're willing to make a theoretical statement about it. If you read Pinheiro & Bates (2001, and perhaps some more books on the subject... see other links in these answers) and you come away with an argument for a specific df, then you could use that. But you're not actually looking for an exact p-value anyway. I mention this because you therefore shouldn't report an exact p-value, only that your cutoff is passed. You should really consider the Mike Lawrence answer, because the whole idea of just sticking with a pass point for p-values as the final and most important information to extract from your data is generally misguided (but might not be in your case, since we don't really have enough information to know). Mike is using a pet version of the LR calculation that is interesting, but it may be hard to find a lot of documentation on it. If you look into model selection and interpretation using AIC you may like it.
2,151
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
Edit: This method is no longer supported in newer versions of lme4. Use the lmerTest package as suggested in this answer by pbx101.

There is a post on the R list by lme4's author for why p-values are not displayed. He suggests using MCMC samples instead, which you do using pvals.fnc from the languageR package:

library("lme4")
library("languageR")
model = lmer(...)
pvals.fnc(model)

See http://www2.hawaii.edu/~kdrager/MixedEffectsModels.pdf for an example and details.
2,152
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
Are you interested in knowing if the combined effect of status has a significant effect on value? If so, you can use the Anova function in the car package (not to be confused with the anova function in base R).

dat <- data.frame(
  experiment = sample(c("A","B","C","D"), 264, replace=TRUE),
  status = sample(c("D","R","A"), 264, replace=TRUE),
  value = runif(264)
)
require(lme4)
(fm <- lmer(value~status+(1|experiment), data=dat))
require(car)
Anova(fm)

Have a look at ?Anova after loading the car package.
2,153
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
Simply loading the afex package will print the p-values in the output of the lmer function from the lme4 package (you don't need to be using afex; just load it):

library(lme4)  #for mixed model
library(afex)  #for p-values

This will automatically add a p-value column to the output of lmer(yourmodel) for the fixed effects.
2,154
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
You could use parameters::p_value() to get the p-values. I found it to be very useful.
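A minimal usage sketch, assuming a fitted lmer object called model; check the parameters package documentation for the exact behaviour with your model class:

library(parameters)
p_value(model)            # one p-value per fixed-effect coefficient
model_parameters(model)   # full coefficient table, including p-values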
2,155
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
The function pvals.fnc is no longer supported by lme4. Using the lmerTest package, it is possible to use other methods to calculate the p-value, such as the Kenward-Roger approximation:

model <- lmer(value ~ status + (1|experiment))
anova(model, ddf="Kenward-Roger")
2,156
How to obtain the p-value (check significance) of an effect in a lme4 mixed model?
You can install the jtools package and use the summ function on the output model to get the p-value of the fixed effects.

library(lme4)
library(jtools)
model <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
summ(model)

FIXED EFFECTS:
---------------------------------------------------------
                      Est.   S.E.   t val.    d.f.      p
----------------- -------- ------ -------- ------- ------
(Intercept)         251.41   6.82    36.84   17.00   0.00
Days                 10.47   1.55     6.77   17.00   0.00
---------------------------------------------------------

p values calculated using Satterthwaite d.f.
2,157
What is "restricted maximum likelihood" and when should it be used?
As per ocram's answer, ML is biased for the estimation of variance components. But observe that the bias gets smaller for larger sample sizes. Hence, in answer to your questions "...what are the advantages of REML vs ML? Under what circumstances may REML be preferred over ML (or vice versa) when fitting a mixed effects model?", for small sample sizes REML is preferred. However, likelihood ratio tests for REML require exactly the same fixed effects specification in both models. So, to compare models with different fixed effects (a common scenario) with an LR test, ML must be used. REML takes account of the number of (fixed effects) parameters estimated, losing 1 degree of freedom for each. This is achieved by applying ML to the least squares residuals, which are independent of the fixed effects.
What is "restricted maximum likelihood" and when should it be used?
As per ocram's answer, ML is biased for the estimation of variance components. But observe that the bias gets smaller for larger sample sizes. Hence in answer to your questions "...what are the advant
What is "restricted maximum likelihood" and when should it be used? As per ocram's answer, ML is biased for the estimation of variance components. But observe that the bias gets smaller for larger sample sizes. Hence in answer to your questions "...what are the advantages of REML vs ML ? Under what circumstances may REML be preferred over ML (or vice versa) when fitting a mixed effects model ?", for small sample sizes REML is preferred. However, likelihood ratio tests for REML require exactly the same fixed effects specification in both models. So, to compare models with different fixed effects (a common scenario) with an LR test, ML must be used. REML takes account of the number of (fixed effects) parameters estimated, losing 1 degree of freedom for each. This is achieved by applying ML to the least squares residuals, which are independent of the fixed effects.
What is "restricted maximum likelihood" and when should it be used? As per ocram's answer, ML is biased for the estimation of variance components. But observe that the bias gets smaller for larger sample sizes. Hence in answer to your questions "...what are the advant
2,158
What is "restricted maximum likelihood" and when should it be used?
Here is a quick answer...

Standard illustrative example. Let $y = (y_1, \dotsc, y_n)$ be a sample from a normal distribution $\mathrm{N}(\mu, \sigma^2)$. Both $\mu$ and $\sigma^2$ are unknown. The maximum likelihood estimator of $\sigma^2$, obtained by taking the derivative of the log-likelihood with respect to $\sigma^2$ and equating to zero, is $$ \hat{\sigma}^2_{\textrm{ML}} = \frac{1}{n} \sum_{i=1}^n (y_i -\bar{y})^2 $$ where $\bar{y} = \frac{1}{n} \sum_{i=1}^n y_i$ is the maximum likelihood estimator of $\mu$. We can show that $$ \mathrm{E}(\hat{\sigma}^2_{\textrm{ML}}) = \frac{n-1}{n} \sigma^2. $$ [Start by rewriting $\hat{\sigma}^2_{\textrm{ML}}$ as $\frac{1}{n} \sum_{i=1}^n \left((y_i - \mu) + (\mu - \bar{y})\right)^2$.] Thus, $\hat{\sigma}^2_{\textrm{ML}}$ is biased. Note that if we had known $\mu$, then the MLE for $\sigma^2$ would have been unbiased. Hence, the problem with $\hat{\sigma}^2_{\textrm{ML}}$ appears to be linked with the fact that we have substituted $\bar{y}$ for the unknown mean in the estimation. The intuitive idea of REML estimation is to end up with a likelihood that contains all the information on $\sigma^2$ but no longer contains the information on $\mu$. More technically, the REML likelihood is a likelihood of linear combinations of the original data: instead of the likelihood of $y$, we consider the likelihood of $Ky$, where the matrix $K$ is such that $\mathrm{E}[Ky] = 0$. REML estimation is often used in the more complicated context of mixed models. Every book on mixed models has a section explaining REML estimation in more detail. Edit @Joe King: Here (and here, for the English web page) is one of my favorite books on mixed models that is fully available online. Section 2.4.2 deals with estimating variance components. Enjoy your reading :-)
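A small simulation sketch of the bias just described (made-up numbers; in this single-sample normal case the REML estimator of $\sigma^2$ is simply the usual variance with denominator $n-1$):

set.seed(1)
n <- 5; sigma2 <- 4
sims <- replicate(20000, rnorm(n, mean = 10, sd = sqrt(sigma2)))
ml   <- apply(sims, 2, function(y) mean((y - mean(y))^2))   # divides by n
reml <- apply(sims, 2, var)                                  # divides by n - 1
mean(ml)    # close to (n - 1)/n * sigma2 = 3.2
mean(reml)  # close to sigma2 = 4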
What is "restricted maximum likelihood" and when should it be used?
Here is a quick answer... Standard illustrative example Let $y = (y_1, \dotsc, y_n)$ be a sample from a normal distribution $\mathrm{N}(\mu, \sigma^2$). Both $\mu$ and $\sigma^2$ are unknown. The max
What is "restricted maximum likelihood" and when should it be used? Here is a quick answer... Standard illustrative example Let $y = (y_1, \dotsc, y_n)$ be a sample from a normal distribution $\mathrm{N}(\mu, \sigma^2$). Both $\mu$ and $\sigma^2$ are unknown. The maximum likelihood estimator of $\sigma^2$, obtained by taking the derivative of the log-likelihood with respect to $\sigma^2$ and equating to zero, is $$ \hat{\sigma}^2_{\textrm{ML}} = \frac{1}{n} \sum_{i=1}^n (y_i -\bar{y})^2 $$ where $\bar{y} = \frac{1}{n} \sum_{i=1}^n y_i$ is the maximum likelihood estimator of $\mu$. We can show that $$ \mathrm{E}(\hat{\sigma}^2_{\textrm{ML}}) = \frac{n-1}{n} \sigma^2. $$ [Start by rewriting $\hat{\sigma}^2_{\textrm{ML}}$ as $\frac{1}{n} \sum_{i=1}^n \left((y_i - \mu) + (\mu - \bar{y})\right)^2$]. Thus, $\hat{\sigma}^2_{\textrm{ML}}$ is biased. Note that if we had known $\mu$, then the MLE for $\sigma^2$ would have been unbiased. Hence, the problem with $\hat{\sigma}^2_{\textrm{ML}}$ appears to be linked with the fact that we have substituted $\bar{x}$ for the unknown mean in the estimation. The intuitive idea of REML estimation is to end up with a likelihood that contains all the information on $\sigma^2$ but no longer contains the information on $\mu$. More technically, the REML likelihood is a likelihood of linear combinations of the original data: instead of the likelihood of $y$, we consider the likelihood of $Ky$, where the matrix $K$ is such that $\mathrm{E}[Ky] = 0$. REML estimation is often used in the more complicated context of mixed models. Every book on mixed models have a section explaining REML estimation in more details. Edit @Joe King: Here (and here, for English web page context) is one of my favorite books on mixed models that is fully available online. Section 2.4.2 deals with estimating variance components. Enjoy your reading :-)
What is "restricted maximum likelihood" and when should it be used? Here is a quick answer... Standard illustrative example Let $y = (y_1, \dotsc, y_n)$ be a sample from a normal distribution $\mathrm{N}(\mu, \sigma^2$). Both $\mu$ and $\sigma^2$ are unknown. The max
2,159
What is "restricted maximum likelihood" and when should it be used?
The ML method underestimates the variance parameters because it assumes that the fixed parameters are known without uncertainty when estimating the variance parameters. The REML method uses a mathematical trick to make the estimates for the variance parameters independent of the estimates for the fixed effects. REML works by first getting regression residuals for the observations modeled by the fixed-effects portion of the model, ignoring at this point any variance components. ML estimates are unbiased for the fixed effects but biased for the random effects, whereas the REML estimates are biased for the fixed effects and unbiased for the random effects.
What is "restricted maximum likelihood" and when should it be used?
ML method underestimates the variance parameters because it assumes that the fixed parameters are known without uncertainty when estimating the variance parameters. The REML method uses a mathematica
What is "restricted maximum likelihood" and when should it be used? ML method underestimates the variance parameters because it assumes that the fixed parameters are known without uncertainty when estimating the variance parameters. The REML method uses a mathematical trick to make the estimates for the variance parameters independent of the estimates for the fixed effects. REML works by first getting regression residuals for the observations modeled by the fixed effects portion of the model, ignoring at this point any variance components. ML estimates are unbiased for the fixed effects but biased for the random effects, whereas the REML estimates are biased for the fixed effects and unbiased for the random effects.
What is "restricted maximum likelihood" and when should it be used? ML method underestimates the variance parameters because it assumes that the fixed parameters are known without uncertainty when estimating the variance parameters. The REML method uses a mathematica
2,160
When to use Fisher and Neyman-Pearson framework?
Let me start by defining the terms of the discussion as I see them. A p-value is the probability of getting a sample statistic (say, a sample mean) as far as, or further from some reference value than your sample statistic, if the reference value were the true population parameter. For example, a p-value answers the question: what is the probability of getting a sample mean IQ more than $|\bar x-100|$ points away from 100, if 100 is really the mean of the population from which your sample was drawn. Now the issue is, how should that number be employed in making a statistical inference? Fisher thought that the p-value could be interpreted as a continuous measure of evidence against the null hypothesis. There is no particular fixed value at which the results become 'significant'. The way I usually try to get this across to people is to point out that, for all intents and purposes, p=.049 and p=.051 constitute an identical amount of evidence against the null hypothesis (cf. @Henrik's answer here). On the other hand, Neyman & Pearson thought you could use the p-value as part of a formalized decision making process. At the end of your investigation, you have to either reject the null hypothesis, or fail to reject the null hypothesis. In addition, the null hypothesis could be either true or not true. Thus, there are four theoretical possibilities (although in any given situation, there are just two): you could make a correct decision (fail to reject a true--or reject a false--null hypothesis), or you could make a type I or type II error (by rejecting a true null, or failing to reject a false null hypothesis, respectively). (Note that the p-value is not the same thing as the type I error rate, which I discuss here.) The p-value allows the process of deciding whether or not to reject the null hypothesis to be formalized. Within the Neyman-Pearson framework, the process would work like this: there is a null hypothesis that people will believe by default in the absence of sufficient evidence to the contrary, and an alternative hypothesis that you believe may be true instead. There are some long-run error rates that you will be willing to live with (note that there is no reason these have to be 5% and 20%). Given these things, you design your study to differentiate between those two hypotheses while maintaining, at most, those error rates, by conducting a power analysis and conducting your study accordingly. (Typically, this means having sufficient data.) After your study is completed, you compare your p-value to $\alpha$ and reject the null hypothesis if $p<\alpha$; if it's not, you fail to reject the null hypothesis. Either way, your study is complete and you have made your decision. The Fisherian and Neyman-Pearson approaches are not the same. The central contention of the Neyman-Pearson framework is that at the end of your study, you have to make a decision and walk away. Allegedly, a researcher once approached Fisher with 'non-significant' results, asking him what he should do, and Fisher said, 'go get more data'. Personally, I find the elegant logic of the Neyman-Pearson approach very appealing. But I don't think it's always appropriate. To my mind, at least two conditions must be met before the Neyman-Pearson framework should be considered: There should be some specific alternative hypothesis (effect magnitude) that you care about for some reason. (I don't care what the effect size is, what your reason is, whether it's well-founded or coherent, etc., only that you have one.) 
There should be some reason to suspect that the effect will be 'significant', if the alternative hypothesis is true. (In practice, this will typically mean that you conducted a power analysis, and have enough data.) When these conditions aren't met, the p-value can still be interpreted in keeping with Fisher's ideas. Moreover, it seems likely to me that most of the time these conditions are not met. Here are some easy examples that come to mind, where tests are run, but the above conditions are not met:
- the omnibus ANOVA for a multiple regression model (it is possible to figure out how all the hypothesized non-zero slope parameters come together to create a non-centrality parameter for the F distribution, but it isn't remotely intuitive, and I doubt anyone does it)
- the value of a Shapiro-Wilk test of the normality of your residuals in a regression analysis (what magnitude of $W$ do you care about and why? how much power do you have to reject the null when that magnitude is correct?)
- the value of a test of homogeneity of variance (e.g., Levene's test; same comments as above)
- any other tests to check assumptions, etc.
- t-tests of covariates other than the explanatory variable of primary interest in the study
- initial / exploratory research (e.g., pilot studies)
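Where the Neyman-Pearson conditions above are met, the planning step can be as simple as a power calculation. Here is a sketch with base R's power.t.test; the 5-point effect, sd of 15, alpha, and power are hypothetical numbers chosen only for illustration.
```r
# Sketch: Neyman-Pearson style planning -- fix alpha and power for a specific
# alternative, find n, then make a reject / fail-to-reject decision and stop.
power.t.test(delta = 5, sd = 15, sig.level = 0.05, power = 0.80,
             type = "one.sample", alternative = "two.sided")
# After collecting roughly that many observations:
# reject H0 if p < alpha, otherwise fail to reject -- either way, the decision is made.
```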
2,161
When to use Fisher and Neyman-Pearson framework?
Practicality is in the eye of the beholder, but: Fisher's significance testing can be interpreted as a way of deciding whether or not the data suggest any interesting 'signal'. We either reject the null hypothesis (which may be a Type I error) or don't say anything at all. For example, in lots of modern 'omics' applications, this interpretation fits; we don't want to make too many Type I errors, but we do want to pull out the most exciting signals, though we may miss some. Neyman-Pearson hypothesis testing makes sense when there are two disjoint alternatives (e.g. the Higgs Boson does or does not exist) between which we decide. As well as the risk of a Type I error, here we can also make a Type II error - when there's a real signal but we say it's not there, making a 'null' decision. N-P's argument was that, while keeping the Type I error rate below a fixed level, we want to minimize the risk of Type II errors. Often, neither system will seem perfect - for example you may just want a point estimate and a corresponding measure of uncertainty. Also, it may not matter which version you use, because you report the p-value and leave test interpretation to the reader. But to choose between the approaches above, identify whether (or not) Type II errors are relevant to your application.
2,162
When to use Fisher and Neyman-Pearson framework?
The whole point is that you cannot ignore the philosophical differences. A mathematical procedure in statistics doesn't just stand alone as something you apply without some underlying hypotheses, assumptions, theory... philosophy. That said, if you insist on sticking with frequentist philosophies there might be a few very specific kinds of problems where Neyman-Pearson really needs to be considered. They'd all fall in the class of repeated testing like quality control or fMRI. Setting a specific alpha beforehand and considering the whole Type I, Type II, and power framework becomes more important in that setting.
2,163
When to use Fisher and Neyman-Pearson framework?
My understanding is: the p-value is to tell us what to believe (verifying a theory with sufficient data), while the Neyman-Pearson approach is to tell us what to do (making the best possible decisions even with limited data). So it looks to me that a (small) p-value is more stringent while the Neyman-Pearson approach is more pragmatic; that's probably why the p-value is used more in answering scientific questions, while Neyman-Pearson is used more in making statistical/practical decisions.
2,164
Kendall Tau or Spearman's rho?
I found that the Spearman correlation is mostly used in place of the usual linear correlation when working with integer-valued scores on a measurement scale, when the scale has a moderate number of possible scores, or when we don't want to rely on assumptions about the bivariate relationship. As compared to the Pearson coefficient, the interpretation of Kendall's tau seems to me less direct than that of Spearman's rho, in the sense that it quantifies the difference between the % of concordant and discordant pairs among all possible pairwise events. In my understanding, Kendall's tau more closely resembles the Goodman-Kruskal Gamma. I just browsed an article by Larry Winner in the J. Statistics Educ. (2006) which discusses the use of both measures, applied to NASCAR Winston Cup Race Results for 1975-2003. I also found @onestop's answer about Pearson's or Spearman's correlation with non-normal data interesting in this respect. Of note, Kendall's tau (the tau-a version) has a connection to Somers' D (and Harrell's C) used for predictive modelling (see e.g., Interpretation of Somers' D under four simple models by RB Newson and reference 6 therein, and articles by Newson published in the Stata Journal 2006). An overview of rank-sum tests is provided in Efficient Calculation of Jackknife Confidence Intervals for Rank Statistics, which was published in the JSS (2006).
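For reference, both coefficients (and their tests) are available in base R; a quick sketch on simulated, monotonically related data:
```r
# Sketch: Spearman's rho and Kendall's tau on the same data (base R).
set.seed(1)
x <- rnorm(50)
y <- x^3 + rnorm(50)                  # monotone but nonlinear relationship
cor(x, y, method = "spearman")        # Spearman's rho
cor(x, y, method = "kendall")         # Kendall's tau (typically smaller in absolute value)
cor.test(x, y, method = "kendall")    # test and p-value
```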
2,165
Kendall Tau or Spearman's rho?
I refer the honorable gentleman to my previous answer: "...confidence intervals for Spearman’s rS are less reliable and less interpretable than confidence intervals for Kendall’s τ-parameters", according to Kendall & Gibbons (1990).
2,166
Kendall Tau or Spearman's rho?
Again a somewhat philosophical answer; the basic difference is that Spearman's Rho is an attempt to extend the R^2 (="variance explained") idea to nonlinear interactions, while Kendall's Tau is rather intended to be a test statistic for a nonlinear correlation test. So, Tau should be used for testing nonlinear correlations, and Rho as an R^2 extension (for people already familiar with R^2 -- explaining Tau to an unsuspecting audience in limited time is painful).
2,167
Kendall Tau or Spearman's rho?
Here's a quote from Andrew Gilpin (1993) advocating Kendall's τ over Spearman's ρ for theoretical reasons: "[Kendall's $τ$] approaches a normal distribution more rapidly than $ρ$, as $N$, the sample size, increases; and $τ$ is also more tractable mathematically, particularly when ties are present." Reference: Gilpin, A. R. (1993). Table for conversion of Kendall's tau to Spearman's rho within the context of measures of magnitude of effect for meta-analysis. Educational and Psychological Measurement, 53(1), 87-92.
2,168
Kendall Tau or Spearman's rho?
FWIW, a quote from Myers & Well (Research Design and Statistical Analysis, second edition, 2003, p. 510), if you still care about the p-values: Siegel and Castellan (1988, Nonparametric Statistics for the Behavioral Sciences) point out that, although $\tau$ and Spearman $\rho$ will generally have different values when calculated for the same data set, when significance tests for $\tau$ and Spearman $\rho$ are based on their sampling distributions, they will yield the same p-values.
2,169
Why does k-means clustering algorithm use only Euclidean distance metric?
K-Means procedure - which is a vector quantization method often used as a clustering method - does not explicitly use pairwise distances between data points at all (in contrast to hierarchical and some other clusterings which allow for an arbitrary proximity measure). It amounts to repeatedly assigning points to the closest centroid, thereby using the Euclidean distance from data points to a centroid. However, K-Means is implicitly based on pairwise Euclidean distances between data points, because the sum of squared deviations from the centroid is equal to the sum of pairwise squared Euclidean distances divided by the number of points. The term "centroid" is itself from Euclidean geometry. It is the multivariate mean in Euclidean space. Euclidean space is about Euclidean distances. Non-Euclidean distances will generally not span Euclidean space. That's why K-Means is for Euclidean distances only. But a Euclidean distance between two data points can be represented in a number of alternative ways. For example, it is closely tied with the cosine or scalar product between the points. If you have cosine, or covariance, or correlation, you can always (1) transform it to (squared) Euclidean distance, then (2) create data for that matrix of Euclidean distances (by means of Principal Coordinates or other forms of metric Multidimensional Scaling), and (3) input those data to K-Means clustering. Therefore, it is possible to make K-Means "work with" pairwise cosines or such; in fact, such implementations of K-Means clustering exist. See also the "K-means for distance matrix" implementation. It is possible to program K-means in a way that it directly calculates on the square matrix of pairwise Euclidean distances, of course. But it will work slowly, so the more efficient way is to create data for that distance matrix (converting the distances into scalar products and so on - the path outlined in the previous paragraph) and then apply the standard K-means procedure to that dataset. Please note I was discussing whether Euclidean or non-Euclidean dissimilarity between data points is compatible with K-means. It is related to, but not quite the same question as, whether non-Euclidean deviations from the centroid (in a wide sense, centre or quasi-centroid) can be incorporated in K-means or a modified "K-means". See the related question K-means: Why minimizing WCSS is maximizing Distance between clusters?.
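A quick numerical check of the identity mentioned above (squared deviations from the centroid versus pairwise squared distances), followed by a sketch of the distances -> metric MDS -> K-means route, all in base R:
```r
# Check: sum of squared deviations from the centroid equals the sum of
# pairwise squared Euclidean distances divided by the number of points.
set.seed(7)
X <- matrix(rnorm(30), ncol = 3)                      # 10 points in 3 dimensions
ss_centroid <- sum(scale(X, center = TRUE, scale = FALSE)^2)
ss_pairwise <- sum(dist(X)^2) / nrow(X)               # dist() lists each pair once
all.equal(ss_centroid, ss_pairwise)                   # TRUE

# Sketch of the route described above: distances -> coordinates via metric MDS
# (Principal Coordinates) -> standard K-means on those coordinates.
coords <- cmdscale(dist(X), k = 2)
km <- kmeans(coords, centers = 2)
km$cluster
```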
2,170
Why does k-means clustering algorithm use only Euclidean distance metric?
See also @ttnphns answer for an interpretation of k-means that actually involves pointwise Euclidean distances. The way k-means is constructed is not based on distances. K-means minimizes within-cluster variance. Now if you look at the definition of variance, it is identical to the sum of squared Euclidean distances from the center. (@ttnphns answer refers to pairwise Euclidean distances!) The basic idea of k-means is to minimize squared errors. There is no "distance" involved here. Why it is not correct to use arbitrary distances: because k-means may fail to converge with other distance functions. The common proof of convergence is like this: the assignment step and the mean update step both optimize the same criterion. There is a finite number of possible assignments. Therefore, it must converge after a finite number of improvements. To use this proof for other distance functions, you must show that the mean (note: k-means) minimizes your distances, too. If you are looking for a Manhattan-distance variant of k-means, there is k-medians, because the median is a known best L1 estimator. If you want arbitrary distance functions, have a look at k-medoids (aka PAM, partitioning around medoids). The medoid minimizes arbitrary distances (because it is defined as the minimum), and there only exists a finite number of possible medoids, too. It is much more expensive than the mean, though.
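As a sketch of the k-medoids route for non-Euclidean distances, assuming the cluster package is available (its pam() accepts either a metric name or a precomputed dissimilarity object); the simulated two-group data are purely illustrative:
```r
# Sketch: PAM (k-medoids) with the Manhattan metric and with an arbitrary
# precomputed dissimilarity matrix (assumes the cluster package).
library(cluster)
set.seed(3)
X <- rbind(matrix(rnorm(40, mean = 0), ncol = 2),
           matrix(rnorm(40, mean = 3), ncol = 2))
pam_l1 <- pam(X, k = 2, metric = "manhattan")         # medoids under the L1 metric
pam_d  <- pam(dist(X, method = "canberra"), k = 2)    # any precomputed dissimilarity
table(pam_l1$clustering, pam_d$clustering)
```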
2,171
Why does k-means clustering algorithm use only Euclidean distance metric?
I might be a little pedantic here, but K-means is the name given to a particular algorithm that assigns labels to data points such that within-cluster variances are minimized, and it is not the name for a "general technique". The K-means algorithm has been independently proposed in several fields, with strong interpretations applicable to each field. It just turns out, nicely, that this criterion also corresponds to the Euclidean distance to the center. For a brief history of K-means, please read Data Clustering: 50 Years Beyond K-means. There are a plethora of other clustering algorithms that use metrics other than Euclidean. The most general case I know of is using Bregman divergences for clustering, of which squared Euclidean distance is a special case.
2,172
Why does k-means clustering algorithm use only Euclidean distance metric?
Since this is apparently now a canonical question, and it hasn't been mentioned here yet: One natural extension of k-means to use distance metrics other than the standard Euclidean distance on $\mathbb R^d$ is to use the kernel trick. This refers to the idea of implicitly mapping the inputs to a high-, or infinite-, dimensional Hilbert space, where distances correspond to the distance function we want to use, and running the algorithm there. That is, letting $\varphi : \mathbb R^p \to \mathcal H$ be some feature map such that the desired metric $d$ can be written $d(x, y) = \lVert \varphi(x) - \varphi(y) \rVert_{\mathcal H}$, we run k-means on the points $\{ \varphi(x_i) \}$. In many cases, we can't compute the map $\varphi$ explicitly, but we can compute the kernel $k(x, y) = \langle \varphi(x), \varphi(y) \rangle_{\mathcal H}$. Not all distance metrics fit this model, but many do, and there are such functions defined on strings, graphs, images, probability distributions, and more.... In this situation, in the standard (Lloyd's) k-means algorithm, we can easily assign points to their clusters, but we represent the cluster centers implicitly (as linear combinations of the input points in Hilbert space). Finding the best representation in the input space would require finding a Fréchet mean, which is quite expensive. So it's easy to get cluster assignments with a kernel, harder to get the means. The following paper discusses this algorithm, and relates it to spectral clustering: I. Dhillon, Y. Guan, and B. Kulis. Kernel k-means, Spectral Clustering and Normalized Cuts. KDD 2005.
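To illustrate the "assignments are easy, means stay implicit" point, here is a rough base-R sketch of the kernel k-means assignment step with an RBF kernel (bandwidth fixed at 1, two simulated groups, no handling of empty clusters; all of these are illustrative choices, not a production implementation):
```r
# Rough sketch of kernel k-means: squared distance to an implicit center uses only
# kernel values, ||phi(x_i) - mu_c||^2 = K[i,i] - 2*mean(K[i, members]) + mean(K[members, members]).
set.seed(1)
X <- rbind(matrix(rnorm(100, 0), ncol = 2), matrix(rnorm(100, 4), ncol = 2))
K <- exp(-as.matrix(dist(X))^2 / 2)              # RBF kernel matrix, bandwidth 1
n <- nrow(X); k <- 2
labels <- sample(1:k, n, replace = TRUE)         # random initial assignment
for (iter in 1:25) {
  d2 <- sapply(1:k, function(cl) {
    m <- which(labels == cl)
    diag(K) - 2 * rowMeans(K[, m, drop = FALSE]) + mean(K[m, m])
  })
  new_labels <- max.col(-d2, ties.method = "first")  # closest implicit center
  if (all(new_labels == labels)) break
  labels <- new_labels
}
table(labels, rep(1:2, each = 50))               # recovered vs true grouping
```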
2,173
Why does k-means clustering algorithm use only Euclidean distance metric?
I've read many interesting comments here, but let me add that Matlab's "personal" implementation of k-means supports 4 non-Euclidean distances [between data points and cluster centres]. The only comment from the documentation I can see about that is: Distance measure, in p-dimensional space, used for minimization, specified as the comma-separated pair consisting of 'Distance' and a string. kmeans computes centroid clusters differently for the different, supported distance measures. This table summarizes the available distance measures. In the formulae, x is an observation (that is, a row of X) and c is a centroid (a row vector). Then a list of functions of c and x follows. Thus, considering that p is the dimensionality of the input data, it seems that no Euclidean embedding is performed beforehand. BTW in the past I've been using Matlab's k-means with correlation distance and it (unsurprisingly) did what it was supposed to do.
2,174
Why does k-means clustering algorithm use only Euclidean distance metric?
From here: Let us consider two documents A and B, each represented by a vector. The cosine treats both vectors as unit vectors by normalizing them, giving you a measure of the angle between the two vectors. It does provide an accurate measure of similarity, but with no regard to magnitude. But magnitude is an important factor when considering similarity.
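A tiny numeric illustration of that point, with two arbitrary vectors pointing in the same direction:
```r
# Same direction, different magnitudes: cosine similarity ignores the difference.
a <- c(1, 2); b <- c(10, 20)
sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))   # cosine similarity = 1
sqrt(sum((a - b)^2))                             # yet the Euclidean distance is large
```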
2,175
How should outliers be dealt with in linear regression analysis?
Rather than exclude outliers, you can use a robust method of regression. In R, for example, the rlm() function from the MASS package can be used instead of the lm() function. The method of estimation can be tuned to be more or less robust to outliers.
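A brief sketch of that suggestion (assumes the MASS package; the data are simulated with one planted outlier so the contrast with ordinary least squares is visible):
```r
# OLS vs robust M-estimation when one gross outlier is present.
library(MASS)
set.seed(10)
x <- 1:30
y <- 2 + 0.5 * x + rnorm(30)
y[30] <- 60                              # one gross outlier
coef(lm(y ~ x))                          # least squares: pulled toward the outlier
coef(rlm(y ~ x))                         # Huber M-estimate: much less affected
coef(rlm(y ~ x, psi = psi.bisquare))     # tuned to downweight outliers more aggressively
```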
2,176
How should outliers be dealt with in linear regression analysis?
Sometimes outliers are bad data, and should be excluded, such as typos. Sometimes they are Wayne Gretzky or Michael Jordan, and should be kept. Outlier detection methods include:
Univariate -> boxplot: outside of 1.5 times the inter-quartile range is an outlier.
Bivariate -> scatterplot with confidence ellipse: outside of, say, the 95% confidence ellipse is an outlier.
Multivariate -> Mahalanobis D2 distance.
Mark those observations as outliers. Run a logistic regression (on Y=IsOutlier) to see if there are any systematic patterns. Remove the ones that you can demonstrate are not representative of any sub-population.
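A sketch of the multivariate step with base R's mahalanobis(); the chi-square cutoff used here is a conventional but arbitrary choice, and the data and planted outlier are simulated for illustration:
```r
# Flag multivariate outliers by Mahalanobis D^2 against a chi-square quantile.
set.seed(5)
X <- cbind(rnorm(200), rnorm(200))
X[1, ] <- c(5, -5)                                   # plant an obvious outlier
d2 <- mahalanobis(X, center = colMeans(X), cov = cov(X))
cutoff <- qchisq(0.975, df = ncol(X))                # conventional 97.5% cutoff
which(d2 > cutoff)                                   # observations flagged for review
```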
2,177
How should outliers be dealt with in linear regression analysis?
I do think there is something to be said for just excluding the outliers. A regression line is supposed to summarise the data. Because of leverage you can have a situation where 1% of your data points affects the slope by 50%. It's only dangerous from a moral and scientific point of view if you don't tell anybody that you excluded the outliers. As long as you point them out you can say: "This regression line fits pretty well for most of the data. 1% of the time a value will come along that doesn't fit this trend, but hey, it's a crazy world, no system is perfect"
2,178
How should outliers be dealt with in linear regression analysis?
Sharpie, Taking your question literally, I would argue that there are no statistical tests or rules of thumb that can be used as a basis for excluding outliers in linear regression analysis (as opposed to determining whether or not a given observation is an outlier). This must come from subject-area knowledge. I think the best way to start is to ask whether the outliers even make sense, especially given the other variables you've collected. For example, is it really reasonable that you have a 600-pound woman in your study, which recruited from various sports injury clinics? Or, isn't it strange that a person is listing 55 years of professional experience when they're only 60 years old? And so forth. Hopefully, you then have a reasonable basis for either throwing them out or getting the data compilers to double-check the records for you. I would also suggest robust regression methods and the transparent reporting of dropped observations, as suggested by Rob and Chris respectively. Hope this helps, Brenden
2,179
How should outliers be dealt with in linear regression analysis?
I've published a method for identifying outliers in nonlinear regression, and it can be also used when fitting a linear model. HJ Motulsky and RE Brown. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate. BMC Bioinformatics 2006, 7:123
2,180
How should outliers be dealt with in linear regression analysis?
There are two statistical distance measures that are specifically designed for detecting outliers and then deciding whether such outliers should be removed from your linear regression. The first one is Cook's distance. You can find a pretty good explanation of it at Wikipedia: http://en.wikipedia.org/wiki/Cook%27s_distance. The higher the Cook's distance, the more influential the observation is (i.e. the larger its impact on the regression coefficients). A typical cut-off point for considering removal of an observation is a Cook's distance greater than 4/n (where n is the sample size). The second one is DFFITS, which is also well covered by Wikipedia: http://en.wikipedia.org/wiki/DFFITS. A typical cut-off point for considering removal of an observation is an absolute DFFITS value greater than 2*sqrt(k/n), where k is the number of variables and n is the sample size. Both measures usually give similar results, leading to a similar selection of observations.
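A minimal sketch in base R of both diagnostics and the quoted rules of thumb (simulated data with one planted outlier; the 4/n and 2*sqrt(k/n) cut-offs are conventions, not hard limits):

set.seed(1)
n <- 100; k <- 2
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 - x2 + rnorm(n)
y[5] <- y[5] + 10                          # plant one gross outlier

fit <- lm(y ~ x1 + x2)

cd  <- cooks.distance(fit)
dff <- dffits(fit)

which(cd > 4 / n)                          # candidates by Cook's distance
which(abs(dff) > 2 * sqrt(k / n))          # candidates by DFFITS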
2,181
How should outliers be dealt with in linear regression analysis?
Garbage in, garbage out... Implicit in getting the full benefit of linear regression is that the noise follows a normal distribution. Ideally you have mostly data and a little noise, not mostly noise and a little data. You can check the normality of the residuals after the linear fit by examining them. You can also filter the input data before the linear fit for obvious, glaring errors. Here are some types of noise in garbage input data that do not typically fit a normal distribution:
- Digits missing or added in hand-entered data (off by a factor of 10 or more)
- Wrong or incorrectly converted units (grams vs kilos vs pounds; meters, feet, miles, km), possibly from merging multiple data sets (note: the Mars Climate Orbiter was lost because of a unit-conversion error, so even NASA rocket scientists can make this mistake)
- Use of codes like 0, -1, -99999 or 99999 to mean something non-numeric like "not applicable" or "column unavailable", dumped into a linear model along with valid data
Writing a spec for what counts as "valid data" for each column can help you tag invalid data. For instance, a person's height in cm should be in a range, say, 100-300 cm. If you find 1.8 for height, that's an entry error; while you could assume it was 1.8 m and alter it to 180, I'd say it is usually safer to throw it out, and it is best to document as much of the filtering as possible.
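A minimal sketch of such a column spec and a residual-normality check in base R (made-up records and hypothetical column names; the 100-300 cm range is just the example above):

# made-up records; the spec encodes example ranges, adapt it to your own variables
dat <- data.frame(height_cm = c(172, 1.8, 180, 165, 158, 99999, 181, 176),
                  weight_kg = c(70, 80, 75, 68, -1, 72, 85, 78))

spec <- list(height_cm = c(100, 300), weight_kg = c(30, 250))   # "valid data" ranges

valid <- Reduce(`&`, lapply(names(spec), function(v)
  dat[[v]] >= spec[[v]][1] & dat[[v]] <= spec[[v]][2]))

dat[!valid, ]              # records failing the spec; document why they are dropped
clean <- dat[valid, ]

fit <- lm(weight_kg ~ height_cm, data = clean)
qqnorm(resid(fit)); qqline(resid(fit))   # eyeball normality of the residuals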
2,182
How should outliers be dealt with in linear regression analysis?
For a linear regression you could use a repeated median straight-line fit (the Siegel repeated median estimator), which has a very high breakdown point and so remains reliable even when a large fraction of the points are outliers.
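A compact base-R sketch of a repeated-median line fit (toy simulated data; for real work a packaged, well-tested implementation is preferable):

repeated_median_fit <- function(x, y) {
  n <- length(x)
  inner <- sapply(seq_len(n), function(i)
    median((y[-i] - y[i]) / (x[-i] - x[i])))   # median slope from point i to all others
  b1 <- median(inner)                          # outer median gives the slope
  b0 <- median(y - b1 * x)                     # intercept taken as a median as well
  c(intercept = b0, slope = b1)
}

set.seed(2)
x <- 1:20
y <- 3 + 0.5 * x + rnorm(20, sd = 0.3)
y[c(4, 17)] <- y[c(4, 17)] + 15                # two wild outliers
repeated_median_fit(x, y)                      # still close to (3, 0.5)
coef(lm(y ~ x))                                # ordinary least squares is pulled away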
2,183
How should outliers be dealt with in linear regression analysis?
Statistical tests that can be used as a basis for exclusion:
- standardised residuals
- leverage statistics
- Cook's distance, which is a combination of the two above
From experience, exclusion should be limited to instances of incorrect data entry. Reweighting outliers in the linear regression model is a very good compromise method. The application of this in R is offered by Rob. A great example is here: http://www.ats.ucla.edu/stat/r/dae/rreg.htm. If exclusion is necessary, one rule of thumb relates to the DFBETA statistics (which measure the change in a coefficient estimate when the observation is deleted): if the absolute value of the standardised DFBETA exceeds 2/sqrt(n), that supports removal of the observation.
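A minimal sketch of these diagnostics plus the reweighting alternative, in base R and MASS (simulated data with one bad entry; the 2/sqrt(n) DFBETAS cut-off is the rule of thumb quoted above):

library(MASS)                      # rlm() performs the robust, reweighted fit

set.seed(3)
n <- 60
x <- rnorm(n)
y <- 2 + x + rnorm(n)
y[10] <- y[10] - 12                # one incorrect data entry

fit <- lm(y ~ x)

head(rstandard(fit))               # standardised residuals
head(hatvalues(fit))               # leverage
head(cooks.distance(fit))          # Cook's distance
which(abs(dfbetas(fit)[, "x"]) > 2 / sqrt(n))   # DFBETAS rule of thumb

rfit <- rlm(y ~ x)                 # downweights the outlier instead of dropping it
rbind(ols = coef(fit), robust = coef(rfit))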
2,184
How should outliers be dealt with in linear regression analysis?
In linear regression we can handle outliers using the following steps (a rough sketch of the loop is given below):
1. Using the training data, find the line or hyperplane that best fits it.
2. Find the points that lie far away from that line or hyperplane.
3. Remove the points that are very far away, treating them as outliers, i.e. D(train) = D(train) - outliers.
4. Retrain the model and go back to step 1, repeating until no extreme points remain.
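A rough sketch of that loop in base R (toy simulated data; the 3-standard-deviation residual rule is one arbitrary choice of "very far away", and repeated deletion like this can be aggressive, so use it with care):

set.seed(4)
x <- rnorm(80)
y <- 1 + 2 * x + rnorm(80)
y[c(3, 50)] <- y[c(3, 50)] + 20          # two planted outliers
dat <- data.frame(x, y)

repeat {
  fit <- lm(y ~ x, data = dat)
  r   <- resid(fit)
  out <- abs(r) > 3 * sd(r)              # points "very far" from the fitted line
  if (!any(out)) break                   # stop once nothing is flagged
  dat <- dat[!out, ]                     # D(train) = D(train) - outliers
}
coef(fit)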
2,185
How to interpret an inverse covariance or precision matrix?
There are basically two things to be said. The first is that if you look at the density for the multivariate normal distribution (with mean 0 here) it is proportional to $$\exp\left(-\frac{1}{2}x^T P x\right)$$ where $P = \Sigma^{-1}$ is the inverse of the covariance matrix, also called the precision. This matrix is positive definite and defines via $$(x,y) \mapsto x^T P y$$ an inner product on $\mathbb{R}^p$. The resulting geometry, which gives specific meaning to the concept of orthogonality and defines a norm related to the normal distribution, is important; to understand, for instance, the geometric content of LDA you need to view things in the light of the geometry given by $P$. The other thing to be said is that the partial correlations can be read off directly from $P$, see here. The same Wikipedia page notes that the partial correlations, and thus the entries of $P$, have a geometrical interpretation in terms of the cosine of an angle. What is, perhaps, more important in the context of partial correlations is that the partial correlation between $X_i$ and $X_j$ is 0 if and only if entry $i,j$ in $P$ is zero. For the normal distribution the variables $X_i$ and $X_j$ are then conditionally independent given all the other variables. This is what Steffen's book, which I referred to in the comment above, is all about: conditional independence and graphical models. It has a fairly complete treatment of the normal distribution, but it may not be that easy to follow.
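A small numerical illustration of the partial-correlation reading of $P$ (simulated trivariate normal; the scaling $-P_{ij}/\sqrt{P_{ii}P_{jj}}$ is the standard formula for the partial correlation of $X_i$ and $X_j$ given the rest):

library(MASS)
set.seed(5)
Sigma <- matrix(c(1.0, 0.5, 0.3,
                  0.5, 1.0, 0.4,
                  0.3, 0.4, 1.0), 3, 3)
P <- solve(Sigma)                              # precision matrix

pcor <- -P / sqrt(diag(P) %o% diag(P))         # partial correlations from P
diag(pcor) <- 1
pcor

# cross-check via regression residuals for the pair (X1, X2) given X3:
X <- mvrnorm(1e5, mu = rep(0, 3), Sigma = Sigma)
cor(resid(lm(X[, 1] ~ X[, 3])), resid(lm(X[, 2] ~ X[, 3])))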
2,186
How to interpret an inverse covariance or precision matrix?
I like this probabilistic graphical model to illustrate NRH's point that the partial correlation is zero if and only if X is conditionally independent of Y given Z, under the assumption that all involved variables are multivariate Gaussian (the property does not hold in the general case). (In the figure from that talk, the $y_i$ are Gaussian random variables; ignore T and k.) Source: David MacKay's talk on Gaussian Process Basics, around the 25th minute.
2,187
How to interpret an inverse covariance or precision matrix?
The interpretation based on partial correlations is probably the most statistically useful, since it applies to all multivariate distributions. In the special case of the multivariate Normal distribution, zero partial correlation corresponds to conditional independence. You can derive this interpretation by using the Schur complement to get a formula for the entries of the concentration matrix in terms of the entries of the covariance matrix. See http://en.wikipedia.org/wiki/Schur_complement#Applications_to_probability_theory_and_statistics
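For concreteness, here is the block-matrix identity being referred to (standard Schur-complement algebra, written out here rather than quoted from the linked page): partition the variables into blocks 1 and 2, with
$$\Sigma=\begin{pmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22}\end{pmatrix}, \qquad \left(\Sigma^{-1}\right)_{11}=\left(\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right)^{-1}.$$
The upper-left block of the concentration matrix is thus the inverse of the conditional covariance of block 1 given block 2, and rescaling the off-diagonal entries of $P=\Sigma^{-1}$ as $-P_{ij}/\sqrt{P_{ii}P_{jj}}$ yields the partial correlations.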
2,188
How to interpret an inverse covariance or precision matrix?
The covariance matrix represents relations between all pairs of variables, while the inverse covariance shows the relations of each element with its neighbours (as Wikipedia puts it, partial/pairwise relations). I borrow the following example from here, at 24:10. Imagine 5 masses connected together by 6 springs, oscillating around. The covariance matrix contains correlations between all the masses: if one moves right, the others may also move right. The inverse covariance matrix, by contrast, captures the relations between those masses that are connected by the same spring (neighbours); it contains many zeros, and its entries are not necessarily positive.
2,189
How to interpret an inverse covariance or precision matrix?
Bar-Shalom and Fortmann (1988) make mention of the inverse covariance in the context of Kalman filtering as follows: ...[T]here is a recursion for the inverse covariance (or information matrix) $\mathbf{P}^{-1}(k+1|k+1) = \mathbf{P}^{-1}(k+1|k) + \mathbf{H}'(k+1) \mathbf{R}^{-1}(k+1)\mathbf{H}(k+1)$ ...Indeed, a complete set of prediction and update equations, known as the information filter[8, 29, 142], can be developed for the inverse covariance and a transformed state vector $\mathbf{P}^{-1}\hat{\mathbf{x}}$. The book is indexed at Google.
2,190
Euclidean distance is usually not good for sparse data (and more general case)?
Here is a simple toy example illustrating the effect of dimension in a discrimination problem, e.g. the problem you face when you want to say whether something is observed or whether only a random effect is observed (this problem is a classic in science).

Heuristic. The key issue here is that the Euclidean norm gives the same importance to every direction. This constitutes a lack of prior, and as you certainly know, in high dimension there is no free lunch (i.e. if you have no prior idea of what you are searching for, then there is no reason why some noise would not look like what you are searching for; this is a tautology...). I would say that for any problem there is a limit on the information that is necessary to find something other than noise. This limit is related somehow to the "size" of the area you are trying to explore relative to the "noise" level (i.e. the level of uninformative content). In high dimension, if you have the prior that your signal is sparse, then you can remove (i.e. penalize) non-sparse vectors, either with a metric adapted to sparse vectors or by using a thresholding technique.

Framework. Assume that $\xi$ is a Gaussian vector with mean $\nu$ and diagonal covariance $\sigma^2 \mathrm{Id}$ ($\sigma$ is known) and that you want to test the simple hypothesis $$H_0: \;\nu=0\quad \text{vs}\quad H_{\theta}: \; \nu=\theta$$ (for a given $\theta\in \mathbb{R}^n$); $\theta$ is not necessarily known in advance.

Test statistic with energy. The intuition you certainly have is that it is a good idea to evaluate the norm/energy $\mathcal{E}_n=\frac{1}{n}\sum_{i=1}^n\xi_i^2$ of your observation $\xi$ to build a test statistic. Actually you can construct a standardized, centered (under $H_0$) version of the energy, $T_n=\frac{\sum_i\xi_i^2-n\sigma^2}{\sqrt{2n\sigma^4}}$. That gives a critical region at level $\alpha$ of the form $\{T_n\geq v_{1-\alpha}\}$ for a well-chosen $v_{1-\alpha}$.

Power of the test and dimension. In this case it is an easy probability exercise to derive the power of your test from the following formula: $$P_{\theta}(T_n\leq v_{1-\alpha})=P\left(Z\leq \frac{v_{1-\alpha}}{\sqrt{1+2\|\theta\|_2^2/(n\sigma^2)}}-\frac{\|\theta\|^2_2}{\sqrt{2n\sigma^4+4\sigma^2\|\theta\|_2^2}}\right)$$ with $Z$ a standardized sum of $n$ iid random variables ($\mathbb{E}[Z]=0$ and $Var(Z)=1$). This means that the power of your test is increased by the energy of your signal $\|\theta\|^2_2$ and decreased by $n$. Practically speaking, this means that when you increase the size $n$ of your problem, if it does not increase the strength of the signal at the same time, then you are adding uninformative information to your observation (or you are reducing the proportion of useful information in the information you have): this is like adding noise and it reduces the power of the test (i.e. it becomes more likely that you will say nothing is observed while there is actually something).

Toward a test with a threshold statistic. If you do not have much energy in your signal, but if you know a linear transformation that can concentrate this energy in a small part of your signal, then you can build a test statistic that only evaluates the energy in that small part of your signal. If you know in advance where it is concentrated (for example you know there cannot be high frequencies in your signal) then you can obtain a power as in the preceding test with $n$ replaced by a small number and $\|\theta\|^2_2$ almost the same...
If you do not know it in advance, you have to estimate it; this leads to the well-known thresholding tests. Note that this argument is exactly at the root of many papers, such as:
A. Antoniadis, F. Abramovich, T. Sapatinas, and B. Vidakovic. Wavelet methods for testing in functional analysis of variance models. International Journal on Wavelets and its Applications, 93:1007–1021, 2004.
M. V. Burnashef and Begmatov. On a problem of signal detection leading to stable distribution. Theory of Probability and its Applications, 35(3):556–560, 1990.
Y. Baraud. Non asymptotic minimax rate of testing in signal detection. Bernoulli, 8:577–606, 2002.
J. Fan. Test of significance based on wavelet thresholding and Neyman's truncation. JASA, 91:674–688, 1996.
J. Fan and S-K. Lin. Test of significance when data are curves. JASA, 93:1007–1021, 1998.
V. Spokoiny. Adaptive hypothesis testing using wavelets. Annals of Statistics, 24(6):2477–2498, December 1996.
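A small simulation sketch of the dimension effect described above (signal energy held fixed while $n$ grows; with $\sigma=1$ the null distribution of $\sum_i \xi_i^2$ is exactly $\chi^2_n$, so the exact critical value can be used):

set.seed(6)
power_energy_test <- function(n, signal_energy = 9, alpha = 0.05, nrep = 2000) {
  theta <- c(sqrt(signal_energy), rep(0, n - 1))  # all the energy in one coordinate
  crit  <- qchisq(1 - alpha, df = n)              # exact null critical value
  stat  <- replicate(nrep, sum(rnorm(n, mean = theta)^2))
  mean(stat > crit)                               # Monte Carlo estimate of the power
}

sapply(c(5, 50, 500, 5000), power_energy_test)    # power decays toward alpha as n grows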
2,191
Euclidean distance is usually not good for sparse data (and more general case)?
I believe it is not so much the sparsity as the high dimensionality usually associated with sparse data. But maybe it is even worse when the data are very sparse, because then the distance between any two objects will likely be close to the quadratic mean of their lengths, or $$\lim_{dim\rightarrow\infty}d(x,y) = ||x-y|| \rightarrow_p \sqrt{||x||^2 + ||y||^2}$$ This equation holds trivially if $\forall_i\, x_i=0 \vee y_i=0$. If you increase the dimensionality and sparseness enough so that it holds for almost all attributes, the difference will be minimal. Even worse: if you normalized your vectors to have length $||x||=1$, then the Euclidean distance between any two objects will be close to $\sqrt{2}$ with high probability. So as a rule of thumb, for Euclidean distance to be usable (I'm not claiming useful or meaningful) the objects should be non-zero in $3/4$ of the attributes. Then there should be a reasonable number of attributes where $|y_i| \neq |x_i-y_i| \neq |x_i|$, so the vector difference becomes useful. This also applies to any other norm-induced difference. Because in the situation above $|x-y| \rightarrow_p |x + y|$, I don't think it is desirable behavior for distance functions to become largely independent of the actual difference, or for the absolute difference to converge to the absolute sum! A common solution is to use distances such as the cosine distance. On some data they work very well. Roughly speaking, they only look at attributes where both vectors are non-zero. An interesting approach, discussed in the reference below (they didn't invent it, but I like their experimental evaluation of its properties), is to use shared nearest neighbors. So even when vectors x and y have no attributes in common, they might have some common neighbors. Counting the number of objects connecting two objects is closely related to graph distances. There is a lot of discussion on distance functions in: Can Shared-Neighbor Distances Defeat the Curse of Dimensionality? M. E. Houle, H.-P. Kriegel, P. Kröger, E. Schubert and A. Zimek, SSDBM 2010, and if you prefer something other than scientific articles, also on Wikipedia: Curse of Dimensionality.
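A quick illustration of that concentration effect in R (random sparse unit vectors; with only 1% of the entries non-zero, nearly all pairwise Euclidean distances crowd around $\sqrt{2}\approx 1.414$):

set.seed(7)
d <- 10000                                    # dimensionality
sparse_unit_vec <- function(nnz = 100) {
  v <- numeric(d)
  v[sample(d, nnz)] <- runif(nnz)             # only 1% non-zero entries
  v / sqrt(sum(v^2))                          # normalise to unit length
}
X <- replicate(50, sparse_unit_vec())         # 50 random sparse unit vectors
summary(as.vector(dist(t(X))))                # distances concentrate near sqrt(2)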
2,192
Euclidean distance is usually not good for sparse data (and more general case)?
I'd suggest starting with cosine distance, not Euclidean, for any data where most vectors are nearly orthogonal, $x \cdot y \approx 0$. To see why, look at $|x - y|^2 = |x|^2 + |y|^2 - 2\ x \cdot y$. If $x \cdot y \approx 0$, this reduces to $|x|^2 + |y|^2$: a crummy measure of distance, as Anony-Mousse points out. Cosine distance amounts to using $x / |x|$, or projecting the data onto the surface of the unit sphere, so all $|x| = 1$. Then $|x - y|^2 = 2 - 2\ x \cdot y$: a quite different and usually better metric than plain Euclidean. $x \cdot y$ may be small, but it's not masked by the noisy $|x|^2 + |y|^2$. $x \cdot y$ is mostly near 0 for sparse data. For example, if $x$ and $y$ each have 100 terms non-zero and 900 zeros, they'll both be non-zero in only about 10 common terms (if the non-zero terms scatter randomly). Normalizing $x$ /= $|x|$ may be slow for sparse data; it's fast in scikit-learn. Summary: start with cosine distance, but don't expect wonders on any old data. Successful metrics require evaluation, tuning, and domain knowledge.
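A minimal base-R helper making the algebra above concrete (toy sparse vectors; in practice you would use a sparse-matrix library rather than dense vectors):

cosine_dist <- function(x, y) 1 - sum(x * y) / sqrt(sum(x^2) * sum(y^2))

x <- c(1, 0, 0, 2, 0, 0)      # two sparse vectors sharing one non-zero position
y <- c(0, 3, 0, 1, 0, 0)
cosine_dist(x, y)             # driven by the shared coordinate only
sqrt(sum((x - y)^2))          # plain Euclidean distance, dominated by the norms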
2,193
Euclidean distance is usually not good for sparse data (and more general case)?
Part of the curse of dimensionality is that data start to spread out away from the center. This is true for the multivariate normal and even when the components are IID (spherical normal). But strictly speaking about Euclidean distance, even in low-dimensional space, if the data have a correlation structure then Euclidean distance is not the appropriate metric. Suppose the data are multivariate normal with some nonzero covariances and, for the sake of argument, suppose the covariance matrix is known. Then the Mahalanobis distance is the appropriate distance measure, and it is not the same as Euclidean distance, to which it reduces only if the covariance matrix is proportional to the identity matrix.
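A minimal sketch with base R's mahalanobis() on strongly correlated bivariate data (the point c(1.5, -1.5) is an arbitrary illustration: unremarkable in Euclidean terms, but far from the data in Mahalanobis terms because it lies off the correlation axis):

library(MASS)
set.seed(8)
Sigma <- matrix(c(1, 0.9, 0.9, 1), 2, 2)
X  <- mvrnorm(500, mu = c(0, 0), Sigma = Sigma)
mu <- colMeans(X)

p <- c(1.5, -1.5)
sqrt(sum((p - mu)^2))                                  # Euclidean distance from the centre
sqrt(mahalanobis(rbind(p), center = mu, cov = cov(X))) # Mahalanobis distance (much larger)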
2,194
Euclidean distance is usually not good for sparse data (and more general case)?
I believe this is related to the curse of dimensionality / concentration of measure, but I can no longer find the discussion that motivates this remark. I believe there was a thread on metaoptimize, but I failed to Google it... For text data, normalizing the vectors using TF-IDF and then applying cosine similarity will probably yield better results than Euclidean distance, since long documents (with many words) can share the same topics as, and hence be very similar to, short documents with which they have a high number of words in common. Discarding the norm of the vectors helps in that particular case.
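A small base-R sketch of that recipe on a tiny toy corpus (real work would use a text-mining package, a proper tokenizer, and sparse matrices):

docs <- c("the cat sat on the mat",
          "the cat sat",
          "dogs chase cats in the park")
tokens <- strsplit(docs, " ")
vocab  <- sort(unique(unlist(tokens)))

tf    <- t(sapply(tokens, function(tk) table(factor(tk, levels = vocab))))
idf   <- log(length(docs) / colSums(tf > 0))      # rarer terms weigh more
tfidf <- sweep(tf, 2, idf, `*`)

cosine <- function(a, b) sum(a * b) / sqrt(sum(a^2) * sum(b^2))
cosine(tfidf[1, ], tfidf[2, ])   # long vs short document on the same topic: high
cosine(tfidf[1, ], tfidf[3, ])   # different topics: low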
2,195
Euclidean distance is usually not good for sparse data (and more general case)?
An axiomatic measure of sparsity is the so-called $\ell_0$ count, which counts the (finite) number of non-zero entries in a vector. With this measure, the vectors $(1,0,0,0)$ and $(0,21,0,0)$ possess the same sparsity, and absolutely not the same $\ell_2$ norm. And $(1,0,0,0)$ (very sparse) has the same $\ell_2$ norm as $\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)$, a very flat, non-sparse vector, and absolutely not the same $\ell_0$ count. This function, neither a norm nor a quasinorm, is nonsmooth and nonconvex. Depending on the domain, its names are legion, for instance: cardinality function, numerosity measure, or simply parsimony or sparsity. It is often considered impractical because its use leads to NP-hard problems. While standard distances or norms (such as the $\ell_2$ Euclidean distance) are more tractable, one of their issues is their $1$-homogeneity: $$\| a\,x\| = |a|\,\| x\|$$ for $a\neq 0$. This can be seen as non-intuitive, since multiplying by a scalar does not change the proportion of null entries in the data ($\ell_0$ is $0$-homogeneous). So in practice, some resort to combinations of $\ell_p(x)$ terms ($p \ge 1$), such as lasso, ridge or elastic-net regularizations. The $\ell_1$ norm (Manhattan or taxicab distance), or its smoothed avatars, is especially useful. Since the works of E. Candès and others, one can explain Why $\ell_1$ Is a Good Approximation to $\ell_0$: A Geometric Explanation. Others have used $p < 1$ in $\ell_p(x)$, at the price of non-convexity issues. Another interesting path is to re-axiomatize the notion of sparsity. One of the recent notable works is Comparing Measures of Sparsity, by N. Hurley et al., dealing with the sparsity of distributions. From six axioms (with funny names like Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates, and Babies), a couple of sparsity indices emerged: one based on the Gini index, another on norm ratios, especially the one-over-two $\frac{\ell_1}{\ell_2}$ norm ratio. Although not convex, some proofs of convergence and some historical references are detailed in Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed $\frac{\ell _1}{\ell_2}$ Regularization. Some pseudo-norm/norm ratios $\ell_p/\ell_q$ are provided in SPOQ ℓp-Over-ℓq Regularization for Sparse Signal Recovery applied to Mass Spectrometry.
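A tiny numerical companion in R for the example vectors above (the $\ell_1/\ell_2$ ratio equals 1 for a 1-sparse vector and $\sqrt{n}$ for a perfectly flat one, here 2 for $n=4$):

l0 <- function(v) sum(v != 0)
l1 <- function(v) sum(abs(v))
l2 <- function(v) sqrt(sum(v^2))

a    <- c(1, 0, 0, 0)
b    <- c(0, 21, 0, 0)
flat <- rep(1/2, 4)

sapply(list(a = a, b = b, flat = flat),
       function(v) c(l0 = l0(v), l2 = l2(v), l1_over_l2 = l1(v) / l2(v)))
# a and b share the same l0 count; a and flat share the same l2 norm;
# the l1/l2 ratio separates the sparse vectors (1) from the flat one (2)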
2,196
Euclidean distance is usually not good for sparse data (and more general case)?
The paper On the surprising behavior of distance metrics in high dimensional space discusses the behaviour of distance metrics in high-dimensional spaces. The authors examine the $L_k$ norms and propose the Manhattan $L_1$ norm as the most effective in high-dimensional spaces for clustering purposes. They also introduce a fractional norm $L_f$, similar to the $L_k$ norm but with $f \in (0,1)$. In short, they show that for high-dimensional spaces using the Euclidean norm as a default is probably not a good idea; we usually have little intuition in such spaces, and the exponential blow-up due to the number of dimensions is hard to take into account with the Euclidean distance.
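A rough R sketch of the paper's relative-contrast idea: how distinguishable the farthest and nearest points are from a random query, for a few exponents, as the dimension grows (uniform random data; the exact numbers are illustrative only, but the contrast typically shrinks with dimension, and fastest for the larger exponents):

set.seed(9)
minkowski <- function(a, b, k) sum(abs(a - b)^k)^(1 / k)

relative_contrast <- function(d, k, npts = 200) {
  X <- matrix(runif(npts * d), npts, d)
  q <- runif(d)                                     # a random query point
  dq <- apply(X, 1, minkowski, b = q, k = k)
  (max(dq) - min(dq)) / min(dq)
}

sapply(c(2, 20, 200), function(d)
  c(f0.5 = relative_contrast(d, 0.5),
    L1   = relative_contrast(d, 1),
    L2   = relative_contrast(d, 2)))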
2,197
Resampling / simulation methods: monte carlo, bootstrapping, jackknifing, cross-validation, randomization tests, and permutation tests
We can find different resampling methods, loosely called "simulation" methods, that depend upon resampling or shuffling of the samples. There may be differences in opinion with respect to proper terminology, but the following discussion tries to generalize and simplify what is available in the appropriate literature. Resampling methods are used in (1) estimating the precision / accuracy of sample statistics through using a subset of the data (e.g. jackknifing) or drawing randomly with replacement from a set of data points (e.g. bootstrapping), (2) exchanging labels on data points when performing significance tests (permutation tests, also called exact tests, randomization tests, or re-randomization tests), and (3) validating models by using random subsets (bootstrapping, cross validation) (see wikipedia: resampling methods).

BOOTSTRAPPING

"Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample." The method assigns measures of accuracy (defined in terms of bias, variance, confidence intervals, prediction error or some other such measure) to sample estimates. The basic idea of bootstrapping is that inference about a population from sample data (sample → population) can be modeled by resampling the sample data and performing inference on (resample → sample). As the population is unknown, the true error of a sample statistic against its population value is unknowable. In bootstrap resamples, the 'population' is in fact the sample, and this is known; hence the quality of inference from resample data → 'true' sample is measurable. (See wikipedia.)

Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4)

# To generate a single bootstrap sample
sample(Yvar, replace = TRUE)

# generate 1000 bootstrap samples
boot <- list()
for (i in 1:1000) boot[[i]] <- sample(Yvar, replace = TRUE)

In univariate problems, it is usually acceptable to resample the individual observations with replacement ("case resampling"). Here we resample the data with replacement, and the size of the resample must be equal to the size of the original data set. In regression problems, case resampling refers to the simple scheme of resampling individual cases, often rows of a data set. In regression problems, the explanatory variables are often fixed, or at least observed with more control than the response variable. Also, the range of the explanatory variables defines the information available from them. Therefore, to resample cases means that each bootstrap sample will lose some information (see Wikipedia). So it is logical to sample rows of the data rather than just Yvar.

Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4)
Xvar <- c(rep("A", 5), rep("B", 5), rep("C", 5))
mydf <- data.frame(Yvar, Xvar)

boot.samples <- list()
for (i in 1:10) {
  b.samples.cases <- sample(length(Xvar), length(Xvar), replace = TRUE)
  b.mydf <- mydf[b.samples.cases, ]
  boot.samples[[i]] <- b.mydf
}
str(boot.samples)
boot.samples[1]

You can see that some cases are repeated, as we are sampling with replacement.

"Parametric bootstrap - a parametric model is fitted to the data, often by maximum likelihood, and samples of random numbers are drawn from this fitted model. Usually the sample drawn has the same sample size as the original data. Then the quantity, or estimate, of interest is calculated from these data. This sampling process is repeated many times as for other bootstrap methods.
The use of a parametric model at the sampling stage of the bootstrap methodology leads to procedures which are different from those obtained by applying basic statistical theory to inference for the same model." (See Wikipedia.) The following is a parametric bootstrap under a normal-distribution assumption, with mean and standard deviation as parameters.

Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4)

# parameters for Yvar
mean.y <- mean(Yvar)
sd.y   <- sd(Yvar)

# To generate a single bootstrap sample with an assumed normal distribution (mean, sd)
rnorm(length(Yvar), mean.y, sd.y)

# generate 1000 bootstrap samples
boot <- list()
for (i in 1:1000) boot[[i]] <- rnorm(length(Yvar), mean.y, sd.y)

There are other variants of the bootstrap; please consult the Wikipedia page or any good statistical book on resampling.

JACKKNIFE

"The jackknife estimator of a parameter is found by systematically leaving out each observation from a dataset and calculating the estimate and then finding the average of these calculations. Given a sample of size N, the jackknife estimate is found by aggregating the estimates of each N − 1 estimate in the sample." (See wikipedia.) The following shows how to jackknife Yvar.

jackdf <- list()
jack <- numeric(length(Yvar) - 1)

for (i in 1:length(Yvar)) {
  for (j in 1:length(Yvar)) {
    if (j < i) {
      jack[j] <- Yvar[j]
    } else if (j > i) {
      jack[j - 1] <- Yvar[j]
    }
  }
  jackdf[[i]] <- jack
}
jackdf

"The regular bootstrap and the jackknife estimate the variability of a statistic from the variability of that statistic between subsamples, rather than from parametric assumptions. For the more general jackknife, the delete-m-observations jackknife, the bootstrap can be seen as a random approximation of it. Both yield similar numerical results, which is why each can be seen as an approximation to the other." See this question on Bootstrap vs Jackknife.

RANDOMIZATION TESTS

"In parametric tests we randomly sample from one or more populations. We make certain assumptions about those populations, most commonly that they are normally distributed with equal variances. We establish a null hypothesis that is framed in terms of parameters, often of the form m1 - m2 = 0. We use our sample statistics as estimates of the corresponding population parameters, and calculate a test statistic (such as a t test). For example, in Student's t-test for differences in means when variances are unknown but considered to be equal, the hypothesis of interest is H0: m1 = m2, and one alternative hypothesis would be stated as HA: m1 < m2. Given two samples drawn from populations 1 and 2, assuming that these are normally distributed populations with equal variances, and that the samples were drawn independently and at random from each population, a statistic whose distribution is known can be elaborated to test H0. One way to avoid these distributional assumptions has been the approach now called non-parametric, rank-order, rank-like, and distribution-free statistics. These distribution-free statistics are usually criticized for being less "efficient" than the analogous test based on assuming the populations to be normally distributed. Another alternative is the randomization approach: the process of randomly assigning ranks to observations independent of one's knowledge of which sample an observation is a member of. A randomization test makes use of such a procedure, but does so by operating on the observations rather than the joint ranking of the observations.
RANDOMIZATION TESTS

"In parametric tests we randomly sample from one or more populations. We make certain assumptions about those populations, most commonly that they are normally distributed with equal variances. We establish a null hypothesis that is framed in terms of parameters, often of the form m1 - m2 = 0. We use our sample statistics as estimates of the corresponding population parameters, and calculate a test statistic (such as a t test). For example, in Student's t-test for differences in means when variances are unknown but considered equal, the hypothesis of interest is H0: m1 = m2, and one alternative hypothesis would be stated as HA: m1 < m2. Given two samples drawn from populations 1 and 2, assuming that these are normally distributed populations with equal variances, and that the samples were drawn independently and at random from each population, then a statistic whose distribution is known can be elaborated to test H0.

One way to avoid these distributional assumptions has been the approach now called non-parametric, rank-order, rank-like, and distribution-free statistics. These distribution-free statistics are usually criticized for being less "efficient" than the analogous test based on assuming the populations to be normally distributed.

Another alternative is the randomization approach - a "process of randomly assigning ranks to observations independent of one's knowledge of which sample an observation is a member. A randomization test makes use of such a procedure, but does so by operating on the observations rather than the joint ranking of the observations. For this reason, the distribution of an analogous statistic (the sum of the observations in one sample) cannot be easily tabulated, although it is theoretically possible to enumerate such a distribution" (see).

Randomization tests differ from parametric tests in almost every respect. (1) There is no requirement that we have random samples from one or more populations—in fact we usually have not sampled randomly. (2) We rarely think in terms of the populations from which the data came, and there is no need to assume anything about normality or homoscedasticity. (3) Our null hypothesis has nothing to do with parameters, but is phrased rather vaguely, as, for example, the hypothesis that the treatment has no effect on how participants perform. (4) Because we are not concerned with populations, we are not concerned with estimating (or even testing) characteristics of those populations. (5) We do calculate some sort of test statistic, but we do not compare that statistic to tabled distributions. Instead, we compare it to the results we obtain when we repeatedly randomize the data across the groups, and calculate the corresponding statistic for each randomization. (6) Even more than parametric tests, randomization tests emphasize the importance of random assignment of participants to treatments." see.

A very popular type of randomization test is the permutation test. If our total sample size is 12, with 5 observations in one group and 7 in the other, the number of possible arrangements is C(12,5) = 792. If the group sizes had been 10 and 15, over 3.2 million arrangements would have been possible. This is a computing challenge: what then? Sample. When the universe of possible arrangements is too large to enumerate, why not sample arrangements from this universe independently and at random? The distribution of the test statistic over this series of samples can then be tabulated, its mean and variance computed, and the error rate associated with a hypothesis test estimated.
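As a quick check of those arrangement counts (an added aside, not from the original answer), R's choose() gives them directly:

choose(12, 5)     # 792 arrangements of 12 observations into groups of 5 and 7
choose(25, 10)    # 3268760, i.e. over 3.2 million arrangements for group sizes 10 and 15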
PERMUTATION TEST

According to wikipedia, "A permutation test (also called a randomization test, re-randomization test, or an exact test) is a type of statistical significance test in which the distribution of the test statistic under the null hypothesis is obtained by calculating all possible values of the test statistic under rearrangements of the labels on the observed data points. Permutation tests exist for any test statistic, regardless of whether or not its distribution is known. Thus one is always free to choose the statistic which best discriminates between hypothesis and alternative and which minimizes losses."

The difference between permutation and bootstrap is that bootstraps sample with replacement, and permutations sample without replacement. In either case, the time order of the observations is lost and hence volatility clustering is lost — thus assuring that the samples are under the null hypothesis of no volatility clustering. The permutations always have all of the same observations, so they are more like the original data than bootstrap samples. The expectation is that the permutation test should be more sensitive than a bootstrap test. The permutations destroy volatility clustering but do not add any other variability. See the question on permutation vs bootstrapping - "The permutation test is best for testing hypotheses and bootstrapping is best for estimating confidence intervals".

So to perform a permutation in this case we can just change replace = FALSE in the above bootstrap example.

Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4)

# Generate 1000 permutation samples
permutes <- list()
for (i in 1:1000) permutes[[i]] <- sample(Yvar, replace = FALSE)

In the case of more than one variable, just picking the rows and reshuffling their order will not make any difference, as the data will remain the same. So we reshuffle the y variable. Something like what you have done, but I do not think we need double reshuffling of both the x and y variables (as you have done).

Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4)
Xvar <- c(rep("A", 5), rep("B", 5), rep("C", 5))
mydf <- data.frame(Yvar, Xvar)

permt.samples <- list()
for(i in 1:10) {
   t.yvar <- Yvar[sample(length(Yvar), length(Yvar), replace = FALSE)]
   b.df <- data.frame(Xvar, t.yvar)
   permt.samples[[i]] <- b.df
}
str(permt.samples)
permt.samples[1]
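To turn permuted data sets like these into an actual test, here is a minimal added sketch (my own illustration, not code from the original answer) that compares the observed one-way ANOVA F statistic with its permutation distribution:

observed.F <- anova(lm(Yvar ~ Xvar, data = mydf))$"F value"[1]
perm.F <- numeric(1000)
for (i in 1:1000) {
  y.perm <- sample(Yvar)                               # permute the responses
  perm.F[i] <- anova(lm(y.perm ~ Xvar))$"F value"[1]   # F statistic under relabelling
}
mean(perm.F >= observed.F)                             # permutation p-value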
MONTE CARLO METHODS

"Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results; typically one runs simulations many times over in order to obtain the distribution of an unknown probabilistic entity. The name comes from the resemblance of the technique to the act of playing and recording results in a real gambling casino." see Wikipedia

"In applied statistics, Monte Carlo methods are generally used for two purposes: (1) To compare competing statistics for small samples under realistic data conditions. Although Type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions. (2) To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions.

Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice – or more frequently – for the efficiency of not having to track which permutations have already been selected)."

Both MC and permutation tests are sometimes collectively called randomization tests. The difference is that in MC we sample the permutation samples, rather than using all possible combinations (see).

CROSS VALIDATION

The idea behind cross validation is that models should be tested with data that were not used to fit the model. Cross validation is perhaps most often used in the context of prediction.

"Cross-validation is a statistical method for validating a predictive model. Subsets of the data are held out for use as validating sets; a model is fit to the remaining data (a training set) and used to predict for the validation set. Averaging the quality of the predictions across the validation sets yields an overall measure of prediction accuracy. One form of cross-validation leaves out a single observation at a time; this is similar to the jackknife. Another, K-fold cross-validation, splits the data into K subsets; each is held out in turn as the validation set." see Wikipedia.

Cross validation is usually done with quantitative data. You can convert your qualitative (factor) data to quantitative in some way in order to fit a linear model and test this model. The following is a simple hold-out strategy where 50% of the data is used for model fitting while the rest is used for testing. Let's assume Xvar is also a quantitative variable.

Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4)
Xvar <- c(rep(1, 5), rep(2, 5), rep(3, 5))
mydf <- data.frame(Yvar, Xvar)

training.id <- sample(1:nrow(mydf), round(nrow(mydf)/2, 0), replace = FALSE)
test.id <- setdiff(1:nrow(mydf), training.id)

# training dataset
mydf.train <- mydf[training.id, ]

# testing dataset
mydf.test <- mydf[test.id, ]

Unlike bootstrap and permutation tests, in cross-validation the datasets used for training and testing are different. The following figure (not reproduced in this text) shows a summary of resampling in the different methods. Hope this helps a bit.
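Picking up the hold-out split constructed above (mydf.train / mydf.test), a possible continuation (added here as a sketch, not part of the original answer) is to fit a simple linear model on the training half and measure its prediction error on the test half:

fit <- lm(Yvar ~ Xvar, data = mydf.train)     # fit on the training half
pred <- predict(fit, newdata = mydf.test)     # predict the held-out half
sqrt(mean((mydf.test$Yvar - pred)^2))         # root mean squared prediction error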
2,198
Resampling / simulation methods: monte carlo, bootstrapping, jackknifing, cross-validation, randomization tests, and permutation tests
Here's my contribution.

Data

Yvar <- c(8,9,10,13,12, 14,18,12,8,9, 1,3,2,3,4)
Xvar <- rep(LETTERS[1:3], each=5)
mydf <- data.frame(Yvar, Xvar)

Monte Carlo

I see Monte Carlo as a method to obtain a distribution of an (outcome) random variable, which is the result of a nontrivial function of other (input) random variables. I don't immediately see an overlap with the current ANOVA analysis; probably other forum members can give their input here.

Bootstrapping

The purpose is to have an idea of the uncertainty of a statistic calculated from an observed sample. For example: we can calculate that the sample mean of Yvar is 8.4, but how certain are we of the population mean for Yvar? The trick is to do as if the sample is the population, and sample many times from that fake population.

n <- 1000
bootstrap_means <- numeric(length=n)
for(i in 1:n){
   bootstrap_sample <- sample(x=Yvar, size=length(Yvar), replace=TRUE)
   bootstrap_means[i] <- mean(bootstrap_sample)
}
hist(bootstrap_means)

We just took samples and didn't assume any parametric distribution. This is the nonparametric bootstrap. If you feel comfortable assuming, for example, that Yvar is normally distributed, you can also sample from a normal distribution (rnorm(...)) using the estimated mean and standard deviation; this would be the parametric bootstrap. Other users might perhaps give applications of the bootstrap with respect to the effect sizes of the Xvar levels?

Jackknifing

The jackknife seems to be a bit outdated. Just for completeness, you could compare it more or less to the bootstrap, but the strategy here is to see what happens if we leave out one observation (and repeat this for each observation).

Cross-validation

In cross-validation, you split your (usually large) dataset into a training set and a validation set, to see how well your estimated model is able to predict the values in the validation set. I personally haven't yet seen an application of cross-validation to ANOVA, so I prefer to leave this part to others.

Randomization/permutation tests

Be warned: terminology is not agreed upon. See Difference between Randomization test and Permutation test. The null hypothesis would be that there is no difference between the populations of groups A, B and C, so it shouldn't matter if we randomly exchange the labels of the 15 values of Xvar. If the originally observed F value (or another statistic) doesn't agree with those obtained after randomly exchanging labels, then it probably did matter, and the null hypothesis can be rejected.

observed_F_value <- anova(lm(Yvar ~ Xvar))$"F value"[1]

n <- 10000
permutation_F_values <- numeric(length=n)

for(i in 1:n){
   # note: the sample function without extra parameters defaults to a permutation
   temp_fit <- anova(lm(Yvar ~ sample(Xvar)))
   permutation_F_values[i] <- temp_fit$"F value"[1]
}

hist(permutation_F_values, xlim=range(c(observed_F_value, permutation_F_values)))
abline(v=observed_F_value, lwd=3, col="red")
cat("P value: ", sum(permutation_F_values >= observed_F_value), "/", n, "\n", sep="")

Be careful with the way you reassign the labels in the case of complex designs, though. Also note that in the case of unequal variances, the null hypothesis of exchangeability is not true in the first place, so this permutation test wouldn't be correct.

Here we did not explicitly go through all possible permutations of the labels; this is a Monte Carlo estimate of the P-value. With small datasets you can go through all possible permutations, but the R code above is a bit easier to understand.
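Since the cross-validation part is left open above, here is a minimal added sketch (my own illustration, not from the original answer) of 5-fold cross-validation for the one-way model Yvar ~ Xvar, scoring mean squared prediction error:

set.seed(1)
folds <- sample(rep(1:5, length.out = nrow(mydf)))   # randomly assign the 15 rows to 5 folds
cv_mse <- numeric(5)
for (k in 1:5) {
  train <- mydf[folds != k, ]
  test  <- mydf[folds == k, ]
  fit   <- lm(Yvar ~ Xvar, data = train)             # fit on four folds
  pred  <- predict(fit, newdata = test)              # predict the held-out fold
  cv_mse[k] <- mean((test$Yvar - pred)^2)
}
mean(cv_mse)                                         # cross-validated prediction error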
2,199
How to generate uniformly distributed points on the surface of the 3-d unit sphere?
A standard method is to generate three standard normals and construct a unit vector from them. That is, when $X_i \sim N(0,1)$ and $\lambda^2 = X_1^2 + X_2^2 + X_3^2$, then $(X_1/\lambda, X_2/\lambda, X_3/\lambda)$ is uniformly distributed on the sphere. This method works well for $d$-dimensional spheres, too.

In 3D you can use rejection sampling: draw $X_i$ from a uniform$[-1,1]$ distribution until the length of $(X_1, X_2, X_3)$ is less than or equal to 1, then--just as with the preceding method--normalize the vector to unit length. The expected number of trials per spherical point equals $2^3/(4 \pi / 3)$ = 1.91. In higher dimensions the expected number of trials gets so large this rapidly becomes impracticable.

There are many ways to check uniformity. A neat way, although somewhat computationally intensive, is with Ripley's K function. The expected number of points within (3D Euclidean) distance $\rho$ of any location on the sphere is proportional to the area of the sphere within distance $\rho$, which equals $\pi\rho^2$. By computing all interpoint distances you can compare the data to this ideal. General principles of constructing statistical graphics suggest a good way to make the comparison is to plot variance-stabilized residuals $e_i(d_{[i]} - e_i)$ against $i = 1, 2, \ldots, n(n-1)/2=m$ where $d_{[i]}$ is the $i^\text{th}$ smallest of the mutual distances and $e_i = 2\sqrt{i/m}$. The plot should be close to zero. (This approach is unconventional.)

Here is a picture of 100 independent draws from a uniform spherical distribution obtained with the first method:

Here is the diagnostic plot of the distances:

The y scale suggests these values are all close to zero.

Here is the accumulation of 100 such plots to suggest what size deviations might actually be significant indicators of non-uniformity:

(These plots look an awful lot like Brownian bridges...there may be some interesting theoretical discoveries lurking here.)

Finally, here is the diagnostic plot for a set of 100 uniform random points plus another 41 points uniformly distributed in the upper hemisphere only:

Relative to the uniform distribution, it shows a significant decrease in average interpoint distances out to a range of one hemisphere. That in itself is meaningless, but the useful information here is that something is non-uniform on the scale of one hemisphere. In effect, this plot readily detects that one hemisphere has a different density than the other. (A simpler chi-square test would do this with more power if you knew in advance which hemisphere to test out of the infinitely many possible ones.)
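A small R sketch of the first (Gaussian) method described above; this is my own illustration, not code from the original answer:

n <- 100
X <- matrix(rnorm(3 * n), ncol = 3)      # three independent standard normals per point
P <- X / sqrt(rowSums(X^2))              # normalize each row to unit length
head(rowSums(P^2))                       # each squared length should be 1 (up to rounding)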
2,200
How to generate uniformly distributed points on the surface of the 3-d unit sphere?
Here is some rather simple R code

n <- 100000                    # large enough for meaningful tests
z <- 2*runif(n) - 1            # uniform on [-1, 1]
theta <- 2*pi*runif(n) - pi    # uniform on [-pi, pi]
x <- sin(theta)*sqrt(1-z^2)    # based on angle
y <- cos(theta)*sqrt(1-z^2)

It is very simple to see from the construction that $x^2+y^2 = 1-z^2$ and so $x^2+y^2+z^2=1$, but if it needs to be tested then

mean(x^2+y^2+z^2)  # should be 1
var(x^2+y^2+z^2)   # should be 0

and it is easy to test that each of $x$ and $y$ is uniformly distributed on $[-1,1]$ ($z$ obviously is) with

plot.ecdf(x)  # should be uniform on [-1, 1]
plot.ecdf(y)
plot.ecdf(z)

Clearly, given a value of $z$, $x$ and $y$ are uniformly distributed around a circle of radius $\sqrt{1-z^2}$, and this can be tested by looking at the distribution of the arctangent of their ratio. But since $z$ has the same marginal distribution as $x$ and as $y$, a similar statement is true for any pair, and this too can be tested.

plot.ecdf(atan2(x,y))  # should be uniform on [-pi, pi]
plot.ecdf(atan2(y,z))
plot.ecdf(atan2(z,x))

If still unconvinced, the next steps would be to look at some arbitrary 3-D rotation or how many points fell within a given solid angle, but that starts to get more complicated, and I think is unnecessary.
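As one possible version of the solid-angle check mentioned at the end (an added sketch, not in the original answer): the fraction of the sphere's area with $z$ above a cutoff $c$ is $(1-c)/2$, so the observed fraction of points in that cap should match it closely.

c0 <- 0.5
mean(z > c0)     # observed fraction of points in the cap z > 0.5
(1 - c0)/2       # exact area fraction of that cap, here 0.25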