Dataset schema (column: type, range of values or string lengths):
idx: int64, 1 to 56k
question: string, 15 to 155 characters
answer: string, 2 to 29.2k characters
question_cut: string, 15 to 100 characters
answer_cut: string, 2 to 200 characters
conversation: string, 47 to 29.3k characters
conversation_cut: string, 47 to 301 characters
51,401
Is R output reliable (especially IRT package ltm) [duplicate]
R is widely used in scientific circles for published papers. R stores your data in RAM, so either it will be able to process your data set or it won't -- depending on whether the data and processing fit in memory -- there is no degraded mode where you get results but they are less accurate. (Technically, there are packages that let you work with larger data sets than fit in memory, but it's not trivial to use them.) There are almost always choices of packages that do similar tasks, so if you are very concerned about ltm, you can also look into other packages that do IRT. A quick search on my machine brings up packages MCMCpack, psych, and KernSmoothIRT, and there are probably others if you look on CRAN. Analyze your data with two packages to make sure that the answers are in reasonable agreement. The beauty of R is that it's free, so you can try it and see. And the packages are free, so you can try more than one. If your data set is too large, or if you're not satisfied with the results, you've only lost a bit of time.
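One concrete way to act on the "try it and see" advice is a parameter-recovery check: simulate item responses from a 2PL model with known parameters and confirm that ltm recovers them. This complements, rather than replaces, the cross-package comparison suggested above. A minimal sketch, assuming ltm's formula interface ltm(data ~ z1) for a 2PL fit; the item count and sample size are arbitrary:

library(ltm)
set.seed(1)
n <- 2000; k <- 10
a <- runif(k, 0.8, 2)               # true discriminations
b <- rnorm(k)                       # true difficulties
theta <- rnorm(n)                   # latent abilities
p <- plogis(outer(theta, b, "-") * rep(a, each = n))
dat <- (matrix(runif(n * k), n, k) < p) * 1   # simulated 0/1 responses
fit <- ltm(dat ~ z1)                # 2PL fit
cbind(true_b = b, true_a = a, coef(fit))      # recovered vs. true parameters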
51,402
Is R output reliable (especially IRT package ltm) [duplicate]
If you, or anyone else, has a question about the results of an analysis in R, you (or they) can always look at the source code to see exactly what computations are being made. With any proprietary software you have to take their word that it is doing the correct things.

> library(fortunes)
> fortune(102)

Mingzhai Sun: When you use it [R], since it is written by so many authors, how do you know that the results are trustable?
Bill Venables: The R engine [...] is pretty well uniformly excellent code but you have to take my word for that. Actually, you don't. The whole engine is open source so, if you wish, you can check every line of it. If people were out to push dodgy software, this is not the way they'd go about it.
-- Mingzhai Sun and Bill Venables, R-help (January 2004)
51,403
Is R output reliable (especially IRT package ltm) [duplicate]
I might add that in recent IRT work I have been undertaking, when all specifications are made the same, I have found that 2PL IRT results from the ltm package map onto the same analyses conducted in the Mplus statistical package extremely well, including model solution indices as well as parameter estimates.
51,404
How is CLT related to the condition of data (normality assumption)?
CLT states that if sample is big enough, our data approximates normal distribution

That is false. If you take a very large sample from a non-normally distributed population, the empirical distribution of the sample has a high probability of being close to that of the population, which is not normal. It does not in any way converge to the normal distribution. The CLT says that the sampling distribution of the sample mean, or of the sample sum, approaches the normal distribution as the sample size grows. That is quite a different matter. As for $1000$ being a big enough sample to get a good approximation to the normal distribution, that is not true of very skewed population distributions. For example, if $X_1,X_2,X_3,\ldots\sim\operatorname{Poisson}(0.001)$ then $X_1+\cdots+X_{1000}\sim\operatorname{Poisson}(1),$ and that is not close to the normal distribution.
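A quick simulation makes the Poisson example concrete (the number of replications is an arbitrary choice):

set.seed(2)
sums <- replicate(1e4, sum(rpois(1000, 0.001)))  # each entry is a sum of 1000 Poisson(0.001) draws
table(sums) / length(sums)                       # strongly skewed: mostly 0s and 1s
dpois(0:5, lambda = 1)                           # matches the theoretical Poisson(1) pmf, not a normal curve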
51,405
How is CLT related to the condition of data (normality assumption)?
do I even need to check it for normality as CLT states that if sample is big enough, our data approximates normal distribution?

It is wrong that, if the sample is big enough, the distribution of the data/population approaches a normal distribution. Instead, the CLT relates to (the limit of) the mean of samples (or other types of sums of variables). But you are right that the sampling distribution of the test statistic, which is used to estimate parameters of the population distribution or to estimate the error/variance, will often approach a normal distribution independently of the underlying distribution of the population. So for large sample sizes, the violation of the assumption that the error distribution is normal becomes less of a problem. (Ironically, as the sample size grows, a test of normality becomes more powerful and more likely to reject the normality assumption.)
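A small simulation illustrates the parenthetical point: with a fixed, mild departure from normality, the Shapiro-Wilk test rejects more and more often as the sample size grows. The sample sizes and the gamma shape below are arbitrary choices, and shapiro.test accepts at most 5000 observations:

set.seed(3)
reject_rate <- function(n, reps = 200)
  mean(replicate(reps, shapiro.test(rgamma(n, shape = 20))$p.value < 0.05))
sapply(c(50, 500, 5000), reject_rate)   # rejection rate climbs toward 1 as n grows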
51,406
How is CLT related to the condition of data (normality assumption)?
As Dave mentioned, a very similar question on this topic was asked about a year ago, which is worth looking over. I happen to be one of the respondents, and you can read my answer here. The following is going to be in rough terms aimed at making things easily understandable. For more details and technicalities, see my answer (and attached comments) that I linked above.

Brief explanation

The CLT essentially states that the distribution of the mean of the data will become normal as the sample size increases; it says nothing about the distribution of the data itself. So you could have normally, exponentially, bimodally, etc. distributed data but, if your sample size is large enough, the distribution of the mean will be normal.

For example, say you are measuring a metric from a population that is exponentially distributed. Imagine you take 100 random samples from the population, where each sample is large enough for the CLT to apply. If you then compute the mean of each of those 100 samples, the CLT essentially states that those 100 means would be normally distributed. So the repeated measures of the mean are normally distributed, despite the fact that the underlying data are actually exponentially distributed. If this is still a bit confusing, I personally found this (quirky) video very helpful, as it does a great job illustrating these concepts.

So why does this all matter?

Knowing the means are normally distributed allows us to use a variety of parametric tests (ANOVA, t-test, etc.) which operate on the assumption that this is true. This is both the power and the pitfall of the CLT: the ability to compare non-normally distributed samples to one another. I think where this gets confusing is that just because you can compare two sample means to each other doesn't mean that you should (i.e. comparing the means of two exponential distributions might not tell you what you think it does, or two bimodal distributions, or a bimodal with a unimodal distribution, etc.).

The question most people should ask is, "Is the mean (or a difference in means) a useful metric given the distribution of my data?" Only if the answer to this question is yes should one proceed to compare means (thus relying on the CLT). This is where those Q-Q plots and tests for normality you mentioned come in. They examine the distribution of the data itself to see if it is normal. This is important, as the distribution of the data is critical to keep in mind when interpreting the results of a statistical test. If the data do not follow the distribution you expected (in this case normal), that comparison of means likely doesn't tell you what you think it does.
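A minimal sketch of the exponential example above: the data are exponential, yet the 100 sample means look roughly normal (the sample sizes are arbitrary):

set.seed(4)
means <- replicate(100, mean(rexp(500, rate = 1)))  # 100 sample means, each from n = 500 exponential draws
hist(means, breaks = 20)        # roughly bell-shaped around 1, the true mean
qqnorm(means); qqline(means)    # points fall close to the reference line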
51,407
Regression analysis for a massive dataset
The main thing to keep in mind is that with this amount of data, every coefficient will probably come out as statistically significant. In order to find out which regressors are really important (as contrasted with statistically significant), I recommend using a holdout sample: fit your model to only 4 million data points, predict the other million points and compare to the actual values. Do this for a couple of different models (using or not using regressors, transforming regressors etc.) and see which ones yield the best predictions, by e.g. calculating the Mean Absolute Deviation (MAD) between the predictions and the actual observations. Better yet: iterate this over the entire dataset five times, using a different million points as a holdout sample each time. This is known as "cross-validation" (five-fold cross-validation in this case).
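A minimal sketch of the five-fold cross-validation described above, assuming a data frame df with response y and regressors x1, ..., x6 (these names are hypothetical):

set.seed(5)
k <- 5
fold <- sample(rep(1:k, length.out = nrow(df)))   # assign each row to one of 5 folds
mad_cv <- function(formula) {
  errs <- sapply(1:k, function(i) {
    fit  <- lm(formula, data = df[fold != i, ])   # fit on 4/5 of the data
    pred <- predict(fit, newdata = df[fold == i, ])
    mean(abs(df$y[fold == i] - pred))             # mean absolute deviation on the holdout fold
  })
  mean(errs)
}
mad_cv(y ~ x1 + x2 + x3 + x4 + x5 + x6)   # full model
mad_cv(y ~ x1 + x2 + x3)                  # a smaller candidate model, for comparison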
51,408
Regression analysis for a massive dataset
You have 6 variables and 5 million data points, so your data set would take about half a gigabyte of memory ($\frac{5\cdot 10^6\cdot 16\cdot 6}{1024^2} \approx 458$ MB). So it is not that big for computers, which now usually have 4 GB of RAM as standard. The point I am trying to make is that although your data is big, it is not massive, and so you can do the usual regression analysis.
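For what it's worth, R stores numeric values as 8-byte doubles, so the 16-bytes-per-value figure above is a conservative upper bound; a bare numeric matrix of that size takes roughly half the quoted amount:

print(object.size(matrix(0, nrow = 5e6, ncol = 6)), units = "Mb")   # about 229 Mb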
51,409
t-test where one sample has zero variance?
This answer doesn't address the issue of why the second group has no variation (and I really do suggest you get to the bottom of that). If you're comfortable saying the yield of group 2 is definitely known to be 4.0, you can use a one-sample t-test to compare group 1 against that value.

Update

I should clarify some things, since this answer is getting more upvotes than it deserves. First, with the two-sample test you're testing the hypothesis that group 1 differs from group 2. With the one-sample test, you're testing the hypothesis that group 1 differs from a value of 4.0. These are subtly different questions. Second, I should stress that treating the value of 4.0 as known is an assumption, and the one-sample test is only valid if this assumption is correct. All statistical tests involve some assumptions like this, for instance assuming that sampling is not biased. It's your job as the analyst to be clear about what assumptions you're making, to decide whether you think they're justified, and to bear in mind that if the assumptions turn out to be wrong then so do your conclusions.
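In R, assuming the Group 1 yields sit in a vector group1 (a hypothetical name), the one-sample test described above is a one-liner:

t.test(group1, mu = 4)   # tests H0: the mean of group 1 equals 4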
51,410
t-test where one sample has zero variance?
It requires a bit of explaining how or why this second sample is so "precise", so to speak. Is this rounding error, or are there accidental replications? There is a lot of science going on, not explained, that would inform which option(s) would or would not work. However, it's not a dead end. Here are 3 suggestions (non-exhaustive), in order of difficulty depending on your chops and the software at hand.

1. Fit a t-test with an equal variance assumption: even if the assumption is clearly violated, you can argue that the test will be conservative. Similarly, you could jitter the response to within a rounding range, i.e. add random noise.

2. Use a resampling-based test: the permutation test is a powerful and flexible means to assess the test of mean difference under the assumption that the two samples (under the null) come from the same distribution; note that that distribution would be approximately normal with a "point mass" at X = 4.00 (a minimal sketch follows this answer). A Bayesian approach is similar to this option; it's just a convenient way to calculate the sampling distribution of the mean difference and make inference correspondingly.

3. Use expectation maximization: OK, so you rounded all the variability out of one sample - generally not a good idea. When you report the result as 4.00, we all expect that the result must not be 3.99 or 4.01, so for God's sake, use significant figures! Anyway, most of the tests above involve collecting the sample into a single group and assuming that it follows some kind of normal distribution - the distribution under the null. Clearly that's not the case: if a value of 4.00 is really 4 and could in fact be anywhere from 3.50 up to 4.49, then you have to integrate up whatever area of the normal curve of the assumed parametric framework at that iteration to find the true likelihood contribution for that data point. This is a somewhat standard, and most correct, approach to measurement error - which is the specific problem you have here - but it is sadly the most technically difficult to implement.
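A minimal sketch of the permutation test from option 2, assuming the yields are in hypothetical vectors group1 and group2:

set.seed(6)
obs <- mean(group1) - mean(group2)      # observed mean difference
pooled <- c(group1, group2)
n1 <- length(group1)
perm <- replicate(10000, {
  idx <- sample(length(pooled), n1)     # reshuffle the group labels
  mean(pooled[idx]) - mean(pooled[-idx])
})
mean(abs(perm) >= abs(obs))             # two-sided permutation p-value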
51,411
t-test where one sample has zero variance?
One solution is to use a one-sample t-test of the null hypothesis: $$\text{H}_{0}\text{: }\mu_{1} = 4\text{, with H}_{\text{A}}\text{: }\mu_{1} \ne 4$$ If $n_1=26$, $\bar{x}_1=3.865$ and $s_{1} = .24428$, our test statistic for this null would be: $$t = \frac{\bar{x}_{1}-4}{\frac{s_{1}}{\sqrt{n_{1}}}} = \frac{3.865-4}{\frac{.24428}{\sqrt{26}}} = -2.82$$ At the $\alpha=0.05$ level of significance we would reject the two-sided null hypothesis above, and conclude that we found evidence that Group 1 was sampled from a population with a mean different from (less than) 4. Alternately, you might try adding a tiny amount of artificial noise to give Group 2 a nearly, but not exactly, zero variance, and conduct a two-sample t-test assuming different variances (this will produce almost identical results to the one-sample test).
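For reference, the same calculation from the summary statistics in R, using the numbers given above:

n <- 26; xbar <- 3.865; s <- 0.24428
t_stat <- (xbar - 4) / (s / sqrt(n))   # about -2.82
2 * pt(-abs(t_stat), df = n - 1)       # two-sided p-value, roughly 0.009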
51,412
t-test where one sample has zero variance?
As others have already said, it would be better to address the issue with rounding to get the original data. However, if you have to work with the data you have, here's another fairly simple option that may have some advantages over the current answers: do a t-test under the assumption of equal variance, where the variance is calculated using only Group 1. Basically, use the procedure for unequal sample sizes with similar variances, but replace the variance of group 2 ($s^{2}_{X_{2}}$ on the wiki page) with the variance of group 1 ($s^{2}_{X_{1}}$). You would have to do this by hand, but it should be doable even in e.g. Excel if you don't have much experience with software like R or Python.

I think the problem with a pre-packaged t-test, either assuming equal or unequal variance, is that Group 2 is going to bias downwards the estimate of variance (assuming you don't believe Group 2 has 0 variance). Similarly, testing Group 1 against a fixed mean of 4 (like t.test(Group_1, mu=4)) ignores the variance in Group 2. More sophisticated solutions based on re-sampling, Bayesian statistics, etc. (see @AdamO's answer) may be better but more difficult to implement.

Sextus Empiricus in a comment says:

If group2 has an unexpectedly low variance, then it is arbitrary to just decide to replace it with the variance of group 1

I'd say that eventually every analysis has some arbitrary elements. The question is how you justify them and how they impact the interpretation. My suggestion assumes that the lack of variation in group 2 is unrealistic and all data points have been reset to 4.00. In such a case, I think using the variance from group 1 is more sensible than taking the values in group 2 at face value or, equivalently, testing if group 1 differs from the point value of 4.00. I think it is more reasonable to assume that different farms have more or less the same variance than to assume no variance at all. I guess you could even present results for a range of variances for group 2 (from no variance to the same variance as group 1) and make an informed decision based on that.

In the end it's up to the OP to decide what assumptions are more sensible and to make sure that collaborators are aware of those decisions. I would also consider whether a false positive (claiming the farms are different when they are not) or a false negative (claiming the farms are indistinguishable when in fact they differ) is more costly.
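A minimal sketch of this suggestion, assuming hypothetical vectors group1 and group2. Replacing the Group 2 variance in the pooled estimate with the Group 1 variance makes the pooled variance equal to the Group 1 variance:

n1 <- length(group1); n2 <- length(group2)
s2 <- var(group1)                                    # use Group 1's variance for both groups
t_stat <- (mean(group1) - mean(group2)) / sqrt(s2 * (1/n1 + 1/n2))
df <- n1 + n2 - 2
2 * pt(-abs(t_stat), df)                             # two-sided p-value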
51,413
How to plot $x^{1700}(1-x)^{300}$?
Stephan's answer about floating point is correct. As a work-around, you could plot the data on a logarithmic scale. Instead of plotting $$ x ^{1700} (1-x)^{300} $$ you would plot $$ 1700\log(x) + 300\log(1-x) $$ Working on a logarithmic scale can be nice when it keeps the data in a reasonable range for floating point arithmetic. Because $\log$ is monotonic increasing, values will retain the same ordering (any maxima occur at the same values of $x$), even though they're reported on a different scale.
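In R, the log-scale version can be drawn with curve(); the plotting range just has to stay strictly inside (0, 1):

curve(1700 * log(x) + 300 * log(1 - x), from = 0.001, to = 0.999, n = 1001,
      xlab = "x", ylab = "log of x^1700 (1-x)^300")   # the maximum is visible near x = 0.85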
51,414
How to plot $x^{1700}(1-x)^{300}$?
That likelihood function is proportional to a beta density with parameters $\alpha=1701, \beta=301$, so it can be plotted as a beta density, since a likelihood function is only defined up to proportionality (see: What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?). It might be more informative to plot the log-likelihood function. For reference, the R code used:

plot(function(x) dbeta(x, 1701, 301), from = 0, to = 1, col = "red", n = 1001,
     main = "Beta likelihood function")
plot(function(x) dbeta(x, 1701, 301, log = TRUE), from = 0, to = 1, col = "red", n = 1001,
     main = "Beta loglikelihood function")
51,415
How to plot $x^{1700}(1-x)^{300}$?
The $y$ value of your maximum (which indeed is at $x=0.85$) is $\exp(-845.42)\approx 10^{-367.16}$. The smallest double numbers R can work with are about $2\times 10^{-308}$. You are simply running out of number space. If you really want to plot this, use a dedicated package for high precision arithmetic.
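Both numbers are easy to verify in R:

.Machine$double.xmin                       # smallest positive normalised double, about 2.2e-308
exp(1700 * log(0.85) + 300 * log(0.15))    # the peak value of the curve underflows to 0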
51,416
How to plot $x^{1700}(1-x)^{300}$?
You can plot this curve accurately, on a linear scale. Let $a=1700$ and $b=300$ be the parameters. The largest value of $f(x)=x^a(1-x)^b$ for $0\lt x \lt 1$ is attained at $x_m=(a-1)/(a+b-2)$ (the mode of the corresponding Beta$(a,b)$ distribution). There, $$y_m = \log f(x_m) = a \log(x_m) + b \log(1-x_m)$$ is the logarithm of the curve's peak. Scale this up by defining $$f^{*}(x) = \exp(-y_m) f(x) = \exp(a \log(x) + b \log(1-x) - y_m).$$ This attains the value of $1$ at its peak. Thus, the plot of $f^{*}$ fits within the vertical range $[0,1].$ To obtain a plot of $f,$ all you need do is relabel the vertical axis (by multiplying all its values by $\exp(y_m).$) This approach works in any graphics environment.

Here is an example, computed entirely with double-precision arithmetic in R. You can succeed provided both $a$ and $b$ do not themselves require more digits to represent than IEEE supports: that is, they should be a couple of orders of magnitude less than $10^{15.6}.$ Here is an example with $a=10^8$ and $b=10^{13}:$ (It is no accident both curves appear to have the same shape: both will be extremely close to a Gaussian when both $a$ and $b$ are large. For all practical purposes, all we ever have to do is draw a Gaussian and then label both axes suitably according to the parameters $a$ and $b$.)

This method will work whenever you can easily compute $\log f(x)$ and find (or estimate) its maximum value -- and this will be the case in most applications. The R code below is merely to prove the concept: for general purpose work, the labeling algorithm will need to be a little fussier.

betaplot <- function(a, b, xlim=c(-4,4), scale=1, nticks=5, interval=2, ...) { # a,b>1
  n <- a + b
  mu <- a / n                                  # Mean
  sigma <- sqrt(((a / n) * (b / n)) / (n+1))   # SD
  xlim <- xlim * sigma + mu                    # Plot limits
  m <- (a - 1) / (n - 2)                       # Mode
  f <- function(x) a * log10(x) + b * log10(1-x)
  logmax <- round(f(m))                        # Nearest whole power of 10 to max
  #
  # The plot itself.
  #
  curve(exp(log(10) * (f(x) - logmax)), xlim=xlim, ylim=c(0,scale),
        ylab="", yaxt = "n", ...)
  #
  # Ticks and labels.
  #
  yticks <- seq(0, nticks) * interval
  yticks <- yticks[yticks <= 10*scale]
  rug(yticks/10, side=2, ticksize=-0.03)
  for (y in yticks) {
    if (y==0) {
      s <- 0
    } else if (y==10) {
      s <- bquote(10^.(logmax))
    } else {
      s <- bquote(.(y)%*%10^.(logmax-1))
    }
    mtext(s, side=2, line=1, at=y/10)
  }
}
#
# Examples.
#
betaplot(1700, 300, scale=0.8, nticks=4, interval=2, lwd=2)
betaplot(1e8, 1e13, scale=0.6, nticks=3, interval=2, lwd=2)
51,417
How to calculate the 4th quartile from median and IQR?
Note: In the following answer I assume that you only know the quantiles you mentioned and you do not know anything else about the distribution; for instance, you do not know whether the distribution is symmetric or what its pdf or its (centralized) moments are.

It is not possible to calculate the 4th quartile if you have only the median and the IQR. Let us look at the following definitions:

median = second quartile.
IQR = third quartile $-$ first quartile.

The 4th quartile is in neither of these two equations. Therefore, it is impossible to calculate it with the information given. Here is one example:

x <- c(1,2,3,4,5,6,7,8,9,10)
y <- c(1,2,3,4,5,6,7,8,9,20)
summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   1.00    3.25    5.50    5.50    7.75   10.00
summary(y)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   1.00    3.25    5.50    6.50    7.75   20.00

The first quartile is 3.25 for both "x" and "y". Also, the median is 5.50 for both. The third quartile is 7.75 for both, and the IQR is 7.75 $-$ 3.25 = 4.50 for both. However, the 4th quartile, which is also the maximum, is different, namely 10 and 20.

You can also look at boxplots of x and y and you will see that the first quartile, the second quartile (median) and the third quartile are equal. Therefore, you cannot conclude anything about the rest of the distribution of the data points.

library(ggplot2)
df <- data.frame(x, y)
p <- ggplot(stack(df), aes(x = ind, y = values)) + geom_boxplot()
p
51,418
How to calculate the 4th quartile from median and IQR?
@Ferdi is correct, but I think that you are asking the wrong question. I think you are confused because "quartile" seems to mean "4 of something". There are, indeed, 4 groups. But that means there are 3 divisions and, at least in what I've read, the term 4th quartile (as a number) is not used at all. If you do calculate the 4th quartile as a number, then you'd also want the 0th quartile, which would be the minimum. But I don't think that's what you want. In case that isn't clear, picture cutting a rectangle into 4 rectangles. You need three cuts to make four rectangles. If I have wrongly accused you of being confused, I apologize, but I've seen this confusion more than once.
How to calculate the 4th quartile from median and IQR?
@Ferdi is correct, but I think that you are asking the wrong question. I think you are confused because "quartile" seems to mean "4 of something". There are, indeed, 4 groups. But that means there a
How to calculate the 4th quartile from median and IQR? @Ferdi is correct, but I think that you are asking the wrong question. I think you are confused because "quartile" seems to mean "4 of something". There are, indeed, 4 groups. But that means there are 3 divisions and, at least in what I've read, the term 4th quartile (as a number) is not used at all. If you do calculate the 4th quartile as a number, then you'd also want the 0th quartile, which would be the minimum. But I don't think that's what you want. In case that isn't clear, picture cutting a rectangle into 4 rectangles. You need three cuts to make four rectangles. If I have wrongly accused you of being confused, I apologize, but I've seen this confusion more than once.
How to calculate the 4th quartile from median and IQR? @Ferdi is correct, but I think that you are asking the wrong question. I think you are confused because "quartile" seems to mean "4 of something". There are, indeed, 4 groups. But that means there a
51,419
How to calculate the 4th quartile from median and IQR?
The first quartile has 25% of the data below it, the 2nd quartile (= median) has 50% of the data below it, and the third quartile has 75% of the data below and 25% above. IQR = 3rd quartile - 1st quartile. A fourth quartile would be the maximum, which you can't get from the median and IQR. The IQR and median tell you very little about the shape of the distribution. You might be able to make an estimate if you know the shape of the distribution, but for many distributions the answer will be infinity. I suspect that the third quartile is what you really want. If you have the IQR and median and know the shape of the distribution, you may be able to estimate the third quartile: e.g. median plus half the IQR for a symmetric distribution. However, many distributions are not symmetric. Also, be careful in case you have been given the semi-interquartile range rather than the IQR.
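A quick numerical check of the "median plus half the IQR" rule for a symmetric distribution, using the standard normal as an example:

q <- qnorm(c(0.25, 0.5, 0.75))     # Q1, median, Q3 of the standard normal
q[2] + (q[3] - q[1]) / 2           # median plus half the IQR, about 0.674
qnorm(0.75)                        # the true third quartile; the two agree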
51,420
Why does dbeta not sum to 1?
The relevant property of a probability density is not that it sums (for evaluation on some particular $x$ values) to one, but that it integrates to one. If you evaluate a density $f$ at $x$ values that form a regular grid with grid width $\Delta x$, then you have very approximately $$ \int_{-\infty}^\infty f(x)\,dx \approx \sum_{i=1}^n f(x_i)\Delta x, $$ which is why your initial two examples sum to almost one: here we have $\Delta x=1$, so we have approximately $$ 1=\int_{-\infty}^\infty f(x)\,dx \approx \sum_{i=1}^n f(x_i). $$

Compare a finer grid for the normal case:

> sum(dnorm(seq(-10,30,by=0.1), mean = 10, sd = 2, log = FALSE))
[1] 10

And of course, if we include $\Delta x$ in your calculation for dbeta, then we again approximate the integral correctly:

> xx <- seq(0,1,by=0.01)
> sum(dbeta(xx, shape1 = 2, shape2 = 4, ncp = 0, log = FALSE)*mean(diff(xx)))
[1] 0.9998333
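If you want an exact check rather than a Riemann-sum approximation, numerical integration of the density gives 1 directly:

integrate(dbeta, lower = 0, upper = 1, shape1 = 2, shape2 = 4)   # 1 with a small absolute error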
51,421
Why does dbeta not sum to 1?
d* functions represent proportions only with a discrete response. In fact, your dnorm example just happens to sum to one, but

> sum(dnorm(seq(-10, 30, by = 0.01), mean = 10, sd = 2, log = FALSE))
[1] 100

is 100! The normal and beta distributions are continuous, not discrete. Therefore, instead of summing to 1, they must integrate to 1, which is different.
51,422
Why does dbeta not sum to 1?
dpois is the probability mass function (pmf) of a discrete Poisson distribution, which takes the non-negative integer values $0, 1, 2, \dots$. If you sum the probabilities over all these integers, you should indeed get 1. dnorm is the probability density function (pdf) of the normal distribution over the real numbers $(-\infty, \infty)$, and dbeta is the pdf of a Beta distribution over $(0, 1)$. These equal 1 only if you integrate them over their domain. pdfs are the continuous-distribution counterpart of pmfs, and the fact that sum(dnorm(-10:30, mean = 10, sd = 2, log = FALSE)) is about 1 is an artefact of the grid spacing of 1 rather than a general property (or someone giving you a trick question).
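A small added illustration of the pmf/pdf distinction (a sketch, not from the original answer): a pmf sums to 1 over its support, while a pdf integrates to 1 over its support.

sum(dpois(0:1000, lambda = 3))                               # essentially 1; the tail beyond 1000 is negligible
integrate(dnorm, lower = -10, upper = 30, mean = 10, sd = 2) # approximately 1, over the same range as the grid above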
Why does dbeta not sum to 1?
dpois is the probability mass function (pmf) a discrete Poisson distribution that can take integer values in $(0, \infty)$. If you sum over the probabilities for all integers, you should indeed get 1.
Why does dbeta not sum to 1? dpois is the probability mass function (pmf) of a discrete Poisson distribution, which takes the non-negative integer values $0, 1, 2, \dots$. If you sum the probabilities over all these integers, you should indeed get 1. dnorm is the probability density function (pdf) of the normal distribution over the real numbers $(-\infty, \infty)$, and dbeta is the pdf of a Beta distribution over $(0, 1)$. These equal 1 only if you integrate them over their domain. pdfs are the continuous-distribution counterpart of pmfs, and the fact that sum(dnorm(-10:30, mean = 10, sd = 2, log = FALSE)) is about 1 is an artefact of the grid spacing of 1 rather than a general property (or someone giving you a trick question).
Why does dbeta not sum to 1? dpois is the probability mass function (pmf) a discrete Poisson distribution that can take integer values in $(0, \infty)$. If you sum over the probabilities for all integers, you should indeed get 1.
51,423
Examples for Type I and Type II errors
A picture is worth a thousand words. Null hypothesis: patient is not pregnant. Image via Paul Ellis.
Examples for Type I and Type II errors
A picture is worth a thousand words. Null hypothesis: patient is not pregnant. Image via Paul Ellis.
Examples for Type I and Type II errors A picture is worth a thousand words. Null hypothesis: patient is not pregnant. Image via Paul Ellis.
Examples for Type I and Type II errors A picture is worth a thousand words. Null hypothesis: patient is not pregnant. Image via Paul Ellis.
51,424
Examples for Type I and Type II errors
Let's say you are testing a new drug for some disease. In a test of its effectiveness, a type I error would be to say it has an effect when it does not; a type II error would be to say it has no effect when it does.
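To make this concrete, here is a small added simulation (illustrative only; the sample sizes, effect size and 0.05 cut-off are assumptions, not from the original answer): with no true drug effect, the fraction of "significant" t-tests estimates the type I error rate, and with a real effect, the fraction of non-significant tests estimates the type II error rate.

set.seed(1)
p_null <- replicate(5000, t.test(rnorm(30), rnorm(30))$p.value)              # no true effect
mean(p_null < 0.05)   # roughly 0.05: the type I error rate
p_alt  <- replicate(5000, t.test(rnorm(30), rnorm(30, mean = 0.5))$p.value)  # true effect present
mean(p_alt >= 0.05)   # the type II error rate at this effect size and sample size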
Examples for Type I and Type II errors
Let's say you are testing a new drug for some disease. In a test of its effectiveness, a type I error would be to say it has an effect when it does not; a type II error would be to say it has no effec
Examples for Type I and Type II errors Let's say you are testing a new drug for some disease. In a test of its effectiveness, a type I error would be to say it has an effect when it does not; a type II error would be to say it has no effect when it does.
Examples for Type I and Type II errors Let's say you are testing a new drug for some disease. In a test of its effectiveness, a type I error would be to say it has an effect when it does not; a type II error would be to say it has no effec
51,425
Examples for Type I and Type II errors
Type I error / false positive: the same as rejecting the null when it is true. A few examples: (with the null hypothesis that the person is innocent) convicting an innocent person; (with the null hypothesis that an e-mail is non-spam) non-spam mail is sent to the spam box; (with the null hypothesis that there is no metal present in a passenger's bag) the metal detector beeps (detects metal) for a bag with no metal. Type II error / false negative: the same as accepting (failing to reject) the null when it is false. A few examples: (with the null hypothesis that the person is innocent) letting a guilty person go free; (with the null hypothesis that an e-mail is non-spam) spam mail is sent to the inbox; (with the null hypothesis that there is no metal present in a passenger's bag) the metal detector fails to beep (does not detect metal) for a bag with metal in it. Other nice examples in layman's terms are given here: Is there a way to remember the definitions of Type I and Type II Errors?
Examples for Type I and Type II errors
Type I error /false positive: is same as rejecting the null when it is true. Few Examples: (With the null hypothesis that the person is innocent), convicting an innocent person (With the null hypot
Examples for Type I and Type II errors Type I error / false positive: the same as rejecting the null when it is true. A few examples: (with the null hypothesis that the person is innocent) convicting an innocent person; (with the null hypothesis that an e-mail is non-spam) non-spam mail is sent to the spam box; (with the null hypothesis that there is no metal present in a passenger's bag) the metal detector beeps (detects metal) for a bag with no metal. Type II error / false negative: the same as accepting (failing to reject) the null when it is false. A few examples: (with the null hypothesis that the person is innocent) letting a guilty person go free; (with the null hypothesis that an e-mail is non-spam) spam mail is sent to the inbox; (with the null hypothesis that there is no metal present in a passenger's bag) the metal detector fails to beep (does not detect metal) for a bag with metal in it. Other nice examples in layman's terms are given here: Is there a way to remember the definitions of Type I and Type II Errors?
Examples for Type I and Type II errors Type I error /false positive: is same as rejecting the null when it is true. Few Examples: (With the null hypothesis that the person is innocent), convicting an innocent person (With the null hypot
51,426
Examples for Type I and Type II errors
The boy who cried wolf. I am not sure who is who in the fable, but the basic idea is that the two types of error (Type I and Type II) occur in that order in the famous fable. Type I: the villagers (scientists) believe there is a wolf (an effect in the population), since the boy cried wolf, but in reality there is none. Type II: the villagers (scientists) believe there is no wolf (no effect in the population), although the boy cries wolf, and in reality there is a wolf. I have never been a fan of examples that teach which one is "worse", as (in my opinion) that depends on the problem at hand.
Examples for Type I and Type II errors
The boy who cried wolf. I am not sure who is who in the fable but the basic idea is that the two types of errors (Type I and Type II) are timely ordered in the famous fable. Type I: villagers (scienti
Examples for Type I and Type II errors The boy who cried wolf. I am not sure who is who in the fable, but the basic idea is that the two types of error (Type I and Type II) occur in that order in the famous fable. Type I: the villagers (scientists) believe there is a wolf (an effect in the population), since the boy cried wolf, but in reality there is none. Type II: the villagers (scientists) believe there is no wolf (no effect in the population), although the boy cries wolf, and in reality there is a wolf. I have never been a fan of examples that teach which one is "worse", as (in my opinion) that depends on the problem at hand.
Examples for Type I and Type II errors The boy who cried wolf. I am not sure who is who in the fable but the basic idea is that the two types of errors (Type I and Type II) are timely ordered in the famous fable. Type I: villagers (scienti
51,427
Examples for Type I and Type II errors
The null hypothesis is: "Today is not my friend's birthday." Type I error: it is not my friend's birthday today, but I wish her a happy birthday anyway. Type II error: it is my friend's birthday today, but I don't wish her a happy birthday.
Examples for Type I and Type II errors
Null hypothesis is: "Today is not my friends birthday." Type I error: My friend does not have birthday today but I will wish her happy birthday. Type II error: My friend has birthday today but I don'
Examples for Type I and Type II errors The null hypothesis is: "Today is not my friend's birthday." Type I error: it is not my friend's birthday today, but I wish her a happy birthday anyway. Type II error: it is my friend's birthday today, but I don't wish her a happy birthday.
Examples for Type I and Type II errors Null hypothesis is: "Today is not my friends birthday." Type I error: My friend does not have birthday today but I will wish her happy birthday. Type II error: My friend has birthday today but I don'
51,428
Suitable graph to visualize the spread of data
There are numerous possible displays, depending on what more specifically you want. One example would be a boxplot for each group (A, B, ...) (assuming there are enough values in each group to support one*): boxplot(len~supp,data=ToothGrowth,horizontal=TRUE,boxwex=.7) But you might want to look at histograms, ecdfs, or a number of other possibilities * Edit: from your later comments, it looks like there's enough data for boxplots.
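As a further added sketch of the other displays mentioned (histograms and empirical CDFs), using the same built-in ToothGrowth data; this is only an illustration, not part of the original answer.

par(mfrow = c(1, 2))
hist(ToothGrowth$len[ToothGrowth$supp == "OJ"], main = "Histogram, group OJ", xlab = "len")
plot(ecdf(ToothGrowth$len[ToothGrowth$supp == "OJ"]), main = "ECDF, group OJ")
# repeat per group (A, B, ...) for your own data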
Suitable graph to visualize the spread of data
There are numerous possible displays, depending on what more specifically you want. One example would be a boxplot for each group (A, B, ...) (assuming there are enough values in each group to suppor
Suitable graph to visualize the spread of data There are numerous possible displays, depending on what more specifically you want. One example would be a boxplot for each group (A, B, ...) (assuming there are enough values in each group to support one*): boxplot(len~supp,data=ToothGrowth,horizontal=TRUE,boxwex=.7) But you might want to look at histograms, ecdfs, or a number of other possibilities * Edit: from your later comments, it looks like there's enough data for boxplots.
Suitable graph to visualize the spread of data There are numerous possible displays, depending on what more specifically you want. One example would be a boxplot for each group (A, B, ...) (assuming there are enough values in each group to suppor
51,429
Suitable graph to visualize the spread of data
You already got some excellent answers, but let me suggest another plot that was not mentioned yet (this is an example that I created to answer another question): In R, it is available e.g. through stripchart() or ggplot2's geom_point() or geom_jitter(). (Jitter adds a little bit of noise to avoid too much overlap.) This plot allows you to look at the data somewhat more directly than histograms (which can be badly misleading, see Glen_b's great answer to another question) or boxplots (which are great but a little more complicated to understand and explain). In ggplot2, you can also combine boxplots and jittered dots; see the documentation.
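A minimal added sketch of this display in ggplot2, combining a boxplot with jittered points as described; the built-in ToothGrowth data are used purely for illustration.

library(ggplot2)
ggplot(ToothGrowth, aes(x = supp, y = len)) +
  geom_boxplot(outlier.shape = NA) +      # hide the boxplot's own outlier points so they are not drawn twice
  geom_jitter(width = 0.15, alpha = 0.6)  # jittered raw data on top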
Suitable graph to visualize the spread of data
You already got some excellent answers but let me suggest another plot that was not mentioned yet (this is an example that I created to answer another question): In R, it is available e.g. through st
Suitable graph to visualize the spread of data You already got some excellent answers, but let me suggest another plot that was not mentioned yet (this is an example that I created to answer another question): In R, it is available e.g. through stripchart() or ggplot2's geom_point() or geom_jitter(). (Jitter adds a little bit of noise to avoid too much overlap.) This plot allows you to look at the data somewhat more directly than histograms (which can be badly misleading, see Glen_b's great answer to another question) or boxplots (which are great but a little more complicated to understand and explain). In ggplot2, you can also combine boxplots and jittered dots; see the documentation.
Suitable graph to visualize the spread of data You already got some excellent answers but let me suggest another plot that was not mentioned yet (this is an example that I created to answer another question): In R, it is available e.g. through st
51,430
Suitable graph to visualize the spread of data
As mentioned by Glen_b, there are a number of possibilities. Here is an example of a histogram and density plot using the "lattice" package. I've also provided some sample data. set.seed(1) mydf <- data.frame(V1 = sample(LETTERS[1:5], 500, replace = TRUE), V2 = sample(0:50, 500, replace = TRUE)) head(mydf) tail(mydf) library(lattice) histogram(~V2 | V1, data = mydf) densityplot(~V2 | V1, data = mydf) Both are with default settings applied.
Suitable graph to visualize the spread of data
As mentioned by Glen_b, there are a number of possibilities. Here is an example of a histogram and density plot using the "lattice" package. I've also provided some sample data. set.seed(1) mydf <- d
Suitable graph to visualize the spread of data As mentioned by Glen_b, there are a number of possibilities. Here is an example of a histogram and density plot using the "lattice" package. I've also provided some sample data. set.seed(1) mydf <- data.frame(V1 = sample(LETTERS[1:5], 500, replace = TRUE), V2 = sample(0:50, 500, replace = TRUE)) head(mydf) tail(mydf) library(lattice) histogram(~V2 | V1, data = mydf) densityplot(~V2 | V1, data = mydf) Both are with default settings applied.
Suitable graph to visualize the spread of data As mentioned by Glen_b, there are a number of possibilities. Here is an example of a histogram and density plot using the "lattice" package. I've also provided some sample data. set.seed(1) mydf <- d
51,431
How to decide between PCA and logistic regression?
The key difference between the two approaches: PCA will NOT consider the response variable but only the variance of the independent variables, whereas logistic regression will consider how each independent variable impacts the response variable. We can construct an example in which PCA and logistic regression give completely different results, i.e., one method says a feature is important while the other says the opposite. Here is how we construct the example: independent variable $X_1$ has very small variance (see the left plot; $x_1$ and $x_2$ are on different scales), BUT it is closely related to the response (from the code, you can see $y$ is assigned based on $X_1$ plus uniform noise). Logistic regression will say it is very important (see the summary of the model in the code section), but PCA will say the opposite (see the biplot / right subfigure; the length of the $X_1$ arrow is very short). Code (in case you want to run the same simulation): set.seed(0) n_data=200 x1=rnorm(n_data,sd=0.3) x2=rnorm(n_data,sd=1) y=ifelse(x1+0.1*runif(n_data)>0,1,2) par(mfrow=c(1,2),cex=1.2) plot(x1,x2,col=y,pch=20) summary(glm(factor(y)~x1+x2-1,family = binomial())) pr.out=princomp(cbind(x1,x2)) biplot(pr.out,xlabs=rep("*",200)) > summary(glm(factor(y)~x1+x2-1,family = binomial())) Call: glm(formula = factor(y) ~ x1 + x2 - 1, family = binomial()) Deviance Residuals: Min 1Q Median 3Q Max -2.27753 -0.19392 -0.00118 0.05413 1.24053 Coefficients: Estimate Std. Error z value Pr(>|z|) x1 -26.4414 4.8434 -5.459 4.78e-08 *** x2 -0.4267 0.2975 -1.434 0.152 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 277.259 on 200 degrees of freedom Residual deviance: 66.817 on 198 degrees of freedom AIC: 70.817 Number of Fisher Scoring iterations: 8
How to decide between PCA and logistic regression?
The key difference between two approches PCA will NOT consider the response variable but only the variance of the independent variables. Logistic Regression will consider how each independent variab
How to decide between PCA and logistic regression? The key difference between the two approaches: PCA will NOT consider the response variable but only the variance of the independent variables, whereas logistic regression will consider how each independent variable impacts the response variable. We can construct an example in which PCA and logistic regression give completely different results, i.e., one method says a feature is important while the other says the opposite. Here is how we construct the example: independent variable $X_1$ has very small variance (see the left plot; $x_1$ and $x_2$ are on different scales), BUT it is closely related to the response (from the code, you can see $y$ is assigned based on $X_1$ plus uniform noise). Logistic regression will say it is very important (see the summary of the model in the code section), but PCA will say the opposite (see the biplot / right subfigure; the length of the $X_1$ arrow is very short). Code (in case you want to run the same simulation): set.seed(0) n_data=200 x1=rnorm(n_data,sd=0.3) x2=rnorm(n_data,sd=1) y=ifelse(x1+0.1*runif(n_data)>0,1,2) par(mfrow=c(1,2),cex=1.2) plot(x1,x2,col=y,pch=20) summary(glm(factor(y)~x1+x2-1,family = binomial())) pr.out=princomp(cbind(x1,x2)) biplot(pr.out,xlabs=rep("*",200)) > summary(glm(factor(y)~x1+x2-1,family = binomial())) Call: glm(formula = factor(y) ~ x1 + x2 - 1, family = binomial()) Deviance Residuals: Min 1Q Median 3Q Max -2.27753 -0.19392 -0.00118 0.05413 1.24053 Coefficients: Estimate Std. Error z value Pr(>|z|) x1 -26.4414 4.8434 -5.459 4.78e-08 *** x2 -0.4267 0.2975 -1.434 0.152 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 277.259 on 200 degrees of freedom Residual deviance: 66.817 on 198 degrees of freedom AIC: 70.817 Number of Fisher Scoring iterations: 8
How to decide between PCA and logistic regression? The key difference between two approches PCA will NOT consider the response variable but only the variance of the independent variables. Logistic Regression will consider how each independent variab
51,432
How to decide between PCA and logistic regression?
https://en.wikipedia.org/wiki/Principal_component_analysis Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. It does a completely different thing from logistic regression. PCA is not a substitute for logistic regression. You could actually even use them together: PCA can be used first to replace strongly correlated dimensions before fitting the regression. PCA actually has the problem that it only looks at the dimensions, not at the categories, and for that reason there is linear discriminant analysis (LDA). https://en.wikipedia.org/wiki/Linear_discriminant_analysis LDA is not a substitute for logistic regression either, but a way to do PCA-like things for data where the categories need to be kept separated, for example to make the subsequent logistic regression faster/simpler.
How to decide between PCA and logistic regression?
https://en.wikipedia.org/wiki/Principal_component_analysis Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of pos
How to decide between PCA and logistic regression? https://en.wikipedia.org/wiki/Principal_component_analysis Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. It does a completely different thing from logistic regression. PCA is not a substitute for logistic regression. You could actually even use them together: PCA can be used first to replace strongly correlated dimensions before fitting the regression. PCA actually has the problem that it only looks at the dimensions, not at the categories, and for that reason there is linear discriminant analysis (LDA). https://en.wikipedia.org/wiki/Linear_discriminant_analysis LDA is not a substitute for logistic regression either, but a way to do PCA-like things for data where the categories need to be kept separated, for example to make the subsequent logistic regression faster/simpler.
How to decide between PCA and logistic regression? https://en.wikipedia.org/wiki/Principal_component_analysis Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of pos
51,433
How to decide between PCA and logistic regression?
PCA is good at streamlining numerous variables and recombining them into three Mega-variables called Principal Components. In doing so, PCA is very good at resolving multicollinearity issues among your numerous variables. Your Principal Components will be the equivalent of indexes that weight your variable combinations. One variable is typically included in all three Principal Components. Given that, PCA is often very difficult to interpret. It makes for a pretty opaque model, lacking much transparency and explanatory power. That is especially true if you have to convey your results to anyone who is not a super-quant. Last but not least, I don't think traditional PCA works with a binomial variable. Someone else mentioned Discriminant Analysis, which would work better than PCA. But it suffers from similar issues regarding opaqueness; instead of having three recombined Principal Components, it has three main Discriminants. Those recombined variables are really difficult to interpret. In view of the above, Logit Regression is a far better fit for what you are trying to achieve. This method is earmarked to handle a binomial variable exactly like the one you have. And, by standardizing the regression coefficients of your variables, you will be able to easily measure their relative influence on your dependent binomial variable.
How to decide between PCA and logistic regression?
PCA is good at streamlining numerous variables and recombining them into three Mega-variables called Principal Components. PCA by doing so is very good at resolving multicollinearity issues within yo
How to decide between PCA and logistic regression? PCA is good at streamlining numerous variables and recombining them into three Mega-variables called Principal Components. In doing so, PCA is very good at resolving multicollinearity issues among your numerous variables. Your Principal Components will be the equivalent of indexes that weight your variable combinations. One variable is typically included in all three Principal Components. Given that, PCA is often very difficult to interpret. It makes for a pretty opaque model, lacking much transparency and explanatory power. That is especially true if you have to convey your results to anyone who is not a super-quant. Last but not least, I don't think traditional PCA works with a binomial variable. Someone else mentioned Discriminant Analysis, which would work better than PCA. But it suffers from similar issues regarding opaqueness; instead of having three recombined Principal Components, it has three main Discriminants. Those recombined variables are really difficult to interpret. In view of the above, Logit Regression is a far better fit for what you are trying to achieve. This method is earmarked to handle a binomial variable exactly like the one you have. And, by standardizing the regression coefficients of your variables, you will be able to easily measure their relative influence on your dependent binomial variable.
How to decide between PCA and logistic regression? PCA is good at streamlining numerous variables and recombining them into three Mega-variables called Principal Components. PCA by doing so is very good at resolving multicollinearity issues within yo
51,434
What sort of GLM should I use if my response variable is colony size?
This is a partial answer only, but the graphical content makes a comment a poor alternative. In a comment, the OP talks about using sqrt(log()) as a transformation. I'd advise against that on the general grounds that it is a very unusual and idiosyncratic transformation, so you will face puzzlement at all levels up to supervisors, examiners and paper reviewers. (Translate to the terminology of your own education and research set-up.) I'd also advise that it really stretches out low colony sizes. Here is a plot for sizes 1(1)600. Notice how on this transformed scale the interval from size 1 to size 5 is more than half the entire range from size 1 to size 600. Implication: this transformation will create outliers for small colony sizes. The better fit observed could be an artefact of over-transforming. I wouldn't go further than a log link. At least that is standard and easier to think about.
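The plot itself is not reproduced here, but the claim is easy to check with a few added lines of R (an illustration, not the original figure): on the sqrt(log()) scale, sizes 1 to 5 cover about half of the whole range from 1 to 600.

x <- 1:600
y <- sqrt(log(x))
plot(x, y, type = "l", xlab = "colony size", ylab = "sqrt(log(size))")
(y[5] - y[1]) / (y[600] - y[1])   # about 0.50, i.e. just over half the full range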
What sort of GLM should I use if my response variable is colony size?
This is a partial answer only, but the graphical content makes a comment a poor alternative. In a comment, the OP talks about using sqrt(log()) as a transformation. I'd advise against that on the gen
What sort of GLM should I use if my response variable is colony size? This is a partial answer only, but the graphical content makes a comment a poor alternative. In a comment, the OP talks about using sqrt(log()) as a transformation. I'd advise against that on the general grounds that it is a very unusual and idiosyncratic transformation, so you will face puzzlement at all levels up to supervisors, examiners and paper reviewers. (Translate to the terminology of your own education and research set-up.) I'd also advise that it really stretches out low colony sizes. Here is a plot for sizes 1(1)600. Notice how on this transformed scale the interval from size 1 to size 5 is more than half the entire range from size 1 to size 600. Implication: this transformation will create outliers for small colony sizes. The better fit observed could be an artefact of over-transforming. I wouldn't go further than a log link. At least that is standard and easier to think about.
What sort of GLM should I use if my response variable is colony size? This is a partial answer only, but the graphical content makes a comment a poor alternative. In a comment, the OP talks about using sqrt(log()) as a transformation. I'd advise against that on the gen
51,435
What sort of GLM should I use if my response variable is colony size?
There's not enough information to offer a definitive answer. Colony size would be a count, I gather (rather than, say, an area or a mass). It's better not to think of them as representing an error term but rather the conditional distribution of the response. Some possible models include the negative binomial or log-series distribution -- these are two distributions in the exponential family which could be suitable for such counts, though they're not the only possibilities (some care must be taken with the zero case; for example, the negative binomial used in GLMs starts from 0, but colony sizes are presumably bounded below by 1). In some cases you may be able to get some use from the Poisson, but I expect that typically it won't be nearly skewed enough. (The log-series distribution may not be implemented in most software.) You may need to consider more complex models than the straight exponential family. You might need to consider truncated or shifted distributions, or, if zeroes are possible, perhaps zero-inflated or hurdle models; you may also need mixed-effect GLMs, for example.
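As a sketch of one of these options (a negative binomial GLM, here fitted with MASS::glm.nb), purely for illustration; the simulated data and parameter values below are assumptions, not the poster's data.

library(MASS)
set.seed(42)
x <- runif(200)                                            # a hypothetical covariate
counts <- rnbinom(200, mu = exp(1 + 2 * x), size = 1.5)    # overdispersed simulated counts
fit <- glm.nb(counts ~ x)                                  # log link by default
summary(fit)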
What sort of GLM should I use if my response variable is colony size?
There's not enough information to offer a definitive answer. Colony size would be a count, I gather (rather than say an area, or a mass). It's better not to think of them as representing an error term
What sort of GLM should I use if my response variable is colony size? There's not enough information to offer a definitive answer. Colony size would be a count, I gather (rather than, say, an area or a mass). It's better not to think of them as representing an error term but rather the conditional distribution of the response. Some possible models include the negative binomial or log-series distribution -- these are two distributions in the exponential family which could be suitable for such counts, though they're not the only possibilities (some care must be taken with the zero case; for example, the negative binomial used in GLMs starts from 0, but colony sizes are presumably bounded below by 1). In some cases you may be able to get some use from the Poisson, but I expect that typically it won't be nearly skewed enough. (The log-series distribution may not be implemented in most software.) You may need to consider more complex models than the straight exponential family. You might need to consider truncated or shifted distributions, or, if zeroes are possible, perhaps zero-inflated or hurdle models; you may also need mixed-effect GLMs, for example.
What sort of GLM should I use if my response variable is colony size? There's not enough information to offer a definitive answer. Colony size would be a count, I gather (rather than say an area, or a mass). It's better not to think of them as representing an error term
51,436
What sort of GLM should I use if my response variable is colony size?
About transformation, see : O’Hara, Robert B., and D. Johan Kotze. 2010. « Do Not Log-Transform Count Data ». Methods in Ecology and Evolution 1 (2): 118‑22. doi:10.1111/j.2041-210X.2010.00021.x. Summary : 1. Ecological count data (e.g. number of individuals or species) are often log-transformed to satisfy parametric test assumptions. 2. Apart from the fact that generalized linear models are better suited in dealing with count data, a log-transformation of counts has the additional quandary in how to deal with zero observations. With just one zero observation (if this observation represents a sampling unit), the whole data set needs to be fudged by adding a value (usually 1) before transformation. 3. Simulating data from a negative binomial distribution, we compared the outcome of fitting models that were transformed in various ways (log, square root) with results from fitting models using quasi-Poisson and negative binomial models to untransformed count data. 4. We found that the transformations performed poorly, except when the dispersion was small and the mean counts were large. The quasi-Poisson and negative binomial models consistently performed well, with little bias. 5. We recommend that count data should not be analysed by log-transforming it, but instead models based on Poisson and negative binomial distributions should be used.
What sort of GLM should I use if my response variable is colony size?
About transformation, see : O’Hara, Robert B., and D. Johan Kotze. 2010. « Do Not Log-Transform Count Data ». Methods in Ecology and Evolution 1 (2): 118‑22. doi:10.1111/j.2041-210X.2010.00021.x. Summ
What sort of GLM should I use if my response variable is colony size? About transformation, see : O’Hara, Robert B., and D. Johan Kotze. 2010. « Do Not Log-Transform Count Data ». Methods in Ecology and Evolution 1 (2): 118‑22. doi:10.1111/j.2041-210X.2010.00021.x. Summary : 1. Ecological count data (e.g. number of individuals or species) are often log-transformed to satisfy parametric test assumptions. 2. Apart from the fact that generalized linear models are better suited in dealing with count data, a log-transformation of counts has the additional quandary in how to deal with zero observations. With just one zero observation (if this observation represents a sampling unit), the whole data set needs to be fudged by adding a value (usually 1) before transformation. 3. Simulating data from a negative binomial distribution, we compared the outcome of fitting models that were transformed in various ways (log, square root) with results from fitting models using quasi-Poisson and negative binomial models to untransformed count data. 4. We found that the transformations performed poorly, except when the dispersion was small and the mean counts were large. The quasi-Poisson and negative binomial models consistently performed well, with little bias. 5. We recommend that count data should not be analysed by log-transforming it, but instead models based on Poisson and negative binomial distributions should be used.
What sort of GLM should I use if my response variable is colony size? About transformation, see : O’Hara, Robert B., and D. Johan Kotze. 2010. « Do Not Log-Transform Count Data ». Methods in Ecology and Evolution 1 (2): 118‑22. doi:10.1111/j.2041-210X.2010.00021.x. Summ
51,437
Inference to the population when the survey response rate is only 30%
Whether it is "correct" to do logistic regression with these data depends on the type of non-response you have. Usually, one distinguishes three types of mechanisms for non-response. Missing completely at random: The non-response depends neither on the variable of interest nor on the covariates. Missing at random, given some covariates: The non-response depends on some covariates but not on the variable of interest. Some people call this "ignorable non-response". Not missing at random: The non-response depends on the variable of interest and cannot be completely explained by the observed covariates. If you are in the first situation, non-response does not bias the results; it will merely induce a loss of precision in your estimates. If you are in the second situation, you should be able to model the non-response successfully and to adjust the data accordingly. If you are in the third case, you are unlucky: there is not much you can do. If you want to read about the topic, have a look at a textbook such as Sharon Lohr's "Sampling: Design and Analysis". HTH
Inference to the population when the survey response rate is only 30%
If it is "correct" to do logistic regression with these data depends on the type of non-response you have. Usually, one distinguishes three types of mechanisms for non-response. Missing completely a
Inference to the population when the survey response rate is only 30% Whether it is "correct" to do logistic regression with these data depends on the type of non-response you have. Usually, one distinguishes three types of mechanisms for non-response. Missing completely at random: The non-response depends neither on the variable of interest nor on the covariates. Missing at random, given some covariates: The non-response depends on some covariates but not on the variable of interest. Some people call this "ignorable non-response". Not missing at random: The non-response depends on the variable of interest and cannot be completely explained by the observed covariates. If you are in the first situation, non-response does not bias the results; it will merely induce a loss of precision in your estimates. If you are in the second situation, you should be able to model the non-response successfully and to adjust the data accordingly. If you are in the third case, you are unlucky: there is not much you can do. If you want to read about the topic, have a look at a textbook such as Sharon Lohr's "Sampling: Design and Analysis". HTH
Inference to the population when the survey response rate is only 30% If it is "correct" to do logistic regression with these data depends on the type of non-response you have. Usually, one distinguishes three types of mechanisms for non-response. Missing completely a
51,438
Inference to the population when the survey response rate is only 30%
If at all possible, be very careful about scaling the results of your analysis. Non-response tends to be related to interest. For example, regardless of demographic match, people who fill out a survey about bus service tend to be people who are more interested in taking the bus than the average person. Therefore, estimates based on the survey respondents themselves tend to be much too high. For example, the old rule of thumb about predicting consumer product sales from a standard 5-point buying intent question was: 75% of those who "definitely would buy", 25% of those who "probably would buy", and 5% of the rest. [Advertising Research Foundation Arrowhead project] To directly answer the question, you can certainly do the logistic regression, but it is validly applied to the population you are analyzing (i.e., the population of survey respondents). Anything beyond that is an inference, similar to inferring opinions in Minnesota from surveys in Iowa and Wisconsin. That inference can be supported by some of the strategies suggested above, but is still an inference. In many cases you MUST do the inference. The client paid for the survey, and wants to get at least some useful information out of it, so you have to give it your best shot, with whatever caveats you deem necessary.
Inference to the population when the survey response rate is only 30%
If at all possible, be very careful about scaling the results of your analysis. Non-response tends to be related to interest. For example, regardless of demographic match, people who fill out a survey
Inference to the population when the survey response rate is only 30% If at all possible, be very careful about scaling the results of your analysis. Non-response tends to be related to interest. For example, regardless of demographic match, people who fill out a survey about bus service tend to be people who are more interested in taking the bus than the average person. Therefore, estimates based on the survey respondents themselves tend to be much too high. For example, the old rule of thumb about predicting consumer product sales from a standard 5-point buying intent question was: 75% of those who "definitely would buy", 25% of those who "probably would buy", and 5% of the rest. [Advertising Research Foundation Arrowhead project] To directly answer the question, you can certainly do the logistic regression, but it is validly applied to the population you are analyzing (i.e., the population of survey respondents). Anything beyond that is an inference, similar to inferring opinions in Minnesota from surveys in Iowa and Wisconsin. That inference can be supported by some of the strategies suggested above, but is still an inference. In many cases you MUST do the inference. The client paid for the survey, and wants to get at least some useful information out of it, so you have to give it your best shot, with whatever caveats you deem necessary.
Inference to the population when the survey response rate is only 30% If at all possible, be very careful about scaling the results of your analysis. Non-response tends to be related to interest. For example, regardless of demographic match, people who fill out a survey
51,439
Inference to the population when the survey response rate is only 30%
Ignore it (at your risk). (30% is probably an excellent return rate by most standards.) Resample some of the non-responders (with a slightly different survey, and perhaps some "teasers" to gain interest), and see if they respond differently from the original sample (bearing in mind that you will get an even poorer response rate and more skew in this second sample). Analyze the demographics (and other interests) of responders vs the population (if you have any clues to demographics/interests in the survey). Scale responses based on that info.
Inference to the population when the survey response rate is only 30%
Ignore it (at your risk). (30% is probably an excellent return rate by most standards.) Resample some of the non-responders (with a slightly different survey, and perhaps some "teasers" to gain inter
Inference to the population when the survey response rate is only 30% Ignore it (at your risk). (30% is probably an excellent return rate by most standards.) Resample some of the non-responders (with a slightly different survey, and perhaps some "teasers" to gain interest), and see if they respond differently from the original sample (bearing in mind that you will get an even poorer response rate and more skew in this second sample). Analyze the demographics (and other interests) of responders vs the population (if you have any clues to demographics/interests in the survey). Scale responses based on that info.
Inference to the population when the survey response rate is only 30% Ignore it (at your risk). (30% is probably an excellent return rate by most standards.) Resample some of the non-responders (with a slightly different survey, and perhaps some "teasers" to gain inter
51,440
Inference to the population when the survey response rate is only 30%
Here is one simple element of an answer. A simple way to decide whether or not having filled in the form is related to the explanatory variables is to perform a logistic regression where the binary response variable is 1 if the person answered and 0 otherwise. If it turns out that having filled in the form is not related to the explanatory variables, then your sample of respondents may be seen as a random sub-sample of your sample, and hence as a random sample of the original population. Of course, the power of your tests will be affected... You will find more if you look at the missing-data problem. Edit: The previous technique requires a special framework --- see Whuber's comments.
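A sketch of this check on simulated data (added for illustration; the variable names responded, age and female are hypothetical, as are the simulation settings):

set.seed(7)
n <- 1000
age <- rnorm(n, 45, 12)
female <- rbinom(n, 1, 0.5)
responded <- rbinom(n, 1, plogis(-1 + 0.02 * age))   # response probability depends on age here
summary(glm(responded ~ age + female, family = binomial))
# a clearly non-zero age coefficient flags non-response related to age;
# non-significant coefficients are consistent with, but do not prove, ignorable non-response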
Inference to the population when the survey response rate is only 30%
Here is a simple element of answer A simple way to decide whether or not having filled in the form is related to explanatory variables is to perform a logistic regression where the binary response var
Inference to the population when the survey response rate is only 30% Here is one simple element of an answer. A simple way to decide whether or not having filled in the form is related to the explanatory variables is to perform a logistic regression where the binary response variable is 1 if the person answered and 0 otherwise. If it turns out that having filled in the form is not related to the explanatory variables, then your sample of respondents may be seen as a random sub-sample of your sample, and hence as a random sample of the original population. Of course, the power of your tests will be affected... You will find more if you look at the missing-data problem. Edit: The previous technique requires a special framework --- see Whuber's comments.
Inference to the population when the survey response rate is only 30% Here is a simple element of answer A simple way to decide whether or not having filled in the form is related to explanatory variables is to perform a logistic regression where the binary response var
51,441
Inference to the population when the survey response rate is only 30%
Hmm, I analyse data like this all the time. I know it's very naughty but I figure it's better than having no analysis. The best I've ever come up with is to compare your sample with what you know about the population of interest. Are there more people from white backgrounds in the sample? Are there more women in the sample? Is the sample older than the population? I've found this to be quite a reasonable approach with, say, a staff survey, where I can say "well, there are a lot of doctors missing from my sample, nurses are over-represented. Based on other research I think doctors think x whereas nurses think y. So we should be careful where the missingness is consistent with this difference (e.g. everyone unhappy about pay) and reassured when it is inconsistent (e.g. everyone saying they feel supported to develop as a professional)". Not very scientific, but as far as I know analysing data which is or could be MNAR never is! I've always had some idea at the back of my head about thinking about WHY people don't respond (is it happy people, busy people, etc.) but I've never been able to formulate it properly (lack of time, data, etc.)
Inference to the population when the survey response rate is only 30%
Hmm, I analyse data like this all the time. I know it's very naughty but I figure it's better than having no analysis. The best I've ever come up with is to compare your sample with what you know abou
Inference to the population when the survey response rate is only 30% Hmm, I analyse data like this all the time. I know it's very naughty but I figure it's better than having no analysis. The best I've ever come up with is to compare your sample with what you know about the population of interest. Are there more people from white backgrounds in the sample? Are there more women in the sample? Is the sample older than the population? I've found this to be quite a reasonable approach with, say, a staff survey, where I can say "well, there are a lot of doctors missing from my sample, nurses are over-represented. Based on other research I think doctors think x whereas nurses think y. So we should be careful where the missingness is consistent with this difference (e.g. everyone unhappy about pay) and reassured when it is inconsistent (e.g. everyone saying they feel supported to develop as a professional)". Not very scientific, but as far as I know analysing data which is or could be MNAR never is! I've always had some idea at the back of my head about thinking about WHY people don't respond (is it happy people, busy people, etc.) but I've never been able to formulate it properly (lack of time, data, etc.)
Inference to the population when the survey response rate is only 30% Hmm, I analyse data like this all the time. I know it's very naughty but I figure it's better than having no analysis. The best I've ever come up with is to compare your sample with what you know abou
51,442
Inference to the population when the survey response rate is only 30%
Dealing with response rates gets less quantitative and more qualitative, as there are no statistical tests to compare the respondents and non-respondents. You can find some guidelines to computing and generally dealing with (low) response rates at http://www.aapor.org/Content/aapor/Resources/PollampSurveyFAQ1/DoResponseRatesMatter/ResponseRatesAnOverview/default.htm. I'd agree that 30% is a very good response rate on a mail-in survey; generally you'd expect response rates in single digit % for this mode these days. If this is a list survey for an organization that has the information on its members, or for a medical provider who has the records, then you'd certainly want to do some matching of the sample with the existing records. You can either estimate response propensity models (assuming MAR, see an earlier comment), or you can calibrate the sampling weights to adjust the demographics of your survey to that of the original list.
Inference to the population when the survey response rate is only 30%
Dealing with response rates gets less quantitative and more qualitative, as there are no statistical tests to compare the respondents and non-respondents. You can find some guidelines to computing and
Inference to the population when the survey response rate is only 30% Dealing with response rates gets less quantitative and more qualitative, as there are no statistical tests to compare the respondents and non-respondents. You can find some guidelines to computing and generally dealing with (low) response rates at http://www.aapor.org/Content/aapor/Resources/PollampSurveyFAQ1/DoResponseRatesMatter/ResponseRatesAnOverview/default.htm. I'd agree that 30% is a very good response rate on a mail-in survey; generally you'd expect response rates in single digit % for this mode these days. If this is a list survey for an organization that has the information on its members, or for a medical provider who has the records, then you'd certainly want to do some matching of the sample with the existing records. You can either estimate response propensity models (assuming MAR, see an earlier comment), or you can calibrate the sampling weights to adjust the demographics of your survey to that of the original list.
Inference to the population when the survey response rate is only 30% Dealing with response rates gets less quantitative and more qualitative, as there are no statistical tests to compare the respondents and non-respondents. You can find some guidelines to computing and
51,443
When is a statistic not a statistic?
A statistic is defined as follows: A statistic is a function $T (X^n )$ of the data. (Larry Wasserman, All of Statistics, p. 137) A statistic (singular) or sample statistic is any quantity computed from values in a sample which is considered for a statistical purpose. (Wikipedia) Definition 5.2.1 Let $X_1,\dots,X_n$ be a random sample of size $n$ from a population and let $T(X_1,\dots,X_n)$ be a real-valued or vector-valued function whose domain includes the sample space of $(X_1,\dots,X_n)$. Then the random variable or random vector $Y = T(X_1,\dots,X_n)$ is called a statistic. The probability distribution of statistic $Y$ is called the sampling distribution of $Y$. The definition of a statistic is very broad, with the only restriction being that a statistic cannot be a function of a parameter. [...] (Casella and Berger, Statistical Inference, p. 211) TL;DR If you calculate something from the data, "for statistical purposes", it's a statistic. As noticed by @SextusEmpiricus in the comments, another restriction that may not be instantly obvious to everyone is that $T$ is a function in the mathematical sense, that is, a mapping: a function from a set $X$ to a set $Y$ assigns to each element of $X$ exactly one element of $Y$. So something generating the results at random wouldn't fit the definition.
When is a statistic not a statistic?
The statistic is defined as A statistic is a function $T (X^n )$ of the data. (Larry Wasserman All of Statistics, p. 137) A statistic (singular) or sample statistic is any quantity computed from va
When is a statistic not a statistic? A statistic is defined as follows: A statistic is a function $T (X^n )$ of the data. (Larry Wasserman, All of Statistics, p. 137) A statistic (singular) or sample statistic is any quantity computed from values in a sample which is considered for a statistical purpose. (Wikipedia) Definition 5.2.1 Let $X_1,\dots,X_n$ be a random sample of size $n$ from a population and let $T(X_1,\dots,X_n)$ be a real-valued or vector-valued function whose domain includes the sample space of $(X_1,\dots,X_n)$. Then the random variable or random vector $Y = T(X_1,\dots,X_n)$ is called a statistic. The probability distribution of statistic $Y$ is called the sampling distribution of $Y$. The definition of a statistic is very broad, with the only restriction being that a statistic cannot be a function of a parameter. [...] (Casella and Berger, Statistical Inference, p. 211) TL;DR If you calculate something from the data, "for statistical purposes", it's a statistic. As noticed by @SextusEmpiricus in the comments, another restriction that may not be instantly obvious to everyone is that $T$ is a function in the mathematical sense, that is, a mapping: a function from a set $X$ to a set $Y$ assigns to each element of $X$ exactly one element of $Y$. So something generating the results at random wouldn't fit the definition.
When is a statistic not a statistic? The statistic is defined as A statistic is a function $T (X^n )$ of the data. (Larry Wasserman All of Statistics, p. 137) A statistic (singular) or sample statistic is any quantity computed from va
51,444
When is a statistic not a statistic?
This answer is a theoretical supplement to Tim's more practical answer (+1). There is an axiomatic and mathematical side to statistics. Random variables are measurable functions on the outcome space $\Omega$ of a probability space $(\Omega, \mathcal{F}, P)$. Data are modelled as instances of random variables, that is, as subsets of the image of the random variable. In this theoretical context, a statistic is any function of one or more random variables; a functional. Functions of data are then thought of as instances of a statistic. It sometimes creates confusion that the word "statistic" is used sometimes for a function of random variables and sometimes for a function of data, with the former being a model of the latter. I am not suggesting you think in these terms all the time, or even most of the time, depending on what you are doing, but you might like to know that such a perspective exists. If you are ever curious about this theoretical perspective applied to bootstrapping, see Chapter 17 of Measure Theory and Probability Theory by Athreya and Lahiri (2006).
When is a statistic not a statistic?
This answer is a theoretical supplement to Tim's more practical answer (+1). There is an axiomatic and mathematical side of statistics. Random variables are measurable functions on the outcome space $
When is a statistic not a statistic? This answer is a theoretical supplement to Tim's more practical answer (+1). There is an axiomatic and mathematical side to statistics. Random variables are measurable functions on the outcome space $\Omega$ of a probability space $(\Omega, \mathcal{F}, P)$. Data are modelled as instances of random variables, that is, as subsets of the image of the random variable. In this theoretical context, a statistic is any function of one or more random variables; a functional. Functions of data are then thought of as instances of a statistic. It sometimes creates confusion that the word "statistic" is used sometimes for a function of random variables and sometimes for a function of data, with the former being a model of the latter. I am not suggesting you think in these terms all the time, or even most of the time, depending on what you are doing, but you might like to know that such a perspective exists. If you are ever curious about this theoretical perspective applied to bootstrapping, see Chapter 17 of Measure Theory and Probability Theory by Athreya and Lahiri (2006).
When is a statistic not a statistic? This answer is a theoretical supplement to Tim's more practical answer (+1). There is an axiomatic and mathematical side of statistics. Random variables are measurable functions on the outcome space $
51,445
Does rnorm produce numbers with replacement/without replacement?
Both the normal and uniform distributions are continuous; i.e., any particular value has probability zero. Obviously there is numerical precision and other considerations involved with a machine-specific implementation, but for all intents and purposes you can suppose that $\mathbb P(X = x) = 0$ for any particular $x$; i.e., the probability that you randomly draw a value identical to any particular point you specify is $0$. Further, if you have any countable (i.e., you could count it out, possibly infinite) or finite (i.e., a set with a fixed number of elements) set $\mathcal X$, then $\mathbb P(X \in \mathcal X) = 0$ as well. That is, for any countable set of real numbers within the support of the distribution (for the uniform case this would be $[0, 1]$, and for the normal case the support is all of the reals), there is also a probability of $0$ that a "new draw", or any finite number of draws, will be equal to any of them. (This applies to all continuous distributions; one can define mixed distributions for which this is NOT the case, but for continuous random variables it always holds, since by definition a continuous distribution is defined over uncountably infinitely many points.) There is no such thing as "with replacement" and "without replacement" if the distribution is continuous; further, if the distribution is discrete, "without replacement" explicitly violates $iid$ sampling (as you are specifically and deliberately omitting values from the possible set of values based on the values you have already obtained). rnorm and runif generate $iid$ samples.
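A quick added check of this in practice (illustrative; the exact behaviour depends on the random number generator in use): repeated values essentially never occur in a large rnorm() sample, despite the finite floating-point representation.

set.seed(1)
anyDuplicated(rnorm(1e6))   # expected to be 0, i.e. no repeated values among a million draws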
Does rnorm produce numbers with replacement/without replacement?
Both the normal and uniform distributions are continuous; ie, any particular value has probability of zero. Obviously there is numerical precision and other considerations involved with a machine-spec
Does rnorm produce numbers with replacement/without replacement? Both the normal and uniform distributions are continuous; i.e., any particular value has probability zero. Obviously there is numerical precision and other considerations involved with a machine-specific implementation, but for all intents and purposes you can suppose that $\mathbb P(X = x) = 0$ for any particular $x$; i.e., the probability that you randomly draw a value identical to any particular point you specify is $0$. Further, if you have any countable (i.e., you could count it out, possibly infinite) or finite (i.e., a set with a fixed number of elements) set $\mathcal X$, then $\mathbb P(X \in \mathcal X) = 0$ as well. That is, for any countable set of real numbers within the support of the distribution (for the uniform case this would be $[0, 1]$, and for the normal case the support is all of the reals), there is also a probability of $0$ that a "new draw", or any finite number of draws, will be equal to any of them. (This applies to all continuous distributions; one can define mixed distributions for which this is NOT the case, but for continuous random variables it always holds, since by definition a continuous distribution is defined over uncountably infinitely many points.) There is no such thing as "with replacement" and "without replacement" if the distribution is continuous; further, if the distribution is discrete, "without replacement" explicitly violates $iid$ sampling (as you are specifically and deliberately omitting values from the possible set of values based on the values you have already obtained). rnorm and runif generate $iid$ samples.
Does rnorm produce numbers with replacement/without replacement? Both the normal and uniform distributions are continuous; ie, any particular value has probability of zero. Obviously there is numerical precision and other considerations involved with a machine-spec
51,446
Does rnorm produce numbers with replacement/without replacement?
For "with replacement" and "without replacement" to be distinct, you'd need a finite population. If you have a finite population, you don't have a normal distribution (nor any other continuous distribution). [Implementation wise, however, there's only a finite number of different values that it's possible to generate on a computer -- at any fixed number of bits, the computer can only represent a finite number of different values, so in that sense you have a finite population, and strictly speaking the distribution is discrete rather than continuous. However the normal model is itself continuous even though we don't have a way to practically generate values that are truly/perfectly from it.]
Does rnorm produce numbers with replacement/without replacement?
For "with replacement" and "without replacement" to be distinct, you'd need a finite population. If you have a finite population, you don't have a normal distribution (nor any other continuous distrib
Does rnorm produce numbers with replacement/without replacement? For "with replacement" and "without replacement" to be distinct, you'd need a finite population. If you have a finite population, you don't have a normal distribution (nor any other continuous distribution). [Implementation wise, however, there's only a finite number of different values that it's possible to generate on a computer -- at any fixed number of bits, the computer can only represent a finite number of different values, so in that sense you have a finite population, and strictly speaking the distribution is discrete rather than continuous. However the normal model is itself continuous even though we don't have a way to practically generate values that are truly/perfectly from it.]
Does rnorm produce numbers with replacement/without replacement? For "with replacement" and "without replacement" to be distinct, you'd need a finite population. If you have a finite population, you don't have a normal distribution (nor any other continuous distrib
51,447
Neural network vs regression in a small sample
Neural networks, in the vast majority of cases, need lots of data. If you have 20 observations, a neural network is clearly a bad choice. With that small a sample size, the network would easily memorize the data and overfit. Even cross-validation with that small a sample size is questionable, because you'd be validating the results on just a few samples at a time. With such a small sample you should aim for simple, robust models like (regularized) linear regression. Check also other questions tagged as small-sample.
Neural network vs regression in a small sample
Neural networks, in vast majority of cases, need lots of data. If you have 20 observations, neural network is clearly a bad choice. With that small sample size, network would easily memorize the data
Neural network vs regression in a small sample Neural networks, in vast majority of cases, need lots of data. If you have 20 observations, neural network is clearly a bad choice. With that small sample size, network would easily memorize the data and overfit. Even cross-validation with that small sample size is disputable, because you'd be validating the results on just few samples at a time. With that small sample you should aim at simple, robust models like (regularized) linear regression. Check also other questions tagged as small-sample.
Neural network vs regression in a small sample Neural networks, in vast majority of cases, need lots of data. If you have 20 observations, neural network is clearly a bad choice. With that small sample size, network would easily memorize the data
51,448
Neural network vs regression in a small sample
In your first case, you will have 30 * 7 + 1 parameters to explain 30 * 20 data points. With such a complex model you are bound to overfit and memorize your training data to a degree. With such a small sample size, your validation results can also be unreliable and merely due to chance. I would maybe try leave-one-out cross-validation to at least get some distribution of the validation score. That makes the comparison a bit more reasonable. I would go with regression and maybe even do some feature elimination to make the model a bit simpler.
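A sketch of what leave-one-out cross-validation for the regression could look like; the data below are pure simulated noise and the dimensions (20 observations, 7 predictors) are placeholders, not the actual data set.

set.seed(42)
n <- 20; p <- 7
dat <- data.frame(matrix(rnorm(n * p), n, p), y = rnorm(n))
loo_pred <- sapply(seq_len(n), function(i) {
  fit <- lm(y ~ ., data = dat[-i, ])             # refit without observation i
  predict(fit, newdata = dat[i, , drop = FALSE])
})
sqrt(mean((dat$y - loo_pred)^2))                 # leave-one-out RMSE, to compare across candidate models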
Neural network vs regression in a small sample
In your first case, you will have 30 * 7 + 1 parameters to explain 30 * 20 data points. With such a complex model you are bound to overfit and memorize your training data to a degree. With such a smal
Neural network vs regression in a small sample In your first case, you will have 30 * 7 + 1 parameters to explain 30 * 20 data points. With such a complex model you are bound to overfit and memorize your training data to a degree. With such a small sample size, your validation results can also be unreliable and merely due to chance. I would maybe try leave-one-out cross-validation to at least get some distribution of the validation score. That makes the comparison a bit more reasonable. I would go with regression and maybe even do some feature elimination to make the model a bit simpler.
Neural network vs regression in a small sample In your first case, you will have 30 * 7 + 1 parameters to explain 30 * 20 data points. With such a complex model you are bound to overfit and memorize your training data to a degree. With such a smal
51,449
Neural network vs regression in a small sample
The sample size is so low and the variables-to-observations ratio is so high that the modeling framework has to be made even more "modest", beyond linear regression. It is quite likely that some form of regularization will improve the performance of the estimated model out of sample. Try the lasso, ridge regression, or least angle regression. A good resource on these methods is the 3rd chapter of Hastie, T., Tibshirani, R., & Friedman, J. H. (2008). The elements of statistical learning: Data mining, inference, and prediction. New York: Springer.
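A hedged sketch of how the lasso (or ridge) fit might be run with the glmnet package; the data are simulated placeholders and glmnet is assumed to be installed.

library(glmnet)
set.seed(1)
n <- 20; p <- 7
x <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)
cv_fit <- cv.glmnet(x, y, alpha = 1, nfolds = 5)   # alpha = 1 is the lasso; alpha = 0 gives ridge regression
coef(cv_fit, s = "lambda.min")                     # coefficients at the cross-validated penalty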
Neural network vs regression in a small sample
The sample size is so low and the variables-to-observations ratio is so high that the modeling framework has to be made even more "modest", beyond linear regression. It is quite likely that some form
Neural network vs regression in a small sample The sample size is so low and the variables-to-observations ratio is so high that the modeling framework has to be made even more "modest", beyond linear regression. It is quite likely that some form of regularization will improve the performance of the estimated model out of sample. Try lasso, ridge regression or least angle regression. A good resource on these methods is the 3-rd chapter of Hastie, T., Tibshirani, R., & Friedman, J. H. (2008). The elements of statistical learning: Data mining, inference, and prediction. New York: Springer.
Neural network vs regression in a small sample The sample size is so low and the variables-to-observations ratio is so high that the modeling framework has to be made even more "modest", beyond linear regression. It is quite likely that some form
51,450
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$?
One way to make your mgf approach to the problem easier is to use the power series $$(1-t^2)^{-1}=\sum_{j=0}^{\infty} t^{2j}.$$ Differentiating the rhs repeatedly is much easier than differentiating the lhs. (Note this only applies for $|t|<1$, but as you are differentiating at $t=0$ it still works.) You should see that it will just be a factorial. However, it is probably even simpler to just evaluate the expectation directly: $$E(X^{2n})=\int_{-\infty}^{\infty}\frac{x^{2n}}{2}\exp(-|x|)dx.$$ The result should be a gamma function (i.e. a factorial). This also becomes clear from the mgf power series. Update: regarding the mgf approach, to evaluate $E(X^{2n})$ we need to differentiate the rhs $2n$ times. The following result will be useful: $$\frac{\partial^k x^{r}}{\partial x^k} =\begin{cases}  \frac{r!}{(r-k)!}x^{r-k} & r=k,k+1,\dots \\ 0 & r=0,1,\dots,k-1 \end{cases}$$ Now if you apply this to the term $t^{2j}$ and take the $2n$-th derivative, we have $r=2j$, $k=2n$ and $x=t$. Then we get $$\frac{\partial^{2n} t^{2j}}{\partial t^{2n}} =\begin{cases}  \frac{(2j)!}{(2j-2n)!}t^{2j-2n} & 2j=2n,2n+2,2n+4,\dots \\ 0 & 2j=0,2,\dots,2n-2 \end{cases}$$ This means that when we add up over all the terms we can write this as $$\sum_{j=0}^{\infty}\frac{\partial^{2n} t^{2j}}{\partial t^{2n}}=0+\dots+0+(2n)!+t^2\frac{(2n+2)!}{2!}+t^4\frac{(2n+4)!}{4!}+\dots$$ Only the single term $(2n)!$ is neither $0$ nor a multiple of $t$, so setting $t=0$ leaves only that term.
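If it helps, the direct-integration route can be checked numerically in R; this is only a sanity check using the density above and the fact that the integrand is even.

even_moment <- function(n) 2 * integrate(function(x) x^(2 * n) * 0.5 * exp(-x), 0, Inf)$value
sapply(1:4, even_moment)      # approximately 2, 24, 720, 40320
factorial(2 * (1:4))          # (2n)!, which matches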
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$?
One way to make your mgf approach to the problem easier is to use the power series $$(1-t^2)^{-1}=\sum_{j=0}^{\infty} t^{2j}$$ Differentiating the rhs repeatedly is much easier than differentiating th
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$? One way to make your mgf approach to the problem easier is to use the power series $$(1-t^2)^{-1}=\sum_{j=0}^{\infty} t^{2j}$$ Differentiating the rhs repeatedly is much easier than differentiating the lhs. (note this only applies for $|t|<1$ but as you are differentiating at $t=0$ it still works). You should see that it will just be a factorial. However is it probably even simpler to just evaluate the expectation directly $$E(X^{2n})=\int_{-\infty}^{\infty}\frac{x^{2n}}{2}\exp(-|x|)dx$$ The result should be a gamma function (ie factorial). This also becomes clear from the mgf power series. update regarding the mgf approach, to evaluate $E(X^{2n})$ we need to differentiate the rhs $2n$ times. The following result will be useful $$\frac{\partial^k x^{r}}{\partial x^k} =\left( \begin{matrix}  \frac{r!}{(r-k)!}x^{r-k} & r=k,k+1,\dots \\ 0 & r=1,\dots,k-1 \end{matrix} \right) $$ Now if you apply this to the term $t^{2j}$ and take the "2n-th" derivative, we have $r=2j$ and $k=2n$ and $x=t$. Then we get $$\frac{\partial^{2n} t^{2j}}{\partial t^{2n}} =\left( \begin{matrix}  \frac{(2j)!}{(2j-2n)!}t^{2j-2n} & 2j=2n,2n+2,2n+4,2n+6,\dots \\ 0 & 2j=0,2,\dots,2n-2 \end{matrix} \right) $$ This means when we add up over all the terms we can write this as $$\sum_{j=0}^{\infty}\frac{\partial^{2n} t^{2j}}{\partial t^{2n}}=0+\dots+0+(2n)!+t^2\frac{(2n+2)!}{2!}+t^4\frac{(2n+4)!}{4!}+\dots$$ only the single tertm $(2n)!$ is not $0$ and also not a multiple of $t$. So setting $t=0$ leaves only that term.
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$? One way to make your mgf approach to the problem easier is to use the power series $$(1-t^2)^{-1}=\sum_{j=0}^{\infty} t^{2j}$$ Differentiating the rhs repeatedly is much easier than differentiating th
51,451
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$?
Hint: This is an example of a probability density function that is symmetric about zero: $$f_X(0+x) = f_X(0-x) \quad \quad \quad \text{for all } x \in \mathbb{R}.$$ Visually, this means that the distribution is reflected around the zero line, and is the same on both sides. See if you can use this property to figure out (and then prove) what the odd moments would be. The even moments are a bit trickier, but see if you can use this property to reduce those moments to a simpler form.
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$?
Hint: This is an example of a probability density function that is symmetric about zero: $$f_X(0+x) = f_X(0-x) \quad \quad \quad \text{for all } x \in \mathbb{R}.$$ Visually, this means that the distr
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$? Hint: This is an example of a probability density function that is symmetric about zero: $$f_X(0+x) = f_X(0-x) \quad \quad \quad \text{for all } x \in \mathbb{R}.$$ Visually, this means that the distribution is reflected around the zero line, and is the same on both sides. See if you can use this property to figure out (and then prove) what the odd moments would be. The even moments are a bit trickier, but see if you can use this property to reduce those moments to a simpler form.
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$? Hint: This is an example of a probability density function that is symmetric about zero: $$f_X(0+x) = f_X(0-x) \quad \quad \quad \text{for all } x \in \mathbb{R}.$$ Visually, this means that the distr
51,452
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$?
If you don't care to do unnecessary calculation, it is convenient to view your distribution as an equal mixture of an Exponential and its negative: $$\frac{1}{2} e^{-|x|} = \frac{1}{2} e^{-x}\,\mathcal{I}(x\gt 0) + \frac{1}{2} e^{x}\,\mathcal{I}(x \lt 0).$$ Because $((-1)^n + (1)^n)/2$ is either $-1+1=0$ or $(1+1)/2 = 2/2 = 1$ as $n$ is odd or even, respectively, the odd moments of your distribution are zero and the even moments are the same as those of the Exponential. But, by definition, the even Exponential moments are $$\mu_{2k} = \int_0^\infty x^{2k} e^{-x}\mathrm{d}x = \Gamma(2k+1) = (2k)!$$ Because this required no calculation more difficult than $1+1=2,$ it's hard to imagine a simpler solution. If instead you wish to pursue the moment generating approach, begin by noting that the mgf of the Exponential distribution is $$\phi(t)=\int_0^\infty e^{tx} e^{-x}\mathrm{d}x = \int_0^\infty e^{-(1-t)x}\mathrm{d}x = \frac{1}{1-t}.$$ Thus the mgf of this mixture is $$(\phi(t) + \phi(-t))/2 = \frac{1}{2}\left(\frac{1}{1-t} + \frac{1}{1-(-t)}\right) = \frac{1}{1-t^2}.$$ This is analytic near $0$ and therefore equals the power series expansion given by the Binomial Theorem as $$\frac{1}{1-t^2} = (1 + (-t^2))^{-1} = \sum_{k=0}^\infty \binom{-1}{k} (-t^2)^k = \sum_{k=0}^\infty t^{2k} = \sum_{k=0}^\infty \color{red}{(2k)!}\, \color{gray}{\frac{t^{2k}}{(2k)!}},$$ from which you may read off the moments as the coefficients of $t^n/n!:$ again we see they are zero for the odd moments and $(2k)!$ for the even moments.
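The mixture representation is also easy to check by simulation (seed and sample size arbitrary): draw an Exponential and flip its sign with probability 1/2.

set.seed(7)
m <- 1e6
x <- sample(c(-1, 1), m, replace = TRUE) * rexp(m)   # equal mixture of Exp(1) and its negative
c(mean(x), mean(x^2), mean(x^3), mean(x^4))          # should be close to 0, 2, 0, 24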
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$?
If you don't care to do unnecessary calculation, it is convenient to view your distribution as an equal mixture of an Exponential and its negative: $$\frac{1}{2} e^{-|x|} = \frac{1}{2} e^{-x}\,\mathca
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$? If you don't care to do unnecessary calculation, it is convenient to view your distribution as an equal mixture of an Exponential and its negative: $$\frac{1}{2} e^{-|x|} = \frac{1}{2} e^{-x}\,\mathcal{I}(x\gt 0) + \frac{1}{2} e^{x}\,\mathcal{I}(x \lt 0).$$ Because $((-1)^n + (1)^n)/2$ is either $-1+1=0$ or $(1+1)/2 = 2/2 = 1$ as $n$ is odd or even, respectively, the odd moments of your distribution are zero and the even moments are the same as those of the Exponential. But, by definition, the even Exponential moments are $$\mu_{2k} = \int_0^\infty x^{2k} e^{-x}\mathrm{d}x = \Gamma(2k+1) = (2k)!$$ Because this required no calculation more difficult than $1+1=2,$ it's hard to imagine a simpler solution. If instead you wish to pursue the moment generating approach, begin by noting that the mgf of the Exponential distribution is $$\phi(t)=\int_0^\infty e^{tx} e^{-x}\mathrm{d}x = \int_0^\infty e^{-(1-t)x}\mathrm{d}x = \frac{1}{1-t}.$$ Thus the mgf of this mixture is $$(\phi(t) + \phi(-t))/2 = \frac{1}{2}\left(\frac{1}{1-t} + \frac{1}{1-(-t)}\right) = \frac{1}{1-t^2}.$$ This is analytic near $0$ and therefore equals the power series expansion given by the Binomial Theorem as $$\frac{1}{1-t^2} = (1 + (-t^2))^{-1} = \sum_{k=0}^\infty \binom{-1}{k} (-t^2)^k = \sum_{k=0}^\infty t^{2k} = \sum_{k=0}^\infty \color{red}{(2k)!}\, \color{gray}{\frac{t^{2k}}{(2k)!}},$$ from which you may read off the moments as the coefficients of $t^n/n!:$ again we see they are zero for the odd moments and $(2k)!$ for the even moments.
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$? If you don't care to do unnecessary calculation, it is convenient to view your distribution as an equal mixture of an Exponential and its negative: $$\frac{1}{2} e^{-|x|} = \frac{1}{2} e^{-x}\,\mathca
51,453
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$?
Since the OP seems to be having difficulty with the various hints in the comments and the other answers, here is a heuristic method that yields the right answer in this instance. \begin{align} E[\exp(tX)] &= E\left[1 + tX + \frac{(tX)^2}{2!} + \frac{(tX)^3}{3!} + \cdots\right]\\ &= 1 + tE[X] + \frac{t^2}{2!}E[X^2] + \frac{t^3}{3!}E[X^3] + \cdots\tag{1} \end{align} and so if we have a power series for $E[\exp(tX)] = M_X(t)$ that we have managed to find by hook or by crook and without looking at the displayed equation above since that will just serve to confuse us, then we can just look at the coefficient of $t^n$ in the power series that we have and say "Hey Ma! I think that the coefficient of $t^n$ is just $\frac{1}{n!}E[X^n]$ and so I can find $E[X^n]$ by multiplying the coefficient that I already have by $n!$." For the OP's specific case, he has already found that $$M_X(t) = \frac{1}{1-t^2} = 1 + t^2 + t^4 + \cdots + t^{2n} + \cdots \tag{2}$$ (that's the "by hook or by crook" part) and so he can compare $(1)$ and $(2)$ to figure out what the moments are. I will leave it to the OP to tell Ma that $E[X^n]=0$ whenever $n$ is odd. Whether he wishes to tell Ma what $E[X^n]$ is for even $n$ is a matter for him to decide. The $E[X^4]=12$ that the OP claims (in a comment) to have calculated doesn't sound right to me (I think it should be $E[X^4] = 4! = 24$), but who am I to interfere in the sacred relationship between mother and child?
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$?
Since the OP seems to be having difficulty with the various hints in the comments and the other answers, here is a heuristic method that yields the right answer in this instance. \begin{align} E[\exp(
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$? Since the OP seems to be having difficulty with the various hints in the comments and the other answers, here is a heuristic method that yields the right answer in this instance. \begin{align} E[\exp(tX)] &= E\left[1 + tX + \frac{(tX)^2}{2!} + \frac{(tX)^3}{3!} + \cdots\right]\\ &= 1 + tE[X] + \frac{t^2}{2!}E[X^2] + \frac{t^3}{3!}E[X^3] + \cdots\tag{1}\\ \end{align} and so if we have a power series for $E[\exp(tX)] = M_X(t)$ that we have managed to find by hook or by crook and without looking at the displayed equation above since that will just serve to confuse us, then we can just look at the coefficient of $t^n$ in the power series that we have and say "Hey Ma! I think that the coefficient of $t^n$ is just $\frac{1}{n!}E[X^n]$ and so I can find $E[X^n]$ by multiplying the coefficient that I already have by $n!$. For the OP's specific case, he has already found that $$M_X(t) = \frac{1}{1-t^2} = 1 + t^2 + t^4 + \cdots + t^{2n} + \cdots \tag{2}$$ (that's the "by hook or by crook" part) and so he can compare $(1)$ and $(2)$ to figure out what the moments are. I will leave it to the OP to tell Ma that $E[X^n]=0$ whenever $n$ is odd. Whether he wishes to tell Ma what $E[X^n]$ is for even $n$ is a matter for him to decide. The $E[X^4]=12$ that the OP claims (in a comment) to have calculated doesn't sound right to me (I think it should be $E[X^4] = 4! = 24$), but who am I to interfere in the sacred relationship between mother and child?
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$? Since the OP seems to be having difficulty with the various hints in the comments and the other answers, here is a heuristic method that yields the right answer in this instance. \begin{align} E[\exp(
51,454
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$?
So I calculated the first, second, third, and fourth derivatives. I got $E(X^1)=0$, $E(X^2)=2$, $E(X^3)=0$, and $E(X^4)=12$. These derivatives are quite long to compute at this point, so I'm wondering if there is an easier way to go about this to obtain a formula for the evens. You could use the Taylor series expansion: $$\frac{1}{1-t^2} = \sum_{k=0}^\infty t^{2k}$$ However, this is a bit of circular reasoning, since the Taylor series expansion is itself derived by computing the derivatives. In that case you can just as well look up directly a formula for the higher-order moments of the Laplace distribution. You might find that Taylor series expansion indirectly - not using $f(t) = \sum_{n=0}^\infty f^{(n)}(0)\, t^n/n!$ - by instead using the formula for a geometric series. However, you could also 'manually' derive the derivatives (that means straightforward computation using the chain rule and product rule), and when you look at the pattern of the terms you will find that many of the terms become zero and a regular pattern emerges. Say we substitute $u = t^2$; then the derivation looks simpler: $$ \frac{\text{d}^n}{\text{d}u^n} \frac{1}{(1-u)} = \frac{n!}{(1-u)^{n+1}}$$ Now use Faà di Bruno's formula (the chain rule applied several times): $$ \frac{\text{d}^n}{\text{d}t^n} \frac{1}{(1-u)} = \sum_{k=1}^n \frac{k!}{(1-u)^{k+1}} \cdot B_{n,k}(2t,2,0,...,0)$$ where $B_{n,k}$ refers to the Bell polynomials. Most of the terms will be zero and you get $$ \frac{\text{d}^{2n}}{\text{d}t^{2n}} \frac{1}{(1-t^2)} = \sum_{k=0}^n c_{nk} \frac{t^{2k}}{(1-t^2)^{1+n+k}}$$ with $$c_{nk} = 2^{2k} \frac{(2n)! \cdot (n+k)!}{(n-k)! \cdot (2k)!} $$ and for the value at $t=0$ you have $$ \frac{\text{d}^{2n}}{\text{d}t^{2n}} \frac{1}{(1-t^2)} = c_{n0} = (2n)! $$
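The repeated differentiation can also be handed to R's symbolic derivative function D() as a check; only the value at $t=0$ matters, and six derivatives are shown just for illustration.

d <- quote(1 / (1 - t^2))
moments <- numeric(6)
for (k in 1:6) {
  d <- D(d, "t")                     # k-th derivative of the mgf
  moments[k] <- eval(d, list(t = 0))
}
moments                              # 0 2 0 24 0 720
factorial(c(2, 4, 6))                # the even moments (2n)! again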
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$?
So I calculated the first, second, third, and fourth derivatives. I got $E(X^1)=0$, $E(X^2)=2$, $E(X^3)=0$, and $E(X^4)=12$. These derivatives are quite long to compute at this point, so I m wondering
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$? So I calculated the first, second, third, and fourth derivatives. I got $E(X^1)=0$, $E(X^2)=2$, $E(X^3)=0$, and $E(X^4)=12$. These derivatives are quite long to compute at this point, so I m wondering if there is an easier way to go about this to obtain a formula for the evens. You could use the Taylor series expansion: $$\frac{1}{1-t^2} = \sum_{k=0}^\infty t^{2k}$$ However, this is a bit of a circular reasoning since the Taylor series expansion is itself derived by computing the derivatives. In that case you can just as well look up directly a formula for the higher order moments of the Laplace distribution. You might find that Taylor series expansion indirectly - not using $f(x) = \sum_{n=0}^\infty f^{(n)}/n! t^k $ - by instead using the formula for a geometric series. However, you could also 'manually' derive the derivatives (that means straight forward computation using chain rule and product rule) and when you look at the pattern of the terms then you will find that many of the terms become zero and a regular pattern emerges. Say we substitute $u = t^2$ then the derivation it looks simpler: $$ \frac{\text{d}^n}{\text{d}u^n} \frac{1}{(1-u)} = \frac{n!}{(1-u)^n}$$ Now use Faà di Bruno's formula (chain rule but then applied several times): $$ \frac{\text{d}^n}{\text{d}t^n} \frac{1}{(1-u)} = \sum_{k=1}^n \frac{k!}{(1-u)^k} \cdot B_{n,k}(2t,2,0,...,0)$$ where $B_{n,k}$ refers to Bell polynomials. Most of the terms will be zero and you get $$ \frac{\text{d}^{2n}}{\text{d}t^{2n}} \frac{1}{(1-t^2)} = \sum_{k=0}^n c_{nk} \frac{t^{2k}}{(1-t^2)^{1+n+k}}$$ with $$c_{nk} = 2^{2k} \frac{(2n)! \cdot (n+k)!}{(n-k)! \cdot (2k)!} $$ and for the value at $t=0$ you have $$ \frac{\text{d}^{2n}}{\text{d}t^{2n}} \frac{1}{(1-t^2)} = c_{n0} = (2n)! $$
How do I find all even moments (and odd moments) for $f_X(x)=\frac{1}{2}e^{-|x|}$? So I calculated the first, second, third, and fourth derivatives. I got $E(X^1)=0$, $E(X^2)=2$, $E(X^3)=0$, and $E(X^4)=12$. These derivatives are quite long to compute at this point, so I m wondering
51,455
Why are two random variables independent if the Pearson's correlation coefficient equals zero, but the same result does not hold for covariance?
Zero correlation does not imply independence. Either: there is a typo/mistake and the book is wrong, or the book made additional assumptions previously, for example that the joint distribution of A and B is bivariate normal. There do exist additional conditions under which zero correlation, together with those conditions, would imply independence.
Why are two random variables independent if the Pearson's correlation coefficient equals zero, but t
Zero correlation does not imply independence. Either: There is a typo/mistake and the book is wrong or The book made additional assumptions previously, for example, that the joint distribution of A a
Why are two random variables independent if the Pearson's correlation coefficient equals zero, but the same result does not hold for covariance? Zero correlation does not imply independence. Either: There is a typo/mistake and the book is wrong or The book made additional assumptions previously, for example, that the joint distribution of A and B were bivariate normal. There exist additional conditions such that zero correlation and these conditions would imply independence.
Why are two random variables independent if the Pearson's correlation coefficient equals zero, but t Zero correlation does not imply independence. Either: There is a typo/mistake and the book is wrong or The book made additional assumptions previously, for example, that the joint distribution of A a
51,456
Why are two random variables independent if the Pearson's correlation coefficient equals zero, but the same result does not hold for covariance?
Your book is wrong. Correlation zero is not a sufficient condition for independence. You can have Pearson correlation zero for variables that are not independent. The independent variables will have both covariance and correlation zero, provided their variances are non-zero. There's no contradiction here.
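A standard counterexample, sketched in R with an arbitrary seed and sample size: $Y = X^2$ is completely determined by $X$, yet the Pearson correlation is essentially zero.

set.seed(3)
x <- rnorm(1e5)
y <- x^2                 # y is a deterministic function of x, so they are clearly dependent
cor(x, y)                # close to 0: zero correlation without independence
cor(x^2, y)              # exactly 1, exposing the (nonlinear) dependence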
Why are two random variables independent if the Pearson's correlation coefficient equals zero, but t
Your book is wrong. Correlation zero is not a sufficient condition for independence. You can have Pearson correlation zero for variables that are not independent. The independent variables will have
Why are two random variables independent if the Pearson's correlation coefficient equals zero, but the same result does not hold for covariance? Your book is wrong. Correlation zero is not a sufficient condition for independence. You can have Pearson correlation zero for variables that are not independent. The independent variables will have both covariance and correlation zero, provided their variances are non-zero. There's no contradiction here.
Why are two random variables independent if the Pearson's correlation coefficient equals zero, but t Your book is wrong. Correlation zero is not a sufficient condition for independence. You can have Pearson correlation zero for variables that are not independent. The independent variables will have
51,457
If two events are not mutually exclusive, does that mean they're independent?
No. You can have dependent events that are not mutually exclusive. Consider the events: $A$: The radio traffic report says that traffic is "heavy". $B$: I am late for work $B^c$: I am not late for work Neither $B$ nor $B^c$ are independent of $A$ (since I am more likely to be late when the radio says traffic is heavy than otherwise), but neither is mutually exclusive of it since I would generally still be there in sufficient time, but will not always be there on time irrespective of $A$ happening. [To be independent, two events with non-zero probability cannot be mutually exclusive, but the converse doesn't hold.]
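A small simulation along the lines of this example; all the probabilities below are invented purely for illustration.

set.seed(10)
m <- 1e5
heavy <- runif(m) < 0.3                        # A: the radio reports heavy traffic (30% of days)
late  <- runif(m) < ifelse(heavy, 0.4, 0.1)    # B: being late is more likely when A occurs
mean(heavy & late)            # clearly > 0, so A and B are not mutually exclusive
mean(heavy) * mean(late)      # differs from P(A and B) above, so A and B are not independent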
If two events are not mutually exclusive, does that mean they're independent?
No. You can have dependent events that are not mutually exclusive. Consider the events: $A$: The radio traffic report says that traffic is "heavy". $B$: I am late for work $B^c$: I am not late for wor
If two events are not mutually exclusive, does that mean they're independent? No. You can have dependent events that are not mutually exclusive. Consider the events: $A$: The radio traffic report says that traffic is "heavy". $B$: I am late for work $B^c$: I am not late for work Neither $B$ nor $B^c$ are independent of $A$ (since I am more likely to be late when the radio says traffic is heavy than otherwise), but neither is mutually exclusive of it since I would generally still be there in sufficient time, but will not always be there on time irrespective of $A$ happening. [To be independent, two events with non-zero probability cannot be mutually exclusive, but the converse doesn't hold.]
If two events are not mutually exclusive, does that mean they're independent? No. You can have dependent events that are not mutually exclusive. Consider the events: $A$: The radio traffic report says that traffic is "heavy". $B$: I am late for work $B^c$: I am not late for wor
51,458
If two events are not mutually exclusive, does that mean they're independent?
Suppose for your two events $A$ and $B$, $P(A)\ne 0$ and $P(B)\ne 0$. Now suppose $A$ and $B$ are mutually exclusive which means $A\cap B =\varnothing$ then $P(A\cap B)=0 \tag{1}$ If $A$ and $B$ are independent then $P(A\cap B)=P(A)P(B)\ne0 \tag{2}$ since neither $P(A)$ nor $P(B)$ are $0$. You can see $(1)$ and $(2)$ are contradictory. So you can see if $A$ and $B$ are mutually exclusive then, under the additional condition that $P(A)$ and $P(B)$ are both non-zero, they are in fact dependent i.e $P(A\cap B)\ne P(A)P(B)$. One example is a disease and its vaccine. Suppose you have a 100% effective vaccine such that you cannot contract the disease if you were vaccinated. Then being vaccinated and contracting the disease are mutually exclusive and they are dependent.
If two events are not mutually exclusive, does that mean they're independent?
Suppose for your two events $A$ and $B$, $P(A)\ne 0$ and $P(B)\ne 0$. Now suppose $A$ and $B$ are mutually exclusive which means $A\cap B =\varnothing$ then $P(A\cap B)=0 \tag{1}$ If $A$ and $B$ a
If two events are not mutually exclusive, does that mean they're independent? Suppose for your two events $A$ and $B$, $P(A)\ne 0$ and $P(B)\ne 0$. Now suppose $A$ and $B$ are mutually exclusive which means $A\cap B =\varnothing$ then $P(A\cap B)=0 \tag{1}$ If $A$ and $B$ are independent then $P(A\cap B)=P(A)P(B)\ne0 \tag{2}$ since neither $P(A)$ nor $P(B)$ are $0$. You can see $(1)$ and $(2)$ are contradictory. So you can see if $A$ and $B$ are mutually exclusive then, under the additional condition that $P(A)$ and $P(B)$ are both non-zero, they are in fact dependent i.e $P(A\cap B)\ne P(A)P(B)$. One example is a disease and its vaccine. Suppose you have a 100% effective vaccine such that you cannot contract the disease if you were vaccinated. Then being vaccinated and contracting the disease are mutually exclusive and they are dependent.
If two events are not mutually exclusive, does that mean they're independent? Suppose for your two events $A$ and $B$, $P(A)\ne 0$ and $P(B)\ne 0$. Now suppose $A$ and $B$ are mutually exclusive which means $A\cap B =\varnothing$ then $P(A\cap B)=0 \tag{1}$ If $A$ and $B$ a
51,459
If two events are not mutually exclusive, does that mean they're independent?
Events that can't occur simultaneously are mutually exclusive. E.g., if you toss a coin you will get heads or tails, so the event of getting heads and the event of getting tails are mutually exclusive. If events are independent, the outcome of one does not affect the other. E.g., two separate coin tosses are independent, as the outcome of one does not affect the other. So (for events of non-zero probability) mutual exclusivity implies dependence; therefore mutually exclusive events can't be independent. PS. These are layman definitions and might not be mathematically precise.
If two events are not mutually exclusive, does that mean they're independent?
Event that can't occur simultaneously are mutually exclusive. Eg. If you toss a coin you will get head or tail, so event of getting head and event of getting tail are mutually exclusive. If events are
If two events are not mutually exclusive, does that mean they're independent? Event that can't occur simultaneously are mutually exclusive. Eg. If you toss a coin you will get head or tail, so event of getting head and event of getting tail are mutually exclusive. If events are independent outcome of one does not affect other event.Eg. 2 different events of tossing two coins are independent as outcome of one does not affect other. So mutual dependence is prerequisite for mutual exclusivity therefore mutually exclusive events can't be independent. PS. These are layman definitions might not be mathematically correct.
If two events are not mutually exclusive, does that mean they're independent? Event that can't occur simultaneously are mutually exclusive. Eg. If you toss a coin you will get head or tail, so event of getting head and event of getting tail are mutually exclusive. If events are
51,460
If two events are not mutually exclusive, does that mean they're independent?
Consider this example: rolling a die only once. Event A: the result is 1 or 6. Event B: the result is 1 or 3. Are Event A and Event B mutually exclusive? No: both events occur simultaneously if the result is 1. Are Event A and Event B independent? No: if event A occurs, it increases the chance that event B occurs. So does "not mutually exclusive" imply independence? NO!
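Since the sample space has only six outcomes, both properties can be checked by direct enumeration (a quick sketch):

outcomes <- 1:6
A <- outcomes %in% c(1, 6)
B <- outcomes %in% c(1, 3)
mean(A & B)              # P(A and B) = 1/6 > 0, so not mutually exclusive
mean(A) * mean(B)        # P(A) * P(B) = 1/9, not equal to 1/6, so not independent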
If two events are not mutually exclusive, does that mean they're independent?
Consider this example: Rolling the dice only once: Event A: The result is 1 or 6. Event B: The result is 1 or 3. Does Event A and Event B mutually Exclusive? No, two events occur simultaneously if
If two events are not mutually exclusive, does that mean they're independent? Consider this example: Rolling the dice only once: Event A: The result is 1 or 6. Event B: The result is 1 or 3. Does Event A and Event B mutually Exclusive? No, two events occur simultaneously if the result is 1. Does Event A and Event B independent? No, because if event A occurs, it increases the chance that event B occurs. So does "not mutual exclusive" implies independence? NO!
If two events are not mutually exclusive, does that mean they're independent? Consider this example: Rolling the dice only once: Event A: The result is 1 or 6. Event B: The result is 1 or 3. Does Event A and Event B mutually Exclusive? No, two events occur simultaneously if
51,461
Testing significance of overlap in R
If my comment is right, then you can Monte-carlo simulate it: sim=unlist(lapply(1:10000, function(i){A=sample(1:27511,23706);B=sample(1:27511,14557);return(sum(A %in% B))})) hist(sim) Probably neater ways to do that loop but whatever. Your 10752 is waaaay over to the left of my histogram, so significantly fewer common elements than expected by chance. There may be some exact test that does the same. There's probably a Normal approximation - in which case it looks about 20 sigma off the mean: hist(sim,xlim=c(10752,12660))
Testing significance of overlap in R
If my comment is right, then you can Monte-carlo simulate it: sim=unlist(lapply(1:10000, function(i){A=sample(1:27511,23706);B=sample(1:27511,14557);return(sum(A %in% B))})) hist(sim) Probably neate
Testing significance of overlap in R If my comment is right, then you can Monte-carlo simulate it: sim=unlist(lapply(1:10000, function(i){A=sample(1:27511,23706);B=sample(1:27511,14557);return(sum(A %in% B))})) hist(sim) Probably neater ways to do that loop but whatever. Your 10752 is waaaay over to the left of my histogram, so significantly fewer common elements than expected by chance. There may be some exact test that does the same. There's probably a Normal approximation - in which case it looks about 20 sigma off the mean: hist(sim,xlim=c(10752,12660))
Testing significance of overlap in R If my comment is right, then you can Monte-carlo simulate it: sim=unlist(lapply(1:10000, function(i){A=sample(1:27511,23706);B=sample(1:27511,14557);return(sum(A %in% B))})) hist(sim) Probably neate
51,462
Testing significance of overlap in R
A model for this situation is to put 61000 ($n$) balls into an urn, of which 23000 ($n_1$) are labeled "A". 15000 ($k$) of these are drawn randomly without replacement. Of these, $m$ are found to be labeled "A". What is the chance that $m \ge 10000$? The total number of possible samples equals the number of $k$-element subsets of an $n$-set, $\binom{n}{k}$. All are equally likely to be drawn, by hypothesis. Let $i \ge 10000$. The number of possible samples with $i$ A's is the number of subsets of an $n_1$-set having $i$ A's, times the number of subsets of an $n-n_1$-set having $k-i$ non-A's; that is, $\binom{n_1}{i}\binom{n-n_1}{k-i}$. Summing over all possible $i$ and dividing by the chance of each sample gives the probability of observing an overlap of $m = 10000$ or greater: $$\Pr(\text{overlap} \ge m) = \frac{1}{\binom{n}{k}} \sum_{i=m}^{\min(n_1,k)} \binom{n_1}{i}\binom{n-n_1}{k-i}.$$ This answer is exact. For rapid calculation it can be expressed (in closed form) in terms of generalized hypergeometric functions; the details of this expression can be provided by a symbolic algebra program like Mathematica. The answer in this particular instance is $3.8057078557887\ldots \times 10^{-1515}$. We can also use a Normal approximation. Coding A's as 1 and non-A's as 0, as usual, the mean of the urn is $p = 23000/61000 \sim 0.377$. The standard deviation of the urn is $\sigma = \sqrt{p(1-p)}$. Therefore the standard error of the observed proportion, $u = 10000/15000 \sim 0.667$, is $$se(u) = \sigma \sqrt{(1 - \frac{15000-1}{61000-1})/15000} \sim 0.003436.$$ (see http://www.ma.utexas.edu/users/parker/sampling/woreplshort.htm). Thus the observed proportion is $z = \frac{u - p}{se(u)} \sim 84.28$ standard errors larger than expected. Obviously the corresponding p-value is low (it computes to $1.719\ldots \times 10^{-1545}$). Although the Normal approximation is no longer very accurate at such extreme z values (it's off by 30 orders of magnitude!), it still gives excellent guidance.
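For what it's worth, this exact tail probability is a hypergeometric upper tail, so it can be computed directly with R's phyper(); working on the log scale is necessary because the value underflows double precision.

n <- 61000; n1 <- 23000; k <- 15000; m <- 10000
phyper(m - 1, n1, n - n1, k, lower.tail = FALSE, log.p = TRUE) / log(10)
# log10 of the p-value, roughly -1514.4, i.e. about 3.8e-1515, in line with the exact result above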
Testing significance of overlap in R
A model for this situation is to put 61000 ($n$) balls into an urn, of which 23000 ($n_1$) are labeled "A". 15000 ($k$) of these are drawn randomly without replacement. Of these, $m$ are found to be
Testing significance of overlap in R A model for this situation is to put 61000 ($n$) balls into an urn, of which 23000 ($n_1$) are labeled "A". 15000 ($k$) of these are drawn randomly without replacement. Of these, $m$ are found to be labeled "A". What is the chance that $m \ge 10000$? The total number of possible samples equals the number of $k$-element subsets of an $n$-set, $\binom{n}{k}$. All are equally likely to be drawn, by hypothesis. Let $i \ge 10000$. The number of possible samples with $i$ A's is the number of subsets of an $n_1$-set having $i$ A's, times the number of subsets of an $n-n_1$-set having $k-i$ non-A's; that is, $\binom{n_1}{i}\binom{n-n_1}{k-i}$. Summing over all possible $i$ and dividing by the chance of each sample gives the probability of observing an overlap of $m = 10000$ or greater: $$\Pr(\text{overlap} \ge m) = \frac{1}{\binom{n}{k}} \sum_{i=m}^{\min(n_1,k)} \binom{n_1}{i}\binom{n-n_1}{k-i}.$$ This answer is exact. For rapid calculation it can be expressed (in closed form) in terms of generalized hypergeometric functions; the details of this expression can be provided by a symbolic algebra program like Mathematica. The answer in this particular instance is $3.8057078557887\ldots \times 10^{-1515}$. We can also use a Normal approximation. Coding A's as 1 and non-A's as 0, as usual, the mean of the urn is $p = 23000/61000 \sim 0.377$. The standard deviation of the urn is $\sigma = \sqrt{p(1-p)}$. Therefore the standard error of the observed proportion, $u = 10000/15000 \sim 0.667$, is $$se(u) = \sigma \sqrt{(1 - \frac{15000-1}{61000-1})/15000} \sim 0.003436.$$ (see http://www.ma.utexas.edu/users/parker/sampling/woreplshort.htm). Thus the observed proportion is $z = \frac{u - p}{se(u)} \sim 84.28$ standard errors larger than expected. Obviously the corresponding p-value is low (it computes to $1.719\ldots \times 10^{-1545}$). Although the Normal approximation is no longer very accurate at such extreme z values (it's off by 30 orders of magnitude!), it still gives excellent guidance.
Testing significance of overlap in R A model for this situation is to put 61000 ($n$) balls into an urn, of which 23000 ($n_1$) are labeled "A". 15000 ($k$) of these are drawn randomly without replacement. Of these, $m$ are found to be
51,463
Testing significance of overlap in R
You have a large enough sample size that a Chi-square test is reasonable. If you are using R, then chisq.test() is the function to utilize; its help page can be found via ?chisq.test.
Testing significance of overlap in R
You have a large enough sample size that a Chi-square test is reasonable. If you are using R, then chisq.test() is the function to utilize; its help page can be found via ?chisq.test.
Testing significance of overlap in R You have a large enough sample size that a Chi-square test is reasonable. If you are using R, then chisq.test() is the function to utilize; its help page can be found via ?chisq.test.
Testing significance of overlap in R You have a large enough sample size that a Chi-square test is reasonable. If you are using R, then chisq.test() is the function to utilize; its help page can be found via ?chisq.test.
51,464
Testing significance of overlap in R
Well, if nothing else, you can do it by simulation: Draw 23000 random elements out of 61000 and draw 15000 random elements from the same 61000. Now count the number of overlapped items. Repeat 100000 times (should not take all too long): now you have an empirical distribution of the number of overlapped items, and you can easily find an empirical p-value for 10000 elements. Code like the following will do this: countOverlap<-function(total=61000, numgA=23000, numgB=15000, replace=FALSE){ groupA<-sample.int(total, numgA, replace=replace) groupB<-sample.int(total, numgB, replace=replace) return(length(intersect(groupA, groupB))) } tmpres<-replicate(1000, countOverlap(total=100, numgA=20, numgB=30)) #if your true observed value was 10: pval<-mean(tmpres >= 10) pval However, it is possible that an analytical solution exists, and that simulation is not needed. Maybe someone else can provide that for you.
Testing significance of overlap in R
Well, if nothing else, you can do it by simulation: Draw 23000 random elements out of 61000 and draw 15000 random elements from the same 61000. Now count the number of overlapped items. Repeat 100000
Testing significance of overlap in R Well, if nothing else, you can do it by simulation: Draw 23000 random elements out of 61000 and draw 15000 random elements from the same 61000. Now count the number of overlapped items. Repeat 100000 times (should not take all too long): now you have an empirical distribution of the number of overlapped items, and you can easily find an empirical p-value for 10000 elements. Code like the following will do this: countOverlap<-function(total=61000, numgA=23000, numgB=15000, replace=FALSE){ groupA<-sample.int(total, numgA, replace=replace) groupB<-sample.int(total, numgB, replace=replace) return(length(intersect(groupA, groupB))) } tmpres<-replicate(1000, countOverlap(total=100, numgA=20, numgB=30)) #if your true observed value was 10: pval<-mean(tmpres >= 10) pval However, it is possible that an analytical solution exists, and that simulation is not needed. Maybe someone else can provide that for you.
Testing significance of overlap in R Well, if nothing else, you can do it by simulation: Draw 23000 random elements out of 61000 and draw 15000 random elements from the same 61000. Now count the number of overlapped items. Repeat 100000
51,465
Testing significance of overlap in R
The correct approach depends on what is fixed. If the sizes of the 2 sets are fixed, then this is just done by a chi-squared test of independence, where the 2x2 table is "in / not in" each set. If only the total number of items is fixed, then the chi-squared test is not correct.
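A sketch of that 2x2 "in / not in" table with made-up counts (replace them with the real tallies):

tab <- matrix(c(900, 300,      # column 1: in B      (rows: in A, not in A)
                400, 1400),    # column 2: not in B  (rows: in A, not in A)
              nrow = 2,
              dimnames = list(A = c("in", "not in"), B = c("in", "not in")))
chisq.test(tab)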
Testing significance of overlap in R
The correct approach depends on what is fixed. If the size of the 2 sets are "fixed" then this is just done by a chi-squared test of independence where the 2x2 table is "in /not in" each set. If onl
Testing significance of overlap in R The correct approach depends on what is fixed. If the size of the 2 sets are "fixed" then this is just done by a chi-squared test of independence where the 2x2 table is "in /not in" each set. If only the total number of items is fixed, then the chi-squared test is not correct.
Testing significance of overlap in R The correct approach depends on what is fixed. If the size of the 2 sets are "fixed" then this is just done by a chi-squared test of independence where the 2x2 table is "in /not in" each set. If onl
51,466
What would be a good way to work with a large data set in Excel?
If you feel you may start more such very large Excel-type projects in the future, then you should consider installing R (free) and spending 10 hours learning its basics, which will let you do what you mention in your question in a much more efficient manner than Excel. R for Beginners PDF You can ask questions about R on Stack Overflow and here.
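As a small taste of what that looks like, here is a hedged sketch with simulated data standing in for the spreadsheet (the group/value columns are invented):

set.seed(1)
dat <- data.frame(group = sample(letters[1:5], 1e5, replace = TRUE),
                  value = rnorm(1e5))
nrow(dat)                                        # 100,000 rows are not a problem in R
aggregate(value ~ group, data = dat, FUN = sd)   # standard deviation within each group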
What would be a good way to work with a large data set in Excel?
If you feel you may start more of such very large Excel type projects in the future, then you should consider installing and spending 10 hours learning the basics of R (free), which will let you do wh
What would be a good way to work with a large data set in Excel? If you feel you may start more of such very large Excel type projects in the future, then you should consider installing and spending 10 hours learning the basics of R (free), which will let you do what you mention in your question, in a much more efficient manner than Excel. R for Beginners PDF You can ask questions about R on StackOverflow and here.
What would be a good way to work with a large data set in Excel? If you feel you may start more of such very large Excel type projects in the future, then you should consider installing and spending 10 hours learning the basics of R (free), which will let you do wh
51,467
What would be a good way to work with a large data set in Excel?
Although I would always recommend using R, you could nevertheless achieve what you want with Python. There is at least one package for reading dbf files. Furthermore, SciPy offers a great range of functions for statistical analysis. For example, the library ScientificPython probably contains the functions you need. The best idea is to check scipy.org; there you will find what you want. (But learning R is a great idea!!)
What would be a good way to work with a large data set in Excel?
Although I would always recommend to use R, you could nevertheless achieve what you want with python. There is at least a package for reading dbf files. Furthermore, scipy offers a great range of func
What would be a good way to work with a large data set in Excel? Although I would always recommend to use R, you could nevertheless achieve what you want with python. There is at least a package for reading dbf files. Furthermore, scipy offers a great range of functions for statistical analysis. For example the library ScientifyPython probably contains the functions you need. The best idea is to check scipy.org. There you will find what you want. (But learning R is a great idea!!)
What would be a good way to work with a large data set in Excel? Although I would always recommend to use R, you could nevertheless achieve what you want with python. There is at least a package for reading dbf files. Furthermore, scipy offers a great range of func
51,468
What would be a good way to work with a large data set in Excel?
Excel 2010 and 2013 have a free Microsoft add-in called Power Pivot which allows you to work with millions of rows. It's a columnar database that is designed for creating pivot tables, subtotals, etc., and has standard deviation and similar summaries predefined. You might also look at the other Microsoft add-ins: Power Query (data input), Power View (visualisation), and Power Map (mapping).
What would be a good way to work with a large data set in Excel?
Excel 2010 and 2013 have a free microsoft addin called power pivot which allows you to work with millions of rows. Its a columnar database that is designed for creating pivot tables, subtotals etc an
What would be a good way to work with a large data set in Excel? Excel 2010 and 2013 have a free microsoft addin called power pivot which allows you to work with millions of rows. Its a columnar database that is designed for creating pivot tables, subtotals etc and has standard deviation etc predefined. you might also look at other microsoft addins power query (data input), power view (visualisation), and power map (mapping)
What would be a good way to work with a large data set in Excel? Excel 2010 and 2013 have a free microsoft addin called power pivot which allows you to work with millions of rows. Its a columnar database that is designed for creating pivot tables, subtotals etc an
51,469
How to compute $\eta^2$ in ANOVA by hand?
It rather depends on what you mean by "by hand". There is more than one way to do it. You can use the residuals (etasq() here is from the heplots package): > etasq(xyaov) Partial eta^2 x 0.4854899 Residuals NA > 1 - var(xyaov$residuals)/var(y) [1] 0.4854899 (You didn't set a seed, so we don't have exactly the same result). Almost equivalently, you can use the predicted values: > var(predict(xyaov)) / var(y) [1] 0.4854899 You can use the sums of squares from the ANOVA model (which is given by the rather unintuitive): > summary(xyaov)[[1]][[2]][[1]] / (summary(xyaov)[[1]][[2]][[2]] + summary(xyaov)[[1]][[2]][[1]] ) [1] 0.4854899 You can use summary.lm and get the R^2 (because R-squared is eta squared): > summary.lm(xyaov)$r.squared [1] 0.4854899 You can do it with no reference to the aov() function by calculating the mean for each group, then the residuals, then eta squared based on that: xy <- as.data.frame(cbind(x, y)) xy$y <- as.numeric(as.character(xy$y)) # cbind() with a character vector coerced y, so convert it back to numeric x.means <- as.data.frame(tapply(y, x, mean)) x.means$x <- row.names(x.means) xy <- merge(x.means, xy, by="x") xy$resid <- xy[, 2] - xy$y 1 - var(xy$resid) / var(xy$y) [1] 0.4854899
How to compute $\eta^2$ in ANOVA by hand?
It rather depends on what you mean by "by hand". There is more than one way to do it. You can use the residuals: > etasq(xyaov) Partial eta^2 x 0.4854899 Residuals NA
How to compute $\eta^2$ in ANOVA by hand? It rather depends on what you mean by "by hand". There is more than one way to do it. You can use the residuals: > etasq(xyaov) Partial eta^2 x 0.4854899 Residuals NA > 1 - var(xyaov$residuals)/var(y) [1] 0.4854899 (You didn't set a seed, so we don't have exactly the same result). Almost equivalently, you can use the predicted values: > var(predict(xyaov)) / var(y) [1] 0.4854899 You can use the sums of squares from the ANOVA model (which is given by the rather unintuitive): > summary(xyaov)[[1]][[2]][[1]] / (summary(xyaov)[[1]][[2]][[2]] + summary(xyaov)[[1]][[2]][[1]] ) [1] 0.4854899 You can use summary.lm and get the R^2 (because R-squared is eta squared): > summary.lm(xyaov)$r.squared [1] 0.4854899 You can do it with no reference to the aov() function by calculating the mean for each group, then the residual, then eta squared based on that: xy <- as.data.frame(cbind(x, y)) xy$y <- as.numeric(as.character(xy$y)) #I don't understand why this line is needed x.means <- as.data.frame(tapply(y, x, mean)) x.means$x <- row.names(x.means) xy <- merge(x.means, xy, by="x") xy$resid <- xy[, 2] - xy$y 1 - var(xy$resid) / var(xy$y) [1] 0.4854899
How to compute $\eta^2$ in ANOVA by hand? It rather depends on what you mean by "by hand". There is more than one way to do it. You can use the residuals: > etasq(xyaov) Partial eta^2 x 0.4854899 Residuals NA
51,470
How to compute $\eta^2$ in ANOVA by hand?
eta-squared ($\eta^2$), is a measure of effect size for ANOVA models that is analogous to $R^2$. That is, it gives the proportion of the variability in $Y$ that can be accounted for by knowledge of $X$. There is a 'regular' $\eta^2$, and a partial $\eta^2$. This distinction only comes into play when you have an ANOVA with multiple factors. Here are the formulas: \begin{align} \eta^2_\text{(regular)} &= \frac{SS_\text{between}}{SS_\text{total}} \\[10pt] \eta^2_\text{partial} &= \frac{SS_\text{factor}}{SS_\text{factor} + SS_\text{error}} \end{align} For the latter, only a specific factor is implied, and the sums of squares associated with other factors in the model would not enter into the calculation. For your example, the top formula would be used: set.seed(55) y <- c(rnorm(30, 3), rnorm(30, 4), rnorm(30, 5)) x <- sort(rep(paste("treatment", 1:3), 30)) xy <- data.frame(x,y) xyaov <- aov(y ~ x, xy) anova(xyaov) # Analysis of Variance Table # # Response: y # Df Sum Sq Mean Sq F value Pr(>F) # x 2 62.808 31.404 33.622 1.52e-11 *** # Residuals 87 81.260 0.934 # --- # Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 anova(xyaov)[1,2]/sum(anova(xyaov)[,2]) [1] 0.435961
How to compute $\eta^2$ in ANOVA by hand?
eta-squared ($\eta^2$), is a measure of effect size for ANOVA models that is analogous to $R^2$. That is, it gives the proportion of the variability in $Y$ that can be accounted for by knowledge of $
How to compute $\eta^2$ in ANOVA by hand? eta-squared ($\eta^2$), is a measure of effect size for ANOVA models that is analogous to $R^2$. That is, it gives the proportion of the variability in $Y$ that can be accounted for by knowledge of $X$. There is a 'regular' $\eta^2$, and a partial $\eta^2$. This distinction only comes into play when you have an ANOVA with multiple factors. Here are the formulas: \begin{align} \eta^2_\text{(regular)} &= \frac{SS_\text{between}}{SS_\text{total}} \\[10pt] \eta^2_\text{partial} &= \frac{SS_\text{factor}}{SS_\text{factor} + SS_\text{error}} \end{align} For the latter, only a specific factor is implied, and the sums of squares associated with other factors in the model would not enter into the calculation. For your example, the top formula would be used: set.seed(55) y <- c(rnorm(30, 3), rnorm(30, 4), rnorm(30, 5)) x <- sort(rep(paste("treatment", 1:3), 30)) xy <- data.frame(x,y) xyaov <- aov(y ~ x, xy) anova(xyaov) # Analysis of Variance Table # # Response: y # Df Sum Sq Mean Sq F value Pr(>F) # x 2 62.808 31.404 33.622 1.52e-11 *** # Residuals 87 81.260 0.934 # --- # Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 anova(xyaov)[1,2]/sum(anova(xyaov)[,2]) [1] 0.435961
How to compute $\eta^2$ in ANOVA by hand? eta-squared ($\eta^2$), is a measure of effect size for ANOVA models that is analogous to $R^2$. That is, it gives the proportion of the variability in $Y$ that can be accounted for by knowledge of $
51,471
Alternatives to the Baron-Kenny approach to modeling mediation
Baron and Kenny are indeed outdated, though that does not make them wrong in all cases. The concerns divide into broadly statistical limitations and assumptions, which are discussed in the reference your reviewer suggests and in the literature alluded to by @PeterFlom, and broadly non-statistical concerns about the definition and causal identification of mediation effects. Following this order, it might be helpful to start your reading with MacKinnon et al.'s 2007 review, or with the reviewer's suggested reading. Then move on to Imai et al. 2010a (or Imai et al. 2010b). These last papers are dense, but repay study. That should bring you more or less up to speed on how mediation analysis is being thought of lately.
Alternatives to the Baron-Kenny approach to modeling mediation
Baron and Kenny are indeed outdated, though that does not make them wrong in all cases. The concerns divide into broadly statistical limitations and assumptions which are discussed in the reference
Alternatives to the Baron-Kenny approach to modeling mediation Baron and Kenny are indeed outdated, though that does not make them wrong in all cases. The concerns divide into broadly statistical limitations and assumptions which are discussed in the reference your reviewer suggests and in the literature alluded to by @PeterFlom, and broadly non-statistical concerns about the definition and causal identification of mediation effects. Following this order it might be helpful to start your reading with MacKinnon et al.'s 2007 review, or with the reviewers suggested reading. Then move on to Imai et al. 2010a (or Imai et al. 2010b) These last papers are dense, but repay study. that should bring you more or less up to speed on how mediation analysis is being thought of lately.
Alternatives to the Baron-Kenny approach to modeling mediation Baron and Kenny are indeed outdated, though that does not make them wrong in all cases. The concerns divide into broadly statistical limitations and assumptions which are discussed in the reference
51,472
Alternatives to the Baron-Kenny approach to modeling mediation
This is more a discussion of concerns I have firstly with the approach of Baron and Kenny (which has some bearing on your question), and with a number of more recent papers (I haven't seen them all, so my comments may not apply to everything). It may also relate to the 2011 paper you mention, which I have only had the chance to skim through just now. From what I've seen, the idea of measuring/establishing mediation mostly seems to suffer from a basic problem* that I haven't seen adequately dealt with. (I've just taken a fairly quick look at the 2011 paper you mention, so maybe I missed something. The example in figure 2 of the paper seems to be related, which is encouraging in the sense that at least some possibilities are being mentioned in some parts of the literature now.) * The first time I ever heard of mediation and read a copy of Baron & Kenny, I saw this would be an issue. It seemed to be a surprise to most people I mentioned it to. The problem is this - to establish that $M$ actually mediates $X\to Y$ (at least partly), as below: [figure omitted: two diagrams - $X \to Y$, and then $X \to M \to Y$ with a dashed direct arrow from $X$ to $Y$] (the dashed arrow indicates a reduced level of relationship), it is necessary (for example) to rule out all feasible alternative explanations like these, which could hold in place of the second diagram: [figures omitted: arrangements in which grey (hidden) variables drive the relationships among $X$, $M$ and $Y$] (The grey variables might be latent, unknown, unaccounted for - or in some other sense 'hidden' from the model, or the researcher, or perhaps even anyone. There may also be some direct relationship between $X$ and $Y$ as well; it makes no real difference to this issue.) Many papers (at least many of the ones I have seen) which deal with mediation, when they follow the recipe that is supposed to establish whether mediation happens, immediately start saying things along the lines of "$M$ mediates $X\to Y$" and discuss the size of this effect, when, unless they have eliminated essentially any possibility of such hidden variables actually driving the relationship in any number of arrangements and variations, they really haven't established such a thing at all, and any measures of the size of the mediation effect rely heavily on those other possibilities not being present, at least not to any substantive degree. An additional issue is that methods such as regression don't "put heads on arrows". To do so with such methods requires careful experiments, or very careful argument; if both are missing, generally speaking all we really have is correlation, and correlation is not the same thing as causation. I am hoping one of the very good quality statisticians here will be able to school me on why my concerns are mostly unfounded; such an education would be most welcome.
Alternatives to the Baron-Kenny approach to modeling mediation
This is more a discussion of concerns I have firstly with the approach of Baron and Kenny (which has some bearing on your question), and with a number of more recent papers (I haven't seen them all, s
Alternatives to the Baron-Kenny approach to modeling mediation This is more a discussion of concerns I have firstly with the approach of Baron and Kenny (which has some bearing on your question), and with a number of more recent papers (I haven't seen them all, so my comments may not apply to everything). It may also relate to the 2011 paper you mention, which I have only had the chance to skim through just now. From what I've seen, the idea of measuring/establishing mediation mostly seems to suffer from a basic problem* that I haven't seen adequately dealt with. (I've just taken a fairly quick look at the 2011 paper you mention, so maybe I missed something. The example in figure 2 of the paper seems to be related, which is encouraging in the sense that at least some possibilities are being mentioned in some parts of the literature now.) * The first time I ever heard of mediation and read a copy of Baron & Kenny, I saw this would be an issue. It seemed to be a surprise to most people I mentioned it to. The problem is this - to establish that $M$ actually mediates $X\to Y$ (at least partly), as below: [figure omitted: two diagrams - $X \to Y$, and then $X \to M \to Y$ with a dashed direct arrow from $X$ to $Y$] (the dashed arrow indicates a reduced level of relationship), it is necessary (for example) to rule out all feasible alternative explanations like these, which could hold in place of the second diagram: [figures omitted: arrangements in which grey (hidden) variables drive the relationships among $X$, $M$ and $Y$] (The grey variables might be latent, unknown, unaccounted for - or in some other sense 'hidden' from the model, or the researcher, or perhaps even anyone. There may also be some direct relationship between $X$ and $Y$ as well; it makes no real difference to this issue.) Many papers (at least many of the ones I have seen) which deal with mediation, when they follow the recipe that is supposed to establish whether mediation happens, immediately start saying things along the lines of "$M$ mediates $X\to Y$" and discuss the size of this effect, when, unless they have eliminated essentially any possibility of such hidden variables actually driving the relationship in any number of arrangements and variations, they really haven't established such a thing at all, and any measures of the size of the mediation effect rely heavily on those other possibilities not being present, at least not to any substantive degree. An additional issue is that methods such as regression don't "put heads on arrows". To do so with such methods requires careful experiments, or very careful argument; if both are missing, generally speaking all we really have is correlation, and correlation is not the same thing as causation. I am hoping one of the very good quality statisticians here will be able to school me on why my concerns are mostly unfounded; such an education would be most welcome.
Alternatives to the Baron-Kenny approach to modeling mediation This is more a discussion of concerns I have firstly with the approach of Baron and Kenny (which has some bearing on your question), and with a number of more recent papers (I haven't seen them all, s
51,473
Alternatives to the Baron-Kenny approach to modeling mediation
Here are some places to look. I'd especially recommend the work by Kosuke Imai and colleagues. Bullock, John G., and Shang E. Ha. 2011. Mediation Analysis is Harder Than it Looks. In Cambridge Handbook of Experimental Political Science, ed. James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia. New York: Cambridge University Press. Bullock, John G., Donald P. Green, and Shang E. Ha. 2010. Yes, But What’s the Mechanism? (Don’t Expect an Easy Answer). Journal of Personality and Social Psychology 98 (April): 550-58. Fiedler, Klaus, Malte Schott, and Thorsten Meiser. 2011. “What Mediation Analysis Can (Not) Do.” Journal of Experimental Social Psychology. doi:10.1016/j.jesp.2011.05.007. http://dx.doi.org/10.1016/j.jesp.2011.05.007. Green, Donald P., Shang E. Ha, and John G. Bullock. 2009. “Enough Already about `Black Box’ Experiments: Studying Mediation is More Difficult than Most Scholars Suppose.” The ANNALS of the American Academy of Political and Social Science 628 (March) (January): 200-208. Imai, Kosuke, Luke Keele, Dustin Tingley, and Teppei Yamamoto. 2011. “Unpacking the Black Box: Learning about Causal Mechanisms from Experimental and Observational Studies.” American Political Science Review 105 (4) (November 10): 765-789. http://imai.princeton.edu/talk/files/ISM10.pdf. Imai, Kosuke, and Teppei Yamamoto. 2011. "Identification and Sensitivity Analysis for Multiple Causal Mechanisms: Revisiting Evidence from Framing Experiments." MacKinnon, David P., Amanda J. Fairchild, and Matthew S. Fritz. 2007. “Mediation Analysis.” Annual review of psychology 58 (January): 593-614. doi:10.1146/annurev.psych.58.110405.085542. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2819368&tool=pmcentrez&rendertype=abstract. Spencer, Steven J., Mark P. Zanna, and Geoffrey T Fong. 2005. “Establishing a Causal Chain: Why Experiments Are Often More Effective than Mediational Analyses in Examining Psychological Processes.” Journal of Personality and Social Psychology 89 (6) (December): 845-51. doi:10.1037/0022-3514.89.6.845. http://www.ncbi.nlm.nih.gov/pubmed/16393019.
Alternatives to the Baron-Kenny approach to modeling mediation
Here are some places to look. I'd especially recommend the work by Kosuke Imai and colleagues. Bullock, John G., and Shang E. Ha. 2011. Mediation Analysis is Harder Than it Looks. In Cambridge Handboo
Alternatives to the Baron-Kenny approach to modeling mediation Here are some places to look. I'd especially recommend the work by Kosuke Imai and colleagues. Bullock, John G., and Shang E. Ha. 2011. Mediation Analysis is Harder Than it Looks. In Cambridge Handbook of Experimental Political Science, ed. James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia. New York: Cambridge University Press. Bullock, John G., Donald P. Green, and Shang E. Ha. 2010. Yes, But What’s the Mechanism? (Don’t Expect an Easy Answer). Journal of Personality and Social Psychology 98 (April): 550-58. Fiedler, Klaus, Malte Schott, and Thorsten Meiser. 2011. “What Mediation Analysis Can (Not) Do.” Journal of Experimental Social Psychology. doi:10.1016/j.jesp.2011.05.007. http://dx.doi.org/10.1016/j.jesp.2011.05.007. Green, Donald P., Shang E. Ha, and John G. Bullock. 2009. “Enough Already about `Black Box’ Experiments: Studying Mediation is More Difficult than Most Scholars Suppose.” The ANNALS of the American Academy of Political and Social Science 628 (March) (January): 200-208. Imai, Kosuke, Luke Keele, Dustin Tingley, and Teppei Yamamoto. 2011. “Unpacking the Black Box: Learning about Causal Mechanisms from Experimental and Observational Studies.” American Political Science Review 105 (4) (November 10): 765-789. http://imai.princeton.edu/talk/files/ISM10.pdf. Imai, Kosuke, and Teppei Yamamoto. 2011. "Identification and Sensitivity Analysis for Multiple Causal Mechanisms: Revisiting Evidence from Framing Experiments." MacKinnon, David P., Amanda J. Fairchild, and Matthew S. Fritz. 2007. “Mediation Analysis.” Annual review of psychology 58 (January): 593-614. doi:10.1146/annurev.psych.58.110405.085542. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2819368&tool=pmcentrez&rendertype=abstract. Spencer, Steven J., Mark P. Zanna, and Geoffrey T Fong. 2005. “Establishing a Causal Chain: Why Experiments Are Often More Effective than Mediational Analyses in Examining Psychological Processes.” Journal of Personality and Social Psychology 89 (6) (December): 845-51. doi:10.1037/0022-3514.89.6.845. http://www.ncbi.nlm.nih.gov/pubmed/16393019.
Alternatives to the Baron-Kenny approach to modeling mediation Here are some places to look. I'd especially recommend the work by Kosuke Imai and colleagues. Bullock, John G., and Shang E. Ha. 2011. Mediation Analysis is Harder Than it Looks. In Cambridge Handboo
51,474
Alternatives to the Baron-Kenny approach to modeling mediation
Baron and Kenny is distinctly old-fashioned these days. They see mediation as a "yes-no", "present-absent" quality; more recent approaches (lots of work by MacKinnon and others) treat it as a continuum. This makes more sense to me.
Alternatives to the Baron-Kenny approach to modeling mediation
Baron and Kenny is distinctly old fashioned these days. They see mediation as a "yes-no" "present-absent" quality; more recent approaches (lots of work by MacKinnon and others) treats it as a continuu
Alternatives to the Baron-Kenny approach to modeling mediation Baron and Kenny is distinctly old-fashioned these days. They see mediation as a "yes-no", "present-absent" quality; more recent approaches (lots of work by MacKinnon and others) treat it as a continuum. This makes more sense to me.
Alternatives to the Baron-Kenny approach to modeling mediation Baron and Kenny is distinctly old fashioned these days. They see mediation as a "yes-no" "present-absent" quality; more recent approaches (lots of work by MacKinnon and others) treats it as a continuu
51,475
Alternatives to the Baron-Kenny approach to modeling mediation
I agree with the above answer, and I would like to add more information in the form of a succinct summary. Baron and Kenny's (1986) method of testing mediation has been extensively applied, but there are many papers discussing severe limitations of this approach, which broadly include: 1) Not directly testing the significance of an indirect effect 2) Low statistical power 3) Inability to accommodate models with inconsistent mediation *Note: see Memon, Cheah, Ramayah, Ting, and Chuah (2018) for an overview. Considering these limitations, a new typology of mediation was developed by Zhao, Lynch and Chen (2010). As of Oct 2019, it has over 5,000 citations, so it is gaining greater popularity. As a brief summary, and taking a three-variable causal model as an example, three types of mediation exist. Complementary mediation: Mediated effect (a x b) and direct effect (c) both exist and point in the same direction. Competitive mediation: Mediated effect (a x b) and direct effect (c) both exist and point in opposite directions. Indirect-only mediation: Mediated effect (a x b) exists, but no direct effect (c). Further, two non-mediation types were proposed: Direct-only non-mediation: Direct effect (c) exists, but no indirect effect. No-effect non-mediation: Neither direct effect (c) nor indirect effect exists. References Memon, M. A., Cheah, J., Ramayah, T., Ting, H., & Chuah, F. (2018). Mediation Analysis Issues and Recommendations. Journal of Applied Structural Equation Modeling, 2(1), 1-9. Zhao, X., Lynch Jr, J. G., & Chen, Q. (2010). Reconsidering Baron and Kenny: Myths and truths about mediation analysis. Journal of Consumer Research, 37(2), 197-206.
Alternatives to the Baron-Kenny approach to modeling mediation
I agree with the above answer, and I would like to add more information in a form of a succinct summary. Baron and Kenny's (1986) method of testing mediation has been extensively applied, but there ar
Alternatives to the Baron-Kenny approach to modeling mediation I agree with the above answer, and I would like to add more information in the form of a succinct summary. Baron and Kenny's (1986) method of testing mediation has been extensively applied, but there are many papers discussing severe limitations of this approach, which broadly include: 1) Not directly testing the significance of an indirect effect 2) Low statistical power 3) Inability to accommodate models with inconsistent mediation *Note: see Memon, Cheah, Ramayah, Ting, and Chuah (2018) for an overview. Considering these limitations, a new typology of mediation was developed by Zhao, Lynch and Chen (2010). As of Oct 2019, it has over 5,000 citations, so it is gaining greater popularity. As a brief summary, and taking a three-variable causal model as an example, three types of mediation exist. Complementary mediation: Mediated effect (a x b) and direct effect (c) both exist and point in the same direction. Competitive mediation: Mediated effect (a x b) and direct effect (c) both exist and point in opposite directions. Indirect-only mediation: Mediated effect (a x b) exists, but no direct effect (c). Further, two non-mediation types were proposed: Direct-only non-mediation: Direct effect (c) exists, but no indirect effect. No-effect non-mediation: Neither direct effect (c) nor indirect effect exists. References Memon, M. A., Cheah, J., Ramayah, T., Ting, H., & Chuah, F. (2018). Mediation Analysis Issues and Recommendations. Journal of Applied Structural Equation Modeling, 2(1), 1-9. Zhao, X., Lynch Jr, J. G., & Chen, Q. (2010). Reconsidering Baron and Kenny: Myths and truths about mediation analysis. Journal of Consumer Research, 37(2), 197-206.
Alternatives to the Baron-Kenny approach to modeling mediation I agree with the above answer, and I would like to add more information in a form of a succinct summary. Baron and Kenny's (1986) method of testing mediation has been extensively applied, but there ar
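As a supplement to the answer above: under Zhao et al.'s typology, the classification is read off the signs and significance of the indirect effect (a x b) and the direct effect (c), which can be estimated jointly rather than via the Baron-Kenny steps. Below is a minimal R sketch using the lavaan package (one of several possible tools); the data frame dat and the variables X, M and Y are hypothetical placeholders, not taken from the cited papers.

# Hedged sketch: estimate paths a, b, c and the indirect effect a*b in one model.
# 'dat' with columns X, M and Y is a made-up example data frame.
library(lavaan)

med_model <- '
  M ~ a * X           # path a
  Y ~ b * M + c * X   # path b and the direct effect c
  ab := a * b         # indirect (mediated) effect
'
fit <- sem(med_model, data = dat, se = "bootstrap", bootstrap = 500)
parameterEstimates(fit)  # read off ab and c: their signs and CIs map onto the typology

For instance, a significant ab and a significant c of the same sign would be read as complementary mediation under this typology, while opposite signs would indicate competitive mediation.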
51,476
Same SE for all coefficients of a linear model
Mathematically, write the linear model as $y = X\beta + \epsilon$, where $X = \begin{bmatrix}e & x_1 & x_2 & \cdots & x_p\end{bmatrix}$, $\beta = \begin{bmatrix}\beta_0 & \beta_1 & \beta_2 & \cdots & \beta_p\end{bmatrix}'$. The standard error of $\hat{\beta}_j, 1 \leq j \leq p$, denoted by $\hat{\sigma}_{\hat{\beta}_j}$, is then given by \begin{align} \hat{\sigma}\sqrt{e_j'(X'X)^{-1}e_j}, \tag{1} \end{align} where $e_j$ is a $(p+1)$-long column vector whose $(j + 1)$-st entry is $1$ and all the other entries $0$, $\hat{\sigma}^2 = (n - p - 1)^{-1}y'(I - H)y$, and $H = X(X'X)^{-1}X'$. In other words, $\hat{\sigma}_{\hat{\beta}_j}$ is the square root of the $(j + 1, j + 1)$ diagonal entry of the matrix $\hat{\sigma}^2(X'X)^{-1}$. Therefore, if all the diagonal entries of $(X'X)^{-1}$ are the same (or block-wise the same), then you would see identical (or block-wise identical) standard errors of OLS estimates. Although you did not include details of the input data in your post, the output implies that your data probably came from a designed experiment, where the columns of $X$ are mutually orthogonal. In this case, the diagonal entries of $(X'X)^{-1}$ could be identical. For example, suppose \begin{align} X = \begin{bmatrix} e_9 & e_9 & 0 & 0 \\ e_9 & 0 & e_9 & 0 \\ e_9 & 0 & 0 & e_9 \\ e_9 & 0 & 0 & 0 \end{bmatrix}, \end{align} where $e_9$ is a $9$-long column vector consisting of all ones. It is then easy to verify that \begin{align} X'X = \begin{bmatrix} 36 & 9 & 9 & 9 \\ 9 & 9 & 0 & 0 \\ 9 & 0 & 9 & 0 \\ 9 & 0 & 0 & 9 \end{bmatrix}, \quad (X'X)^{-1} = \frac{1}{9}\begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 2 & 1 & 1 \\ -1 & 1 & 2 & 1 \\ -1 & 1 & 1 & 2 \end{bmatrix}. \end{align} It can be seen that the $(2, 2), (3, 3), (4, 4)$ diagonal entries are all equal (each is $2/9$), yielding the same standard errors of $\hat{\beta}_1, \hat{\beta}_2$ and $\hat{\beta}_3$. To analyze your particular case, you can use the model.matrix command to print out your design matrix $X$ and compute $(X'X)^{-1}$ to verify the reason.
Same SE for all coefficients of a linear model
Mathematically, if writing the linear model as $y = X\beta + \epsilon$, where $X = \begin{bmatrix}e & x_1 & x_2 & \cdots & x_p\end{bmatrix}$, $\beta = \begin{bmatrix}\beta_0 & \beta_1 & \beta_2 & \cdo
Same SE for all coefficients of a linear model Mathematically, write the linear model as $y = X\beta + \epsilon$, where $X = \begin{bmatrix}e & x_1 & x_2 & \cdots & x_p\end{bmatrix}$, $\beta = \begin{bmatrix}\beta_0 & \beta_1 & \beta_2 & \cdots & \beta_p\end{bmatrix}'$. The standard error of $\hat{\beta}_j, 1 \leq j \leq p$, denoted by $\hat{\sigma}_{\hat{\beta}_j}$, is then given by \begin{align} \hat{\sigma}\sqrt{e_j'(X'X)^{-1}e_j}, \tag{1} \end{align} where $e_j$ is a $(p+1)$-long column vector whose $(j + 1)$-st entry is $1$ and all the other entries $0$, $\hat{\sigma}^2 = (n - p - 1)^{-1}y'(I - H)y$, and $H = X(X'X)^{-1}X'$. In other words, $\hat{\sigma}_{\hat{\beta}_j}$ is the square root of the $(j + 1, j + 1)$ diagonal entry of the matrix $\hat{\sigma}^2(X'X)^{-1}$. Therefore, if all the diagonal entries of $(X'X)^{-1}$ are the same (or block-wise the same), then you would see identical (or block-wise identical) standard errors of OLS estimates. Although you did not include details of the input data in your post, the output implies that your data probably came from a designed experiment, where the columns of $X$ are mutually orthogonal. In this case, the diagonal entries of $(X'X)^{-1}$ could be identical. For example, suppose \begin{align} X = \begin{bmatrix} e_9 & e_9 & 0 & 0 \\ e_9 & 0 & e_9 & 0 \\ e_9 & 0 & 0 & e_9 \\ e_9 & 0 & 0 & 0 \end{bmatrix}, \end{align} where $e_9$ is a $9$-long column vector consisting of all ones. It is then easy to verify that \begin{align} X'X = \begin{bmatrix} 36 & 9 & 9 & 9 \\ 9 & 9 & 0 & 0 \\ 9 & 0 & 9 & 0 \\ 9 & 0 & 0 & 9 \end{bmatrix}, \quad (X'X)^{-1} = \frac{1}{9}\begin{bmatrix} 1 & -1 & -1 & -1 \\ -1 & 2 & 1 & 1 \\ -1 & 1 & 2 & 1 \\ -1 & 1 & 1 & 2 \end{bmatrix}. \end{align} It can be seen that the $(2, 2), (3, 3), (4, 4)$ diagonal entries are all equal (each is $2/9$), yielding the same standard errors of $\hat{\beta}_1, \hat{\beta}_2$ and $\hat{\beta}_3$. To analyze your particular case, you can use the model.matrix command to print out your design matrix $X$ and compute $(X'X)^{-1}$ to verify the reason.
Same SE for all coefficients of a linear model Mathematically, if writing the linear model as $y = X\beta + \epsilon$, where $X = \begin{bmatrix}e & x_1 & x_2 & \cdots & x_p\end{bmatrix}$, $\beta = \begin{bmatrix}\beta_0 & \beta_1 & \beta_2 & \cdo
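To make the final suggestion in the answer above concrete, here is a minimal R sketch of the check; fit stands for whatever model object you fitted (e.g. fit <- lm(y ~ reef + site, data = dat) - the names are placeholders, substitute your own).

X <- model.matrix(fit)                 # the design matrix X used by lm()
XtX_inv <- solve(crossprod(X))         # (X'X)^{-1}
round(diag(XtX_inv), 4)                # equal diagonal entries => equal standard errors
sigma(fit) * sqrt(diag(XtX_inv))       # reproduces the Std. Error column of summary(fit)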
51,477
Same SE for all coefficients of a linear model
It doesn't look like there's an issue. Your standard errors are equal because you presumably have a balanced data set, i.e. two observations for each of the 18 possible combinations of reef and site.
Same SE for all coefficients of a linear model
It doesn't look like there's an issue. Your standard errors are equal because you presumably have a balanced data set, i.e. two observations for each of the 18 possible combinations of reef and site.
Same SE for all coefficients of a linear model It doesn't look like there's an issue. Your standard errors are equal because you presumably have a balanced data set, i.e. two observations for each of the 18 possible combinations of reef and site.
Same SE for all coefficients of a linear model It doesn't look like there's an issue. Your standard errors are equal because you presumably have a balanced data set, i.e. two observations for each of the 18 possible combinations of reef and site.
51,478
Same SE for all coefficients of a linear model
To accompany Doctor Milt's answer (+1) [and Zhanxiong's answer, also +1, which was written while I was posting mine], here is a simulation showing the same thing happening with random data of the same size and structure as yours. You can play around and see whether the std errors are equal as you change the sample size or add/remove individual observations: df <- expand.grid(reef=factor(rep(1:6)), site=factor(c("A", "B", "C"))) df <- rbind(df, df) df$outcome <- rnorm(n=nrow(df)) summary(lm(outcome ~ reef + site, data=df)) And the output is: > dim(df) [1] 36 3 > summary(lm(outcome ~ reef + site, data=df)) Call: lm(formula = outcome ~ reef + site, data = df) Residuals: Min 1Q Median 3Q Max -1.92710 -0.61335 0.04716 0.48348 1.44549 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.01300 0.41824 0.031 0.9754 reef2 -0.83613 0.51223 -1.632 0.1138 reef3 -0.74835 0.51223 -1.461 0.1552 reef4 -0.30631 0.51223 -0.598 0.5547 reef5 -0.19428 0.51223 -0.379 0.7073 reef6 -0.08205 0.51223 -0.160 0.8739 siteB 0.66652 0.36220 1.840 0.0764 . siteC 0.22147 0.36220 0.611 0.5458 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.8872 on 28 degrees of freedom Multiple R-squared: 0.2265, Adjusted R-squared: 0.03312 F-statistic: 1.171 on 7 and 28 DF, p-value: 0.3502 You can try things like summary(lm(outcome ~ reef + site, data=df[-5, ])) to see the impact of removing single rows of data.
Same SE for all coefficients of a linear model
To accompany Doctor Milt's answer (+1) [and Zhanxiong's answer, also +1, which was written while I was posting mine], here is a simulation showing the same thing happening with random data of the same
Same SE for all coefficients of a linear model To accompany Doctor Milt's answer (+1) [and Zhanxiong's answer, also +1, which was written while I was posting mine], here is a simulation showing the same thing happening with random data of the same size and structure as yours. You can play around and see whether the std errors are equal as you change the sample size or add/remove individual observations: df <- expand.grid(reef=factor(rep(1:6)), site=factor(c("A", "B", "C"))) df <- rbind(df, df) df$outcome <- rnorm(n=nrow(df)) summary(lm(outcome ~ reef + site, data=df)) And the output is: > dim(df) [1] 36 3 > summary(lm(outcome ~ reef + site, data=df)) Call: lm(formula = outcome ~ reef + site, data = df) Residuals: Min 1Q Median 3Q Max -1.92710 -0.61335 0.04716 0.48348 1.44549 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.01300 0.41824 0.031 0.9754 reef2 -0.83613 0.51223 -1.632 0.1138 reef3 -0.74835 0.51223 -1.461 0.1552 reef4 -0.30631 0.51223 -0.598 0.5547 reef5 -0.19428 0.51223 -0.379 0.7073 reef6 -0.08205 0.51223 -0.160 0.8739 siteB 0.66652 0.36220 1.840 0.0764 . siteC 0.22147 0.36220 0.611 0.5458 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.8872 on 28 degrees of freedom Multiple R-squared: 0.2265, Adjusted R-squared: 0.03312 F-statistic: 1.171 on 7 and 28 DF, p-value: 0.3502 You can try things like summary(lm(outcome ~ reef + site, data=df[-5, ])) to see the impact of removing single rows of data.
Same SE for all coefficients of a linear model To accompany Doctor Milt's answer (+1) [and Zhanxiong's answer, also +1, which was written while I was posting mine], here is a simulation showing the same thing happening with random data of the same
51,479
Same SE for all coefficients of a linear model
The (usual) equation for the variance of a regression parameter (not the intercept) $\hat\beta_j$ is as follows. $$ \widehat{\text{var}}\left( \hat\beta_j \right) = \dfrac{ s^2 }{ (n-1)s^2_{X_j} }\times\dfrac{1}{ 1-R^2_j } $$ $s^2$ is the residual variance. $n$ is the sample size. $s^2_{X_j}$ is the variance of feature $j$. $R^2_j$ is the $R^2$ obtained by regressing feature $j$ on the other features; the factor $1/(1-R^2_j)$ is the variance-inflation factor (VIF) for feature $j$. To get your observed result, one of the following must have occurred: (1) the change in $s^2_{X_j}$ perfectly offset the change in the VIF for every variable, or (2) all $s^2_{X_j}$ and all $R^2_j$ are equal. (Perhaps there could be a combination of the two, where some changes in $R^2_j$ are offset by changes in $s^2_{X_j}$ and the rest of the features have the same $s^2_{X_j}$ and $R^2_j$.) Possibility #1 requires a lot of coincidences. On the other hand, for a designed experiment where all of the features have equal variance and are independent of each other, possibility #2 will occur. And since not all of your standard errors are equal, apply the above logic to the groups of coefficients that do have equal standard errors. The equation for $ \widehat{\text{var}}\left( \hat\beta_j \right) $ is available on the VIF Wikipedia article, which has references to other material.
Same SE for all coefficients of a linear model
The (usual) equation for the variance of a regression parameter (not the intercept) $\hat\beta_j$ is as follows. $$ \widehat{\text{var}}\left( \hat\beta_j \right) = \dfrac{ s^2 }{ (n-1)s^2_{X_j} }\tim
Same SE for all coefficients of a linear model The (usual) equation for the variance of a regression parameter (not the intercept) $\hat\beta_j$ is as follows. $$ \widehat{\text{var}}\left( \hat\beta_j \right) = \dfrac{ s^2 }{ (n-1)s^2_{X_j} }\times\dfrac{1}{ 1-R^2_j } $$ $s^2$ is the residual variance. $n$ is the sample size. $s^2_{X_j}$ is the variance of feature $j$. $R^2_j$ is the $R^2$ obtained by regressing feature $j$ on the other features; the factor $1/(1-R^2_j)$ is the variance-inflation factor (VIF) for feature $j$. To get your observed result, one of the following must have occurred: (1) the change in $s^2_{X_j}$ perfectly offset the change in the VIF for every variable, or (2) all $s^2_{X_j}$ and all $R^2_j$ are equal. (Perhaps there could be a combination of the two, where some changes in $R^2_j$ are offset by changes in $s^2_{X_j}$ and the rest of the features have the same $s^2_{X_j}$ and $R^2_j$.) Possibility #1 requires a lot of coincidences. On the other hand, for a designed experiment where all of the features have equal variance and are independent of each other, possibility #2 will occur. And since not all of your standard errors are equal, apply the above logic to the groups of coefficients that do have equal standard errors. The equation for $ \widehat{\text{var}}\left( \hat\beta_j \right) $ is available on the VIF Wikipedia article, which has references to other material.
Same SE for all coefficients of a linear model The (usual) equation for the variance of a regression parameter (not the intercept) $\hat\beta_j$ is as follows. $$ \widehat{\text{var}}\left( \hat\beta_j \right) = \dfrac{ s^2 }{ (n-1)s^2_{X_j} }\tim
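A small simulated check of the formula above (everything here - the seed, sample size and variable names - is made up for illustration):

set.seed(1)
n  <- 50
x1 <- rnorm(n)
x2 <- 0.5 * x1 + rnorm(n)                      # correlated predictors
y  <- 1 + x1 - x2 + rnorm(n)
fit <- lm(y ~ x1 + x2)

s2   <- sigma(fit)^2                           # residual variance
R2_1 <- summary(lm(x1 ~ x2))$r.squared         # R^2_j for x1; VIF = 1/(1 - R2_1)
sqrt(s2 / ((n - 1) * var(x1)) / (1 - R2_1))    # matches summary(fit)$coefficients["x1", "Std. Error"]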
51,480
Why Logistic Regression is not a generative model?
The fundamental difference between a generative model and a discriminative model is that a generative model learns the joint distribution $ P(X,y) $, while a discriminative model learns only the conditional $ P(y|X) $. According to this definition, logistic regression is not a generative model. Your example, "create a vector x that maximizes $ P(y=i|X=x)$ ", is not a generative model at all, since it still learns nothing about $ P(X,y) $. Taking a real-life example, suppose we have a logistic regression model that predicts whether an image is a photo of a cat. (Usually this would be a CNN, but the logic is the same.) The image $ x $ that maximizes $ P(y= \text{cat}| X=x) $ is an image consisting of different features of cats (e.g. tails, eyes) everywhere inside the image. It certainly maximizes the probability of being classified as a cat, but you would never see such an image in reality. Mathematically speaking, if $ x^* =\arg\max_x P(y= \text{cat}| X=x)$, then $ P(x^*,y) $ can still be very low.
Why Logistic Regression is not a generative model?
The fundamental difference between Generative Model and Discriminative Model is, one is learning about $ P(X,y) $ while discriminative model is learning $ P(y|X) $ According to this definition, Logist
Why Logistic Regression is not a generative model? The fundamental difference between a generative model and a discriminative model is that a generative model learns the joint distribution $ P(X,y) $, while a discriminative model learns only the conditional $ P(y|X) $. According to this definition, logistic regression is not a generative model. Your example, "create a vector x that maximizes $ P(y=i|X=x)$ ", is not a generative model at all, since it still learns nothing about $ P(X,y) $. Taking a real-life example, suppose we have a logistic regression model that predicts whether an image is a photo of a cat. (Usually this would be a CNN, but the logic is the same.) The image $ x $ that maximizes $ P(y= \text{cat}| X=x) $ is an image consisting of different features of cats (e.g. tails, eyes) everywhere inside the image. It certainly maximizes the probability of being classified as a cat, but you would never see such an image in reality. Mathematically speaking, if $ x^* =\arg\max_x P(y= \text{cat}| X=x)$, then $ P(x^*,y) $ can still be very low.
Why Logistic Regression is not a generative model? The fundamental difference between Generative Model and Discriminative Model is, one is learning about $ P(X,y) $ while discriminative model is learning $ P(y|X) $ According to this definition, Logist
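To make the distinction above concrete, here is a small R sketch on simulated data (all names and numbers are invented): a fitted logistic regression only returns $P(y|x)$, whereas a simple generative model - here, class-conditional normals for $P(x|y)$ plus an estimate of $P(y)$ - can be sampled from.

set.seed(42)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(-1 + 2 * x))
dat <- data.frame(x, y)

# Discriminative: logistic regression learns P(y | x) only ...
disc <- glm(y ~ x, family = binomial, data = dat)
predict(disc, newdata = data.frame(x = 1.5), type = "response")
# ... it has no model of P(x), so it cannot generate new (x, y) pairs by itself.

# A simple generative alternative: estimate P(y) and P(x | y), then sample jointly.
p_y1  <- mean(dat$y)
mu    <- tapply(dat$x, dat$y, mean)
s     <- tapply(dat$x, dat$y, sd)
y_new <- rbinom(5, 1, p_y1)
x_new <- rnorm(5, mu[as.character(y_new)], s[as.character(y_new)])
cbind(x_new, y_new)   # synthetic draws from the fitted joint distribution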
51,481
Why Logistic Regression is not a generative model?
To elaborate on @Bayesian's (correct) answer, consider a logistic regression model where cases of diabetes ($y$) are predicted by sugar intake ($x$). The model learns $P(y = 1 | x) = \text{logit}^{-1}(\alpha + \beta x)$ - that is, $P(\text{Diabetes}|\text{Sugar intake})$ - but since it doesn't learn the distribution of sugar intakes in the population, $P(x)$, it can't generate samples from the joint distribution $P(x, y)$, nor even tell you the overall rate of diabetes, $P(y)$.
Why Logistic Regression is not a generative model?
To elaborate on @Bayesian's (correct) answer, consider a logistic regression model where cases of diabetes ($y$) are predicted by sugar intake ($x$). The model learns $P(y = 1 | x) = \text{logit}^{-1}
Why Logistic Regression is not a generative model? To elaborate on @Bayesian's (correct) answer, consider a logistic regression model where cases of diabetes ($y$) are predicted by sugar intake ($x$). The model learns $P(y = 1 | x) = \text{logit}^{-1}(\alpha + \beta x)$ - that is, $P(\text{Diabetes}|\text{Sugar intake})$ - but since it doesn't learn the distribution of sugar intakes in the population, $P(x)$, it can't generate samples from the joint distribution $P(x, y)$, nor even tell you the overall rate of diabetes, $P(y)$.
Why Logistic Regression is not a generative model? To elaborate on @Bayesian's (correct) answer, consider a logistic regression model where cases of diabetes ($y$) are predicted by sugar intake ($x$). The model learns $P(y = 1 | x) = \text{logit}^{-1}
51,482
Why is random sampling good?
You seem to be conflating the idea of random sampling with the separate question of whether objects are sampled with or without replacement. The first method you describe is a simple-random-sample with replacement and the second is a simple-random-sample without replacement. In the second case the sample is the whole population so you do indeed perfectly learn the states --- that occurs whenever you take a simple-random-sample without replacement that has the same size as the population. As to your initial question, there is a large statistical literature on the properties of random sampling and why it is desirable for making inferences from a sample to a larger population. Simple random sampling does not favour any object in the population over any other object, which makes it easy to make unbiased inferences about the larger population of interest. Whether or not a sampling method is "optimal" would require more detailed specification of the problem and the optimality criteria. In any case, it is reasonable to say that simple random sampling performs well in a wide range of sampling problems.
Why is random sampling good?
You seem to be conflating the idea of random sampling with the separate question of whether objects are sampled with or without replacement. The first method you describe is a simple-random-sample wi
Why is random sampling good? You seem to be conflating the idea of random sampling with the separate question of whether objects are sampled with or without replacement. The first method you describe is a simple-random-sample with replacement and the second is a simple-random-sample without replacement. In the second case the sample is the whole population so you do indeed perfectly learn the states --- that occurs whenever you take a simple-random-sample without replacement that has the same size as the population. As to your initial question, there is a large statistical literature on the properties of random sampling and why it is desirable for making inferences from a sample to a larger population. Simple random sampling does not favour any object in the population over any other object, which makes it easy to make unbiased inferences about the larger population of interest. Whether or not a sampling method is "optimal" would require more detailed specification of the problem and the optimality criteria. In any case, it is reasonable to say that simple random sampling performs well in a wide range of sampling problems.
Why is random sampling good? You seem to be conflating the idea of random sampling with the separate question of whether objects are sampled with or without replacement. The first method you describe is a simple-random-sample wi
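A small simulation along the lines of the answer above, with an invented population of 1000 values: both schemes give essentially unbiased estimates of the population mean, and sampling without replacement is slightly less variable.

set.seed(7)
pop <- rnorm(1000, mean = 50, sd = 10)          # hypothetical population
with_rep    <- replicate(5000, mean(sample(pop, 30, replace = TRUE)))
without_rep <- replicate(5000, mean(sample(pop, 30, replace = FALSE)))
c(bias_with = mean(with_rep) - mean(pop), bias_without = mean(without_rep) - mean(pop))
c(sd_with = sd(with_rep), sd_without = sd(without_rep))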
51,483
Why is random sampling good?
The Central Limit Theorem may be the theory you're looking for. It shows that the means of random samples are approximately Normally distributed as the sample size grows (even if the population isn't Normally distributed), and that allows us to use a lot of popular statistical tools such as standard errors, p-values, etc. Of course, if your entire population of interest is two individuals, then you take a census, not a sample. Sampling is used when the population (of individuals, events, etc.) is too large for you to collect data on every individual case, or when it's otherwise impossible to do so (e.g., some cases are future events).
Why is random sampling good?
The Central Limit Theorem may be the theory you're looking for. It shows that random sample means follow a Normal distribution (even if the population isn't Normally distributed) and that allows us t
Why is random sampling good? The Central Limit Theorem may be the theory you're looking for. It shows that the means of random samples are approximately Normally distributed as the sample size grows (even if the population isn't Normally distributed), and that allows us to use a lot of popular statistical tools such as standard errors, p-values, etc. Of course, if your entire population of interest is two individuals, then you take a census, not a sample. Sampling is used when the population (of individuals, events, etc.) is too large for you to collect data on every individual case, or when it's otherwise impossible to do so (e.g., some cases are future events).
Why is random sampling good? The Central Limit Theorem may be the theory you're looking for. It shows that random sample means follow a Normal distribution (even if the population isn't Normally distributed) and that allows us t
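A quick simulation sketch of the point above (a made-up exponential population, which is strongly skewed): the sampling distribution of the mean is already close to Normal at n = 40.

set.seed(123)
means <- replicate(10000, mean(rexp(40, rate = 1)))
hist(means, breaks = 50, main = "Means of samples of size 40 from an exponential")
qqnorm(means); qqline(means)   # nearly straight line => approximately Normal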
51,484
Why is random sampling good?
A non-random sample may be good for a particular purpose, or it may be bad. A random sample can be shown with high probability to be "good" for many purposes. In particular, in statistics, our purpose is to learn general properties of a population. There are some non-random samples we can draw that help us do that very well, but also some that mislead us. Without prior knowledge about what kind of sample is useful, it's easy to mess up. On the other hand, when we draw a random sample, we can prove that with high probability, we will get a sample that is "good" for our purposes. Sure, there might be deterministic ways to get a good sample too, but those may require a lot of knowledge and planning. Random sampling is much easier. In your examples, the sample sizes are too small for us to prove the good things I mentioned above. But if the urn has 100 or 1000 balls, random sampling will give good results.
Why is random sampling good?
A non-random sample may be good for a particular purpose, or it may be bad. A random sample can be shown with high probability to be "good" for many purposes. In particular, in statistics, our purpose
Why is random sampling good? A non-random sample may be good for a particular purpose, or it may be bad. A random sample can be shown with high probability to be "good" for many purposes. In particular, in statistics, our purpose is to learn general properties of a population. There are some non-random samples we can draw that help us do that very well, but also some that mislead us. Without prior knowledge about what kind of sample is useful, it's easy to mess up. On the other hand, when we draw a random sample, we can prove that with high probability, we will get a sample that is "good" for our purposes. Sure, there might be deterministic ways to get a good sample too, but those may require a lot of knowledge and planning. Random sampling is much easier. In your examples, the sample sizes are too small for us to prove the good things I mentioned above. But if the urn has 100 or 1000 balls, random sampling will give good results.
Why is random sampling good? A non-random sample may be good for a particular purpose, or it may be bad. A random sample can be shown with high probability to be "good" for many purposes. In particular, in statistics, our purpose
51,485
Using PCA vs Linear Regression
PCA does not involve a dependent variable: All the variables are treated the same. It is primarily a dimension-reduction method. Factor analysis also doesn't involve a dependent variable, but its goal is somewhat different: It is to uncover latent factors. Some people use either the components or the factors (or a subset of them) as independent variables in a later regression. This can be useful if you have a lot of IVs: If you want to reduce the number while losing as little variance as possible, that's PCA. If you think these IVs represent some factors, that's FA. If you think there are factors, then it may be best to use FA; but if you are just trying to reduce the number of variables, then there is no guarantee that the components will relate well to the DV. Another method is partial least squares. That does include the DV.
Using PCA vs Linear Regression
PCA does not involve a dependent variable: All the variables are treated the same. It is primarily dimension reduction method. Factor analysis also doesn't involve a dependent variable, but its goal
Using PCA vs Linear Regression PCA does not involve a dependent variable: All the variables are treated the same. It is primarily a dimension-reduction method. Factor analysis also doesn't involve a dependent variable, but its goal is somewhat different: It is to uncover latent factors. Some people use either the components or the factors (or a subset of them) as independent variables in a later regression. This can be useful if you have a lot of IVs: If you want to reduce the number while losing as little variance as possible, that's PCA. If you think these IVs represent some factors, that's FA. If you think there are factors, then it may be best to use FA; but if you are just trying to reduce the number of variables, then there is no guarantee that the components will relate well to the DV. Another method is partial least squares. That does include the DV.
Using PCA vs Linear Regression PCA does not involve a dependent variable: All the variables are treated the same. It is primarily dimension reduction method. Factor analysis also doesn't involve a dependent variable, but its goal
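A minimal sketch of the "components as IVs in a later regression" idea on simulated data (all names and coefficients are invented). Note the caveat in the answer above: the leading components need not be the directions most related to the DV, which is the motivation for partial least squares.

set.seed(1)
X <- matrix(rnorm(100 * 6), 100, 6)            # six predictors
y <- drop(X %*% c(1, 0.5, 0, 0, -0.5, 0)) + rnorm(100)

pc <- prcomp(X, scale. = TRUE)                  # PCA ignores y entirely
summary(pc)                                     # choose how many components to keep
scores <- pc$x[, 1:2]                           # keep the first two components
summary(lm(y ~ scores))                         # principal components regression
# Partial least squares (e.g. plsr() in the 'pls' package) instead uses y
# when constructing the components.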
51,486
Using PCA vs Linear Regression
These techniques are not exclusive, and they can be complementary. PCA is a dimension reduction technique. The number of dimensions in your dataset corresponds to the number of variables measured per case. For example, imagine your data is survey data, and you administered a 100-item questionnaire. Each individual who completed the questionnaire is represented by a single point in 100-dimensional space. The goal of PCA is to simplify this space in such a way that the distribution of points is preserved in fewer dimensions. This simplification can help you to describe the data more elegantly, but it can also reveal the dominant trends in your data. A great explanation of PCA can be found here: Making sense of principal component analysis, eigenvectors & eigenvalues. Hierarchical linear regression is used to determine whether a predictor (or set of predictors) explains variance in an outcome variable over and above some other predictor (or set of predictors). For example, you may want to know if exercising (IV1) or eating well (IV2) is a better predictor of cardiovascular health (DV). Hierarchical linear regression can help answer this question. If your data is complex (i.e. you have many variables) you can apply PCA to reduce the number of variables/find the "latent variables". These latent variables can then be used in the hierarchical linear regression. Best of luck!
Using PCA vs Linear Regression
These techniques are not exclusive, and they can be complimentary. PCA is a dimension reduction technique. The number of dimensions in your dataset corresponds to the number of observations you have
Using PCA vs Linear Regression These techniques are not exclusive, and they can be complementary. PCA is a dimension reduction technique. The number of dimensions in your dataset corresponds to the number of variables measured per case. For example, imagine your data is survey data, and you administered a 100-item questionnaire. Each individual who completed the questionnaire is represented by a single point in 100-dimensional space. The goal of PCA is to simplify this space in such a way that the distribution of points is preserved in fewer dimensions. This simplification can help you to describe the data more elegantly, but it can also reveal the dominant trends in your data. A great explanation of PCA can be found here: Making sense of principal component analysis, eigenvectors & eigenvalues. Hierarchical linear regression is used to determine whether a predictor (or set of predictors) explains variance in an outcome variable over and above some other predictor (or set of predictors). For example, you may want to know if exercising (IV1) or eating well (IV2) is a better predictor of cardiovascular health (DV). Hierarchical linear regression can help answer this question. If your data is complex (i.e. you have many variables) you can apply PCA to reduce the number of variables/find the "latent variables". These latent variables can then be used in the hierarchical linear regression. Best of luck!
Using PCA vs Linear Regression These techniques are not exclusive, and they can be complimentary. PCA is a dimension reduction technique. The number of dimensions in your dataset corresponds to the number of observations you have
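A sketch of the hierarchical step described above, using hypothetical variable names inspired by the answer's example (exercise, diet and cardio in a data frame dat - substitute your own):

m1 <- lm(cardio ~ exercise, data = dat)
m2 <- lm(cardio ~ exercise + diet, data = dat)
anova(m1, m2)                                   # does diet add explained variance over exercise?
summary(m2)$r.squared - summary(m1)$r.squared   # the R^2 change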
51,487
Using PCA vs Linear Regression
As other answers have said, PCA and Linear Regression (in general) are different tools. PCA is an unsupervised method (only takes in data, no dependent variables) and Linear regression (in general) is a supervised learning method. If you have a dependent variable, a supervised method would be suited to your goals. If you're trying to find out which variables in your data capture most of the variation in the data, PCA is a useful tool.
Using PCA vs Linear Regression
As other answers have said, PCA and Linear Regression (in general) are different tools. PCA is an unsupervised method (only takes in data, no dependent variables) and Linear regression (in general) i
Using PCA vs Linear Regression As other answers have said, PCA and Linear Regression (in general) are different tools. PCA is an unsupervised method (only takes in data, no dependent variables) and Linear regression (in general) is a supervised learning method. If you have a dependent variable, a supervised method would be suited to your goals. If you're trying to find out which variables in your data capture most of the variation in the data, PCA is a useful tool.
Using PCA vs Linear Regression As other answers have said, PCA and Linear Regression (in general) are different tools. PCA is an unsupervised method (only takes in data, no dependent variables) and Linear regression (in general) i
51,488
Using PCA vs Linear Regression
If you are just looking for correlation between variables, you can estimate this simply with the correlation coefficient. It will tell you the strength of the correlation between two variables.
Using PCA vs Linear Regression
If you are just looking for correlation between variables, you can estimate this simply with the correlation coefficient. It will tell you the strength of the correlation between two variables.
Using PCA vs Linear Regression If you are just looking for correlation between variables, you can estimate this simply with the correlation coefficient. It will tell you the strength of the correlation between two variables.
Using PCA vs Linear Regression If you are just looking for correlation between variables, you can estimate this simply with the correlation coefficient. It will tell you the strength of the correlation between two variables.
51,489
Interquartile range exceeds the median
Note that the IQR can never be negative, but medians certainly can be negative; it's not clear that it usually makes sense to compare the two, since one is a location measure and the other is a measure of spread. If you had data that was restricted to be always positive (no such restriction is mentioned, though), you could calculate something akin to a coefficient of variation (by calculating the ratio IQR/median). This would be a measure of relative variability, and would be unitless, like the coefficient of variation is. It might then at least make sense to ask "does such a ratio exceeding 1 indicate a large amount of relative variability?" However, the answer is, we can't really say; it depends on what counts as "large" for you. There's no clear absolute standard. (There isn't an absolute standard for the CV either - one that would make a particular value count as "large" or "small", though in some application areas you can find rules of thumb -- if you have some assumed distribution and some rule of thumb threshold for CV, it might be possible to find a roughly corresponding rule for IQR/median; at least perhaps under some conditions.)
Interquartile range exceeds the median
Note that the IQR can never be negative, but medians certainly can be negative; it's not clear that it usually makes sense to compare the two, since one is a location measure and the other is a measur
Interquartile range exceeds the median Note that the IQR can never be negative, but medians certainly can be negative; it's not clear that it usually makes sense to compare the two, since one is a location measure and the other is a measure of spread. If you had data that was restricted to be always positive (no such restriction is mentioned, though), you could calculate something akin to a coefficient of variation (by calculating the ratio IQR/median). This would be a measure of relative variability, and would be unitless, like the coefficient of variation is. It might then at least make sense to ask "does such a ratio exceeding 1 indicate a large amount of relative variability?" However, the answer is, we can't really say; it depends on what counts as "large" for you. There's no clear absolute standard. (There isn't an absolute standard for the CV either - one that would make a particular value count as "large" or "small", though in some application areas you can find rules of thumb -- if you have some assumed distribution and some rule of thumb threshold for CV, it might be possible to find a roughly corresponding rule for IQR/median; at least perhaps under some conditions.)
Interquartile range exceeds the median Note that the IQR can never be negative, but medians certainly can be negative; it's not clear that it usually makes sense to compare the two, since one is a location measure and the other is a measur
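A tiny R illustration of the ratio discussed above, on simulated strictly positive (right-skewed) data - the numbers are arbitrary:

set.seed(2)
x <- rlnorm(500, meanlog = 0, sdlog = 1)   # positive, right-skewed sample
IQR(x) / median(x)                         # robust analogue of the CV
sd(x) / mean(x)                            # ordinary coefficient of variation, for comparison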
51,490
Interquartile range exceeds the median
In general, comparing the IQR to the median won't give you any extra insight about the dispersion. For example, consider these distributions: [figure omitted: two identical density curves, one shifted to the right along the x axis] They have the same IQR; in fact they're identical copies, just shifted along the x axis. But the IQR is greater than the median for distribution 1, and less for distribution 2. Also, consider that any distribution with median less than 0 will have IQR greater than the median.
Interquartile range exceeds the median
In general, comparing the IQR to the median won't give you any extra insight about the dispersion. For example, consider these distributions: They have the same IQR; in fact they're identical copies,
Interquartile range exceeds the median In general, comparing the IQR to the median won't give you any extra insight about the dispersion. For example, consider these distributions: [figure omitted: two identical density curves, one shifted to the right along the x axis] They have the same IQR; in fact they're identical copies, just shifted along the x axis. But the IQR is greater than the median for distribution 1, and less for distribution 2. Also, consider that any distribution with median less than 0 will have IQR greater than the median.
Interquartile range exceeds the median In general, comparing the IQR to the median won't give you any extra insight about the dispersion. For example, consider these distributions: They have the same IQR; in fact they're identical copies,
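The point above can be checked numerically in R (simulated data, arbitrary numbers): shifting a sample changes the median but leaves the IQR untouched, so whether the IQR exceeds the median says nothing extra about spread.

set.seed(3)
x <- rnorm(200, mean = 1, sd = 2)   # plays the role of distribution 1
y <- x + 5                          # the same shape, shifted right (distribution 2)
c(IQR(x), IQR(y))                   # identical spread
c(median(x), median(y))             # IQR > median for x, IQR < median for y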
51,491
Why does the correlation function in R, cor() return a matrix with fewer rows than you started with?
Correlation is calculated between columns, not between rows. The output should be read as the correlation between column $i$ and column $j$. Since you have 6 columns, you get a 6x6 correlation matrix. All 8 rows have been used in calculating these correlations.
Why does the correlation function in R, cor() return a matrix with fewer rows than you started with?
Correlation is calculated between columns and not between rows. The output should be read as, correlation between column-i and column j. Since you have 6 columns, you get a 6x6 correlation matrix. All
Why does the correlation function in R, cor() return a matrix with fewer rows than you started with? Correlation is calculated between columns, not between rows. The output should be read as the correlation between column $i$ and column $j$. Since you have 6 columns, you get a 6x6 correlation matrix. All 8 rows have been used in calculating these correlations.
Why does the correlation function in R, cor() return a matrix with fewer rows that you started with? Correlation is calculated between columns and not between rows. The output should be read as, correlation between column-i and column j. Since you have 6 columns, you get a 6x6 correlation matrix. All
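A one-line check of the point above with a made-up 8 x 6 matrix:

set.seed(4)
G <- matrix(rnorm(8 * 6), nrow = 8, ncol = 6)
dim(cor(G))   # 6 x 6: one row/column per *column* of G, computed from all 8 rows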
51,492
Why does the correlation function in R, cor() return a matrix with fewer rows than you started with?
In the $G^TG$ operation, $G^T$ is an $6 \times 8$ matrix, and $G$ an $8 \times 6$. Hence, the matrix multiplication will yield a $6 \times 6$ matrix. Addressing the comments and the underlying issue, let's pretend that we have a matrix corresponding to returns of different stocks (in the columns) versus 5 consecutive years (in the rows) - completely fictitious stocks and years. Let's call the matrix, $A$: $$A = \begin{bmatrix} & \color{red}{\text{yah(y)}} & \color{blue}{\text{goog(g)}} & \color{green}{\text{ms(m)}} \\ \text{Yr.1} & 1 &8 & 1\\ \text{Yr.2} & -4 &9 & 3 \\ \text{Yr.3} & 5 & 10 & 4 \\ \text{Yr.4} & 7 & 3 & 5\\ \text{Yr.5} & 8 & 7& 6 \end{bmatrix}$$ We want to calculate the correlations between the different vectors of returns, one for each company, "packaged" in the matrix $A$. The variance-covariance matrix of the portfolio assuming equal holdings will be: $\Large \sigma(A) = \frac{G^TG}{n-1}$ with $G$ being the mean-centered observations and $n-1$ corresponding to the number of observations minus $1$. The mean-centered (or demeaned) matrix $G$ is: $$\begin{bmatrix} & \color{red}{\text{y}} & \color{blue}{\text{g}} & \color{green}{\text{m}} \\ \text{Yr.1} & -2.4 &0.6 & -2.8\\ \text{Yr.2} & -7.4 &1.6 & -0.8 \\ \text{Yr.3} & 1.6 & 2.6 & 0.2 \\ \text{Yr.4} & 3.6 & -4.4 & 1.2\\ \text{Yr.5} & 4.6 & -0.4& 2.2 \end{bmatrix} $$ And the variance-covariance matrix: $$\begin{bmatrix} & \color{red}{y} & \color{blue}{g} & \color{green}{m} \\ \color{red}{y} & 24.30 &-6.70 & 6.85\\ \color{blue}{g} & -6.70 & 7.30 & -2.15 \\ \color{green}{m} & 6.85 & -2.15 & 3.70 \\ \end{bmatrix} $$ So it went from the $5 \times 3$ $A$ matrix to a $3 \times 3$ matrix. The operations involved in calculating the correlation matrix are similar, but the data points are standardized by dividing each one by the standard deviation of the returns of each company (column vectors), right after centering the data points by subtracting the column means as in the covariance matrix: $$\small cor(A)=\tiny\frac{1}{n-1}\small\begin{bmatrix} \frac{\color{red}{y_1 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{red}{y_2 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{red}{y_3 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{red}{y_4 - \bar{y}}}{\color{red}{sd(y)}} &\frac{\color{red}{y_5 - \bar{y}}}{\color{red}{sd(y)}} \\ \frac{\color{blue}{g_1 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{blue}{g_2 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{blue}{g_3 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{blue}{g_4 - \bar{g}}} {\color{blue}{sd(g)}}& \frac{\color{blue}{g_5 - \bar{g}}}{\color{blue}{sd(g)}}\\ \frac{\color{green}{m_1 - \bar{m}}}{\color{green}{sd(m)}}& \frac{\color{green}{m_2 - \bar{m}}}{\color{green}{sd(m)}} &\frac{\color{green}{m_3 - \bar{m}}}{\color{green}{sd(m)}} & \frac{\color{green}{m_4 - \bar{m}}}{\color{green}{sd(m)}} & \frac{\color{green}{m_5 - \bar{m}}}{\color{green}{sd(m)}}\\ &&\color{purple} {3\times 5 \,\text{matrix}} \end{bmatrix} \begin{bmatrix} \frac{\color{red}{y_1 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{blue}{g_1 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{green}{m_1 - \bar{m}}}{\color{green}{sd(m)}} \\ \frac{\color{red}{y_2 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{blue}{g_2 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{green}{m_2 - \bar{m}}}{\color{green}{sd(m)}} \\ \frac{\color{red}{y_3 - \bar{y}}}{\color{red}{sd(y)}} &\frac{\color{blue}{g_3 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{green}{m_3 - \bar{m}}}{\color{green}{sd(m)}} \\ \frac{\color{red}{y_4 - 
\bar{y}}}{\color{red}{sd(y)}} & \frac{\color{blue}{g_4 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{green}{m_4 - \bar{m}}}{\color{green}{sd(m)}} \\ \frac{\color{red}{y_5 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{blue}{g_5 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{green}{m_5 - \bar{m}}}{\color{green}{sd(m)}} \\ &\color{purple} {5\times 3 \,\text{matrix}} \end{bmatrix}$$ One more quick thing for completeness' sake: We have so far a clunky matrix as the result, but in general we want to estimate the portfolio variance: $1$ portfolio; $1$ variance. To do that we multiply the variance-covariance matrix of $A$ on the left and on the right by the vector containing the proportions or weightings in each stock - $W$. Since we want to end up with a single scalar number, it is unsurprising that the algebra will be: $W^T\,\sigma(A)\,W$, with the vector of weights (fractions) being in this case $\color{blue}{3}\times \color{blue}{1}$ to match perfectly on the left as $W^T$, and on the right as $W$. Code in R: Fictitious data set of returns in billions, percentage (?) - the matrix A: yah = c(1, - 4, 5, 7, 8) goog = c(8, 9, 10, 3, 7) ms = c(1, 3, 4, 5, 6) returns <- cbind(yah, goog, ms) row.names(returns) = c("Yr.1","Yr.2","Yr.3","Yr.4", "Yr.5") Centered matrix (G) of demeaned returns: demeaned_returns <- scale(returns, scale = F, center = T) Manual and R function calculation of the variance-covariance matrix: (var_cov_A = (t(demeaned_returns)%*%demeaned_returns)/(nrow(returns)-1)) cov(returns) # the R in-built function cov() returns the same results. Correlation matrix calculation: We need to divide by the standard deviation column-wise: demeaned_scaled_returns <- scale(returns, scale = T, center = T) and then proceed as above: (corr_A = (t(demeaned_scaled_returns) %*% demeaned_scaled_returns)/(nrow(returns)-1)) cor(returns) # Again, the R function returns the same matrix.
Why does the correlation function in R, cor(), return a matrix with fewer rows than you started with?
In the $G^TG$ operation, $G^T$ is an $6 \times 8$ matrix, and $G$ an $8 \times 6$. Hence, the matrix multiplication will yield a $6 \times 6$ matrix. Addressing the comments and the underlying issue,
Why does the correlation function in R, cor() return a matrix with fewer rows that you started with? In the $G^TG$ operation, $G^T$ is an $6 \times 8$ matrix, and $G$ an $8 \times 6$. Hence, the matrix multiplication will yield a $6 \times 6$ matrix. Addressing the comments and the underlying issue, let's pretend that we have a matrix corresponding to returns of different stocks (in the columns) versus 5 consecutive years (in the rows) - completely fictitious stocks and years. Let's call the matrix, $A$: $$A = \begin{bmatrix} & \color{red}{\text{yah(y)}} & \color{blue}{\text{goog(g)}} & \color{green}{\text{ms(m)}} \\ \text{Yr.1} & 1 &8 & 1\\ \text{Yr.2} & -4 &9 & 3 \\ \text{Yr.3} & 5 & 10 & 4 \\ \text{Yr.4} & 7 & 3 & 5\\ \text{Yr.5} & 8 & 7& 6 \end{bmatrix}$$ We want to calculate the correlations between the different vectors of returns, one for each company, "packaged" in the matrix $A$. The variance-covariance matrix of the portfolio assuming equal holdings will be: $\Large \sigma(A) = \frac{G^TG}{n-1}$ with $G$ being the mean-centered observations and $n-1$ corresponding to the number of observations minus $1$. The mean-centered (or demeaned) matrix $G$ is: $$\begin{bmatrix} & \color{red}{\text{y}} & \color{blue}{\text{g}} & \color{green}{\text{m}} \\ \text{Yr.1} & -2.4 &0.6 & -2.8\\ \text{Yr.2} & -7.4 &1.6 & -0.8 \\ \text{Yr.3} & 1.6 & 2.6 & 0.2 \\ \text{Yr.4} & 3.6 & -4.4 & 1.2\\ \text{Yr.5} & 4.6 & -0.4& 2.2 \end{bmatrix} $$ And the variance-covariance matrix: $$\begin{bmatrix} & \color{red}{y} & \color{blue}{g} & \color{green}{m} \\ \color{red}{y} & 24.30 &-6.70 & 6.85\\ \color{blue}{g} & -6.70 & 7.30 & -2.15 \\ \color{green}{m} & 6.85 & -2.15 & 3.70 \\ \end{bmatrix} $$ So it went from the $5 \times 3$ $A$ matrix to a $3 \times 3$ matrix. The operations involved in calculating the correlation matrix are similar, but the data points are standardized by dividing each one by the standard deviation of the returns of each company (column vectors), right after centering the data points by subtracting the column means as in the covariance matrix: $$\small cor(A)=\tiny\frac{1}{n-1}\small\begin{bmatrix} \frac{\color{red}{y_1 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{red}{y_2 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{red}{y_3 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{red}{y_4 - \bar{y}}}{\color{red}{sd(y)}} &\frac{\color{red}{y_5 - \bar{y}}}{\color{red}{sd(y)}} \\ \frac{\color{blue}{g_1 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{blue}{g_2 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{blue}{g_3 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{blue}{g_4 - \bar{g}}} {\color{blue}{sd(g)}}& \frac{\color{blue}{g_5 - \bar{g}}}{\color{blue}{sd(g)}}\\ \frac{\color{green}{m_1 - \bar{m}}}{\color{green}{sd(m)}}& \frac{\color{green}{m_2 - \bar{m}}}{\color{green}{sd(m)}} &\frac{\color{green}{m_3 - \bar{m}}}{\color{green}{sd(m)}} & \frac{\color{green}{m_4 - \bar{m}}}{\color{green}{sd(m)}} & \frac{\color{green}{m_5 - \bar{m}}}{\color{green}{sd(m)}}\\ &&\color{purple} {3\times 5 \,\text{matrix}} \end{bmatrix} \begin{bmatrix} \frac{\color{red}{y_1 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{blue}{g_1 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{green}{m_1 - \bar{m}}}{\color{green}{sd(m)}} \\ \frac{\color{red}{y_2 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{blue}{g_2 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{green}{m_2 - \bar{m}}}{\color{green}{sd(m)}} \\ \frac{\color{red}{y_3 - \bar{y}}}{\color{red}{sd(y)}} &\frac{\color{blue}{g_3 - \bar{g}}}{\color{blue}{sd(g)}} & 
\frac{\color{green}{m_3 - \bar{m}}}{\color{green}{sd(m)}} \\ \frac{\color{red}{y_4 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{blue}{g_4 - \bar{go}}}{\color{blue}{sd(g)}} & \frac{\color{green}{m_4 - \bar{m}}}{\color{green}{sd(m)}} \\ \frac{\color{red}{y_5 - \bar{y}}}{\color{red}{sd(y)}} & \frac{\color{blue}{g_5 - \bar{g}}}{\color{blue}{sd(g)}} & \frac{\color{green}{m_5 - \bar{m}}}{\color{green}{sd(m)}} \\ &\color{purple} {5\times 3 \,\text{matrix}} \end{bmatrix}$$ One more quick thing for completeness sake: We have so far a clunky matrix as the result, but in general we want to estimate the portfolio variance: $1$ portfolio; $1$ variance. To do that we multiply the matrix of variance-covariance of $A$ to the left and to the right by the vector containing the proportions or weightings in each stock - $W$. Since we want to end up with a scalar single number, it is unsurprising that the algebra will be: $W^T\,\sigma(A)\,W$, with the vector of weights (fractions) being in this case $\color{blue}{3}\times \color{blue}{1}$ to match perfectly on the left as $W^T$, and on the right as $W$. Code in R: Fictitious data set of returns in billions, percentage (?) - the matrix A: yah = c(1, - 4, 5, 7, 8) goog = c(8, 9, 10, 3, 7) ms = c(1, 3, 4, 5, 6) returns <- cbind(yah, goog, ms) row.names(returns) =c("Yr.1","Yr.2","Yr.3","Yr.4", "Yr.5") Centered matrix (G) of demeaned returns: demeaned_returns <- scale(returns, scale = F, center = T) Manual and R function calculation of the variance-covariance matrix: (var_cov_A = (t(demeaned_returns)%*%demeaned_returns)/(nrow(returns)-1)) cov(returns) # the R in-built function cov() returns the same results. Correlation matrix calculation: We need to divide by the standard deviation column-wise: demeaned_scaled_returns <- scale(returns, scale = T, center = T) and then proceed as above: (corr_A = (t(demeaned_scaled_returns) %*% demeaned_scaled_returns)/(nrow(returns)-1)) cor(returns) # Again, the R function returns the same matrix.
Why does the correlation function in R, cor() return a matrix with fewer rows that you started with? In the $G^TG$ operation, $G^T$ is an $6 \times 8$ matrix, and $G$ an $8 \times 6$. Hence, the matrix multiplication will yield a $6 \times 6$ matrix. Addressing the comments and the underlying issue,
51,493
Probability that the minimum of a normal random sample will exceed the maximum of another?
Since the data points are drawn independently from a continuous distribution, the probability of obtaining equal values is $0$, and thus the question is equivalent to "What is the probability that the $g_2$ largest values are assigned to group $G_2$?". Assuming $g_1$ and $g_2$ are constants and the random partition into groups is independent of the values, all $\binom{g_1+g_2}{g_2}$ partitions into groups of sizes $(g_1,g_2)$ are equally likely, while exactly one of them satisfies the condition. Thus the probability is \begin{equation} \frac{1}{\binom{g_1+g_2}{g_2}} = \frac{g_1!\,g_2!}{(g_1+g_2)!}. \end{equation}
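As a quick sanity check of this formula (my addition, not part of the original answer), a small Monte Carlo sketch in R with illustrative group sizes $g_1 = 4$ and $g_2 = 3$:
g1 <- 4; g2 <- 3
set.seed(1)
hits <- replicate(1e5, {
  x <- rnorm(g1 + g2)                         # any continuous distribution gives the same answer
  min(x[(g1 + 1):(g1 + g2)]) > max(x[1:g1])   # every value in G2 exceeds every value in G1
})
mean(hits)                  # simulated probability, close to 1/35
1 / choose(g1 + g2, g2)     # theoretical value g1! g2! / (g1 + g2)! = 1/35
The group sizes are arbitrary; the point is only that the empirical frequency matches the distribution-free combinatorial result.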
Probability that the minimum of a normal random sample will exceed the maximum of another?
Since the datapoints are drawn independently from a continuous distribution, probability of obtaining equal values is $0$ and thus the question is equivalent to "What is the probability that $g_2$ lar
Probability that the minimum of a normal random sample will exceed the maximum of another? Since the datapoints are drawn independently from a continuous distribution, probability of obtaining equal values is $0$ and thus the question is equivalent to "What is the probability that $g_2$ largest values are assigned to group $G_2$". Assuming $g_1$ and $g_2$ are constants and the random partition into groups is independent of the values, all ${{g_1+g_2}\choose{g_2}}$ partitions into groups of sizes $(g_1,g_2)$ are equally likely, while exactly one of them satisfies the condition. Thus the probability is \begin{equation} \frac{1}{{g_1+g_2}\choose{g_2}} = \frac{g_1!\,g_2!}{(g_1+g_2)!}. \end{equation}
Probability that the minimum of a normal random sample will exceed the maximum of another? Since the datapoints are drawn independently from a continuous distribution, probability of obtaining equal values is $0$ and thus the question is equivalent to "What is the probability that $g_2$ lar
51,494
Probability that the minimum of a normal random sample will exceed the maximum of another?
I would rather comment, but lack the reputation to do so. As such, this is not a complete answer. It is equivalent to ask "What is the probability that the minimum member of $G_2$ is larger than the maximum member of $G_1$?". This sounds like a job for order statistics! In case you are unfamiliar, order statistics are simply your data ordered by magnitude (look it up on Wikipedia for a longer explanation). As such, the first order statistic is the minimum, and, in your case, the $g_1^{th}$ order statistic is the max of $G_1$, and likewise, the $g_2^{th}$ order statistic is the max of $G_2$. If you would like, you can compute, for the ordered values $x_{(1)}, x_{(2)}, \ldots, x_{(g_1)}$ from $G_1$ and $y_{(1)}, y_{(2)}, \ldots, y_{(g_2)}$ from $G_2$, the probability $P(x_{(g_1)} < y_{(1)})$, which will give you an exact answer in terms of $g_1$ and $g_2$. As Tim stated, the probability will be very small for any sizable $n$.
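For completeness (my addition, not part of the original answer), the suggested computation can be carried out numerically from the order-statistic densities, conditioning on the minimum of $G_2$: $P(x_{(g_1)} < y_{(1)}) = \int F(t)^{g_1}\, g_2\,\big(1-F(t)\big)^{g_2-1} f(t)\, dt$. A sketch in R for standard normal samples with illustrative sizes $g_1 = 4$ and $g_2 = 3$:
g1 <- 4; g2 <- 3
# P(max of G1 < t) = pnorm(t)^g1; density of the min of G2 at t = g2 * (1 - pnorm(t))^(g2 - 1) * dnorm(t)
integrand <- function(t) pnorm(t)^g1 * g2 * (1 - pnorm(t))^(g2 - 1) * dnorm(t)
integrate(integrand, -Inf, Inf)$value   # approximately 1/35
1 / choose(g1 + g2, g2)                 # the distribution-free combinatorial answer from the other reply
The numeric integral agrees with the combinatorial argument above, as it should for any continuous parent distribution.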
Probability that the minimum of a normal random sample will exceed the maximum of another?
I would rather comment, but lack the reputation to do so. As such, this is not a complete answer. It is equivalent to say "What is the probability that the minimum member of $G_2$ is larger than the m
Probability that the minimum of a normal random sample will exceed the maximum of another? I would rather comment, but lack the reputation to do so. As such, this is not a complete answer. It is equivalent to say "What is the probability that the minimum member of $G_2$ is larger than the maximum member of $G_1$?". This sounds like a job for order statistics! In case you are unfamiliar, order statistics are simply your data ordered by magnitude (look it up on wikipedia for a longer explanation. As such, the first order statistic is the minimum, and, in your case, the $g_1^{th}$ order statistic is the max of $G_1$, and likewise, $g_2^{th}$ order statistic the max of $G_2$. If you would like, you can compute, for $x_1$, $x_2$ ... $x_{g_1}$ from $G_1$ and $y_1$,$y_2$...$y_{g_2}$ from $G_2$, $P(x_{g_1} < y_1)$, which will give you an exact answer in terms of $g_1$ and $g_2$. As Tim stated, the probability will be very small for any sizable $n$.
Probability that the minimum of a normal random sample will exceed the maximum of another? I would rather comment, but lack the reputation to do so. As such, this is not a complete answer. It is equivalent to say "What is the probability that the minimum member of $G_2$ is larger than the m
51,495
Probability that the minimum of a normal random sample will exceed the maximum of another?
You have two samples $G_1$ and $G_2$ taken from the same population (assuming that you divide your initial sample randomly). This means that as your sample grows, you expect each of those two samples to be more and more similar to the initial population, and hence the probability of every value from $G_2$ being greater than every value of $G_1$ gets closer and closer to $0$. This is true no matter what the distribution is, since you expect the two samples to become more and more similar to each other as your samples grow. You can show this with a simple simulation:
out <- list()
fun <- function(n) {
  X <- rnorm(n)
  index <- 1:(n/2)
  all(max(X[-index]) < X[index])  # we need to check only if all
                                  # the values are greater than the max
}
for (n in c(4, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6))
  out[[as.character(n)]] <- mean(replicate(1000, fun(n)))
as.data.frame(out)
##      X4   X10 X100 X1000 X10000 X1e.05 X1e.06
## 1 0.161 0.004    0     0      0      0      0
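For reference (an addition, not part of the original answer), the simulated frequencies line up with the exact probability $1/\binom{n}{n/2}$ given by the combinatorial argument in the earlier answer, and that quantity collapses to zero extremely quickly:
sapply(c(4, 10, 100), function(n) 1 / choose(n, n/2))
# roughly 0.167, 0.00397 and 1e-29: compare with the simulated 0.161 and 0.004 above
This is why every estimate beyond $n = 10$ in the table is effectively zero.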
Probability that the minimum of a normal random sample will exceed the maximum of another?
You have two samples $G_1$ and $G_2$ taken from the same population (assuming that you divide your initial sample randomly), this means that as your sample grows you expect each of those two sample to
Probability that the minimum of a normal random sample will exceed the maximum of another? You have two samples $G_1$ and $G_2$ taken from the same population (assuming that you divide your initial sample randomly), this means that as your sample grows you expect each of those two sample to be more and more similar to the initial population. This means that as your sample grows, probability of all value from $G_2$ being grater than any value of $G_1$ gets closer and closer to $0$. This is true no matter what was the distribution since you expect those two samples to be more and more similar to each other as your samples grow. You can show this with simple simulation: out <- list() fun <- function(n) { X <- rnorm(n) index <- 1:(n/2) all(max(X[-index]) < X[index]) # we need to check only if all # the values are greater than max } for (n in c(4, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6)) out[[as.character(n)]] <- mean(replicate(1000, fun(n))) as.data.frame(out) ## X4 X10 X100 X1000 X10000 X1e.05 X1e.06 ## 1 0.161 0.004 0 0 0 0 0
Probability that the minimum of a normal random sample will exceed the maximum of another? You have two samples $G_1$ and $G_2$ taken from the same population (assuming that you divide your initial sample randomly), this means that as your sample grows you expect each of those two sample to
51,496
Why does multiple linear regression fail when the number of variables is larger than the number of samples?
I will provide a visual in a very simple case because it is the easiest case to visualize. Imagine you are trying to fit the following linear model: $Y\sim \alpha + X\beta + \epsilon$. In this situation you have two parameters, $\alpha$ and $\beta$, and imagine you only have a sample size of $n=1$. Your single piece of data is represented by the black dot below. Notice all the lines we can fit through this point! Each of these lines is a line of best fit, as they all minimize your SSE to 0. There are in fact infinitely many lines through this point. You can see this because the data point above is $(1,1)$ and there are infinitely many solutions to $1=\alpha + \beta$. Now, try to generalize this to fitting a plane in three-dimensional space when you only have two pieces of data $(n=2)$. You will find a similar issue there. What happens when we try to fit the same model, $Y\sim \alpha + X\beta + \epsilon$, when we have two points of data represented by the two black dots below? In other words, what happens when $n \geq p$? The blue line above uniquely beats every other line we can draw on this graph in terms of SSE. In other words, there is no other line that would provide as good a fit as the blue line. If you are not satisfied with a visual explanation, let's think about this in matrix notation. Recall that in multiple linear regression $\hat{\beta}_{p\times1} = (X^TX)^{-1}_{p\times p}X^T_{p\times n}Y_{n\times1}$, which requires the $p \times p$ matrix $X^TX$ to be invertible. But $X^TX$ has rank at most $n$, so when $p > n$ it is singular and cannot be inverted. Equivalently, $\hat{\beta}$ has to satisfy the normal equations $X^TX\hat{\beta} = X^TY$, a system in $p$ unknowns whose coefficient matrix has rank at most $n$. There is no unique solution to this system when the number of unknowns, $p$, is larger than $n$!
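As a quick empirical companion to this answer (my addition, not from the original), a tiny R sketch of what happens when there are more coefficients than observations: lm() can only pin down as many coefficients as it has data points and reports the remaining ones as NA.
set.seed(42)
n <- 5; p <- 10                  # more predictors than observations
X <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)
coef(lm(y ~ X))                  # the intercept plus 4 slopes are fitted; the other 6 slopes are NA
The fit is not unique: R silently drops the aliased columns, which is just one of infinitely many ways to achieve zero residual error here.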
Why does multiple linear regression fail when the number of variables are larger than the number of
I will provide a visual in a very simple case because it is the easiest case to visualize. Imagine you are trying to fit the following linear model: $Y\sim \alpha + X\beta + \epsilon$. In this situati
Why does multiple linear regression fail when the number of variables are larger than the number of samples? I will provide a visual in a very simple case because it is the easiest case to visualize. Imagine you are trying to fit the following linear model: $Y\sim \alpha + X\beta + \epsilon$. In this situation you have two parameters, $\alpha$ and $\beta$, and imagine you only have a sample size of $n=1$. Your single piece of data is represented by the black dot below. Notice all the lines we can fit through this point! Each of these lines are the line of best fit as they all minimize your SSE to 0. There are in fact infinite lines through this point. You can see this because the data point above is $(1,1)$ and there are infinite solutions to $1=\alpha + \beta$. Now, try to generalize this to thinking of fitting a plane in three dimensional space when you only have two pieces of data $(n=2)$. You will find a similar issue there. What happens when we try to fit the same model, $Y\sim \alpha + X\beta + \epsilon$, when we have two points of data represented by the two black dots below? In other words, what happens when $n \geq p$? The blue line above uniquely beats every other line we can draw on this graph in terms of SSE. In other words, there is no other line that would provide as good a fit as the blue line. If you are not satisfied with a visual explanation, let's think about this in matrix notation. Recall, in multiple linear regression $\hat{\beta}_{p\times1} = [(X^TX)^{-1}X^T]_{p\times n}Y_{n\times1}$. We can equivalently write this as $[(X^TX)^{-1}X^T]_{n\times p}^{-1}\hat{\beta}_{p\times1} = Y_{n\times1}$ by taking the left inverse on each side. If you are familiar with linear algebra, you'll see that $\hat{\beta}$ is the solution to a system of $n$ equations with $p$ unknowns. There is no unique solution to this system when the number of unknowns, $p$, is larger than $n$!
Why does multiple linear regression fail when the number of variables are larger than the number of I will provide a visual in a very simple case because it is the easiest case to visualize. Imagine you are trying to fit the following linear model: $Y\sim \alpha + X\beta + \epsilon$. In this situati
51,497
Why does multiple linear regression fail when the number of variables is larger than the number of samples?
I believe what Nick was saying in his comment is: your MLR with N variables is trying to fix N values (coefficients) in N-dimensional space, but you are trying to do it with M (M < N) pieces of data. How will you do this? Since you only have M data points, the other N-M dimensions of your answer are free-floating, as happens when trying to define a line through a single point (a 2D problem with only one sample) or a plane through two points (a 3D problem with only 2 samples). In the case of a line through a point: you have a single sample and you're trying to determine the slope and intercept of a line through it. You can arbitrarily pick a slope and that determines your intercept, or you can arbitrarily pick an intercept and it determines your slope, but you are not doing this from the sample. You have an infinite number of choices, all arbitrary. If you have two points, the line through them is determined unambiguously. If you have many points and are doing OLS, the line goes through the "center" of the cloud of points in some sense, but it is still determined unambiguously by the rules of OLS.
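To make the line-through-a-single-point case concrete (my addition, not part of the original answer), take the point $(1, 1)$ as an illustration: pick any slope and the intercept is forced, yet every choice fits the single sample perfectly.
b <- c(-2, 0, 1, 5)   # arbitrary slopes
a <- 1 - b            # the intercept each slope forces through the point (1, 1)
a + b * 1             # every (a, b) pair reproduces y = 1 exactly
None of these choices is informed by the data beyond that single point, which is exactly the arbitrariness described above.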
Why does multiple linear regression fail when the number of variables are larger than the number of
I believe what Nick was saying in his comment is: your MLR with N variables is trying to fix N values (coefficients) in N-dimensional space, but you are trying to do it with M (M < N) pieces of data.
Why does multiple linear regression fail when the number of variables are larger than the number of samples? I believe what Nick was saying in his comment is: your MLR with N variables is trying to fix N values (coefficients) in N-dimensional space, but you are trying to do it with M (M < N) pieces of data. How will you do this? Since you only have M data points, the other M-N dimensions of your answer are free-floating, as happens when trying to define a line through a single point (2D problem with only one sample) or a plane through two points (3D problem with only 2 samples). In the case of a line through a point: you have a single sample and you're trying to determine the slope and intercept of a line through it. You can arbitrarily pick a slope and that determines your intercept, or you can arbitrarily pick an intercept and it determines your slope, but you are not doing this from the sample. You have an infinite number of choices, all arbitrary. If you have two points, the line through them is determined unambiguously. If you have many points and are doing OLS, the line goes through the "center" of the cloud of points in some sense, but it is still unambiguous by the rules of OLS.
Why does multiple linear regression fail when the number of variables are larger than the number of I believe what Nick was saying in his comment is: your MLR with N variables is trying to fix N values (coefficients) in N-dimensional space, but you are trying to do it with M (M < N) pieces of data.
51,498
Why does multiple linear regression fail when the number of variables is larger than the number of samples?
Analyst's answer is in fact correct. If $p > n$ you end up with an underdetermined system and can use the pseudo-inverse to solve it. When you have more parameters than equations, as in this case, the pseudo-inverse finds the minimum Euclidean norm solution. This is the best choice you can make, since that solution has the lowest variance. It is also what you achieve with ridge regression, but in a different way. We want this because, given a model $Y=Xb$, if there is noise in the measurement of $X$, i.e. $X_n=X+n$, then the estimate $Y=X_n b$ will have an error of $nb$, whose size depends on the norm of $b$. I ignored the constant parameter because one can set it equal to the average of the samples and use the above model to estimate the rest.
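A small sketch of this idea (my addition, not part of the original answer), using the Moore-Penrose pseudo-inverse from the MASS package to obtain the minimum-norm solution of an underdetermined system:
library(MASS)                    # provides ginv(), the Moore-Penrose pseudo-inverse
set.seed(1)
n <- 5; p <- 10                  # p > n: more parameters than equations
X <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)
b <- ginv(X) %*% y               # the minimum Euclidean norm solution
max(abs(X %*% b - y))            # essentially 0: the underdetermined system is solved exactly
Among all vectors $b$ with $Xb = y$, this one has the smallest norm, which is the property the answer appeals to when discussing sensitivity to noise.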
Why does multiple linear regression fail when the number of variables are larger than the number of
Analyst's answer is in fact correct. If p>n you end up with an underdetermined system and can use pseudo-inverse to solve it. When you have more parameters than equations as in this case, using pseud
Why does multiple linear regression fail when the number of variables are larger than the number of samples? Analyst's answer is in fact correct. If p>n you end up with an underdetermined system and can use pseudo-inverse to solve it. When you have more parameters than equations as in this case, using pseudo-inverse finds the minimum euclidian norm solution. This is the best assumption you can do since this solution has the lowest variance. That is also what you achieve with ridge regression but in a different way. We want this since, given a model $Y=Xb$, if there is noise in measurement $X$ i.e $X_n=X+n$, then estimate $Y=X_n b$ will have an error of $nb$ that its value depends on norm of $b$. I ignored the constant parameter because one can set it equal to the average of samples and use the above model to estimate the rest.
Why does multiple linear regression fail when the number of variables are larger than the number of Analyst's answer is in fact correct. If p>n you end up with an underdetermined system and can use pseudo-inverse to solve it. When you have more parameters than equations as in this case, using pseud
51,499
Why does multiple linear regression fail when the number of variables is larger than the number of samples?
If $P$, the number of variables, is larger than $N$, the number of observations, then you have an underdetermined system of equations. There exists a pseudo-inverse which can solve it: http://people.csail.mit.edu/bkph/articles/Pseudo_Inverse.pdf
Why does multiple linear regression fail when the number of variables are larger than the number of
If P, number of variables, is larger than N number of observations then you have underdetermined system of equations. There exist pseudo-inverse which can solve this: http://people.csail.mit.edu/bkp
Why does multiple linear regression fail when the number of variables are larger than the number of samples? If P, number of variables, is larger than N number of observations then you have underdetermined system of equations. There exist pseudo-inverse which can solve this: http://people.csail.mit.edu/bkph/articles/Pseudo_Inverse.pdf
Why does multiple linear regression fail when the number of variables are larger than the number of If P, number of variables, is larger than N number of observations then you have underdetermined system of equations. There exist pseudo-inverse which can solve this: http://people.csail.mit.edu/bkp
51,500
Why does multiple linear regression fail when the number of variables is larger than the number of samples?
Let's take an example. Case A ($n = 1$, $p = 1$): RSS $= (y-\alpha-X\beta)^2$. Since RSS is a squared quantity, the minimum possible value it can take is zero: $(y-\alpha-X\beta)^2 = 0$, which means $(y-\alpha-X\beta) = 0$, i.e. $y=\alpha+X\beta$. Say $x=1$, $y=4$; then $4=\alpha+\beta$. Since $n=1$ only, this single equation has infinitely many solutions. Case B ($n = 2$, $p = 1$): RSS $= \sum (y-\alpha-X\beta)^2$, and again $y=\alpha+X\beta$ must hold at each data point for RSS to reach zero. Say we have two $(x, y)$ pairs now (as $n=2$); let them be $(1, 4)$ and $(3, 16)$. With the same logic as Case A, we will have two equations: $4=\alpha+\beta$ and $16=\alpha+3\beta$. Solving them, we get a unique solution: $\alpha = -2$, $\beta = 6$. We got a unique solution here, so the OLS estimates can be calculated. The same logic can be extended to higher dimensions. For a theoretical explanation, you can refer to @TrynnaDoStat's answer.
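As a quick check of the arithmetic in Case B (my addition, not part of the original answer), fitting the same two points in R reproduces the hand calculation:
x <- c(1, 3); y <- c(4, 16)
coef(lm(y ~ x))   # (Intercept) = -2, slope = 6, matching alpha = -2 and beta = 6 above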
Why does multiple linear regression fail when the number of variables are larger than the number of
Let's take an example: Case A (n = 1, p = 1): RSS = $(y-α-Xβ)^2$ Since RSS is a squared number, the minimum possible value it can have is zero. $(y-α-Xβ)^2$ = 0 $(y-α-Xβ)$ = 0 $y=α+Xβ$ Say x=1, y=4 th
Why does multiple linear regression fail when the number of variables are larger than the number of samples? Let's take an example: Case A (n = 1, p = 1): RSS = $(y-α-Xβ)^2$ Since RSS is a squared number, the minimum possible value it can have is zero. $(y-α-Xβ)^2$ = 0 $(y-α-Xβ)$ = 0 $y=α+Xβ$ Say x=1, y=4 then: $4=α+β$ Now as n=1 only, for this RSS equation we can have infinite solutions. Case B (n = 2, p = 1): RSS = $(y-α-Xβ)^2$ $y=α+Xβ$ Say we have two (x, y) pairs now (as n=2). Let them be (1, 4) and (3, 16). With the same logic as Case A, we will have two equations: $4=α+β$ $16=α+3β$ Solving them we get a unique solution : $α = -2$ $β = 6$ We got a unique solution here and OLS estimates can be calculated. The same logic can be extended to higher dimensions. For theoretical explanation, you can refer @TrynnaDoStat's answer.
Why does multiple linear regression fail when the number of variables are larger than the number of Let's take an example: Case A (n = 1, p = 1): RSS = $(y-α-Xβ)^2$ Since RSS is a squared number, the minimum possible value it can have is zero. $(y-α-Xβ)^2$ = 0 $(y-α-Xβ)$ = 0 $y=α+Xβ$ Say x=1, y=4 th