51,801 | Fallacy in p-value definition
This is a helpful link.
More formally, if you observed a p-value that was less than 5%, you could say: "The probability of the available (or even less likely) data, given that the null hypothesis is true, is less than 5%."
51,802 | Data visualization of average and standard deviation over a small time series
Why not a line plot? A line plot seems pretty fitting if you'd like to show general trends in mean and SD, provided individual species are not the focal point.
Anyway, here is an alternative, which strictly speaking is still a line plot, but time is not one of the axes. It is also good for discussing individual species. The x-axis is your variable 1 (I guessed that's the mean); the y-axis is variable 2 (the SD).
And yes, a heat map may be too much, but I think it's worth a try to show a 3D kernel density if you can provide the actual data as well. The code is taken from this thread.
Another possibility is to make three plots:
t1 SD vs. mean, using black symbols.
t1 SD vs. mean, with the symbols turned light grey; overlay t2 SD vs. mean using black symbols. Connect each species pair with an arrow.
t2 SD vs. mean, with the symbols turned light grey; overlay t3 SD vs. mean using black symbols. Connect each species pair with an arrow.
51,803 | Data visualization of average and standard deviation over a small time series
You mean you don't want a line plot like this? :)
Time is a natural fit for the X axis in line plots. Here's a better view using a small-multiples approach.
Sometimes it helps to order the panels by something analytical. Here's an example ordered by average SD:
With so many category levels (20), another technique is to cluster similar levels and overlay those together. Here's an example with 4 clusters. (Not bothering with obvious but addressable deficiencies in coloring and labeling.)
51,804 | $r^2 = 35\%$, $r = 0.59$: How does a (pro) statistician formally interpret this correlation? Strong? Weak?
If you were to blindly apply Cohen's criteria for describing the strength of a correlation, an r = .59 would be described as a "large" effect.
However, you should note that this "large" effect might not actually be statistically significant, or even that meaningful. For the former point, if you look at the critical values for a correlation coefficient, you need more than 9 degrees of freedom for this to be "statistically significant" at $\alpha < .05$ (a quick check is sketched below). For the latter, you should look at the scatterplot and really examine how the scores fall - as the commenters above mentioned, the relationship might not be linear or all that useful.
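A minimal R sketch of that critical-value claim (the function name r_crit is mine, not from any package); it inverts the usual relation $t = r\sqrt{df/(1-r^2)}$:
r_crit <- function(df, alpha = 0.05) {
  # two-sided critical value of the correlation coefficient at level alpha
  t <- qt(1 - alpha / 2, df)
  t / sqrt(df + t^2)
}
r_crit(9)   # about 0.602, so r = .59 just misses significance at df = 9
r_crit(10)  # about 0.576, so r = .59 is significant once df >= 10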
Of course, if the linear relationship does seem tenable and you have the correlation, you can also generate the linear regression equation to predict someone's 102 grade from their 101 grade. But, as Nestor said, this does not mean their 101 grade is causing their 102 grade.
51,805 | $r^2 = 35\%$, $r = 0.59$: How does a (pro) statistician formally interpret this correlation? Strong? Weak?
What is considered strong and weak tends to differ considerably between disciplines, and for good reasons. It helps to have a correlation in mind from your own field that is considered very strong. In my case that would be the correlation between the years of education of spouses (about .60). This gives a bit more substance to the statement that a correlation is "strong": the correlation between grades in two related classes is about as strong as the correlation between the years of education of two spouses.
51,806 | $r^2 = 35\%$, $r = 0.59$: How does a (pro) statistician formally interpret this correlation? Strong? Weak?
Without actually seeing the plot, a 'pro' statistician wouldn't bother interpreting it as a 'strong' or 'weak' correlation because, as you word it, you seem to be implying causality. Remember that dependence sometimes implies (linear, in the case where you refer to the $r^2$ given by a linear fit) correlation, but not the other way around: correlation does not imply causality.
51,807 | Can the coefficient of determination (R-squared) for a linear regression ever be zero?
Yes, whenever there is no linear relationship between the variables: for example, when either X or Y is constant, or when every high-low pair of data points is balanced by a high-high or low-low pair. For example, $X=(1,1,2,2)$, $Y=(1,2,1,2)$, or $X=(-2,-1,0,1,2)$, $Y=X^2$.
Here are some examples: all of these have a correlation of 0, and hence a coefficient of determination of zero.
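A quick check in R of the two numeric examples above:
cor(c(1, 1, 2, 2), c(1, 2, 1, 2))  # 0: the high-low pairs balance out
x <- c(-2, -1, 0, 1, 2)
cor(x, x^2)                        # 0: a perfectly symmetric nonlinear relationship
summary(lm(x^2 ~ x))$r.squared     # R-squared of 0 for the linear fit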
It's worth noting that as soon as there's any randomness, there's almost certainly going to be some nonzero sample correlation. With a small sample size, that correlation might be quite high - it wouldn't be unusual to see a correlation as high as $\pm$0.3 with a sample size of 20.
51,808 | Can the coefficient of determination (R-squared) for a linear regression ever be zero?
The null hypothesis is always false, basically. That's why people shouldn't just report p-values. To make up a deliberately silly example: in the UK, a highly significant correlation between house number and income - sample size of 20 million, r = 0.004, p < .0001.
51,809 | Residuals correlated positively with response variable strongly in linear regression
1) Residuals do correlate positively with the observed values in many, many cases. Think of it this way - a very large positive error ("error" being the "true residual", to misuse the language) means that the corresponding observation is, all other things being equal, likely to be very large in the positive direction. A very large negative error means that the corresponding observation is likely to be very large in the negative direction. If the $R^2$ of the regression is not large, then the variability of the errors will be the dominant effect on the variability of the target variable, and you will see this effect in your plots and correlations.
For example, consider the true model $y_i = a + x_i + e_i$, which we fit as $y_i = a + bx_i + e_i$ (correct when $b = 1$). Here's the result of a regression with 100 observations:
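# Simulate the true model y = 1 + x + e with standard-normal noise and predictor,
# regress y on x, and plot the residuals against the observed response.
# (No seed is set, so re-running reproduces the pattern but not the exact numbers.)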
e <- rnorm(100)
x <- rnorm(100)
y <- 1 + x + e
foo <- lm(y~x)
plot(residuals(foo)~y, xlab="y", ylab="Residuals")
> summary(foo)
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-3.3292 -0.8280 -0.0448 0.8213 2.9450
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.8498 0.1288 6.600 2.12e-09 ***
x 0.8929 0.1316 6.787 8.81e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.286 on 98 degrees of freedom
Multiple R-squared: 0.3197, Adjusted R-squared: 0.3128
F-statistic: 46.06 on 1 and 98 DF, p-value: 8.813e-10
Note that we achieved a fairly respectable (in some fields) $R^2$ of 0.32.
We can obscure this effect with a different model:
y <- 1 + 5*x + e
foo <- lm(y~x)
plot(residuals(foo)~y, xlab="y", ylab="Residuals")
which has an $R^2$ of 0.93 and the following residual plot:
Here the correlation between $y$ and the residuals is about 0.25, but it's a lot less obvious on the plot.
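This is no accident: in a least-squares fit with an intercept, the fitted values are uncorrelated with the residuals, which forces $\mathrm{corr}(y, \hat{e}) = \sqrt{1-R^2}$ (here $\sqrt{1-0.93} \approx 0.26$). A quick check on the second fit above:
cor(y, residuals(foo))            # about 0.26
sqrt(1 - summary(foo)$r.squared)  # the same value, by the identity above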
2) Residuals have correlation zero with the fitted values in a linear regression, by construction. Is your statement "... weakly correlated with fitted Y negatively" based solely upon looking at the plot, or did you actually calculate the correlation? If the former, appearances can be deceiving; if the latter, something is wrong - possibly you aren't looking at what you think you're looking at.
51,810 | Residuals correlated positively with response variable strongly in linear regression
Residuals almost always correlate with your observations as long as your regressors do not fully explain the true underlying data model. So the presence of a high correlation between $y$ and the residuals is evidence of noise/variation that is not captured by your explanatory variables.
This could have several reasons:
Your regressors are only weakly related to your target variable.
Even if they are strongly correlated, your observations might be very noisy, or the amount of noise in the system is high.
Your regression model is misspecified, e.g., non-linearity or non-Gaussianity in the error terms.
You have too few data points to efficiently identify the right relationships.
Since you will most likely not be able to increase your sample size (it is an outside restriction), try to normalize your regressors and target variable so that they more closely resemble normally distributed variables (log transforms for positive variables, for instance). In that case, OLS regression models can be more efficient at identifying the true underlying relationships.
Correlation among the regressors should not affect the in-sample residual correlation too much (only your parameter estimates will be skewed).
51,811 | Is there a limit to the SPSS random number generator?
I think SPSS, like most modern software, uses the Mersenne Twister. Its period is $2^{19937} - 1$, so you're pretty safe from this point of view.
Up to 623 successive outcomes are uncorrelated, so you can safely consider a few consecutive outcomes as independent (this would not be the case with a classical linear congruential generator).
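For comparison, R's default generator is also the Mersenne Twister; a quick check from an R session (sketch):
RNGkind()  # the first element is "Mersenne-Twister" on a default installation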
To summarize: modern random number generators perform well enough for all ordinary applications in statistics... don't worry.
51,812 | Is there a limit to the SPSS random number generator?
SPSS Statistics provides both the Mersenne Twister and, for compatibility, an older shift-congruential generator. By default, the older generator is used. Use SET RNG=MT or the Transform > Random Number Generators menu item to change this. The MT should give you all the numbers you need.
There is also a user-contributed Python function that fetches truly random (not pseudo-random) numbers generated from atmospheric noise. These are fetched from a website that has some rules about quantities, which you should read. The package is tr_rnd0.1.zip. It can be downloaded from the SPSS Community website in the Python Modules collection. Of course, this requires you to use Python programmability; the tools for that can also be downloaded from the Community site.
51,813 | Is there a limit to the SPSS random number generator?
I used the SPSS uniform function to create a random sample weekly for two years. Do not do this. It does NOT generate fresh random samples: the same dataset will generate the same random sample upon re-opening SPSS, and not all cases have the same probability of being selected (it depends on the sorting of your file).
My recommendation would be to use several methods of randomization sequentially, e.g., first randomize the sort order, then use the select-random-sample function.
51,814 | How can I test the quality of a RNG?
J.M. already mentioned the original Diehard battery of tests by George Marsaglia. As far as I know, that test set is no longer being maintained.
Robert Brown has been working for years on DieHarder, which is:
a GPL'ed reimplementation of the DieHard suite,
plus additional tests from the NIST suite,
plus development of new tests,
and you may find DieHarder useful.
51,815 | How can I test the quality of a RNG?
You're right: testing is empirical. It's all done in a standard hypothesis-testing framework. Different tests are applied to assess different alternative behaviors of RNGs. As always, the user is free to choose the level of confidence at which each test is conducted. This level determines the critical region of each test, which is the "switch-over" between a result deemed significant and one not considered significant.
In practice, the confidence level matters little, because most RNGs can generate such a long series of values that eventually any long-run departure from complete, independent, equidistributed randomness will be detectable with high confidence. (This is the main reason that proper application of test suites like Diehard requires you to generate a large number of bits at a minimum; as I recollect - it has been a while - early versions demanded 80 million bits.) Typically, an RNG will pass many of the tests thrown at it (otherwise it would never have seen the light of day) but will clearly blow a few of them. A minimal empirical check in this spirit is sketched below.
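Here is that sketch in R - a toy chi-square test of uniformity (real suites such as Dieharder apply many far more sensitive tests than this):
x <- runif(1e6)                                       # draws from the generator under test
bins <- table(cut(x, breaks = seq(0, 1, by = 0.01)))  # counts in 100 equal-width cells
chisq.test(bins)  # under a good generator, the p-value is itself ~ Uniform(0, 1)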
51,816 | Cross tabulation of two categorical variables: recommended techniques
I think you need to rework this question. It all depends on the problem/data which has generated the cross-tab.
51,817 | Cross tabulation of two categorical variables: recommended techniques
Arguably, the question is not very precise. Rather than enumerating all measures of association for $2\times 2$ tables, I shall concentrate on the way such measures may be constructed and on how to select the one that is most appropriate with respect to the hypotheses or constraints relevant to a cross-classification.
The very first questions to ask are: what does the table reflect (concordance, agreement, association between two attributes, etc.)? Do you seek an overall measure of association, or do you think one of the two variables plays a specific role (which would justify the search for an "oriented" association)? Do you consider either or both of the margins fixed (row and/or column totals)? All of this impacts the method to choose and the way to interpret the results.
The $2\times 2$ case
Two-by-two tables are often treated separately from $I\times J$ tables because we often consider that the variables play a symmetric role in this particular case. Obviously, this is not always so: the cross-classification of exposure and disease, as commonly found in epidemiological studies, is an example where both variables play a distinct role, which may lend itself to more than a simple interpretation in terms of association. Another is the $2\times 2$ table constructed to study the screening properties of a given diagnostic instrument: although the odds ratio (compared to, e.g., the relative risk) keeps its nice properties, we may be interested in positive/negative predictive values or specificity/sensitivity, which means working with other quantities of interest. Hence the need to specify whether the problem at hand involves two variables acting in a purely symmetrical way or not, because this influences how we interpret the results and how we derive a useful measure of association, agreement, or discrimination.
For the sake of clarity, I will consider that the data (counts) are arranged in a $2\times 2$ table with cell probabilities $p_{ij}$, row margins $p_{i\cdot}$, and column margins $p_{\cdot j}$.
Basically, measures of association for $2\times 2$ tables can be grouped into two classes: (a) those relying on (a function of) the cross-product ratio, and (b) those based on the product-moment (Pearson) correlation, or a function thereof.
The cross-product ratio, mostly known as the odds ratio, is simply $\alpha=p_{11}p_{22}/p_{12}p_{21}$. It is invariant under row and column interchange, and under transformations of the margins that preserve $\sum_{i,j}p_{ij}=1$. In epidemiology, we usually think of it as a measure of association where rows (or columns) are fixed: $p_{11}/p_{12}$ is then the odds of being in the first column (e.g., diseased) conditional on being in the first row (e.g., exposed), and likewise $p_{21}/p_{22}$ is the odds for the second row; in other words,
$$
\alpha=\frac{p_{11}/p_{12}}{p_{21}/p_{22}}.
$$
Yule's $Q=(\alpha-1)/(\alpha+1)$ falls into the former class, (a). Yule also proposed a measure of "colligation", $Y=(\sqrt{\alpha}-1)/(\sqrt{\alpha}+1)$. Yule's $Q$ can be interpreted as the difference between the conditional probabilities of like and unlike "orders" for two individuals chosen at random; it is identical to Goodman and Kruskal's $\gamma$ measure of association for $I\times J$ tables.
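A small numeric sketch in R, with hypothetical counts (the numbers are made up for illustration; the same table is reused in the sketches below):
n11 <- 30; n12 <- 10; n21 <- 20; n22 <- 40   # cell counts, rows x columns
alpha <- (n11 * n22) / (n12 * n21)           # odds ratio (cross-product ratio): 6
Q <- (alpha - 1) / (alpha + 1)               # Yule's Q: about 0.71
Y <- (sqrt(alpha) - 1) / (sqrt(alpha) + 1)   # Yule's colligation Y: about 0.42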
For (b), we can derive a correlation coefficient for a $2\times 2$ table by assigning scores to the two variables (0 and 1 for the first and second row/column, respectively). The coefficient $\rho$ is then defined as the covariance divided by the square root of the product of the variances:
$$
\rho=\frac{p_{22}-p_{2\cdot}p_{\cdot 2}}{\sqrt{p_{1\cdot}p_{2\cdot}p_{\cdot 1}p_{\cdot 2}}},
$$
which is equivalent to putting $p_{11}p_{22}-p_{21}p_{12}$ in the numerator. Plugging in the observed counts, Pearson's $r$ is the MLE of $\rho$ under a multinomial sampling model.
It is invariant under row and column interchange, and under positive linear transformations.
It can be shown (Yule, 1912) that $\rho$ is identical to Yule's $Y$ if we standardize our table such that row and column margins sum to 1/2, i.e. $p_{11}^*=p_{22}^*=0.5\left(\sqrt{\alpha}/(\sqrt{\alpha}+1)\right)$ and $p_{12}^*=p_{21}^*=0.5\left(1/(\sqrt{\alpha}+1)\right)$. By doing this, we remove the information coming from the margins, such that $Y=2(p_{11}^*-p_{12}^*)$.
Correlation-based measures are connected to the usual Pearson's chi-square statistic, since
$$
\Phi^2=\sum_{i=1}^2\sum_{j=1}^2\frac{(p_{ij}-p_{i\cdot}p_{\cdot j})^2}{p_{i\cdot}p_{\cdot j}},
$$
that is,
$$
\Phi^2=\frac{(p_{11}p_{22}-p_{21}p_{12})^2}{p_{1\cdot}p_{2\cdot}p_{\cdot 1}p_{\cdot 2}}=\rho^2.
$$
In a $2\times 2$ table, we thus have $r^2=\chi^2/N$. Pearson also proposed $\sqrt{\rho^2/(1+\rho^2)}$ as a measure of association, which he called the coefficient of mean square contingency.
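A quick numerical check of $r^2=\chi^2/N$ in R, rebuilding the 0/1 scores from the hypothetical counts used above:
xs <- rep(c(0, 0, 1, 1), times = c(30, 10, 20, 40))  # row score for each observation
ys <- rep(c(0, 1, 0, 1), times = c(30, 10, 20, 40))  # column score for each observation
cor(xs, ys)^2                                                      # about 0.167
chisq.test(table(xs, ys), correct = FALSE)$statistic / length(xs)  # the same value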
As to how to choose the correct measure (a vs. b), it clearly depends on whether we want the measure to be sensitive to the marginal totals (in which case $\rho$ cannot take its full range of possible values in $[-1;1]$), and on whether we consider that we observe full association even when one of the four cells is zero (in which case $\rho$ cannot take the value $+1$ or $-1$ if only one of the cells is zero, which is not the case for Yule's $Q$).
Of note, correlation-based measures are the better choice if they are to be used in a correlation matrix (e.g., for factor analysis), because we cannot guarantee that a matrix composed of Yule's $Q$ coefficients will be positive definite.
The $I\times J$ case
As in the $2\times 2$ case, we can derive measures of association based on different quantities. Measures based on chi-square include the following (a short R sketch follows the list):
Pearson's contingency coefficient $P=\sqrt{\Phi^2/(\Phi^2+1)}$, based on $\Phi^2$ (see above), which overcomes the fact that $\Phi^2$ no longer lies in $[0;1]$ when $I$ or $J>2$;
Tschuprow's $T=\left(\Phi^2/\sqrt{(I-1)(J-1)}\right)^{1/2}$, which behaves better than $P$ in square tables (in that it can reach a maximum value of 1, for full or complete association);
Cramer's $V$ is another derivation, with $V=\left(\Phi^2/\min(I-1,J-1)\right)^{1/2}$ (we have $V\geq T$ for all $I,J>2$).
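A sketch of these three coefficients in base R, for the hypothetical counts used earlier (the vcd package's assocstats() computes similar quantities):
tab <- matrix(c(30, 10, 20, 40), nrow = 2, byrow = TRUE)  # any I x J table works
Phi2 <- unname(chisq.test(tab, correct = FALSE)$statistic) / sum(tab)
P_coef <- sqrt(Phi2 / (Phi2 + 1))                               # Pearson's P
T_coef <- sqrt(Phi2 / sqrt((nrow(tab) - 1) * (ncol(tab) - 1)))  # Tschuprow's T
V_coef <- sqrt(Phi2 / min(nrow(tab) - 1, ncol(tab) - 1))        # Cramer's V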
These are all measures of association in which neither variable plays a specific role. If a $\chi^2$ test is significant, it is more interesting to look at how the expected counts depart from the observed counts (i.e., to look at the Pearson residuals) in each $(i,j)$ cell, or to use something like a mosaic plot.
Goodman and Kruskal (1954) also proposed a predictive measure of association between rows and columns - more specifically, a measure of the proportional reduction in error when predicting a column category with the row category known, as opposed to unknown. This is called $\lambda_{C|R}$, and its MLE is
$$
\hat\lambda_{C|R}=\frac{\sum_{i=1}^Ix_{im}-x_{\cdot m}}{N-x_{\cdot m}}
$$
where $x_{im}$ is the maximum count in the $i$th row and $x_{\cdot m}$ is the maximum column total. This measure is interesting because it has a nicer interpretation than $\chi^2$-based measures, but it also has some drawbacks: under statistical independence, for instance, $\lambda_{C|R}$ is not necessarily zero.
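A minimal R sketch of this MLE (the function name is mine):
lambda_CR <- function(tab) {
  # proportional reduction in error when predicting the column from the row
  (sum(apply(tab, 1, max)) - max(colSums(tab))) / (sum(tab) - max(colSums(tab)))
}
lambda_CR(matrix(c(30, 10, 20, 40), nrow = 2, byrow = TRUE))  # 0.4 for the table above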
A measure of the proportion of explained variance (derived from Gini's total variation) may be obtained from the total sum of squares (SS) in an $I\times J$ table,
$$
\text{TSS}=\frac{N}{2}-\frac{1}{2N}\sum_{i=1}^Ix_{i\cdot}^2,
$$
which can be partitioned into within- and between-group SS. Of interest here is the variance explained by the different categories (BSS) divided by the total variance (TSS). As in the ANOVA framework, we have BSS = TSS - WSS, where
$$
\text{WSS}=\frac{N}{2}-\frac{1}{2}\sum_{j=1}^J\frac{1}{x_{\cdot j}}\sum_{i=1}^Ix_{ij}^2,
$$
so that we can derive BSS/TSS as
$$
\hat\tau_{R|C}=\frac{\sum_j\frac{1}{x_{\cdot j}}\sum_i x_{ij}^2-\frac{1}{N}\sum_ix_{i\cdot}^2}{N-\frac{1}{N}\sum_ix_{i\cdot}^2}.
$$
This measure can be interpreted as "the relative decrease in the proportion of incorrect predictions when we go from predicting the row category based only on the row marginal probabilities to predicting the row category based on the conditional proportions $p_{ij}/p_{\cdot j}$" (Bishop et al., 2007, p. 391).
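And a matching sketch of $\hat\tau_{R|C}$ in R (again, the function name is mine):
tau_RC <- function(tab) {
  # relative reduction in Gini prediction error for the row categories
  N <- sum(tab)
  bss <- sum(sweep(tab^2, 2, colSums(tab), "/")) - sum(rowSums(tab)^2) / N
  bss / (N - sum(rowSums(tab)^2) / N)
}
tau_RC(matrix(c(30, 10, 20, 40), nrow = 2, byrow = TRUE))  # about 0.17 for the table above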
Finally, measures based on cross-product ratios are also available, as are measures of agreement for ordinal variables, but I realize now that I need to stop (and thank the reader who reached the end of this overview).
A thorough overview of measures of association may be found in Bishop et al. (2007), from which I drew most of the above discussion, and of course in Agresti (2002), for which Laura Thompson wrote a complete R adaptation, the R (and S-PLUS) Manual to Accompany Agresti's Categorical Data Analysis.
References
Agresti, A. (2002). Categorical Data Analysis. Wiley. Companion website
Bishop, Y.M., Fienberg, S.E., and Holland, P.W. (2007). Discrete Multivariate Analysis. Springer.
Goodman, L.A. and Kruskal, W.H. (1954). Measures of association for cross classifications. JASA, 49, 732-764.
Yule, G.U. (1912). On the methods of measuring association between two attributes. Journal of the Royal Statistical Society, 75, 579-642.
51,818 | Cross tabulation of two categorical variables: recommended techniques
I would use Fisher's exact test, even for large N. I wouldn't know why not: any performance argument predates today's fast computers.
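For instance, in R (with hypothetical counts):
fisher.test(matrix(c(30, 10, 20, 40), nrow = 2))  # exact p-value for a 2 x 2 table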
51,819 | Cross tabulation of two categorical variables: recommended techniques
I must agree: there is no single best analysis!
Not just in cross tabulations or analysis of categorical data, but in any data analysis... and thank god for that!
If there were just a single best way to address these analyses, well, many of us would not have a job to start with... not to mention the loss of the thrill of the hunt!
The joy of analysis is the unknown, the search for answers and evidence, and how one question leads to another... that is what I love about statistics!
So back to categorical data analysis... it really depends on what you're doing. Are you looking to find how different variables affect each other? In drug tests, for example, we may look at treatment vs. placebo crossed with disease and no disease... the question here is whether the treatment reduces disease... a chi-square test usually does well here (given a good sample size).
Another context I had today was looking at missing-value trends... I wanted to find out whether missing values in one categorical variable relate to another... in some cases I knew the result should be missing, and yet there were observations that had values... a completely different context from the drug test!
not just in cross tabulations or analysis of categorical data but in any data analysis... and thank god for that!
if there was just a single best way t | Cross tabulation of two categorical variables: recommended techniques
I must agree.. there is no single best analysis!
not just in cross tabulations or analysis of categorical data but in any data analysis... and thank god for that!
if there was just a single best way to address these analyses well many of us would not have a job to start with... not to mention the loss of the thrill of the hunt!
the joy of analysis is the unknown and the search for answers and evidence and how one question leads to another... that is what i love about statistics!
So back to the categorical data analysis... it really depends on what your doing. Are you looking to find how different variables affect each other as in drug tests for example we may look at treatment vs placebo crossed with disease and no disease... the question here is does treatment reduce disease.... chi square usually does well here (given a good sample size).
Another context ihad today was looking at missing value trends... i was looking to find if missing values in one categorical variable relate to another... in some cases i knew the result should be missing and yet there were observations that had values... a completely different context to the drug test! | Cross tabulation of two categorical variables: recommended techniques
I must agree.. there is no single best analysis!
not just in cross tabulations or analysis of categorical data but in any data analysis... and thank god for that!
if there was just a single best way t |
51,820 | Where can I find useful R tutorials with various implementations? [duplicate]
The Quick-R site is basic, but quite nice for a start: http://www.statmethods.net/index.html
51,821 | Where can I find useful R tutorials with various implementations? [duplicate] | R bloggers has been steadily supplying me with a lot of good pragmatic content.
From the author:
R-Bloggers.com is a central hub (i.e., a blog aggregator) of content
collected from bloggers who write about R (in English).
The site will help R bloggers and users to connect and follow
the “R blogosphere”.
51,822 | Where can I find useful R tutorials with various implementations? [duplicate] | Another great resource is the LearnR blog, which went through an extensive study of visualizations with lattice and ggplot2.
51,823 | Where can I find useful R tutorials with various implementations? [duplicate] | R is designed around ideas such as "reproducible research" and "trustworthy software", as John Chambers says in his excellent book "Software for Data Analysis: Programming with R".
One of the best ways to learn R is to look at the wealth of source code that is available on CRAN (with 2461 packages and counting). Simply install.packages(), load the library(), and start browsing the code.
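A minimal sketch of that workflow (the package chosen here is an arbitrary example of mine):
install.packages("MASS")
library(MASS)
rlm                          # printing a function at the console shows its R source
getAnywhere("rlm.default")   # also locates methods and unexported helpers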
51,824 | Where can I find useful R tutorials with various implementations? [duplicate] | I found this rather helpful: http://cran.r-project.org/doc/contrib/Verzani-SimpleR.pdf
51,825 | What test does summary() perform on a glm() model using a Gamma distribution in r? | But a Wald test is a parametric test which assumes a normal distribution. Is R perhaps performing a Wald Log-Linear Chi-Square Test instead of a normal Wald test in these cases?
The coefficients are maximum likelihood estimates, which are asymptotically normal. The resulting test is indeed a Wald-type test (more precisely a t-test, as noted by EdM).
There is a difference between the sampling distribution of the coefficients and the likelihood you're using. Let me demonstrate. Here is some code to generate data from a gamma distribution. The shape and scale of the gamma distribution depend on some variable group.
library(tidyverse)
set.seed(0)
phi <- 10                 # dispersion: Var(y) = phi under this parameterization
group <- rep(0:1, 500)    # binary covariate
mu <- exp(1.5 + group)    # log-linear mean
y <- rgamma(1000, shape = mu^2/phi, rate = mu/phi)  # so E[y] = mu
tibble(group, y) %>%
  ggplot(aes(y, fill = factor(group))) +
  geom_histogram(alpha = 0.5, position = 'identity')
What I'm going to do is create these data 1000 times, run a regression on each replicate, and look at the resulting distribution of the coefficients. This is the sampling distribution of the betas (a sketch of the loop is below). Stratified by the type of coefficient (intercept or group), the estimates are roughly symmetric and bell shaped.
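A minimal sketch of that simulation (my reconstruction, reusing phi = 10 and the coefficients above; map_dfr() ships with the tidyverse loaded earlier):
sim_coefs <- map_dfr(1:1000, function(i) {
  group <- rep(0:1, 500)
  mu <- exp(1.5 + group)
  y <- rgamma(1000, shape = mu^2/10, rate = mu/10)
  fit <- glm(y ~ group, family = Gamma(link = "log"))
  tibble(term = names(coef(fit)), estimate = coef(fit))
})
ggplot(sim_coefs, aes(estimate)) +
  geom_histogram(bins = 40) +
  facet_wrap(~ term, scales = "free")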
There is much more theory to demonstrate their asymptotic normality, but perhaps this suffices to convince you.
51,826 | What test does summary() perform on a glm() model using a Gamma distribution in r? | Demetri Pananos makes the critical point about regression models (+1): the coefficient estimates are taken to have underlying multivariate normal distributions, at least in the asymptotic limit of large numbers of observations. That does not necessarily require a normal distribution of data, as the reviewer evidently thinks. In ordinary linear regression that doesn't necessarily even require a normal distribution of residuals, as the law of large numbers can lead to a normal distribution of coefficient estimates with a large enough data set.
The appropriate test to perform on the coefficient estimates depends, however, on whether the model is based on an assumed variance or if the model estimates the variance from the data. In ordinary linear regression you estimate both a mean value (as a function of covariates) and an error variance. In that situation you take the extra uncertainty from making dual estimates from the data into account by using a t-test.
In a generalized linear model (GLM) you could have either situation, depending on the family. The binomial and Poisson families, for example, have fixed associations between mean values and variances. If the choice of family is correct, there is no extra estimate of variance to make once the mean value has been estimated. As there is no independent estimate of variance, the correct test is a z-test for the probability that the normally-distributed coefficient estimate differs from 0. For a single coefficient that's equivalent to a single-coefficient Wald test.*
In the gamma family, however, the model fits both the coefficient estimates and a "dispersion" that defines how much the variance increases with the mean value. That's the same situation as in ordinary linear regression; the separate estimate of dispersion calls for a t-test with degrees of freedom based on the data size, instead of a z-test. The code for summary.glm makes that clear, as shown in the following snippet for the situation where est.disp=TRUE as for your gamma GLM:
>summary.glm
## much code omitted
pvalue <- 2 * pt(-abs(tvalue), df.r)
coef.table <- cbind(coef.p, s.err, tvalue, pvalue)
dimnames(coef.table) <- list(names(coef.p), c(dn,
    "t value", "Pr(>|t|)"))
## much more code omitted
So the simplest way to address the reviewer's concern in your case is to apologize for your error and state that these are actually t-tests on the coefficient estimates. I suspect that if you had called them t-tests to start with there wouldn't have been a complaint, even though t-tests and z-tests and Wald tests are all based on underlying normal distributions of coefficient estimates (and not of "data").
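A quick way to see this contrast in practice (my own sketch on simulated data, not from the original answer): the Poisson fit reports z values while the gamma fit reports t values.
set.seed(1)
x <- rnorm(100)
y_pois <- rpois(100, lambda = exp(0.5 + 0.3 * x))
y_gam  <- rgamma(100, shape = 2, rate = 2 / exp(0.5 + 0.3 * x))
summary(glm(y_pois ~ x, family = poisson))              # coefficient table shows "z value"
summary(glm(y_gam  ~ x, family = Gamma(link = "log")))  # coefficient table shows "t value"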
*The general form of the Wald test is particularly important when there are multiple parameters to test together. For a single parameter, the Wald statistic is just the square of the z statistic and p-values are identical.
51,827 | Can I use Pearson correlation for discrete variables? [closed] | Sure!
set.seed(2022)
N <- 100
x1 <- rpois(N, 5) # Poisson(5)
x2 <- rpois(N, 7) # Poisson(7)
cor(x1, x2)
At no point does the Pearson correlation make distributional assumptions beyond the existence of the covariance and the variances.
51,828 | Can I use Pearson correlation for discrete variables? [closed] | The correlation coefficient, sure, if you're interested in measuring linear correlation.
If you want to test it, maybe -- you might sometimes need to consider using something else in place of the usual test, though.
The usual test is typically pretty level-robust, but for example with count data (and often with other forms of discrete data) you tend to have (a) heteroskedasticity, related to the level of the mean; (b) nonlinear relationships; and (c) changing distribution shape as the mean changes. In particular, (a) and (b) might lead you to consider other tests (whether using the same test statistic and a different approach to calculating p-values, or a slightly modified statistic, or even asking somewhat different questions of the data); a sketch of one rank-based alternative follows.
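For instance, a rank-based check alongside the ordinary correlation (my own sketch with simulated counts):
set.seed(3)
x <- rpois(100, 4)
y <- rpois(100, lambda = exp(0.2 * x))
cor(x, y)                                            # linear correlation
cor.test(x, y, method = "spearman", exact = FALSE)   # monotone association, robust to nonlinearity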
51,829 | Can I use Pearson correlation for discrete variables? [closed] | Yes, of course. Discrete variables are numerical data on a ratio scale, not categorical data, so you can use the Pearson correlation coefficient to measure the relationship between the two variables.
51,830 | Can I use Pearson correlation for discrete variables? [closed] | In the case of binary variables, this is even given a special name: the Matthews Correlation Coefficient (MCC), also known as the "Phi coefficient". MCC is simply the Pearson correlation of two binary variables, as the quick check below illustrates.
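A quick numeric check (my addition) that cor() applied to two 0/1 vectors is exactly this quantity:
set.seed(5)
a <- rbinom(200, 1, 0.4)
b <- rbinom(200, 1, plogis(-0.5 + a))
cor(a, b)   # the phi coefficient / MCC for this pair of binary variables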
51,831 | Can I use Pearson correlation for discrete variables? [closed] | It depends on what you mean by "discrete". You can use Pearson's r for a discrete numeric variable. If the data are on an ordinal scale, you should use Spearman's correlation. If the data are on a nominal scale, then Pearson's and Spearman's coefficients are not valid, even if you code the categories as numbers.
51,832 | Introduction to Statistical Learning Eq. 4.32 | It is an issue of expanding and tidying up. You have for example
$(x-\mu_k)^T\Sigma^{-1}(x-\mu_k) = x^T\Sigma^{-1}x - \mu_k^T\Sigma^{-1}x-x^T\Sigma^{-1}\mu_k+\mu_k^T\Sigma^{-1}\mu_k$ and
$\mu_k^T\Sigma^{-1}x=x^T\Sigma^{-1}\mu_k$ and $\mu_k^T\Sigma^{-1}\mu_K=\mu_K^T\Sigma^{-1}\mu_k$ and
$(\mu_k+\mu_K)^T\Sigma^{-1}(\mu_k-\mu_K) = \mu_k^T\Sigma^{-1}\mu_k -\mu_K^T\Sigma^{-1}\mu_K$
so
$\log\left(\frac{\pi_k}{\pi_K}\right) -\frac12(x-\mu_k)^T\Sigma^{-1}(x-\mu_k) +\frac12(x-\mu_K)^T\Sigma^{-1}(x-\mu_K)$
$= \log\left(\frac{\pi_k}{\pi_K}\right) -\frac12 x^T\Sigma^{-1}x+ x^T\Sigma^{-1}\mu_k- \frac12\mu_k^T\Sigma^{-1}\mu_k +\frac12 x^T\Sigma^{-1}x- x^T\Sigma^{-1}\mu_K+ \frac12\mu_K^T\Sigma^{-1}\mu_K$
$= \log\left(\frac{\pi_k}{\pi_K}\right) - \frac12(\mu_k^T\Sigma^{-1}\mu_k - \mu_K^T\Sigma^{-1}\mu_K) + x^T\Sigma^{-1}\mu_k- x^T\Sigma^{-1}\mu_K$
$= \log\left(\frac{\pi_k}{\pi_K}\right) - \frac12(\mu_k+\mu_K)^T\Sigma^{-1}(\mu_k-\mu_K) + x^T\Sigma^{-1}(\mu_k- \mu_K)$, using the identity above.
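A numeric spot-check of that simplification in R (my addition; the $\log(\pi_k/\pi_K)$ term is omitted from both sides since it appears unchanged in each):
set.seed(7)
p <- 3
Sigma <- crossprod(matrix(rnorm(p^2), p))   # a positive-definite covariance
Si <- solve(Sigma)
mk <- rnorm(p); mK <- rnorm(p); x <- rnorm(p)
lhs <- -0.5 * t(x - mk) %*% Si %*% (x - mk) + 0.5 * t(x - mK) %*% Si %*% (x - mK)
rhs <- t(x) %*% Si %*% (mk - mK) - 0.5 * t(mk + mK) %*% Si %*% (mk - mK)
all.equal(c(lhs), c(rhs))                   # TRUE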
51,833 | Introduction to Statistical Learning Eq. 4.32 | Another way would be adding some terms to make the first multiplicands the same, and grouping (subtracting the newly added terms outside the parentheses as well):
$$\begin{align}A&=-\frac{1}{2}(x-\mu_k-\overbrace{\mu_K}^{new})^T\Sigma^{-1}(x-\mu_k)+\frac{1}{2}(x-\mu_K-\overbrace{\mu_k}^{new})^T\Sigma^{-1}(x-\mu_K)\\&- \frac{1}{2}\mu_K^T\Sigma^{-1}(x-\mu_k)+\frac{1}{2}\mu_k^T\Sigma^{-1}(x-\mu_K)\rightarrow \text{subtract newly added terms}\\&=\frac{1}{2}(x-\mu_k-\mu_K)^T\Sigma^{-1}(\mu_k-\mu_K)+\frac{1}{2}x^T\Sigma^{-1}(\mu_k-\mu_K)\\&=x^T\Sigma^{-1}(\mu_k-\mu_K)-\frac{1}{2}(\mu_k+\mu_K)^T\Sigma^{-1}(\mu_k-\mu_K)\end{align}$$
Note that $x^T\Sigma^{-1}\mu=\mu^T\Sigma^{-1}x$ since the inverse of a covariance matrix is symmetric.
51,834 | Calculate E[X/Y] from E[XY] for two random variables with zero mean | You will have to know the full joint distribution of $X$ and $Y$ in order to calculate $$E[X/Y] = \int \frac{x}{y}\, p(x,y) \,dx\,dy.$$
Note that $E[X/Y]$ might not even be defined - this is the case for example when $X$ and $Y$ are normally distributed, and the ratio has a Cauchy distribution which has no mean.
See also Ratio distribution.
51,835 | Calculate E[X/Y] from E[XY] for two random variables with zero mean | Intuitively, take the singular density in $\mathbb{R}^2$ that is only nonzero along some line $X = a Y$, $var(Y) = b$, $Y$ always nonzero, and that has $E[X] = E[Y] = 0$, as you required. Then:
$$
E\left[\frac{X}{Y}\right] = E[a] = a
$$
and
$$
E[XY] = a\,E[Y^2] = a \; var(Y) = ab.
$$
So you cannot compute $E\left[\frac{X}{Y}\right]$ from the covariance alone.
I don't think there is any easily obtainable information which, if you availed yourself of it in addition to $E[XY]$, would give you $E[X/Y]$. But you can of course always construct some (silly) additional information satisfying your request, e.g. knowledge of $E[X/Y - XY]$ (just kidding).
51,836 | Mean is not a sufficient statistic for the normal distribution when variance is not known? | $\bar X$ is not a sufficient statistic because it does not contain all the information about $(\mu,\sigma^2)$, which is what it would mean for it to be sufficient.
However, $\bar X$ does contain all the information about $\mu$ in the sample, whether or not $\sigma^2$ is known. For example, $\bar X$ attains the Cramér-Rao bound. Similarly, if $\mu$ is not known, $s^2$ contains all the information about $\sigma^2$ (though not if $\mu$ is known, since $(\mu-\bar X)^2$ has information about $\sigma^2$). Having all the information about parts of a parameter is a more complicated property than sufficiency, though it has been studied (see e.g., Sprott 1975).
51,837 | Mean is not a sufficient statistic for the normal distribution when variance is not known? | There is a relationship between the sample range and the sample standard deviation. It is not as good as using the sufficient statistic, but not useless.
[The somewhat bogus 'rule' of thumb, mentioned in some elementary texts, that $S$ is well estimated by the range divided by 5 or 6, is not what I have in mind; for normal data, the appropriate divisor depends crucially on sample size.]
Simulation in R: For a sample of size $n = 10$ from a normal distribution, the sample range divided by $k = 3.164$ is approximately equal to the sample standard deviation.
set.seed(2021)
m = 10^6; n = 10; rng = s = numeric(m)
for(i in 1:m) {
  x = rnorm(n, 100, 10)                  # mean irrelevant here
  rng[i] = diff(range(x)); s[i] = sd(x)
}
k = mean(rng/s); k                       # average range-to-SD ratio
[1] 3.164182
Using a smaller number of points for the plot:
Range = rng[1:10000]; StDev = s[1:10000]
plot(StDev, Range, pch=".")
abline(a=0, b=k, col="green2", lwd=2)
51,838 | Show That $Cov(X,\frac{1}{X})\le0$ if $X$ Is Positive Random Variable? | Using the formula for covariance that you gave, you can reexpress the covariance as follows:
$$\begin{aligned}
\text{Cov}\left(X, \frac{1}{X}\right) &= E \left[ X\frac{1}{X}\right]-E[X]E\left[\frac{1}{X}\right] \\ &= 1 - E[X]E\left[\frac{1}{X}\right] \end{aligned}$$
Let $\varphi(Y) = \frac{1}{Y}$, which is a convex function for positive values of $y$ (because any line drawn between two points on the curve is above the curve for positive values). Jensen's inequality says that for convex functions,
$$\varphi(E[Y]) \le E[\varphi(Y)]$$
or, equivalently, that
$$\frac{1}{\varphi(E[Y])}E[\varphi(Y)] \ge 1$$
Writing $E[X]$ and $E\left[\frac{1}{X}\right]$ in terms of $\varphi(Y) = \frac{1}{Y}$, we can write
$$E[X] = \frac{1}{\varphi(E[X])} \\ E\left[\frac{1}{X}\right] = E[\varphi(X)]$$
so
$$E[X]E\left[\frac{1}{X}\right] = \frac{1}{\varphi(E[X])}E[\varphi(X)] \ge 1 $$
$1$ minus something greater than or equal to $1$ is less than or equal to $0$, which is exactly the claim; a quick numerical sanity check follows.
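A quick numerical sanity check in R (my addition, not part of the proof):
set.seed(1)
x <- rexp(1e5)    # any positive random variable will do
cov(x, 1/x)       # negative, consistent with the result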
51,839 | Show That $Cov(X,\frac{1}{X})\le0$ if $X$ Is Positive Random Variable? | Via Jensen's inequality, you'll have $$\frac{1}{E[X]}\leq E\left[\frac{1}{X}\right]$$
because $f(x)=1/x$ is a convex function for positive $x$. If you substitute this into the covariance definition, you'll reach the desired result.
51,840 | Show That $Cov(X,\frac{1}{X})\le0$ if $X$ Is Positive Random Variable? | One proof is to note that, with $X_1$ and $X_2$ independent copies of $X$,
\begin{align}
\mathbf{Cov} (X, X^{-1}) &= \frac{1}{2}\mathbf{E} \left[ \left( X_1 - X_2 \right) \cdot \left( X_1^{-1} - X_2^{-1} \right) \right] \\
&= -\frac{1}{2}\mathbf{E} \left[ \frac{\left( X_1 - X_2 \right)^2}{X_1\cdot X_2} \right] \leq 0.
\end{align}
51,841 | Show That $Cov(X,\frac{1}{X})\le0$ if $X$ Is Positive Random Variable? | This result is not due to the positivity of $X,$ nor to the convexity of the function $x\to 1/x,$ nor to any particular property of this function apart from that it decreases. It would be less than satisfactory, then, to rely on the standard convexity inequalities such as Jensen's Inequality.
Consider this characterization of the covariance of random variables $(X,Y):$ If we let $(X_1,Y_1)$ and $(X_2,Y_2)$ be independent versions of $(X,Y),$ then
$$\operatorname{Cov}(X, Y) = E\left[(X_2-X_1)(Y_2-Y_1)\right]/2.\tag{*}$$
See https://stats.stackexchange.com/a/18200/919 for an intuitive explanation. For completeness, I include a proof below.
Let $X$ be any random variable and let $f$ be any (measurable) function on the support of $X,$ so that $Y=f(X)$ also is a random variable. Suppose $f$ does not increase. (The function $x\to 1/x$ on the positive reals has this property.) That is,
$$x_1\gt x_2\text{ implies } f(x_1) \le f(x_2)$$
unless there is no probability that $X$ will be close to one of $x_1$ or $x_2.$ This immediately implies $(X_2-X_1)(f(X_2)-f(X_1))\le 0$ almost surely, whence by $(*)$ the covariance of $X$ and $f(X)$ cannot be positive, QED.
Proof of $(*)$
Suppose $\operatorname{Cov}(X,Y)$ exists and is finite. Each of the following steps is almost trivial, beginning with linearity of expectation, remembering that $(X,Y),$ $(X_1,Y_1),$ and $(X_2,Y_2)$ all have the same distributions, and exploiting the independence of the latter two:
$$\begin{aligned}
&E\left[(X_2-X_1)(Y_2-Y_1)\right] &\\
&=E[X_2Y_2] - E[X_1Y_2] - E[X_2Y_1] + E[X_1Y_1]&\text{(linearity of E)}\\
&=E[X_2Y_2] - E[X_1]E[Y_2]\ +\ E[X_1Y_1] - E[X_2]E[Y_1]&\text{(independence)}\\
&=E[X_2Y_2] - E[X_2]E[Y_2]\ +\ E[X_1Y_1] - E[X_1]E[Y_1]&\text{(equal distributions)}\\
&=\operatorname{Cov}(X_2,Y_2)+\operatorname{Cov}(X_1,Y_1)&\text{(definition of Cov)}\\
&=2\operatorname{Cov}(X,Y)&\text{(equal distributions)},
\end{aligned}$$
QED.
51,842 | Expected value of a random variable by integrating $1-CDF$ when lower limit $a\neq 0$? | I would like to add one thing to the answer by @Thomas Lumley.
One can come up with the following:
$$\begin{align}
E[\max(X,a)]&=P(X\geq a)\cdot E[\max(X,a)|X\geq a]+P(X<a)\cdot E[\max(X,a)|X<a]\\
&=P(X\geq a)\cdot E[X|X\geq a]+P(X<a)\cdot a\\
&=P(X\geq a)\cdot E[X|X\geq a]+(1-P(X\geq a))\cdot a\\
&=P(X\geq a)\cdot (E[X|X\geq a]-a)+a
\end{align}$$
Combining this with the previous answer, we get:
$$\int_a^\infty(1-F(x))dx=E[\max(X,a)]-a=E[\max(X-a,0)]$$
Edit:
As @Ben added in his comment, it's worth noting that in the special case $a=0$, you recover the usual expected value rule for non-negative random variables:
$$\int_0^\infty(1-F(x))dx=E[\max(X,0)]=E[X]$$
51,843 | Expected value of a random variable by integrating $1-CDF$ when lower limit $a\neq 0$? | There's a connection to the conditional expectation. I'll write $S_X(x)=1-F_X(x)$ for the survival function. The conditional survival function conditional on $X\geq a$ is
$$S_{a}(x)= \frac{P(X>a \cap X>x)}{P(X>a)}$$
which is 1 for $x<a$ and $S_X(x)/S_X(a)$ for $x\geq a$. So the conditional expectation is
$$E[X|X\geq a]=\int_0^\infty S_a(x)\,dx = \int_0^a\,dx + \frac{1}{S_X(a)}\int_a^{\infty} S_X(x)\,dx.$$
Rearranging,
$$\int_a^{\infty} S_X(x)\,dx = (E[X|X\geq a]-a)P[X>a]$$
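A quick sanity check of that identity in R (my addition), using $X \sim \text{Exp}(1)$ and $a = 1$, where both sides equal $e^{-1} \approx 0.368$:
a <- 1
integrate(function(x) 1 - pexp(x), lower = a, upper = Inf)$value   # left-hand side
x <- rexp(1e6)
mean(x[x >= a] - a) * mean(x >= a)                                 # (E[X|X>=a] - a) * P(X>a)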
51,844 | Expected value of a random variable by integrating $1-CDF$ when lower limit $a\neq 0$? | For simplicity, consider the case where $X$ is continuous with density function $f_X$. The standard expectation rule for non-negative random variables is derived by using integration by parts to alter the standard moment integral. We will use the same technique here. Using integration by parts and L'Hôpital's rule we have:
$$\begin{align}
\int \limits_{a}^\infty (1-F_X(x)) \ dx
&= \Bigg[ x (1-F_X(x)) \Bigg]_{x=a}^{x \rightarrow \infty} + \int \limits_{a}^\infty x f_X(x) \ dx \\[6pt]
&= -a(1-F_X(a)) + \int \limits_{a}^\infty x f_X(x) \ dx \\[6pt]
&= -a + a F_X(a) + \int \limits_{a}^\infty x f_X(x) \ dx. \\[6pt]
\end{align}$$
We therefore have the general rule:
$$\begin{align}
\mathbb{E}[\max(X-a,0)]
&= \int \limits_\mathbb{R} \max(x-a,0) f_X(x) \ dx \\[6pt]
&= -a + \int \limits_\mathbb{R} \max(x,a) f_X(x) \ dx \\[6pt]
&= -a + a F_X(a) + \int \limits_{a}^\infty x f_X(x) \ dx \\[6pt]
&= \int \limits_{a}^\infty (1-F_X(x)) \ dx. \\[6pt]
\end{align}$$
In the special case where $X$ is a non-negative random variable and $a=0$ this equation reduces to the standard rule for the expectation of a non-negative random variable. Thus, the present formula generalises that rule.
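A numeric check of the general rule (my addition), with $X \sim \text{Exp}(1)$ and $a = 0.5$, where both quantities should be near $e^{-0.5} \approx 0.6065$:
a <- 0.5
integrate(function(x) 1 - pexp(x), lower = a, upper = Inf)$value  # the tail integral
mean(pmax(rexp(1e6) - a, 0))                                      # Monte Carlo E[max(X - a, 0)]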
51,845 | Why are the ends of the prediction interval wider in the regression? [duplicate] | When performing a linear regression, there are two types of uncertainty in the prediction.
First is the uncertainty in the estimate of the overall mean (i.e., the center of the fit). The second is the uncertainty in the estimate of the slope.
Thus, when you combine both uncertainties of the prediction, there is a spread between the high and low estimates. Further away from the center, the uncertainty in the slope becomes a larger and more noticeable factor, so the limits widen.
Hope this answers your question.
51,846 | Why are the ends of the prediction interval wider in the regression? [duplicate] | It's very easy to determine the prediction interval for the data.
$$\operatorname{Var}(y) = \operatorname{Var}(\beta_0 + \beta_1 x) + \operatorname{Var}(\varepsilon) = \sigma^2_{\beta_0} + \sigma^2_{\beta_1}x^2 + 2x \operatorname{Cov}(\beta_0, \beta_1)+ \sigma^2_{\varepsilon}$$
As you can see, this is a quadratic function in $x$, which means that for larger values of $x$ (well... larger as compared to the sample mean of $x$), the variance of the prediction will be larger; the short demo below shows this.
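A short illustration in R (my own sketch with made-up data): the interval from predict() is widest at points far from mean(x).
set.seed(42)
x <- runif(50, 0, 10)
y <- 2 + 0.5 * x + rnorm(50)
fit <- lm(y ~ x)
new <- data.frame(x = c(mean(x), 0, 10))
predict(fit, newdata = new, interval = "prediction")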
51,847 | Why are the ends of the prediction interval wider in the regression? [duplicate] | This reference, for example, clearly gives the formula of a prediction interval for a simple linear regression model, which contains the expression:
$\sqrt{1/n + (x_p - x_m)^2/\left((n-1)s_x^2\right)}$
So, as the prediction for the explanatory variable $x_p$ becomes more removed from its mean $x_m$, the interval widens.
In practice, I would be cautious about applying the prediction interval far from the mean, particularly for a point not included in the observed range of the data, as the structural equation itself may no longer be accurate/valid.
51,848 | What is sigma function in the YOLO object detector? | It is the logistic sigmoid function:
$$
\sigma(x) = \frac 1 {1+e^{-x}}
$$
It is bounded between 0 and 1, which is a desired property in their case (the original answer displayed the logistic curve from Wikipedia here); a two-line sketch of the shape follows.
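A two-line R sketch (my addition):
sigmoid <- function(x) 1 / (1 + exp(-x))
curve(sigmoid, from = -6, to = 6)   # squashes any real input into (0, 1)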
Regarding the exponential, see this answer.
51,849 | What is sigma function in the YOLO object detector? | In addition to the notation using the symbol $\sigma$, the caption to one image names this function the "sigmoid" function. From the paper,
Figure 3: Bounding boxes with dimension priors and location
prediction. We predict the width and height of the box as offsets
from cluster centroids. We predict the center coordinates of the
box relative to the location of filter application using a sigmoid
function.
The "sigmoid" function is one of many names for a certain function. This name is especially common in the neural networks literature; for some elaboration, see Does the function $e^x/(1+e^x)$ have a standard name? | What is sigma function in the YOLO object detector? | In addition to the notation using the symbol $\sigma$, the caption to one image names this function the "sigmoid" function. From the paper,
Figure 3: Bounding boxes with dimension priors and location | What is sigma function in the YOLO object detector?
In addition to the notation using the symbol $\sigma$, the caption to one image names this function the "sigmoid" function. From the paper,
Figure 3: Bounding boxes with dimension priors and location
prediction. We predict the width and height of the box as offsets
from cluster centroids. We predict the center coordinates of the
box relative to the location of filter application using a sigmoid
function.
The "sigmoid" function is one of many names for a certain function. This name is especially common in the neural networks literature; for some elaboration, see Does the function $e^x/(1+e^x)$ have a standard name? | What is sigma function in the YOLO object detector?
In addition to the notation using the symbol $\sigma$, the caption to one image names this function the "sigmoid" function. From the paper,
Figure 3: Bounding boxes with dimension priors and location |
51,850 | Is there any truth to the phrase "statistics mean nothing to the individual"? | I think "nothing" is too strong, but I imagine the statement is a pedagogical challenge meant to address one or more issues:
I. It may be addressing the reification of statistical models.
II. As you say, it may be addressing point summaries, which do not necessarily represent any individual. (In fact, the summary value may be impossible for any individual to achieve.)
III. It may be emphasizing that probabilistic statements have a context. If you say "there's a 95% mortality rate for this disease" you're making a statement that marginalizes out all characteristics of patients except that they're human. In reality, a particular disease may affect men or women more, will affect the old, the young, or the middle-aged more, will affect someone with a chronic disease or other pre-existing health issues more, will probably affect the never-exposed more than those who have previously been exposed (hence vaccines), will be more deadly if not treated or if not treated within an initial time window, ...
We're always leaving stuff out of predictions either because we don't have the information for the model we built or we don't have the information for the individual we're predicting on. In the best case, we lump all of the unknowns into an "error" term and if it's small enough and tame enough we ignore it. That's an ideal, though.
In light of that, saying "the disease kills 95% of humans who acquire it" is not the same as saying "the disease will kill you with 95% certainty".
We are only now getting to the place where medicine might be developed at the individual level and not the population level. The long list of side effects on medications is a testament to your slogan.
IV. It may be emphasizing that statistical machinery is not always necessary or useful. For example, if the question is "who is the tallest person in the class?" or "was our factory's production at an all-time high?", you don't need statistics. For this class or that factory, the data itself tells you and applying statistical machinery in that case is really more of a manipulation than a clarification. | Is there any truth to the phrase "statistics mean nothing to the individual"? | I think "nothing" is too strong, but I imagine the statement is a pedagogical challenge meant to address one or more issues:
I. It may be addressing the reification of statistical models.
II. As you s | Is there any truth to the phrase "statistics mean nothing to the individual"?
I think "nothing" is too strong, but I imagine the statement is a pedagogical challenge meant to address one or more issues:
I. It may be addressing the reification of statistical models.
II. As you say, it may be addressing point summaries, which do not necessarily represent any individual. (In fact, the summary value may be impossible for any individual to achieve.)
III. It may be emphasizing that probabilistic statements have a context. If you say "there's a 95% mortality rate for this disease" you're making a statement that marginalizes out all characteristics of patients except that they're human. In reality, a particular disease may affect men or women more, will affect the old, the young, or the middle-aged more, will affect someone with a chronic disease or other pre-existing health issues more, will probably affect the never-exposed more than those who have previously been exposed (hence vaccines), will be more deadly if not treated or if not treated within an initial time window, ...
We're always leaving stuff out of predictions either because we don't have the information for the model we built or we don't have the information for the individual we're predicting on. In the best case, we lump all of the unknowns into an "error" term and if it's small enough and tame enough we ignore it. That's an ideal, though.
In light of that, saying "the disease kills 95% of humans who acquire it" is not the same as saying "the disease will kill you with 95% certainty".
We are only now getting to the place where medicine might be developed at the individual level and not the population level. The long list of side effects on medications is a testament to your slogan.
IV. It may be emphasizing that statistical machinery is not always necessary or useful. For example, if the question is "who is the tallest person in the class?" or "was our factory's production at an all-time high?", you don't need statistics. For this class or that factory, the data itself tells you and applying statistical machinery in that case is really more of a manipulation than a clarification. | Is there any truth to the phrase "statistics mean nothing to the individual"?
I think "nothing" is too strong, but I imagine the statement is a pedagogical challenge meant to address one or more issues:
I. It may be addressing the reification of statistical models.
II. As you s |
51,851 | Is there any truth to the phrase "statistics mean nothing to the individual"? | I agree with the other answers, but I would like to add that in the New Causal Revolution, particularly with the highest level of reasoning - the counterfactual - it is possible to reason about individuals. That only works if you have an accurate model. The three-step process of abduction, action or intervention, and prediction all happen to individuals. For more information, see Causal Inference in Statistics: A Primer, by Pearl, Glymour, and Jewell, p. 96ff. | Is there any truth to the phrase "statistics mean nothing to the individual"? | I agree with the other answers, but I would like to add that in the New Causal Revolution, particularly with the highest level of reasoning - the counterfactual - it is possible to reason about indivi | Is there any truth to the phrase "statistics mean nothing to the individual"?
I agree with the other answers, but I would like to add that in the New Causal Revolution, particularly with the highest level of reasoning - the counterfactual - it is possible to reason about individuals. That only works if you have an accurate model. The three-step process of abduction, action or intervention, and prediction all happen to individuals. For more information, see Causal Inference in Statistics: A Primer, by Pearl, Glymour, and Jewell, p. 96ff. | Is there any truth to the phrase "statistics mean nothing to the individual"?
I agree with the other answers, but I would like to add that in the New Causal Revolution, particularly with the highest level of reasoning - the counterfactual - it is possible to reason about indivi |
51,852 | Is there any truth to the phrase "statistics mean nothing to the individual"? | No, it's a statement made by people who don't understand probability.
If you contract a disease with a 95% mortality rate, you can't say you will certainly die.
But you absolutely can say "I will probably die." You can go further and say "The probability of my dying is 0.95." That's an awfully specific and highly informative statement.
People latch on to the idea that you can't assess people/objects on an individual level based on population statistics because it very quickly leads to uncomfortable situations such as profiling. But the idea that statistics don't matter on an individual level is absolute hogwash.
You cannot make definitive statements, but you can certainly make probabilistic ones. | Is there any truth to the phrase "statistics mean nothing to the individual"? | No, it's a statement made by people who don't understand probability.
If you contract a disease with a 95% mortality rate, you can't say you will certainly die.
But you absolutely can say "I will prob | Is there any truth to the phrase "statistics mean nothing to the individual"?
No, it's a statement made by people who don't understand probability.
If you contract a disease with a 95% mortality rate, you can't say you will certainly die.
But you absolutely can say "I will probably die." You can go further and say "The probability of my dying is 0.95." That's an awfully specific and highly informative statement.
People latch on to the idea that you can't assess people/objects on an individual level based on population statistics because it very quickly leads to uncomfortable situations such as profiling. But the idea that statistics don't matter on an individual level is absolute hogwash.
You cannot make definitive statements, but you can certainly make probabilistic ones. | Is there any truth to the phrase "statistics mean nothing to the individual"?
No, it's a statement made by people who don't understand probability.
If you contract a disease with a 95% mortality rate, you can't say you will certainly die.
But you absolutely can say "I will prob |
51,853 | Is there any truth to the phrase "statistics mean nothing to the individual"? | It is true in general that statistical quantities pertaining to a single individual have more variance than those that average over a larger group subsuming that individual. This holds for any set of quantities that are not perfectly positively correlated. In this sense, it is reasonable to say that statistics predicts individual outcomes much less well than aggregate outcomes for larger groups. It is an exaggeration to say that statistics "means nothing" to the individual, but it does predict individual outcomes with less accuracy than aggregate outcomes over larger groups. | Is there any truth to the phrase "statistics mean nothing to the individual"? | It is true in general that statistical quantities pertaining to a single individual have more variance than those that average over a larger group subsuming that individual. This holds for any set of | Is there any truth to the phrase "statistics mean nothing to the individual"?
It is true in general that statistical quantities pertaining to a single individual have more variance than those that average over a larger group subsuming that individual. This holds for any set of quantities that are not perfectly positively correlated. In this sense, it is reasonable to say that statistics predicts individual outcomes much less well than aggregate outcomes for larger groups. It is an exaggeration to say that statistics "means nothing" to the individual, but it does predict individual outcomes with less accuracy than aggregate outcomes over larger groups. | Is there any truth to the phrase "statistics mean nothing to the individual"?
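To illustrate the claim with a quick R simulation (my sketch, not part of the original answer): the average over a group of independent individuals has much smaller variance than any single individual's value.
set.seed(1)
outcomes <- matrix(rnorm(10000 * 25), ncol = 25)   # 10,000 draws of 25 individuals
var(outcomes[, 1])        # variance of a single individual: about 1
var(rowMeans(outcomes))   # variance of the 25-person group average: about 1/25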
It is true in general that statistical quantities pertaining to a single individual have more variance than those that average over a larger group subsuming that individual. This holds for any set of |
51,854 | Is there any truth to the phrase "statistics mean nothing to the individual"? | Summary: we can derive statistics about individuals.
However, depending on the problem at hand, they may only tell us that we do not have sufficient information (e.g., because the correlation between our independent and dependent variable is too low) to draw meaningful conclusions.
Some sources claim that IQ is correlated with academic and/or professional success. Let's assume there is a correlation of 0.5 between IQ and University GPA, discovered from a study of a very large group of students.
[...]
Is there any truth to the phrase "statistics mean nothing to the individual"?
Any truth, yes: as you already suspected, some statistics such as the population mean do not necessarily say much about the individual.
However, there are statistics that relate to the individual.
In particular, the prediction interval gives a range into which we expect future individuals/observations to fall with the specified probability.
I found a data set with GPA and IQ recorded for 78 people (it has a correlation of 0.63) and below (left) you see a univariate linear model fitted to that data.
The black line gives the regression point estimates: this is our best guess as to what average GPA people with a given IQ have. The confidence interval in blue says how certain we are about these population averages. The more observations we have, the narrower the confidence interval gets. On the right, I calculated the same regression on a data set that is a triplication of the original data, and you can see the confidence interval gets substantially narrower.
We can also calculate what range of GPA we expect a new, unknown person to fall in (again with a specified probability). That is the prediction interval marked in red. Prediction intervals are always wider than confidence intervals because they take into account the uncertainty in the regression function plus the variation we observe between individuals.
When you compare the fake "large data" model on the right to the original data model on the left, you see hardly any difference in the prediction intervals: already for the original data model, the prediction interval is dominated by the variance between individuals.
Looking at your question about IQ 100: we expect people with IQ 100 to have on average a GPA of 6.5, and we're reasonably certain about this: the 95 % confidence interval is 6.1 - 7.0. We expect a given individual to have a GPA between 3.3 and 9.8 (again with 95 % probability). Unfortunately, the data set has no documentation to speak of, so I can only guess an interpretation. But this looks to me as if we'd expect an individual with IQ 100 to be anywhere between failing and very good.
So we may also say that this quantifies that we cannot really predict anything meaningful about an individual.
This lack of a useful result from the application/problem (predicting GPA) point of view is due to the correlation between IQ and GPA being too low.
R code to generate the plots:
library(openintro)   # provides the gpa_iq data set
library(ggplot2)
cor(gpa_iq$gpa, gpa_iq$iq)   # correlation between GPA and IQ in the original data
# fit the simple linear regression and set up a grid of IQ values to predict at
model <- lm (gpa ~ iq, data = gpa_iq)
predictions <- data.frame (iq = seq(70, 140))
# 95 % confidence interval for the mean GPA at each IQ
ci <- predict(model, predictions, interval = "confidence", level = 0.95)
colnames(ci) <- c("fit", "ci.lower", "ci.upper")
# 95 % prediction interval for an individual's GPA at each IQ
pi <- predict(model, predictions, interval = "prediction", level = 0.95)
colnames(pi) <- c("fit", "pi.lower", "pi.upper")
predictions <- cbind (predictions, ci, pi [, -1])
# left plot: data, fitted line, confidence band (blue), prediction band (red)
predictions |>
ggplot (aes(x = iq)) +
geom_point(data = gpa_iq, aes(y = gpa)) +
#stat_smooth(method = "lm") +
geom_line(aes (y = fit)) +
geom_ribbon(aes (ymin = ci.lower, ymax = ci.upper), alpha = 0.25, fill = "blue") +
geom_ribbon(aes (ymin = pi.lower, ymax = pi.upper), alpha = 0.1, col = "red", fill = "red")
# triplicate the data to mimic a larger sample with the same structure
gpa_iq2 <- rbind(gpa_iq, gpa_iq, gpa_iq)
cor(gpa_iq2$gpa, gpa_iq2$iq)
model2 <- lm (gpa ~ iq, data = gpa_iq2)
predictions2 <- data.frame (iq = seq(70, 140))
ci <- predict(model2, predictions2, interval = "confidence", level = 0.95)
colnames(ci) <- c("fit", "ci.lower", "ci.upper")
pi <- predict(model2, predictions2, interval = "prediction", level = 0.95)
colnames(pi) <- c("fit", "pi.lower", "pi.upper")
predictions2 <- cbind (predictions2, ci, pi [, -1])
# right plot: the confidence band narrows, the prediction band barely changes
predictions2 |>
ggplot (aes(x = iq)) +
geom_point(data = gpa_iq2, aes(y = gpa)) +
#stat_smooth(method = "lm") +
geom_line(aes (y = fit)) +
geom_ribbon(aes (ymin = ci.lower, ymax = ci.upper), alpha = 0.25, fill = "blue") +
geom_ribbon(aes (ymin = pi.lower, ymax = pi.upper), alpha = 0.1, col = "red", fill = "red")
However, depending on the problem at hand, they may only tell us that we do not have sufficient information (e.g., because the correlation between | Is there any truth to the phrase "statistics mean nothing to the individual"?
Summary: we can derive statistics about individuals.
However, depending on the problem at hand, they may only tell us that we do not have sufficient information (e.g., because the correlation between our independent and dependent variable is too low) to draw meaningful conclusions.
Some sources claim that IQ is correlated with academic and/or professional success. Let's assume there is a correlation of 0.5 between IQ and University GPA, discovered from a study of a very large group of students.
[...]
Is there any truth to the phrase "statistics mean nothing to the individual"?
Any truth, yes: as you already suspected, some statistics such as the population mean do not necessarily say much about the individual.
However, there are statistics that relate to the individual.
In particular, the prediction interval gives a range into which we expect future individuals/observations to fall with the specified probability.
I found a data set with GPA and IQ recorded for 78 people (it has a correlation of 0.63) and below (left) you see a univariate linear model fitted to that data.
The black line gives the regression point estimates: this is our best guess as to what average GPA people with a given IQ have. The confidence interval in blue says how certain we are about these population averages. The more observations we have, the narrower the confidence interval gets. On the right, I calculated the same regression on a data set that is a triplication of the original data, and you can see the confidence interval gets substantially narrower.
We can also calculate what range of GPA we expect a new, unknown person to fall in (again with a specified probability). That is the prediction interval marked in red. Prediction intervals are always wider than confidence intervals because they take into account the uncertainty in the regression function plus the variation we observe between individuals.
When you compare the fake "large data" model on the right to the original data model on the left, you see hardly any difference in the prediction intervals: already for the original data model, the prediction interval is dominated by the variance between individuals.
Looking at your question about IQ 100: we expect people with IQ 100 to have on average a GPA of 6.5, and we're reasonably certain about this: the 95 % confidence interval is 6.1 - 7.0. We expect a given individual to have a GPA between 3.3 and 9.8 (again with 95 % probability). Unfortunately, the data set has no documentation to speak of, so I can only guess an interpretation. But this looks to me as if we'd expect an individual with IQ 100 to be anywhere between failing and very good.
So we may also say that this quantifies that we cannot really predict anything meaningful about an individual.
This lack of a useful result from the application/problem (predicting GPA) point of view is due to the correlation between IQ and GPA being too low.
R code to generate the plots:
library(openintro)
library(ggplot2)
cor(gpa_iq$gpa, gpa_iq$iq)
model <- lm (gpa ~ iq, data = gpa_iq)
predictions <- data.frame (iq = seq(70, 140))
ci <- predict(model, predictions, interval = "confidence", level = 0.95)
colnames(ci) <- c("fit", "ci.lower", "ci.upper")
pi <- predict(model, predictions, interval = "prediction", level = 0.95)
colnames(pi) <- c("fit", "pi.lower", "pi.upper")
predictions <- cbind (predictions, ci, pi [, -1])
predictions |>
ggplot (aes(x = iq)) +
geom_point(data = gpa_iq, aes(y = gpa)) +
#stat_smooth(method = "lm") +
geom_line(aes (y = fit)) +
geom_ribbon(aes (ymin = ci.lower, ymax = ci.upper), alpha = 0.25, fill = "blue") +
geom_ribbon(aes (ymin = pi.lower, ymax = pi.upper), alpha = 0.1, col = "red", fill = "red")
gpa_iq2 <- rbind(gpa_iq, gpa_iq, gpa_iq)
cor(gpa_iq2$gpa, gpa_iq2$iq)
model2 <- lm (gpa ~ iq, data = gpa_iq2)
predictions2 <- data.frame (iq = seq(70, 140))
ci <- predict(model2, predictions2, interval = "confidence", level = 0.95)
colnames(ci) <- c("fit", "ci.lower", "ci.upper")
pi <- predict(model2, predictions2, interval = "prediction", level = 0.95)
colnames(pi) <- c("fit", "pi.lower", "pi.upper")
predictions2 <- cbind (predictions2, ci, pi [, -1])
predictions2 |>
ggplot (aes(x = iq)) +
geom_point(data = gpa_iq2, aes(y = gpa)) +
#stat_smooth(method = "lm") +
geom_line(aes (y = fit)) +
geom_ribbon(aes (ymin = ci.lower, ymax = ci.upper), alpha = 0.25, fill = "blue") +
geom_ribbon(aes (ymin = pi.lower, ymax = pi.upper), alpha = 0.1, col = "red", fill = "red") | Is there any truth to the phrase "statistics mean nothing to the individual"?
Summary: we can derive statistics about individuals.
However, depending on the problem at hand, they may only tell us that we do not have sufficient information (e.g., because the correlation between |
51,855 | Is there any truth to the phrase "statistics mean nothing to the individual"? | There is SOME truth to that phrase.
The problem occurs when people misapply statistics to an individual in a way that restricts the possibility of that individual not being in the described group...most often in racial profiling.
If you have a disease that consistently kills 95% of those who catch it and 100 people catch it, you would be correct in building 95 coffins...however, you still cannot answer the "yes or no" question, "Will John Smith be using one of those coffins?" Thus you should not put his name on it in advance!
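A quick R sketch of this point (my illustration, using the hypothetical numbers above): the aggregate count is very predictable, while each individual outcome remains a bare 0 or 1.
set.seed(1)
deaths <- rbinom(10000, size = 100, prob = 0.95)   # 10,000 hypothetical outbreaks of 100 cases
mean(deaths)                        # about 95: building 95 coffins is the right aggregate plan
rbinom(10, size = 1, prob = 0.95)   # but each John Smith is still either a 0 or a 1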
For those who might want to argue that there's a 95% chance that John Smith WILL be using a coffin, I ask, will you be putting 95% of his name on it, then? The name isn't a percentage. It's a yes or no situation, and this is where statistics breaks down, because we usually cannot address an individual with a percentage. You either do or you don't. | Is there any truth to the phrase "statistics mean nothing to the individual"? | There is SOME truth to that phrase.
The problem occurs when people misapply statistics to an individual in a way that restricts the possibility of that individual not being in the described group...mo | Is there any truth to the phrase "statistics mean nothing to the individual"?
There is SOME truth to that phrase.
The problem occurs when people misapply statistics to an individual in a way that restricts the possibility of that individual not being in the described group...most often in racial profiling.
If you have a disease that consistently kills 95% of those who catch it and 100 people catch it, you would be correct in building 95 coffins...however, you still cannot answer the "yes or no" question, "Will John Smith be using one of those coffins?" Thus you should not put his name on it in advance!
For those who might want to argue that there's a 95% chance that John Smith WILL be using a coffin, I ask, will you be putting 95% of his name on it, then? The name isn't a percentage. It's a yes or no situation, and this is where statistics breaks down, because we usually cannot address an individual with a percentage. You either do or you don't. | Is there any truth to the phrase "statistics mean nothing to the individual"?
There is SOME truth to that phrase.
The problem occurs when people misapply statistics to an individual in a way that restricts the possibility of that individual not being in the described group...mo |
51,856 | Is there any truth to the phrase "statistics mean nothing to the individual"? | Ironically, this statement is sort of meta and illustrates its own shortcomings.
A few people have mentioned the Ecological Fallacy, and this statement is arguably a way to summarize or describe an aspect of the fallacy. In ecology (or any field that uses statistics, which is all sciences and many of the humanities), you are in some way using statistics to summarize or create a shorthand for a more complex system. There is always some kind of trade-off in terms of reducing individuals to categories.
The more in-depth your statistics, the more you can understand the data you're starting with. For example, if you have a group of 100 students and half score 0% on a test and half score 100% on a test, the average would be 50%. However, if you take the statistics further, you can look at the standard deviation, a sort of (and I know I'm simplifying this idea) average distance from the average. This would tell you that the average is not very representative. You could also use a histogram to look at the data. This would show you very clearly that no one got the average score in this situation. It's not that statistics is "wrong" or "misleading", but if you only include the average, you're not telling the whole story.
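A minimal R sketch of that example (my addition):
scores <- c(rep(0, 50), rep(100, 50))   # half score 0%, half score 100%
mean(scores)    # 50, yet no student scored anywhere near 50
sd(scores)      # about 50.3, flagging that the mean is not representative
hist(scores)    # two spikes at 0 and 100, nothing in between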
At the same time, when you summarize a group or a set of data with statistics, you're always removing some sort of information to make it more digestible. Conversely, if you use statistics to try and guess at an individual, you will almost always get something wrong. It's very rare to find an individual within a data set that perfectly matches what the statistics for the whole group say they are likely to be. This second half of the ecological fallacy is what the statement refers to.
So the statement "statistics is meaningless to the individual" is sort of a caution that you can't characterize an individual based on statistics. | Is there any truth to the phrase "statistics mean nothing to the individual"? | Ironically, this statement is sort of meta and illustrates its own shortcomings.
A few people have mentioned the Ecological Fallacy and this statement arguably a way to summarize or describe an aspect | Is there any truth to the phrase "statistics mean nothing to the individual"?
Ironically, this statement is sort of meta and illustrates its own shortcomings.
A few people have mentioned the Ecological Fallacy, and this statement is arguably a way to summarize or describe an aspect of the fallacy. In ecology (or any field that uses statistics, which is all sciences and many of the humanities), you are in some way using statistics to summarize or create a shorthand for a more complex system. There is always some kind of trade-off in terms of reducing individuals to categories.
The more in-depth your statistics, the more you can understand the data you're starting with. For example, if you have a group of 100 students and half score 0% on a test and half score 100% on a test, the average would be 50%. However, if you take the statistics further, you can look at the standard deviation, a sort of (and I know I'm simplifying this idea) average distance from the average. This would tell you that the average is not very representative. You could also use a histogram to look at the data. This would show you very clearly that no one got the average score in this situation. It's not that statistics is "wrong" or "misleading", but if you only include the average, you're not telling the whole story.
At the same time, when you summarize a group or a set of data with statistics, you're always removing some sort of information to make it more digestible. Conversely, if you use statistics to try and guess at an individual, you will almost always get something wrong. It's very rare to find an individual within a data set that perfectly matches what the statistics for the whole group say they are likely to be. This second half of the ecological fallacy is what the statement refers to.
So the statement "statistics is meaningless to the individual" is sort of a caution that you can't characterize an individual based on statistics. | Is there any truth to the phrase "statistics mean nothing to the individual"?
Ironically, this statement is sort of meta and illustrates its own shortcomings.
A few people have mentioned the Ecological Fallacy and this statement arguably a way to summarize or describe an aspect |
51,857 | When is it valid to use race/ethnicity in causal inference? | Race and ethnicity are variables that cannot be "controlled" in experiments, since it is not possible for the researcher to assign or change this characteristic of the study participant.$^\dagger$ For this reason, causal inference relating to race and ethnicity cannot generally rely on randomised controlled trials, and must instead fall back on uncontrolled observational studies. As with other uncontrolled studies on any other topic, this comes with all the regular drawbacks and caveats on causal interpretation of results, including the possibility that there may be omitted "lurking variables" that affect analysis. As a general principle, causal inference from uncontrolled studies is not reliable, and tends to be reasonable only in cases where the predictor in question is shown to have a statistical relationship conditional on a wide array of covariates, and tends to retain its predictive ability under variations in covariates that are not themselves intermediate causes.
Many studies in the social sciences include race/ethnicity as covariates, and the goal is to filter out these variables to find some other causal or statistical relationship. There may be some studies where race/ethnicity is of direct interest as a predictor, and in this case the researcher needs to be careful to distinguish predictive effects from causal effects, as in any uncontrolled observational study. There is certainly no scientific problem with including race/ethnicity as variables in social science studies; the problems, if any, arise in regard to interpretation of results. There is a good discussion of the causal interpretation of race variables in VanderWeele and Robinson (2014).
For the most part, all of this is just a matter of applying general statistical principles to a particular set of variables. However, one issue that arises specifically in regard to causal inference regarding race and ethnicity is competing theories of whether any causality is direct (i.e., genetic/hereditary) or indirect (i.e., due to discrimination). This aspect of the problem has been discussed at length by the economist Thomas Sowell in a series of books discussing statistical disparities among racial groups (see esp., Sowell 1975, Sowell 2013, Sowell 2018). Sowell notes that historically, there was an excessive tendency to ascribe all racial disparities to genetic causes in the nineteenth and early twentieth centuries, and since the late twentieth century there is now an excessive tendency to ascribe all racial disparities to discrimination. Both of these constitute a failure to properly apply statistical reasoning relating to causality, and both tend to occur due to a conflation of correlation and cause. In any case, if you have not already read these works, they may give you a better understanding of the difficulties that arise in making causal inferences from statistical disparities among racial and ethnic groups.
It is difficult to answer your specific question without seeing a particular example of the kind of inference that concerns you. There are a wide variety of cases where social science researchers "use race for causal inference" and the validity would depend on the nature of the data and the resulting inference. (It is not even clear from that framing of the question whether race is the predictor of interest or just a covariate.)
$^\dagger$ Note that there are some randomised experiments where the appearance of race is controlled via some experimental mechanism. For example, many studies on ethnic discrimination in employment use randomised 'correspondence tests' where the researchers control (and randomise) the markers of race and ethnicity in submitted job applications (see e.g., Zschirnt and Ruedin 2015). | When is it valid to use race/ethnicity in causal inference? | Race and ethnicity are variables that cannot be "controlled" in experiments, since it is not possible for the researcher to assign or change this characteristic of the study participant.$^\dagger$ Fo | When is it valid to use race/ethnicity in causal inference?
Race and ethnicity are variables that cannot be "controlled" in experiments, since it is not possible for the researcher to assign or change this characteristic of the study participant.$^\dagger$ For this reason, causal inference relating to race and ethnicity cannot generally rely on randomised controlled trials, and must instead fall back on uncontrolled observational studies. As with other uncontrolled studies on any other topic, this comes with all the regular drawbacks and caveats on causal interpretation of results, including the possibility that there may be omitted "lurking variables" that affect analysis. As a general principle, causal inference from uncontrolled studies is not reliable, and tends to be reasonable only in cases where the predictor in question is shown to have a statistical relationship conditional on a wide array of covariates, and tends to retain its predictive ability under variations in covariates that are not themselves intermediate causes.
Many studies in the social sciences include race/ethnicity as covariates, and the goal is to filter out these variables to find some other causal or statistical relationship. There may be some studies where race/ethnicity is of direct interest as a predictor, and in this case the researcher needs to be careful to distinguish predictive effects from causal effects, as in any uncontrolled observational study. There is certainly no scientific problem with including race/ethnicity as variables in social science studies; the problems, if any, arise in regard to interpretation of results. There is a good discussion of the causal interpretation of race variables in VanderWeele and Robinson (2014).
For the most part, all of this is just a matter of applying general statistical principles to a particular set of variables. However, one issue that arises specifically in regard to causal inference regarding race and ethnicity is competing theories of whether any causality is direct (i.e., genetic/hereditary) or indirect (i.e., due to discrimination). This aspect of the problem has been discussed at length by the economist Thomas Sowell in a series of books discussing statistical disparities among racial groups (see esp., Sowell 1975, Sowell 2013, Sowell 2018). Sowell notes that historically, there was an excessive tendency to ascribe all racial disparities to genetic causes in the nineteenth and early twentieth centuries, and since the late twentieth century there is now an excessive tendency to ascribe all racial disparities to discrimination. Both of these constitute a failure to properly apply statistical reasoning relating to causality, and both tend to occur due to a conflation of correlation and cause. In any case, if you have not already read these works, they may give you a better understanding of the difficulties that arise in making causal inferences from statistical disparities among racial and ethnic groups.
It is difficult to answer your specific question without seeing a particular example of the kind of inference that concerns you. There are a wide variety of cases where social science researchers "use race for causal inference" and the validity would depend on the nature of the data and the resulting inference. (It is not even clear from that framing of the question whether race is the predictor of interest or just a covariate.)
$^\dagger$ Note that there are some randomised experiments where the appearance of race is controlled via some experimental mechanism. For example, many studies on ethnic discrimination in employment use randomised 'correspondence tests' where the researchers control (and randomise) the markers of race and ethnicity in submitted job applications (see e.g., Zschirnt and Ruedin 2015). | When is it valid to use race/ethnicity in causal inference?
Race and ethnicity are variables that cannot be "controlled" in experiments, since it is not possible for the researcher to assign or change this characteristic of the study participant.$^\dagger$ Fo |
51,858 | When is it valid to use race/ethnicity in causal inference? | There is nothing special about race when it comes to causal inference. You can look for its effects just as you might look for the effect of the season.
The difficulty with a lot of causal models in the social sciences is mediation analysis: you want to know whether A causes B directly, and not just through C (what Ben means by direct versus indirect). For example, in the causal model Gender, Hiring, Education, you want to know if Gender affects Hiring directly, but you concede that Gender might affect Education, and we all expect Education to affect Hiring. You cannot simply control for Education without opening back doors. This pattern is ubiquitous.
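One way to make this concrete is with the dagitty R package (my sketch, not part of the original answer; the latent variable U stands for an assumed unobserved confounder of the Education-Hiring relationship, which is what opens a back door when you condition on Education):
library(dagitty)
g <- dagitty("dag {
  Gender -> Hiring
  Gender -> Education -> Hiring
  U [latent]
  U -> Education
  U -> Hiring
}")
adjustmentSets(g, exposure = "Gender", outcome = "Hiring", effect = "total")
# {} : the total effect is identified with no adjustment
adjustmentSets(g, exposure = "Gender", outcome = "Hiring", effect = "direct")
# returns no sets: with U unobserved, no covariate set identifies the direct effect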
There are approaches to tackling these problems, but as mentioned in Ben's answer, these are experimental changes. For example, although you cannot control for Education without opening more back doors, you can freeze it by having the women use the men's resumes. | When is it valid to use race/ethnicity in causal inference? | There is nothing special about race when it comes to causal inference. You can look for its effects just as you might look for the effect of the season.
The difficulty with a lot of causal models in | When is it valid to use race/ethnicity in causal inference?
There is nothing special about race when it comes to causal inference. You can look for its effects just as you might look for the effect of the season.
The difficulty with a lot of causal models in the social sciences is mediation analysis: you want to know whether A causes B directly, and not just through C (what Ben means by direct versus indirect). For example, in the causal model Gender, Hiring, Education, you want to know if Gender affects Hiring directly, but you concede that Gender might affect Education, and we all expect Education to affect Hiring. You cannot simply control for Education without opening back doors. This pattern is ubiquitous.
There are approaches to tackling these problems, but as mentioned in Ben's answer, these are experimental changes. For example, although you cannot control for Education without opening more back doors, you can freeze it by having the women use the men's resumes. | When is it valid to use race/ethnicity in causal inference?
There is nothing special about race when it comes to causal inference. You can look for its effects just as you might look for the effect of the season.
The difficulty with a lot of causal models in |
51,859 | When is it valid to use race/ethnicity in causal inference? | The idea that race can be a cause is not without dispute.
In a 1986 JASA article, Paul Holland discussed how he and Don Rubin coined the expression, “no causation without manipulation”.
The idea here is that causal inference requires a strict definition of a cause that identifies an intervention that hypothetically could be implemented -- even if that manipulation is not physically possible or ethically feasible.
So what is a hypothetical intervention that would change someone's race? Perhaps a genetic manipulation? But there is no "race" gene that can be flipped like a digital bit. It is hard to imagine there is some way of changing all the genes that contribute to the phenotypes that define race, while keeping all other phenotypes constant. Perhaps instead a cosmetic procedure could make a white person pass as black or vice versa?
Both of these lines of reasoning lead one to think about how race is defined by how other people perceive an individual. So then is "race" what the researcher is looking for? Perhaps it is racial discrimination? Or a person's professed ethnic identity?
If you talk to Miguel A. Hernán and James M. Robins, authors of the causal inference book cited below, they would tell you race is not a valid cause and that only thinking more deeply about what people actually mean when they talk about race as a cause will lead to better inferences.
On the other hand, some, including Judea Pearl, take issue with this position.
Here is a quote (from Twitter) by Hernan:
Pearl believes that any causal effect we can name must also exist. To
him, the meaning of “the causal effect of A on death” is self-evident.
He says we can quantify, say, the causal effect of race or the causal
effect of obesity. I don't think we can.
We cannot estimate "the causal effect of obesity" because we don't
know what that means. For the causal effect of A to be well defined,
we need a common understanding of the interventions that we would use
to change A. Otherwise, the effect is undefined.
If by now you are thinking that this is just another academic debate
on the sex of the angels, think again: your beliefs about this issue
determine your beliefs about the limits of science and about how to
conduct data analyses.
Pearl addresses the issue here.
Hernán MA, Robins JM (2018). Causal Inference. Boca Raton: Chapman & Hall/CRC, forthcoming.
In a 1986 JASA article, Paul Holland discussed how he and Don Rubin coined the expression, “no causation without manipulation”.
The idea here | When is it valid to use race/ethnicity in causal inference?
The idea that race can be a cause is not without dispute.
In a 1986 JASA article, Paul Holland discussed how he and Don Rubin coined the expression, “no causation without manipulation”.
The idea here is that causal inference requires a strict definition of a cause that identifies an intervention that hypothetically could be implemented -- even if that manipulation is not physically possible or ethically feasible.
So what is a hypothetical intervention that would change someone's race? Perhaps a genetic manipulation? But there is no "race" gene that can be flipped like a digital bit. It is hard to imagine there is some way of changing all the genes that contribute to the phenotypes that define race, while keeping all other phenotypes constant. Perhaps instead a cosmetic procedure could make a white person pass as black or vice versa?
Both of these lines of reasoning lead one to think about how race is defined by how other people perceive an individual. So then is "race" what the researcher is looking for? Perhaps it is racial discrimination? Or a person's professed ethnic identity?
If you talk to Miguel A. Hernán and James M. Robins, authors of the causal inference book cited below, they would tell you race is not a valid cause and that only thinking more deeply about what people actually mean when they talk about race as a cause will lead to better inferences.
On the other hand, some, including Judea Pearl, take issue with this position.
Here is a quote (from Twitter) by Hernan:
Pearl believes that any causal effect we can name must also exist. To
him, the meaning of “the causal effect of A on death” is self-evident.
He says we can quantify, say, the causal effect of race or the causal
effect of obesity. I don't think we can.
We cannot estimate "the causal effect of obesity" because we don't
know what that means. For the causal effect of A to be well defined,
we need a common understanding of the interventions that we would use
to change A. Otherwise, the effect is undefined.
If by now you are thinking that this is just another academic debate
on the sex of the angels, think again: your beliefs about this issue
determine your beliefs about the limits of science and about how to
conduct data analyses.
Pearl addresses the issue here.
Hernán MA, Robins JM (2018). Causal Inference. Boca Raton: Chapman & Hall/CRC, forthcoming.
The idea that race can be a cause is not without dispute.
In a 1986 JASA article, Paul Holland discussed how he and Don Rubin coined the expression, “no causation without manipulation”.
The idea here |
51,860 | Are dimensionality reduction techniques useful in deep learning? | $t$-SNE
Two obvious reasons that $t$-SNE is not commonly used as a dimension reduction method are that it is non-deterministic and that it can't be applied in a consistent fashion to test-set data. See: Are there cases where PCA is more suitable than t-SNE?
PCA
First, PCA is not inherently a dimensionality reduction method. It's a method that makes a new matrix of the same size, represented in a decorrelated basis. Truncated PCA reduces the rank of that matrix, so the dimension is reduced.
Second, even if you do not use PCA to reduce dimensionality, it can still be useful. In "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe and Christian Szegedy suggest that whitening transformations are helpful during the optimization steps.
It has been long known (LeCun et al., 1998b; Wiesler & Ney, 2011) that the network training converges faster if its inputs are whitened – i.e., linearly transformed to have zero means and unit variances, and decorrelated.
Clearly, PCA yields decorrelated vectors and subtracting the mean and rescaling by the standard deviation achieves the rest. This quotation suggests that pre-whitening the input data might give your model a nice boost in terms of training time.
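For what it's worth, here is a minimal whitening sketch in R via PCA (my addition): center, rotate into the decorrelated basis, then rescale each component to unit variance.
set.seed(1)
X  <- matrix(rnorm(200), ncol = 2) %*% matrix(c(2, 1, 1, 2), 2)   # correlated toy data
p  <- prcomp(X, center = TRUE, scale. = FALSE)
Xw <- p$x %*% diag(1 / p$sdev)   # whitened scores
round(cov(Xw), 10)               # approximately the identity matrix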
Whether or not whitening is helpful for any particular model is, obviously, problem-specific. One very common deep learning application is computer-vision. These networks tend not to use whitening transformations because the transformation to an orthogonal basis changes the image in a way which might not actually be useful to whatever network you're using. I'm not aware of an example where PCA improves a modern deep neural network for image classification, but that's probably due to a limitation of my knowledge; I'm sure someone will post a recent conv-neural-network paper that uses PCA in a comment.
Moreover, truncated PCA of an image will, obviously, distort the image in some way, with the amount of distortion depending on the number of PCs that you retain.
On the other hand, a great reason to use truncated PCA for dimensionality reduction is when your data is rank-deficient. It's common for hand-crafted feature vectors, such as those used in a feed-forward network, to have a certain amount of redundancy. Presenting all of these features to your network unnecessarily increases the number of parameters, so it can be more efficient to drop them.
Common Sense
If we take a wider view of dimensionality reduction, we can still reduce the dimension of our data by using common sense.
Consider the MNIST task. The digits occupy the center of the image. If you look at the whole data set, you can find that there are some pixels around the periphery of each image which are always white. If you trim each image to exclude these always-white pixels, you've taken a significant step towards reducing how much computational power you need, since all of these pixels are now effectively "skipped over". "Always white" pixels have no useful information for the network because the pixel values are constant in all samples, so you're not losing any distinguishing information. | Are dimensionality reduction techniques useful in deep learning? | $t$-SNE
Two obvious reasons that tsne is not commonly used as a dimension reduction method is that it is non-deterministic and it can't be applied in a consistent fashion to test-set data. See: Are th | Are dimensionality reduction techniques useful in deep learning?
$t$-SNE
Two obvious reasons that $t$-SNE is not commonly used as a dimension reduction method are that it is non-deterministic and that it can't be applied in a consistent fashion to test-set data. See: Are there cases where PCA is more suitable than t-SNE?
PCA
First, PCA is not inherently a dimensionality reduction method. It's a method that makes a new matrix of the same size, represented in a decorrelated basis. Truncated PCA reduces the rank of that matrix, so the dimension is reduced.
Second, even if you do not use PCA to reduce dimensionality, it can still be useful. In "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe and Christian Szegedy suggest that whitening transformations are helpful during the optimization steps.
It has been long known (LeCun et al., 1998b; Wiesler & Ney, 2011) that the network training converges faster if its inputs are whitened – i.e., linearly transformed to have zero means and unit variances, and decorrelated.
Clearly, PCA yields decorrelated vectors and subtracting the mean and rescaling by the standard deviation achieves the rest. This quotation suggests that pre-whitening the input data might give your model a nice boost in terms of training time.
Whether or not whitening is helpful for any particular model is, obviously, problem-specific. One very common deep learning application is computer-vision. These networks tend not to use whitening transformations because the transformation to an orthogonal basis changes the image in a way which might not actually be useful to whatever network you're using. I'm not aware of an example where PCA improves a modern deep neural network for image classification, but that's probably due to a limitation of my knowledge; I'm sure someone will post a recent conv-neural-network paper that uses PCA in a comment.
Moreover, truncated PCA of an image will, obviously, distort the image in some way, with the amount of distortion depending on the number of PCs that you retain.
On the other hand, a great reason to use truncated PCA for dimensionality reduction is when your data is rank-deficient. It's common for hand-crafted feature vectors, such as those used in a feed-forward network, to have a certain amount of redundancy. Presenting all of these features to your network unnecessarily increases the number of parameters, so it can be more efficient to drop them.
Common Sense
If we take a wider view of dimensionality reduction, we can still reduce the dimension of our data by using common sense.
Consider the MNIST task. The digits occupy the center of the image. If you look at the whole data set, you can find that there are some pixels around the periphery of each image which are always white. If you trim each image to exclude these always-white pixels, you've taken a significant step towards reducing how much computational power you need, since all of these pixels are now effectively "skipped over". "Always white" pixels have no useful information for the network because the pixel values are constant in all samples, so you're not losing any distinguishing information. | Are dimensionality reduction techniques useful in deep learning?
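A minimal R sketch of that trimming idea (my addition; X is a hypothetical n_images x n_pixels matrix of flattened images):
informative <- apply(X, 2, var) > 0   # pixels that vary at all across the data set
X_reduced <- X[, informative]         # drop the constant (e.g. always-white) pixels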
$t$-SNE
Two obvious reasons that tsne is not commonly used as a dimension reduction method is that it is non-deterministic and it can't be applied in a consistent fashion to test-set data. See: Are th |
51,861 | Are dimensionality reduction techniques useful in deep learning? | Complementary to @Sycorax's nice answer (+1):
Remember that one of the "deep-learning" strengths is the ability of deep neural networks to perform automatic feature extraction and encapsulate non-linear relations (e.g. through convolutions (ConvNets), recurrences (RNNs), etc.). Making a highly condensed version of the data to be used as input to a deep neural network would somewhat defeat the objective of using a deep neural network to begin with. The network would have to "uncompress" the data only to try to recombine it once more internally to reach its full predictive capacity. | Are dimensionality reduction techniques useful in deep learning? | Complementary to @Sycorax's nice answer (+1):
Remember that one of the "deep-learning" strengths is the ability of deep neural network to perform automatic feature extraction and encapsulate non-line | Are dimensionality reduction techniques useful in deep learning?
Complementary to @Sycorax's nice answer (+1):
Remember that one of the "deep-learning" strengths is the ability of deep neural networks to perform automatic feature extraction and encapsulate non-linear relations (e.g. through convolutions (ConvNets), recurrences (RNNs), etc.). Making a highly condensed version of the data to be used as input to a deep neural network would somewhat defeat the objective of using a deep neural network to begin with. The network would have to "uncompress" the data only to try to recombine it once more internally to reach its full predictive capacity. | Are dimensionality reduction techniques useful in deep learning?
Complementary to @Sycorax's nice answer (+1):
Remember that one of the "deep-learning" strengths is the ability of deep neural network to perform automatic feature extraction and encapsulate non-line |
51,862 | Are dimensionality reduction techniques useful in deep learning? | But, i rarely noticed anyone doing it for deep learning projects. Is there a specific reason for not using Dimensionality reduction techniques in deep learning?
That depends on the aims of these projects.
For example, for projects/papers dealing with representation learning, it's pretty common:
Reducing the Dimensionality of Data with Neural Networks
Semantic Hashing
Vector Representations of Words from Tensorflow documentation
On the other hand, if you have a supervised problem and you're primarily interested in some metric, then you're less likely to care about which features the network actually learned. | Are dimensionality reduction techniques useful in deep learning? | But, i rarely noticed anyone doing it for deep learning projects. Is there a specific reason for not using Dimensionality reduction techniques in deep learning?
That depends on the aims of these proj | Are dimensionality reduction techniques useful in deep learning?
But, i rarely noticed anyone doing it for deep learning projects. Is there a specific reason for not using Dimensionality reduction techniques in deep learning?
That depends on the aims of these projects.
For example, for projects/papers dealing with representation learning, it's pretty common:
Reducing the Dimensionality of Data with Neural Networks
Semantic Hashing
Vector Representations of Words from Tensorflow documentation
On the other hand, if you have a supervised problem and you're primarily interested in some metric, then you're less likely to care about which features the network actually learned. | Are dimensionality reduction techniques useful in deep learning?
But, i rarely noticed anyone doing it for deep learning projects. Is there a specific reason for not using Dimensionality reduction techniques in deep learning?
That depends on the aims of these proj |
51,863 | Tensorflow - Why do we need tensors? [duplicate] | Tensors, as defined by the deep learning software, are multidimensional arrays, so if you only need to conduct simple (small-scale) mathematical operations and transformations on the data, then TensorFlow is overkill. But TensorFlow is much more than this:
it implements most of the common building blocks for constructing deep learning models,
it has state-of-the-art optimization algorithms,
it does automatic differentiation out-of-the-box,
it supports GPU training,
it has high-level (Estimator), middle-level (Keras), and low-level (core TensorFlow) interfaces, depending on whether you want to program something by hand or just train a generic model,
a probabilistic programming module has been developed,
there is ongoing work on building a Spark interface for TensorFlow, so it will natively integrate with big data environments,
it can be easily integrated into a production environment,
or the models can be converted to TensorFlow Lite and run on mobile devices, or translated to JavaScript,
and it has many, many more features. | Tensorflow - Why do we need tensors? [duplicate] | Tensors, as defined by the deep learning software are multidimensional arrays, so if you need only to conduct simple (small-scale) mathematical operations and transformations on the data, then TensorF | Tensorflow - Why do we need tensors? [duplicate]
Tensors, as defined by the deep learning software, are multidimensional arrays, so if you only need to conduct simple (small-scale) mathematical operations and transformations on the data, then TensorFlow is overkill. But TensorFlow is much more than this:
it implements most of the common building blocks for constructing deep learning models,
it has state-of-the-art optimization algorithms,
it does automatic differentiation out-of-the-box,
it supports GPU training,
it has high-level (Estimator), middle-level (Keras), and low-level (core TensorFlow) interfaces, depending on whether you want to program something by hand or just train a generic model,
a probabilistic programming module has been developed,
there is ongoing work on building a Spark interface for TensorFlow, so it will natively integrate with big data environments,
it can be easily integrated into a production environment,
or the models can be converted to TensorFlow Lite and run on mobile devices, or translated to JavaScript,
and it has many, many more features. | Tensorflow - Why do we need tensors? [duplicate]
Tensors, as defined by the deep learning software are multidimensional arrays, so if you need only to conduct simple (small-scale) mathematical operations and transformations on the data, then TensorF |
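To make one of the listed features concrete, here is a minimal sketch of the out-of-the-box automatic differentiation, written against the TF 1.x graph API used elsewhere in this document:
import tensorflow as tf

x = tf.constant(3.0)
y = x**2 + 2.0 * x          # build a tiny computation graph
dy_dx = tf.gradients(y, x)  # automatic differentiation, no manual calculus

with tf.Session() as sess:
    print(sess.run(dy_dx))  # [8.0], since dy/dx = 2x + 2 at x = 3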
51,864 | Tensorflow - Why do we need tensors? [duplicate] | I am assuming that a) you do not question why we have TensorFlow itself, and understand its value; and b) you only question why TF wouldn't use the np.array class instead of creating a new class, tf.Tensor.
When you write any kind of extensive framework such as TensorFlow, you end up creating many types and even type hierarchies. The creation of every single one of them could potentially be challenged, of course. However, it doesn't make sense to do it individually; you have to consider the context of the framework. For instance, sometimes you end up creating almost exact copies of classes from an already existing package. Why not simply import the package? There could be many reasons, including that maybe you don't want to drag in an entire package just to get a few classes from it, so you duplicate them, etc. Therefore, from a software engineering point of view it is almost too easy to discard your question as lacking context. However, I'll try to answer it because it touches the central data structure of the TensorFlow framework.
There are many ways to answer the question, I'll only pick one. Take a look at the following numpy code in Python:
import numpy as np
np.array([[1,2],[3,4]])**2
Here's the output it produces, an element-wise square of the 2x2 array:
array([[ 1, 4],
[ 9, 16]], dtype=int32)
Next, look at a seemingly equivalent TensorFlow code:
import tensorflow as tf
t = tf.constant([[1,2],[3,4]])
t = t**2
t
Let's look at its output closely and compare to numpy's:
<tf.Tensor 'pow_1:0' shape=(2, 2) dtype=int32>
This is not the square of the array... yet. This tensor is linked to the description of what will ultimately be calculated by the computation graph that we created above. It's a very simple graph with only a couple of nodes: the input array and the square operation. Nevertheless, it's a graph. Let's now evaluate it as follows:
sess = tf.Session()
sess.run(t)
Here's the output:
array([[ 1, 4],
[ 9, 16]], dtype=int32)
Now, this looks like an array that we expected!
We can make this even more explicit with this TF code:
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, shape=(2,2))
t = x**2
t
Here, we get the Tensor again, and it can't even be calculated eagerly (thanks, @Tim)!
<tf.Tensor 'pow_3:0' shape=(2, 2) dtype=float32>
We need to plug in the actual array instead of the placeholder to get a result, as follows:
sess = tf.Session()
sess.run(t,feed_dict={x: np.array([[1,2],[3,4]])})
getting the desired output:
array([[ 1., 4.],
[ 9., 16.]], dtype=float32)
So, this was the long way to say that the Tensor class in TensorFlow is a lot more than just a numpy array. It's almost like a variable vs. value comparison in this context. | Tensorflow - Why do we need tensors? [duplicate] | I am assuming that a) you do not question why do we have TensorFlow itself, and understand its value; and b) you only question why wouldn't TF use np.array class instead of creating a new class tf.Ten | Tensorflow - Why do we need tensors? [duplicate]
I am assuming that a) you do not question why we have TensorFlow itself, and understand its value; and b) you only question why TF wouldn't use the np.array class instead of creating a new class, tf.Tensor.
When you write any kind of extensive framework such as TensorFlow, you end up creating many types and even type hierarchies. The creation of every single one of them could potentially be challenged, of course. However, it doesn't make sense to do it individually; you have to consider the context of the framework. For instance, sometimes you end up creating almost exact copies of classes from an already existing package. Why not simply import the package? There could be many reasons, including that maybe you don't want to drag in an entire package just to get a few classes from it, so you duplicate them, etc. Therefore, from a software engineering point of view it is almost too easy to discard your question as lacking context. However, I'll try to answer it because it touches the central data structure of the TensorFlow framework.
There are many ways to answer the question, I'll only pick one. Take a look at the following numpy code in Python:
import numpy as np
np.array([[1,2],[3,4]])**2
Here's the output it produces, an element-wise square of the 2x2 array:
array([[ 1, 4],
[ 9, 16]], dtype=int32)
Next, look at a seemingly equivalent TensorFlow code:
import tensorflow as tf
t = tf.constant([[1,2],[3,4]])
t = t**2
t
Let's look at its output closely and compare to numpy's:
<tf.Tensor 'pow_1:0' shape=(2, 2) dtype=int32>
This is not the square of the array... yet. This tensor is linked to the description of what will ultimately be calculated by the computation graph that we created above. It's a very simple graph with only a couple of nodes: the input array and the square operation. Nevertheless, it's a graph. Let's now evaluate it as follows:
sess = tf.Session()
sess.run(t)
Here's the output:
array([[ 1, 4],
[ 9, 16]], dtype=int32)
Now, this looks like an array that we expected!
We can make this even more explicit with this TF code:
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, shape=(2,2))
t = x**2
t
Here, we get the Tensor again, and it can't even be calculated eagerly (thanks, @Tim)!
<tf.Tensor 'pow_3:0' shape=(2, 2) dtype=float32>
We need to plug in the actual array instead of the placeholder to get a result, as follows:
sess = tf.Session()
sess.run(t,feed_dict={x: np.array([[1,2],[3,4]])})
getting the desired output:
array([[ 1., 4.],
[ 9., 16.]], dtype=float32)
So, this was the long way to say that the Tensor class in TensorFlow is a lot more than just a numpy array. It's almost like a variable vs. value comparison in this context. | Tensorflow - Why do we need tensors? [duplicate]
I am assuming that a) you do not question why do we have TensorFlow itself, and understand its value; and b) you only question why wouldn't TF use np.array class instead of creating a new class tf.Ten |
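For contrast, later TensorFlow releases added eager execution, which makes a tf.Tensor behave much more like a plain numpy array. A short sketch, assuming a TF 1.x version where tf.enable_eager_execution is available:
import tensorflow as tf

tf.enable_eager_execution()  # opts out of the graph-and-Session workflow shown above

t = tf.constant([[1, 2], [3, 4]])**2
print(t.numpy())             # [[ 1  4] [ 9 16]], evaluated immediately, no Session needed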
51,865 | Tensorflow - Why do we need tensors? [duplicate] | In addition to the other answer, I think a Tensor can also be an operation. In short a Tensor is an abstraction of a data or operation and it represents a node in the computational graph. The graph is the set of calculations to be done by TF in order to train, predict, etc. | Tensorflow - Why do we need tensors? [duplicate] | In addition to the other answer, I think a Tensor can also be an operation. In short a Tensor is an abstraction of a data or operation and it represents a node in the computational graph. The graph is | Tensorflow - Why do we need tensors? [duplicate]
In addition to the other answer, I think a Tensor can also be an operation. In short, a Tensor is an abstraction of data or an operation, and it represents a node in the computational graph. The graph is the set of calculations to be done by TF in order to train, predict, etc. | Tensorflow - Why do we need tensors? [duplicate]
In addition to the other answer, I think a Tensor can also be an operation. In short a Tensor is an abstraction of a data or operation and it represents a node in the computational graph. The graph is |
51,866 | Rolling a $6$ Sided Die | This is very much not about the order in which you roll dice or whether you roll them at the same time or one after the other. It is about whether you examine the conditional probability or the overall probability.
Given die one shows a 6, die two has a 6-probability of $1/6$.
Not given that any one of them has a 6, the probability of them both getting 6 is $1/6\times1/6=1/36$ | Rolling a $6$ Sided Die | This is very much not about the order in which you roll dice or whether you roll them at the same time or one after the other. It is about whether you examine the conditional probability or the overal | Rolling a $6$ Sided Die
This is very much not about the order in which you roll dice or whether you roll them at the same time or one after the other. It is about whether you examine the conditional probability or the overall probability.
Given die one shows a 6, die two has a 6-probability of $1/6$.
Not given that any one of them has a 6, the probability of them both getting 6 is $1/6\times1/6=1/36$ | Rolling a $6$ Sided Die
This is very much not about the order in which you roll dice or whether you roll them at the same time or one after the other. It is about whether you examine the conditional probability or the overal |
51,867 | Rolling a $6$ Sided Die | Dice rolls are independent. The die doesn't "remember" if you just rolled a six (or any other number).
What this means is that the probability of a 6 on a single roll is always $\frac{1}{6}$.
The apparent contradiction in your question arises from conditional probability.
Conditional probability expresses the event "given that I just rolled a six, what's the probability that the next roll is a six?" We just established that there's no "memory" to this process, so the information "I just rolled a six" is irrelevant, so we can re-state the problem as "what's the probability that the next roll is a six?" without any loss of information. Therefore, the probability of rolling a six is (still) $\frac{1}{6}$.
But if you're rolling two dice at the same time, each die is independent of the other (the dice can't "see" what each other is doing), so the joint probability is $\frac{1}{6} \cdot \frac{1}{6} = \frac{1}{36}$. | Rolling a $6$ Sided Die | Dice rolls are independent. The die doesn't "remember" if you just rolled a six (or any other number).
What this means is that the probability of a 6 on a single roll is always $\frac{1}{6}$.
The app | Rolling a $6$ Sided Die
Dice rolls are independent. The die doesn't "remember" if you just rolled a six (or any other number).
What this means is that the probability of a 6 on a single roll is always $\frac{1}{6}$.
The apparent contradiction in your question arises from conditional probability.
Conditional probability expresses the event "given that I just rolled a six, what's the probability that the next roll is a six?" We just established that there's no "memory" to this process, so the information "I just rolled a six" is irrelevant, so we can re-state the problem as "what's the probability that the next roll is a six?" without any loss of information. Therefore, the probability of rolling a six is (still) $\frac{1}{6}$.
But if you're rolling two dice at the same time, each die is independent of the other (the dice can't "see" what each other is doing), so the joint probability is $\frac{1}{6} \cdot \frac{1}{6} = \frac{1}{36}$. | Rolling a $6$ Sided Die
Dice rolls are independent. The die doesn't "remember" if you just rolled a six (or any other number).
What this means is that the probability of a 6 on a single roll is always $\frac{1}{6}$.
The app |
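Both the joint probability of 1/36 and the conditional probability of 1/6 are easy to check with a quick Monte Carlo simulation in Python (the exact counts will vary slightly with the seed):
import random

random.seed(0)
N = 1_000_000
both = first_six = second_six_after_first = 0
for _ in range(N):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    both += (d1 == 6 and d2 == 6)
    if d1 == 6:
        first_six += 1
        second_six_after_first += (d2 == 6)

print(both / N)                            # close to 1/36, about 0.0278
print(second_six_after_first / first_six)  # close to 1/6, about 0.1667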
51,868 | Rolling a $6$ Sided Die | Here's a way to think about it that clearly shows that the results of rolling two simultaneously have the same probability as one after the other:
Since you asked specifically about the chance of failure, let's make that explicit: you correctly noted that the chance of success when rolling simultaneously is 1/36, so the chance of failure is 35/36.
When rolling the dice one at a time, 5 of the 6 outcomes on the first die already guarantee failure. If you haven't failed yet (the first die showed a 6, with probability 1/6), then 5 of the 6 outcomes on the second die lead to failure. So your probability of failure is 5/6 + (1/6)(5/6), which is also 35/36.
[Conversely, when you roll the first die, only 1 of 6 possible results could lead to success, and the same when you roll the second, so the chance of success is 1/6 * 1/6 = 1/36.] | Rolling a $6$ Sided Die | Here's a way to think about it that clearly shows that the results of rolling two simultaneously have the same probability as one after the other:
Since you asked specifically about the chance of fail | Rolling a $6$ Sided Die
Here's a way to think about it that clearly shows that the results of rolling two simultaneously have the same probability as one after the other:
Since you asked specifically about the chance of failure, let's make that explicit: you correctly noted that the chance of success when rolling simultaneously is 1/36, so the chance of failure is 35/36.
When rolling the dice one at a time, 5 of the 6 outcomes on the first die already guarantee failure. If you haven't failed yet (the first die showed a 6, with probability 1/6), then 5 of the 6 outcomes on the second die lead to failure. So your probability of failure is 5/6 + (1/6)(5/6), which is also 35/36.
[Conversely, when you roll the first die, only 1 of 6 possible results could lead to success, and the same when you roll the second, so the chance of success is 1/6 * 1/6 = 1/36.] | Rolling a $6$ Sided Die
Here's a way to think about it that clearly shows that the results of rolling two simultaneously have the same probability as one after the other:
Since you asked specifically about the chance of fail |
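The 35/36 figure can be confirmed by exhaustively enumerating the 36 ordered pairs, and the sequential decomposition can be checked with exact fractions:
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))  # all 36 ordered pairs of rolls
failures = [o for o in outcomes if o != (6, 6)]
print(len(failures), "/", len(outcomes))         # 35 / 36

# fail outright on die 1, or pass die 1 (a 6) and then fail on die 2
print(Fraction(5, 6) + Fraction(1, 6) * Fraction(5, 6))  # 35/36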
51,869 | Difference between Pearson's r ~= 0 and p > 0.05 | The p-values and Pearson's correlation coefficient $r$ measure different things.
$r$ measures the strength of the correlation. The p-value, on the other hand, measures how likely you would be to observe a correlation of this strength under the null hypothesis - i.e., under the assumption that your random variables are uncorrelated.
Intuitively, the stronger the correlation you observe, the less likely it is that it occurred by chance from two uncorrelated variables. However, even if you observe a very weak correlation, you can have a very low p-value associated with this observation - e.g., as your sample size goes to infinity, you will get very low p-values even if your observed correlation is very weak.
So to answer your question: $r$ close to 0 and p-value < 0.05 would mean that there is a correlation, but it is very weak. | Difference between Pearson's r ~= 0 and p > 0.05 | The p-values and Pearson's correlation coefficient $r$ measure different things.
$r$ measures the strength of the correlation. The p-value, on the other hand, measures how likely you would be to obser | Difference between Pearson's r ~= 0 and p > 0.05
The p-values and Pearson's correlation coefficient $r$ measure different things.
$r$ measures the strength of the correlation. The p-value, on the other hand, measures how likely you would be to observe a correlation of this strength under the null hypothesis - i.e., under the assumption that your random variables are uncorrelated.
Intuitively, the stronger the correlation you observe, the less likely it is that it occurred by chance from two uncorrelated variables. However, even if you observe a very weak correlation, you can have a very low p-value associated with this observation - e.g., as your sample size goes to infinity, you will get very low p-values even if your observed correlation is very weak.
So to answer your question: $r$ close to 0 and p-value < 0.05 would mean that there is a correlation, but it is very weak. | Difference between Pearson's r ~= 0 and p > 0.05
The p-values and Pearson's correlation coefficient $r$ measure different things.
$r$ measures the strength of the correlation. The p-value, on the other hand, measures how likely you would be to obser |
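The sample-size point is easy to demonstrate numerically: with a large enough sample, even a very weak correlation yields a tiny p-value. A sketch, assuming NumPy and SciPy are available:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)  # a very weak true relationship

r, p = stats.pearsonr(x, y)
print(r, p)  # r stays near 0.01, yet p can fall well below 0.05 at this n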
51,870 | Difference between Pearson's r ~= 0 and p > 0.05 | I upvoted the answer by rinspy. Here, I’ll try to add a few things.
~ ~ ~
r and p-value measure different things.
The p-value indicates the probability of getting data as extreme† as the observed data assuming that the null hypothesis is true. By our decision rule, if p < alpha, we have sufficient evidence to reject the null hypothesis that there’s no correlation. And that’s all the p-value does for us.
The p-value result is often much less informative than we pretend it is. Concluding that there’s a statistically significant correlation doesn’t tell us how strong the correlation is, and certainly doesn’t tell us if the correlation is of practical importance.
r is a measure of effect size. It tells us how strong the correlation is.
Interpretation of effect sizes is necessarily dependent on the discipline and the expectations of the analysis. In physics or chemistry, a near-perfect relationship may be expected, whereas in macrobiology or psychology, a much smaller effect size may be notable. Cohen (1988) gives some guidelines for the behavioral sciences: Small, ≥ 0.10; Medium, ≥ 0.30; Large, ≥ 0.50.
~ ~ ~
Effect sizes are not affected by sample size, whereas a p-value will be affected by sample size for a given effect size.
Consider x = (1, 2, 3), y = (1, 1, 2). Here, r = 0.866; p = 0.33.
Now, we’ll keep the same values, but double the number of observations for each of x and y. The effect size stays exactly the same. But now, with six observations, and an r of 0.866, the p-value decreases to below 0.05.
x = (1, 2, 3, 1, 2, 3), y = (1, 1, 2, 1, 1, 2). r = 0.866; p = 0.03.
And we’ll increase the sample size again.
x = (1, 2, 3, 1, 2, 3, 1, 2, 3), y = (1, 1, 2, 1, 1, 2, 1, 1, 2). r = 0.866; p = 0.003.
For small samples, an effect size may not be very informative. For example, if we measure the height and weight of two people, we will find that height and weight are perfectly correlated, and r = 1. This will not impress us. If we increase the sample size to three or four, we still may find a large r value, but know that this could very well be by chance. Here, we might rely more on the p-value to determine if something interesting is going on.
For larger sample sizes, a p-value might be significant even if the r is small. Here, we want to make sure we look at the effect size, r, and not give too much weight to the p-value.
~ ~ ~
r may not always be the best way to understand the size of a correlation. For example, if I were telling you about a correlation between corn yield and fertilizer rate, “p = 0.01; r = 0.4” might be of interest. But probably you would want to know something like, “For an increase of 5 kg/ha in fertilizer, corn yield increased by 1000 kg/ha.”
† In this case, "as extreme" means as correlated.
Reference
Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences, 2nd Edition. Routledge. | Difference between Pearson's r ~= 0 and p > 0.05 | I upvoted the answer by rinspy. Here, I’ll try to add a few things.
~ ~ ~
r and p-value measure different things.
The p-value indicates the probability of getting data as extreme† as the observed d | Difference between Pearson's r ~= 0 and p > 0.05
I upvoted the answer by rinspy. Here, I’ll try to add a few things.
~ ~ ~
r and p-value measure different things.
The p-value indicates the probability of getting data as extreme† as the observed data assuming that the null hypothesis is true. By our decision rule, if p < alpha, we have sufficient evidence to reject the null hypothesis that there’s no correlation. And that’s all the p-value does for us.
The p-value result is often much less informative than we pretend it is. Concluding that there’s a statistically significant correlation doesn’t tell us how strong the correlation is, and certainly doesn’t tell us if the correlation is of practical importance.
r is a measure of effect size. It tells us how strong the correlation is.
Interpretation of effect sizes is necessarily dependent on the discipline and the expectations of the analysis. In physics or chemistry, a near-perfect relationship may be expected, whereas in macrobiology or psychology, a much smaller effect size may be notable. Cohen (1988) gives some guidelines for the behavioral sciences: Small, ≥ 0.10; Medium, ≥ 0.30; Large, ≥ 0.50.
~ ~ ~
Effect sizes are not affected by sample size, whereas a p-value will be affected by sample size for a given effect size.
Consider x = (1, 2, 3), y = (1, 1, 2). Here, r = 0.866; p = 0.33.
Now, we’ll keep the same values, but double the number of observations for each of x and y. The effect size stays exactly the same. But now, with six observations, and an r of 0.866, the p-value decreases to below 0.05.
x = (1, 2, 3, 1, 2, 3), y = (1, 1, 2, 1, 1, 2). r = 0.866; p = 0.03.
And we’ll increase the sample size again.
x = (1, 2, 3, 1, 2, 3, 1, 2, 3), y = (1, 1, 2, 1, 1, 2, 1, 1, 2). r = 0.866; p = 0.003.
For small samples, an effect size may not be very informative. For example, if we measure the height and weight of two people, we will find that height and weight are perfectly correlated, and r = 1. This will not impress us. If we increase the sample size to three or four, we still may find a large r value, but know that this could very well be by chance. Here, we might rely more on the p-value to determine if something interesting is going on.
For larger sample sizes, a p-value might be significant even if the r is small. Here, we want to make sure we look at the effect size, r, and not give too much weight to the p-value.
~ ~ ~
r may not always be the best way to understand the size of a correlation. For example, if I were telling you about a correlation between corn yield and fertilizer rate, “p = 0.01; r = 0.4” might be of interest. But probably you would want to know something like, “For an increase of 5 kg/ha in fertilizer, corn yield increased by 1000 kg/ha.”
† In this case, "as extreme" means as correlated.
Reference
Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences, 2nd Edition. Routledge. | Difference between Pearson's r ~= 0 and p > 0.05
I upvoted the answer by rinspy. Here, I’ll try to add a few things.
~ ~ ~
r and p-value measure different things.
The p-value indicates the probability of getting data as extreme† as the observed d |
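The numbers quoted in this answer can be reproduced directly; a sketch assuming SciPy is available:
from scipy import stats

x, y = [1, 2, 3], [1, 1, 2]
for reps in (1, 2, 3):
    r, p = stats.pearsonr(x * reps, y * reps)
    print(len(x) * reps, round(r, 3), round(p, 3))
# r = 0.866 every time, with p of about 0.33, 0.03 and 0.003 as n grows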
51,871 | If X can predict Y in regression, why isn't Y guaranteed to predict X? | The answer depends on what you mean by "predict". If you imply any kind of causation then obviously it is a one-way road. Suppose sunrise causes you to wake up. If I wake you up in the middle of the night, the sun will not suddenly rise.
On the other hand, if by predicting you mean the explanatory power of X in a multiple regression Y~1+X+Z or the explanatory power of Y in a regression X~1+Y+Z, then it is a different story. You can certainly invert the relationship algebraically as long as it is strong. If I know what time you wake up, I can predict when the sun rises.
I conditioned on the relationship being "strong" because when you invert the equation your optimization problem changes. Instead of minimizing the squares $(\hat y_i-y_i)^2$ we minimize the squares $(\hat x_i-x_i)^2$. It is a different equation that may not render a significant relationship if the relationship was weak to start with. For instance, you can find that $\beta_1$ is significant, i.e. X predicts Y in this narrow definition. Yet, when you invert the equation you may end up with a model where the coefficient of Y is not significant, so in this sense Y does not predict X. | If X can predict Y in regression, why isn't Y guaranteed to predict X? | The answer depends on what you mean by "predict". If you imply any kind of causation then obviously it is one way road. Suppose sunrise causes you to wake up. If I wake you up in the middle of the nig | If X can predict Y in regression, why isn't Y guaranteed to predict X?
The answer depends on what you mean by "predict". If you imply any kind of causation then obviously it is a one-way road. Suppose sunrise causes you to wake up. If I wake you up in the middle of the night, the sun will not suddenly rise.
On the other hand, if by predicting you mean the explanatory power of X in a multiple regression Y~1+X+Z or the explanatory power of Y in a regression X~1+Y+Z, then it is a different story. You can certainly invert the relationship algebraically as long as it is strong. If I know what time you wake up, I can predict when the sun rises.
I conditioned on the relationship being "strong" because when you invert the equation your optimization problem changes. Instead of minimizing the squares $(\hat y_i-y_i)^2$ we minimize the squares $(\hat x_i-x_i)^2$. It is a different equation that may not render a significant relationship if the relationship was weak to start with. For instance, you can find that $\beta_1$ is significant, i.e. X predicts Y in this narrow definition. Yet, when you invert the equation you may end up with a model where the coefficient of Y is not significant, so in this sense Y does not predict X. | If X can predict Y in regression, why isn't Y guaranteed to predict X?
The answer depends on what you mean by “predict”. If you imply any kind of causation then obviously it is one way road. Suppose sunrise causes you to wake up. If I wake you up in the middle of the nig |
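A quick numerical illustration of why the inverted fit is a different optimization problem: the slope of X~Y is not the reciprocal of the slope of Y~X unless the fit is perfect. A sketch assuming NumPy is available:
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.3 * x + rng.normal(size=200)  # a weak relationship

b_yx = np.polyfit(x, y, 1)[0]  # slope of the regression Y ~ X
b_xy = np.polyfit(y, x, 1)[0]  # slope of the regression X ~ Y
print(b_yx, b_xy, 1 / b_yx)    # b_xy differs from 1/b_yx; in fact b_yx * b_xy = r^2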
51,872 | If X can predict Y in regression, why isn't Y guaranteed to predict X? | When you use the regression equation to make a prediction by plugging in a value of $x$, you are not predicting the value of $y$ for that value of $x$. You are predicting the mean of the $y$-values for that value of $x$. In detail:
The regression equation
$$y = \beta_0 + \beta_1 x + \epsilon$$
says that $y$ is equal to a linear function of $x$ plus some random scatter. If you set $x=3$, say, you have
$$y = \beta_0 + 3\beta_1 + \epsilon$$
and there is still some random scatter there. In other words, you are saying that "my prediction is that $y$ is normally distributed with mean equal to $\beta_0 + 3\beta_1$". To get an actual value for $y$, you need to take the expectation. So you are saying that "The mean of all the $y$-values for which $x=3$ is $\beta_0 + 3\beta_1$".
If you make a prediction by inverting the regression equation, say by plugging in $y=4$, then you are saying "The $x$-value for which the mean of all the corresponding $y$-values is equal to $4$ is $(4-\beta_0)/\beta_1$", which isn't usually the kind of prediction that you want.
Statistical courses often don't help by talking about "the line of best fit", which makes it sound like the situation is symmetrical in $x$ and $y$, which is very much not the case. Recently there was a debate on the ANZSTAT mailing list, and someone posted a link to a good introductory course which explains it well:
https://www.stat.berkeley.edu/~stark/SticiGui/Text/regression.htm | If X can predict Y in regression, why isn't Y guaranteed to predict X? | When you use the regression equation to make a prediction by plugging in a value of $x$, you are not predicting the value of $y$ for that value of $x$. You are predicting the mean of the $y$-values fo | If X can predict Y in regression, why isn't Y guaranteed to predict X?
When you use the regression equation to make a prediction by plugging in a value of $x$, you are not predicting the value of $y$ for that value of $x$. You are predicting the mean of the $y$-values for that value of $x$. In detail:
The regression equation
$$y = \beta_0 + \beta_1 x + \epsilon$$
says that $y$ is equal to a linear function of $x$ plus some random scatter. If you set $x=3$, say, you have
$$y = \beta_0 + 3\beta_1 + \epsilon$$
and there is still some random scatter there. In other words, you are saying that "my prediction is that $y$ is normally distributed with mean equal to $\beta_0 + 3\beta_1$". To get an actual value for $y$, you need to take the expectation. So you are saying that "The mean of all the $y$-values for which $x=3$ is $\beta_0 + 3\beta_1$".
If you make a prediction by inverting the regression equation, say by plugging in $y=4$, then you are saying "The $x$-value for which the mean of all the corresponding $y$-values is equal to $4$ is $(4-\beta_0)/\beta_1$", which isn't usually the kind of prediction that you want.
Statistical courses often don't help by talking about "the line of best fit", which makes it sound like the situation is symmetrical in $x$ and $y$, which is very much not the case. Recently there was a debate on the ANZSTAT mailing list, and someone posted a link to a good introductory course which explains it well:
https://www.stat.berkeley.edu/~stark/SticiGui/Text/regression.htm | If X can predict Y in regression, why isn't Y guaranteed to predict X?
When you use the regression equation to make a prediction by plugging in a value of $x$, you are not predicting the value of $y$ for that value of $x$. You are predicting the mean of the $y$-values fo |
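The conditional-mean reading of the fitted value can be checked by simulation; a sketch assuming NumPy, with made-up true coefficients:
import numpy as np

rng = np.random.default_rng(0)
b0, b1 = 1.0, 2.0                                # hypothetical true coefficients
y_at_3 = b0 + b1 * 3 + rng.normal(size=10_000)   # many y-values observed at x = 3

print(y_at_3.mean(), b0 + 3 * b1)  # the sample mean hugs beta0 + 3*beta1 = 7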
51,873 | If X can predict Y in regression, why isn't Y guaranteed to predict X? | This is an interesting question. In the case of single-variable linear regression there is (by assumption) a symmetric relationship, but the same is not true for multiple linear regression.
A symmetric relationship can exist for some problems; a contrived example is training a linear regression model to behave like an AND gate.
If you ask why this is not "guaranteed", then there is a clear counterexample (proof by construction), which is training an OR gate using linear regression. You can learn a model that accurately predicts the output of the gate $Y$ given the 2 inputs $X_1, X_2$, but the reverse is not possible.
Another way to think about it is that the multivariable linear regression model learns a many-to-one mapping. And since many different points in the input can map to the same point in the output, the reverse mapping is ambiguous.
UPDATE: The other answers seem to explain why the linear-regression objective (or loss function) is not symmetric. But I think you are asking a different question, which is (paraphrasing): once I learn a linear relationship between $Y$ and $X_1, X_2$, why can't I use it in reverse? | If X can predict Y in regression, why isn't Y guaranteed to predict X? | This is an interesting question. In the case of single variable linear regression, there is a (assumption) symmetric relationship but the same is not true for multiple linear regression.
A symmetric r | If X can predict Y in regression, why isn't Y guaranteed to predict X?
This is an interesting question. In the case of single-variable linear regression there is (by assumption) a symmetric relationship, but the same is not true for multiple linear regression.
A symmetric relationship can exist for some problems; a contrived example is training a linear regression model to behave like an AND gate.
If you ask why this is not "guaranteed", then there is a clear counterexample (proof by construction), which is training an OR gate using linear regression. You can learn a model that accurately predicts the output of the gate $Y$ given the 2 inputs $X_1, X_2$, but the reverse is not possible.
Another way to think about it is that the multivariable linear regression model learns a many-to-one mapping. And since many different points in the input can map to the same point in the output, the reverse mapping is ambiguous.
UPDATE: The other answers seem to explain why the linear-regression objective (or loss function) is not symmetric. But I think you are asking a different question, which is (paraphrasing): once I learn a linear relationship between $Y$ and $X_1, X_2$, why can't I use it in reverse? | If X can predict Y in regression, why isn't Y guaranteed to predict X?
This is an interesting question. In the case of single variable linear regression, there is a (assumption) symmetric relationship but the same is not true for multiple linear regression.
A symmetric r |
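The OR-gate counterexample can be made concrete with ordinary least squares; a sketch assuming NumPy is available:
import numpy as np

# OR gate: inputs (x1, x2) -> output y; (0,1), (1,0) and (1,1) all give y = 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

A = np.column_stack([np.ones(4), X])         # design matrix with an intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(A @ coef))                    # [0. 1. 1. 1.] after thresholding
# The reverse mapping is ambiguous: y = 1 has three different (x1, x2) preimages.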
51,874 | SVM - why quadratic programming problem? | Because the optimal separating hyperplane between classes of data
\begin{equation}
D(\mathbf{x})=\mathbf{w}^T\mathbf{x} + b = c \quad \quad -1 < c < 1,
\end{equation}
is found by minimizing the objective function
\begin{equation}
\begin{split}
Q(\mathbf{w})&=\frac{1}{2} ||\mathbf{w}||^2\\
\mathrm{s.t.} & \quad y_i(\mathbf{w}^T\mathbf{x}_i + b) \geq 1,\\
\end{split}
\end{equation}
which is linear in the inequality constraint but is a quadratic objective function due to the squared term. The square of the Euclidean norm $||\mathbf{w}||$ makes the optimization problem "quadratic programming." The quadratic objective function with inequality constraints results in a function value that is unique, but the solutions are nonunique.
There's also a definition in optimization theory:
Definition: An optimization problem for which the objective function, inequality, and equality constraints are linear is said to be a linear program. However, if the objective function is quadratic while the constraints are all linear, then the optimization problem is called a quadratic program. | SVM - why quadratic programming problem? | Because the optimal separating hyperplane between classes of data
\begin{equation}
D(\mathbf{x})=\mathbf{w}^T\mathbf{x} + b = c \quad \quad -1 < c < 1,
\end{equation}
is found by minimizing the obje | SVM - why quadratic programming problem?
Because the optimal separating hyperplane between classes of data
\begin{equation}
D(\mathbf{x})=\mathbf{w}^T\mathbf{x} + b = c \quad \quad -1 < c < 1,
\end{equation}
is found by minimizing the objective function
\begin{equation}
\begin{split}
Q(\mathbf{w})&=\frac{1}{2} ||\mathbf{w}||^2\\
\mathrm{s.t.} & \quad y_i(\mathbf{w}^T\mathbf{x}_i + b) \geq 1,\\
\end{split}
\end{equation}
which is linear in the inequality constraint but is a quadratic objective function due to the squared term. The square of the Euclidean norm $||\mathbf{w}||$ makes the optimization problem "quadratic programming." The quadratic objective function with inequality constraints results in a function value that is unique, but the solutions are nonunique.
There's also a definition in optimization theory:
Definition: An optimization problem for which the objective function, inequality, and equality constraints are linear is said to be a linear program. However, if the objective function is quadratic while the constraints are all linear, then the optimization problem is called a quadratic program. | SVM - why quadratic programming problem?
Because the optimal separating hyperplane between classes of data
\begin{equation}
D(\mathbf{x})=\mathbf{w}^T\mathbf{x} + b = c \quad \quad -1 < c < 1,
\end{equation}
is found by minimizing the obje |
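To make the quadratic-objective, linear-constraints structure tangible, here is a sketch that hands exactly this program to a generic solver; it assumes SciPy is available and uses a made-up one-dimensional, linearly separable toy dataset:
import numpy as np
from scipy.optimize import minimize

X = np.array([[1.0], [2.0], [-1.0], [-2.0]])  # toy 1-D inputs
y = np.array([1.0, 1.0, -1.0, -1.0])          # class labels

def objective(wb):            # Q(w) = (1/2)||w||^2, with wb = (w, b)
    return 0.5 * np.dot(wb[:-1], wb[:-1])

cons = [{"type": "ineq",
         "fun": lambda wb, i=i: y[i] * (X[i] @ wb[:-1] + wb[-1]) - 1.0}
        for i in range(len(y))]  # linear constraints y_i (w.x_i + b) >= 1

res = minimize(objective, x0=np.zeros(2), constraints=cons)
print(res.x)  # roughly w = 1, b = 0, putting the margins at x = +/-1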
51,875 | SVM - why quadratic programming problem? | In order to find an optimal separating hyperplane, the norm of the weight vector $||\overline{w}||$ should be minimized, subject to constraints $y_i(\overline{w} \cdot \varphi(x_i) + b) ≥ 1 − \xi_i$, $\xi_i \geqslant 0, i=1,\dots, l$ (see here).
While it's technically possible to minimize the $\ell^1$-norm $||\overline{w}|| = \sum_{i=1}^n |w_i|$ (i.e., to solve a linear programming problem) instead of the $\ell^2$-norm (a quadratic problem), the $\ell^1$ approach has a number of disadvantages compared to the $\ell^2$ one:
(a) the solutions of the $\ell^1$-norm minimization problem lack stability,
(b) the solution isn't unique,
(c) it's harder to provide a computationally efficient method for $\ell^1$-minimization, as compared to $\ell^2$-minimization.
On the other hand, while the solution of the $\ell^1$-minimization problem is more robust to outliers than that of the corresponding $\ell^2$ problem, this doesn't play a great role specifically for SVMs, since there's a very small chance for an outlier to become a support vector. | SVM - why quadratic programming problem? | In order to find an optimal separating hyperplane, the norm of the weight vector $||\overline{w}||$ should be minimized, subject to constraints $y_i(\overline{w} \cdot \varphi(x_i) + b) ≥ 1 − \xi_i$, $\xi_i \geqslant 0, i=1,\dots, l$ (see here).
In order to find an optimal separating hyperplane, the norm of the weight vector $||\overline{w}||$ should be minimized, subject to constraints $y_i(\overline{w} \cdot \varphi(x_i) + b) ≥ 1 − \xi_i$, $\xi_i \geqslant 0, i=1,\dots, l$ (see here).
While it's technically possible to minimize the $\ell^1$-norm $||\overline{w}|| = \sum_{i=1}^n |w_i|$ (i.e., to solve a linear programming problem) instead of the $\ell^2$-norm (a quadratic problem), the $\ell^1$ approach has a number of disadvantages compared to the $\ell^2$ one:
(a) the solutions of the $\ell^1$-norm minimization problem lack stability,
(b) the solution isn't unique,
(c) it's harder to provide a computationally efficient method for $\ell^1$-minimization, as compared to $\ell^2$-minimization.
On the other hand, while the solution of the $\ell^1$-minimization problem is more robust to outliers than that of the corresponding $\ell^2$ problem, this doesn't play a great role specifically for SVMs, since there's a very small chance for an outlier to become a support vector. | SVM - why quadratic programming problem?
In order to find an optimal separating hyperplane, the norm of the weight vector $||\overline{w}||$ should be minimized, subject to constraints $y_i(\overline{w} \cdot \varphi(x_i) + b) ≥ 1 − \xi_i$, |
51,876 | SVM - why quadratic programming problem? | This has nothing to do with the L1 norm. Both are the Euclidean norm. The conversion to QP is done for practical reasons, so that the gradient is continuous at the origin.
See https://math.stackexchange.com/a/439168/532462 | SVM - why quadratic programming problem? | This has nothing to do with the L1 norm. Both are the Euclidean norm. Conversion to QP is due to practical reasons so that the gradient is continuous at the origin.
See https://math.stackexchange.com/ | SVM - why quadratic programming problem?
This has nothing to do with the L1 norm. Both are the Euclidean norm. The conversion to QP is done for practical reasons, so that the gradient is continuous at the origin.
See https://math.stackexchange.com/a/439168/532462 | SVM - why quadratic programming problem?
This has nothing to do with the L1 norm. Both are the Euclidean norm. The conversion to QP is done for practical reasons, so that the gradient is continuous at the origin.
See https://math.stackexchange.com/ |
51,877 | Is it logical to use correlation between percentages | One reason that someone might assert that "you cannot run correlation on percentages" is that percentages are bounded by [0, 1], and the underlying assumption of the Pearson r test is that values are normally distributed; these are manifestly incompatible.
If your percentages are concentrated in a not-too-wide band not-too-close to 0% or 100%, I wouldn't worry about this too much.
If your percentages are all over the [0, 1] interval, or are concentrated near 0 or near 1, I would use a Spearman rho or Kendall tau correlation test instead. These tests do not depend on any underlying assumption of normality. | Is it logical to use correlation between percentages | One reason that someone might assert that "you cannot run correlation on percentages" is that percentages are bounded by [0, 1], and the underlying assumption of the Pearson r test is that values are normally distributed; these are manifestly incompatible.
One reason that someone might assert that "you cannot run correlation on percentages" is that percentages are bounded by [0, 1], and the underlying assumption of the Pearson r test is that values are normally distributed; these are manifestly incompatible.
If your percentages are concentrated in a not-too-wide band not-too-close to 0% or 100%, I wouldn't worry about this too much.
If your percentages are all over the [0, 1] interval, or are concentrated near 0 or near 1, I would do Spearman rho or Kendall tau correlation test instead. These tests to do not depend on any underlying assumption of normality. | Is it logical to use correlation between percentages
One reason that someone might assert that "you cannot run correlation on percentages" is that percentages are bounded by [0, 1], and the underlying assumption of the Pearson r test is that values are |
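A sketch of what that looks like in practice, assuming NumPy and SciPy are available and using simulated percentages piled up near the endpoints:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.beta(0.3, 0.3, size=200)                      # proportions crowding 0 and 1
y = np.clip(x + rng.normal(0, 0.1, size=200), 0, 1)

print(stats.pearsonr(x, y))    # normality-based inference is shaky here
print(stats.spearmanr(x, y))   # rank-based alternatives need no normality
print(stats.kendalltau(x, y))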
51,878 | Is it logical to use correlation between percentages | The video you linked here in one of your comments makes reference to compositional data. This would be an issue if you tried to compare percentages adding up to $100\%$, but this is not the case in your question. Regardless of whether you express the variable independent contractors as counts or a percentage, it wouldn't constrain the range of possible values of the variable offsite workers.
In terms of the count (integer-valued) nature of independent contractors, you could apply a Poisson, quasi-Poisson or negative binomial model:
I created a fictional dataset designed to fit a Poisson regression model. I'll use R.
The Poisson regression model is of the form $\log (\mathbb E[Y \vert x]) = \beta_0 + \beta_1 x$. Hence, for each value $X=x$, the response $Y$ will be distributed as a Poisson variable with mean $\lambda = \mathbb E[Y\vert x] = \exp(\beta_0 + \beta_1 x).$ We can therefore simulate a dataset suitable for a Poisson regression model as follows (please refer to this post to see the truncation trick to avoid too high or zero values for the number of independent contractors):
set.seed(0) # Setting the seed value to make findings reproducible.
co = 70 # The number of companies (firms) you have data on.
n = 1000 # Dirty trick to get "tons" of points to truncate later and end up with 70.
i = 0 # Intercept chosen to be zero hoping to simplify things.
sl = .04 # The slope or beta1 in the equation in the previous paragraph.
# Assuming the % of off-site workers varies uniformly from 0% to 100%:
offsite = runif(n, 0, 100)
mu = exp(i + sl * offsite) # Getting the means
# Generating the number of independent contractors:
indep = rpois(n, mu)
# Creating data set with offsite % and no. contractors in two columns:
dat = as.data.frame(cbind(offsite, indep))
# Truncating the data to obtain data points avoiding 0 contractors
# ...and keeping max. to < 45 (sounds like a real-life plausible max):
dat = dat[which(dat$indep > 0 & dat$indep < 45), ]
# Selecting only 70 of these truncated data points:
dat = dat[sample(nrow(dat),co), ]; rownames(dat) = NULL
We know that the fit of the Poisson model will be perfect by design, and hence, we can apply it to prove the association between the number of subcontracting companies and the percentage of telecommuters; however, in a real-life situation this is not going to be the case. Therefore, models such as negative binomial regression, which don't assume equal mean and variance, will more often be used to deal with over-dispersion. In base R we can also deal with this problem by running a quasi-Poisson regression:
> fit = glm(indep ~ offsite, family = "quasipoisson", data = dat)
> summary(fit)
Call:
glm(formula = indep ~ offsite, family = "quasipoisson", data = dat)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.96453 -0.61346 -0.08643 0.34988 2.16355
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.238500 0.108206 2.204 0.0309 *
offsite 0.036529 0.001404 26.009 <2e-16 ***
Not exactly the intercept and slope we had chosen initially, but the data has been truncated and we have only 70 points. And it shows the significant association between the percentage of off-site workers and the number of independent contractors (by design, $p \approx 0$): for each increase of $1\%$ in the number of off-site workers there will be a positive difference of $0.037$ in the log of the expected number of subcontracting firms. Or, for a one-percentage-point increase in off-site workers, there is a multiplicative ($\times e^{0.037}$) increase in the expected number of subcontractors.
These are the plots showing the relationship between the number of independent contractors and the percentage of off-site workers:
A different way to look at the relationship is through Spearman correlation (more in line with your original question). The results are very close to the default Pearson correlation as indicated in the correct answer above by David Wright:
> cor(dat$offsite, dat$indep, method="pearson")
[1] 0.90324
> cor(dat$offsite, log(dat$indep), method="spearman")
[1] 0.9510245
Given the plot on the left above, it is interesting to try a log transformation of the dependent variable and run an OLS regression (note that this is not equivalent to a Poisson regression, or a negative binomial). If we do so, we find that the slope:
> lm(log(indep) ~ offsite, data = dat)$coef[2]
offsite
0.03926557
is equivalent to
> cor(dat$offsite, log(dat$indep)) * sd(log(dat$indep)) / sd(dat$offsite)
[1] 0.03926557
because
$$r = \beta_1\sqrt{\frac{\sum_{i=1}^n(x_i-\bar x)^2}{\sum_{i=1}^n(y_i-\bar y)^2}}=\beta_1\sqrt{\frac{S_{xx}}{S_{yy}}}$$ | Is it logical to use correlation between percentages | The video you linked here in one of your comments makes reference to compositional data. This would be an issue if you tried to compare percentages adding up to $100\%$, but this is not the case in yo | Is it logical to use correlation between percentages
The video you linked here in one of your comments makes reference to compositional data. This would be an issue if you tried to compare percentages adding up to $100\%$, but this is not the case in your question. Regardless of whether you express the variable independent contractors as counts or a percentage, it wouldn't constrain the range of possible values of the variable offsite workers.
In terms of the count (integer-valued) nature of independent contractors, you could apply a Poisson, quasi-Poisson or negative binomial model:
I created a fictional dataset designed to fit a Poisson regression model. I'll use R.
The Poisson regression model is of the form $\log (\mathbb E[Y \vert x]) = \beta_0 + \beta_1 x$. Hence, for each value $X=x$, the response $Y$ will be distributed as a Poisson variable with mean $\lambda = \mathbb E[Y\vert x] = \exp(\beta_0 + \beta_1 x).$ We can therefore simulate a dataset suitable for a Poisson regression model as follows (please refer to this post to see the truncation trick to avoid too high or zero values for the number of independent contractors):
set.seed(0) # Setting the seed value to make findings reproducible.
co = 70 # The number of companies (firms) you have data on.
n = 1000 # Dirty trick to get "tons" of points to truncate later and end up with 70.
i = 0 # Intercept chosen to be zero hoping to simplify things.
sl = .04 # The slope or beta1 in the equation in the previous paragraph.
# Assuming the % of off-site workers varies uniformly from 0% to 100%:
offsite = runif(n, 0, 100)
mu = exp(i + sl * offsite) # Getting the means
# Generating the number of independent contractors:
indep = rpois(n, mu)
# Creating data set with offsite % and no. contractors in two columns:
dat = as.data.frame(cbind(offsite, indep))
# Truncating the data to obtain data points avoiding 0 contractors
# ...and keeping max. to < 45 (sounds like a real-life plausible max):
dat = dat[which(dat$indep > 0 & dat$indep < 45), ]
# Selecting only 70 of these truncated data points:
dat = dat[sample(nrow(dat),co), ]; rownames(dat) = NULL
We know that the fit of the Poisson model will be perfect by design, and hence, we can apply it to prove the association between the number of subcontracting companies and the percentage of telecommuters; however, in a real-life situation this is not going to be the case. Therefore, models such as negative binomial regression, which don't assume equal mean and variance, will more often be used to deal with over-dispersion. In base R we can also deal with this problem by running a quasi-Poisson regression:
> fit = glm(indep ~ offsite, family = "quasipoisson", data = dat)
> summary(fit)
Call:
glm(formula = indep ~ offsite, family = "quasipoisson", data = dat)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.96453 -0.61346 -0.08643 0.34988 2.16355
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.238500 0.108206 2.204 0.0309 *
offsite 0.036529 0.001404 26.009 <2e-16 ***
Not exactly the intercept and slope we had chosen initially, but the data has been truncated and we have only 70 points. And it shows the significant association between the percentage of off-site workers and the number of independent contractors (by design, $p \approx 0$): for each increase of $1\%$ in the number of off-site workers there will be a positive difference of $0.037$ in the log of the expected number of subcontracting firms. Or, for a one-percentage-point increase in off-site workers, there is a multiplicative ($\times e^{0.037}$) increase in the expected number of subcontractors.
These are the plots showing the relationship between the number of independent contractors and the percentage of off-site workers:
A different way to look at the relationship is through Spearman correlation (more in line with your original question). The results are very close to the default Pearson correlation as indicated in the correct answer above by David Wright:
> cor(dat$offsite, dat$indep, method="pearson")
[1] 0.90324
> cor(dat$offsite, log(dat$indep), method="spearman")
[1] 0.9510245
Given the plot on the left above, it is interesting to try a log transformation of the dependent variable and run an OLS regression (note that this is not equivalent to a Poisson regression, or a negative binomial). If we do so, we find that the slope:
> lm(log(indep) ~ offsite, data = dat)$coef[2]
offsite
0.03926557
is equivalent to
> cor(dat$offsite, log(dat$indep)) * sd(log(dat$indep)) / sd(dat$offsite)
[1] 0.03926557
because
$$r = \beta_1\sqrt{\frac{\sum_{i=1}^n(x_i-\bar x)^2}{\sum_{i=1}^n(y_i-\bar y)^2}}=\beta_1\sqrt{\frac{S_{xx}}{S_{yy}}}$$ | Is it logical to use correlation between percentages
The video you linked here in one of your comments makes reference to compositional data. This would be an issue if you tried to compare percentages adding up to $100\%$, but this is not the case in yo |
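The closing identity, that the OLS slope equals the correlation rescaled by the ratio of standard deviations, is easy to check numerically. A quick sketch in Python (NumPy assumed), independent of the R data above:
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)

slope = np.polyfit(x, y, 1)[0]
r = np.corrcoef(x, y)[0, 1]
print(slope, r * y.std(ddof=1) / x.std(ddof=1))  # the two agree up to rounding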
51,879 | Is it logical to use correlation between percentages | Where did you read that you cannot use correlation between percentages? I think that the meaning of the correlation coefficient will be preserved (e.g., "If X increases, Y is likely to increase." or "High values of X are associated with high values of Y."), so correlation is fair game.
One thing to watch out for may be ecological correlation. But as long as you keep your interpretation in check, you should be ok.
Your question feels more opinion based - so please take my answer in aggregate with others :). | Is it logical to use correlation between percentages | Where did you read that you cannot use correlation between percentages? I think that the meaning of the correlation coefficient will be preserved (e.g "If X increases, Y is likely to increase." or "H | Is it logical to use correlation between percentages
Where did you read that you cannot use correlation between percentages? I think that the meaning of the correlation coefficient will be preserved (e.g., "If X increases, Y is likely to increase." or "High values of X are associated with high values of Y."), so correlation is fair game.
One thing to watch out for may be ecological correlation. But as long as you keep your interpretation in check, you should be ok.
Your question feels more opinion based - so please take my answer in aggregate with others :). | Is it logical to use correlation between percentages
Where did you read that you cannot use correlation between percentages? I think that the meaning of the correlation coefficient will be preserved (e.g "If X increases, Y is likely to increase." or "H |
51,880 | Quadratic effect in OLS regression | You don't have to use a linear term to use a quadratic, but it's usually a good idea. The only situation I wouldn't use it is when your theory tells you that you have a quadratic process. For instance, if you somehow are measuring kinetic energy as a function of speed, then there's no linear term in theory:
$$e=m\frac{v^2}{2}$$
These are rare cases when there's no need for linear terms. Otherwise, it's better to keep them. If you don't have a solid theory, keep them. | Quadratic effect in OLS regression | You don't have to use a linear term to use a quadratic, but it's usually a good idea. The only situation I wouldn't use it is when your theory tells you that you have a quadratic process. For instance | Quadratic effect in OLS regression
You don't have to use a linear term to use a quadratic, but it's usually a good idea. The only situation I wouldn't use it is when your theory tells you that you have a quadratic process. For instance, if you somehow are measuring kinetic energy as a function of speed, then there's no linear term in theory:
$$e=m\frac{v^2}{2}$$
These are rare cases when there's no need for linear terms. Otherwise, it's better to keep them. If you don't have a solid theory, keep them. | Quadratic effect in OLS regression
You don't have to use a linear term to use a quadratic, but it's usually a good idea. The only situation I wouldn't use it is when your theory tells you that you have a quadratic process. For instance |
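The kinetic-energy case can be imitated numerically: when the true process is purely quadratic, the fitted linear coefficient hovers near zero anyway. A sketch assuming NumPy, with made-up noise:
import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(0.0, 10.0, 50)
e = 0.5 * v**2 + rng.normal(scale=2.0, size=v.size)  # pure quadratic process + noise

c2, c1, c0 = np.polyfit(v, e, 2)  # fit e = c2*v^2 + c1*v + c0
print(c2, c1, c0)                 # c1 sits near 0, as the theory says it should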
51,881 | Quadratic effect in OLS regression | In my experience, I would say yes, you should always adjust for lower-level terms when fitting polynomial trends. This is the approach that is advocated in most of the biostatistics textbooks I've encountered. The reason for this is that the terms are guaranteed to have the correct interpretation. For instance, if you omit a linear (first-order) term when fitting a quadratic effect, it is not guaranteed that the interpretation of the coefficient is the slope of the quadratic trend curve. You are constraining the fit so that the slope of the fit through the origin is 0. But when on Earth are we so confident that this is the case? Small measurement calibrations that bias or offset measurements by a single unit can catastrophically attenuate a quadratic slope. Relative to the small amount of power we spend to estimate the linear term, I advocate for using it always.
With respect to your second point, this argues all the more for including the linear term. I agree centering the value on 3 is nice because it gives a 0 and 5 star rating the same influence by symmetry (but the hypothesis that a certain rating is more "helpful" depends on what you estimate. You might estimate a negative quadratic trend on 3 star suggesting that moderate reviews are the most helpful... that's the point about being agnostic with statistics). | Quadratic effect in OLS regression | In my experience, I would say yes you would always adjust for lower level terms when fitting polynomial trends. This is the approach that is advocated in most of the biostatistics textbooks I've encou | Quadratic effect in OLS regression
In my experience, I would say yes, you should always adjust for lower-level terms when fitting polynomial trends. This is the approach that is advocated in most of the biostatistics textbooks I've encountered. The reason for this is that the terms are guaranteed to have the correct interpretation. For instance, if you omit a linear (first-order) term when fitting a quadratic effect, it is not guaranteed that the interpretation of the coefficient is the slope of the quadratic trend curve. You are constraining the fit so that the slope of the fit through the origin is 0. But when on Earth are we so confident that this is the case? Small measurement calibrations that bias or offset measurements by a single unit can catastrophically attenuate a quadratic slope. Relative to the small amount of power we spend to estimate the linear term, I advocate for using it always.
With respect to your second point, this argues all the more for including the linear term. I agree centering the value on 3 is nice because it gives a 0 and 5 star rating the same influence by symmetry (but the hypothesis that a certain rating is more "helpful" depends on what you estimate. You might estimate a negative quadratic trend on 3 star suggesting that moderate reviews are the most helpful... that's the point about being agnostic with statistics). | Quadratic effect in OLS regression
In my experience, I would say yes you would always adjust for lower level terms when fitting polynomial trends. This is the approach that is advocated in most of the biostatistics textbooks I've encou |
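A small sketch of the centering idea for a 1-5 star rating, assuming NumPy; after centering on 3, the linear coefficient measures the slope at a moderate rating rather than at a nonexistent 0-star rating:
import numpy as np

stars = np.array([1, 2, 3, 4, 5], dtype=float)
c = stars - 3.0                                       # center on the 3-star midpoint
design = np.column_stack([np.ones_like(c), c, c**2])  # intercept, linear, quadratic
print(design)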
51,882 | In residual sum of squares, why do we need to square? [duplicate] | Squaring the residuals changes the shape of the loss function. In particular, large errors are penalized more with the square of the error. Imagine two cases, one where you have one point with an error of 0 and another with an error of 10, versus another case where you have two points with an error of 5. The linear error function will treat both of these as having an equal sum of residuals, while the squared error will penalize the case with the large error more.
With a squared residual, your solution will prefer many small errors to having any large errors. The linear residual is indifferent, not caring whether the total error all comes from one sample or is spread out as a sum of many tiny errors.
You could also raise the error to a higher power to penalize large errors even more. Summing the tenth power of the residuals, for example, would likely result in a solution that has small errors for most points, but no large errors for any one point.
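A quick numeric check of the two cases described above (pure illustration):
import numpy as np
one_large = np.array([0.0, 10.0])   # one point off by 10, one perfect
two_medium = np.array([5.0, 5.0])   # two points off by 5
for name, e in [("one large error", one_large), ("two medium errors", two_medium)]:
    print(name, "sum |e| =", np.abs(e).sum(), "sum e^2 =", (e ** 2).sum())
# the linear loss ties both cases at 10; the squared loss scores them 100 vs 50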
51,883 | In residual sum of squares, why do we need to square? [duplicate] | @ocram's answer is good, but one point I'd add is the connection between least squares and maximum likelihood estimation. If we have a regression model of the form $y_i = \beta_0 + \sum_{j=1}^{p} \beta_j x_{ij} + \epsilon_i$ where the $\epsilon_i$ are independent normal$(0, \sigma^2)$ random variables then the likelihood function becomes
$$
\mathcal{L}(\beta) = \frac{1}{(2 \pi)^{n/2} \sigma^n} \exp \left ( - \frac{\sum_{i=1}^{n} (y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij})^2}{2 \sigma^2} \right ) .
$$
If we want to maximize this as a function of $\beta$ that's equivalent to minimizing $\sum_{i=1}^{n} (y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij})^2$, and this is nothing but the least squares criterion.
It's also interesting to note that means themselves are least squares estimates in the univariate case, so if we agree that means are good things to look at then least squares makes sense.
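As a quick numeric check of that last remark (data made up for illustration), the constant that minimizes the sum of squared residuals is indeed the sample mean:
import numpy as np
y = np.array([1.0, 2.0, 4.0, 7.0])
grid = np.linspace(0.0, 8.0, 8001)                     # candidate constants c
sse = ((y[:, None] - grid[None, :]) ** 2).sum(axis=0)  # SSE for each candidate
print("argmin of SSE:", grid[sse.argmin()])            # 3.5
print("sample mean:  ", y.mean())                      # 3.5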
51,884 | In residual sum of squares, why do we need to square? [duplicate] | If you do not square, a negative residual (below the line) can offset the impact of a positive residual (above the line). Squaring is a remedy. Taking the absolute values of the residuals provides an alternative. But squaring is much easier to handle from a mathematical point of view (cf. derivatives).
51,885 | Why does this neural network in keras fail so badly? | Here are a few observations:
Your first layer of a single sigmoid neuron is a big bottleneck. Unless you are very lucky and the neuron is initialised to map your input onto the near-linear central part of the sigmoid, you end up with unnecessary information loss and a vanishing gradient in the first layer right from the start. You maybe added this because a basic ANN is usually described as having 3 layers: input, hidden, and output, but the input layer is not actually modelled. It is better to think in terms of the number of sets of weights needed: input to hidden, and hidden to output.
You only run a single epoch. That is not enough to learn the full function. At each epoch the weights are only updated a little bit to follow the local gradient; this needs to be repeated many more times to converge to the final result.
In this case your batch size is fairly big, which means fewer updates are done per epoch. Usually smaller batch sizes can be more efficient, but this can differ per problem.
After removing the first layer, increasing nb_epoch to 100 and decreasing batch_size to 10, I get a much better result.
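For reference, a minimal Python/Keras sketch of the kind of setup the fix implies; the data, layer width, and optimizer here are assumptions, not the original poster's exact code (nb_epoch is the old Keras name for what is now epochs):
import numpy as np
from tensorflow import keras
x = np.random.uniform(-10, 10, size=(1000, 1))
y = np.sin(x)                                   # an arbitrary smooth target function
model = keras.Sequential([
    keras.layers.Dense(50, activation="sigmoid", input_shape=(1,)),  # hidden layer, no 1-neuron bottleneck
    keras.layers.Dense(1),                                           # linear output for regression
])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=100, batch_size=10, verbose=0)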
51,886 | Normalizing logistic regression coefficients? | The short answer is that normalizing the coefficients will not affect the predictions, but it will mess up the estimated class probabilities. Don't do it.
The coefficients don't represent the odds ratios but rather the feature weights. They can be negative. If a coefficient is strongly positive, it means that the corresponding feature is very much correlated with the positive class. If it is strongly negative, it means that the feature is strongly correlated with the negative class. If the coefficient is close to zero, the feature is not correlated much with either the positive or the negative class.
So if you want to compare the importance of each feature, you should compare the absolute values of the coefficients (and you can normalize them just for convenience, if you want, but don't use these normalized absolute coefficients to make predictions, only use them to compare feature importance). (Edit: this assumes that the features have been normalized prior to training)
This is probably all you need to know. Read on if you want to understand what would happen if you tried to normalize the coefficients.
The decision function for logistic regression is:
$h_\mathbf{\theta}(\mathbf{x}) = \sigma(\sum\limits_{i=0}^{n}\theta_i x_i)$
where $\sigma(t) = \dfrac{1}{1 + \exp(-t)}$ (the logistic function)
and $\mathbf{\theta}$ is the parameter vector, and $\mathbf{x}$ is the feature vector (including a bias term $x_0 = 1$) and $n$ is the number of features.
The model's prediction $\hat{y}$ for the instance $\mathbf{x}$ is given by:
$
\hat{y} =
\begin{cases}
0 & \text{ if }h_\mathbf{\theta}(\mathbf{x}) < 0.5\\
1 & \text{ if }h_\mathbf{\theta}(\mathbf{x}) \ge 0.5
\end{cases}
$
Notice that $\sigma(t) \ge 0.5$ when $t \ge 0$, and $\sigma(t) < 0.5$ when $t < 0$ so the prediction simplifies to:
$
\hat{y} =
\begin{cases}
0 & \text{ if }\sum\limits_{i=0}^{n}\theta_i x_i < 0\\
1 & \text{ if }\sum\limits_{i=0}^{n}\theta_i x_i \ge 0
\end{cases}
$
If you normalize the parameter vector, you get the new parameter vector $\bar{\mathbf{\theta}} = \dfrac{\mathbf{\theta}}{K}$. Since the coefficients can be negative, it would not make sense to divide them by the sum of coefficients (the sum could be negative or zero). So instead, let's define $K$ as the range of values (anyway, even if you choose another method for normalization, it does not change what follows).
$K = \underset{i}\max(\theta_i) - \underset{i}\min(\theta_i)$
Look at what happens to the sum used for predictions:
$
\sum\limits_{i=0}^{n}\bar{\theta}_i x_i = \sum\limits_{i=0}^{n}\dfrac{\theta_i}{K} x_i = \dfrac{1}{K}\sum\limits_{i=0}^{n}\theta_i x_i
$
Everything just got multiplied by the constant $\dfrac{1}{K}$.
If $K > 0$, the predictions don't change a bit, since $\dfrac{1}{K} \sum\limits_{i=0}^{n}\theta_i x_i$ has the same sign as $\sum\limits_{i=0}^{n}\theta_i x_i$. If $K = 0$ (which can only happen if all coefficients are equal), then $K$ is not defined (you can't normalize the coefficients). If you use another normalization technique, and end up with $K<0$, then all predictions get reversed, which is probably not a good idea!
So normalizing the coefficients (by dividing them by their range of values) will not affect predictions. However, it will mess up the decision function $h_\mathbf{\theta}$. This function is used to estimate the probability of the positive class. By multiplying all coefficients by $\dfrac{1}{K}$, you will end up making the same predictions (since the sign is not affected), but the estimated probability will be higher or lower depending on the value of $K$. For example, if $K = 2$ then:
$h_\mathbf{\bar{\theta}}(\mathbf{x}) = \sigma(\sum\limits_{i=0}^{n}\bar{\theta}_i x_i) = \sigma\left(\dfrac{1}{2} \sum\limits_{i=0}^{n}\theta_i x_i\right)$
This can't be simplified much: if you plot the curve of $\sigma(t)$ and compare it to $\sigma(\frac{1}{2}t)$, you will find that this updated model will be much less confident about its predictions. For no good reason.
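A tiny numeric illustration of the argument (made-up coefficients and data):
import numpy as np
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(5), rng.normal(size=(5, 2))])  # bias term plus two features
theta = np.array([0.3, 2.0, -1.5])
K = 2.0
p = sigmoid(X @ theta)            # original estimated probabilities
p_bar = sigmoid(X @ (theta / K))  # probabilities after dividing the coefficients by K
print((p >= 0.5) == (p_bar >= 0.5))        # predictions are identical
print(np.round(p, 3), np.round(p_bar, 3))  # probabilities are pulled toward 0.5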
51,887 | Normalizing logistic regression coefficients? | The correct way to compare coefficients on the same scale is to rescale the predictors themselves. A common transformation is to turn continuous predictors into Z-scores by subtracting the sample mean and dividing by the sample standard deviation. (Gelman[1] recommends two standard deviations)
This will change the values of the coefficients, because they are now measured in "standard deviations" rather than "units" of each input. But this is in fact your goal: since the coefficients are all measured on the same scale, they will all be directly comparable. This will not change the predictions because the predictors enter into logistic regression linearly.
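A short scikit-learn sketch of this approach (synthetic data; divide the standardized predictors by 2 afterwards if you want the two-standard-deviation version):
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3)) * np.array([1.0, 10.0, 100.0])         # predictors on wildly different scales
y = (X[:, 0] + 0.1 * X[:, 1] + rng.normal(size=200) > 0).astype(int)
X_std = StandardScaler().fit_transform(X)           # z-score each predictor
coef = LogisticRegression().fit(X_std, y).coef_[0]  # weights now on a common scale
print(np.round(coef, 2))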
[1]: Gelman, A. (2008), Scaling regression inputs by dividing by two standard deviations. Statistics in Medicine, 27: 2865–2873. doi:10.1002/sim.3107
http://www.stat.columbia.edu/~gelman/research/published/standardizing7.pdf
51,888 | Normalizing logistic regression coefficients? | To show the relative contribution of each predictor, and to be able to handle the common case where the predictor is represented by multiple variables (indicator variables or nonlinear terms), I prefer to make a dot chart of the chunk test partial $\chi^2$ statistic for each predictor. You can also get a meaningful chart by computing the proportion of the total model $\chi^2$ that is explained by each predictor and plotting that. In the R rms package these things are simple: plot(anova(fit)).
51,889 | Does "improper" posterior or prior refer to a density function that does not integrate to 1 or to one that does not integrate to a finite value? | Strictly speaking, an everywhere non-negative prior that integrated to some finite positive value other than 1 would not be a proper density and so arguably could in that sense be referred to as "improper", but since
(a) if it integrates to $k$, say, it's easy enough to scale if you need that
(b) frequently in Bayesian work we're only dealing with forms up to normalizing constants anyway, such as: (i) because of the need to integrate the denominator in Bayes' theorem so we'd just be applying a different scaling constant to that calculation (e.g. if we recognize the form of the posterior we can work out the right normalizing constant directly), or (ii) the need for explicit normalizing is removed by other considerations (such as if we're generating via accept-reject, say, we don't necessarily need the explicit normalizing constant)
So a prior for which we failed to give the normalizing constant to make it integrate to 1 doesn't usually pose any problem since we can get the result as if we had done so.
As a result, generally when people say "improper" they literally mean one with a non-finite integral, for which no "correct" normalizing constant exists.
51,890 | Does "improper" posterior or prior refer to a density function that does not integrate to 1 or to one that does not integrate to a finite value? | If a density function integrates to infinity, then it is termed "improper". A pdf is termed "proper" if it integrates to a finite quantity.
Ideally, a proper function will integrate to 1. However, sometimes density functions are known only up to a normalizing constant. That is, $p(x) = c f(x)$, where $p(x)$ integrates to 1, but $c$ is unknown. In such a case, we might let $f(x)$ represent a "proper" density function, where it is understood that when we refer to $f(x)$ as a pdf, we are in fact referring to $p(x)$.
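For instance (a standard textbook example, not part of the original answer), a flat prior on the whole real line is improper in this strict sense, whereas a mis-scaled density on a bounded interval is not:
$$\pi(\theta) \propto 1 \text{ on } \mathbb{R}: \quad \int_{-\infty}^{\infty} 1 \, d\theta = \infty \quad \text{(improper)},$$
$$f(\theta) = 2 \text{ on } [0, 1]: \quad \int_{0}^{1} 2 \, d\theta = 2 < \infty \quad \text{(rescale by } c = 1/2 \text{ to get a proper pdf)}.$$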
51,891 | Making up for a low sample size by increasing the number of observations | Another way to look at this is to consider that all the measures you take reflect a systematic source of variance (which you are interested in) compounded by various sources of errors (i.e. variance you are not interested in).
Depending on your objectives and the relative magnitude of these sources of errors, different designs might make sense, but you can't learn about or compensate for one source of error (e.g. individual differences between participants) by increasing the number of observations in another facet of the design (e.g. repeated observations of the same participant over time).
Generalizability theory is a way to formalize this insight (and the source of the terminology I used in this answer).
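A quick simulation of this point (all variance components are assumed values for illustration): extra repeats per participant cannot shrink the between-participant error term; only extra participants can.
import numpy as np
def se_of_grand_mean(n_subjects, n_repeats, sd_between=10.0, sd_within=2.0,
                     n_sims=5000, seed=4):
    rng = np.random.default_rng(seed)
    subj = rng.normal(0.0, sd_between, (n_sims, n_subjects))             # stable person effects
    noise = rng.normal(0.0, sd_within, (n_sims, n_subjects, n_repeats))  # trial-to-trial error
    return (subj[:, :, None] + noise).mean(axis=(1, 2)).std()
print(se_of_grand_mean(5, 1))    # few subjects, one trial each
print(se_of_grand_mean(5, 100))  # 100 repeats per subject: barely any improvement
print(se_of_grand_mean(50, 1))   # ten times the subjects: a large improvement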
51,892 | Making up for a low sample size by increasing the number of observations | A simple counter-example is probably enough for you to figure out the answer yourself. Say you have to conduct a study on human height, but you do not have enough time for a full-scale survey. In this case you decide to ask your roommate 1000 times about his height.
51,893 | Making up for a low sample size by increasing the number of observations | Asking the same person to perform the same trial will increase your information about within-person variance but not between-person variance. If within-person variance is what you're interested in, then the logic makes sense. If it's not what you're interested in, then it doesn't.
51,894 | How to calculate SE of an odds ratio | @FrankHarrell is right that the standard error for an odds ratio is a problematic number in the sense that you can do better by testing on the corresponding log(odds ratio) scale, as the sampling distribution of the log(odds ratio) is more likely to be normally distributed.
Nonetheless, the standard error of the odds ratio does exist, even if it is not that useful. One possible estimate is to use the delta method to move from the standard error of the log(odds ratio) to an approximation of the standard error of the odds ratio.
$\sqrt{1/a + 1/b + 1/c + 1/d}\times\frac{a\times d}{b\times c}$, where $a$, $b$, $c$, and $d$ are the four cell counts of the 2x2 table (so $\frac{a\times d}{b\times c}$ is the estimated odds ratio).
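In code (the 2x2 cell counts below are hypothetical, purely to show the arithmetic):
import numpy as np
a, b, c, d = 20, 30, 15, 35                 # hypothetical 2x2 table counts
or_hat = (a * d) / (b * c)                  # estimated odds ratio
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)  # SE on the log scale
se_or = or_hat * se_log_or                  # delta-method SE of the OR itself
print(round(or_hat, 3), round(se_log_or, 3), round(se_or, 3))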
51,895 | How to calculate SE of an odds ratio | OR is not a valid quantity to compute a SE of in the sense that it cannot have a symmetric distribution. Applying +/- SE to it may lead to negative ORs.
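This is why intervals are usually built on the log scale and then exponentiated; a sketch with hypothetical counts:
import numpy as np
a, b, c, d = 20, 30, 15, 35            # hypothetical 2x2 table counts
log_or = np.log((a * d) / (b * c))
se = np.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se)
print(round(lo, 3), round(hi, 3))      # asymmetric around the OR, and never negative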
51,896 | How to calculate SE of an odds ratio | Instead of a standard error why not compute the standard deviation of the posterior distribution of the OR? You can solve for it numerically very easily using an MCMC sampler.
Here is some R and JAGS code to do so.
################################################################
### ###
### Contingency Table Analysis for Obesity Data ###
### ###
################################################################
# Required Packages
library("ggplot2")
library("runjags")
library("parallel") # sets parallelization for MCMC
# set up the model
mod = 'model {
################################################
### Greater than 80th BMI Percentile ###
################################################
# marginal likelihood functions
n11_G ~ dbin(pi_one_G, n1_plus_G)
n21_G ~ dbin(pi_two_G, n2_plus_G)
#priors
pi_one_G ~ dbeta(1,1)
pi_two_G ~ dbeta(1,1)
# transformations
rho_G <- pi_two_G/pi_one_G
theta_G <- pi_two_G*(1-pi_one_G)/(pi_one_G*(1-pi_two_G))
delta_G <- pi_two_G-pi_one_G
}'
# set up the data
Dat = list(n1_plus_G = 108,
n2_plus_G = 88,
n11_G = 68,
n21_G = 44)
# Monitor these variables
Vars = c("pi_one_G","pi_two_G","rho_G","theta_G","delta_G")
# set up MCMC parameters
inits1=list(.RNG.name= "base::Wichmann-Hill",
.RNG.seed= 12341)
inits2=list(.RNG.name= "base::Marsaglia-Multicarry",
.RNG.seed= 12342)
inits3=list(.RNG.name= "base::Super-Duper",
.RNG.seed= 12343)
inits4=list(.RNG.name= "base::Mersenne-Twister",
.RNG.seed= 12344)
chains = 4
burn = 5000
samp = 10000
adapt = 5000
thin = 1
# parallel chains
cl = makeCluster(4)
# MCMC estimation
HjagsOut = run.jags(model = mod, monitor = Vars, data=Dat, n.chains=chains, thin = thin,
burnin = burn, sample = samp, adapt=adapt, method="rjparallel",method.options=list(cl=cl),
inits=list(inits1,inits2,inits3,inits4))
#summarize results
summary(HjagsOut)
plot(HjagsOut, layout=c(4,2))
The odds ratio parameter ($\theta_\text{G}$) is simply a function of the samples from the binomial parameters. This of course assumes a certain study design. In this case I was looking at the difference in children's BMI percentile group (80th and above, or below 80th) between a control and an experimental group, pre and post intervention treatment. Therefore, the row totals (the number of children in the experimental group and the control group, respectively) were fixed.
The beta(1,1) prior is equivalent to a Uniform(0,1) prior and could easily be changed to the Jeffreys beta(0.5,0.5) prior or anything you desire.
51,897 | What does "Virgin Data" mean? | When the virgin flag is set to FALSE, it indicates that all data in the training and testing sets have corresponding labels.
When the virgin flag is set to TRUE, it indicates that the testing set is unclassified data with no known true values.
51,898 | What does "Virgin Data" mean? | The answer by @pratik_m is correct.
As a historical note, this terminology comes from Laver et al. 2003. Although it's not stated explicitly, Laver et al. use 'virgin' to refer to a document that a) has no target value because b) it is out of sample. What makes things confusing is that in the paper (though not in RTextTools) the target value is not actually a label but a real-valued position. Also, while all out-of-sample documents will lack a target value, not all documents lacking a target value must be out-of-sample. (Lowe 2008 shows that if the missing target values are treated as in-sample, then the algorithm in the paper computes a correspondence analysis with a subset of row scores fixed.)
51,899 | Weird fitted values/residuals plot | You can answer the question for yourself with simple mathematics. If observed $y \ge 0$ and $\hat y$ denotes fitted $y$, then residual $e = y - \hat y$ must be $\ge -\hat y$. The line $e = - \hat y$ is thus a lower limit on your residuals. Despite your unconventional axis choice, it is clear that your data follow suit.
The underlying problem is presumably use of a standard linear model on data not suited to such. One way forward is a log-linear or Poisson(-like) model: (fortuitously but fortunately for the OP as a Stata user) there is a Stata-rich explanation in this blog posting. The posting should be of considerable interest to many users of statistics, however.
P.S. A standard residual plot has residuals on the vertical axis and fitted or predicted on the horizontal axis. The choice of axes is not here an arbitrary convention. A horizontal line indicating zero residuals is the natural reference line, as indicating behaviour matching a perfect model. As emphasised often by J.W. Tukey and others, the best references are linear, and the best linear references are horizontal, in the sense of being easiest to think about. In Stata there is a built-in post-estimation rvfplot for use after regress.
P.P.S. The graph flags a Stata user. Naturally use of Stata is quite secondary here to the main question.
51,900 | Weird fitted values/residuals plot | There are two major aspects I see in the plot that I expect you might wonder about.
(I took the liberty of flipping your plot about to the way I'm more used to looking at them, with the random quantity on the y-axis.)
The first aspect is what looks like a hard lower bound on the y-values (which is presumably 0), as you suggested.
The second is the fan-shape ("$<$") in the residuals. The two are related issues.
The spread seems to be linear in the mean - indeed, I'd guess proportional to it, but it's a little hard to tell from this plot, since your model looks like it's also biased at 0.
In that case, variance is proportional to the square of the mean, which suggests either taking logs (working with log-earnings would be a pretty common choice) or fitting a model with variance proportional to mean-squared (such as a Gamma GLM).
I disagree with Nick on this* - a Poisson-like model is unsuitable; a quasi-Poisson has variance proportional to the mean, or standard deviation proportional to the square root of the mean, so its residual plot would look more parabolic. This one doesn't. As is common with financial data, the standard deviation is approximately proportional to the mean -- indeed it would be somewhat surprising if it were not, since it would imply that it would matter whether you worked in dollars or thousands of dollars.
* Or perhaps I don't, since it seems we were having more of a difference of terminology than substance.
If you have exact zeros in your data, neither of those suggestions would be suitable (at least not without some modification), but there are also zero-inflated models.
Working with a more appropriate model for the variance will likely improve other aspects of your model as well.
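A sketch of the Gamma-GLM alternative mentioned above, on simulated positive data (all parameters invented for illustration); the log link keeps fitted values positive, and the variance grows with the square of the mean:
import numpy as np
import statsmodels.api as sm
rng = np.random.default_rng(3)
x = rng.uniform(0, 2, 500)
mu = np.exp(1.0 + 1.5 * x)                # mean on the original (positive) scale
y = rng.gamma(shape=4.0, scale=mu / 4.0)  # standard deviation proportional to the mean
X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.params)                         # recovers roughly (1.0, 1.5)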