idx (int64, 1–56k) | question (stringlengths 15–155) | answer (stringlengths 2–29.2k, ⌀) | question_cut (stringlengths 15–100) | answer_cut (stringlengths 2–200, ⌀) | conversation (stringlengths 47–29.3k) | conversation_cut (stringlengths 47–301)
---|---|---|---|---|---|---|
54,001 | Continuity correction in proportion test | In order to consider what you asked, without additional material, consider the $2\times 2$ table TAB as follows:
TAB = rbind(c(80,90), c(20,15))
TAB
[,1] [,2]
[1,] 80 90
[2,] 20 15
A chi-squared test of independence with the Yates
correction is as follows:
chisq.test(TAB)
Pearson's Chi-squared test
with Yates' continuity correction
data: TAB
X-squared = 0.81215, df = 1, p-value = 0.3675
In R, prop.test compares proportions $80/100$ and
$90/105.$ A two-sided test is equivalent to the
chi-squared test above--if the 'continuity correction'
is used. Notice that the chi-squared statistic and the P-value are exactly the same. [The procedure prop.test gives more
detail.]
prop.test(c(80,90), c(100,105), cor=T)
2-sample test for equality of proportions
with continuity correction
data: c(80, 90) out of c(100, 105)
X-squared = 0.81215, df = 1, p-value = 0.3675
alternative hypothesis: two.sided
95 percent confidence interval:
-0.16998808 0.05570236
sample estimates:
prop 1 prop 2
0.8000000 0.8571429
For sufficiently large counts, Yates' correction is not
needed (and, in my opinion, it is too conservative). If parameter
cor=F is used to disable the correction, the chi-squared
test is as follows:
chisq.test(TAB, cor=F)
Pearson's Chi-squared test
data: TAB
X-squared = 1.1813, df = 1, p-value = 0.2771
Similarly, if cor=F is used, prop.test gives
the same test statistic and P-value as just above.
prop.test(c(80,90), c(100,105), cor=F)
2-sample test for equality of proportions
without continuity correction
data: c(80, 90) out of c(100, 105)
X-squared = 1.1813, df = 1, p-value = 0.2771
alternative hypothesis: two.sided
95 percent confidence interval:
-0.16022617 0.04594046
sample estimates:
prop 1 prop 2
0.8000000 0.8571429
Note: Simulated test statistics in case of expected counts
below 5, and the use of Fisher's exact test are interesting
topics, but have no direct connection with use or non-use of
the Yates correction. | Continuity correction in proportion test | In order to consider what you asked, without additional material, consider the $2\times 2$ table TAB as follows:
TAB = rbind(c(80,90), c(20,15))
TAB
[,1] [,2]
[1,] 80 90
[2,] 20 15
A chi | Continuity correction in proportion test
In order to consider what you asked, without additional material, consider the $2\times 2$ table TAB as follows:
TAB = rbind(c(80,90), c(20,15))
TAB
[,1] [,2]
[1,] 80 90
[2,] 20 15
A chi-squared test of independence with the Yates
correction is as follows:
chisq.test(TAB)
Pearson's Chi-squared test
with Yates' continuity correction
data: TAB
X-squared = 0.81215, df = 1, p-value = 0.3675
In R, prop.test compares proportions $80/100$ and
$90/105.$ A two-sided test is equivalent to the
chi-squared test above--if the 'continuity correction'
is used. Notice that the chi-squared statistic and the P-value are exactly the same. [The procedure prop.test gives more
detail.]
prop.test(c(80,90), c(100,105), cor=T)
2-sample test for equality of proportions
with continuity correction
data: c(80, 90) out of c(100, 105)
X-squared = 0.81215, df = 1, p-value = 0.3675
alternative hypothesis: two.sided
95 percent confidence interval:
-0.16998808 0.05570236
sample estimates:
prop 1 prop 2
0.8000000 0.8571429
For sufficiently large counts, Yates' correction is not
needed (and, in my opinion, it is too conservative). If parameter
cor=F is used to disable the correction, the chi-squared
test is as follows:
chisq.test(TAB, cor=F)
Pearson's Chi-squared test
data: TAB
X-squared = 1.1813, df = 1, p-value = 0.2771
Similarly, if cor=F is used, prop.test gives
the same test statistic and P-value as just above.
prop.test(c(80,90), c(100,105), cor=F)
2-sample test for equality of proportions
without continuity correction
data: c(80, 90) out of c(100, 105)
X-squared = 1.1813, df = 1, p-value = 0.2771
alternative hypothesis: two.sided
95 percent confidence interval:
-0.16022617 0.04594046
sample estimates:
prop 1 prop 2
0.8000000 0.8571429
Note: Simulated test statistics in case of expected counts
below 5, and the use of Fisher's exact test are interesting
topics, but have no direct connection with use or non-use of
the Yates correction. | Continuity correction in proportion test
In order to consider what you asked, without additional material, consider the $2\times 2$ table TAB as follows:
TAB = rbind(c(80,90), c(20,15))
TAB
[,1] [,2]
[1,] 80 90
[2,] 20 15
A chi |
54,002 | Continuity correction in proportion test | There are many opinions on how best to test 2 x 2 contingency tables like this. Here, prop.test implements a chi-squared test. The issue is that in some cases the test statistic for a 2 x 2 contingency table isn't well approximated by the chi-squared distribution under the null. This can lead to p-values that are too small. Yates' continuity correction adjusts for this but can lead to p-values that are too large.
There are other tests that you could use (Fisher's exact test being one) but another option in R is to simulate the distribution under the null and utilize this empirical distribution to calculate the p-value.
mat <- matrix(data = c(84, (99 - 84), 105.5, (146 - 105.5)), ncol = 2)
chisq.test(x = mat, correct = FALSE)
chisq.test(x = mat, correct = TRUE)
chisq.test(x = mat, simulate.p.value = TRUE)
Without the continuity correction, the p-value is 0.021. With the continuity correction, it's 0.031. Simulating the distribution produces a p-value of ~ 0.026. (One issue with this approach is that in R this simulation samples from a hypergeometric distribution which may not approximate your sampling situation.)
For other good threads on this see: Yates continuity correction for 2 x 2 contingency tables and Given the power of computers these days, is there ever a reason to do a chi-squared test rather than Fisher's exact test? | Continuity correction in proportion test | There are many opinions on how to best test for 2 x 2 contingency tables like this. Here the prop.test implements a chi-squared test. The issue is that in some cases data from a 2 x 2 contingency tabl | Continuity correction in proportion test
There are many opinions on how best to test 2 x 2 contingency tables like this. Here, prop.test implements a chi-squared test. The issue is that in some cases the test statistic for a 2 x 2 contingency table isn't well approximated by the chi-squared distribution under the null. This can lead to p-values that are too small. Yates' continuity correction adjusts for this but can lead to p-values that are too large.
There are other tests that you could use (Fisher's exact test being one) but another option in R is to simulate the distribution under the null and utilize this empirical distribution to calculate the p-value.
mat <- matrix(data = c(84, (99 - 84), 105.5, (146 - 105.5)), ncol = 2)
chisq.test(x = mat, correct = FALSE)
chisq.test(x = mat, correct = TRUE)
chisq.test(x = mat, simulate.p.value = TRUE)
Without the continuity correction, the p-value is 0.021. With the continuity correction, it's 0.031. Simulating the distribution produces a p-value of ~ 0.026. (One issue with this approach is that in R this simulation samples from a hypergeometric distribution which may not approximate your sampling situation.)
For other good threads on this see: Yates continuity correction for 2 x 2 contingency tables and Given the power of computers these days, is there ever a reason to do a chi-squared test rather than Fisher's exact test? | Continuity correction in proportion test
There are many opinions on how to best test for 2 x 2 contingency tables like this. Here the prop.test implements a chi-squared test. The issue is that in some cases data from a 2 x 2 contingency tabl |
54,003 | Controlling for non-linear variable in non-linear modeling of response | The intent is to understand how much an increase of 1 in one of the features (e.g. $x_1$) would on average impact $y$ while controlling for $x_c$. (Emphasis added.)
First, given nonlinear associations of predictors with outcome, there isn't a unique answer. You have to specify a particular value of $x_1$ from which to evaluate the change in $y$ or a range of $x_1$ values over which you would average. If the nonlinearities involve interactions with other predictors, you would need to specify the levels of the interacting predictors too. Keep that in mind.
Second, nonlinear associations of predictors with outcome can often be analyzed empirically with a linear regression model if you have no theoretical model in mind. A particular form of polynomial approximation, restricted cubic regression splines, is a common choice. The regression is then still linear in the coefficients, so once the general form of the spline is specified (via methods in standard statistical software) linear regression fitting is all that is required.
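To make this concrete, here is a minimal sketch using Harrell's rms package; the data frame d and the column names y, x1 and xc are placeholders, not taken from the question.
library(rms)
dd <- datadist(d); options(datadist = "dd")  # rms uses this for summaries and contrasts
# Restricted cubic splines with 4 knots per predictor; the fit is still linear in its coefficients.
fit <- ols(y ~ rcs(x1, 4) + rcs(xc, 4), data = d)
# Estimated change in y when x1 goes from 1 to 2, other predictors held at their
# datadist reference values (medians for continuous variables):
contrast(fit, list(x1 = 2), list(x1 = 1))
Plotting Predict(fit, x1) shows the entire fitted curve rather than a single contrast.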
Chapter 2 of Frank Harrell's course notes outlines that approach to modeling nonlinear relationships among variables (Section 2.4), including how to evaluate model fit and handle interactions among such predictors (Section 2.7). There are related approaches with penalized splines and generalized additive models, discussed in this thread.
Finally, as the comments indicate a potential interest in "feature importance," see Section 5.4 of Harrell's notes. The anova() function in his rms package can provide a measure of predictor importance that combines all nonlinear and interaction terms for a predictor, the difference between the partial $\chi^2$ for the predictor and the number of degrees of freedom. He uses analysis of multiple bootstrap samples to illustrate how unreliable such a measure can be. | Controlling for non-linear variable in non-linear modeling of response | The intent is to understand how much an increase of 1 in one of the features (e.g. $𝑥_1$) would on average impact $y$ while controlling for $x_c$. (Emphasis added.)
First, given nonlinear association | Controlling for non-linear variable in non-linear modeling of response
The intent is to understand how much an increase of 1 in one of the features (e.g. $x_1$) would on average impact $y$ while controlling for $x_c$. (Emphasis added.)
First, given nonlinear associations of predictors with outcome, there isn't a unique answer. You have to specify a particular value of $x_1$ from which to evaluate the change in $y$ or a range of $x_1$ values over which you would average. If the nonlinearities involve interactions with other predictors, you would need to specify the levels of the interacting predictors too. Keep that in mind.
Second, nonlinear associations of predictors with outcome can often be analyzed empirically with a linear regression model if you have no theoretical model in mind. A particular form of polynomial approximation, restricted cubic regression splines, is a common choice. The regression is then still linear in the coefficients, so once the general form of the spline is specified (via methods in standard statistical software) linear regression fitting is all that is required.
Chapter 2 of Frank Harrell's course notes outlines that approach to modeling nonlinear relationships among variables (Section 2.4), including how to evaluate model fit and handle interactions among such predictors (Section 2.7). There are related approaches with penalized splines and generalized additive models, discussed in this thread.
Finally, as the comments indicate a potential interest in "feature importance," see Section 5.4 of Harrell's notes. The anova() function in his rms package can provide a measure of predictor importance that combines all nonlinear and interaction terms for a predictor, the difference between the partial $\chi^2$ for the predictor and the number of degrees of freedom. He uses analysis of multiple bootstrap samples to illustrate how unreliable such a measure can be. | Controlling for non-linear variable in non-linear modeling of response
The intent is to understand how much an increase of 1 in one of the features (e.g. $𝑥_1$) would on average impact $y$ while controlling for $x_c$. (Emphasis added.)
First, given nonlinear association |
54,004 | Controlling for non-linear variable in non-linear modeling of response | All you've really told us is that the relationships are nonlinear, and you are asking us to tell you how to get the best model. It's not really possible to do that, since "nonlinear" is a huge class of relationships. Nevertheless, here are some things to think about:
Ideally, there would be some aspect of the variables that would suggest a class of nonlinear functions that might be plausible for the regression relationships. Sometimes the nature of the variables and the context from which they are obtained give us some hints on the types of nonlinear relationships they might have.
If this is not the case, you might need to proceed on a purely empirical basis. For non-periodic functions you can usually approximate them locally by polynomials (based on the theory of Taylor series), and for periodic functions you can usually approximate them locally by sums of sinusoidal functions (based on the theory of Fourier series). These approximations work fairly well on a wide class of nonlinear functions, so they are commonly used as a starting point in nonlinear regression in cases where there is no prior information about the nature of the nonlinear relationship.
Once you have formulated a reasonable starting point for your model, fit the model and produce its diagnostic plots, paying particular attention to the added variable plots. These latter plots will allow you to scrutinise the chosen regression relationships in your model based on the data. This can alert you to erroneous assumptions in your model, and allow you to choose a more appropriate regression function to fit your data. (But there is a limit to this; beware of the phenomenon of over-fitting.)
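As a sketch of those diagnostics, with placeholder model and variable names (avPlots is from the car package):
fit <- lm(y ~ x1 + x2 + x3, data = d)  # placeholder model
par(mfrow = c(2, 2))
plot(fit)       # residuals vs fitted, QQ, scale-location, leverage
library(car)
avPlots(fit)    # added-variable (partial regression) plot for each predictor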
Finally, the issue of multicollinearity of the explanatory variables is a common issue in prediction. Unless you are conducting a controlled trial and you have control over your explanatory variables, there is nothing much you can do about this, except to be aware of the effect that multicollinearity has on the accuracy of your regression estimates, and to make sure that you are clear on what type of predictive inferences you want to make (i.e., exactly what do you want to condition on). | Controlling for non-linear variable in non-linear modeling of response | All you've really told us is that the relationships are nonlinear, and you are asking us to tell you how to get the best model. It's not really possible to do that, since "nonlinear" is a huge class | Controlling for non-linear variable in non-linear modeling of response
All you've really told us is that the relationships are nonlinear, and you are asking us to tell you how to get the best model. It's not really possible to do that, since "nonlinear" is a huge class of relationships. Nevertheless, here are some things to think about:
Ideally, there would be some aspect of the variables that would suggest a class of nonlinear functions that might be plausible for the regression relationships. Sometimes the nature of the variables and the context from which they are obtained give us some hints on the types of nonlinear relationships they might have.
If this is not the case, you might need to proceed on a purely empirical basis. For non-periodic functions you can usually approximate them locally by polynomials (based on the theory of Taylor series), and for periodic functions you can usually approximate them locally by sums of sinusoidal functions (based on the theory of Fourier series). These approximations work fairly well on a wide class of nonlinear functions, so they are commonly used as a starting point in nonlinear regression in cases where there is no prior information about the nature of the nonlinear relationship.
Once you have formulated a reasonable starting point for your model, fit the model and produce its diagnostic plots, paying particular attention to the added variable plots. These latter plots will allow you to scrutinise the chosen regression relationships in your model based on the data. This can alert you to erroneous assumptions in your model, and allow you to choose a more appropriate regression function to fit your data. (But there is a limit to this; beware of the phenomenon of over-fitting.)
Finally, the issue of multicollinearity of the explanatory variables is a common issue in prediction. Unless you are conducting a controlled trial and you have control over your explanatory variables, there is nothing much you can do about this, except to be aware of the effect that multicollinearity has on the accuracy of your regression estimates, and to make sure that you are clear on what type of predictive inferences you want to make (i.e., exactly what do you want to condition on). | Controlling for non-linear variable in non-linear modeling of response
All you've really told us is that the relationships are nonlinear, and you are asking us to tell you how to get the best model. It's not really possible to do that, since "nonlinear" is a huge class |
54,005 | Controlling for non-linear variable in non-linear modeling of response | You could train a non-linear model using only $x_c$ to predict $y$, i.e. $f(x_c) = \hat{y}$. Then compute the residual $\epsilon = \hat{y} - y$. The residual $\epsilon$ will have been controlled for with regards to $x_c$ as $f$ has captured the relationship between $x_c$ and $y$ as well as it possibly could.
You can now use the residual $\epsilon$ as input for all further analysis such as any modeling or feature importance algorithms. You’ll want to include $x_c$ as a feature in your further analysis in case of any interaction effects between $x_c$ and $x_1, …, x_n$.
In terms of the choice of the non-linear model for $f$, random forest is a good and easy-to-use method, but many other non-linear models (e.g. neural networks) could be used as well.
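As a rough sketch of this idea using the randomForest package (the data frame d and the feature names are placeholders):
library(randomForest)
set.seed(1)
f <- randomForest(y ~ xc, data = d)     # non-linear fit of y on x_c alone
eps <- predict(f, newdata = d) - d$y    # residual as defined above
d2 <- cbind(d, eps = eps)
# Use the residual as the new outcome, keeping x_c as a feature in case of interactions:
fit2 <- randomForest(eps ~ x1 + x2 + xc, data = d2)
importance(fit2)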
Warning: I think this approach works, but would appreciate references to confirm it! | Controlling for non-linear variable in non-linear modeling of response | You could train a non-linear model using only $x_c$ to predict $y$, i.e. $f(x_c) = \hat{y}$. Then compute the residual $\epsilon = \hat{y} - y$. The residual $\epsilon$ will have been controlled for w | Controlling for non-linear variable in non-linear modeling of response
You could train a non-linear model using only $x_c$ to predict $y$, i.e. $f(x_c) = \hat{y}$. Then compute the residual $\epsilon = \hat{y} - y$. The residual $\epsilon$ will have been controlled for with regards to $x_c$ as $f$ has captured the relationship between $x_c$ and $y$ as well as it possibly could.
You can now use the residual $\epsilon$ as input for all further analysis such as any modeling or feature importance algorithms. You’ll want to include $x_c$ as a feature in your further analysis in case of any interaction effects between $x_c$ and $x_1, …, x_n$.
In terms of the choice of the non-linear model for $f$, random forest is a good and easy-to-use method, but many other non-linear models (e.g. neural networks) could be used as well.
Warning: I think this approach works, but would appreciate references to confirm it! | Controlling for non-linear variable in non-linear modeling of response
You could train a non-linear model using only $x_c$ to predict $y$, i.e. $f(x_c) = \hat{y}$. Then compute the residual $\epsilon = \hat{y} - y$. The residual $\epsilon$ will have been controlled for w |
54,006 | Estimating confidence interval of a parameter from the MLE of another parameter | The question, as clarified in comments, is a general one about confidence intervals. It is best framed generally, because the generality strips away irrelevant details to bring out the main idea.
So, imagine a statistical estimation setting in which a sample $\mathbf X = (X_1,X_2,\ldots,X_n)$ is assumed to be governed by a probability law $F\in\Theta$ (a specified space of such laws). A property is a real-valued function $\theta:\Theta\to\mathbb R.$ This is convenient language to talk about things we might want to know about $F$ and, by extension, any sample $\mathbf X$ governed by $F,$ while allowing for very general applications.
By definition, a confidence interval procedure (of size $\alpha$) for a property $\theta$ is a pair of functions $l, u,$ defined on the set of all possible samples, for which
$$\inf_{F\in\Theta}{\Pr}_F\left[l(\mathbf X) \le \theta(F) \le u(\mathbf X)\right] = 1-\alpha.\tag{*}$$
(It is my purpose not to mention issues related to measurability or how things might depend on the sample size $n,$ because these would just be distractions from the main idea.)
This looks painfully abstract but conveys a relatively straightforward idea. The left hand side concerns the chance that the interval $[l(\mathbf X), u(\mathbf X)],$ as computed from the sample, covers the true (but unknown) value $\theta(F).$ The infimum applied to it means that the coverage probability might depend on $F,$ but is never less than $1-\alpha$ and can approach it arbitrarily closely (or even equal it).
The question concerns a transformation $h$ of the property. That is, $h:\mathbb R \to \mathbb R$ is some function or partial function, such as the square root (which is defined only for non-negative numbers). When the property $\theta$ takes on values in the domain of $h,$ then the composition of $\theta$ followed by $h,$ $h\circ \theta:F\to h(\theta(F)),$ is also a property. The question asks:
Given a property $\theta,$ a confidence interval procedure $(l,u),$ and a transformation $h,$ is it the case that $(h\circ l, h\circ u)$ is a confidence interval procedure for $h\circ\theta$?
In some special but common cases--usually not including the square root function in the question--the answer is yes. Specifically, assume
$h$ is a strictly increasing function and
The values of all upper and lower limits are in the domain of $h.$
Condition (2) assures the question even makes sense, while condition (1) guarantees that the event $l(\mathbf X) \le \theta(F) \le u(\mathbf X)$ is equivalent to the event $h\circ l(\mathbf X) \le h\circ\theta(F) \le h\circ u(\mathbf X).$ Since for any given $F$ these are the same events (as implied by the strict monotonicity of $h$), they have the same probability. Thus, the infimum in the defining equation $(*)$ is still $1-\alpha.$
That's all there is to it. But let's restate the result in English:
Because when a transformation $h$ is strictly increasing, any interval $[\lambda, \upsilon]$ covers $\theta$ if and only if $[h(\lambda), h(\upsilon)]$ covers $h(\theta),$ a procedure $l,u$ is a $1-\alpha$ confidence interval for a property $\theta:\Theta\to\mathbb R$ if and only if $h\circ l, h\circ u$ is a $1-\alpha$ confidence interval procedure for the property $h\circ\theta.$
Wait a second, you might be asking: aren't there cases where this result obviously is false? Consider a confidence interval for the mean of a Lognormal distribution. (We say $X$ has a Lognormal distribution when $\log(X)=Y$ has a Normal distribution.) This is a challenging problem with various complicated solutions. But if we let $h$ be the logarithm, then upon applying $h$ to $X$ we seem to have reduced the question to finding a confidence interval for the mean of a Normal distribution, which is simple and well known.
The resolution of this conundrum is that it's an apparent paradox of terminology, not of estimation. The preceding analysis has shown only that exponentiating the endpoints of a confidence interval for a Normal mean will give a confidence interval for the exponential of the Normal mean. However, the exponential of that mean is not the mean of the corresponding lognormal distribution. Indeed, when $Y$ has a Normal$(\mu,\sigma^2)$ distribution, the mean of $X=\exp(Y)$ is not $\exp(\mu):$ instead, it is $\exp(\mu + \sigma^2/2).$
To put it the other way: $\exp(\mu)$ is the geometric mean of the Lognormal distribution. We conclude that the exponentials of the limits of a CI for the mean of a Normal distribution give a CI for the geometric mean of the corresponding Lognormal distribution.
Thus, the paradox is resolved by pointing out that
$h(\theta)$ and $\theta,$ as properties of $h(\mathbf X)$ and $\mathbf X,$ respectively, might not necessarily be called by the same names.
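A small simulation (in R) illustrates the distinction; the parameter values are arbitrary.
set.seed(1)
mu <- 1; sigma <- 1; n <- 30
geo   <- exp(mu)               # geometric mean of the Lognormal
arith <- exp(mu + sigma^2/2)   # arithmetic mean of the Lognormal
covers <- replicate(1e4, {
  y  <- rnorm(n, mu, sigma)            # y = log(x) for a Lognormal sample x
  ci <- exp(t.test(y)$conf.int)        # exponentiated CI for mu
  c(geo = ci[1] <= geo & geo <= ci[2], arith = ci[1] <= arith & arith <= ci[2])
})
rowMeans(covers)  # close to 0.95 for the geometric mean, far below it for the arithmetic mean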
Finally, just to conclude this example: A confidence interval for the arithmetic mean of a sample $\mathbf X$ from a Lognormal$(\mu,\sigma^2)$ distribution is equivalent to a confidence interval for the property $\mu+\sigma^2/2$ in a sample $\mathbf Y$ from a Normal$(\mu,\sigma^2)$ distribution. One merely has to exponentiate the latter. Proof: the exponential is defined everywhere and is strictly increasing. | Estimating confidence interval of a parameter from the MLE of another parameter | The question, as clarified in comments, is a general one about confidence intervals. It is best framed generally, because the generality strips away irrelevant details to bring out the main idea.
So, | Estimating confidence interval of a parameter from the MLE of another parameter
The question, as clarified in comments, is a general one about confidence intervals. It is best framed generally, because the generality strips away irrelevant details to bring out the main idea.
So, imagine a statistical estimation setting in which a sample $\mathbf X = (X_1,X_2,\ldots,X_n)$ is assumed to be governed by a probability law $F\in\Theta$ (a specified space of such laws). A property is a real-valued function $\theta:\Theta\to\mathbb R.$ This is convenient language to talk about things we might want to know about $F$ and, by extension, any sample $\mathbf X$ governed by $F,$ while allowing for very general applications.
By definition, a confidence interval procedure (of size $\alpha$) for a property $\theta$ is a pair of functions $l, u,$ defined on the set of all possible samples, for which
$$\inf_{F\in\Theta}{\Pr}_F\left[l(\mathbf X) \le \theta(F) \le u(\mathbf X)\right] = 1-\alpha.\tag{*}$$
(It is my purpose not to mention issues related to measurability or how things might depend on the sample size $n,$ because these would just be distractions from the main idea.)
This looks painfully abstract but conveys a relatively straightforward idea. The left hand side concerns the chance that the interval $[l(\mathbf X), u(\mathbf X)],$ as computed from the sample, covers the true (but unknown) value $\theta(F).$ The infimum applied to it means that the coverage probability might depend on $F,$ but is never less than $1-\alpha$ and can approach it arbitrarily closely (or even equal it).
The question concerns a transformation $h$ of the property. That is, $h:\mathbb R \to \mathbb R$ is some function or partial function, such as the square root (which is defined only for non-negative numbers). When the property $\theta$ takes on values in the domain of $h,$ then the composition of $\theta$ followed by $h,$ $h\circ \theta:F\to h(\theta(F)),$ is also a property. The question asks:
Given a property $\theta,$ a confidence interval procedure $(l,u),$ and a transformation $h,$ is it the case that $(h\circ l, h\circ u)$ is a confidence interval procedure for $h\circ\theta$?
In some special but common cases--usually not including the square root function in the question--the answer is yes. Specifically, assume
$h$ is a strictly increasing function and
The values of all upper and lower limits are in the domain of $h.$
Condition (2) assures the question even makes sense, while condition (1) guarantees that the event $l(\mathbf X) \le \theta(F) \le u(\mathbf X)$ is equivalent to the event $h\circ l(\mathbf X) \le h\circ\theta(F) \le h\circ u(\mathbf X).$ Since for any given $F$ these are the same events (as implied by the strict monotonicity of $h$), they have the same probability. Thus, the infimum in the defining equation $(*)$ is still $1-\alpha.$
That's all there is to it. But let's restate the result in English:
Because when a transformation $h$ is strictly increasing, any interval $[\lambda, \upsilon]$ covers $\theta$ if and only if $[h(\lambda), h(\upsilon)]$ covers $h(\theta),$ a procedure $l,u$ is a $1-\alpha$ confidence interval for a property $\theta:\Theta\to\mathbb R$ if and only if $h\circ l, h\circ u$ is a $1-\alpha$ confidence interval procedure for the property $h\circ\theta.$
Wait a second, you might be asking: aren't there cases where this result obviously is false? Consider a confidence interval for the mean of a Lognormal distribution. (We say $X$ has a Lognormal distribution when $\log(X)=Y$ has a Normal distribution.) This is a challenging problem with various complicated solutions. But if we let $h$ be the logarithm, then upon applying $h$ to $X$ we seem to have reduced the question to finding a confidence interval for the mean of a Normal distribution, which is simple and well known.
The resolution of this conundrum is that it's an apparent paradox of terminology, not of estimation. The preceding analysis has shown only that exponentiating the endpoints of a confidence interval for a Normal mean will give a confidence interval for the exponential of the Normal mean. However, the exponential of that mean is not the mean of the corresponding lognormal distribution. Indeed, when $Y$ has a Normal$(\mu,\sigma^2)$ distribution, the mean of $X=\exp(Y)$ is not $\exp(\mu):$ instead, it is $\exp(\mu + \sigma^2/2).$
To put it the other way: $\exp(\mu)$ is the geometric mean of the Lognormal distribution. We conclude that the exponentials of the limits of a CI for the mean of a Normal distribution give a CI for the geometric mean of the corresponding Lognormal distribution.
Thus, the paradox is resolved by pointing out that
$h(\theta)$ and $\theta,$ as properties of $h(\mathbf X)$ and $\mathbf X,$ respectively, might not necessarily be called by the same names.
Finally, just to conclude this example: A confidence interval for the arithmetic mean of a sample $\mathbf X$ from a Lognormal$(\mu,\sigma^2)$ distribution is equivalent to a confidence interval for the property $\mu+\sigma^2/2$ in a sample $\mathbf Y$ from a Normal$(\mu,\sigma^2)$ distribution. One merely has to exponentiate the latter. Proof: the exponential is defined everywhere and is strictly increasing. | Estimating confidence interval of a parameter from the MLE of another parameter
The question, as clarified in comments, is a general one about confidence intervals. It is best framed generally, because the generality strips away irrelevant details to bring out the main idea.
So, |
54,007 | Are there classifiers with infinite number of classes? | According to your definition, an infinite number of classes should be possible.
One way to think about a classification rule is as a division of feature space into segments. Hence, if we consider a one-dimensional feature space $x_1 \in \mathbb{R} $ as an input we can imagine a classifier that returns a class of "$k$" for any input within $[(k-1):k)$. So it would return "$1$" for $[0:1)$, "$2$" for $[1:2)$, etc.
Such a "model" is not estimated from any data but it is a classification rule.
But in practice having a classifier with an infinite number of samples will probably not be possible, because any dataset will be finite. However, we can imagine an "on-line" classifier that is re-adjusted as new samples come in. And in principle new samples can be of a yet-unseen class. Hence the number of classes can keep increasing forever and not be known in advance. | Are there classifiers with infinite number of classes? | According to your definition infinite number of classes should be possible.
One way to think about a classification rule is as a division of feature space into segments. Hence, if we consider a one-di | Are there classifiers with infinite number of classes?
According to your definition, an infinite number of classes should be possible.
One way to think about a classification rule is as a division of feature space into segments. Hence, if we consider a one-dimensional feature space $x_1 \in \mathbb{R} $ as an input we can imagine a classifier that returns a class of "$k$" for any input within $[(k-1):k)$. So it would return "$1$" for $[0:1)$, "$2$" for $[1:2)$, etc.
Such a "model" is not estimated from any data but it is a classification rule.
But in practice having a classifier with an infinite number of samples will probably not be possible, because any dataset will be finite. However, we can imagine an "on-line" classifier that is re-adjusted as new samples come in. And in principle new samples can be of a yet-unseen class. Hence the number of classes can keep increasing forever and not be known in advance. | Are there classifiers with infinite number of classes?
According to your definition infinite number of classes should be possible.
One way to think about a classification rule is as a division of feature space into segments. Hence, if we consider a one-di |
54,008 | Are there classifiers with infinite number of classes? | Classifying images to say which integer was depicted would be a classification problem with a countably infinite set of classes, but you would also need an infinitely large image as the input data. It would also be a rather bad way of solving the problem to view it as single classification problem, rather than construct a solution by classifying the digits and constructing the answer from the component parts (if you did it that way, you might not need an infinite dataset to train it).
I doubt there is a practical case of an infinite set of classes as we have rather finite brains and live rather finite lives (in time and space), so we rarely encounter infinities in a meaningful way. | Are there classifiers with infinite number of classes? | Classifying images to say which integer was depicted would be a classification problem with a countably infinite set of classes, but you would also need an infinitely large image as the input data. I | Are there classifiers with infinite number of classes?
Classifying images to say which integer was depicted would be a classification problem with a countably infinite set of classes, but you would also need an infinitely large image as the input data. It would also be a rather bad way of solving the problem to view it as a single classification problem, rather than construct a solution by classifying the digits and constructing the answer from the component parts (if you did it that way, you might not need an infinite dataset to train it).
I doubt there is a practical case of an infinite set of classes as we have rather finite brains and live rather finite lives (in time and space), so we rarely encounter infinities in a meaningful way. | Are there classifiers with infinite number of classes?
Classifying images to say which integer was depicted would be a classification problem with a countably infinite set of classes, but you would also need an infinitely large image as the input data. I |
54,009 | Are there classifiers with infinite number of classes? | The Distance dependent Chinese restaurant process may be an example. You can think of it as a clustering method, in which the number of clusters is adaptively chosen based on the data. As you add more data points, the number of clusters can in principle increase without bound. So there are an infinite possible number of clusters (or classes), but for any finite amount of data, only finitely many of the clusters are occupied. | Are there classifiers with infinite number of classes? | The Distance dependent Chinese restaurant process may be an example. You can think of it as a clustering method, in which the number of clusters is adaptively chosen based on the data. As you add more | Are there classifiers with infinite number of classes?
The Distance dependent Chinese restaurant process may be an example. You can think of it as a clustering method, in which the number of clusters is adaptively chosen based on the data. As you add more data points, the number of clusters can in principle increase without bound. So there are an infinite possible number of clusters (or classes), but for any finite amount of data, only finitely many of the clusters are occupied. | Are there classifiers with infinite number of classes?
The Distance dependent Chinese restaurant process may be an example. You can think of it as a clustering method, in which the number of clusters is adaptively chosen based on the data. As you add more |
54,010 | Visual Demonstration of Residual Confounding | Confounding is about bias. There are at least two ways I can think of in which dichotomization can bias a regression:
Model Bias. Dichotomizing a continuous predictor transforms the conditional mean from a continuous (possibly smooth, depending on the functional form) function to a discontinuous constant function. It would be good to ask one's self if we truly believe the phenomenon to display this sort of behaviour (I tend to think most things in medicine do not behave like this). Hence, dichotomizing biases our estimates towards the set of functions which are discontinuous and constant. That's bad, as we will see.
Bias in Estimates of Association. Regression Analysis, so far as it is from observational data and not part of some causal framework, is all about associations. The coefficients of the model $\hat{\beta}_j$ are the observed association between $x_j$ and the outcome. The estimated association can differ from the true association in expectation. This is the sort of standard statistical bias you might see in many senior undergrad or graduate level texts
$$ \operatorname{Bias} = E(\hat{\beta_j} - \beta_j) $$
with the estimate being called unbiased if this expectation is 0.
In what follows, I demonstrate that dichotomization leads to both kinds of bias in a very simple model.
Let's simulate some data from the following data generating process
$$ y \vert x \sim \mathcal{N}(\mu(x), 1) $$
$$ \mu(x) = 2x + 1$$
Let's treat $x$ as if it were continuous, though in reality I will bucket it into bins of width 0.1 for ease of exposition. Recall that a linear model fit via OLS has a few key properties, one being that the expectation of the residual should be 0 and independent of the predictor.
Let's generate data, fit a model, and get the residuals say 1000 times. Let's take the expectation of the residual across each $x$ value. The expectation should be 0 for each $x$ value. We can demonstrate that with some R code (which I will include at the end of the answer).
I've gone ahead and included approximate 95% confidence intervals as well. Not every residual CI covers 0, but this is to be expected (in fact, about 5% should fail to do so); the results are largely in line with the modelling assumptions. Let's do this again, but now we will dichotomize $x$ at the sample median (perhaps the most charitable point to split at, so that both groups have approximately the same number of observations).
The effect of dichotomization is that the expectation for the residual is no longer 0 and is dependent on the covariate. This is residual confounding, as I understand it. In particular, this is the result of model bias. Our chosen class of functions (discontinuous constant functions) can not properly accommodate our data, leading to a relationship between the predictor and the residuals. This is the same sort of diagnostic one might perform to see if there are any non-linear relationships in the predictor (the difference being here that I have aggregated residuals over 1000 simulations, and in practice one usually has 1 dataset). After all, a prediction is an estimate, and it seems that many estimates in this model are biased (even within groups determined by the dichotomization).
Not only are the residuals not consistent with the modelling assumptions, the estimate of the observed association is also biased. If the assumptions of OLS are met, then the estimate of $\beta$ (the slope, or really any parameter in the model) should be unbiased. This means that, under repeated simulation, the expectation of the difference between estimate and truth should be 0. We can demonstrate that this is not the case when the data are dichotomized. In fact, for this problem, the estimated relationship is about 4 units higher (which makes sense after you think about the structure of the problem).
The difference is even more striking when you plot the difference between estimated and truth on the same axis.
Note that the bias can be fixed by correctly adjusting for the distance between the means of the points which belong to each group (shown below), but this does not improve the model bias (the systematic tendency for the model to under/over estimate given a particular value of the predictor). Additionally, if you're going to correct the dichotomized estimate in this way, you might as well just estimate the slope correctly. This is just getting the right answer the wrong way and calling the entire approach valid.
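To spell out one reading of that correction (this sketch is my interpretation, not the code behind the original figures): rescale the dichotomized coefficient by the distance between the group means of $x$.
x <- seq(-3.0, 3.0, 0.1)
z <- as.numeric(x > 0)
set.seed(1)
y <- 2*x + 1 + rnorm(length(x), 0, 1)
raw <- coef(lm(y ~ z))[2]                               # biased dichotomized "slope"
corrected <- raw / (mean(x[z == 1]) - mean(x[z == 0]))  # divide by the distance between group means
c(raw = unname(raw), corrected = unname(corrected))     # corrected is close to the true slope of 2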
What can we conclude from this answer? Somewhat tongue in cheek, we might conclude: dichotomization bad. More sincerely, dichotomization needlessly biases our class of possible conditional expectations towards a class of functions which, at least in my opinion, is rarely observed in practice. Indeed, if I could opine slightly longer, dichotomization is not so much about statistical efficiency as it is about cognitive economy; people dichotomize because it is easier to think about 2 numbers than a continuum of them.
Second, the observed relationship is biased (in this case away from the null, making the relationship seem more extreme than it actually is, but I could just as easily create an example where the bias is towards the null). This is a particularly pernicious form of bias, because it can go largely undetected without critical analysis of the model, unlike model bias. I will leave you to wonder about how this bias can affect care, should the regression analysis be used in some sort of evidence-based investigation.
I should add that the simplicity of the model is not a limitation. These sorts of phenomena can appear in multiple regressions, and in different ways, in GLMs (like logistic regression).
# Predictor bucketed into bins of width 0.1
x = seq(-3.0, 3.0, 0.1)
N = length(x)
# Simulate the data 1000 times and return a residual for each predictor.
r = replicate(1000,{
y = 2*x + 1 + rnorm(N, 0, 1)
fit = lm(y~x)
resid(fit)
})
e = rowMeans(r)
s = apply(r, 1, function(x) sd(x)/sqrt(length(x)))
plot(x, e, pch=20, ylim = c(-0.15, 0.15), main='Expectation of Residuals Under No Dichotomization', ylab=expression(y-hat(y)))
arrows(x0=x, y0=e-2*s, x1=x, y1=e+2*s, code=3, angle=90, length=0, col="black", lwd=2)
abline(h=0, col='dark grey')
r2 = replicate(1000,{
y = 2*x + 1 + rnorm(N, 0, 1)
z = as.numeric(x>0)
fit = lm(y~z)
resid(fit)
})
e = rowMeans(r2)
s = apply(r2, 1, function(x) sd(x)/sqrt(length(x)))
plot(x, e, pch=20, main='Expectation of Residuals Under Dichotomization', ylab=expression(y-hat(y)))
arrows(x0=x, y0=e-2*s, x1=x, y1=e+2*s, code=3, angle=90, length=0, col="black", lwd=2)
abline(h=0, col='dark grey')
no_dichot = replicate(1000, {
x = seq(-3.0, 3.0, 0.1)
N = length(x)
true_slope=2
y =true_slope*x + 1 + rnorm(N, 0, 1)
fit = lm(y~x)
estimated_slope = coef(fit)[2]
estimated_slope -true_slope
})
dichot = replicate(1000, {
x = seq(-3.0, 3.0, 0.1)
N = length(x)
true_slope=2
z = as.numeric(x>0)
y =true_slope*x + 1 + rnorm(N, 0, 1)
fit = lm(y~z)
estimated_slope = coef(fit)[2]
estimated_slope -true_slope
})
hist(no_dichot,
col=rgb(1, 0, 0, 0.5),
breaks = seq(-2, 6, 0.05),
xlab=expression(hat(beta)-beta),
main='Difference between estimated relationship and true relationship (1000 simulations)',
xlim = c(-2, 6))
hist(dichot,
col=rgb(0, 0, 1, 0.5),
breaks = seq(-2, 6, 0.05),
add=T)
legend("topright", c("Continuous", "Dichotomize"), col=c(rgb(1, 0, 0, 0.5), rgb(0, 0, 1, 0.5)), lwd=10) | Visual Demonstration of Residual Confounding | Confounding is about bias. There are at least two ways I can think of in which dichotomization can bias a regression:
Model Bias. Dichotomizing a continuous predictor transforms the conditional mea | Visual Demonstration of Residual Confounding
Confounding is about bias. There are at least two ways I can think of in which dichotomization can bias a regression:
Model Bias. Dichotomizing a continuous predictor transforms the conditional mean from a continuous (possibly smooth, depending on the functional form) function to a discontinuous constant function. It would be good to ask one's self if we truly believe the phenomenon to display this sort of behaviour (I tend to think most things in medicine do not behave like this). Hence, dichotomizing biases our estimates towards the set of functions which are discontinuous and constant. That's bad, as we will see.
Bias in Estimates of Association. Regression Analysis, so far as it is from observational data and not part of some causal framework, is all about associations. The coefficients of the model $\hat{\beta}_j$ are the observed association between $x_j$ and the outcome. The estimated association can differ from the true association in expectation. This is the sort of standard statistical bias you might see in many senior undergrad or graduate level texts
$$ \operatorname{Bias} = E(\hat{\beta_j} - \beta_j) $$
with the estimate being called unbiased if this expectation is 0.
In what follows, I demonstrate that dichotomization leads to both kinds of bias in a very simple model.
Let's simulate some data from the following data generating process
$$ y \vert x \sim \mathcal{N}(\mu(x), 1) $$
$$ \mu(x) = 2x + 1$$
Let's treat $x$ as if it were continuous, though in reality I will bucket it into bins of width 0.1 for ease of exposition. Recall that a linear model fit via OLS has a few key properties, one being that the expectation of the residual should be 0 and independent of the predictor.
Let's generate data, fit a model, and get the residuals say 1000 times. Let's take the expectation of the residual across each $x$ value. The expectation should be 0 for each $x$ value. We can demonstrate that with some R code (which I will include at the end of the answer).
I've gone ahead and included approximate 95% confidence intervals as well. Not every residual CI covers 0, but this is to be expected (in fact, about 5% should fail to do so); the results are largely in line with the modelling assumptions. Let's do this again, but now we will dichotomize $x$ at the sample median (perhaps the most charitable point to split at, so that both groups have approximately the same number of observations).
The effect of dichotomization is that the expectation for the residual is no longer 0 and is dependent on the covariate. This is residual confounding, as I understand it. In particular, this is the result of model bias. Our chosen class of functions (discontinuous constant functions) can not properly accommodate our data, leading to a relationship between the predictor and the residuals. This is the same sort of diagnostic one might perform to see if there are any non-linear relationships in the predictor (the difference being here that I have aggregated residuals over 1000 simulations, and in practice one usually has 1 dataset). After all, a prediction is an estimate, and it seems that many estimates in this model are biased (even within groups determined by the dichotomization).
Not only are the residuals not consistent with the modelling assumptions, the estimate of the observed association is also biased. If the assumptions of OLS are met, then the estimate of $\beta$ (the slope, or really any parameter in the model) should be unbiased. This means that, under repeated simulation, the expectation of the difference between estimate and truth should be 0. We can demonstrate that this is not the case when the data are dichotomized. In fact, for this problem, the estimated relationship is about 4 units higher (which makes sense after you think about the structure of the problem).
The difference is even more striking when you plot the difference between estimated and truth on the same axis.
Note that the bias can be fixed by correctly adjusting for the distance between the means of the points which belong to each group (shown below), but this does not improve the model bias (the systematic tendency for the model to under/over estimate given a particular value of the predictor). Additionally, if you're going to correct the dichotomized estimate in this way, you might as well just estimate the slope correctly. This is just getting the right answer the wrong way and calling the entire approach valid.
What can we conclude from this answer? Somewhat tongue in cheek, we might conclude: dichotomization bad. More sincerely, dichotomization needlessly biases our class of possible conditional expectations towards a class of functions which, at least in my opinion, is rarely observed in practice. Indeed, if I could opine slightly longer, dichotomization is not so much about statistical efficiency as it is about cognitive economy; people dichotomize because it is easier to think about 2 numbers than a continuum of them.
Second, the observed relationship is biased (in this case away from the null, making the relationship seem more extreme than it actually is, but I could just as easily create an example where the bias is towards the null). This is a particularly pernicious form of bias, because it can go largely undetected without critical analysis of the model, unlike model bias. I will leave you to wonder about how this bias can affect care, should the regression analysis be used in some sort of evidence-based investigation.
I should add that the simplicity of the model is not a limitation. These sorts of phenomena can appear in multiple regressions, and in different ways, in GLMs (like logistic regression).
# Predictor bucketed into bins of width 0.1
x = seq(-3.0, 3.0, 0.1)
N = length(x)
# Simulate the data 1000 times and return a residual for each predictor.
r = replicate(1000,{
y = 2*x + 1 + rnorm(N, 0, 1)
fit = lm(y~x)
resid(fit)
})
e = rowMeans(r)
s = apply(r, 1, function(x) sd(x)/sqrt(length(x)))
plot(x, e, pch=20, ylim = c(-0.15, 0.15), main='Expectation of Residuals Under No Dichotomization', ylab=expression(y-hat(y)))
arrows(x0=x, y0=e-2*s, x1=x, y1=e+2*s, code=3, angle=90, length=0, col="black", lwd=2)
abline(h=0, col='dark grey')
r2 = replicate(1000,{
y = 2*x + 1 + rnorm(N, 0, 1)
z = as.numeric(x>0)
fit = lm(y~z)
resid(fit)
})
e = rowMeans(r2)
s = apply(r2, 1, function(x) sd(x)/sqrt(length(x)))
plot(x, e, pch=20, main='Expectation of Residuals Under Dichotomization', ylab=expression(y-hat(y)))
arrows(x0=x, y0=e-2*s, x1=x, y1=e+2*s, code=3, angle=90, length=0, col="black", lwd=2)
abline(h=0, col='dark grey')
no_dichot = replicate(1000, {
x = seq(-3.0, 3.0, 0.1)
N = length(x)
true_slope=2
y =true_slope*x + 1 + rnorm(N, 0, 1)
fit = lm(y~x)
estimated_slope = coef(fit)[2]
estimated_slope -true_slope
})
dichot = replicate(1000, {
x = seq(-3.0, 3.0, 0.1)
N = length(x)
true_slope=2
z = as.numeric(x>0)
y =true_slope*x + 1 + rnorm(N, 0, 1)
fit = lm(y~z)
estimated_slope = coef(fit)[2]
estimated_slope -true_slope
})
hist(no_dichot,
col=rgb(1, 0, 0, 0.5),
breaks = seq(-2, 6, 0.05),
xlab=expression(hat(beta)-beta),
main='Difference between estimated relationship and true relationship (1000 simulations)',
xlim = c(-2, 6))
hist(dichot,
col=rgb(0, 0, 1, 0.5),
breaks = seq(-2, 6, 0.05),
add=T)
legend("topright", c("Continuous", "Dichotomize"), col=c(rgb(1, 0, 0, 0.5), rgb(0, 0, 1, 0.5)), lwd=10) | Visual Demonstration of Residual Confounding
Confounding is about bias. There are at least two ways I can think of in which dichotomization can bias a regression:
Model Bias. Dichotomizing a continuous predictor transforms the conditional mea |
54,011 | How to easily obtain the profile likelihood 95% confidence interval for a predicted value in a logistic regression model in R? | EDIT: I see you mentioned the Fieller method in your original post. Perhaps you were referring to the solution I provided below.
Here is a great paper on the topic. Using a logistic regression with a logit link function you can model the proportion of fish as a function of length, with $\lambda:=$LD50. Based on the asymptotic normality of
$$ \frac{(\hat{\beta}_0 + \lambda\hat{\beta}_1)-\text{ln}\big(\frac{0.5}{1-0.5}\big)}{\sqrt{\hat{\text{se}}_{0}^2 + \lambda^2\hat{\text{se}}_1^2 + 2\lambda\hat{\text{cov}}_{01}}}$$
a $100(1-\alpha)\%$ confidence interval is found by identifying the set of $\lambda$ that satisfy
$$ \frac{\left[(\hat{\beta}_0 + \lambda\hat{\beta}_1)-\text{ln}\big(\frac{0.5}{1-0.5}\big)\right]^2}{\hat{\text{se}}_{0}^2 + \lambda^2\hat{\text{se}}_1^2 + 2\lambda\hat{\text{cov}}_{01}}<z_{\alpha}^2$$
where $\hat{\text{se}}_{0}$ is the estimated standard error of $\hat{\beta}_0$, $\hat{\text{se}}_{1}$ is the estimated standard error of $\hat{\beta}_1$, and $\hat{\text{cov}}_{01}$ is the estimated covariance between $\hat{\beta}_0$ and $\hat{\beta}_1$. This works well even in small sample sizes and is a much better normal approximation than a Wald interval for $\lambda$ based on an identity link using the dose.p() output. The confidence interval above can be calculated using standard output from the logistic regression without calling dose.p(), and should perform similarly to the likelihood ratio CI you are interested in. The only part that would require some work is numerically inverting the quantity above. You can create a sequence of values for $\lambda$, evaluate the quantity above for each value of $\lambda$, and identify those values that satisfy the inequality.
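For instance, the grid search might look like the following sketch, assuming a fitted logistic regression of the form fit <- glm(cbind(mature, immature) ~ length, family = binomial, data = dat); the object and column names are placeholders.
b <- coef(fit)   # (beta0_hat, beta1_hat)
V <- vcov(fit)   # estimated covariance matrix of the coefficients
lambda <- seq(min(dat$length), max(dat$length), length.out = 5000)
num <- (b[1] + lambda*b[2] - qlogis(0.5))^2                 # note qlogis(0.5) = 0
den <- V[1, 1] + lambda^2 * V[2, 2] + 2*lambda*V[1, 2]
inside <- num/den < qnorm(0.975)^2   # lambda values not rejected at the two-sided 5% level
range(lambda[inside])                # approximate 95% Fieller-type limits for LD50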
A great way to visualize this is to define and plot the following functions
$$H(\lambda)=1-\Phi\Bigg(\frac{(\hat{\beta}_0 + \lambda\hat{\beta}_1)-\text{ln}\big(\frac{0.5}{1-0.5}\big)}{\sqrt{\hat{\text{se}}_{0}^2 + \lambda^2\hat{\text{se}}_1^2 + 2\lambda\hat{\text{cov}}_{01}}}\Bigg)$$
$$H^{\text{-}}(\lambda)=\Phi\Bigg(\frac{(\hat{\beta}_0 + \lambda\hat{\beta}_1)-\text{ln}\big(\frac{0.5}{1-0.5}\big)}{\sqrt{\hat{\text{se}}_{0}^2 + \lambda^2\hat{\text{se}}_1^2 + 2\lambda\hat{\text{cov}}_{01}}}\Bigg)$$
\begin{eqnarray}
C(\lambda)= \left\{ \begin{array}{cc}
H(\lambda) & \text{if } \lambda\le \hat{\lambda}(\boldsymbol{y}) \\
& \nonumber\\
H^{\text{-}}(\lambda) & \text{if } \lambda\ge \hat{\lambda}(\boldsymbol{y}). \end{array} \right.\nonumber
\end{eqnarray}.
where $\hat{\lambda}(\boldsymbol{y})$ is the estimate of LD50 based on the observed data. $C(\lambda)$ is called a confidence curve and depicts p-values and confidence intervals of all levels. In small sample sizes the performance of this interval might be improved by referencing a $t$-distribution with $n-1$ degrees of freedom instead of a standard normal distribution.
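Continuing the sketch above, the curve $C(\lambda)$ can be computed and plotted directly:
lam_hat <- -b[1]/b[2]                       # point estimate of LD50, since logit(0.5) = 0
zstat <- (b[1] + lambda*b[2])/sqrt(den)     # den as computed in the previous sketch
C <- ifelse(lambda <= lam_hat, 1 - pnorm(zstat), pnorm(zstat))
plot(lambda, C, type = "l", xlab = expression(lambda), ylab = expression(C(lambda)))
abline(v = lam_hat, lty = 2)                # the curve equals 1/2 at the LD50 estimate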
If you are still interested in the likelihood ratio test you can create a similar confidence curve:
$$p:=\text{logit}^{-1}({\beta}_0 + \lambda{\beta}_1)$$
$$L(\beta_0,\beta_1)\propto \prod_{i=1}^n \text{logit}^{-1}({\beta}_0 + x_i{\beta}_1)^{y_i}\times[1-\text{logit}^{-1}({\beta}_0 + x_i{\beta}_1)]^{1-y_i}$$
$$\text{LR}=\frac{L(\tilde{\beta}_0,\tilde{\beta}_1)}{L(\hat{\beta}_0,\hat{\beta}_1)}$$
where $\tilde{\beta}_0$ and $\tilde{\beta}_1$ are estimates calculated under the restricted null space for $\lambda$.
\begin{eqnarray}
H(\lambda)= \left\{ \begin{array}{cc}
\big[1-F_{\chi^2_1}\big(-2\text{log(LR)}\big)\big]/2 & \text{if } \lambda\le \hat{\lambda}(\boldsymbol{y}) \\
& \nonumber\\
\big[1+F_{\chi^2_1}\big(-2\text{log(LR)}\big)\big]/2 & \text{if } \lambda\gt \hat{\lambda}(\boldsymbol{y}). \end{array} \right.\nonumber
\end{eqnarray}.
\begin{eqnarray}
C(\lambda)= \left\{ \begin{array}{cc}
H(\lambda) & \text{if } \lambda\le \hat{\lambda}(\boldsymbol{y}) \\
& \nonumber\\
1-H(\lambda) & \text{if } \lambda\ge \hat{\lambda}(\boldsymbol{y}). \end{array} \right.\nonumber
\end{eqnarray}
where $F_{\chi^2_1}$ is the CDF of a chi-square distribution with 1 degree of freedom. Because the likelihood ratio confidence interval requires profiling nuisance parameters it is almost as computationally intensive as iterative methods such as bootstrap and Monte Carlo approaches. | How to easily obtain the profile likelihood 95% confidence interval for a predicted value in a logis | EDIT: I see you mentioned the Fieller method in your original post. Perhaps you were referring to the solution I provided below.
Here is a great paper on the topic. Using a logistic regression with a | How to easily obtain the profile likelihood 95% confidence interval for a predicted value in a logistic regression model in R?
54,012 | How to easily obtain the profile likelihood 95% confidence interval for a predicted value in a logistic regression model in R? | TL;DR @kjetilbhalvorsen explains how to compute confidence intervals based on the profile likelihood in Confidence interval for difference between two predicted probabilities in R. The entire thread is worth reading as it describes several methods to compute confidence intervals. I summarize the profile likelihood method briefly and then apply it to calculate a CI for the median lethal dose, LD50, on the tobacco budworms dataset. See Section 7.2 of Modern Applied Statistics with S by Venables and Ripley.
Let $\theta(\boldsymbol{\beta})$ be a scalar function of the model parameters $\boldsymbol{\beta}$. Then the profile likelihood of $\theta$ is:
$$
\begin{aligned}
L_P(\theta) = \max_{\boldsymbol{\beta}:\,\theta(\boldsymbol{\beta})=\theta}L(\boldsymbol{\beta})
\end{aligned}
$$
where $L(\boldsymbol{\beta})$ is the likelihood. For the dose-response example, $\boldsymbol{\beta} = (\beta_0,\beta_1)$ but the theory — and the R code — are straightforward to extend to more parameters. The LD50, the dose expected to kill 50% of individuals, is the point where the log-odds of survival are equal to zero: $\theta = -\beta_0/\beta_1$.
x <- budworm$x # dose
y <- budworm$y # survival (0 or 1)
# The LD50 (dose that is expected to kill 50% of individuals) is defined as
# the point where the log-odds of survival are equal to zero.
theta <- function(beta0, beta1) {
-beta0 / beta1
}
# The negative log-likelihood of the logistic regression model
# with an intercept and one predictor x.
negloglik_lrm <- function(beta0, beta1) {
prob <- plogis(beta0 + beta1 * x)
-sum(dbinom(y, 1, prob, log = TRUE))
}
# Fit a simple linear regression to find reasonable starting values.
initialize_betas <- function(x, y) {
betas <- coef(lm(y ~ x))
list(beta0 = betas[1], beta1 = betas[2])
}
The crux of the method is to profile the likelihood, i.e., to maximize the likelihood over the nuisance parameters for each value of $\theta$ on a grid. For a fixed $\theta$ we find $\max L(\boldsymbol{\beta})$ over the set $\left\{\boldsymbol{\beta}:\theta(\boldsymbol{\beta})=\theta\right\}$.
# Minimize the negative log-likelihood instead of maximizing the likelihood
model <- bbmle::mle2(negloglik_lrm, start = initialize_betas(x, y))
# npl = negative profile log-likelihood
grid <- profile_negloglik(model)
grid
#> # A tibble: 201 × 2
#> theta npl
#> <dbl> <dbl>
#> 1 1.65 141.
#> 2 1.67 140.
#> 3 1.68 139.
#> 4 1.70 138.
#> 5 1.71 137.
#> 6 1.73 136.
#> 7 1.74 135.
#> 8 1.75 134.
#> 9 1.77 133.
#> 10 1.78 132.
#> # … with 191 more rows
Now that we have calculated the profile likelihood for $\theta$ at a grid of values, it remains to find a subset of those to represent a confidence interval.
Under regularity conditions (which ensure that the log-likelihood is well approximated by a quadratic function), the likelihood ratio test states that under the null hypothesis $H_0:\theta=\theta_0$:
$$
\begin{aligned}
2\log\frac{L(\widehat{\theta})}{L(\theta_0)} \sim \chi^2_p
\end{aligned}
$$
where $p$ is the dimension of $\theta$, $\widehat{\theta}$ is the maximum likelihood estimate (MLE) and $\chi^2_p$ is the chi-squared distribution with $p$ degrees of freedom. The MLE is the value at which the profile likelihood is maximized, i.e. at which the negative profile log-likelihood is minimized.
By inverting the hypothesis test we get a $100(1-\alpha)\%$ confidence interval (or region) for $\theta$:
$$
\begin{aligned}
\left\{\theta:2\log\frac{L(\widehat{\theta})}{L(\theta)}<\chi^2_{p,(1-\alpha)}\right\}
\end{aligned}
$$
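On the negative log-likelihood scale used in the attached code, this is equivalent to keeping the values of $\theta$ whose negative profile log-likelihood lies within qchisq(1 - alpha, df = 1)/2 of its minimum. A crude version of the interpolation done in the attached code, using the grid object computed above, is:
nll_cut <- min(grid$npl) + qchisq(0.95, df = 1) / 2
range(grid$theta[grid$npl < nll_cut]) # approximate 95% profile likelihood CI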
For the budworms example, the LD50 $\theta$ is a scalar, so $p=1$ and we can find the profile likelihood 95% and 99% confidence intervals numerically:
#> conf.level conf.low conf.high nll
#> 1 0.95 2.415057 3.084742 113.6548
#> 2 0.99 2.312052 3.195448 115.0515
I also use the MASS::dose.p function to compute the Wald 95% and 99% confidence intervals. (See R code attached.)
data.frame(
conf.level,
conf.low = dose.hat + dose.se * qnorm((1 - conf.level) / 2),
conf.high = dose.hat - dose.se * qnorm((1 - conf.level) / 2)
)
#> conf.level conf.low conf.high
#> 1 0.95 2.422786 3.071986
#> 2 0.99 2.320789 3.173983
In this example, the profile and the Wald confidence intervals for $\theta$ agree well.
References
In addition to @kjetilbhalvorsen's posts about profile likelihood, I'm also reading "In All Likelihood: Statistical Modelling And Inference Using Likelihood" by Yudi Pawitan. This book explains the theory in great detail and with many examples. (Not about budworms though.)
R code to compute profile likelihood confidence intervals:
library("MASS")
library("tidyverse")
# Compute negative log-likelihood-based intervals for a scalar parameter theta
# at the specified alpha levels.
# This implementation is based on the program `li.r` for computing likelihood
# intervals which accompanies the book "In All Likelihood" by Yudi Pawitan.
# https://www.meb.ki.se/sites/yudpaw/book/
confint_negloglik <- function(theta, nll, conf.level = 0.95) {
nll_min <- min(nll)
# 2*{ log L(theta_mle) - log L(theta) } < qchisq(df=1,p=1-alpha)
nll_cut <- nll_min + qchisq(conf.level, 1) / 2
theta_mle <- mean(theta[nll == nll_min])
theta_below <- theta[theta < theta_mle]
if (length(theta_below) < 2) {
conf.low <- min(theta)
} else {
nll.below <- nll[theta < theta_mle]
conf.low <- approx(nll.below, theta_below, xout = nll_cut)$y
}
theta_above <- theta[theta > theta_mle]
if (length(theta_above) < 2) {
conf.high <- max(theta)
} else {
nll.above <- nll[theta > theta_mle]
conf.high <- approx(nll.above, theta_above, xout = nll_cut)$y
}
data.frame(conf.level, conf.low, conf.high, nll = nll_cut)
}
grid_params <- function(model, pts = 100) {
seq_range <- function(x, n) {
seq(min(x), max(x), len = n)
}
# Use the confidence intervals to find reasonable ranges for the betas.
range_betas <- bbmle::confint(bbmle::profile(model))
expand_grid(
# Create a grid (= an outer product) for beta0 and beta1
beta0 = seq_range(range_betas["beta0", ], pts),
beta1 = seq_range(range_betas["beta1", ], pts)
) %>%
# For each point in the grid, compute the parameter of interest theta
# and the negative log-likelihood of the model. Both are functions of
# the regression coefficients beta0 and beta1.
mutate(
theta = theta(beta0, beta1),
nll = map2_dbl(beta0, beta1, negloglik_lrm)
)
}
profile_negloglik <- function(profile, bins = 201) {
profile %>%
grid_params() %>%
group_by(
# Subdivide the observed range of theta into many small intervals.
cut_interval(theta, bins),
.drop = TRUE
) %>%
summarise(
# For each interval, find its midpoint as well as the minimum of
# the negative log-likelihood. This is the negative profile
# log-likelihood (npl).
theta = (min(theta) + max(theta)) / 2,
npl = min(nll)
) %>%
select(
theta, npl
)
}
plot_profile <- function(grid, confint) {
grid %>%
ggplot(
aes(theta, npl)
) +
geom_line(
linewidth = 1
) +
geom_segment(
aes(
x = conf.low, xend = conf.high,
y = nll, yend = nll,
color = factor(conf.level)
),
inherit.aes = FALSE,
data = confint,
linewidth = 1
) +
guides(
color = guide_legend(title = "confidence level")
)
}
# Tobacco budworms survival data. See `MASS::dose.p`.
ldose <- rep(0:5, 2)
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
numalive <- 20 - numdead
budworm <- list(
x = c(rep(ldose, numdead), rep(ldose, numalive)),
y = c(rep(0, sum(numdead)), rep(1, sum(numalive)))
)
conf.level <- c(0.95, 0.99)
x <- budworm$x
y <- budworm$y
# The negative log-likelihood of the logistic regression model
# with an intercept and one predictor x.
negloglik_lrm <- function(beta0, beta1) {
prob <- plogis(beta0 + beta1 * x)
-sum(dbinom(y, 1, prob, log = TRUE))
}
# The LD50 (dose that is expected to kill 50% of individuals) is defined as
# the point where the log-odds of survival are equal to zero.
theta <- function(beta0, beta1) {
-beta0 / beta1
}
# Fit a simple linear regression to find reasonable starting values.
initialize_betas <- function(x, y) {
betas <- coef(lm(y ~ x))
list(beta0 = betas[1], beta1 = betas[2])
}
# Minimize the negative log-likelihood instead of maximizing the likelihood
model <- bbmle::mle2(negloglik_lrm, start = initialize_betas(x, y))
profile <- bbmle::profile(model)
grid <- profile_negloglik(model)
grid
# Profile confidence intervals for the betas parameters and for theta
ci.betas <- bbmle::confint(profile)
ci.betas
ci.theta <- confint_negloglik(grid$theta, grid$npl, conf.level)
ci.theta
plt <- plot_profile(grid, ci.theta)
plt +
labs(
x = expression(paste(θ, "=", -beta[0] / beta[1])),
y = "negative profile log-likelihood",
title = "Negative profile log-likelihood for LD50"
)
model <- glm(
y ~ x,
family = binomial
)
dose.p(model)
dose.hat <- 2.747386
dose.se <- 0.1656153
# Wald confidence interval
data.frame(
conf.level,
conf.low = dose.hat + dose.se * qnorm((1 - conf.level) / 2),
conf.high = dose.hat - dose.se * qnorm((1 - conf.level) / 2)
) | How to easily obtain the profile likelihood 95% confidence interval for a predicted value in a logis | TL;DR @kjetilbhalvorsen explains how to compute confidence intervals based on the profile likelihood in Confidence interval for difference between two predicted probabilities in R. The entire thread i | How to easily obtain the profile likelihood 95% confidence interval for a predicted value in a logistic regression model in R?
54,013 | How to easily obtain the profile likelihood 95% confidence interval for a predicted value in a logistic regression model in R? | After some research on the profile likelihood function, it seems that R packages exist to estimate the confidence intervals of parameter (Beta) estimates of a logistic regression model with this approach (e.g., ProfileLikelihood), on top of the confint() function that is already available for this. However, for the predicted probability I haven't been able to find any.
For our needs, my statistician collaborator will write our own R scripts for this and if everything works fine, we'll make them available.
Two last things:
the dose.p() function of the MASS package seems to provide an SE at the response scale that corresponds to the Delta method, not the Wald method. We have also encountered some problems with the dose.p() function with small sample sizes. The deltavar() function of the emdbook package seems a better option. For more information, see for instance B. Bolker's webpage: https://bbolker.github.io/stat4c03/HW/hw3_sol.html
The plot that I have included with my initial question shows the predicted probability of observing developed gonads as a function of fish length, but contrary to what I originally thought, the CIs shown were obtained with the Wald method, i.e. the predicted SE on the logit scale is multiplied by plus or minus 1.96 and these values are then back-transformed to the response scale (e.g., Xu and Long 2005). This seems to be the most commonly used approach, but under some circumstances it may provide less reliable results (Brown et al. 2003).
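For reference, a minimal sketch of that commonly used Wald approach (placeholder names: fit is the fitted logistic regression and newdat a data frame of lengths at which to predict):
pr <- predict(fit, newdata = newdat, type = "link", se.fit = TRUE)
lower <- plogis(pr$fit - 1.96 * pr$se.fit) # back-transform logit-scale limits
upper <- plogis(pr$fit + 1.96 * pr$se.fit) # to the probability (response) scale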
Our main objective will be to compare the perfomance of alternative methods to estimate the uncertainty of logistic regression model predictions for the L50 in fish. | How to easily obtain the profile likelihood 95% confidence interval for a predicted value in a logis | After some research on the profile likelihood function, it seems that R packages exist to estimate the confidence intervals of parameter (Beta) estimates of a logistic regression model with this appro | How to easily obtain the profile likelihood 95% confidence interval for a predicted value in a logistic regression model in R?
54,014 | A realistic example of a non identifiable model? | Another classic example of non-identification is a dose response model like the Emax model, which assumes that some (continuous) outcome for patients $i=1,\ldots,N$ for a drug that you can give in different doses obeys the following:
$$Y_i \sim N( E_0 + E_\text{max} \frac{\text{dose}_i}{\text{dose}_i + \text{ED}_{50}}, \sigma^2),$$
where $\text{dose}_i$ is the dose given to patient $i$.
Let's assume we have observed the following data:
i (patient)   $\text{dose}_i$   $Y_i$
1             0 (placebo)        1.2
2             0 (placebo)        0.2
3             0 (placebo)       -0.1
4             0 (placebo)        0.7
5             0 (placebo)       -0.3
6             0 (placebo)       -0.8
7             100 mg            11.7
8             100 mg            12.5
9             100 mg            13.1
10            100 mg            11.4
11            100 mg            10.9
12            100 mg            11.8
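A quick numerical check of what the next paragraph states, using made-up numbers and taking the treated-minus-placebo difference in means to be roughly 12 for illustration: every pair below produces exactly the same fitted means at doses 0 and 100 mg.
ed50 <- c(10, 50, 200) # three very different ED50 values
emax <- 12 * (100 + ed50) / 100 # Emax chosen so that Emax * 100 / (100 + ED50) = 12
cbind(ed50, emax, mean_at_100 = emax * 100 / (100 + ed50)) # identical dose-100 means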
The parameters $E_0$ and $\sigma$ are identified by the model/data, but $E_\text{max}$ and $\text{ED}_{50}$ are not. No matter how much data you collect for dose 0 and 100, you will have a whole curve of combinations of those latter two parameters that will fit the data equally well. The way out of this situation is of course to try additional doses of the drug. | A realistic example of a non identifiable model? | Another classic example of non-identification is a dose response model like the Emax model, which assumes that some (continuous) outcome for patients $i=1,\ldots,N$ for a drug that you can give in dif | A realistic example of a non identifiable model?
54,015 | A realistic example of a non identifiable model? | Non-identification is not just a consequence of having insufficient observations, and indeed, some of the most pernicious examples of not being able to identify parameters from data arise even if we had access to the entire population, as the below example demonstrates. The discussion below is fairly informal and tries to get at intuition. If you would like to see how statisticians formalize the basic intuition, you may consider reading about the potential outcomes model.
An important example where identification issues play a central role is when one wants to give some causal interpretation to the data. Let us consider the following toy example. We want to find out whether or not the drug Snake Oil decreases the risk of heart attacks. We go out and conduct a survey and are able to learn (for a moment, suppose that our sample is large enough to avoid inference issues) the following:
Among those taking Snake Oil in a given year, 1% of individuals have a heart attack in the subsequent year
Among those not taking Snake Oil in a given year, 2% of individuals have a heart attack in the subsequent year
In this setting, a $\theta$ we likely care about is the average degree by which Snake Oil causes an increase or decrease in the risk of heart attack (i.e. the degree to which making someone take Snake Oil who otherwise would not have taken it changes, on average, their probability of having a heart attack in the subsequent year). As we will see, the old statistical adage that "correlation does not imply causation" implies that this $\theta$ is not identified by the above evidence.
To see why, we show that different values of $\theta$ could be plausibly consistent with the above data. One interpretation of the above data (and one that Snake Oil salespeople will tend to gravitate towards) is that if the population who happened to take Snake Oil had not, they would have been identical to the population who did not take Snake Oil, in which case, the $-1\%$ difference can be fully attributed to the causal effect of Snake Oil on heart attack risk: $\theta = -1\%$. However, the following model might also be consistent with the data: Snake Oil has no effect, $\theta=0\%$, but the population who consumes Snake Oil is especially health conscious. They thus take other actions (some of which are actually effective) to try to improve their health. In this case, the consumption of Snake Oil does not actually cause any differences in heart health, rather, knowing whether or not someone consumed Snake Oil tells us something about what type of person a given individual in the survey is, and these underlying differences in types of people is what leads to the difference in heart health outcomes in the subsequent year. | A realistic example of a non identifiable model? | Non-identification is not just a consequence of having insufficient observations, and indeed, some of the most pernicious examples of not being able to identify parameters from data arise even if we h | A realistic example of a non identifiable model?
54,016 | A realistic example of a non identifiable model? | In quantum mechanics appears many non-identifiable models in the problem of phase estimation. For example, when you try to estimate a phase $\theta \in [0, 2\pi)$ of a qubit, the $X \sim Ber(p(\theta))$, where $p(\theta) = \frac{1}{2}\left[1 + \sin(\theta) \right]$ which is $\pi$-periodic. | A realistic example of a non identifiable model? | In quantum mechanics appears many non-identifiable models in the problem of phase estimation. For example, when you try to estimate a phase $\theta \in [0, 2\pi)$ of a qubit, the $X \sim Ber(p(\theta) | A realistic example of a non identifiable model?
54,017 | Expectation of double quadratic form | Although the solution is essentially and already contained in W. Huber's answer, here is a detailed derivation with a non-zero mean:
\begin{align}
\mathbb E(\overbrace{\hat{Y_k}'A\hat{Y_l}}^{\text{real}}\times\overbrace{\hat{Y_k}'A\hat{Y_l}}^{\text{real}})
&= \mathbb E(\hat{Y_k}'A\hat{Y_l}\hat{Y_l}'A'\hat{Y_k})\\
&= \mathbb E[\hat{Y_k}'A\,\mathbb E(\hat{Y_l}\hat{Y_l}'|\hat{Y_k})\,A'\hat{Y_k}]\\
&= \mathbb E[\hat{Y_k}'\underbrace{A\{\text{Cov}(Y_l)+\mathbb E[\hat{Y_l}]\mathbb E[\hat{Y_l}]'\}A'}_{\Large \Sigma}\hat{Y_k}]\\
&=\mathbb E[\text{tr}(\hat{Y_k}'\Sigma\hat{Y_k})]\\
&=\mathbb E[\text{tr}(\hat{Y_k}\hat{Y_k}'\Sigma)]\\
&=\text{tr}(\mathbb E[\hat{Y_k}\hat{Y_k}']\Sigma)\\
&=\text{tr}(\{\text{Cov}(Y_k)+\mathbb E[\hat{Y_k}]\mathbb E[\hat{Y_k}]'\}\Sigma)
\end{align} | Expectation of double quadratic form | Although the solution is essentially and already contained in W. Huber's answer, here is a detailed derivation with a non-zero mean:
\begin{align}
\mathbb E(\overbrace{\hat{Y_k}'A\hat{Y_l}}^{\text{rea | Expectation of double quadratic form
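A quick Monte Carlo check of the final identity in R (a sketch only; it assumes, as the conditioning step above implicitly does, that the two vectors are independent, and uses arbitrary simulated parameters):
set.seed(1)
library(MASS) # for mvrnorm
p <- 3
A <- matrix(rnorm(p * p), p, p)
mu_k <- rnorm(p); mu_l <- rnorm(p)
Sk <- crossprod(matrix(rnorm(p * p), p, p)) # Cov(Y_k)
Sl <- crossprod(matrix(rnorm(p * p), p, p)) # Cov(Y_l)
Yk <- mvrnorm(1e5, mu_k, Sk)
Yl <- mvrnorm(1e5, mu_l, Sl)
q <- rowSums((Yk %*% A) * Yl) # Y_k' A Y_l, draw by draw
Sigma <- A %*% (Sl + tcrossprod(mu_l)) %*% t(A)
c(mc = mean(q^2), formula = sum(diag((Sk + tcrossprod(mu_k)) %*% Sigma)))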
54,018 | How are True Negative and False Negative converted into True Positive and False Positive in ROC curve? | The ROC curve shows the TPR and FPR as you change the threshold. Your question posits that a classifier only has one TPR statistic and one FPR statistic, and that both of these statistics correspond to a threshold at 0.5. This is a common misconception; there is no reason that the threshold must be 0.5. For each choice of threshold, there is a corresponding TPR and FPR statistic. The purpose of the ROC curve is to show the trade-off for each choice of threshold: as your TPR increases, so does your FPR, and vice-versa. | How are True Negative and False Negative converted into True Positive and False Positive in ROC curv | The ROC curve shows the TPR and FPR as you change the threshold. Your question posits that a classifier only has one TPR statistic and one FPR statistic, and that both of these statistics correspond t | How are True Negative and False Negative converted into True Positive and False Positive in ROC curve?
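A small R sketch of that sweep, with hypothetical scores and labels; each threshold yields one (FPR, TPR) point, and plotting all such points traces the ROC curve:
scores <- c(0.9, 0.8, 0.7, 0.55, 0.4, 0.3, 0.2, 0.1) # hypothetical predicted probabilities
labels <- c(1, 1, 0, 1, 0, 1, 0, 0) # hypothetical true classes
for (t in c(0.75, 0.5, 0.25)) {
  pred <- as.numeric(scores >= t)
  tpr <- sum(pred == 1 & labels == 1) / sum(labels == 1)
  fpr <- sum(pred == 1 & labels == 0) / sum(labels == 0)
  cat(sprintf("threshold %.2f: TPR = %.2f, FPR = %.2f\n", t, tpr, fpr))
}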
54,019 | How are True Negative and False Negative converted into True Positive and False Positive in ROC curve? | you vary the threshold which is used to determine positives and negatives, ie at threshold 0.5 we get a point (TP, FP) which we plot, then at threshold 0.45 we get another point (TP, FP) which we plot, etc. These points make the ROC curve | How are True Negative and False Negative converted into True Positive and False Positive in ROC curv | you vary the threshold which is used to determine positives and negatives, ie at threshold 0.5 we get a point (TP, FP) which we plot, then at threshold 0.45 we get another point (TP, FP) which we plo | How are True Negative and False Negative converted into True Positive and False Positive in ROC curve?
54,020 | What is the point of using a Bayesian prior? | There are many excellent thoughts here. There is a short answer to the question. If you want to gauge relative evidence you can sometimes get away without using any outside information. Relative evidence can be summarized by a likelihood ratio in the likelihoodist school of statistics. For example one may use study data to compute the likelihood ratio assuming that true mean blood pressure is 120mmHg vs. the mean being 140mmHg. Or you can quantify evidence more indirectly using p-values (evidence against something, only). If you want to quantify absolute evidence there is no mathematical way to compute "absolute" probabilities without having a prior distribution. So if you wanted to compute the probability that the true mean blood pressure is between 135mmHg and 145mmHg you would need a prior. Likewise if you wanted to compute the probability that a medical treatment lowers mortality instead of just using a frequentist hypothesis test to compute the probability of getting data stranger than ours if the treatment does nothing about disease risk you would need a prior.
The most compelling stories I've seen about the Bayesian approach are Nate Silver's The Signal and the Noise and Bernoulli's Fallacy by Aubrey Clayton. More thoughts are here and here.
An analogy in medical diagnostic testing is often useful. Sensitivity and specificity are sometimes used as test characteristics. These condition on the actual disease status so only provide relative information. To turn them into absolute information (probability of disease) one must use Bayes' rule to factor in disease prevalence (the prior).
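A small worked version of that calculation in R, with hypothetical test characteristics and prevalence:
sens <- 0.90; spec <- 0.95; prev <- 0.02 # assumed values, for illustration only
post <- sens * prev / (sens * prev + (1 - spec) * (1 - prev))
post # about 0.27: a positive test gives roughly a 27% disease probability at 2% prevalence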
So the prior distribution is the mathematical cost of being able to make direct statements that are not just about relative evidence. | What is the point of using a Bayesian prior? | There are many excellent thoughts here. There is a short answer to the question. If you want to gauge relative evidence you can sometimes get away without using any outside information. Relative ev | What is the point of using a Bayesian prior?
54,021 | What is the point of using a Bayesian prior? | Questions
why is using a prior useful?
How do you "not use a prior"? Even the frequentist approach has a prior—it's just unstated, but it's there all the same. There's always a prior, whether you like it or not.
As for which prior you should use, that's a different question, which other answers on this site go over. See for example this question.
Especially if you have start with a prior that is really far-off, then you will be hurting your estimates.
I know what you're trying to say here, but it doesn't really make any sense. For your prior to not "hurt your estimates", it should reflect your data. But that's not the goal of the prior—that's the goal of the posterior. The prior should reflect any prior knowledge you have, and otherwise be "uninformative" (if you believe that priors can be uninformative, which I do).
Example 1
You can verify that your frequentist approach does have a hidden prior by simply redoing your calculation after transforming the parameter $\theta$. For example, let $\beta = e^\theta$. The frequentist approach produces a different distribution and confidence interval. The parametrization is part of the unstated prior.
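As a small sketch of why the parametrization matters: a prior that is flat on theta is no longer flat on beta = exp(theta), because the change of variables introduces a Jacobian.
theta <- seq(-3, 3, length.out = 200)
beta <- exp(theta)
# under a flat (constant) prior on theta, the implied prior density on beta is
# proportional to 1 / beta (the Jacobian of the log transform)
plot(beta, 1 / beta, type = "l", xlab = "beta", ylab = "implied prior density (unnormalized)")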
Example 2
Saying that you're going to use maximum likelihood to make your estimate is identical to using MAP with a flat prior. There is nevertheless a prior, which is dependent on the model you choose. As before, this includes its parametrization. You can examine your prior by evaluating estimates before updating parameters given new data. | What is the point of using a Bayesian prior? | Questions
why is using a prior useful?
How do you "not use a prior"? Even the frequentist approach has a prior—it's just unstated, but it's there all the same. There's always a prior, whether you | What is the point of using a Bayesian prior?
54,022 | What is the point of using a Bayesian prior? | Here's a example of how to use Bayesian priors in a way that even "frequentists" agree is useful.
Let's say you want to estimate how well students at 100 different schools are doing in math, so you can identify schools that are doing particularly well or poorly. But you can only assess math knowledge through a test that not all students took. At most schools well over 100 students take the test, but at some schools only a few take it (assume nonresponse is random). Overall, the average score on the test is 85%, but obviously you are more interested in the average score at particular schools.
Now let's say that at school B only 5 students took the test and their average score was 50%. What are we to make of this?
A purely frequentist approach would be to take the data at its word and treat 50% as the best estimate available of the average score at that school, with some fairly large confidence intervals of course.
But that seems problematic. We know that overall the average test score across all students is an 85%. Given that we only have data from five students at this school, doesn't it seem MORE likely that the true value is actually somewhere closer to the 85% than 50%? The Bayesian approach to this problem would be to treat the overall mean as a "prior" and then update that prior with the additional data we got from the five students at this school. This is going to "shrink" our final estimate towards the mean by some amount. Since we only have five students at this school it's going to shrink it by quite a bit, since the data are weak. At a school with 150 respondents we would put more trust in the data and only shrink a little bit.
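A minimal R sketch of that shrinkage with made-up variance components (in a real multilevel model the amount of shrinkage would be estimated from the data rather than assumed):
overall_mean <- 0.85
school_mean <- 0.50
n <- 5
sigma2_within <- 0.04 # assumed within-school variance of scores
tau2_between <- 0.01 # assumed between-school variance of true school means
w <- tau2_between / (tau2_between + sigma2_within / n) # weight on the school's own mean
w * school_mean + (1 - w) * overall_mean # shrunken estimate, roughly 0.66 here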
This approach is called "empirical Bayes estimation" and it's widely used in multilevel modeling, even by people who don't think of themselves as Bayesian, and more explicitly Bayesian versions of this approach (Google "Multilevel regression with post-stratification") are very common in political science when trying to get estimates of public opinion in small states
This is in fact what Fivethirtyeight.com does to predict elections in the US. To estimate the chance that a given candidate (say Trump) will win a state they look at polling at that state, but then they "shrink" the result of that polling towards a prior that they got from other data.
For example, let's say that before the 2020 campaign even starts you decide that, based on demographic trends, partisan affiliation and presidential approval numbers, Trump is only likely to get 30% of the vote in Vermont (this is our prior). Then someone does a poll of 100 people and finds that Trump is actually winning in Vermont with 51% of the vote. A frequentist would have to either put total trust in this result or ignore it completely. A Bayesian can do something more subtle: we use Bayes' rule to shrink this estimate towards our prior by some amount. In other words, we don't actually believe that this poll means that Trump is actually ahead in Vermont, but we also no longer totally believe our old prior that Trump was only going to get 30%. Maybe now we think that he will get 40% of the vote. Then when we get even more new data, we update the new prior again.
Philosophical disagreements aside, this approach really works, which is why FiveThirtyEight's forecasts are so accurate:
https://projects.fivethirtyeight.com/checking-our-work/ | What is the point of using a Bayesian prior? | Here's a example of how to use Bayesian priors in a way that even "frequentists" agree is useful.
Let's say you want to estimate the know how well students at 100 different schools are doing in math, | What is the point of using a Bayesian prior?
Here's a example of how to use Bayesian priors in a way that even "frequentists" agree is useful.
Let's say you want to estimate the know how well students at 100 different schools are doing in math, so you can identify schools that are doing particularly well or poorly. But you can only assess math knowledge through a test that not all students took. At most schools well over 100 students take the test but at some schools only a few take it due (assume nonresponse is random). Overall, the average score on test is 85% but obviously you are more interested in the average score at particular schools.
Now let's say that at school B only 5 students took the test and their average score was 50%. What are we to make of this?
A purely frequentist approach would to take the data at it's word and treat 50% as the best estimate available of the average score at that school, with some fairly large confidence intervals of course.
But that seems problematic. We know that overall the average test score across all students is an 85%. Given that we only have data from five students at this school, doesn't it seem MORE likely that the true value is actually somewhere closer to the 85% than 50%? The Bayesian approach to this problem would be to treat the overall mean as a "prior" and then update that prior with the additional data we got from the five students at this school. This is going to "shrink" our final estimate towards the mean by some amount. Since we only have five students at this school it's going to shrink it by quite a bit, since the data are weak. At a school with 150 respondents we would put more trust in the data and only shrink a little bit.
This approach is called "empirical Bayes estimation" and it's widely used in multilevel modeling, even by people who don't think of themselves as Bayesian, and more explicitly Bayesian versions of this approach (Google "Multilevel regression with post-stratification") are very common in political science when trying to get estimates of public opinion in small states
This is in fact what Fivethirtyeight.com does to predict elections in the US. To estimate the chance that a given candidate (say Trump) will win a state they look at polling at that state, but then they "shrink" the result of that polling towards a prior that they got from other data.
For example, let's say that before the 2020 campaign even starts you decide that, based on demographic trends, partisan affiliation and presidential approval numbers, Trump is only likely to get 30% of the vote in Vermont (this is our prior). Then someone does a poll of 100 people and finds that Trump is actually winning in Vermont with 51% of the vote. A frequentist would have to either put total trust in this result or ignore it completely. A Bayesian can do something more subtle: we use Bayes' rule to shrink this estimate towards our prior by some amount. In other words we don't actually believe that this poll means that Trump is actually ahead in Vermont, but we also no longer totally believe our old prior that Trump was only going to get 30%. Maybe now we think that we will win 40% of the vote. Then when we get even more new data, we update the new prior again.
Philosophical disagreements aside, this approach really works, which is why FiveThirtyEight's forecasts are so accurate:
https://projects.fivethirtyeight.com/checking-our-work/ | What is the point of using a Bayesian prior?
Here's a example of how to use Bayesian priors in a way that even "frequentists" agree is useful.
Let's say you want to estimate the know how well students at 100 different schools are doing in math, |
54,023 | What is the point of using a Bayesian prior? | So an important area of work that I am involved in is decision making under uncertainty, particularly where money is placed at risk. You are asking the wrong question.
The first question should be when and where should I use a Bayesian method and when should I not, maybe never, use a Bayesian method. The subsidiary question would be about why you care about the answer. An academic and a person solving an applied problem, even if the question is identical, probably should not use the same method.
Bayesian methods are good at helping me understand my beliefs and change them as new information comes in. That me could be an us and the my could be an our. If you have no beliefs on the topic, why do you even need to consider the topic?
Frequentist and Maximum Likelihood methods work really well when we lack prior belief because they minimize the maximum amount of risk that you will be facing. Indeed, any two people that get the same result on a t-test with the same cutoff criterion should infer the same thing and decide the same thing. That should not be true with Bayesian methods.
Now let us get back to my area of expertise. How should I gamble money?
Well, it turns out that there are two very nice binding sets of rules that govern my behavior when money is involved. The first is the Dutch Book Theorem. If I were to massively oversimplify it, it says that if you gamble, you can never use a non-Bayesian method or it is possible to force you to take a sure loss. In some cases, I can force you to take a sure loss 100% of the time.
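As a toy illustration of that sure loss (the numbers are invented for the sketch): suppose you announce a probability of 0.6 for an event A and also 0.6 for its complement, and, as in the usual Dutch book setup, you are willing to take either side of a bet at those prices.

price_A    <- 0.6   # your announced probability of A
price_notA <- 0.6   # your announced probability of not-A (incoherent: they sum to 1.2)
paid       <- price_A + price_notA  # an opponent sells you a $1 claim on A and a $1 claim on not-A
received   <- 1                     # exactly one of the two claims pays off
received - paid                     # -0.2: a guaranteed loss no matter what happens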
If you are in finance, as I am, methods such as ordinary least squares can guarantee a loss to the user. I do some interesting training exercises to show how that is done. You get some seriously stunned looks when they lose and realize they could never have won. Simple rule: you cannot use non-Bayesian methods when placing money at risk in a gambling or market type of situation.
The second rule comes because the Dutch Book Theorem and its converse imply that it is necessary to use a Bayesian method but it doesn't say that it is sufficient. It turns out that it is not a sufficient criterion.
The second rule requires the use of a proper prior built from actual information. The reason is that all admissible Frequentist rules are either the limiting form of a Bayesian rule or match the Bayesian outcome in every sample. That leads to the question of whether or not the Frequentist solution inherits the Bayesian blessing when gambling, or if there are limits on Bayesian gambling.
The result is that there are limits on Bayesian gambling. A market maker or bookie that is clueless about the game they are playing can be forced to take losses by an informed actor. Unfortunately, I don't have citations where I am at. I print things out and put them in binders and I am not near my binders. Nonetheless, if you did an academic search on the Dutch Book Theorem, you would likely come across them pretty quickly.
Your question is about the obvious problem of "what if the prior is wrong?" Well, it can never be wrong because, for it to be valid, it should reflect your beliefs. It isn't that the prior is wrong, it is that your beliefs are wrong. The data should update your beliefs. If you are very prejudiced, then they may be updated by an almost imperceptibly small amount.
Remember, in the Frequentist world, $\theta$ is a fixed point. There is a correct single answer. In the Bayesian world, $\theta$ is a random variable. Even if there is a fixed, correct answer, it is quite likely that you will never find it. That is okay, you have your distribution and if it is in the near neighborhood of reality, then all is well.
Consider, for example, this belief: you cannot pass COVID on to others because COVID does not exist, therefore you do not need to isolate or quarantine if somebody tells you that you are exposed.
That is a belief. It is wrong, but it is a belief. Given the right type of data, you may interfere with or alter that belief. Bayesian methods begin with you and where you are at.
How do you know that your sample of French people was representative? Could you not have acquired a pleasant and welcoming subsample by chance? Why trust the data alone? Why not, instead, question if you are prejudiced? | What is the point of using a Bayesian prior? | So an important area of work that I am involved in is decision making under uncertainty, particularly where money is placed at risk. You are asking the wrong question.
The first question should be wh | What is the point of using a Bayesian prior?
So an important area of work that I am involved in is decision making under uncertainty, particularly where money is placed at risk. You are asking the wrong question.
The first question should be when and where should I use a Bayesian method and when should I not, maybe never, use a Bayesian method. The subsidiary question would be about why you care about the answer. An academic and a person solving an applied problem, even if the question is identical, probably should not use the same method.
Bayesian methods are good at helping me understand my beliefs and change them as new information comes in. That me could be an us and the my could be an our. If you have no beliefs on the topic, why do you even need to consider the topic?
Frequentist and Maximum Likelihood methods work really well when we lack prior belief because they minimize the maximum amount of risk that you will be facing. Indeed, any two people that get the same result on a t-test with the same cutoff criterion should infer the same thing and decide the same thing. That should not be true with Bayesian methods.
Now let us get back to my area of expertise. How should I gamble money?
Well, it turns out that there are two very nice binding sets of rules that govern my behavior when money is involved. The first is the Dutch Book Theorem. If I were to massively oversimplify it, it says that if you gamble, you can never use a non-Bayesian method or it is possible to force you to take a sure loss. In some cases, I can force you to take a sure loss 100% of the time.
If you are in finance, as I am, methods such as ordinary least squares can guarantee a loss to the user. I do some interesting training exercises to show how that is done. You get some seriously stunned looks when they lose and realize they could never have won. Simple rule: you cannot use non-Bayesian methods when placing money at risk in a gambling or market type of situation.
The second rule comes because the Dutch Book Theorem and its converse imply that it is necessary to use a Bayesian method but it doesn't say that it is sufficient. It turns out that it is not a sufficient criterion.
The second rule requires the use of a proper prior built from actual information. The reason is that all admissible Frequentist rules are either the limiting form of a Bayesian rule or match the Bayesian outcome in every sample. That leads to the question of whether or not the Frequentist solution inherits the Bayesian blessing when gambling, or if there are limits on Bayesian gambling.
The result is that there are limits on Bayesian gambling. A market maker or bookie that is clueless about the game they are playing can be forced to take losses by an informed actor. Unfortunately, I don't have citations where I am at. I print things out and put them in binders and I am not near my binders. Nonetheless, if you did an academic search on the Dutch Book Theorem, you would likely come across them pretty quickly.
Your question is about the obvious problem of "what if the prior is wrong?" Well, it can never be wrong because, for it to be valid, it should reflect your beliefs. It isn't that the prior is wrong, it is that your beliefs are wrong. The data should update your beliefs. If you are very prejudiced, then they may be updated by an almost imperceptibly small amount.
Remember, in the Frequentist world, $\theta$ is a fixed point. There is a correct single answer. In the Bayesian world, $\theta$ is a random variable. Even if there is a fixed, correct answer, it is quite likely that you will never find it. That is okay, you have your distribution and if it is in the near neighborhood of reality, then all is well.
You cannot pass COVID on to others because COVID does not exist, therefore you do not need to isolate or quarantine if somebody tells you that you are exposed.
That is a belief. It is wrong, but it is a belief. Given the right type of data, you may interfere with or alter that belief. Bayesian methods begin with you and where you are at.
How do you know that your sample of French people was representative? Could you not have acquired a pleasant and welcoming subsample by chance? Why trust the data alone? Why not, instead, question if you are prejudiced? | What is the point of using a Bayesian prior?
So an important area of work that I am involved in is decision making under uncertainty, particularly where money is placed at risk. You are asking the wrong question.
The first question should be wh |
54,024 | What is the point of using a Bayesian prior? | From whatever little I know about Bayesian methods, the best use case is when you have limited pieces of evidence which by themselves are not enough to estimate a frequentist probability reliably. But the little evidence you have can combine with an informative prior to take you closer to the truth.
In the particular example you provided, I don't see a great use case for a Bayesian application, but consider how you would solve the following problem without using a Bayesian interpretation (a question from Arbital):
Suppose there's a bathtub full of coins:
Half the coins are "fair" and have a 50% probability of coming up Heads each time they are thrown.
A third of the coins are biased to produce Heads 25% of the time (Tails 75%).
The remaining sixth of the coins are biased to produce Heads 75% of the time.
You randomly draw a coin, flip it three times, and get the result HTH. What's the chance this is a fair coin?
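For reference, here is the Bayesian calculation that the prior makes possible (a short R sketch using exactly the numbers above):

prior   <- c(fair = 1/2, heads25 = 1/3, heads75 = 1/6)
p_heads <- c(fair = 0.5, heads25 = 0.25, heads75 = 0.75)
lik <- p_heads * (1 - p_heads) * p_heads          # P(HTH | coin type)
posterior <- prior * lik / sum(prior * lik)
posterior["fair"]                                 # 8/13, about 0.615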
Would it be doable if you throw away the prior information and see how you can answer the question maybe rephrased as "what is the probability of heads for a single toss of this coin which gave a HTH"? | What is the point of using a Bayesian prior? | From whatever little I know about Bayesian method, the best use case is when you have limited pieces of evidence which by themselves are not enough to make a frequentist probability with reliability. | What is the point of using a Bayesian prior?
From whatever little I know about Bayesian methods, the best use case is when you have limited pieces of evidence which by themselves are not enough to estimate a frequentist probability reliably. But the little evidence you have can combine with an informative prior to take you closer to the truth.
In the particular example you provided, I don't see a great use case for a Bayesian application, but consider how you would solve the following problem without using a Bayesian interpretation (a question from Arbital):
Suppose there's a bathtub full of coins:
Half the coins are "fair" and have a 50% probability of coming up Heads each time they are thrown.
A third of the coins are biased to produce Heads 25% of the time (Tails 75%).
The remaining sixth of the coins are biased to produce Heads 75% of the time.
You randomly draw a coin, flip it three times, and get the result HTH. What's the chance this is a fair coin?
Would it be doable if you throw away the prior information and see how you can answer the question maybe rephrased as "what is the probability of heads for a single toss of this coin which gave a HTH"? | What is the point of using a Bayesian prior?
From whatever little I know about Bayesian method, the best use case is when you have limited pieces of evidence which by themselves are not enough to make a frequentist probability with reliability. |
54,025 | What is the point of using a Bayesian prior? | A prior is useful for incorporating historical information, analogous to a fixed-effect meta-analysis. Because Bayesians define parameters as random variables and probability as the belief of the experimenter, they may feel much freer to incorporate information from other studies or their personal beliefs. In contrast, because frequentists are concerned with the performance of their testing procedure, they may scrutinize the compatibility of studies. The frequentist needs to feel comfortable assuming the unknown fixed true quantity being investigated in each study is identical (even if the subject-level observations are not necessarily exchangeable). This may be a big assumption to make, analogous to assuming an informative Bayesian prior. Using a purely subjective Bayesian prior is analogous to performing a meta-analysis with hypothetical experimental data.
I will argue against using a Bayesian prior and in favor of a frequentist meta-analysis, beginning with an example taken from Decision Making in Drug Development via Confidence Distributions (Johnson 2021). The primary reason not to use a Bayesian prior is that a subjective interpretation of probability as a measure of belief is unfalsifiable (1)(2).
A confidence distribution $H(\theta,\boldsymbol{x})$ as a function of the hypothesis and observed data has the appearance of a cdf on the parameter space and depicts p-values and confidence intervals of all levels based on a particular testing procedure. This same information can be displayed as a confidence density and a confidence curve.
The figure below depicts a meta-analysis using confidence distributions for a binomial proportion $\theta$. Density (a) represents an informative prior distribution based on historical data and a vague conjugate prior with an estimate of 0.90 and a sample size of n = 50. This same information is depicted in (b) as a confidence density resulting from a likelihood ratio test. A similar confidence density can be produced by inverting a Wald test with a logit link. The posterior based on the current data binomial likelihood and a vague conjugate prior is shown in (c) with an estimate of 0.87 resulting from n = 30. This same information can be represented as a likelihood ratio confidence density, (d). Using Bayes' theorem, (a) and (c) combine to form (e). Multiplying the historical and current likelihoods and inverting a likelihood ratio test forms (f). This multiplication of independent likelihoods is precisely what Bayes' theorem accomplishes (plus normalization), without the inversion of a hypothesis test.
The confidence densities above can be interpreted under a Neyman-Pearson framework using a pre-specified null hypothesis and type I error rate. They can also be interpreted under a Fisherian framework of evidential p-values to compare the plausibility of multiple hypotheses, not necessarily pre-specified. Many Bayesians will use a Dutch book argument in support of posterior probability. Here is an example where the long-run characteristics of a likelihood ratio test are used to form a Dutch book against two different Bayesian posteriors. (3)
A prior is useful for incorporating historical information, analogous to a fixed-effect meta-analysis. Because Bayesians define parameters as random variables and probability as the belief of the experimenter, they may feel much freer to incorporate information from other studies or their personal beliefs. In contrast, because frequentists are concerned with the performance of their testing procedure, they may scrutinize the compatibility of studies. The frequentist needs to feel comfortable assuming the unknown fixed true quantity being investigated in each study is identical (even if the subject-level observations are not necessarily exchangeable). This may be a big assumption to make, analogous to assuming an informative Bayesian prior. Using a purely subjective Bayesian prior is analogous to performing a meta-analysis with hypothetical experimental data.
I will argue against using a Bayesian prior and in favor of a frequentist meta-analysis, beginning with an example taken from Decision Making in Drug Development via Confidence Distributions (Johnson 2021). The primary reason not to use a Bayesian prior is that a subjective interpretation of probability as a measure of belief is unfalsifiable (1)(2).
A confidence distribution $H(\theta,\boldsymbol{x})$ as a function of the hypothesis and observed data has the appearance of a cdf on the parameter space and depicts p-values and confidence intervals of all levels based on a particular testing procedure. This same information can be displayed as a confidence density and a confidence curve.
The figure below depicts a meta-analysis using confidence distributions for a binomial proportion $\theta$. Density (a) represents an informative prior distribution based on historical data and a vague conjugate prior with an estimate of 0.90 and a sample size of n = 50. This same information is depicted in (b) as a confidence density resulting from a likelihood ratio test. A similar confidence density can be produced by inverting a Wald test with a logit link. The posterior based on the current data binomial likelihood and a vague conjugate prior is shown in (c) with an estimate of 0.87 resulting from n = 30. This same information can be represented as a likelihood ratio confidence density, (d). Using Bayes' theorem, (a) and (c) combine to form (e). Multiplying the historical and current likelihoods and inverting a likelihood ratio test forms (f). This multiplication of independent likelihoods is precisely what Bayes' theorem accomplishes (plus normalization), without the inversion of a hypothesis test.
The confidence densities above can be interpreted under a Neyman-Pearson framework using a pre-specified null hypothesis and type I error rate. They can also be interpreted under a Fisherian framework of evidential p-values to compare the plausibility of multiple hypotheses, not necessarily pre-specified. Many Bayesians will use a Dutch book argument in support of posterior probability. Here is an example where the long-run characteristics of a likelihood ratio test are used to form a Dutch book against two different Bayesian posteriors. (3)
A prior is useful for incorporating historical information, analogous to a fixed effect meta-analysis. Because Bayesians define parameters as random variables and probability as the belief of the exp |
54,026 | Are Random Forests trained with the whole dataset? | All previous answers are correct.
I would like just to clarify one point of your question:
My question is, how does this make sense? Because in theory, if we passed the whole training set to each individual model, wouldn't all the models be exactly the same?
As you have seen in previous answers, we are doing Bootstrapping, so the weak decision trees we are creating are going to be different since they are trained on different datasets.
But, imagine you set bootstrap=False. Will you obtain the same weak trees since you have the same dataset? The answer is NO.
Why? Because Random Forest also randomly selects a subset of the features to consider when growing every decision tree. So you will have the same dataset, but different features to play with.
The hyperparameter to tune this idea is called max_features. | Are Random Forests trained with the whole dataset? | All previous answers are correct.
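To make the two sources of randomness concrete, here is a small sketch with R's randomForest package (not scikit-learn, but the same ideas: mtry plays the role of max_features, and replace/sampsize play the role of bootstrap/max_samples):

library(randomForest)
set.seed(1)
rf <- randomForest(Species ~ ., data = iris,
                   ntree    = 100,
                   mtry     = 2,            # features considered at each split
                   replace  = TRUE,         # bootstrap sampling of the rows
                   sampsize = nrow(iris))   # draw as many rows as the data has
rf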
I would like just to clarify one point of your question:
My question is, how does this make sense? Because in theory, if we passed the whole training set to each ind | Are Random Forests trained with the whole dataset?
All previous answers are correct.
I would like just to clarify one point of your question:
My question is, how does this make sense? Because in theory, if we passed the whole training set to each individual model, wouldn't all the models be exactly the same?
As you have seen in previous answers, we are doing Bootstrapping, so the weak decision trees we are creating are going to be different since they are trained on different datasets.
But, imagine you set bootstrap=False. Will you obtain the same weak trees since you have the same dataset? The answer is NO.
Why? Because Random Forest also randomly selects a subset of the features to consider when growing every decision tree. So you will have the same dataset, but different features to play with.
The hyperparameter to tune this idea is called max_features. | Are Random Forests trained with the whole dataset?
All previous answers are correct.
I would like just to clarify one point of your question:
My question is, how does this make sense? Because in theory, if we passed the whole training set to each ind |
54,027 | Are Random Forests trained with the whole dataset? | As noticed in the comment, random forest uses bootstrap resamples of the training data. What this means is that for each tree we sample randomly with replacement the max_samples number of observations from the training data. When using bootstrap in statistics you generally want the number of bootstrap observations to be equal to the number of observations in your data, because you want the bootstrap samples to resemble the original data. Using max_samples higher than the number of observations in the training data would be rather pointless, but sometimes people may choose it to be smaller to speed up the computations. This would be a bad idea in statistics because using smaller bootstrap samples would not give you an accurate estimate of things like standard errors, but when training random forest you are only concerned with making predictions, not inference. In such a case, you need to empirically verify what are the consequences for the quality of the predictions if you make the max_samples number smaller. | Are Random Forests trained with the whole dataset? | As noticed in the comment, random forest uses bootstrap resamples of the training data. What this means is that for each tree we sample randomly with replacement the max_samples number of observations | Are Random Forests trained with the whole dataset?
As noticed in the comment, random forest uses bootstrap resamples of the training data. What this means is that for each tree we sample randomly with replacement the max_samples number of observations from the training data. When using bootstrap in statistics you generally want the number of bootstrap observations to be equal to the number of observations in your data, because you want the bootstrap samples to resemble the original data. Using max_samples higher than the number of observations in the training data would be rather pointless, but sometimes people may choose it to be smaller to speed up the computations. This would be a bad idea in statistics because using smaller bootstrap samples would not give you an accurate estimate of things like standard errors, but when training random forest you are only concerned with making predictions, not inference. In such a case, you need to empirically verify what are the consequences for the quality of the predictions if you make the max_samples number smaller. | Are Random Forests trained with the whole dataset?
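A quick way to see what such a bootstrap resample looks like (an illustrative simulation, not tied to any particular dataset):

set.seed(1)
n <- 1000
idx <- sample(n, size = n, replace = TRUE)   # one bootstrap sample of the row indices
length(unique(idx)) / n                      # roughly 0.63: about 63% of the rows appear, the rest are duplicates of them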
As noticed in the comment, random forest uses bootstrap resamples of the training data. What this means is that for each tree we sample randomly with replacement the max_samples number of observations |
54,028 | Are Random Forests trained with the whole dataset? | From the explanation of the BaggingClassifier:
max_samples: The number of samples to draw from X to train each base
estimator (with replacement by default, see bootstrap for more
details).
So, since the samples are drawn with replacement, it's a bootstrap sample with the same size as the whole dataset, but not exactly the same dataset.
max_samples: The number of samples to draw from X to train each base
estimator (with replacement by default, see bootstrap for more
details).
So, since | Are Random Forests trained with the whole dataset?
From the explanation of the BaggingClassifier:
max_samples: The number of samples to draw from X to train each base
estimator (with replacement by default, see bootstrap for more
details).
So, since the samples are drawn with replacement, it's a bootstrap sample with the same size as the whole dataset, but not exactly the same dataset.
From the explanation of the BaggingClassifier:
max_samples: The number of samples to draw from X to train each base
estimator (with replacement by default, see bootstrap for more
details).
So, since |
54,029 | Can $X_1$ and $X_2$ be independent conditioning on $X_1+X_2$? | $X_1 ~|~ X_1+X_2$ and $X_2 ~|~ X_1+X_2$ are not independent. They are perfectly negatively correlated distributions. | Can $X_1$ and $X_2$ be independent conditioning on $X_1+X_2$? | $X_1 ~|~ X_1+X_2$ and $X_2 ~|~ X_1+X_2$ are not independent. They are perfectly negatively correlated distributions. | Can $X_1$ and $X_2$ be independent conditioning on $X_1+X_2$?
$X_1 ~|~ X_1+X_2$ and $X_2 ~|~ X_1+X_2$ are not independent. They are perfectly negatively correlated distributions. | Can $X_1$ and $X_2$ be independent conditioning on $X_1+X_2$?
$X_1 ~|~ X_1+X_2$ and $X_2 ~|~ X_1+X_2$ are not independent: conditional on $X_1+X_2 = s$ we have $X_2 = s - X_1$, so they are perfectly negatively correlated.
54,030 | Can $X_1$ and $X_2$ be independent conditioning on $X_1+X_2$? | It's possible if one of them is constant - for example if $X_1$ has a Bernoulli distribution and $X_2$ is always equal to zero. | Can $X_1$ and $X_2$ be independent conditioning on $X_1+X_2$? | It's possible if one of them is constant - for example if $X_1$ has a Bernoulli distribution and $X_2$ is always equal to zero. | Can $X_1$ and $X_2$ be independent conditioning on $X_1+X_2$?
It's possible if one of them is constant - for example if $X_1$ has a Bernoulli distribution and $X_2$ is always equal to zero. | Can $X_1$ and $X_2$ be independent conditioning on $X_1+X_2$?
It's possible if one of them is constant - for example if $X_1$ has a Bernoulli distribution and $X_2$ is always equal to zero. |
54,031 | How does one design a custom loss function? What features make a loss function "good"? | To answer the titular question, a key characteristic of a loss function is that the loss is minimized at the target values $y$. In other words, if you're estimating a quantity, the least loss should be assigned to the estimates that are exactly correct.
The $\mathcal{L}_\text{subtract}$ and divisive losses are not loss functions because they do not involve the target variables. In other words, you can achieve the absolute minimum value of 0 without learning anything about what you want to model.
Consider this loss function $L(\hat{y}) = (\hat{y} - c)^2$ for model outputs $\hat{y}$, a constant $c$ and model targets $y$. This function is clearly convex in $\hat{y},c$, but it has nothing to do with the model targets $y$. If $c$ is a fixed target and our model outputs $\hat{y}$, then we achieve a minimum at $\hat{y}=c$, and this is true no matter what the targets $y$ happen to be.
My suggestion is to use $\mathcal{L}_\text{subtract}$ as a regularization to augment a typical loss function. So if you're using a binomial cross entropy loss, you could write down a total loss as
$$
\text{total loss} = \text{BCE}(y,\hat{y}) + \lambda \mathcal{L}_\text{subtract}
$$
for $\lambda >0$ some tuning parameter controlling how much deviation can occur from one layer to the next. Likewise, you can use an alternative loss in place of $\text{BCE}$ to model other types of data.
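A minimal numeric sketch of how the two terms combine (plain R, no deep-learning framework; penalty() is just a placeholder standing in for $\mathcal{L}_\text{subtract}$, whose exact form is defined in the question):

bce <- function(y, p) -mean(y * log(p) + (1 - y) * log(1 - p))
penalty <- function(a_prev, a_next) mean((a_next - a_prev)^2)   # placeholder regularizer
total_loss <- function(y, p, a_prev, a_next, lambda = 0.1)
  bce(y, p) + lambda * penalty(a_prev, a_next)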
This seems related to, but not exactly the same as "Regularizing RNNs by Stabilizing Activations" by David Krueger & Roland Memisevic. Perhaps their work and bibliography is of interest. | How does one design a custom loss function? What features make a loss function "good"? | To answer the titular question, a key characteristic of a loss function is that the loss is minimized at the target values $y$. In other words, if you're estimating a quantity, the least loss should b | How does one design a custom loss function? What features make a loss function "good"?
To answer the titular question, a key characteristic of a loss function is that the loss is minimized at the target values $y$. In other words, if you're estimating a quantity, the least loss should be assigned to the estimates that are exactly correct.
The $\mathcal{L}_\text{subtract}$ and divisive losses are not loss functions because they do not involve the target variables. In other words, you can achieve the absolute minimum value of 0 without learning anything about what you want to model.
Consider this loss function $L(\hat{y}) = (\hat{y} - c)^2$ for model outputs $\hat{y}$, a constant $c$ and model targets $y$. This function is clearly convex in $\hat{y},c$, but it has nothing to do with the model targets $y$. If $c$ is a fixed target and our model outputs $\hat{y}$, then we achieve a minimum at $\hat{y}=c$, and this is true no matter what the targets $y$ happen to be.
My suggestion is to use $\mathcal{L}_\text{subtract}$ as a regularization to augment a typical loss function. So if you're using a binomial cross entropy loss, you could write down a total loss as
$$
\text{total loss} = \text{BCE}(y,\hat{y}) + \lambda \mathcal{L}_\text{subtract}$$
for $\lambda >0$ some tuning parameter controlling how much deviation can occur from one layer to the next. Likewise, you can use an alternative loss in place of $\text{BCE}$ to model other types of data.
This seems related to, but not exactly the same as "Regularizing RNNs by Stabilizing Activations" by David Krueger & Roland Memisevic. Perhaps their work and bibliography is of interest. | How does one design a custom loss function? What features make a loss function "good"?
To answer the titular question, a key characteristic of a loss function is that the loss is minimized at the target values $y$. In other words, if you're estimating a quantity, the least loss should b |
54,032 | Why isn't the ROC curve naturally plotted in 3D? [duplicate] | First off, you are right that we could in principle plot the ROC curve as a curve in three-dimensional space. One axis would be the threshold, and the other two would be the TPR and the FPR.
As to why this is not done: I don't know who invented the ROC curve and what their thought processes were (that might be interesting to track down), but one reason is that a curve in 3D space is just harder to understand - especially since we cannot actually plot a 3D curve, only some projection of it onto 2D paper or a screen, even if we can use clever tricks like rotating the projection in an animation. Thus, we remove the threshold axis by plotting only a parameterized curve. Note that the concept also appears elsewhere - for instance, you could plot the trajectory of a particle parameterized by time.
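Here is a small simulated sketch of that parameterization (none of it comes from the question; it just shows that every point of the 2D curve corresponds to one threshold, and that the threshold can be kept visible in a separate panel if needed):

set.seed(1)
score <- c(rnorm(100, mean = 1), rnorm(100, mean = 0))  # scores for positives, then negatives
truth <- rep(c(1, 0), each = 100)
thr <- sort(unique(score), decreasing = TRUE)
tpr <- sapply(thr, function(t) mean(score[truth == 1] >= t))
fpr <- sapply(thr, function(t) mean(score[truth == 0] >= t))
plot(fpr, tpr, type = "l")                              # the usual 2D ROC curve
plot(thr, tpr, type = "l"); lines(thr, fpr, lty = 2)    # same information with the threshold explicit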
To your three bullet points:
You are right that we don't know which point on the curve belongs to which threshold. However, both the TPR and the FPR change monotonically with the threshold (under the usual convention of predicting the positive class when the score exceeds the threshold, both increase as the threshold decreases), so if point A is to the right of and above point B, the threshold at A is lower than at B.
The ROC is an instrument that attempts to evaluate your entire model, not a classifier that includes a threshold. Setting the threshold is not part of the modeling step (where we want to get a handle on class membership probabilities), but on the decision step (where we want to make a decision, based on probabilities, but also on costs). Any "optimal" threshold cannot be set based on the statistics alone, but will also require knowledge about costs. Take a look at Is threshold moving unnecessary in balanced classification problem?
Thresholds at zero or one will indeed likely not make any sense. I find it hard to imagine a situation where we would model class memberships, but subsequently decide to treat every instance as if they all belonged to class A or B. However, as per my answer to the question linked above, it can make sense to use very large or very small thresholds, depending on the costs. If you are sitting in the control room of a nuclear reactor, and your gauges give you a very small probability that the reactor could go out of control, then you take action, even if that probability is tiny - simply because the costs of not doing anything are astronomical in the improbable case that something Bad does occur. | Why isn't the ROC curve naturally plotted in 3D? [duplicate] | First off, you are right that we could in principle plot the ROC curve as a curve in three-dimensional space. One axis would be the threshold, and the other two would be the TPR and the FPR.
As to why | Why isn't the ROC curve naturally plotted in 3D? [duplicate]
First off, you are right that we could in principle plot the ROC curve as a curve in three-dimensional space. One axis would be the threshold, and the other two would be the TPR and the FPR.
As to why this is not done: I don't know who invented the ROC curve and what their thought processes are (that might be interesting to track down), but one thing is that a curve in 3d space is just harder to understand. Especially since we are limited to not plotting a 3d curve, but some projection to 2d on paper or a screen, even if we can use clever tricks like rotating the projection in an animation. Thus, we remove the threshold by only plotting a parameterized curve. Note that the concept also appears elsewhere - for instance, you could plot a trajectory of a particle parameterized by time.
To your three bullet points:
You are right that we don't know which point on the curve belongs to which threshold. However, both the TPR and the FPR change monotonically with the threshold (under the usual convention of predicting the positive class when the score exceeds the threshold, both increase as the threshold decreases), so if point A is to the right of and above point B, the threshold at A is lower than at B.
The ROC is an instrument that attempts to evaluate your entire model, not a classifier that includes a threshold. Setting the threshold is not part of the modeling step (where we want to get a handle on class membership probabilities), but on the decision step (where we want to make a decision, based on probabilities, but also on costs). Any "optimal" threshold cannot be set based on the statistics alone, but will also require knowledge about costs. Take a look at Is threshold moving unnecessary in balanced classification problem?
Thresholds at zero or one will indeed likely not make any sense. I find it hard to imagine a situation where we would model class memberships, but subsequently decide to treat every instance as if they all belonged to class A or B. However, as per my answer to the question linked above, it can make sense to use very large or very small thresholds, depending on the costs. If you are sitting in the control room of a nuclear reactor, and your gauges give you a very small probability that the reactor could go out of control, then you take action, even if that probability is tiny - simply because the costs of not doing anything are astronomical in the improbable case that something Bad does occur. | Why isn't the ROC curve naturally plotted in 3D? [duplicate]
First off, you are right that we could in principle plot the ROC curve as a curve in three-dimensional space. One axis would be the threshold, and the other two would be the TPR and the FPR.
As to why |
54,033 | Why isn't the ROC curve naturally plotted in 3D? [duplicate] | Are the points even plotted according some sorted order of these
thresholds?
No, there is no guarantee that ordered thresholds are reflected on the curve.
Also, shouldn't the threshold be critical at creating your final
classifier? The ROC curve should have a point that corresponds to the
best threshold. But I don't know what that threshold is on this plot.
Yes. The best threshold is usually picked based on the costs. But sometimes F1-score maximization is used; see Thresholding Classifiers to Maximize F1 Score.
would not make any sense in creating a classifier, no?
Yes. Usually a threshold that is too low or too high indicates issues with the classifier.
Please note that the ROC is heavily criticized for not being easy to interpret, for being affected by class imbalance, and famously for not being a coherent measure; see h-measure. Interpreting the result of ROC AUC should be practiced with strong caution.
thresholds?
No, there is no guarantee that ordered thresholds be reflected on the curve.
Also, shouldn't the threshold be critical at | Why isn't the ROC curve naturally plotted in 3D? [duplicate]
Are the points even plotted according some sorted order of these
thresholds?
No, there is no guarantee that ordered thresholds are reflected on the curve.
Also, shouldn't the threshold be critical at creating your final
classifier? The ROC curve should have a point that corresponds to the
best threshold. But I don't know what that threshold is on this plot.
Yes. The best threshold is usually picked based on the costs. But sometimes F1-score maximization is used; see Thresholding Classifiers to Maximize F1 Score.
would not make any sense in creating a classifier, no?
Yes. Usually a threshold that is too low or too high indicates issues with the classifier.
Please note that the ROC is heavily criticized for not being easy to interpret, for being affected by class imbalance, and famously for not being a coherent measure; see h-measure. Interpreting the result of ROC AUC should be practiced with strong caution.
Are the points even plotted according some sorted order of these
thresholds?
No, there is no guarantee that ordered thresholds be reflected on the curve.
Also, shouldn't the threshold be critical at |
54,034 | Why isn't the ROC curve naturally plotted in 3D? [duplicate] | The purpose of an ROC curve is to summarize the performance of a classifier. The majority of binary classifiers in common use produce a score which can be trivially mapped to the interval [0, 1] (indeed it is standard to output a score that has been mapped this way). It stands to reason that as you increase sensitivity (less false negatives), you will lose specificity (more false positives) and vice versa. This is the basic trade off of most classification: If you are too strict, you will miss some of the signal, but if you are too lenient you will capture too much noise.
So we use the ROC curve when the precise threshold is irrelevant, but rather we care about the distribution of the classifier's scores for positives and negatives (ideally, they should be different). And in fact, we don't really care about the distribution itself, but rather we care about the basic question: Do these scores distinguish them or not. This is what the ROC measures.
TPR and FPR are the two variables that are important. Hence we plot them on the classic 2D plot. If a reader is happy with the overall ROC of the classifier, but they have a certain TPR or FPR they want, there are many easy ways of estimating a threshold based on the ROC. However, the ROC curve is not plotted to evaluate a given threshold; it is plotted to evaluate a classifier. Thus adding a 3rd dimension as you suggest would not serve the goal of the plot.
3D plots are also very difficult to read, even with an interactive display. A lot of technical communication to this day happens with static documents, such as papers, where 3D plots would be a definite no-no. Even potentially dynamic media, such as websites, are (and should be) wary of displaying unnecessary interactive widgets because users are loath to deal with them. If you really wanted to represent the threshold, you could simply plot it below the graph as a second panel, aligned to FPR.
Are the points even plotted according to some sorted order of threshold?
Sane classifier scoring rules produce TPR and FPR that vary monotonically with the threshold (under the usual convention, both decrease as the threshold is raised).
The ROC curve should show the best threshold.
No, because "best" is subjective. You pick the best threshold according to the relative value you place on false positives vs. false negatives.
For a significant application, you wouldn't just pick a point on the ROC curve to select your threshold. You would do additional simulations to estimate the threshold based on the validation data you have.
It seems pretty dumb to me to either use a very large or very small threshold.
No, it is not dumb at all. Entirely dependent on application. Furthermore, you could trivially apply something like a log or power transform to the score, and convert an "extreme" value to a moderate one or vice versa. | Why isn't the ROC curve naturally plotted in 3D? [duplicate] | The purpose of an ROC curve is to summarize the performance of a classifier. The majority of binary classifiers in common use produce a score which can be trivially mapped to the interval [0, 1] (inde | Why isn't the ROC curve naturally plotted in 3D? [duplicate]
The purpose of an ROC curve is to summarize the performance of a classifier. The majority of binary classifiers in common use produce a score which can be trivially mapped to the interval [0, 1] (indeed it is standard to output a score that has been mapped this way). It stands to reason that as you increase sensitivity (less false negatives), you will lose specificity (more false positives) and vice versa. This is the basic trade off of most classification: If you are too strict, you will miss some of the signal, but if you are too lenient you will capture too much noise.
So we use the ROC curve when the precise threshold is irrelevant, but rather we care about the distribution of the classifier's scores for positives and negatives (ideally, they should be different). And in fact, we don't really care about the distribution itself, but rather we care about the basic question: Do these scores distinguish them or not. This is what the ROC measures.
TPR and FPR are the two variables that are important. Hence we plot them on the classic 2D plot. If a reader is happy with the overall ROC of the classifier, but they have a certain TPR or FPR they want, there are many easy ways of estimating a threshold based on the ROC. However, the ROC curve is not plotted to evaluate a given threshold; it is plotted to evaluate a classifier. Thus adding a 3rd dimension as you suggest would not serve the goal of the plot.
3D plots are also very difficult to read, even with an interactive display. A lot of technical communication to this day happens with static documents, such as papers, where 3D plots would be a definite no-no. Even potentially dynamic media, such as websites, are (and should be) wary of displaying unnecessary interactive widgets because users are loathe to deal with them. If you really wanted to represent the threshold, you could simply plot it below the graph as a second panel, aligned to FPR.
Are the points even plotted according to some sorted order of threshold?
Sane classifier training algorithms tend to produce monotonically increasing TPR and FPR wrt threshold.
The ROC curve should show the best threshold.
No, because "best" is subjective. You pick the best threshold according to the relative value you place on false positives vs. false negatives.
For a significant application, you wouldn't just pick a point on the ROC curve to select your threshold. You would do additional simulations to estimate the threshold based on the validation data you have.
It seems pretty dumb to me to either use a very large or very small threshold.
No, it is not dumb at all. Entirely dependent on application. Furthermore, you could trivially apply something like a log or power transform to the score, and convert an "extreme" value to a moderate one or vice versa. | Why isn't the ROC curve naturally plotted in 3D? [duplicate]
The purpose of an ROC curve is to summarize the performance of a classifier. The majority of binary classifiers in common use produce a score which can be trivially mapped to the interval [0, 1] (inde |
54,035 | How does R find the dispersion parameter in a GLM? | You can look in the code for summary.glm where you'll see:
sum((object$weights * object$residuals^2)[object$weights > 0])/df.r
df.r is the "residual degrees of freedom" (number of observations - number of parameters)
object$residuals (from ?glm):
residuals: the working residuals, that is the residuals in the final
iteration of the IWLS fit. Since cases with zero weights are
omitted, their working residuals are ‘NA’.
I don't have the definition of the working residuals at the top of my brain right now, but it turns out that this is equivalent to calculating the sum of the squared Pearson residuals ($\sum (Y_i-\mu_i)^2/v_i$, where $v_i$ is the scaled variance predicted by the model, equal to $\mu_i^2$ for the Gamma family) divided by the residual df.
Running example("glm") to get the fitted object glm.D93:
> sum(residuals(glm.D93, "pearson")^2)
[1] 5.173202
> with(glm.D93, sum(weights*residuals^2))
[1] 5.173202
This is basically the sum of squares of the residuals (scaled to make the residuals homoscedastic), divided by the residual df, so it matches up with the way you estimate the residual variance of a (Gaussian) linear model (residual sum of squares / residual df).
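As a quick check (this is not from the original answer: the Poisson family fixes the dispersion at 1 in summary(), so refit the same example as quasipoisson to see the estimated value actually reported):

quasi <- update(glm.D93, family = quasipoisson)
summary(quasi)$dispersion                                # about 1.29
sum(residuals(quasi, "pearson")^2) / df.residual(quasi)  # the same number, computed by hand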
One thing to keep in mind is that most GLM estimates of dispersion are approximations (and that even if you went to the trouble of getting a maximum likelihood estimate it would be biased for small sample size). For example, you can also compute sigma(glm.D93), which divides the deviance (or equivalently the sum of squares of the deviance residuals) by the residual df. Venables and Ripley MASS have a comment about the unreliability of these approximations for small sample sizes.
I'm not clear that your question ("does my model fit better with dispersion 1 or the estimated dispersion") makes sense; changing the dispersion does not change the predictions of the model at all. | How does R find the dispersion parameter in a GLM? | You can look in the code for summary.glm where you'll see:
sum((object$weights * object$residuals^2)[object$weights > 0])/df.r
df.r is the "residual degrees of freedom" (number of observations - num | How does R find the dispersion parameter in a GLM?
You can look in the code for summary.glm where you'll see:
sum((object$weights * object$residuals^2)[object$weights > 0])/df.r
df.r is the "residual degrees of freedom" (number of observations - number of parameters)
object$residuals (from ?glm):
residuals: the working residuals, that is the residuals in the final
iteration of the IWLS fit. Since cases with zero weights are
omitted, their working residuals are ‘NA’.
I don't have the definition of the working residuals at the top of my brain right now, but it turns out that this is equivalent to calculating the sum of the squared Pearson residuals ($\sum (Y_i-\mu_i)^2/v_i$, where $v_i$ is the scaled variance predicted by the model, equal to $\mu_i^2$ for the Gamma family) divided by the residual df.
Running example("glm") to get the fitted object glm.D93:
> sum(residuals(glm.D93, "pearson")^2)
[1] 5.173202
> with(glm.D93, sum(weights*residuals^2))
[1] 5.173202
This is basically the sum of squares of the residuals (scaled to make the residuals homoscedastic), divided by the residual df, so it matches up with the way you estimate the residual variance of a (Gaussian) linear model (residual sum of squares / residual df).
One thing to keep in mind is that most GLM estimates of dispersion are approximations (and that even if you went to the trouble of getting a maximum likelihood estimate it would be biased for small sample size). For example, you can also compute sigma(glm.D93), which divides the deviance (or equivalently the sum of squares of the deviance residuals) by the residual df. Venables and Ripley MASS have a comment about the unreliability of these approximations for small sample sizes.
I'm not clear that your question ("does my model fit better with dispersion 1 or the estimated dispersion") makes sense; changing the dispersion does not change the predictions of the model at all. | How does R find the dispersion parameter in a GLM?
You can look in the code for summary.glm where you'll see:
sum((object$weights * object$residuals^2)[object$weights > 0])/df.r
df.r is the "residual degrees of freedom" (number of observations - num |
54,036 | Metropolis Hastings algorithm bivariate normals | EDIT: REFER TO THE CORRECT ANSWER ABOVE:
Note that this is a problem of MIXTURE of NORMALS rather than BIVARIATE NORMALS:
The issue here is that the proposal density does not cover the whole region of interest. You need to calibrate it so that it covers the ROI, as explained in the answer above. Here is another example, using a grid/uniform proposal distribution, i.e.
we can see from the graph that the x and y axes run from -2 to 12, so we can propose uniformly over that square. And since the proposal is symmetric/constant, we can simply implement Metropolis rather than Metropolis-Hastings.
Your Code:
library(mvtnorm)
library(ggplot2)  # needed for the ggplot() calls below
mu <- c(2,2)
Sigma <- matrix(c(1, 1/2*1*sqrt(2), 1/2*1*sqrt(2), 2), nrow=2)
mu2 <- c(2,8)
Sigma2 <- matrix(c(1, -1/2*1*sqrt(2), -1/2*1*sqrt(2), 2), nrow=2)
mu3 <- c(6,4)
Sigma3 <- matrix(c(3, -1/2*sqrt(3)*sqrt(2), -1/2*sqrt(3)*sqrt(2), 2), nrow=2)
n <- 3000
s <- rmultinom(1, n, c(1,1,1))
dat <- mapply(rmvnorm, s, mean = list(mu, mu2, mu3), sigma = list(Sigma, Sigma2, Sigma3))
dat1 <- setNames(do.call(rbind.data.frame, dat), c("X1", "X2"))
ggplot(dat1, aes(X1, X2)) +
stat_density_2d(aes(fill=..level..), geom="polygon", color="white")
d <- function(x) {
a <- dmvnorm(x, mu, Sigma)
b <- dmvnorm(x, mu2, Sigma2)
c <- dmvnorm(x, mu3, Sigma3)
mean(c(a,b,c))
}
Implementation of Metropolis:
B <- 40000
y <- data.frame(matrix(nrow = B, ncol = 2))
colnames(y) <- c("X1", "X2")
y[1, ] <-colMeans(dat1)
accept<- 1
for(i in seq(2,B)){
prop <- c(runif(1, -2, 12), runif(1, -2, 12))  # uniform proposal over the square [-2, 12] x [-2, 12]
if (runif(1) < d(prop)/d(y[i-1, ])) {  # symmetric proposal, so only the target ratio matters
y[i, ]<- prop
accept <- accept + 1
}
else y[i, ]<- y[i-1, ]
}
##Burn the first 5000 points
ggplot(y[-seq(5000), ], aes(X1, X2)) +
stat_density_2d(aes(fill=..level..), geom="polygon", color="white")
print(accept/B) | Metropolis Hastings algorithm bivariate normals | EDIT: REFER TO THE CORRECT ANSWER ABOVE:
Note that this is a problem of MIXTURE of NORMALS rather than BIVARIATE NORMALS:
The issue here is that the proposal density does not cover the whole region of | Metropolis Hastings algorithm bivariate normals
EDIT: REFER TO THE CORRECT ANSWER ABOVE:
Note that this is a problem of MIXTURE of NORMALS rather than BIVARIATE NORMALS:
The issue here is that the proposal density does not cover the whole region of interest. You were to calibrate it accordingly to cover the ROI as explained in the answer above. Here is another example of using a grid/uniform distribution ie
we can see from the graph that the x and y axis runs from -2 to 12. Then you could use that to propose. and since it is symmetric/constant, we can simply implement metropolis rather than metropolis Hastings.
Your Code:
library(mvtnorm)
mu <- c(2,2)
Sigma <- matrix(c(1, 1/2*1*sqrt(2), 1/2*1*sqrt(2), 2), nrow=2)
mu2 <- c(2,8)
Sigma2 <- matrix(c(1, -1/2*1*sqrt(2), -1/2*1*sqrt(2), 2), nrow=2)
mu3 <- c(6,4)
Sigma3 <- matrix(c(3, -1/2*sqrt(3)*sqrt(2), -1/2*sqrt(3)*sqrt(2), 2), nrow=2)
n <- 3000
s <- rmultinom(1, n, c(1,1,1))
dat <- mapply(rmvnorm, s, mean = list(mu, mu2, mu3), sigma = list(Sigma, Sigma2, Sigma3))
dat1 <- setNames(do.call(rbind.data.frame, dat), c("X1", "X2"))
ggplot(dat1, aes(X1, X2)) +
stat_density_2d(aes(fill=..level..), geom="polygon", color="white")
d <- function(x) {
a <- dmvnorm(x, mu, Sigma)
b <- dmvnorm(x, mu2, Sigma2)
c <- dmvnorm(x, mu3, Sigma3)
mean(c(a,b,c))
}
Implementation of Metropolis:
B <- 40000
y <- data.frame(matrix(nrow = B, ncol = 2))
colnames(y) <- c("X1", "X2")
y[1, ] <-colMeans(dat1)
accept<- 1
for(i in seq(2,B)){
prop <- c(runif(1, -2, 12),runif(1, -2,12))#
if( runif(1)<d(prop)/d(y[i-1, ])) {
y[i, ]<- prop
accept <- accept + 1
}
else y[i, ]<- y[i-1, ]
}
##Burn the first 5000 points
ggplot(y[-seq(5000), ], aes(X1, X2)) +
stat_density_2d(aes(fill=..level..), geom="polygon", color="white")
print(accept/B) | Metropolis Hastings algorithm bivariate normals
EDIT: REFER TO THE CORRECT ANSWER ABOVE:
Note that this is a problem of MIXTURE of NORMALS rather than BIVARIATE NORMALS:
The issue here is that the proposal density does not cover the whole region of |
54,037 | Metropolis Hastings algorithm bivariate normals | Since the independent Metropolis-Hastings algorithm is formally valid, the issue stands in an inadequate calibration of the proposal to reach the entire support of the target (mixture) distribution. I just modified the code by choosing a larger variance matrix
sig=5*matrix(c(4, 1/2*2*2, 1/2*2*2, 4), nrow=2)
ran the chain 10⁵ iterations, and somewhat recovered the entire target:
However, there is an error in the code and possibly a misunderstanding of the independent Metropolis-Hastings algorithm. The acceptance probability
a=d(xp)/d(x)
r=min(1, a)
should divide the ratio of the targets by the ratio of the proposals
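In symbols, with $\pi$ the target and $q$ the fixed proposal density, the standard independence-sampler acceptance probability is
$$
\min\left\{1, \frac{\pi(x^\prime)\,q(x)}{\pi(x)\,q(x^\prime)}\right\},
$$
which in the code becomes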
a=d(xp)/d(x)/dmvnorm(xp,c(4,4), sig)*dmvnorm(x,c(4,4), sig)
r=min(1, a) #superfluous for the acceptance
which returns a better representation of the target after 10⁴ iterations: | Metropolis Hastings algorithm bivariate normals | Since the independent Metropolis-Hastings algorithm is formally valid, the issue stands in an inadequate calibration of the proposal to reach the entire support of the target (mixture) distribution. I | Metropolis Hastings algorithm bivariate normals
Since the independent Metropolis-Hastings algorithm is formally valid, the issue stands in an inadequate calibration of the proposal to reach the entire support of the target (mixture) distribution. I just modified the code by choosing a larger variance matrix
sig=5*matrix(c(4, 1/2*2*2, 1/2*2*2, 4), nrow=2)
ran the chain 10⁵ iterations, and somewhat recovered the entire target:
However, there is an error in the code and possibly a misunderstanding of the independent Metropolis-Hastings algorithm. The acceptance probability
a=d(xp)/d(x)
r=min(1, a)
should divide the ratio of the targets by the ratio of the proposals
a=d(xp)/d(x)/dmvnorm(xp,c(4,4), sig)*dmvnorm(x,c(4,4), sig)
r=min(1, a) #superfluous for the acceptance
which returns a better representation of the target after 10⁴ iterations: | Metropolis Hastings algorithm bivariate normals
Since the independent Metropolis-Hastings algorithm is formally valid, the issue stands in an inadequate calibration of the proposal to reach the entire support of the target (mixture) distribution. I |
54,038 | Joint probability measure | Joint Distributions and Expectation
In general, the joint distribution of random variables $X$ and $Y$, defined on a common probability space $(\Omega, \mathcal{A}, \mathbb{P})$ and taking values in measurable spaces $(\mathcal{X}, \mathcal{B})$ and $(\mathcal{Y}, \mathcal{C})$, respectively, is the probability measure defined on $(\mathcal{X} \times \mathcal{Y}, \mathcal{B} \otimes \mathcal{C})$ by
$$
\mathbb{P}_{X, Y}(E) = \mathbb{P}((X, Y) \in E)
$$
for all $E \in \mathcal{B} \otimes \mathcal{C}$.
This is the same as the ordinary distribution of $(X, Y) : \Omega \to \mathcal{X} \times \mathcal{Y}$ when viewed as a single random variable defined on $\Omega$.
Also, $X$ and $Y$ are said to be independent if it holds that
$$
\mathbb{P}(X \in B, Y \in C) = \mathbb{P}(X \in B) \mathbb{P}(Y \in C)
$$
for all $B \in \mathcal{B}$ and $C \in \mathcal{C}$.
The independence condition can be rephrased in terms of the joint distribution of $X$ and $Y$: $X$ and $Y$ are independent if and only if
$$
\mathbb{P}_{X,Y}(B \times C) = \mathbb{P}_X(B) \mathbb{P}_Y(C)
$$
for all $B \in \mathcal{B}$ and $C \in \mathcal{C}$. That is, if and only if
$$
\mathbb{P}_{X, Y} = \mathbb{P}_X \otimes \mathbb{P}_Y.
$$
Thus, the joint distribution of $X$ and $Y$ is the product measure of the (marginal) distributions of $X$ and $Y$ precisely in the case that $X$ and $Y$ are independent.
If $X$ and $Y$ are dependent, then their joint distribution is not the product measure of the marginal distributions.
Computing Expectations over Joint Distributions
If $X$ and $Y$, as above, are independent and $f : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ is a measurable function (satisfying either non-negativity or integrability with respect to $\mathbb{P}_{X,Y}$), then Fubini's theorem allows you to compute
$$
\begin{aligned}
\mathbb{E}[f(X, Y)]
&= \int_\Omega f(X(\omega), Y(\omega)) \, \mathbb{P}(d\omega) &&
\text{(def. of expectation)}
\\
&= \int_{\mathcal{X} \times \mathcal{Y}} f(x, y) \, \mathbb{P}_{X, Y}(d(x, y)) &&
\text{(change of variables)}
\\
&= \int_{\mathcal{X} \times \mathcal{Y}} f(x, y) \, \mathbb{P}_X\otimes\mathbb{P}_Y(d(x, y)) &&\text{(independence)}
\\
&= \int_{\mathcal{Y}} \left(\int_{\mathcal{X}} f(x, y) \, \mathbb{P}_X(dx)\right) \, \mathbb{P}_Y(dy) &&\text{(Fubini's theorem)}
\end{aligned}
$$
However, if $X$ and $Y$ are not independent, then this argument won't work. Instead, if you want to break an expectation of $f(X,Y)$ into an integral over $\mathcal{X}$ followed by an integral over $\mathcal{Y}$, as we did above, you need to know something about the conditional distribution of $X$ given $Y$.
For what follows, suppose $(\mathcal{X},\mathcal{B})$ and $(\mathcal{Y}, \mathcal{C})$ are "sufficiently nice" measurable spaces, meaning that they admit conditional distributions (this will happen for most spaces in practice; a sufficient condition is being standard Borel).
Then if $\mathbb{P}_{X\mid Y} : \mathcal{B} \times \mathcal{Y} \to [0, 1]$ is a version of the conditional distribution of $X$ given $Y$, then we can proceed similarly to the calculations above:
$$
\begin{aligned}
\mathbb{E}[f(X, Y)]
&= \int_{\mathcal{X} \times \mathcal{Y}} f(x, y) \, \mathbb{P}_{X, Y}(d(x, y))
\\
&= \int_{\mathcal{Y}} \left(\int_{\mathcal{X}} f(x, y) \, \mathbb{P}_{X\mid Y}(dx, y)\right) \, \mathbb{P}_Y(dy) \\
&= \mathbb{E}[\mathbb{E}[f(X, Y) \mid Y]]
\end{aligned}
$$
(in fact, the formula $\mathbb{E}[f(X, Y)] = \mathbb{E}[\mathbb{E}[f(X, Y) \mid Y]]$ holds even without considering conditional distributions (proof), but it's arguably harder to compute in that case).
If $X$ and $Y$ are independent, then it happens that $\mathbb{P}_{X\mid Y}(B, y) = \mathbb{P}_X(B)$ for every $B \in \mathcal{B}$ and $\mathbb{P}_Y$-almost every $y \in \mathcal{Y}$.
In this case, the calculation reduces to the first computation above.
In practice, the conditional distribution $\mathbb{P}_{X\mid Y}$ will usually be given by a conditional density $p_{X\mid Y} : \mathcal{X} \times \mathcal{Y} \to [0, \infty)$ of $X$ given $Y$ with respect to some dominating measure $\mu$ on $(\mathcal{X}, \mathcal{B})$, yielding
$$
E[f(X, Y)]
= \int_{\mathcal{Y}} \left(\int_{\mathcal{X}} f(x, y) p_{X \mid Y}(x, y) \, \mu(dx)\right) \, \mathbb{P}_Y(d y).
$$
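As a rough sanity check of the last formula, consider a bivariate normal pair with correlation $\rho$ and $f(x,y)=xy$; then $X \mid Y=y \sim N(\rho y, 1-\rho^2)$, the inner integral equals $\rho y^2$, and both sides should come out close to $\rho$. A small Monte Carlo illustration (my own example, not part of the derivation above):
set.seed(1)
rho <- 0.7
n   <- 1e6
y <- rnorm(n)
x <- rho * y + sqrt(1 - rho^2) * rnorm(n)   # draw X from its conditional distribution given Y
mean(x * y)                                 # direct Monte Carlo estimate of E[f(X, Y)]
mean(rho * y^2)                             # E[ E[f(X, Y) | Y] ] with the inner integral in closed form
## both are close to rho = 0.7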
54,039 | Scree plot: $m$ vs $m-1$ components/factors | The following simulations show that the correct Cattell's scree-plot rule is "elbow minus one". That is, select the number of factors one less than the number corresponding to the "elbow" location. The scree-plot considered here is the "classic" one - dealing with eigenvalues of the nonreduced correlation or covariance matrix; that is, it is the eigenvalues output by the "preliminary PCA done before a factor analysis".
I was generating random factor loading matrices (like described here in section A follow-up to @amoeba's "Update 3") with 4 factors underlying 20x20 correlation matrices, and then observed their eigenvalue scree-plots.
The simulation was 4-fold (hence 4 scree-plots below).
Diffuse factor structure, 20 random population matrices
A diffuse factor structure is where factor loadings distribute smoothly; there is no bimodality of "high" vs "low" loadings. A factor does not have "favourites" among the items (variables), and an item can be loaded highly, moderately or slightly, by a factor.
20 random sets of loadings (plus random uniquenesses) yielded, correspondingly, 20 covariance matrices with unit diagonal each, i.e. correlation matrices. These are labeled "population" matrices, because each was based on a different factor loading matrix. What they share though is the same number of factors 4 behind the same number of items 20. The 20 scree-plots superimposed on one chart:
We see that while the correct number of factors is 4, the principal (the last pronounced) "elbow" almost always corresponded to the 5th eigenvalue (pr. component). This eigenvalue was always below 1, in agreement with Kaiser's rule for a correlation matrix.
Diffuse factor structure, 20 random sample matrices
One matrix from the above 20 was chosen (you could discern its scree line above as the bold black one), and 20 random sample realizations of it under sample size 200 were produced out of Wishart distribution. Each of the sample covariance matrices was re-standardized to a correlation matrix. The 20 scree-plots superimposed on one chart:
We again see that the elbow is opposite the 5th eigenvalue (pr. component), whose magnitude is below 1. The true number of factors was 4, as before.
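For readers who want to reproduce the general shape of these scree-plots, here is a rough sketch of one such fold (diffuse random loadings, one Wishart sample of size 200). The exact settings below are my own guesses rather than the simulation code actually used:
set.seed(1)
p <- 20; m <- 4; n <- 200
L   <- matrix(runif(p * m, -1, 1), p, m)          # random "diffuse" loadings
psi <- runif(p, 0.2, 0.5)                         # uniquenesses
R   <- cov2cor(L %*% t(L) + diag(psi))            # population correlation matrix with 4 factors
S   <- rWishart(1, df = n - 1, Sigma = R)[, , 1] / (n - 1)   # one sample covariance matrix
Rs  <- cov2cor(S)                                 # re-standardize to a correlation matrix
plot(eigen(Rs)$values, type = "b",
     xlab = "Component", ylab = "Eigenvalue")     # the elbow tends to sit at component 5
abline(h = 1, lty = 2)                            # Kaiser line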
Sharp factor structure, 20 random population matrices
A sharp factor structure is where factor loadings distribute bimodally: "high" vs "low" loadings. In my simulation design, each factor highly loaded exactly 5 items, and each item was loaded high by one factor only. This is a simulation of a most "simple structure" which you might expect factor-analyzing some developed and factor validated psychological questionnaire.
20 random sets of loadings (plus random uniquenesses) yielded, correspondingly, 20 covariance matrices with unit diagonal each, i.e. correlation matrices. Like in the first section, these are again labeled different "population" matrices sharing the same number of factors 4 for the same number of items 20. The 20 scree-plots:
Where this picture differs from the 1st one is just in the effect of the sharpened factors: there are clearly 4 - no fewer, no more - factors of approximately equal strength, which is seen in the emergence of the upward elbow opposite the 4th pr. component. The rest - and what interests us - is still the same as on the previous pictures.
Sharp factor structure, 20 random sample matrices
One matrix from the just above 20 was chosen, and 20 random sample realizations of it under sample size 200 were produced from a Wishart distribution and standardized to correlations - everything done analogously to section 2. The scree-plots:
Same basic findings.
Conclusion
As far as these simulations are relevant: if on your scree-plot of eigenvalues of a correlation matrix you see a clear "elbow" (a downward one, followed by a sloping line) opposite the m-th component, consider extracting m-1 factors as the "Cattell's rule".
These above simulations pertain to the factor analysis of a correlation matrix (or of a covariance matrix with approximately equal diagonal elements). When a covariance matrix with strongly unbalanced variances is input to exploratory FA, things may become more complicated. In particular, the Cattell's "elbow" itself can almost disappear.
Population noise added
One might remark that the population correlation matrices (e.g. the 1st and the 3rd pictures) were generated without population noise. That is, the matrices, if you do FA on them properly, restore precisely the loadings (20x4 matrices) by which they had been generated. One might remark that this is a bit ideal and that in a real population there always exist common factors which are much weaker than the common factors considered in the model but may be numerous; in particular, these factors can be responsible for surplus pairwise, partial associations. These noisy factors are ignored by a factor model but they meddle in the factor analysis.
Along with this reasoning, I added a small noise to the off-diagonal correlations of each population matrix being created. (The noise was actually generated from a Wishart distribution for the identity matrix, and the size of the random disturbances was calibrated so that the overall Kaiser-Meyer-Olkin MSA of a matrix would fall from about 0.8-0.9 to about 0.6-0.7 while the matrix still kept its positive definiteness. Diminishing of MSA is a symptom of strengthening of partial correlations.) This was a way to introduce into the population noisy factors dismissed by the factor model.
I wondered whether the noise added to the matrices would affect the appearance of the scree-plot, i.e. the position of Cattell's "elbow".
The upper graphic is the copy of the 1st one in this answer. The lower one is where the noise was added. There is no change or shift of the elbow's location: it is still against the 5th pr. component. There is a tiny change in the shape of the scree, however. The whole "staple" became less bent: the scree is less steep to the left and steeper to the right than it was. Consequently, the elbow point is raised a little. And that is the recognizable effect of the presence of the junior common factors we'd introduced via the off-diagonal noise.
54,040 | Scree plot: $m$ vs $m-1$ components/factors | Significant eigenvalues
You could use some sort of simulation to compute the probability for an eigenvalue exceeding a certain limit and base the selection on that. In the R-package psych, there is a function that does this (demonstrated below).
If you assume samples from a Gaussian distributed population without correlation, then you should expect a screeplot with all points/eigenvalues on a weak slope. (I do not know a formula for this slope but you could compute it by simulations, either based on a distribution as done below or by some sort of bootstrapping)
Applied to your plot
If we apply this loosely to your plot (I do not have the data but we can imagine how the line would be): your 4-th eigenvalue is on this weak slope/line, and there is a clear distinction between the rubble and the steep hill. So probably the 4-th eigenvalue is not very meaningful (or the variance is just not significant/noticeable or properly scaled; a prerequisite for interpreting PCA eigenvalues is that any potential sources of variance will be of the same order of scale). The 4-th component is indeed part of the rubble. The 4-th eigenvalue is barely different from the 5-th eigenvalue. So any argument to include the 4-th should count as strongly as an argument to include the 5-th.
However, the significance is often not the issue
The m vs. m-1 issue is more the situation when the scree plot is not so obvious. But in that less clear case the question is not about whether or not a component has a (statistically) significantly high eigenvalue (more about that in the example below), but instead about whether the effect size is large enough or dominant.
The 'trick' in this case is to look for the most remarkable eigenvalues and then add 1 (Just to be sure. I guess. I do not understand this rule so well. But it isn't a very strict rule anyway.).
Example from psych package and manual plot
The R code below generates the scree plots for example 7.4 from Harman.
The plot on the left is created with a single function fa.parallel from the psych package. The plot gives the scree plot along with a line that relates to eigenvalues of a simulation with Gaussian distributed data that has the identity matrix as covariance.
The plot on the right is created manually. You can use the code (which I hope is intuitive enough) to figure out how it works.
In this plot, I have used a slightly different measure for the eigenvalues. I have scaled the eigenvalues based on the average of all the lower eigenvalues. The reason is that, due to the presence of higher eigenvalues, the eigenvalue being considered will be relatively lower than for the random Gaussian data (which does not have these higher eigenvalues) used for the comparison.
The result is that many more points are above the line and seem to be significant. Isn't that a lot? Well, maybe not. The comparison is made with a model for data that is entirely spherical, with equal variances in all directions. But in practice it is not strange that data has some variation in variance/eigenvalues. Even if there is no structure, clustering, or other between-group variance that causes an increase in variance, one may still have noise that is not the same for all directions.
set.seed(1)
psych::fa.parallel(Harman74.cor$cov, n.obs = 145, fa = "pc",
main = "plot using psych package")
### compute eigenvalues for Harman74 data
### A correlation matrix of 24 psychological tests given
### to 145 seventh and eight-grade children in a Chicago
### suburb by Holzinger and Swineford
ev <- eigen(Harman74.cor$cov)$values
### simulate normal distributed data
### and compute the eigen values
sim_eigen <- function(n_var,n_points) {
x <- matrix(rnorm(n_var*n_points), ncol = n_var)
m <- cov(x)
sim_ev <- eigen(m)$values
return(sim_ev)
}
### relative numbers
### compute the eigenvalue relative to the mean of the lower values
f_rel <- function(x_in) {
l <- length(x_in)
x_out <- sapply(1:l, FUN = function(k) {
x_in[k]/mean(x_in[k:l])
})
return(x_out)
}
### simulate 1000 times
sim <- replicate(1000,f_rel(sim_eigen(24,145)))
### compute mean and upper and lower 90% interval
ev_mu <- rowMeans(sim) ### compute the mean of thousand simulations
ev_up <- sapply(1:length(ev_mu), FUN = function(k) {
quantile(sim[k,], probs = 0.95)
})
ev_low <- sapply(1:length(ev_mu), FUN = function(k) {
quantile(sim[k,], probs = 0.05)
})
### plot alternative
plot(f_rel(ev), main = "plot using alternative measure", col = 4, pch = 4, type = "b",
xlab = "Component Number",
ylab = "eigen value relative to smaller eigenvalues")
lines(ev_mu)
lines(ev_up, col = 1, lty = 2)
lines(ev_low, col = 1, lty = 2)
Below is a simulation where we generate the data ourselves. Now we generate the data according to a Gaussian distribution.
x <- MASS::mvrnorm(145, mu = rep(0,24),
Sigma = diag(c(8,2,2,1.3,rep(1,20))))
cm <- cov(x)
ev <- eigen(cm)$values
The result is that you see the eigenvalues more closely within the simulated bandwidth. The vectors with eigenvalue 8, 2, 2 are picked out. The vector with eigenvalue 1.3 is too difficult.
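The plot behind that statement is not shown above; it can presumably be reproduced by reusing f_rel and the simulated band (ev_mu, ev_up, ev_low) computed earlier, for example:
plot(f_rel(ev), col = 4, pch = 4, type = "b",
     xlab = "Component Number",
     ylab = "eigen value relative to smaller eigenvalues")
lines(ev_mu)                    # mean of the simulated (null) eigenvalue profile
lines(ev_up, lty = 2)           # upper end of the 90% band
lines(ev_low, lty = 2)          # lower end of the 90% band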
Summary
The example above shows that you might get a nice clear scree plot like the last plot. But, this is only the case when all eigenvalues are the same except a few vectors/components (which you wish to detect and explore).
In many practical situations, the assumption of equal eigenvalues/variance will be invalid in any case. The scree plot will not look perfect and there is no distinct border between the rubble and a steep hill. The analysis of the scree plot in such cases is not about finding statistically significant eigenvalues; instead, the scree plot is there to see the distribution of importance/variance over the different components.
Technically, all of the components can be important. The point of PCA is not to determine which ones are important, but it is to find some pragmatic cut-off value for the purpose of data reduction. The scree plot, if it is clear, can help you to categorize the different components, and determine a distinct group of large values.
54,041 | Doubt in the Invariance Property of Consistent Estimators | $X_{n+1}$ converges not to a constant, but to a distribution with variance 1.
Therefore, Slutsky's theorem (with the two "estimators" $\dfrac{1}{2}\overline{X}_n$ and $\dfrac{1}{2}X_{n+1}$ being two random elements) does not apply. For that, you would need to have $\dfrac{1}{2}X_{n+1}$ converging to a constant.
Intuitively, your estimator $T$ gives constant (and therefore "not vanishing") weight to $X_{n+1}$.
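A quick simulation makes this concrete. Assuming (as I read the question) that $X_1, X_2, \dots$ are iid with unit variance and $T = \tfrac{1}{2}\overline{X}_n + \tfrac{1}{2}X_{n+1}$, the variance of $T$ settles near $1/4$ instead of shrinking to zero, so $T$ cannot converge in probability to a constant:
set.seed(1)
mu <- 3                                              # assumed true mean
T_n <- function(n) {
  x <- rnorm(n + 1, mean = mu)                       # iid sample of size n + 1, variance 1
  0.5 * mean(x[1:n]) + 0.5 * x[n + 1]                # the estimator T
}
sapply(c(10, 100, 10000),
       function(n) var(replicate(5000, T_n(n))))     # stays near 1/4 as n grows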
54,042 | Causal Inference After Feature Selection | Data mining for potential predictors and causal inference don't go together well. A few problems may arise:
Identification problems:
Confounding: If some unobserved common causes are still not included in the data, the estimators are still biased.
Bad control: Including too many variables is harmful. You can include colliders and unwanted mediators. This helps predictions, but harms causal inference badly (see the simulation sketch further below). Estimators may be biased in all magnitudes and directions, especially after something like LASSO.
Mismeasurement: If the variables are not measured perfectly, as they appear in the DGP (real world), their estimators may be biased towards zero (attenuation bias). Moreover, if you mismeasure control variables, they do not fully control for potential confounding.
Sampling issues: Unless the sample is 'representative' you may run into unwanted conditioning on colliders.
Estimation problems:
False significance: Whatever significance level you choose, that is the proportion of false rejections of true nulls you will have. In theory you can try to adjust p-values by Bonferroni (or other) procedures, but this means a loss of power through the exclusion of some previously significant predictors.
Functional form: Unless the functional relationship between variables is perfectly described in the model, issues similar to mismeasurement arise. It is hard to deal with this when you perform mass scanning of the data.
In theory, with some work, you can try to automatically solve estimation problems, but not the identification ones. This group of problems requires careful inquiry about DGP, and possibly some additional assumptions about how the data was generated.
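A tiny simulation of the "bad control" point referred to above: below, c1 is a collider (caused by both x and y); adding it improves prediction of y but badly biases the coefficient that is supposed to measure the causal effect of x. Variable names and effect sizes are of course made up for illustration.
set.seed(1)
n  <- 1e5
x  <- rnorm(n)
y  <- 1 * x + rnorm(n)            # true causal effect of x on y is 1
c1 <- x + y + rnorm(n)            # collider: a consequence of both x and y
coef(lm(y ~ x))["x"]              # close to 1 (unbiased)
coef(lm(y ~ x + c1))["x"]         # far from 1: conditioning on the collider distorts the estimate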
EDIT:
Similar points are made by Hernan and Robins (2020) in chapter 18 of their book.
In 18.2 they emphasise the role of bias-inducing variables (what I referred to as bad control). They then argue that the decision whether to control for a variable must be based on information outside the data, and therefore this decision cannot be made by any automated procedure that relies exclusively on statistical associations.
They also point out that this problem has already been studied in depth. They provide criticism, an introduction to some basic solutions, and additional citations.
Hernán MA, Robins JM (2020). Causal Inference: What If. Boca Raton: Chapman & Hall/CRC.
54,043 | Why the confidence intervals in a categorical lm() are not calculated at the group level? | The lm function does its inferential work under a fairly standard assumption that the error term has constant variance. That appears to be a poor assumption for your data, but lm does not know that, so it just treats the groups as having equal variance and calculates the error term variance thinking it is the same for each group.
Weighted least squares is one approach to getting a different variance for each group.
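A small sketch of the difference, assuming a data frame dat with a numeric y and a factor group (hypothetical names): the pooled-variance lm intervals have the same width for every group, while group-level intervals (or an explicit variance model such as gls with varIdent) let each group keep its own spread.
set.seed(1)
dat <- data.frame(group = rep(c("A", "B"), each = 30),
                  y = c(rnorm(30, mean = 5, sd = 1),
                        rnorm(30, mean = 8, sd = 4)))       # very unequal group variances
confint(lm(y ~ group - 1, data = dat))                      # pooled error variance: equal interval widths
sapply(split(dat$y, dat$group),
       function(v) t.test(v)$conf.int)                      # per-group intervals: very different widths
## One way to model unequal variances explicitly (requires the nlme package):
# nlme::gls(y ~ group, data = dat, weights = nlme::varIdent(form = ~ 1 | group))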
54,044 | confusion about individual notation | The notation using subscript $i$ is the standard way to express the regression model, although you would also implicitly assume (or explicitly state) that this equation holds over a range of values ---e.g., $i=1,...,n$. When we have different observations/"realisations" of a generic form, we use the subscript notation to specify which of those observations/"realisations". If we do not use the subscript, then it is unclear what we are actually referring to.
In a regression context, the equation $y= \mathbf{x} \boldsymbol{\beta} + \epsilon$ does not really make sense without these subscripts for the observations, because there is no such thing as an "error term" $\epsilon$ that applies in general to all observations. The whole idea of the regression model is that the error terms are the deviations of the observed response values from their conditional expectation, and these deviations are different for different observations. The regression model consists of a stipulated form for the conditional expectation of the response variable and a distribution for the error terms (note the plural --- error terms, not error term).
How the linear regression equation is derived
In order to obtain the model form for the linear regression model, suppose that we are willing to stipulate that the conditional expectation of the response variable is a linear function of the unknown parameters $\boldsymbol{\beta}$ for all the observable values over some sample range. Let's denote this conditional expectation function (called the true regression function) by $u$. This gives us the starting equation:
$$u(\mathbf{x}) \equiv \mathbb{E}(Y_i | \mathbf{X}_i = \mathbf{x}) = \mathbf{x} \boldsymbol{\beta}
\quad \quad \quad \text{for all } i = 1,...,n.$$
Now, the error term for each observation is defined as the deviation of the response value from its conditional expectation ---i.e., for each $i=1,...,n$ we have:
$$\epsilon_i \equiv Y_i - u(\mathbf{x}_i) = Y_i - \mathbf{x}_i \boldsymbol{\beta}.$$
Note that this is a definition of what the error term is measuring. By rearranging this definition we get the standard form of the linear regression model:
$$Y_i = \mathbf{x}_i \boldsymbol{\beta} + \epsilon_i
\quad \quad \quad \text{for all } i = 1,...,n.$$
In order to obtain a regression model, we also need to make some assumption about the distribution of the error terms. The standard assumption is that the error terms are IID random variables with zero mean and a fixed finite variance (often also assumed to be normally distributed). This distributional assumption then gives us the full regression model, but as you can see, the defining equation for the model form follows directly from our definition of what the error terms (again, note the plural) are measuring.
A crucial thing to note here is that the true regression function $u$ is (assumed to be) the same for all the observations. (Consequently, when introducing this function I did not need to put a subscript on the value for the explanatory vector.) This means that we have a single generic true regression function that applies to all the observations. We still need to write our full model form as a set of equations over $i=1,...,n$ because we need to stipulate that this form holds for all the observations. Consequently, while it is not valid to state the model equation generically (since the error terms are different for different observations), there is still a single underlying function that we are trying to estimate.
Okay, so what about estimating the "general form"
In your question, you correctly note that we are interested in the general relationship between the response variable and the explanatory variables. This is encapsulated in the true regression function $u$. By assumption, this function is the same for all the observations, so we are only estimating this one regression function.
When we use the data to estimate the parameter $\boldsymbol{\beta}$ this gives us a corresponding estimator for the true regression function $\hat{u}(\mathbf{x}) = \mathbf{x} \hat{\boldsymbol{\beta}}$, leading to the predicted and residual values defined by:
$$\hat{Y}_i = \hat{u}(\mathbf{x}_i) = \mathbf{x}_i \hat{\boldsymbol{\beta}}
\quad \quad \quad
R_i = Y_i - \hat{u}(\mathbf{x}_i) = Y_i - \mathbf{x}_i \hat{\boldsymbol{\beta}}.$$
Again, you can see that even though we are estimating a single function $u$, this leads to different predicted values and residual values for each of the observations. Consequently, we again see that we need to use subscript $i$ on each of these equations even though they are all based on estimation of the same underlying true regression function.
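A short illustration of this last point (my own, with made-up data): a single fitted coefficient vector is shared by all observations, yet the fitted values and residuals are indexed by $i$.
set.seed(1)
n <- 50
x <- runif(n)
y <- 2 + 3 * x + rnorm(n, sd = 0.5)   # one true regression function u(x) = 2 + 3x for all i
fit <- lm(y ~ x)
coef(fit)                             # a single estimate of beta, common to all observations
head(fitted(fit))                     # n different predicted values u-hat(x_i)
head(resid(fit))                      # n different residuals R_i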
54,045 | confusion about individual notation | This matrix equation
$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n-1}\\ y_{n}\end{bmatrix} = \beta \cdot \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1}\\ x_{n}\end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_{n-1}\\ \epsilon_{n}\end{bmatrix}$$
can be written as
$$\forall i : y_i = \beta x_i + \epsilon_i$$
Which is called index notation.
The individual notation is confusing me. Ultimately, we are interested in the general relationship between x and y, correct? The individual notation seems to me to look like it is the relationship between $x_i$ and $y_i$, so for individual i, the relationship between changing i's x on i's y.
You are supposed to think of the relation as being true for all $i$. In the equation above I have expressed this as '$\forall i$:' which means 'for all $i$:'
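A quick numerical check (with made-up numbers) that the matrix form and the "for all $i$" form describe the same $N$ equations:
set.seed(1)
N <- 5
x <- rnorm(N); beta <- 2; eps <- rnorm(N)
y_matrix <- beta * x + eps                                   # vectorized / matrix form
y_index  <- sapply(1:N, function(i) beta * x[i] + eps[i])    # one equation for each i
all.equal(y_matrix, y_index)                                 # TRUE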
54,046 | confusion about individual notation | If both $y$ and $y_i$ are univariate, it doesn't matter if you use index $i$ or not. It's the expected value of the dependent variable given the independent variable, i.e. $E[y|x=a]=E[y_i|x_i=a]$. Sometimes $y$ is the vertical concatenation of each sample response, in which case $y=x\beta+\epsilon$ is the matrix notation for the relationship.
The notation $y_i=x_i\beta+\epsilon_i$ doesn't mean that we get to estimate $\beta$ only from sample $i$. So, it's the relationship between $y_i$ and $x_i$ but we estimate $\beta$ from all the samples (training samples).
54,047 | confusion about individual notation | As far as I know, there are three possible notations:
Individual notation: $y_i = \beta_0 + \beta_1x_{1i} + ... + \beta_Kx_{Ki} + \varepsilon_i$ where $i=1,...,N$.
Without the expression $i=1,...,N$, this notation is mostly pointless; however, it is common to assume this addition and omit this part. Omitting it does not mean that it does not exist.
Such notation means that this is a system of N equations. One equation for every observed unit.
Sometimes individual notation is written shorter using vectors: $y_i = \pmb{ x}_i\pmb{\beta} + \varepsilon_i$ where $i=1,...,N$.
In this case: $\pmb{x}_i = [1, x_{1i}, ... , x_{Ki}]$ and $\pmb{\beta}^T = [\beta_0, \beta_1, ... , \beta_K]$.
It is an individual notation equivalent to (1), because $\pmb{ x}_i\pmb{\beta} = \beta_0 + \beta_1x_{1i} + ... + \beta_Kx_{Ki}$.
The same problem of omission of $i=1,...,N$ is in place here. We also have N equations here.
Matrix notation: $\pmb{y} = \pmb{X\beta} + \pmb{\varepsilon}$.
Here: $\pmb{y}^T = [y_1,...,y_N]$, $\pmb{X}^T = [\pmb{x}_1, ..., \pmb{x}_N]$, $\pmb{\beta}^T = [\beta_0, \beta_1, ... , \beta_K]$, $\pmb{\varepsilon}^T = [\varepsilon_1, ..., \varepsilon_N]$.
This notation also describes exactly the same. The N equation system.
Now, to the confusion in question:
Ultimately, we are interested in K+1 parameters $[\beta_0, \beta_1, ... , \beta_K]$, which are the same for every equation in the whole system of N equations.
We have one random variable on the right-hand side of every equation, $\varepsilon_i$. $\pmb{x}_i$ is often assumed non-random. Therefore for every $i = 1,...,N$, $y_i$ is a sum of a constant $\pmb{ x}_i\pmb{\beta}$ and a random variable $\varepsilon_i$.
Therefore we have different random variables in every equation, which have common properties, for example, for every $i=1,...,N$: $\varepsilon_i \sim N(0,\sigma^2)$.
Generally, this approach is slightly different, than regression of two random variables known from probability calculus. This is statistics. | confusion about individual notation | As far as I know, there are three possible notations:
Individual notation: $y_i = \beta_0 + \beta_1x_{1i} + ... + \beta_Kx_{Ki} + \varepsilon_i$ where $i=1,...,N$.
Without expression $i=1,...,N$, th | confusion about individual notation
As far as I know, there are three possible notations:
Individual notation: $y_i = \beta_0 + \beta_1x_{1i} + ... + \beta_Kx_{Ki} + \varepsilon_i$ where $i=1,...,N$.
Without expression $i=1,...,N$, this notation is mostly pointless, however it is common to assume this addition and omit this part. Omitting does not mean, that it does not exist.
Such notation means, that this is a system of N equations. One equation for every observed unit.
Sometimes individual notation is written shorter using vectors: $y_i = \pmb{ x}_i\pmb{\beta} + \varepsilon_i$ where $i=1,...,N$.
In this case: $\pmb{x}_i = [1, x_{1i}, ... , x_{Ki}]$ and $\pmb{\beta}^T = [\beta_0, \beta_1, ... , \beta_K]$.
It is equevalent individual notation to (1), because $\pmb{ x}_i\pmb{\beta} = \beta_0 + \beta_1x_{1i} + ... + \beta_Kx_{Ki}$.
The same problem of omission of $i=1,...,N$ is in place here. We also have N equations here.
Matrix notation: $\pmb{y} = \pmb{X\beta} + \pmb{\varepsilon}$.
Here: $\pmb{y}^T = [y_1,...,y_N], \pmb{X}^T = [\pmb{x}_1, ..., \pmb{x}_N]$, $\pmb{\beta}^T = [\beta_0, \beta_1, ... , \beta_K]$, $\pmb{\varepsilon} = [\varepsilon_1, ..., \varepsilon_N]$ .
This notation also describes exactly the same. The N equation system.
Now, to the confusion in question:
Ultimately, we are interested in K+1 parameters $[\beta_0, \beta_1, ... , \beta_K]$, which are the same for every equation in the whole system of N equations.
We have one random variable for the righthand side of every equation, $\varepsilon_i$. $\pmb{x}_i$ is often assumed non-random. Therefore for every $i = 1,...,N$, $y_i$ is a sum of a constant $\pmb{ x}_i\pmb{\beta}$ and a random variable $\varepsilon_i$.
Therefore we are having different random variables in every equation, which have common properties, like for example for every $i=1,...,N$: $\varepsilon_i \sim N(0,\sigma^2)$.
Generally, this approach is slightly different, than regression of two random variables known from probability calculus. This is statistics. | confusion about individual notation
As far as I know, there are three possible notations:
Individual notation: $y_i = \beta_0 + \beta_1x_{1i} + ... + \beta_Kx_{Ki} + \varepsilon_i$ where $i=1,...,N$.
Without expression $i=1,...,N$, th |
54,048 | confusion about individual notation | I believe the confusion arises when distinguishing between population and sample levels. There are some inconsistencies in notation (when to use capital/cursive/bold letters), but once you determine on which level the notation is, everything should be clear. Usually, it is clear when we are on the sample level since the equations would have to iterate through all observations $\{1 \dots n\}$ or $\{1 \dots T\}$, whereas on the population level only a single index $i$ or $t$ is shown in the model.
On population level, the distinction between indexed and non-indexed is very important when describing properties of the model. It is best to understand this when dealing with indexing by time $t$. Consider $y_t=\beta^Tx_t+\varepsilon_t$. On population level every observation is a random variable/vector, so the model holds for every $t$. Then:
Indexed values show a relation that holds only between variables at the same time point. For example: $E[\varepsilon_t|x_t]=0$ is the innovation property of the noise $\varepsilon_t$ (or the predetermined property of the regressors $x_t$) and it states that $\varepsilon_t$ has no influence (in the mean) on the current response $y_{t}$. It does not rule out influence on past/future values.
Non-indexed values show a relation that holds between variables across all time points. For example: $E[\varepsilon|x]=0$ is the exogeneity assumption of the regressors $x$, meaning that $\varepsilon$ has no influence (in the mean) on any of the responses $y$, past, present, or future. Here, $\varepsilon$ and $x$ are vectors that include all $\varepsilon_t$ and $x_t$, respectively.
Obviously, 2 is stronger than 1. Note that both $x_t$,$\varepsilon_t$ and $x$,$\varepsilon$ can be vectors. | confusion about individual notation | I believe the confusion arises when distinguishing between population and sample levels. There is some inconsistencies with notation (when to use capital/cursive/bold letters), but once you determine | confusion about individual notation
I believe the confusion arises when distinguishing between population and sample levels. There is some inconsistencies with notation (when to use capital/cursive/bold letters), but once you determine on which level the notation is, everything should be clear. Usually, it is clear when we are on the sample levels since the equations would have to iterate through all observations $\{1 \dots n\}$ or $\{1 \dots T\}$, whereas on population levels only a single $i$ or $t$ as index is shown for models.
On population level, the distinction between indexed and non-indexed is very important when describing properties of the model. It is best to understand this when dealing with indexing by time $t$. Consider $y_t=\beta^Tx_t+\varepsilon_t$. On population level every observation is a random variable/vector, so the model holds for every $t$. Then:
Indexed values show a relation that holds only between variables for the same time point. For example: $E[\varepsilon_t|x_t]=0$ is the innovation property of the noise $\varepsilon_t$ (or predetermined property of the regressors $x_t$) and it states that $\varepsilon_t$ has no influence (in the mean) on any of the current response $y_{t}$. It does not rule out no influence on past/future values.
Non-indexed values show a relation that holds between variables for all time points. For example: $E[\varepsilon|x]=0$ is the exogeneity assumption of the regressors $x$, meaning that $\varepsilon$ has no influence (in the mean) on any of the responses $y$, both past, present and future. Here, $\varepsilon$ and $x$ are vectors that include all $\varepsilon_t$ respectively $x_t$.
Obviously, 2 is stronger than 1. Note that both $x_t$,$\varepsilon_t$ and $x$,$\varepsilon$ can be vectors. | confusion about individual notation
I believe the confusion arises when distinguishing between population and sample levels. There is some inconsistencies with notation (when to use capital/cursive/bold letters), but once you determine |
54,049 | How to calculate the ACF and PACF for time series | Well if you mean how to estimate the ACF and PACF, here is how it's done:
1. ACF:
In practice, a simple procedure is:
Estimate the sample mean:
$$\bar{y} = \frac{\sum_{t=1}^{T} y_t}{T}$$
Calculate the sample autocorrelation:
$$\hat{\rho_j} = \frac{\sum_{t=j+1}^{T}(y_t - \bar{y})(y_{t-j} - \bar{y})}{\sum_{t=1}^{T}(y_t - \bar{y})^2}$$
Estimate the variance. In many software packages (including R, if you use the acf() function), it is approximated by the variance of a white noise, $T^{-1}$. This leads to confidence intervals that are asymptotically consistent but smaller than the actual confidence intervals in many cases (leading to a larger probability of a Type I error), so interpret these with caution!
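As a hedged illustration (simulated data, added here and not part of the original answer), the sample autocorrelation formula above can be computed by hand in R and compared with acf():
sample_acf <- function(y, j) {               # direct implementation of the rho_j estimator above
  ybar <- mean(y)
  sum((y[(j + 1):length(y)] - ybar) * (y[1:(length(y) - j)] - ybar)) /
    sum((y - ybar)^2)
}
set.seed(1)
y <- arima.sim(model = list(ar = 0.6), n = 200)
sample_acf(y, 1)                             # manual estimate of rho_1
acf(y, lag.max = 1, plot = FALSE)            # the lag-1 value should match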
2. PACF:
The PACF is a bit more complicated, because it tries to remove the effects of the correlations at the other lags.
It is estimated via a set of OLS regressions:
$$y_{t,j} = \phi_{j,1} y_{t-1} + \phi_{j,2} y_{t-2} + ... + \phi_{j,j} y_{t-j} + \epsilon_t$$
And the coefficient you want is the $\phi_{j,j}$, estimated via OLS with the standard $\hat{\beta} = (X'X)^{-1}X'Y$ coefficients.
So, for example, if you would like the first order PACF:
$$y_{t,1} = \phi_{1,1} y_{t-1} + \epsilon_t$$
and the coefficient you want is the $\hat{\phi_{1,1}}$ given by OLS: $\hat{\phi_{1,1}}=\frac{Cov(y_{t-1},y_t)}{Var(y_t)}$ (assuming weak stationarity).
The second order PACF would be the $\phi_{2,2}$ coefficient of:
$$y_{t,2} = \phi_{2,1} y_{t-1} + \phi_{2,2} y_{t-2} + \epsilon_t$$
And so on.
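A hedged sketch of the second-order case, reusing the simulated series y from the ACF sketch above; the OLS estimate of $\phi_{2,2}$ should be close to (though not identical with, in finite samples) what pacf() reports:
n <- length(y)
d <- data.frame(yt = y[3:n], yl1 = y[2:(n - 1)], yl2 = y[1:(n - 2)])
coef(lm(yt ~ yl1 + yl2, data = d))["yl2"]    # OLS estimate of phi_{2,2}
pacf(y, lag.max = 2, plot = FALSE)           # compare with the lag-2 entry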
Good references on this are Enders (2004) and Hamilton (1994). | How to calculate the ACF and PACF for time series | Well if you mean how to estimate the ACF and PACF, here is how it's done:
1. ACF:
In practice, a simple procedure is:
Estimate the sample mean:
$$\bar{y} = \frac{\sum_{t=1}^{T} y_t}{T}$$
Calculate th | How to calculate the ACF and PACF for time series
Well if you mean how to estimate the ACF and PACF, here is how it's done:
1. ACF:
In practice, a simple procedure is:
Estimate the sample mean:
$$\bar{y} = \frac{\sum_{t=1}^{T} y_t}{T}$$
Calculate the sample autocorrelation:
$$\hat{\rho_j} = \frac{\sum_{t=j+1}^{T}(y_t - \bar{y})(y_{t-j} - \bar{y})}{\sum_{t=1}^{T}(y_t - \bar{y})^2}$$
Estimate the variance. In many softwares (including R if you use the acf() function), it is approximated by a the variance of a white noise: $T^{-1}$. This leads to confidence intervals that are asymptotically consistent, but the smaller than the actual confidence interval in many cases (leading to a larger probability of Type 1 Error), so interpret theese with caution!
2. PACF:
The PACF is a bit more complicated, because it tries to nullify the effects of other order correlations.
It is estimated via a set of OLS regressions:
$$y_{t,j} = \phi_{j,1} y_{t-1} + \phi_{j,2} y_{t-2} + ... + \phi_{j,j} y_{t-j} + \epsilon_t$$
And the coefficient you want is the $\phi_{j,j}$, estimated via OLS with the standard $\hat{\beta} = (X'X)^{-1}X'Y$ coefficients.
So, for example, if you would like the first order PACF:
$$y_{t,1} = \phi_{1,1} y_{t-1} + \epsilon_t$$
and the coefficient you want is the $\hat{\phi_{1,1}}$ given by OLS: $\hat{\phi_{1,1}}=\frac{Cov(y_{t-1},y_t)}{Var(y_t)}$ (assuming weak stationarity).
The second order PACF would be the $\phi_{2,2}$ coefficient of:
$$y_{t,2} = \phi_{2,1} y_{t-1} + \phi_{2,2} y_{t-2} + \epsilon_t$$
And so on.
Good references on this are Enders (2004) and Hamilton (1994). | How to calculate the ACF and PACF for time series
Well if you mean how to estimate the ACF and PACF, here is how it's done:
1. ACF:
In practice, a simple procedure is:
Estimate the sample mean:
$$\bar{y} = \frac{\sum_{t=1}^{T} y_t}{T}$$
Calculate th |
54,050 | How to calculate the ACF and PACF for time series | By definition, the ACF is $\gamma_j := E[(Y_{t}-\mu)(Y_{t-j}-\mu)]$ (for autocovariance) and $\rho_j := \frac{\gamma_j}{\gamma_0}$ (for autocorrelation). For a closed formula written as a function of the parameters, you need to specify the model that you have (if you say what your model is, I can tell you how to get that formula). For example, the ACF of an MA(1) with parameter $\theta$ is $\rho_1 = \theta/(1+\theta^2)$ and $\rho_j = 0$ for $j \ge 2$.
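For instance (an added illustration, not in the original answer), R's ARMAacf() returns the theoretical ACF and PACF of a specified model, such as an MA(1) with $\theta = 0.5$:
ARMAacf(ma = 0.5, lag.max = 3)               # lags 0..3: 1, 0.4, 0, 0
ARMAacf(ma = 0.5, lag.max = 3, pacf = TRUE)  # theoretical PACF at lags 1..3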
As for the PACF, you want the correlation after having controlled for the other lags in the model, so you use OLS: the PACF at lag $j$ is defined as the coefficient $\beta_j$ in $Y_t = \beta_0 + \beta_1Y_{t-1} + ... + \beta_jY_{t-j} + u_t$.
To use the data of your time series to calculate the sample counterpart of those statistics (without having to specify a model as I did until now), you first need to assume that your series is at least weakly stationary and ergodic, which in loose terms means that the series "will not change its statistical properties with time", so that the values of the series that you observe can be meaningful for the process behind it. Note: this is a more formal statistical matter; ideally you should learn exactly what those assumptions mean and how to tell when you can make them and when you can't, but it's best to ask that in another post.
Then, you can get $\gamma_j$ and $\rho_j$ by the formula presented in the most upvoted answer in ACF and PACF Formula. And for the PACF, there is a system of equations that connects the autocorrelations to it, known as the Levinson recursion (which is also explained in that answer). | How to calculate the ACF and PACF for time series | By definition, a ACF is $\gamma_j := E[(Y_{t}-\mu)(Y_{t-j}-\mu)]$ (for covariance) and the is $\rho_j := \frac{\gamma_j}{\gamma_0}$ (for correlation). For a closed formula wrote in function of paramet | How to calculate the ACF and PACF for time series
By definition, a ACF is $\gamma_j := E[(Y_{t}-\mu)(Y_{t-j}-\mu)]$ (for covariance) and the is $\rho_j := \frac{\gamma_j}{\gamma_0}$ (for correlation). For a closed formula wrote in function of parameters and such, you need to specify the model that you have (if you say what's your model, i can tell you how to get that formula). For example, the ACF for an MA(1) is:
As for the PACF, you want the correlation after having controlled for the other lags in the model, so you need to use OLS, so PACF is defined as the $\beta_j$ on $Y_t = \beta_0 + \beta_1Y_{t-1} + ... + \beta_jY_{t-j} + u_t$.
For using the data of your time series to calculate the amostral counterpart of those statistics (without having to set a model as what i presented untill now), first you need to assume that your series is at least weakly stationary and ergodic, which in loose terms is like saying that the series "will not change its statistical properties with time", so that the values of the series that you observe can be meaningful to the process behind it. Obs: This is a more formal thing about statistics, ideally you should try to learn what exactly those things mean and how to see when you can make that assumption and when you can't, but it's best to ask that in another post.
Then, you can get $\gamma_j$ and $\rho_j$ by the formula present in the most upvoted answer in ACF and PACF Formula. And for the PACF, there is a sistem of equations that connect the ACF correlations to it, known as the Levinson recursion (which also is explained in that answer). | How to calculate the ACF and PACF for time series
By definition, a ACF is $\gamma_j := E[(Y_{t}-\mu)(Y_{t-j}-\mu)]$ (for covariance) and the is $\rho_j := \frac{\gamma_j}{\gamma_0}$ (for correlation). For a closed formula wrote in function of paramet |
54,051 | Glmer with count data, nested design and repeated measurements | Adding (1|round) as a random effect to prevent pseudoreplication is right; a good article about this subject can be found here.
To check that your model accounts for this correctly, you could compare the df’s of your model summaries and verify that they account for your pseudoreplicates and are therefore lower in the model with round as a random effect.
The Term (1|Biotope/Location) is also correct for adding a nested structure, but you should then remove Location as a fixed effect,
a good post about this is from @JoeKing Crossed vs nested random effects: how do they differ and how are they specified correctly in lme4?
Another possibility writing it down would be (1|Biotope) + (1|Biotope:Location)
So your final model should look like that:
Individuals ~ Trap + Temperature + Humidity + (1|Biotope/Location) + (1|round)
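As a hedged sketch (the data frame name dat and the Poisson family are assumptions here, since neither is specified in the post), the model could be fit along these lines:
library(lme4)
m <- glmer(Individuals ~ Trap + Temperature + Humidity +
             (1 | Biotope/Location) + (1 | round),
           data = dat, family = poisson)     # dat and family = poisson are assumed
summary(m)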
Other than that your model looks fine to me. You could consider whether there are any additional interactions, maybe between Location and Temperature, (1|Location:Temperature), and try adding them to the model. | Glmer with count data, nested design and repeated measurements | Adding (1|round) as a random effect to prevent pseudoreplication is right, a good article about this subject can be found here.
To ensure your model assumes everything correct you could compare the df | Glmer with count data, nested design and repeated measurements
Adding (1|round) as a random effect to prevent pseudoreplication is right, a good article about this subject can be found here.
To ensure your model assumes everything correct you could compare the df’s of your model summaries and check if they take account for your pseudoreplicates and are therefore lower in the model with round as a random effect.
The Term (1|Biotope/Location) is also correct for adding a nested structure, but you should then remove Location as a fixed effect,
a good post about this is from @JoeKing Crossed vs nested random effects: how do they differ and how are they specified correctly in lme4?
Another possibility writing it down would be (1|Biotope) + (1|Biotope:Location)
So your final model should look like that:
Individuals ~ Trap + Temperature + Humidity + (1|Biotope/Location) + (1|round)
Other than that your model looks fine to me, you could consider if there are any additional interactions, maybe between Location and Temperature (1|Location:Temperature) and try adding them to the model | Glmer with count data, nested design and repeated measurements
Adding (1|round) as a random effect to prevent pseudoreplication is right, a good article about this subject can be found here.
To ensure your model assumes everything correct you could compare the df |
54,052 | Glmer with count data, nested design and repeated measurements | Nice answer from Thomas! I think we may need some more information though before finalizing a modelling approach.
For example, the currently proposed model ignores the fact that the response variable was collected on different days.
One way I would think about this modelling exercise is like this:
We start out with 2 rounds of experiments, so Round can be treated as a random grouping factor sitting at the top level of your data hierarchy. (Note, however, that a rule of thumb suggests that one should have at least 5 levels for a random grouping factor, whereas you only have 2.)
Within each round, you consider multiple days. So Day could be considered a random grouping factor nested within Round (since the days are specific to each round.) It sounds like Temperature and Humidity are day-level predictor variables?
Within each Day, it looks like you consider different locations, so Location can be treated as a random grouping factor, provided the locations you selected are intended to be representative of a larger set of locations. If the locations are different from one day to the next within an experimental round, then Location would be nested within Day. If there is some overlap among locations across days within the same experimental round, then Location and Day would be partially crossed random grouping factors. If locations are the same across all days, then Location and Day would be fully crossed random grouping factors.
We don't know enough about Biotope - does it include in your study all possible levels you are interested in? Or have you selected just some of its possible levels for inclusion in your study? If the latter, then perhaps you could consider Biotope as a random grouping factor nested within/partially crossed with/fully crossed with Day, and Location as a random grouping factor nested within/partially crossed with/fully crossed with Biotope.
For Trap too, you would have to determine whether to consider it nested within/partially crossed with/fully crossed with Location, depending on whether traps are totally different from one location to the next (nested), some but not all traps are the same across locations (partially crossed) or traps are the same from one location to the next (fully crossed).
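Purely as an illustrative sketch (hypothetical column names; the right choice depends on how the study was actually run, as discussed above), nested versus crossed structures are written differently in lme4:
# Day nested in Round, Location nested in Day, Trap crossed with the rest:
f_nested  <- Individuals ~ Temperature + Humidity +
               (1 | Round/Day/Location) + (1 | Trap)
# Locations reused across days (crossed with Day rather than nested in it):
f_crossed <- Individuals ~ Temperature + Humidity +
               (1 | Round/Day) + (1 | Location) + (1 | Trap)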
Basically, you have to go from the top to the bottom level of your data hierarchy and decide what makes most sense in terms of modelling options for each level. | Glmer with count data, nested design and repeated measurements | Nice answer from Thomas! I think we may need some more information though before finalizing a modelling approach.
For example, the currently proposed model ignores the fact that the response variable | Glmer with count data, nested design and repeated measurements
Nice answer from Thomas! I think we may need some more information though before finalizing a modelling approach.
For example, the currently proposed model ignores the fact that the response variable was collected on different days.
One way I would think about this modelling exercise is like this:
We start out with 2 rounds of experiments, so Round can be treated as a random grouping factor sitting at the top level of your data hierarchy. (Note, however, that a rule of thumb suggest that one should have at least 5 levels for a random grouping factor, whereas you only have 2.)
Within each round, you consider multiple days. So Day could be considered a random grouping factor nested within Round (since the days are specific to each round.) It sounds like Temperature and Humidity are day-level predictor variables?
Within each Day, it looks like you consider different locations, so Location can be treated as a random grouping factor and provided the locations you selected are intended to be representative of a larger set of locations. If the locations are different from one day to the next within an experimental round, then Location would be nested within Day. If there is some overlap among locations across days within the same experimental round, then Location and Day would be partially crossed random grouping factors. If locations are the same across all days, then Location and Day would be fully crossed random grouping factors.
We don't know enough about Biotope - does it include in your study all possible levels you are interested in? Or have you selected just some of its possible levels for inclusion in your study? If the latter, then perhaps you could consider Biotope as a random grouping factor nested within/partially crossed with/fully crossed with Day, and Location as a random grouping factor nested within/partially crossed with/fully crossed with Biotope.
For Trap too, you would have to determine whether to consider it nested within/partially crossed with/fully crossed with Location, depending on whether traps are totally different from one location to the next (nested), some but not all traps are the same across locations (partially crossed) or traps are the same from one location to the next (fully crossed).
Basically, you have to go from the top to the bottom level of your data hierarchy and decide what makes most sense in terms of modelling options for each level. | Glmer with count data, nested design and repeated measurements
Nice answer from Thomas! I think we may need some more information though before finalizing a modelling approach.
For example, the currently proposed model ignores the fact that the response variable |
54,053 | The interpretation of a random variable | I think it makes sense to think of the world of mathematics and the real world as separated in principle. However, firstly in order to make real use of mathematics, the world of mathematics and the real world have to be connected by interpretation, including addressing/checking whether the connection is appropriate. Secondly, mathematics was originally, and is mostly still now, set up in the way it is in order to be useful, i.e., having a connection to the real world in mind.
A "random variable" is a mathematical construct, and as such part of the world of mathematics and not of the real world. When handling random variables in the world of mathematics, indeed it doesn't matter whether this represents a "real random process", because such things do not exist in the world of mathematics. It is however called a "random variable" because the people who defined it had in mind to model real random processes with it.
The problem with the second question is that it is not clear whether what you call "real random process" actually exists. There are determinists around who believe that nothing is "really" random. There are also people who hold that probabilities do not model randomness in the real world, but rather the state of knowledge/uncertainty of an individual, or a scientific community as a whole. These people still use random variables when treating probability mathematically.
Even a frequentist, for whom probability models model processes existing in reality, may concede that many real processes for which we use random variables are not really random, although most believe that some are (such as radioactive decay, or random sampling and randomisation of experiments if the researcher uses proper random numbers). Ultimately "objective randomness" cannot be confirmed by observation, so I'd rather say that if a model of a real process is used by a researcher, the researcher has to do two things in order to convince others that the model is useful:
(a) argue that nothing of the knowledge of the process invalidates randomness (such as convenience sampling where a proper random sample could be drawn),
(b) show that the data behave (in all relevant aspects) about as the model implies they should behave.
This will not ensure that the real process is "really random"; however, it suggests that we can use the model in order to learn something about what goes on in the real world, e.g., make predictions, or simply statements like "data on the effect of homeopathy look as if they were generated by a random model according to which it is the same as placebo." | The interpretation of a random variable | I think it makes sense to think of the world of mathematics and the real world as separated in principle. However, firstly in order to make real use of mathematics, the world of mathematics and the re
I think it makes sense to think of the world of mathematics and the real world as separated in principle. However, firstly in order to make real use of mathematics, the world of mathematics and the real world have to be connected by interpretation, including addressing/checking whether the connection is appropriate. Secondly, mathematics was originally, and is mostly still now, set up in the way it is in order to be useful, i.e., having a connection to the real world in mind.
A "random variable" is a mathematical construct, and as such part of the world of mathematics and not of the real world. When handling random variables in the world of mathematics, indeed it doesn't matter whether this represents a "real random process", because such things do not exist in the world of mathematics. It is however called a "random variable" because the people who defined it had in mind to model real random processes with it.
The problem with the second question is that it is not clear whether what you call "real random process" actually exists. There are determinists around who believe that nothing is "really" random. There are also people who hold that probabilities do not model randomness in the real world, but rather the state of knowledge/uncertainty of an individual, or a scientific community as a whole. These people still use random variables when treating probability mathematically.
Even a frequentist, for whom probability models model processes existing in reality may concede that many real processes for which we use random variables are not really random, although most believe that some are (such as radioactive decay, or random sampling and randomisation of experiments if the researcher uses proper random numbers). Ultimately "objective randomness" cannot be confirmed by observation, so I'd rather say, if a model of a real process is used by a researcher, the researcher has to do two things in order to convince others that the model is useful:
(a) argue that nothing of the knowledge of the process invalidates randomness (such as convenience sampling where a proper random sample could be drawn),
(b) show that the data behave (in all relevant aspects) about as the model implies they should behave.
This will not secure that the real process is "really random", however it suggests that we can use the model in order to learn something about what goes on in the real world, e.g., make predictions, or simply statements like "data on the effect of homeopathy look like if they were generated by a random model according to which it is the same as placebo." | The interpretation of a random variable
I think it makes sense to think of the world of mathematics and the real world as separated in principle. However, firstly in order to make real use of mathematics, the world of mathematics and the re |
54,054 | The interpretation of a random variable | Something random is something you cannot predict.
So if you can somehow solve an equation that figures out which side a coin will land on before it lands on that side, it is not truly random. You can do this for every random number generating process, so that variable is random if you cannot figure out what it will equal. It seems as if you can't, so for you it is truly random. | The interpretation of a random variable | something random is something you cannot predict.
so if you can somehow solve an equation that figures out which side a coin will land on before it lands on that side, it is not truly random. you can | The interpretation of a random variable
something random is something you cannot predict.
so if you can somehow solve an equation that figures out which side a coin will land on before it lands on that side, it is not truly random. you can do this for every random number generating process, so that variable is random if you cannot figure out what it will equal. it seems as if you can't, so for you it is truly random. | The interpretation of a random variable
something random is something you cannot predict.
so if you can somehow solve an equation that figures out which side a coin will land on before it lands on that side, it is not truly random. you can |
54,055 | How Gradient Descent is used for classification with Decision Trees? | Gradient descent is not used for training decision trees. Not every machine learning algorithm uses a general optimization algorithm (e.g. gradient descent) for training; some of them use specialized algorithms for training them. Examples of such algorithms are $k$-NN, naive Bayes, and decision trees. In the case of those algorithms we don't train them by directly minimizing some loss function to find the best set of parameters (in fact, $k$-NN and decision trees don't have parameters per se), but they have their own algorithms for finding the solutions.
As a side note, gradient descent is not even always used for the algorithms that do train by directly minimizing a loss function. It is only one of many optimization algorithms. Moreover, it is not even the most efficient algorithm, as there are many algorithms that work better for some problems. Gradient descent got popular mostly because it is easy and efficient to use for training neural networks, but that does not make it a "one size fits all" algorithm. | How Gradient Descent is used for classification with Decision Trees? | Gradient descent is not used for training decision trees. Not every machine learning algorithm uses a general optimization algorithm (e.g. gradient descent) for training, some of them use specialized
Gradient descent is not used for training decision trees. Not every machine learning algorithm uses a general optimization algorithm (e.g. gradient descent) for training, some of them use specialized algorithms for training them. Examples of such algorithms are $k$-NN, naive Bayes, or decision trees, in case of those algorithms we don't train them by directly minimizing some loss function to find best set of parameters (in fact, $k$-NN, or decision tree don't have parameters per se), but have their own algorithms for finding the solutions.
As a side note, gradient descent is even not always used for the algorithms that do train by directly minimizing loss function. It is only one of the many optimization algorithms. Moreover, it is not even the most efficient algorithm, as there are many algorithms that work better for some problems. Gradient descent got popular mostly because it is easy and efficient to use for training neural networks, but that does not make it "one size fits all" algorithm. | How Gradient Descent is used for classification with Decision Trees?
Gradient descent is not used for training decision trees. Not every machine learning algorithm uses a general optimization algorithm (e.g. gradient descent) for training, some of them use specialized |
54,056 | Is this the correct way to run an adaptive LASSO? | Yes, but it depends on what your goal is. It's a little complicated.
What is adaptive Lasso?
Adaptive Lasso was introduced by Zou (2006). Adaptive Lasso is a modification of Lasso where each coefficient, $\beta_j$, is given its own weight, $w_j$. The coefficients are estimated by minimizing the objective function,
$$ \underset{\beta}{\arg \min }\left\|\mathbf{y}-\sum_{j=1}^{p} \mathbf{x}_{j} \beta_{j}\right\|^{2}+\lambda \sum_{j=1}^{p} w_{j}\left|\beta_{j}\right|. $$
The weights control the rate each coefficient is shrunk towards 0. The general idea is that smaller coefficients should leave the model before larger coefficients.
How do you choose the weights?
The adaptive Lasso is very general. You can set the weights however you'd like, and you'll get something out. You might want to consider what the "best" set of weights is. Zou (2006) says that you should choose your weights so the adaptive Lasso estimates have the Oracle Property:
You will always identify the set of nonzero coefficients...when the sample size is infinite
The estimates are unbiased, normally distributed, and have the correct variance (Zou (2006) has the technical definition)...when the sample size is infinite.
To ensure the Adaptive Lasso has these properties, you need to choose the weights as $w_j = 1/|\hat{\beta_j}|^{\gamma}$, where $\gamma > 0$ and $\hat{\beta_j}$ is an unbiased estimate of the true parameter, $\beta$. Generally, people choose the Ordinary Least Squares (OLS) estimate of $\beta$ because it will be unbiased. Ridge regression produces coefficient estimates that are biased, so you cannot guarantee the Oracle Property holds.
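A hedged sketch (simulated data; glmnet's penalty.factor argument is one common way to supply per-coefficient weights) of adaptive Lasso with OLS-based weights and $\gamma = 1$:
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 5), 100, 5)
y <- drop(x %*% c(2, -1, 0, 0, 0.5)) + rnorm(100)
beta_ols <- coef(lm(y ~ x))[-1]              # first-stage (unbiased) OLS estimates
w <- 1 / abs(beta_ols)                       # adaptive weights, gamma = 1
fit <- cv.glmnet(x, y, penalty.factor = w)
coef(fit, s = "lambda.min")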
What if I use something else for the weights?
What happens if you ignore the requirement of using unbiased estimates for the weights and use Ridge regression? You can't guarantee you'll get the right subset of coefficients or that they'll have the correct distribution. In practice, this probably doesn't matter. The Oracle Property is an asymptotic guarantee (when $n \to \infty$), so it doesn't necessarily apply to your data with a finite number of observations. There may be scenarios where using Ridge estimates for weights performs really well. Zou (2006) recommends using Ridge regression over OLS when your variables are highly correlated. | Is this the correct way to run an adaptive LASSO? | Yes, but it depends on what you're goal is. It's a little complicated.
What is adaptive Lasso?
Adaptive Lasso was introduced in Zhou (2006). Adaptive Lasso is a modification of Lasso where each coeffi | Is this the correct way to run an adaptive LASSO?
Yes, but it depends on what you're goal is. It's a little complicated.
What is adaptive Lasso?
Adaptive Lasso was introduced in Zhou (2006). Adaptive Lasso is a modification of Lasso where each coefficient, $\beta_j$, is given its own weight, $w_j$. The coefficients are estimated by minimizing the objective function,
$$ \underset{\beta}{\arg \min }\left\|\mathbf{y}-\sum_{j=1}^{p} \mathbf{x}_{j} \beta_{j}\right\|^{2}+\lambda \sum_{j=1}^{p} w_{j}\left|\beta_{j}\right|. $$
The weights control the rate each coefficient is shrunk towards 0. The general idea is that smaller coefficients should leave the model before larger coefficients.
How do you choose the weights?
The adaptive Lasso is very general. You can set the weights however you'd like, and you'll get something out. You might want to consider what the "best" set of weights are. Zhou (2006) say that you should choose your weights so the adaptive Lasso estimates have the Oracle Property:
You will always identify the set of nonzero coefficients...when the sample size is infinite
The estimates are unbiased, normally distributed, and the correct variance (Zhou (2006) has the technical definition)...when the sample size is infinite.
To ensure the Adaptive Lasso has these properties, you need to choose the weights as $w_j = 1/|\hat{\beta_j}|^{\gamma}$, where $\gamma > 0$ and $\hat{\beta_j}$ is an unbiased estimate of the true parameter, $\beta$. Generally, people choose the Ordinary Least Squares (OLS) estimate of $\beta$ because it will be unbiased. Ridge regression produces coefficient estimates that are biased, so you cannot guarantee the Oracle Property holds
What if I use something else for the weights?
What happens if you ignore the requirement of using unbiased estimates for the weights and use Ridge regression? You can't guarantee you'll get the right subset of coefficients and that they'll have the correct distribution. In practice, this probably doesn't matter. The Oracle Property is an asymptotic guarantee (when $n \to \infty$), so it doesn't necessary apply to your data with a finite number of observations. There may be scenarios where using Ridge estimates for weights performs really well. Zhou (2006) recommends using Ridge regression over OLS when your variables are highly correlated. | Is this the correct way to run an adaptive LASSO?
Yes, but it depends on what you're goal is. It's a little complicated.
What is adaptive Lasso?
Adaptive Lasso was introduced in Zhou (2006). Adaptive Lasso is a modification of Lasso where each coeffi |
54,057 | Basic preconditioned gradient descent example | Since the hessian is constant $P = H^{-1}$ is optimal (you fully recover Newton's method): $P = H^{-1} = \text{diag}(1/2, 1/20)$. This is equivalent to making the change of variables $x = \frac{1}{\sqrt{2}} x'$ and $y = \frac{1}{\sqrt{20}} y'$ which gives you the desired $\frac{1}{2} (x'^2 + y'^2)$ form.
As for approach 2, this is just a systematic way of estimating the scale of your variables. Get an empirical estimate for the covariance matrix $\Sigma \approx \text{Cov}(x)$. The answer you referenced shows how $P = \Sigma$ follows from a change of variable and chain-rule.
What distribution do you sample if you want to do approach 2? If you treat your objective function as a negative log-likelihood, then you should be sampling from a normal distribution with $\frac{1}{2} \Sigma^{-1} = \operatorname{diag}(1, 10)$. So you should get $\Sigma = \operatorname{diag}(1/2, 1/20)$. If you were to take many random samples from this distribution and calculate their covariance, you would get the same answer since $\Sigma_{i,j} := \operatorname{E} [(x_i - \mu_i)( x_j - \mu_j)] = \operatorname{Cov}(x)$ (see wikipedia).
These approaches are equivalent when your data is normally-distributed and you have a lot of it. In practice, approach 2 is more generally applicable to situations where the Hessian is not constant, but can be somewhat crude.
Automatically computing a dynamic preconditioning matrix is a very well studied topic. Optimization methods that do this are called Quasi-Newton methods. | Basic preconditioned gradient descent example | Since the hessian is constant $P = H^{-1}$ is optimal (you fully recover Newton's method): $P = H^{-1} = \text{diag}(1/2, 1/20)$. This is equivalent to making the change of variables $x = \frac{1}{\sq | Basic preconditioned gradient descent example
Since the hessian is constant $P = H^{-1}$ is optimal (you fully recover Newton's method): $P = H^{-1} = \text{diag}(1/2, 1/20)$. This is equivalent to making the change of variables $x = \frac{1}{\sqrt{2}} x'$ and $y = \frac{1}{\sqrt{20}} y'$ which gives you the desired $\frac{1}{2} (x'^2 + y'^2)$ form.
As for approach 2, this is just a systematic way of estimating the scale of your variables. Get an empirical estimate for the covariance matrix $\Sigma \approx \text{Cov}(x)$. The answer you referenced shows how $P = \Sigma$ follows from a change of variable and chain-rule.
What distribution do you sample if you want to do approach 2? If you treat your objective function as a negative log-likelihood, then you should be sampling from a normal distribution with $\frac{1}{2} \Sigma^{-1} = \operatorname{diag}(1, 10)$. So you should get $\Sigma = \operatorname{diag}(1/2, 1/20)$. If you were to take many random samples from this distribution and calculate their covariance, you would get the same answer since $\Sigma_{i,j} := \operatorname{E} [(x_i - \mu_i)( x_j - \mu_j)] = \operatorname{Cov}(x)$ (see wikipedia).
These approaches are equivalent when your data is normally-distributed and you have a lot of it. In practice, approach 2 is more generally applicable to situations where the Hessian is not constant, but can be somewhat crude.
Automatically computing a dynamic preconditioning matrix is a very well studied topic. Optimization methods that do this are called Quasi-Newton methods. | Basic preconditioned gradient descent example
Since the hessian is constant $P = H^{-1}$ is optimal (you fully recover Newton's method): $P = H^{-1} = \text{diag}(1/2, 1/20)$. This is equivalent to making the change of variables $x = \frac{1}{\sq |
54,058 | Basic preconditioned gradient descent example | I'm fairly sure you can just use the Hessian, which in this case is $$
2\begin{bmatrix}
1 & 0\\
0 & 10
\end{bmatrix}
$$
This comes from method 3 (using second-partial derivative information), and possibly also method 1.
The reason we're using the exact matrix here instead of an approximation is because this is a simple case; as noted elsewhere efficiency can be an issue with larger dimensions.
We've already got a diagonal matrix, so even if we did approximate as suggested it would be the same. Note this matrix also satisfies the criteria of being symmetric and positive semi-definite. | Basic preconditioned gradient descent example | I'm fairly sure you can just use the Hessian, which in this case is $$
2\begin{bmatrix}
1 & 0\\
0 & 10
\end{bmatrix}
$$
This comes from method 3 (using second-partial derivative information), and poss | Basic preconditioned gradient descent example
I'm fairly sure you can just use the Hessian, which in this case is $$
2\begin{bmatrix}
1 & 0\\
0 & 10
\end{bmatrix}
$$
This comes from method 3 (using second-partial derivative information), and possibly also method 1.
The reason we're using the exact matrix here instead of an approximation is because this is a simple case; as noted elsewhere efficiency can be an issue with larger dimensions.
We've already got a diagonal matrix, so even if we did approximate as suggested it would be the same. Note this matrix also satisfies the criteria of being symmetric and positive semi-definite. | Basic preconditioned gradient descent example
I'm fairly sure you can just use the Hessian, which in this case is $$
2\begin{bmatrix}
1 & 0\\
0 & 10
\end{bmatrix}
$$
This comes from method 3 (using second-partial derivative information), and poss |
54,059 | Basic preconditioned gradient descent example | This is the problem that preconditioning tries to solve:
From this blog (also discussed here on stackexchange: Fitting SIR model with 2019-nCoV data doesn't conververge)
Problems:
With gradient descent methods you follow a path down-hill. But, your algorithm is often not as smooth as water and doesn't flow straight down in the most efficient way. Instead, often the algorithm will 'over-shoot' and follow an (inefficient) zigzag path.
Another problem is that the algorithm might have some basic rules to decide when to stop. For instance, deciding to stop when the step sizes are getting very small. However, for that stopping rule, when you are not looking in the direction of the optimum (because you are looking along the slow zigzag path), it might be that your step sizes are small while the optimum is still far away.
(the first problem is about the speed of getting a solution/convergence, the second problem is about the accuracy of the result)
Methods to 'pre-condition':
So it would be nice if you could change the scale of the different variables somehow (make them more equal instead of having this valley). You only need to know how.
method 1: use the formula for the problem or some intuition about the problem.
In your case, with the function $f(x,y) = x^2 + 10y^2$, you can compute the scaling/pre-conditioning based on the formula. In the formulation of your reference... $P$ should be a matrix. That is, $P$ is not $\left[1,1/10\right]$ but instead $$P = \begin{bmatrix} 1 & 0 \\ 0 & 1/10 \end{bmatrix}$$
It is the scaling of the variables that is a vector, namely $R = \left[1,\sqrt{1/10}\right]$.
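As an added, hedged illustration of the same idea in R: optim()'s parscale control rescales the parameters internally, which here plays the role of $R$:
f <- function(p) p[1]^2 + 10 * p[2]^2
optim(c(1, 1), f, method = "CG",
      control = list(parscale = c(1, sqrt(1 / 10))))$par   # scaled as R = [1, sqrt(1/10)]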
Without a formula, you can use 'intuition'. For instance, you might have features with widely varying scales. I often just do this. Scale the variables such that the range of the 'expected' solution will be similar for each variable. (In the image below you see this as well. The ranges of the $R_0$ and $K$ parameters are quite different.)
method 2: Use the data/variance. I believe that this refers to using a sample distribution of the features (or some other information about the scale of spread). The variance of the features relates to the curvature of the likelihood function (observed information matrix). Maybe this method is also much the same as 'use intuition'.
Aside from just looking at the scale of the parameters, it is
also important to look at the 'variation'. If some parameter is around $1 \, kilometer \pm 0.001 \, kilometer$, then the loss function has more curvature than a parameter around $10 \, meter \pm 2 \, meter$. (you can see this in the image below, the parameter $R_0 \approx 1$ has a larger scale than the $K \approx 0.4$ parameter, but the variation is in a much smaller range)
method 3: You may not have a formula or intuition like in method 1, but you could compute the Hessian at some point during the descent and adapt the scaling accordingly.
This method 3 is particularly nice if the problem is not continuous. For instance, in the image below you can see that the Hessian and the scale are varying. Intuitively, the line that the "half-pipe" follows is bent (in the beginning the $K$ parameter has more curvature, but later the $R_0$ parameter). In these cases you cannot easily solve problems of convergence by just setting a (single) pre-conditioning from the start, and you need to change it in multiple steps (along the path). | Basic preconditioned gradient descent example | This is the problem that preconditioning tries to solve:
From this blog (also discussed here on stackexchange: Fitting SIR model with 2019-nCoV data doesn't conververge)
Problems:
With gradient desc | Basic preconditioned gradient descent example
This is the problem that preconditioning tries to solve:
From this blog (also discussed here on stackexchange: Fitting SIR model with 2019-nCoV data doesn't conververge)
Problems:
With gradient descent methods you follow a path down-hill. But, your algorithm is often not as smooth as water and doesn't flow straight down in the most efficient way. Instead, often the algorithm will 'over-shoot' and follow an (inefficient) zigzag path.
Another problem is that the algorithm might have some basic rules to decide when to stop. For instance, deciding to stop when the step sizes are getting very small. However, for that stopping rule, when you are not looking into the direction of the optimum (because you are looking along the slow zigzag path), it might be that your step sizes are small while the optimum is still far away.
(the first problem is about the speed of getting a solution/convergence, the second problem is about the accuracy of the result)
Methods to 'pre-condition':
So it would be nice if you could change the scale of the different variables somehow (make them more equal instead of having this valley). You only need to know how.
method 1: use the formula for the problem or some intuition about the problem.
In your case, the function $f(x,y) = x^2 + 10y^2$, you can compute the scaling/pre-conditioning based on the formula. In the formulation of your reference... $P$ should be a matrix. That is $P$ is not $\left[1,1/10\right]$ but instead $$P = \begin{bmatrix} 1 & 0 \\ 0 & 1/10 \end{bmatrix}$$
It is the scaling of the variables that is a vector, namely $R = \left[1,\sqrt{1/10}\right]$.
Without formula, you can use 'intuition'. For instance, you might be having features with widely varying scales. I often just do this. Scale the variables such that the range of the 'expected' solution will be similar for each variable. (In the image below you see this as well. The range of the $R_0$ and $K$ parameter is a lot different)
method 2: Use the data/variance. I believe that this refers to using a sample distribution of the features (or some other information about the scale of spread). The variance of the features relates to the curvature of the likelihood function (observed information matrix). Maybe this method is also much the same as 'use intuition'.
Aside from just looking at the scale of the parameters, it is
also important to look at the 'variation'. If some parameter is around $1 \, kilometer \pm 0.001 \, kilometer$, then the loss function has more curvature than a parameter around $10 \, meter \pm 2 \, meter$. (you can see this in the image below, the parameter $R_0 \approx 1$ has a larger scale than the $K \approx 0.4$ parameter, but the variation is in a much smaller range)
method 3: You may not have a formula or intuition like in method 1, but you could compute the Hessian at some point during the descent and adapt the scaling accordingly.
This method 3 is particularly nice if the problem is not continuous. For instance in the image below you get that the hessian and the scale is varying. Intuitively, the line that the "half-pipe" follows is bended (in the beginning the $K$ parameter has more curvature, but later the $R_0$ parameter). In these cases you can not easily solve problems of convergence by just setting a (single) pre-conditioning from the start, and you need to change it in multiple steps (along the path). | Basic preconditioned gradient descent example
This is the problem that preconditioning tries to solve:
From this blog (also discussed here on stackexchange: Fitting SIR model with 2019-nCoV data doesn't conververge)
Problems:
With gradient desc |
54,060 | Why we learn $\log{\sigma^2}$ in VAE reparameterization trick instead of standard deviation? [duplicate] | It doesn't make any real difference: since $\log \sigma^2= 2\log \sigma$, learning one is as easy as learning the other.
It's traditional in statistics to think of $\sigma^2$ as the second parameter of a Normal distribution (rather than $\sigma$).
There's a simple unbiased estimator for $\sigma^2$ but not for $\sigma$
The math for representing the Normal as a two-parameter exponential family is slightly simpler as $(\mu, \sigma^2)$ than $(\mu, \sigma)$
None of this is really important; it's mostly tradition. | Why we learn $\log{\sigma^2}$ in VAE reparameterization trick instead of standard deviation? [duplic | It doesn't make any real difference; since $\log \sigma^2= 2\log \sigma$; learning one is as easy as learning the other
It's traditional in statistics to think of $\sigma^2$ as the second parameter of | Why we learn $\log{\sigma^2}$ in VAE reparameterization trick instead of standard deviation? [duplicate]
It doesn't make any real difference; since $\log \sigma^2= 2\log \sigma$; learning one is as easy as learning the other
It's traditional in statistics to think of $\sigma^2$ as the second parameter of a Normal distribution (rather than $\sigma$).
There's a simple unbiased estimator for $\sigma^2$ but not for $\sigma$
The math for representing the Normal as a two-parameter exponential family is slightly simpler as $(\mu, \sigma^2)$ than $(\mu, \sigma)$
None of this is really important; it's mostly tradition. | Why we learn $\log{\sigma^2}$ in VAE reparameterization trick instead of standard deviation? [duplic
It doesn't make any real difference; since $\log \sigma^2= 2\log \sigma$; learning one is as easy as learning the other
It's traditional in statistics to think of $\sigma^2$ as the second parameter of |
54,061 | Does the t statistic have uses unrelated to hypothesis testing? | When you ask about "the t-statistic", I think about the concrete quantity $$\frac {{\bar {X}}-\mu }{S/{\sqrt {n}}}$$
To actually calculate this quantity, we have to specify $\mu$. This is typically chosen in reference to some given null hypothesis. So to me it seems awkward to try to disentangle "the statistic" from the null hypothesis to which it is implicitly linked by $\mu$. Setting $\mu$ to 0, for example, which you're implicitly doing when you type t.test(rnorm(10))$statistic into R, is implicitly related to the hypothesis test $H_0: \mu = 0$.
Where I think of Student's t-distribution as useful is as a parametric form for fitting to data. At the end of the day, it's just another symmetric, bell-shaped distribution. It just has fatter tails than a Gaussian. So it can be used to model things for which you'd like to preserve that symmetry and bell-shape, but give the extreme outcomes more probability mass than a Gaussian does. I know it's used in finance to model asset-returns (link 1, link 2) for example, but I can't speak to how successful or useful these kinds of models are because I don't use them myself.
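A hedged sketch of that use (simulated 'returns'; the starting values may need tuning in practice): fitting a location-scale t by maximum likelihood with MASS::fitdistr.
library(MASS)
set.seed(1)
r <- 0.01 * rt(500, df = 4)                  # fake heavy-tailed asset returns
fit <- fitdistr(r, "t", start = list(m = 0, s = 0.01, df = 5))
fit$estimate                                 # location, scale, degrees of freedom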
I'd suspect them to be of particular use to hierarchical modelers who have some prior knowledge that points to fat tails. Gelman briefly discusses using the t instead of the the Gaussian in fat-tail situations in section 17.2 of Bayesian Data Analysis. | Does the t statistic have uses unrelated to hypothesis testing? | When you ask about "the t-statistic", I think about the concrete quantity $$\frac {{\bar {X}}-\mu }{S/{\sqrt {n}}}$$
To actually calculate this quantity, we have to specify $\mu$. This is typically ch | Does the t statistic have uses unrelated to hypothesis testing?
When you ask about "the t-statistic", I think about the concrete quantity $$\frac {{\bar {X}}-\mu }{S/{\sqrt {n}}}$$
To actually calculate this quantity, we have to specify $\mu$. This is typically chosen in reference to some given null hypothesis. So to me it seems awkward to try to disentangle "the statistic" from the null hypothesis to which it is implicitly linked by $\mu$. Setting $\mu$ to 0, for example, which you're implicitly doing when you type t.test(rnorm(10))$statistic into R, is implicitly related to the hypothesis test $H_0: \mu = 0$.
Where I think of Student's t-distribution as useful is as a parametric form for fitting to data. At the end of the day, it's just another symmetric, bell-shaped distribution. It just has fatter tails than a Gaussian. So it can be used to model things for which you'd like to preserve that symmetry and bell-shape, but give the extreme outcomes more probability mass than a Gaussian does. I know it's used in finance to model asset-returns (link 1, link 2) for example, but I can't speak to how successful or useful these kinds of models are because I don't use them myself.
I'd suspect them to be of particular use to hierarchical modelers who have some prior knowledge that points to fat tails. Gelman briefly discusses using the t instead of the the Gaussian in fat-tail situations in section 17.2 of Bayesian Data Analysis. | Does the t statistic have uses unrelated to hypothesis testing?
When you ask about "the t-statistic", I think about the concrete quantity $$\frac {{\bar {X}}-\mu }{S/{\sqrt {n}}}$$
To actually calculate this quantity, we have to specify $\mu$. This is typically ch |
54,062 | Does the t statistic have uses unrelated to hypothesis testing? | A "hypothesis test" in the strictest sense always results in a binary outcome of either rejecting or failing to reject a null hypothesis. T-statistics are generally turned into p-values, which are then compared against some pre-defined threshold to make that binary determination. It is possible to use the t-statistic itself, however, as a general measure of "deviation from the null", without ever having to take the final step of testing whether the null hypothesis should be rejected or not. Using a t-statistic in this way is still derived from a hypothesis testing framework, but does not actually result in a test of whether the null should be rejected or not, so I'd argue this is not strictly a "hypothesis test".
As an example, the t-statistic can be used as a means of ranking features by significance, while accounting for the directionality of the differences. Gene set enrichment analysis, for example, searches for sets of consistently up- or down-regulated genes, so the directionality of differences is important for this method. Ranking features by their p-value will draw no distinction between up- and down-regulated genes, and simply put the most significant genes at the top of the list. Ranking by the t-statistic, on the other hand, will put the most significant up-regulated genes at one end of the list, and the most significant down-regulated genes at the other end. Although the magnitude of the t-statistic is directly related to the p-value, the sign of the t-statistic is lost when calculating a p-value for a hypothesis test. Ranking genes in this way respects the directionality and how incompatible with the null hypothesis each gene is, but does not actually make any determination if any gene is "significantly dysregulated" or not. | Does the t statistic have uses unrelated to hypothesis testing? | A "hypothesis test" in the strictest sense always results in a binary outcome of either rejecting or failing to reject a null hypothesis. T-statistics are generally turned into p-values, which are the | Does the t statistic have uses unrelated to hypothesis testing?
A "hypothesis test" in the strictest sense always results in a binary outcome of either rejecting or failing to reject a null hypothesis. T-statistics are generally turned into p-values, which are then compared against some pre-defined threshold to make that binary determination. It is possible to use the t-statistic itself, however, as a general measure of "deviation from the null", without ever having to take the final step of testing whether the null hypothesis should be rejected or not. Using a t-statistic in this way is still derived from a hypothesis testing framework, but does not actually result in a test of whether the null should be rejected or not, so I'd argue this is not strictly a "hypothesis test".
As an example, the t-statistic can be used as a means of ranking features by significance, while accounting for the directionality of the differences. Gene set enrichment analysis, for example, searches for sets of consistently up- or down-regulated genes, so the directionality of differences is important for this method. Ranking features by their p-value will draw no distinction between up- and down-regulated genes, and simply put the most significant genes at the top of the list. Ranking by the t-statistic, on the other hand, will put the most significant up-regulated genes at one end of the list, and the most significant down-regulated genes at the other end. Although the magnitude of the t-statistic is directly related to the p-value, the sign of the t-statistic is lost when calculating a p-value for a hypothesis test. Ranking genes in this way respects the directionality and how incompatible with the null hypothesis each gene is, but does not actually make any determination if any gene is "significantly dysregulated" or not. | Does the t statistic have uses unrelated to hypothesis testing?
A "hypothesis test" in the strictest sense always results in a binary outcome of either rejecting or failing to reject a null hypothesis. T-statistics are generally turned into p-values, which are the |
54,063 | Measurement error in one indep variable in OLS with multiple regression | Suppose your true matrix is $X^*=\begin{bmatrix} x_1^{*} & x_2 \end{bmatrix}$, but you observe $x_1=x_1^*+v$.
Then the OLS coefficient on $x_1$ has the following probability limit:
$$\mathbf{plim} \hat \beta_{x_1|x_2}=\beta \left[1-\frac{\sigma^2_v}{\sigma^2_{x_1^*}\cdot(1-R^2_{x_1^*,x_2})+\sigma^2_v} \right]=\beta \left[1-\frac{1}{1+\frac{\sigma^2_{x_1^*}}{\sigma^2_v}\cdot(1-R^2_{x_1^*,x_2})} \right]$$
where $R^2_{x_1^*,x_2}$ denotes the $R^2$ from the auxiliary regression of $x_1^*$ on $x_2$.
This means the coefficient is still attenuated, but generally less so than in the single regressor case. The bias gets worse as collinearity with $x_2$ increases.
Here $x_2$ can contain more than one variable measured without error, so this formula is pretty general. The coefficient(s) on the variable(s) measured without error will also be inconsistent, in a direction determined by $\Sigma_{X^*X^*}$.
You can find this formula (without proof, but surrounded by much auxiliary wisdom) in equation (5) in Bound, John & Brown, Charles & Mathiowetz, Nancy, 2001. "Measurement error in survey data," Handbook of Econometrics, edition 1, volume 5, chapter 59, pages 3705-3843.
They cite these two older papers:
Levi, M.D. (1973), "Errors in the variables bias in the presence of correctly measured variables", Econometrica 41:985−986.
Garber, S., and S. Klepper (1980), “Extending the classical normal errors-in-variables model”, Econometrica 48:1541−1546. | Measurement error in one indep variable in OLS with multiple regression | Suppose you true matrix is $X^*=\begin{bmatrix} x_1^{*} & x_2 \end{bmatrix}$, but you observe $x_1=x_1^*+v$.
Then the OLS coefficient on $x_1$ has the following probability limit:
$$\mathbf{plim} \hat | Measurement error in one indep variable in OLS with multiple regression
Suppose your true matrix is $X^*=\begin{bmatrix} x_1^{*} & x_2 \end{bmatrix}$, but you observe $x_1=x_1^*+v$.
Then the OLS coefficient on $x_1$ has the following probability limit:
$$\mathbf{plim} \hat \beta_{x_1|x_2}=\beta \left[1-\frac{\sigma^2_v}{\sigma^2_{x_1^*}\cdot(1-R^2_{x_1^*,x_2})+\sigma^2_v} \right]=\beta \left[1-\frac{1}{1+\frac{\sigma^2_{x_1^*}}{\sigma^2_v}\cdot(1-R^2_{x_1^*,x_2})} \right]$$
where $R^2_{x_1^*,x_2}$ denotes the $R^2$ from the auxiliary regression of $x_1^*$ on $x_2$.
This means the coefficient is still attenuated, but generally less so than in the single regressor case. The bias gets worse as collinearity with $x_2$ increases.
Here $x_2$ can contain more than one variable measured without error, so this formula is pretty general. The coefficient(s) on the variable(s) measured without error will also be inconsistent, in a direction determined by $\Sigma_{X^*X^*}$.
You can find this formula (without proof, but surrounded by much auxiliary wisdom) in equation (5) in Bound, John & Brown, Charles & Mathiowetz, Nancy, 2001. "Measurement error in survey data," Handbook of Econometrics, edition 1, volume 5, chapter 59, pages 3705-3843.
They cite these two older papers:
Levi, M.D. (1973), "Errors in the variables bias in the presence of correctly measured variables", Econometrica 41:985−986.
Garber, S., and S. Klepper (1980), “Extending the classical normal errors-in-variables model”, Econometrica 48:1541−1546. | Measurement error in one indep variable in OLS with multiple regression
Suppose your true matrix is $X^*=\begin{bmatrix} x_1^{*} & x_2 \end{bmatrix}$, but you observe $x_1=x_1^*+v$.
Then the OLS coefficient on $x_1$ has the following probability limit:
$$\mathbf{plim} \hat |
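A quick way to see the probability-limit formula above in action is a simulation. The following R sketch is my own illustration; the coefficient values, error variances, and the degree of correlation with $x_2$ are arbitrary choices, not from the answer.
# Simulation of attenuation bias with a correlated, correctly measured x2
set.seed(42)
n      <- 1e5
x2     <- rnorm(n)
x1star <- 0.7 * x2 + rnorm(n)            # true regressor, correlated with x2
v      <- rnorm(n)                       # classical measurement error
x1     <- x1star + v                     # observed, error-ridden regressor
y      <- 1 + 2 * x1star + 1 * x2 + rnorm(n)
coef(lm(y ~ x1star + x2))["x1star"]      # ~ 2: no measurement error
coef(lm(y ~ x1 + x2))["x1"]              # noticeably below 2: attenuated
# Plug-in version of the plim formula, using the auxiliary R^2 of x1* on x2
R2 <- summary(lm(x1star ~ x2))$r.squared
2 * (1 - 1 / (1 + (var(x1star) / var(v)) * (1 - R2)))
The last line reproduces the formula numerically; increasing the correlation between $x_1^*$ and $x_2$ (the 0.7 coefficient) makes the attenuation worse, as stated above.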
54,064 | Measurement error in one indep variable in OLS with multiple regression | The solution to this problem is in Wooldridge's "Introductory Econometrics" (Chapter 9, Section "Measurement Error in an Explanatory Variable", p320 in the 2012 version) and in Wooldridge's "Econometric Analysis of Cross Section and Panel Data" (Section 4.4.2, p73 in the 2002 version). Here is the takeaway.
Consider the multiple regression model with a single explanatory variable $x^*_K$ measured with error :
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + ... + \beta_K x^*_K + \nu$$
And with "classical" assumptions, mainly that $\nu$ is uncorrelated to $x^*_K$ and $\nu$ is uncorrelated to $x_K$.
The measurement error is $e_K = x_K - x^*_K$ with $\text{E}(e_K) = 0$. The classical assumption implies that $\nu$ is uncorrelated with $e_K$.
We want to replace $x^*_K$ with $x_K$ and see how this affects OLS estimators, w.r.t. assumptions on the relationship between the measurement error $e_k$ and $x^*_K$ and $x_K$.
The first case, which is not the OP case but I present briefly for the sake of completeness, is when $\text{Cov}(e_K, x_K) = 0$. Here OLS using $x_K$ instead of $x^*_K$ provides consistent estimators even if it inflates the error variance of the estimations (and thus of the estimators).
The case of interest is when $\text{Cov}(e_K, x^*_K) = 0$; this is called the "classical errors-in-variables" assumption in the econometric literature. Here:
$$\text{Cov}(e_K, x_K) = \text{E}(e_Kx_K) = \text{E}(e_Kx^*_K)+ \text{E}(e^2_K) = \sigma^{2}_{e_{K}} $$
and :
$$
\text{plim}(\hat{\beta}_K) = \beta_K \left( \frac{\sigma^{2}_{r^{*}_{K}}}{\sigma^{2}_{r^{*}_{K}}+ \sigma^{2}_{e_{K}}} \right) = \beta_K A_K
$$
where $r^*_K$ is the error in:
$$
x^*_K = \delta_0 + \delta_1 x_1 + \delta_2 x_2 + ... \delta_{K-1} x_{K-1} + r^*_K
$$
$A_K$ is always between 0 and 1 and is called the attenuation factor: it produces the attenuation bias, so if $\beta_K$ is positive (resp. negative), $\hat{\beta}_K$ will tend to underestimate (resp. overestimate) $\beta_K$; in both cases the estimate is biased toward zero.
In the multivariate regression, it is the variance of $x^*_K$ after controlling (netting out) for the effects of the other explanatory variables that affects the attenuation bias. The bias is worse the more collinear $x^*_K$ is with the other variables.
Now consider the case where $K=1$, i.e., the simple regression model in which the only explanatory variable is measured with error. In this case:
$$\text{plim}(\hat{\beta}_1) = \beta_1 \left( \frac{ \sigma^{2}_{x^*_1} }{\sigma^{2}_{x^*_1} + \sigma^{2}_{e_1}} \right)$$
The attenuation term, always between 0 and 1, becomes closer to 1 as $\sigma^{2}_{e_1}$ shrinks relative to $\sigma^{2}_{x^*_1}$. Note that in this special case, $r^*_1 = x^*_1$.
The $\text{plim}(\beta_j)$ for $j \neq K$ is complicated to derive in this framework, except in the case where $x^*_k$ is uncorrelated to other $x_j$, thus $x_k$ is uncorrelated to other $x_j$, which leads to $\text{plim}(\hat{\beta}_j)=\beta_j$. | Measurement error in one indep variable in OLS with multiple regression | The solution to this problem is in Wooldridge's "Introductory Econometrics" (Chapter 9 Section "Measurement Error in an Explanatory Variable", p320 in the 2012 version) and in Wooldridge's "Econometri | Measurement error in one indep variable in OLS with multiple regression
The solution to this problem is in Wooldridge's "Introductory Econometrics" (Chapter 9, Section "Measurement Error in an Explanatory Variable", p320 in the 2012 version) and in Wooldridge's "Econometric Analysis of Cross Section and Panel Data" (Section 4.4.2, p73 in the 2002 version). Here is the takeaway.
Consider the multiple regression model with a single explanatory variable $x^*_K$ measured with error :
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + ... + \beta_K x^*_K + \nu$$
And with "classical" assumptions, mainly that $\nu$ is uncorrelated to $x^*_K$ and $\nu$ is uncorrelated to $x_K$.
The measurement error is $e_K = x_K - x^*_K$ with $\text{E}(e_k) = 0$. The classical assumption implies that $\nu$ is uncorrelated to $e_K$
We want to replace $x^*_K$ with $x_K$ and see how this affects OLS estimators, w.r.t. assumptions on the relationship between the measurement error $e_k$ and $x^*_K$ and $x_K$.
The first case, which is not the OP case but I present briefly for the sake of completeness, is when $\text{Cov}(e_K, x_K) = 0$. Here OLS using $x_K$ instead of $x^*_K$ provides consistent estimators even if it inflates the error variance of the estimations (and thus of the estimators).
The case of interest is when $\text{Cov}(e_K, x^*_K) = 0$; this is called the "classical errors-in-variables" assumption in the econometric literature. Here:
$$\text{Cov}(e_K, x_K) = \text{E}(e_Kx_K) = \text{E}(e_Kx^*_K)+ \text{E}(e^2_K) = \sigma^{2}_{e_{K}} $$
and :
$$
\text{plim}(\hat{\beta}_k) = \beta_K \left( \frac{\sigma^{2}_{r^{*}_{K}}}{\sigma^{2}_{r^{*}_{K}}+ \sigma^{2}_{e_{K}}} \right) = \beta_KA_K
$$
where $r_K$ is error in :
$$
x^*_K = \delta_0 + \delta_1 x_1 + \delta_2 x_2 + ... \delta_{K-1} x_{K-1} + r^*_K
$$
$A_K$ is always between 0 and 1 and is called the attenuation factor: it produces the attenuation bias, so if $\beta_K$ is positive (resp. negative), $\hat{\beta}_K$ will tend to underestimate (resp. overestimate) $\beta_K$; in both cases the estimate is biased toward zero.
In the multivariate regression, it is the variance of $x^*_K$ after controlling (netting out) for the effects of the other explanatory variables that affects the attenuation bias. The bias is worse the more collinear $x^*_K$ is with the other variables.
Now consider the case where $K=1$, i.e., the simple regression model in which the only explanatory variable is measured with error. In this case:
$$\text{plim}(\hat{\beta}_1) = \beta_1 \left( \frac{ \sigma^{2}_{x^*_1} }{\sigma^{2}_{x^*_1} + \sigma^{2}_{e_1}} \right)$$
The attenuation term, always between 0 and 1 becomes closer to 1 as $\sigma^{2}_{e_1}$ shrinks relatively to $\sigma^{2}_{x^*_1}$. Note that in this special case, $r^*_K = x^*_1$.
The $\text{plim}(\beta_j)$ for $j \neq K$ is complicated to derive in this framework, except in the case where $x^*_k$ is uncorrelated to other $x_j$, thus $x_k$ is uncorrelated to other $x_j$, which leads to $\text{plim}(\hat{\beta}_j)=\beta_j$. | Measurement error in one indep variable in OLS with multiple regression
The solution to this problem is in Wooldridge's "Introductory Econometrics" (Chapter 9 Section "Measurement Error in an Explanatory Variable", p320 in the 2012 version) and in Wooldridge's "Econometri |
54,065 | Measurement error in one indep variable in OLS with multiple regression | In the situation that you describe, the true model is like this:
$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$
now, you can observe $y$ and $x_2$ but you cannot observe $x_1$. However you can observe
$z = x_1 + \epsilon$
moreover we assume that $\rho(\epsilon,u)=0$
So, if we consider the simplification where $\beta_2 = 0$, it is possible to show that the OLS estimator for $\beta_1$ behaves like
$\theta_1 = \beta_1 V[x_1]/(V[x_1] + V[\epsilon]) $
Then the absolute value of $\theta_1$, in expectation and/or in the probability limit, is lower than that of $\beta_1$. So $\theta_1$ is biased (and inconsistent) for $\beta_1$. This kind of bias is known as attenuation bias. The more $V[\epsilon]$ increases, the more serious the problem becomes.
Now, for the multivariate case, matrix notation is usually used. In vector form we obtain $E[\theta]\neq \beta$ and/or $\text{plim}\,\theta \neq\beta$.
Note that even if only one variable is endogenous, whether because of measurement error or other problems, all parameters generally become biased. The direction of the bias for any $\theta_i$ depends on the correlations among the variables and on the signs of the relevant moments. Special cases exist; for example, if the variables are all orthogonal the bias does not spread to the other coefficients.
In your case, with two variables ($\beta_1$ and $\beta_2$ both different from $0$), you can estimate a regression like
$y = \theta_0 + \theta_1 z + \theta_2 x_2 + v$
Here $\theta_1$ suffers from attenuation bias (relative to $\beta_1$), and $\theta_2$ is also biased (for $\beta_2$). In the special case where $z$ and $x_2$ are orthogonal, the problem remains for $\theta_1$, but $\theta_2$ becomes unbiased and consistent. | Measurement error in one indep variable in OLS with multiple regression | In the situation that you describe, the true model is like this:
$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$
now, you can observe $y$ and $x_2$ but you cannot observe $x_1$. However you can observe
| Measurement error in one indep variable in OLS with multiple regression
In the situation that you describe, the true model is like this:
$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$
now, you can observe $y$ and $x_2$ but you cannot observe $x_1$. However you can observe
$z = x_1 + \epsilon$
moreover we assume that $\rho(\epsilon,u)=0$
So, if we consider the simplification where $\beta_2 =0$ is possible to show that the OLS estimator for $\beta_1$ is like
$\theta_1 = \beta_1 V[x_1]/(V[x_1] + V[\epsilon]) $
Then the absolute value of $\theta_1$, in expected term and/or plim, is lower than of $\beta_1$. Then $\theta_1$ is biased (incorrect and inconsistent) for $\beta_1$. This kind of bias is known as attenuation bias. More $V[\epsilon]$ increase, more serious the problem become.
Now, for multivariate case the matrix algebra notation is usually used. Then in vector form we achieve $E[\theta]\neq \beta$ and/or plim $\theta \neq\beta$
Note that here you can consider that even if only one variable is endogenous, for measurement error or other problems, all parameters become biased. The direction of bias for any $\theta_i$ depends from correlations among variables and the sign of the first moments. Special cases exist, for example if the variables are all orthogonal the bias does not spread.
In your case, two variables ($\beta_1$ and $\beta_2$ different from $0$), you can estimate a regression like
$y = \theta_0 + \theta_1 z + \theta_2 x_2 + v$
here $\theta_1$ suffer from attenuation bias (on $\beta_1$) but also $\theta_2$ is biased (for $\beta_2$). In special case where $z$ and $x_2$ are orthogonal, for $\theta_1$ the problem remain but $\theta_2$ become correct and consistent. | Measurement error in one indep variable in OLS with multiple regression
In the situation that you describe, the true model is like this:
$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$
now, you can observe $y$ and $x_2$ but you cannot observe $x_1$. However you can observe
|
54,066 | Linear Mixed Effect Model - random intercept and slope? Identifiability problems | First, I would almost always advise against deleting observations for any reason, but in your case I definitely advise against it. By deleting observations you lose statistical power but more importantly you can introduce bias.
Think for a moment about what it means to fit random slopes. It means that you allow the slope for a fixed effect to vary by subject. In other words each subject gets its own slope for that variable. So in the case where a subject has only one observation, what slope could it have? To make sense of fitting a slope you would ideally have at least 2 observations. Mixed Models are robust to small cluster sizes, but when you have a large proportion of singleton clusters it doesn't make sense to fit random slopes. | Linear Mixed Effect Model - random intercept and slope? Identifiability problems | First, I would almost always advise against deleting observations for any reason, but in your case I definitely advise against i | Linear Mixed Effect Model - random intercept and slope? Identifiability problems
First, I would almost always advise against deleting observations for any reason, but in your case I definitely advise against it. By deleting observations you lose statistical power but more importantly you can introduce bias.
Think for a moment about what it means to fit random slopes. It means that you allow the slope for a fixed effect to vary by subject. In other words each subject gets its own slope for that variable. So in the case where a subject has only one observation, what slope could it have? To make sense of fitting a slope you would ideally have at least 2 observations. Mixed Models are robust to small cluster sizes, but when you have a large proportion of singleton clusters it doesn't make sense to fit random slopes. | Linear Mixed Effect Model - random intercept and slope? Identifiability problems
First, I would almost always advise against deleting observations for any reason, but in your case I definitely advise against it. By deleting observations you lose statistical power but more importan |
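As a hedged illustration of the point about singleton clusters, here is an R sketch using lme4 with simulated data. The data-generating values and the 60% singleton share are assumptions of mine, not from the answer.
# Simulated example: many subjects contribute only one observation
library(lme4)
set.seed(7)
n_subj <- 100
n_obs  <- sample(1:3, n_subj, replace = TRUE, prob = c(0.6, 0.2, 0.2))
dat <- do.call(rbind, lapply(seq_len(n_subj), function(i) {
  data.frame(subject = i,
             time    = seq_len(n_obs[i]),
             y       = rnorm(1) + 0.5 * seq_len(n_obs[i]) + rnorm(n_obs[i], sd = 0.3))
}))
dat$subject <- factor(dat$subject)
table(table(dat$subject))                                     # counts of cluster sizes
m_int   <- lmer(y ~ time + (1 | subject), data = dat)         # random intercept only
m_slope <- lmer(y ~ time + (1 + time | subject), data = dat)  # random slope as well
isSingular(m_slope)                                           # frequently TRUE here
With a large share of one-observation subjects, the random-slope fit is often singular or returns a degenerate variance component, whereas the random-intercept model is well behaved.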
54,067 | Overlap-tile strategy in U-Nets | On the "overlap-tile strategy" specifically:
The blue box in Fig 2 (left) shows the input to the network. Because they're using valid convolutions, the output is the smaller yellow box (right). Sounds like you understand this part already.
They're trying to show that the image that they want to predict on is bigger than the input to the network (e.g. perhaps the GPU memory is not big enough to hold the whole thing). So they have to run inference several times using different subsets of the input.
On the right side, imagine shifting the yellow box down so that the two squares are right next to each other (bottom side of original square touches top of shifted square). Do that a bunch of times to "tile" your output space. Now, you need a bigger region of the input (blue) for inference. For non-overlapping yellow boxes (in the output) you will need overlapping blue boxes (for the input).
(If it's still not clear I can try drawing a picture) | Overlap-tile strategy in U-Nets | On the "overlap-tile strategy" specifically:
The blue box in Fig 2 (left) shows the input to the network. Because they're using valid convolutions, the output is the smaller yellow box (right). Soun | Overlap-tile strategy in U-Nets
On the "overlap-tile strategy" specifically:
The blue box in Fig 2 (left) shows the input to the network. Because they're using valid convolutions, the output is the smaller yellow box (right). Sounds like you understand this part already.
They're trying to show that the image that they want to predict on is bigger than the input to the network (e.g. perhaps the GPU memory is not big enough to hold the whole thing). So they have to run inference several times using different subsets of the input.
On the right side, imagine shifting the yellow box down so that the two squares are right next to each other (bottom side of original square touches top of shifted square). Do that a bunch of times to "tile" your output space. Now, you need a bigger region of the input (blue) for inference. For non-overlapping yellow boxes (in the output) you will need overlapping blue boxes (for the input).
(If it's still not clear I can try drawing a picture) | Overlap-tile strategy in U-Nets
On the "overlap-tile strategy" specifically:
The blue box in Fig 2 (left) shows the input to the network. Because they're using valid convolutions, the output is the smaller yellow box (right). Soun |
54,068 | Overlap-tile strategy in U-Nets | Yeah their description is a bit confusing. I agree with your interpretation.
This paper has 15k citations, so if that strategy is effective, it's probably pretty commonly used. Otherwise, my only guess could be that other techniques in the paper were more important. I'm not familiar with biomedical computer vision research though. | Overlap-tile strategy in U-Nets | Yeah their description is a bit confusing. I agree with your interpretation.
This paper has 15k citations, so if that strategy is effective, it's probably pretty commonly used. Otherwise, my only gues | Overlap-tile strategy in U-Nets
Yeah their description is a bit confusing. I agree with your interpretation.
This paper has 15k citations, so if that strategy is effective, it's probably pretty commonly used. Otherwise, my only guess could be that other techniques in the paper were more important. I'm not familiar with biomedical computer vision research though. | Overlap-tile strategy in U-Nets
Yeah their description is a bit confusing. I agree with your interpretation.
This paper has 15k citations, so if that strategy is effective, it's probably pretty commonly used. Otherwise, my only gues |
54,069 | Overlap-tile strategy in U-Nets | Instead of having padded convolutions to maintain the spatial size of the feature maps, they pad the original image by mirroring the borders and forward the pre-padded image through the network
I agree with @bogovicj 's explanation.
What do they mean by "only us[ing] the valid part of each convolution"?
Building on @bogovicj's explanation: suppose the 2nd patch sits directly below the 1st one within the yellow box. The 2nd patch does not need mirror padding for the border between the 1st and 2nd patches; just add the left padding.
Because the upper border of the 2nd patch lies inside the 1st one, the authors do not apply mirror padding there; they just take that part from the 1st region. This is the 'valid part of each convolution'.
Thus, the input map is cut into several patches that belong to the same image, and the borders they share are the 'valid part of each convolution' because they are not produced by mirror padding. | Overlap-tile strategy in U-Nets | Instead of having padded convolutions to maintain the spatial size of the feature maps, they pad the original image by mirroring the borders and forward the pre-padded image through the network
I agr | Overlap-tile strategy in U-Nets
Instead of having padded convolutions to maintain the spatial size of the feature maps, they pad the original image by mirroring the borders and forward the pre-padded image through the network
I agree with @bogovicj 's explanation.
What do they mean by "only us[ing] the valid part of each convolution"?
Based on your understanding of @bogovicj 's explanation, suppose that the 2nd patches under the 1st one within the yellow box, the 2nd one does not need mirror padding for the border between 1st and 2nd ones, just add the left padding.
the upper border of the 2nd patch is inside the 1st one, the author does not make mirror padding for the 2nd patch, and just picks the part from the 1st area. This is the 'valid part of each convolution'.
Thus, the input map is cut into several patches and belongs to the same image, the border they share is the 'valid part of each convolution' because this border is not produced by mirror padding. | Overlap-tile strategy in U-Nets
Instead of having padded convolutions to maintain the spatial size of the feature maps, they pad the original image by mirroring the borders and forward the pre-padded image through the network
I agr |
54,070 | Plotting GAMs on Response Scale with Multiple Smooth and Linear Terms | If the model contains z then the effect of x estimated by the model is that given z is in the model. Hence the fitted response is the additive sum of the two effects, and we can't talk generally about the estimated values of the response for a range of values of x without also stating the value of z.
For Gaussian models, you can just add on the intercept in plot.gam() to shift the smooth curve around on the y-axis. See argument shift to plot.gam(). This assumes, as in the example, that x and z are unrelated in the model, and furthermore assumes some value for z (in this case I think 0, as it is a linear term not subject to identifiability constraints).
A more general solution is just to predict from the model at a grid of values over x while holding z constant at some representative value, say its mean or median.
Here's a full example of doing this by hand:
library("mgcv")
library("ggplot2")
set.seed(1)
df <- gamSim()
m <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = df, method = "REML")
new_data <- with(df, expand.grid(x2 = seq(min(x2), max(x2), length = 200),
x0 = median(x0),
x1 = median(x1),
x3 = median(x3)))
ilink <- family(m)$linkinv
pred <- predict(m, new_data, type = "link", se.fit = TRUE)
pred <- cbind(pred, new_data)
pred <- transform(pred, lwr_ci = ilink(fit - (2 * se.fit)),
upr_ci = ilink(fit + (2 * se.fit)),
fitted = ilink(fit))
ggplot(pred, aes(x = x2, y = fitted)) +
geom_ribbon(aes(ymin = lwr_ci, ymax = upr_ci), alpha = 0.2) +
geom_line()
producing a plot of the fitted curve for x2 with its confidence ribbon.
That script should be fine for any of the standard family options in mgcv, but you'll have to take careful note of what predict() returns for some of the fancier families in mgcv. | Plotting GAMs on Response Scale with Multiple Smooth and Linear Terms | If the model contains z then the effect of x estimated by the model is that given z is in the model. Hence the fitted response is the additive sum of the two effects, and we can't talk generally about | Plotting GAMs on Response Scale with Multiple Smooth and Linear Terms
If the model contains z then the effect of x estimated by the model is that given z is in the model. Hence the fitted response is the additive sum of the two effects, and we can't talk generally about the estimated values of the response for a range of values of x without also stating the value of z.
For Gaussian models, you can just add on the intercept in plot.gam() to shift the smooth curve around on the y-axis. See argument shift to plot.gam(). This assumes, as in the example, that x and z are unrelated in the model, and furthermore assumes some value for z (in this case I think 0, as it is a linear term not subject to identifiability constraints).
A more general solution is just to predict from the model at a grid of values over x while holding z constant at some representative value, say its mean or median.
Here's a full example of doing this by hand:
library("mgcv")
library("ggplot2")
set.seed(1)
df <- gamSim()
m <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = df, method = "REML")
new_data <- with(df, expand.grid(x2 = seq(min(x2), max(x2), length = 200),
x0 = median(x0),
x1 = median(x1),
x3 = median(x3)))
ilink <- family(m)$linkinv
pred <- predict(m, new_data, type = "link", se.fit = TRUE)
pred <- cbind(pred, new_data)
pred <- transform(pred, lwr_ci = ilink(fit - (2 * se.fit)),
upr_ci = ilink(fit + (2 * se.fit)),
fitted = ilink(fit))
ggplot(pred, aes(x = x2, y = fitted)) +
geom_ribbon(aes(ymin = lwr_ci, ymax = upr_ci), alpha = 0.2) +
geom_line()
producing a plot of the fitted curve for x2 with its confidence ribbon.
That script should be fine for any of the standard family options in mgcv, but you'll have to take careful note of what predict() returns for some of the fancier families in mgcv. | Plotting GAMs on Response Scale with Multiple Smooth and Linear Terms
If the model contains z then the effect of x estimated by the model is that given z is in the model. Hence the fitted response is the additive sum of the two effects, and we can't talk generally about |
54,071 | Why does conditioning on a mediator variable makes the ancestor and the child independent? | If we know 𝐹𝑖𝑟𝑒 we know the value of 𝐴𝑙𝑎𝑟𝑚. Conditioning on
𝑆𝑚𝑜𝑘𝑒 doesn't change that at all, so why did it render 𝐴 and 𝐶
or 𝐹𝑖𝑟𝑒 and 𝐴𝑙𝑎𝑟𝑚 independent?
Yes, if we know the value of fire we know the value of alarm. But this is not what conditioning on smoke means. Conditioning on smoke means we know the value of smoke, but do not know the value of fire. More generally, conditioning on an event means that this event is treated as being known to have occurred (with that event's cause being perhaps unknown).
There are many ways there can be smoke. Someone lights a cigarette, someone sautés food on a pan that's too hot, or a fire starts. A fire causes smoke, but it's only one such cause.
If we are conditioning on smoke, then we know there is smoke but do not know the smoke's cause. But the smoke's cause is irrelevant - the alarm will go off independent of whether or not a fire was the cause of the smoke. Hence, we say the alarm going off is conditionally independent of fire given smoke. | Why does conditioning on a mediator variable makes the ancestor and the child independent? | If we know 𝐹𝑖𝑟𝑒 we know the value of 𝐴𝑙𝑎𝑟𝑚. Conditioning on
𝑆𝑚𝑜𝑘𝑒 doesn't change that at all, so why did it render 𝐴 and 𝐶
or 𝐹𝑖𝑟𝑒 and 𝐴𝑙𝑎𝑟𝑚 independent?
Yes, if we know the value of fire we know | Why does conditioning on a mediator variable makes the ancestor and the child independent?
If we know 𝐹𝑖𝑟𝑒 we know the value of 𝐴𝑙𝑎𝑟𝑚. Conditioning on
𝑆𝑚𝑜𝑘𝑒 doesn't change that at all, so why did it render 𝐴 and 𝐶
or 𝐹𝑖𝑟𝑒 and 𝐴𝑙𝑎𝑟𝑚 independent?
Yes, if we know the value of fire we know the value of alarm. But this is not what conditioning on smoke means. Conditioning on smoke means we know the value of smoke, but do not know the value of fire. More generally, conditioning on an event means that this event is treated as being known to have occurred (with that event's cause being perhaps unknown).
There are many ways there can be smoke. Someone lights a cigarette, someone sautés food on a pan that's too hot, or a fire starts. A fire causes smoke, but it's only one such cause.
If we are conditioning on smoke, then we know there is smoke but do not know the smoke's cause. But the smoke's cause is irrelevant - the alarm will go off independent of whether or not a fire was the cause of the smoke. Hence, we say the alarm going off is conditionally independent of fire given smoke. | Why does conditioning on a mediator variable makes the ancestor and the child independent?
If we know 𝐹𝑖𝑟𝑒 we know the value of 𝐴𝑙𝑎𝑟𝑚. Conditioning on
𝑆𝑚𝑜𝑘𝑒 doesn't change that at all, so why did it render 𝐴 and 𝐶
or 𝐹𝑖𝑟𝑒 and 𝐴𝑙𝑎𝑟𝑚 independent?
Yes, if we know the value of fire we know |
54,072 | Why does conditioning on a mediator variable makes the ancestor and the child independent? | Formally, your model implies $P(Alarm|Smoke) = P(Alarm|Smoke, Fire)$.
In short, Fire and Alarm are independent conditional on Smoke.
In words, if you know there is smoke, then you can infer there is an alarm. The additional knowledge of fire does not change your assessment, because there is no other path between fire and alarm.
Note that in your particular dataset, it is also the other way around: $P(Alarm|Fire) = P(Alarm|Smoke, Fire)$. That is, if you know there is a fire, you are certain that there is an alarm, and learning that there is also smoke does not change this. This is not implied by the DAG, and will not generally be the case. For example, imagine that there is a fire, but a strong wind blows the smoke away from the alarm. Then not every fire would lead to an alarm, and learning that there is no smoke in addition to learning that there is a fire will lead you to predict that the alarm does not go off. | Why does conditioning on a mediator variable makes the ancestor and the child independent? | Formally, your model implies $P(Alarm|Smoke) = P(Alarm|Smoke, Fire)$.
In short, Fire and Alarm are independent conditional on Smoke.
In words, if you know there is smoke, then you can infer there is an | Why does conditioning on a mediator variable makes the ancestor and the child independent?
Formally, your model implies $P(Alarm|Smoke) = P(Alarm|Smoke, Fire)$.
In short, Fire and Alarm are independent conditional on Smoke.
In words, if you know there is smoke, then you can infer there is an alarm. The additional knowledge of fire does not change your assessment, because there is no other path between fire and alarm.
Note that in your particular dataset, it is also the other way around: $P(Alarm|Fire) = P(Alarm|Smoke, Fire)$. That is, if you know there is a fire, you are certain that there is an alarm, and learning that there is also smoke does not change this. This is not implied by the DAG, and will not generally be the case. For example, imagine that there is a fire, but a strong wind blows the smoke away from the alarm. Then not every fire would lead to an alarm, and learning that there is no smoke in addition to learning that there is a fire will lead you to predict that the alarm does not go off. | Why does conditioning on a mediator variable makes the ancestor and the child independent?
Formally, your model implies $P(Alarm|Smoke) = P(Alarm|Smoke, Fire)$.
In short, Fire and Alarm are independent conditional on Smoke.
In words, if you know there is smoke, then you can infer there is an |
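Both answers above can be checked with a small simulation. The following R sketch is my own: the probabilities are arbitrary, and I let the alarm be slightly noisy rather than deterministic.
# Simulated fire -> smoke -> alarm chain, with other causes of smoke
set.seed(1)
n     <- 1e5
fire  <- rbinom(n, 1, 0.05)
other <- rbinom(n, 1, 0.10)                     # cigarettes, cooking, ...
smoke <- as.integer(fire == 1 | other == 1)
alarm <- rbinom(n, 1, ifelse(smoke == 1, 0.95, 0.01))
mean(alarm[smoke == 1])                 # P(alarm | smoke)
mean(alarm[smoke == 1 & fire == 1])     # P(alarm | smoke, fire)
mean(alarm[smoke == 1 & fire == 0])     # P(alarm | smoke, no fire)
The three conditional means are essentially equal, which is the statement $P(Alarm|Smoke) = P(Alarm|Smoke, Fire)$ in simulated form.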
54,073 | Is Coarsened Exact Matching superior to other matching methods in case-control studies? | CEM does not allow you to estimate the ATE. This is because the matched units in each treatment group will not resemble the overall sample. If no treated units are unmatched, you can estimate the average treatment effect on the treated (ATT). If any treated units are discarded, the estimand is an average treatment effect, but not for a specific pre-defined population; it's the average treatment effect in the matched sample (ATM).
The best method for estimating the ATM is exact matching. If exact matching is performed on the set of variables sufficient to remove confounding (I'll call these confounders), regardless of the form of the outcome model, the treatment effect will be unbiased, even in finite samples. This is because the samples will be exactly balanced on all confounders and their entire joint distribution. Generally, if there are continuous confounders or many confounders relative to the size of the control pool, exact matching will be impossible.
This phenomenon is known as the curse of dimensionality and is why propensity score matching became popular; rather than exact matching on every confounder, Rosenbaum & Rubin (1983) proved that exact matching on the true propensity score also balanced the joint distribution of confounders in large samples and therefore yields asymptotically unbiased and consistent estimates. A problem with the common implementation of propensity score matching is that it departs from the theoretical results in several ways: it is used in small samples, it uses an imperfect estimate of the propensity score, and it is only approximately matched. King & Nielsen (2019) also demonstrated in their infamous paper that propensity score matching as commonly implemented will fail to extract a randomized block experiment from a confounded sample, instead extracting only a randomized experiment, which is less efficient and therefore more model-dependent. All that said, propensity score matching does tend to work fairly well in practice if done right, as demonstrated by extensive simulation evidence, though there is also much simulation evidence demonstrating how its common uses can perform extremely poorly.
The problem with propensity score matching in finite samples is that when the propensity score is not known, it must be estimated, and the assessment of its correct specification relies on balance checking. The point of propensity score matching is to attain balance anyway, but ideally, propensity score matching yields balance on the joint distribution of all the confounders. Unfortunately, it's hard to assess balance on the joint distribution, though there have been attempts. Instead, we typically assess balance only on the means of each confounder individually. Simulations have shown that this can be an effective strategy, however (Franklin et al., 2014).
The problem is that checking balance only on the means requires assumptions about the form of the outcome model. The whole point of matching is to avoid these assumptions; otherwise, if they were known, you could just model the outcome and your estimate would be far more precise. The presumed logic of balance checking for propensity score matching, then, is that if balance is achieved on the terms one checked balance for, it is also achieved in the joint distribution of confounders, so one doesn't need to make assumptions about the form of the outcome model. If you are skeptical of this logic, then you have to either know the form of the outcome model or know the form of the propensity score model and have very close matches.
CEM aims to avoid these problems by capitalizing on the strength of exact matching without succumbing to the curse of dimensionality. It does this by coarsening continuous variables and combining levels of categorical variables. It's more likely that you can find exact matches on the coarsened confounders than in the original confounders. Another selling point of CEM is that you get to control how balanced the sample is by adjusting the degree of coarsening; with no coarsening, you have exact matching and therefore exact balance on the joint distribution of confounders (if the data supports it), and with extreme coarsening you have individuals matched that are not very similar to each other, and therefore less balance. That's why Iacus et al. (2011) titled their paper "Causal Inference Without Balance Checking: Coarsened Exact Matching."
CEM unfortunately still succumbs to the curse of dimensionality in most samples because unless the coarsening is extreme, it's rare to find exact matches for every treated unit, so many treated units are discarded. In the remaining matched sample, however, approximate balance is achieved on the joint distribution of confounders, so the effect estimate will be approximately unbiased regardless of the form of the outcome model. CEM will be useful in the following scenario:
A large control pool with strong overlap with the treated units
Several continuous confounders
The effect estimate doesn't have to generalize to a target population or assumed to be the same for all units
The outcome model is highly nonlinear in the confounders and depends on their interactions
All of these must be true for CEM to be of value; if they are true, CEM is undoubtedly the best matching method for the reasons described in Iacus et al. (2011). If any of them are false, there is a better method out there. Below I'll discuss some alternatives and their strengths over CEM.
Genetic matching (Diamond & Sekhon, 2013) - recovers randomized block experiments; guarantees balance as the user defines it; doesn't have to discard treated units; in the Matching R package
Cardinality matching (Zubizarreta et al., 2014) - balance constraints can be specified without requiring exact balance on the joint distributions of confounders or their coarsened versions; in the designmatch R package
ATO weighting (Li & Thomas, 2018) - most precise weighted estimate possible, guarantees exact moment balance on each covariate (and many moments can be specified to capture the joint distribution); in the WeightIt R package
BART (Hill, 2011)/TMLE (van der Laan, 2010) - extremely flexible without assumptions on the outcome or treatment model and without discarding any units; in the bartCause and TMLE R packages
In the case you described, where you have many potential confounders to match on, there is FLAME (Wang et al., 2019), available in the FLAME R package, along with its successors.
I'm sorry this was so much, but this is a topic that deserves discussion and consideration. I spend my days thinking about it (actually; it's my line of research). Everything boils down to whether you want to make certain assumptions and how you can manage the bias-variance tradeoff given those assumptions. There is no right answer.
Diamond, A., & Sekhon, J. S. (2013). Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. Review of Economics and Statistics, 95(3), 932–945. https://doi.org/10.1162/REST_a_00318
Franklin, J. M., Rassen, J. A., Ackermann, D., Bartels, D. B., & Schneeweiss, S. (2014). Metrics for covariate balance in cohort studies of causal effects. Statistics in Medicine, 33(10), 1685–1699. https://doi.org/10.1002/sim.6058
Iacus, S. M., King, G., & Porro, G. (2011). Causal Inference without Balance Checking: Coarsened Exact Matching. Political Analysis, mpr013. https://doi.org/10.1093/pan/mpr013
Hill, J. L. (2011). Bayesian Nonparametric Modeling for Causal Inference. Journal of Computational and Graphical Statistics, 20(1), 217–240. https://doi.org/10.1198/jcgs.2010.08162
King, G., & Nielsen, R. (2019). Why Propensity Scores Should Not Be Used for Matching. Political Analysis, 1–20. https://doi.org/10.1017/pan.2019.11
Li, F., & Thomas, L. E. (2018). Addressing Extreme Propensity Scores via the Overlap Weights. American Journal of Epidemiology. https://doi.org/10.1093/aje/kwy201
Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55. https://doi.org/10.1093/biomet/70.1.41
van der Laan, M. J. (2010). Targeted Maximum Likelihood Based Causal Inference: Part I. The International Journal of Biostatistics, 6(2). https://doi.org/10.2202/1557-4679.1211
Wang, T., Morucci, M., Awan, M. U., Liu, Y., Roy, S., Rudin, C., & Volfovsky, A. (2019). FLAME: A Fast Large-scale Almost Matching Exactly Approach to Causal Inference. ArXiv:1707.06315 [Cs, Stat]. http://arxiv.org/abs/1707.06315
Zubizarreta, J. R., Paredes, R. D., & Rosenbaum, P. R. (2014). Matching for balance, pairing for heterogeneity in an observational study of the effectiveness of for-profit and not-for-profit high schools in Chile. The Annals of Applied Statistics, 8(1), 204–231. https://doi.org/10.1214/13-AOAS713 | Is Coarsened Exact Matching superior to other matching methods in case-control studies? | CEM does not allow you to estimate the ATE. This is because the matched units in each treatment group will not resemble the overall sample. If no treated units are unmatched, you can estimate the aver | Is Coarsened Exact Matching superior to other matching methods in case-control studies?
CEM does not allow you to estimate the ATE. This is because the matched units in each treatment group will not resemble the overall sample. If no treated units are unmatched, you can estimate the average treatment effect on the treated (ATT). If any treated units are discarded, the estimand is an average treatment effect, but not for a specific pre-defined population; it's the average treatment effect in the matched sample (ATM).
The best method for estimating the ATM is exact matching. If exact matching is performed on the set of variables sufficient to remove confounding (I'll call these confounders), regardless of the form of the outcome model, the treatment effect will be unbiased, even in finite samples. This is because the samples will be exactly balanced on all confounders and their entire joint distribution. Generally, if there are continuous confounders or many confounders relative to the size of the control pool, exact matching will be impossible.
This phenomenon is known as the curse of dimensionality and is why propensity score matching became popular; rather than exact matching on every confounder, Rosenbaum & Rubin (1983) proved that exact matching on the true propensity score also balanced the joint distribution of confounders in large samples and therefore yields asymptotically unbiased and consistent estimates. A problem with the common implementation of propensity score matching is that it departs from the theoretical results in several ways: it is used in small samples, it uses an imperfect estimate of the propensity score, and it is only approximately matched. King & Nielsen (2019) also demonstrated in their infamous paper that propensity score matching as commonly implemented will fail to extract a randomized block experiment from a confounded sample, instead extracting only a randomized experiment, which is less efficient and therefore more model-dependent. All that said, propensity score matching does tend to work fairly well in practice if done right, as demonstrated by extensive simulation evidence, though there is also much simulation evidence demonstrating how its common uses can perform extremely poorly.
The problem with propensity score matching in finite samples is that when the propensity score is not known, it must be estimated, and the assessment of its correct specification relies on balance checking. The point of propensity score matching is to attain balance anyway, but ideally, propensity score matching yields balance on the joint distribution of all the confounders. Unfortunately, it's hard to assess balance on the joint distribution, though there have been attempts. Instead, we typically assess balance only on the means of each confounder individually. Simulations have shown that this can be an effective strategy, however (Franklin et al., 2014).
The problem is that it requires assumptions about the form of the outcome model. The whole point of matching is to avoid these assumptions; otherwise, if they were known, you could just model the outcome and your estimate would be far more precise. The presumed logic of balance checking for propensity score matching, then, is that if balance is achieved on the terms one checked balance for, it is also achieved in the joint distribution of confounders, so one doesn't need to make assumptions about the form of the outcome model. If you are skeptical of this logic, then you have to either know the form of the outcome model or know the form of the propensity score model and have very close matches.
CEM aims to avoid these problems by capitalizing on the strength of exact matching without succumbing to the curse of dimensionality. It does this by coarsening continuous variables and combining levels of categorical variables. It's more likely that you can find exact matches on the coarsened confounders than in the original confounders. Another selling point of CEM is that you get to control how balanced the sample is by adjusting the degree of coarsening; with no coarsening, you have exact matching and therefore exact balance on the joint distribution of confounders (if the data supports it), and with extreme coarsening you have individuals matched that are not very similar to each other, and therefore less balance. That's why Iacus et al. (2011) titled their paper "Causal Inference Without Balance Checking: Coarsened Exact Matching."
CEM unfortunately still succumbs to the curse of dimensionality in most samples because unless the coarsening is extreme, it's rare to find exact matches for every treated unit, so many treated units are discarded. In the remaining matched sample, however, approximate balance is achieved on the joint distribution of confounders, so the effect estimate will be approximately unbiased regardless of the form of the outcome model. CEM will be useful in the following scenario:
A large control pool with strong overlap with the treated units
Several continuous confounders
The effect estimate doesn't have to generalize to a target population or assumed to be the same for all units
The outcome model is highly nonlinear in the confounders and depends on their interactions
All of these must be true for CEM to be of value; if they are true, CEM is undoubtedly the best matching method for the reasons described in Iacus et al. (2011). If any of them are false, there is a better method out there. Below I'll discuss some alternatives and their strengths over CEM.
Genetic matching (Diamond & Sekhon, 2013) - recovers randomized block experiments; guarantees balance as the user defines it; doesn't have to discard treated units; in the Matching R package
Cardinality matching (Zubizarreta et al., 2014) - balance constraints can be specified without requiring exact balance on the joint distributions of confounders or their coarsened versions; in the designmatch R package
ATO weighting (Li & Thomas, 2018) - most precise weighted estimate possible, guarantees exact moment balance on each covariate (and many moments can be specified to capture the joint distribution); in the WeightIt R package
BART (Hill, 2011)/TMLE (van der Laan, 2010) - extremely flexible without assumptions on the outcome or treatment model and without discarding any units; in the bartCause and TMLE R packages
In the case you described, where you have many potential confounders to match on, there is FLAME (Wang et al., 2019), available in FLAME, and its successors.
I'm sorry this was so much, but this is a topic that deserves discussion and consideration. I spend my days thinking about it (actually; it's my line of research). Everything boils down to whether you want to make certain assumptions and how you can manage the bias-variance tradeoff given those assumptions. There is no right answer.
Diamond, A., & Sekhon, J. S. (2013). Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. Review of Economics and Statistics, 95(3), 932–945. https://doi.org/10.1162/REST_a_00318
Franklin, J. M., Rassen, J. A., Ackermann, D., Bartels, D. B., & Schneeweiss, S. (2014). Metrics for covariate balance in cohort studies of causal effects. Statistics in Medicine, 33(10), 1685–1699. https://doi.org/10.1002/sim.6058
Iacus, S. M., King, G., & Porro, G. (2011). Causal Inference without Balance Checking: Coarsened Exact Matching. Political Analysis, mpr013. https://doi.org/10.1093/pan/mpr013
Hill, J. L. (2011). Bayesian Nonparametric Modeling for Causal Inference. Journal of Computational and Graphical Statistics, 20(1), 217–240. https://doi.org/10.1198/jcgs.2010.08162
King, G., & Nielsen, R. (2019). Why Propensity Scores Should Not Be Used for Matching. Political Analysis, 1–20. https://doi.org/10.1017/pan.2019.11
Li, F., & Thomas, L. E. (2018). Addressing Extreme Propensity Scores via the Overlap Weights. American Journal of Epidemiology. https://doi.org/10.1093/aje/kwy201
Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41–55. https://doi.org/10.1093/biomet/70.1.41
van der Laan, M. J. (2010). Targeted Maximum Likelihood Based Causal Inference: Part I. The International Journal of Biostatistics, 6(2). https://doi.org/10.2202/1557-4679.1211
Wang, T., Morucci, M., Awan, M. U., Liu, Y., Roy, S., Rudin, C., & Volfovsky, A. (2019). FLAME: A Fast Large-scale Almost Matching Exactly Approach to Causal Inference. ArXiv:1707.06315 [Cs, Stat]. http://arxiv.org/abs/1707.06315
Zubizarreta, J. R., Paredes, R. D., & Rosenbaum, P. R. (2014). Matching for balance, pairing for heterogeneity in an observational study of the effectiveness of for-profit and not-for-profit high schools in Chile. The Annals of Applied Statistics, 8(1), 204–231. https://doi.org/10.1214/13-AOAS713 | Is Coarsened Exact Matching superior to other matching methods in case-control studies?
CEM does not allow you to estimate the ATE. This is because the matched units in each treatment group will not resemble the overall sample. If no treated units are unmatched, you can estimate the aver |
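For readers who want to try CEM in practice, here is a minimal, hedged R sketch using the MatchIt package and its bundled lalonde data; the covariate list is only an example and the original answer does not prescribe this code.
# Coarsened exact matching with MatchIt on the bundled lalonde data
library(MatchIt)
data("lalonde", package = "MatchIt")
m_cem <- matchit(treat ~ age + educ + married + nodegree + re74 + re75,
                 data = lalonde, method = "cem")
summary(m_cem)             # balance and number of matched/discarded units
md <- match.data(m_cem)    # matched data with weights; the estimand is the ATT/ATM
The degree of coarsening (for example via the cutpoints argument of matchit()) controls the balance versus sample-size trade-off discussed above.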
54,074 | How to deal with incorrect labels in classification? | If you have wrong data and no way to get the true labels then there is nothing "correct" that you can do to obtain this information.
You could treat this as an unsupervised problem first (or semi-supervised), by using say clustering with 2 clusters (since you know there are only 2 labels) to get a model to predict labels, and then following with classification. Note that such results may be overly optimistic. | How to deal with incorrect labels in classification? | If you have wrong data and no way to get the true labels then there is nothing "correct" that you can do to obtain this information.
You could treat this as an unsupervised problem first (or semi-supe | How to deal with incorrect labels in classification?
If you have wrong data and no way to get the true labels then there is nothing "correct" that you can do to obtain this information.
You could treat this as an unsupervised problem first (or semi-supervised), by using say clustering with 2 clusters (since you know there are only 2 labels) to get a model to predict labels, and then following with classification. Note that such results may be overly optimistic. | How to deal with incorrect labels in classification?
If you have wrong data and no way to get the true labels then there is nothing "correct" that you can do to obtain this information.
You could treat this as an unsupervised problem first (or semi-supe |
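A minimal R sketch of the cluster-first idea described above; this is my own toy example, and it assumes numeric features and that the two clusters roughly line up with the two labels.
# Toy example: flag suspect labels by comparing them with a 2-cluster solution
set.seed(2)
X <- rbind(matrix(rnorm(200, mean = 0), ncol = 2),
           matrix(rnorm(200, mean = 3), ncol = 2))
given_label <- rep(c("A", "B"), each = 100)
given_label[sample(1:100, 15)] <- "B"                # some true A's mislabelled as B
km <- kmeans(X, centers = 2, nstart = 20)
table(cluster = km$cluster, label = given_label)     # disagreements point to suspect labels
Rows whose given label disagrees with their cluster are natural candidates for relabelling or removal before the supervised step.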
54,075 | How to deal with incorrect labels in classification? | Under mild assumptions on the noise mechanism and data distribution (e.g. less than $\frac{1}{2}$ of the data is incorrectly labelled), some classifiers can be shown to be consistent in the binary classification setting. A classifier $C_n$, depending on the training data, is said to be consistent if
$$R(C_n) \to R(C^{Bayes}) \;\; \text{as} \;\; n \to \infty$$
where a classifier's risk, $R(C) := \mathbb{P}(C(X) \neq Y)$, is minimised by the Bayes classifier
$$
C^{Bayes}(x) :=
\begin{cases}
1,& \text{if } \eta(x) \geq 1/2\\
0,& \text{otherwise}
\end{cases}$$
where $\eta(x) := \mathbb{P}(Y = 1 \mid X = x)$.
K-nearest-neighbours and Support Vector Machines can be shown to satisfy this condition, while Linear Discriminant Analysis does not. Since this limit is only guaranteed as $n \to \infty$, it doesn't tell you how much data you will need in your case; however, simulation studies in the paper referenced below may help give you an intuition.
Reference
Cannings, T. I., Fan, Y. and Samworth, R. J. (2018) Classification with
imperfect training labels. https://arxiv.org/abs/1805.11505. | How to deal with incorrect labels in classification? | Under mild assumptions on the noise mechanism and data distribution (e.g. less than $\frac{1}{2}$ of the data is incorrectly labelled), some classifiers can be shown to be consistent in the binary cla | How to deal with incorrect labels in classification?
Under mild assumptions on the noise mechanism and data distribution (e.g. less than $\frac{1}{2}$ of the data is incorrectly labelled), some classifiers can be shown to be consistent in the binary classification setting. A classifier $C_n$, depending on the training data, is said to be consistent if
$$R(C_n) → R(C_{Bayes}) \;\; as \;\; n → ∞$$
where a classifiers risk, $R(C)$ := is minimised by the Bayes classifier
$$
C^{Bayes}(x) :=
\begin{cases}
1,& \text{if } η(x) ≥ 1/2\\
0,& \text{otherwise}
\end{cases}$$
K-nearest-neighbours and Support Vector Machines can be shown to satisfy this condition while Linear Discriminant Analysis does not. Since this limit is guaranteed as $n → ∞$, this doesn't answer how much data you will need in your case, however simulation studies are done in the paper I reference below which may help give you an intuition.
Reference
Cannings, T. I., Fan, Y. and Samworth, R. J. (2018) Classification with
imperfect training labels. https://arxiv.org/abs/1805.11505. | How to deal with incorrect labels in classification?
Under mild assumptions on the noise mechanism and data distribution (e.g. less than $\frac{1}{2}$ of the data is incorrectly labelled), some classifiers can be shown to be consistent in the binary cla |
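The consistency result above is asymptotic, but a toy simulation gives some feel for it. This R sketch is my own; the 20% flip rate, the sample size, and the choice of k are arbitrary.
# Toy check: kNN trained on clean vs. 20%-flipped labels
library(class)
set.seed(4)
n <- 2000
x <- matrix(rnorm(2 * n), ncol = 2)
y <- as.integer(x[, 1] + x[, 2] + rnorm(n, sd = 0.5) > 0)     # clean labels
y_noisy <- ifelse(rbinom(n, 1, 0.2) == 1, 1 - y, y)           # 20% flipped at random
train <- 1:1000; test <- 1001:2000
pred_clean <- knn(x[train, ], x[test, ], factor(y[train]),       k = 15)
pred_noisy <- knn(x[train, ], x[test, ], factor(y_noisy[train]), k = 15)
mean(as.integer(as.character(pred_clean)) == y[test])   # accuracy with clean labels
mean(as.integer(as.character(pred_noisy)) == y[test])   # usually only slightly lower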
54,076 | How to deal with incorrect labels in classification? | In the case of wrong data the best practice, in my experience, is to get rid of it. Unlike conventional programming, where you build the algorithm and apply it to the data, in machine learning the algorithm comes from the data itself, so if you feed it wrong data it will disrupt your algorithm and you will get poor performance.
The data you use in any machine learning algorithm should be as clean and as concise as possible to yield good results. | How to deal with incorrect labels in classification? | In case of wrong data the best practice, in my experience, is to get rid of it. See, unlike conventional programming where you build the algorithm and apply it to the data, in machine learning, the al | How to deal with incorrect labels in classification?
In case of wrong data the best practice, in my experience, is to get rid of it. See, unlike conventional programming where you build the algorithm and apply it to the data, in machine learning, the algorithm comes from the data itself, so if you put a wrong data it will disrupt your algorithm and you will get poor performance.
The data you use in any machine learning algorithm should be as clean and as concise as possible to yield good results. | How to deal with incorrect labels in classification?
In case of wrong data the best practice, in my experience, is to get rid of it. See, unlike conventional programming where you build the algorithm and apply it to the data, in machine learning, the al |
54,077 | How to deal with incorrect labels in classification? | You have a bunch of known As (if I read correctly) and some other cases that may or may not be As. So you want to find the most similar cases from the unknown set. Sounds relatively straightforward. If As are really different, then you'll get a nice break in the similarity function. | How to deal with incorrect labels in classification? | You have a bunch of known As (if I read correctly) and some other cases that may or may not be As. So you want to find the most similar cases from the unknown set. Sounds relatively straightforward. I | How to deal with incorrect labels in classification?
You have a bunch of known As (if I read correctly) and some other cases that may or may not be As. So you want to find the most similar cases from the unknown set. Sounds relatively straightforward. If As are really different, then you'll get a nice break in the similarity function. | How to deal with incorrect labels in classification?
You have a bunch of known As (if I read correctly) and some other cases that may or may not be As. So you want to find the most similar cases from the unknown set. Sounds relatively straightforward. I |
54,078 | How to deal with incorrect labels in classification? | I'm a little late to this question, but for future readers: Try giving higher sample weights to data with class A. That way your algorithm will have a higher penalty for misclassifying A than for misclassifying B.
If your algorithm doesn't support sample weights you could try oversampling your data from class A.
There is a danger of overfitting with this method, so make sure to regularize and cross validate. | How to deal with incorrect labels in classification? | I'm a little late to this question, but for future readers: Try giving higher sample weights to data with class A. That way your algorithm will have a higher penalty for misclassifying A than for misc | How to deal with incorrect labels in classification?
I'm a little late to this question, but for future readers: Try giving higher sample weights to data with class A. That way your algorithm will have a higher penalty for misclassifying A than for misclassifying B.
If your algorithm doesn't support sample weights you could try oversampling your data from class A.
There is a danger of overfitting with this method, so make sure to regularize and cross validate. | How to deal with incorrect labels in classification?
I'm a little late to this question, but for future readers: Try giving higher sample weights to data with class A. That way your algorithm will have a higher penalty for misclassifying A than for misc |
54,079 | How to deal with incorrect labels in classification? | Learning with noisy label is an active research area. Serval methods are mentioned here: https://youtu.be/8mpBHbjG4E4. | How to deal with incorrect labels in classification? | Learning with noisy label is an active research area. Serval methods are mentioned here: https://youtu.be/8mpBHbjG4E4. | How to deal with incorrect labels in classification?
Learning with noisy label is an active research area. Serval methods are mentioned here: https://youtu.be/8mpBHbjG4E4. | How to deal with incorrect labels in classification?
Learning with noisy label is an active research area. Serval methods are mentioned here: https://youtu.be/8mpBHbjG4E4. |
54,080 | Lasso, Ridge and Best Subset estimator for orthogonal cases | From the context, I'm assuming that the $\beta_j's$ are the regular least squares estimates, and the table is showing how they would be transformed under each of the listed methods.
Best Subset:
Because the columns are orthonormal, the least squares coefficients are simply $\hat{\beta}_j = x_j^{T}y$. (Orthogonality implies that they're given by $\hat{\beta}_j = \frac{x_j^{T}y}{x_j^{T}x_j}$, but since we have orthonormal columns, $x_j^{T}x_j = 1$.)
Then by definition of best subset, we're looking for the $M$ predictors that give the smallest residual sum of squares. This is equivalent to finding the $M$ largest (in absolute value) coefficients. This might already be intuitive, but if not, note that the residual sum of squares from regressing $y$ on $x_j$ is given by:
$r_j = (y - x_j\hat{\beta}_j)^T(y - x_j\hat{\beta}_j)$
$= y^Ty - 2\hat{\beta}_jx_j^Ty + \hat{\beta}_j^2$
$= y^Ty - 2(x_j^Ty)^2 + (x_j^Ty)^2$ (applying the solution $\hat{\beta}_j = x_j^{T}y$)
$= y^Ty-(x_j^Ty)^2$
$= y^Ty - |\hat{\beta}_j|^2,$
which is clearly minimized by making $|\hat{\beta}_j|$ as large as possible.
It follows then that the solution for best subset with $M$ predictors is to regress $y$ on each $x_j$, order the coefficients by size in absolute value, and then choose the $M$ largest of them, which is what is given by the solution in the table.
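A quick numerical check of this claim (a minimal sketch; the sample size, the true coefficients and the subset size $M$ are arbitrary choices, and lm.fit is used without an intercept to match the derivation):
set.seed(1)
n <- 50; p <- 4; M <- 2
X <- qr.Q(qr(matrix(rnorm(n * p), n, p)))      # a design matrix with orthonormal columns
y <- X %*% c(3, 0.5, -2, 0.1) + rnorm(n)
beta_ols <- drop(crossprod(X, y))              # OLS coefficients, since X'X = I
rss <- combn(p, M, function(s) sum(lm.fit(X[, s, drop = FALSE], y)$residuals^2))
subset_best <- sort(combn(p, M)[, which.min(rss)])                  # exhaustive best subset of size M
subset_ols  <- sort(order(abs(beta_ols), decreasing = TRUE)[1:M])   # M largest |coefficients|
rbind(subset_best, subset_ols)                 # the two selections coincide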
Lasso:
The lasso coefficient for regressing $y$ on $x_j$ is the $\hat{\beta}$ that minimizes $\frac{1}{2}(y - x_j\hat{\beta})^T(y - x_j\hat{\beta}) + \lambda|\hat{\beta}|$. Now assume that $\hat{\beta} \neq 0$. Taking the derivative of that expression with respect to $\hat{\beta}$ and setting it equal to 0 gives
$-x_j^T(y - x_j\hat{\beta}) + sign(\hat{\beta})\lambda = 0$, where we need the sign operator because the derivative of $|\hat{\beta}|$ is $1$ if $\hat{\beta}$ > 0 and $-1$ otherwise.
Simplifying the expression above gives
$-x_j^Ty + x_j^Tx_j\hat{\beta} + sign(\hat{\beta})\lambda = 0$
$\implies \hat{\beta} = x_j^Ty - sign(\hat{\beta})\lambda$ (where we used the fact that $x_j^Tx_j = 1$, since the columns are orthonormal).
$\implies \hat{\beta} = \hat{\beta_j} - sign(\hat{\beta})\lambda$ (recall the definition of $\hat{\beta_j}$, the least squares solution).
Now we consider cases for the sign of $\hat{\beta}$:
If $sign(\hat{\beta}) > 0$, then we must have $\hat{\beta_j} - \lambda > 0$, which means $\hat{\beta_j} > \lambda$ (and therefore $\hat{\beta_j} > 0)$.
Note that if this is the case, then the lasso estimate is given by $\hat{\beta} = \hat{\beta_j} - \lambda = |\hat{\beta_j}| - \lambda = sign(\hat{\beta_j})(|\hat{\beta_j}| - \lambda)$
If $sign(\hat{\beta}) < 0$, then we must have $\hat{\beta_j} + \lambda < 0$, which means $-\hat{\beta_j} >\lambda$ (and therefore $\hat{\beta_j} < 0)$.
Note that if this is the case, then the lasso estimate is given by $\hat{\beta} = \hat{\beta_j} + \lambda = -|\hat{\beta_j}| + \lambda = sign(\hat{\beta_j})(|\hat{\beta_j}| - \lambda)$
In each of these cases we required that $|\hat{\beta_j}| > \lambda$. If that does not hold, our initial assumption that $\hat{\beta} \neq 0$ must have been wrong, so $\hat{\beta} = 0$; that is why we take only the positive part $(|\hat{\beta_j}| - \lambda)_+$ in each of the solutions.
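As a quick numerical check of this soft-thresholding result (a minimal sketch; because the columns are orthonormal the problem separates into one-dimensional pieces, and the values of $\hat{\beta_j}$ and $\lambda$ used here are arbitrary):
soft <- function(b_ols, lambda) sign(b_ols) * pmax(abs(b_ols) - lambda, 0)
lasso_1d <- function(b_ols, lambda) {
  # for an orthonormal column the lasso objective reduces, up to a constant,
  # to (1/2) * (b_ols - b)^2 + lambda * |b|; minimise it numerically
  optimize(function(b) 0.5 * (b_ols - b)^2 + lambda * abs(b), c(-10, 10))$minimum
}
rbind(numerical = sapply(c(2.3, -1.2, 0.4), lasso_1d, lambda = 0.7),
      formula   = soft(c(2.3, -1.2, 0.4), 0.7))
# both rows are (approximately) 1.6, -0.5 and 0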
Therefore, you get the solution in the table. | Lasso, Ridge and Best Subset estimator for orthogonal cases | From the context, I'm assuming that the $\beta_j's$ are the regular least squares estimates, and the table is showing how they would be transformed under each of the listed methods.
Best Subset:
Becau | Lasso, Ridge and Best Subset estimator for orthogonal cases
From the context, I'm assuming that the $\beta_j's$ are the regular least squares estimates, and the table is showing how they would be transformed under each of the listed methods.
Best Subset:
Because the columns are orthonormal, the least squares coefficients are simply $\hat{B_j} = {x_j^{T}y}$. (Orthogonality implies that they're given by $\hat{B_j} = {\frac{x_j^{T}y}{x_j^{T}x_j}}$, but since we have orthonormal columns, ${x_j^{T}x_j}$ = 1.)
Then by definition of best subset, we're looking for the $M$ predictors that gives the smalles residual sum of squares. This is equivalent to finding the $M$ largest (in absolute value) coefficients. This might already be intuitive, but if not, note that the residual sum of squares from regressing ${y}$ on ${x_j}$ is given by:
$r_j = (y - x_j\hat{\beta_j})^T(y - x_j\hat{\beta_j})$
$= y^Ty - 2\hat{\beta_j}x_j^Ty + \hat{{\beta_j}}^2$
$= y^Ty - 2(x_j^Ty)^2 + (x_j^Ty)^2$ (applying the solution of $\hat{B_j} = {x_j^{T}y}$)
$= y^Ty-(x_j^Ty)^2$
$= y^Ty - {|\hat{B_j}|}^2$
Which is clearly minimized by having $|\hat{B_j}|$ as large as possible.
It follows then that the solution for best subset with $M$ predictors is to regress $y$ on each $x_j$, order the coefficients by size in absolute value, and then choose the $M$ largest of them, which is what is given by the solution in the table.
Lasso:
The lasso coefficient for regressing $y$ on $x_j$ is finding the $\hat{\beta}$ that minimizes $\frac{1}{2}(y - x_j\hat{\beta})^T(y - x_j\hat{\beta}) + \lambda|\hat{\beta}|$. Now assume that $\hat{\beta} \neq 0$. Taking the derivative of that expression with respect to $\hat{\beta}$ and setting equal to 0 gives
$-x_j^T(y - x_j\hat{\beta}) + sign(\hat{\beta})\lambda = 0$, where we need the sign operator because the derivative of $|\hat{\beta}|$ is $1$ if $\hat{\beta}$ > 0 and $-1$ otherwise.
Simplifying the expression above gives
$-x_j^Ty + x_j^Tx_j\hat{\beta} + sign(\hat{\beta})\lambda = 0$
$\implies \hat{\beta} = x_j^Ty - sign(\hat{\beta})\lambda$ (where we used the fact that $x_j^Tx_j = 1$, since the columns are orthonormal.
$\implies \hat{\beta} = \hat{\beta_j} - sign(\hat{\beta})\lambda$ (recall the definition of $\hat{\beta_j}$, the least squares solution).
Now we consider cases for the sign of $\hat{\beta}$:
If $sign(\hat{\beta}) > 0$, then we must have $\hat{\beta_j} - \lambda > 0$, which means $\hat{\beta_j} > \lambda$ (and therefore $\hat{\beta_j} > 0)$.
Note that if this is the case, then the lasso estimate is given by $\hat{\beta} = \hat{\beta_j} - \lambda = \hat{\beta_j} - \lambda = sign(\hat{\beta_j})(|\hat{\beta_j}| - \lambda)$
If $sign(\hat{\beta}) < 0$, then we must have $\hat{\beta_j} + \lambda < 0$, which means $-\hat{\beta_j} >\lambda$ (and therefore $\hat{\beta_j} < 0)$.
Note that if this is the case, then the lasso estimate is given by $\hat{\beta} = \hat{\beta_j} + \lambda = -|\hat{\beta_j}| + \lambda = sign(\hat{\beta_j})(|\hat{\beta_j}| - \lambda)$
In each of these, we required that $|\hat{\beta_j}| > \lambda$. If that was wrong, or our initial assumption that $\hat{\beta} \neq 0$ must have been wrong and we have $\hat{\beta} = 0$, which means we can say that we only take the positive part ($(|\hat{\beta_j}| - \lambda)_+$) in each of the solutions since otherwise $\hat{\beta} = 0$.
Therefore, you get the solution in the table. | Lasso, Ridge and Best Subset estimator for orthogonal cases
From the context, I'm assuming that the $\beta_j's$ are the regular least squares estimates, and the table is showing how they would be transformed under each of the listed methods.
Best Subset:
Becau |
54,081 | Inverse moment of Multivariate Normal Norm | You have to use the so-called law of the unconscious statistician to write the integral you need to compute. In this case, we are asked to calculate the first and second moments of $1/|x|$, the reciprocal of the norm.
\begin{align} \mathbb{E}\left[ \frac{1}{ |x|^n } \right] =& \int d x^p \frac{1}{ \sqrt{(2 \pi)^p \det(\Sigma) } } \frac{1}{|x|^n} \\ & \times \exp \left( - \frac{1}{2} (x - \mu)^T \Sigma^{-1} (x- \mu) \right) \end{align}
We can study the behavior of the integrand around $r=0$ using polar coordinates. The integrals go like
$$ \mathbb{E}\left[ \frac{1}{ |x|^n } \right] \sim \int r^{p-1-n} dr \ldots $$
where dots represent convergent pieces and $r = |x|$. The integrals diverge for $p-1-n<0$, so for $n=1$ it is divergent for $p<2$ and for $n=2$ it is divergent for $p <3$. In these cases where the moment diverges it is said not to exist. See also here for the univariate normal distribution discussion. | Inverse moment of Multivariate Normal Norm | You have to use the so-called law of the unconscious statistician to write the integral you need to compute. In this case, we are asked to calculate the first and second moments of the reciprocal di | Inverse moment of Multivariate Normal Norm
You have to use the so-called law of the unconscious statistician to write the integral you need to compute. In this case, we are asked to calculate the first and second moments of the reciprocal distribution 1/x.
\begin{align} \mathbb{E}\left[ \frac{1}{ |x|^n } \right] =& \int d x^p \frac{1}{ \sqrt{(2 \pi)^p \det(\Sigma) } } \frac{1}{|x|^n} \\ & \times \exp \left( - \frac{1}{2} (x - \mu)^T \Sigma^{-1} (x- \mu) \right) \end{align}
We can study the behavior of the integrand around $r=0$ using polar coordinates. The integrals go like
$$ \mathbb{E}\left[ \frac{1}{ |x|^n } \right] \sim \int r^{p-1-n} dr \ldots $$
where dots represent convergent pieces and $r = |x|$. The integrals diverge for $p-1-n<0$, so for $n=1$ it is divergent for $p<2$ and for $n=2$ it is divergent for $p <3$. In these cases where the moment diverges it is said not to exist. See also here for the univariate normal distribution discussion. | Inverse moment of Multivariate Normal Norm
You have to use the so-called law of the unconscious statistician to write the integral you need to compute. In this case, we are asked to calculate the first and second moments of the reciprocal di |
54,082 | Inverse moment of Multivariate Normal Norm | For the case where $\Sigma = \mathbf{I}\sigma$, we have the following formulas:
\begin{equation}
\mathbb{E}\left( \frac{1}{||x||} \right) = \frac{1}{\sqrt{2}} {}_1F_1 \left(\frac{1}{2}, \frac{P}{2}, -\frac{||\frac{\mu}{\sigma}||^2}{2}\right) \frac{\Gamma\left(\frac{P-1}{2}\right)}{\Gamma\left(\frac{P}{2}\right)}
\end{equation}
and
\begin{equation}
\mathbb{E}\left( \frac{1}{||x||^2} \right) = \frac{1}{2} {}_1F_1 \left(1, \frac{P}{2}, -\frac{||\frac{\mu}{\sigma}||^2}{2}\right) \frac{\Gamma\left(\frac{P-2}{2}\right)}{\Gamma\left(\frac{P}{2}\right)}
\end{equation}
where ${}_1F_1 \left(a,b,c\right)$ is the hypergeometric confluent function, and $||\frac{\mu}{\sigma}||^2$ is the squared norm of the distribution mean divided by the isotropic noise standard deviation $\sigma$ above.
The first expression I got from the sympy Python package (symbolic computation), with the following code
from sympy.stats import ChiNoncentral, density, E
from sympy import Symbol, simplify
# Define some symbols to use
k = Symbol("k", integer=True)
l = Symbol("l")
z = Symbol("z")
# Define the chi noncentral distribution
X = ChiNoncentral("x", k, l)
# Get the analytic expression for the first moment of the distribution
analyticExp = simplify(E(1/X))
Note, in simplifying the first expression from the code above, I used the Kummer relation.
These expressions, and other related ones, are derived in "Intermediate Probability: A Computational Approach", M. Paolella, section 10.1.2. In that section, an expression for the moments of the non-central Chi-square variable is given. You get the expressions above by applying that formula to the moments of order -1 and -2 of the non-central Chi-square.
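As a sanity check of the isotropic case, here is a minimal Monte Carlo sketch ($P$, $\mu$ and $\sigma$ are arbitrary example values). Rather than evaluating ${}_1F_1$ directly, it uses the fact that $\|x\|^2/\sigma^2$ follows a noncentral chi-square with $P$ degrees of freedom and noncentrality $\|\mu/\sigma\|^2$, and compares a simulation of $\mathbb{E}(1/\|x\|)$ with a one-dimensional numerical integral:
set.seed(1)
P <- 5; sigma <- 2; mu <- c(1, -1, 0.5, 2, 0)
n <- 1e5
X <- matrix(rnorm(n * P, mean = rep(mu, each = n), sd = sigma), ncol = P)
mc <- mean(1 / sqrt(rowSums(X^2)))            # Monte Carlo estimate of E(1/||x||)
ncp <- sum((mu / sigma)^2)                    # noncentrality of ||x||^2 / sigma^2
f <- function(q) ifelse(q > 0, dchisq(q, df = P, ncp = ncp) / sqrt(q), 0)
num <- integrate(f, 0, Inf)$value / sigma     # E(1/||x||) by numerical integration
c(mc, num)                                    # the two values should agree closely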
This can be generalized to some other $\Sigma$'s. The Mathai and Provost book "Quadratic Forms in Random Variables" specifies in Theorem 5.1.3 the specific $\Sigma$'s under which $X^T X$, with $X \sim \mathcal{N}(\mu, \Sigma)$, is distributed as a non-central chi square distribution with non-centrality parameter $\delta^2$ and $r$ degrees of freedom. For those cases, you can just apply the formulas above, substituting the corresponding non-centrality parameter and degrees of freedom. The $\Sigma$'s for which this holds, however, seem to be quite limited (e.g. they have to satisfy $\Sigma^3 = \Sigma^2$, which is almost to say that it is idempotent). | Inverse moment of Multivariate Normal Norm | For the case where $\Sigma = \mathbf{I}\sigma$, we have the following formulas:
\begin{equation}
\mathbb{E}\left( \frac{1}{||x||} \right) = \frac{1}{\sqrt{2}} {}_1F_1 \left(\frac{1}{2}, \frac{P}{2}, - | Inverse moment of Multivariate Normal Norm
For the case where $\Sigma = \mathbf{I}\sigma$, we have the following formulas:
\begin{equation}
\mathbb{E}\left( \frac{1}{||x||} \right) = \frac{1}{\sqrt{2}} {}_1F_1 \left(\frac{1}{2}, \frac{P}{2}, -\frac{||\frac{\mu}{\sigma}||^2}{2}\right) \frac{\Gamma\left(\frac{P-1}{2}\right)}{\Gamma\left(\frac{P}{2}\right)}
\end{equation}
and
\begin{equation}
\mathbb{E}\left( \frac{1}{||x||^2} \right) = \frac{1}{2} {}_1F_1 \left(1, \frac{P}{2}, -\frac{||\frac{\mu}{\sigma}||^2}{2}\right) \frac{\Gamma\left(\frac{P-2}{2}\right)}{\Gamma\left(\frac{P}{2}\right)}
\end{equation}
where ${}_1F_1 \left(a,b,c\right)$ is the hypergeometric confluent function, and $||\frac{\mu}{\sigma}||^2$ is the squared norm of the distribution mean divided by the isotropic noise standard deviation $\sigma$ above.
The first expression I got from the sympy Python package (symbolic computation), with the following code
from sympy.stats import ChiNoncentral, density, E
from sympy import Symbol, simplify
# Define some symbols to use
k = Symbol("k", integer=True)
l = Symbol("l")
z = Symbol("z")
# Define the chi noncentral distribution
X = ChiNoncentral("x", k, l)
# Get the analytic expression for the first moment of the distribution
analyticExp = simplify(E(1/X))
Note, in simplifying the first expression from the code below, I used the Kummer relation.
These expressions, and other related ones, are derived in "Intermediate Probability: A Computational Approach", M. Paolella, section 10.1.2. In that section, an expression for the moments of the non-centered Chi-square variable are shown. You get these expressions by applying the formula to moments -1 and -2 of the non-cented Chi-Square.
This can be generalized to some other $\Sigma$'s. The Mathai and Provost book "Quadratic Forms in Random Variables" specifies in Theorem 5.1.3 the specific $\Sigma$'s under which $X^T X$, with $X \sim \mathcal{N}(\mu, \Sigma)$, is distributed as a non-central chi square distribution with non-centrality parameter $\delta^2$ and $r$ degrees of freedom. For those cases, you can just apply the formulas above, subtituting the corresponding non-centrality parameter and degrees of freedom. The $\Sigma$'s for which this holds, however, seem to be quite limited (e.g. they have to satisfy $\Sigma^3 = \Sigma^2$, which is almost to say that it is idempotent). | Inverse moment of Multivariate Normal Norm
For the case where $\Sigma = \mathbf{I}\sigma$, we have the following formulas:
\begin{equation}
\mathbb{E}\left( \frac{1}{||x||} \right) = \frac{1}{\sqrt{2}} {}_1F_1 \left(\frac{1}{2}, \frac{P}{2}, - |
54,083 | Updating the inverse covariance matrix after deleting the i-th column and row of the covariance matrix | When $i=n,$ write $\mathbb{A}$ in block matrix form
$$\mathbb A = \pmatrix{A & B \\ C & D}$$
where $A$ is the $n-1 \times n-1$ matrix obtained by omitting the last row and column of $\mathbb{A},$ $B = C^\prime$ is the first $n-1$ entries in the last column, and $D = \mathbb{A}_{nn}$ is a nonzero number because $\mathbb{A}$ is an invertible definite symmetric matrix.
Similarly write $$\mathbb{A}^{-1} = \pmatrix{a & b \\ c & d}$$ in block matrix form. We are looking for an efficient formula for $A^{-1}$ in terms of $a,b,c,d.$
By definition, the product of a matrix and its inverse is the $n\times n$ identity. Let's compute it using block matrix operations:
$$\pmatrix{\mathbb{I}_{n-1} & 0 \\ 0 & 1} = \mathbb{I}_n = \mathbb{A}\,\mathbb{A}^{-1} = \pmatrix{Aa + Bc & Ab + Bd \\ Ca + Dc & Cb+Dd}.$$
In the upper left block we almost have the result we would like: it says $Aa + Bc$ is the identity. The trick will be to adjust $a$ to compensate for the $Bc$ term in that block.
In the upper right block we find $0 = Ab + Bd.$ Use this to re-express
$$Bc = (Bd)(d^{-1}c) = ((Ab + Bd) - Ab)(d^{-1}c) = (0 - Ab)(d^{-1}c) = -Abd^{-1}c.$$
Consequently (returning to the upper left block),
$$\mathbb{I}_{n-1} = Aa + Bc = Aa - Abd^{-1}c = A(a - bd^{-1}c),$$
demonstrating that
$$A^{-1} = a - b\,d^{-1}\,c.\tag{*}$$
The computational effort is minimal: the matrix product requires $2(n-1)^2$ multiplications and then $a$ is updated with $(n-1)^2$ subtractions. This $O(n^2)$ performance is the best possible because potentially all entries of $a$ will change and there are $O(n^2)$ of them.
The case for general $i$ is now readily solved by permuting the $i^\text{th}$ row and column into the last positions. The (self-inverse) permutation matrix $\mathbb{P}^{\,i;n}$ given by
$$\mathbb{P}^{\,i;n}_{jk} = \left\{\eqalign{1, & j=k\text{ and } j\notin \{i,n\} \\ 1, & \{j,k\}=\{i,n\} \\ 0 & \text{otherwise}}\right.$$
does the trick via conjugation: the matrix $$\mathbb{P}^{\,i;n}\, \mathbb{A}\, \mathbb{P}^{\,i;n}$$
has the $i^\text{th}$ row and column moved into the last row and column. Thus, apply formula $(*)$ to this permuted version of $\mathbb{A}.$
Note that this solution works for arbitrary invertible square matrices, whether or not they are covariance matrices, provided only that $\left(\mathbb{A}^{-1}\right)_{i,i}\ne 0.$
As an example, take
$$\mathbb{A} = \pmatrix{2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2}$$
and $i=2.$ That is, we are given
$$\mathbb{A}^{-1} = \frac{1}{4}\pmatrix{3 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 3}$$
and we wish to update it to find the inverse of $\mathbb{A}$ with its second row and column removed; that is, we wish to compute
$$\pmatrix{2 & 0 \\ 0 & 2}^{-1} = \pmatrix{\frac{1}{2} & 0 \\ 0 & \frac{1}{2}}.$$
The permutation matrix is
$$\mathbb{P}^{2;3} = \pmatrix{1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0}.$$
Conjugating gives
$$\mathbb{P}^{2;3}\,\mathbb{A}^{-1}\,\mathbb{P}^{2;3} = \frac{1}{4}\pmatrix{3 & 1 & 2 \\ 1 & 3 & 2 \\ 2 & 2 & 4}.$$
From this we read off
$$a = \frac{1}{4}\pmatrix{3 & 1 \\ 1 & 3},\quad c = b^\prime = \frac{1}{4}\pmatrix{2 & 2},\quad d = \frac{1}{4}\pmatrix{4}=1.$$
Thus formula $(*)$ gives
$$A^{-1} = a - b d^{-1} c = \frac{1}{4}\pmatrix{3 & 1 \\ 1 & 3} - \frac{1}{2}\pmatrix{1 \\ 1}\,(1)^{-1}\,\frac{1}{2}\pmatrix{1 & 1} = \frac{1}{4}\pmatrix{2 & 0 \\ 0 & 2} = \pmatrix{\frac{1}{2} & 0 \\ 0 & \frac{1}{2}},$$
which is correct.
This R code implements the algorithm. It is followed by an example of its use. The permutation is implemented by R's native subscripting function [, which therefore is efficient.
inverse.update <- function(x, i) {
a <- x[-i,-i, drop=FALSE]
b <- x[-i,i, drop=FALSE]
c <- x[i,-i, drop=FALSE]
d <- x[i,i]
a - b %*% c / d # For production code, should throw an error when d is 0.
}
#
# Example.
#
A <- matrix(c(2,-1,0, -1,2,-1, 0,-1,2), 3)
A.inv <- solve(A)
i <- 2
(x.1 <- solve(A[-i,-i])) # The desired result, directly obtained
(x.0 <- inverse.update(A.inv, i)) # The result via an update | Updating the inverse covariance matrix after deleting the i-th column and row of the covariance matr | When $i=n,$ write $\mathbb{A}$ in block matrix form
$$\mathbb A = \pmatrix{A & B \\ C & D}$$
where $A$ is the $n-1 \times n-1$ matrix obtained by omitting the last row and column of $\mathbb{A},$ $B | Updating the inverse covariance matrix after deleting the i-th column and row of the covariance matrix
When $i=n,$ write $\mathbb{A}$ in block matrix form
$$\mathbb A = \pmatrix{A & B \\ C & D}$$
where $A$ is the $n-1 \times n-1$ matrix obtained by omitting the last row and column of $\mathbb{A},$ $B = C^\prime$ is the first $n-1$ entries in the last column, and $D = \mathbb{A}_{nn}$ is a nonzero number because $\mathbb{A}$ is an invertible definite symmetric matrix.
Similarly write $$\mathbb{A}^{-1} = \pmatrix{a & b \\ c & d}$$ in block matrix form. We are looking for an efficient formula for $A^{-1}$ in terms of $a,b,c,d.$
By definition, the product of a matrix and its inverse is the $n\times n$ identity. Let's compute it using block matrix operations:
$$\pmatrix{\mathbb{I}_{n-1} & 0 \\ 0 & 1} = \mathbb{I}_n = \mathbb{A}\,\mathbb{A}^{-1} = \pmatrix{Aa + Bc & Ab + Bd \\ Ca + Dc & Cb+Dd}.$$
In the upper left block we almost have the result we would like: it says $Aa + Bc$ is the identity. The trick will be to adjust $a$ to compensate for the $Bc$ term in that block.
In the upper right block we find $0 = Ab + Bd.$ Use this to re-express
$$Bc = (Bd)(d^{-1}c) = ((Ab + Bd) - Ab)(d^{-1}c) = (0 - Ab)(d^{-1}c) = -Abd^{-1}c.$$
Consequently (returning to the upper left block),
$$\mathbb{I}_{n-1} = Aa + Bc = Aa - Abd^{-1}c = A(a - bd^{-1}c),$$
demonstrating that
$$A^{-1} = a - b\,d^{-1}\,c.\tag{*}$$
The computational effort is minimal: the matrix product requires $2(n-1)^2$ multiplications and then $a$ is updated with $(n-1)^2$ subtractions. This $O(n^2)$ performance is the best possible because potentially all entries of $a$ will change and there are $O(n^2)$ of them.
The case for general $i$ is now readily solved by permuting the $i^\text{th}$ row and column into the last positions. The (self-inverse) permutation matrix $\mathbb{P}^{\,i;n}$ given by
$$\mathbb{P}^{\,i;n}_{jk} = \left\{\eqalign{1, & j=k\text{ and } j\notin \{i,n\} \\ 1, & \{j,k\}=\{i,n\} \\ 0 & \text{otherwise}}\right.$$
does the trick via conjugation: the matrix $$\mathbb{P}^{\,i;n}\, \mathbb{A}\, \mathbb{P}^{\,i;n}$$
has the $i^\text{th}$ row and column moved into the last row and column. Thus, apply formula $(*)$ to this permuted version of $\mathbb{A}.$
Note that this solution works for arbitrary invertible square matrices, whether or not they are covariance matrices, provided only that $\mathbb{A}_{i,i}\ne 0.$
As an example, take
$$\mathbb{A} = \pmatrix{2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2}$$
and $i=2.$ That is, we are given
$$\mathbb{A}^{-1} = \frac{1}{4}\pmatrix{3 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 3}$$
and we wish to update it to find the inverse of $\mathbb{A}$ with its second row and column removed; that is, we wish to compute
$$\pmatrix{2 & 0 \\ 0 & 2}^{-1} = \pmatrix{\frac{1}{2} & 0 \\ 0 & \frac{1}{2}}.$$
The permutation matrix is
$$\mathbb{P}^{2;3} = \pmatrix{1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0}.$$
Conjugating gives
$$\mathbb{P}^{2;3}\,\mathbb{A}^{-1}\,\mathbb{P}^{2;3} = \frac{1}{4}\pmatrix{3 & 1 & 2 \\ 1 & 3 & 2 \\ 2 & 2 & 4}.$$
From this we read off
$$a = \frac{1}{4}\pmatrix{3 & 1 \\ 1 & 3},\quad c = b^\prime = \frac{1}{4}\pmatrix{2 & 2},\quad d = \frac{1}{4}\pmatrix{4}=1.$$
Thus formula $(*)$ gives
$$A^{-1} = a - b d^{-1} c = \frac{1}{4}\pmatrix{3 & 1 \\ 1 & 3} - \frac{1}{2}\pmatrix{1 \\ 1}\,(1)^{-1}\,\frac{1}{2}\pmatrix{1 & 1} = \frac{1}{4}\pmatrix{2 & 0 \\ 0 & 2} = \pmatrix{\frac{1}{2} & 0 \\ 0 & \frac{1}{2}},$$
which is correct.
This R code implements the algorithm. It is followed by an example of its use. The permutation is implemented by R's native subscripting function [, which therefore is efficient.
inverse.update <- function(x, i) {
a <- x[-i,-i, drop=FALSE]
b <- x[-i,i, drop=FALSE]
c <- x[i,-i, drop=FALSE]
d <- x[i,i]
a - b %*% c / d # For production code, should throw an error when d is 0.
}
#
# Example.
#
A <- matrix(c(2,-1,0, -1,2,-1, 0,-1,2), 3)
A.inv <- solve(A)
i <- 2
(x.1 <- solve(A[-i,-i])) # The desired result, directly obtained
(x.0 <- inverse.update(A.inv, i)) # The result via an update | Updating the inverse covariance matrix after deleting the i-th column and row of the covariance matr
When $i=n,$ write $\mathbb{A}$ in block matrix form
$$\mathbb A = \pmatrix{A & B \\ C & D}$$
where $A$ is the $n-1 \times n-1$ matrix obtained by omitting the last row and column of $\mathbb{A},$ $B |
54,084 | How Does L2 Norm Regularization Work with Negative Weights? | This is our objective function, composed of a loss function and a regularizer.
$$\mathcal O(w,x,y) = \mathcal L(w,x,y)+\mathcal R(w)$$
So $\mathcal R(w)=\|w\|_2^2=\sum{w_i^2}$ in the case of $\ell_2$ regularization.
Let's perform gradient-based minimization, i.e. we will update params based on the negative partial derivatives of the objective function.
In that case:
$${\partial \mathcal R\over \partial w_i} = 2w_i$$
If $w_i \lt 0$, that partial derivative is less than zero as well; similarly, when $w_i \gt 0$ we get a positive partial derivative.
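A tiny numerical illustration of the resulting update (a sketch; the starting weights, the learning rate and the number of steps are arbitrary, and only the penalty term is being minimised here):
w  <- c(-3, 2)                        # one negative and one positive weight
lr <- 0.1
for (i in 1:20) w <- w - lr * (2 * w) # step along the negative gradient of R(w)
w                                     # both entries have shrunk towards 0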
What does that mean? It means that, if $w_i$ is negative, you have to increase it to minimize the norm, and if $w_i$ is positive, you have to decrease it. | How Does L2 Norm Regularization Work with Negative Weights? | This is our objective function, composed of a loss function and a regularizer.
$$\mathcal O(w,x,y) = \mathcal L(w,x,y)+\mathcal R(w)$$
So $\mathcal R(w)=\|w\|_2^2=\sum{w_i^2}$ in the case of $\ell_2$ | How Does L2 Norm Regularization Work with Negative Weights?
This is our objective function, composed of a loss function and a regularizer.
$$\mathcal O(w,x,y) = \mathcal L(w,x,y)+\mathcal R(w)$$
So $\mathcal R(w)=\|w\|_2^2=\sum{w_i^2}$ in the case of $\ell_2$ regularization.
Let's perform gradient-based minimization, i.e. we will update params based on the negative partial derivatives of the objective function.
In that case:
$${\partial \mathcal R\over \partial w_i} = 2w_i$$
If $w_i \lt 0$, that partial derivative is less than zero as well, and the converse is also true in the case that $w_i \gt 0$, where we get a positive partial derivative.
What does that mean? It means that, if $w_i$ is negative, you have to increase it to minimize the norm, and if $w_i$ is positive, you have to decrease it. | How Does L2 Norm Regularization Work with Negative Weights?
This is our objective function, composed of a loss function and a regularizer.
$$\mathcal O(w,x,y) = \mathcal L(w,x,y)+\mathcal R(w)$$
So $\mathcal R(w)=\|w\|_2^2=\sum{w_i^2}$ in the case of $\ell_2$ |
54,085 | How Does L2 Norm Regularization Work with Negative Weights? | L2 regularization adds $w_i^2$ term to the loss function. In iterative approaches using gradients, we subtract the gradient of the loss function not the magnitude of the weight itself. And in the loss function, the regularization part's derivative with respect to $w_i$ is going to be ${d\over dw_i}(w_i^2)=2w_i$. Typically, this part is multiplied with a chosen $\lambda$ to decrease/increase the importance of the regularization.
When the weight is negative, it moves towards the positive direction, i.e. $$w_{new}=w_{old}-\dots-(2w_{old})$$ and if it is positive, it moves towards the negative direction. In either case, you move towards origin (with small enough $\lambda$). | How Does L2 Norm Regularization Work with Negative Weights? | L2 regularization adds $w_i^2$ term to the loss function. In iterative approaches using gradients, we subtract the gradient of the loss function not the magnitude of the weight itself. And in the loss | How Does L2 Norm Regularization Work with Negative Weights?
L2 regularization adds $w_i^2$ term to the loss function. In iterative approaches using gradients, we subtract the gradient of the loss function not the magnitude of the weight itself. And in the loss function, the regularization part's derivative with respect to $w_i$ is going to be ${d\over dw_i}(w_i^2)=2w_i$. Typically, this part is multiplied with a chosen $\lambda$ to decrease/increase the importance of the regularization.
When the weight is negative, it moves towards the positive direction, i.e. $$w_{new}=w_{old}-\dots-(2w_{old})$$ and if it is positive, it moves towards the negative direction. In either case, you move towards origin (with small enough $\lambda$). | How Does L2 Norm Regularization Work with Negative Weights?
L2 regularization adds $w_i^2$ term to the loss function. In iterative approaches using gradients, we subtract the gradient of the loss function not the magnitude of the weight itself. And in the loss |
54,086 | Closed form of the integral of the difference of two Gaussian CDFs? | The $(b-d)/a$ result is correct when $a \gt 0.$ This post explains why. It generalizes the question broadly in order to reveal the underlying ideas.
Because $\Phi$ is not special in this regard, let's consider any distribution function $F_X$ for a random variable $X.$ Recall that by definition, $F_X(x) = \Pr(X\le x)$ for any real number $x.$
Suppose $X$ has a finite expectation $E_X.$ One expression for the expectation is
$$E[X] = \int_{0}^\infty (1 - F(x))\mathrm{d}x - \int_{-\infty}^0 F(x) \mathrm{d}x = \int_0^\infty (1 - (F(x) + F(-x)))\mathrm{d}x.\tag{*}$$
Let's study how this behaves under affine transformations of $X$:
$F_X(x+b) = \Pr(X \le x+b) = \Pr(X-b \le x) = F_{X-b}(x).$
When $a \gt 0,$ $F_X(ax) = \Pr(X \le ax) = \Pr(X/a \le x) = F_{X/a}(x).$
Thus, for positive $a$ and $c,$
$$\frac{E[X] - b}{a} = E[(X-b)/a] = \int_{0}^\infty (1 - F(ax+b))\mathrm{d}x - \int_{-\infty}^0 F(ax+b) \mathrm{d}x$$
and
$$\frac{E[X] - d}{c} = E[(X-d)/c] = \int_{0}^\infty (1 - F(cx+d))\mathrm{d}x - \int_{-\infty}^0 F(cx+d) \mathrm{d}x.$$
Subtracting the first from the second yields
$$\eqalign{\frac{b-E[X]}{a} - \frac{d - E[X]}{c} &=\int_{0}^\infty (1 - F(cx+d))\mathrm{d}x - \int_{-\infty}^0 F(cx+d) \mathrm{d}x \\&- \left(\int_{0}^\infty (1 - F(ax+b))\mathrm{d}x - \int_{-\infty}^0 F(ax+b) \mathrm{d}x\right) \\
&= \int_{0}^\infty (F(ax+b)-F(cx+d))\mathrm{d}x \\ &+ \int_{-\infty}^0 (F(ax+b) - F(cx+d)) \mathrm{d}x\\
&= \int_{-\infty}^\infty (F(ax+b)-F(cx+d))\mathrm{d}x
.}$$
When $a \lt 0,$ replace $a$ by $-a = |a|$ and apply all results to the distribution of $-X.$
We have thereby established the following general result:
When $|E[X]| \lt \infty$ and $ac \ne 0,$ then $$\int_\mathbb{R} (F(ax+b)-F(cx+d))\,\mathrm{d}x = \frac{b-E[X]}{|a|} - \frac{d - E[X]}{|c|}.$$
In the question with positive $a$ and $c$ and $F=\Phi,$ we have $E[X] = 0,$ reducing the integral to $b/a - d/c.$ When $a=c$ this simplifies to $(b-d)/a,$ exactly as suggested in the question.
This result isn't quite the most general one: when $a=c,$ the result holds in the form $(b-d)/a$ even when $X$ does not have a finite expectation. This is most easily seen by integrating the quantile function $F^{-1}:$ see https://stats.stackexchange.com/a/18439/919.
In case any of these manipulations appear doubtful, here is numerical confirmation using a host of different distributions (some, like the Pareto and Student t, have infinite variance; others--the versions of a Binomial and Poisson distribution--are discrete). Each "Example ..." column corresponds to these randomly-chosen $(a,b,c,d):$
Example 1 Example 2 Example 3
a 0.6267 0.8831 -0.3398
b -0.7173 -0.4401 -0.5836
c -0.9224 0.9378 -0.2596
d 1.0414 -0.3053 -0.7139
In Example 1 the signs of $a$ and $c$ differ; in Example 2 they are both positive; and in Example 3 they are both negative.
The output is
Example 1 Example 2 Example 3 Method Distribution
1 -2.401 -0.1893 1.2598 Integral Gamma
2 -2.401 -0.1893 1.2598 Formula Gamma
3 -2.529 -0.2058 1.4871 Integral Uniform
4 -2.529 -0.2058 1.4871 Formula Uniform
5 -2.727 -0.2313 1.8382 Integral Weibull
6 -2.727 -0.2313 1.8382 Formula Weibull
7 -7.900 -0.8996 11.0329 Integral Pareto
8 -7.900 -0.8996 11.0329 Formula Pareto
9 -2.274 -0.1728 1.0326 Integral Normal
10 -2.274 -0.1728 1.0326 Formula Normal
11 -3.117 -0.2817 2.5314 Integral Lognormal
12 -3.117 -0.2817 2.5314 Formula Lognormal
13 -2.274 -0.1728 1.0326 Integral Student t
14 -2.274 -0.1728 1.0326 Formula Student t
15 -1.933 -0.1288 0.4265 Integral Binomial
16 -1.933 -0.1287 0.4265 Formula Binomial
17 -2.444 -0.1948 1.3356 Integral Poisson
18 -2.444 -0.1948 1.3356 Formula Poisson
Each pair of lines shows the integral's value followed by the formula's value; they agree in every case.
Here is the code that performed these computations. Notice how the expectation $E[X]$ is carried out with the integral $(*)$ in the function g. The blind integration of the discrete distribution functions in g can be a little delicate; this is handled by increasing the default number of subdivisions from 100 to 1000, but could be further improved by using a finite lower limit of integration (thereby giving the routine a decent hint concerning the scale of the calculation).
#
# Compute the original integral numerically.
#
g <- function(a,b,c,d, F.=pnorm, ...) {
integrate(function(x) F.(a*x + b) - F.(c*x + d), -Inf, Inf, ...)$value
}
#
# Apply the formula. This requires knowing or finding E_F, the expectation
# of `F.`. Here we find that expectation with a numerical integration.
#
g. <- function(a,b,c,d, F.=pnorm) {
expectation <- integrate(function(x) 1 - (F.(-x) + F.(x)), -Inf, 0)$value
(1/abs(c) - 1/abs(a)) * expectation + b/abs(a) - d/abs(c)
}
#
# This is the Pareto CDF.
#
pPareto <- function(x, alpha, x.min) ifelse(x <= x.min, 0, 1 - (x/x.min)^(-alpha))
#
# Test a bunch of different distributions.
#
distributions <- list(Gamma = function(x) pgamma(x, 0.25),
Uniform = punif,
Weibull = function(x) pweibull(x, 2, 1),
Pareto = function(x) pPareto(x, 1.1, 1),
Normal = pnorm,
Lognormal = plnorm,
`Student t` = function(x) pt(x, 1.1),
Binomial = function(x) pbinom(x+4, 10, 1/3),
Poisson = function(x) ppois(x, 1/3))
#
# Create some random sets of (a,b,c,d) values.
#
set.seed(17)
coeffnames <- c("a","b","c","d")
args <- lapply(1:3, function(i) {x <- as.list(rexp(4)-1); names(x) <- coeffnames; x})
names(args) <- paste("Example", seq_along(args))
print(matrix(unlist(args), 4, dimnames=list(coeffnames, names(args))), digits=4)
#
# Conduct the tests.
#
Results <- do.call(rbind, lapply(names(distributions), function(s) {
G <- distributions[[s]]
X <- as.data.frame(rbind(sapply(args, do.call,
what=function(...) g(..., F.=G, subdivisions=1000L)),
sapply(args, do.call, what=function(...) g.(..., F.=G))))
X$Method <- c("Integral", "Formula")
X$Distribution <- s
X
}))
print(Results, digits=4) | Closed form of the integral of the difference of two Gaussian CDFs? | The $(b-d)/a$ result is correct when $a \gt 0.$ This post explains why. It generalizes the question broadly in order to reveal the underlying ideas.
Because $\Phi$ is not special in this regard, le | Closed form of the integral of the difference of two Gaussian CDFs?
The $(b-d)/a$ result is correct when $a \gt 0.$ This post explains why. It generalizes the question broadly in order to reveal the underlying ideas.
Because $\Phi$ is not special in this regard, let's consider any distribution function $F_X$ for a random variable $X.$ Recall that by definition, $F_X(x) = \Pr(X\le x)$ for any real number $x.$
Suppose $X$ has a finite expectation $E_X.$ One expression for the expectation is
$$E[X] = \int_{0}^\infty (1 - F(x))\mathrm{d}x - \int_{-\infty}^0 F(x) \mathrm{d}x = \int_0^\infty (1 - (F(x) + F(-x)))\mathrm{d}x.\tag{*}$$
Let's study how this behaves under affine transformations of $X$:
$F_X(x+b) = \Pr(X \le x+b) = \Pr(X-b \le x) = F_{X-b}(x).$
When $a \gt 0,$ $F_X(ax) = \Pr(X \le ax) = \Pr(X/a \le x) = F_{X/a}(x).$
Thus, for positive $a$ and $c,$
$$\frac{E[X] - b}{a} = E[(X-b)/a] = \int_{0}^\infty (1 - F(ax+b))\mathrm{d}x - \int_{-\infty}^0 F(ax+b) \mathrm{d}x$$
and
$$\frac{E[X] - d}{c} = E[(X-d)/c] = \int_{0}^\infty (1 - F(cx+d))\mathrm{d}x - \int_{-\infty}^0 F(cx+d) \mathrm{d}x.$$
Subtracting the first from the second yields
$$\eqalign{\frac{b-E[X]}{a} - \frac{d - E[X]}{c} &=\int_{0}^\infty (1 - F(cx+d))\mathrm{d}x - \int_{-\infty}^0 F(cx+d) \mathrm{d}x \\&- \left(\int_{0}^\infty (1 - F(ax+b))\mathrm{d}x - \int_{-\infty}^0 F(ax+b) \mathrm{d}x\right) \\
&= \int_{0}^\infty (F(ax+b)-F(cx+d))\mathrm{d}x \\ &+ \int_{-\infty}^0 (F(ax+b) - F(cx+d)) \mathrm{d}x\\
&= \int_{-\infty}^\infty (F(ax+b)-F(cx+d))\mathrm{d}x
.}$$
When $a \lt 0,$ replace $a$ by $-a = |a|$ and apply all results to the distribution of $-X.$
We have thereby established the following general result:
When $|E[X]| \lt \infty$ and $ac \ne 0,$ then $$\int_\mathbb{R} (F(ax+b)-F(cx+d))\,\mathrm{d}x = \frac{b-E[X]}{|a|} - \frac{d - E[X]}{|c|}.$$
In the question with positive $a$ and $c$ and $F=\Phi,$ we have $E[X] = 0,$ reducing the integral to $b/a - d/c.$ When $a=c$ this simplifies to $(b-d)/a,$ exactly as suggested in the question.
This result isn't quite the most general one: when $a=c,$ the result holds in the form $(b-d)/a$ even when $X$ does not have a finite expectation. This is most easily seen by integrating the quantile function $F^{-1}:$ see https://stats.stackexchange.com/a/18439/919.
In case any of these manipulations appear doubtful, here is numerical confirmation using a host of different distributions (some, like the Pareto and Student t, have infinite variance; others--the versions of a Binomial and Poisson distribution--are discrete). Each "Example ..." column corresponds to these randomly-chosen $(a,b,c,d):$
Example 1 Example 2 Example 3
a 0.6267 0.8831 -0.3398
b -0.7173 -0.4401 -0.5836
c -0.9224 0.9378 -0.2596
d 1.0414 -0.3053 -0.7139
In Example 1 the signs of $a$ and $c$ differ; in Example 2 they are both positive; and in Example 3 they are both negative.
The output is
Example 1 Example 2 Example 3 Method Distribution
1 -2.401 -0.1893 1.2598 Integral Gamma
2 -2.401 -0.1893 1.2598 Formula Gamma
3 -2.529 -0.2058 1.4871 Integral Uniform
4 -2.529 -0.2058 1.4871 Formula Uniform
5 -2.727 -0.2313 1.8382 Integral Weibull
6 -2.727 -0.2313 1.8382 Formula Weibull
7 -7.900 -0.8996 11.0329 Integral Pareto
8 -7.900 -0.8996 11.0329 Formula Pareto
9 -2.274 -0.1728 1.0326 Integral Normal
10 -2.274 -0.1728 1.0326 Formula Normal
11 -3.117 -0.2817 2.5314 Integral Lognormal
12 -3.117 -0.2817 2.5314 Formula Lognormal
13 -2.274 -0.1728 1.0326 Integral Student t
14 -2.274 -0.1728 1.0326 Formula Student t
15 -1.933 -0.1288 0.4265 Integral Binomial
16 -1.933 -0.1287 0.4265 Formula Binomial
17 -2.444 -0.1948 1.3356 Integral Poisson
18 -2.444 -0.1948 1.3356 Formula Poisson
Each pair of lines shows the integral's value followed by the formula's value; they agree in every case.
Here is the code that performed these computations. Notice how the expectation $E[X]$ is carried out with the integral $(*)$ in the function g. The blind integration of the discrete distribution functions in g can be a little delicate; this is handled by increasing the default number of subdivisions from 100 to 1000, but could be further improved by using a finite lower limit of integration (thereby giving the routine a decent hint concerning the scale of the calculation).
#
# Compute the original integral numerically.
#
g <- function(a,b,c,d, F.=pnorm, ...) {
integrate(function(x) F.(a*x + b) - F.(c*x + d), -Inf, Inf, ...)$value
}
#
# Apply the formula. This requires knowing or finding E_F, the expectation
# of `F.`. Here we find that expectation with a numerical integration.
#
g. <- function(a,b,c,d, F.=pnorm) {
expectation <- integrate(function(x) 1 - (F.(-x) + F.(x)), -Inf, 0)$value
(1/abs(c) - 1/abs(a)) * expectation + b/abs(a) - d/abs(c)
}
#
# This is the Pareto CDF.
#
pPareto <- function(x, alpha, x.min) ifelse(x <= x.min, 0, 1 - (x/x.min)^(-alpha))
#
# Test a bunch of different distributions.
#
distributions <- list(Gamma = function(x) pgamma(x, 0.25),
Uniform = punif,
Weibull = function(x) pweibull(x, 2, 1),
Pareto = function(x) pPareto(x, 1.1, 1),
Normal = pnorm,
Lognormal = plnorm,
`Student t` = function(x) pt(x, 1.1),
Binomial = function(x) pbinom(x+4, 10, 1/3),
Poisson = function(x) ppois(x, 1/3))
#
# Create some random sets of (a,b,c,d) values.
#
set.seed(17)
coeffnames <- c("a","b","c","d")
args <- lapply(1:3, function(i) {x <- as.list(rexp(4)-1); names(x) <- coeffnames; x})
names(args) <- paste("Example", seq_along(args))
print(matrix(unlist(args), 4, dimnames=list(coeffnames, names(args))), digits=4)
#
# Conduct the tests.
#
Results <- do.call(rbind, lapply(names(distributions), function(s) {
G <- distributions[[s]]
X <- as.data.frame(rbind(sapply(args, do.call,
what=function(...) g(..., F.=G, subdivisions=1000L)),
sapply(args, do.call, what=function(...) g.(..., F.=G))))
X$Method <- c("Integral", "Formula")
X$Distribution <- s
X
}))
print(Results, digits=4) | Closed form of the integral of the difference of two Gaussian CDFs?
The $(b-d)/a$ result is correct when $a \gt 0.$ This post explains why. It generalizes the question broadly in order to reveal the underlying ideas.
Because $\Phi$ is not special in this regard, le |
54,087 | Closed form of the integral of the difference of two Gaussian CDFs? | A geometrical intuition
A geometrical intuition to accompany Whuber's answer is the following:
It relates to answers to other questions here and here.
The mean of the variable relates to the area of the gray striped surfaces, which can be computed in two directions
The vertical stripes: as an integral of the quantile function over the probability levels $dp$
$$E[X] = \int_0^1 Q_X(p) dp$$
The horizontal stripes: as an integral of the CDF over the values $dz$ of the variable
$$E[X] = -\int_{-\infty}^0 F_X(z)\, dz + \int_{0}^{\infty} \big(1-F_X(z)\big)\, dz$$
Then the difference $E[X]-E[Y]$ is
$$\begin{array}{}
E[X]-E[Y] &=& \int_{0}^1 Q_X(p)-Q_Y(p) dp \\
\end{array}$$
which relates to the difference of the quantile functions, and the blue and red areas marked by the vertical stripes.
But we can just as well compute this in the horizontal direction by taking the difference between the CDF functions.
$$\begin{array}{}
E[X]-E[Y] &=& -\int_{-\infty}^0 \big(F_X(z)-F_Y(z)\big)\, dz + \int_{0}^{\infty} \big(F_Y(z)-F_X(z)\big)\, dz \\
&=& - \int_{-\infty}^\infty \big(F_X(z)-F_Y(z)\big)\, dz
\end{array}$$
Applied to Gaussian CDF
With the general equation
$$\begin{array}{}
E[X]-E[Y] &=& - \int_{-\infty}^\infty F_X(z)-F_Y(z) dz
\end{array}$$
you will get for the CDF's of the Gaussian distribution $\Phi\left(ax+b \right)$ which has the mean $\mu(a,b) = -b/a$
$$\int_{-\infty}^\infty \left( \Phi\left(ax+b \right) - \Phi\left(cx+d \right) \right) dx = -[\mu(a,b) - \mu(c,d)] = b/a - d/c$$ | Closed form of the integral of the difference of two Gaussian CDFs? | A geometrical intuition
A geometrical intuition to accompany Whuber's answer is the following:
It relates to answers to other questions here and here.
The mean of the variable relates to the area of | Closed form of the integral of the difference of two Gaussian CDFs?
A geometrical intuition
A geometrical intuition to accompany Whuber's answer is the following:
It relates to answers to other questions here and here.
The mean of the variable relates to the area of the gray striped surfaces, which can be computed in two directions
The vertical stripes: as an integral of the quantile function with the quantiles $dp$ as integrand
$$E[X] = \int_0^1 Q_X(p) dp$$
The horizontal stripes: as an integral with the CDF and the variable $dz$ as integrand
$$E[X] = -\int_{-\infty}^0 F_X(z) dz + \int_{0}^{\infty} 1-F_X(z) dz$$
Then the difference $E[X]-E[Y]$ is
$$\begin{array}{}
E[X]-E[Y] &=& \int_{0}^1 Q_X(p)-Q_Y(p) dp \\
\end{array}$$
which relates to the difference of the quantile functions, and the blue and red areas marked by the vertical stripes.
But we can just as well compute this in the horizontal direction by taking the difference between the CDF functions.
$$\begin{array}{}
E[X]-E[Y] &=& -\int_{-\infty}^0 F_X(z)-F_Y(z) dz + \int_{0}^{\infty} -F_X(z)+F_Y(z) dz \\
&=& - \int_{-\infty}^\infty F_X(z)-F_Y(z) dz
\end{array}$$
Applied to Gaussian CDF
With the general equation
$$\begin{array}{}
E[X]-E[Y] &=& - \int_{-\infty}^\infty F_X(z)-F_Y(z) dz
\end{array}$$
you will get for the CDF's of the Gaussian distribution $\Phi\left(ax+b \right)$ which has the mean $\mu(a,b) = -b/a$
$$\int_{-\infty}^\infty \left( \Phi\left(ax+b \right) - \Phi\left(cx+d \right) \right) dx = -[\mu(a,b) - \mu(c,d)] = b/a - d/c$$ | Closed form of the integral of the difference of two Gaussian CDFs?
A geometrical intuition
A geometrical intuition to accompany Whuber's answer is the following:
It relates to answers to other questions here and here.
The mean of the variable relates to the area of |
54,088 | Does poisson penalized quasi likelihood regression use biased estimators? | The Penalized Quasi Likelihood (PQL) method has been proposed to fit generalized linear mixed-effects models. The way it works is by doing a kind of a Laplace approximation in a quasi-likelihood formulation of the model. This approximation results in a transformation of the original outcome variable. The aim of the transformation is to make the transformed outcome behave like a normally distributed variable for which a linear mixed-effects model can be fitted. Hence, fitting of GLMMs can be done with software fitting linear mixed models.
This transformation works well when the original outcome vector is already close to being normally distributed. E.g., when you have Poisson data with high counts or binomial data with many trials. PQL does not work satisfactorily and results in bias when the original outcome variable is more discrete, e.g., for binary data or Poisson with low counts.
The gold standard approaches for approximating the log-likelihood function of GLMMs are the adaptive Gaussian quadrature and Monte Carlo. More information also available here. | Does poisson penalized quasi likelihood regression use biased estimators? | The Penalized Quasi Likelihood (PQL) method has been proposed to fit generalized linear mixed-effects models. The way it works is by doing a kind of a Laplace approximation in a quasi-likelihood formu | Does poisson penalized quasi likelihood regression use biased estimators?
The Penalized Quasi Likelihood (PQL) method has been proposed to fit generalized linear mixed-effects models. The way it works is by doing a kind of a Laplace approximation in a quasi-likelihood formulation of the model. This approximation results in a transformation of the original outcome variable. The aim of the transformation is to make the transformed outcome behave like a normally distributed variable for which a linear mixed-effects model can be fitted. Hence, fitting of GLMMs can be done with software fitting linear mixed models.
This transformation works well when the original outcome vector is already close to being normally distributed. E.g., when you have Poisson data with high counts or binomial data with many trials. PQL does not work satisfactorily and results in bias when the original outcome variable is more discrete, e.g., for binary data or Poisson with low counts.
The gold standard approaches for approximating the log-likelihood function of GLMMs are the adaptive Gaussian quadrature and Monte Carlo. More information also available here. | Does poisson penalized quasi likelihood regression use biased estimators?
The Penalized Quasi Likelihood (PQL) method has been proposed to fit generalized linear mixed-effects models. The way it works is by doing a kind of a Laplace approximation in a quasi-likelihood formu |
54,089 | R-square and Intrumental Regression | No.
$R^2$ in instrumental variables regression is not useful.
Since one of the explanatory variables $x$ is correlated with the error $\epsilon$ we can't decompose the variance of the outcome $y$ into $\beta^2 Var(x) + Var(\epsilon)$, so the obtained $R^2$ has neither a natural interpretation, nor can it be used for computation of F statistics. Additionally, $R^2$ in instrumental variables regression can be negative, so it makes no difference whether you use
$$R^2 = \frac{MSS}{TSS} \quad \text{or} \quad R^2 = 1- \frac{RSS}{TSS}$$
because when $RSS>TSS$ we also have $MSS = TSS - RSS < 0$, which makes little sense. | R-square and Intrumental Regression | No.
$R^2$ in instrumental variables regression is not useful.
Since one of the explanatory variables $x$ is correlated with the error $\epsilon$ we can't decompose the variance of the outcome $y$ in | R-square and Intrumental Regression
No.
$R^2$ in instrumental variables regression is not useful.
Since one of the explanatory variables $x$ is correlated with the error $\epsilon$ we can't decompose the variance of the outcome $y$ into $\beta^2 Var(x) + Var(\epsilon)$, so the obtained $R^2$ has neither a natural interpretation, nor can it be used for computation of F statistics. Additionally, $R^2$ in instrumental variables regression can be negative so it makes not difference for whether you use
$$R^2 = \frac{MSS}{TSS} \quad \text{or} \quad R^2 = 1- \frac{RSS}{TSS}$$
because when $RSS>TSS$, then we also have that $MSS = TSS - RSS < 0$.
which makes little sense. | R-square and Intrumental Regression
No.
$R^2$ in instrumental variables regression is not useful.
Since one of the explanatory variables $x$ is correlated with the error $\epsilon$ we can't decompose the variance of the outcome $y$ in |
54,090 | Why one result is so wide in this logistic multiple regession | You have too many variables for the amount of data you have. A rough rule of thumb is that, in logistic regression, you can have $1$ predictor variable for every $15$ observations in the less commonly occurring category. With $193$ data, you can have at most $97$ instances of yeses or noes. That implies you should use no more than $6$ predictors.
That reasoning relates to all variables. Regarding that specific variable as distinct from the others, it is presumably either collinear or completely separates the yeses or noes. To diagnose which might lie behind this, see how many fitting iterations were used ($>10$ is some evidence of separation), and fit an ordinary least squares multiple regression of var 8 against all the rest (it doesn't matter if the assumptions are met) and check the multiple $R^2$, a value $>.9$ being considered problematic. To understand separation, it may help to read my answer here: Why does logistic regression become unstable when classes are well-separated? To understand multicollinearity, it may help to read my answers here: What is the effect of having correlated predictors in a multiple regression model?, and here: How seriously should I consider the effects of multicollinearity in my regression model?
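A sketch of those two checks in R (the data frame dat, the outcome y and the predictor var8 are hypothetical names standing in for your own variables):
fit <- glm(y ~ ., data = dat, family = binomial)
fit$iter                                          # many iterations (say > 10) hints at separation
summary(lm(var8 ~ . - y, data = dat))$r.squared   # > 0.9 suggests var8 is nearly collinear with the rest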
Regarding your last question of whether you can conclude that the null holds for the non-significant variables, it might help you to read my answer here: Why do statisticians say a non-significant result means “you can't reject the null” as opposed to accepting the null hypothesis? | Why one result is so wide in this logistic multiple regession | You have too many variables for the amount of data you have. A rough rule of thumb is that, in logistic regression, you can have $1$ predictor variable for every $15$ observations in the less commonl | Why one result is so wide in this logistic multiple regession
You have too many variables for the amount of data you have. A rough rule of thumb is that, in logistic regression, you can have $1$ predictor variable for every $15$ observations in the less commonly occurring category. With $193$ data, you can have at most $97$ instances of yeses or noes. That implies you should use no more than $6$ predictors.
That reasoning relates to all variables. Regarding that specific variable as distinct from the others, it is presumably either collinear or completely separates the yeses or noes. To diagnose which might lie behind this, see how many fitting iterations were used, $>10$ is some evidence of separation; and fit an ordinary least squares multiple regression of var 8 against the all rest (it doesn't matter if the assumptions are met) and check the multiple $R^2$, a value $>.9$ considered problematic. To understand separation, it may help to read my answer here: Why does logistic regression become unstable when classes are well-separated? To understand multicollinearity, it may help to read my answers here: What is the effect of having correlated predictors in a multiple regression model?, and here: How seriously should I consider the effects of multicollinearity in my regression model?
Regarding your last question of whether you can conclude that the null holds for the non-significant variables, it might help you to read my answer here: Why do statisticians say a non-significant result means “you can't reject the null” as opposed to accepting the null hypothesis? | Why one result is so wide in this logistic multiple regession
You have too many variables for the amount of data you have. A rough rule of thumb is that, in logistic regression, you can have $1$ predictor variable for every $15$ observations in the less commonl |
54,091 | Intuition behind posterior predictive distribution | Let $X$ denote the observations and $\theta \in \Theta$ the parameter. In a Bayesian approach, both are considered random quantities.
The first step of modeling is to define a statistical model, i.e. the distribution of $X$ given $\theta$, which can be written as $X \mid \theta \sim p(\cdot \mid \theta)$. This is mainly done by writing down an explicit likelihood function.
Thus our statistical model describes the conditional distribution of $X$ given $\theta$.
From a Bayesian perspective, we also define a prior distribution for $\theta$ on $\Theta$: $\theta \sim \pi(\theta)$.
The prior predictive distribution
Before observing any data, what we have is simply the chosen model, $p(x \mid \theta)$, and the prior distribution of $\theta$, $\pi(\theta)$.
One can then ask what the marginal distribution of $X$ is, that is, the distribution of $X \mid \theta$ averaged over all possible values of $\theta$.
This can be simply written using expectation:
\begin{align*}
p(x) &= \mathbb{E}_\theta \Big [ p(x \mid \theta) \Big ] \\
&= \int_\Theta p(x \mid \theta) \pi(\theta) d\theta.
\end{align*}
The posterior predictive distribution
The interpretation is the same as for the prior predictive distribution: it is the marginal distribution of $X \mid \theta$ averaged over all values of $\theta$.
But this time the "weighting" function to be used is not $\pi(\theta)$ but our updated knowledge about $\theta$ after observing data $X^*$: $\pi(\theta \mid X^*)$.
Using the well-known Bayes theorem we have:
$$
\pi(\theta \mid X^*) = \frac{p(X^* \mid \theta) \pi(\theta)}{p(X^*)}
$$
And thus, the distribution of $X$ given $X^*$, obtained by averaging $X \mid \theta$ over $\Theta$ with respect to the posterior, is:
$$
p(x \mid X^*) = \int_\Theta p(x \mid \theta) \pi(\theta \mid X^*)d\theta
$$
Example: Gamma-Poisson mixture.
Suppose our observations are made of counts, $X$, and we define a Poisson model that is: $X \mid \lambda \sim \mathcal{P}(\lambda)$.
From a Bayesian perspective, we also define a prior distribution for $\lambda$.
For mathematical reasons, it is appealing to use a Gamma distribution, $\lambda \sim \mathcal{G}(a,b)$.
The prior predictive distribution
One particularity of this Gamma-Poisson mixture is that the marginal distribution will be distributed as a Negative-Binomial random variable.
That is, if $X \mid \lambda \sim \mathcal{P}(\lambda)$ and $\lambda \sim \mathcal{G}(a,b)$ then, $X \sim \mathcal{NB}\big (a,\frac{b}{b+1} \big )$.
Thus the prior predictive distribution of $X$ is a Negative Binomial distribution $\mathcal{NB}\big (a,\frac{b}{b+1} \big )$.
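A quick way to convince oneself of this result numerically is to simulate the Gamma-Poisson mixture in R and compare it with dnbinom; the values a0 and b0 below are arbitrary choices for the check, not from the example further down.
set.seed(1)
a0 <- 2; b0 <- 1                            # arbitrary prior parameters for the check
lambda <- rgamma(1e5, shape = a0, rate = b0)
x_sim  <- rpois(1e5, lambda)                # draws from the prior predictive
mean(x_sim == 3)                            # empirical P(X = 3)
dnbinom(3, size = a0, prob = b0/(b0 + 1))   # closed-form Negative-Binomial value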
The posterior predictive distribution
Now, say we have observed $n$ counts $X =(X_1,\dots,X_n)$.
First, thanks to the choice of a Gamma prior for $\lambda$, the posterior distribution of $\lambda$ can be easily derived as being also a Gamma distribution:
$$
\lambda \mid X \sim \mathcal{G} \bigg ( a + \sum_{i=1}^n X_i , b+n \bigg)
$$
From what we saw for the prior predictive distribution, the posterior predictive distribution of $X$ will also be a Negative-Binomial:
$$
\mathcal{NB} \bigg ( a + \sum_{i=1}^n X_i, \frac{b+n}{b+1+n} \bigg )
$$
Here is an example where $a=100$, $b=2$ and we observe the vector of counts $X=(85,80,70,65,71,92)$:
Here is the R code to produce the plot:
### Gamma-Poisson mixture: prior and posterior predictive distributions :
require(ggplot2)
# Parameters of the prior distribution of lambda
a<-100
b<-2
x<-0:150
y1<-dnbinom(x,a,b/(b+1))
# Vector of observations and posterior predictive distribution
X<-c(85,80,70,65,71,92)
n<-length(X)
XS<-sum(X)
x<-0:150
au<-a+XS
bu<-b+n
y2<-dnbinom(x,size=au,prob=bu/(bu+1))
plot1<-ggplot() + aes(x=x,y=y1,colour="Prior") + geom_line(size=1)+
geom_line(aes(x=x,y=y2,colour="Post"))+
scale_colour_manual(breaks=c("Prior", "Post"),
values=c("#cd7118","#1874cd"),labels=c("Prior Predictive",
"Posterior Predictive"))+
ggtitle("Prior and posterior predictive distributions for a=100 and b=2")+
labs(x="x", y="Density") +
theme(
panel.background = element_rect(fill = "white",
colour = "white",
size = 0.5, linetype = "solid"),
axis.line = element_line(size = 0.2, linetype = 'solid',
colour = "black"),
axis.text = element_text(size=10),
axis.title = element_text(size=10),
legend.title = element_blank(),
legend.background = element_blank(),
legend.key = element_blank(),
legend.position = c(.7,.5)
)
plot1
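As a further sanity check, one can also verify the posterior predictive distribution by simulation, reusing au and bu from the code above (this is only an optional check, not needed for the plot):
set.seed(2)
lambda_post <- rgamma(1e5, shape = au, rate = bu)   # draws from the Gamma posterior
x_new <- rpois(1e5, lambda_post)                    # posterior predictive draws
mean(x_new == 80)                                   # empirical P(X_new = 80)
dnbinom(80, size = au, prob = bu/(bu + 1))          # closed-form value used for y2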
54,092 | Why is R-squared equal to the sum of standardized coefficients times the correlation? | I offer two solutions. The first (geometric) solution reverses the usual least-squares perspective by starting with its solution--the fitted values--and working backwards to the problem(!), which exposes the basic nature and simplicity of this result. The second (algebraic) solution cranks the standard least-squares machinery to show how the result can be obtained in a straightforward manner using familiar formulas that simplify when the variables are first standardized.
For the cognoscenti, I will summarize the first solution to spare you the effort of reading through it. The ordinary least squares solution orthogonally projects the response vector $y$ into the subspace generated by the explanatory variables $x_i$. This means the projection can be expressed as a linear combination. The $\beta_i$ are its coefficients (I won't use hats here, since we will never refer to a "true model"). When the response and the explanatory variables are initially standardized, $R^2$ is just the squared length of the projection, $|\hat y|^2.$ That length in turn is the inner product of the response vector with its projection (because the residuals are orthogonal to the projection). Computing that inner product term by term introduces the inner products of $y$ with the $x_i$--but because these vectors have all been standardized, those inner products are just the correlation coefficients $r_{yi}.$ The equation $R^2 = \beta_1 r_{y1} + \beta_2 r_{y2} + \cdots$ drops right out.
Geometric solution
To serve as a counterpoint to the heavy linear algebra in the second solution, this discussion will not be any more advanced (mathematically) than the basics of analytic geometry taught in high schools for generations. I will, however, freely use the (modern) terminology of "vector spaces," "linear combination," "inner products," "orthogonality," and "linear forms" that is often avoided at the most elementary level, to make the connections between the two solutions more apparent.
Forget about regression for a moment and just suppose you are presented with a vector $\hat y.$ (As the notation suggests, $\hat y$ eventually will play the role of a least squares prediction--but so far we don't have anything to predict!) Choose, in any way you please, any number of (even infinitely many) vectors $x_1,x_2,\ldots$ with which $\hat y$ may be represented as a linear combination. That is, suppose there are scalars $\beta_i$ for which
$$\hat y = \beta_1 x_1 + \beta_2 x_2 + \cdots.$$
Now let $\phi$ be any linear form, also known as a covector. By definition, this means only that $\phi$ is a linear function defined on the vector space, with scalar values, for which
$$\phi(\hat y) = \phi(\beta_1 x_1 + \beta_2 x_2 + \cdots) = \beta_1\phi(x_1) + \beta_2\phi(x_2) + \cdots.\tag{1}$$
Believe it or not, that is our result! It remains only to apply it in the special case of ordinary least squares regression with standardized variables.
Because this is a least squares setting, our vector space is endowed with a Euclidean norm $|\ |$ giving the lengths of vectors (as a root sum of squares--that's where least squares comes into the picture) and its associated inner product $\langle\ ,\ \rangle$ for which $|x|^2 = \langle x,x\rangle$ for any vector $x.$ This inner product provides a splendid way to obtain linear forms. Namely, given any vector $y,$ define the function $y^{*}$ via
$$y^{*}(x) = \langle y, x\rangle.$$
Because the inner product is bilinear, $y^{*}$ is automatically linear, whence it is a linear form.
The term "standardized coefficient" in the question is conventional, but it's misleading: it's not the coefficient that has been standardized; it means the coefficient is obtained by first standardizing all the variables involved. So, let us restrict the foregoing discussion to unit vectors $x_i,$ which means $|x_i|=1,$ and let $y$ be an arbitrary unit vector (not, apparently, having anything whatsoever to do with $\hat y$ and the $x_i$).
In this case, where $\phi = y^{*},$ the basic relation $(1)$ is
$$y^{*}(\hat y) = \langle y, \hat y\rangle = \beta_1 \langle y, x_1\rangle + \beta_2 \langle y, x_2\rangle + \cdots.\tag{2}$$
The penultimate step is to suppose the scalars are real numbers and that the components of all vectors sum to zero. In this case, the inner products in the preceding sum are correlation coefficients:
$$\langle y, x_i\rangle = r_{yi}$$
(using the notation of the question). This is because the correlation coefficient of two vectors is defined as the sum of products after the vectors have been recentered (to make their components sum to zero) and normalized to unit length. For more about correlation from this perspective see Freedman, Pisani, & Purves, Statistics (any edition), a classic introductory (almost formula-free) textbook.
At some point we need to introduce $R^2.$ For this purpose I propose a general definition that reduces to the usual one in the least squares setting.
Definition: Given a nonzero vector $y$ and any vector $\hat y,$ let $$R^2(\hat y, y) = \frac{|\hat y|^2}{|y|^2} = \left(\frac{|\hat y|}{|y|}\right)^2.$$ It is the square of the ratio of the lengths of these vectors.
In any regression, no matter how it may be performed, when $\hat y$ is the regression estimate of $y$ this formula exhibits $R^2$ as the "regression sum of squares" ($|\hat y|^2$) divided by the "total sum of squares" ($|y|^2$). Usually $R^2$ is computed after centering $y$ (when the model contains an intercept), but it is often computed and reported even when $y$ is not centered ("regression through the origin"). For a good discussion of this, see Removal of ... intercept term increases $R^2$.
In this generality all we can say is that $R^2$ is not negative--but it could be arbitrarily large. That is about to change. But, in passing, observe that when $|y|=1,$ the formula simplifies to $$R^2(\hat y, y) = |\hat y|^2 / |y|^2 = |\hat y|^2.$$
Finally suppose that $y-\hat y$ is orthogonal to $\hat y.$ This is geometric language for stating
$$0 = \langle y - \hat y, \hat y\rangle = y^{*}(\hat y) - |\hat y|^2.$$
This connects the value of the form $y^{*}$ at $\hat y$ to the (squared) length of $\hat y:$ the two must be equal.
Applying this observation to $(2)$ and using the notation $r_{yi}$ gives
$$R^2(\hat y, y) = |\hat y|^2 = y^{*}(\hat y) = \beta_1 r_{y1} + \beta_2 r_{y2} + \cdots\tag{3}$$
So far everything has been about simple (almost trivial) relations among vectors in an inner product space. But geometrically, that's all least squares is: given a response vector $y$ and a collection of explanatory vectors $x_1,x_2,\ldots$ (in the same vector space as $y,$ of course), the Normal Equations of least squares theory assert that a least squares solution $\hat y$ is any linear combination of the $x_i$ whose residual is orthogonal to it:
$$\langle y - \hat y, \beta_1 x_1 + \beta_2 x_2 + \cdots\rangle = 0.$$
That was our final supposition above, which implied relation $(3),$ and we are done.
Algebraic solution
The question concerns regression statistics developed from a model matrix $X$ and response variable $y$ that have all been normalized: that is, the sums of all columns are zero, the sums of their squares are all constant $C\ne 0,$ and any constant column has been removed from $X.$ ($C$ varies depending on whether one is using Maximum Likelihood estimates, Ordinary Least Squares estimates, or whatever, but it will turn out its actual value doesn't matter.)
Because of these normalizations, some of the (usual) formulas simplify, including
$$(r_{y1}, r_{y2}, \ldots, r_{yp})^\prime = r(X,y) = \frac{1}{C} X^\prime y$$
is the vector of correlation coefficients between $Y$ and the columns of $X$ and
$$y^\prime y = C$$
is the total sum of squares, $TSS.$
Two useful formulas (which don't simplify) are
$$\hat\beta = (X^\prime X)^{-}X^\prime y$$
for the (standardized) regression coefficients (which estimate the true coefficients $\beta$) and
$$SSR = \hat y^\prime \hat y = (X\hat\beta)^\prime (X\hat\beta) = y^\prime X(X^\prime X)^{-}X^\prime y$$
for the "regression sum of squares."
Since $R^2$ is defined as the ratio of the regression sum of squares to the total sum of squares,
$$R^2 = \frac{SSR}{TSS} = \frac{y^\prime X(X^\prime X)^{-}X^\prime y}{C} = y^\prime X(X^\prime X)^{-}\left[\frac{1}{C}\, X^\prime y\right] = \hat\beta^\prime r(X,y).$$
In non-matrix form this latter expression is the sum (over $i$) of $\hat\beta_{i}r_{yi},$ QED.
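A small numerical check of this identity in R, with simulated data (the variable names and coefficients below are purely illustrative):
set.seed(3)
n <- 200
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
d$y <- with(d, 0.5*x1 - 0.3*x2 + 0.2*x3 + rnorm(n))
ds  <- as.data.frame(scale(d))                    # standardize the response and the predictors
fit <- lm(y ~ x1 + x2 + x3, data = ds)
beta_std <- coef(fit)[-1]                         # standardized coefficients
r_xy <- cor(ds[, c("x1", "x2", "x3")], ds$y)      # correlations of each predictor with y
sum(beta_std * r_xy)                              # matches summary(fit)$r.squared
summary(fit)$r.squared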
54,093 | Multiple metrics in keras - why and when might we want to use it? | loss='mean_squared_error'
We divide these terms into the differentiable loss function, which is used to train the neural network weights, and quality metrics, which are used to assess the quality of the training convergence.
In your example, $$L = (Y - Y')^2 / n$$ is the loss function which is minimized during the training phase.
The metrics will be shown in the log and on the plot to give you an indication of how well your model performs at this stage of the training phase. They are not used as optimization objectives.
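In the R interface to keras this split is visible in compile(), as sketched below; model is assumed to be an already-defined keras model, and the particular loss and metrics are just illustrative choices:
library(keras)
model %>% compile(
  optimizer = "adam",
  loss      = "mean_squared_error",                            # minimized during training
  metrics   = c("mean_absolute_error", "mean_squared_error")   # only monitored and reported
)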
54,094 | Multiple metrics in keras - why and when might we want to use it? | For classification problems, sometimes cross-entropy is preferable for the "objective function" (metric), as compared with the MSE (mean square error). MSE is absolutely required if you use ANNs for function approximation problems (vs. classification problems). Would recommend looking at texts (books) like Bishop or Ripley instead of reading software manuals.
54,095 | What is the distribution of $\ln(\frac{X_1+X_2}{X_1})$ when $X_1, X_2 \sim \text{IID Exp}(1)$? | Taking $X_1,X_2 \sim \text{IID Exp}(1)$ we have:
$$R \equiv \ln \bigg( \frac{X_1+X_2}{X_1} \bigg) \sim \text{Exp}(1).$$
There are various ways to demonstrate this. In cases of difficulty, the simplest way is to derive the CDF. For all $r \geqslant 0$ we have:
$$\begin{equation} \begin{aligned}
F_R(r) = \mathbb{P}(R \leqslant r)
&= 1- \mathbb{P}( R > r ) \\[6pt]
&= 1-\mathbb{P}( e^R > e^r ) \\[6pt]
&= 1-\mathbb{P}( X_1+X_2 > X_1 \cdot e^r ) \\[6pt]
&= 1-\mathbb{P}( X_2 > X_1 \cdot (e^r-1) ) \\[6pt]
&= 1-\int \limits_0^\infty (1-F_{X_2}( x \cdot (e^r-1))) f_{X_1}(x) \ dx \\[6pt]
&= 1-\int \limits_0^\infty \exp(-x \cdot (e^r-1)) \exp(-x) \ dx \\[6pt]
&= 1-\int \limits_0^\infty \exp(-x \cdot e^r) \ dx \\[6pt]
&= 1-\Big[ -e^{-r} \exp(-x \cdot e^r) \Big]_{x=0}^{x \rightarrow \infty} \\[6pt]
&= 1- e^{-r}. \\[6pt]
\end{aligned} \end{equation}$$
This confirms that $R \sim \text{Exp}(1)$.
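A quick simulation in R supports this (the sample size is an arbitrary choice):
set.seed(4)
x1 <- rexp(1e5); x2 <- rexp(1e5)
r  <- log((x1 + x2)/x1)
mean(r); var(r)             # both close to 1, as expected for Exp(1)
ks.test(r, pexp, rate = 1)  # should not reject the Exp(1) hypothesis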
54,096 | What is the distribution of $\ln(\frac{X_1+X_2}{X_1})$ when $X_1, X_2 \sim \text{IID Exp}(1)$? | Observe that $\ln(\frac{X_1+X_2}{X_1}) = -\ln(\frac{X_1}{X_1+X_2})$, and $-\ln(U) \sim \text{Exp}(1)$ when $U \sim \text{Uniform}(0,1)$. If we can prove that $\frac{X_1}{X_1+X_2}$ is standard uniform, then we will have completed the proof.
Observe that
$P(\frac{X_1}{X_1+X_2}\leq t) = P(\frac{X_1+X_2}{X_1}\geq \frac{1}{t}) = P\{X_2\geq X_1(\frac{1}{t}-1)\}= \int_0^\infty f_{X_1}(x_1)P(X_2 \geq x_1(\frac{1}{t}-1))dx_1 = \int_0^{\infty}e^{-x_1}e^{-x_1(\frac{1}{t}-1)}dx_1 = \int_0^{\infty}e^{-\frac{x_1}{t}}dx_1=t$ for $0 < t < 1$.
This implies $\frac{X_1}{X_1+X_2}$ is standard uniform, completing the proof.
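The key intermediate claim, that $\frac{X_1}{X_1+X_2}$ is standard uniform, can also be checked by a short R simulation:
set.seed(5)
x1 <- rexp(1e5); x2 <- rexp(1e5)
u  <- x1/(x1 + x2)
ks.test(u, punif)          # should not reject uniformity
hist(u, breaks = 50)       # roughly flat histogram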
54,097 | What is the probability distribution used in logistic regression called? | Since $Y_i$ is a binary variable, its distribution is the Bernoulli distribution:
$$Y_i | \mathbf{x}, \mathbf{w} \sim \text{Bern} \Bigg( \text{Prob} = \frac{\exp(\mathbf{w}^\text{T} \mathbf{x})}{1 + \exp(\mathbf{w}^\text{T} \mathbf{x})} \Bigg).$$
One alternative way of looking at the logistic regression is to regard the observed response variable as a discretisation of an underlying "latent variable", where the latter has a logistic distribution. In this (equivalent) alternative formulation, we have an observed response variable $Y_i \equiv \mathbb{I}(\tilde{Y}_i > 0)$, with the underlying latent response having the distribution:
$$\tilde{Y}_i | \mathbf{x}, \mathbf{w} \sim \text{Logistic} \Bigg( \text{Location} = \mathbf{w}^\text{T} \mathbf{x}, \ \text{Scale} = 1 \Bigg).$$
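The equivalence of the two formulations can be illustrated with a small R simulation; the value of the linear predictor below is hypothetical:
set.seed(6)
eta <- 0.7                                   # hypothetical value of w'x
y_latent <- rlogis(1e5, location = eta, scale = 1)
mean(y_latent > 0)                           # P(Y = 1) from the latent formulation
exp(eta)/(1 + exp(eta))                      # Bernoulli probability from the first formulation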
54,098 | What is the probability distribution used in logistic regression called? | In your link, you have the cumulative distribution function for the logistic distribution as $$\frac{1}{1+e^{-\frac{x-\mu}{s}}}$$
while in your question you have $$\dfrac{\exp(w^TX)}{(1+\exp(w^TX))} \text{ which is } \dfrac{1}{1+e^{-w^TX}}$$
and these are essentially the same: set $\mu = 0$ and $s = 1$ in the logistic CDF and evaluate it at $x = w^TX$.
You can see that the first expression is a cumulative distribution function: it approaches $0$ when $x$ is very negative but approaches $1$ when $x$ is large and positive. This is what you want from your logistic regression: the predicted conditional probability of a positive result $(Y=1)$ increases towards $1$ as $w^TX$ increases.
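In R this is exactly what plogis computes, so the identity can be checked directly:
x <- seq(-3, 3, by = 0.5)
all.equal(plogis(x), 1/(1 + exp(-x)))   # TRUE: standard logistic CDF with mu = 0, s = 1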
54,099 | What is the relation between "conjugate priors" and the approximate inference? | Priors are not chosen for convenience but for reflecting one's own beliefs or absence thereof about the parameters of the model. There is thus no foundational reason for always choosing conjugate priors, whose main justification is computational. Furthermore,
Conjugate priors only exist for exponential family models. Outside these models, there is no easily handled prior and computation tools are always needed.
Variational Bayes approximations are themselves based on exponential family approximations to the exact model.
54,100 | What is the relation between "conjugate priors" and the approximate inference? | There's no relation whatsoever. If you use conjugate priors, then the solution exists in closed-form, so you don't need to use things like MCMC or approximate inference to obtain the solution.