idx                 int64          1–56k
question            stringlengths  15–155
answer              stringlengths  2–29.2k
question_cut        stringlengths  15–100
answer_cut          stringlengths  2–200
conversation        stringlengths  47–29.3k
conversation_cut    stringlengths  47–301
2,401
What are disadvantages of using the lasso for variable selection for regression?
If you only care about prediction error and don't care about interpretability, causal inference, model simplicity, tests on the coefficients, etc., why would you still want to use a linear regression model? You can use something like boosting on decision trees or support vector regression, get better prediction quality, and still avoid overfitting in both cases. In other words, the lasso may not be the best choice for pure prediction quality. If my understanding is correct, the lasso is intended for situations where you are still interested in the model itself, not only in its predictions: seeing which variables were selected, looking at their coefficients, interpreting them in some way, and so on. And for this, the lasso may not be the best choice in certain situations, as discussed in other answers here.
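The claim about prediction quality can be checked on simulated data. A minimal sketch (assuming the glmnet and gbm packages; the data, variable names and settings are purely illustrative) comparing the test error of the lasso with gradient-boosted trees when the true signal is nonlinear:

library(glmnet)
library(gbm)
set.seed(1)
n <- 400; p <- 10
x <- matrix(rnorm(n * p), n, p)
y <- sin(2 * x[, 1]) + x[, 2]^2 + rnorm(n, sd = 0.5)   # nonlinear truth
train <- 1:300; test <- 301:n
dat <- data.frame(y = y, x)

cv <- cv.glmnet(x[train, ], y[train], alpha = 1)        # lasso, lambda chosen by CV
pred_lasso <- predict(cv, x[test, ], s = "lambda.min")

fit_gbm <- gbm(y ~ ., data = dat[train, ], distribution = "gaussian",
               n.trees = 2000, interaction.depth = 3, shrinkage = 0.01)
pred_gbm <- predict(fit_gbm, dat[test, ], n.trees = 2000)

c(lasso_mse = mean((y[test] - pred_lasso)^2),
  gbm_mse   = mean((y[test] - pred_gbm)^2))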
2,402
What are disadvantages of using the lasso for variable selection for regression?
The LASSO encourages shrinking coefficients all the way to 0, i.e. dropping those variates from your model. In contrast, other regularization techniques such as ridge regression tend to keep all variates. So I'd recommend thinking about whether this dropping makes sense for your data. For example, consider setting up a clinical diagnostic test on either gene microarray data or vibrational spectroscopic data. With microarray data you'd expect some genes to carry relevant information, while lots of other genes are just noise with respect to your application; dropping those variates is a perfectly sensible idea. By contrast, vibrational spectroscopic data sets (while usually of similar dimensions to microarray data) tend to have the relevant information "smeared" over large parts of the spectrum (correlation). In this situation, asking the regularization to drop variates is not a particularly sensible approach, all the more so since other regularization techniques such as PLS are better adapted to this type of data. The Elements of Statistical Learning gives a good discussion of the LASSO and contrasts it with other regularization techniques.
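A small sketch of the contrast (assuming the glmnet package and simulated data): with the same inputs, the lasso (alpha = 1) sets most coefficients exactly to zero, while ridge (alpha = 0) shrinks them but keeps every variate.

library(glmnet)
set.seed(2)
n <- 100; p <- 50
x <- matrix(rnorm(n * p), n, p)
y <- drop(x[, 1:5] %*% rep(2, 5) + rnorm(n))   # only 5 informative variates

lasso <- cv.glmnet(x, y, alpha = 1)
ridge <- cv.glmnet(x, y, alpha = 0)

sum(as.vector(coef(lasso, s = "lambda.min")) != 0)  # few nonzero coefficients
sum(as.vector(coef(ridge, s = "lambda.min")) != 0)  # essentially all p + 1 kept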
2,403
What are disadvantages of using the lasso for variable selection for regression?
This is already quite an old question, but I feel that in the meantime most of the answers here have become quite outdated (and the one that's checked as the correct answer is plain wrong imho). First, in terms of getting good prediction performance it is not universally true that the LASSO is always better than stepwise. The paper "Extended Comparisons of Best Subset Selection, Forward Stepwise Selection, and the Lasso" by Hastie et al. (2017) provides an extensive comparison of forward stepwise, the LASSO and some LASSO variants like the relaxed LASSO, as well as best subset, and shows that stepwise is sometimes better than the LASSO. A variant of the LASSO, though - the relaxed LASSO - was the one that produced the highest model prediction accuracy under the widest range of circumstances. The conclusion about which is best depends a lot on what you consider best, e.g. whether that is the highest prediction accuracy or selecting the fewest false-positive variables.
There is a whole zoo of sparse learning methods, most of which are better than the LASSO. E.g. there is Meinshausen's relaxed LASSO, the adaptive LASSO, and SCAD- and MCP-penalized regression as implemented in the ncvreg package, which all have less bias than the standard LASSO and so are preferable. Furthermore, if you are interested in the absolute sparsest solution with the best prediction performance, then L0 penalized regression (aka best subset, i.e. based on penalizing the number of nonzero coefficients as opposed to the sum of the absolute values of the coefficients in the LASSO) is better than the LASSO. See e.g. the l0ara package, which approximates L0 penalized GLMs using an iterative adaptive ridge procedure and which, unlike the LASSO, also works very well with highly collinear variables, and the L0Learn package, which can fit L0 penalized regression models using coordinate descent, potentially in combination with an L2 penalty to regularize collinearity.
So, to come back to your original question of why not to use the LASSO for variable selection: (1) because the coefficients will be highly biased, which is improved in relaxed LASSO, MCP and SCAD penalized regression, and resolved completely in L0 penalized regression (which has a full oracle property, i.e. it can pick out the causal variables and return unbiased coefficients, also for $p > n$ cases); (2) because it tends to produce way more false positives than L0 penalized regression (in my tests l0ara, i.e. iterative adaptive ridge, performs best then, followed by L0Learn); and (3) because it cannot deal well with collinear variables (it would essentially just randomly select one of them) - iterative adaptive ridge / l0ara and the L0L2 penalties in L0Learn are much better at dealing with that.
Of course, in general, you'll still have to use cross-validation to tune your regularization parameter(s) to get optimal prediction performance, but that's not an issue. And you can even do high-dimensional inference on your parameters and calculate 95% confidence intervals on your coefficients if you like via nonparametric bootstrapping (even taking into account uncertainty in the selection of the optimal regularization if you also do your cross-validation on each bootstrapped dataset, though that becomes quite slow). Computationally, the LASSO is not slower to fit than stepwise approaches, by the way, certainly not if one uses highly optimized code with warm starts along the LASSO regularization path (you can compare for yourself using the fs command for forward stepwise and lasso for the LASSO in the bestsubset package).
The fact that stepwise approaches are still popular probably has to do with the mistaken belief of many that one could then just keep the final model and report its associated p-values - which in fact is not a correct thing to do, as this doesn't take into account the uncertainty introduced by the model selection, resulting in way too optimistic p-values. Hope this helps.
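A hedged sketch of some of these alternatives (assuming glmnet >= 3.0 for the relax = TRUE option and the ncvreg package for MCP; simulated data, illustrative settings only), comparing how many variables each method selects:

library(glmnet)
library(ncvreg)
set.seed(3)
n <- 200; p <- 100
x <- matrix(rnorm(n * p), n, p)
beta <- c(rep(1.5, 5), rep(0, p - 5))           # 5 true signals, 95 noise variables
y <- drop(x %*% beta + rnorm(n))

cv_lasso   <- cv.glmnet(x, y, alpha = 1)                 # plain lasso
cv_relaxed <- cv.glmnet(x, y, alpha = 1, relax = TRUE)   # relaxed lasso
cv_mcp     <- cv.ncvreg(x, y, penalty = "MCP")           # MCP-penalized regression

sum(as.vector(coef(cv_lasso,   s = "lambda.min")) != 0)  # lasso: typically many selected
sum(as.vector(coef(cv_relaxed, s = "lambda.min")) != 0)  # relaxed lasso: usually sparser
sum(coef(cv_mcp) != 0)                                   # MCP: usually close to the 5 true signals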
2,404
What are disadvantages of using the lasso for variable selection for regression?
If two predictors are highly correlated, the LASSO can end up dropping one of them rather arbitrarily. That's not very good when you want to make predictions for a population where those two predictors aren't highly correlated, and it is perhaps a reason for preferring ridge regression in those circumstances. You might also consider the standardization of predictors (needed to say when coefficients are "big" or "small") rather arbitrary, and be puzzled (like me) about sensible ways to standardize categorical predictors.
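A small sketch of the arbitrariness (assuming the glmnet package; simulated data): x1 and x2 are nearly identical, the lasso keeps one and drops the other, while ridge spreads the coefficient roughly equally over both.

library(glmnet)
set.seed(4)
n <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)          # nearly collinear copy of x1
x3 <- rnorm(n)
x  <- cbind(x1, x2, x3)
y  <- x1 + x2 + rnorm(n)

coef(cv.glmnet(x, y, alpha = 1), s = "lambda.min")  # lasso: one of x1/x2 near 0
coef(cv.glmnet(x, y, alpha = 0), s = "lambda.min")  # ridge: both kept, similar size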
2,405
What are disadvantages of using the lasso for variable selection for regression?
The lasso is only useful if you're restricting yourself to models that are linear in the parameters to be estimated. Stated another way, the lasso does not evaluate whether you have chosen the correct form of the relationship between the independent and dependent variable(s). It is very plausible that there are nonlinear, interaction or polynomial effects in an arbitrary data set. However, these alternative model specifications will only be evaluated if the user conducts that analysis; the lasso is not a substitute for doing so. For a simple example of how this can go wrong, consider a data set in which disjoint intervals of the independent variable predict alternating high and low values of the dependent variable. This will be challenging to sort out using conventional linear models, since there is no linear effect in the manifest variables present for analysis (though some transformation of the manifest variables may be helpful). Left in its manifest form, the lasso will incorrectly conclude that this feature is extraneous and zero out its coefficient, because there is no linear relationship. On the other hand, because there are axis-aligned splits in the data, a tree-based model like a random forest will probably do pretty well.
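A hedged sketch of this failure mode (assuming the glmnet and randomForest packages; simulated data): y alternates between high and low values over disjoint intervals of x, so there is no linear effect for the lasso to find, while a random forest captures the splits.

library(glmnet)
library(randomForest)
set.seed(5)
n <- 500
x <- runif(n, 0, 10)
y <- ifelse(floor(x) %% 2 == 0, 5, -5) + rnorm(n)   # alternating intervals
noise <- matrix(rnorm(n * 5), n, 5)                  # irrelevant extra predictors
X <- cbind(x, noise)

coef(cv.glmnet(X, y), s = "lambda.min")[2]           # coefficient on x is ~ 0
rf <- randomForest(X, y)
cor(predict(rf), y)^2                                 # the forest explains most of y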
2,406
What are disadvantages of using the lasso for variable selection for regression?
One practical disadvantage of the lasso and other regularization techniques is finding the optimal regularization coefficient, lambda. Using cross-validation to find this value can be just as computationally expensive as stepwise selection techniques.
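A minimal sketch of the tuning step being described (assuming the glmnet package; simulated data): lambda is chosen by k-fold cross-validation, which refits the whole lasso path once per fold.

library(glmnet)
set.seed(6)
x <- matrix(rnorm(200 * 50), 200, 50)
y <- drop(x[, 1:3] %*% rep(1, 3) + rnorm(200))

cv <- cv.glmnet(x, y, nfolds = 10)   # 10 path fits plus one on the full data
cv$lambda.min                         # the CV-selected regularization strength
plot(cv)                              # CV error curve over the lambda grid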
2,407
What are disadvantages of using the lasso for variable selection for regression?
I am not a LASSO expert, but I am an expert in time series. If you have time series or spatial data, I would studiously avoid a solution predicated on independent observations. Furthermore, if there are unknown deterministic effects that have played havoc with your data (level shifts, time trends, etc.), then the LASSO would be even less of a good hammer. In closing, when you have time series data you often need to segment the data when faced with parameters or an error variance that change over time.
2,408
What are disadvantages of using the lasso for variable selection for regression?
One big one is the difficulty of doing hypothesis testing. You can't easily figure out which variables are statistically significant with Lasso. With stepwise regression, you can do hypothesis testing to some degree, if you're careful about your treatment of multiple testing.
2,409
What are disadvantages of using the lasso for variable selection for regression?
I have always found variable reduction techniques to hurt predictive performance, especially for multi-class classification. Stepwise elimination methods are also not very effective with highly correlated predictors, and they are time-consuming too. It is a tough area to deal with, and it should be handled differently on a case-by-case basis. In my experience, dimensionality reduction techniques like LDA or PLS have worked well; however, they demand a huge memory allocation if the number of predictors is very large. Even running the LASSO on a large problem demands a huge memory allocation. Hence we should continuously look for more creative, statistically based approaches for reducing a very large number of predictors.
2,410
What are disadvantages of using the lasso for variable selection for regression?
There is a simple reason not to use the LASSO for variable selection: it just does not work as well as advertised. This is due to its fitting algorithm, which includes a penalty factor that penalizes the model for large regression coefficients. It seems like a good idea, as people think it always reduces model overfitting and improves predictions (on new data). In reality it very often does the opposite: it increases model under-fitting and weakens prediction accuracy. You can see many examples of that by doing an image search for "LASSO MSE graph." Whenever such a graph shows the lowest MSE at the very beginning of the x-axis, it shows a LASSO that has failed (increased model under-fitting). These unintended consequences are due to the penalty algorithm. Because of it, the LASSO has no way of distinguishing between a strong causal variable carrying predictive information with an associated high regression coefficient, and a weak variable with no explanatory or predictive value and a low regression coefficient. Often, the LASSO will prefer the weak variable over the strong causal variable. At times it may even flip the sign of a variable (shifting from a direction that makes sense to an opposite direction that does not). You can see many examples of that by doing an image search for "LASSO coefficient path".
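The two kinds of plots referred to above can be produced directly; a small sketch (assuming the glmnet package; simulated data):

library(glmnet)
set.seed(7)
x <- matrix(rnorm(200 * 20), 200, 20)
y <- drop(3 * x[, 1] + 0.2 * x[, 2] + rnorm(200))

fit <- glmnet(x, y)
plot(fit, xvar = "lambda", label = TRUE)  # the "LASSO coefficient path"

cv <- cv.glmnet(x, y)
plot(cv)                                  # the "LASSO MSE graph": CV error vs log(lambda)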
2,411
What is the intuition behind SVD?
Write the SVD of the matrix $X$ (real, $n\times p$) as $$ X = U D V^T $$ where $U$ is $n\times p$, $D$ is diagonal $p\times p$ and $V^T$ is $p\times p$. In terms of the columns of the matrices $U$ and $V$ we can write $X=\sum_{i=1}^p d_i u_i v_i^T$. That shows $X$ written as a sum of $p$ rank-1 matrices. What does a rank-1 matrix look like? Let's see: $$ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \begin{pmatrix} 4 & 5 & 6 \end{pmatrix} = \begin{pmatrix} 4 & 5 & 6 \\ 8 & 10 & 12 \\ 12 & 15 & 18 \end{pmatrix} $$ The rows are proportional, and the columns are proportional. Now think of $X$ as containing the grayscale values of a black-and-white image, each entry in the matrix representing one pixel. For instance, the following picture of a baboon. Then read this image into R and get the matrix part of the resulting structure, maybe using the library pixmap. If you want a step-by-step guide as to how to reproduce the results, you can find the code here. Calculate the SVD:

baboon.svd <- svd(bab)   # may take some time

How can we think about this? We get the $512 \times 512$ baboon image represented as a sum of $512$ simple images, each one only showing vertical and horizontal structure, i.e. each is an image of vertical and horizontal stripes! So, the SVD of the baboon represents the baboon image as a superposition of $512$ simple images, each one only showing horizontal/vertical stripes. Let us calculate a low-rank reconstruction of the image with $1$ and with $20$ components:

baboon.1 <- sweep(baboon.svd$u[, 1, drop=FALSE], 2, baboon.svd$d[1], "*") %*%
            t(baboon.svd$v[, 1, drop=FALSE])
baboon.20 <- sweep(baboon.svd$u[, 1:20, drop=FALSE], 2, baboon.svd$d[1:20], "*") %*%
             t(baboon.svd$v[, 1:20, drop=FALSE])

This results in the following two images. On the left we can easily see the vertical/horizontal stripes in the rank-1 image. Let us finally look at the "residual image", the image reconstructed (as above, code not shown) from the $20$ rank-one images with the lowest singular values. This is quite interesting: we see the parts of the original image that are difficult to represent as a superposition of vertical/horizontal lines, mostly diagonal nose hair and some texture, and the eyes!
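A self-contained variant of this reconstruction (base R only, using the built-in volcano matrix in place of the baboon image, so it runs without downloading anything):

z <- volcano                      # an 87 x 61 matrix of elevations, treated as an image
s <- svd(z)
rank_k <- function(k) {
  # sum of the first k rank-1 terms d_i * u_i * v_i^T
  s$u[, 1:k, drop = FALSE] %*% diag(s$d[1:k], k, k) %*% t(s$v[, 1:k, drop = FALSE])
}

par(mfrow = c(1, 3))
image(z,          main = "original")
image(rank_k(1),  main = "rank 1")
image(rank_k(20), main = "rank 20")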
2,412
What is the intuition behind SVD?
Let $A$ be a real $m \times n$ matrix. I'll assume that $m \geq n$ for simplicity. It's natural to ask in which direction $v$ does $A$ have the most impact (or the most explosiveness, or the most amplifying power). The answer is \begin{align} \tag{1}v_1 = \,\,& \arg \max_{v \in \mathbb R^n} \quad \| A v \|_2 \\ & \text{subject to } \, \|v\|_2 = 1. \end{align} A natural follow-up question is, after $v_1$, what is the next most explosive direction for $A$? The answer is \begin{align} v_2 = \,\,& \arg \max_{v \in \mathbb R^n} \quad \| A v \|_2 \\ & \text{subject to } \,\langle v_1, v \rangle = 0, \\ & \qquad \qquad \, \, \, \, \|v\|_2 = 1. \end{align} Continuing like this, we obtain an orthonormal basis $v_1, \ldots, v_n$ of $\mathbb R^n$. This special basis of $\mathbb R^n$ tells us the directions that are, in some sense, most important for understanding $A$. Let $\sigma_i = \|A v_i \|_2$ (so $\sigma_i$ quantifies the explosive power of $A$ in the direction $v_i$). Suppose that unit vectors $u_i$ are defined so that $$ \tag{2} A v_i = \sigma_i u_i \quad \text{for } i = 1, \ldots, n. $$ The equations (2) can be expressed concisely using matrix notation as $$ \tag{3} A V = U \Sigma, $$ where $V$ is the $n \times n$ matrix whose $i$th column is $v_i$, $U$ is the $m \times n$ matrix whose $i$th column is $u_i$, and $\Sigma$ is the $n \times n$ diagonal matrix whose $i$th diagonal entry is $\sigma_i$. The matrix $V$ is orthogonal, so we can multiply both sides of (3) by $V^T$ to obtain $$ A = U \Sigma V^T. $$ It might appear that we have now derived the SVD of $A$ with almost zero effort. None of the steps so far have been difficult. However, a crucial piece of the picture is missing -- we do not yet know that the columns of $U$ are pairwise orthogonal. Here is the crucial fact, the missing piece: it turns out that $A v_1$ is orthogonal to $A v_2$: $$ \tag{4} \langle A v_1, A v_2 \rangle = 0. $$ I claim that if this were not true, then $v_1$ would not be optimal for problem (1). Indeed, if (4) were not satisfied, then it would be possible to improve $v_1$ by perturbing it a bit in the direction $v_2$. Suppose (for a contradiction) that (4) is not satisfied. If $v_1$ is perturbed slightly in the orthogonal direction $v_2$, the norm of $v_1$ does not change (or at least, the change in the norm of $v_1$ is negligible). When I walk on the surface of the earth, my distance from the center of the earth does not change. However, when $v_1$ is perturbed in the direction $v_2$, the vector $A v_1$ is perturbed in the non-orthogonal direction $A v_2$, and so the change in the norm of $A v_1$ is non-negligible. The norm of $A v_1$ can be increased by a non-negligible amount. This means that $v_1$ is not optimal for problem (1), which is a contradiction. I love this argument because: 1) the intuition is very clear; 2) the intuition can be converted directly into a rigorous proof. A similar argument shows that $A v_3$ is orthogonal to both $A v_1$ and $A v_2$, and so on. The vectors $A v_1, \ldots, A v_n$ are pairwise orthogonal. This means that the unit vectors $u_1, \ldots, u_n$ can be chosen to be pairwise orthogonal, which means the matrix $U$ above is an orthogonal matrix. This completes our discovery of the SVD. To convert the above intuitive argument into a rigorous proof, we must confront the fact that if $v_1$ is perturbed in the direction $v_2$, the perturbed vector $$ \tilde v_1 = v_1 + \epsilon v_2 $$ is not truly a unit vector. (Its norm is $\sqrt{1 + \epsilon^2}$.) 
To obtain a rigorous proof, define $$ \bar v_1(\epsilon) = \sqrt{1 - \epsilon^2} v_1 + \epsilon v_2. $$ The vector $\bar v_1(\epsilon)$ is truly a unit vector. But as you can easily show, if (4) is not satisfied, then for sufficiently small values of $\epsilon$ we have $$ f(\epsilon) = \| A \bar v_1(\epsilon) \|_2^2 > \| A v_1 \|_2^2 $$ (assuming that the sign of $\epsilon$ is chosen correctly). To show this, just check that $f'(0) \neq 0$. This means that $v_1$ is not optimal for problem (1), which is a contradiction. (By the way, I recommend reading Qiaochu Yuan's explanation of the SVD here. In particular, take a look at "Key lemma # 1", which is what we discussed above. As Qiaochu says, key lemma # 1 is "the technical heart of singular value decomposition".)
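A quick numerical check of this argument (base R only): the first right singular vector maximizes $\|Av\|$ over unit vectors, $A v_1$ is orthogonal to $A v_2$, and perturbing $v_1$ toward $v_2$ as in the proof cannot increase $\|Av\|$.

set.seed(9)
A <- matrix(rnorm(8 * 5), 8, 5)
s <- svd(A)
v1 <- s$v[, 1]; v2 <- s$v[, 2]

sqrt(sum((A %*% v1)^2))            # equals the largest singular value ...
s$d[1]                             # ... reported here
sum((A %*% v1) * (A %*% v2))       # ~ 0: A v1 and A v2 are orthogonal

eps <- 0.05
v_eps <- sqrt(1 - eps^2) * v1 + eps * v2   # the perturbed unit vector from the proof
sqrt(sum((A %*% v_eps)^2)) <= s$d[1]       # TRUE: no improvement over v1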
2,413
What is the intuition behind SVD?
Take an hour of your day and watch this lecture. This guy is super straightforward; it's important not to skip any of it, because it all comes together in the end. Even if it might seem a little slow at the beginning, he is trying to pin down a critical point, which he does! I'll sum it up for you, rather than just giving you the three matrices that everyone does (because that was confusing me when I read other descriptions). Where do those matrices come from, and why do we set it up like that? The lecture nails it! Every matrix (ever in the history of everness) can be built by taking a base matrix of the same dimensions, rotating it, and stretching it - this is exactly what the SVD expresses. The three matrices people throw around represent a rotation (U), a scaling (sigma), and another rotation (V). The scaling matrix shows you which directions dominate; its diagonal entries are called the singular values. The decomposition is solving for U, sigma, and V.
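A tiny base-R check of this "rotate, stretch, rotate" reading for a 2 x 2 matrix (illustrative only): U and V are orthogonal (rotations or reflections), and sigma carries the stretch factors.

A <- matrix(c(2, 1, 1, 3), 2, 2)
s <- svd(A)
s$u %*% diag(s$d) %*% t(s$v)       # reconstructs A
round(t(s$u) %*% s$u, 10)          # identity: U is orthogonal
round(t(s$v) %*% s$v, 10)          # identity: V is orthogonal
s$d                                # the singular values (stretch factors)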
2,414
Cross-Validation in plain english?
Consider the following situation: I want to catch the subway to go to my office. My plan is to take my car, park at the subway station, and then take the train to my office. My goal is to catch the train at 8.15 am every day so that I can reach my office on time. I need to decide the following: (a) the time at which I need to leave home and (b) the route I will take to drive to the station. In the above example, I have two parameters (i.e., the time of departure from home and the route to take to the station), and I need to choose these parameters such that I reach the station by 8.15 am. In order to solve the above problem I may try out different sets of 'parameters' (i.e., different combinations of departure times and routes) on Mondays, Wednesdays, and Fridays to see which combination is the 'best' one. The idea is that once I have identified the best combination I can use it every day so that I achieve my objective.
Problem of Overfitting
The problem with the above approach is that I may overfit, which essentially means that the best combination I identify may in some sense be unique to Mondays, Wednesdays, and Fridays, and that combination may not work for Tuesdays and Thursdays. Overfitting may happen if, in my search for the best combination of times and routes, I exploit some aspect of the traffic situation on Mon/Wed/Fri which does not occur on Tue and Thu.
One Solution to Overfitting: Cross-Validation
Cross-validation is one solution to overfitting. The idea is that once we have identified our best combination of parameters (in our case, time and route), we test the performance of that set of parameters in a different context. Therefore, we may want to test on Tue and Thu as well to ensure that our choices work for those days too.
Extending the analogy to statistics
In statistics, we have a similar issue. We often use a limited set of data to estimate the unknown parameters. If we overfit, then our parameter estimates will work very well for the existing data but not as well when we use them in another context. Thus, cross-validation helps avoid this issue of overfitting by providing us some reassurance that the parameter estimates are not unique to the data we used to estimate them. Of course, cross-validation is not perfect. Going back to our subway example, it can happen that even after cross-validation our best choice of parameters does not work one month down the line because of various issues (e.g., construction, traffic volume changing over time, etc.).
2,415
Cross-Validation in plain english?
I think that this is best described with the following picture (in this case showing k-fold cross-validation): Cross-validation is a technique used to protect against overfitting in a predictive model, particularly in a case where the amount of data may be limited. In cross-validation, you make a fixed number of folds (or partitions) of the data, in turn run the analysis with each fold held out for validation (training on the remaining folds), and then average the resulting error estimates.
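A minimal base-R sketch of the procedure being described (simulated data, linear regression as the model): split the data into k folds, fit on k-1 folds, measure error on the held-out fold, and average.

set.seed(11)
n <- 150
x <- rnorm(n); y <- 2 * x + rnorm(n)
dat <- data.frame(x = x, y = y)

k <- 5
fold <- sample(rep(1:k, length.out = n))       # random fold assignment
cv_mse <- sapply(1:k, function(i) {
  fit  <- lm(y ~ x, data = dat[fold != i, ])   # train on k-1 folds
  pred <- predict(fit, newdata = dat[fold == i, ])
  mean((dat$y[fold == i] - pred)^2)            # error on the held-out fold
})
mean(cv_mse)   # the cross-validated error estimate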
2,416
Cross-Validation in plain english?
"Avoid learning your training data by heart by making sure the trained model performs well on independent data."
2,417
Cross-Validation in plain english?
Let's say you are investigating some process; you've gathered some data describing it and you have built a model (either statistical or ML, it doesn't matter). But now, how do you judge whether it is any good? It probably fits suspiciously well to the data it was built on, so no one will believe that your model is as splendid as you think. The first idea is to set aside a subset of your data and use it to test the model built by your method on the rest of the data. Now the result is definitely free of overfitting; nevertheless (especially for small data sets) you could have been (un)lucky and drawn easier (or harder) cases for testing, making them easier (or harder) to predict... Also, your accuracy/error/goodness estimate is of little use for model comparison or optimization, since you probably know nothing about its distribution. When in doubt, use brute force: just replicate the above process, gather a few estimates of accuracy/error/goodness and average them -- and so you obtain cross-validation. Besides a better estimate, you will also get a histogram, so you will be able to approximate the distribution or perform some non-parametric tests. And that's it; the details of the test-train splitting are the reason for the different types of CV, but except in rare cases and up to small differences in power they are rather equivalent. Indeed, this is a huge advantage, because it makes CV a bulletproof-fair method; it is very hard to cheat it.
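A short base-R sketch of this "replicate and average" idea (simulated data, linear regression as the model): repeat a random train/test split many times and look at the distribution of test errors.

set.seed(12)
n <- 200
x <- rnorm(n); y <- x + rnorm(n)
dat <- data.frame(x = x, y = y)

errs <- replicate(200, {
  test <- sample(n, 40)                          # hold out 20% of the data
  fit  <- lm(y ~ x, data = dat[-test, ])
  mean((dat$y[test] - predict(fit, dat[test, ]))^2)
})
mean(errs)      # the averaged error estimate
hist(errs)      # its distribution, as mentioned above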
2,418
Cross-Validation in plain english?
Since you don't have access to the test data at training time, and you want your model to do well on the unseen test data, you "pretend" that you have access to some test data by repeatedly subsampling a small part of your training data, holding out this set while training the model, and then treating the held-out set as a proxy for the test data (choosing the model parameters that give the best performance on the held-out data). You hope that by randomly sampling various subsets from the training data, you make them look like the test data (in the average-behavior sense), so that the learned model parameters will be good for the test data as well (i.e., your model generalizes well to unseen data).
2,419
Line of best fit does not look like a good fit. Why?
Is there a dependent variable?

The trend line in Excel comes from the regression of the dependent variable "lat" on the independent variable "lon." What you call a "common sense line" can be obtained when you don't designate a dependent variable and treat latitude and longitude equally. It can be obtained by applying PCA. In particular, it's one of the eigenvectors of the covariance matrix of these variables. You can think of it as the line minimizing the shortest distance from each point $(x_i,y_i)$ to the line itself, i.e. you draw a perpendicular from each observation to the line and minimize the sum of those distances.

Here's how you could do it in R:

para <- read.csv("para.csv")
plot(para)

# run PCA
pZ <- prcomp(para, rank. = 1)
# look at the 1st PC
pZ$rotation
#            PC1
# lon 0.09504313
# lat 0.99547316

cm <- colMeans(para)   # PCA was centered
cm
#        lon        lat
# -0.7129371 53.9368720

# recover the data from the 1st PC
pc1 <- t(pZ$rotation %*% t(pZ$x))
# add the centre back and draw the line
lines(pc1 + t(t(rep(1, 123))) %*% cm)

The trend line that you got from Excel is as much a common sense line as the eigenvector from PCA, once you understand that in the Excel regression the variables are not treated equally: there you're minimizing the vertical distance from $y_i$ to $y(x_i)$, where the y-axis is latitude and the x-axis is longitude.

Whether you want to treat the variables equally or not depends on your objective; it's not an inherent quality of the data. You have to pick the right statistical tool to analyze the data, which in this case means choosing between regression and PCA.

An answer to a question that wasn't asked

So, why does the (regression) trend line in Excel not seem to be a suitable tool for your case? The reason is that the trend line is an answer to a question that wasn't asked. Here's why.

Excel regression is trying to estimate the parameters of a line $lat = a + b \times lon$. So the first problem is that latitude is not even a function of longitude, strictly speaking (see the note at the end of the post) -- and that's not even the main issue. The real trouble is that you're not actually interested in the paraglider's location; you're interested in the wind. Imagine that there was no wind. A paraglider would then make the same circle over and over. What would the trend line be? Obviously, a flat horizontal line with zero slope -- yet that doesn't mean the wind is blowing in a horizontal direction!

Here's a simulated plot for when there's a strong wind along the y-axis while a paraglider is making perfect circles. You can see how the linear regression $y \sim x$ produces a nonsensical result: a horizontal trend line. Actually, its slope is even slightly negative, though not significantly so. The wind direction is shown with a red line.

R code for the simulation:

t <- 1:123
a <- 1   #1
b <- 0   #1/10
y <- 10 * sin(t) + a * t
x <- 10 * cos(t) + b * t
plot(x, y, xlim = c(-60, 60))
xp <- -60:60
lines(b * t, a * t, col = 'red')
model <- lm(y ~ x)
lines(xp, xp * model$coefficients[2] + model$coefficients[1])

So, the direction of the wind is clearly not aligned with the trend line at all. They're linked, of course, but in a nontrivial way. Hence my statement that the Excel trend line is an answer to some question, but not the one you asked.

Why PCA?

As you noted, there are at least two components to the motion of a paraglider: the drift with the wind and the circular motion controlled by the paraglider. This is clearly seen when you connect the dots on your plot. On one hand, the circular motion is really a nuisance to you: you're interested in the wind. 
On the other hand, you don't observe the wind speed; you only observe the paraglider. So your objective is to infer the unobservable wind from the observable readings of the paraglider's location. This is exactly the situation where tools such as factor analysis and PCA can be useful. The aim of PCA is to isolate a few factors that determine the multiple outputs by analyzing the correlations in the outputs. It's effective when the output is linked to the factors linearly, which happens to be the case in your data: the wind drift simply adds to the coordinates of the circular motion, which is why PCA works here.

PCA setup

So, we established that PCA should have a chance here, but how will we actually set it up? Let's start by adding a third variable, time. We're going to assign times 1 to 123 to the 123 observations, assuming a constant sampling frequency. Here's what the 3D plot of the data looks like, revealing its spiral structure.

The next plot shows the imaginary center of rotation of the paraglider as brown circles. You can see how it drifts on the lat-lon plane with the wind, while the paraglider, shown with a blue dot, circles around it. Time is on the vertical axis. I connected the center of rotation to the corresponding location of the paraglider, showing only the first two circles.

The corresponding R code:

library(plotly)
para <- read.csv("para.csv")
n <- 24
para$t <- 1:123   # add time parameter

# run PCA
pZ3 <- prcomp(para)
c3 <- colMeans(para)   # PCA was centered
# look at PCs in columns
pZ3$rotation

# get the imaginary center of rotation
pc31 <- t(pZ3$rotation[, 1] %*% t(pZ3$x[, 1]))
eye <- pc31 + t(t(rep(1, 123))) %*% c3
eyedata <- data.frame(eye)

p <- plot_ly(x = para[1:n, 1], y = para[1:n, 2], z = para[1:n, 3],
             mode = "lines+markers", type = "scatter3d") %>%
  layout(showlegend = FALSE,
         scene = list(xaxis = list(title = 'lat'),
                      yaxis = list(title = 'lon'),
                      zaxis = list(title = 't'))) %>%
  add_trace(x = eyedata[1:n, 1], y = eyedata[1:n, 2], z = eyedata[1:n, 3],
            mode = "markers", type = "scatter3d")
for (i in 1:n) {
  p <- add_trace(p, x = c(eyedata[i, 1], para[i, 1]),
                 y = c(eyedata[i, 2], para[i, 2]),
                 z = c(eyedata[i, 3], para[i, 3]),
                 color = "black", mode = "lines", type = "scatter3d")
}
subplot(p)

The drift of the center of the paraglider's rotation is caused mainly by the wind, and the path and speed of the drift are correlated with the direction and speed of the wind -- the unobservable variables of interest. This is what the drift looks like when projected onto the lat-lon plane.

PCA Regression

So, earlier we established that regular linear regression doesn't seem to work very well here. We also figured out why: it doesn't reflect the underlying process, because the paraglider's motion is highly nonlinear -- a combination of circular motion and a linear drift. We also discussed that in this situation factor analysis might be helpful. Here's an outline of one possible approach to modeling this data: PCA regression. But first I'll show you the PCA regression fitted curve.

It has been obtained as follows. Run PCA on the data set with the extra column t=1:123, as discussed earlier. You get three principal components. The first one is simply t. The second corresponds to the lon column and the third to the lat column. I fit the latter two principal components to a variable of the form $a\sin(\omega t+\varphi)$, where $\omega,\varphi$ are extracted from a spectral analysis of the components. They happen to have the same frequency but different phases, which is not surprising given the circular motion. That's it. 
To get the fitted values, you recover the data from the fitted components by multiplying the predicted principal components by the transpose of the PCA rotation matrix. My R code above shows parts of the procedure, and the rest you can figure out easily.

Conclusion

It's interesting to see how powerful PCA and other simple tools are when it comes to physical phenomena where the underlying processes are stable and the inputs translate into outputs via linear (or linearized) relationships. In our case the circular motion is very nonlinear, but we easily linearized it by using sine/cosine functions of the time parameter t. My plots were produced with just a few lines of R code, as you saw.

The regression model should reflect the underlying process; only then can you expect its parameters to be meaningful. If this is a paraglider drifting in the wind, then a simple scatter plot like the one in the original question hides the time structure of the process. Also, the Excel regression was a cross-sectional analysis, for which linear regression works best, while your data is a time series, where the observations are ordered in time. Time series analysis must be applied here, and that is what was done in the PCA regression.

Notes on a function

Since the paraglider is making circles, there will be multiple latitudes corresponding to a single longitude. In mathematics a function $y=f(x)$ maps a value $x$ to a single value $y$. It's a many-to-one relationship, meaning that multiple $x$ values may correspond to the same $y$, but a single $x$ cannot correspond to multiple $y$ values. That is why $lat=f(lon)$ is not a function, strictly speaking.
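The reconstruction step described in the first paragraph above can be written in a couple of lines; this is a generic sketch in Python/numpy (my own, not the author's code, whose R counterpart appears earlier), where rotation holds the principal components in its columns, as in prcomp$rotation:

import numpy as np

def reconstruct(fitted_scores, rotation, center):
    # fitted_scores: (n, k) smoothed/fitted principal-component scores
    # rotation:      (p, k) loadings, columns are the principal components
    # center:        (p,)   column means removed before the PCA
    return fitted_scores @ rotation.T + center   # back to the original lat/lon/t coordinates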
2,420
Line of best fit does not look like a good fit. Why?
The answer probably has to do with how you are mentally judging the distance to the regression line. Standard (Type 1) regression minimizes the squared error, where error is calculated based on vertical distance to the line. Type 2 regression may be more analogous to your judgement of the best line: in it, the squared error minimized is the perpendicular distance to the line.

There are a number of consequences to this difference. One important one is that if you swap the X- and Y-axes in your plot and refit the line, you will get a different relationship between the variables for Type 1 regression. For Type 2 regression, the relationship remains the same.

My impression is that there is a fair amount of debate about when to use Type 1 vs Type 2 regression, so I suggest reading carefully about the differences before deciding which to apply. Type 1 regression is frequently recommended in cases where one axis is either controlled experimentally, or at least measured with far less error than the other. If these conditions are not met, Type 1 regression will bias slopes towards 0 and therefore Type 2 regression is recommended. However, with sufficient noise in both axes, Type 2 regression apparently tends to bias them towards 1. Warton et al. (2006) and Smith (2009) are good sources for understanding the debate. Also note that there are several subtly different methods that fall within the broad category of Type 2 regression (Major Axis, Reduced Major Axis, and Standard Major Axis regression), and that terminology about the specific methods is inconsistent.

Warton, D. I., I. J. Wright, D. S. Falster, and M. Westoby. 2006. Bivariate line-fitting methods for allometry. Biol. Rev. 81: 259–291. doi:10.1017/S1464793106007007

Smith, R. J. 2009. On the use and misuse of the reduced major axis for line-fitting. Am. J. Phys. Anthropol. 140:476–486. doi:10.1002/ajpa.21090

EDIT: @amoeba points out that what I am calling Type 2 regression above is also known as orthogonal regression; this may be the more appropriate term. As I said above, the terminology in this area is inconsistent, which warrants extra care.
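The asymmetry mentioned above is easy to check numerically; here is a small illustrative sketch (my own, not from the cited papers) comparing the ordinary least-squares slope with a major-axis (Type 2) slope obtained from the leading eigenvector of the covariance matrix:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2 * x + rng.normal(scale=2, size=500)   # noisy response

def ols_slope(a, b):
    # slope of the regression of b on a: minimises vertical distances
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

def major_axis_slope(a, b):
    # direction of the leading eigenvector of the covariance matrix:
    # minimises perpendicular distances
    vals, vecs = np.linalg.eigh(np.cov(a, b))
    v = vecs[:, np.argmax(vals)]
    return v[1] / v[0]

print(ols_slope(x, y), 1 / ols_slope(y, x))               # differ: OLS depends on which axis is "y"
print(major_axis_slope(x, y), 1 / major_axis_slope(y, x)) # agree: the fitted line is the same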
2,421
Line of best fit does not look like a good fit. Why?
The question that Excel tries to answer is: "Assuming that y is dependent on x, which line predicts y best?" The answer is that because of the huge variations in y, no line could possibly be particularly good, and what Excel displays is the best you can do. If you take your proposed red line and continue it out to x = -0.714 and x = -0.712, you will find that its values are way, way off the chart, at a huge distance from the corresponding y values. The question that Excel answers is not "which line is closest to the data points", but "which line is best at predicting y values from x values", and it does this correctly.
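That claim is easy to verify on synthetic data shaped like the flight track (a circle drifting along y; this stand-in data is my own, not the original para.csv): the flat least-squares line really does have a smaller sum of squared vertical errors than a steep line drawn through the cloud.

import numpy as np

t = np.arange(123)
x = 10 * np.cos(t)
y = 10 * np.sin(t) + t                      # circular motion plus a drift along y

slope, intercept = np.polyfit(x, y, 1)      # what Excel's trendline computes
sse_ols = np.sum((y - (intercept + slope * x)) ** 2)

steep = y.mean() + 5.0 * (x - x.mean())     # an arbitrary steep "intuitive" line for comparison
sse_steep = np.sum((y - steep) ** 2)

print(sse_ols < sse_steep)                  # True: the flat line predicts y better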
2,422
Line of best fit does not look like a good fit. Why?
I don't want to add anything to the other answers, but I do want to say that you have been led astray by bad terminology, in particular the term "line of best fit" which is used in some statistics courses. Intuitively, a "line of best fit" would look like your red line. But the line produced by Excel is not a "line of best fit"; it is not even trying to be. It is a line that answers the question: given the value of x, what is my best possible prediction for y? or alternatively, what is the average y value for each x value? Notice the asymmetry here between x and y; using the name "line of best fit" obscures this. So does Excel's use of "trendline". It is explained very well at the following link: https://www.stat.berkeley.edu/~stark/SticiGui/Text/regression.htm You might want something more like what is called "Type 2" in the answer above, or "SD Line" at the Berkeley stats course page.
2,423
Line of best fit does not look like a good fit. Why?
Part of the optical issue comes from the different scales - if you use the same scale on both axes, the plot will already look different. In other words, you can make most such 'best fit' lines look 'unintuitive' by stretching out one axis scale.
2,424
Line of best fit does not look like a good fit. Why?
A few individuals have noted that the problem is visual - the graphical scaling employed produces misleading information. More specifically, the scaling of "lon" is such that the data appear to form a tight spiral, which suggests the regression line provides a poor fit (an assessment with which I agree; the red line you drew would provide lower squared errors if the data were shaped in the manner presented). Below I provide a scatterplot created in Excel with the scaling for "lon" altered so it does not produce the tight spiral in your scatterplot. With this change, the regression line now provides a better visual fit and, I think, helps demonstrate how the scaling in the original scatterplot provided a misleading assessment of fit. I think regression works well here; I don't think a more complex analysis is needed. For anyone interested, I have plotted the data using a mapping tool and show the regression fitted to the data. The red dots are the recorded data and the green is the regression line. And here are the same data in a scatter plot with the regression line; here lat is treated as the dependent variable and the lat scores are reversed to fit with the geographic profile.
2,425
Line of best fit does not look like a good fit. Why?
You are confusing ordinary least squares (OLS) regression (which minimizes the sum of the squared deviations about the predicted values, (observed - predicted)^2) with major axis regression (which minimizes the sum of squares of the perpendicular distances between each point and the regression line; this is sometimes referred to as Type II regression, orthogonal regression or standardized principal component regression). If you want to compare the two approaches in R, just check out

data <- read.csv("https://pastebin.com/raw/4TsstQYm")
require(lmodel2)
fit <- lmodel2(lat ~ lon, data = data)
plot(fit, method = "OLS")   # ordinary least squares regression
plot(fit, method = "MA")    # major axis regression

What you find most intuitive (your red line) is just the major axis regression, which visually speaking is indeed the one that looks most logical, as it minimizes the perpendicular distance to your points. OLS regression will only appear to minimize the perpendicular distance to your points if the x and y variables are on the same measurement scale and/or have the same amount of error (you can see this simply from Pythagoras' theorem). In your case, your y variable has much more spread, hence the difference...
2,426
How to normalize data between -1 and 1?
With: $$ x' = \frac{x - \min{x}}{\max{x} - \min{x}} $$ you normalize your feature $x$ in $[0,1]$. To normalize in $[-1,1]$ you can use: $$ x'' = 2\frac{x - \min{x}}{\max{x} - \min{x}} - 1 $$ In general, you can always get a new variable $x'''$ in $[a,b]$: $$ x''' = (b-a)\frac{x - \min{x}}{\max{x} - \min{x}} + a $$ And in case you want to bring a variable back to its original value you can do it because these are linear transformations and thus invertible. For example: $$ x = (x''' - a)\frac{(\max{x} - \min{x})}{b-a} + \min{x} $$

An example in Python:

import numpy as np

x = np.array([1, 3, 4, 5, -1, -7])

# goal : range [0, 1]
x1 = (x - min(x)) / (max(x) - min(x))
print(x1)
>>> [0.66666667 0.83333333 0.91666667 1. 0.5 0.]
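The same pattern extends directly to the [-1, 1] and general [a, b] cases; here is a short continuation of that example (my own addition, not part of the original answer):

import numpy as np

x = np.array([1, 3, 4, 5, -1, -7])

x2 = 2 * (x - x.min()) / (x.max() - x.min()) - 1              # range [-1, 1]

a, b = -1.0, 1.0                                              # general [a, b]
x3 = (b - a) * (x - x.min()) / (x.max() - x.min()) + a
x_back = (x3 - a) * (x.max() - x.min()) / (b - a) + x.min()   # inverse transform

print(x2)                       # approximately [0.33, 0.67, 0.83, 1, 0, -1]
print(np.allclose(x, x_back))   # True: the original values are recovered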
2,427
How to normalize data between -1 and 1?
I tested on randomly generated data, and \begin{equation} X_{out} = (b-a)\frac{X_{in} - \min{X_{in}}}{\max{X_{in}} - \min{X_{in}}} + a \end{equation} does not preserve the shape of the distribution. I would really like to see a proper derivation of this using functions of random variables. The approach that did preserve the shape for me was: \begin{equation} X_{out} = \frac{X_{in} - \mu_{in}}{\sigma_{in}} \cdot \sigma_{out} + \mu_{out} \end{equation} where \begin{equation} \sigma_{out} = \frac{b-a}{6} \end{equation} (I admit that using 6 is a bit dirty) and \begin{equation} \mu_{out} = \frac{b+a}{2} \end{equation} and $a$ and $b$ are the bounds of the desired range; so as per the original question, $a=-1$ and $b=1$. I arrived at this result from the reasoning \begin{equation} Z_{out} = Z_{in} \end{equation} \begin{equation} \frac{X_{out} - \mu_{out}}{\sigma_{out}} = \frac{X_{in} - \mu_{in}}{\sigma_{in}} \end{equation}
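As a concrete sketch of that recipe (my own code; note the "divide the range by 6" rule and the fact that there is no hard guarantee the output stays inside [a, b]):

import numpy as np

def rescale_by_moments(x, a=-1.0, b=1.0):
    # target mean (a+b)/2 and target sd (b-a)/6 ("the range is about 6 sigma");
    # values more than 3 sd from the mean will still fall outside [a, b]
    mu_out = (a + b) / 2
    sigma_out = (b - a) / 6
    return (x - x.mean()) / x.std() * sigma_out + mu_out

y = rescale_by_moments(np.random.default_rng(0).normal(size=1000))
print(round(y.mean(), 3), round(y.std(), 3))   # ~0.0 and ~0.333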
2,428
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
If you check the references below you'll find quite a bit of variation in the background, though there are some common elements.

Those numbers are at least partly based on some comments from Fisher, where he said (while discussing a level of 1/20):

"It is convenient to take this point as a limit in judging whether a deviation is to be considered significant or not. Deviations exceeding twice the standard deviation are thus formally regarded as significant."

$\quad$ Fisher, R.A. (1925) Statistical Methods for Research Workers, p. 47

On the other hand, he was sometimes more broad:

"If one in twenty does not seem high enough odds, we may, if we prefer it, draw the line at one in fifty (the 2 per cent point), or one in a hundred (the 1 per cent point). Personally, the writer prefers to set a low standard of significance at the 5 per cent point, and ignore entirely all results which fail to reach this level. A scientific fact should be regarded as experimentally established only if a properly designed experiment rarely fails to give this level of significance."

$\quad$ Fisher, R.A. (1926) The arrangement of field experiments. $\quad$ Journal of the Ministry of Agriculture, p. 504

Fisher also used 5% for one of his book's tables, but most of his other tables covered a larger variety of significance levels. Some of his comments have suggested more or less strict (i.e. lower or higher alpha level) approaches in different situations.

That sort of discussion led to a tendency to produce tables focusing on the 5% and 1% significance levels (sometimes with others, like 10%, 2% and 0.5%), for want of any other 'standard' values to use. However, in this paper, Cowles and Davis suggest that the use of 5% - or something close to it at least - goes back further than Fisher's comment.

In short, our use of 5% (and to a lesser extent 1%) is pretty much arbitrary convention, though clearly a lot of people seem to feel that for many problems they're in the right kind of ballpark. There's no reason either particular value should be used in general.

Further references:

Dallal, Gerard E. (2012). The Little Handbook of Statistical Practice. - Why 0.05?

Stigler, Stephen (December 2008). "Fisher and the 5% level". Chance 21 (4): 12. available here

(Between them, you get a fair bit of background - it does look like there's a good case for thinking significance levels at least in the general ballpark of 5% - say between 2% and 10% - had been more or less in the air for a while.)
2,429
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
I have to give a non-answer (same as here): "... surely, God loves the .06 nearly as much as the .05. Can there be any doubt that God views the strength of evidence for or against the null as a fairly continuous function of the magnitude of p?" (p.1277) Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44(10), 1276-1284. pdf The paper contains some more discussion on this issue.
2,430
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
I believe there is some underlying psychology behind the 5%. I have to say I don't remember where I picked this up, but here's the exercise I used to do with every undergrad intro stats class.

Imagine a stranger approaches you in a pub and tells you: "I have a biased coin that produces heads more often than tails. Would you like to buy one from me, so that you could bet with your buddies and make money on that?" You hesitantly agree to take a look, and toss the coin, say, 10 times. Question: how many times does it have to land heads/tails to convince you that it is biased?

Then I take a show of hands: who would be convinced that the coin is biased if the split is 5/5? 4/6? 3/7? 2/8? 1/9? 0/10? Well, the first two or three won't convince anybody, and the last one would convince everybody; 2/8 and 1/9 would convince most people, though. Now, if you look up the binomial table, 2/8 is 5.5%, and 1/9 is 1%. QED. (A quick check of those two tail probabilities follows this answer.)

If anybody is teaching an intro undergrad course right now, I would encourage you to run this exercise, too, and post your results as comments, so that we could accumulate a large body of meta-analysis results and publish them at least in The American Statistician's Teaching Corner. Feel free to vary the $n$ and the one-sided vs. two-sided conditions!

In another answer, Glen_b quotes Fisher providing the discussion about whether these magic numbers should be modified depending on how serious the problem is, so please don't make it "There's a new treatment for your sister's leukemia, but it would either cure her in 3 months or kill her in 3 days, so let's flip some coins" -- this would look as silly as the infamous xkcd comic that even Andrew Gelman did not like that much.

Speaking of coins and Gelman, TAS had a very curious paper by Gelman and Nolan titled "You can load a die, but you can't bias a coin", putting forth an argument that a coin, flipped in the air or spun on a tabletop, will spend about half of the time heads up and the other half tails up, so it is difficult to come up with a physical mechanism to seriously bias a coin. (This clearly was pub-originated research, as they experimented with beer bottle caps.) On the other hand, loading a die is relatively easy to do, and I gave my students an exercise in that with some 1 cm / half-inch wooden cubes from a local hobby store and sandpaper, asking them to load the die and prove to me it is loaded -- which was an exercise in the Pearson $\chi^2$ test for proportions and its power.
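The two tail probabilities quoted above from the binomial table are easy to check (a quick verification of my own, assuming scipy is available):

from scipy.stats import binom

print(binom.sf(7, 10, 0.5))   # P(X >= 8 heads out of 10) ~ 0.0547, the "2/8" split
print(binom.sf(8, 10, 0.5))   # P(X >= 9 heads out of 10) ~ 0.0107, the "1/9" split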
2,431
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
5% seems to have been rounded from 4.56% by Fisher, corresponding to "the tail areas of the curve beyond the mean plus three or minus three probable errors" (Hurlbert & Lombardi, 2009). Another element of the story seems to be the reproduction of tables with critical values (Pearson et al., 1990; Lehmann, 1993). Fisher was not given permission by Pearson to use his tables (probably both because of Pearson's marketing of his own publication (Hurlbert & Lombardi, 2009) and because of the problematic nature of their relationship).

Hurlbert, S. H., & Lombardi, C. M. (2009, October). Final collapse of the Neyman-Pearson decision theoretic framework and rise of the neoFisherian. In Annales Zoologici Fennici (Vol. 46, No. 5, pp. 311-349). Finnish Zoological and Botanical Publishing.

Lehmann, E. L. (1993). The Fisher, Neyman-Pearson theories of testing hypotheses: One theory or two? Journal of the American Statistical Association, 88(424), 1242-1249.

Pearson, E. S., Plackett, R. L., & Barnard, G. A. (1990). Student: A statistical biography of William Sealy Gosset. Oxford University Press, USA.

See also:

Gigerenzer, G. (2004). Mindless statistics. The Journal of Socio-Economics, 33(5), 587-606.

Hubbard, R., & Lindsay, R. M. (2008). Why P values are not a useful measure of evidence in statistical significance testing. Theory & Psychology, 18(1), 69-88.
2,432
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
Seems to me the answer lies more in the game theory of research than in the statistics. Having 1% and 5% burned into the general consciousness means that researchers aren't effectively free to choose significance levels that suit their predispositions. Say we saw a paper with a p-value of .055 where the significance level had been set at 6% - questions would be asked. 1% and 5% provide a form of credible commitment.
2,433
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
The only correct number is .04284731 ...which is a flippant response intended to mean that the choice of .05 is essentially arbitrary. I usually just report the p value, rather than what the p value is greater or less than. "Significance" is a continuous variable, and, in my opinion, discretizing it often does more harm than good. I mean, if p=.13, you've got more confidence than if p=.21 and less than if p=.003
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
The only correct number is .04284731 ...which is a flippant response intended to mean that the choice of .05 is essentially arbitrary. I usually just report the p value, rather than what the p value
Regarding p-values, why 1% and 5%? Why not 6% or 10%? The only correct number is .04284731 ...which is a flippant response intended to mean that the choice of .05 is essentially arbitrary. I usually just report the p value, rather than what the p value is greater or less than. "Significance" is a continuous variable, and, in my opinion, discretizing it often does more harm than good. I mean, if p=.13, you've got more confidence than if p=.21 and less than if p=.003
Regarding p-values, why 1% and 5%? Why not 6% or 10%? The only correct number is .04284731 ...which is a flippant response intended to mean that the choice of .05 is essentially arbitrary. I usually just report the p value, rather than what the p value
2,434
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
My personal hypothesis is that 0.05 (or 1 in 20) is associated with a t/z value of (very close to) 2. Using 2 is nice, because it's very easy to spot if your result is statistically significant. There aren't other confluences of round numbers.
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
My personal hypothesis is that 0.05 (or 1 in 20) is associated with a t/z value of (very close to) 2. Using 2 is nice, because it's very easy to spot if your result is statistically significant. There
Regarding p-values, why 1% and 5%? Why not 6% or 10%? My personal hypothesis is that 0.05 (or 1 in 20) is associated with a t/z value of (very close to) 2. Using 2 is nice, because it's very easy to spot if your result is statistically significant. There aren't other confluences of round numbers.
Regarding p-values, why 1% and 5%? Why not 6% or 10%? My personal hypothesis is that 0.05 (or 1 in 20) is associated with a t/z value of (very close to) 2. Using 2 is nice, because it's very easy to spot if your result is statistically significant. There
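To make the "z close to 2" observation above concrete, here is a small check (my addition; it assumes SciPy is available): the two-sided critical values for the 5% and 1% conventions are about 1.96 and 2.58, and the exact two-sided p-value at z = 2 is about 0.0455.

```python
from scipy.stats import norm

# Two-sided critical |z| values for the conventional significance levels.
for alpha in (0.05, 0.01):
    print(f"alpha = {alpha}: critical |z| = {norm.ppf(1 - alpha / 2):.3f}")
# alpha = 0.05: critical |z| = 1.960
# alpha = 0.01: critical |z| = 2.576

# The exact two-sided p-value at z = 2, i.e. why "about 2" works as a 5% rule of thumb.
print(f"p-value at z = 2: {2 * (1 - norm.cdf(2)):.4f}")    # 0.0455
```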
2,435
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
A recent article sheds some light on the arbitrariness of $p$-values; the selection of two thresholds was motivated, at least in part, as a work-around to a dispute over publishing rights. Briefly, Fisher sought to use continuous-valued $p$-values as a characterization of strength of evidence. But he would not be able to publish accompanying tables to aid in their computation because of a copyright claim. To avoid copyright infringement, Fisher dichotomized $p$-values into "significant" and "non-significant." This meant he could publish critical values alone, without reproducing the entire tables. An awareness of the history of $p$-values might help deflate their swollen stature and encourage more judicious use. We were surprised to learn, in the course of writing this article, that the $p < 0.05$ cutoff was established as a competitive response to a disagreement over book royalties between two foundational statisticians. In the early 1920s, Kendall Pearson [sic -- I believe this is a typo for Karl Pearson, a prominent statistician who published mathematical and statistical tables in the 1920s], whose income depended on the sale of extensive statistical tables, was unwilling to allow Ronald A. Fisher to use them in his new book. To work around this barrier, Fisher created a method of inference based on only two values: $p$-values of 0.05 and 0.01 (Hurlbert and Lombardi, 2009). Fisher himself later admitted that Pearson's more continuous method of inference was better than his binary approach: "no scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects [null] hypotheses; he rather gives his mind to each particular case in the light of his evidence and ideas" (Hurlbert and Lombardi, 2009: 316). A fair interpretation of this history is that we use $p$-values at least in part because a statistician in the 1920s was afraid that sharing his work would undermine his income (Hurlbert and Lombardi, 2009). Following Fisher, we recommend that authors report $p$-values and refrain from emphasizing thresholds. from Brent Goldfarb and Andrew W. King. "Scientific Apophenia in Strategic Management Research: Significance & Mistaken Inference." Strategic Management Journal, vol. 37, no. 1, Wiley, 2016, pp. 167–76. Now, this passage does not answer the titular question "Why did Fisher choose 0.05 and 0.01 instead of 0.06 or 0.1?" After all, Fisher could have chosen to publish his book using 0.06 and 0.1 in place of 0.05 and 0.01 (or indeed he could have chosen any other probabilities). However, this passage does show that Fisher understood that the choice was arbitrary in its very nature, and that a single threshold for adjudicating all statistical inference is unsuitable. We might imagine a dramatically different statistical practice around hypothesis testing and inference if Fisher were instead able to publish Pearson's statistical tables! And while we're imagining some alternative worlds, we might also explore whether "significance & non-significance" are essential concepts for inference. Null hypothesis testing has no inherent requirement that $\alpha$ be specified, or that the "significant/non-significant" terminology be adopted. Fisher may have been impelled to those conventions, however, not only by historical antecedents but also by a very practical and personal obstacle.
Kendall (1963) relates that "He [Fisher] himself told me that when he was writing Statistical Methods for Research Workers he applied to Pearson for permission to reproduce Elderton's tables of chi-squared and that it was refused. This was perhaps not simply a personal matter because the hard struggle which Pearson had for long experienced in obtaining funds for printing and publishing statistical tables had made him most unwilling to grant anyone permission to reproduce. He was afraid of the effect on sales of his Tables for Statisticians and Biometricians [K. Pearson 1914] on which he relied to secure money for further table publication. It seems, however, to have been this refusal which first directed Fisher's thoughts towards the alternative form of tabulation with quantiles as argument, a form which he subsequently adopted for all his tables and which has become common practice." This is what Fisher referred to when he explained the absence from his book of more extended tables "owing to copyright restrictions" (Fisher 1925: 78, 1958: 79). Fisher did not invent the "significant/non-significant" dichotomy, but his books and novel tabulations of critical values of test statistics played a large role in its rapid and wide dissemination. from Hurlbert, Stuart H., and Celia M. Lombardi. 2009. “Final Collapse of the Neyman-Pearson Decision Theoretic Framework and Rise of the neoFisherian.” Annales Zoologici Fennici 46 (5): 311–49.
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
A recent article sheds some light on the arbitrariness of $p$-values; the selection of two thresholds was motivated, at least in part, as a work-around to a dispute over publishing rights. Briefly, Fi
Regarding p-values, why 1% and 5%? Why not 6% or 10%? A recent article sheds some light on the arbitrariness of $p$-values; the selection of two thresholds was motivated, at least in part, as a work-around to a dispute over publishing rights. Briefly, Fisher sought to use continuous-valued $p$-values as a characterization of strength of evidence. But he would not be able to publish accompanying tables to aid in their computation because of a copyright claim. To avoid copyright infringement, Fisher dichotomized $p$-values into "significant" and "non-significant." This meant he could publish critical values alone, without reproducing the entire tables. An awareness of the history of $p$-values might help deflate their swollen stature and encourage more judicious use. We were surprised to learn, in the course of writing this article, that the $p < 0.05$ cutoff was established as a competitive response to a disagreement over book royalties between two foundational statisticians. In the early 1920s, Kendall Pearson [sic -- I believe this is a typo for Karl Pearson, a prominent statistician who published mathematical and statistical tables in the 1920s], whose income depended on the sale of extensive statistical tables, was unwilling to allow Ronald A. Fisher to use them in his new book. To work around this barrier, Fisher created a method of inference based on only two values: $p$-values of 0.05 and 0.01 (Hurlbert and Lombardi, 2009). Fisher himself later admitted that Pearson's more continuous method of inference was better than his binary approach: "no scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects [null] hypotheses; he rather gives his mind to each particular case in the light of his evidence and ideas" (Hurlbert and Lombardi, 2009: 316). A fair interpretation of this history is that we use $p$-values at least in part because a statistician in the 1920s was afraid that sharing his work would undermine his income (Hurlbert and Lombardi, 2009). Following Fisher, we recommend that authors report $p$-values and refrain from emphasizing thresholds. from Brent Goldfarb and Andrew W. King. "Scientific Apophenia in Strategic Management Research: Significance & Mistaken Inference." Strategic Management Journal, vol. 37, no. 1, Wiley, 2016, pp. 167–76. Now, this passage does not answer the titular question "Why did Fisher choose 0.05 and 0.01 instead of 0.06 or 0.1?" After all, Fisher could have chosen to publish his book using 0.06 and 0.1 in place of 0.05 and 0.01 (or indeed he could have chosen any other probabilities). However, this passage does show that Fisher understood that the choice was arbitrary in its very nature, and that a single threshold for adjudicating all statistical inference is unsuitable. We might imagine a dramatically different statistical practice around hypothesis testing and inference if Fisher were instead able to publish Pearson's statistical tables! And while we're imagining some alternative worlds, we might also explore whether "significance & non-significance" are essential concepts for inference. Null hypothesis testing has no inherent requirement that $\alpha$ be specified, or that the "significant/non-significant" terminology be adopted. Fisher may have been impelled to those conventions, however, not only by historical antecedents but also by a very practical and personal obstacle.
Kendall (1963) relates that "He [Fisher] himself told me that when he was writing Statistical Methods for Research Workers he applied to Pearson for permission to reproduce Elderton's tables of chi-squared and that it was refused. This was perhaps not simply a personal matter because the hard struggle which Pearson had for long experienced in obtaining funds for printing and publishing statistical tables had made him most unwilling to grant anyone permission to reproduce. He was afraid of the effect on sales of his Tables for Statisticians and Biometricians [K. Pearson 1914] on which he relied to secure money for further table publication. It seems, however, to have been this refusal which first directed Fisher's thoughts towards the alternative form of tabulation with quantiles as argument, a form which he subsequently adopted for all his tables and which has become common practice." This is what Fisher referred to when he explained the absence from his book of more extended tables "owing to copyright restrictions" (Fisher 1925: 78, 1958: 79). Fisher did not invent the "significant/non-significant" dichotomy, but his books and novel tabulations of critical values of test statistics played a large role in its rapid and wide dissemination. from Hurlbert, Stuart H., and Celia M. Lombardi. 2009. “Final Collapse of the Neyman-Pearson Decision Theoretic Framework and Rise of the neoFisherian.” Annales Zoologici Fennici 46 (5): 311–49.
Regarding p-values, why 1% and 5%? Why not 6% or 10%? A recent article sheds some light on the arbitrariness of $p$-values; the selection of two thresholds was motivated, at least in part, as a work-around to a dispute over publishing rights. Briefly, Fi
2,436
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
This is an area of hypothesis testing that has always fascinated me. Specifically because one day someone decided on some arbitrary number that dichotomized the testing procedure and since then people rarely question it. I remember having a lecturer tell us not to put too much faith in the Staiger and Stock test of instrumental variables (where the F-stat should be above 10 in the first stage regression to avoid weak instrument problems) because the number 10 was a completely arbitrary choice. I remember saying "But is that not what we do with regular hypothesis testing?????"
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
This is an area of hypothesis testing that has always fascinated me. Specifically because one day someone decided on some arbitrary number that dichotomized the testing procedure and since then people
Regarding p-values, why 1% and 5%? Why not 6% or 10%? This is an area of hypothesis testing that has always fascinated me. Specifically because one day someone decided on some arbitrary number that dichotomized the testing procedure and since then people rarely question it. I remember having a lecturer tell us not to put too much faith in the Staiger and Stock test of instrumental variables (where the F-stat should be above 10 in the first stage regression to avoid weak instrument problems) because the number 10 was a completely arbitrary choice. I remember saying "But is that not what we do with regular hypothesis testing?????"
Regarding p-values, why 1% and 5%? Why not 6% or 10%? This is an area of hypothesis testing that has always fascinated me. Specifically because one day someone decided on some arbitrary number that dichotomized the testing procedure and since then people
2,437
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
Why 1 and 5? Because they feel right. I'm sure there are studies on the emotional value and cognitive salience of specific numbers, but we can understand the choice of 1 and 5 without having to resort to research. The people that created today's statistics were born, raised and live in a decimal world. Of course there are non-decimal counting systems, and counting to twelve using the phalanges is possible and has been done, but it is not obvious in the same way as using the fingers is (which are therefore called "digits", like the numbers). And while you (and Fisher) may know about non-decimal counting systems, the decimal system is and has been the predominant counting system in your (and Fisher's) world for the past hundred years. But why are the numbers five and one special? Because both are the most naturally salient divisions of the basic ten: one finger, one hand (or: a half). You don't even have to go so far as to conceptualize fractions to get from ten to one and five. The one is simply there, just as your finger is simply there. And halving something is an operation much simpler than dividing it into any other proportion. Cutting anything into two parts requires no thinking, while dividing by three or four is already pretty complicated. Most current currency systems have coins and banknotes with values such as 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000. Some currency systems do not have 2, 20 and 200, but almost all have those beginning in 1 and 5. At the same time, most currency systems do not have a coin or banknote that begins in 3, 4, 6, 7, 8 or 9. Interesting, isn't it? But why is that so? Because you always need either ten of the 1s or two of the 5s (or five of the 2s) to arrive at the next bigger order. Calculating with money is very simple: times ten, or double. Just two kinds of operations. Every coin that you have is either half or a tenth of the next order coin. Those numbers multiply and add up easily and well. So the 1 and 5 have been deeply ingrained, from their earliest childhood on, into Fisher and whoever else chose the significance levels as the most straightforward, most simple, most basic divisions of 10. Any other number needs an argument for it, while these numbers are simply there. In the absence of an objective way to calculate the appropriate significance level for every individual data set, the one and five just feel right.
Regarding p-values, why 1% and 5%? Why not 6% or 10%?
Why 1 and 5? Because they feel right. I'm sure there are studies on the emotional value and cognitive salience of specific numbers, but we can understand the choice of 1 and 5 without having to resort
Regarding p-values, why 1% and 5%? Why not 6% or 10%? Why 1 and 5? Because they feel right. I'm sure there are studies on the emotional value and cognitive salience of specific numbers, but we can understand the choice of 1 and 5 without having to resort to research. The people that created today's statistics were born, raised and live in a decimal world. Of course there are non-decimal counting systems, and counting to twelve using the phalanges is possible and has been done, but it is not obvious in the same way as using the fingers is (which are therefore called "digits", like the numbers). And while you (and Fisher) may know about non-decimal counting systems, the decimal system is and has been the predominant counting system in your (and Fisher's) world for the past hundred years. But why are the numbers five and one special? Because both are the most naturally salient divisions of the basic ten: one finger, one hand (or: a half). You don't even have to go so far as to conceptualize fractions to get from ten to one and five. The one is simply there, just as your finger is simply there. And halving something is an operation much simpler than dividing it into any other proportion. Cutting anything into two parts requires no thinking, while dividing by three or four is already pretty complicated. Most current currency systems have coins and banknotes with values such as 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000. Some currency systems do not have 2, 20 and 200, but almost all have those beginning in 1 and 5. At the same time, most currency systems do not have a coin or banknote that begins in 3, 4, 6, 7, 8 or 9. Interesting, isn't it? But why is that so? Because you always need either ten of the 1s or two of the 5s (or five of the 2s) to arrive at the next bigger order. Calculating with money is very simple: times ten, or double. Just two kinds of operations. Every coin that you have is either half or a tenth of the next order coin. Those numbers multiply and add up easily and well. So the 1 and 5 have been deeply ingrained, from their earliest childhood on, into Fisher and whoever else chose the significance levels as the most straightforward, most simple, most basic divisions of 10. Any other number needs an argument for it, while these numbers are simply there. In the absence of an objective way to calculate the appropriate significance level for every individual data set, the one and five just feel right.
Regarding p-values, why 1% and 5%? Why not 6% or 10%? Why 1 and 5? Because they feel right. I'm sure there are studies on the emotional value and cognitive salience of specific numbers, but we can understand the choice of 1 and 5 without having to resort
2,438
Please explain the waiting paradox
As Glen_b pointed out, if the buses arrive every $15$ minutes without any uncertainty whatsoever, we know that the maximum possible waiting time is $15$ minutes. If from our part we arrive "at random", we feel that "on average" we will wait half the maximum possible waiting time. And the maximum possible waiting time is here equal to the maximum possible length between two consecutive arrivals. Denote our waiting time $W$ and the maximum length between two consecutive bus arrivals $R$, and we argue that $$ E(W) = \frac 12 R = \frac {15}{2} = 7.5 \tag{1}$$ and we are right. But suddenly certainty is taken away from us and we are told that $15$ minutes is now the average length between two bus arrivals. And we fall into the "intuitive thinking trap" and think: "we only need to replace $R$ with its expected value", and we argue $$ E(W) = \frac 12 E(R) = \frac {15}{2} = 7.5\;\;\; \text{WRONG} \tag{2}$$ A first indication that we are wrong is that $R$ is not "length between any two consecutive bus-arrivals", it is "maximum length etc". So in any case, we have that $E(R) \neq 15$. How did we arrive at equation $(1)$? We thought: "waiting time can be from $0$ to $15$ maximum. I arrive with equal probability at any instant, so I "choose" randomly and with equal probability all possible waiting times. Hence half the maximum length between two consecutive bus arrivals is my average waiting time". And we are right. But by mistakenly inserting the value $15$ in equation $(2)$, it no longer reflects our behavior. With $15$ in place of $E(R)$, equation $(2)$ says "I choose randomly and with equal probability all possible waiting times that are smaller than or equal to the average length between two consecutive bus-arrivals" -and here is where our intuitive mistake lies, because our behavior has not changed - so, by arriving randomly uniformly, we in reality still "choose randomly and with equal probability" all possible waiting times - but "all possible waiting times" is not captured by $15$ - we have forgotten the right tail of the distribution of lengths between two consecutive bus-arrivals. So perhaps, we should calculate the expected value of the maximum length between any two consecutive bus arrivals, is this the correct solution? Yes it could be, but: the specific "paradox" goes hand-in-hand with a specific stochastic assumption: that bus-arrivals are modeled by the benchmark Poisson process, which means that as a consequence we assume that the time-length between any two consecutive bus-arrivals follows an Exponential distribution. Denote $\ell$ that length, and we have that $$f_{\ell}(\ell) = \lambda e^{-\lambda \ell},\;\; \lambda = 1/15,\;\; E(\ell) = 15$$ This is approximate of course, since the Exponential distribution has unbounded support from the right, meaning that strictly speaking "all possible waiting times" include, under this modeling assumption, larger and larger magnitudes up to and "including" infinity, but with vanishing probability. But wait, the Exponential is memoryless: no matter at what point in time we will arrive, we face the same random variable, irrespective of what has gone before. Given this stochastic/distributional assumption, any point in time is part of an "interval between two consecutive bus-arrivals" whose length is described by the same probability distribution with expected value (not maximum value) $15$: "I am here, I am surrounded by an interval between two bus-arrivals. 
Some of its length lies in the past and some in the future but I have no way of knowing how much of each, so the best I can do is ask What is its expected length -which will be my average waiting time?" - And the answer is always "$15$", alas.
Please explain the waiting paradox
As Glen_b pointed out, if the buses arrive every $15$ minutes without any uncertainty whatsoever, we know that the maximum possible waiting time is $15$ minutes. If from our part we arrive "at random"
Please explain the waiting paradox As Glen_b pointed out, if the buses arrive every $15$ minutes without any uncertainty whatsoever, we know that the maximum possible waiting time is $15$ minutes. If from our part we arrive "at random", we feel that "on average" we will wait half the maximum possible waiting time. And the maximum possible waiting time is here equal to the maximum possible length between two consecutive arrivals. Denote our waiting time $W$ and the maximum length between two consecutive bus arrivals $R$, and we argue that $$ E(W) = \frac 12 R = \frac {15}{2} = 7.5 \tag{1}$$ and we are right. But suddenly certainty is taken away from us and we are told that $15$ minutes is now the average length between two bus arrivals. And we fall into the "intuitive thinking trap" and think: "we only need to replace $R$ with its expected value", and we argue $$ E(W) = \frac 12 E(R) = \frac {15}{2} = 7.5\;\;\; \text{WRONG} \tag{2}$$ A first indication that we are wrong is that $R$ is not "length between any two consecutive bus-arrivals", it is "maximum length etc". So in any case, we have that $E(R) \neq 15$. How did we arrive at equation $(1)$? We thought: "waiting time can be from $0$ to $15$ maximum. I arrive with equal probability at any instant, so I "choose" randomly and with equal probability all possible waiting times. Hence half the maximum length between two consecutive bus arrivals is my average waiting time". And we are right. But by mistakenly inserting the value $15$ in equation $(2)$, it no longer reflects our behavior. With $15$ in place of $E(R)$, equation $(2)$ says "I choose randomly and with equal probability all possible waiting times that are smaller than or equal to the average length between two consecutive bus-arrivals" -and here is where our intuitive mistake lies, because our behavior has not changed - so, by arriving randomly uniformly, we in reality still "choose randomly and with equal probability" all possible waiting times - but "all possible waiting times" is not captured by $15$ - we have forgotten the right tail of the distribution of lengths between two consecutive bus-arrivals. So perhaps, we should calculate the expected value of the maximum length between any two consecutive bus arrivals, is this the correct solution? Yes it could be, but: the specific "paradox" goes hand-in-hand with a specific stochastic assumption: that bus-arrivals are modeled by the benchmark Poisson process, which means that as a consequence we assume that the time-length between any two consecutive bus-arrivals follows an Exponential distribution. Denote $\ell$ that length, and we have that $$f_{\ell}(\ell) = \lambda e^{-\lambda \ell},\;\; \lambda = 1/15,\;\; E(\ell) = 15$$ This is approximate of course, since the Exponential distribution has unbounded support from the right, meaning that strictly speaking "all possible waiting times" include, under this modeling assumption, larger and larger magnitudes up to and "including" infinity, but with vanishing probability. But wait, the Exponential is memoryless: no matter at what point in time we will arrive, we face the same random variable, irrespective of what has gone before. Given this stochastic/distributional assumption, any point in time is part of an "interval between two consecutive bus-arrivals" whose length is described by the same probability distribution with expected value (not maximum value) $15$: "I am here, I am surrounded by an interval between two bus-arrivals. 
Some of its length lies in the past and some in the future but I have no way of knowing how much of each, so the best I can do is ask What is its expected length -which will be my average waiting time?" - And the answer is always "$15$", alas.
Please explain the waiting paradox As Glen_b pointed out, if the buses arrive every $15$ minutes without any uncertainty whatsoever, we know that the maximum possible waiting time is $15$ minutes. If from our part we arrive "at random"
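A minimal simulation (my own sketch, not part of the answer above; it assumes NumPy and an arbitrary random seed) contrasting the two readings of "a bus every 15 minutes": a fixed schedule gives a mean wait of about 7.5 minutes, while Poisson arrivals with mean gap 15 give a mean wait of about 15 minutes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scheduled buses: the passenger lands uniformly inside a 15-minute gap.
print(rng.uniform(0, 15, 100_000).mean())                   # approx 7.5

# Poisson buses: exponential gaps with mean 15, passenger at a uniformly random time.
arrivals = np.cumsum(rng.exponential(scale=15, size=1_000_000))
t = rng.uniform(0, arrivals[-2], size=100_000)              # passenger arrival times
wait = arrivals[np.searchsorted(arrivals, t)] - t           # time to the next bus
print(wait.mean())                                          # approx 15
```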
2,439
Please explain the waiting paradox
If the bus arrives "every 15 minutes" (i.e. on a schedule) then the (randomly arriving) passenger's average wait is indeed only 7.5 minutes, because it will be uniformly distributed in that 15 minute gap. -- If, on the other hand, the bus arrives randomly at the average rate of 4 per hour (i.e. according to a Poisson process), then the average wait is much longer; indeed you can work it out via the lack of memory property. Take the passenger's arrival as the start, and the time to the next event is exponential with mean 15 minutes. Let me take a discrete time analogy. Imagine I am rolling a die with 15 faces, one of which is labelled "B" (for bus) and 14 labelled "X" for the total absence of bus that minute (fair 30 sided dice exist, so I could label 2 of the faces of a 30-sided die "B"). So once per minute I roll and see if the bus comes. The die has no memory; it doesn't know how many rolls since the last "B" it has been. Now imagine some unconnected event happens - a dog barks, a passenger arrives, I hear a rumble of thunder. From now, how long do I wait (how many rolls) until the next "B"? Because of the lack of memory, on average, I wait the same time for the next "B" as the time between two consecutive "B"s. [Next imagine I have a 60-sided die I roll every fifteen seconds (again, with one "B" face); now imagine I had a 1000-sided die I rolled every 0.9 seconds (with one "B" face; or more realistically, three 10-sided dice each and I call the result a "B" if all 3 come up "10" at the same time)... and so on. In the limit, we get the continuous time Poisson process.] Another way to look at it is this: I am more likely to observe my 'start counting rolls' (i.e. 'the passenger arrives at the bus stop') event during a longer gap than a short one, in just the right way to make the average wait the same as the average time between buses (I mostly wait in long gaps and mostly miss out on the shortest ones; because I arrive at a uniformly distributed time, the chance of me arriving in a gap of length $t$ is proportional to $t$) As a veteran catcher of buses, in practice reality seems to lie somewhere in between 'buses arrive on a schedule' and 'buses arrive at random'. And sometimes (in bad traffic), you wait an hour then 3 arrive all at once (Zach identifies the reason for that in comments below).
Please explain the waiting paradox
If the bus arrives "every 15 minutes" (i.e. on a schedule) then the (randomly arriving) passenger's average wait is indeed only 7.5 minutes, because it will be uniformly distributed in that 15 minute
Please explain the waiting paradox If the bus arrives "every 15 minutes" (i.e. on a schedule) then the (randomly arriving) passenger's average wait is indeed only 7.5 minutes, because it will be uniformly distributed in that 15 minute gap. -- If, on the other hand, the bus arrives randomly at the average rate of 4 per hour (i.e. according to a Poisson process), then the average wait is much longer; indeed you can work it out via the lack of memory property. Take the passenger's arrival as the start, and the time to the next event is exponential with mean 15 minutes. Let me take a discrete time analogy. Imagine I am rolling a die with 15 faces, one of which is labelled "B" (for bus) and 14 labelled "X" for the total absence of bus that minute (fair 30 sided dice exist, so I could label 2 of the faces of a 30-sided die "B"). So once per minute I roll and see if the bus comes. The die has no memory; it doesn't know how many rolls since the last "B" it has been. Now imagine some unconnected event happens - a dog barks, a passenger arrives, I hear a rumble of thunder. From now, how long do I wait (how many rolls) until the next "B"? Because of the lack of memory, on average, I wait the same time for the next "B" as the time between two consecutive "B"s. [Next imagine I have a 60-sided die I roll every fifteen seconds (again, with one "B" face); now imagine I had a 1000-sided die I rolled every 0.9 seconds (with one "B" face; or more realistically, three 10-sided dice each and I call the result a "B" if all 3 come up "10" at the same time)... and so on. In the limit, we get the continuous time Poisson process.] Another way to look at it is this: I am more likely to observe my 'start counting rolls' (i.e. 'the passenger arrives at the bus stop') event during a longer gap than a short one, in just the right way to make the average wait the same as the average time between buses (I mostly wait in long gaps and mostly miss out on the shortest ones; because I arrive at a uniformly distributed time, the chance of me arriving in a gap of length $t$ is proportional to $t$) As a veteran catcher of buses, in practice reality seems to lie somewhere in between 'buses arrive on a schedule' and 'buses arrive at random'. And sometimes (in bad traffic), you wait an hour then 3 arrive all at once (Zach identifies the reason for that in comments below).
Please explain the waiting paradox If the bus arrives "every 15 minutes" (i.e. on a schedule) then the (randomly arriving) passenger's average wait is indeed only 7.5 minutes, because it will be uniformly distributed in that 15 minute
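A discrete sketch of the die analogy above (my own illustration, assuming NumPy and an arbitrary seed): each minute the "bus" arrives with probability 1/15; the mean gap between buses is about 15 minutes, and the mean wait from a randomly chosen minute is also about 15 minutes, as the lack-of-memory argument predicts.

```python
import numpy as np

rng = np.random.default_rng(1)
bus = rng.random(2_000_000) < 1 / 15                 # one "roll" per minute; True = bus
minutes = np.flatnonzero(bus)                        # minutes at which a bus arrives
print(np.diff(minutes).mean())                       # approx 15: mean gap between buses

start = rng.integers(0, minutes[-1], size=200_000)   # passenger shows up at a random minute
wait = minutes[np.searchsorted(minutes, start, side="right")] - start
print(wait.mean())                                   # approx 15 as well (memorylessness)
```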
2,440
Please explain the waiting paradox
More on buses... Sorry to butt into the conversation so late in the discussion, but I have been looking at Poisson processes lately... So before it slips out of my mind, here is a pictorial representation of the inspection paradox: The fallacy stems from the assumption that since buses follow a certain pattern of arrival with a given inter-arrival average time (the inverse of the Poisson rate parameter $\lambda$, let's call it $\small \theta=1/\lambda=15$ min.), by showing up at the bus station at any random time, you are in effect picking up a bus. So if you show up at the bus station at random times, keeping up a log-book of the waiting times over, say, one month, will actually give you the average inter-arrival time between buses. But this is not what you'd be doing. If we were at a dispatch center, and could see all the buses on a screen, it would be true that randomly picking up multiple buses, and averaging the distance to the bus following behind, would produce the average inter-arrival time: But, if what we instead do is just show up at the bus station (instead of selecting a bus), we are doing a random cross-section of time, say, along the timeline of the bus schedule in a typical morning. The time we decide to show up at the bus station may very well be uniformly distributed along the "arrow" of time. However, since the longer time gaps between buses take up a larger share of the timeline, we are more likely to end up oversampling these "stragglers": ... and hence, our waiting time log book will not reflect the inter-arrival time. This is the inspection paradox. As for the actual question in the OP regarding the expected waiting time of $15$ minutes, the mind-boggling explanation resides in the memorylessness of the Poisson process, which makes the time-gap elapsed from the time the last bus we missed left the station to the time we show up irrelevant, and the expected time to the arrival of the next bus continues to be, stubbornly, $\theta=15$ minutes. This is best seen in discrete time (geometric distribution) with the dice example in Glen_b's answer. In fact, if we could know how long ago the preceding bus left, then $\small \mathbb E[\text{time waiting (future) + time to last bus departure (past)}]=30$ min! As explained in this MIT video by John Tsitsiklis, we would just have to view what precedes the point of arrival as a Poisson process backwards in time: Still unclear? - try it with Legos.
Please explain the waiting paradox
More on buses... Sorry to butt into the conversation so late in the discussion, but I have been looking at Poisson processes lately... So before it slips out of my mind, here is a pictorial representa
Please explain the waiting paradox More on buses... Sorry to butt into the conversation so late in the discussion, but I have been looking at Poisson processes lately... So before it slips out of my mind, here is a pictorial representation of the inspection paradox: The fallacy stems from the assumption that since buses follow a certain pattern of arrival with a given inter-arrival average time (the inverse of the Poisson rate parameter $\lambda$, let's call it $\small \theta=1/\lambda=15$ min.), by showing up at the bus station at any random time, you are in effect picking up a bus. So if you show up at the bus station at random times, keeping up a log-book of the waiting times over, say, one month, will actually give you the average inter-arrival time between buses. But this is not what you'd be doing. If we were at a dispatch center, and could see all the buses on a screen, it would be true that randomly picking up multiple buses, and averaging the distance to the bus following behind, would produce the average inter-arrival time: But, if what we instead do is just show up at the bus station (instead of selecting a bus), we are doing a random cross-section of time, say, along the timeline of the bus schedule in a typical morning. The time we decide to show up at the bus station may very well be uniformly distributed along the "arrow" of time. However, since the longer time gaps between buses take up a larger share of the timeline, we are more likely to end up oversampling these "stragglers": ... and hence, our waiting time log book will not reflect the inter-arrival time. This is the inspection paradox. As for the actual question in the OP regarding the expected waiting time of $15$ minutes, the mind-boggling explanation resides in the memorylessness of the Poisson process, which makes the time-gap elapsed from the time the last bus we missed left the station to the time we show up irrelevant, and the expected time to the arrival of the next bus continues to be, stubbornly, $\theta=15$ minutes. This is best seen in discrete time (geometric distribution) with the dice example in Glen_b's answer. In fact, if we could know how long ago the preceding bus left, then $\small \mathbb E[\text{time waiting (future) + time to last bus departure (past)}]=30$ min! As explained in this MIT video by John Tsitsiklis, we would just have to view what precedes the point of arrival as a Poisson process backwards in time: Still unclear? - try it with Legos.
Please explain the waiting paradox More on buses... Sorry to butt into the conversation so late in the discussion, but I have been looking at Poisson processes lately... So before it slips out of my mind, here is a pictorial representa
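A rough numerical check (my addition, assuming NumPy and an arbitrary seed) of the claim that the expected waiting time plus the expected time since the last bus equals 30 minutes: sample the gap containing a uniformly random time point and split it into its "past" and "future" parts.

```python
import numpy as np

rng = np.random.default_rng(2)
arrivals = np.cumsum(rng.exponential(scale=15, size=1_000_000))
t = rng.uniform(arrivals[0], arrivals[-2], size=100_000)    # stay away from the edges
idx = np.searchsorted(arrivals, t)                          # index of the next bus
future = arrivals[idx] - t                                  # waiting time
past = t - arrivals[idx - 1]                                # time since the last bus
print(future.mean(), past.mean(), (future + past).mean())   # approx 15, 15, 30
```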
2,441
Please explain the waiting paradox
There is a simple explanation which resolves the different answers which one gets from calculating expected waiting time for buses arriving per a Poisson Process with given mean interarrival time (in this case 15 minutes), whose interarrival times are therefore i.i.d. exponential with mean of 15 minutes. Method 1) Because the Poisson Process (exponential) is memoryless, the expected wait time is 15 minutes. Method 2) You are equally likely to arrive at any time during the interarrival period in which you arrive. Therefore the expected waiting time is 1/2 of the expected length of this interarrival period. THIS IS CORRECT, and does not conflict with method (1). How can (1) and (2) both be correct? The answer is that the expected length of the interarrival period for the time at which you arrive is not 15 minutes. It is actually 30 minutes; and 1/2 of 30 minutes is 15 minutes, so (1) and (2) agree. Why is the expected length of the interarrival period for the time at which you arrive not equal to 15 minutes? It's because by first "fixing" an arrival time, the interarrival period it is in is more likely than average to be a long interarrival period. In the case of an exponential interarrival period, the math works out so that the interarrival period containing the time at which you arrive follows a Gamma(2) (Erlang) distribution with double the mean interarrival time of the Poisson Process. It is not obvious that the interarrival time containing the time at which you arrive would have double the mean, but it is obvious, after explanation, why it is increased. As an easy to understand example, let's say that the interarrival times are 10 minutes with probability 1/2 or 20 minutes with probability 1/2. In this case, 20-minute-long interarrival periods are equally likely to occur as 10-minute-long interarrival periods, but when they do occur, they last twice as long. So, 2/3 of the time points during the day will be at times at which the interarrival period is 20 minutes. Put another way, if we first pick a time and then want to know what the interarrival time containing that time is, then (ignoring transient effects at the beginning of the "day") the expected length of that interarrival time is 16 2/3 minutes. But if we first pick the interarrival time and want to know what its expected length is, it is 15 minutes. There are other variants of the renewal paradox, length-biased sampling, etc., amounting to pretty much the same thing. Example 1) You have a bunch of light bulbs, with random lifetimes, but average of 1000 hours. When a light bulb fails, it is immediately replaced by another light bulb. If you pick a time to go in a room having the light bulb, the light bulb in operation then will wind up having a longer mean lifetime than 1000 hours. Example 2) If we go to a construction site at a given time, then the mean time until a construction worker who is working there at that time falls off the building (from when they first started working) is greater than the mean time until worker falls off (from when they first started working) from among all workers who start working. Why, because the workers with a short mean time until falling off are more likely than average to have already fallen off (and not continued working), so that the workers who are working then have longer than average times until falling off. 
Example 3) Pick some modest number of people at random in a city and if they have attended the home games (not all sell outs) of the city's Major League baseball team, find out how many people attended the games they were at. Then (under some slightly idealized but not too unreasonable assumptions), the average attendance for those games will be higher than the average attendance for all the team's home games. Why? Because there are more people who have attended high attendance games than low attendance games, so you are more likely to pick people who have attended high attendance games than low attendance games.
Please explain the waiting paradox
There is a simple explanation which resolves the different answers which one gets from calculating expected waiting time for buses arriving per a Poisson Process with given mean interarrival time (in
Please explain the waiting paradox There is a simple explanation which resolves the different answers which one gets from calculating expected waiting time for buses arriving per a Poisson Process with given mean interarrival time (in this case 15 minutes), whose interarrival times are therefore i.i.d. exponential with mean of 15 minutes. Method 1) Because the Poisson Process (exponential) is memoryless, the expected wait time is 15 minutes. Method 2) You are equally likely to arrive at any time during the interarrival period in which you arrive. Therefore the expected waiting time is 1/2 of the expected length of this interarrival period. THIS IS CORRECT, and does not conflict with method (1). How can (1) and (2) both be correct? The answer is that the expected length of the interarrival period for the time at which you arrive is not 15 minutes. It is actually 30 minutes; and 1/2 of 30 minutes is 15 minutes, so (1) and (2) agree. Why is the expected length of the interarrival period for the time at which you arrive not equal to 15 minutes? It's because by first "fixing" an arrival time, the interarrival period it is in is more likely than average to be a long interarrival period. In the case of an exponential interarrival period, the math works out so that the interarrival period containing the time at which you arrive follows a Gamma(2) (Erlang) distribution with double the mean interarrival time of the Poisson Process. It is not obvious that the interarrival time containing the time at which you arrive would have double the mean, but it is obvious, after explanation, why it is increased. As an easy to understand example, let's say that the interarrival times are 10 minutes with probability 1/2 or 20 minutes with probability 1/2. In this case, 20-minute-long interarrival periods are equally likely to occur as 10-minute-long interarrival periods, but when they do occur, they last twice as long. So, 2/3 of the time points during the day will be at times at which the interarrival period is 20 minutes. Put another way, if we first pick a time and then want to know what the interarrival time containing that time is, then (ignoring transient effects at the beginning of the "day") the expected length of that interarrival time is 16 2/3 minutes. But if we first pick the interarrival time and want to know what its expected length is, it is 15 minutes. There are other variants of the renewal paradox, length-biased sampling, etc., amounting to pretty much the same thing. Example 1) You have a bunch of light bulbs, with random lifetimes, but average of 1000 hours. When a light bulb fails, it is immediately replaced by another light bulb. If you pick a time to go in a room having the light bulb, the light bulb in operation then will wind up having a longer mean lifetime than 1000 hours. Example 2) If we go to a construction site at a given time, then the mean time until a construction worker who is working there at that time falls off the building (from when they first started working) is greater than the mean time until worker falls off (from when they first started working) from among all workers who start working. Why, because the workers with a short mean time until falling off are more likely than average to have already fallen off (and not continued working), so that the workers who are working then have longer than average times until falling off. 
Example 3) Pick some modest number of people at random in a city and if they have attended the home games (not all sell outs) of the city's Major League baseball team, find out how many people attended the games they were at. Then (under some slightly idealized but not too unreasonable assumptions), the average attendance for those games will be higher than the average attendance for all the team's home games. Why? Because there are more people who have attended high attendance games than low attendance games, so you are more likely to pick people who have attended high attendance games than low attendance games.
Please explain the waiting paradox There is a simple explanation which resolves the different answers which one gets from calculating expected waiting time for buses arriving per a Poisson Process with given mean interarrival time (in
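A worked check (my addition, assuming NumPy and an arbitrary seed) of the 10-or-20-minute example above: the gap containing a randomly chosen time point has the length-biased mean (2/3)*20 + (1/3)*10 = 16 2/3 minutes, even though a randomly chosen gap has mean 15 minutes.

```python
import numpy as np

rng = np.random.default_rng(3)
gaps = rng.choice([10.0, 20.0], size=1_000_000)    # each gap length with probability 1/2
print(gaps.mean())                                 # approx 15: mean of a randomly chosen gap

ends = np.cumsum(gaps)                             # end time of each gap
t = rng.uniform(0, ends[-1], size=200_000)         # uniformly random time points
containing_gap = gaps[np.searchsorted(ends, t)]    # length of the gap containing each point
print(containing_gap.mean())                       # approx 16.67, i.e. 16 2/3
```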
2,442
Please explain the waiting paradox
The question as posed was "...a bus arrives at the bus stop every 15 minutes and a passenger arrives at random." If the bus arrives every 15 minutes then it's not random; it arrives every 15 minutes, so the correct answer is 7.5 minutes. Either the source was incorrectly quoted or the writer of the source was sloppy. On the other hand, the radiation detector sounds like a different problem because radiation events do arrive at random according to some distribution, presumably something like Poisson with an average waiting time.
Please explain the waiting paradox
The question as posed was "...a bus arrives at the bus stop every 15 minutes and a passenger arrives at random." If the bus arrives every 15 minutes then its not random; it arrives every 15 minutes s
Please explain the waiting paradox The question as posed was "...a bus arrives at the bus stop every 15 minutes and a passenger arrives at random." If the bus arrives every 15 minutes then it's not random; it arrives every 15 minutes, so the correct answer is 7.5 minutes. Either the source was incorrectly quoted or the writer of the source was sloppy. On the other hand, the radiation detector sounds like a different problem because radiation events do arrive at random according to some distribution, presumably something like Poisson with an average waiting time.
Please explain the waiting paradox The question as posed was "...a bus arrives at the bus stop every 15 minutes and a passenger arrives at random." If the bus arrives every 15 minutes then its not random; it arrives every 15 minutes s
2,443
Mathematician wants the equivalent knowledge to a quality stats degree
(Very) short story Long story short, in some sense, statistics is like any other technical field: There is no fast track. Long story Bachelor's degree programs in statistics are relatively rare in the U.S. One reason I believe this is true is that it is quite hard to pack all that is necessary to learn statistics well into an undergraduate curriculum. This holds particularly true at universities that have significant general-education requirements. Developing the necessary skills (mathematical, computational, and intuitive) takes a lot of effort and time. Statistics can begin to be understood at a fairly decent "operational" level once the student has mastered calculus and a decent amount of linear and matrix algebra. However, any applied statistician knows that it is quite easy to find oneself in territory that doesn't conform to a cookie-cutter or recipe-based approach to statistics. To really understand what is going on beneath the surface requires as a prerequisite mathematical and, in today's world, computational maturity that are only really attainable in the later years of undergraduate training. This is one reason that true statistical training mostly starts at the M.S. level in the U.S. (India, with their dedicated ISI is a little different story. A similar argument might be made for some Canadian-based education. I'm not familiar enough with European-based or Russian-based undergraduate statistics education to have an informed opinion.) Nearly any (interesting) job would require an M.S. level education and the really interesting (in my opinion) jobs essentially require a doctorate-level education. Seeing as you have a doctorate in mathematics, though we don't know in what area, here are my suggestions for something closer to an M.S.-level education. I include some parenthetical remarks to explain the choices. D. Huff, How to Lie with Statistics. (Very quick, easy read. Shows many of the conceptual ideas and pitfalls, in particular, in presenting statistics to the layman.) Mood, Graybill, and Boes, Introduction to the Theory of Statistics, 3rd ed., 1974. (M.S.-level intro to theoretical statistics. You'll learn about sampling distributions, point estimation and hypothesis testing in a classical, frequentist framework. My opinion is that this is generally better, and a bit more advanced, than modern counterparts such as Casella & Berger or Rice.) Seber & Lee, Linear Regression Analysis, 2nd ed. (Lays out the theory behind point estimation and hypothesis testing for linear models, which is probably the most important topic to understand in applied statistics. Since you probably have a good linear algebra background, you should immediately be able to understand what is going on geometrically, which provides a lot of intuition. Also has good information related to assessment issues in model selection, departures from assumptions, prediction, and robust versions of linear models.) Hastie, Tibshirani, and Friedman, Elements of Statistical Learning, 2nd ed., 2009. (This book has a much more applied feeling than the last and broadly covers lots of modern machine-learning topics. The major contribution here is in providing statistical interpretations of many machine-learning ideas, which pays off particularly in quantifying uncertainty in such models. This is something that tends to go un(der)addressed in typical machine-learning books. Legally available for free here.) A. Agresti, Categorical Data Analysis, 2nd ed. 
(Good presentation of how to deal with discrete data in a statistical framework. Good theory and good practical examples. Perhaps on the traditional side in some respects.) Boyd & Vandenberghe, Convex Optimization. (Many of the most popular modern statistical estimation and hypothesis-testing problems can be formulated as convex optimization problems. This also goes for numerous machine-learning techniques, e.g., SVMs. Having a broader understanding and the ability to recognize such problems as convex programs is quite valuable, I think. Legally available for free here.) Efron & Tibshirani, An Introduction to the Bootstrap. (You ought to at least be familiar with the bootstrap and related techniques. For a textbook, it's a quick and easy read.) J. Liu, Monte Carlo Strategies in Scientific Computing or P. Glasserman, Monte Carlo Methods in Financial Engineering. (The latter sounds very directed to a particular application area, but I think it'll give a good overview and practical examples of all the most important techniques. Financial engineering applications have driven a fair amount of Monte Carlo research over the last decade or so.) E. Tufte, The Visual Display of Quantitative Information. (Good visualization and presentation of data is [highly] underrated, even by statisticians.) J. Tukey, Exploratory Data Analysis. (Standard. Oldie, but goodie. Some might say outdated, but still worth having a look at.) Complements Here are some other books, mostly of a little more advanced, theoretical and/or auxiliary nature, that are helpful. F. A. Graybill, Theory and Application of the Linear Model. (Old fashioned, terrible typesetting, but covers all the same ground of Seber & Lee, and more. I say old-fashioned because more modern treatments would probably tend to use the SVD to unify and simplify a lot of the techniques and proofs.) F. A. Graybill, Matrices with Applications in Statistics. (Companion text to the above. A wealth of good matrix algebra results useful to statistics here. Great desk reference.) Devroye, Gyorfi, and Lugosi, A Probabilistic Theory of Pattern Recognition. (Rigorous and theoretical text on quantifying performance in classification problems.) Brockwell & Davis, Time Series: Theory and Methods. (Classical time-series analysis. Theoretical treatment. For more applied ones, Box, Jenkins & Reinsel or Ruey Tsay's texts are decent.) Motwani and Raghavan, Randomized Algorithms. (Probabilistic methods and analysis for computational algorithms.) D. Williams, Probability and Martingales and/or R. Durrett, Probability: Theory and Examples. (In case you've seen measure theory, say, at the level of D. L. Cohn, but maybe not probability theory. Both are good for getting quickly up to speed if you already know measure theory.) F. Harrell, Regression Modeling Strategies. (Not as good as Elements of Statistical Learning [ESL], but has a different, and interesting, take on things. Covers more "traditional" applied statistics topics than does ESL and so worth knowing about, for sure.) More Advanced (Doctorate-Level) Texts Lehmann and Casella, Theory of Point Estimation. (PhD-level treatment of point estimation. Part of the challenge of this book is reading it and figuring out what is a typo and what is not. When you see yourself recognizing them quickly, you'll know you understand. There's plenty of practice of this type in there, especially if you dive into the problems.) Lehmann and Romano, Testing Statistical Hypotheses. (PhD-level treatment of hypothesis testing. 
Not as many typos as TPE above.) A. van der Vaart, Asymptotic Statistics. (A beautiful book on the asymptotic theory of statistics with good hints on application areas. Not an applied book though. My only quibble is that some rather bizarre notation is used and details are at times brushed under the rug.)
Mathematician wants the equivalent knowledge to a quality stats degree
(Very) short story Long story short, in some sense, statistics is like any other technical field: There is no fast track. Long story Bachelor's degree programs in statistics are relatively rare in the
2,444
Mathematician wants the equivalent knowledge to a quality stats degree
I can't speak for the more rigorous schools, but I am doing a B.S. in General Statistics (the most rigorous at my school) at University of California, Davis, and there is a fairly heavy amount of reliance on rigor and derivation. A doctorate in math will be helpful, insomuch as you will have a very strong background in real analysis and linear algebra--useful skills in statistics. My statistics program has about 50% of the coursework going to support the fundamentals (linear algebra, real analysis, calculus, probability, estimation), and the other 50% goes towards specialized topics that rely on the fundamentals (nonparametrics, computation, ANOVA/Regression, time series, Bayesian analysis). Once you get the fundamentals, jumping to the specifics is usually not too difficult. Most of the individuals in my classes struggle with the proofs and real analysis, and easily grasp the statistical concepts, so coming from a math background will most definitely help. That being said, the following two texts have pretty good coverage of many topics covered in statistics. Both were recommended in the link you provided, by the way, so I wouldn't say your question and the one you linked are necessarily uncorrelated. Mathematical Methods of Statistics, by Harald Cramer All of Statistics: A Concise Course in Statistical Inference, by Larry Wasserman
2,445
Mathematician wants the equivalent knowledge to a quality stats degree
The Royal Statistical Society in the UK offers the Graduate Diploma in Statistics, which is at the level of a good Bachelor's degree. A syllabus, reading list, & past papers are available from their website. I've known mathematicians use it to get up to speed in Statistics. Taking the exams (officially, or in the comfort of your own study) could be a useful way to measure when you're there.
2,446
Mathematician wants the equivalent knowledge to a quality stats degree
I would go to the curriculum websites of the top stats schools, write down the books they use in their undergrad courses, see which ones are highly rated on Amazon, and order them at your public/university library. Some schools to consider: MIT - technically, cross-taught with Harvard. Caltech Carnegie Mellon Stanford Supplement the texts with the various lecture video sites such as MIT OCW and videolectures.net. Caltech doesn't have an undergrad degree in statistics, but you won't go wrong by following the curriculum of their undergrad stats courses.
2,447
Mathematician wants the equivalent knowledge to a quality stats degree
I have seen Statistical Inference, by Silvey, used by mathematicians who needed some workaday grasp of statistics. It's a small book, and should by rights be cheap. Looking at http://www.amazon.com/Statistical-Inference-Monographs-Statistics-Probability/dp/0412138204/ref=sr_1_1?ie=UTF8&s=books&qid=1298750064&sr=1-1, it seems to be cheap second hand. It's old and concentrates on classical statistics. While it's not highly abstract, it is intended for a reasonably mathematical audience - many of the exercises are from the Cambridge (UK) Diploma in Mathematical Statistics, which is basically an MSc.
2,448
Mathematician wants the equivalent knowledge to a quality stats degree
Regarding the measurement of your knowledge: you could attend some data mining / data analysis competitions, such as 1, 2, 3, 4, and see how you score compared to others. There are already a lot of pointers to textbooks on mathematical statistics in the other answers. I would like to add the following as relevant topics: the empirical social research component, which comprises sampling theory and socio-demographic and regional standards; data management, which includes knowledge of databases (writing SQL queries, common database schemas); and communication, i.e., how to present results in a way that keeps the audience awake (visualization methods). Disclaimer: I am not a statistician, these are just my 2 cents.
2,449
Mathematician wants the equivalent knowledge to a quality stats degree
E.T. Jaynes "Probability Theory: The Logic of Science: Principles and Elementary Applications Vol 1", Cambridge University Press, 2003 is pretty much a must-read for the Bayesian side of statistics, at about the right level. I'm looking forward to recommendations for the frequentist side of things (I have loads of monographs, but very few good general texts).
2,450
Mathematician wants the equivalent knowledge to a quality stats degree
I come from a computer science background focusing on machine learning. However, I really started to understand (and, more importantly, to apply) statistics after taking a Pattern Recognition course based on Bishop's book: https://www.microsoft.com/en-us/research/people/cmbishop/#!prml-book Here are some course slides from MIT: http://www.ai.mit.edu/courses/6.867-f03/lectures.html This will give you the background (plus some MATLAB code) to use statistics for real-world problems and is definitely more on the applied side. Still, it highly depends on what you want to do with your knowledge. To get a measure of how good you are, you might want to browse the OpenCourseWare of some university for advanced statistics courses and check whether you know the topics covered. Just my 5 cents.
2,451
Mathematician wants the equivalent knowledge to a quality stats degree
I think Stanford provides the best resources when it comes to flexibility. They even have a machine learning course online that would give you a respectable base of knowledge for designing algorithms in R. Search for it on Google and you will be directed to their Lagunita page, where they have some interesting courses, most of them free. I have Tibshirani's books, 'Introduction to Statistical Learning' and 'Elements of Statistical Learning', in PDF format, and both are extremely good resources. Since you're a mathematician, I would still advise you not to fast-track, as that wouldn't give you the solid base you may find very helpful later if you start doing some serious machine learning. Treat statistics as a branch of mathematics for getting insights from data; that requires some work. Beyond that, there are tons of online resources; Johns Hopkins provides similar material to Stanford's. Although experience always pays, a respectable credential will always reinforce that base. You can also think about the specific fields you would like to enter; by that I mean whether you want to go into text analytics or apply your math and stats skills in finance. I fall into the latter category, with a degree in econometrics where we studied finance and statistics. The combination can be very good.
2,452
Basic question about Fisher Information matrix and relationship to Hessian and standard errors
Yudi Pawitan writes in his book In All Likelihood that the second derivative of the log-likelihood evaluated at the maximum likelihood estimates (MLE) is the observed Fisher information (see also this document, page 1). This is exactly what most optimization algorithms like optim in R return: the Hessian evaluated at the MLE. When the negative log-likelihood is minimized, the negative Hessian is returned. As you correctly point out, the estimated standard errors of the MLE are the square roots of the diagonal elements of the inverse of the observed Fisher information matrix. In other words: The square roots of the diagonal elements of the inverse of the Hessian (or the negative Hessian) are the estimated standard errors. Summary The negative Hessian evaluated at the MLE is the same as the observed Fisher information matrix evaluated at the MLE. Regarding your main question: No, it's not correct that the observed Fisher information can be found by inverting the (negative) Hessian. Regarding your second question: The inverse of the (negative) Hessian is an estimator of the asymptotic covariance matrix. Hence, the square roots of the diagonal elements of covariance matrix are estimators of the standard errors. I think the second document you link to got it wrong. Formally Let $l(\theta)$ be a log-likelihood function. The Fisher information matrix $\mathbf{I}(\theta)$ is a symmetrical $(p\times p)$ matrix containing the entries: $$ \mathbf{I}(\theta)=-\frac{\partial^{2}}{\partial\theta_{i}\partial\theta_{j}}l(\theta),~~~~ 1\leq i, j\leq p $$ The observed Fisher information matrix is simply $\mathbf{I}(\hat{\theta}_{\mathrm{ML}})$, the information matrix evaluated at the maximum likelihood estimates (MLE). The Hessian is defined as: $$ \mathbf{H}(\theta)=\frac{\partial^{2}}{\partial\theta_{i}\partial\theta_{j}}l(\theta),~~~~ 1\leq i, j\leq p $$ It is nothing else but the matrix of second derivatives of the likelihood function with respect to the parameters. It follows that if you minimize the negative log-likelihood, the returned Hessian is the equivalent of the observed Fisher information matrix whereas in the case that you maximize the log-likelihood, then the negative Hessian is the observed information matrix. Further, the inverse of the Fisher information matrix is an estimator of the asymptotic covariance matrix: $$ \mathrm{Var}(\hat{\theta}_{\mathrm{ML}})=[\mathbf{I}(\hat{\theta}_{\mathrm{ML}})]^{-1} $$ The standard errors are then the square roots of the diagonal elements of the covariance matrix. For the asymptotic distribution of a maximum likelihood estimate, we can write $$ \hat{\theta}_{\mathrm{ML}}\stackrel{a}{\sim}\mathcal{N}\left(\theta_{0}, [\mathbf{I}(\hat{\theta}_{\mathrm{ML}})]^{-1}\right) $$ where $\theta_{0}$ denotes the true parameter value. Hence, the estimated standard error of the maximum likelihood estimates is given by: $$ \mathrm{SE}(\hat{\theta}_{\mathrm{ML}})=\frac{1}{\sqrt{\mathbf{I}(\hat{\theta}_{\mathrm{ML}})}} $$
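To make this concrete, here is a minimal R sketch of the workflow described above (the normal toy model, variable names, and starting values are mine, chosen for illustration; they are not from the original question): the Hessian that optim returns when minimising a negative log-likelihood is used directly as the observed Fisher information, and the standard errors come from the square roots of the diagonal of its inverse.
# Toy example: ML estimation of a normal mean and standard deviation.
set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)
# Negative log-likelihood, since optim() minimises by default.
negloglik <- function(par, x) -sum(dnorm(x, mean = par[1], sd = par[2], log = TRUE))
fit <- optim(par = c(0, 1), fn = negloglik, x = x, hessian = TRUE,
             method = "L-BFGS-B", lower = c(-Inf, 1e-6))
# Because the objective was the *negative* log-likelihood, fit$hessian is
# already the observed Fisher information evaluated at the MLE.
obs_info <- fit$hessian
vcov_hat <- solve(obs_info)        # estimated asymptotic covariance matrix
se_hat   <- sqrt(diag(vcov_hat))   # estimated standard errors of the MLEs
For this toy model the estimated standard errors should come out close to the analytic approximations $\hat{\sigma}/\sqrt{n}$ for the mean and $\hat{\sigma}/\sqrt{2n}$ for the standard deviation, which is a useful sanity check.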
2,453
Basic question about Fisher Information matrix and relationship to Hessian and standard errors
Estimating likelihood functions entails a two-step process. First, one declares the log-likelihood function; then one optimizes it. When writing the log-likelihood function in R, we work with $-1 \times l$ (where $l$ denotes the log-likelihood) because the optim command in R minimizes a function by default, and minimizing $-l$ is the same as maximizing $l$, which is what we want. The observed Fisher information matrix is $-H$, the negative of the Hessian of the log-likelihood evaluated at the MLE; inverting it gives the estimated asymptotic covariance matrix, whose diagonal yields the squared standard errors. The reason we do not have to multiply the Hessian returned by optim by $-1$ is that the optimization was carried out on $-1$ times the log-likelihood, so the Hessian that optim produces has already absorbed the sign change.
2,454
What are the 'big problems' in statistics?
A big question should involve key issues of statistical methodology or, because statistics is entirely about applications, it should concern how statistics is used with problems important to society. This characterization suggests the following should be included in any consideration of big problems: How best to conduct drug trials. Currently, classical hypothesis testing requires many formal phases of study. In later (confirmatory) phases, the economic and ethical issues loom large. Can we do better? Do we have to put hundreds or thousands of sick people into control groups and keep them there until the end of a study, for example, or can we find better ways to identify treatments that really work and deliver them to members of the trial (and others) sooner? Coping with scientific publication bias. Negative results are published much less simply because they just don't attain a magic p-value. All branches of science need to find better ways to bring scientifically important, not just statistically significant, results to light. (The multiple comparisons problem and coping with high-dimensional data are subcategories of this problem.) Probing the limits of statistical methods and their interfaces with machine learning and machine cognition. Inevitable advances in computing technology will make true AI accessible in our lifetimes. How are we going to program artificial brains? What role might statistical thinking and statistical learning have in creating these advances? How can statisticians help in thinking about artificial cognition, artificial learning, in exploring their limitations, and making advances? Developing better ways to analyze geospatial data. It is often claimed that the majority, or vast majority, of databases contain locational references. Soon many people and devices will be located in real time with GPS and cell phone technologies. Statistical methods to analyze and exploit spatial data are really just in their infancy (and seem to be relegated to GIS and spatial software which is typically used by non-statisticians).
2,455
What are the 'big problems' in statistics?
Michael Jordan has a short article called What are the Open Problems in Bayesian Statistics?, in which he polled a bunch of statisticians for their views on the open problems in statistics. I'll summarize (aka, copy-and-paste) a bit here, but it's probably best just to read the original. Nonparametrics and semiparametrics For what problems is Bayesian nonparametrics useful and worth the trouble? David Dunson: "Nonparametric Bayes models involve infinitely many parameters and priors are typically chosen for convenience with hyperparameters set at seemingly reasonable values with no proper objective or subjective justification." "It was noted by several people that one of the appealing applications of frequentist nonparametrics is to semiparametric inference, where the nonparametric component of the model is a nuisance parameter. These people felt that it would be desirable to flesh out the (frequentist) theory of Bayesian semiparametrics." Priors "Elicitation remains a major source of open problems." 'Aad van der Vaart turned objective Bayes on its head and pointed to a lack of theory for "situations where one wants the prior to come through in the posterior" as opposed to "merely providing a Bayesian approach to smoothing."' Bayesian/frequentist relationships "Many respondents expressed a desire to further hammer out Bayesian/frequentist relationships. This was most commonly evinced in the context of high-dimensional models and data, where not only are subjective approaches to specification of priors difficult to implement but priors of convenience can be (highly) misleading." 'Some respondents pined for non-asymptotic theory that might reveal more fully the putative advantages of Bayesian methods; e.g., David Dunson: "Often, the frequentist optimal rate is obtained by procedures that clearly do much worse in finite samples than Bayesian approaches."' Computation and statistics Alan Gelfand: "If MCMC is no longer viable for the problems people want to address, then what is the role of INLA, of variational methods, of ABC approaches?" "Several respondents asked for a more thorough integration of computational science and statistical science, noting that the set of inferences that one can reach in any given situation are jointly a function of the model, the prior, the data and the computational resources, and wishing for more explicit management of the tradeoffs among these quantities. Indeed, Rob Kass raised the possibility of a notion of “inferential solvability,” where some problems are understood to be beyond hope (e.g., model selection in regression where “for modest amounts of data subject to nontrivial noise it is impossible to get useful confidence intervals about regression coefficients when there are large numbers of variables whose presence or absence in the model is unspecified a priori”) and where there are other problems (“certain functionals for which useful confidence intervals exist”) for which there is hope." "Several respondents, while apologizing for a certain vagueness, expressed a feeling that a large amount of data does not necessarily imply a large amount of computation; rather, that somehow the inferential strength present in large data should transfer to the algorithm and make it possible to make do with fewer computational steps to achieve a satisfactory (approximate) inferential solution."
Model Selection and Hypothesis Testing George Casella: "We now do model selection but Bayesians don’t seem to worry about the properties of basing inference on the selected model. What if it is wrong? What are the consequences of setting up credible regions for a certain parameter $β_1$ when you have selected the wrong model? Can we have procedures with some sort of guarantee?" Need for more work on decision-theoretic foundations in model selection. David Spiegelhalter: "How best to make checks for prior/data conflict an integral part of Bayesian analysis?" Andrew Gelman: "For model checking, a key open problem is developing graphical tools for understanding and comparing models. Graphics is not just for raw data; rather, complex Bayesian models give opportunity for better and more effective exploratory data analysis."
2,456
What are the 'big problems' in statistics?
I'm not sure how big they are, but there is a Wikipedia page for unsolved problems in statistics. Their list includes: inference and testing (systematic errors; admissibility of the Graybill–Deal estimator; combining dependent p-values in meta-analysis; the Behrens–Fisher problem; multiple comparisons; open problems in Bayesian statistics), experimental design (problems in Latin squares), and problems of a more philosophical nature (the sampling of species problem, the Doomsday argument, the exchange paradox).
2,457
What are the 'big problems' in statistics?
As an example of the general spirit (if not quite specificity) of answer I'm looking for, I found a "Hilbert's 23"-inspired lecture by David Donoho at a "Math Challenges of the 21st Century" conference: High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality
2,458
What are the 'big problems' in statistics?
Mathoverflow has a similar question about big problems in probability theory. It would appear from that page that the biggest questions have to do with self-avoiding random walks and percolation.
2,459
What are the 'big problems' in statistics?
You might check out Harvard's "Hard Problems in the Social Sciences" colloquium held earlier this year. Several of these talks raise issues in the use of statistics and modeling in the social sciences.
2,460
What are the 'big problems' in statistics?
My answer would be the struggle between frequentist and Bayesian statistics. When people ask you which you "believe in", this is not good! Especially for a scientific discipline.
2,461
Likelihood ratio vs Bayes Factor
apparently the Bayes factor somehow uses likelihoods that represent the likelihood of each model integrated over its entire parameter space (i.e. not just at the MLE). How is this integration actually achieved typically? Does one really just try to calculate the likelihood at each of thousands (millions?) of random samples from the parameter space, or are there analytic methods to integrating the likelihood across the parameter space? First, any situation where you consider a term such as $P(D|M)$ for data $D$ and model $M$ is considered a likelihood model. This is often the bread and butter of any statistical analysis, frequentist or Bayesian, and this is the portion that your analysis is meant to suggest is either a good fit or a bad fit. So Bayes factors are not doing anything fundamentally different from likelihood ratios. It's important to put Bayes factors in their right setting. When you have two models, say, and you convert from probabilities to odds, then Bayes factors act like an operator on prior beliefs: $$ \text{Posterior Odds} = \text{Bayes Factor} \times \text{Prior Odds} $$ $$ \frac{P(M_{1}|D)}{P(M_{2}|D)} = B.F. \times \frac{P(M_{1})}{P(M_{2})} $$ The real difference is that likelihood ratios are cheaper to compute and generally conceptually easier to specify. The likelihood at the MLE is just a point estimate of the Bayes factor numerator and denominator, respectively. Like most frequentist constructions, it can be viewed as a special case of Bayesian analysis with a contrived prior that's hard to get at. But mostly it arose because it's analytically tractable and easier to compute (in the era before approximate Bayesian computational approaches arose). To the point on computation, yes: you will evaluate the different likelihood integrals in the Bayesian setting with a large-scale Monte Carlo procedure in almost any case of practical interest (a small numerical sketch follows at the end of this answer). There are some specialized simulators, such as GHK, that work if you assume certain distributions, and if you make these assumptions, sometimes you can find analytically tractable problems for which fully analytic Bayes factors exist. But no one uses these; there is no reason to. With optimized Metropolis/Gibbs samplers and other MCMC methods, it's totally tractable to approach these problems in a fully data-driven way and compute your integrals numerically. In fact, one will often do this hierarchically and further integrate the results over meta-priors that relate to data collection mechanisms, non-ignorable experimental designs, etc. I recommend the book Bayesian Data Analysis for more on this, although the author, Andrew Gelman, seems not to care too much for Bayes factors. As an aside, I agree with Gelman. If you're going to go Bayesian, then exploit the full posterior. Doing model selection with Bayesian methods is like handicapping them, because model selection is a weak and mostly useless form of inference. I'd rather know distributions over model choices if I can... who cares about quantizing it down to "model A is better than model B" sorts of statements when you do not have to? Additionally, when computing the Bayes factor, does one apply correction for complexity (automatically via cross-validated estimation of likelihood or analytically via AIC) as one does with the likelihood ratio? This is one of the nice things about Bayesian methods. Bayes factors automatically account for model complexity in a technical sense.
You can set up a simple scenario with two models, $M_{1}$ and $M_{2}$ with assumed model complexities $d_{1}$ and $d_{2}$, respectively, with $d_{1} < d_{2}$ and a sample size $N$. Then if $B_{1,2}$ is the Bayes factor with $M_{1}$ in the numerator, under the assumption that $M_{1}$ is true one can prove that as $N\to\infty$, $B_{1,2}$ approaches $\infty$ at a rate that depends on the difference in model complexity, and that the Bayes factor favors the simpler model. More specifically, you can show that under all of the above assumptions, $$ B_{1,2} = \mathcal{O}(N^{\frac{1}{2}(d_{2}-d_{1})}) $$ I'm familiar with this derivation and the discussion from the book Finite Mixture and Markov Switching Models by Sylvia Frühwirth-Schnatter, but there are likely more directly statistical accounts that dive more into the epistemology underlying it. I don't know the details well enough to give them here, but I believe there are some fairly deep theoretical connections between this and the derivation of AIC. The Information Theory book by Cover and Thomas hinted at this at least. Also, what are the philosophical differences between the likelihood ratio and the Bayes factor (n.b. I'm not asking about the philosophical differences between the likelihood ratio and Bayesian methods in general, but the Bayes factor as a representation of the objective evidence specifically). How would one go about characterizing the meaning of the Bayes factor as compared to the likelihood ratio? The Wikipedia article's section on "Interpretation" does a good job of discussing this (especially the chart showing Jeffreys' strength of evidence scale). Like usual, there's not too much philosophical stuff beyond the basic differences between Bayesian methods and frequentist methods (which you seem already familiar with). The main thing is that the likelihood ratio is not coherent in a Dutch book sense. You can concoct scenarios where the model selection inference from likelihood ratios will lead one to accept losing bets. The Bayesian method is coherent, but operates on a prior which could be extremely poor and has to be chosen subjectively. Tradeoffs.. tradeoffs... FWIW, I think this kind of heavily parameterized model selection is not very good inference. I prefer Bayesian methods and I prefer to organize them more hierarchically, and I want the inference to center on the full posterior distribution if it is at all computationally feasible to do so. I think Bayes factors have some neat mathematical properties, but as a Bayesian myself, I am not impressed by them. They conceal the really useful part of Bayesian analysis, which is that it forces you to deal with your priors out in the open instead of sweeping them under the rug, and allows you to do inference on full posteriors.
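To make the computational point above concrete, here is a minimal R sketch (a made-up toy comparison of my own, not from the original discussion): the marginal likelihood of a model with one free parameter is approximated by simple Monte Carlo over its prior, and the resulting Bayes factor compares it with a fixed null model. Real applications would use MCMC or smarter marginal-likelihood estimators, but the structure of the calculation is the same.
set.seed(42)
x <- rnorm(50, mean = 1, sd = 1)                 # observed data (toy)
# M1: x ~ N(mu, 1) with prior mu ~ N(0, 1);  M2: x ~ N(0, 1), no free parameter.
S <- 2e4
mu <- rnorm(S, 0, 1)                             # draws from the prior of M1
loglik <- sapply(mu, function(m) sum(dnorm(x, m, 1, log = TRUE)))
# log p(data | M1): average the likelihood over the prior (log-sum-exp for stability)
logml1 <- max(loglik) + log(mean(exp(loglik - max(loglik))))
logml2 <- sum(dnorm(x, 0, 1, log = TRUE))        # M2 has nothing to integrate out
bf12 <- exp(logml1 - logml2)                     # Bayes factor in favour of M1
With data generated as above, bf12 should come out large, since M1 can place its parameter near the true mean while M2 cannot.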
Likelihood ratio vs Bayes Factor
apparently the Bayes factor somehow uses likelihoods that represent the likelihood of each model integrated over it's entire parameter space (i.e. not just at the MLE). How is this integration actuall
Likelihood ratio vs Bayes Factor

Apparently the Bayes factor somehow uses likelihoods that represent the likelihood of each model integrated over its entire parameter space (i.e. not just at the MLE). How is this integration actually achieved typically? Does one really just try to calculate the likelihood at each of thousands (millions?) of random samples from the parameter space, or are there analytic methods for integrating the likelihood across the parameter space?

First, any situation where you consider a term such as $P(D|M)$ for data $D$ and model $M$ is considered a likelihood model. This is often the bread and butter of any statistical analysis, frequentist or Bayesian, and this is the portion that your analysis is meant to suggest is either a good fit or a bad fit. So Bayes factors are not doing anything fundamentally different than likelihood ratios.

It's important to put Bayes factors in their right setting. When you have two models, say, and you convert from probabilities to odds, then Bayes factors act like an operator on prior beliefs: $$ \text{Posterior Odds} = \text{Bayes Factor} \times \text{Prior Odds} $$ $$ \frac{P(M_{1}|D)}{P(M_{2}|D)} = B.F. \times \frac{P(M_{1})}{P(M_{2})} $$

The real difference is that likelihood ratios are cheaper to compute and generally conceptually easier to specify. The likelihoods at the MLEs are just point estimates of the Bayes factor's numerator and denominator, respectively. Like most frequentist constructions, it can be viewed as a special case of Bayesian analysis with a contrived prior that's hard to get at. But mostly it arose because it's analytically tractable and easier to compute (in the era before approximate Bayesian computational approaches arose).

To the point on computation, yes: you will evaluate the different likelihood integrals in the Bayesian setting with a large-scale Monte Carlo procedure in almost any case of practical interest. There are some specialized simulators, such as GHK, that work if you assume certain distributions, and if you make these assumptions, sometimes you can find analytically tractable problems for which fully analytic Bayes factors exist. But no one uses these; there is no reason to. With optimized Metropolis/Gibbs samplers and other MCMC methods, it's totally tractable to approach these problems in a fully data-driven way and compute your integrals numerically. In fact, one will often do this hierarchically and further integrate the results over meta-priors that relate to data collection mechanisms, non-ignorable experimental designs, etc.

I recommend the book Bayesian Data Analysis for more on this, although its author, Andrew Gelman, seems not to care too much for Bayes factors. As an aside, I agree with Gelman. If you're going to go Bayesian, then exploit the full posterior. Doing model selection with Bayesian methods is like handicapping them, because model selection is a weak and mostly useless form of inference. I'd rather know distributions over model choices if I can... who cares about quantizing it down to "model A is better than model B" sorts of statements when you do not have to?

Additionally, when computing the Bayes factor, does one apply correction for complexity (automatically via cross-validated estimation of likelihood or analytically via AIC) as one does with the likelihood ratio?

This is one of the nice things about Bayesian methods. Bayes factors automatically account for model complexity in a technical sense. You can set up a simple scenario with two models, $M_{1}$ and $M_{2}$ with assumed model complexities $d_{1}$ and $d_{2}$, respectively, with $d_{1} < d_{2}$ and a sample size $N$. Then if $B_{1,2}$ is the Bayes factor with $M_{1}$ in the numerator, under the assumption that $M_{1}$ is true one can prove that as $N\to\infty$, $B_{1,2}$ approaches $\infty$ at a rate that depends on the difference in model complexity, and that the Bayes factor favors the simpler model. More specifically, you can show that under all of the above assumptions, $$ B_{1,2} = \mathcal{O}(N^{\frac{1}{2}(d_{2}-d_{1})}) $$ I'm familiar with this derivation and the discussion from the book Finite Mixture and Markov Switching Models by Sylvia Frühwirth-Schnatter, but there are likely more directly statistical accounts that dive more into the epistemology underlying it. I don't know the details well enough to give them here, but I believe there are some fairly deep theoretical connections between this and the derivation of AIC. The information theory book by Cover and Thomas hinted at this at least.

Also, what are the philosophical differences between the likelihood ratio and the Bayes factor (n.b. I'm not asking about the philosophical differences between the likelihood ratio and Bayesian methods in general, but the Bayes factor as a representation of the objective evidence specifically). How would one go about characterizing the meaning of the Bayes factor as compared to the likelihood ratio?

The Wikipedia article's section on "Interpretation" does a good job of discussing this (especially the chart showing Jeffreys' strength of evidence scale). Like usual, there's not too much philosophical stuff beyond the basic differences between Bayesian methods and frequentist methods (which you seem already familiar with). The main thing is that the likelihood ratio is not coherent in a Dutch book sense. You can concoct scenarios where the model selection inference from likelihood ratios will lead one to accept losing bets. The Bayesian method is coherent, but operates on a prior which could be extremely poor and has to be chosen subjectively. Tradeoffs... tradeoffs...

FWIW, I think this kind of heavily parameterized model selection is not very good inference. I prefer Bayesian methods and I prefer to organize them more hierarchically, and I want the inference to center on the full posterior distribution if it is at all computationally feasible to do so. I think Bayes factors have some neat mathematical properties, but as a Bayesian myself, I am not impressed by them. They conceal the really useful part of Bayesian analysis, which is that it forces you to deal with your priors out in the open instead of sweeping them under the rug, and allows you to do inference on full posteriors.
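To make the computational point concrete, here is a minimal sketch (not from the original answer) of the brute-force Monte Carlo approach: average the likelihood over draws from each model's prior and compare the two evidence estimates. The conjugate Gaussian setup, the prior widths, and every variable name are illustrative choices for the demo, not anything prescribed above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data: n observations from a Gaussian with known sigma = 1.
n, sigma = 50, 1.0
y = rng.normal(loc=0.3, scale=sigma, size=n)

def log_evidence_mc(y, prior_sd, n_draws=100_000):
    """Crude Monte Carlo estimate of log p(y | M): average the likelihood
    over draws of the mean parameter from its prior N(0, prior_sd^2)."""
    theta = rng.normal(0.0, prior_sd, size=n_draws)
    # log-likelihood of the whole sample for each prior draw
    loglik = stats.norm.logpdf(y[:, None], loc=theta, scale=sigma).sum(axis=0)
    # log-mean-exp for numerical stability
    m = loglik.max()
    return m + np.log(np.mean(np.exp(loglik - m)))

# Two competing models that differ only in how diffuse the prior is.
logZ1 = log_evidence_mc(y, prior_sd=1.0)    # M1: moderately informative prior
logZ2 = log_evidence_mc(y, prior_sd=100.0)  # M2: needlessly diffuse prior

print("log Bayes factor (M1 vs M2):", logZ1 - logZ2)  # positive: favours M1

# Contrast with the likelihood ratio at the MLE: both models share the same
# likelihood function, so the maximised log-likelihood (at theta-hat = ybar)
# is identical under M1 and M2 and the likelihood ratio is exactly 1, so it
# cannot penalise the diffuse prior, while the Bayes factor does.
loglik_mle = stats.norm.logpdf(y, loc=y.mean(), scale=sigma).sum()
print("maximised log-likelihood (same for both models):", loglik_mle)
```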
2,462
Likelihood ratio vs Bayes Factor
In understanding the difference between likelihood ratios and Bayes factors, it is useful to consider one key feature of Bayes factors in more detail: how do Bayes factors manage to automatically account for the complexity of the underlying models?

One perspective on this question is to consider methods for deterministic approximate inference. Variational Bayes is one such method. It not only can dramatically reduce the computational cost compared with stochastic approximations (e.g., MCMC sampling); it also provides an intuitive understanding of what makes up a Bayes factor.

Recall first that a Bayes factor is based on the model evidences of two competing models, \begin{align} BF_{1,2} = \frac{p(\textrm{data} \mid M_1)}{p(\textrm{data} \mid M_2)}, \end{align} where the individual model evidences would have to be computed by a complicated integral: \begin{align} p(\textrm{data} \mid M_i) = \int p(\textrm{data} \mid \theta,M_i ) \ p(\theta \mid M_i) \ \textrm{d}\theta \end{align} This integral is not only needed to compute a Bayes factor; it is also needed for inference on the parameters themselves, i.e., when computing $p(\theta \mid \textrm{data}, M_i)$.

A fixed-form variational Bayes approach addresses this problem by making a distributional assumption about the conditional posteriors (e.g., a Gaussian assumption). This turns a difficult integration problem into a much easier optimisation problem: the problem of finding the moments of an approximate density $q(\theta)$ that is maximally similar to the true, but unknown, posterior $p(\theta \mid \textrm{data},M_i)$. Variational calculus tells us that this can be achieved by maximising the so-called negative free-energy $\mathcal{F}$, which is directly related to the log model evidence: \begin{align} \mathcal{F} = \textrm{log} \; p(\textrm{data} \mid M_i) - \textrm{KL}\left[q(\theta) \; || \; p(\theta \mid \textrm{data},M_i) \right] \end{align} From this you can see that maximising the negative free-energy not only provides us with an approximate posterior $q(\theta) \approx p(\theta \mid \textrm{data},M_i)$. Because the Kullback-Leibler divergence is non-negative, $\mathcal{F}$ also provides a lower bound on the (log) model evidence itself.

We can now return to the original question of how a Bayes factor automatically balances goodness of fit and complexity of the involved models. It turns out that the negative free-energy can be rewritten as follows: \begin{align} \mathcal{F} = \left\langle \textrm{log} \; p(\textrm{data} \mid \theta,M_i) \right\rangle_q - \textrm{KL}\left[ q(\theta) \; || \; p(\theta \mid M_i) \right] \end{align} The first term is the log-likelihood of the data expected under the approximate posterior; it represents the goodness of fit (or accuracy) of the model. The second term is the KL divergence between the approximate posterior and the prior; it represents the complexity of the model, under the view that a simpler model is one which is more consistent with our prior beliefs, or under the view that a simpler model does not have to be stretched as much to accommodate the data.

The free-energy approximation to the log model evidence shows that the model evidence incorporates a trade-off between modelling the data (i.e., goodness of fit) and remaining consistent with our prior (i.e., simplicity or negative complexity). A Bayes factor (in contrast to a likelihood ratio) thus says which of two competing models is better at providing a simple yet accurate explanation of the data.
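As a small, hedged illustration of the accuracy-minus-complexity decomposition, here is a numerical sketch for a conjugate Gaussian model. Because the exact posterior is Gaussian in this case, taking $q(\theta)$ equal to it makes the KL term to the posterior vanish, so $\mathcal{F}$ should reproduce the log model evidence exactly; all numbers and variable names below are arbitrary choices for the demo, not part of the answer above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Conjugate Gaussian model: y_i ~ N(theta, sigma^2), prior theta ~ N(mu0, tau0^2).
n, sigma, mu0, tau0 = 30, 1.0, 0.0, 2.0
y = rng.normal(0.5, sigma, size=n)

# Exact Gaussian posterior; use it as q(theta), so KL[q || posterior] = 0
# and the negative free energy equals the log evidence exactly.
post_prec = 1.0 / tau0**2 + n / sigma**2
s2 = 1.0 / post_prec                               # posterior variance
m = s2 * (mu0 / tau0**2 + y.sum() / sigma**2)      # posterior mean

# Accuracy: expected log-likelihood under q(theta) = N(m, s2),
# using E_q[(y_i - theta)^2] = (y_i - m)^2 + s2.
accuracy = np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - ((y - m) ** 2 + s2) / (2 * sigma**2))

# Complexity: KL[ q(theta) || p(theta) ] between two univariate Gaussians.
complexity = (np.log(tau0 / np.sqrt(s2))
              + (s2 + (m - mu0) ** 2) / (2 * tau0**2) - 0.5)

F = accuracy - complexity

# Exact log evidence: y ~ N(mu0 * 1, sigma^2 I + tau0^2 * ones).
cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
log_evidence = stats.multivariate_normal(mean=np.full(n, mu0), cov=cov).logpdf(y)

print(F, log_evidence)   # the two numbers agree up to floating-point error
```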
2,463
Central limit theorem for sample medians
If you work in terms of indicator variables (i.e. $Z_i = 1$ if $X_i \leq x$ and $0$ otherwise), you can directly apply the Central Limit Theorem to a mean of $Z$'s, and by using the Delta method, turn that into an asymptotic normal distribution for $F_X^{-1}(\bar{Z})$, which in turn means that you get asymptotic normality for fixed quantiles of $X$. So this covers not just the median, but quartiles, 90th percentiles, and so on.

Loosely, if we're talking about the $q$th sample quantile in sufficiently large samples, we get that it will approximately have a normal distribution with mean the $q$th population quantile $x_q$ and variance $q(1-q)/(nf_X(x_q)^2)$. Hence for the median ($q = 1/2$), the variance in sufficiently large samples will be approximately $1/(4nf_X(\tilde{\mu})^2)$.

You need all the conditions along the way to hold, of course, so it doesn't work in all situations, but it does for continuous distributions whose density is positive and continuous at the population quantile. Further, it doesn't hold for extreme quantiles, because the CLT doesn't kick in there (the average of the $Z$'s won't be asymptotically normal); you need different theory for extreme values.

Edit: whuber's critique is correct; this would work if $x$ were a population median rather than a sample median. The argument needs to be modified to actually work properly.
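The limiting variance $1/(4nf_X(\tilde{\mu})^2)$ is easy to check by simulation. A quick sketch, not part of the original argument; the standard normal example, sample size, and seed are arbitrary choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Sample medians of n standard normal observations, repeated many times.
n, reps = 401, 10_000
medians = np.median(rng.normal(size=(reps, n)), axis=1)

# Asymptotic standard deviation: 1 / (2 * f(m) * sqrt(n)), with median m = 0.
f_m = stats.norm.pdf(0.0)                 # density at the population median
asymptotic_sd = 1.0 / (2.0 * f_m * np.sqrt(n))

print("empirical sd :", medians.std())
print("asymptotic sd:", asymptotic_sd)    # = sqrt(pi / (2 n)), about 0.063 here
```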
2,464
Central limit theorem for sample medians
The key idea is that the sampling distribution of the median is simple to express in terms of the distribution function but more complicated to express in terms of the median value. Once we understand how the distribution function can re-express values as probabilities and back again, it is easy to derive the exact sampling distribution of the median. A little analysis of the behavior of the distribution function near its median is needed to show that this is asymptotically Normal. (The same analysis works for the sampling distribution of any quantile, not just the median.) I will make no attempt to be rigorous in this exposition, but I do carry it out in steps that are readily justified in a rigorous manner if you have a mind to do that.

Intuition

These are snapshots of a box containing 70 atoms of a hot atomic gas: In each image I have found a location, shown as a red vertical line, that splits the atoms into two equal groups between the left (drawn as black dots) and right (white dots). This is a median of the positions: 35 of the atoms lie to its left and 35 to its right. The medians change because the atoms are moving randomly around the box. We are interested in the distribution of this middle position.

Such a question is answered by reversing my procedure: let's first draw a vertical line somewhere, say at location $x$. What is the chance that half the atoms will be to the left of $x$ and half to its right? The atoms at the left individually had chances of $x$ to be at the left. The atoms at the right individually had chances of $1-x$ to be at the right. Assuming their positions are statistically independent, the chances multiply, giving $x^{35}(1-x)^{35}$ for the chance of this particular configuration. An equivalent configuration could be attained for a different split of the $70$ atoms into two $35$-element pieces. Adding these numbers for all possible such splits gives a chance of $${\Pr}(x\text{ is a median}) = C x^{n/2} (1-x)^{n/2}$$ where $n$ is the total number of atoms and $C$ is proportional to the number of splits of $n$ atoms into two equal subgroups. This formula identifies the distribution of the median as a Beta$(n/2+1, n/2+1)$ distribution.

Now consider a box with a more complicated shape: Once again the medians vary. Because the box is low near the center, there isn't much of its volume there: a small change in the volume occupied by the left half of the atoms (the black ones once again)--or, we might as well admit, the area to the left as shown in these figures--corresponds to a relatively large change in the horizontal position of the median. In fact, because the area subtended by a small horizontal section of the box is proportional to the height there, the changes in the medians are divided by the box's height. This causes the median to be more variable for this box than for the square box, because this one is so much lower in the middle. In short, when we measure the position of the median in terms of area (to the left and right), the original analysis (for a square box) stands unchanged. The shape of the box only complicates the distribution if we insist on measuring the median in terms of its horizontal position. When we do so, the relationship between the area and position representation is inversely proportional to the height of the box.

There is more to learn from these pictures. It is clear that when few atoms are in (either) box, there is a greater chance that half of them could accidentally wind up clustered far to either side. As the number of atoms grows, the potential for such an extreme imbalance decreases. To track this, I took "movies"--a long series of 5000 frames--for the curved box filled with $3$, then with $15$, then $75$, and finally with $375$ atoms, and noted the medians. Here are histograms of the median positions: Clearly, for a sufficiently large number of atoms, the distribution of their median position begins to look bell-shaped and grows narrower: that looks like a Central Limit Theorem result, doesn't it?

Quantitative Results

The "box," of course, depicts the probability density of some distribution: its top is the graph of the density function (PDF). Thus areas represent probabilities. Placing $n$ points randomly and independently within a box and observing their horizontal positions is one way to draw a sample from the distribution. (This is the idea behind rejection sampling.) The next figure connects these ideas. This looks complicated, but it's really quite simple. There are four related plots here:

The top plot shows the PDF of a distribution along with one random sample of size $n$. Values greater than the median are shown as white dots; values less than the median as black dots. It does not need a vertical scale because we know the total area is unity.

The middle plot is the cumulative distribution function for the same distribution: it uses height to denote probability. It shares its horizontal axis with the first plot. Its vertical axis must go from $0$ to $1$ because it represents probabilities.

The left plot is meant to be read sideways: it is the PDF of the Beta$(n/2+1, n/2+1)$ distribution. It shows how the median in the box will vary, when the median is measured in terms of areas to the left and right of the middle (rather than measured by its horizontal position). I have drawn $16$ random points from this PDF, as shown, and connected them with horizontal dashed lines to the corresponding locations on the original CDF: this is how areas (measured at the left) are converted to positions (measured across the top, center, and bottom graphics). One of these points actually corresponds to the median shown in the top plot; I have drawn a solid vertical line to show that.

The bottom plot is the sampling density of the median, as measured by its horizontal position. It is obtained by converting area (in the left plot) to position. The conversion formula is given by the inverse of the original CDF: this is simply the definition of the inverse CDF! (In other words, the CDF converts position into area to the left; the inverse CDF converts back from area to position.) I have plotted vertical dashed lines showing how the random points from the left plot are converted into random points within the bottom plot. This process of reading across and then down tells us how to go from area to position.

Let $F$ be the CDF of the original distribution (middle plot) and $G$ the CDF of the Beta distribution. To find the chance that the median lies to the left of some position $x$, first use $F$ to obtain the area to the left of $x$ in the box: this is $F(x)$ itself. The Beta distribution at the left tells us the chance that half the atoms will lie within this area, giving $G(F(x))$: this is the CDF of the median position. To find its PDF (as shown in the bottom plot), take the derivative: $$\frac{d}{dx}G(F(x)) = G'(F(x))F'(x) = g(F(x))f(x)$$ where $f$ is the PDF (top plot) and $g$ is the Beta PDF (left plot). This is an exact formula for the distribution of the median for any continuous distribution. (With some care in interpretation it can be applied to any distribution whatsoever, whether continuous or not.)

Asymptotic Results

When $n$ is very large and $F$ does not have a jump at its median, the sample median must vary closely around the true median $\mu$ of the distribution. Also assuming the PDF $f$ is continuous near $\mu$, $f(x)$ in the preceding formula will not change much from its value at $\mu,$ given by $f(\mu).$ Moreover, $F$ will not change much from its value there either: to first order, $$F(x) = F\left(\mu + (x-\mu)\right) \approx F(\mu) + F^\prime(\mu)(x-\mu) = 1/2 + f(\mu)(x-\mu).$$ Thus, with an ever-improving approximation as $n$ grows large, $$g(F(x))f(x) \approx g\left(1/2 + f(\mu)(x-\mu)\right) f(\mu).$$ That is merely a shift of the location and scale of the Beta distribution. The rescaling by $f(\mu)$ will divide its variance by $f(\mu)^2$ (which had better be nonzero!). Incidentally, the variance of Beta$(n/2+1, n/2+1)$ is very close to $1/(4n)$. This analysis can be viewed as an application of the Delta Method.

Finally, Beta$(n/2+1, n/2+1)$ is approximately Normal for large $n$. There are many ways to see this; perhaps the simplest is to look at the logarithm of its PDF near $1/2$: $$\log\left(C(1/2 + x)^{n/2}(1/2-x)^{n/2}\right) = \frac{n}{2}\log\left(1-4x^2\right) + C' = C'-2nx^2 +O(x^4).$$ (The constants $C$ and $C'$ merely normalize the total area to unity.) Through third order in $x,$ then, this is the same as the log of the Normal PDF with variance $1/(4n).$ (This argument is made rigorous by using characteristic or cumulant generating functions instead of the log of the PDF.)

Putting this all together, we conclude that the distribution of the sample median has variance approximately $1/(4 n f(\mu)^2)$, and it is approximately Normal for large $n$, all provided the PDF $f$ is continuous and nonzero at the median $\mu.$
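Here is a small numerical sketch of the composition $g(F(x))\,f(x)$ for an exponential distribution; the distribution, sample size, and seed are my own illustrative choices. For odd $n = 2k+1$ the exact order-statistic density uses Beta$(k+1, k+1)$, which differs from the Beta$(n/2+1, n/2+1)$ used above only at order $1/n$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Sampling density of the median of n exponential draws via g(F(x)) * f(x).
n = 101                       # odd, so the sample median is a single order statistic
k = (n - 1) // 2
F, f = stats.expon.cdf, stats.expon.pdf
g = stats.beta(k + 1, k + 1).pdf

x = np.linspace(0.3, 1.2, 400)
median_pdf = g(F(x)) * f(x)   # analytic density of the sample median

# Brute-force check: simulate many sample medians and compare a histogram
# against the analytic curve; both are centred near log 2 ~ 0.693.
sim = np.median(rng.exponential(size=(20_000, n)), axis=1)
hist, edges = np.histogram(sim, bins=40, range=(0.3, 1.2), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
print(np.abs(hist - g(F(centres)) * f(centres)).max())   # small relative to the peak
```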
2,465
Central limit theorem for sample medians
@EngrStudent's illuminating answer tells us that we should expect different results when the distribution is continuous and when it is discrete (the "red" graphs, where the asymptotic distribution of the sample median fails spectacularly to look like normal, correspond to the distributions Binomial(3), Geometric(11), Hypergeometric(12), Negative Binomial(14), Poisson(18), Discrete Uniform(22)). And indeed this is the case. When the distribution is discrete, things get complicated. I will provide the proof for the absolutely continuous case, essentially doing no more than detailing the answer already given by @Glen_b, and then I will discuss a bit what happens when the distribution is discrete, providing also a recent reference for anyone interested in diving in.

ABSOLUTELY CONTINUOUS DISTRIBUTION

Consider a collection of i.i.d. absolutely continuous random variables $\{X_1,\ldots,X_n\}$ with distribution function (cdf) $F_X(x) = P(X_i\le x)$ and density function $F'_X(x)=f_X(x)$. Define $Z_i\equiv I\{X_i\le x\}$ where $I\{\}$ is the indicator function. Therefore $Z_i$ is a Bernoulli r.v., with $$E(Z_i) = E\left(I\{X_i\le x\}\right) = P(X_i\le x)=F_X(x),\;\; \text{Var}(Z_i) = F_X(x)[1-F_X(x)],\;\; \forall i$$

Let $Y_n(x)$ be the sample mean of these i.i.d. Bernoullis, defined for fixed $x$ as $$Y_n(x) = \frac 1n\sum_{i=1}^nZ_i$$ which means that $$E[Y_n(x)] = F_X(x),\;\; \text{Var}(Y_n(x)) = (1/n)F_X(x)[1-F_X(x)]$$ The Central Limit Theorem applies and we have $$\sqrt n\Big(Y_n(x) - F_X(x)\Big) \rightarrow_d \mathbb N\left(0,F_X(x)[1-F_X(x)]\right) $$ Note that $Y_n(x) = \hat F_n(x)$, i.e. none other than the empirical distribution function.

By applying the Delta Method we have that, for a continuous and differentiable function $g(t)$ with non-zero derivative $g'(t)$ at the point of interest, we obtain $$\sqrt n\Big(g[\hat F_n(x)] - g[F_X(x)]\Big) \rightarrow_d \mathbb N\left(0,F_X(x)[1-F_X(x)]\cdot\left(g'[F_X(x)]\right)^2\right) $$ Now, choose $g(t) \equiv F^{-1}_X(t),\;\; t\in (0,1)$ where $^{-1}$ denotes the inverse function. This is a continuous and differentiable function (since $F_X(x)$ is), and by the Inverse Function Theorem we have $$g'(t)=\frac {d}{dt}F^{-1}_X(t) = \frac 1{f_X\left(F^{-1}_X(t)\right)}$$

Inserting these results on $g$ into the delta-method asymptotic result, we have $$\sqrt n\Big(F^{-1}_X(\hat F_n(x)) - F^{-1}_X(F_X(x))\Big) \rightarrow_d \mathbb N\left(0,\frac {F_X(x)[1-F_X(x)]}{\left[f_X\left(F^{-1}_X(F_X(x))\right)\right]^2} \right) $$ and simplifying, $$\sqrt n\Big(F^{-1}_X(\hat F_n(x)) - x\Big) \rightarrow_d \mathbb N\left(0,\frac {F_X(x)[1-F_X(x)]}{\left[f_X(x)\right]^2} \right) $$ ... for any fixed $x$.

Now set $x=m$, the (true) median of the population. Then we have $F_X(m) = 1/2$ and the above general result becomes, for our case of interest, $$\sqrt n\Big(F^{-1}_X(\hat F_n(m)) - m\Big) \rightarrow_d \mathbb N\left(0,\frac {1}{\left[2f_X(m)\right]^2} \right) $$ But $F^{-1}_X(\hat F_n(m))$ converges to the sample median $\hat m$. This is because $$F^{-1}_X(\hat F_n(m)) = \inf\{x : F_X(x) \geq \hat F_n(m)\} = \inf\{x : F_X(x) \geq \frac 1n \sum_{i=1}^n I\{X_i\leq m\}\}$$ The right-hand side of the inequality converges to $1/2$, and the smallest $x$ for which eventually $F_X(x) \geq 1/2$ is the sample median. So we obtain $$\sqrt n\Big(\hat m - m\Big) \rightarrow_d \mathbb N\left(0,\frac {1}{\left[2f_X(m)\right]^2} \right) $$ which is the Central Limit Theorem for the sample median for absolutely continuous distributions.
DISCRETE DISTRIBUTIONS

When the distribution is discrete (or when the sample contains ties), it has been argued that the "classical" definition of sample quantiles, and hence of the median also, may be misleading in the first place as the theoretical concept with which to measure what one attempts to measure by quantiles. In any case, simulations show that under this classical definition (the one we all know), the asymptotic distribution of the sample median is non-normal and discrete.

An alternative definition of sample quantiles uses the concept of the "mid-distribution" function, which is defined as $$F_{mid}(x) = P(X\le x) - \frac 12P(X=x)$$ Defining sample quantiles through the mid-distribution function can be seen as a generalization that covers the continuous distributions as a special case, but also the not-so-continuous ones. For the case of discrete distributions, among other results, it has been found that the sample median as defined through this concept has an asymptotically normal distribution with an elaborate-looking variance.

Most of these are recent results. The reference is Ma, Y., Genton, M. G., & Parzen, E. (2011). Asymptotic properties of sample quantiles of discrete distributions. Annals of the Institute of Statistical Mathematics, 63(2), 227-243, where one can find a discussion and links to the older relevant literature.
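To see the discrete pathology concretely, here is a brief sketch (my own illustration, not from the paper) that simulates sample medians from a Geometric distribution and evaluates the mid-distribution function near the population median; the parameter choices are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Discrete case: sample medians of Geometric(p = 0.03) data.  Even for large n
# the sample median only takes a few integer values, so it cannot be
# asymptotically normal in the usual sense.
n, reps = 1001, 5_000
sim = np.median(rng.geometric(0.03, size=(reps, n)), axis=1)
values, counts = np.unique(sim, return_counts=True)
print(dict(zip(values.astype(int).tolist(), (counts / reps).round(3).tolist())))

# Mid-distribution function F_mid(x) = P(X <= x) - 0.5 * P(X = x),
# evaluated around the population median (23 for this parameterisation).
x = np.arange(20, 27)
F_mid = stats.geom.cdf(x, 0.03) - 0.5 * stats.geom.pmf(x, 0.03)
print(dict(zip(x.tolist(), F_mid.round(4).tolist())))
```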
2,466
Central limit theorem for sample medians
I like the analytic answer given by Glen_b. It is a good answer. It needs a picture. I like pictures.

Here are areas of elasticity in an answer to the question:
- There are lots of distributions in the world. Mileage is likely to vary.
- "Sufficient" has different meanings. For a counter-example to a theory, sometimes a single counter-example is required for "sufficient" to be met. For demonstration of low defect rates using binomial uncertainty, hundreds or thousands of samples may be required.

For a standard normal I used the following MatLab code:

    mysamples=1000;
    loops=10000;
    y1=median(normrnd(0,1,mysamples,loops));
    cdfplot(y1)

and I got the following plot as an output:

So why not do this for the other 22 or so "built-in" distributions, except using prob-plots (where a straight line means very normal-like)? And here is the source code for it:

    mysamples=1000;
    loops=600;

    y=zeros(loops,23);
    y(:,1)=median(random('Normal', 0,1,mysamples,loops));
    y(:,2)=median(random('beta', 5,0.2,mysamples,loops));
    y(:,3)=median(random('bino', 10,0.5,mysamples,loops));
    y(:,4)=median(random('chi2', 10,mysamples,loops));
    y(:,5)=median(random('exp', 700,mysamples,loops));
    y(:,6)=median(random('ev', 700,mysamples,loops));
    y(:,7)=median(random('f', 5,3,mysamples,loops));
    y(:,8)=median(random('gam', 10,5,mysamples,loops));
    y(:,9)=median(random('gev', 0.24, 1.17, 5.8,mysamples,loops));
    y(:,10)=median(random('gp', 0.12, 0.81,mysamples,loops));
    y(:,11)=median(random('geo', 0.03,mysamples,loops));
    y(:,12)=median(random('hyge', 1000,50,20,mysamples,loops));
    y(:,13)=median(random('logn', log(20000),1.0,mysamples,loops));
    y(:,14)=median(random('nbin', 2,0.11,mysamples,loops));
    y(:,15)=median(random('ncf', 5,20,10,mysamples,loops));
    y(:,16)=median(random('nct', 10,1,mysamples,loops));
    y(:,17)=median(random('ncx2', 4,2,mysamples,loops));
    y(:,18)=median(random('poiss', 5,mysamples,loops));
    y(:,19)=median(random('rayl', 0.5,mysamples,loops));
    y(:,20)=median(random('t', 5,mysamples,loops));
    y(:,21)=median(random('unif',0,1,mysamples,loops));
    y(:,22)=median(random('unid', 5,mysamples,loops));
    y(:,23)=median(random('wbl', 0.5,2,mysamples,loops));

    figure(1); clf
    hold on
    for i=2:23
        subplot(4,6,i-1)
        probplot(y(:,i))
        title(['Probplot of ' num2str(i)])
        axis tight
        if not(isempty(find(i==[3,11,12,14,18,22])))
            set(gca,'Color','r')
        end
    end

When I see the analytic proof I might think "in theory they all might fit", but when I try it out I can temper that with "there are a number of ways this doesn't work so well, often involving discrete or highly constrained values", and this might make me want to be more careful about applying the theory to anything that costs money. Good luck.
2,467
Central limit theorem for sample medians
Yes it is, and not just for the median, but for any sample quantile. Copying from a paper by T.S. Ferguson, a professor at UCLA, which interestingly deals with the joint asymptotic distribution of the sample mean and sample quantiles, we have:

Let $X_1, \ldots, X_n$ be i.i.d. with distribution function $F(x)$, density $f(x)$, mean $\mu$ and finite variance $\sigma^2$. Let $0 < p < 1$ and let $x_p$ denote the $p$-th quantile of $F$, so that $F(x_p) = p$. Assume that the density $f(x)$ is continuous and positive at $x_p$. Let $Y_n = X_{(n:\lceil np\rceil)}$ denote the sample $p$-th quantile. Then $$\sqrt n(Y_n - x_p) \xrightarrow{d} N(0, p(1-p)/(f(x_p))^2)$$

For $p=1/2 \Rightarrow x_p=m$ (median), and you have the CLT for medians, $$\sqrt n(Y_n - m) \xrightarrow{d} N\left(0, [2f(m)]^{-2}\right)$$
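A quick empirical check of Ferguson's statement, with an exponential distribution and $p = 0.9$ chosen purely for illustration (in that case $f(x_p) = 1-p$, so the limiting variance works out to $p/(1-p)$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Check: sqrt(n) * (Y_n - x_p) ~ N(0, p(1-p)/f(x_p)^2) for large n.
p, n, reps = 0.9, 2000, 5_000
x_p = stats.expon.ppf(p)          # population quantile, equals -log(1 - p)
f_xp = stats.expon.pdf(x_p)       # density at that quantile, equals 1 - p

sample_q = np.quantile(rng.exponential(size=(reps, n)), p, axis=1)
z = np.sqrt(n) * (sample_q - x_p)

print("empirical variance  :", z.var())
print("theoretical variance:", p * (1 - p) / f_xp**2)   # close to p/(1-p) = 9 here
```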
2,468
How exactly did statisticians agree to using (n-1) as the unbiased estimator for population variance without simulation?
The correction is called Bessel's correction, and it has a mathematical proof. Personally, I was taught it the easy way: using $n-1$ is how you correct the bias of $\frac{1}{n}\sum_1^n(x_i - \bar x)^2$ as an estimator of the population variance. You can also explain the correction based on the concept of degrees of freedom; simulation isn't strictly needed.
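Even though simulation isn't needed for the proof, a few lines of code make the bias visible. This sketch (sample size, distribution, and seed are arbitrary choices) just checks that the expected sum of squared deviations is about $(n-1)\sigma^2$ rather than $n\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(6)

# For samples of size n from a distribution with variance sigma^2 = 1,
# the average of sum((x - xbar)^2) is close to (n - 1), not n.
n, reps = 5, 200_000
x = rng.normal(size=(reps, n))                           # true variance is 1
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

print("mean of sum of squares:", ss.mean())              # about n - 1 = 4
print("mean of ss / n        :", (ss / n).mean())        # biased low, about 0.8
print("mean of ss / (n - 1)  :", (ss / (n - 1)).mean())  # about 1, unbiased
```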
2,469
How exactly did statisticians agree to using (n-1) as the unbiased estimator for population variance without simulation?
Most proofs I have seen are simple enough that Gauss (however he did it) probably found it pretty easy to prove. I've been looking for a derivation on CV that I could link you to (there are a number of links to proofs off-site, including at least one in answers here), but I haven't found one here on CV in a couple of searches, so for the sake of completeness, I'll give a simple one. Given its simplicity, it's easy to see how people would start to use what's usually called Bessel's correction. This takes $E(X^2)=\text{Var}(X) + E(X)^2$ as assumed knowledge, and assumes that the first few basic variance properties are known. \begin{eqnarray} E[\sum_{i=1}^{n} (x_i-\bar x)^2] &=& E[\sum_{i=1}^{n} x_i^2-2\bar x\sum_{i=1}^{n} x_i+n\bar{x}^2]\\ &=& E[\sum_{i=1}^{n} x_i^2-n\bar{x}^2] \\ &=& n E[x_i^2]- n E[\bar{x}^2]\\ &=& n (\mu^2 + \sigma^2) - n(\mu^2+\sigma^2/n)\\ &=& (n-1) \sigma^2 \end{eqnarray}
2,470
How exactly did statisticians agree to using (n-1) as the unbiased estimator for population variance without simulation?
According to Weisstein's World of Mathematics, it was first proved by Gauss in 1823. The reference is volume 4 of Gauss' Werke, which can be read at https://archive.org/details/werkecarlf04gausrich. The relevant pages seem to be 47-49. It seems that Gauss investigated the question and came up with a proof. I don't read Latin, but there is a German summary in the text. Pages 103-104 explain what he did (Edit: I added a rough translation): Allein da man nicht berechtigt ist, die sichersten Werthe fuer die wahren Werthe selbst zu halten, so ueberzeugt man sich leicht, dass man durch dieses Verfahren allemal den wahrscheinlichsten und mittleren Fehler zu klein finden muss, und daher die gegebenen Resultaten eine groessere Genauigkeit beilegt, als sie wirklich besitzen. [But since one is not entitled to treat the most probable values as though they were the actual values, one can easily convince oneself that one must always find that the most probable error and the average error are too small, and that therefore the given results possess a greater accuracy than they really have.] from which it would seem that it was well-known that the sample variance is a biased estimate of the population variance. The article goes on to say that the difference between the two is usually ignored because it's not important if the sample size is big enough. Then it says: Der Verfasser hat daher diesen Gegenstand eine besondere Untersuchung unterworfen, die zu einem sehr Merkwuerdigen hoechst einfachen Resultate gefuehrt hat. Man braucht nemlich den nach dem angezeigten fahlerhaften Verfahren gefundenen mittleren Fehler, um ihn in die richtigen zu verwandeln, nur mit $$\sqrt{\frac{\pi-\rho}{\pi}}$$ zu multiplicieren, wo $\pi$ die Anzahl der beobachtungen (number of observations) und $\rho$ die Anzahl der unbekannten groessen (number of unknowns) bedeutet. [The author has therefore made a special study of this object which has led to a very strange and extremely simple result. Namely, one needs only to multiply the average error found by the above erroneous process by (the given expression) to change it into the right one, where $\pi$ is the number of observations and $\rho$ is the number of unknown quantities.] So if this is indeed the first time that the correction was found, then it seems that it was found by a clever calculation by Gauss, but people were already aware that some correction was required, so perhaps someone else could have found it empirically before this. Or possibly previous authors didn't care to derive the precise answer because they were working with fairly large data sets anyway. Summary: manual, but people already knew that $n$ in the denominator wasn't quite right.
2,471
How exactly did statisticians agree to using (n-1) as the unbiased estimator for population variance without simulation?
For me one piece of intuition is that $$\begin{array}{c} \mbox{The degree to which}\\ X_{i}\mbox{ varies from }\bar{X} \end{array}+\begin{array}{c} \mbox{The degree to which}\\ \bar{X}\mbox{ varies from }\mu \end{array}=\begin{array}{c} \mbox{The degree to which }\\ X_{i}\mbox{ varies from }\mu. \end{array}$$ That is, $$\mathbf{E}\left[\left(X_{i}-\bar{X}\right)^{2}\right]+\mathbf{E}\left[\left(\bar{X}-\mu\right)^{2}\right]=\mathbf{E}\left[\left(X_{i}-\mu\right)^{2}\right].$$ Actually proving the above equation takes a bit of algebra (this algebra is very similar to @Glen_b's answer above). But assuming it is true, we can rearrange to get: $$\mathbf{E}\left[\left(X_{i}-\bar{X}\right)^{2}\right]=\underset{\sigma^{2}}{\underbrace{\mathbf{E}\left[\left(X_{i}-\mu\right)^{2}\right]}}-\underset{\frac{\sigma^{2}}{n}}{\underbrace{\mathbf{E}\left[\left(\bar{X}-\mu\right)^{2}\right]}}=\frac{n-1}{n}\sigma^2.$$ For me, another piece of intuition is that using $\bar{X}$ instead of $\mu$ introduces bias. And this bias is exactly equal to $\mathbf{E}\left[\left(\bar{X}-\mu\right)^{2}\right]=\frac{\sigma^2}{n}$.
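A quick Monte Carlo check of this decomposition may help make it concrete. This is only a sketch: the sample size $n=5$, the variance $\sigma^2=4$, and the number of replications are arbitrary illustrative choices, not part of the argument above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, reps = 5, 4.0, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
xbar = x.mean(axis=1, keepdims=True)

lhs1 = ((x - xbar) ** 2).mean()   # estimates E[(X_i - Xbar)^2]
lhs2 = (xbar ** 2).mean()         # estimates E[(Xbar - mu)^2], here mu = 0
rhs = (x ** 2).mean()             # estimates E[(X_i - mu)^2] = sigma^2

print(lhs1 + lhs2, rhs)                      # the two sides agree (~4.0)
print(lhs1, (n - 1) / n * sigma2)            # ~3.2: the naive average is biased
print(((x - xbar) ** 2).sum(1).mean() / (n - 1))  # ~4.0: dividing by n-1 fixes it
```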
2,472
How exactly did statisticians agree to using (n-1) as the unbiased estimator for population variance without simulation?
Most of the answers have already explained it elaborately, but here is one simple illustration that may help. Suppose you are given that $n=4$ and the first three numbers are $8, 4, 6, \_$ : the fourth number can be anything, since there is no constraint on it. Now consider the situation where you are given that $n=4$ and $\bar x=6$: if the first three numbers are $8, 4, 6$, then the fourth number has to be $6$. This is to say that if you know $n-1$ values and $\bar x$, then the $n$th value has no freedom left; only $n-1$ of the deviations from the mean are free to vary, which is the sense in which $n-1$ is the right divisor for an unbiased estimator (a tiny sketch of the constraint follows below).
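A trivial sketch of that constraint, using the same numbers as the illustration above:

```python
known = [8, 4, 6]
n, xbar = 4, 6

# The mean fixes the sum, so the last value has no freedom left:
x4 = n * xbar - sum(known)
print(x4)  # 6 -- only n - 1 deviations from the mean are free to vary
```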
2,473
When (if ever) is a frequentist approach substantively better than a Bayesian?
Here are five reasons why frequentist methods may be preferred: Faster. Given that Bayesian statistics often give nearly identical answers to frequentist answers (and when they don't, it's not 100% clear that Bayesian is always the way to go), the fact that frequentist statistics can often be obtained several orders of magnitude faster is a strong argument. Likewise, frequentist methods do not require as much memory to store the results. While these things may seem somewhat trivial, especially with smaller datasets, the fact that Bayesian and frequentist analyses typically agree in results (especially if you have lots of informative data) means that if you are going to care, you may start caring about the less important things. And of course, if you live in the big data world, these are not trivial at all. Non-parametric statistics. I recognize that Bayesian statistics does have non-parametric methods, but I would argue that the frequentist side of the field has some truly, undeniably practical tools, such as the empirical distribution function. No method in the world will ever replace the EDF, nor the Kaplan-Meier curves, etc. (although clearly that's not to say those methods are the end of an analysis). Fewer diagnostics. MCMC methods, the most common way of fitting Bayesian models, typically require more work by the user than their frequentist counterparts. Usually, the diagnostic for an MLE estimate is so simple that any good algorithm implementation will do it automatically (although that's not to say every available implementation is good...). As such, frequentist algorithmic diagnostics typically amount to "make sure there's no red text when fitting the model". Given that all statisticians have limited bandwidth, this frees up more time to ask questions like "is my data really approximately normal?" or "are these hazards really proportional?", etc. Valid inference under model misspecification. We've all heard that "all models are wrong but some are useful", but different areas of research take this more or less seriously. The frequentist literature is full of methods for fixing up inference when the model is misspecified: the bootstrap, cross-validation, the sandwich estimator (the link also discusses general MLE inference under model misspecification), generalized estimating equations (GEEs), quasi-likelihood methods, etc. As far as I know, there is very little in the Bayesian literature about inference under model misspecification (although there is a lot of discussion of model checking, i.e., posterior predictive checks). I don't think this is just by chance: evaluating how an estimator behaves over repeated trials does not require the estimator to be based on a "true" model, but using Bayes' theorem does! Freedom from the prior (this is probably the most common reason why people don't use Bayesian methods for everything). The strength of the Bayesian standpoint is often touted as the use of priors. However, in all of the applied fields I have worked in, the idea of an informative prior in the analysis is not considered. Reading the literature on how to elicit priors from non-statistical experts gives good reasoning for this; I've read papers that say things like (a cruel, straw-man-like paraphrase of my own) "Ask the researcher who hired you because they have trouble understanding statistics to give a range that they are 90% certain the effect size they have trouble imagining will be in. This range will typically be too narrow, so arbitrarily try to get them to widen it a little. Ask them if their belief looks like a gamma distribution. You will probably have to draw a gamma distribution for them, and show how it can have heavy tails if the shape parameter is small. This will also involve explaining what a PDF is to them." (Note: I don't think even statisticians are really able to say accurately a priori whether they are 90% or 95% certain that an effect size lies in a range, and this difference can have a substantial effect on the analysis!) Truth be told, I'm being quite unkind, and there may be situations where eliciting a prior is a little more straightforward. But you can see how this is a can of worms. Even if you switch to non-informative priors, it can still be a problem: when transforming parameters, what is easily mistaken for a non-informative prior can suddenly be seen as very informative (a small sketch of this appears below)! Another example of this is that I've talked with several researchers who adamantly do not want to hear another expert's interpretation of the data because, empirically, other experts tend to be overconfident. They'd rather just know what can be inferred from the other expert's data and then come to their own conclusion. I can't recall where I heard it, but somewhere I read the phrase "if you're a Bayesian, you want everyone to be a frequentist". I interpret that to mean that, theoretically, if you're a Bayesian and someone describes their analysis results, you should first try to remove the influence of their prior and then figure out what the impact would be if you had used your own. This little exercise would be simplified if they had given you a confidence interval rather than a credible interval! Of course, if you abandon informative priors, there is still utility in Bayesian analyses. Personally, this is where I believe their highest utility lies: there are some problems that are extremely hard to get any answer to using MLE methods but that can be solved quite easily with MCMC. But my view of this being Bayesian's highest utility is due to strong priors of my own, so take it with a grain of salt.
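To illustrate the transformation point in the last item, here is a small sketch. The Uniform(0, 1) prior on a probability and the log-odds transformation are my own illustrative choices, not something from the answer above; the point is only that a prior that looks "flat" on one scale is clearly not flat on another.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 1.0, size=1_000_000)   # "non-informative" on theta
log_odds = np.log(theta / (1.0 - theta))        # the same prior, on a new scale

# On the log-odds scale the implied prior is a standard logistic density,
# strongly peaked at zero rather than flat:
print(np.quantile(log_odds, [0.05, 0.25, 0.5, 0.75, 0.95]))
# roughly [-2.94, -1.10, 0.00, 1.10, 2.94] -- quite informative about the
# log-odds being near zero.
```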
2,474
When (if ever) is a frequentist approach substantively better than a Bayesian?
A few concrete advantages of frequentist statistics: There are often closed-form solutions to frequentist problems, whereas you would need a conjugate prior to have a closed-form solution in the Bayesian analogue. This is useful for a number of reasons, one of which is computation time (a small sketch of this point appears below). A reason that'll, hopefully, eventually go away: laymen are taught frequentist statistics. If you want to be understood by many, you need to speak frequentist. An "innocent until proven guilty" Null Hypothesis Significance Testing (NHST) approach is useful when the goal is to prove someone wrong (I'm going to assume you're right and show that the data overwhelmingly suggest you're wrong). Yes, there are NHST analogues in the Bayesian framework, but I find the frequentist versions much more straightforward and interpretable. There is no such thing as a truly uninformative prior, which makes some people uncomfortable.
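A minimal sketch of the closed-form point, using inference for a normal mean with known variance. The data values, the known variance, and the N(0, 10^2) conjugate prior are illustrative assumptions; the point is only that the conjugate Bayesian answer is also closed form, whereas a non-conjugate prior would typically force you into sampling.

```python
import numpy as np

y = np.array([4.2, 5.1, 3.8, 6.0, 4.9])
sigma2 = 1.0                      # sampling variance, treated as known
n, ybar = len(y), y.mean()

# Frequentist: closed-form estimate and 95% confidence interval
se = np.sqrt(sigma2 / n)
print(ybar, (ybar - 1.96 * se, ybar + 1.96 * se))

# Bayesian with a conjugate N(m0, v0) prior: the posterior is also closed form
m0, v0 = 0.0, 10.0 ** 2
post_var = 1.0 / (1.0 / v0 + n / sigma2)
post_mean = post_var * (m0 / v0 + n * ybar / sigma2)
print(post_mean, post_var)        # very close to the frequentist answer here
```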
2,475
When (if ever) is a frequentist approach substantively better than a Bayesian?
The most important reason to use Frequentist approaches, which has surprisingly not yet been mentioned, is error control. Very often, research leads to dichotomous interpretations (should I do a study building on this, or not? Should we implement an intervention, or not?). Frequentist approaches allow you to strictly control your Type 1 error rate (a small simulation of this guarantee is sketched below). Bayesian approaches don't (although some inherit the universal bound from likelihood approaches, but even then, error rates can be quite high in small samples and with relatively low thresholds of evidence (e.g., BF > 3)). You can examine Frequentist properties of Bayes factors (see for example http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2604513), but that's still a Frequentist approach. I think very often researchers care more about error control than about quantifying evidence per se (relative to some specific hypothesis), and I think at the very least everyone cares about error control to some extent, and thus the two approaches should be used complementarily.
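A small simulation sketch of what that guarantee looks like in practice: under a true null hypothesis, the frequentist test rejects at (about) the nominal rate by construction. The two-sample t-test, the group size of 20, and the number of replications are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
reps, n, alpha = 20_000, 20, 0.05

rejections = 0
for _ in range(reps):
    a = rng.normal(size=n)          # both groups are drawn from the same
    b = rng.normal(size=n)          # distribution, so H0 is true
    _, p = stats.ttest_ind(a, b)
    rejections += p < alpha

print(rejections / reps)            # close to 0.05, as designed
```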
2,476
When (if ever) is a frequentist approach substantively better than a Bayesian?
I think one of the biggest questions you have to ask yourself, as a statistician, is whether or not you believe in, or want to adhere to, the likelihood principle. If you don't believe in the likelihood principle then I think the frequentist paradigm of statistics can be extremely powerful; however, if you do believe in the likelihood principle, then (I believe) you most certainly have to espouse the Bayesian paradigm in order not to violate it. In case you are unfamiliar with it, what the likelihood principle tells us is the following: The Likelihood Principle: In making inferences or decisions about $\theta$ after some data $\mathbf{x}$ is observed, all relevant experimental information is contained in the likelihood function: $$\ell(\theta;\mathbf{x})=p(\mathbf{x}|\theta)$$ where $\mathbf{x}$ corresponds to the data observed and is thus fixed. Furthermore, if $\mathbf{x}$ and $\mathbf{y}$ are two sample points such that $\ell(\theta;\mathbf{x})$ is proportional to $\ell(\theta;\mathbf{y})$, that is, there exists a constant $C(\mathbf{x},\mathbf{y})$ such that $$\ell(\theta;\mathbf{x})=C(\mathbf{x},\mathbf{y})\ell(\theta;\mathbf{y})\hspace{.1in}\text{for all }\theta,$$ then the conclusions drawn from $\mathbf{x}$ and $\mathbf{y}$ should be identical. Note that the constant $C(\mathbf{x},\mathbf{y})$ above may be different for different $(\mathbf{x},\mathbf{y})$ pairs but $C(\mathbf{x},\mathbf{y})$ does not depend on $\theta$. In the special case of $C(\mathbf{x},\mathbf{y})=1$, the Likelihood Principle states that if two sample points result in the same likelihood function, then they contain the same information about $\theta$. But the Likelihood Principle goes further. It states that even if two sample points have only proportional likelihoods, then they contain equivalent information about $\theta$. Now, one of the draws of Bayesian statistics is that, under proper priors, the Bayesian paradigm never violates the likelihood principle. However, there are very simple scenarios where the frequentist paradigm will violate the likelihood principle. Here is a very simple example based on hypothesis testing. Consider an experiment where 12 Bernoulli trials were run and 3 successes were observed. Depending on the stopping rule we could characterize the data as follows: Binomial Distribution: $X|\theta\sim\text{Bin}(n=12,\theta)$ and Data: $x=3$. Negative Binomial Distribution: $Y|\theta\sim\text{NegBin}(k=3,\theta)$ and Data: $y=12$. Thus we would obtain the following likelihood functions: \begin{align} \ell_1(\theta;x=3)&=\binom{12}{3}\theta^3(1-\theta)^9\\ \ell_2(\theta;y=12)&=\binom{11}{2}\theta^3(1-\theta)^9 \end{align} which implies that $$\ell_1(\theta;x)=C(x,y)\ell_2(\theta;y)$$ and thus, by the Likelihood Principle, we should obtain the same inferences about $\theta$ from either likelihood. Now, imagine testing the following hypotheses from the frequentist paradigm $$H_o:\theta\geq\frac{1}{2}\hspace{.2in}\text{versus}\hspace{.2in}H_a:\theta<\frac{1}{2}$$ For the Binomial model we have the following: \begin{align} \text{p-value}&=P\left(X\leq 3 \mid \theta=\frac{1}{2}\right)\\ &=\binom{12}{0}\left(\frac{1}{2}\right)^{12}+\binom{12}{1}\left(\frac{1}{2}\right)^{12}+ \binom{12}{2}\left(\frac{1}{2}\right)^{12}+\binom{12}{3}\left(\frac{1}{2}\right)^{12}\approx 0.073 \end{align} Notice that $\binom{12}{3}\left(\frac{1}{2}\right)^{12}=\ell_1(\frac{1}{2};x=3)$, but the remaining terms correspond to more extreme outcomes that were not observed and are not part of the likelihood; this is exactly where the Likelihood Principle gets violated.
For the Negative Binomial model we have the following: \begin{align} \text{p-value}&=P\left(Y\geq 12 \mid \theta=\frac{1}{2}\right)\\ &=\binom{11}{2}\left(\frac{1}{2}\right)^{12}+\binom{12}{2}\left(\frac{1}{2}\right)^{13}+ \binom{13}{2}\left(\frac{1}{2}\right)^{14}+\dots\approx 0.0327 \end{align} (each term is $\binom{y-1}{2}\left(\frac{1}{2}\right)^{y}$, the negative binomial probability of needing exactly $y$ trials for the third success). From the above p-value calculations we see that in the Binomial model we would fail to reject $H_o$ at the usual $0.05$ level, but using the Negative Binomial model we would reject $H_o$. Thus, even though $\ell_1(\theta;x)\propto\ell_2(\theta;y)$, their p-values, and decisions based on these p-values, do not coincide. This p-value argument is one often used by Bayesians against the use of Frequentist p-values. Now consider again testing the following hypotheses, but from the Bayesian paradigm $$H_o:\theta\geq\frac{1}{2}\hspace{.2in}\text{versus}\hspace{.2in}H_a:\theta<\frac{1}{2}$$ For the Binomial model we have the following: \begin{align} P\left(\theta\geq\frac{1}{2}\,\middle|\,x\right)=\int_{1/2}^1\pi(\theta|x)\,d\theta=\int_{1/2}^1\theta^3(1-\theta)^9\pi(\theta)\,d\theta \bigg/\int_{0}^1\theta^3(1-\theta)^9\pi(\theta)\,d\theta \end{align} Similarly, for the Negative Binomial model we have the following: \begin{align} P\left(\theta\geq\frac{1}{2}\,\middle|\,y\right)=\int_{1/2}^1\pi(\theta|y)\,d\theta=\int_{1/2}^1\theta^3(1-\theta)^9\pi(\theta)\,d\theta \bigg/\int_{0}^1\theta^3(1-\theta)^9\pi(\theta)\,d\theta \end{align} Now using Bayesian decision rules, pick $H_o$ if $P(\theta\geq\frac{1}{2}|x)>\frac{1}{2}$ (or some other threshold) and repeat similarly for $y$. However, $P\left(\theta\geq\frac{1}{2}\,\middle|\,x\right)=P\left(\theta\geq\frac{1}{2}\,\middle|\,y\right)$, and so we arrive at the same conclusion; thus this approach satisfies the Likelihood Principle. And so to conclude my ramblings, if you don't care about the likelihood principle then being frequentist is great! (If you can't tell, I'm a Bayesian :) )
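A short sketch that reproduces the two p-values numerically. Note that scipy's nbinom distribution counts failures before the $k$-th success, so "at least 12 trials" becomes "at least 9 failures"; the call below reflects that bookkeeping.

```python
from scipy import stats

# Binomial sampling model: P(X <= 3 | n = 12, theta = 1/2)
p_binom = stats.binom.cdf(3, 12, 0.5)

# Negative binomial sampling model: P(Y >= 12 trials | k = 3, theta = 1/2)
# = P(failures >= 9), with failures = trials - 3
p_negbin = stats.nbinom.sf(8, 3, 0.5)

print(p_binom, p_negbin)   # about 0.073 and 0.033: same likelihood,
                           # different p-values, and (at alpha = 0.05)
                           # different decisions
```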
2,477
When (if ever) is a frequentist approach substantively better than a Bayesian?
Personally I'm having difficulty thinking of a situation where the frequentist answer would be preferred over a Bayesian one. My thinking is detailed here and in other blog articles on fharrell.com about problems with p-values and null hypothesis testing. Frequentists tend to ignore a few fundamental problems. Here is just a sample: (1) outside of the Gaussian linear model with constant variance and a few other cases, the p-values that are computed are of unknown accuracy for your dataset and model; (2) when the experiment is sequential or adaptive, it is often the case that a p-value can't even be computed and one can only set an overall $\alpha$ level to achieve; (3) frequentists seem happy not to let the type I error go below, say, 0.05 no matter how the sample size grows; (4) there is no frequentist prescription for how multiplicity corrections are formed, leading to an ad hoc hodge-podge of methods. Regarding the first point, one commonly used model is the binary logistic model. Its log-likelihood is very non-quadratic, and the vast majority of confidence limits and p-values computed for such models are not very accurate. Contrast that with the Bayesian logistic model, which provides exact inference. Others have mentioned error control as a reason for using frequentist inference. I do not think this is logical, because the error to which they refer is the long-run error, envisioning a process in which thousands of statistical tests are run. A judge who said "the long-run false conviction probability in my courtroom is only 0.03" should be disbarred. She is charged with having the highest probability of making the correct decision for the current defendant. On the other hand, one minus the posterior probability of an effect is the probability of a zero or backwards effect, and that is the error probability we actually need.
2,478
When (if ever) is a frequentist approach substantively better than a Bayesian?
You and I are both scientists, and as scientists we are chiefly interested in questions of evidence. For that reason, I think Bayesian approaches, when feasible, are preferable. Bayesian approaches answer our question: what is the strength of evidence for one hypothesis over another? Frequentist approaches, on the other hand, do not: they report only whether the data are weird given one hypothesis. That said, Andrew Gelman, notable Bayesian, seems to espouse the use of p-values (or p-value-like graphical checks) as a check for errors in model specification. You can see an allusion to this approach in this blog post. His approach, as I understand it, is something like a two-step process: first, he asks the Bayesian question of what the evidence is for one model over the other. Second, he asks the Frequentist question of whether the preferred model actually looks at all plausible given the data. It seems like a reasonable hybrid approach to me (a minimal sketch of the second, checking step appears below).
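A minimal sketch of that checking step, using a conjugate normal model and a single posterior predictive check. The simulated data, the N(0, 10^2) prior, and the choice of the sample standard deviation as the check statistic are all illustrative assumptions of mine, not Gelman's prescription.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(5.0, 1.0, size=30)         # "observed" data
n, sigma = len(y), 1.0                    # sampling sd treated as known

# Step 1 (Bayesian fit): conjugate posterior for the mean
m0, v0 = 0.0, 10.0 ** 2
post_var = 1.0 / (1.0 / v0 + n / sigma ** 2)
post_mean = post_var * (m0 / v0 + y.sum() / sigma ** 2)

# Step 2 (check): does replicated data from the fitted model look like y?
reps = 5_000
mu_draws = rng.normal(post_mean, np.sqrt(post_var), size=reps)
y_rep = rng.normal(mu_draws[:, None], sigma, size=(reps, n))
stat_rep = y_rep.std(axis=1)
p_check = (stat_rep >= y.std()).mean()    # posterior predictive "p-value"
print(p_check)                            # values near 0 or 1 flag misfit
```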
2,479
When (if ever) is a frequentist approach substantively better than a Bayesian?
Many people do not seem aware of a third philosophical school: likelihoodism. AWF Edwards's book, Likelihood, is probably the best place to read up on it. Here is a short article he wrote. Likelihoodism eschews p-values, like Bayesianism, but also eschews the Bayesian's often dubious prior. There is an intro treatment here as well.
2,480
When (if ever) is a frequentist approach substantively better than a Bayesian?
One of the biggest disadvantages of frequentist approaches to model building has always been, as TrynnaDoStats notes in his first point, the challenges involved with inverting big closed-form solutions. Closed-form matrix inversion requires that the entire matrix be resident in RAM, a significant limitation on single-CPU platforms with either large amounts of data or a massive number of categorical features. Bayesian methods have been able to work around this challenge by simulation, drawing random samples from the posterior implied by a specified prior. This has always been one of the biggest selling points of Bayesian solutions, although answers are obtained only at a significant cost in CPU. Andrew Ainslie and Ken Train, in a paper from about 10 years ago that I have lost the reference to, compared finite mixture models (which are frequentist, closed-form) with Bayesian approaches to model-building and found that across a wide range of functional forms and performance metrics, the two methods delivered essentially equivalent results. Where Bayesian solutions had an edge or possessed greater flexibility was in those instances where the information was both sparse and very high-dimensional. However, that paper was written before "divide and conquer" algorithms were developed that leverage massively parallel platforms; e.g., see Chen and Minge's paper for more about this: http://dimacs.rutgers.edu/TechnicalReports/TechReports/2012/2012-01.pdf The advent of D&C approaches has meant that, even for the hairiest, sparsest, most high-dimensional problems, Bayesian approaches no longer have an advantage over frequentist methods. The two methods are at parity (a toy sketch of the split-and-combine idea appears below). This relatively recent development is worth noting in any debate about the practical advantages or limitations of either method.
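A toy sketch of the split-and-combine idea: fit the same simple model on chunks of the data, then combine the chunk estimates by inverse-variance weighting. The simulated straight-line data and the choice of ten chunks are my own illustration, not the method of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N, true_slope = 100_000, 2.0
x = rng.normal(size=N)
y = true_slope * x + rng.normal(size=N)

estimates, variances = [], []
for xc, yc in zip(np.array_split(x, 10), np.array_split(y, 10)):
    sxx = np.sum((xc - xc.mean()) ** 2)
    b = np.sum((xc - xc.mean()) * (yc - yc.mean())) / sxx   # chunk OLS slope
    resid = yc - yc.mean() - b * (xc - xc.mean())
    var_b = np.sum(resid ** 2) / (len(xc) - 2) / sxx        # its sampling variance
    estimates.append(b)
    variances.append(var_b)

w = 1.0 / np.array(variances)
combined = np.sum(w * np.array(estimates)) / np.sum(w)
print(combined)   # very close to the full-data estimate of the slope (2.0)
```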
2,481
When (if ever) is a frequentist approach substantively better than a Bayesian?
Several comments: The fundamental difference between the Bayesian and frequentist statistician is that the Bayesian is willing to extend the tools of probability to situations where the frequentist wouldn't. More specifically, the Bayesian is willing to use probability to model the uncertainty in her own mind over various parameters. To the frequentist, these parameters are scalars (albeit scalars whose true value the statistician does not know). To the Bayesian, various parameters are represented as random variables! This is extremely different. The Bayesian's uncertainty over parameter values is represented by a prior. In Bayesian statistics, the hope is that after observing data, the posterior overwhelms the prior, so that the prior doesn't matter. But this often isn't the case: results may be sensitive to the choice of prior (a small illustration of this appears below)! Different Bayesians with different priors need not agree on the posterior. A key point to keep in mind is that statements of the frequentist statistician are statements that any two Bayesians can agree on, regardless of their prior beliefs! The frequentist does not comment on priors or posteriors, merely the likelihood. The statements of the frequentist statistician are in some sense less ambitious, but the bolder statements of the Bayesian can significantly rely on the assignment of a prior. In situations where priors matter and where there is disagreement on priors, the more limited, conditional statements of frequentist statistics may stand on firmer ground.
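A small illustration of the point about priors and posteriors, using Beta-binomial conjugate updating. The particular priors and counts are illustrative assumptions: with little data, two Bayesians with different priors disagree noticeably; with lots of data, the likelihood dominates and they converge.

```python
def posterior_mean(successes, trials, a, b):
    # Beta(a, b) prior + binomial likelihood -> Beta(a + s, b + f) posterior
    return (a + successes) / (a + b + trials)

for successes, trials in [(3, 10), (300, 1000)]:
    m1 = posterior_mean(successes, trials, 1, 1)    # flat Beta(1, 1) prior
    m2 = posterior_mean(successes, trials, 10, 2)   # optimistic Beta(10, 2) prior
    print(trials, round(m1, 3), round(m2, 3))
# trials = 10:   0.333 vs 0.591 -- the prior drives the answer
# trials = 1000: 0.300 vs 0.306 -- the data dominate
```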
2,482
When (if ever) is a frequentist approach substantively better than a Bayesian?
Frequentist tests focus on falsifying the null hypothesis. However, Null Hypothesis Significance Testing (NHST) can also be done from a Bayesian perspective, because in all cases NHST is simply a calculation of P( Observed Effect | Effect = 0 ). So, it's hard to identify a time when it would be necessary to conduct NHST from a frequentist perspective. That being said, the best argument for conducting NHST using a frequentist approach is ease and accessibility. People are taught frequentist statistics. So, it's easier to run a frequentist NHST, because there are many more statistical packages that make it simple to do this. Similarly, it is easier to communicate the results of a frequentist NHST, because people are familiar with this form of NHST. So, I see that as the best argument for frequentist approaches: accessibility to stats programs that will run them and ease of communication of results to colleagues. This is just cultural, though, so this argument could change if frequentist approaches lose their hegemony.
2,483
When (if ever) is a frequentist approach substantively better than a Bayesian?
The goal of much research is not to reach a final conclusion, but just to obtain a little more evidence to incrementally push the community's sense of a question in one direction. Bayesian statistics are indispensable when what you need is to evaluate a decision or conclusion in light of the available evidence. Quality control would be impossible without Bayesian statistics. Any procedure where you need to take some data and then act on it (robotics, machine learning, business decision making) benefits from Bayesian statistics. But a lot of researchers are not doing that. They are running some experiments, collecting some data, and then saying "The data points this way", without really worrying too much about whether that's the best conclusion given all the evidence others have gathered so far. Science can be a slow process, and a statement like "The probability that this model is correct is 72%!" is often premature or unnecessary. This is appropriate in a simple mathematical way, too, because frequentist statistics often turn out to be mathematically the same as the update step of a Bayesian analysis (a tiny sketch of this correspondence appears below). In other words, while Bayesian statistics is (Prior Model, Evidence) → New Model, frequentist statistics is just Evidence, and leaves it to others to fill in the other two parts.
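A tiny sketch of that correspondence: with a flat prior, the Bayesian posterior is proportional to the likelihood alone, so its mode coincides with the frequentist MLE. The Beta-binomial example and its counts are illustrative assumptions.

```python
s, n = 7, 20                      # successes, trials
a, b = 1, 1                       # Beta(1, 1) = flat prior

mle = s / n                                       # frequentist, "evidence only"
posterior_mode = (a + s - 1) / (a + b + n - 2)    # mode of Beta(a + s, b + n - s)
print(mle, posterior_mode)                        # both 0.35
```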
When (if ever) is a frequentist approach substantively better than a Bayesian?
The goal of much research is not to reach a final conclusion, but just to obtain a little more evidence to incrementally push the community's sense of a question in one direction. Bayesian statistics
When (if ever) is a frequentist approach substantively better than a Bayesian? The goal of much research is not to reach a final conclusion, but just to obtain a little more evidence to incrementally push the community's sense of a question in one direction. Bayesian statistics are indispensable when what you need is to evaluate a decision or conclusion in light of the available evidence. Quality control would be impossible without Bayesian statistics. Any procedure where you need to take some data and then act on it (robotics, machine learning, business decision making) benefits from Bayesian statistics. But a lot of researchers are not doing that. They are running some experiments, collecting some data, and then saying "The data points this way", without really worrying too much about whether that's the best conclusion given all the evidence others have gathered so far. Science can be a slow process, and a statement like "The probability that this model is correct is 72%!" is often premature or unnecessary. This is appropriate in a simple mathematical way, too, because frequentist statistics often turn out to be mathematically the same as the update-step of a Bayesian statistic. In other words, while Bayesian statistics is (Prior Model, Evidence) → New Model, frequentist statistics is just Evidence, and leaves it to others to fill in the other two parts.
When (if ever) is a frequentist approach substantively better than a Bayesian? The goal of much research is not to reach a final conclusion, but just to obtain a little more evidence to incrementally push the community's sense of a question in one direction. Bayesian statistics
2,484
When (if ever) is a frequentist approach substantively better than a Bayesian?
The actual execution of a Bayesian method is more technical than that of a Frequentist. By "more technical" I mean things like: 1) choosing priors, 2) programming your model in BUGS/JAGS/STAN, and 3) thinking about sampling and convergence. Obviously, #1 is pretty much not optional, by definition of Bayesian. Though with some problems and procedures, there can be reasonable defaults, somewhat hiding the issue from the user. (Though this can also cause problems!) Whether #2 is an issue depends on the software you use. Bayesian statistics has a bent towards more general solutions than frequentist statistical methods, and tools like BUGS, JAGS, and STAN are a natural expression of this. However, there are Bayesian functions in various software packages that appear to work like the typical frequentist procedure, so this is not always an issue. (And recent solutions like the R packages rstanarm and brms are bridging this gap.) Still, using these tools is very similar to programming in a new language. Item #3 is usually applicable, since the majority of real-world Bayesian applications are going to use MCMC sampling. (On the other hand, frequentist MLE-based procedures use optimization which may converge to a local minimum or not converge at all, and I wonder how many users who should be checking this don't?) As I said in a comment, I'm not sure that freedom from priors is actually a scientific benefit. It's certainly convenient in several ways and at several points in the publication process, but I'm not sure it actually makes for better science. (And in the big picture, we all have to be aware of our priors as scientists, or we'll suffer from all kinds of biases in our investigations, regardless of what statistical methods we use.)
When (if ever) is a frequentist approach substantively better than a Bayesian?
The actual execution of a Bayesian method is more technical than that of a Frequentist. By "more technical" I mean things like: 1) choosing priors, 2) programming your model in a BUGS/JAGS/STAN, and 3
When (if ever) is a frequentist approach substantively better than a Bayesian? The actual execution of a Bayesian method is more technical than that of a Frequentist. By "more technical" I mean things like: 1) choosing priors, 2) programming your model in a BUGS/JAGS/STAN, and 3) thinking about sampling and convergence. Obviously, #1 is pretty much not optional, by definition of Bayesian. Though with some problems and procedures, there can be reasonable defaults, somewhat hiding the issue from the user. (Though this can also cause problems!) Whether #2 is an issue depends on the software you use. Bayesian statistics has a bent towards more general solutions than frequentist statistical methods, and tools like BUGS, JAGS, and STAN are a natural expression of this. However, there are Bayesian functions in various software packages that appear to work like the typical frequentist procedure, so this is not always an issue. (And recent solutions like the R packages rstanarm and brms are bridging this gap.) Still, using these tools is very similar to programming in a new language. Item #3 is usually applicable, since the majority of real-world Bayesian applications are going to use MCMC sampling. (On the other hand, frequentist MLE-based procedures use optimization which may converge to a local minima or not converge at all, and I wonder how many users should be checking this and don't?) As I said in a comment, I'm not sure that freedom from priors is actually a scientific benefit. It's certainly convenient in several ways and at several points in the publication process, but I'm not sure it actually makes for better science. (And in the big picture, we all have to be aware of our priors as scientists, or we'll suffer from all kinds of biases in our investigations, regardless of what statistical methods we use.)
When (if ever) is a frequentist approach substantively better than a Bayesian? The actual execution of a Bayesian method is more technical than that of a Frequentist. By "more technical" I mean things like: 1) choosing priors, 2) programming your model in a BUGS/JAGS/STAN, and 3
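To make the point about rstanarm-style interfaces in the record above concrete, here is a small R sketch. The data, seed, and variable names are invented purely for illustration, and the Bayesian call is wrapped in a package check since it needs rstanarm installed; the only point is that the Bayesian fit uses the same formula interface as lm(), with priors and MCMC handled by package defaults (which one should still inspect).
set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- 2 + 3 * d$x + rnorm(100)      # hypothetical data just for illustration
fit_freq <- lm(y ~ x, data = d)      # frequentist fit
coef(fit_freq)
if (requireNamespace("rstanarm", quietly = TRUE)) {
  # Bayesian fit with the same formula interface; default priors, MCMC under the hood
  fit_bayes <- rstanarm::stan_glm(y ~ x, data = d, family = gaussian())
  print(fit_bayes)                   # posterior summaries land close to the lm() estimates here
}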
2,485
When (if ever) is a frequentist approach substantively better than a Bayesian?
One type of problem in which a particular Frequentist-based approach has essentially dominated any Bayesian one is that of prediction in the M-open case. What does M-open mean? M-open implies that the true model that generates the data does not appear in the set of models we are considering. For example, if the true mean of $y$ is quadratic as a function of $x$, yet we only consider models with the mean a linear function of $x$, we are in the M-open case. In other words, model misspecification results in an M-open case. In most cases, this is a huge problem for Bayesian analyses; pretty much all theory that I know about relies on the model being correctly specified. Of course, as critical statisticians, we should think that our model is always misspecified. This is quite an issue; most of our theory is based on the model being correct, yet we know it never is. Basically, we're just crossing our fingers hoping that our model is not too incorrect. Why do Frequentist methods handle this better? Not all do. For example, if we use standard MLE tools for creating the standard errors or building prediction intervals, we're not better off than using Bayesian methods. However, there is one particular Frequentist tool that is very specifically intended for exactly this purpose: cross validation. Here, in order to estimate how well our model will predict on new data, we simply leave out some of the data when fitting the model and measure how well our model predicts the unseen data. Note that this method is completely ambivalent to model misspecification; it merely provides a method for us to estimate how well a model will predict on new data, regardless of whether the model is "correct" or not. I don't think it's too hard to argue that this really changes the approach to predictive modeling, from one that's hard to justify from a Bayesian perspective (the prior is supposed to represent prior knowledge before seeing data, the likelihood function is the model, etc.) to one that's very easy to justify from a Frequentist perspective (we choose the model + regularization parameters that, over repeated sampling, lead to the best out-of-sample errors). This has completely revolutionized how predictive inference is done. I don't think any statistician would (or at least, should) seriously consider a predictive model that wasn't built or checked with cross-validation, when it's available (i.e., we can reasonably assume observations are independent, are not trying to account for sampling bias, etc.).
When (if ever) is a frequentist approach substantively better than a Bayesian?
One type of problem in which a particular Frequentist based approach has essentially dominated any Bayesian is that of prediction in the M-open case. What does M-open mean? M-open implies that the tr
When (if ever) is a frequentist approach substantively better than a Bayesian? One type of problem in which a particular Frequentist based approach has essentially dominated any Bayesian is that of prediction in the M-open case. What does M-open mean? M-open implies that the true model that generates the data does not appear in the set of models we are considering. For example, if the true mean of $y$ is quadratic as a function of $x$, yet we only consider models with the mean a linear function of $x$, we are in the M-open case. In other words, model miss-specification results in an M-open case. In most cases, this is a huge problem for Bayesian analyses; pretty much all theory that I know about relies on the model being correctly specified. Of course, as critical statisticians, we should think that our model is always misspecified. This is quite an issue; most of our theory is based on the model being correct, yet we know it never is. Basically, we're just crossing our fingers hoping that our model is not too incorrect. Why do Frequentist methods handle this better? Not all do. For example, if we use standard MLE tools for creating the standard errors or building prediction intervals, we're not better off than using Bayesian methods. However, there is one particular Frequentist tool that is very specifically intended for exactly this purpose: cross validation. Here, in order to estimate how well our model will predict on new data, we simply leave of some of the data when fitting the model and measure how well our model predicts the unseen data. Note that this method is completely ambivalent to model miss-specification, it merely provides a method for us to estimate for how well a model will predict on new data, regardless of whether the model is "correct" or not. I don't think it's too hard to argue that this really changes the approach to predictive modeling that's hard to justify from a Bayesian perspective (prior is supposed to represent prior knowledge before seeing data, likelihood function is the model, etc.) to one that's very easy to justify from a Frequentist perspective (we chose the model + regularization parameters that, over repeated sampling, leads to the best out of sample errors). This has completely revolutionized how predictive inference is done. I don't think any statistician would (or at least, should) seriously consider a predictive model that wasn't built or checked with cross-validation, when it's available (i.e., we can reasonable assume observations are independent, not trying to account for sampling bias, etc.).
When (if ever) is a frequentist approach substantively better than a Bayesian? One type of problem in which a particular Frequentist based approach has essentially dominated any Bayesian is that of prediction in the M-open case. What does M-open mean? M-open implies that the tr
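A minimal R sketch of the cross-validation idea from the record above, under invented simulation settings: the true mean is quadratic but the working model is linear (an M-open situation), and K-fold cross-validation still gives an estimate of out-of-sample error without asking whether the model is "correct".
set.seed(42)
n <- 200
x <- runif(n, -2, 2)
y <- x^2 + rnorm(n, sd = 0.5)           # true mean is quadratic (invented for illustration)
dat <- data.frame(x = x, y = y)
K <- 5
fold <- sample(rep(1:K, length.out = n))
cv_mse <- numeric(K)
for (k in 1:K) {
  train <- dat[fold != k, ]
  test  <- dat[fold == k, ]
  fit   <- lm(y ~ x, data = train)      # misspecified working model
  cv_mse[k] <- mean((test$y - predict(fit, newdata = test))^2)
}
mean(cv_mse)   # estimated out-of-sample error, with no assumption that the model is correct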
2,486
When (if ever) is a frequentist approach substantively better than a Bayesian?
Conceptually: I don't know. I believe Bayesian statistics is the most logical way to think, but I couldn't justify why. The advantage of the frequentist approach is that it is easier for most people at an elementary level. But for me it was strange. It took years until I could really clarify intellectually what a confidence interval is. But when I started facing practical situations, frequentist ideas appeared to be simple and highly relevant. Empirically: The most important question I try to focus on nowadays is more about practical efficiency: personal work time, precision, and computation speed. Personal work time: For basic questions, I actually almost never use Bayesian methods: I use basic frequentist tools and will always prefer a t-test over a Bayesian equivalent that would just give me a headache. When I want to know if I'm significantly better at tic-tac-toe than my girlfriend, I do a chi-squared :-). Actually, even in serious work as a computer scientist, basic frequentist tools are just invaluable for investigating problems and avoiding false conclusions due to randomness. Precision: In machine learning, where prediction matters more than analysis, there is not an absolute boundary between Bayesian and frequentist. MLE is a frequentist approach: just an estimator. But regularized MLE (MAP) is a partially Bayesian approach: you find the mode of the posterior and you don't care about the rest of the posterior. I don't know of a frequentist justification for using regularization. Practically, regularization is sometimes just inevitable because the raw MLE estimate is so overfitted that 0 would be a better predictor. If regularization is agreed to be a truly Bayesian method, then this alone justifies that Bayes can learn with less data. Computation speed: frequentist methods are most often computationally faster and simpler to implement. And somehow regularization provides a cheap way to introduce a bit of Bayes into them. It might be because Bayesian methods are still not as optimized as they could be. For example, some LDA implementations are fast nowadays. But they required very hard work. For entropy estimation, the first advanced methods were Bayesian. They worked great, but soon frequentist methods were discovered that take much less computation time... For computation time, frequentist methods are generally clearly superior. It is not absurd, if you are a Bayesian, to think of frequentist methods as approximations of Bayesian methods.
When (if ever) is a frequentist approach substantively better than a Bayesian?
Conceptually: I don't know. I believe Bayesian statistics is the most logical way to think but I coudn't justify why. The advantage of frequentist is that it is easier for most people at elementary le
When (if ever) is a frequentist approach substantively better than a Bayesian? Conceptually: I don't know. I believe Bayesian statistics is the most logical way to think but I coudn't justify why. The advantage of frequentist is that it is easier for most people at elementary level. But for me it was strange. It took years until I could really clarify intellectually what a confidence interval is. But when I started facing practical situations, frequentist ideas appeared to be simple and highly relevant. Empirically The most important question I try to focus on nowadays is more about practical efficiency: personal work time, precision, and computation speed. Personal work time: For basic questions, I actually almost never use Bayesian methods: I use basic frequentist tools and will always prefer a t-test over a Bayesian equivalent that would just give me a headache. When I want to know if I'm significantly better at tictactoe than my girlfriend, I do a chi-squared :-). Actually, even in serious work as a computer scientist, frequentist basic tools are just invaluable to investigate problems and avoid false conclusions due to random. Precision: In machine learning where prediction matters more than analysis, there is not an absolute boundary between Bayesian and frequentist. MLE is a frequentist approcah: just an estimator. But regularized MLE (MAP) is a partially Bayesian approach: you find the mode of the posterior and you don't care for the rest of the posterior. I don't know of a frequentist justification of why use regularization. Practically, regularization is sometimes just inevitable because the raw MLE estimate is so overfitted that 0 would be a better predictor. If regularization is agreed to be a truly Bayesian method, then this alone justifies that Bayes can learn with less data. Computation speed: frequentist methods are most often computationally faster and simpler to implement. And somehow regularization provides a cheap way to introduce a bit of Bayes in them. It might be because Bayesian methods are still not as optimized as they could. For example, some LDA implementations are fast nowadays. But they required very hard work. For entropy estimations, the first advanced methods were Bayesian. They worked great but soon frequentist methods were discovered and take much less computation time... For computation time frequentist methods are generally clearly superior. It is not absurd, if your are a Bayesian, to think of frequentist methods as approximations of Bayesian methods.
When (if ever) is a frequentist approach substantively better than a Bayesian? Conceptually: I don't know. I believe Bayesian statistics is the most logical way to think but I coudn't justify why. The advantage of frequentist is that it is easier for most people at elementary le
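As a small illustration of the "quick frequentist check" mentioned in the record above, here is a hedged R sketch; the tic-tac-toe win/loss counts are entirely made up for illustration.
wins <- matrix(c(30, 15,    # me:  30 wins, 15 losses (hypothetical counts)
                 18, 27),   # her: 18 wins, 27 losses (hypothetical counts)
               nrow = 2, byrow = TRUE,
               dimnames = list(player = c("me", "her"),
                               result = c("win", "loss")))
chisq.test(wins)   # quick frequentist check that the difference is not just noise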
2,487
Bayes regression: how is it done in comparison to standard regression?
The simple linear regression model $$ y_i = \alpha + \beta x_i + \varepsilon $$ can be written in terms of the probabilistic model behind it $$ \mu_i = \alpha + \beta x_i \\ y_i \sim \mathcal{N}(\mu_i, \sigma) $$ i.e. the dependent variable $Y$ follows a normal distribution parametrized by the mean $\mu_i$, which is a linear function of $X$ parametrized by $\alpha,\beta$, and by the standard deviation $\sigma$. If you estimate such a model using ordinary least squares, you do not have to bother about the probabilistic formulation, because you are searching for optimal values of the $\alpha,\beta$ parameters by minimizing the squared errors between the fitted values and the observed values. On the other hand, you could estimate such a model using maximum likelihood estimation, where you would be looking for optimal values of the parameters by maximizing the likelihood function $$ \DeclareMathOperator*{\argmax}{arg\,max} \argmax_{\alpha,\,\beta,\,\sigma} \prod_{i=1}^n \mathcal{N}(y_i; \alpha + \beta x_i, \sigma) $$ where $\mathcal{N}$ is the density function of the normal distribution evaluated at the $y_i$ points, parametrized by means $\alpha + \beta x_i$ and standard deviation $\sigma$. In the Bayesian approach, instead of maximizing the likelihood function alone, we would assume prior distributions for the parameters and use Bayes' theorem $$ \text{posterior} \propto \text{likelihood} \times \text{prior} $$ The likelihood function is the same as above, but what changes is that you assume some prior distributions for the estimated parameters $\alpha,\beta,\sigma$ and include them in the equation $$ \underbrace{f(\alpha,\beta,\sigma\mid Y,X)}_{\text{posterior}} \propto \underbrace{\prod_{i=1}^n \mathcal{N}(y_i\mid \alpha + \beta x_i, \sigma)}_{\text{likelihood}} \; \underbrace{f_{\alpha}(\alpha) \, f_{\beta}(\beta) \, f_{\sigma}(\sigma)}_{\text{priors}} $$ "What distributions?" is a different question, since there is an unlimited number of choices. For the $\alpha,\beta$ parameters you could, for example, assume normal distributions parametrized by some hyperparameters, or a $t$-distribution if you want to assume heavier tails, or a uniform distribution if you do not want to make many assumptions but want to assume that the parameters can a priori be "anything in the given range", etc. For $\sigma$ you need to assume some prior distribution that is bounded to be greater than zero, since the standard deviation needs to be positive. This may lead to the model formulation illustrated in a diagram by John K. Kruschke (source: http://www.indiana.edu/~kruschke/BMLR/). While in maximum likelihood you were looking for a single optimal value for each of the parameters, in the Bayesian approach, by applying Bayes' theorem, you obtain the posterior distribution of the parameters. The final estimate will depend on the information that comes from your data and from your priors, but the more information is contained in your data, the less influential the priors are. Notice that when using uniform priors, they take the form $f(\theta) \propto 1$ after dropping the normalizing constants. This makes the posterior proportional to the likelihood function alone, so the posterior distribution will reach its maximum at exactly the same point as the maximum likelihood estimate. It follows that the estimate under uniform priors will be the same as the one from ordinary least squares, since minimizing the squared errors corresponds to maximizing the normal likelihood.
To estimate a model in the Bayesian approach, in some cases you can use conjugate priors, so the posterior distribution is directly available (see example here). However, in the vast majority of cases, the posterior distribution will not be directly available and you will have to use Markov Chain Monte Carlo methods for estimating the model (check this example of using the Metropolis-Hastings algorithm to estimate the parameters of a linear regression). Finally, if you are only interested in point estimates of the parameters, you could use maximum a posteriori estimation, i.e. $$ \argmax_{\alpha,\,\beta,\,\sigma} f(\alpha,\beta,\sigma\mid Y,X) $$ For a more detailed description of Bayesian logistic regression, you can check the Bayesian logit model - intuitive explanation? thread. For learning more you could check the following books: Kruschke, J. (2014). Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan. Academic Press. Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2004). Bayesian data analysis. Chapman & Hall/CRC.
Bayes regression: how is it done in comparison to standard regression?
The simple linear regression model $$ y_i = \alpha + \beta x_i + \varepsilon $$ can be written in terms of the probabilistic model behind it $$ \mu_i = \alpha + \beta x_i \\ y_i \sim \mathcal{N}(\mu_
Bayes regression: how is it done in comparison to standard regression? The simple linear regression model $$ y_i = \alpha + \beta x_i + \varepsilon $$ can be written in terms of the probabilistic model behind it $$ \mu_i = \alpha + \beta x_i \\ y_i \sim \mathcal{N}(\mu_i, \sigma) $$ i.e. dependent variable $Y$ follows normal distribution parametrized by mean $\mu_i$, that is a linear function of $X$ parametrized by $\alpha,\beta$, and by standard deviation $\sigma$. If you estimate such a model using ordinary least squares, you do not have to bother about the probabilistic formulation, because you are searching for optimal values of $\alpha,\beta$ parameters by minimizing the squared errors of fitted values to predicted values. On another hand, you could estimate such model using maximum likelihood estimation, where you would be looking for optimal values of parameters by maximizing the likelihood function $$ \DeclareMathOperator*{\argmax}{arg\,max} \argmax_{\alpha,\,\beta,\,\sigma} \prod_{i=1}^n \mathcal{N}(y_i; \alpha + \beta x_i, \sigma) $$ where $\mathcal{N}$ is a density function of normal distribution evaluated at $y_i$ points, parametrized by means $\alpha + \beta x_i$ and standard deviation $\sigma$. In the Bayesian approach instead of maximizing the likelihood function alone, we would assume prior distributions for the parameters and use the Bayes theorem $$ \text{posterior} \propto \text{likelihood} \times \text{prior} $$ The likelihood function is the same as above, but what changes is that you assume some prior distributions for the estimated parameters $\alpha,\beta,\sigma$ and include them into the equation $$ \underbrace{f(\alpha,\beta,\sigma\mid Y,X)}_{\text{posterior}} \propto \underbrace{\prod_{i=1}^n \mathcal{N}(y_i\mid \alpha + \beta x_i, \sigma)}_{\text{likelihood}} \; \underbrace{f_{\alpha}(\alpha) \, f_{\beta}(\beta) \, f_{\sigma}(\sigma)}_{\text{priors}} $$ "What distributions?" is a different question, since there is an unlimited number of choices. For $\alpha,\beta$ parameters you could, for example, assume normal distributions parametrized by some hyperparameters, or $t$-distribution if you want to assume heavier tails, or uniform distribution if you do not want to make many assumptions, but you want to assume that the parameters can be a priori "anything in the given range", etc. For $\sigma$ you need to assume some prior distribution that is bounded to be greater than zero since standard deviation needs to be positive. This may lead to the model formulation as illustrated below by John K. Kruschke. (source: http://www.indiana.edu/~kruschke/BMLR/) While in the maximum likelihood you were looking for a single optimal value for each of the parameters, in the Bayesian approach by applying the Bayes theorem you obtain the posterior distribution of the parameters. The final estimate will depend on the information that comes from your data and from your priors, but the more information is contained in your data, the less influential are priors. Notice that when using uniform priors, they take form $f(\theta) \propto 1$ after dropping the normalizing constants. This makes Bayes theorem proportional to the likelihood function alone, so the posterior distribution will reach its maximum at exactly the same point as the maximum likelihood estimate. What follows, the estimate under uniform priors will be the same as by using ordinary least squares since minimizing the squared errors corresponds to maximizing the normal likelihood. 
To estimate a model in the Bayesian approach in some cases you can use conjugate priors, so the posterior distribution is directly available (see example here). However, in the vast majority of cases, posterior distribution will not be directly available and you will have to use Markov Chain Monte Carlo methods for estimating the model (check this example of using Metropolis-Hastings algorithm to estimate parameters of linear regression). Finally, if you are only interested in point estimates of parameters, you could use maximum a posteriori estimation, i.e. $$ \argmax_{\alpha,\,\beta,\,\sigma} f(\alpha,\beta,\sigma\mid Y,X) $$ For a more detailed description of logistic regression, you can check the Bayesian logit model - intuitive explanation? thread. For learning more you could check the following books: Kruschke, J. (2014). Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan. Academic Press. Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2004). Bayesian data analysis. Chapman & Hall/CRC.
Bayes regression: how is it done in comparison to standard regression? The simple linear regression model $$ y_i = \alpha + \beta x_i + \varepsilon $$ can be written in terms of the probabilistic model behind it $$ \mu_i = \alpha + \beta x_i \\ y_i \sim \mathcal{N}(\mu_
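The record above mentions estimating a linear regression by Metropolis-Hastings; here is a minimal, untuned random-walk Metropolis sketch in R on simulated data (seed, sample size, step size, and burn-in are arbitrary choices), with essentially flat priors on the intercept and slope and a flat prior on log sigma. It is only meant to show that the posterior summaries land close to the lm() estimates, as the uniform-prior discussion suggests, not to be a production sampler.
set.seed(1)
n <- 100
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n, sd = 1)       # simulated data; true alpha = 1, beta = 2, sigma = 1
# log posterior with flat priors on alpha, beta and a flat prior on log(sigma)
log_post <- function(par) {
  alpha <- par[1]; beta <- par[2]; log_sigma <- par[3]
  sum(dnorm(y, mean = alpha + beta * x, sd = exp(log_sigma), log = TRUE))
}
n_iter <- 20000
draws <- matrix(NA, n_iter, 3)
cur <- c(0, 0, 0)
cur_lp <- log_post(cur)
for (i in 1:n_iter) {
  prop <- cur + rnorm(3, sd = 0.1)      # symmetric random-walk proposal, fixed step size
  prop_lp <- log_post(prop)
  if (log(runif(1)) < prop_lp - cur_lp) { cur <- prop; cur_lp <- prop_lp }
  draws[i, ] <- cur
}
post <- draws[-(1:5000), ]              # drop burn-in
colMeans(post[, 1:2])                   # posterior means of alpha, beta
coef(lm(y ~ x))                         # essentially the same under flat priors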
2,488
Bayes regression: how is it done in comparison to standard regression?
Given a data set $D = (x_1,y_1), \ldots, (x_N,y_N)$ where $x \in \mathbb{R}^d, y \in \mathbb{R}$, a Bayesian Linear Regression models the problem in the following way: Prior: $$w \sim \mathcal{N}(0, \sigma_w^2 I_d)$$ $w$ is the vector $(w_1, \ldots, w_d)^T$, so the prior distribution is a multivariate Gaussian; and $I_d$ is the $d\times d$ identity matrix. Likelihood: $$Y_i \sim \mathcal{N}(w^T x_i, \sigma^2)$$ We assume that $Y_i \perp Y_j \mid w$ for $i \neq j$. For now we'll use the precision instead of the variance, $a = 1/\sigma^2$, and $b = 1/\sigma_w^2$. We'll also assume that $a,b$ are known. The prior can be stated as $$p(w) \propto \exp \Big\{ -\frac{b}{2} w^T w \Big\}$$ And the likelihood $$p(D|w) \propto \exp \Big\{ -\frac{a}{2} (y-Aw)^T (y-Aw) \Big\}$$ where $y = (y_1,\ldots,y_N)^T$ and $A$ is an $N\times d$ matrix where the $i$-th row is $x_i^T$. Then the posterior is $$p(w|D) \propto p(D|w) p(w)$$ After many calculations we discover that $$p(w|D) = \mathcal{N}(w \mid \mu, \Lambda^{-1})$$ where ($\Lambda$ is the precision matrix) $$\Lambda = a A^T A + b I_d $$ $$\mu = a \Lambda^{-1} A^T y$$ Notice that $\mu$ is equal to $w_{MAP}$, the maximum a posteriori estimate for this model; this is because, for the Gaussian, the mean is equal to the mode. Also, we can do some algebra on $\mu$ and get the following equality ($\Lambda = aA^TA+bI_d$): $$\mu = (A^T A + \frac{b}{a} I_d)^{-1} A^T y$$ and compare with $w_{MLE}$: $$w_{MLE} = (A^T A)^{-1} A^T y$$ The extra expression in $\mu$ corresponds to the prior. This matches the expression for Ridge regression in the special case where $\lambda = \frac{b}{a}$. Ridge regression is more general because the technique can choose improper priors (in the Bayesian perspective). For the predictive posterior distribution: $$p(y|x,D) = \int p(y|x,D,w) p(w|x,D) dw = \int p(y|x,w) p(w|D) dw$$ it is possible to calculate that $$y|x,D \sim \mathcal{N}(\mu^Tx, \frac{1}{a} + x^T \Lambda^{-1}x)$$ Reference: Lunn et al., The BUGS Book. For using an MCMC tool like JAGS/Stan, check Kruschke's Doing Bayesian Data Analysis.
Bayes regression: how is it done in comparison to standard regression?
Given a data set $D = (x_1,y_1), \ldots, (x_N,y_N)$ where $x \in \mathbb{R}^d, y \in \mathbb{R}$, a Bayesian Linear Regression models the problem in the following way: Prior: $$w \sim \mathcal{N}(0, \
Bayes regression: how is it done in comparison to standard regression? Given a data set $D = (x_1,y_1), \ldots, (x_N,y_N)$ where $x \in \mathbb{R}^d, y \in \mathbb{R}$, a Bayesian Linear Regression models the problem in the following way: Prior: $$w \sim \mathcal{N}(0, \sigma_w^2 I_d)$$ $w$ is vector $(w_1, \ldots, w_d)^T$, so the previous distribution is a multivariate Gaussian; and $I_d$ is the $d\times d$ identity matrix. Likelihood: $$Y_i \sim \mathcal{N}(w^T x_i, \sigma^2)$$ We assume that $Y_i \perp Y_j | w, i \neq j$ For now we'll use the precision instead of the variance, $a = 1/\sigma^2$, and $b = 1/\sigma_w^2$. We'll also assume that $a,b$ are known. The prior can be stated as $$p(w) \propto \exp \Big\{ -\frac{b}{2} w^t w \Big\}$$ And the likelihood $$p(D|w) \propto \exp \Big\{ -\frac{a}{2} (y-Aw)^T (y-Aw) \Big\}$$ where $y = (y_1,\ldots,y_N)^T$ and $A$ is a $n\times d$ matrix where the i-th row is $x_i^T$. Then the posterior is $$p(w|D) \propto p(D|w) p(w)$$ After many calculations we discover that $$p(w|D) \sim \mathcal{N}(w | \mu, \Lambda^{-1})$$ where ($\Lambda$ is the precision matrix) $$\Lambda = a A^T A + b I_d $$ $$\mu = a \Lambda^{-1} A^T y$$ Notice that $\mu$ is equal to the $w_{MAP}$ of the regular linear regression, this is because for the Gaussian, the mean is equal to the mode. Also, we can make some algebra over $\mu$ and get the following equality ($\Lambda = aA^TA+bI_d$): $$\mu = (A^T A + \frac{b}{a} I_d)^{-1} A^T y$$ and compare with $w_{MLE}$: $$w_{MLE} = (A^T A)^{-1} A^T y$$ The extra expression in $\mu$ corresponds to the prior. This is similar to the expression for the Ridge regression, for the special case when $\lambda = \frac{b}{a}$. Ridge regression is more general because the technique can choose improper priors (in the Bayesian perspective). For the predictive posterior distribution: $$p(y|x,D) = \int p(y|x,D,w) p(w|x,D) dw = \int p(y|x,w) p(w|D) dw$$ it is possible to calculate that $$y|x,D \sim \mathcal{N}(\mu^Tx, \frac{1}{a} + x^T \Lambda^{-1}x)$$ Reference: Lunn et al. The BUGS Book For using a MCMC tool like JAGS/Stan check Kruschke's Doing Bayesian Data Analysis
Bayes regression: how is it done in comparison to standard regression? Given a data set $D = (x_1,y_1), \ldots, (x_N,y_N)$ where $x \in \mathbb{R}^d, y \in \mathbb{R}$, a Bayesian Linear Regression models the problem in the following way: Prior: $$w \sim \mathcal{N}(0, \
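A short R sketch that checks the closed-form expressions in the record above numerically on simulated data (the precisions a and b are fixed by assumption, as in the record; all numerical values are invented for illustration): the posterior mean mu coincides with the ridge estimate with lambda = b/a and differs from the plain MLE.
set.seed(2)
n <- 100; d <- 2
A <- matrix(rnorm(n * d), n, d)            # design matrix, rows are x_i^T
w_true <- c(1.5, -0.7)
a <- 1          # noise precision 1/sigma^2, assumed known
b <- 0.5        # prior precision 1/sigma_w^2, assumed known
y <- as.vector(A %*% w_true) + rnorm(n, sd = sqrt(1 / a))
Lambda <- a * t(A) %*% A + b * diag(d)     # posterior precision
mu     <- a * solve(Lambda, t(A) %*% y)    # posterior mean = MAP
w_mle   <- solve(t(A) %*% A, t(A) %*% y)                     # ordinary least squares
w_ridge <- solve(t(A) %*% A + (b / a) * diag(d), t(A) %*% y) # ridge with lambda = b/a
cbind(mu, w_ridge, w_mle)                  # mu coincides with the ridge solution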
2,489
When to use generalized estimating equations vs. mixed effects models?
Use GEE when you're interested in uncovering the population average effect of a covariate vs. the individual specific effect. These two things are equivalent in linear models, but not in non-linear models (e.g. logistic). To see this, take, for example, the random effects logistic model for the $j$'th observation of the $i$'th subject, $Y_{ij}$: $$ \log \left( \frac{p_{ij}}{1-p_{ij}} \right) = \mu + \eta_{i} $$ where $\eta_{i} \sim N(0,\sigma^{2})$ is a random effect for subject $i$ and $p_{ij} = P(Y_{ij} = 1|\eta_{i})$. If you used a random effects model on these data, then you would get an estimate of $\mu$ that accounts for the fact that a mean zero normally distributed perturbation was applied to each individual, making it individual specific. If you used GEE on these data, you would estimate the population average log odds. In this case that would be $$ \nu = \log \left( \frac{ E_{\eta} \left( \frac{1}{1 + e^{-\mu-\eta_{i}}} \right)}{ 1-E_{\eta} \left( \frac{1}{1 + e^{-\mu-\eta_{i}}} \right)} \right) $$ $\nu \neq \mu$, in general. For example, if $\mu = 1$ and $\sigma^{2} = 1$, then $\nu \approx 0.83$. Although the random effects have mean zero on the transformed (or linked) scale, their effect is not mean zero on the original scale of the data. Try simulating some data from a mixed effects logistic regression model and comparing the population level average with the inverse-logit of the intercept and you will see that they are not equal, as in this example. This difference in the interpretation of the coefficients is the fundamental difference between GEE and random effects models. Edit: In general, a mixed effects model with no predictors can be written as $$ \psi \big( E(Y_{ij}|\eta_{i}) \big) = \mu + \eta_{i} $$ where $\psi$ is a link function. Whenever $$ \psi \Big( E_{\eta} \Big( \psi^{-1} \big( \mu + \eta_{i} \big) \Big) \Big) \neq E_{\eta} \big( \mu + \eta_{i} \big) = \mu $$ there will be a difference between the population average coefficients (GEE) and the individual specific coefficients (random effects models). That is, the averages change by transforming the data, integrating out the random effects on the transformed scale, and then transforming back. Note that in the linear model (that is, $\psi(x) = x$) the equality does hold, so they are equivalent. Edit 2: It is also worth noting that the "robust" sandwich-type standard errors produced by a GEE model provide valid asymptotic confidence intervals (i.e. they actually cover 95% of the time) even if the correlation structure specified in the model is not correct. Edit 3: If your interest is in understanding the association structure in the data, the GEE estimates of associations are notoriously inefficient (and sometimes inconsistent). I've seen a reference for this but can't place it right now.
When to use generalized estimating equations vs. mixed effects models?
Use GEE when you're interested in uncovering the population average effect of a covariate vs. the individual specific effect. These two things are only equivalent in linear models, but not in non-line
When to use generalized estimating equations vs. mixed effects models? Use GEE when you're interested in uncovering the population average effect of a covariate vs. the individual specific effect. These two things are only equivalent in linear models, but not in non-linear (e.g. logistic). To see this, take, for example the random effects logistic model of the $j$'th observation of the $i$'th subject, $Y_{ij}$; $$ \log \left( \frac{p_{ij}}{1-p_{ij}} \right) = \mu + \eta_{i} $$ where $\eta_{i} \sim N(0,\sigma^{2})$ is a random effect for subject $i$ and $p_{ij} = P(Y_{ij} = 1|\eta_{i})$. If you used a random effects model on these data, then you would get an estimate of $\mu$ that accounts for the fact that a mean zero normally distributed perturbation was applied to each individual, making it individual specific. If you used GEE on these data, you would estimate the population average log odds. In this case that would be $$ \nu = \log \left( \frac{ E_{\eta} \left( \frac{1}{1 + e^{-\mu-\eta_{i}}} \right)}{ 1-E_{\eta} \left( \frac{1}{1 + e^{-\mu-\eta_{i}}} \right)} \right) $$ $\nu \neq \mu$, in general. For example, if $\mu = 1$ and $\sigma^{2} = 1$, then $\nu \approx .83$. Although the random effects have mean zero on the transformed (or linked) scale, their effect is not mean zero on the original scale of the data. Try simulating some data from a mixed effects logistic regression model and comparing the population level average with the inverse-logit of the intercept and you will see that they are not equal, as in this example. This difference in the interpretation of the coefficients is the fundamental difference between GEE and random effects models. Edit: In general, a mixed effects model with no predictors can be written as $$ \psi \big( E(Y_{ij}|\eta_{i}) \big) = \mu + \eta_{i} $$ where $\psi$ is a link function. Whenever $$ \psi \Big( E_{\eta} \Big( \psi^{-1} \big( E(Y_{ij}|\eta_{i}) \big) \Big) \Big) \neq E_{\eta} \big( E(Y_{ij}|\eta_{i}) \big) $$ there will be a difference between the population average coefficients (GEE) and the individual specific coefficients (random effects models). That is, the averages change by transforming the data, integrating out the random effects on the transformed scale, and then transformating back. Note that in the linear model, (that is, $\psi(x) = x$), the equality does hold, so they are equivalent. Edit 2: It is also worth noting that the "robust" sandwich-type standard errors produced by a GEE model provide valid asymptotic confidence intervals (e.g. they actually cover 95% of the time) even if the correlation structure specified in the model is not correct. Edit 3: If your interest is in understanding the association structure in the data, the GEE estimates of associations are notoriously inefficient (and sometimes inconsistent). I've seen a reference for this but can't place it right now.
When to use generalized estimating equations vs. mixed effects models? Use GEE when you're interested in uncovering the population average effect of a covariate vs. the individual specific effect. These two things are only equivalent in linear models, but not in non-line
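A quick Monte Carlo check in R of the nu versus mu distinction in the record above, for mu = 1 and sigma^2 = 1; the simulation size and seed are arbitrary.
set.seed(3)
mu <- 1; sigma <- 1
eta <- rnorm(1e6, 0, sigma)                     # subject-level random effects
p_marg <- mean(plogis(mu + eta))                # population-average P(Y = 1)
nu <- qlogis(p_marg)                            # population-average log odds
c(subject_specific = mu, population_average = nu)   # nu comes out near 0.83, not 1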
2,490
When to use generalized estimating equations vs. mixed effects models?
GEE in my mind is most useful when we are not using Bayesian modeling and when a full likelihood solution is not available. Also, GEE may require larger sample sizes in order to be sufficiently accurate, and it is very non-robust to non-randomly missing longitudinal data. GEE assumes missing completely at random whereas likelihood methods (mixed effect models or generalized least squares, for example) assume only missing at random.
When to use generalized estimating equations vs. mixed effects models?
GEE in my mind is most useful when we are not using Bayesian modeling and when a full likelihood solution is not available. Also, GEE may require larger sample sizes in order to be sufficiently accur
When to use generalized estimating equations vs. mixed effects models? GEE in my mind is most useful when we are not using Bayesian modeling and when a full likelihood solution is not available. Also, GEE may require larger sample sizes in order to be sufficiently accurate, and it is very non-robust to non-randomly missing longitudinal data. GEE assumes missing completely at random whereas likelihood methods (mixed effect models or generalized least squares, for example) assume only missing at random.
When to use generalized estimating equations vs. mixed effects models? GEE in my mind is most useful when we are not using Bayesian modeling and when a full likelihood solution is not available. Also, GEE may require larger sample sizes in order to be sufficiently accur
2,491
When to use generalized estimating equations vs. mixed effects models?
You can find a thorough discussion and concrete examples in Fitzmaurice, Laird and Ware, Applied Longitudinal Analysis, John Wiley & Sons, 2011, 2nd edition, Chapters 11-16. As to the examples, you can find datasets and SAS/Stata/R programs in the companion website.
When to use generalized estimating equations vs. mixed effects models?
You can find a thorough discussion and concrete examples in Fitzmaurice, Laird and Ware, Applied Longitudinal Analysis, John Wiley & Sons, 2011, 2nd edition, Chapters 11-16. As to the examples, you ca
When to use generalized estimating equations vs. mixed effects models? You can find a thorough discussion and concrete examples in Fitzmaurice, Laird and Ware, Applied Longitudinal Analysis, John Wiley & Sons, 2011, 2nd edition, Chapters 11-16. As to the examples, you can find datasets and SAS/Stata/R programs in the companion website.
When to use generalized estimating equations vs. mixed effects models? You can find a thorough discussion and concrete examples in Fitzmaurice, Laird and Ware, Applied Longitudinal Analysis, John Wiley & Sons, 2011, 2nd edition, Chapters 11-16. As to the examples, you ca
2,492
What is a "kernel" in plain English?
In both statistics (kernel density estimation or kernel smoothing) and machine learning (kernel methods) literature, kernel is used as a measure of similarity. In particular, the kernel function $k(x,.)$ defines the distribution of similarities of points around a given point $x$. $k(x,y)$ denotes the similarity of point $x$ with another given point $y$.
What is a "kernel" in plain English?
In both statistics (kernel density estimation or kernel smoothing) and machine learning (kernel methods) literature, kernel is used as a measure of similarity. In particular, the kernel function $k(x,
What is a "kernel" in plain English? In both statistics (kernel density estimation or kernel smoothing) and machine learning (kernel methods) literature, kernel is used as a measure of similarity. In particular, the kernel function $k(x,.)$ defines the distribution of similarities of points around a given point $x$. $k(x,y)$ denotes the similarity of point $x$ with another given point $y$.
What is a "kernel" in plain English? In both statistics (kernel density estimation or kernel smoothing) and machine learning (kernel methods) literature, kernel is used as a measure of similarity. In particular, the kernel function $k(x,
2,493
What is a "kernel" in plain English?
There appear to be at least two different meanings of "kernel": one more commonly used in statistics; the other in machine learning. In statistics "kernel" is most commonly used to refer to kernel density estimation and kernel smoothing. A straightforward explanation of kernels in density estimation can be found (here). In machine learning "kernel" is usually used to refer to the kernel trick, a method of using a linear classifier to solve a non-linear problem "by mapping the original non-linear observations into a higher-dimensional space". A simple visualisation might be to imagine that all of class $0$ are within radius $r$ of the origin in an x, y plane (class $0$: $x^2 + y^2 < r^2$); and all of class $1$ are beyond radius $r$ in that plane (class $1$: $x^2 + y^2 > r^2$). No linear separator is possible, but clearly a circle of radius $r$ will perfectly separate the data. We can transform the data into three dimensional space by calculating three new variables $x^2$, $y^2$ and $\sqrt{2}xy$. The two classes will now be separable by a plane in this 3 dimensional space. The equation of that optimally separating hyperplane where $z_1 = x^2, z_2 = y^2$ and $z_3 = \sqrt{2}xy$ is $z_1 + z_2 = 1$, and in this case omits $z_3$. (If the circle is off-set from the origin, the optimal separating hyperplane will vary in $z_3$ as well.) The kernel is the mapping function which calculates the value of the 2-dimensional data in 3-dimensional space. In mathematics, there are other uses of "kernels", but these seem to be the main ones in statistics.
What is a "kernel" in plain English?
There appear to be at least two different meanings of "kernel": one more commonly used in statistics; the other in machine learning. In statistics "kernel" is most commonly used to refer to kernel den
What is a "kernel" in plain English? There appear to be at least two different meanings of "kernel": one more commonly used in statistics; the other in machine learning. In statistics "kernel" is most commonly used to refer to kernel density estimation and kernel smoothing. A straightforward explanation of kernels in density estimation can be found (here). In machine learning "kernel" is usually used to refer to the kernel trick, a method of using a linear classifier to solve a non-linear problem "by mapping the original non-linear observations into a higher-dimensional space". A simple visualisation might be to imagine that all of class $0$ are within radius $r$ of the origin in an x, y plane (class $0$: $x^2 + y^2 < r^2$); and all of class $1$ are beyond radius $r$ in that plane (class $1$: $x^2 + y^2 > r^2$). No linear separator is possible, but clearly a circle of radius $r$ will perfectly separate the data. We can transform the data into three dimensional space by calculating three new variables $x^2$, $y^2$ and $\sqrt{2}xy$. The two classes will now be separable by a plane in this 3 dimensional space. The equation of that optimally separating hyperplane where $z_1 = x^2, z_2 = y^2$ and $z_3 = \sqrt{2}xy$ is $z_1 + z_2 = 1$, and in this case omits $z_3$. (If the circle is off-set from the origin, the optimal separating hyperplane will vary in $z_3$ as well.) The kernel is the mapping function which calculates the value of the 2-dimensional data in 3-dimensional space. In mathematics, there are other uses of "kernels", but these seem to be the main ones in statistics.
What is a "kernel" in plain English? There appear to be at least two different meanings of "kernel": one more commonly used in statistics; the other in machine learning. In statistics "kernel" is most commonly used to refer to kernel den
2,494
Intuition on the Kullback–Leibler (KL) Divergence
A (metric) distance $D$ must be symmetric, i.e. $D(P,Q) = D(Q,P)$. But, from definition, $KL$ is not. Example: $\Omega = \{A,B\}$, $P(A) = 0.2, P(B) = 0.8$, $Q(A) = Q(B) = 0.5$. We have: $$KL(P,Q) = P(A)\log \frac{P(A)}{Q(A)} + P(B) \log \frac{P(B)}{Q(B)} \approx 0.19$$ and $$KL(Q,P) = Q(A)\log \frac{Q(A)}{P(A)} + Q(B) \log \frac{Q(B)}{P(B)} \approx 0.22$$ thus $KL(P,Q) \neq KL(Q,P)$ and therefore $KL$ is not a (metric) distance.
Intuition on the Kullback–Leibler (KL) Divergence
A (metric) distance $D$ must be symmetric, i.e. $D(P,Q) = D(Q,P)$. But, from definition, $KL$ is not. Example: $\Omega = \{A,B\}$, $P(A) = 0.2, P(B) = 0.8$, $Q(A) = Q(B) = 0.5$. We have: $$KL(P,Q) = P
Intuition on the Kullback–Leibler (KL) Divergence A (metric) distance $D$ must be symmetric, i.e. $D(P,Q) = D(Q,P)$. But, from definition, $KL$ is not. Example: $\Omega = \{A,B\}$, $P(A) = 0.2, P(B) = 0.8$, $Q(A) = Q(B) = 0.5$. We have: $$KL(P,Q) = P(A)\log \frac{P(A)}{Q(A)} + P(B) \log \frac{P(B)}{Q(B)} \approx 0.19$$ and $$KL(Q,P) = Q(A)\log \frac{Q(A)}{P(A)} + Q(B) \log \frac{Q(B)}{P(B)} \approx 0.22$$ thus $KL(P,Q) \neq KL(Q,P)$ and therefore $KL$ is not a (metric) distance.
Intuition on the Kullback–Leibler (KL) Divergence A (metric) distance $D$ must be symmetric, i.e. $D(P,Q) = D(Q,P)$. But, from definition, $KL$ is not. Example: $\Omega = \{A,B\}$, $P(A) = 0.2, P(B) = 0.8$, $Q(A) = Q(B) = 0.5$. We have: $$KL(P,Q) = P
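The two KL values in the record above can be checked with a couple of lines of R (natural logarithms, matching the quoted numbers):
P <- c(A = 0.2, B = 0.8)
Q <- c(A = 0.5, B = 0.5)
kl <- function(p, q) sum(p * log(p / q))   # natural log, as in the record above
kl(P, Q)   # about 0.19
kl(Q, P)   # about 0.22 -- not equal, so KL is not symmetric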
2,495
Intuition on the Kullback–Leibler (KL) Divergence
Adding to the other excellent answers, an answer from another viewpoint which may add some more of the intuition that was asked for. The Kullback-Leibler divergence is $$ \DeclareMathOperator{\KL}{KL} \KL(P || Q) = \int_{-\infty}^\infty p(x) \log \frac{p(x)}{q(x)} \; dx $$ If you have two hypotheses regarding which distribution is generating the data $X$, $P$ and $Q$, then $\frac{p(x)}{q(x)}$ is the likelihood ratio for testing $H_0 \colon Q$ against $H_1 \colon P$. We see that the Kullback-Leibler divergence above is then the expected value of the log-likelihood ratio under the alternative hypothesis. So, $\KL(P || Q)$ is a measure of the difficulty of this test problem, when $Q$ is the null hypothesis. So the asymmetry $\KL(P || Q) \not= \KL(Q || P)$ simply reflects the asymmetry between the null and alternative hypotheses. Let us look at this in a particular example. Let $P$ be the $t_\nu$-distribution and $Q$ the standard normal distribution (in the numerical example below $\nu=1$). The integral defining the divergence looks complicated, so let us simply use numerical integration in R: lLR_1 <- function(x) {dt(x, 1, log=TRUE)-dnorm(x, log=TRUE)} integrate(function(x) dt(x, 1)*lLR_1(x), lower=-Inf, upper=Inf) Error in integrate(function(x) dt(x, 1) * lLR_1(x), lower = -Inf, upper = Inf) : the integral is probably divergent lLR_2 <- function(x) {-dt(x, 1, log=TRUE) + dnorm(x, log=TRUE)} integrate(function(x) dnorm(x)*lLR_2(x), lower=-Inf, upper=Inf) 0.2592445 with absolute error < 1e-07 In the first case the integral seems to diverge numerically, indicating the divergence is very large or infinite; in the second case it is small. Summarizing: $$ \KL(P || Q) \approx \infty \\ \KL(Q || P) \approx 0.26 $$ The first case is verified by analytical symbolic integration in the answer by @Xi'an here: What's the maximum value of Kullback-Leibler (KL) divergence. What does this tell us, in practical terms? If the null model is a standard normal distribution but the data is generated from a $t_1$-distribution, then it is quite easy to reject the null! Data from a $t_1$-distribution do not look like normally distributed data. In the other case, the roles are switched. The null is the $t_1$ but the data is normal. But normally distributed data could look like $t_1$ data, so this problem is much more difficult! Here we have sample size $n=1$, and any data point that might come from a normal distribution could just as well have come from a $t_1$! Switching the roles, this is not so: the difference comes mostly from the role of outliers. Under the alternative distribution $t_1$ there is a rather large probability of obtaining a sample which has very small probability under the null (normal) model, giving a huge divergence. But when the alternative distribution is normal, practically all data we can get will have a moderate probability (really, density ...) under the null $t_1$ model, so the divergence is small. This is related to my answer here: Why should we use t errors instead of normal errors?
Intuition on the Kullback–Leibler (KL) Divergence
Adding to the other excellent answers, an answer with another viewpoint which maybe can add some more intuition, which was asked for. The Kullback-Leibler divergence is $$ \DeclareMathOperator{\KL}{KL
Intuition on the Kullback–Leibler (KL) Divergence Adding to the other excellent answers, an answer with another viewpoint which maybe can add some more intuition, which was asked for. The Kullback-Leibler divergence is $$ \DeclareMathOperator{\KL}{KL} \KL(P || Q) = \int_{-\infty}^\infty p(x) \log \frac{p(x)}{q(x)} \; dx $$ If you have two hypothesis regarding which distribution is generating the data $X$, $P$ and $Q$, then $\frac{p(x)}{q(x)}$ is the likelihood ratio for testing $H_0 \colon Q$ against $H_1 \colon P$. We see that the Kullback-Leibler divergence above is then the expected value of the loglikelihood ratio under the alternative hypothesis. So, $\KL(P || Q)$ is a measure of the difficulty of this test problem, when $Q$ is the null hypothesis. So the asymmetry $\KL(P || Q) \not= \KL(Q || P)$ simply reflects the asymmetry between null and alternative hypothesis. Let us look at this in a particular example. Let $P$ be the $t_\nu$-distribution and $Q$ the standard normal distribution (in the numerical exampe below $\nu=1$). The integral defining the divergence looks complicated, so let us simply use numerical integration in R: lLR_1 <- function(x) {dt(x, 1, log=TRUE)-dnorm(x, log=TRUE)} integrate(function(x) dt(x, 1)*lLR_1(x), lower=-Inf, upper=Inf) Error in integrate(function(x) dt(x, 1) * lLR_1(x), lower = -Inf, upper = Inf) : the integral is probably divergent lLR_2 <- function(x) {-dt(x, 1, log=TRUE) + dnorm(x, log=TRUE)} integrate(function(x) dnorm(x)*lLR_2(x), lower=-Inf, upper=Inf) 0.2592445 with absolute error < 1e-07 In the first case the integral seems to diverge numerically, indicating the divergence is very large or infinite, in the second case it is small, summarizing: $$ \KL(P || Q) \approx \infty \\ \KL(Q || P) \approx 0.26 $$ The first case is verified by analytical symbolic integration in answer by @Xi'an here: What's the maximum value of Kullback-Leibler (KL) divergence. What does this tell us, in practical terms? If the null model is a standard normal distribution but the data is generated from a $t_1$-distribution, then it is quite easy to reject the null! Data from a $t_1$-distribution do not look like normal distributed data. In the other case, the roles are switched. The null is the $t_1$ but data is normal. But normal distributed data could look like $t_1$ data, so this problem is much more difficult! Here we have sample size $n=1$, and every data which might come from a normal distribution could as well have come from a $t_1$! Switching the roles, not, the difference comes mostly from the role of outliers. Under the alternative distribution $t_1$ there is a rather large probability of obtaining a sample which have very small probability under the null (normal) model, giving a huge divergence. But when the alternative distribution is normal, practically all data we can get will have a moderate probability (really, density ...) under the null $t_1$ model, so the divergence is small. This is related to my answer here: Why should we use t errors instead of normal errors?
Intuition on the Kullback–Leibler (KL) Divergence Adding to the other excellent answers, an answer with another viewpoint which maybe can add some more intuition, which was asked for. The Kullback-Leibler divergence is $$ \DeclareMathOperator{\KL}{KL
2,496
Intuition on the Kullback–Leibler (KL) Divergence
First of all, the violation of the symmetry condition is the smallest problem with the Kullback-Leibler divergence. $D(P||Q)$ also violates the triangle inequality. You can simply introduce the symmetric version as $$ SKL(P, Q) = D(P||Q) + D(Q||P) $$, but that's still not a metric, because both $D(P||Q)$ and $SKL(P, Q)$ violate the triangle inequality. To prove that, simply take three biased coins A, B & C that produce far fewer heads than tails, e.g. coins with heads probability of: A = 0.1, B = 0.2 and C = 0.3. In both cases, for the regular KL divergence D and its symmetric version SKL, check that they don't fulfil the triangle inequality $$D(A||B) + D(B||C) \ngeqslant D(A||C)$$ $$SKL(A, B) + SKL(B, C) \ngeqslant SKL(A, C)$$ Simply use these formulas: $$ D(P||Q) = \sum\limits_{i}p_i \cdot \log(\frac{p_i}{q_i})$$ $$ SKL(P, Q) = \sum\limits_{i}(p_i - q_i) \cdot \log(\frac{p_i}{q_i})$$ $$D(A||B) = 0.1 \cdot \log(\frac{0.1}{0.2}) + 0.9 \cdot \log(\frac{0.9}{0.8}) \approx 0.0159$$ $$D(B||C) \approx 0.0112$$ $$D(A||C) \approx 0.0505$$ $$0.0159 + 0.0112 \ngeqslant 0.0505$$ $$SKL(A, B) \approx 0.0352$$ $$SKL(B, C) \approx 0.0234$$ $$SKL(A, C) \approx 0.1173$$ $$ 0.0352 + 0.0234 \ngeqslant 0.1173$$ (The numerical values above use base-10 logarithms.) I introduced this example on purpose. Let's imagine that you're tossing some coins, e.g. 100 times. As long as these coins are unbiased, you would simply encode the tossing results with a sequence of 0-1 bits (1-head, 0-tail). In such a situation, when the probability of a head is the same as the probability of a tail and equal to 0.5, that's quite an effective encoding. Now, we have some biased coins, so we would rather encode the more likely results with a shorter code, e.g. merge groups of heads and tails and represent a sequence of k heads with a longer code than a sequence of k tails (tails are more probable). And here the Kullback-Leibler divergence $D(P||Q)$ occurs. If P represents the true distribution of results, and Q is only an approximation of P, then $D(P||Q)$ denotes the penalty you pay when you encode results that actually come from the distribution P with an encoding intended for Q (a penalty in the sense of the extra bits you need to use). If you simply need a metric, use the Bhattacharyya distance (of course the modified version $\sqrt{1 - [\sum\limits_{x} \sqrt{p(x)q(x)}]}$ )
2,497
Intuition on the Kullback–Leibler (KL) Divergence
I am tempted here to give a purely intuitive answer to your question. Rephrasing what you say, the KL divergence is a way to measure the distance between two distributions, much as you would compute the distance between two data sets in a Hilbert space, but some caution should be taken. Why? The KL divergence is not a distance in the sense you usually use the word, such as, for instance, the $L_2$ norm. It is indeed non-negative and equal to zero if and only if the two distributions are equal (as in the axioms for defining a distance), but, as mentioned, it is not symmetric. There are ways to circumvent this, but it also makes sense for it not to be symmetric. Indeed, the KL divergence quantifies the discrepancy between a model distribution $Q$ (that you actually know) and a theoretical one $P$, so it makes sense to handle $KL(P, Q)$ (the "theoretical" distance of $P$ to $Q$, assuming the model $P$) and $KL(Q, P)$ (the "empirical" distance of $P$ to $Q$, assuming the data $Q$) differently, as they are quite different measures.
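For readers who want to see the asymmetry in numbers rather than words, here is a tiny sketch of mine; the two discrete distributions are arbitrary toy choices, not anything from the question, and scipy's entropy function computes the KL divergence when given two arguments.

```python
from scipy.stats import entropy

# Two toy discrete distributions over three outcomes (purely illustrative)
P = [0.7, 0.2, 0.1]   # e.g. the "theoretical" distribution
Q = [0.4, 0.4, 0.2]   # e.g. an empirical / model approximation

# scipy.stats.entropy(p, q) returns the KL divergence D(p || q)
print(entropy(P, Q, base=2))  # D(P || Q) in bits
print(entropy(Q, P, base=2))  # D(Q || P): a different number, since KL is not symmetric
```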
2,498
Intuition on the Kullback–Leibler (KL) Divergence
The textbook Elements of Information Theory gives us an example: "For example, if we knew the true distribution p of the random variable, we could construct a code with average description length H(p). If, instead, we used the code for a distribution q, we would need H(p) + D(p||q) bits on the average to describe the random variable." To paraphrase the above statement: if we encode data that actually come from the distribution p with a code optimized for the distribution q, we need D(p||q) extra bits on average.

An illustration

Let me illustrate this using one application of it in natural language processing. Consider a large group of people, labelled B, who act as mediators; each of them is assigned the task of choosing a noun from turkey, animal and book and transmitting it to C. There is a guy named A who may send each of them an email to give them some hints. If no one in the group receives the email, they may raise their eyebrows and hesitate for a while, considering what C needs, and the probability of each option being chosen is 1/3, a totally uniform distribution (if not, it may reflect their own preferences, and we simply ignore such cases). But if they are given a verb, like baste, 3/4 of them may choose turkey, 3/16 choose animal and 1/16 choose book. Then how much information, in bits, has each mediator obtained on average once they know the verb? It is: \begin{align*} D(p(nouns|baste)||p(nouns)) &= \sum_{x\in\{turkey, animal, book\}} p(x|baste) \log_2 \frac{p(x|baste)}{p(x)} \\ &= \frac{3}{4} * \log_2 \frac{\frac{3}{4}}{\frac{1}{3}} + \frac{3}{16} * \log_2\frac{\frac{3}{16}}{\frac{1}{3}} + \frac{1}{16} * \log_2\frac{\frac{1}{16}}{\frac{1}{3}}\\ &= 0.5709 \space \space bits\\ \end{align*} But what if the verb given is read? We may imagine that all of them would choose book without hesitation, so the average information gain for each mediator from the verb read is: \begin{align*} D(p(nouns|read)||p(nouns)) &= \sum_{x\in\{book\}} p(x|read) \log_2 \frac{p(x|read)}{p(x)} \\ &= 1 * \log_2 \frac{1}{\frac{1}{3}} \\ & =1.5849 \space \space bits \\ \end{align*} We can see that the verb read gives the mediators more information, and that is what relative entropy can measure.

Let's continue our story. Suppose C suspects that the noun may be wrong, because A told him that he might have made a mistake by sending the wrong verb to the mediators. How much information, in bits, does such a piece of bad news give C? 1) if the verb given by A was baste: \begin{align*} D(p(nouns)||p(nouns|baste)) &= \sum_{x\in\{turkey, animal, book\}} p(x) \log_2 \frac{p(x)}{p(x|baste)} \\ &= \frac{1}{3} * \log_2 \frac{\frac{1}{3}}{\frac{3}{4}} + \frac{1}{3} * \log_2\frac{\frac{1}{3}}{\frac{3}{16}} + \frac{1}{3} * \log_2\frac{\frac{1}{3}}{\frac{1}{16}}\\ &= 0.69172 \space \space bits\\ \end{align*} 2) but what if the verb was read? \begin{align*} D(p(nouns)||p(nouns|read)) &= \sum_{x\in\{turkey, animal, book\}} p(x) \log_2 \frac{p(x)}{p(x|read)} \\ &= \frac{1}{3} * \log_2 \frac{\frac{1}{3}}{1} + \frac{1}{3} * \log_2\frac{\frac{1}{3}}{0} + \frac{1}{3} * \log_2\frac{\frac{1}{3}}{0}\\ &= \infty \space \space bits\\ \end{align*} The divergence is infinite because C never knows what the other two nouns would be, and any word in the vocabulary would be possible. We can see that the KL divergence is asymmetric. I hope I am right, and if not please comment and help correct me. Thanks in advance.
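To verify these figures, here is a short sketch of mine using only the distributions from the example above; the zero probabilities in the read case are what make the reverse divergence infinite.

```python
import numpy as np

def kl_bits(p, q):
    """D(p||q) in bits; terms with p_i = 0 contribute 0, terms with p_i > 0 and q_i = 0 give inf."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    with np.errstate(divide="ignore"):
        terms = p[mask] * np.log2(p[mask] / q[mask])
    return np.sum(terms)

prior = [1/3, 1/3, 1/3]      # p(nouns): turkey, animal, book
baste = [3/4, 3/16, 1/16]    # p(nouns | baste)
read  = [0.0, 0.0, 1.0]      # p(nouns | read)

print(kl_bits(baste, prior))  # ~0.5709 bits
print(kl_bits(read, prior))   # ~1.5849 bits
print(kl_bits(prior, baste))  # ~0.6917 bits
print(kl_bits(prior, read))   # inf
```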
2,499
Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis?
Disclaimer: @ttnphns is very knowledgeable about both PCA and FA, and I respect his opinion and have learned a lot from many of his great answers on the topic. However, I tend to disagree with his reply here, as well as with other (numerous) posts on this topic here on CV, not only his; or rather, I think they have limited applicability.

I think that the difference between PCA and FA is overrated. Look at it like that: both methods attempt to provide a low-rank approximation of a given covariance (or correlation) matrix. "Low-rank" means that only a limited (low) number of latent factors or principal components is used. If the $n \times n$ covariance matrix of the data is $\mathbf C$, then the models are: \begin{align} \mathrm{PCA:} &\:\:\: \mathbf C \approx \mathbf W \mathbf W^\top \\ \mathrm{PPCA:} &\:\:\: \mathbf C \approx \mathbf W \mathbf W^\top + \sigma^2 \mathbf I \\ \mathrm{FA:} &\:\:\: \mathbf C \approx \mathbf W \mathbf W^\top + \boldsymbol \Psi \end{align} Here $\mathbf W$ is a matrix with $k$ columns (where $k$ is usually chosen to be a small number, $k<n$), representing $k$ principal components or factors, $\mathbf I$ is an identity matrix, and $\boldsymbol \Psi$ is a diagonal matrix. Each method can be formulated as finding $\mathbf W$ (and the rest) minimizing the [norm of the] difference between left-hand and right-hand sides.

PPCA stands for probabilistic PCA, and if you don't know what that is, it does not matter so much for now. I wanted to mention it because it neatly fits between PCA and FA, having intermediate model complexity. It also puts the allegedly large difference between PCA and FA into perspective: even though it is a probabilistic model (exactly like FA), it actually turns out to be almost equivalent to PCA ($\mathbf W$ spans the same subspace).

Most importantly, note that the models only differ in how they treat the diagonal of $\mathbf C$. As the dimensionality $n$ increases, the diagonal becomes in a way less and less important (because there are only $n$ elements on the diagonal and $n(n-1)/2 = \mathcal O (n^2)$ elements off the diagonal). As a result, for large $n$ there is usually not much of a difference between PCA and FA at all, an observation that is rarely appreciated. For small $n$ they can indeed differ a lot.

Now to answer your main question as to why people in some disciplines seem to prefer PCA. I guess it boils down to the fact that it is mathematically a lot easier than FA (this is not obvious from the above formulas, so you have to believe me here): PCA -- as well as PPCA, which is only slightly different -- has an analytic solution, whereas FA does not. So FA needs to be fit numerically; there exist various algorithms for doing it, giving possibly different answers and operating under different assumptions, etc. etc. In some cases some algorithms can get stuck (see e.g. "Heywood cases"). For PCA you perform an eigen-decomposition and you are done; FA is a lot more messy. Technically, PCA simply rotates the variables, and that is why one can refer to it as a mere transformation, as @NickCox did in his comment above. The PCA solution does not depend on $k$: you can find the first three PCs ($k=3$) and the first two of those are going to be identical to the ones you would find if you initially set $k=2$. That is not true for FA: the solution for $k=2$ is not necessarily contained inside the solution for $k=3$. This is counter-intuitive and confusing.
Of course FA is a more flexible model than PCA (after all, it has more parameters) and can often be more useful. I am not arguing against that. What I am arguing against is the claim that they are conceptually very different, with PCA being about "describing the data" and FA being about "finding latent variables". I just do not see this as true [almost] at all.

To comment on some specific points mentioned above and in the linked answers: "in PCA the number of dimensions to extract/retain is fundamentally subjective, while in EFA the number is fixed, and you usually have to check several solutions" -- well, the choice of the solution is still subjective, so I don't see any conceptual difference here. In both cases, $k$ is (subjectively or objectively) chosen to optimize the trade-off between model fit and model complexity. "FA is able to explain pairwise correlations (covariances). PCA generally cannot do it" -- not really, both of them explain correlations better and better as $k$ grows.

Sometimes extra confusion arises (but not in @ttnphns's answers!) due to the different practices in the disciplines using PCA and FA. For example, it is a common practice to rotate factors in FA to improve interpretability. This is rarely done after PCA, but in principle nothing is preventing it. So people often tend to think that FA gives you something "interpretable" and PCA does not, but this is often an illusion.

Finally, let me stress again that for very small $n$ the differences between PCA and FA can indeed be large, and maybe some of the claims in favour of FA are made with small $n$ in mind. As an extreme example, for $n=2$ a single factor can always perfectly explain the correlation, but one PC can fail to do it quite badly.

Update 1: generative models of the data

You can see from the number of comments that what I am saying is taken to be controversial. At the risk of flooding the comment section even further, here are some remarks regarding "models" (see comments by @ttnphns and @gung). @ttnphns does not like that I used the word "model" [of the covariance matrix] to refer to the approximations above; it is an issue of terminology, but what he calls "models" are probabilistic/generative models of the data: \begin{align} \mathrm{PPCA}: &\:\:\: \mathbf x = \mathbf W \mathbf z + \boldsymbol \mu + \boldsymbol \epsilon, \; \boldsymbol \epsilon \sim \mathcal N(0, \sigma^2 \mathbf I) \\ \mathrm{FA}: &\:\:\: \mathbf x = \mathbf W \mathbf z + \boldsymbol \mu + \boldsymbol \epsilon, \; \boldsymbol \epsilon \sim \mathcal N(0, \boldsymbol \Psi) \end{align} Note that PCA is not a probabilistic model, and cannot be formulated in this way.

The difference between PPCA and FA is in the noise term: PPCA assumes the same noise variance $\sigma^2$ for each variable, whereas FA assumes different variances $\Psi_{ii}$ ("uniquenesses"). This minor difference has important consequences. Both models can be fit with a general expectation-maximization algorithm. For FA no analytic solution is known, but for PPCA one can analytically derive the solution that EM will converge to (both $\sigma^2$ and $\mathbf W$). It turns out that $\mathbf W_\mathrm{PPCA}$ has columns in the same direction but with a smaller length than the standard PCA loadings $\mathbf W_\mathrm{PCA}$ (I omit exact formulas). For that reason I think of PPCA as "almost" PCA: in both cases $\mathbf W$ spans the same "principal subspace".
The proof (Tipping and Bishop 1999) is a bit technical; the intuitive reason why homogeneous noise variance leads to a much simpler solution is that $\mathbf C - \sigma^2 \mathbf I$ has the same eigenvectors as $\mathbf C$ for any value of $\sigma^2$, but this is not true for $\mathbf C - \boldsymbol \Psi$. So yes, @gung and @ttnphns are right in that FA is based on a generative model and PCA is not, but I think it is important to add that PPCA is also based on a generative model, yet is "almost" equivalent to PCA. Then it ceases to seem such an important difference.

Update 2: how come PCA provides the best approximation to the covariance matrix, when it is well known to be looking for maximal variance?

PCA has two equivalent formulations: e.g. the first PC is (a) the one maximizing the variance of the projection and (b) the one providing minimal reconstruction error. More abstractly, the equivalence between maximizing variance and minimizing reconstruction error can be seen using the Eckart-Young theorem. If $\mathbf X$ is the data matrix (with observations as rows, variables as columns, and columns assumed to be centered) and its SVD is $\mathbf X=\mathbf U\mathbf S\mathbf V^\top$, then it is well known that the columns of $\mathbf V$ are eigenvectors of the scatter matrix (or covariance matrix, if divided by the number of observations) $\mathbf C=\mathbf X^\top \mathbf X=\mathbf V\mathbf S^2\mathbf V^\top$, and so they are the axes maximizing the variance (i.e. principal axes). But by the Eckart-Young theorem, the first $k$ PCs provide the best rank-$k$ approximation to $\mathbf X$: $\mathbf X_k=\mathbf U_k\mathbf S_k \mathbf V^\top_k$ (this notation means taking only the $k$ largest singular values/vectors) minimizes $\|\mathbf X-\mathbf X_k\|^2$. The first $k$ PCs provide not only the best rank-$k$ approximation to $\mathbf X$, but also to the covariance matrix $\mathbf C$. Indeed, $\mathbf C=\mathbf X^\top \mathbf X=\mathbf V\mathbf S^2\mathbf V^\top$, and the last equation provides the SVD of $\mathbf C$ (because $\mathbf V$ is orthogonal and $\mathbf S^2$ is diagonal). So the Eckart-Young theorem tells us that the best rank-$k$ approximation to $\mathbf C$ is given by $\mathbf C_k = \mathbf V_k\mathbf S_k^2\mathbf V_k^\top$. This can be transformed by noticing that $\mathbf W = \mathbf V\mathbf S$ are the PCA loadings, and so $$\mathbf C_k=\mathbf V_k\mathbf S_k^2\mathbf V^\top_k=(\mathbf V\mathbf S)_k(\mathbf V\mathbf S)_k^\top=\mathbf W_k\mathbf W^\top_k.$$ The bottom line here is that $$ \mathrm{minimizing} \; \left\{\begin{array}{ll} \|\mathbf C-\mathbf W\mathbf W^\top\|^2 \\ \|\mathbf C-\mathbf W\mathbf W^\top-\sigma^2\mathbf I\|^2 \\ \|\mathbf C-\mathbf W\mathbf W^\top-\boldsymbol\Psi\|^2\end{array}\right\} \; \mathrm{leads \: to} \; \left\{\begin{array}{cc} \mathrm{PCA}\\ \mathrm{PPCA} \\ \mathrm{FA} \end{array}\right\} \; \mathrm{loadings},$$ as stated in the beginning.

Update 3: numerical demonstration that PCA$\to$FA when $n \to \infty$

I was encouraged by @ttnphns to provide a numerical demonstration of my claim that, as dimensionality grows, the PCA solution approaches the FA solution. Here it goes. I generated a $200\times 200$ random correlation matrix with some strong off-diagonal correlations. I then took the upper-left $n \times n$ square block $\mathbf C$ of this matrix with $n=25, 50, \dots 200$ variables to investigate the effect of the dimensionality.
For each $n$, I performed PCA and FA with the number of components/factors $k=1\dots 5$, and for each $k$ I computed the off-diagonal reconstruction error $$\sum_{i\ne j}\left[\mathbf C - \mathbf W \mathbf W^\top\right]^2_{ij}$$ (note that on the diagonal, FA reconstructs $\mathbf C$ perfectly, due to the $\boldsymbol \Psi$ term, whereas PCA does not; but the diagonal is ignored here). Then for each $n$ and $k$, I computed the ratio of the PCA off-diagonal error to the FA off-diagonal error. This ratio has to be above $1$, because FA provides the best possible reconstruction.

In the resulting plot (not reproduced here), different lines correspond to different values of $k$, and $n$ is shown on the horizontal axis. Note that as $n$ grows, the ratios (for all $k$) approach $1$, meaning that PCA and FA yield approximately the same loadings, PCA$\approx$FA. With relatively small $n$, e.g. when $n=25$, PCA performs [expectedly] worse, but the difference is not that strong for small $k$, and even for $k=5$ the ratio is below $1.2$. The ratio can become large when the number of factors $k$ becomes comparable with the number of variables $n$. In the example I gave above with $n=2$ and $k=1$, FA achieves $0$ reconstruction error, whereas PCA does not, i.e. the ratio would be infinite. But getting back to the original question, when $n=21$ and $k=3$, PCA will only moderately lose to FA in explaining the off-diagonal part of $\mathbf C$.

For an illustrated example of PCA and FA applied to a real dataset (the wine dataset with $n=13$), see my answers here: "What are the differences between Factor Analysis and Principal Component Analysis?" and "PCA and exploratory Factor Analysis on the same data set".
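The simulation described in Update 3 is straightforward to approximate. The sketch below is my own reconstruction rather than the author's original code: how the random correlation matrix is generated is an assumption, and a simple principal-axis factoring loop stands in for a full maximum-likelihood FA fit, so the numbers will only roughly track the behaviour described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_corr(p, n_factors=5, noise=0.5):
    """A random correlation matrix with fairly strong off-diagonal structure (an assumption;
    the original answer does not say how its matrix was generated)."""
    W = rng.normal(size=(p, n_factors))
    C = W @ W.T + noise * np.eye(p)
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)

def pca_loadings(C, k):
    """PCA loadings W = top-k eigenvectors scaled by sqrt of eigenvalues."""
    vals, vecs = np.linalg.eigh(C)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(vals[idx])

def paf_loadings(C, k, n_iter=200):
    """Iterated principal-axis factoring, a simple stand-in for a proper FA fit."""
    psi = np.full(C.shape[0], 0.5)              # initial uniquenesses
    for _ in range(n_iter):
        R = C.copy()
        np.fill_diagonal(R, 1 - psi)            # replace diagonal by communalities
        vals, vecs = np.linalg.eigh(R)
        idx = np.argsort(vals)[::-1][:k]
        W = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
        psi = np.clip(1 - np.sum(W**2, axis=1), 1e-3, 1)
    return W

def offdiag_err(C, W):
    """Sum of squared off-diagonal reconstruction errors of C by W W^T."""
    R = C - W @ W.T
    np.fill_diagonal(R, 0.0)
    return np.sum(R**2)

big = random_corr(200)
for n in (25, 50, 100, 200):
    C = big[:n, :n]
    for k in (1, 3, 5):
        ratio = offdiag_err(C, pca_loadings(C, k)) / offdiag_err(C, paf_loadings(C, k))
        print(f"n={n:3d}  k={k}  PCA/FA off-diagonal error ratio = {ratio:.3f}")
```

If the sketch behaves like the original experiment, the printed ratios should drift towards 1 as $n$ grows, for each $k$.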
2,500
Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis?
As you said, you are familiar with the relevant answers; see also: So, as long as "Factor analysis..." plus the last couple of paragraphs, and the bottom list there.

In short, PCA is mostly a data-reduction technique, whereas FA is a technique for modelling latent traits. Sometimes they happen to give similar results; but in your case - because you probably intend to construct/validate latent traits as if they were real entities - using FA would be more honest, and you shouldn't prefer PCA in the hope that their results converge. On the other hand, whenever you aim to summarise/simplify the data - for subsequent analysis, for example - you would prefer PCA, as it doesn't impose any strong model (which might be irrelevant) on the data.

To put it another way, PCA gives you dimensions which may correspond to some subjectively meaningful constructs, if you wish, while EFA posits that these are covert features that actually generated your data, and it aims to find those features. In FA, interpretation of the dimensions (factors) is pending - whether you can attach a meaning to a latent variable or not, it "exists" (FA is essentialistic); otherwise you should drop it from the model or get more data to support it. In PCA, the meaning of a dimension is optional.

And once again, in other words: when you extract m factors (separating factors from errors), these few factors explain (almost) all the correlation among the variables, so that the variables are left no room to correlate via the errors. Therefore, so long as "factors" are defined as latent traits which generate/bind the correlated data, you have full grounds to interpret them - as what is responsible for the correlations. In PCA (extracting components as if they were "factors"), errors may still correlate between the variables, so you can't claim that you've extracted something clean and exhaustive enough to be interpreted in that way.

You may want to read my other, longer answer in the current discussion for some theoretical and simulation-experiment details about whether PCA is a viable substitute for FA. Please pay attention also to the outstanding answers by @amoeba given on this thread.

Upd: In their answer to this question @amoeba, who took the opposing view there, introduced a (not well-known) technique, PPCA, as standing halfway between PCA and FA. This naturally suggests the logic that PCA and FA lie along one line rather than being opposites. That valuable approach expands one's theoretical horizons. But it can mask the important practical difference that FA reconstructs (explains) all the pairwise covariances with a few factors, while PCA cannot do this successfully (and when it occasionally does, that is because it happened to mimic FA).
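The practical point in the last paragraph (that FA reproduces the pairwise correlations with a few factors while PCA generally leaves residual correlations) can be illustrated with a small simulation. This sketch is mine, not part of the answer; the data-generating step, the choice of scikit-learn's FactorAnalysis and PCA, and all numerical settings are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(1)

# Simulated data with a 3-factor structure plus variable-specific noise (arbitrary choices)
n_obs, n_var, k = 2000, 12, 3
W_true = rng.normal(size=(n_var, k))
noise_sd = rng.uniform(0.5, 1.5, n_var)
X = rng.normal(size=(n_obs, k)) @ W_true.T + rng.normal(size=(n_obs, n_var)) * noise_sd
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize, so we work with correlations
C = np.corrcoef(X, rowvar=False)

def offdiag_rms(R):
    """Root-mean-square of the off-diagonal residuals."""
    R = R.copy()
    np.fill_diagonal(R, 0.0)
    return np.sqrt(np.mean(R ** 2))

fa = FactorAnalysis(n_components=k).fit(X)
W_fa = fa.components_.T                        # FA loadings, shape (n_var, k)

pca = PCA(n_components=k).fit(X)
W_pca = pca.components_.T * np.sqrt(pca.explained_variance_)   # PCA loadings

# Residual correlations after removing what the k factors / k components explain
print("FA  off-diagonal residual RMS:", offdiag_rms(C - W_fa @ W_fa.T))
print("PCA off-diagonal residual RMS:", offdiag_rms(C - W_pca @ W_pca.T))
```

In a setting like this, where the data really do follow a factor model with unequal specific variances, the FA residuals are typically noticeably smaller than the PCA residuals, which is the practical difference the answer emphasizes.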