5,201
Intuitive difference between hidden Markov models and conditional random fields
"Conditional Random Fields can be understood as a sequential extension to the Maximum Entropy Model". This sentence is from a technical report related to "Classical Probabilistic Models and Conditional Random Fields". It is probably the best read for topics such as HMM, CRF and Maximum Entropy. PS: Figure 1 in the link gives a very good comparison between them. Regards,
5,202
Multiple regression or partial correlation coefficient? And relations between the two
Multiple linear regression coefficients and partial correlations are directly linked and have the same significance (p-value). Partial r is just another way of standardizing the coefficient, along with the beta coefficient (standardized regression coefficient)$^1$. So, if the dependent variable is $y$ and the independents are $x_1$ and $x_2$, then $$\text{Beta:} \quad \beta_{x_1} = \frac{r_{yx_1} - r_{yx_2}r_{x_1x_2} }{1-r_{x_1x_2}^2}$$ $$\text{Partial r:} \quad r_{yx_1.x_2} = \frac{r_{yx_1} - r_{yx_2}r_{x_1x_2} }{\sqrt{ (1-r_{yx_2}^2)(1-r_{x_1x_2}^2) }}$$ You see that the numerators are the same, which tells us that both formulas measure the same unique effect of $x_1$. I will try to explain how the two formulas are structurally identical and how they are not.

Suppose that you have z-standardized (mean 0, variance 1) all three variables. The numerator then equals the covariance between two kinds of residuals: (a) the residuals left in predicting $y$ by $x_2$ [both variables standardized] and (b) the residuals left in predicting $x_1$ by $x_2$ [both variables standardized]. Moreover, the variance of residuals (a) is $1-r_{yx_2}^2$; the variance of residuals (b) is $1-r_{x_1x_2}^2$.

The formula for the partial correlation then appears clearly as the formula of a plain Pearson $r$, computed in this instance between residuals (a) and residuals (b): Pearson $r$, we know, is the covariance divided by a denominator that is the geometric mean of two different variances. The standardized coefficient beta is structurally like Pearson $r$, except that the denominator is the geometric mean of a variance with itself: the variance of residuals (a) is not counted; it is replaced by a second counting of the variance of residuals (b). Beta is thus the covariance of the two residuals relative to the variance of one of them (specifically, the one pertaining to the predictor of interest, $x_1$), while the partial correlation, as already noticed, is that same covariance relative to their hybrid variance. Both types of coefficient are ways to standardize the effect of $x_1$ in the milieu of other predictors.

Some numerical consequences of the difference. If the R-square of the multiple regression of $y$ by $x_1$ and $x_2$ happens to be 1, then both partial correlations of the predictors with the dependent will also be 1 in absolute value (but the betas will generally not be 1). Indeed, as said before, $r_{yx_1.x_2}$ is the correlation between the residuals of y <- x2 and the residuals of x1 <- x2. If what is not $x_2$ within $y$ is exactly what is not $x_2$ within $x_1$, then there is nothing within $y$ that is neither $x_1$ nor $x_2$: complete fit. Whatever the amount of the unexplained (by $x_2$) portion left in $y$ (the $1-r_{yx_2}^2$), if it is captured relatively highly by the independent portion of $x_1$ (the $1-r_{x_1x_2}^2$), then $r_{yx_1.x_2}$ will be high. $\beta_{x_1}$, on the other hand, will be high only provided that the captured unexplained portion of $y$ is itself a substantial portion of $y$.

From the above formulas one obtains (extending from 2-predictor regression to a regression with an arbitrary number of predictors $x_1,x_2,x_3,\dots$) the conversion formula between beta and the corresponding partial r: $$r_{yx_1.X} = \beta_{x_1} \sqrt{ \frac {\text{var} (e_{x_1 \leftarrow X})} {\text{var} (e_{y \leftarrow X})}},$$ where $X$ stands for the collection of all predictors except the current one ($x_1$); $e_{y \leftarrow X}$ are the residuals from regressing $y$ on $X$, and $e_{x_1 \leftarrow X}$ are the residuals from regressing $x_1$ on $X$; the variables in both of these regressions enter standardized.

Note: if we need to compute the partial correlations of $y$ with every predictor $x$, we usually won't use this formula, which requires doing two additional regressions. Rather, sweep operations (often used in stepwise and all-subsets regression algorithms) will be done, or the anti-image correlation matrix will be computed.

$^1$ $\beta_{x_1} = b_{x_1} \frac {\sigma_{x_1}}{\sigma_y}$ is the relation between the raw $b$ and the standardized $\beta$ coefficients in regression with an intercept.

Addendum. Geometry of regression $\beta$ and partial $r$.

On the picture below, a linear regression with two correlated predictors, $X_1$ and $X_2$, is shown. The three variables, including the dependent $Y$, are drawn as vectors (arrows). This way of display differs from the usual scatterplot (aka variable-space display) and is called subject-space display. (You may encounter similar drawings in a number of other threads on this site.) The pictures are drawn after all three variables were centered, and so (1) every vector's length = the st. deviation of the respective variable, and (2) the angle (its cosine) between every two vectors = the correlation between the respective variables.

$Y'$ is the regression prediction (the orthogonal projection of $Y$ onto "plane X" spanned by the regressors); $e$ is the error term; $\cos \angle{Y Y'}={|Y'|}/|Y|$ is the multiple correlation coefficient. The skew coordinates of $Y'$ on the predictors $X_1$ and $X_2$ relate to their multiple regression coefficients: these lengths from the origin are the scaled $b$'s or $\beta$'s. For example, the magnitude of the skew coordinate onto $X_1$ equals $\beta_1\sigma_Y= b_1\sigma_{X_1}$; so, if $Y$ is standardized ($|Y|=1$), the coordinate = $\beta_1$.

But how to obtain an impression of the corresponding partial correlation $r_{yx_1.x_2}$? To partial out $X_2$ from the other two variables, one has to project them onto the plane which is orthogonal to $X_2$. Below, on the left, this plane perpendicular to $X_2$ has been drawn. It is shown at the bottom - and not on the level of the origin - simply in order not to jam the pic. Let's inspect what's going on in that space. Put your eye at the bottom (of the left pic) and glance up, with the $X_2$ vector starting right at your eye. All the vectors are now projections. $X_2$ is a point, since the plane was produced as the one perpendicular to it. We look so that "plane X" is a horizontal line to us. Therefore, of the four vectors only (the projection of) $Y$ departs from the line. From this perspective, $r_{yx_1.x_2}$ is $\cos \alpha$, where $\alpha$ is the angle between the projection vectors of $Y$ and of $X_1$ on the plane orthogonal to $X_2$. So it is very simple to understand. Note that $r_{yx_1.x_2}=r_{yy'.x_2}$, as both $Y'$ and $X_1$ belong to "plane X". We can trace the projections on the right picture back to the left one.

Notice that $Y$ on the right pic is $Y^\perp$ of the left, which is the residual of regressing $Y$ on $X_2$. Likewise, $X_1$ on the right pic is $X_1^\perp$ of the left, which is the residual of regressing $X_1$ on $X_2$. The correlation between these two residual vectors is $r_{yx_1.x_2}$, as we know.
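These relations are easy to verify numerically. Below is a minimal R sketch on simulated (hypothetical) data: both coefficients are built from the covariance of the two residual vectors and differ only in the denominator, and the conversion formula connects them.

# Minimal R sketch on simulated data (all names here are placeholders, not from the answer)
set.seed(1)
n  <- 200
x2 <- rnorm(n)
x1 <- 0.6 * x2 + rnorm(n)
y  <- 0.5 * x1 + 0.3 * x2 + rnorm(n)

# z-standardize all three variables
zy  <- as.numeric(scale(y))
zx1 <- as.numeric(scale(x1))
zx2 <- as.numeric(scale(x2))

e_y  <- resid(lm(zy ~ zx2))    # residuals (a): y predicted by x2
e_x1 <- resid(lm(zx1 ~ zx2))   # residuals (b): x1 predicted by x2

beta_x1 <- coef(lm(zy ~ zx1 + zx2))["zx1"]     # standardized regression coefficient
part_r  <- cor(e_y, e_x1)                      # partial correlation r_{yx1.x2}

cov(e_y, e_x1) / var(e_x1)                     # covariance over var of residuals (b) = beta
cov(e_y, e_x1) / sqrt(var(e_y) * var(e_x1))    # covariance over hybrid variance = partial r
beta_x1 * sqrt(var(e_x1) / var(e_y))           # conversion formula recovers the partial r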
5,203
Multiple regression or partial correlation coefficient? And relations between the two
Just bumped into this thread by chance. In the original answer, in the formula for $\beta_{x_1}$, the factor $\sqrt{SSY/SSX_1}$ is missing, that is $$ \beta_{x_1} = \frac{r_{yx_1} - r_{y x_2} ~r_{x_1 x_2}} {1-r^2_{x_1 x_2}} \times \sqrt{\frac{SSY}{SSX_1}}, $$ where $SSY=\sum_i (y_i-\bar y)^2$ and $SSX_1 = \sum_i {(x_{1i} - \bar{x}_1)^2}$.
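As a quick numerical sanity check (simulated, hypothetical data; none of these names come from the thread), the expression with the $\sqrt{SSY/SSX_1}$ factor can be compared against the coefficient that lm() returns for $x_1$:

set.seed(2)
n  <- 100
x2 <- rnorm(n)
x1 <- 0.5 * x2 + rnorm(n)
y  <- 1 + 2 * x1 - x2 + rnorm(n)

r_yx1 <- cor(y, x1); r_yx2 <- cor(y, x2); r_x1x2 <- cor(x1, x2)
SSY   <- sum((y - mean(y))^2)
SSX1  <- sum((x1 - mean(x1))^2)

(r_yx1 - r_yx2 * r_x1x2) / (1 - r_x1x2^2) * sqrt(SSY / SSX1)
coef(lm(y ~ x1 + x2))["x1"]    # the two values agree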
5,204
Choosing variables to include in a multiple linear regression model
Based on your reaction to my comment: you are looking for prediction. Thus, you should not really rely on the (in)significance of the coefficients. You would do better to:

Pick a criterion that describes your prediction needs best (e.g. misclassification rate, AUC of ROC, some weighted form of these, ...).

For each model of interest, evaluate this criterion. This can be done e.g. by providing a validation set (if you're lucky or rich), through cross-validation (typically tenfold), or whatever other options your criterion of interest allows. If possible, also find an estimate of the SE of the criterion for each model (e.g. by using the values over the different folds in cross-validation).

Now you can pick the model with the best value of the criterion, though it is typically advised to pick the most parsimonious model (fewest variables) that is within one SE of the best value.

With respect to "each model of interest": herein lies quite a catch. With 10 potential predictors, that is a truckload of potential models. If you've got the time or the processors for this (or if your data is small enough so that models get fit and evaluated fast enough): have a ball. If not, you can go about this by educated guesses, forward or backward modelling (but using the criterion instead of significance), or better yet: use some algorithm that picks a reasonable set of models. One algorithm that does this is penalized regression, in particular lasso regression. If you're using R, just plug in the package glmnet and you're about ready to go.
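A minimal sketch of that glmnet route (x, y and x_new are placeholder names for your numeric predictor matrix, response and new data; they are not from the question):

library(glmnet)

set.seed(3)
cvfit <- cv.glmnet(x, y, alpha = 1, nfolds = 10)  # alpha = 1 -> lasso, tuned by 10-fold CV
# for a binary outcome: family = "binomial", type.measure = "auc" or "class"
plot(cvfit)                                       # CV criterion with one-SE error bars

coef(cvfit, s = "lambda.1se")    # most parsimonious model within one SE of the best
coef(cvfit, s = "lambda.min")    # model with the best CV criterion value
predict(cvfit, newx = x_new, s = "lambda.1se")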
5,205
Choosing variables to include in a multiple linear regression model
There is no simple answer to this. When you remove some of the non-significant explanatory variables, others that are correlated with those may become significant. There is nothing wrong with this, but it makes model selection at least partly art rather than science. This is why experiments aim to keep explanatory variables orthogonal to each other, to avoid this problem. Traditionally, analysts did stepwise adding and subtracting of variables to the model one at a time (similar to what you have done) and tested them individually or in small groups with t or F tests. The problem with this is that you may miss some combination of variables to subtract (or add) whose combined effect (or non-effect) is hidden by the collinearity. With modern computing power it is feasible to fit all 2^10 = 1024 possible combinations of explanatory variables and choose the best model by one of a number of possible criteria, e.g. AIC, BIC, or predictive power (for example, ability to predict the values of a test subset of the data that you have separated from the set you use to fit your model). However, if you are going to be testing (implicitly or explicitly) 1024 models you will need to rethink your p-values from the classical approach - treat them with caution...
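For illustration, that exhaustive search over all 2^10 combinations is straightforward to script; here is a hedged sketch using AIC as the criterion (dat is a hypothetical data frame with the response y and ten numeric predictors):

preds   <- setdiff(names(dat), "y")
subsets <- unlist(lapply(seq_along(preds),
                         function(k) combn(preds, k, simplify = FALSE)),
                  recursive = FALSE)      # all 2^10 - 1 = 1023 non-empty predictor sets

aics <- sapply(subsets, function(vars)
  AIC(lm(reformulate(vars, response = "y"), data = dat)))

subsets[[which.min(aics)]]                # predictor set with the lowest AIC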
5,206
Choosing variables to include in a multiple linear regression model
If you are only interested in predictive performance, then it is probably better to use all of the features and use ridge regression to avoid over-fitting the training sample. This is essentially the advice given in the appendix of Miller's monograph on "Subset Selection in Regression", so it comes with a reasonable pedigree! The reason for this is that if you choose a subset based on a performance estimate computed from a fixed sample of data (e.g. AIC, BIC, cross-validation etc.), the selection criterion will have a finite variance, and so it is possible to over-fit the selection criterion itself. In other words, to begin with, as you minimise the selection criterion, generalisation performance will improve; however, there will come a point where the more you reduce the selection criterion, the worse generalisation becomes. If you are unlucky, you can easily end up with a regression model that performs worse than the one you started with (i.e. a model with all of the attributes). This is especially likely when the dataset is small (so the selection criterion has a high variance) and when there are many possible choices of model (e.g. choosing combinations of features). Regularisation seems to be less prone to over-fitting, as it involves only a scalar parameter that needs to be tuned, and this gives a more constrained view of the complexity of the model, i.e. fewer effective degrees of freedom with which to over-fit the selection criterion.
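A brief sketch of that alternative with glmnet (again, x, y and x_new are placeholder names): keep all features and tune only the ridge penalty by cross-validation.

library(glmnet)

set.seed(4)
cvfit_ridge <- cv.glmnet(x, y, alpha = 0)   # alpha = 0 -> ridge; lambda chosen by 10-fold CV
cvfit_ridge$lambda.min                      # the single tuned (scalar) regularisation parameter
predict(cvfit_ridge, newx = x_new, s = "lambda.min")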
5,207
Choosing variables to include in a multiple linear regression model
You can also use the step function, which performs stepwise selection based on the Akaike information criterion. Example below.
https://en.wikipedia.org/wiki/Akaike_information_criterion
StepModel = step(ClimateChangeModel)
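For context, a hedged sketch of the full workflow (ClimateChangeModel is assumed to be an ordinary lm fit; the data and variable names below are hypothetical):

ClimateChangeModel <- lm(Temp ~ ., data = climate)        # fit the full model first
StepModel <- step(ClimateChangeModel, direction = "both") # stepwise search minimizing AIC
summary(StepModel)                                        # the retained variables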
5,208
Choosing variables to include in a multiple linear regression model
Use the leaps library. When you plot the result, the y-axis shows adjusted R^2; look at where the boxes are black at the highest adjusted R^2. This shows the variables you should use for your multiple linear regression. Wine example below:

library(leaps)
regsubsets.out <- regsubsets(Price ~ Year + WinterRain + AGST + HarvestRain + Age + FrancePop,
                             data = wine,
                             nbest = 1,        # 1 best model for each number of predictors
                             nvmax = NULL,     # NULL for no limit on number of variables
                             force.in = NULL, force.out = NULL,
                             method = "exhaustive")
regsubsets.out
# When you plot, look for the black boxes at the highest adjusted R^2;
# in our case AGST + HarvestRain + WinterRain + Age, with Price as the dependent variable.
summary.out <- summary(regsubsets.out)
as.data.frame(summary.out$outmat)
plot(regsubsets.out, scale = "adjr2", main = "Adjusted R^2")
5,209
Choosing variables to include in a multiple linear regression model
Why not do a correlation analysis first, and then include in the regression only those variables that correlate with the DV?
5,210
Choosing variables to include in a multiple linear regression model
My advisor offered another possible way to go about this. Run all of your variables once, and then remove those that fail to meet some threshold (we set our threshold at p < .25). Continue iterating that way until all remaining variables fall below that .25 value, then report those values which are significant.
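A literal sketch of that procedure (dat and y are placeholder names; assumes numeric predictors so that each coefficient row corresponds to one variable):

vars <- setdiff(names(dat), "y")
repeat {
  fit  <- lm(reformulate(vars, response = "y"), data = dat)
  pval <- summary(fit)$coefficients[-1, "Pr(>|t|)"]  # p-values, intercept row dropped
  if (all(pval < 0.25)) break
  vars <- vars[pval < 0.25]                          # drop variables at or above the threshold
}
summary(fit)   # report the remaining coefficients and their significance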
5,211
PCA and the train/test split
For measuring the generalization error, you need to do the latter: a separate PCA for every training set (which would mean doing a separate PCA for every classifier and for every CV fold). You then apply the same transformation to the test set: i.e. you do not do a separate PCA on the test set! You subtract the mean (and if needed divide by the standard deviation) of the training set, as explained here: Zero-centering the testing set after PCA on the training set. Then you project the data onto the PCs of the training set.

You'll need to define an automatic criterion for the number of PCs to use. As it is just a first data-reduction step before the "actual" classification, using a few too many PCs will likely not hurt the performance. If you have an expectation from experience of how many PCs would be good, you can maybe just use that.

You can also test afterwards whether redoing the PCA for every surrogate model was necessary (repeating the analysis with only one PCA model). I think the result of this test is worth reporting. I once measured the bias of not repeating the PCA, and found that with my spectroscopic classification data, I detected only half of the generalization error rate when not redoing the PCA for every surrogate model. Also relevant: https://stats.stackexchange.com/a/240063/4598

That being said, you can build an additional PCA model of the whole data set for descriptive (e.g. visualization) purposes. Just make sure you keep the two approaches separate from each other.

I am still finding it difficult to get a feeling for how an initial PCA on the whole dataset would bias the results without seeing the class labels. But it does see the data. And if the between-class variance is large compared to the within-class variance, between-class variance will influence the PCA projection. Usually the PCA step is done because you need to stabilize the classification, that is, in a situation where additional cases do influence the model. If the between-class variance is small, this bias won't be much, but in that case neither would PCA help the classification: the PCA projection then cannot help emphasizing the separation between the classes.
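A minimal R sketch of this recipe for one train/test split (X, y, train_idx and the choice k = 10 are placeholders, not from the question):

X_train <- X[train_idx, ];  X_test <- X[-train_idx, ]

pca <- prcomp(X_train, center = TRUE, scale. = TRUE)  # PCA fitted on the training part only
k   <- 10                                             # number of PCs, fixed by an automatic rule

Z_train <- pca$x[, 1:k]                               # training scores
Z_test  <- predict(pca, newdata = X_test)[, 1:k]      # test data centered/scaled with TRAINING
                                                      # means/sds, then projected on training PCs

# classifier <- some_model(Z_train, y[train_idx])     # fit here, evaluate on Z_test, y[-train_idx]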
5,212
PCA and the train/test split
The answer to this question depends on your experimental design. PCA can be done on the whole data set so long as you don't need to build your model in advance of knowing the data you are trying to predict. If you have a dataset where you have a bunch of samples, some of which are known and some unknown, and you want to predict the unknowns, including the unknowns in the PCA will give you a richer view of data diversity and can help improve the performance of the model. Since PCA is unsupervised, it isn't "peeking", because you can do the same thing to the unknown samples as you can to the known. If, on the other hand, you have a data set where you have to build the model now and at some point in the future you will get new samples that you have to predict using that prebuilt model, you must do a separate PCA in each fold to be sure it will generalize. Since in this case we won't know what the new features might look like and we can't rebuild the model to account for the new features, doing PCA on the testing data would be "peeking". In this case, both the features and the outcomes for the unknown samples are not available when the model would be used in practice, so they should not be available when training the model.
5,213
PCA and the train/test split
Do the latter: PCA on the training set each time. In PCA, we learn the reduced matrix U, which gives us the projection Z_train = U x X_train. At test time, we use the same U learned in the training phase and compute the projection Z_test = U x X_test. So, essentially, we are projecting the test set onto the reduced feature space obtained during training. The underlying assumption is that the test and train sets should come from the same distribution, which explains the method above.
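In R's prcomp convention the samples are rows, so the learned matrix is the rotation and the projection is a matrix product after centering with the training means; a small sketch (X_train and X_test are hypothetical numeric matrices):

pca <- prcomp(X_train, center = TRUE)
U   <- pca$rotation                                  # columns are the principal directions

Z_train <- sweep(X_train, 2, pca$center) %*% U       # equals pca$x
Z_test  <- sweep(X_test,  2, pca$center) %*% U       # same U and same (training) center

max(abs(Z_train - pca$x))                            # ~ 0: reproduces prcomp's own scores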
5,214
How does the Adam method of stochastic gradient descent work?
The Adam paper says, "...many objective functions are composed of a sum of subfunctions evaluated at different subsamples of data; in this case optimization can be made more efficient by taking gradient steps w.r.t. individual subfunctions..." Here, they just mean that the objective function is a sum of errors over training examples, and training can be done on individual examples or minibatches. This is the same as in stochastic gradient descent (SGD), which is more efficient for large-scale problems than batch training because parameter updates are more frequent.

As for why Adam works, it uses a few tricks. One of these tricks is momentum, which can give faster convergence. Imagine an objective function that's shaped like a long, narrow canyon that gradually slopes toward a minimum. Say we want to minimize this function using gradient descent. If we start from some point on the canyon wall, the negative gradient will point in the direction of steepest descent, i.e. mostly toward the canyon floor. This is because the canyon walls are much steeper than the gradual slope of the canyon toward the minimum. If the learning rate (i.e. step size) is small, we could descend to the canyon floor, then follow it toward the minimum. But progress would be slow. We could increase the learning rate, but this wouldn't change the direction of the steps. In this case, we'd overshoot the canyon floor and end up on the opposite wall. We would then repeat this pattern, oscillating from wall to wall while making slow progress toward the minimum. Momentum can help in this situation. Momentum simply means that some fraction of the previous update is added to the current update, so that repeated updates in a particular direction compound; we build up momentum, moving faster and faster in that direction. In the case of the canyon, we'd build up momentum in the direction of the minimum, since all updates have a component in that direction. In contrast, moving back and forth across the canyon walls involves constantly reversing direction, so momentum would help to damp the oscillations in those directions.

Another trick that Adam uses is to adaptively select a separate learning rate for each parameter. Parameters that would ordinarily receive smaller or less frequent updates receive larger updates with Adam (the reverse is also true). This speeds learning in cases where the appropriate learning rates vary across parameters. For example, in deep networks, gradients can become small at early layers, and it makes sense to increase learning rates for the corresponding parameters. Another benefit of this approach is that, because learning rates are adjusted automatically, manual tuning becomes less important. Standard SGD requires careful tuning (and possibly online adjustment) of learning rates, but this is less true with Adam and related methods. It's still necessary to select hyperparameters, but performance is less sensitive to them than to SGD learning rates.

Related methods: Momentum is often used with standard SGD. An improved version is called Nesterov momentum or Nesterov accelerated gradient. Other methods that use automatically tuned learning rates for each parameter include Adagrad, RMSprop, and Adadelta. RMSprop and Adadelta solve a problem with Adagrad that could cause learning to stop. Adam is similar to RMSprop with momentum. Nadam modifies Adam to use Nesterov momentum instead of classical momentum.

References: Kingma and Ba (2014). Adam: A Method for Stochastic Optimization. Goodfellow et al. (2016). Deep Learning, chapter 8. Slides from Geoff Hinton's course. Dozat (2016). Incorporating Nesterov Momentum into Adam.
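To make the update concrete, here is a compact R sketch of the Adam rule in the form given by Kingma and Ba (2014); grad_fn and theta are placeholders for your gradient function and parameter vector.

adam <- function(grad_fn, theta, steps = 5000,
                 alpha = 0.001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8) {
  m <- v <- numeric(length(theta))         # first- and second-moment estimates
  for (t in seq_len(steps)) {
    g <- grad_fn(theta)                    # (stochastic) gradient at the current parameters
    m <- beta1 * m + (1 - beta1) * g       # momentum: exponential average of gradients
    v <- beta2 * v + (1 - beta2) * g^2     # per-parameter average of squared gradients
    m_hat <- m / (1 - beta1^t)             # bias corrections for the zero initialization
    v_hat <- v / (1 - beta2^t)
    theta <- theta - alpha * m_hat / (sqrt(v_hat) + eps)  # adaptive per-parameter step
  }
  theta
}

# "Canyon" example: f(x, y) = 100 * x^2 + y^2, steep in x, shallow in y
grad <- function(p) c(200 * p[1], 2 * p[2])
adam(grad, theta = c(1, 1))                # approaches c(0, 0)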
5,215
Variational inference versus MCMC: when to choose one over the other?
For a long answer, see Blei, Kucukelbir and McAuliffe here. This short answer draws heavily therefrom. MCMC is asymptotically exact; VI is not. In the limit, MCMC will exactly approximate the target distribution. VI comes without warranty. MCMC is computationally expensive. In general, VI is faster. Meaning, when we have computational time to kill and value precision of our estimates, MCMC wins. If we can tolerate sacrificing that for expediency—or we're working with data so large we have to make the tradeoff—VI is a natural choice. Or, as more eloquently and thoroughly described by the authors mentioned above: Thus, variational inference is suited to large data sets and scenarios where we want to quickly explore many models; MCMC is suited to smaller data sets and scenarios where we happily pay a heavier computational cost for more precise samples. For example, we might use MCMC in a setting where we spent 20 years collecting a small but expensive data set, where we are confident that our model is appropriate, and where we require precise inferences. We might use variational inference when fitting a probabilistic model of text to one billion text documents and where the inferences will be used to serve search results to a large population of users. In this scenario, we can use distributed computation and stochastic optimization to scale and speed up inference, and we can easily explore many different models of the data.
5,216
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably?
My understanding is that AIC, DIC, and WAIC are all estimating the same thing: the expected out-of-sample deviance associated with a model. This is also the same thing that cross-validation estimates. In Gelman et al. (2013), they say this explicitly: A natural way to estimate out-of-sample prediction error is cross-validation (see Vehtari and Lampinen, 2002, for a Bayesian perspective), but researchers have always sought alternative measures, as cross-validation requires repeated model fits and can run into trouble with sparse data. For practical reasons alone, there remains a place for simple bias corrections such as AIC (Akaike, 1973), DIC (Spiegelhalter, Best, Carlin, and van der Linde, 2002, van der Linde, 2005), and, more recently, WAIC (Watanabe, 2010), and all these can be viewed as approximations to different versions of cross-validation (Stone, 1977).

BIC estimates something different, which is related to minimum description length. Gelman et al. say: BIC and its variants differ from the other information criteria considered here in being motivated not by an estimation of predictive fit but by the goal of approximating the marginal probability density of the data, p(y), under the model, which can be used to estimate relative posterior probabilities in a setting of discrete model comparison. I don't know anything about the other information criteria you listed, unfortunately.

Can you use the AIC-like information criteria interchangeably? Opinions may differ, but given that AIC, DIC, WAIC, and cross-validation all estimate the same thing, then yes, they're more-or-less interchangeable. BIC is different, as noted above. I don't know about the others.

Why have more than one? AIC works well when you have a maximum likelihood estimate and flat priors, but doesn't really have anything to say about other scenarios. The penalty is also too small when the number of parameters approaches the number of data points. AICc over-corrects for this, which can be good or bad depending on your perspective. DIC uses a smaller penalty if parts of the model are heavily constrained by priors (e.g. in some multi-level models where variance components are estimated). This is good, since heavily constrained parameters don't really constitute a full degree of freedom. Unfortunately, the formulas usually used for DIC assume that the posterior is essentially Gaussian (i.e. that it is well-described by its mean), and so one can get strange results (e.g. negative penalties) in some situations. WAIC uses the whole posterior density more effectively than DIC does, so Gelman et al. prefer it although it can be a pain to calculate in some cases. Cross-validation does not rely on any particular formula, but it can be computationally prohibitive for many models.

In my view the decision about which one of the AIC-like criteria to use depends entirely on these sorts of practical issues, rather than a mathematical proof that one will do better than the other.

References: Gelman et al. Understanding predictive information criteria for Bayesian models. Available from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.295.3501&rep=rep1&type=pdf
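For the non-Bayesian end of this list, base R already computes AIC and BIC for fitted models; a tiny simulated example of discrete model comparison (all names hypothetical):

set.seed(5)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 0.5 * x1 + rnorm(n)   # x2 is irrelevant by construction

fit1 <- lm(y ~ x1)
fit2 <- lm(y ~ x1 + x2)

AIC(fit1, fit2)   # out-of-sample-deviance flavour; smaller is better
BIC(fit1, fit2)   # marginal-likelihood flavour; penalizes the extra parameter more heavily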
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably?
My understanding is that AIC, DIC, and WAIC are all estimating the same thing: the expected out-of-sample deviance associated with a model. This is also the same thing that cross-validation estimates
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably? My understanding is that AIC, DIC, and WAIC are all estimating the same thing: the expected out-of-sample deviance associated with a model. This is also the same thing that cross-validation estimates. In Gelman et al. (2013), they say this explicitly: A natural way to estimate out-of-sample prediction error is cross-validation (see Vehtari and Lampinen, 2002, for a Bayesian perspective), but researchers have always sought alternative measures, as cross-validation requires repeated model fits and can run into trouble with sparse data. For practical reasons alone, there remains a place for simple bias corrections such as AIC (Akaike, 1973), DIC (Spiegelhalter, Best, Carlin, and van der Linde, 2002, van der Linde, 2005), and, more recently, WAIC (Watanabe, 2010), and all these can be viewed as approximations to different versions of cross-validation (Stone, 1977). BIC estimates something different, which is related to minimum description length. Gelman et al. say: BIC and its variants differ from the other information criteria considered here in being motivated not by an estimation of predictive fit but by the goal of approximating the marginal probability density of the data, p(y), under the model, which can be used to estimate relative posterior probabilities in a setting of discrete model comparison. I don't know anything about the other information criteria you listed, unfortunately. Can you use the AIC-like information criteria interchangeably? Opinions may differ, but given that AIC, DIC, WAIC, and cross-validation all estimate the same thing, then yes, they're more-or-less interchangeable. BIC is different, as noted above. I don't know about the others. Why have more than one? AIC works well when you have a maximum likelihood estimate and flat priors, but doesn't really have anything to say about other scenarios. The penalty is also too small when the number of parameters approaches the number of data points. AICc over-corrects for this, which can be good or bad depending on your perspective. DIC uses a smaller penalty if parts of the model are heavily constrained by priors (e.g. in some multi-level models where variance components are estimated). This is good, since heavily constrained parameters don't really constitute a full degree of freedom. Unfortunately, the formulas usually used for DIC assume that the posterior is essentially Gaussian (i.e. that it is well-described by its mean), and so one can get strange results (e.g. negative penalties) in some situations. WAIC uses the whole posterior density more effectively than DIC does, so Gelman et al. prefer it although it can be a pain to calculate in some cases. Cross-validation does not rely on any particular formula, but it can be computationally prohibitive for many models. In my view the decision about which one of the AIC-like criteria to use depends entirely on these sorts of practical issues, rather than a mathematical proof that one will do better than the other. References: Gelman et al. Understanding predictive information criteria for Bayesian models. Available from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.295.3501&rep=rep1&type=pdf
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably? My understanding is that AIC, DIC, and WAIC are all estimating the same thing: the expected out-of-sample deviance associated with a model. This is also the same thing that cross-validation estimates
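As a concrete illustration of how these penalties differ in practice, here is a minimal sketch (not part of the answer above) that computes AIC, AICc and BIC by hand for Gaussian least-squares fits of increasing polynomial degree; the simulated data, the degree range and the helper name are all choices made up for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 40
    x = np.linspace(-2, 2, n)
    y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(scale=1.0, size=n)   # true model is quadratic

    def gaussian_information_criteria(y, yhat, n_coef):
        """AIC/AICc/BIC for a least-squares fit with Gaussian errors."""
        n = len(y)
        rss = np.sum((y - yhat) ** 2)
        sigma2 = rss / n                      # ML estimate of the error variance
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        k = n_coef + 1                        # +1 for the estimated variance
        aic = 2 * k - 2 * loglik
        aicc = aic + 2 * k * (k + 1) / (n - k - 1)
        bic = k * np.log(n) - 2 * loglik
        return aic, aicc, bic

    for degree in range(1, 8):
        coefs = np.polyfit(x, y, degree)
        yhat = np.polyval(coefs, x)
        aic, aicc, bic = gaussian_information_criteria(y, yhat, n_coef=degree + 1)
        print(f"degree {degree}: AIC={aic:6.1f}  AICc={aicc:6.1f}  BIC={bic:6.1f}")

Because BIC's penalty grows with log(n) while AIC's does not, BIC tends to prefer the smaller model as n grows, which reflects the "different target" point made in the answer.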
5,217
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably?
"Interchangeably" is too strong a word. All of them are criteria that seek to compare models and find a "best" model, but each defines "best" differently and may identify different models as "best".
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably?
"Interchangeably" is too strong a word. All of them are criteria that seek to compare models and find a "best" model, but each defines "best" differently and may identify different models as "best".
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably? "Interchangeably" is too strong a word. All of them are criteria that seek to compare models and find a "best" model, but each defines "best" differently and may identify different models as "best".
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably? "Interchangeably" is too strong a word. All of them are criteria that seek to compare models and find a "best" model, but each defines "best" differently and may identify different models as "best".
5,218
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably?
“Propose a referendum”. Just to vote! ;-) I liked the CAIC (Bozdogan, 1987) and BIC purely from my personal practice, because these criteria give a serious penalty for complexity, so we get more parsimony, but I always displayed the list of good models, up to a delta of 4-6-8 (instead of 2). At the step of investigating parameters (because we have a “good stretching of candidate models”), the MM averaging (B&A) often changes almost nothing. I am slightly skeptical of both the classical AIC and AICc (H&T, popularized by B&A), since they often give a very "thick layer of the cream". ;-)
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably?
“Propose a referendum”. Just to vote! ;-) I liked the CAIC (Bozdogan, 1987) and BIC purely from my personal practice, because these criteria give a serious penalty for complexity, so we get more parsimony
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably? “Propose a referendum”. Just to vote! ;-) I liked the CAIC (Bozdogan, 1987) and BIC purely from my personal practice, because these criteria give a serious penalty for complexity, so we get more parsimony, but I always displayed the list of good models, up to a delta of 4-6-8 (instead of 2). At the step of investigating parameters (because we have a “good stretching of candidate models”), the MM averaging (B&A) often changes almost nothing. I am slightly skeptical of both the classical AIC and AICc (H&T, popularized by B&A), since they often give a very "thick layer of the cream". ;-)
AIC,BIC,CIC,DIC,EIC,FIC,GIC,HIC,IIC --- Can I use them interchangeably? “Propose a referendum”. Just to vote! ;-) I liked the CAIC (Bozdogan, 1987) and BIC purely from my personal practice, because these criteria give a serious penalty for complexity, so we get more parsimony
5,219
Are splines overfitting the data?
Overfitting comes from allowing too large a class of models. This gets a bit tricky with models with continuous parameters (like splines and polynomials), but if you discretize the parameters into some number of distinct values, you'll see that increasing the number of knots/coefficients will increase the number of available models exponentially. For every dataset there is a spline and a polynomial that fits precisely, so long as you allow enough coefficients/knots. It may be that a spline with three knots overfits more than a polynomial with three coefficients, but that's hardly a fair comparison. If you have a low number of parameters, and a large dataset, you can be reasonably sure you're not overfitting. If you want to try higher numbers of parameters you can try cross validating within your training set to find the best number, or you can use a criterion like Minimum Description Length. EDIT: As requested in the comments, an example of how one would apply MDL. First you have to deal with the fact that your data is continuous, so it can't be represented in a finite code. For the sake of simplicity we'll segment the data space into boxes of side $\epsilon$ and instead of describing the data points, we'll describe the boxes that the data falls into. This means we lose some accuracy, but we can make $\epsilon$ arbitrarily small, so it doesn't matter much. Now, the task is to describe the dataset as succinctly as possible with the help of some polynomial. First we describe the polynomial. If it's an n-th order polynomial, we just need to store (n+1) coefficients. Again, we need to discretize these values. After that we need to store first the value $n$ in prefix-free coding (so we know when to stop reading) and then the $n+1$ parameter values. With this information a receiver of our code could restore the polynomial. Then we add the rest of the information required to store the dataset. For each datapoint we give the x-value, and then how many boxes up or down the data point lies off the polynomial. Both values we store in prefix-free coding so that short values require few bits, and we won't need delimiters between points. (You can shorten the code for the x-values by only storing the increments between values) The fundamental point here is the tradeoff. If I choose a 0-order polynomial (like f(x) = 3.4), then the model is very simple to store, but for the y-values, I'm essentially storing the distance to the mean. More coefficients give me a better fitting polynomial (and thus shorter codes for the y values), but I have to spend more bits describing the model. The model that gives you the shortest code for your data is the best fit by the MDL criterion. (Note that this is known as 'crude MDL', and there are some refinements you can make to solve various technical issues).
Are splines overfitting the data?
Overfitting comes from allowing too large a class of models. This gets a bit tricky with models with continuous parameters (like splines and polynomials), but if you discretize the parameters into som
Are splines overfitting the data? Overfitting comes from allowing too large a class of models. This gets a bit tricky with models with continuous parameters (like splines and polynomials), but if you discretize the parameters into some number of distinct values, you'll see that increasing the number of knots/coefficients will increase the number of available models exponentially. For every dataset there is a spline and a polynomial that fits precisely, so long as you allow enough coefficients/knots. It may be that a spline with three knots overfits more than a polynomial with three coefficients, but that's hardly a fair comparison. If you have a low number of parameters, and a large dataset, you can be reasonably sure you're not overfitting. If you want to try higher numbers of parameters you can try cross validating within your training set to find the best number, or you can use a criterion like Minimum Description Length. EDIT: As requested in the comments, an example of how one would apply MDL. First you have to deal with the fact that your data is continuous, so it can't be represented in a finite code. For the sake of simplicity we'll segment the data space into boxes of side $\epsilon$ and instead of describing the data points, we'll describe the boxes that the data falls into. This means we lose some accuracy, but we can make $\epsilon$ arbitrarily small, so it doesn't matter much. Now, the task is to describe the dataset as succinctly as possible with the help of some polynomial. First we describe the polynomial. If it's an n-th order polynomial, we just need to store (n+1) coefficients. Again, we need to discretize these values. After that we need to store first the value $n$ in prefix-free coding (so we know when to stop reading) and then the $n+1$ parameter values. With this information a receiver of our code could restore the polynomial. Then we add the rest of the information required to store the dataset. For each datapoint we give the x-value, and then how many boxes up or down the data point lies off the polynomial. Both values we store in prefix-free coding so that short values require few bits, and we won't need delimiters between points. (You can shorten the code for the x-values by only storing the increments between values) The fundamental point here is the tradeoff. If I choose a 0-order polynomial (like f(x) = 3.4), then the model is very simple to store, but for the y-values, I'm essentially storing the distance to the mean. More coefficients give me a better fitting polynomial (and thus shorter codes for the y values), but I have to spend more bits describing the model. The model that gives you the shortest code for your data is the best fit by the MDL criterion. (Note that this is known as 'crude MDL', and there are some refinements you can make to solve various technical issues).
Are splines overfitting the data? Overfitting comes from allowing too large a class of models. This gets a bit tricky with models with continuous parameters (like splines and polynomials), but if you discretize the parameters into som
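The "crude MDL" recipe in that answer can be turned into a rough, illustrative calculation. The sketch below is an assumption-laden toy (the box size, the 32-bit coefficient cost and the simple prefix-free length formula are all choices made for the example, not part of the answer): it quantizes the residuals into boxes of side epsilon and picks the polynomial degree with the smallest total description length.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 60
    x = np.sort(rng.uniform(-3, 3, n))
    y = np.sin(x) + rng.normal(scale=0.2, size=n)

    eps = 0.05            # side of the quantization boxes (a modelling choice)
    coef_bits = 32        # assumed cost of storing one discretized coefficient

    def prefix_free_bits(k):
        # rough Elias-gamma-style length for a non-negative integer, plus a sign bit
        return 2 * np.log2(k + 1) + 2

    def description_length(degree):
        coefs = np.polyfit(x, y, degree)
        resid = y - np.polyval(coefs, x)
        boxes = np.round(np.abs(resid) / eps)          # how many boxes off the polynomial
        model_bits = prefix_free_bits(degree) + (degree + 1) * coef_bits
        data_bits = np.sum(prefix_free_bits(boxes))
        # the cost of the x-values is the same for every degree, so it is omitted
        return model_bits + data_bits

    lengths = {d: description_length(d) for d in range(0, 10)}
    for d, L in lengths.items():
        print(f"degree {d}: {L:8.1f} bits")
    print("crude-MDL choice:", min(lengths, key=lengths.get))

The total length first drops as extra coefficients shrink the residual code and then rises again once the model cost dominates, which is exactly the tradeoff described above.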
5,220
Are splines overfitting the data?
Statisticians have been arguing about polynomial fitting for ages, and in my experience, it comes down to this: Splines are basically a series of different equations pieced together, which tends to increase the accuracy of interpolated values at the cost of the ability to project outside the data range. This is fine if you know your data is pure and coming from a consistent source and if you are trying to describe the likelihood of different values' presence within your range of values. However, we usually don't learn as much about the theoretical underpinnings driving our data, since a new spline starts when the old spline stops accurately describing the data. This makes prediction of values outside our data almost worthless. Now, splines are not unique in this respect. Polynomial functions actually suffer from the same problem if we are just fitting the data and not using a theoretical framework for choosing the variables. Those who have a well-formed theory driving which variables to allow to vary and by how much will be more trusting of a complex polynomial function's ability to extrapolate predictions outside the data. Many statisticians, though, are working with data without help from a pre-established theoretical framework, and this pushes some people towards simple polynomials. They reason that a less flexible function that fits the data is more likely to accurately predict values outside the data, because the function is less likely to be swayed by anomalies within the data. While I've had conversations about this with people who prefer simple polynomials, I've never gotten the feeling of an anti-spline group. It feels more like simple polynomials make some statisticians feel more comfortable about avoiding overfitting. Disclaimer Personally, I don't tend to use splines or simple polynomials with most of my data, because I work in a field with many pre-established theoretical frameworks. Also, I have usually observed the collection of the data and can get a decent grasp on what was driving the outcomes. In that case, I'm building more of a logical algorithm and testing the algorithm's fitness, rather than testing the fitness of a polynomial function. You can add this grain of salt to my answer.
Are splines overfitting the data?
Statisticians have been arguing about polynomial fitting for ages, and in my experience, it comes down to this: Splines are basically a series of different equations pieced together, which tends to in
Are splines overfitting the data? Statisticians have been arguing about polynomial fitting for ages, and in my experience, it comes down to this: Splines are basically a series of different equations pieced together, which tends to increase the accuracy of interpolated values at the cost of the ability to project outside the data range. This is fine if you know your data is pure and coming from a consistent source and if you are trying to describe the likelihood of different values' presence within your range of values. However, we usually don't learn as much about the theoretical underpinnings driving our data, since a new spline starts when the old spline stops accurately describing the data. This makes prediction of values outside our data almost worthless. Now, splines are not unique in this respect. Polynomial functions actually suffer from the same problem if we are just fitting the data and not using a theoretical framework for choosing the variables. Those who have a well-formed theory driving which variables to allow to vary and by how much will be more trusting of a complex polynomial function's ability to extrapolate predictions outside the data. Many statisticians, though, are working with data without help from a pre-established theoretical framework, and this pushes some people towards simple polynomials. They reason that a less flexible function that fits the data is more likely to accurately predict values outside the data, because the function is less likely to be swayed by anomalies within the data. While I've had conversations about this with people who prefer simple polynomials, I've never gotten the feeling of an anti-spline group. It feels more like simple polynomials make some statisticians feel more comfortable about avoiding overfitting. Disclaimer Personally, I don't tend to use splines or simple polynomials with most of my data, because I work in a field with many pre-established theoretical frameworks. Also, I have usually observed the collection of the data and can get a decent grasp on what was driving the outcomes. In that case, I'm building more of a logical algorithm and testing the algorithm's fitness, rather than testing the fitness of a polynomial function. You can add this grain of salt to my answer.
Are splines overfitting the data? Statisticians have been arguing about polynomial fitting for ages, and in my experience, it comes down to this: Splines are basically a series of different equations pieced together, which tends to in
5,221
Are splines overfitting the data?
The choice between splines and polynomial interpolation (either Newton or Lagrange, the deterministic ones) is settled once the data get really huge: splines are more flexible ("using many polynomials in a piece-wise function rather than defining one overall polynomial")... And the problem of overfitting really has other causes (see the marked answer for the statistical view, or here for the ML view); you can create your own ML solution or neural network with Keras or TensorFlow.
Are splines overfitting the data?
The choice between splines and polynomial interpolation (either Newton or Lagrange, the deterministic ones) is settled once the data get really huge: splines are more flexible ("using many polynomials in a piece-wise function
Are splines overfitting the data? The choice between splines and polynomial interpolation (either Newton or Lagrange, the deterministic ones) is settled once the data get really huge: splines are more flexible ("using many polynomials in a piece-wise function rather than defining one overall polynomial")... And the problem of overfitting really has other causes (see the marked answer for the statistical view, or here for the ML view); you can create your own ML solution or neural network with Keras or TensorFlow.
Are splines overfitting the data? The choice between splines and polynomial interpolation (either Newton or Lagrange, the deterministic ones) is settled once the data get really huge: splines are more flexible ("using many polynomials in a piece-wise function
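To make the flexibility point concrete, here is a small sketch (my own illustration, not taken from the answer) that interpolates the classic Runge function with one global polynomial versus a piecewise cubic spline through the same equally spaced nodes; the function and the node count are arbitrary choices.

    import numpy as np
    from scipy.interpolate import CubicSpline

    f = lambda x: 1.0 / (1.0 + 25.0 * x**2)    # Runge's function
    nodes = np.linspace(-1, 1, 11)
    dense = np.linspace(-1, 1, 1001)

    # one global degree-10 interpolating polynomial through the 11 nodes
    poly = np.polyfit(nodes, f(nodes), deg=10)
    poly_err = np.max(np.abs(np.polyval(poly, dense) - f(dense)))

    # piecewise cubic spline through the same nodes
    spline = CubicSpline(nodes, f(nodes))
    spline_err = np.max(np.abs(spline(dense) - f(dense)))

    print(f"max error, global polynomial: {poly_err:.3f}")   # oscillates badly near the ends
    print(f"max error, cubic spline:      {spline_err:.3f}") # stays small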
5,222
Is it possible to interpret the bootstrap from a Bayesian perspective?
Section 8.4 of The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is "Relationship Between the Bootstrap and Bayesian Inference." That might be just what you are looking for. I believe that this book is freely available through a Stanford website, although I don't have the link on hand. Edit: Here is a link to the book, which the authors have made freely available online: http://www-stat.stanford.edu/~tibs/ElemStatLearn/ On page 272, the authors write: In this sense, the bootstrap distribution represents an (approximate) nonparametric, noninformative posterior distribution for our parameter. But this bootstrap distribution is obtained painlessly — without having to formally specify a prior and without having to sample from the posterior distribution. Hence we might think of the bootstrap distribution as a “poor man’s” Bayes posterior. By perturbing the data, the bootstrap approximates the Bayesian effect of perturbing the parameters, and is typically much simpler to carry out. One more piece of the puzzle is found in this cross validated question which mentions the Dvoretzky–Kiefer–Wolfowitz inequality that "shows [...] that the empirical distribution function converges uniformly to the true distribution function exponentially fast in probability." So all in all the non-parametric bootstrap could be seen as an asymptotic method that produces "an (approximate) nonparametric, noninformative posterior distribution for our parameter" and where this approximation gets better "exponentially fast" as the number of samples increases.
Is it possible to interpret the bootstrap from a Bayesian perspective?
Section 8.4 of The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is "Relationship Between the Bootstrap and Bayesian Inference." That might be just what you are looking for. I b
Is it possible to interpret the bootstrap from a Bayesian perspective? Section 8.4 of The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is "Relationship Between the Bootstrap and Bayesian Inference." That might be just what you are looking for. I believe that this book is freely available through a Stanford website, although I don't have the link on hand. Edit: Here is a link to the book, which the authors have made freely available online: http://www-stat.stanford.edu/~tibs/ElemStatLearn/ On page 272, the authors write: In this sense, the bootstrap distribution represents an (approximate) nonparametric, noninformative posterior distribution for our parameter. But this bootstrap distribution is obtained painlessly — without having to formally specify a prior and without having to sample from the posterior distribution. Hence we might think of the bootstrap distribution as a “poor man’s” Bayes posterior. By perturbing the data, the bootstrap approximates the Bayesian effect of perturbing the parameters, and is typically much simpler to carry out. One more piece of the puzzle is found in this cross validated question which mentions the Dvoretzky–Kiefer–Wolfowitz inequality that "shows [...] that the empirical distribution function converges uniformly to the true distribution function exponentially fast in probability." So all in all the non-parametric bootstrap could be seen as an asymptotic method that produces "an (approximate) nonparametric, noninformative posterior distribution for our parameter" and where this approximation gets better "exponentially fast" as the number of samples increases.
Is it possible to interpret the bootstrap from a Bayesian perspective? Section 8.4 of The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is "Relationship Between the Bootstrap and Bayesian Inference." That might be just what you are looking for. I b
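The "poor man's Bayes posterior" idea is easy to check numerically. Below is a minimal sketch (my own illustration; the data are simulated and the sample size, prior and number of draws are arbitrary choices) comparing the bootstrap distribution of a sample mean with the posterior for the mean under the standard noninformative prior p(mu, sigma^2) proportional to 1/sigma^2, for which the posterior of mu is a shifted, scaled Student-t distribution.

    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.normal(loc=5.0, scale=2.0, size=30)
    n, xbar, s = len(data), data.mean(), data.std(ddof=1)

    # nonparametric bootstrap distribution of the mean
    boot_means = np.array([rng.choice(data, size=n, replace=True).mean()
                           for _ in range(10000)])

    # posterior draws of mu under the noninformative prior (a Student-t result)
    post_means = xbar + (s / np.sqrt(n)) * rng.standard_t(df=n - 1, size=10000)

    for name, draws in [("bootstrap", boot_means), ("posterior", post_means)]:
        lo, hi = np.percentile(draws, [2.5, 97.5])
        print(f"{name:9s}: mean={draws.mean():.3f}  sd={draws.std():.3f}  95% interval=({lo:.3f}, {hi:.3f})")

The two sets of draws typically agree closely, which is the sense in which the bootstrap distribution acts as an approximate noninformative posterior.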
5,223
Is it possible to interpret the bootstrap from a Bayesian perspective?
This is the latest paper I've seen on the subject: @article{efr13bay, author={Efron, Bradley}, title={Bayesian inference and the parametric bootstrap}, journal={Annals of Applied Statistics}, volume=6, number=4, pages={1971-1997}, year=2012, doi={10.1214/12-AOAS571}, abstract={Summary: The parametric bootstrap can be used for the efficient computation of Bayes posterior distributions. Importance sampling formulas take on an easy form relating to the deviance in exponential families and are particularly simple starting from Jeffreys invariant prior. Because of the i.i.d. nature of bootstrap sampling, familiar formulas describe the computational accuracy of the Bayes estimates. Besides computational methods, the theory provides a connection between Bayesian and frequentist analysis. Efficient algorithms for the frequentist accuracy of Bayesian inferences are developed and demonstrated in a model selection example.}, keywords={Jeffreys prior; exponential families; deviance; generalized linear models}, classmath={*62F15 (Bayesian inference) 62F40 (Resampling methods) 62J12 (Generalized linear models) 65C60 (Computational problems in statistics)}}
Is it possible to interpret the bootstrap from a Bayesian perspective?
This is the latest paper I've seen on the subject: @article{efr13bay, author={Efron, Bradley}, title={Bayesian inference and the parametric bootstrap}, journal={Annals of Applied Statistics}, volume=6
Is it possible to interpret the bootstrap from a Bayesian perspective? This is the latest paper I've seen on the subject: @article{efr13bay, author={Efron, Bradley}, title={Bayesian inference and the parametric bootstrap}, journal={Annals of Applied Statistics}, volume=6, number=4, pages={1971-1997}, year=2012, doi={10.1214/12-AOAS571}, abstract={Summary: The parametric bootstrap can be used for the efficient computation of Bayes posterior distributions. Importance sampling formulas take on an easy form relating to the deviance in exponential families and are particularly simple starting from Jeffreys invariant prior. Because of the i.i.d. nature of bootstrap sampling, familiar formulas describe the computational accuracy of the Bayes estimates. Besides computational methods, the theory provides a connection between Bayesian and frequentist analysis. Efficient algorithms for the frequentist accuracy of Bayesian inferences are developed and demonstrated in a model selection example.}, keywords={Jeffreys prior; exponential families; deviance; generalized linear models}, classmath={*62F15 (Bayesian inference) 62F40 (Resampling methods) 62J12 (Generalized linear models) 65C60 (Computational problems in statistics)}}
Is it possible to interpret the bootstrap from a Bayesian perspective? This is the latest paper I've seen on the subject: @article{efr13bay, author={Efron, Bradley}, title={Bayesian inference and the parametric bootstrap}, journal={Annals of Applied Statistics}, volume=6
5,224
Is it possible to interpret the bootstrap from a Bayesian perspective?
I too was seduced by both bootstrapping and Bayes' theorem, but I couldn't make much sense of the justifications of bootstrapping until I looked at it from a Bayesian perspective. Then - as I explain below - the bootstrap distribution can be seen as a Bayesian posterior distribution, which makes the (a?) rationale behind bootstrapping obvious, and also has the advantage of clarifying the assumptions made. There is more detail of the argument below, and the assumptions made, in https://arxiv.org/abs/1803.06214 (pages 22-26). As an example, which is set up on the spreadsheet at http://woodm.myweb.port.ac.uk/SL/resample.xlsx (click on the bootstrap tab at the bottom of the screen), suppose we've got a sample of 9 measurements with a mean of 60. When I used the spreadsheet to produce 1000 resamples with replacement from this sample and rounded the means off to the nearest even number, 82 of these means were 54. The idea of bootstrapping is that we use the sample as a "pretend" population to see how variable the means of samples of 9 are likely to be, so this suggests that the probability of a sample mean being 6 below the population mean (in this case the pretend population based on the sample with a mean of 60) is 8.2%. And we can come to a similar conclusion about the other bars in the resampling histogram. Now let's imagine that the truth is that the mean of the real population is 66. If this is so our estimate of the probability of the sample mean being 60 (i.e. the Data) is 8.2% (using the conclusion in the paragraph above remembering that 60 is 6 below the hypothesised population mean of 66). Let's write this as P(Data given Mean=66) = 8.2% and this probability corresponds to an x value of 54 on the resampling distribution. The same sort of argument applies to each possible population mean from 0, 2, 4 ... 100. In each case the probability comes from the resampling distribution - but this distribution is reflected about the mean of 60. Now let's apply Bayes' theorem. The measurement in question can only take values between 0 and 100, so rounding off to the nearest even number the possibilities for the population mean are 0, 2, 4, 6, ....100. If we assume that the prior distribution is flat, each of these has a prior probability of 2% (to 1 dp), and Bayes' theorem tells us that P(PopMean=66 given Data) = 8.2%*2%/P(Data) where P(Data) = P(Data given PopMean=0)*2% + P(Data given PopMean=2)*2% + ... + P(Data given PopMean=100)*2%. We can now cancel the 2% and remember that the sum of the probabilities must be 1 since the probabilities are simply those from the resampling distribution. Which leaves us with the conclusion that P(PopMean=66 given Data) = 8.2%. Remembering that 8.2% is the probability from the resampling distribution corresponding to 54 (instead of 66), the posterior distribution is simply the resampling distribution reflected about the sample mean (60). Further, if the resampling distribution is symmetrical in the sense that asymmetries are random - as it is in this and many other cases - we can take the resampling distribution as being identical to the posterior probability distribution. This argument makes various assumptions, the main one being that the prior distribution is uniform. These are spelled out in more detail in the article cited above.
Is it possible to interpret the bootstrap from a Bayesian perspective?
I too was seduced by both bootstrapping and Bayes' theorem, but I couldn't make much sense of the justifications of bootstrapping until I looked at it from a Bayesian perspective. Then - as I explain
Is it possible to interpret the bootstrap from a Bayesian perspective? I too was seduced by both bootstrapping and Bayes' theorem, but I couldn't make much sense of the justifications of bootstrapping until I looked at it from a Bayesian perspective. Then - as I explain below - the bootstrap distribution can be seen as a Bayesian posterior distribution, which makes the (a?) rationale behind bootstrapping obvious, and also has the advantage of clarifying the assumptions made. There is more detail of the argument below, and the assumptions made, in https://arxiv.org/abs/1803.06214 (pages 22-26). As an example, which is set up on the spreadsheet at http://woodm.myweb.port.ac.uk/SL/resample.xlsx (click on the bootstrap tab at the bottom of the screen), suppose we've got a sample of 9 measurements with a mean of 60. When I used the spreadsheet to produce 1000 resamples with replacement from this sample and rounded the means off to the nearest even number, 82 of these means were 54. The idea of bootstrapping is that we use the sample as a "pretend" population to see how variable the means of samples of 9 are likely to be, so this suggests that the probability of a sample mean being 6 below the population mean (in this case the pretend population based on the sample with a mean of 60) is 8.2%. And we can come to a similar conclusion about the other bars in the resampling histogram. Now let's imagine that the truth is that the mean of the real population is 66. If this is so our estimate of the probability of the sample mean being 60 (i.e. the Data) is 8.2% (using the conclusion in the paragraph above remembering that 60 is 6 below the hypothesised population mean of 66). Let's write this as P(Data given Mean=66) = 8.2% and this probability corresponds to an x value of 54 on the resampling distribution. The same sort of argument applies to each possible population mean from 0, 2, 4 ... 100. In each case the probability comes from the resampling distribution - but this distribution is reflected about the mean of 60. Now let's apply Bayes' theorem. The measurement in question can only take values between 0 and 100, so rounding off to the nearest even number the possibilities for the population mean are 0, 2, 4, 6, ....100. If we assume that the prior distribution is flat, each of these has a prior probability of 2% (to 1 dp), and Bayes' theorem tells us that P(PopMean=66 given Data) = 8.2%*2%/P(Data) where P(Data) = P(Data given PopMean=0)*2% + P(Data given PopMean=2)*2% + ... + P(Data given PopMean=100)*2%. We can now cancel the 2% and remember that the sum of the probabilities must be 1 since the probabilities are simply those from the resampling distribution. Which leaves us with the conclusion that P(PopMean=66 given Data) = 8.2%. Remembering that 8.2% is the probability from the resampling distribution corresponding to 54 (instead of 66), the posterior distribution is simply the resampling distribution reflected about the sample mean (60). Further, if the resampling distribution is symmetrical in the sense that asymmetries are random - as it is in this and many other cases - we can take the resampling distribution as being identical to the posterior probability distribution. This argument makes various assumptions, the main one being that the prior distribution is uniform. These are spelled out in more detail in the article cited above.
Is it possible to interpret the bootstrap from a Bayesian perspective? I too was seduced by both bootstrapping and Bayes' theorem, but I couldn't make much sense of the justifications of bootstrapping until I looked at it from a Bayesian perspective. Then - as I explain
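For readers who want to see this without the spreadsheet, here is a small sketch that repeats the same kind of experiment in Python. The nine measurements are invented for the illustration (the actual spreadsheet values are not given in the answer), so the exact percentages will not match the 8.2% quoted above; the point is only the mechanics: resample, round the means to the nearest even number, then read the (flat-prior, discretized) posterior off the resampling distribution reflected about the sample mean.

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(3)
    sample = np.array([52, 55, 58, 59, 60, 61, 63, 65, 67])   # hypothetical 9 measurements, mean = 60
    xbar = sample.mean()

    boot_means = np.array([rng.choice(sample, size=len(sample), replace=True).mean()
                           for _ in range(1000)])
    rounded = 2 * np.round(boot_means / 2)                     # round to the nearest even number
    counts = Counter(rounded)

    # proportion of resample means at each value, e.g. how often the mean came out 6 below 60
    for value in sorted(counts):
        print(f"resample mean {value:4.0f}: {counts[value] / 1000:.1%}")

    # reflect about the sample mean: P(PopMean = m | Data) is read off at 2*xbar - m
    for m in [54, 60, 66]:
        print(f"approx posterior P(PopMean={m} | Data) = {counts.get(2 * xbar - m, 0) / 1000:.1%}")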
5,225
Area under Precision-Recall Curve (AUC of PR-curve) and Average Precision (AP)
Short answer is: YES. Average Precision is a single number used to summarise a Precision-Recall curve: $\text{AP} = \int_0^1 p(r)\,dr$, the area under precision as a function of recall. You can approximate the integral (area under the curve) with the finite sum $\text{AP} = \sum_n (R_n - R_{n-1})\,P_n$, where $P_n$ and $R_n$ are the precision and recall at the $n$-th threshold. Please take a look at this link for a good explanation.
Area under Precision-Recall Curve (AUC of PR-curve) and Average Precision (AP)
Short answer is: YES. Average Precision is a single number used to summarise a Precision-Recall curve: $\text{AP} = \int_0^1 p(r)\,dr$, the area under precision as a function of recall. You can approximate the integral
Area under Precision-Recall Curve (AUC of PR-curve) and Average Precision (AP) Short answer is: YES. Average Precision is a single number used to summarise a Precision-Recall curve: $\text{AP} = \int_0^1 p(r)\,dr$, the area under precision as a function of recall. You can approximate the integral (area under the curve) with the finite sum $\text{AP} = \sum_n (R_n - R_{n-1})\,P_n$, where $P_n$ and $R_n$ are the precision and recall at the $n$-th threshold. Please take a look at this link for a good explanation.
Area under Precision-Recall Curve (AUC of PR-curve) and Average Precision (AP) Short answer is: YES. Average Precision is a single number used to summarise a Precision-Recall curve: $\text{AP} = \int_0^1 p(r)\,dr$, the area under precision as a function of recall. You can approximate the integral
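Here is a minimal numpy sketch (my own illustration; the scores and labels are made up) of that finite-sum approximation, next to a trapezoidal estimate of the area under the same precision-recall points.

    import numpy as np

    scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])
    labels = np.array([1,   1,   0,   1,   0,    1,   0,   0  ])   # ground truth

    order = np.argsort(-scores)           # rank predictions by decreasing score
    hits = labels[order]
    tp = np.cumsum(hits)
    precision = tp / np.arange(1, len(hits) + 1)
    recall = tp / hits.sum()

    # step-wise Average Precision: precision weighted by the increase in recall
    ap = np.sum(np.diff(np.concatenate(([0.0], recall))) * precision)

    # trapezoidal area under the same precision-recall points
    area = np.trapz(precision, recall)

    print(f"AP (finite sum) = {ap:.3f}")
    print(f"AUC (trapezoid) = {area:.3f}")   # close to AP; differs only by the interpolation rule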
5,226
Area under Precision-Recall Curve (AUC of PR-curve) and Average Precision (AP)
The average_precision_score function expects confidence scores or probabilities as its second parameter, so you should use it as below: average_precision_score(y_test, clf.predict_proba(X_test)[:,1]) and then it gives the same result as the auc function.
Area under Precision-Recall Curve (AUC of PR-curve) and Average Precision (AP)
The average_precision_score function expects confidence scores or probabilities as its second parameter, so you should use it as below: average_precision_score(y_test, clf.predict_proba(X_test)[:,1])
Area under Precision-Recall Curve (AUC of PR-curve) and Average Precision (AP) The average_precision_score function expects confidence scores or probabilities as its second parameter, so you should use it as below: average_precision_score(y_test, clf.predict_proba(X_test)[:,1]) and then it gives the same result as the auc function.
Area under Precision-Recall Curve (AUC of PR-curve) and Average Precision (AP) The average_precision_score function expects confidence scores or probabilities as its second parameter, so you should use it as below: average_precision_score(y_test, clf.predict_proba(X_test)[:,1])
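A self-contained version of that usage might look like the sketch below (the dataset, classifier and split are arbitrary choices for the illustration). Note that average_precision_score uses the step-wise sum while auc on the precision-recall curve uses trapezoidal interpolation, so the two numbers are close but not always identical.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import average_precision_score, precision_recall_curve, auc
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    probs = clf.predict_proba(X_test)[:, 1]          # scores, not hard class labels

    ap = average_precision_score(y_test, probs)
    precision, recall, _ = precision_recall_curve(y_test, probs)
    pr_auc = auc(recall, precision)

    print(f"average precision:   {ap:.3f}")
    print(f"area under PR curve: {pr_auc:.3f}")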
5,227
How are kernels applied to feature maps to produce other feature maps?
The kernels are 3-dimensional, where width and height can be chosen, while the depth is equal to the number of maps in the input layer - in general. They are certainly not 2-dimensional and replicated across the input feature maps at the same 2D location! That would mean a kernel wouldn't be able to distinguish between its input features at a given location, since it would use one and the same weight across the input feature maps!
How are kernels applied to feature maps to produce other feature maps?
The kernels are 3-dimensional, where width and height can be chosen, while the depth is equal to the number of maps in the input layer - in general. They are certainly not 2-dimensional and replicate
How are kernels applied to feature maps to produce other feature maps? The kernels are 3-dimensional, where width and height can be chosen, while the depth is equal to the number of maps in the input layer - in general. They are certainly not 2-dimensional and replicated across the input feature maps at the same 2D location! That would mean a kernel wouldn't be able to distinguish between its input features at a given location, since it would use one and the same weight across the input feature maps!
How are kernels applied to feature maps to produce other feature maps? The kernels are 3-dimensional, where width and height can be chosen, while the depth is equal to the number of maps in the input layer - in general. They are certainly not 2-dimensional and replicate
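As a concrete check of the "depth equals the number of input maps" point, here is a tiny numpy sketch (my own illustration, written as plain cross-correlation with no padding or stride) in which each output feature map is produced by a kernel of shape (C_in, kH, kW) that sums over all input feature maps.

    import numpy as np

    def conv_layer(feature_maps, kernels):
        """feature_maps: (C_in, H, W); kernels: (C_out, C_in, kH, kW) -> (C_out, H-kH+1, W-kW+1)."""
        c_in, H, W = feature_maps.shape
        c_out, c_in2, kh, kw = kernels.shape
        assert c_in == c_in2, "kernel depth must equal the number of input feature maps"
        out = np.zeros((c_out, H - kh + 1, W - kw + 1))
        for o in range(c_out):
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    patch = feature_maps[:, i:i + kh, j:j + kw]     # a 3D patch across all input maps
                    out[o, i, j] = np.sum(patch * kernels[o])       # one weight per input map per position
        return out

    maps_in = np.random.randn(6, 14, 14)        # e.g. 6 input feature maps
    kernels = np.random.randn(16, 6, 5, 5)      # 16 output maps, each with a 6-deep 5x5 kernel
    print(conv_layer(maps_in, kernels).shape)   # (16, 10, 10)

In architectures such as LeNet-5 (discussed in the following answers), some output maps are connected to only a subset of the input maps, which simply means some of those kernels have a smaller depth.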
5,228
How are kernels applied to feature maps to produce other feature maps?
There is not a one-to-one correspondence between layers and kernels necessarily. That depends on the particular architecture. The figure you posted suggests that in the S2 layers you have 6 feature maps, each combining all feature maps of the previous layers, i.e. different possible combinations of the features. Without more references I cannot say much more. See for example this paper
How are kernels applied to feature maps to produce other feature maps?
There is not a one-to-one correspondence between layers and kernels necessarily. That depends on the particular architecture. The figure you posted suggests that in the S2 layers you have 6 feature ma
How are kernels applied to feature maps to produce other feature maps? There is not a one-to-one correspondence between layers and kernels necessarily. That depends on the particular architecture. The figure you posted suggests that in the S2 layers you have 6 feature maps, each combining all feature maps of the previous layers, i.e. different possible combinations of the features. Without more references I cannot say much more. See for example this paper
How are kernels applied to feature maps to produce other feature maps? There is not a one-to-one correspondence between layers and kernels necessarily. That depends on the particular architecture. The figure you posted suggests that in the S2 layers you have 6 feature ma
5,229
How are kernels applied to feature maps to produce other feature maps?
Table 1 and Section 2a of Yann LeCun's "Gradient Based Learning Applied to Document Recognition" explains this well: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf Not all regions of the 5x5 convolution are used to generate the 2nd convolutional layer.
How are kernels applied to feature maps to produce other feature maps?
Table 1 and Section 2a of Yann LeCun's "Gradient Based Learning Applied to Document Recognition" explains this well: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf Not all regions of the 5x5 con
How are kernels applied to feature maps to produce other feature maps? Table 1 and Section 2a of Yann LeCun's "Gradient Based Learning Applied to Document Recognition" explains this well: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf Not all regions of the 5x5 convolution are used to generate the 2nd convolutional layer.
How are kernels applied to feature maps to produce other feature maps? Table 1 and Section 2a of Yann LeCun's "Gradient Based Learning Applied to Document Recognition" explains this well: http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf Not all regions of the 5x5 con
5,230
How are kernels applied to feature maps to produce other feature maps?
This article can be helpful: Understanding Convolution in Deep Learning by Tim Dettmers from March 26. It doesn't really answer the question because it explains only the first convolution layer, but it contains a good explanation of the basic intuition about convolution in CNNs. It also describes a deeper mathematical definition of convolution. I think it is related to the question topic.
How are kernels applied to feature maps to produce other feature maps?
This article can be helpful: Understanding Convolution in Deep Learning by Tim Dettmers from March 26. It doesn't really answer the question because it explains only the first convolution layer
How are kernels applied to feature maps to produce other feature maps? This article can be helpful: Understanding Convolution in Deep Learning by Tim Dettmers from March 26. It doesn't really answer the question because it explains only the first convolution layer, but it contains a good explanation of the basic intuition about convolution in CNNs. It also describes a deeper mathematical definition of convolution. I think it is related to the question topic.
How are kernels applied to feature maps to produce other feature maps? This article can be helpful: Understanding Convolution in Deep Learning by Tim Dettmers from March 26. It doesn't really answer the question because it explains only the first convolution layer
5,231
Normality of dependent variable = normality of residuals?
One point that may help your understanding: If $x$ is normally distributed and $a$ and $b$ are constants, then $y=\frac{x-a}{b}$ is also normally distributed (but with a possibly different mean and variance). Since the residuals are just the y values minus the estimated mean (standardized residuals are also divided by an estimate of the standard error) then if the y values are normally distributed then the residuals are as well and the other way around. So when we talk about theory or assumptions it does not matter which we talk about because one implies the other. So for the questions this leads to: yes, both, either. No (however the individual y-values will come from normals with different means which can make them look non-normal if grouped together). Normality of residuals means normality of groups, however it can be good to examine residuals or y-values by groups in some cases (pooling may obscure non-normality that is obvious in a group) or looking all together in other cases (not enough observations per group to determine, but all together you can tell). This depends on what you mean by compare, how big your sample size is, and your feelings on "Approximate". The normality assumption is only required for tests/intervals on the results, you can fit the model and describe the point estimates whether there is normality or not. The Central Limit Theorem says that if the sample size is large enough then the estimates will be approximately normal even if the residuals are not. It depends on what question you are trying to answer and how "approximate" you are happy to be. Another point that is important to understand (but is often conflated in learning) is that there are 2 types of residuals here: The theoretical residuals which are the differences between the observed values and the true theoretical model, and the observed residuals which are the differences between the observed values and the estimates from the currently fitted model. We assume that the theoretical residuals are iid normal. The observed residuals are not independent, not identically distributed, and not normally distributed (but they do have a mean of 0). However, for practical purposes the observed residuals do estimate the theoretical residuals and are therefore still useful for diagnostics.
Normality of dependent variable = normality of residuals?
One point that may help your understanding: If $x$ is normally distributed and $a$ and $b$ are constants, then $y=\frac{x-a}{b}$ is also normally distributed (but with a possibly different mean and va
Normality of dependent variable = normality of residuals? One point that may help your understanding: If $x$ is normally distributed and $a$ and $b$ are constants, then $y=\frac{x-a}{b}$ is also normally distributed (but with a possibly different mean and variance). Since the residuals are just the y values minus the estimated mean (standardized residuals are also divided by an estimate of the standard error) then if the y values are normally distributed then the residuals are as well and the other way around. So when we talk about theory or assumptions it does not matter which we talk about because one implies the other. So for the questions this leads to: yes, both, either. No (however the individual y-values will come from normals with different means which can make them look non-normal if grouped together). Normality of residuals means normality of groups, however it can be good to examine residuals or y-values by groups in some cases (pooling may obscure non-normality that is obvious in a group) or looking all together in other cases (not enough observations per group to determine, but all together you can tell). This depends on what you mean by compare, how big your sample size is, and your feelings on "Approximate". The normality assumption is only required for tests/intervals on the results, you can fit the model and describe the point estimates whether there is normality or not. The Central Limit Theorem says that if the sample size is large enough then the estimates will be approximately normal even if the residuals are not. It depends on what question you are trying to answer and how "approximate" you are happy to be. Another point that is important to understand (but is often conflated in learning) is that there are 2 types of residuals here: The theoretical residuals which are the differences between the observed values and the true theoretical model, and the observed residuals which are the differences between the observed values and the estimates from the currently fitted model. We assume that the theoretical residuals are iid normal. The observed residuals are not independent, not identically distributed, and not normally distributed (but they do have a mean of 0). However, for practical purposes the observed residuals do estimate the theoretical residuals and are therefore still useful for diagnostics.
Normality of dependent variable = normality of residuals? One point that may help your understanding: If $x$ is normally distributed and $a$ and $b$ are constants, then $y=\frac{x-a}{b}$ is also normally distributed (but with a possibly different mean and va
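The "pooled y can look non-normal while the residuals are normal" point is easy to demonstrate with a quick simulation (my own sketch; the group means, sample sizes and the use of a Shapiro-Wilk test are arbitrary choices for the illustration).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    g1 = rng.normal(loc=0.0, scale=1.0, size=200)    # group 1
    g2 = rng.normal(loc=5.0, scale=1.0, size=200)    # group 2, same spread, shifted mean

    y = np.concatenate([g1, g2])                                   # pooled dependent variable (bimodal)
    residuals = np.concatenate([g1 - g1.mean(), g2 - g2.mean()])   # y minus its group mean

    print("Shapiro-Wilk p, pooled y:  ", stats.shapiro(y).pvalue)          # typically tiny
    print("Shapiro-Wilk p, residuals: ", stats.shapiro(residuals).pvalue)  # typically not small

Here the conditional distributions (and hence the residuals) are normal even though the pooled y is clearly bimodal, which is exactly the distinction the answer draws.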
5,232
Normality of dependent variable = normality of residuals?
The short answers: (1) residuals; (2) no; (3) depends, both approaches have advantages and disadvantages; (4) why not? It may make more sense to compare medians instead of means; (5) from what you have told us, the normality assumption is probably violated. The longer answer: The assumption is that the dependent variable (y) is normally distributed but with different means for different groups. As a consequence, if you plot just the distribution of y it can easily look very different from your standard bell shaped normal curve. The residuals represent the distribution of y with those differences in means "filtered out". Alternatively, you could look at the distribution of y in each group separately. This also filters out the differences in means across groups. The advantage is that this way you also get information about the distribution in each group, which in your case seems relevant. The disadvantage is that each group contains fewer observations than the combined dataset which you would get when looking at the residuals. Moreover, you would not be able to meaningfully compare groups if you have many groups, e.g. because you entered many predictor variables to your model or a (quasi-)continuous predictor variable to your model. So if your model consists of only one categorical predictor variable and the number of observations in each group is large enough, then it can be meaningful to inspect the distribution of y in each group separately. You can always inspect the residuals.
Normality of dependent variable = normality of residuals?
The short answers: (1) residuals; (2) no; (3) depends, both approaches have advantages and disadvantages; (4) why not? It may make more sense to compare medians instead of means; (5) from what you have told us, the normality assumption is probably violated.
Normality of dependent variable = normality of residuals? The short answers: (1) residuals; (2) no; (3) depends, both approaches have advantages and disadvantages; (4) why not? It may make more sense to compare medians instead of means; (5) from what you have told us, the normality assumption is probably violated. The longer answer: The assumption is that the dependent variable (y) is normally distributed but with different means for different groups. As a consequence, if you plot just the distribution of y it can easily look very different from your standard bell shaped normal curve. The residuals represent the distribution of y with those differences in means "filtered out". Alternatively, you could look at the distribution of y in each group separately. This also filters out the differences in means across groups. The advantage is that this way you also get information about the distribution in each group, which in your case seems relevant. The disadvantage is that each group contains fewer observations than the combined dataset which you would get when looking at the residuals. Moreover, you would not be able to meaningfully compare groups if you have many groups, e.g. because you entered many predictor variables to your model or a (quasi-)continuous predictor variable to your model. So if your model consists of only one categorical predictor variable and the number of observations in each group is large enough, then it can be meaningful to inspect the distribution of y in each group separately. You can always inspect the residuals.
Normality of dependent variable = normality of residuals? The short answers: (1) residuals; (2) no; (3) depends, both approaches have advantages and disadvantages; (4) why not? It may make more sense to compare medians instead of means; (5) from what you have told us, the normality assumption is probably violated.
5,233
Normality of dependent variable = normality of residuals?
By definition of assumptions, the random variable $Y$ is a linear combination of $X$ and the residuals, all other things being constant. If $X$ is not stochastic, and the error terms are normal, then $Y$ is normal and so are the residuals. Question 1) The assumptions refer to two things. First, to the normality of the error terms. Second, to the linearity and completeness of the model. Both things are necessary for inference. But if these assumptions are met, then both the residuals $e$ and $Y$ are normally distributed and the solution can be calculated quite easily, because they depend on the error terms $\epsilon$, given $X$. For example the distribution of $Y$ in a regular OLS model might be $Y|X \sim N(X\beta,\sigma^2)$. If your $X$ group is not normal, then this will potentially skew the unconditional $Y$. In fact this is very likely to happen. However, the important thing is that the distribution of $Y|X$ is normal. Question 2) Yes it is possible to have skewed values for $Y$ because of the $X$. Yet, the residuals will be normal if all assumptions are met (how else could you do intervals and hypothesis testing?!). For this part of your question there is a pretty definitive answer in this thread: What if residuals are normally distributed, but y is not? Question 3) The important thing for using linear models requiring normality is that residuals which are not normal, whether this is within a group or not, are an important indicator that your model might not fit your data. If you are doing ANOVA, then of course your overall residuals don't have to be normal (or rather homoscedastic), that wouldn't make sense. In a regression though, you had better have a model which ends up with overall normal residuals. If not, your interval estimators and tests will be wrong. This may be the case with certain autocorrelations, or a missing variable bias. If the model is 100% correct (including maybe structural breaks and weighting if necessary), it is not far fetched to assume normal error terms, even centered around 0. Practically the question often becomes: Can we get away with these things if the sample is large enough? There is no definitive answer, but for a 100% correct approach yes, all residuals should be normal. Questions 4 & 5) It depends on what you mean by comparing. Given the assumption of normal error terms, you can test based on the assumption of two different distributions. You can also use GLS estimation for a regression to account for the different distribution parameters - IF you have the right model... and I guess your groups themselves work as an indicator/binary variable? Then it would probably be very hard to reason that the distribution of residuals will be normal - the consequence is that while you can do stuff with your data, it will not be on the basis of regular OLS. But it depends on what you want to do with the data. The important thing is though: You still do not get to go around the assumptions of the linear model you are using. You can make issues better by assuming asymptotic large sample properties, but I guess, since you are asking for a definitive answer, that is not what you have in mind. In the case of your example, if you have data which might explain the skewness you will regain normality in your residuals and in $Y|X$. But if you are just using binary indicators for a regression, you are essentially using the wrong model.
You can indeed do tests with this, but when it comes to regression your interval results will be invalid; essentially, you are missing data for a complete model. I think a good approach would be to look into the algebra of regular OLS with a focus on the resulting distributions.
Normality of dependent variable = normality of residuals?
By definition of assumptions, the random variable $Y$ is a linear combination of $X$ and the residuals, all other things being constant. If $X$ is not stochastic, and the error terms are normal, then
Normality of dependent variable = normality of residuals? By definition of assumptions, the random variable $Y$ is a linear combination of $X$ and the residuals, all other things being constant. If $X$ is not stochastic, and the error terms are normal, then $Y$ is normal and so are the residuals. Question 1) The assumptions refer to two things. First, to the normality of the error terms. Second, to the linearity and completeness of the model. Both things are necessary for inference. But if these assumptions are met, then both the residuals $e$ and $Y$ are normally distributed and the solution can be calculated quite easily, because they depend on the error terms $\epsilon$, given $X$. For example the distribution of $Y$ in a regular OLS model might be $Y|X \sim N(X\beta,\sigma^2)$. If your $X$ group is not normal, then this will potentially skew the unconditional $Y$. In fact this is very likely to happen. However, the important thing is that the distribution of $Y|X$ is normal. Question 2) Yes it is possible to have skewed values for $Y$ because of the $X$. Yet, the residuals will be normal if all assumptions are met (how else could you do intervals and hypothesis testing?!). For this part of your question there is a pretty definitive answer in this thread: What if residuals are normally distributed, but y is not? Question 3) The important thing for using linear models requiring normality is that residuals which are not normal, whether this is within a group or not, are an important indicator that your model might not fit your data. If you are doing ANOVA, then of course your overall residuals don't have to be normal (or rather homoscedastic), that wouldn't make sense. In a regression though, you had better have a model which ends up with overall normal residuals. If not, your interval estimators and tests will be wrong. This may be the case with certain autocorrelations, or a missing variable bias. If the model is 100% correct (including maybe structural breaks and weighting if necessary), it is not far fetched to assume normal error terms, even centered around 0. Practically the question often becomes: Can we get away with these things if the sample is large enough? There is no definitive answer, but for a 100% correct approach yes, all residuals should be normal. Questions 4 & 5) It depends on what you mean by comparing. Given the assumption of normal error terms, you can test based on the assumption of two different distributions. You can also use GLS estimation for a regression to account for the different distribution parameters - IF you have the right model... and I guess your groups themselves work as an indicator/binary variable? Then it would probably be very hard to reason that the distribution of residuals will be normal - the consequence is that while you can do stuff with your data, it will not be on the basis of regular OLS. But it depends on what you want to do with the data. The important thing is though: You still do not get to go around the assumptions of the linear model you are using. You can make issues better by assuming asymptotic large sample properties, but I guess, since you are asking for a definitive answer, that is not what you have in mind. In the case of your example, if you have data which might explain the skewness you will regain normality in your residuals and in $Y|X$. But if you are just using binary indicators for a regression, you are essentially using the wrong model.
You can indeed do tests with this, but when it comes to regression your interval results will be invalid, essentially you are missing data for a complete model. I think a good approach would be to look into the algebra of regular OLS with a focus on the resulting distributions.
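A minimal sketch of the point in Questions 1 and 2 (the model and parameters here are arbitrary illustrations): with a heavily unbalanced binary $X$, the residuals and $Y|X$ look normal while the marginal $Y$ is clearly skewed.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
x = rng.binomial(1, 0.1, size=n)            # heavily unbalanced binary predictor
y = 1.0 + 5.0 * x + rng.normal(0, 1, n)     # Y = beta0 + beta1*x + normal error

beta1, beta0 = np.polyfit(x, y, 1)          # slope, intercept
resid = y - (beta0 + beta1 * x)

print(stats.skewtest(y))           # marginal Y: strong evidence of skew
print(stats.skewtest(resid))       # residuals: no evidence of skew
print(stats.skewtest(y[x == 1]))   # Y within a group (Y|X): no evidence of skew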
5,234
Normality of dependent variable = normality of residuals?
A clarification on Question 3: Normality of the residuals definitely does not imply normality within groups. The marginal distribution of residuals can be normal while the conditionals are not. This is true because a mixture of non-normal distributions can possibly be normal; see https://stats.stackexchange.com/a/486951/102879 for an example.
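A minimal sketch of that phenomenon (the construction is just one convenient example): split a standard normal sample into its negative and positive halves, so each conditional distribution is half-normal yet the pooled sample is exactly normal.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
z = rng.normal(size=200_000)
group = (z > 0).astype(int)   # group 0 gets the left half, group 1 the right half

# Each conditional distribution is a (non-normal) half-normal...
print(stats.normaltest(z[group == 0]))   # strongly rejects normality
print(stats.normaltest(z[group == 1]))   # strongly rejects normality
# ...but the pooled sample is the original normal draw.
print(stats.normaltest(z))               # no evidence against normality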
5,235
Normality of dependent variable = normality of residuals?
The assumptions refers to two things. First, to the normality of the error terms. Second, to the linearity and completeness of the model. Both things are necessary for inference. But if these assumptions are met, then both the residuals e and Y are normally distributed and the solution can be calculated quite easily, because they depend on the error terms ϵ, given X. For example the distribution of Y in a regular OLS model might be Y|X−N(Xβ,σ2). If your X group is not normal, then this will potentially skew the unconditional Y. In fact this is very likely to happen. However, the important thing is that the distribution of Y|X is normal. is there any reference available for this?
5,236
Can machine learning decode the SHA256 hashes?
This isn't really a stats answer, but: No, you can't determine the first character of the plaintext from the hash, because there's no such thing as "the plaintext" for a given hash. SHA-256 is a hashing algorithm. No matter what your plaintext, you get out a 32-byte signature, often expressed as a 64-character hex string. There are far more possible plaintexts than there are possible 64 character hex strings - the same hash can be generated from any number of different plaintexts. There's no reason to believe that the first character being/not being a '1' is uniform across all plaintexts producing a given hash.
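A quick illustration of the fixed-length/pigeonhole point (inputs chosen arbitrarily): the digest is always 32 bytes, no matter how long the plaintext is, so infinitely many plaintexts must share each digest.

import hashlib

# The digest length is fixed at 32 bytes (64 hex characters) regardless of input length,
# so distinct plaintexts necessarily collide onto the same digests.
for msg in [b"1", b"x" * 10, b"x" * 10_000_000]:
    digest = hashlib.sha256(msg).hexdigest()
    print(len(msg), len(digest), digest[:16])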
5,237
Can machine learning decode the SHA256 hashes?
SHA256 is designed to be as random as possible, so it is unlikely you would be able to separate hashes that came from 1-prefixed plaintext from those that do not; there should simply be no feature of the hash string that would give that information away.
5,238
Can machine learning decode the SHA256 hashes?
Regardless if this is "Possible", what algorithm would be the best approach? Sorry, but that's a nonsensical question. If something is impossible, then you can't search for the best approach to the problem. In this case, this definitely should be impossible because hashing is a one-way function: many inputs (infinitely many, in fact) can produce the same output. If the first bit of the input on its own somehow influenced the probability of a specific hash value, that would mean that the hash algorithm is completely flawed. You certainly can train a neural network, linear classifier, SVM and whatnot to attempt prediction. And if you were able to reliably predict the input from the output for a certain hashing algorithm, this would prove that the algorithm is worthless. I'd say that for a widely used algorithm like SHA256 such a possibility is vanishingly low. However, it's a reasonable approach to quickly rule out new, unproven and untested hashing algorithms.
5,239
Can machine learning decode the SHA256 hashes?
While one can't prove a negative with an example, I still feel an example would be suggestive, and perhaps useful. It does show how one would (attempt to) solve similar problems. In the case of "I want to make binary predictions, using features that are binary vectors", a Random Forest is a solid choice. I guess this kind of answers the second part of your question: what is a good algorithm. We will want to preprocess the SHA256 strings into binary (Boolean) vectors; as each bit is statistically independent, each bit is a good feature. So that will make our inputs 256-element Boolean vectors.

Demo

Here is a demonstration of how the whole thing can be done using the Julia DecisionTree.jl library. You can copy-paste the below into the julia prompt.

using SHA
using DecisionTree
using Statistics: mean
using Random: randstring

const maxlen=10_000 # longest string (document) to be hashed.

gen_plaintext(x) = gen_plaintext(Val{x}())
gen_plaintext(::Val{true}) = "1" * randstring(rand(0:maxlen-1))
gen_plaintext(::Val{false}) = randstring(rand(1:maxlen))

bitvector(x) = BitVector(digits(x, base=2, pad=8sizeof(x)))
bitvector(x::AbstractVector) = reduce(vcat, bitvector.(x))

function gen_observation(class)
    plaintext = gen_plaintext(class)
    obs = bitvector(sha256(plaintext))
    obs
end

function feature_mat(obs)
    convert(Array, reduce(hcat, obs)')
end

########################################

const train_labels = rand(Bool, 100_000)
const train_obs = gen_observation.(train_labels)
const train_feature_mat = feature_mat(train_obs)

const test_labels = rand(Bool, 100_000)
const test_obs = gen_observation.(test_labels)
const test_feature_mat = feature_mat(test_obs)

# Train the model
const model = build_forest(train_labels, train_feature_mat)
@show model

#Training Set accuracy:
@show mean(apply_forest(model, train_feature_mat) .== train_labels)

#Test Set accuracy:
@show mean(apply_forest(model, test_feature_mat) .== test_labels)

Results

When I did this, training on 100,000 random ASCII strings of length up to 10,000, here are the results I saw:

Train the model:

julia> const model = build_forest(train_labels, train_feature_mat)
Ensemble of Decision Trees
Trees:      10
Avg Leaves: 16124.7
Avg Depth:  17.9

Training Set accuracy:

julia> mean(apply_forest(model, train_feature_mat) .== train_labels)
0.95162

Test Set accuracy:

julia> mean(apply_forest(model, test_feature_mat) .== test_labels)
0.5016

Discussion

So that is basically nothing. We went from 95% on the training set to barely over 50% on the test set. Someone could apply proper hypothesis tests to see if we can reject the null hypothesis, but I am pretty certain we can't: it is at best a tiny improvement over the guess rate. That suggests it can't be learned: if a Random Forest can go from well fitted on the training set to hitting just the guess rate on the test set, there is nothing there to learn. Random Forests are pretty capable of learning difficult inputs; if there were something to learn, I would expect at least a few percent. You can play around with different hash functions by changing the code, which could be interesting. I got basically the same results when using Julia's built-in hash function (which is not a cryptographically secure hash, but still a good hash, so it should indeed send similar strings apart). I also got basically the same results for CRC32c.
5,240
Can machine learning decode the SHA256 hashes?
Hash functions are (by design) extremely badly suited for doing anything machine learning with them. ML is essentially a family of methods for modelling / estimating locally continuous functions. I.e., you're trying to describe some physical system that, while it may have certain discontinuities, is in some sense in most of the parameter space smooth enough so that only a scattered sample of test data can be used to predict the result for other input. To do that, the AI algorithms need to somehow decompose the data into a clever basis representation, for which the training has suggested that e.g. if you see such and such shape (which appears to correlate with the result of such and such convolution) then there's a good chance that the output should have in the corresponding region such and such structure (which can again be described by a convolution or something). (I know, many ML approaches aren't like convolution at all, but the general idea is always the same: you have some input space that's so high dimensional it's impossible to sample exhaustively, so you find a clever decomposition that allows you to extrapolate results from a comparatively sparse sample.) The idea behind a cryptographic hash function however is that any change to the plaintext should result in a completely different digest. So no matter how you decompose the function, local estimators will not allow you to extrapolate how small fluctuations around that part influence the result. Unless of course you actually process all of the information of a limited set, but this wouldn't be called machine learning: you'd just be building a rainbow table.
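A small sketch of that avalanche behaviour (the message is arbitrary): flipping a single input bit changes roughly half of the 256 output bits, so there is no local structure to extrapolate from.

import hashlib

def sha256_bits(msg: bytes) -> str:
    # Full 256-bit digest as a zero-padded binary string.
    return bin(int(hashlib.sha256(msg).hexdigest(), 16))[2:].zfill(256)

msg = bytearray(b"The quick brown fox jumps over the lazy dog")
a = sha256_bits(bytes(msg))
msg[0] ^= 0x01                      # flip one bit of the first byte
b = sha256_bits(bytes(msg))

# Hamming distance between the two digests; expect roughly 128 of 256 bits to differ.
print(sum(x != y for x, y in zip(a, b)))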
5,241
Can machine learning decode the SHA256 hashes?
This is an interesting question because it raises issues about what counts as "machine learning." There is certainly an algorithm that will eventually solve this problem if it can be solved. It goes like this:

1. Pick your favorite programming language, and decide on an encoding that maps every string to a (potentially very large) integer.

2. Pick a random number and convert it into a string. Check to see if it's a valid program in your language. If it's not, pick another number and try again. If it is, start it, immediately pause it, and add it to a list of paused programs.

3. Run all the paused programs for a little while. If any of them halt without producing an adequate solution, remove them from the list. If one produces an adequate solution, you're done! Otherwise, return to step 2 after letting them all run for a bit.

There's no question that if you have infinite storage and infinite time, the above algorithm will eventually find a good solution. But that's probably not what you mean by "machine learning." Here's the rub: if you consider all possible problems, no machine learning algorithm can do better on average! This is known as the no free lunch theorem. It proves that among all the possible problems you could throw at any given machine learning algorithm, the number that it can solve quickly is vanishingly small. It can solve those problems quickly only because they are governed by patterns that the algorithm can anticipate. For example, many successful algorithms assume the following:

1. Solutions can be described by some complex series of matrix multiplications and nonlinear distortions, governed by a set of parameters.

2. Good solutions will be clustered together in parameter space, so that all you have to do is pick a search neighborhood, find the best solution there, shift your search neighborhood so that the best solution is in the center, and repeat.

Obviously neither of these assumptions holds in general. The second is particularly suspect. And the no free lunch theorem tells us that these assumptions don't even hold most of the time. In fact they almost never hold! It's just our good fortune that they do hold for certain problems that actually matter. The problem you've chosen is designed from the beginning to violate assumption 2. Hash functions are specifically designed so that similar inputs give completely different outputs. So your question - what is the best machine learning algorithm for solving this problem? - probably has a very straightforward answer: random search.
5,242
Can machine learning decode the SHA256 hashes?
It is next to impossible. However, people have observed some patterns in SHA256 which might suggest non-randomness: A distinguisher for SHA256 using Bitcoin (mining faster along the way). Their tldr: "To distinguish between an ideal random permutation hash and SHA256, hash a large amount (~2^80) of candidate 1024 bit blocks twice, as done in Bitcoin. Ensure that the bits of the candidate blocks are sparsely set (much fewer than the 512 mean expected), according to the Bitcoin protocol, discarding candidate blocks that do not meet the Bitcoin “difficulty” standard (where the resultant hashes start with a large number of 0’s). With the remaining set of valid input candidates (467369 when this analysis was done), observe a particular set of 32 bits in the input block (located where Bitcoin has the nonce, input bits 607-639). Note that the mean number of bits set in the nonce field is skewed to the left, i.e. fewer than the expected value of 16 bits set (estimated mean 15.428)." See a discussion on lobste.rs. One possible explanation is a bias introduced by the miners.
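A rough sketch of how one might test such a claim, assuming a list of 32-bit nonce values were available (random placeholders are used here, so this run will not show the bias):

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
nonces = rng.integers(0, 2**32, size=467_369, dtype=np.uint64)  # placeholder for real Bitcoin nonces

popcounts = np.array([bin(int(n)).count("1") for n in nonces])
# Under the "ideal hash / unbiased nonce" null, each bit is Bernoulli(1/2), so the mean popcount is 16.
print(popcounts.mean())
print(stats.ttest_1samp(popcounts, 16.0))  # with real nonces, a mean near 15.43 would strongly reject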
5,243
Can machine learning decode the SHA256 hashes?
I'll answer with a program. To reduce computational requirements I'll use a variant of sha256 I call sha16, which is just the first 16 bits of sha256.

#!/usr/bin/python3
import hashlib
from itertools import count

def sha16(plaintext):
    h = hashlib.sha256()
    h.update(plaintext)
    return h.hexdigest()[:4]

def has_plaintext_start_with_1(digest):
    """Return True if and only if the given digest can be generated from
    a plaintext starting with "1" first bit."""
    return True

def plaintext_starting_with_1(digest):
    """Return a plaintext starting with '1' matching the given digest."""
    for c in count():
        plaintext = (b'\x80' + str(c).encode('ascii'))
        d = sha16(plaintext)
        if d == digest:
            return plaintext

for digest in range(0x10000):
    digest = "%04x" % (digest,)
    plain = plaintext_starting_with_1(digest)
    print("%s hashes to %s" % (plain, digest))

This produces the output:

b'\x8094207' hashes to 0000
b'\x8047770' hashes to 0001
b'\x8078597' hashes to 0002
b'\x8025129' hashes to 0003
b'\x8055307' hashes to 0004
b'\x80120019' hashes to 0005
b'\x8062700' hashes to 0006
b'\x8036411' hashes to 0007
b'\x80135953' hashes to 0008
b'\x8044091' hashes to 0009
b'\x808968' hashes to 000a
b'\x8039318' hashes to 000b
[...]

I'll leave the full proof as an exercise for the reader, but take my word for it: there's an input that starts with a "1" for each possible digest from 0000 to ffff. There's also an input that doesn't start with "1". And there's one that starts with the complete works of Shakespeare, too. This holds for any reasonably good hash function, although my brute force proof may become computationally infeasible.
5,244
Can machine learning decode the SHA256 hashes?
What you describe is basically a pre-image attack. You're trying to find an input such that, when it is hashed, the output has some property like "a leading 1".* It is an explicit goal of cryptographic hashes to prevent such pre-image attacks. If you can make such an attack, we tend to consider that algorithm to be insecure and stop using it. So while that means it's not impossible, it means your machine learning algorithm would have to simultaneously outwit a large fraction of the mathematicians in the world, and their supercomputers. It is unlikely that you will do so. However, if you did, you would become known as someone who broke a major cryptographic hash algorithm. That fame is worth something! *Technically a "first preimage attack" tries to find a match for a specific hash. However, to show that a hash algorithm has first preimage attack resistance, they typically show that you can't find any meaningful information about the input from the hash.
5,245
Can machine learning decode the SHA256 hashes?
Almost all the answers here are telling you why you can't do this, but here's a direct answer to: "Regardless if this is 'Possible', what algorithm would be the best approach?" Assuming the input is sufficiently large:

1. Take the count of the set of valid characters.

2. Take the reciprocal of the number from step 1.

That's the probability that the input string starts with '1'. You don't even need to look at the input. If you can do better than that, it would mean the hash is very broken. You can save a lot of CPU cycles over trying to train an algorithm to pick random numbers. You could train an algorithm, and it might come up with a different answer because of overfitting. That is, unless there's something really wrong with the hash algorithm, using the trained algorithm will go wrong more often than if you simply pick a random value.
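A trivial sketch of that baseline, assuming an alphanumeric plaintext alphabet (the character set is only an assumption and should be replaced with whatever alphabet the plaintexts actually use):

import string

valid_chars = string.ascii_letters + string.digits   # assumed plaintext alphabet
p_starts_with_1 = 1 / len(valid_chars)

# The constant prediction: ignore the hash entirely.
def predict(sha256_digest: str) -> float:
    return p_starts_with_1

print(predict("a" * 64))   # same answer for every digest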
5,246
Can machine learning decode the SHA256 hashes?
Hashing functions are purposefully designed to be difficult to model, so (as pointed out already) this is likely to be very difficult. Nevertheless, any weakness in the hashing function will reduce its entropy, making it more predictable. Regardless if this is "Possible", what algorithm would be the best approach? A useful example is the Physically Unclonable Function, or PUF - which is analogous to a hardware hashing function. Typically, manufacturing variations are purposefully used to give each PUF a slightly different response so that their 'hashed' output is different for a given input. Design weaknesses limit the entropy, however, and given enough challenge-response pairs it is often possible to build a black-box model of the PUF so that the response for a new, previously unseen challenge can be predicted. Logistic regression is the most commonly used approach for these modelling attacks, such as in this paper by Rührmair. Genetic algorithms (or more generally evolutionary strategies) may be an alternative approach, as they are applicable to problems that are not differentiable and/or linearly separable. They are also discussed in the above paper.
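A toy sketch of such a modelling attack on a simulated, noise-free arbiter-PUF-style device (the delay model and parameters are illustrative, not taken from the paper): fit logistic regression on parity-transformed challenge-response pairs and check its accuracy on unseen challenges.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_train, n_test = 64, 20_000, 5_000
w = rng.normal(size=n_stages + 1)            # hidden delay parameters of the simulated device

def parity_features(challenges):
    # Arbiter-style transform: phi_i = prod_{j >= i} (1 - 2*c_j), plus a constant term.
    signs = 1 - 2 * challenges               # {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

def puf_response(challenges):
    return (parity_features(challenges) @ w > 0).astype(int)

C_train = rng.integers(0, 2, size=(n_train, n_stages))
C_test = rng.integers(0, 2, size=(n_test, n_stages))

model = LogisticRegression(max_iter=2000).fit(parity_features(C_train), puf_response(C_train))
print(model.score(parity_features(C_test), puf_response(C_test)))   # typically well above 0.95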
5,247
Can machine learning decode the SHA256 hashes?
Let's say that your plaintext/input is exactly one block long (512 bits = 1 block for SHA256). The input space for it is $2^{512}$ and the hash space is $2^{256}$. For simplicity, let's take only the first $2^{256}$ inputs into consideration. Now you train a machine learning algorithm (any algorithm of your choice) with a training set of size $2^{64}$, by hashing every number from $0$ to $2^{64}-1$ (this itself would take a lot of time and a gargantuan amount of space to store, but let's put that aside for a moment). After training over such a massive training set, you would expect the model to work accurately, but no. The remaining $2^{256}-2^{64}$ input-hash pairs could be mapped in $(2^{256}-2^{64})!$ ways. Out of those many ways to arrange them, only one arrangement would be our SHA256. Let $S=(2^{256}-2^{64})$ (total number of mappings) and $C=\frac{90}{100}*S$ (number of correct mappings for 90% accuracy). The probability of achieving even 90% accuracy with our model would be (probability of $C$ correct mappings)*(probability of ($S-C$) incorrect mappings) = $$(\frac{1}{S}*\frac{1}{S-1}*\frac{1}{S-2}...*\frac{1}{S-(C-1)} ) * (\frac{S-C-1}{S-C}*\frac{S-C-2}{S-C-1}*\frac{S-C-3}{S-C-2}...*\frac{1}{2}) = \frac{(S-C-1)!}{S!}$$ Plugging in the values, the probability that our model would achieve 90% accuracy is$$= \frac{(\frac{1}{10}*(2^{256}-2^{64})-1)!}{(2^{256}-2^{64})!}$$ Taking logarithms and using Stirling's approximation for factorials, the probability is $$\approx 2^{-(2^{263.9918466566}-2^{260.6509677217})}$$ $$\approx2^{-10.1322237391*2^{260.6509677217}}$$ Phew, that's a really small number. And this is an overestimate, since we have considered only the first $2^{256}$ inputs instead of the total $2^{512}$. The actual probability will be still lower.
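A rough numerical check of that order of magnitude, using log-gamma in place of Stirling (the small $\pm 1$ terms are negligible at this scale):

from math import lgamma, log, log2

S = 2.0**256 - 2.0**64
C = 0.9 * S

# log2 of (S - C - 1)! / S!  ~  [lgamma(S - C) - lgamma(S + 1)] / ln(2)
log2_prob = (lgamma(S - C) - lgamma(S + 1)) / log(2)
print(log2(-log2_prob))   # ~263.9, i.e. the probability is roughly 2**(-2**263.9)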
5,248
Can machine learning decode the SHA256 hashes?
The problem is that "machine learning" isn't intelligent. It just tries to find patterns. In SHA-256, there are no patterns. There is nothing to find. Machine learning hasn't got any chance that is better than brute force. If you want to crack SHA-256 by computer, the only possibility is to create real intelligence, and since lots of clever humans haven't found a way to crack SHA-256, you need to create artificial intelligence that is a lot higher than that of many clever humans. At that point, we don't know if such a super-human intelligence would crack SHA-256, prove that it cannot be cracked, or would decide that it is not clever enough to do either (just as humans). The fourth possibility is of course that such a super-human artificial intelligence would not even bother but think about problems that are more important (to it).
5,249
Expected number of ratio of girls vs boys birth
Start with no children, then repeat this step:

{ Every couple who is still having children has a child. Half the couples have males and half the couples have females. Those couples that have females stop having children. }

At each step you get an equal number of males and females, and the number of couples still having children is halved (i.e., those that had females won't have any children in the next step). So, at any given time you have an equal number of males and females, and from step to step the number of couples having children falls by half. As more couples are created the same situation recurs and, all other things being equal, the population will contain the same number of males and females.
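A quick simulation of this argument, assuming independent births with probability 1/2 of a girl (sample size is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n_couples = 1_000_000

# Each couple has children until the first girl; the number of boys is geometrically distributed.
boys = rng.geometric(p=0.5, size=n_couples) - 1   # numpy's geometric counts trials, so boys = trials - 1
girls = np.ones(n_couples, dtype=int)             # every couple ends up with exactly one girl

print(girls.sum() / boys.sum())   # ~1.0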
5,250
Expected number of ratio of girls vs boys birth
Let $X$ be the number of boys in a family. As soon as they have a girl, they stop, so \begin{array}{| l |l | } \hline X=0 & \text{if the first child was a girl}\\ X=1 & \text{if the first child was a boy and the second was a girl}\\ X=2 & \text{if the first two children were boys and the third was a girl}\\ \text{and so on}\ldots &\\ \hline \end{array} If $p$ is the probability that a child is a boy and if genders are independent between children, the probability that a family ends up having $k$ boys is $$\mbox{P}(X=k)=p^{k}\cdot (1-p),$$ i.e. the probability of having $k$ boys and then having a girl. The expected number of boys is $$ \operatorname{E}X=\sum_{k=0}^\infty kp^k \cdot (1-p)=\sum_{k=0}^\infty kp^k-\sum_{k=0}^\infty kp^{k+1}.$$ Noting that $$\sum_{k=0}^\infty kp^k=\sum_{k=0}^\infty (k+1)p^{k+1}$$ we get $$\sum_{k=0}^\infty kp^k-\sum_{k=0}^\infty kp^{k+1}=\sum_{k=0}^\infty (k+1)p^{k+1}-\sum_{k=0}^\infty kp^{k+1}=\sum_{k=0}^\infty p^{k+1}=p\sum_{k=0}^\infty p^{k}=\frac{p}{1-p}$$ where we used that $\sum_{k=0}^\infty p^k=1/(1-p)$ when $0<p<1$ (see geometric series). If $p=1/2$, we have that $\operatorname{E}X=0.5/0.5$. That is, the average family has 1 boy. We already know that all families have 1 girl, so the ratio will over time even out to be $1/1=1$. The random variable $X$ is known as a geometric random variable.
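A quick numerical check of this distribution and its expectation (sample size arbitrary):

import numpy as np

rng = np.random.default_rng(1)
p = 0.5                                          # P(child is a boy)
X = rng.geometric(p=1 - p, size=1_000_000) - 1   # boys before the first girl
for k in range(4):
    print(k, (X == k).mean(), p**k * (1 - p))    # empirical frequency vs P(X = k) = p^k (1 - p)
print(X.mean())                                  # ~1 = p / (1 - p) for p = 1/2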
5,251
Expected number of ratio of girls vs boys birth
Summary The simple model that all births independently have a 50% chance of being girls is unrealistic and, as it turns out, exceptional. As soon as we consider the consequences of variation in outcomes among the population, the answer is that the girl:boy ratio can be any value not exceeding 1:1. (In reality it likely still would be close to 1:1, but that's a matter for data analysis to determine.) Because these two conflicting answers are both obtained by assuming statistical independence of birth outcomes, an appeal to independence is an insufficient explanation. Thus it appears that variation (in the chances of female births) is the key idea behind the paradox. Introduction A paradox occurs when we think we have good reasons to believe something but are confronted with a solid-looking argument to the contrary. A satisfactory resolution to a paradox helps us understand both what was right and what may have been wrong about both arguments. As is often the case in probability and statistics, both arguments can actually be valid: the resolution will hinge on differences among assumptions that are implicitly made. Comparing these different assumptions can help us identify which aspects of the situation lead to different answers. Identifying these aspects, I maintain, is what we should value the most. Assumptions As evidenced by all the answers posted so far, it is natural to assume that female births occur independently and with constant probabilities of $1/2$. It is well known that neither assumption is actually true, but it would seem that slight deviations from these assumptions should not affect the answer much. Let us see. To this end, consider the following more general and more realistic model: In each family $i$ the probability of a female birth is a constant $p_i$, regardless of birth order. In the absence of any stopping rule, the expected number of female births in the population should be close to the expected number of male births. All birth outcomes are (statistically) independent. This is still not a fully realistic model of human births, in which the $p_i$ may vary with the age of the parents (particularly the mother). However, it is sufficiently realistic and flexible to provide a satisfactory resolution of the paradox that will apply even to more general models. Analysis Although it is interesting to conduct a thorough analysis of this model, the main points become apparent even when a specific, simple (but somewhat extreme) version is considered. Suppose the population has $2N$ families. In half of these the chance of a female birth is $2/3$ and in the other half the chance of a female birth is $1/3$. This clearly satisfies condition (2): the expected numbers of female and male births are the same. Consider those first $N$ families. Let us reason in terms of expectations, understanding that actual outcomes will be random and therefore will vary a little from the expectations. (The idea behind the following analysis was conveyed more briefly and simply in the original answer which appears at the very end of this post.) Let $f(N,p)$ be the expected number of female births in a population of $N$ with constant female birth probability $p$. Obviously this is proportional to $N$ and so can be written $f(N,p) = f(p)N$. Similarly, let $m(p)N$ be the expected number of male births. The first $pN$ families produce a girl and stop. The other $(1-p)N$ families produce a boy and continue bearing children. That's $pN$ girls and $(1-p)N$ boys so far. 
The remaining $(1-p)N$ families are in the same position as before: the independence assumption (3) implies that what they experience in the future is not affected by the fact their firstborn was a son. Thus, these families will produce $f(p)[(1-p)N]$ more girls and $m(p)[(1-p)N]$ more boys. Adding up the total girls and total boys and comparing to their assumed values of $f(p)N$ and $m(p)N$ gives equations $$f(p)N = pN + f(p)(1-p)N\ \text{ and }\ m(p)N = (1-p)N + m(p)(1-p)N$$ with solutions $$f(p) = 1\ \text{ and }\ m(p) = \frac{1}{p}-1.$$ The expected number of girls in the first $N$ families, with $p=2/3$, therefore is $f(2/3)N = N$ and the expected number of boys is $m(2/3)N = N/2$. The expected number of girls in the second $N$ families, with $p=1/3$, therefore is $f(1/3)N = N$ and the expected number of boys is $m(1/3)N = 2N$. The totals are $(1+1)N = 2N$ girls and $(1/2+2)N = (5/2)N$ boys. For large $N$ the expected ratio will be close to the ratio of the expectations, $$\mathbb{E}\left(\frac{\text{# girls}}{\text{# boys}}\right) \approx \frac{2N}{(5/2)N} = \frac{4}{5}.$$ The stopping rule favors boys! More generally, with half the families bearing girls independently with probability $p$ and the other half bearing boys independently with probability $p$, conditions (1) through (3) continue to apply and the expected ratio for large $N$ approaches $$\frac{2p(1-p)}{1 - 2p(1-p)}.$$ Depending on $p$, which of course lies between $0$ and $1$, this value can be anywhere between $0$ and $1$ (but never any larger than $1$). It attains its maximum of $1$ only when $p=1/2$. In other words, an expected girl:boy ratio of 1:1 is a special exception to the more general and realistic rule that stopping with the first girl favors more boys in the population. Resolution If your intuition is that stopping with the first girl ought to produce more boys in the population, then you are correct, as this example shows. In order to be correct all you need is that the probability of giving birth to a girl varies (even by just a little) among the families. The "official" answer, that the ratio should be close to 1:1, requires several unrealistic assumptions and is sensitive to them: it supposes there can be no variation among families and all births must be independent. Comments The key idea highlighted by this analysis is that variation within the population has important consequences. Independence of births--although it is a simplifying assumption used for every analysis in this thread--does not resolve the paradox, because (depending on the other assumptions) it is consistent both with the official answer and its opposite. Note, however, that for the expected ratio to depart substantially from 1:1, we need a lot of variation among the $p_i$ in the population. If all the $p_i$ are, say, between 0.45 and 0.55, then the effects of this variation will not be very noticeable. Addressing this question of what the $p_i$ really are in a human population requires a fairly large and accurate dataset. One might use a generalized linear mixed model and test for overdispersion. If we replace gender by some other genetic expression, then we obtain a simple statistical explanation of natural selection: a rule that differentially limits the number of offspring based on their genetic makeup can systematically alter the proportions of those genes in the next generation. 
When the gene is not sex-linked, even a small effect will be multiplicatively propagated through successive generations and can rapidly become greatly magnified.
Original answer
Each child has a birth order: firstborn, second born, and so on. Assuming equal probabilities of male and female births and no correlations among the genders, the Weak Law of Large Numbers asserts there will be close to a 1:1 ratio of firstborn females to males. For the same reason there will be close to a 1:1 ratio of second born females to males, and so on. Because these ratios are constantly 1:1, the overall ratio must be 1:1 as well, regardless of what the relative frequencies of birth orders turn out to be in the population.
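For readers who want to check the $4/5$ figure numerically, here is a minimal R sketch of the specific model above (half of the families bear girls with probability $2/3$, the other half with probability $1/3$, and every family stops at its first girl); it is illustrative only, not part of the original derivation:

set.seed(1)
N <- 100000                          # families per group, so 2N families in all
p <- c(rep(2/3, N), rep(1/3, N))     # each family's probability of a girl at every birth
boys  <- rgeom(length(p), prob = p)  # boys born before the family's first girl
girls <- rep(1, length(p))           # every family ends with exactly one girl
sum(girls) / sum(boys)               # close to 4/5, as derived above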
5,252
Expected number of ratio of girls vs boys birth
The birth of each child is an independent event with P=0.5 for a boy and P=0.5 for a girl. The other details (such as the family decisions) only distract you from this fact. The answer, then, is that the ratio is 1:1. To expound on this: imagine that instead of having children, you're flipping a fair coin (P(heads)=0.5) until you get a "heads". Let's say Family A flips the coin and gets the sequence of [tails, tails, heads]. Then Family B flips the coin and gets a tails. Now, what's the probability that the next will be heads? Still 0.5, because that's what independent means. If you were to do this with 1000 families (which means 1000 heads came up), the expected total number of tails is 1000, because each flip (event) was completely independent. Some things are not independent, such as the sequence within a family: the probability of the sequence [heads, heads] is 0 (a family stops flipping after its first heads), whereas [tails, tails] has probability 0.25. But since the question isn't asking about this, it's irrelevant.
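A quick, purely illustrative check of the coin-flip analogy in R: with 1000 families each flipping until the first head, the total number of tails should come out near 1000.

set.seed(42)
tails <- rgeom(1000, prob = 0.5)  # tails flipped before the first head, one entry per family
sum(tails)                        # typically close to 1000 (the number of heads is exactly 1000)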
5,253
Expected number of ratio of girls vs boys birth
Imagine tossing a fair coin until you observe a head. How many tails do you toss? $P(0 \text{ tails}) = \frac{1}{2}, P(1 \text{ tail}) = (\frac{1}{2})^2, P(2 \text{ tails}) = (\frac{1}{2})^3, ...$ The expected number of tails is easily calculated* to be 1. The number of heads is always 1. * if this is not clear to you, see 'outline of proof' here
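For completeness, the calculation goes through with the standard identity $\sum_{k\ge 1} k x^{k} = \frac{x}{(1-x)^{2}}$ for $|x|<1$: $$E(\text{tails}) = \sum_{k=0}^{\infty} k\left(\tfrac{1}{2}\right)^{k+1} = \tfrac{1}{2}\sum_{k=1}^{\infty} k\left(\tfrac{1}{2}\right)^{k} = \tfrac{1}{2}\cdot\frac{1/2}{(1-1/2)^{2}} = 1,$$ so on average each family contributes one head (girl) and one tail (boy).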
5,254
Expected number of ratio of girls vs boys birth
Couples with exactly one girl and no boys are the most common
The reason this all works out is because the probability of the one scenario in which there are more girls is much larger than the scenarios where there are more boys. And the scenarios where there are lots more boys have very low probabilities. The specific way it works itself out is illustrated below:

NumberOfChildren   Probability   Girls   Boys
1                  0.5           1       0
2                  0.25          1       1
3                  0.125         1       2
4                  0.0625        1       3
...                ...           ...     ...

NumberOfChildren   Probability   Girls*probability   Boys*probability
1                  0.5           0.5                 0
2                  0.25          0.25                0.25
3                  0.125         0.125               0.25
4                  0.0625        0.0625              0.1875
5                  0.03125       0.03125             0.125
...                ...           ...                 ...
n                  1/2^n         1/(2^n)             (n-1)/(2^n)

You can pretty much see where this is going at this point: the totals for the girls and the boys are both going to add up to one.
Expected girls from one couple ${}=\sum_{n=1}^\infty\left(\frac{1}{2^n}\right)=1$
Expected boys from one couple ${}=\sum_{n=1}^\infty\left(\frac{n-1}{2^n}\right)=1$
Limit solutions from Wolfram

Any birth, whatever family it is in, has a 50:50 chance of being a boy or a girl
This all makes intrinsic sense because (try as couples might) you can't control the probability of a specific birth being a boy or a girl. It doesn't matter whether a child is born to a couple with no children or a family of a hundred boys; the chance is 50:50, so if each individual birth has a 50:50 chance then you should always get half boys and half girls. And it doesn't matter how you shuffle the births between families; you're not going to affect that.

This works for any[1] rule
Due to the 50:50 chance for any birth, the ratio will end up as 1:1 for any (reasonable[1]) rule you can come up with. For example, the similar rule below also works out even.
Couples stop having children when they have a girl, or have two children

NumberOfChildren   Probability   Girls   Boys
1                  0.5           1       0
2                  0.25          1       1
2                  0.25          0       2

In this case the total expected children is more easily calculated:
Expected girls from one couple ${}=0.5\cdot1 + 0.25\cdot1 =0.75$
Expected boys from one couple ${}=0.25\cdot1 + 0.25\cdot2 =0.75$

[1] As I said, this works for any reasonable rule that could exist in the real world. An unreasonable rule would be one in which the expected number of children per couple was infinite. For example, "Parents only stop having children when they have twice as many boys as girls"; we can use the same techniques as above to show that this rule gives an infinite expected number of children:

NumberOfChildren   Probability   Girls   Boys
3                  0.125         1       2
6                  1/64          2       4
9                  1/512         3       6
3m                 1/(3m)^2      m       2m

We can then find the expected proportion of parents with a finite number of children:
Expected proportion of parents with finite children ${}=\sum_{m=1}^\infty\frac{1}{(3m)^2}=\frac{\pi^2}{54}=0.18277\ldots$
Limit solutions from Wolfram
So from that we can establish that 82% of parents would have an infinite number of children; from a town planning point of view this would probably cause difficulties and shows that this condition couldn't exist in the real world.
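A quick numerical check of the expectations above in R (an illustrative sketch; the infinite sums are truncated at n = 60, which is more than enough at double precision):

n <- 1:60
c(girls = sum(1 / 2^n),         # sum of 1/2^n     -> 1
  boys  = sum((n - 1) / 2^n))   # sum of (n-1)/2^n -> 1

# The second rule (stop at the first girl or after two children):
c(girls = 0.5 * 1 + 0.25 * 1,   # 0.75
  boys  = 0.25 * 1 + 0.25 * 2)  # 0.75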
5,255
Expected number of ratio of girls vs boys birth
You can also use simulation:

p <- 0                                  # total number of births across all families
for (i in 1:10000) {                    # 10000 families
  a <- 0
  while (a != 1) {                      # stops when having a girl
    a <- as.numeric(rbinom(1, 1, 0.5))  # simulation of a new birth with probability 0.5
    p <- p + 1                          # count the birth
  }
}
(p - 10000) / 10000                     # ratio of boys to girls (there are exactly 10000 girls)
5,256
Expected number of ratio of girls vs boys birth
Mapping this out helped me better see how the ratio of the birth population (assumed to be 1:1) and the ratio of the population of children would both be 1:1. While some families would have multiple boys but only one girl, which initially led me to think there would be more boys than girls, the number of those families would not be greater than 50% and would diminish by half with each additional child, while the number of one-girl-only families would be 50%. The number of boys and girls would thus balance each other out. See the totals of 175 at the bottom.
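To make that tally concrete, here is a minimal R sketch of the expected counts for a hypothetical cohort of 128 families (the cohort size is arbitrary, and fractional families simply stand in for expected values): at every level of the tree, half of the remaining families have a girl and stop while the other half have a boy and continue, so the girl and boy tallies stay equal.

families <- 128            # hypothetical starting cohort
girls <- 0; boys <- 0
still_trying <- families
while (still_trying >= 1) {
  girls <- girls + still_trying / 2  # half of the remaining families have a girl and stop
  boys  <- boys  + still_trying / 2  # the other half have a boy and try again
  still_trying <- still_trying / 2
}
c(girls = girls, boys = boys)        # equal totals, each approaching 128 as the tree deepens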
5,257
Expected number of ratio of girls vs boys birth
What you got was the simplest, and a correct answer. If the probability of a newborn child being a boy is p, and children of the wrong gender are not met by unfortunate accidents, then it doesn't matter if the parents make decisions about having more children based on the gender of the child. If the number of children is N and N is large, you can expect about p * N boys. There is no need for a more complicated calculation. There are certainly other questions, like "what is the probability that the youngest child of a family with children is a boy", or "what is the probability that the oldest child of a family with children is a boy". (One of these has a simple correct answer, the other has a simple wrong answer and getting a correct answer is tricky).
5,258
Expected number of ratio of girls vs boys birth
Let $\Omega=\{(G),(B,G),(B,B,G),\dots\}$ be the sample space and let $X\colon \Omega\longrightarrow\mathbb{R};\ \omega\mapsto\vert\omega\vert-1$ be the random variable that maps each outcome, $\omega$, onto the number of boys it involves. The expected value of boys, $E(X)$, then comes down to $$E(X)=\sum_{n=1}^\infty(n-1)\cdot0.5^n=1.$$ Trivially, the expected value of girls is 1. So the ratio is 1, too.
5,259
Expected number of ratio of girls vs boys birth
It's a trick question. The ratio stays the same (1:1). The right answer is that the policy does not affect the birth ratio, but it does affect the number of children per family, which is limited to an average of 2 births per family. This is the kind of question you might find on a logic test. The answer is not about the birth ratio; that's a distraction. This is not a probability question, but a cognitive reasoning question. Even if you answered "1:1 ratio", you still failed the test if you did not notice that the real effect is on family size.
5,260
Expected number of ratio of girls vs boys birth
I am showing the code I wrote for a Monte Carlo simulation (500x1000 families) using MATLAB software. Please scrutinise the code to make sure I did not make a mistake. The result is generated and plotted below. It shows the simulated girl birth probability has very good agreement with the underlying natural birth probability, regardless of the stopping rule, for a range of natural birth probabilities. Playing around with the code, it is easier to understand one point I didn't quite grasp before---as others point out, the stopping rule is a distraction. The stopping rule only affects the number of families given a fixed population, or, from another point of view, the number of child births given a fixed number of families. The gender is solely determined by the dice roll, and hence the ratio or probability (which is independent of the number of children) will depend solely on the natural boy:girl birth ratio.

testRange=0.45:0.01:0.55;
N=uint32(100000);    %Used to approximate probability distribution
M=1000;              %Number of families
L=500;               %Monte Carlo repetitions
Nfamily=zeros(length(testRange),1);
boys=zeros(length(testRange),1);
girls=zeros(length(testRange),1);
for l = 1:L
    j=1;                      %Index variable for the different bgratio
    for bgratio=testRange
        k=1;                  %Index variable for family in each run (temp family id)
        vec=zeros(N,1);
        vec(1:N*bgratio,1)=1; %Approximate boy:girl population for dice roll; 1 = boy
        vec=vec(randperm(N)); %Random permutation, technically not necessary
                              %due to randi used later, just to be safe
        bog = vec(randi(N));  %boy or girl? (God's dice roll)
        while k<M             %For M families...
            if bog == 1                    %if boy:
                boys(j) = boys(j)+1;       %total global boys tally
            else
                girls(j)=girls(j)+1;       %total global girls tally
                %Family stops bearing children
                Nfamily(j) = Nfamily(j)+1; %total global family tally
                k=k+1;                     %temp family id
                %Next family...
            end
            bog=vec(randi(N)); %Sample next gender (God's dice roll)
        end
        j=j+1;                 %Index variable for the different bgratio
    end
end
figure;
scatter(testRange,girls./(boys+girls))
hold on
line([0 1],[0 1],'LineStyle','--','Color','k')
axis([0.44 0.56 0.44 0.56])
5,261
Expected number of ratio of girls vs boys birth
Let the random variable denoting the $i^{th}$ child in the country be $X_i$, taking on values 1 and 0 if the child is a boy or girl respectively. Assume that the marginal probability that each birth is a boy or girl is $0.5$. The expected number of boys in the country = $E[\sum_i X_i] = \sum_i E[X_i] = 0.5 n$ (where $n$ is the number of children in the country). Similarly the expected number of girls = $E[\sum_i (1- X_i)] = \sum_i E[1-X_i] = 0.5 n$. The independence of the births is irrelevant for the calculation of expected values. Apropos @whuber's answer, if there is variation of the marginal probability across families, the ratio becomes skewed towards boys: families with a higher probability of boys end up with more children than families with a lower probability, which inflates the expected-value sum for the boys.
5,262
Expected number of ratio of girls vs boys birth
I independently also programmed a simulation in MATLAB, prior to seeing what others have done. Strictly speaking it is not an MC because I only run the experiment once. But once is sufficient to obtain results. Here is what my simulation yields. I don't take a stand on the probability of births being p=0.5 as a primitive. I let the birth probability vary over a range of Pr(Boys=1)=0.25:0.05:0.75. My results show that as the probability deviates from p=0.5, the sex ratio is different from 1: in expectation the sex ratio is simply the ratio of the probability of a boy's birth to the probability of a girl's birth. That is, this is a geometric random variable as identified previously by @månst. This is what I believe the original poster was intuiting. My results closely mimic what the above poster with the MATLAB code has done, matching the sex ratios at the 0.45, 0.50, and 0.55 probabilities that a boy is born. I present mine as I take a slightly different approach to get at the results with faster code. To accomplish the comparison I omitted the random-permutation step vec=vec(randperm(N)), since it is superfluous, as its own comment notes, and does not affect the results. I post my code:

clear all;
rng('default')
prob_of_boy = 0.25:0.05:0.75;
prob_of_girls = 1 - prob_of_boy;
iterations = 200;
sex_ratio = zeros(length(prob_of_boy),1);
prob_of_girl_est = zeros(length(prob_of_boy),1);
rounds_of_reproduction = zeros(length(prob_of_boy),1);
for p=1:length(prob_of_boy)
    pop = 1000000;
    boys = zeros(iterations,1);
    girls = zeros(iterations,1);
    prob_of_girl = zeros(iterations,1);
    for i=1:iterations
        x = rand(pop,1);
        x(x<prob_of_boy(p))=1;
        %count the number of boys and girls in this round
        num_boys = sum(x(x==1));
        boys(i) = num_boys;
        girls(i) = pop - num_boys;
        prob_of_girl(i) = girls(i)/(pop);
        %Only families that had a boy continue to reproduce
        x = x(x==1);      %new population of reproducing parents
        pop = length(x);
        %check that there are no more boys
        if num_boys==0
            boys(i+1:end)=[];
            girls(i+1:end)=[];
            prob_of_girl(i+1:end)=[];
            break
        end
    end
    prob_of_girl_est(p) = mean(prob_of_girl(prob_of_girl~=0));
    sex_ratio(p) = sum(boys)/sum(girls);
    rounds_of_reproduction(p) = length(boys);
end

scatter(prob_of_girls,prob_of_girl_est)
hold on
title('Est. vs. True Probability of a Girl Birth')
ylabel('Est. Probability of Girl Birth')
xlabel('True Probability of Girl Birth')
line([0 1],[0 1],'LineStyle','--','Color','k')
axis([0.2 0.8 0.2 0.8])

scatter(prob_of_girls,sex_ratio)
hold on
title('Sex Ratio as a function of Girls')
xlabel('Probability of Girls Birth')
ylabel('Sex Ratio: $\frac{E(Boys)}{E(Girls)}$','interpreter','latex')

scatter(prob_of_girls,rounds_of_reproduction)
hold on
title('Rounds of Reproduction a function of Girls')
xlabel('Probability of Girls Birth')
ylabel('Rounds of Reproduction')

The following graph is expected given the strong law of large numbers. I reproduce it, but the graph that matters is the second graph. Here, a population probability other than 0.5 for the birth of either sex of a child will alter the sex ratio in the overall population. Assuming that births are independent (but not the choice to keep reproducing), in each round of conditional reproduction the population probability governs the overall make-up of the outcomes of boy and girl births. So as others have mentioned, the stopping rule in the problem is inconsequential to the population outcome, as answered by the poster who identified this as the geometric distribution.
For completeness, what the stopping rule does affect is the number of rounds of reproduction in the population. Since I only run the experiment once, the graph is a bit jagged. But the intuition is there: for a given population size, as the probability of a girl's birth increases we see that families need fewer rounds of reproduction to obtain their desired girl before the entire population stops reproducing (obviously the number of rounds will depend on the population size, since it mechanistically increases the likelihood that a family will have, for example, 49 boys before they get their first girl).

The comparison between my calculated sex ratios:

[sex_ratio prob_of_boy']
    0.3327    0.2500
    0.4289    0.3000
    0.5385    0.3500
    0.6673    0.4000
    0.8186    0.4500
    1.0008    0.5000
    1.2224    0.5500
    1.5016    0.6000
    1.8574    0.6500
    2.3319    0.7000
    2.9995    0.7500

and those from the previous poster with the MATLAB code:

[boys./girls testRange']
    0.8199    0.4500
    0.8494    0.4600
    0.8871    0.4700
    0.9257    0.4800
    0.9590    0.4900
    1.0016    0.5000
    1.0374    0.5100
    1.0836    0.5200
    1.1273    0.5300
    1.1750    0.5400
    1.2215    0.5500

They are equivalent results.
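Both tables line up with the closed-form ratio implied by the geometric model, E(boys)/E(girls) = p/(1-p), where p is the probability of a boy. A two-line R check of that closed form (illustrative only):

p_boy <- seq(0.25, 0.75, by = 0.05)
round(cbind(p_boy, ratio = p_boy / (1 - p_boy)), 4)  # e.g. 0.45 -> 0.8182, 0.55 -> 1.2222, 0.75 -> 3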
5,263
Expected number of ratio of girls vs boys birth
It depends on the number of families. Let $X$ be the number of children in a family; it is a geometric random variable with $p=0.5$, i.e., $$ P(X = x) = 0.5^x, \quad x=1,2,3,\ldots $$ which implies $E(X) = 2$. Suppose there are $N$ families in the country; the girl ratio is $$ \frac{N}{ \sum X_i}. $$ Since $\sum X_i /N \rightarrow E(X) = 2$ (law of large numbers), the ratio converges to 1/2 as $N \rightarrow \infty$. If there are only finitely many families, let $T$ be the total number of children of the country: $T = \sum X_i$. Then $T$ has a negative binomial distribution with pmf $$ P(T=t) = C^{t-1}_{N-1}\, 0.5^t, \quad t = N, N+1, \ldots $$ It implies $$ E\left[ \frac{N}{\sum X_i} \right] = E\left[ \frac{N}{T} \right] = \sum_{t=N}^{\infty} \frac{N}{t} C^{t-1}_{N-1}\, 0.5^t = {}_2F_1 (N, 1; N+1; -1), $$ where $_2F_1$ is the hypergeometric function. Therefore the expected girl ratio is ${}_2F_1 (N, 1; N+1; -1)$.
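For concreteness, this expectation can be evaluated numerically; by Jensen's inequality it exceeds 1/2 for any finite $N$ and tends to 1/2 as $N$ grows. A short R sketch (truncating the negative binomial sum; the cutoff of 20000 is an arbitrary but generous choice):

girl_ratio <- function(N, kmax = 20000) {
  k <- 0:kmax                                    # children in excess of the N-family minimum
  t <- N + k                                     # total number of children T
  sum((N / t) * dnbinom(k, size = N, prob = 0.5))
}
sapply(c(1, 2, 5, 20, 100), girl_ratio)
# approximately 0.693, 0.614, 0.549, 0.512, 0.502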
5,264
Would a Bayesian admit that there is one fixed parameter value?
IMHO "yes"! Here is one of my favorite quotes by Greenland (2006: 767): It is often said (incorrectly) that ‘parameters are treated as fixed by the frequentist but as random by the Bayesian’. For frequentists and Bayesians alike, the value of a parameter may have been fixed from the start or may have been generated from a physically random mechanism. In either case, both suppose it has taken on some fixed value that we would like to know. The Bayesian uses formal probability models to express personal uncertainty about that value. The ‘randomness’ in these models represents personal uncertainty about the parameter’s value; it is not a property of the parameter (although we should hope it accurately reflects properties of the mechanisms that produced the parameter). Greenland, S. (2006). Bayesian perspectives for epidemiological research: I. Foundations and basic methods. International Journal of Epidemiology, 35(3), 765–774.
5,265
Would a Bayesian admit that there is one fixed parameter value?
The Bayesian conception of a probability is not necessarily subjective (cf. Jaynes). The important distinction here is that the Bayesian attempts to determine his/her state of knowledge regarding the value of the parameter by combining a prior distribution for its plausible values with the likelihood, which summarises the information contained in some observations. Hence, as a Bayesian, I'd say that I am happy with the idea that the parameter has a true value, which is not known exactly, and the purpose of a posterior distribution is to summarise what I do know about its plausible values, based on my prior assumptions and the observations. Now, when I make a model, the model is not reality. So in some cases the parameter in question does exist in reality (e.g. the average weight of a wombat) and in some cases it doesn't (e.g. the true value of a regression parameter - the regression model is only a model of the outcome of the physical laws that govern the system, which may not actually be captured fully by the regression model). So to say that there is one true fixed parameter value in the real world is not necessarily true. On the flip side, I would suggest that most frequentists would say there is one true value for the parameter, but they don't know what it is either; however, they have estimators for it and confidence intervals on their estimates which (in a sense) quantify their uncertainty regarding the plausibility of different values (though the frequentist conception of probability prevents them from expressing this as directly).
5,266
Would a Bayesian admit that there is one fixed parameter value?
To your main point, in Bayesian Data Analysis (3rd ed., 93), Gelman also writes From the perspective of Bayesian data analysis, we can often interpret classical point estimates as exact or approximate posterior summaries based on some implicit full probability model. In the limit of large sample size, in fact, we can use asymptotic theory to construct a theoretical Bayesian justification for classical maximum likelihood inference. So perhaps it's not Bayesians who should "admit" that there are, in truth, single real parameter values, but frequentists who should appeal to Bayesian statistics to justify their estimation procedures! (I say this with tongue firmly in cheek.) As an aside, I object to the blanket statement that Bayesian statistics is premised on subjective probability, and implication that Bayes is subjective while other inferential paradigms are not. That is certainly one argument that can be posed, perhaps also including the perspective of the "coherence of bets" argument, but see Gelman who here defines "Bayesian" as a statistician that uses the posterior distribution $\Pr(\theta|y)$, and here where he argues against overly restrictive definitions. But the idea that there are single parameters in nature or in social systems is just a simplifying assumption. There might be some ornate process generating observable results, but discovering that system is incredibly complicated; supposing that there is a single fixed parameter value simplifies the problem dramatically. I think that this cuts to the core of your question: Bayesians shouldn't have to "admit" to making this simplification any more than Frequentists should.
5,267
Would a Bayesian admit that there is one fixed parameter value?
Do you think that there is a single "true fixed parameter" for something like the contribution of milk drinking to a child's growth? Or for the decrease in a tumor's size based on the amount of chemical X you inject into a patient's body? Pick any model you're familiar with and ask yourself if you actually believe that there is one true, universal, precise and fixed value for each parameter, even in theory. Ignore measurement error; just look at your model as if all measurements were perfectly accurate and infinitely precise. Given your model, do you think that each parameter realistically has a specific point value?

The fact that you have a model indicates that you are leaving some details out. Your model will have an amount of imprecision because you're averaging over the parameters/variables that you've left out in order to make a model -- a simplified representation of reality. (Just as you don't make a 1:1 map of the planet, complete with all details, but rather a 1:10000000 map, or some such simplification. The map is a model.) Given that you're averaging across the left-out variables, the parameters for the variables you include in your model will be distributions, not point values.

That's only part of the Bayesian philosophy -- I'm ignoring theoretical uncertainty, measurement uncertainty, priors, etc. -- but it seems to me that the idea that your parameters have distributions makes intuitive sense, in the same way that descriptive statistics have a distribution.
5,268
Would a Bayesian admit that there is one fixed parameter value?
"But do Bayesians theoretically acknowledge that there is one true fixed parameter value out in the 'real world'?" In my opinion, the answer is yes. There is an unknown value $\theta_0$ of the parameter, and the prior distribution describes our knowledge/uncertainty about it. In the Bayesian mathematical modelling, $\theta_0$ is considered as the realization of a random variable following the prior distribution.
5,269
Would a Bayesian admit that there is one fixed parameter value?
If we go and couple Bayesianism with a deterministic universe (before you say anything with the word 'quantum' in it, humour me and recall that this is not physics.stackexchange) we get some interesting results. Making our assumptions explicit: we have a Bayesian agent being part of and observing a deterministic universe. The agent has limited computational resources.

Now, the deterministic universe may be one where atoms are Newtonian little billiard balls. It may be entirely non-quantum. Let's say it is. The agent now flips a fair coin. Think about that for a second: what constitutes a fair coin in a deterministic universe? A coin that has a 50/50 probability ratio? But it is deterministic! With enough computing power you can calculate exactly how the coin will land, purely by simulating a model of a coin being flipped in the same manner. In a deterministic universe a fair coin would be a disc of metal with uniform density. No force compels it to spend more time with one face down than the other (think about how weighted dice function).

So the agent flips a fair coin. Yet the agent is not quite powerful enough. It does not have sharp enough eyes to measure how the coin spins when flipped; it sees but a blur. And so it says "This coin will land heads with 50% probability." Lack of information leads to probabilities.

We may look at the phase space of how a coin is thrown: a large multidimensional coordinate system with axes pertaining to direction of throw, force of throw, spin of the coin, speed and direction of wind and so on. A single point in this space corresponds to a single possible coin flip. If we ask the agent from before to colour in the coordinate system with a greyscale gradient corresponding to the agent's assignment of probability of heads for every given throw, it will most likely colour it all a uniform shade of grey. If we then gradually give it more powerful internal computers with which to compute probabilities of heads, it will be able to make more and more discerning colourings. When we finally give it the most powerful internal computer, making it omniscient, it will effectively paint a strange checkerboard. Fair coins are not made of probabilities; they are made of metal. Probabilities exist only in computational structures. So says the Bayesian.
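A tiny simulation can make this concrete. The sketch below is my own illustration, not part of the answer: the "coin" is a toy deterministic rule driven by an initial spin rate, and the bounded agent only sees that spin rate after heavy rounding. From the agent's coarse point of view, the perfectly deterministic mechanism behaves like a 50/50 coin.

```python
import numpy as np

rng = np.random.default_rng(0)

def deterministic_outcome(spin_rate):
    # Fully deterministic toy rule: the face showing depends only on how many
    # "half-turns" the coin completes (a stand-in for Newtonian dynamics).
    return "heads" if int(spin_rate * 100) % 2 == 0 else "tails"

spins = rng.uniform(10.0, 20.0, size=100_000)   # the true initial conditions

# An omniscient agent computes every outcome exactly.
exact = np.array([deterministic_outcome(s) for s in spins])

# A bounded agent only sees the spin rate rounded to the nearest integer,
# so many distinct initial conditions look identical to it.
coarse_view = np.round(spins)
for value in np.unique(coarse_view)[:3]:
    subset = exact[coarse_view == value]
    print(f"observed spin ~ {value:.0f}: P(heads) = {np.mean(subset == 'heads'):.3f}")

print("overall fraction of heads:", np.mean(exact == "heads"))
```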
5,270
Would a Bayesian admit that there is one fixed parameter value?
I am not sure it is a relevant question, because it requires more definition than the math itself requires. Because the math itself does not require it, I am not sure asking which Bayesian interpretation is correct has a lot of meaning.

Imagine two parallel universes. They are identical in the sense that the sequence of physical events in both universes unfolds in the same way. In other words, of the sample space $\chi$, the Universe $U\subset\chi$ is the same in every respect, $U_1=U_2$.

Now, in Universe One, every observer believes that Nature draws fixed points, $\theta_0\in\Theta$, from the parameter space at the start of time, $t=0$. An observer denoted $i$ describes their initial uncertainty about its location with a probability distribution, $\pi_i(\theta_0)$, and upon seeing the data $X_i\subset{U}$ revises that uncertainty to $\pi_i(\theta_0|X_i)$.

Now, in Universe Two, every observer believes that Nature draws values for parameters, $\theta_t$, from a distribution which is believed to be approximated by $\pi_i(\theta_t)$ at time $t=\tau,$ which is when observer $i$ gathers the data. The draws are random. This differs from the concept of heteroskedasticity or stationary variables; such a person would define either in a different manner. Upon seeing the data $X_\tau\subset{U}$ at time $\tau,$ they use this additional information to improve the description of that distribution of $\theta_t$ to $\pi_i(\theta_t|X_\tau)$.

Do note that in the second case, it isn't really helpful to bring in Frequentist definitions of things like time series, heteroskedasticity, or stationary variables, because their ideas are predicated on fixed points. Also, there is nothing in Universe Two that prohibits the distribution from being a Dirac delta function. However, nothing prevents a prior in Universe One from being one either, and as such, one could completely miss $\theta_0$.

If you drop the needless subsidiary notation, you end up with $\pi(\theta|X)\propto\pi(\theta)f(X|\theta)$. The math provides no mechanism to distinguish a world with fixed but unobservable parameters, together with a description of our uncertainty about them, from a world where the parameters truly are random variables. Which is it? Who knows?

The fact that Frequentist methods in some sense "work" doesn't provide a solution either. There is nothing about countably additive sets that makes them better than finitely additive sets. It is true that there are use cases where only a null hypothesis method or only a Bayesian method could possibly work. They are not the general case. Interestingly, in that handful of cases where only one method could be thought of as suitable, the problem isn't resolved. For example, if the critical element of your method boils down to a sharp null hypothesis such as $$H_0:\beta_1,\beta_2,\dots,\beta_k=0$$ it does depend on a mathematical conditioning of those parameters at zero, as if that were the true fixed point. Nature isn't required to listen. Indeed, if nature were sometimes drawing parameters instead and causing false positives or negatives, the method would not be able to tell. Likewise, if the critical element of your method is setting gambling odds, there is no way to distinguish either world. You have to use a Bayesian method, but either conceptualization will work equally well.
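The point that the mathematics cannot tell the two universes apart can be illustrated with a small simulation. This sketch is mine, not the answerer's, and the Beta-Binomial setup, sample size, and parameter values are arbitrary choices: one data set comes from a single fixed parameter, the other from a parameter freshly drawn from the prior, and both go through the identical conjugate update. Nothing in the resulting posteriors reveals which world produced which data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a, b, n = 2.0, 2.0, 50            # Beta(2, 2) prior and 50 Bernoulli trials

# Universe One: Nature fixed theta_0 once and for all.
theta_fixed = 0.6
x_fixed = rng.binomial(n, theta_fixed)

# Universe Two: Nature draws theta from the prior before generating the data.
theta_drawn = rng.beta(a, b)
x_drawn = rng.binomial(n, theta_drawn)

for label, x in [("fixed-theta world", x_fixed), ("random-theta world", x_drawn)]:
    posterior = stats.beta(a + x, b + n - x)   # the same conjugate update either way
    lo, hi = posterior.ppf([0.025, 0.975])
    print(f"{label}: x = {x}, posterior mean = {posterior.mean():.3f}, "
          f"95% interval = ({lo:.3f}, {hi:.3f})")
```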
5,271
Would a Bayesian admit that there is one fixed parameter value?
There are also non-subjective priors, for example the Jeffreys prior (which is often improper); it is derived from the Fisher information matrix rather than from anyone's beliefs, so in that case the prior is not subjective.
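As a concrete illustration of that relation (a standard textbook calculation added here, not part of the original answer): for a Bernoulli observation with success probability $\theta$, the Fisher information is $I(\theta)=1/\{\theta(1-\theta)\}$, and the Jeffreys prior $\pi_J(\theta)\propto\sqrt{I(\theta)}$ is the Beta$(\tfrac12,\tfrac12)$ density, which happens to be proper; for the mean of a normal with known variance the same recipe gives a flat, improper prior.

$$
I(\theta)=\mathbb{E}\!\left[-\frac{\partial^2}{\partial\theta^2}\log f(X\mid\theta)\right]
=\mathbb{E}\!\left[\frac{X}{\theta^2}+\frac{1-X}{(1-\theta)^2}\right]
=\frac{1}{\theta(1-\theta)},
\qquad
\pi_J(\theta)\propto\theta^{-1/2}(1-\theta)^{-1/2}.
$$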
5,272
What are the cons of Bayesian analysis?
I'm going to give you an answer. Four drawbacks actually. Note that none of these are actually objections that should drive one all the way to frequentist analysis, but there are cons to going with a Bayesian framework:

1. Choice of prior. This is the usual carping for a reason, though in my case it's not the usual "priors are subjective!" but that coming up with a prior that's well reasoned and actually represents your best attempt at summarizing a prior is a great deal of work in many cases. An entire aim of my dissertation, for example, can be summed up as "estimate priors".

2. It's computationally intensive. Especially for models involving many variables. For a large dataset with many variables being estimated, it may very well be prohibitively computationally intensive, especially in certain circumstances where the data cannot readily be thrown onto a cluster or the like. Some of the ways to resolve this, like augmented data rather than MCMC, are somewhat theoretically challenging, at least to me.

3. Posterior distributions are somewhat more difficult to incorporate into a meta-analysis, unless a frequentist, parametric description of the distribution has been provided.

4. Depending on what journal the analysis is intended for, either the use of Bayes generally, or your choice of priors, gives your paper slightly more points where a reviewer can dig into it. Some of these are reasonable reviewer objections, but some just stem from the nature of Bayes and how familiar people in some fields are with it.

None of these things should stop you. Indeed, none of these things have stopped me, and hopefully doing Bayesian analysis will help address at least number 4.
5,273
What are the cons of Bayesian analysis?
I am a Bayesian by inclination, but generally a frequentist in practice. The reason for this is usually that performing the full Bayesian analysis properly (rather than e.g. MAP solutions) for the types of problem I am interested in is tricky and computationally intensive. Often a full Bayesian analysis is required to really see the benefit of this approach over frequentist equivalents. For me, the trade-off is basically a choice between Bayesian methods that are conceptually elegant and easy to understand, but difficult to implement in practice and frequentist methods, which are conceptually awkward and subtle (try explaining how to interpret a hypothesis test accurately or why there isn't a 95% probability that the true value lies in a 95% confidence interval), but which are well suited to easily implemented "cookbook" solutions. Horses for courses.
5,274
What are the cons of Bayesian analysis?
From a purely practical point of view, I am not a fan of methods which require lots of computation (I am thinking of Gibbs samplers and MCMC, often used in the Bayesian framework, but this also applies to e.g. bootstrap techniques in frequentist analysis). The reason is that any kind of debugging (testing the implementation, looking at robustness with respect to assumptions, etc.) itself requires a bunch of Monte Carlo simulations, and you are quickly in a computational morass. I prefer the underlying analysis techniques to be fast and deterministic, even if they are only approximate. This is a purely practical objection, of course: given infinite computing resources, this objection would disappear. And it only applies to a subset of Bayesian methods. Also, this is more of a preference given my workflow.
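For contrast, here is the kind of fast, deterministic computation the answer has in mind (my own sketch; the conjugate normal model with known variance and the numbers are hypothetical). The posterior is available in closed form, so there is nothing to tune, monitor, or re-run:

```python
import numpy as np

# Conjugate Normal-Normal update with known observation variance:
# the exact posterior for the mean is available in closed form.
mu0, tau2 = 0.0, 10.0 ** 2                 # prior mean and variance (my choice)
sigma2 = 4.0                               # assumed known observation variance
y = np.array([3.1, 2.4, 4.0, 3.3, 2.8])    # hypothetical data

n, ybar = len(y), y.mean()
post_var = 1.0 / (1.0 / tau2 + n / sigma2)
post_mean = post_var * (mu0 / tau2 + n * ybar / sigma2)
print(f"posterior for the mean: N({post_mean:.3f}, {post_var:.3f})")
```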
5,275
What are the cons of Bayesian analysis?
Sometimes there's a simple and natural "classical" solution to a problem, in which case a fancy Bayesian method (especially with MCMC) would be overkill. Further, in variable selection type problems, it can be more straightforward and clear to consider something like a penalized likelihood; there may exist a prior on models that gives an equivalent Bayesian approach, but how the prior corresponds to the ultimate performance can be less clear than the relationship between the penalty and performance. Finally, MCMC methods often require an expert both for assessing convergence/mixing and for making sense of the results.
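The penalty/prior correspondence mentioned above can be made explicit with a standard identity (added here for illustration; it is not part of the original answer). In a linear model with Gaussian noise of variance $\sigma^2$ and independent $N(0,\tau^2)$ priors on the coefficients, the posterior mode is exactly the ridge estimate with penalty $\lambda=\sigma^2/\tau^2$; a Laplace prior gives the lasso penalty in the same way.

$$
-\log p(\beta\mid y)=\frac{1}{2\sigma^2}\lVert y-X\beta\rVert^2+\frac{1}{2\tau^2}\lVert\beta\rVert^2+\text{const}
\quad\Longrightarrow\quad
\hat\beta_{\text{MAP}}=\arg\min_\beta\;\lVert y-X\beta\rVert^2+\frac{\sigma^2}{\tau^2}\lVert\beta\rVert^2 .
$$

This also illustrates the answer's point: the mapping $\lambda=\sigma^2/\tau^2$ is simple here, but for more exotic penalties the implied prior, and how it drives performance, is much less transparent.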
5,276
What are the cons of Bayesian analysis?
I am relatively new to Bayesian methods, but one thing that irks me is that, while I understand the rationale of priors (i.e. science is a cumulative endeavour, so for most questions there is some amount of previous experience/thinking that should inform your interpretation of the data), I dislike that the Bayesian approach forces you to push subjectivity to the beginning of the analysis, rendering the end result contingent. I believe this is problematic for two reasons: 1) some less well versed readers won't even pay attention to the priors, and will interpret Bayesian results as non-contingent; 2) unless the raw data is available, it is hard for readers to reframe the results in terms of their own subjective priors. This is why I prefer likelihood ratios: they put the subjectivity at the end by simply quantifying relative evidence values and leaving it up to the reader to apply their own subjective criteria to determine whether they believe the evidence is sufficiently in favor of either model involved in the ratio. (Astute critics will note that even the likelihood ratio is "contingent" in the sense that it is contingent on the parameterization of the models being compared; however, this is a feature shared by all methods, Frequentist, Bayesian and Likelihoodist.)
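For concreteness, a minimal sketch of the kind of likelihood-ratio summary the answer has in mind (Python; the binomial data and the two competing parameter values are hypothetical, chosen only to illustrate the calculation):

```python
from scipy import stats

x, n = 14, 20                          # hypothetical data: 14 successes in 20 trials

# Likelihood of the data under two fully specified competing models.
lik_A = stats.binom.pmf(x, n, 0.5)     # model A: success probability 0.5
lik_B = stats.binom.pmf(x, n, 0.7)     # model B: success probability 0.7

print(f"likelihood ratio L(B)/L(A) = {lik_B / lik_A:.2f}")
# The reader decides for themselves whether that ratio is "sufficiently in
# favor" of model B -- the subjective step stays at the end of the analysis.
```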
5,277
What are the cons of Bayesian analysis?
Decision theory is the underlying theory on which statistics operates. The problem is to find a good (in some sense) procedure for producing decisions from data. However, there's rarely an unambiguous choice of procedure, in the sense of minimizing expected loss, so other criteria must be invoked to choose among them. Choosing the procedure that is Bayes with respect to some prior is one of these criteria, but it may not always be what you want. Minimax might be more important in some cases, or unbiasedness. Anyone who insists that the frequentists are wrong or the Bayesians are wrong is mostly revealing their ignorance of statistics.
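A classic worked example of how the criterion matters (standard decision-theory material, added for illustration, not from the original answer): for estimating a binomial proportion under squared-error loss, the MLE $x/n$, the posterior mean under a uniform prior, and the minimax estimator $(x+\sqrt{n}/2)/(n+\sqrt{n})$ behave differently, and none dominates the others for every true $p$. The sketch below estimates their risk functions by Monte Carlo.

```python
import numpy as np

n = 25
p = np.linspace(0.01, 0.99, 99)

def risk(estimator, p, n, n_sim=50_000, seed=0):
    """Monte Carlo estimate of the squared-error risk of x -> estimator(x, n)."""
    rng = np.random.default_rng(seed)
    x = rng.binomial(n, p[:, None], size=(p.size, n_sim))
    return np.mean((estimator(x, n) - p[:, None]) ** 2, axis=1)

mle     = lambda x, n: x / n
bayes   = lambda x, n: (x + 1) / (n + 2)                          # posterior mean, uniform prior
minimax = lambda x, n: (x + np.sqrt(n) / 2) / (n + np.sqrt(n))    # constant-risk estimator

for name, est in [("MLE", mle), ("Bayes (uniform prior)", bayes), ("minimax", minimax)]:
    r = risk(est, p, n)
    print(f"{name:22s} worst-case risk = {r.max():.5f}, risk at p = 0.5: {r[49]:.5f}")
```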
5,278
What are the cons of Bayesian analysis?
For some time I have wanted to educate myself more on Bayesian approaches to modeling to get past my cursory understanding (I have coded Gibbs samplers in graduate course work, but have never done anything real). Along the way, though, I have thought some of Brian Dennis' papers were thought-provoking and have wished I could find a Bayesian friend (the ones who weren't in the closet) to read the papers and hear their counterpoints. So, here are the papers I am referring to, but the quote I always remember is: "Being Bayesian means never having to say you're wrong."

http://faculty.washington.edu/skalski/classes/QERM597/papers/Dennis_1996.pdf
http://classes.warnercnr.colostate.edu/nr575/files/2011/01/Lele-and-Dennis-2009.pdf
5,279
What are the cons of Bayesian analysis?
"What are the open problems in Bayesian Statistics?" from the ISBA quarterly newsletter lists 5 problems with Bayesian stats from various leaders in the field, #1 being, boringly enough, "Model selection and hypothesis testing".
5,280
Are CDFs more fundamental than PDFs?
Every probability distribution on (a subset of) $\mathbb R^n$ has a cumulative distribution function, and it uniquely defines the distribution. So, in this sense, the CDF is indeed as fundamental as the distribution itself.

A probability density function, however, exists only for (absolutely) continuous probability distributions. The simplest example of a distribution lacking a PDF is any discrete probability distribution, such as the distribution of a random variable that only takes integer values. Of course, such discrete probability distributions can be characterized by a probability mass function instead, but there are also distributions that have neither a PDF nor a PMF, such as any mixture of a continuous and a discrete distribution. (Diagram shamelessly stolen from Glen_b's answer to a related question.)

There are even singular probability distributions, such as the Cantor distribution, which cannot be described even by a combination of a PDF and a PMF. Such distributions still have a well-defined CDF, though. For example, here is the CDF of the Cantor distribution, also sometimes called the "Devil's staircase": (Image from Wikimedia Commons by users Theon and Amirki, used under the CC-By-SA 3.0 license.) The CDF, known as the Cantor function, is continuous but not absolutely continuous. In fact, it is constant everywhere except on the Cantor set, which has zero Lebesgue measure but still contains uncountably many points. Thus, the entire probability mass of the Cantor distribution is concentrated on this vanishingly small subset of the real number line, yet every point in the set still individually has zero probability.

There are also probability distributions that do not have a moment-generating function. Probably the best known example is the Cauchy distribution, a fat-tailed distribution which has no well-defined moments of order 1 or higher (thus, in particular, having no well-defined mean or variance!). All probability distributions on $\mathbb R^n$ do, however, have a (possibly complex-valued) characteristic function, whose definition differs from that of the MGF only by the insertion of the imaginary unit into the exponent. Thus, the characteristic function may be regarded as being as fundamental as the CDF.
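To see that such a distribution is still a perfectly ordinary object to work with, despite having neither a PDF nor a PMF, one can sample from the Cantor distribution by drawing random base-3 digits (a standard construction; the code below is an illustrative sketch of my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cantor(n_samples, depth=40):
    """Approximate draws from the Cantor distribution: each base-3 digit is
    independently 0 or 2 with probability 1/2 (the digit 1 never occurs)."""
    digits = 2 * rng.integers(0, 2, size=(n_samples, depth))
    powers = 3.0 ** -np.arange(1, depth + 1)
    return digits @ powers

x = np.sort(sample_cantor(100_000))
ecdf = np.arange(1, x.size + 1) / x.size
# Plotting (x, ecdf) traces out the "Devil's staircase": flat over every
# removed middle-third gap, yet climbing from 0 to 1 without any jumps.
print("P(1/3 < X < 2/3) =", np.mean((x > 1 / 3) & (x < 2 / 3)))   # ~ 0: the first gap
```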
5,281
Are CDFs more fundamental than PDFs?
I believe your econometrics professor was thinking something along the following lines. Consider the function $F$ with domain $[0, 1]$ defined by $$F(x) = \frac{1}{2}x \ \text{for} \ x < \frac{1}{2} $$ $$F(x) = \frac{1}{2}x + \frac{1}{2} \ \text{for} \ x \geq \frac{1}{2} $$ This is a discontinuous function, but a completely valid CDF for some probability distribution on $[0, 1]$. Note that, under this distribution, $$ P\left(\left\{\frac{1}{2}\right\}\right) = \frac{1}{2} $$ There is no function $f$ that serves as a PDF for this distribution, even though there is a CDF. It's easy enough to check this is true in this simple example if you've seen this kind of thing before. Suppose there is such a pdf $f$; we will show it must have an impossible property, and hence cannot exist. By the definition of a PDF, we must have $$ \int_0^x f(t) dt = F(x) - F(0) = \frac{1}{2}x $$ for all $0 < x < \frac{1}{2}$. A function that integrates to a linear function must be constant (technically constant almost everywhere), so we conclude that $$ f(x) = \frac{1}{2} \ \text{for} \ x < \frac{1}{2} $$ In the same way, but integrating starting at one, moving towards zero, and ending at $x > \frac{1}{2}$, we reach the same conclusion $$ f(x) = \frac{1}{2} \ \text{for} \ x > \frac{1}{2} $$ So we have determined $f$ everywhere except for $f\left(\frac{1}{2}\right)$. But it really does not matter what $f\left(\frac{1}{2}\right)$ is: $f$ cannot have the integration property desired. Since $$ P\left(\left\{\frac{1}{2}\right\}\right) = \frac{1}{2} $$ we would need $$ \int_{\frac{1}{2} - \epsilon}^{\frac{1}{2} + \epsilon} f(t) dt \geq \frac{1}{2} $$ for every interval containing $\frac{1}{2}$. But in fact the value of any integral is unaffected by changing the value of a function at any single point, so $$ \int_{\frac{1}{2} - \epsilon}^{\frac{1}{2} + \epsilon} f(t) dt = \int_{\frac{1}{2} - \epsilon}^{\frac{1}{2} + \epsilon} \frac{1}{2} dt = \epsilon, $$ which is smaller than $\frac{1}{2}$ for small $\epsilon$. So there's no way out: a function such as $f$ cannot exist. You can recover the spirit of a PDF, but you must use more sophisticated mathematical objects, either a measure or a distribution.
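For intuition, the distribution above is just an equal-weight mixture of a Uniform$(0,1)$ and a point mass at $\tfrac12$ (this follows directly from the stated CDF). A small sketch of a sampler and an empirical check, added for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixed(n):
    # With probability 1/2 return exactly 1/2 (the atom); otherwise a Uniform(0, 1) draw.
    atom = rng.random(n) < 0.5
    u = rng.random(n)
    return np.where(atom, 0.5, u)

x = sample_mixed(1_000_000)
print("P(X = 1/2)   =", np.mean(x == 0.5))    # ~ 0.5: half the mass sits on one point
print("P(X <= 0.25) =", np.mean(x <= 0.25))   # ~ F(0.25) = 0.125
print("P(X <= 0.75) =", np.mean(x <= 0.75))   # ~ F(0.75) = 0.875
```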
5,282
Are CDFs more fundamental than PDFs?
Ilmari gives a good answer from a theoretical perspective. However, one may also ask what purposes the density (pdf) and the distribution function (cdf) serve for practical computations. This could clarify for which situations one is more directly useful than the other.

For probability distributions on $\mathbb{R}$, the distribution function directly gives probabilities of all intervals $(-\infty,x]$. From these probabilities, the probability of a finite union of intervals can be computed by elementary arithmetic. For all practical purposes I can think of, these are the only probabilities you would like to be able to compute. It may be theoretically convenient to express these $-$ or probabilities of more general sets $-$ as integrals, but for the actual computation we effectively need the distribution function.

The density is, however, essential for statistics, as the likelihood is defined in terms of the density. Thus if we want to compute the maximum likelihood estimate, we directly need the density. If we turn to the comparison of an empirical and a theoretical distribution, both can be useful, but methods such as pp- and qq-plots based on the distribution function are often preferred.

For probability distributions on $\mathbb{R}^d$ for $d \geq 2$ the distribution function plays a less prominent role. One reason is that the probabilities for many sets of interest (balls, ellipsoids, cones etc.) cannot easily be computed from it.
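A quick sketch of those two computational roles (my own illustration with a standard normal model; the data values are made up): interval probabilities come from the CDF, while the log-likelihood used for maximum likelihood estimation is built from the density.

```python
import numpy as np
from scipy import stats

# CDF: the probability of an interval is a difference of two CDF values.
print("P(-1 < X <= 2) =", stats.norm.cdf(2) - stats.norm.cdf(-1))

# PDF: the log-likelihood of a sample is a sum of log-densities.
data = np.array([0.3, -1.2, 0.8, 1.9, -0.4])
mu_grid = np.linspace(-2, 2, 401)
loglik = np.array([stats.norm.logpdf(data, loc=m, scale=1.0).sum() for m in mu_grid])
print("grid MLE of the mean:", mu_grid[np.argmax(loglik)], "| sample mean:", data.mean())
```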
5,283
Does correlation = 0.2 mean that there is an association "in only 1 in 5 people"?
The quoted passage is indeed incorrect. A correlation coefficient quantifies the degree of association throughout an entire population (or sample, in the case of the sample correlation coefficient). It does not divide the population into parts with one part showing an association and the other part not. It could be the case that the population actually consists of two subpopulations with different degrees of association, but a correlation coefficient alone doesn't imply this.
5,284
Does correlation = 0.2 mean that there is an association "in only 1 in 5 people"?
No, 0.2 doesn't mean 1 in 5 people show correlation. I don't know how he could write this nonsense. Here's the source of the 0.2 number: "On the sources of the height–intelligence correlation: New insights from a bivariate ACE model with assortative mating", https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3044837/ Apparently, the correlation is robust.

I already knew about it: my IQ rose considerably with my height as I grew taller. Now I know why I am not getting smarter anymore: my height is stable. This was a joke, of course, but it points out the issue with that "Idiot" book author's argument: nobody's measuring within-subject correlation of height and IQ, at least as far as I know. I'm not sure how you would do it cleanly; there'd be so much confounding. Having said that, the researchers use tricks such as looking at within-twin and within-family correlations of height and IQ, which helps them address confounding issues. Presumably, twins grow up in similar environments and have the same DNA, so in observational studies this helps to address endogeneity and other issues.

However, if you set all this aside, the bottom line is that "0.2 correlation" gives no basis for saying nonsense like "in some people there is correlation and in others there is none." It's just a ridiculous interpretation of correlation study results.
5,285
Does correlation = 0.2 mean that there is an association "in only 1 in 5 people"?
The irony in the statement is almost too thick to parse. Given the title of the text, I'm assuming some tongue-in-cheek was intended. However, your "gut" saying that this is wrong is probably on the right track, if intuition counts for anything. Unfortunately, a lot of scientific reporting eludes intuition when dealing with concepts we haven't encountered. It is possible that, when measuring an association between $X$ and $Y$, the correlation between $X$ and $Y$ is 1.0 in 20% of the population, and 0 in the remaining 80%. The net effect is that overall the correlation of $X$ and $Y$ is 0.2 (at least when the two subpopulations share the same means and variances for $X$ and $Y$). We see this all the time in pharmacoepidemiology: an experimental drug is deemed "effective" if on average there is a positive benefit; many drugs in common circulation, some of which you could be taking, may harm you because of interactions with your behavior or genetics, but nobody actually knows this. The above is but one possible interpretation of a correlation of 0.2; it is extremely far-fetched because so few things in life have a correlation of either 1 or 0, and fewer things still have effect modification strong enough to produce such discrepant correlations.
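That hypothetical scenario is easy to simulate (my own sketch; the construction deliberately gives both subgroups the same marginal means and variances so that the overall correlation is exactly the weighted average of the subgroup correlations):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
n_assoc = n // 5                         # 20% of the population

# Subgroup 1: perfect association (correlation 1).
x1 = rng.standard_normal(n_assoc)
y1 = x1.copy()

# Subgroup 2: no association at all (correlation 0).
x2 = rng.standard_normal(n - n_assoc)
y2 = rng.standard_normal(n - n_assoc)

x = np.concatenate([x1, x2])
y = np.concatenate([y1, y2])
print("overall correlation:", np.corrcoef(x, y)[0, 1])   # ~ 0.2
```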
5,286
Does correlation = 0.2 mean that there is an association "in only 1 in 5 people"?
It would be difficult to come up with an interpretation of this that is meaningful, let alone correct. Association is not a property of individual data points. If you had just the height and intelligence of one person, how could you possibly say whether height and intelligence are associated? I suppose if we had the means of height and intelligence, we could say that everyone above the mean in both, or below the mean in both, is showing an "association". But if you had completely random data (no correlation), you should expect about half of the people to show "association" in this sense. I generated a random data set with correlation around .2 (actually .22), and found that 55 showed "association" in this sense. It's possible for Y to be an increasing function of X and for the correlation between them to be only .5; it would be silly to say that only half the people show an association if every person has a higher intelligence than every shorter person and a lower intelligence than everyone taller. Moreover, it's theoretically possible to have one outlier creating all of the correlation, with the correlation of the set without that point being zero. It's even possible for 20% of the population to have a negative correlation, for the other 80% to also have a negative correlation, and for the total correlation to be .2.
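Here is a small R sketch of the kind of check described above (the sample size, seed, and target correlation are arbitrary choices): even with a correlation near 0.2, roughly half the points land in the "both above the mean or both below the mean" quadrants.

set.seed(42)
n <- 100
x <- rnorm(n)
y <- 0.2 * x + sqrt(1 - 0.2^2) * rnorm(n)   # population correlation of 0.2
cor(x, y)                                   # sample correlation, somewhere near .2
# points above both means or below both means ("showing association" in the naive sense)
concordant <- (x > mean(x) & y > mean(y)) | (x < mean(x) & y < mean(y))
sum(concordant)                             # typically a bit over half of the 100 points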
Does correlation = 0.2 mean that there is an association "in only 1 in 5 people"?
It would be difficult to come up with an interpretation of this that is meaningful, let alone correct. Association is not a property of individual data points. If you had just the height and intellige
Does correlation = 0.2 mean that there is an association "in only 1 in 5 people"? It would be difficult to come up with an interpretation of this that is meaningful, let alone correct. Association is not a property of individual data points. If you had just the height and intelligence of one person, how could you possibly say whether height and intelligence are associated? I suppose if we had the mean of height and intelligence, we could say that everyone above the mean in both, or below the mean in both, is showing an "association". But if you had completely random data (no correlation), you should expect that half of the people to show "association" in this sense. I generated a random data set with correlation around .2 (actually .22), and found that 55 showed "association" in this sense. It's possible for Y to be an increasing function of X, and the correlation between them be only .5; it would be silly to say that only half the people show an association if every person has a higher intelligence than every shorter person and a lower intelligence than everyone taller. Moreover, it's theoretically possible to have one outlier creating all of the correlation, and the correlation of the set without that point be zero. It's even possible to have 20% of the population have a negative correlation, and the other 80% to also have a negative correlation, and the total correlation be .2.
Does correlation = 0.2 mean that there is an association "in only 1 in 5 people"? It would be difficult to come up with an interpretation of this that is meaningful, let alone correct. Association is not a property of individual data points. If you had just the height and intellige
5,287
What is residual standard error?
A fitted regression model uses its parameters to generate point predictions, which are the means of the responses you would observe if you replicated the study with the same $X$ values an infinite number of times (and when the linear model is true). The differences between these predicted values and the observed responses used to fit the model are called "residuals"; when the data collection process is replicated, they behave like random variables with mean 0. The observed residuals are then used to estimate the variability of these values and the sampling distribution of the parameter estimates. When the residual standard error is exactly 0, the model fits the data perfectly (likely due to overfitting). If the residual standard error cannot be shown to be significantly different from the variability in the unconditional response, then there is little evidence to suggest the linear model has any predictive ability.
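A minimal R illustration of these ideas on simulated data (the coefficients and error standard deviation below are arbitrary): the residual standard error reported by summary() estimates the error variability, and a value of exactly 0 would signal a suspiciously perfect fit.

set.seed(123)
x <- runif(50, 0, 10)
y <- 2 + 3 * x + rnorm(50, sd = 4)          # true error standard deviation is 4
fit <- lm(y ~ x)
sigma(fit)                                  # residual standard error, should land near 4
summary(fit)$sigma                          # the same quantity from summary()
all.equal(residuals(fit), y - fitted(fit))  # residuals = observed - fitted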
What is residual standard error?
A fitted regression model uses the parameters to generate point estimate predictions which are the means of observed responses if you were to replicate the study with the same $X$ values an infinite n
What is residual standard error? A fitted regression model uses the parameters to generate point estimate predictions which are the means of observed responses if you were to replicate the study with the same $X$ values an infinite number of times (and when the linear model is true). The difference between these predicted values and the ones used to fit the model are called "residuals" which, when replicating the data collection process, have properties of random variables with 0 means. The observed residuals are then used to subsequently estimate the variability in these values and to estimate the sampling distribution of the parameters. When the residual standard error is exactly 0 then the model fits the data perfectly (likely due to overfitting). If the residual standard error can not be shown to be significantly different from the variability in the unconditional response, then there is little evidence to suggest the linear model has any predictive ability.
What is residual standard error? A fitted regression model uses the parameters to generate point estimate predictions which are the means of observed responses if you were to replicate the study with the same $X$ values an infinite n
5,288
What is residual standard error?
Say we have the following ANOVA table (adapted from R's example(aov) command): Df Sum Sq Mean Sq F value Pr(>F) Model 1 37.0 37.00 0.483 0.525 Residuals 4 306.3 76.57 If you divide the sum of squares from any source of variation (model or residuals) by its respective degrees of freedom, you get the mean square. In particular, for the residuals: $$ \frac{306.3}{4} = 76.575 \approx 76.57 $$ So 76.57 is the mean square of the residuals, i.e., the amount of residual (after applying the model) variation in your response variable. The residual standard error you've asked about is nothing more than the positive square root of the mean square error. In my example, the residual standard error would be equal to $\sqrt{76.57}$, or approximately 8.75. R would output this information as "8.75 on 4 degrees of freedom".
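The same arithmetic can be checked in R on any fitted model; the toy data below are simulated just for illustration, so the numbers will differ from the 306.3 and 76.57 above, but the relationship sqrt(residual mean square) = residual standard error is the same.

set.seed(1)
d <- data.frame(x = factor(rep(1:2, each = 3)), y = rnorm(6, mean = 50, sd = 9))
fit <- aov(y ~ x, data = d)
summary(fit)                             # ANOVA table with Sum Sq and Mean Sq columns
sqrt(deviance(fit) / df.residual(fit))   # residual sum of squares / residual df, then square root
sigma(lm(y ~ x, data = d))               # the residual standard error lm()/summary() reports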
What is residual standard error?
Say we have the following ANOVA table (adapted from R's example(aov) command): Df Sum Sq Mean Sq F value Pr(>F) Model 1 37.0 37.00 0.483 0.525 Residuals 4 306.3 76.57
What is residual standard error? Say we have the following ANOVA table (adapted from R's example(aov) command): Df Sum Sq Mean Sq F value Pr(>F) Model 1 37.0 37.00 0.483 0.525 Residuals 4 306.3 76.57 If you divide the sum of squares from any source of variation (model or residuals) by its respective degrees of freedom, you get the mean square. Particularly for the residuals: $$ \frac{306.3}{4} = 76.575 \approx 76.57 $$ So 76.57 is the mean square of the residuals, i.e., the amount of residual (after applying the model) variation on your response variable. The residual standard error you've asked about is nothing more than the positive square root of the mean square error. In my example, the residual standard error would be equal to $\sqrt{76.57}$, or approximately 8.75. R would output this information as "8.75 on 4 degrees of freedom".
What is residual standard error? Say we have the following ANOVA table (adapted from R's example(aov) command): Df Sum Sq Mean Sq F value Pr(>F) Model 1 37.0 37.00 0.483 0.525 Residuals 4 306.3 76.57
5,289
What is residual standard error?
Typically you will have a regression model that looks like this: $$ Y = \beta_{0} + \beta_{1}X + \epsilon $$ where $ \epsilon $ is an error term independent of $ X $. Even if $ \beta_{0} $ and $ \beta_{1} $ are known, we still cannot perfectly predict $Y$ from $X$ because of $ \epsilon $. The RSE is therefore used as an estimate of the standard deviation of $ \epsilon $. The RSE is explained quite clearly in "Introduction to Statistical Learning".
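A quick way to convince yourself of this interpretation is a simulation in R where the standard deviation of $\epsilon$ is known by construction (the coefficients, sample size, and sd = 3 below are arbitrary): the RSE hovers around the true value.

set.seed(7)
rse <- replicate(1000, {
  x <- rnorm(100)
  y <- 1 + 2 * x + rnorm(100, sd = 3)   # epsilon has standard deviation 3
  sigma(lm(y ~ x))
})
mean(rse)                               # close to 3, the standard deviation of epsilon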
What is residual standard error?
Typically you will have a regression model looks like this: $$ Y = \beta_{0} + \beta_{1}X + \epsilon $$ where $ \epsilon $ is an error term independent of $ X $. If $ \beta_{0} $ and $ \beta_{1} $
What is residual standard error? Typically you will have a regression model looks like this: $$ Y = \beta_{0} + \beta_{1}X + \epsilon $$ where $ \epsilon $ is an error term independent of $ X $. If $ \beta_{0} $ and $ \beta_{1} $ are known, we still cannot perfectly predict Y using X due to $ \epsilon $. Therefore, we use RSE as a judgement value of the Standard Deviation of $ \epsilon $. RSE is explained pretty much clearly in "Introduction to Statistical Learning".
What is residual standard error? Typically you will have a regression model looks like this: $$ Y = \beta_{0} + \beta_{1}X + \epsilon $$ where $ \epsilon $ is an error term independent of $ X $. If $ \beta_{0} $ and $ \beta_{1} $
5,290
What is residual standard error?
The residual standard error is $\sqrt{MSE}$. The $MSE$ is an unbiased estimator of $\sigma^2$, where $\sigma^2 = Var(y|x)$. To clarify the answers by @Silverfish and @Waldir Leoncio, a summary of all the definitions is given below. I always got confused by these terms, so I am putting it here rather than in a comment, for better formatting. ANOVA table of SLR (simple linear regression; the DF is different for multiple regression): Source DF Sum Sq Mean Sq F value Regression $1$ $SSR$ $MSR = \frac{SSR}{1}$ $\frac{MSR}{MSE}$ Residual $n - 2$ $SSE$ $MSE = \frac{SSE}{n - 2}$ Total $n - 1$ $SST$ where $n$ is the sample size of $x_i$, $SST = SSE + SSR$, $SST = S_{YY} = \sum_{i = 1}^{n}{(y_i - \bar{y})^2}$, $SSE = \sum_{i = 1}^{n}{(y_i - \hat{y_i})^2}$, $SSR = \sum_{i = 1}^{n}{(\hat{y_i} - \bar{y})^2}$. The $SSR$ is the part of the variance of $y_i$ that is explained by $\hat{y_i}$; the greater, the better. Also, for SLR, $se(\hat{\beta}_1) = \sqrt{MSE}/\sqrt{S_{XX}}$, where $S_{XX}$ is defined analogously to $S_{YY}$.
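These identities are easy to verify numerically in R on simulated data (the data-generating values are arbitrary):

set.seed(2)
n <- 30
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
fit <- lm(y ~ x)
SST <- sum((y - mean(y))^2)
SSE <- sum((y - fitted(fit))^2)
SSR <- sum((fitted(fit) - mean(y))^2)
all.equal(SST, SSE + SSR)                          # TRUE
MSE <- SSE / (n - 2)
Sxx <- sum((x - mean(x))^2)
c(manual = sqrt(MSE / Sxx),
  lm = coef(summary(fit))["x", "Std. Error"])      # the two standard errors agree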
What is residual standard error?
The residual standard error is $\sqrt{MSE}$. The $MSE$ is an unbiased estimator of $\sigma^2$, where $\sigma^2 = Var(y|x)$. To make it more clear of the answer by @Silverfish and @Waldir Leoncio. A su
What is residual standard error? The residual standard error is $\sqrt{MSE}$. The $MSE$ is an unbiased estimator of $\sigma^2$, where $\sigma^2 = Var(y|x)$. To make it more clear of the answer by @Silverfish and @Waldir Leoncio. A summary of all definitions was shown below. Always got confused by these terms, put it here instead of making it as a comment for better formatting. Anova table of SLR/Simple Linear Regression (DF is different for multiple regression): Source DF Sum Sq Mean Sq F value Regression $1$ $SSR$ $MSR = \frac{SSR}{1}$ $\frac{MSR}{MSE}$ Residual $n - 2$ $SSE$ $MSE = \frac{SSE}{n - 2}$ Total $n - 1$ $SST$ where $n$ is the sample size of $x_i$, $SST = SSE + SSR$, $SST = S_{YY} = \sum_{i = 1}^{n}{(y_i - \bar{y})^2}$, $SSE = \sum_{i = 1}^{n}{(y_i - \hat{y_i})^2}$, $SSR = \sum_{i = 1}^{n}{(\hat{y_i} - \bar{y})^2}$. The $SSR$ is the part of variance of $y_i$ which can be explained by $\hat{y_i}$, the greater the better. Also for SLR, $se(\beta_1) = \sqrt{MSE}/\sqrt{S_{xx}}$, where $S_{XX}$ is defined similarly as $S_{YY}$.
What is residual standard error? The residual standard error is $\sqrt{MSE}$. The $MSE$ is an unbiased estimator of $\sigma^2$, where $\sigma^2 = Var(y|x)$. To make it more clear of the answer by @Silverfish and @Waldir Leoncio. A su
5,291
What is residual standard error?
As noted by @Amelio Vazquez-Reina and @little_monster, given a (simple linear) regression model: $$ Y = \beta_0 + X \beta_1 + \epsilon $$ where $\epsilon$ is a noise term with variance $\sigma^2$, i.e. $Var(\epsilon) = \sigma^2$, the Residual Standard Error ($RSE$) is an estimate of $\sigma$, the standard deviation of the noise (which is usually unknown). The RSE is given by the formula: $$ RSE = \sqrt{\frac{RSS}{n-2}} $$ (see ISLR, page 66), where $RSS$ is the Residual Sum of Squares (the sum of squared differences between the actual and predicted values): $$ RSS = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2 $$ The reason for $n-2$ in the denominator of $\sqrt{\frac{RSS}{n-2}}$ is that $\sqrt{\frac{RSS}{n}}$ would underestimate $\sigma$ - this is because the estimated function, $\hat{y}$, has been fit to the data such that it (by design) minimises the RSS of the residuals, whereas $\sigma$ pertains to the errors (see the difference between errors and residuals). Imagine an extreme case - if there are only two data points ($n=2$), the fitted line passes through both points exactly, so our RSS will be 0, but the true generating function (the one that was used to generate the two samples) still has non-zero errors (assuming $\sigma^2 \neq 0$). So the $n-2$ reflects the fact that we only have $n-2$ residual degrees of freedom. Remark: in the general case (multiple linear regression with $p$ features) the $RSE$ is estimated by: $$ RSE = \sqrt{\frac{RSS}{n-p-1}} $$ (see ISLR, page 80)
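The formulas can be checked directly in R; the simulated data below use two predictors just to exercise the $n-p-1$ case (all numbers are arbitrary).

set.seed(3)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y <- 1 + 2 * x1 - x2 + rnorm(n, sd = 2)
fit <- lm(y ~ x1 + x2)
RSS <- sum(residuals(fit)^2)
p <- 2                                                      # number of features
c(manual = sqrt(RSS / (n - p - 1)), from_lm = sigma(fit))   # identical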
What is residual standard error?
As noted by @Amelio Vazquez-Reina and @little_monster, given a (simple linear) regression model: $$ Y = \beta_0 + X \beta_1 + \epsilon $$ where $\epsilon$ is a noise term with variance $\sigma^2$, i.e
What is residual standard error? As noted by @Amelio Vazquez-Reina and @little_monster, given a (simple linear) regression model: $$ Y = \beta_0 + X \beta_1 + \epsilon $$ where $\epsilon$ is a noise term with variance $\sigma^2$, i.e. $Var(\epsilon) = \sigma^2$, Residual Standard Error ($RSE$) is an estimate of $\sigma^2$ (the latter being usually unknown). RSE is given by a formula: $$ RSE = \sqrt{\frac{RSS}{n-2}} $$ (see ISLR, page 66), where $RSS$ is a Residual Sum of Squares (sum of squared differences between the actual and predicted values): $$ RSS = \sum_{i=1}^{n}(y_i - \hat{y}_i) $$ The reason for $n-2$ in the denominator of $\sqrt{\frac{RSS}{n-2}}$ is that $\sqrt{\frac{RSS}{n}}$ would underestimate $\sigma^2$ - this is because the estimated function, $\hat{y}$, has been fit to the data such that it (by design) minimises the RSS of the residuals, but $\sigma^2$ pertains to errors (see the difference between errors and residuals). Imagine an extreme case - if there are only two data points ($n=2$), we can only fit one line - our RSS will be 0, but the true generating function (the one that was used to generate the two samples) will have a non-zero sum of errors (assuming $\sigma^2 \neq 0$). So the $n-2$ pertains to the fact that we only have $n-2$ degrees of freedom. Remark: In general case (Multiple Linear regression with $p$ features) the $RSE$ will be estimated with: $$ RSE = \sqrt{\frac{RSS}{n-p-1}} $$ (see ISLR, page 80)
What is residual standard error? As noted by @Amelio Vazquez-Reina and @little_monster, given a (simple linear) regression model: $$ Y = \beta_0 + X \beta_1 + \epsilon $$ where $\epsilon$ is a noise term with variance $\sigma^2$, i.e
5,292
What can we say about population mean from a sample size of 1?
Here is a brand-new article on this question for the Poisson case, taking a nice pedagogical approach: Andersson, Per Gösta (2015). A Classroom Approach to the Construction of an Approximate Confidence Interval of a Poisson Mean Using One Observation. The American Statistician, 69(3), 160-164, DOI: 10.1080/00031305.2015.1056830.
What can we say about population mean from a sample size of 1?
Here is a brand-new article on this question for the Poisson case, taking a nice pedagogical approach: Andersson. Per Gösta (2015). A Classroom Approach to the Construction of an Approximate Confidenc
What can we say about population mean from a sample size of 1? Here is a brand-new article on this question for the Poisson case, taking a nice pedagogical approach: Andersson. Per Gösta (2015). A Classroom Approach to the Construction of an Approximate Confidence Interval of a Poisson Mean Using One Observation. The American Statistician, 69(3), 160-164, DOI: 10.1080/00031305.2015.1056830.
What can we say about population mean from a sample size of 1? Here is a brand-new article on this question for the Poisson case, taking a nice pedagogical approach: Andersson. Per Gösta (2015). A Classroom Approach to the Construction of an Approximate Confidenc
5,293
What can we say about population mean from a sample size of 1?
If the population is known to be normal, a 95% confidence interval based on a single observation $x$ is given by $$x \pm 9.68 \left| x \right| $$ This is discussed in the article "An Effective Confidence Interval for the Mean With Samples of Size One and Two," by Wall, Boen, and Tweedie, The American Statistician, May 2001, Vol. 55, No.2. (pdf)
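For a concrete (hypothetical) single measurement, say x = 4.2, the interval is easy to compute in R and is, unsurprisingly, very wide:

ci_one_obs <- function(x) x + c(-1, 1) * 9.68 * abs(x)
ci_one_obs(4.2)    # approximately (-36.5, 44.9)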
What can we say about population mean from a sample size of 1?
If the population is known to be normal, a 95% confidence interval based on a single observation $x$ is given by $$x \pm 9.68 \left| x \right| $$ This is discussed in the article "An Effective Confide
What can we say about population mean from a sample size of 1? If the population is known to be normal, a 95% confidence interval based on a single observation $x$ is given by $$x \pm 9.68 \left| x \right| $$ This is discussed in the article "An Effective Confidence Interval for the Mean With Samples of Size One and Two," by Wall, Boen, and Tweedie, The American Statistician, May 2001, Vol. 55, No.2. (pdf)
What can we say about population mean from a sample size of 1? If the population is known to be normal, a 95% confidence interval based on a single observation $x$ is given by $$x \pm 9.68 \left| x \right| $$ This is discussed in the article "An Effective Confide
5,294
What can we say about population mean from a sample size of 1?
Sure there is. Use a Bayesian paradigm. Chances are you have at least some idea of what $\mu$ could possibly be - for instance, that it physically cannot be negative, or that it obviously cannot be larger than 100 (maybe you are measuring the height of your local high school football team members in feet). Put a prior on that, update it with your lone observation, and you have a wonderful posterior.
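As a concrete sketch, suppose the single observation is normal with a standard deviation you are willing to treat as known, and put a normal prior on $\mu$; the conjugate normal-normal update then gives the posterior in closed form. All numbers below are made up for illustration.

prior_mean <- 70; prior_sd <- 4     # prior belief about mu (say, a height in inches)
sigma      <- 3                     # standard deviation of the observation, assumed known
x          <- 75                    # the single observation
post_prec  <- 1 / prior_sd^2 + 1 / sigma^2
post_mean  <- (prior_mean / prior_sd^2 + x / sigma^2) / post_prec
post_sd    <- sqrt(1 / post_prec)
c(mean = post_mean, sd = post_sd)             # posterior for mu
post_mean + c(-1, 1) * 1.96 * post_sd         # a 95% credible interval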
What can we say about population mean from a sample size of 1?
Sure there is. Use a Bayesian paradigm. Chances are you have at least some idea of what $\mu$ could possibly be - for instance, that it physically cannot be negative, or that it obviously cannot be la
What can we say about population mean from a sample size of 1? Sure there is. Use a Bayesian paradigm. Chances are you have at least some idea of what $\mu$ could possibly be - for instance, that it physically cannot be negative, or that it obviously cannot be larger than 100 (maybe you are measuring the height of your local high school football team members in feet). Put a prior on that, update it with your lone observation, and you have a wonderful posterior.
What can we say about population mean from a sample size of 1? Sure there is. Use a Bayesian paradigm. Chances are you have at least some idea of what $\mu$ could possibly be - for instance, that it physically cannot be negative, or that it obviously cannot be la
5,295
What can we say about population mean from a sample size of 1?
A small simulation exercise to illustrate whether the answer by @soakley works:

# Set the number of trials, M
M=10^6
# Set the true mean for each trial
mu=rep(0,M)
# Set the true standard deviation for each trial
sd=rep(1,M)
# Set counter to zero
count=0
for(i in 1:M){
  # Control the random number generation so that the experiment is replicable
  set.seed(i)
  # Generate one draw of a normal random variable with a given mean and standard deviation
  x=rnorm(n=1,mean=mu[i],sd=sd[i])
  # Estimate the lower confidence bound for the population mean
  lower=x-9.68*abs(x)
  # Estimate the upper confidence bound for the population mean
  upper=x+9.68*abs(x)
  # If the true mean is within the confidence interval, count it in
  if( (lower<mu[i]) && (mu[i]<upper) ) count=count+1
}
# Obtain the proportion of cases in which the true mean is within the confidence interval
count_pct=count/M
# Print the result
print(count_pct)
[1] 1

Out of one million random trials, the confidence interval includes the true mean one million times, that is, always. That should not happen if this were an exact 95% confidence interval. So the formula does not seem to work... Or have I made a coding mistake? Edit: the same empirical result holds when using $(\mu, \sigma)=(1000,1)$; however, it is $0.950097 \approx 0.95$ for $(\mu, \sigma)=(1000,1000)$ -- thus pretty close to the 95% confidence interval.
What can we say about population mean from a sample size of 1?
A small simulation exercise to illustrate whether the answer by @soakley works: # Set the number of trials, M M=10^6 # Set the true mean for each trial mu=rep(0,M) # Set the true standard deviation fo
What can we say about population mean from a sample size of 1? A small simulation exercise to illustrate whether the answer by @soakley works: # Set the number of trials, M M=10^6 # Set the true mean for each trial mu=rep(0,M) # Set the true standard deviation for each trial sd=rep(1,M) # Set counter to zero count=0 for(i in 1:M){ # Control the random number generation so that the experiment is replicable set.seed(i) # Generate one draw of a normal random variable with a given mean and standard deviation x=rnorm(n=1,mean=mu[i],sd=sd[i]) # Estimate the lower confidence bound for the population mean lower=x-9.68*abs(x) # Estimate the upper confidence bound for the population mean upper=x+9.68*abs(x) # If the true mean is within the confidence interval, count it in if( (lower<mu[i]) && (mu[i]<upper) ) count=count+1 } # Obtain the percentage of cases when the true mean is within the confidence interval count_pct=count/M # Print the result print(count_pct) [1] 1 Out of one million random trials, the confidence interval includes the true mean one million times, that is, always. That should not happen in case the confidence interval was a 95% confidence interval. So the formula does not seem to work... Or have I made a coding mistake? Edit: the same empirical result holds when using $(\mu, \sigma)=(1000,1)$; however, it is $0.950097 \approx 0.95$ for $(\mu, \sigma)=(1000,1000)$ -- thus pretty close to the 95% confidence interval.
What can we say about population mean from a sample size of 1? A small simulation exercise to illustrate whether the answer by @soakley works: # Set the number of trials, M M=10^6 # Set the true mean for each trial mu=rep(0,M) # Set the true standard deviation fo
5,296
What can we say about population mean from a sample size of 1?
See Edelman, D (1990) 'A confidence interval for the center of an unknown unimodal distribution based on a sample size one' The American Statistician, Vol 44, no 4. Article covers the Normal and Nonparametric cases.
What can we say about population mean from a sample size of 1?
See Edelman, D (1990) 'A confidence interval for the center of an unknown unimodal distribution based on a sample size one' The American Statistician, Vol 44, no 4. Article covers the Normal and Nonpa
What can we say about population mean from a sample size of 1? See Edelman, D (1990) 'A confidence interval for the center of an unknown unimodal distribution based on a sample size one' The American Statistician, Vol 44, no 4. Article covers the Normal and Nonparametric cases.
What can we say about population mean from a sample size of 1? See Edelman, D (1990) 'A confidence interval for the center of an unknown unimodal distribution based on a sample size one' The American Statistician, Vol 44, no 4. Article covers the Normal and Nonpa
5,297
What can we say about population mean from a sample size of 1?
Here is a simulation to demonstrate that @soakley's confidence interval works for a normally distributed random variable. It takes $10^3$ values of $\mu$ in $[-10,10]$ and of $\sigma$ in $[0,10]$ and, for each of those pairs, it generates $10^6$ single observations $x$ and sees what proportion of the corresponding confidence intervals $[x- 9.68|x|, x+9.68|x|]$ cover $\mu$. It then plots this proportion against $\frac{|\mu|}{\sigma}$ as small black circles to illustrate when this proportion is about $0.95$ (the blue line) and when it is much higher. The red line shows the theoretical confidence for varying values of $\frac{|\mu|}{\sigma}$. Using R:

set.seed(2023)
cases <- 10^3
samplespercase <- 10^6
mu <- runif(cases)*20-10
sigma <- runif(cases)*10
withinconfint <- numeric(cases)
for (i in 1:cases){
  sample <- rnorm(samplespercase, mean=mu[i], sd=sigma[i])
  withinconfint[i] <- sum(sample - 9.68*abs(sample) < mu[i] & sample + 9.68*abs(sample) > mu[i])
}
plot(abs(mu/sigma), withinconfint/samplespercase, log="x")
abline(h=0.95,col="blue")
curve(pnorm(x/(1-9.68),x,1) + 1 - pnorm(x/(1+9.68),x,1), from=10^-4, to=10^4, col="red", n=50001, log="x", add=TRUE)

Taking account of simulation noise, this suggests the claim and the corresponding expression for the theoretical confidence, $\Phi\left(\frac{9.68x}{1-9.68}\right) + 1 - \Phi\left(\frac{-9.68x}{1+9.68}\right)$ with $x = \frac{|\mu|}{\sigma}$, are highly plausible. When $\frac{|\mu|}{\sigma} \approx 1$ the confidence is marginally above $0.95$, though at other times it is much more conservative and closer to $1$. The chart seems to suggest the minimum confidence is when $\frac{|\mu|}{\sigma} = 1$; this is not quite correct and for $\alpha=1-0.95=0.05$ it is closer to $0.99$; for larger $\alpha < \frac12$ it would be at an even lower value of $\frac{|\mu|}{\sigma}$. Meanwhile the $9.68$ is marginally higher than $\sqrt{\frac{2}{e\pi}} /\alpha \approx \frac{0.48394}{\alpha}$ and needs to be. For all $\alpha < \frac12$ we can use the slightly more conservative $\frac{0.5}{\alpha}$ and so a confidence interval of $\left[x-\frac1{2\alpha}|x|, x+\frac1{2\alpha}|x|\right]$ to have a probability of over $1-\alpha$ of covering the unknown mean, which in this example would have been $x\pm 10|x|$ which would have given confidence of about $0.9516$ in the tightest case.
What can we say about population mean from a sample size of 1?
Here is a simulation to demonstrate that @soakley's confidence interval works for a normally distributed random variable. It takes $10^4$ values of $\mu$ in $[-10,10]$ and of $\sigma$ in $[0,10]$ an
What can we say about population mean from a sample size of 1? Here is a simulation to demonstrate that @soakley's confidence interval works for a normally distributed random variable. It takes $10^4$ values of $\mu$ in $[-10,10]$ and of $\sigma$ in $[0,10]$ and for each of those pairs, it generates $10^6$ single observations $x$ and sees what proportion of the corresponding confidence intervals $[x- 9.68|x|, x+9.68|x|]$ cover $\mu$. It then plots this proportion against $\frac{|\mu|}{\sigma}$ as small black circles to illustrate when this proportion is about $0.95$ (the blue line) and when it is much higher. The red line shows the theoretical confidence for varying values of $\frac{|\mu|}{\sigma}$. Using R: set.seed(2023) cases <- 10^3 samplespercase <- 10^6 mu <- runif(cases)*20-10 sigma <- runif(cases)*10 withinconfint <- numeric(cases) for (i in 1:cases){ sample <- rnorm(samplespercase, mean=mu[i], sd=sigma[i]) withinconfint[i] <- sum(sample - 9.68*abs(sample) < mu[i] & sample + 9.68*abs(sample) > mu[i]) } plot(abs(mu/sigma), withinconfint/samplespercase, log="x") abline(h=0.95,col="blue") curve(pnorm(x/(1-9.68),x,1) + 1 - pnorm(x/(1+9.68),x,1), from=10^-4, to=10^4, col="red", n=50001, log="x", add=TRUE) Taking account of simulation noise, this suggests the claim and the corresponding expression for theoretical confidence of $\Phi\left(\frac{9.68}{(1-9.68)x}\right) + 1 - \Phi\left(\frac{-9.68}{(1+9.68)x}\right)$ are highly plausible. When $\frac{|\mu|}{\sigma} \approx 1$ the confidence is marginally above $0.95$, though at other times it is much more conservative and closer to $1$. The chart seems to suggest the minimum confidence is when $\frac{|\mu|}{\sigma} = 1$; this is not quite correct and for $\alpha=1-0.95=0.05$ it is closer to $0.99$; for larger $\alpha < \frac12$ it would be at an even lower value of $\frac{|\mu|}{\sigma}$. Meanwhile the $9.68$ is marginally higher than $\sqrt{\frac{2}{e\pi}} /\alpha \approx \frac{0.48394}{\alpha}$ and needs to be. For all $\alpha < \frac12$ we can use the slightly more conservative $\frac{0.5}{\alpha}$ and so a confidence interval of $\left[x-\frac1{2\alpha}|x|, x+\frac1{2\alpha}|x|\right]$ to have a probability of over $1-\alpha$ of covering the unknown mean, which in this example would have been $x\pm 10|x|$ which would have given confidence of about $0.9516$ in the tightest case.
What can we say about population mean from a sample size of 1? Here is a simulation to demonstrate that @soakley's confidence interval works for a normally distributed random variable. It takes $10^4$ values of $\mu$ in $[-10,10]$ and of $\sigma$ in $[0,10]$ an
5,298
What is model identifiability?
For identifiability we are talking about a parameter $\theta$ (which could be a vector), which ranges over a parameter space $\Theta$, and a family of distributions (for simplicity, think PDFs) indexed by $\theta$ which we typically write something like $\{ f_{\theta}|\, \theta \in \Theta\}$. For instance, $\theta$ could be $\theta = \beta$ and $f$ could be $$ f_{\theta}(x) = \frac{1}{\beta}\mathrm{e}^{-x/\beta}, \ x>0,\ \beta >0, $$ which would mean that $\Theta = (0,\infty)$. In order for the model to be identifiable, the transformation which maps $\theta$ to $f_{\theta}$ should be one-to-one. Given a model in your lap, the most straightforward way to check this is to start with the equation $f_{\theta_{1}} = f_{\theta_{2}}$, (this equality should hold for (almost) all $x$ in the support) and to try to use algebra (or some other argument) to show that just such an equation implies that, in fact, $\theta_{1} = \theta_{2}$. If you succeed with this plan, then your model is identifiable; go on with your business. If you don't, then either your model isn't identifiable, or you need to find another argument. The intuition is the same, regardless: in an identifiable model it is impossible for two distinct parameters (which could be vectors) to give rise to the same likelihood function. This makes sense, because if, for fixed data, two unique parameters gave rise to the same likelihood, then it would be impossible to distinguish between the two candidate parameters based on the data alone. It would be impossible to identify the true parameter, in that case. For the example above, the equation $f_{\theta_{1}} = f_{\theta_{2}}$ is $$ \frac{1}{\beta_{1}}\mathrm{e}^{-x/\beta_{1}} = \frac{1}{\beta_{2}}\mathrm{e}^{-x/\beta_{2}}, $$ for (almost) all $x > 0$. If we take logs of both sides we get $$ -\ln\,\beta_{1} - \frac{x}{\beta_{1}} = -\ln\,\beta_{2} - \frac{x}{\beta_{2}} $$ for $x > 0$, which implies the linear function $$ -\left(\frac{1}{\beta_{1}} - \frac{1}{\beta_{2}}\right)x - (\ln\,\beta_{1} - \ln\,\beta_{2}) $$ is (almost) identically zero. The only line which does such a thing is the one which has slope 0 and y-intercept zero. Hopefully you can see the rest. By the way, if you can tell by looking at your model that it isn't identifiable (sometimes you can), then it is common to introduce additional constraints on it to make it identifiable (as you mentioned). This is akin to recognizing that the function $f(y) = y^{2}$ isn't one-to-one for $y$ in $[-1,1]$, but it is one-to-one if we restrict $y$ to lie inside $[0,1]$. In more complicated models the equations are tougher but the idea is the same.
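As an illustrative numerical companion to the algebra above (not a substitute for it), the R snippet below contrasts the identifiable exponential model with a deliberately over-parameterized model whose mean is $\theta_1 + \theta_2$, where only the sum is identified; the data and parameter values are arbitrary.

set.seed(4)
x <- rexp(100, rate = 1/2)                    # data from the exponential model with beta = 2
loglik_exp <- function(beta) sum(dexp(x, rate = 1/beta, log = TRUE))
loglik_exp(2); loglik_exp(3)                  # distinct betas give distinct log-likelihoods

y <- rnorm(100, mean = 5)
loglik_sum <- function(t1, t2) sum(dnorm(y, mean = t1 + t2, sd = 1, log = TRUE))
loglik_sum(1, 4); loglik_sum(2, 3)            # identical: (1,4) and (2,3) cannot be told apart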
What is model identifiability?
For identifiability we are talking about a parameter $\theta$ (which could be a vector), which ranges over a parameter space $\Theta$, and a family of distributions (for simplicity, think PDFs) indexe
What is model identifiability? For identifiability we are talking about a parameter $\theta$ (which could be a vector), which ranges over a parameter space $\Theta$, and a family of distributions (for simplicity, think PDFs) indexed by $\theta$ which we typically write something like $\{ f_{\theta}|\, \theta \in \Theta\}$. For instance, $\theta$ could be $\theta = \beta$ and $f$ could be $$ f_{\theta}(x) = \frac{1}{\beta}\mathrm{e}^{-x/\beta}, \ x>0,\ \beta >0, $$ which would mean that $\Theta = (0,\infty)$. In order for the model to be identifiable, the transformation which maps $\theta$ to $f_{\theta}$ should be one-to-one. Given a model in your lap, the most straightforward way to check this is to start with the equation $f_{\theta_{1}} = f_{\theta_{2}}$, (this equality should hold for (almost) all $x$ in the support) and to try to use algebra (or some other argument) to show that just such an equation implies that, in fact, $\theta_{1} = \theta_{2}$. If you succeed with this plan, then your model is identifiable; go on with your business. If you don't, then either your model isn't identifiable, or you need to find another argument. The intuition is the same, regardless: in an identifiable model it is impossible for two distinct parameters (which could be vectors) to give rise to the same likelihood function. This makes sense, because if, for fixed data, two unique parameters gave rise to the same likelihood, then it would be impossible to distinguish between the two candidate parameters based on the data alone. It would be impossible to identify the true parameter, in that case. For the example above, the equation $f_{\theta_{1}} = f_{\theta_{2}}$ is $$ \frac{1}{\beta_{1}}\mathrm{e}^{-x/\beta_{1}} = \frac{1}{\beta_{2}}\mathrm{e}^{-x/\beta_{2}}, $$ for (almost) all $x > 0$. If we take logs of both sides we get $$ -\ln\,\beta_{1} - \frac{x}{\beta_{1}} = -\ln\,\beta_{2} - \frac{x}{\beta_{2}} $$ for $x > 0$, which implies the linear function $$ -\left(\frac{1}{\beta_{1}} - \frac{1}{\beta_{2}}\right)x - (\ln\,\beta_{1} - \ln\,\beta_{2}) $$ is (almost) identically zero. The only line which does such a thing is the one which has slope 0 and y-intercept zero. Hopefully you can see the rest. By the way, if you can tell by looking at your model that it isn't identifiable (sometimes you can), then it is common to introduce additional constraints on it to make it identifiable (as you mentioned). This is akin to recognizing that the function $f(y) = y^{2}$ isn't one-to-one for $y$ in $[-1,1]$, but it is one-to-one if we restrict $y$ to lie inside $[0,1]$. In more complicated models the equations are tougher but the idea is the same.
What is model identifiability? For identifiability we are talking about a parameter $\theta$ (which could be a vector), which ranges over a parameter space $\Theta$, and a family of distributions (for simplicity, think PDFs) indexe
5,299
What is model identifiability?
One way is to inspect the covariance matrix, $\Sigma$, of your parameter estimates. If two parameter estimates are perfectly (approximately) correlated with each other, or one parameter estimate is an (approximately) linear combination of several others, then your model is not identified; the parameters that are functions of the others are not necessary. In each of these cases, $\Sigma$ will also be (approximately) singular. So, if $\Sigma$ is approximately singular, this may give you reason to be concerned about identifiability issues. (Although I don't think this would detect non-linear relationships between parameter estimates that would give rise to non-identifiability). The practical problem is that it is often difficult to calculate $\Sigma$ for even mildly complicated models. If you are doing a maximum likelihood problem, then you know the asymptotic covariance matrix of your estimates is equal to the inverse of the Fisher information evaluated at the MLE. So, checking the Fisher information matrix for (approximate) singularity is also a reasonable way of assessing identifiability. This also works where the theoretical Fisher information is difficult to calculate, because it is often possible to numerically approximate a consistent estimator of the Fisher information matrix quite accurately by, for example, estimating the expected outer product of the score function with the observed average outer product. If you are not doing an ML problem, then you may be able to get a handle on $\Sigma$ by simulating data from the model, estimating the parameters a large number of times, and calculating a sample covariance matrix.
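A hedged R sketch of this diagnostic: build a design with an exactly redundant predictor and inspect the (near-)singularity of the information matrix via its eigenvalues or condition number. The data and the redundant column are artificial, and for a Gaussian linear model the information is proportional to $X^{T}X$.

set.seed(5)
n <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
x3 <- x1 + x2                      # exact linear combination of the other two predictors
y  <- 1 + x1 + x2 + rnorm(n)
X <- cbind(1, x1, x2, x3)
info <- crossprod(X)               # proportional to the Fisher information for a linear model
eigen(info)$values                 # smallest eigenvalue is essentially 0: (near-)singular
kappa(info)                        # enormous condition number flags the same problem
coef(lm(y ~ x1 + x2 + x3))         # R returns NA for the redundant coefficient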
What is model identifiability?
One way is to inspect the covariance matrix, $\Sigma$, of your parameter estimates. If two parameter estimates are perfectly (approximately) correlated with each other or one parameter estimate is a (
What is model identifiability? One way is to inspect the covariance matrix, $\Sigma$, of your parameter estimates. If two parameter estimates are perfectly (approximately) correlated with each other or one parameter estimate is a (approximately) linear combination of several others, then your model is not identified; the parameters that are functions of the others are not necessary. In each of these cases, $\Sigma$ will also be (approximately) singular. So, if $\Sigma$ is approximately singular, this may give you reason to be concerned about identifiability issues. (Although I don't think this would detect non-linear relationships between parameter estimates that would give rise to non-identifiability). The practical problem is that it is often difficult to calculate $\Sigma$ for even mildly complicated models. If you are doing a maximum likelihood problem, then you know the asymptotic covariance matrix of your estimates is equal to the inverse of the fisher information evaluated at the MLE. So, checking the fisher information matrix for (approximate) singularity is also a reasonable way of assessing identifiability. This also works where the theoretical fisher information is difficult to calculate because it is often possible to very accurately numerically approximate a consistent estimator of the fisher information matrix by, for example, estimating the expected outer product of the score function by the observed average outer product. In you are not doing an ML problem then you may be able to get a handle on $\Sigma$ by simulating data from the model and estimating parameters a large number of times and calculating a sample covariance matrix.
What is model identifiability? One way is to inspect the covariance matrix, $\Sigma$, of your parameter estimates. If two parameter estimates are perfectly (approximately) correlated with each other or one parameter estimate is a (
5,300
What is the difference between errors and residuals?
Errors pertain to the true data generating process (DGP), whereas residuals are what is left over after having estimated your model. In truth, assumptions like normality, homoscedasticity, and independence apply to the errors of the DGP, not your model's residuals. (For example, having fit $p+1$ parameters in your model, only $N-(p+1)$ residuals can be independent.) However, we only have access to the residuals, so that's what we work with.
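A small R simulation contrasting the two (the model and numbers are arbitrary): the true errors are only visible because we generated the data ourselves, while the residuals are what the fit hands back and obey constraints the errors do not (for instance, they sum to zero when an intercept is included).

set.seed(6)
n <- 50
x <- rnorm(n)
eps <- rnorm(n, sd = 2)        # the true errors of the DGP, known only in a simulation
y <- 1 + 2 * x + eps
fit <- lm(y ~ x)
res <- residuals(fit)          # what we actually get to see after fitting
sum(res)                       # essentially 0, a constraint imposed by the fit
sum(eps)                       # the true errors need not sum to 0
cor(res, eps)                  # high, but the residuals are still not the errors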
What is the difference between errors and residuals?
Errors pertain to the true data generating process (DGP), whereas residuals are what is left over after having estimated your model. In truth, assumptions like normality, homoscedasticity, and indepe
What is the difference between errors and residuals? Errors pertain to the true data generating process (DGP), whereas residuals are what is left over after having estimated your model. In truth, assumptions like normality, homoscedasticity, and independence apply to the errors of the DGP, not your model's residuals. (For example, having fit $p+1$ parameters in your model, only $N-(p+1)$ residuals can be independent.) However, we only have access to the residuals, so that's what we work with.
What is the difference between errors and residuals? Errors pertain to the true data generating process (DGP), whereas residuals are what is left over after having estimated your model. In truth, assumptions like normality, homoscedasticity, and indepe