What's a real-world example of "overfitting"?
This one is made up, but I hope it will illustrate the case.

Example 1

First, let's make up some random data. Here you have $k=100$ variables, each drawn from a standard normal distribution, with $n=100$ cases:

```r
set.seed(123)
k <- 100
data <- replicate(k, rnorm(100))
colnames(data) <- make.names(1:k)
data <- as.data.frame(data)
```

Now, let's fit a linear regression to it:

```r
fit <- lm(X1 ~ ., data=data)
```

And here is a summary showing the first ten coefficients:

```
> summary(fit)

Call:
lm(formula = X1 ~ ., data = data)

Residuals:
ALL 100 residuals are 0: no residual degrees of freedom!

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.502e-01         NA      NA       NA
X2           3.153e-02         NA      NA       NA
X3          -6.200e-01         NA      NA       NA
X4           7.087e-01         NA      NA       NA
X5           4.392e-01         NA      NA       NA
X6           2.979e-01         NA      NA       NA
X7          -9.092e-02         NA      NA       NA
X8          -5.783e-01         NA      NA       NA
X9           5.965e-01         NA      NA       NA
X10         -8.289e-01         NA      NA       NA
...

Residual standard error: NaN on 0 degrees of freedom
Multiple R-squared:     1,   Adjusted R-squared:   NaN
F-statistic:  NaN on 99 and 0 DF,  p-value: NA
```

The results look pretty weird, but if we plot the fitted values against $X_1$, the fit looks great: the fitted values match the $X_1$ values perfectly, and the error variance is literally zero. But let that not convince us; let's check the sum of absolute differences between $X_1$ and the fitted values:

```
> sum(abs(data$X1 - fitted(fit)))
[1] 0
```

It is zero, so the plot was not lying to us: the model fits perfectly. And how many fitted values match $X_1$ exactly?

```
> sum(data$X1 == fitted(fit))
[1] 100
```

We get 100 out of 100 fitted values that are identical to $X_1$. And we got this by fitting totally random numbers to other totally random numbers.

Example 2

One more example. Let's make up some more data:

```r
data2 <- cbind(1:10, diag(10))
colnames(data2) <- make.names(1:11)
data2 <- as.data.frame(data2)
```

so it looks like this:

```
   X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11
1   1  1  0  0  0  0  0  0  0   0   0
2   2  0  1  0  0  0  0  0  0   0   0
3   3  0  0  1  0  0  0  0  0   0   0
4   4  0  0  0  1  0  0  0  0   0   0
5   5  0  0  0  0  1  0  0  0   0   0
6   6  0  0  0  0  0  1  0  0   0   0
7   7  0  0  0  0  0  0  1  0   0   0
8   8  0  0  0  0  0  0  0  1   0   0
9   9  0  0  0  0  0  0  0  0   1   0
10 10  0  0  0  0  0  0  0  0   0   1
```

Now let's fit a linear regression to this:

```r
fit2 <- lm(X1 ~ ., data2)
```

so we get the following estimates:

```
> summary(fit2)

Call:
lm(formula = X1 ~ ., data = data2)

Residuals:
ALL 10 residuals are 0: no residual degrees of freedom!

Coefficients: (1 not defined because of singularities)
            Estimate Std. Error t value Pr(>|t|)
(Intercept)       10         NA      NA       NA
X2                -9         NA      NA       NA
X3                -8         NA      NA       NA
X4                -7         NA      NA       NA
X5                -6         NA      NA       NA
X6                -5         NA      NA       NA
X7                -4         NA      NA       NA
X8                -3         NA      NA       NA
X9                -2         NA      NA       NA
X10               -1         NA      NA       NA
X11               NA         NA      NA       NA

Residual standard error: NaN on 0 degrees of freedom
Multiple R-squared:     1,   Adjusted R-squared:   NaN
F-statistic:  NaN on 9 and 0 DF,  p-value: NA
```

As you can see, we have $R^2 = 1$, i.e. "100% of the variance explained". The regression didn't even need the last predictor, $X_{11}$ (its coefficient is not defined because of singularities). From this regression we see that $X_1$ can be predicted using the function

$$X_1 = 10 + X_2 \times (-9) + X_3 \times (-8) + X_4 \times (-7) + X_5 \times (-6) + X_6 \times (-5) + X_7 \times (-4) + X_8 \times (-3) + X_9 \times (-2) + X_{10} \times (-1),$$

so $X_1 = 1$ (the first row, where $X_2 = 1$ and all the other predictors are 0) is

$$10 + 1 \times (-9) + 0 \times (-8) + 0 \times (-7) + 0 \times (-6) + 0 \times (-5) + 0 \times (-4) + 0 \times (-3) + 0 \times (-2) + 0 \times (-1) = 1.$$

It is pretty self-explanatory. You can think of Example 1 as Example 2 with some "noise" added. If you have a big enough dataset and use it for "predicting" something, then sometimes a single "feature" may convince you that you have a "pattern" that describes your dependent variable well, while it could be just a coincidence. In Example 2 nothing is really predicted, and exactly the same happened in Example 1; only the values of the variables were different.

Real-life examples

A real-life example of this is "predicting" the terrorist attacks of 11 September 2001 by watching for "patterns" in numbers drawn by computer pseudorandom number generators (the Global Consciousness Project), or finding "secret messages" in Moby Dick that reveal facts about assassinations of famous people (inspired by similar findings in the Bible).

Conclusion

If you look hard enough, you'll find "patterns" for anything. However, those patterns won't let you learn anything about the universe and won't help you reach any general conclusions. They will fit your data perfectly, but they are useless, since they fit nothing other than the data itself. They won't let you make any reasonable out-of-sample predictions, because they imitate the data rather than describe it.
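That last point about out-of-sample predictions can be checked directly. Here is a small follow-up sketch, added for illustration and not part of the original answer: it asks the "perfect" model from Example 1 to predict $X_1$ for a fresh batch of random data, assuming the `fit` and `k` objects from Example 1 are still in the workspace.

```r
# Hypothetical follow-up to Example 1: check the "perfect" model out of sample.
# Assumes `fit` and `k` from Example 1 are still defined.
set.seed(456)
newdata <- as.data.frame(replicate(k, rnorm(100)))
colnames(newdata) <- make.names(1:k)

pred <- predict(fit, newdata = newdata)

# In sample the total absolute error was exactly 0; out of sample it is far from 0,
# because the coefficients encode noise, not structure.
sum(abs(newdata$X1 - pred))
cor(newdata$X1, pred)   # typically close to 0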
A common problem that results in overfitting in real life is that, in addition to the terms of a correctly specified model, we may have added something extraneous: irrelevant powers (or other transformations) of the correct terms, irrelevant variables, or irrelevant interactions. This happens in multiple regression if you add a variable that should not appear in the correctly specified model but that you do not want to drop because you are afraid of inducing omitted-variable bias. Of course, you have no way of knowing you have wrongly included it, since you can't see the whole population, only your sample, so you can't know for sure what the correct specification is. (As @Scortchi points out in the comments, there may be no such thing as the "correct" model specification; in that sense, the aim of modelling is to find a "good enough" specification, and avoiding overfitting means avoiding a model complexity greater than the available data can sustain.)

If you want a real-world example of overfitting, this happens every time you throw all the potential predictors into a regression model when any of them in fact has no relationship with the response once the effects of the others are partialled out. With this type of overfitting, the good news is that the inclusion of these irrelevant terms does not introduce bias in your estimators, and in very large samples the coefficients of the irrelevant terms should be close to zero. But there is also bad news: because the limited information in your sample is now being used to estimate more parameters, it can only do so with less precision, so the standard errors on the genuinely relevant terms increase. That also means the estimates are likely to be further from the true values than those from a correctly specified regression, which in turn means that, given new values of your explanatory variables, the predictions from the overfitted model will tend to be less accurate than those from the correctly specified model.

Here is a plot of log GDP against log population for the 50 US states in 2010. A random sample of 10 states was selected (highlighted in red), and for that sample we fit a simple linear model and a polynomial of degree 5. For the sample points, the polynomial has extra degrees of freedom that let it "wriggle" closer to the observed data than the straight line can. But the 50 states as a whole obey a nearly linear relationship, so the predictive performance of the polynomial model on the 40 out-of-sample points is very poor compared to the less complex model, particularly when extrapolating. The polynomial was effectively fitting some of the random structure (noise) in the sample, which did not generalise to the wider population. It was particularly poor at extrapolating beyond the observed range of the sample. (Code plus data for this plot is at the bottom of this revision of this answer.)

Similar issues affect regression against multiple predictors. To look at some actual data, it's easier to use simulation than real-world samples, since this way you control the data-generating process (effectively, you get to see the "population" and the true relationship). In the following R code, the true model is $y_i = 2x_{1,i} + 5 + \epsilon_i$, but data are also provided on irrelevant variables $x_2$ and $x_3$. I have designed the simulation so that the predictor variables are correlated, which would be a common occurrence in real-life data.

We fit a correctly specified model and an overfitted model (which includes the irrelevant predictors and their interactions) on one portion of the generated data, then compare predictive performance on a holdout set. The multicollinearity of the predictors makes life even harder for the overfitted model, since it becomes harder to pick apart the effects of $x_1$, $x_2$ and $x_3$, but note that this does not bias any of the coefficient estimators.

```r
require(MASS)    # for multivariate normal simulation
nsample <- 25    # sample to regress
nholdout <- 1e6  # to check model predictions
Sigma <- matrix(c(1,   0.5, 0.4,
                  0.5, 1,   0.3,
                  0.4, 0.3, 1), nrow = 3)
df <- as.data.frame(mvrnorm(n = (nsample + nholdout), mu = c(5, 5, 5), Sigma = Sigma))
colnames(df) <- c("x1", "x2", "x3")
df$y <- 5 + 2 * df$x1 + rnorm(n = nrow(df))  # y = 5 + 2*x1 + e

holdout.df <- df[1:nholdout, ]
regress.df <- df[(nholdout + 1):(nholdout + nsample), ]

overfit.lm <- lm(y ~ x1*x2*x3, regress.df)
correctspec.lm <- lm(y ~ x1, regress.df)
summary(overfit.lm)
summary(correctspec.lm)

holdout.df$overfitPred <- predict.lm(overfit.lm, newdata = holdout.df)
holdout.df$correctSpecPred <- predict.lm(correctspec.lm, newdata = holdout.df)
with(holdout.df, sum((y - overfitPred)^2))  # SSE
with(holdout.df, sum((y - correctSpecPred)^2))

require(ggplot2)
errors.df <- data.frame(
  Model = rep(c("Overfitted", "Correctly specified"), each = nholdout),
  Error = with(holdout.df, c(y - overfitPred, y - correctSpecPred)))
ggplot(errors.df, aes(x = Error, color = Model)) +
  geom_density(size = 1) +
  theme(legend.position = "bottom")
```

Here are my results from one run, but it's best to run the simulation several times to see the effect of different generated samples.

```
> summary(overfit.lm)

Call:
lm(formula = y ~ x1 * x2 * x3, data = regress.df)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.22294 -0.63142 -0.09491  0.51983  2.24193 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 18.85992   65.00775   0.290    0.775
x1          -2.40912   11.90433  -0.202    0.842
x2          -2.13777   12.48892  -0.171    0.866
x3          -1.13941   12.94670  -0.088    0.931
x1:x2        0.78280    2.25867   0.347    0.733
x1:x3        0.53616    2.30834   0.232    0.819
x2:x3        0.08019    2.49028   0.032    0.975
x1:x2:x3    -0.08584    0.43891  -0.196    0.847

Residual standard error: 1.101 on 17 degrees of freedom
Multiple R-squared:  0.8297,  Adjusted R-squared:  0.7596 
F-statistic: 11.84 on 7 and 17 DF,  p-value: 1.942e-05
```

These coefficient estimates for the overfitted model are terrible: they should be about 5 for the intercept, 2 for $x_1$ and 0 for the rest. But the standard errors are also large, and the correct values for those parameters do lie well within the 95% confidence intervals in each case. The $R^2$ is 0.8297, which suggests a reasonable fit.

```
> summary(correctspec.lm)

Call:
lm(formula = y ~ x1, data = regress.df)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.4951 -0.4112 -0.2000  0.7876  2.1706 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   4.7844     1.1272   4.244 0.000306 ***
x1            1.9974     0.2108   9.476 2.09e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.036 on 23 degrees of freedom
Multiple R-squared:  0.7961,  Adjusted R-squared:  0.7872 
F-statistic:  89.8 on 1 and 23 DF,  p-value: 2.089e-09
```

The coefficient estimates are much better for the correctly specified model. But note that the $R^2$ is lower, at 0.7961, because the less complex model has less flexibility in fitting the observed responses. $R^2$ is more dangerous than useful in this case!
```
> with(holdout.df, sum((y - overfitPred)^2))  # SSE
[1] 1271557
> with(holdout.df, sum((y - correctSpecPred)^2))
[1] 1052217
```

The higher $R^2$ on the sample we regressed on showed how the overfitted model produced predictions, $\hat{y}$, that were closer to the observed $y$ than the correctly specified model could manage. But that's because it was overfitting to that data (and had more degrees of freedom with which to do so than the correctly specified model, so it could produce a "better" fit). Look at the sum of squared errors for the predictions on the holdout set, which we didn't use to estimate the regression coefficients, and we can see how much worse the overfitted model has performed. In reality, the correctly specified model is the one that makes the better predictions. We shouldn't base our assessment of predictive performance on the results from the data set we used to estimate the models. A density plot of the errors (produced by the ggplot code above) shows the correct model specification producing more errors close to 0.

The simulation clearly represents many relevant real-life situations (just imagine any real-life response that depends on a single predictor, and imagine including extraneous "predictors" in the model), but it has the benefit that you can play with the data-generating process, the sample sizes, the nature of the overfitted model and so on. This is the best way to examine the effects of overfitting, since for observed data you don't generally have access to the DGP, and it's still "real" data in the sense that you can examine and use it. Here are some worthwhile ideas to experiment with:

- Run the simulation several times and see how the results differ. You will find more variability using small sample sizes than large ones.
- Try changing the sample sizes. If the fitting sample is increased to, say, nsample <- 1e6, then the overfitted model eventually estimates reasonable coefficients (about 5 for the intercept, about 2 for $x_1$, about 0 for everything else) and its predictive performance as measured by SSE doesn't trail the correctly specified model so badly. Conversely, try fitting on a very small sample (bear in mind you need to leave enough degrees of freedom to estimate all the coefficients) and you will see that the overfitted model has appalling performance both for estimating coefficients and for predicting new data.
- Try reducing the correlation between the predictor variables by playing with the off-diagonal elements of the variance-covariance matrix Sigma. Just remember to keep it positive semi-definite (which includes being symmetric). You should find that if you reduce the multicollinearity, the overfitted model doesn't perform quite so badly. But bear in mind that correlated predictors do occur in real life.
- Try experimenting with the specification of the overfitted model. What if you include polynomial terms? What if you simulate data for a different region of the predictors, rather than having their means all around 5? If the correct data-generating process for $y$ is still df$y <- 5 + 2*df$x1 + rnorm(n=nrow(df)), see how well the models fitted to the original data can predict that $y$. Depending on how you generate the $x_i$ values, you may find that extrapolation with the overfitted model produces predictions far worse than those of the correctly specified model.
- What if you change the data-generating process so that $y$ now depends, weakly, on $x_2$, $x_3$ and perhaps the interactions as well? This may be a more realistic scenario than depending on $x_1$ alone. If you use e.g. df$y <- 5 + 2*df$x1 + 0.1*df$x2 + 0.1*df$x3 + rnorm(n=nrow(df)), then $x_2$ and $x_3$ are "almost irrelevant", but not quite. (Note that I drew all the $x$ variables from the same range, so it does make sense to compare their coefficients like that.) Then the simple model involving only $x_1$ suffers omitted-variable bias, though since $x_2$ and $x_3$ are not particularly important, this is not too severe. On a small sample, e.g. nsample <- 25, the full model is still overfitted, despite being a better representation of the underlying population, and on repeated simulations its predictive performance on the holdout set is still consistently worse. With such limited data, it's more important to get a good estimate of the coefficient of $x_1$ than to expend information on the luxury of estimating the less important coefficients. With the effects of $x_2$ and $x_3$ so hard to discern in a small sample, the full model is effectively using the flexibility from its extra degrees of freedom to "fit the noise", and this generalises poorly. But with nsample <- 1e6, it can estimate the weaker effects pretty well, and simulations show the complex model then has predictive power that outperforms the simple one. This shows how "overfitting" is an issue of both model complexity and the available data.
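The state GDP figure described earlier is not reproduced here (its code and data live in an earlier revision of that answer), but the same straight-line-versus-degree-5-polynomial comparison is easy to recreate on simulated data. The sketch below is an illustrative stand-in rather than the original code: it fits both models to a random subsample of 10 points from a nearly linear relationship and compares their errors on the remaining 40 points.

```r
# Illustrative stand-in for the log-GDP picture: a nearly linear "population",
# a small fitting sample, and a degree-5 polynomial that overfits it.
set.seed(1)
n <- 50
dat <- data.frame(x = runif(n, 0, 10))
dat$y <- 2 + 0.8 * dat$x + rnorm(n)       # nearly linear relationship plus noise

insample <- sample(n, 10)                  # the 10 points we actually fit to
samp <- dat[insample, ]
rest <- dat[-insample, ]                   # the 40 out-of-sample points

linfit  <- lm(y ~ x, data = samp)
polyfit <- lm(y ~ poly(x, 5), data = samp)

sse <- function(fit) sum((rest$y - predict(fit, newdata = rest))^2)
c(linear = sse(linfit), degree5 = sse(polyfit))  # the polynomial is usually far worse
```

On most runs the polynomial's holdout error is dramatically larger, chiefly because of wild behaviour outside the subsample's range, which mirrors the extrapolation problem described for the state data.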
A form of overfitting is fairly common in sports: identifying patterns that explain past results by factors that have no, or at best vague, power to predict future results. A common feature of these "patterns" is that they are often based on very few cases, so that pure chance is probably the most plausible explanation for them. Examples include things like (the "quotes" are made up by me, but often look similar):

- Team A has won all X games since the coach started wearing his magical red jacket.
- Similar: We shall not shave during the playoffs, because that has helped us win the past X games.
- Less superstitious, but a form of overfitting as well: Borussia Dortmund has never lost a Champions League home game to a Spanish opponent when they lost the previous Bundesliga away game by more than two goals while scoring at least once themselves.
- Similar: Roger Federer has won all his Davis Cup appearances against European opponents when he had at least reached the semi-finals in that year's Australian Open.

The first two are fairly obvious nonsense (at least to me). The last two examples may perfectly well hold true in sample (i.e., in the past), but I would be most happy to bet against an opponent who let this "information" substantially affect his odds for Dortmund beating Madrid if they lost 4:1 at Schalke the previous Saturday, or for Federer beating Djokovic, even if Federer had won the Australian Open that year.
When I was trying to understand this myself, I started thinking in terms of analogies with describing real objects, so I guess it's as "real world" as you can get if you want to understand the general idea.

Say you want to describe to someone the concept of a chair, so that they get a conceptual model that allows them to predict whether a new object they find is a chair. You go to Ikea, get a sample of chairs, and start describing them using two variables: it's an object with four legs where you can sit. Well, that may also describe a stool or a bed or a lot of other things. Your model is underfitting, just as if you were to try to model a complex distribution with too few variables: a lot of non-chair things will be identified as chairs. So, let's increase the number of variables and add, for example, that the object has to have a back. Now you have a pretty acceptable model that describes your set of chairs but is general enough to allow a new object to be identified as one. Your model describes the data and is able to make predictions.

However, say you happen to have a sample in which all the chairs are black or white and made of wood. You decide to include those variables in your model, and suddenly it won't identify a plastic yellow chair as a chair. You have overfitted your model: you have included features of your dataset as if they were features of chairs in general (if you prefer, you have identified "noise" as "signal" by interpreting random variation in your sample as a feature of real-world chairs as a whole). So, you either increase your sample and hope to include some new materials and colors, or decrease the number of variables in your model.

This may be a simplistic analogy and may break down under further scrutiny, but I think it works as a general conceptualization... Let me know if some part needs clarification.
In predictive modeling, the idea is to use the data at hand to discover the trends that exist and that can be generalized to future data. By including variables in your model that have some minor, non-significant effect, you are abandoning this idea. What you are doing is fitting the specific quirks of your particular sample, which are only there because of random noise, rather than a true underlying trend. In other words, a model with too many variables fits the noise rather than discovering the signal.

Here's an exaggerated illustration of what I'm talking about (shown as a figure in the original post): the dots are the observed data and the line is our model, which passes through essentially every point. Look at that, a perfect fit! What a great model! But did we really discover the trend, or are we just fitting the noise? Likely the latter.
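The figure itself is not reproduced here, but an equivalent picture is easy to generate. The sketch below is my own illustration rather than the original code: it fits a polynomial with as many parameters as there are data points, so the fitted curve passes through every observation even though the true trend is just a straight line.

```r
# A deliberately absurd "perfect" model: 10 noisy points, a degree-9 polynomial.
set.seed(42)
x <- 1:10
y <- x + rnorm(10, sd = 2)              # true trend is just y = x plus noise
perfect <- lm(y ~ poly(x, 9))           # 10 parameters for 10 points

grid <- data.frame(x = seq(1, 10, by = 0.01))
plot(x, y, pch = 19, xlab = "x", ylab = "y")
lines(grid$x, predict(perfect, grid), col = "red")   # wiggles through every dot
abline(lm(y ~ x), col = "blue", lty = 2)              # the boring but honest model
```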
Here is a "real world" example not in the sense that somebody happened to come across it in research, but in the sense that it uses everyday concepts without many statistic-specific terms. Maybe this way of saying it will be more helpful for some people whose training is in other fields. Imagine that you have a database with data about patients with a rare disease. You are a medical graduate student and want to see if you can recognize risk factors for this disease. There have been 8 cases of the disease in this hospital, and you have recorded 100 random pieces of information about them: age, race, birth order, have they had measles as a child, whatever. You also have recorded the data for 8 patients without this disease. You decide to use the following heuristic for risk factors: if a factor takes a given value in more than one of your diseased patients, but in 0 of your controls, you will consider it a risk factor. (In real life, you'd use a better method, but I want to keep it simple). You find out that 6 of your patients are vegetarians (but none of the controls is vegetarian), 3 have Swedish ancestors, and two of them have a stuttering speech impairment. Out of the other 97 factors, there is nothing which occurs in more than one patient, but is not present among the controls. Years later, somebody else takes interest in this orphan disease and replicates your research. Because he works at a larger hospital, which has a data-sharing cooperation with other hospitals, he can use data about 106 cases, as opposed to your 8 cases. And he finds out that the prevalence of stutterers is the same in the patient group and the control group; stuttering is not a risk factor. What happened here is that your small group had 25% stutterers by random chance. Your heuristic had no way of knowing if this is medically relevant or not. You gave it criteria to decide when you consider a pattern in the data "interesting" enough to be included in the model, and according to these criteria, the stuttering was interesting enough. Your model has been overfitted, because it mistakenly included a parameter which is not really relevant in the real world. It fits your sample - the 8 patients + 8 controls - very well, but it does not fit the real world data. When a model describes your sample better than it describes reality, it's called overfitted. Had you chosen a threshold of 3 out of 8 patients having a feature, it wouldn't have happened - but you'd had a higher chance to miss something actually interesting. Especially in medicine, where many diseases only happen in a small fraction of people exhibiting in risk factor, that's a hard trade-off to make. And there are methods to avoid it (basically, compare to a second sample and see if the explaining power stays the same or falls), but this is a topic for another question.
Here's a real-life example of overfitting that I helped perpetrate and then tried (unsuccessfully) to avert.

I had several thousand independent, bivariate time series, each with no more than 50 data points, and the modeling project involved fitting a vector autoregression (VAR) to each one. No attempt was made to regularize across observations, estimate variance components, or anything like that. The time points were measured over the course of a single year, so the data were subject to all kinds of seasonal and cyclical effects that only appeared once in each time series.

One subset of the data exhibited an implausibly high rate of Granger causality compared to the rest of the data. Spot checks revealed that positive spikes were occurring one or two lags apart in this subset, but it was clear from the context that both spikes were caused directly by an external source and that one spike was not causing the other. Out-of-sample forecasts using these models would probably be quite wrong, because the models were overfitted: rather than "smoothing out" the spikes by averaging them into the rest of the data, there were few enough observations that the spikes were actually driving the estimates.

Overall, I don't think the project went badly, but I don't think it produced results that were anywhere near as useful as they could have been. Part of the reason is that the many-independent-VARs procedure, even with just one or two lags, was having a hard time distinguishing between signal and noise, and so was fitting the latter at the expense of providing insight about the former.
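As a rough illustration of the mechanism (my own sketch, not the original project's code, and a single lagged regression rather than a full VAR): fit a one-lag "lagged x predicts y" equation to many short, completely independent series, then do the same with one shared external spike hitting x and then y one step later. The baseline false-positive rate is the usual 5%, but the shared spike makes the lagged term look "significant" most of the time.

```r
# Many short, independent bivariate series; how often does lagged x look
# like it "causes" y?  (A crude stand-in for one equation of a 1-lag VAR.)
sim_p <- function(shared_spike = FALSE, n = 50) {
  x <- rnorm(n)
  y <- rnorm(n)                        # no true relationship between x and y
  if (shared_spike) {                  # one external shock hits x, then y one step later
    x[20] <- x[20] + 8
    y[21] <- y[21] + 8
  }
  fit <- lm(y[-1] ~ y[-n] + x[-n])     # regress y_t on y_{t-1} and x_{t-1}
  summary(fit)$coefficients[3, 4]      # p-value of the lagged-x term
}
set.seed(99)
mean(replicate(2000, sim_p(FALSE)) < 0.05)  # about 0.05: the usual false-positive rate
mean(replicate(2000, sim_p(TRUE))  < 0.05)  # much higher: the shared spike drives the fit
```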
My favourite is the "3964 formula", discovered before the 1998 soccer World Cup: Brazil won the championships in 1970 and 1994; sum those two numbers and you get 3964. Germany won in 1974 and 1990, again adding up to 3964. The same holds for Argentina, which won in 1978 and 1986 (1978 + 1986 = 3964). This is a very surprising fact, but everyone can see that it is not advisable to base any future prediction on that rule. And indeed, the rule implies that the winner of the 1998 World Cup should have been England, since 1966 + 1998 = 3964 and England won in 1966. This didn't happen: the winner was France.
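For what it's worth, the whole "formula" and its failed 1998 prediction fit in a couple of lines of R (a trivial check added here for illustration):

```r
# The "3964 formula": pairs of World Cup wins that sum to 3964 ...
c(Brazil = 1970 + 1994, Germany = 1974 + 1990, Argentina = 1978 + 1986)
# ... and the year of the past win it demands for a 1998 repeat (England, 1966).
3964 - 1998
```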
Studying for an exam by memorising the answers to last year's exam.
There are many intelligent people in this thread, many of them much more versed in statistics than I am, but I still don't see an example that is easy for the layperson to understand. The Presidential example (given in another answer) doesn't quite fit the bill as typical overfitting, because while each of its wild claims is technically overfitting, an overfitted model usually fits all of the given noise, not just one element of it.

I really like the chart in the bias-variance tradeoff explanation on Wikipedia: http://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff (the lowermost chart is the example of overfitting). I'm hard pressed to think of a real-world example that doesn't sound like complete mumbo-jumbo. The idea is that data are caused partly by measurable, understandable variables and partly by random noise. Attempting to model this noise as a pattern gives you inaccuracy.

A classic example is modeling based solely on R^2 in MS Excel (you attempt to fit an equation/model literally as close to the data as possible, using polynomials, no matter how nonsensical). Say you're trying to model ice cream sales as a function of temperature. You have "real world" data. You plot the data and try to maximize R^2. You'll find that with real-world data, the closest-fitting equation is not linear or quadratic (which would make logical sense). As with almost all equations, the more nonsensical polynomial terms you add (e.g. x^6 - 2x^5 + 3x^4 + 30x^3 - 43.2x^2 - 29x), the more closely the curve fits the data. So how does that sensibly relate temperature to ice cream sales? How would you explain that ridiculous polynomial? The truth is, it's not the true model. You've overfitted the data.

You are taking unaccounted-for noise (which may have been due to sales promotions, some other variable, or "noise" like a butterfly flapping its wings in the cosmos, something never predictable) and attempting to model it as a function of temperature. Now, usually if your noise/error does not average to zero or is autocorrelated, and so on, it means there are more variables out there, and eventually you get down to generally randomly distributed noise. But still, that's the best I can explain it.
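To make the "maximize R^2" trap concrete, here is a hedged sketch using simulated temperature and sales figures (invented numbers, not real data): in-sample R^2 can only increase with the polynomial degree, while the error on held-out days eventually gets much worse.

```r
# Simulated ice-cream example: sales really depend on temperature linearly (plus noise),
# but chasing R^2 rewards ever-higher-degree polynomials.
set.seed(2024)
n <- 40
dat <- data.frame(temp = runif(n, 10, 35))
dat$sales <- 50 + 8 * dat$temp + rnorm(n, sd = 40)

train <- 1:20
test  <- 21:40
for (degree in c(1, 2, 6, 10)) {
  fit <- lm(sales ~ poly(temp, degree), data = dat[train, ])
  r2  <- summary(fit)$r.squared
  test_rmse <- sqrt(mean((dat$sales[test] - predict(fit, dat[test, ]))^2))
  cat(sprintf("degree %2d: in-sample R^2 = %.3f, out-of-sample RMSE = %.1f\n",
              degree, r2, test_rmse))
}
```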
1,511
What's a real-world example of "overfitting"?
Most optimization methods have some fudge factors, a.k.a. hyperparameters. A real example: "For all systems under study, the following parameters yielded a fast and robust behavior: $N_{min} = 5,\ \ f_{inc} = 1.1,\ \ f_{dec} = 0.5,\ \ \alpha_{start} = 0.1, \ \ f_{\alpha} = 0.99.$" Is this overfitting, or just fitting to a particular set of problems?
1,512
What's a real-world example of "overfitting"?
A bit intuitive, but maybe it'll help. Let's say you want to learn some new language. How do you learn? Instead of learning the rules in a course, you use examples. Specifically, TV shows. So you like crime shows, and you watch a few series of some cop show. Then you take another crime show and watch some series from that one. By the third show you see, you know almost everything, no problem. You don't need the English subtitles. But then you try your newly learned language on the street on your next visit, and you realize you can't talk about anything other than saying "officer! that man took my bag and shot that lady!". While your 'training error' was zero, your 'test error' is high, due to 'overfitting' the language: studying only a limited subset of words and assuming it's enough.
1,513
How does the reparameterization trick for VAEs work and why is it important?
After reading through Kingma's NIPS 2015 workshop slides, I realized that we need the reparameterization trick in order to backpropagate through a random node. Intuitively, in its original form, VAEs sample from a random node $z$ which is approximated by the parametric model $q(z \mid \phi, x)$ of the true posterior. Backprop cannot flow through a random node. Introducing a new parameter $\epsilon$ allows us to reparameterize $z$ in a way that allows backprop to flow through the deterministic nodes.
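A tiny base-R illustration of that last sentence (the variable names are just for this sketch): once $\epsilon$ has been drawn, $z = \mu + \sigma\epsilon$ is an ordinary differentiable function of $\mu$ and $\sigma$, so the derivatives backprop needs contain no randomness.

set.seed(42)
eps <- rnorm(1)                                      # draw the noise once, outside the network
dz  <- deriv(~ mu + sigma * eps, c("mu", "sigma"))   # symbolic derivatives of z w.r.t. mu, sigma
mu <- 0.5; sigma <- 1.3
z <- eval(dz)
c(z)                        # the reparameterized sample z
attr(z, "gradient")         # dz/dmu = 1, dz/dsigma = eps: deterministic once eps is fixed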
1,514
How does the reparameterization trick for VAEs work and why is it important?
Assume we have a normal distribution $q$ that is parameterized by $\theta$, specifically $q_{\theta}(x) = N(\theta,1)$. We want to solve the below problem $$ \text{min}_{\theta} \quad E_q[x^2] $$ This is of course a rather silly problem and the optimal $\theta$ is obvious. However, here we just want to understand how the reparameterization trick helps in calculating the gradient of this objective $E_q[x^2]$. One way to calculate $\nabla_{\theta} E_q[x^2]$ is as follows $$ \nabla_{\theta} E_q[x^2] = \nabla_{\theta} \int q_{\theta}(x) x^2 dx = \int x^2 \nabla_{\theta} q_{\theta}(x) \frac{q_{\theta}(x)}{q_{\theta}(x)} dx = \int q_{\theta}(x) \nabla_{\theta} \log q_{\theta}(x) x^2 dx = E_q[x^2 \nabla_{\theta} \log q_{\theta}(x)] $$ For our example where $q_{\theta}(x) = N(\theta,1)$, this method gives $$ \nabla_{\theta} E_q[x^2] = E_q[x^2 (x-\theta)] $$ Reparameterization trick is a way to rewrite the expectation so that the distribution with respect to which we take the gradient is independent of parameter $\theta$. To achieve this, we need to make the stochastic element in $q$ independent of $\theta$. Hence, we write $x$ as $$ x = \theta + \epsilon, \quad \epsilon \sim N(0,1) $$ Then, we can write $$ E_q[x^2] = E_p[(\theta+\epsilon)^2] $$ where $p$ is the distribution of $\epsilon$, i.e., $N(0,1)$. Now we can write the derivative of $E_q[x^2]$ as follows $$ \nabla_{\theta} E_q[x^2] = \nabla_{\theta} E_p[(\theta+\epsilon)^2] = E_p[2(\theta+\epsilon)] $$ Here is an IPython notebook I have written that looks at the variance of these two ways of calculating gradients. http://nbviewer.jupyter.org/github/gokererdogan/Notebooks/blob/master/Reparameterization%20Trick.ipynb
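Since the linked notebook may not be handy, here is a self-contained R version of the same comparison (taking $\theta = 2$ just for illustration). Both estimators are unbiased for $\nabla_{\theta} E_q[x^2] = 2\theta$, but their Monte Carlo variances are very different.

set.seed(0)
theta <- 2
n <- 100000
# score-function estimator: x^2 * (x - theta) with x ~ N(theta, 1)
x <- rnorm(n, mean = theta, sd = 1)
score_est <- x^2 * (x - theta)
# reparameterization estimator: 2 * (theta + eps) with eps ~ N(0, 1)
eps <- rnorm(n)
reparam_est <- 2 * (theta + eps)
c(true_gradient = 2 * theta,
  mean_score    = mean(score_est),
  mean_reparam  = mean(reparam_est))
c(var_score   = var(score_est),    # large
  var_reparam = var(reparam_est))  # much smaller (equals 4 in theory)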
1,515
How does the reparameterization trick for VAEs work and why is it important?
A reasonable example of the mathematics of the "reparameterization trick" is given in goker's answer, but some motivation could be helpful. (I don't have permissions to comment on that answer; thus here is a separate answer.) In short, we want to compute some value $G_\theta$ of the form, $$G_\theta = \nabla_{\theta}E_{x\sim q_\theta}[\ldots]$$ Without the "reparameterization trick", we can often rewrite this, per goker's answer, as $E_{x\sim q_\theta}[G^{est}_\theta(x)]$, where, $$G^{est}_\theta(x) = \ldots\frac{1}{q_\theta(x)}\nabla_{\theta}q_\theta(x) = \ldots\nabla_{\theta} \log(q_\theta(x))$$ If we draw an $x$ from $q_\theta$, then $G^{est}_\theta$ is an unbiased estimate of $G_\theta$. This is an example of "importance sampling" for Monte Carlo integration. If the $\theta$ represented some outputs of a computational network (e.g., a policy network for reinforcement learning), we could use this in back-propagation (apply the chain rule) to find derivatives with respect to network parameters. The key point is that $G^{est}_\theta$ is often a very bad (high variance) estimate. Even if you average over a large number of samples, you may find that its average seems to systematically undershoot (or overshoot) $G_\theta$. A fundamental problem is that essential contributions to $G_\theta$ may come from values of $x$ which are very rare (i.e., $x$ values for which $q_\theta(x)$ is small). The factor of $\frac{1}{q_\theta(x)}$ is scaling up your estimate to account for this, but that scaling won't help if you don't see such a value of $x$ when you estimate $G_\theta$ from a finite number of samples. The goodness or badness of $q_\theta$ (i.e., the quality of the estimate, $G^{est}_\theta$, for $x$ drawn from $q_\theta$) may depend on $\theta$, which may be far from optimum (e.g., an arbitrarily chosen initial value). It is a little like the story of the drunk person who looks for his keys near the streetlight (because that's where he can see/sample) rather than near where he dropped them. The "reparameterization trick" sometimes addresses this problem. Using goker's notation, the trick is to rewrite $x$ as a function of a random variable, $\epsilon$, with a distribution, $p$, that does not depend on $\theta$, and then rewrite the expectation in $G_\theta$ as an expectation over $p$, $$G_\theta = \nabla_\theta E_{\epsilon\sim p}[J(\theta,\epsilon)] = E_{\epsilon\sim p}[ \nabla_\theta J(\theta,\epsilon)]$$ for some $J(\theta,\epsilon)$. The reparameterization trick is especially useful when the new estimator, $\nabla_\theta J(\theta,\epsilon)$, no longer has the problems mentioned above (i.e., when we are able to choose $p$ so that getting a good estimate does not depend on drawing rare values of $\epsilon$). This can be facilitated (but is not guaranteed) by the fact that $p$ does not depend on $\theta$ and that we can choose $p$ to be a simple unimodal distribution. However, the reparameterization trick may even "work" when $\nabla_\theta J(\theta,\epsilon)$ is not a good estimator of $G_\theta$. Specifically, even if there are large contributions to $G_\theta$ from $\epsilon$ which are very rare, we consistently don't see them during optimization and we also don't see them when we use our model (if our model is a generative model). In slightly more formal terms, we can think of replacing our objective (expectation over $p$) with an effective objective that is an expectation over some "typical set" for $p$. 
Outside of that typical set, our $\epsilon$ might produce arbitrarily poor values of $J$ -- see Figure 2(b) of Brock et al. for a GAN evaluated outside the typical set sampled during training (in that paper, smaller truncation values correspond to latent variable values farther from the typical set, even though those values have higher probability). I hope that helps.
1,516
How does the reparameterization trick for VAEs work and why is it important?
Let me first explain why we need the reparameterization trick in a VAE. A VAE has an encoder and a decoder. The latent code Z is sampled from q(z∣ϕ,x), the encoder's approximation to the true posterior. To implement the encoder and decoder as neural networks, you need to backpropagate through that random sampling, and that is the problem: backpropagation cannot flow through a random node. To overcome this obstacle, we use the reparameterization trick. Now let's come to the trick. Since our approximate posterior is normally distributed, we can express it in terms of another normal distribution: we write Z using a standard-normally distributed ε. Why is this relevant? Instead of saying that Z is sampled from q(z∣ϕ,x), we can say that Z is a deterministic function of (ε, µ, L), where µ and L come from the encoder network. Therefore, during backpropagation, all we need are the partial derivatives w.r.t. µ and L; ε is irrelevant for taking derivatives.
1,517
How does the reparameterization trick for VAEs work and why is it important?
I thought the explanation found in Stanford CS228 course on probabilistic graphical models was very good. It can be found here: https://ermongroup.github.io/cs228-notes/extras/vae/ I've summarized/copied the important parts here for convenience/my own understanding (although I strongly recommend just checking out the original link). So, our problem is that we have this gradient we want to calculate: $$\nabla_\phi \mathbb{E}_{z\sim q(z|x)}[f(x,z)]$$ If you're familiar with score function estimators (I believe REINFORCE is just a special case of this), you'll notice that is pretty much the problem they solve. However, the score function estimator has a high variance, leading to difficulties in learning models much of the time. So, under certain conditions, we can express the distribution $q_\phi (z|x)$ as a 2-step process. First we sample a noise variable $\epsilon$ from a simple distribution $p(\epsilon)$ like the standard Normal. Next, we apply a deterministic transformation $g_\phi(\epsilon, x)$ that maps the random noise onto this more complex distribution. This second part is not always possible, but it is true for many interesting classes of $q_\phi$. As an example, let's use a very simple q from which we sample. $$z \sim q_{\mu, \sigma} = \mathcal{N}(\mu, \sigma)$$ Now, instead of sampling from $q$, we can rewrite this as $$ z = g_{\mu, \sigma}(\epsilon) = \mu + \epsilon\cdot\sigma$$ where $\epsilon \sim \mathcal{N}(0, 1)$. Now, instead of needing to get the gradient of an expectation of q(z), we can rewrite it as the gradient of an expectation with respect to the simpler function $p(\epsilon)$. $$\nabla_\phi \mathbb{E}_{z\sim q(z|x)}[f(x,z)] = \mathbb{E}_{\epsilon \sim p(\epsilon)}[\nabla_\phi f(x,g(\epsilon, x))]$$ This has lower variance, for imo, non-trivial reasons. Check part D of the appendix here for an explanation: https://arxiv.org/pdf/1401.4082.pdf
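A quick sanity check of that two-step construction in R (with arbitrary values $\mu = -1$, $\sigma = 2$): drawing $\epsilon$ from $N(0,1)$ and applying $g(\epsilon) = \mu + \sigma\epsilon$ gives samples distributed like draws from $N(\mu, \sigma)$ taken directly.

set.seed(7)
mu <- -1; sigma <- 2; n <- 100000
direct  <- rnorm(n, mean = mu, sd = sigma)   # sample z ~ q directly
eps     <- rnorm(n)                          # sample the simple noise distribution p
reparam <- mu + sigma * eps                  # deterministic transform g(eps)
rbind(direct  = c(mean = mean(direct),  sd = sd(direct)),
      reparam = c(mean = mean(reparam), sd = sd(reparam)))
ks.test(direct, reparam)   # two-sample test; typically no evidence of any difference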
1,518
How does the reparameterization trick for VAEs work and why is it important?
We have our probabilistic model and want to recover the parameters of the model. We reduce our task to optimizing the variational lower bound (VLB). To do this we need to be able to do two things: calculate the VLB, and get the gradient of the VLB. The authors suggest using a Monte Carlo estimator for both, and they introduce this trick to get a more precise Monte Carlo gradient estimator of the VLB. It is just an improvement of the numerical method.
1,519
How does the reparameterization trick for VAEs work and why is it important?
The reparameterization trick reduces the variance of the MC estimator for the gradient dramatically. So it's a variance reduction technique: Our goal is to find an estimate of $$ \nabla_\phi \mathbb E_{q(z^{(i)} \mid x^{(i)}; \phi)} \left[ \log p\left( x^{(i)} \mid z^{(i)}, w \right) \right] $$ We could use the "Score function estimator": $$ \nabla_\phi \mathbb E_{q(z^{(i)} \mid x^{(i)}; \phi)} \left[ \log p\left( x^{(i)} \mid z^{(i)}, w \right) \right] = \mathbb E_{q(z^{(i)} \mid x^{(i)}; \phi)} \left[ \log p\left( x^{(i)} \mid z^{(i)}, w \right) \nabla_\phi \log q_\phi(z)\right] $$ But the score function estimator has high variance. E.g. if the probability $p\left( x^{(i)} \mid z^{(i)}, w \right)$ is very small then the absolute value of $\log p\left( x^{(i)} \mid z^{(i)}, w \right)$ is very large and the value itself is negative. So we would have high variance. With Reparametrization $z^{(i)} = g(\epsilon^{(i)}, x^{(i)}, \phi)$ we have $$ \nabla_\phi \mathbb E_{q(z^{(i)} \mid x^{(i)}; \phi)} \left[ \log p\left( x^{(i)} \mid z^{(i)}, w \right) \right] = \mathbb E_{p(\epsilon^{(i)})} \left[ \nabla_\phi \log p\left( x^{(i)} \mid g(\epsilon^{(i)}, x^{(i)}, \phi), w \right) \right] $$ Now the expectation is w.r.t. $p(\epsilon^{(i)})$ and $p(\epsilon^{(i)})$ is independent of the gradient parameter $\phi$. So we can put the gradient directly inside the expectation which can be easily seen by writing out the expectation explicitly. The gradient values are much smaller. Therefore, we have (intuitively) lower variance. Note: We can do this reparametrization trick only if $z^{(i)}$ is continuous so we can take the gradient of $z^{(i)} = g(\epsilon^{(i)}, x^{(i)}, \phi)$.
1,520
How does the reparameterization trick for VAEs work and why is it important?
The issue is not that we cannot backprop through a “random node” in any technical sense. Rather, backpropagating would not compute an estimate of the derivative: without the reparameterization trick, we have no guarantee that sampling large numbers of z will help converge to the right estimate of ∇θ. The same problem arises with the ELBO we want to estimate. I also find the reparameterization trick easier to understand by looking at how the ELBO interacts with the posterior during KL minimization. Source: https://gregorygundersen.com/blog/2018/04/29/reparameterization/
1,521
What misused statistical terms are worth correcting?
It can be futile to fight against shifts in language. But:

parameter does not mean variable

In classical statistics, which in this case starts precisely with R.A. Fisher who first used the term with this meaning, a parameter is an unknown constant to be estimated, say a population mean or correlation. In mathematics, there are related but not identical meanings, as when a curve is given parametrically. In many sciences, parameter is just another word for a measure (itself a term dense with mathematical meaning), property or variable, say length or conductivity or porosity or virtue, as the case may be. Naturally, an individual's length or virtue is unknown before it is measured, but statistically minded people can be bemused by its use for a set of such measurements. In ordinary or vulgar parlance, parameters (almost always plural) often mean the limits of something, say a personal relationship or a political policy, perhaps stemming from some original confusion with perimeter. With high prior probability it is to be presumed that Bayesians will speak for themselves on their own usages (grateful nod to @conjugateprior).

skewed does not mean biased

For a century or more, skewness has had a specific statistical sense of referring to asymmetry of distributions, whether assessed graphically, measured numerically, or presumed theoretically as a matter of faith or hope. For much longer, or so it may be guessed, bias has meant being wrong on average, which -- so long as we know the truth, meaning a true or correct value -- can be quantified as systematic error. Skewed in ordinary language has a common sense of being warped or distorted, and thus of being incorrect, wrong and so also biased too. That sense (so far as I have noticed, only quite recently) has begun filtering back into statistical discussions, so that the original meaning of skewness is in some danger of being blurred or submerged.

correlation does not mean agreement

Correlation has attracted several precise senses in statistics, which have in common an idea of a bivariate relationship perfect in some precise sense: the leading cases are linear and monotone relationship. It is often diluted, even in statistical discussions, to mean almost any kind of relationship or association. What correlation does not mean, necessarily, is agreement: thus $y = a + bx$ implies Pearson correlation of $1$ or $-1$ so long as $b \ne 0$, but agreement $y = x$ requires the very strict condition $a = 0, b = 1$ (see the small R check after this answer).

unique does not mean distinct

It is quite common to talk about the distinct values of data as unique, but unique is still ideally better preserved as meaning occurring once only. My own guess is that some of the blame stems from the Unix [sic] utility uniq and its imitators, which reduce possibly repeated values to a set in which each value really is unique. The usage, on this guess, conflates input and output of a program. (Conversely, if we talk of duplicates in data, we rarely restrict ourselves to doubletons that occur precisely twice. The term replicates would make more sense linguistically but has been pre-empted for deliberate replication of controls in experiments; the resulting response values are usually not at all identical, which is much of the point.)

samples are rarely repeated

In statistics, a sample includes several values, and repeated sampling is a high theoretical virtue, but one rarely practised, except by simulation, which is our customary term for any kind of faking in silico. In many sciences, a sample is a single object, consisting of a lump, chunk or dollop of water, soil, sediment, rock, blood, tissue, or other substances varying from attractive through benign to disgusting; far from being exceptional, taking many samples may be essential for any serious analysis. Here every field's terminology makes perfect sense to its people, but translation is sometimes needed.

error does not usually mean mistake; as Harold Jeffreys pointed out, the primary sense is erratic, not erroneous.

Nevertheless, we should be wary of our own sins or quirks of terminology:

expected values or expectations (for means over the possible outcomes) may not be what you expect at all, and could even be impossible: in tossing a die fairly with outcomes 1 to 6, the expected value is 3.5
regression is not going backwards
stationary does not mean immobile or fixed
confidence has nothing to do with anyone's mental or psychological state
significance has only sometimes its everyday meaning
exact is often an honorific term, referring to a conveniently tractable solution or calculation rather than one appropriate to the problem
right-skewed distributions to many look skewed left, and vice versa (and the terminology of right and left for skewness assumes that you are looking at something like a conventional histogram, with horizontal magnitude axis)
the lognormal is so called because it's an exponentiated normal, but the lognormal is more normal than the normal
the Gaussian was discovered by De Moivre
Poisson didn't discover the Poisson, let alone Poisson regression
the bootstrap won't help you with your footwear
the jackknife doesn't cut
kurtosis is not a medical condition
stem-and-leaf plots don't refer to plants
a dummy variable is useful, not pointless or stupid
who on Earth (or anywhere else) thinks that heteroscedasticity is really a preferable term over unequal variability?
robust now has at least two major technical meanings for different groups, neither of which inhibits its frequent use, even in technical discussions, to mean merely something like "asserted to behave well"
IV now has at least two major meanings for different groups
factor now has at least two major meanings for different groups
normalize and standardize have uncountably many meanings (we really need to standardize there)
versus describing a graph means vertical variable versus horizontal variable, unless it means the opposite
and (last but not least, to coin a phrase) statistics has at least three major meanings.

Notes: Despite any appearances to the contrary, I think this is a good, serious question. Fashions shift. Well into the twentieth century, it seems that many people (no names, no pack-drill, but Karl Pearson could be mentioned) could only invent terms by reaching for their Greek and Latin dictionaries. (It would be unfair not to give him credit for scatter plot.) But R.A. Fisher did hijack many pre-existing English words, including variance, sufficiency, efficiency and likelihood. More recently, J.W. Tukey was a master in using homely terms, but few should feel distress that sploms and badmandments did not catch on. One comment is based on recollection of "Life is [...] Multiplicative rather than additive: the log normal distribution is more normal than the normal." Anon. 1962. Bloggins's working rules. In Good, I.J. (Ed.) The scientist speculates: an anthology of partly-baked ideas. London: Heinemann, 212-213 (quotation on p.213).
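The correlation-versus-agreement point is easy to check in R (made-up numbers, with a = 10 and b = 0.5):

x <- c(2, 4, 6, 8, 10)
y <- 10 + 0.5 * x       # y = a + b x, a perfect linear function of x
cor(x, y)               # Pearson correlation is exactly 1 ...
all(y == x)             # ... yet y never agrees with x
mean(abs(y - x))        # average size of the disagreement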
1,522
What misused statistical terms are worth correcting?
Some of the things I encounter: Treating significance level and CI coverage probabilities as interchangeable, so that people end up doing things like speaking of "95% significance". [What's worse is when people who make such errors point to their lecture notes -- or even textbook -- as support for this; in other words the mistake is not theirs, but is being compounded a hundredfold or many-thousands-fold, and worse, even if they understand it correctly, they may actually have to repeat the error anyway, to pass the subject.] There's also a common tendency to think that "significance" somehow exists outside a specific hypothesis/question (leading to questions like "are my data significant" without any clear notion of what question is to be addressed). [A related issue is the "what test should I use for these data?" as if it were the data - rather than the question to be answered - that's the driver of choice of analysis. (While the "design" of the study can impact the specific tests used, the question of interest is more important -- for example, if you have three groups available but your question of interest only relates to a comparison of two of them, the fact that you have three doesn't force you to do a one-way type analysis rather than a straight comparison of the two groups of interest ... as long as your choice of analysis doesn't derive from what the data show. Ideally you plan your questions and analyses before you have data, rather than throwing analysis at data and see what sticks, which it seems post-hoc analysis questions - including "what test should I use for these data?" - tend to lead to.) An occasional tendency to refer to the complement of p-value as some sort of "confidence in", or "probability of" the alternative. "nonparametric data"; another one unfortunately found in a couple of books (and, sadly, in an article that purports to correct a common error) this one comes up so often that it's in my short list of automatically generated comments (which begins "Data are neither parametric nor nonparametric; those are adjectives that apply to models or techniques...") (thanks Nick Cox for reminding me of this particular bugbear) Usually what is intended is "non-normal data" but parametric doesn't imply normal, and having approximate normality doesn't imply we need parametric procedures. Similarly, non-normality doesn't imply we need non-parametric procedures. Occasionally, what is intended is "ordinal data" or "nominal data" but in neither case does that imply that finite-parametric models are inappropriate. A common tendency to misunderstand the meaning of "linear" in "linear model" in a way that would be inconsistent with the use of the term "linear" in "generalized linear model". This is partly the fault of the way we use terminology. conflating the mean-minus-median kind of skewness with third moment skewness, and conflating a zero in either (or even both) with symmetry. Both errors are frequently found in basic texts widely used in some particular application areas. [There's a related error of treating zero skewness and zero excess kurtosis as implying normality] this one is so common it's becoming hard to call it an error any more (due in part to the efforts of a particular program) -- calling excess kurtosis simply "kurtosis"; a mistake pretty much guaranteed to lead to communication problems.
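On the mean-minus-median versus third-moment skewness point, here is a small made-up dataset in R where the two measures disagree in sign: the mean sits above the median, yet the standardized third moment is negative.

x <- c(rep(0, 5), rep(6, 4), -10)        # five zeros, four sixes, one low outlier
mean(x) - median(x)                      # 1.4: mean is to the right of the median
z <- (x - mean(x)) / sd(x)
mean(z^3)                                # about -0.9: third-moment skewness is negative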
1,523
What misused statistical terms are worth correcting?
"Data" is plural. (The singular is "datum").
1,524
What misused statistical terms are worth correcting?
While not strictly a statistical term, I vote to retire endogeneity. It's used to refer to everything from reverse causation through confounding to selection and collider bias, when all people really want to do is say: "That effect is not identified".
1,525
What misused statistical terms are worth correcting?
Percent vs. Percentage Points: If something increases from 1% to 2%, it increased by 100%. Or: you can say that it increased by 1 percentage point. Stating that the increase was 1% is very misleading.
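A quick arithmetic check in R makes the distinction concrete (the numbers are made up purely for illustration):

old <- 0.01                 # 1%
new <- 0.02                 # 2%
(new - old) * 100           # change in percentage points: 1
(new - old) / old * 100     # relative change in percent: 100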
1,526
What misused statistical terms are worth correcting?
"Regression towards the mean" does not mean that if we've observed a certain number of iid samples below expected value, the next iid samples are likely to be above the expected value.
1,527
What misused statistical terms are worth correcting?
Kurtosis does not measure "peakedness." By definition, it is the expected value (average) of $Z^4$.* Thus, $|Z|$-values less than 1 (corresponding to data values within one standard deviation of the mean, where any "peak" would be) contribute very little to kurtosis; nearly all the contribution to kurtosis is from $|Z|$-values greater than 1 (outliers in particular). See http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4321753/ , Figures 2 and 3 in particular. * Subtract 3 or not; it makes no difference to this point.
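A quick numerical illustration in R (simulated heavy-tailed data, purely to make the point): nearly all of the average of $Z^4$ comes from observations more than one standard deviation from the mean.

set.seed(2)
x <- rt(1e5, df = 5)                      # a heavy-tailed sample
z <- (x - mean(x)) / sd(x)
mean(z^4)                                 # kurtosis (not excess)
mean(z^4 * (abs(z) < 1))                  # contribution from the "peak" region: tiny
mean(z^4 * (abs(z) >= 1))                 # contribution from beyond one SD: almost everything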
1,528
What misused statistical terms are worth correcting?
I find abbreviations that aren't clearly indicated are a real problem. For example, I see things like GLM and nowhere is it specified if this means general linear model or generalized linear model. One can usually figure out what is being referenced after digging into the context, but I find this is particularly troublesome for students just starting to learn about statistical models. Another example of this is IV. Does this mean instrumental variable or independent variable? Often it's not made clear until you examine the context. Something else I see confusion over is the pair "moderator" and "interaction." Also, population (as in the population in general) versus the population of interest seems to confuse new students unless it is made very clear.
1,529
What misused statistical terms are worth correcting?
One that is common in everyday language: average To the average person out there (bitter irony fully intended), the mean, median, mode and expected value of anything seem to be the same. They have a natural tendency to do a point estimation, with the unconscious and unassailable assumption that there is an underlying normal distribution. And the equally unconscious assumption of a very small variance. The belief that such an estimation 1) exists and 2) will be very useful for them, because they can take it as a practically certain predictor, is so ingrained, that it is basically impossible to convince them otherwise. For a real-world example, try to talk to a cook who is asking "what is the average size potato", absolutely certain that if you tell him a number, he will be able to use this one for any recipe which specifies a number of potatoes, and have it come out perfect every time. And getting angry at you for trying to tell him "there is no such number". Sadly, it happens in situations with much higher stakes than making soup.
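A tiny R sketch (the potato weights are invented, right-skewed numbers) shows why a single "average" hides what the cook actually needs to know:

set.seed(3)
potato_g <- rlnorm(1000, meanlog = log(150), sdlog = 0.6)   # made-up weights in grams
mean(potato_g)                    # pulled upward by a few huge potatoes
median(potato_g)                  # the "typical" potato is lighter than the mean
quantile(potato_g, c(0.1, 0.9))   # the spread matters as much as any point estimate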
1,530
What misused statistical terms are worth correcting?
Linear means:

1. Line-like, as in $y = a + bx$ from elementary algebra. In this respect nonlinear means things like $y = a + bx + cx^{2}$, and also things like $y = ax^{b}$.

2. Linear in the parameters being estimated, as in a regression model (linear, logistic, GLM, etc.) entails a sum of products of scalar parameters and independent variables. In this respect nonlinear means things like $y = \frac{e^{a+bx}}{1+e^{a+bx}}$ and $y = a + bx + x\max(x-\theta,0)$.

3. Linear meaning the opposite of dynamic, as in: whatever a dependent variable is a function of, it is not a function of its own previous values. In this respect nonlinear means things like $y_{t} = a + by_{t-1} + cx$ and $y_{t}-y_{t-1} = a + b(y_{t-1} - x_{t-1}) + c(x_{t}-x_{t-1}) + dx_{t-1}$.

Here $y$ is a dependent variable, $x$ is an independent variable, and $a, b, c, d$, and $\theta$ are parameters in all the above examples.
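A short R sketch (simulated data) separates senses 1 and 2: a quadratic curve is not line-like, but it is still linear in the parameters, so lm() fits it; $y = ax^{b}$ is nonlinear in the parameters and needs something like nls(). The data and starting values are made up for illustration.

set.seed(4)
x  <- runif(100, 1, 10)
y1 <- 2 + 3 * x + 0.5 * x^2 + rnorm(100)
lm(y1 ~ x + I(x^2))                              # linear in the parameters (sense 2), curved in x (not sense 1)
y2 <- 2 * x^1.5 * exp(rnorm(100, sd = 0.1))
nls(y2 ~ a * x^b, start = list(a = 1, b = 1))    # nonlinear in the parameters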
1,531
What misused statistical terms are worth correcting?
The question was about uses of statistical terms that we should CARE to correct. I have been correcting my millennial kids' use of the term 'random' to mean things that are the opposite of random for 10 years now. Considering how many of my trainees struggle to produce a random data sample that is actually random, which happened even before this use of the word, the obfuscation of this term in everyday slang is a crisis. From the OnlineSlangDictionary: Definition of random random adjective unexpected and surprising. All of the sudden this guy jumped out from behind the bushes, it was so random! The street cleaner never comes down our street. How random. unexpectedly great. The party was totally random.
1,532
What misused statistical terms are worth correcting?
There are already too many great examples mentioned by Glen and Nick... not much left! Some aspects of regression:

error term and residual (it is somewhat funny when people are proud that their residuals are uncorrelated with the regressors)

prediction and estimation (should we even stop making the distinction when they are about the predicted random effects?)

prediction/forecast interval versus confidence interval. I think there is a probability > 0.5 of quoting the wrong one (see the sketch after this answer).

regressor (column in the design matrix) versus covariable et al.

Especially in technical situations where the distinction is essential, many people (including myself) tend to be imprecise.
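For the interval confusion in particular, here is a minimal R sketch (simulated data) of the difference between the two calls, which also touches the error-term-versus-residual point:

set.seed(5)
x <- runif(50)
y <- 1 + 2 * x + rnorm(50, sd = 0.5)
fit <- lm(y ~ x)
new <- data.frame(x = 0.5)
predict(fit, new, interval = "confidence")   # interval for the mean response at x = 0.5
predict(fit, new, interval = "prediction")   # wider interval for a single new observation
cor(resid(fit), x)                           # essentially zero by construction; this says nothing
                                             # about the unobservable error term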
1,533
What misused statistical terms are worth correcting?
In insurance environments especially, it is common to use variance to refer to any sort of difference, rather than the mean of the squared differences between each data point and the mean of the data set.
1,534
What misused statistical terms are worth correcting?
Fixed effects and random effects can mean different things for different people. In econometrics fixed effects are actually random and when you think about it every effect in statistics is random so naming something random does not give any meaningful additional information.
1,535
What misused statistical terms are worth correcting?
Bayesian Students learning it might not have trouble telling you whether something "looks" Bayesian, but ask them to solve a problem with a frequentist and a Bayesian approach and they'll probably fail. In my experience students end up being taught that it's just a philosophical difference, with no concrete example that shows the same problem being attacked with both approaches. Now ask them why someone might take a frequentist approach in their example; chances are their best explanation would be something like "well, back in the old days, computers didn't exist..."
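A minimal sketch of "the same problem both ways" (the counts are made up: 7 successes in 20 trials), using a conjugate Beta(1, 1) prior for the Bayesian side:

x <- 7; n <- 20
binom.test(x, n, p = 0.5)                  # frequentist test and confidence interval
qbeta(c(0.025, 0.975), 1 + x, 1 + n - x)   # Bayesian 95% credible interval from the Beta(1 + x, 1 + n - x) posterior
pbeta(0.5, 1 + x, 1 + n - x)               # posterior probability that the true proportion is below 0.5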
1,536
What misused statistical terms are worth correcting?
Risk

Risk does not mean probability. Risk is the sum of the costs of all outcomes, each cost multiplied by the probability of it happening. Risk is usually weighed against reward, which is the gain that we are seeking to achieve. Here is one example: How Deadly Is Your Kilowatt. Here the risks — the number of dead people for different sources of energy — are weighed against the reward — terawatt hours of energy produced by these sources. So for instance: the risk of nuclear power is not the probability that a meltdown will happen; it is the probability that a meltdown will happen, multiplied by the number of people that die from it, summed with the number of people that die from normal operations multiplied by the probability that operations remain normal.
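A back-of-the-envelope R sketch (the probabilities and death counts are invented; only the structure of the calculation matters):

p_outcome <- c(normal = 0.999, meltdown = 0.001)    # probability of each outcome
deaths    <- c(normal = 0.1,   meltdown = 1000)     # cost of each outcome
sum(p_outcome * deaths)                             # risk = expected cost, not the meltdown probability alone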
1,537
What is rank deficiency, and how to deal with it?
Rank deficiency in this context says there is insufficient information contained in your data to estimate the model you desire. It stems from many origins. I'll talk here about modeling in a fairly general context, rather than explicitly logistic regression, but everything still applies to the specific context.

The deficiency may stem from simply too little data. In general, you cannot uniquely estimate n parameters with fewer than n data points. That does not mean that all you need are n points, because if there is any noise in the process, you would get rather poor results. You need more data to help the algorithm choose a solution that will represent all of the data, in a minimum-error sense. This is why we use least squares tools. How much data do you need? I was always asked that question in a past life, and the answer was more than you have, or as much as you can get. :)

Sometimes you may have more data than you need, but some (too many) points are replicates. Replication is GOOD in the sense that it helps to reduce the noise, but it does not help to increase numerical rank. Thus, suppose you have only two data points. You cannot estimate a unique quadratic model through the points. A million replicates of each point will still not allow you to fit more than a straight line, through what are still only effectively a pair of points. Essentially, replication does not add information content. All it does is decrease noise at locations where you already have information.

Sometimes you have information in the wrong places. For example, you cannot fit a two-dimensional quadratic model if all you have are points that all lie in a straight line in two dimensions. That is, suppose you have points scattered only along the line x = y in the plane, and you wish to fit a model for the surface z(x,y). Not even with zillions of points (none of them replicates) will you have sufficient information to intelligently estimate more than a constant model. Amazingly, this is a common problem that I've seen in sampled data. The user wonders why they cannot build a good model. The problem is built into the very data they have sampled.

Sometimes it is simply choice of model. This can be viewed as "not enough data", but from the other side. You wish to estimate a complicated model, but have provided insufficient data to do so.

In all of the above instances the answer is to get more data, sampled intelligently from places that will provide information about the process that you currently lack. Design of experiments is a good place to start.

However, even good data is sometimes inadequate, at least numerically so. (Why do bad things happen to good data?) The problem here may be model related. It may lie in nothing more than a poor choice of units. It may stem from the computer programming done to solve the problem. (Ugh! Where to start?)

First, let's talk about units and scaling. Suppose I try to solve a problem where one variable is MANY orders of magnitude larger than another. For example, suppose I have a problem that involves my height and my shoe size. I'll measure my height in nanometers. So my height would be roughly 1.78 billion (1.78e9) nanometers. Of course, I'll choose to measure my shoe size in kilo-parsecs, so 9.14e-21 kilo-parsecs. When you do regression modeling, linear regression is all about linear algebra, which involves linear combinations of variables. The problem here is that these numbers differ by hugely many orders of magnitude (and are not even in the same units). The mathematics will fail when a computer program tries to add and subtract numbers that vary by so many orders of magnitude (for a double-precision number, that absolute limit is roughly 16 powers of 10). The trick is usually to use common units, but on some problems even that is an issue when variables vary by too many orders of magnitude. More important is to scale your numbers to be similar in magnitude.

Next, you may see problems with big numbers and small variation in those numbers. Thus, suppose you try to build a moderately high-order polynomial model with data where your inputs all lie in the interval [1,2]. Squaring, cubing, etc., numbers that are on the order of 1 or 2 will cause no problems when working in double-precision arithmetic. Alternatively, add 1e12 to every number. In theory, the mathematics will allow this. All it does is shift any polynomial model we build on the x-axis. It would have exactly the same shape, but be translated by 1e12 to the right. In practice, the linear algebra will fail miserably due to rank deficiency problems. You have done nothing but translate the data, but suddenly you start to see singular matrices popping up. Usually the comment made will be a suggestion to "center and scale your data". Effectively this says to shift and scale the data so that it has a mean near zero and a standard deviation that is roughly 1. That will greatly improve the conditioning of most polynomial models, reducing the rank deficiency issues.

Other reasons for rank deficiency exist. In some cases it is built directly into the model. For example, suppose I provide the derivative of a function: can I uniquely infer the function itself? Of course not, as integration involves a constant of integration, an unknown parameter that is generally inferred by knowledge of the value of the function at some point. In fact, this sometimes arises in estimation problems too, where the singularity of a system is derived from the fundamental nature of the system under study.

I surely left out a few of the many reasons for rank deficiency in a linear system, and I've prattled along for too long now. Hopefully I managed to explain those I covered in simple terms, and a way to alleviate the problem.
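As a small illustration of the translation problem described above (made-up data; the exact behaviour can vary with platform and R version, but lm() will typically flag the translated fit as rank deficient):

set.seed(6)
x <- runif(50, 1, 2)
y <- 1 + x + x^2 + rnorm(50, sd = 0.1)
coef(lm(y ~ x + I(x^2)))          # well behaved: inputs are of order 1
xbig <- x + 1e12                  # the same data, merely translated
coef(lm(y ~ xbig + I(xbig^2)))    # typically returns NA for I(xbig^2): numerically rank deficient
xc <- as.numeric(scale(xbig))     # center and scale, as suggested above
coef(lm(y ~ xc + I(xc^2)))        # conditioning restored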
1,538
What is rank deficiency, and how to deal with it?
For the definition of the rank of a matrix, you can refer to any good textbook on linear algebra, or have a look at the Wikipedia page. An $n \times p$ matrix $X$ is said to be full rank if $n \geq p$ and no column is a linear combination of the others. In that case, the $p \times p$ matrix $X^TX$ is positive definite, which implies that it has an inverse $(X^TX)^{-1}$. If $X$ is not full rank, one of the columns is fully explained by the others, in the sense that it is a linear combination of the others. A trivial example is when a column is duplicated. This can also happen if you have a 0-1 variable and a column consists of only 0 or only 1. In that case, the rank of the matrix $X$ is less than $p$ and $X^TX$ has no inverse. Since the solution of many regression problems (including logistic regression) involves computing the intermediate quantity $(X^TX)^{-1}$, it is then impossible to estimate the parameters of the model. Out of curiosity, you can check here how this term is involved in the formula of multiple linear regression. That was it for absolute rank deficiency. But sometimes the problem shows up when the matrix $X$ is "almost" not full rank, as extensively detailed by @woodchips. This problem is usually referred to as multicollinearity. This issue is fairly common, you can find out more on how to deal with it on related posts here and there.
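A minimal sketch of the duplicated-column case in R (made-up numbers): the rank drops below the number of columns and $X^TX$ can no longer be inverted.

set.seed(7)
X <- cbind(1, rnorm(10), rnorm(10))   # intercept plus two predictors
X <- cbind(X, X[, 2])                 # duplicate the second predictor
qr(X)$rank                            # 3, although X has 4 columns: rank deficient
# solve(t(X) %*% X)                   # would stop with a "computationally singular" error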
1,539
What is rank deficiency, and how to deal with it?
user974's answer is fantastic from a modelling perspective and gui11aume's answer is fantastic from a mathematical perspective. I want to refine the former answer strictly from a mixed modelling perspective: specifically a generalized linear mixed modelling (GLMM) perspective. As you can see, you have referenced the R function mer_finalize, which is in the fantastic lme4 package. You also say that you are fitting a logistic regression model.

There are many issues that crop up with such types of numerical algorithms. The issue of the matrix structure of the model matrix of fixed effects is certainly worth considering, as user974 alluded to. But this is very easy to assess: simply calculate the model.matrix of your formula= and data= arguments and check whether it has full column rank, for example with qr() or the determinant of its cross-product (a minimal check is sketched below).

Random effects, however, greatly complicate the interpretation, the numerical estimation routine, and inference on the fixed effects (what you typically think of as regression coefficients in a "regular" regression model). Suppose in the simplest case you have fitted only a random-intercepts model. Then you are basically considering there to be thousands of unmeasured sources of heterogeneity that are held constant in repeated measures within clusters. You estimate a "grand" intercept, but account for the heterogeneity by assuming that the cluster-specific intercepts have some mean-zero normal distribution. The intercepts are iteratively estimated and used to update model effects until convergence is achieved (the log likelihood--or an approximation of it--is maximized). The mixed model is very easy to envision, but mathematically the likelihood is very complex and prone to issues with singularities, local minima, and boundary points (odds ratios of 0 or infinity). Mixed models do not have quadratic likelihoods like canonical GLMs.

Unfortunately, Venables and Ripley did not invest much into diagnostics for convergence failures like yours. It is practically impossible to even speculate at the myriad of possible errors leading to such a message. Consider, then, the types of diagnostics I use below:

How many observations are there per cluster?
What are the results from a marginal model fit using GEE?
What is the ICC of the clusters? Is the within-cluster heterogeneity close to the between-cluster heterogeneity?
Fit a 1-step estimator and look at the estimated random effects. Are they approximately normal?
Fit the Bayesian mixed model and look at the posterior distribution for the fixed effects. Do they appear to have an approximately normal distribution?
Look at a panel plot of the clusters showing the exposure or regressor of interest against the outcome using a smoother. Are the trends consistent and clear, or are there many possible ways that such trends may be explained? (e.g. what is the "risk" among unexposed subjects; does the exposure appear protective or harmful?)
Is it possible to restrict the sample to subjects having a sufficient number of observations per person (say, n=5 or n=10) to estimate the "ideal" treatment effect?

Alternatively, you can consider some different modelling approaches:

Are there few enough clusters or time points that you can use a fixed effect (such as group indicators or a polynomial time effect) to model the cluster-level/autoregressive heterogeneity?
Is a marginal model appropriate (using a GEE to improve efficiency of standard error estimation, but still using only fixed effects)?
Could a Bayesian model with an informative prior on the random effects improve estimation?
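As a minimal version of the model-matrix check mentioned above (simulated stand-in data; in a real analysis you would pass your own formula= and data= arguments):

set.seed(8)
dat <- data.frame(x1 = rnorm(40),
                  x2 = rep(c(0, 1), 20),
                  id = rep(1:10, each = 4))      # made-up stand-in for your data
X <- model.matrix(~ x1 + x2, data = dat)
qr(X)$rank == ncol(X)                            # FALSE would flag a rank-deficient fixed-effects design
det(crossprod(X))                                # a value near zero signals the same problem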
1,540
What is rank deficiency, and how to deal with it?
I also kept getting warnings about rank deficiency. In my case it seemed to be caused by the fact that I had a categorical variable where some of the categories were empty/not represented in the training set. When I created a category called "other" for the different low-frequency categories, I got rid of the warning. This was when building a linear regression with the train() function from caret. Here is the warning:

Warning messages:
In predict.lm(modelFit, newdata) : prediction from a rank-deficient fit may be misleading
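A hedged sketch of the fix described above, using only base R (the factor, the level names, and the frequency threshold are made up; the forcats package's fct_lump() offers a similar convenience):

set.seed(9)
f <- factor(sample(c("a", "b", "c", "d"), 100, replace = TRUE,
                   prob = c(0.48, 0.48, 0.02, 0.02)))
rare <- names(which(table(f) < 5))                            # threshold chosen for illustration
f2 <- factor(ifelse(f %in% rare, "other", as.character(f)))   # collapse low-frequency levels
table(f2)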
1,541
Including the interaction but not the main effects in a model
In my experience, not only is it necessary to have all lower-order effects in the model when they are connected to higher-order effects, but it is also important to properly model (e.g., allowing them to be nonlinear) main effects that are seemingly unrelated to the factors in the interactions of interest. That's because interactions between $x_1$ and $x_2$ can be stand-ins for main effects of $x_3$ and $x_4$. Interactions sometimes seem to be needed because they are collinear with omitted variables or omitted nonlinear (e.g., spline) terms.
1,542
Including the interaction but not the main effects in a model
You ask whether it's ever valid. Let me provide a common example, whose elucidation may suggest additional analytical approaches for you.

The simplest example of an interaction is a model with one dependent variable $Z$ and two independent variables $X$, $Y$ in the form $$Z = \alpha + \beta' X + \gamma' Y + \delta' X Y + \varepsilon,$$ with $\varepsilon$ a random variable having zero expectation, and using parameters $\alpha, \beta', \gamma',$ and $\delta'$. It's often worthwhile checking whether $\delta'$ approximates $\beta' \gamma' / \alpha$, because an algebraically equivalent expression of the same model is $$Z = \alpha \left(1 + \beta X + \gamma Y + \delta X Y \right) + \varepsilon$$ $$= \alpha \left(1 + \beta X \right) \left(1 + \gamma Y \right) + \alpha \left( \delta - \beta \gamma \right) X Y + \varepsilon$$ (where $\beta' = \alpha \beta$, etc). Whence, if there's a reason to suppose $\left( \delta - \beta \gamma \right) \sim 0$, we can absorb it in the error term $\varepsilon$. Not only does this give a "pure interaction", it does so without a constant term. This in turn strongly suggests taking logarithms. Some heteroscedasticity in the residuals--that is, a tendency for residuals associated with larger values of $Z$ to be larger in absolute value than average--would also point in this direction. We would then want to explore an alternative formulation $$\log(Z) = \log(\alpha) + \log(1 + \beta X) + \log(1 + \gamma Y) + \tau$$ with iid random error $\tau$. Furthermore, if we expect $\beta X$ and $\gamma Y$ to be large compared to $1$, we would instead just propose the model $$\log(Z) = \left(\log(\alpha) + \log(\beta) + \log(\gamma)\right) + \log(X) + \log(Y) + \tau$$ $$= \eta + \log(X) + \log(Y) + \tau.$$ This new model has just a single parameter $\eta$ instead of four parameters ($\alpha$, $\beta'$, etc.) subject to a quadratic relation ($\alpha \delta' = \beta' \gamma'$), a considerable simplification.

I am not saying that this is a necessary or even the only step to take, but I am suggesting that this kind of algebraic rearrangement of the model is usually worth considering whenever interactions alone appear to be significant. Some excellent ways to explore models with interaction, especially with just two and three independent variables, appear in chapters 10 - 13 of Tukey's EDA.
1,543
Including the interaction but not the main effects in a model
While it is often stated in textbooks that one should never include an interaction in a model without the corresponding main effects, there are certainly examples where this would make perfect sense. I'll give you the simplest example I can imagine. Suppose subjects randomly assigned to two groups are measured twice, once at baseline (i.e., right after the randomization) and once after group T received some kind of treatment, while group C did not. Then a repeated-measures model for these data would include a main effect for measurement occasion (a dummy variable that is 0 for baseline and 1 for the follow-up) and an interaction term between the group dummy (0 for C, 1 for T) and the time dummy. The model intercept then estimates the average score of the subjects at baseline (regardless of the group they are in). The coefficient for the measurement occasion dummy indicates the change in the control group between baseline and the follow-up. And the coefficient for the interaction term indicates how much bigger/smaller the change was in the treatment group compared to the control group. Here, it is not necessary to include the main effect for group, because at baseline, the groups are equivalent by definition due to the randomization. One could of course argue that the main effect for group should still be included, so that, in case the randomization failed, this will be revealed by the analysis. However, that is equivalent to testing the baseline means of the two groups against each other. And there are plenty of people who frown upon testing for baseline differences in randomized studies (of course, there are also plenty who find it useful, but this is another issue).
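A minimal simulation of this design in R (made-up data; an ordinary lm() is used here for brevity, ignoring the within-subject correlation that a repeated-measures or mixed model would handle with a random intercept):

set.seed(10)
n     <- 100
time  <- rep(c(0, 1), n)                       # 0 = baseline, 1 = follow-up
group <- rep(rbinom(n, 1, 0.5), each = 2)      # randomized, so groups are equivalent at baseline
y     <- 10 + 1 * time + 2 * time * group + rnorm(2 * n)
summary(lm(y ~ time + time:group))             # time main effect plus interaction, no group main effect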
1,544
Including the interaction but not the main effects in a model
The reason to keep the main effects in the model is for identifiability. Hence, if the purpose is statistical inference about each of the effects, you should keep the main effects in the model. However, if your modeling purpose is solely to predict new values, then it is perfectly legitimate to include only the interaction if that improves predictive accuracy.
Including the interaction but not the main effects in a model
This is implicit in many of the answers others have given, but the simple point is that models with a product term and with vs. without the moderator and predictor are just different models. Figure out what each means given the process you are modeling and whether a model without the moderator and predictor makes more sense given your theory or hypothesis. The observation that the product term is significant, but only when the moderator and predictor are not included, doesn't tell you anything (except maybe that you are fishing around for "significance") without a cogent explanation of why it makes sense to leave them out.
Including the interaction but not the main effects in a model
Arguably, it depends on what you're using your model for. But I've never seen a reason not to run and describe models with main effects, even in cases where the hypothesis is only about the interaction.
Including the interaction but not the main effects in a model
Various texts on regression will tell you that you should never include an interaction term without the base effects --- that is not correct. One circumstance where it is appropriate to include an interaction term in your model without a base effect is when you have nested variables in your model. For example, if you have a regression problem with one indicator variable married and another variable spouse_age, then you would use a model like this:

Response ~ married + married:spouse_age + other_vars

Note here that there is no base effect for spouse_age, since it is only applicable if the person is married. In the case where the married indicator is zero, the value of spouse_age is a placeholder value (and indeed, it should probably be coded as NA). If you were to include a base effect for the nested variable, this would bring the irrelevant placeholder value into the regression, which would give incorrect results.

If you would like some additional information on the use of interaction terms for "nested" variables like this, see this related question.
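A small simulated sketch of this nested setup in R (the data-generating numbers and the zero placeholder are made up purely for illustration):

set.seed(42)
n <- 200
married    <- rbinom(n, 1, 0.6)
# spouse_age only means something when married == 1; use an arbitrary placeholder otherwise
spouse_age <- ifelse(married == 1, rnorm(n, 45, 10), 0)
response   <- 1 + 2 * married + 0.1 * married * spouse_age + rnorm(n)

# no base effect for spouse_age: it enters only through the interaction,
# so the placeholder never influences the fit for unmarried subjects
fit <- lm(response ~ married + married:spouse_age)
summary(fit)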
Including the interaction but not the main effects in a model
Both x and y will be correlated with xy (unless you have taken a specific measure to prevent this by using centering). Thus if you obtain a substantial interaction effect with your approach, it will likely amount to one or more main effects masquerading as an interaction. This is not going to produce clear, interpretable results. What is desirable instead is to see how much the interaction can explain over and above what the main effects do, by including x, y, and (preferably in a subsequent step) xy.

As to terminology: yes, $\beta_0$ is called the "constant." On the other hand, "partial" has specific meanings in regression, so I wouldn't use that term to describe your strategy here.

Some interesting examples that arise once in a blue moon are described in this thread.
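A quick numerical illustration of the centering point (the variable names and distributions here are arbitrary):

set.seed(7)
x <- rnorm(100, mean = 5)
y <- rnorm(100, mean = 3)

cor(x, x * y)                                     # raw product term: typically strongly correlated with x
cor(x - mean(x), (x - mean(x)) * (y - mean(y)))   # centered product term: correlation is close to zero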
Including the interaction but not the main effects in a model
I would suggest it is simply a special case of model uncertainty. From a Bayesian perspective, you simply treat this in exactly the same way you would treat any other kind of uncertainty, by either:

- calculating its probability, if it is the object of interest, or
- integrating or averaging it out, if it is not of interest but may still affect your conclusions.

This is exactly what people do when testing for "significant effects" by using t-quantiles instead of normal quantiles. Because you have uncertainty about the "true noise level", you take this into account by using a more spread-out distribution in testing. So from your perspective the "main effect" is actually a "nuisance parameter" in relation to the question that you are asking. So you simply average out the two cases (or more generally, over the models you are considering). So I would have the (vague) hypothesis:

$$H_{\text{int}}: \text{The interaction between A and B is significant}$$

I would say that although not precisely defined, this is the question you want to answer here. And note that it is not only the verbal statement such as the above which "defines" the hypothesis, but the mathematical equations as well. We have some data $D$ and prior information $I$, and we simply calculate:

$$P(H_{\text{int}}|DI)=P(H_{\text{int}}|I)\frac{P(D|H_{\text{int}}I)}{P(D|I)}$$

(Small note: no matter how many times I write out this equation, it always helps me understand the problem better. Weird.) The main quantity to calculate is the likelihood $P(D|H_{\text{int}}I)$; this makes no reference to the model, so the model must have been removed using the law of total probability:

$$P(D|H_{\text{int}}I)=\sum_{m=1}^{N_{M}}P(DM_{m}|H_{\text{int}}I)=\sum_{m=1}^{N_{M}}P(M_{m}|H_{\text{int}}I)P(D|M_{m}H_{\text{int}}I)$$

where $M_{m}$ indexes the $m$th model and $N_{M}$ is the number of models being considered. The first term is the "model weight", which says how much the data and prior information support the $m$th model. The second term indicates how much the $m$th model supports the hypothesis. Plugging this equation back into the original Bayes theorem gives:

$$P(H_{\text{int}}|DI)=\frac{P(H_{\text{int}}|I)}{P(D|I)}\sum_{m=1}^{N_{M}}P(M_{m}|H_{\text{int}}I)P(D|M_{m}H_{\text{int}}I)$$
$$=\frac{1}{P(D|I)}\sum_{m=1}^{N_{M}}P(DM_{m}|I)\frac{P(M_{m}H_{\text{int}}D|I)}{P(DM_{m}|I)}=\sum_{m=1}^{N_{M}}P(M_{m}|DI)P(H_{\text{int}}|DM_{m}I)$$

And you can see from this that $P(H_{\text{int}}|DM_{m}I)$ is the "conditional conclusion" of the hypothesis under the $m$th model (this is usually all that is considered, for a chosen "best" model). Note that this standard analysis is justified whenever $P(M_{m}|DI)\approx 1$ - an "obviously best" model - or whenever $P(H_{\text{int}}|DM_{j}I)\approx P(H_{\text{int}}|DM_{k}I)$ - that is, all models give the same or similar conclusions. However, if neither condition is met, then Bayes' theorem says the best procedure is to average out the results, placing higher weights on the models that are most supported by the data and prior information.
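As a rough, purely illustrative sketch of the model-averaging idea in R, one could approximate posterior model weights with BIC (equal prior model weights assumed; the data and the two-model set are made up for illustration):

set.seed(3)
n <- 100
A <- rnorm(n)
B <- rnorm(n)
y <- 1 + 0.5 * A * B + rnorm(n)

m1 <- lm(y ~ A * B)   # main effects plus interaction
m2 <- lm(y ~ A:B)     # interaction only

# BIC-based approximation to posterior model probabilities
bics <- c(m1 = BIC(m1), m2 = BIC(m2))
w <- exp(-0.5 * (bics - min(bics)))
round(w / sum(w), 3)  # weights used to average conclusions across the models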
Including the interaction but not the main effects in a model
I will borrow a paragraph from the book An Introduction to Survival Analysis Using Stata by M. Cleves, R. Gutierrez, W. Gould, and Y. Marchenko, published by Stata Press, to answer your question:

"It is common to read that interaction effects should be included in the model only when the corresponding main effects are also included, but there is nothing wrong with including interaction effects by themselves. [...] The goal of a researcher is to parametrize what is reasonably likely to be true for the data considering the problem at hand and not merely following a prescription."
Including the interaction but not the main effects in a model
There are various processes in nature that involve only an interaction effect, and laws that describe them. For instance, Ohm's law. In psychology you have, for instance, the performance model of Vroom (1964): Performance = Ability x Motivation. Now, you might expect to find a significant interaction effect when this law is true. Regrettably, this is not the case. You might easily end up finding two main effects and an insignificant interaction effect (for a demonstration and further explanation, see Landsheer, van den Wittenboer and Maassen (2006), Social Science Research 35, 274-294).

The linear model is not very well suited for detecting interaction effects; Ohm might never have found his law had he used linear models. As a result, interpreting interaction effects in linear models is difficult. If you have a theory that predicts an interaction effect, you should include it even when insignificant. You may want to ignore main effects if your theory excludes those, but you will find that difficult, as significant main effects are often found in the case of a true data-generating mechanism that has only a multiplicative effect.

My answer is: yes, it can be valid to include a two-way interaction in a model without including the main effects. Linear models are excellent tools to approximate the outcomes of a large variety of data-generating mechanisms, but their formulas cannot easily be interpreted as a valid description of the data-generating mechanism.
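A small simulation of this point in R (all numbers chosen arbitrarily): even when the data are generated by a pure product, an additive linear model can fit well, so the interaction term is not guaranteed to stand out.

set.seed(5)
n <- 100
ability     <- runif(n, 1, 5)
motivation  <- runif(n, 1, 5)
performance <- ability * motivation + rnorm(n, sd = 3)

# the additive model approximates the product mechanism quite well,
# typically with clearly "significant" main effects
summary(lm(performance ~ ability + motivation))

# whether the interaction term reaches significance depends on the
# noise level and the ranges of the predictors
summary(lm(performance ~ ability * motivation))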
Including the interaction but not the main effects in a model
F = m*a, force equals mass times acceleration. It is not represented as F = m + a + ma, or some other linear combination of those parameters. Indeed, only the interaction between mass and acceleration would make sense physically.
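As a toy illustration, a regression on the product alone recovers this relationship (simulated values in arbitrary units):

set.seed(2)
m <- runif(100, 1, 10)                    # mass
a <- runif(100, 0, 5)                     # acceleration
Force <- m * a + rnorm(100, sd = 0.1)     # small measurement noise

# no intercept and no main effects: only the product term is needed
summary(lm(Force ~ m:a - 1))              # estimated coefficient is close to 1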
Including the interaction but not the main effects in a model
Is it ever valid to include a two-way interaction without the main effects? Yes, it can be valid and even necessary. If, for example, in case 2 you included a factor for the main effect (the average difference between the blue and red conditions), this would make the model worse.

What if your hypothesis is only about the interaction; do you still need to include the main effects? Your hypothesis might be true independent of there being a main effect, but the model might need it to best describe the underlying process. So yes, you should try it with and without.

Note: in the interaction-only case you need to center the coding of the "continuous" independent variable (the measurement in the example). Otherwise the interaction coefficients in the model will not be symmetrically distributed (there is no coefficient for the first measurement in the example).
Including the interaction but not the main effects in a model
It is very rarely a good idea to include an interaction term without the main effects involved in it. David Rindskopf of CCNY has written some papers about those rare instances. EDIT: One such paper is "Nonstandard loglinear models", Psychological Bulletin, 108, 150-162: https://psycnet.apa.org/record/1990-27319-001
Including the interaction but not the main effects in a model
This one is tricky and happened to me in my last project. I would explain it this way: let's say you had variables A and B which came out significant independently, and based on business sense you thought that an interaction of A and B seemed reasonable. You included the interaction, which came out significant, but B lost its significance. You would explain your model by showing both sets of results. The results would show that initially B was significant, but when seen in light of A it lost its sheen. So B is a good variable, but only when seen in light of the various levels of A (if A is a categorical variable).

It's like saying Obama is a good leader when seen in the light of his SEAL forces. So Obama*SEAL will be a significant variable, but Obama seen alone might not be as important. (No offense to Obama; just an example.)
Including the interaction but not the main effects in a model
If the variables in question are categorical, then including interactions without the main effects is just a reparameterization of the model, and the choice of parameterization depends on what you are trying to accomplish with your model. Interacting continuous variables with other continuous variables or with categorical variables is a whole different story. See this FAQ from UCLA's Institute for Digital Research and Education.
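A quick check of the reparameterization point for two categorical predictors in R (simulated data; the interaction-only formula gives cell-means coding):

set.seed(9)
A <- factor(sample(c("a1", "a2"), 60, replace = TRUE))
B <- factor(sample(c("b1", "b2"), 60, replace = TRUE))
y <- rnorm(60)

fit_full <- lm(y ~ A * B)       # intercept, main effects, interaction
fit_cell <- lm(y ~ A:B - 1)     # one mean per cell of A x B

all.equal(fitted(fit_full), fitted(fit_cell))   # TRUE: same model, different parameterization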
Including the interaction but not the main effects in a model
Yes, this can be valid, although it is rare. But in this case you still need to model the main effects, which you will afterwards regress out. Indeed, in some models only the interaction is interesting, such as in drug-testing/clinical models. This is, for example, the basis of the Generalized PsychoPhysiological Interactions (gPPI) model:

$$y = ax + bxh + ch$$

where $x$ and $y$ are voxels/regions of interest and $h$ is the block/event design. In this model, both $a$ and $c$ will be regressed out; only $b$ will be kept for inference (the beta coefficients). Indeed, both $a$ and $c$ represent spurious activity in our case, and only $b$ represents what cannot be explained by spurious activity: the interaction with the task.
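A toy sketch of that regression in R (entirely simulated time courses; no claim is made about real fMRI preprocessing):

set.seed(4)
nscan <- 200
x <- rnorm(nscan)                       # seed-region time course (simulated)
h <- rep(c(0, 1), each = nscan / 2)     # task on/off design (simulated)
y <- 0.5 * x + 0.8 * x * h + 0.3 * h + rnorm(nscan)

fit <- lm(y ~ x + x:h + h)              # y = a*x + b*x*h + c*h + error
coef(summary(fit))["x:h", ]             # b, the interaction term kept for inference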
Including the interaction but not the main effects in a model
The short answer: If you include interaction in the fixed effects, then the main effects are automatically included whether or not you specifically include them in your code. The only difference is your parametrization, i.e., what the parameters in your model mean (e.g., are they group means or are they differences from reference levels).

Assumptions: I assume we are working in the general linear model and are asking when we can use the fixed effects specification $AB$ instead of $A + B + AB$, where $A$ and $B$ are (categorical) factors.

Mathematical clarification: We assume that the response vector $Y \sim \mathcal N(\xi, \sigma^2 I_n)$. If $X_A$, $X_B$ and $X_{AB}$ are the design matrices for the three factors, then a model with "main effects and interaction" corresponds to the restriction $\xi \in \operatorname{span}\{X_A, X_B, X_{AB}\}$. A model with "only interaction" corresponds to the restriction $\xi \in \operatorname{span}\{X_{AB}\}$. However, $\operatorname{span}\{X_{AB}\} = \operatorname{span}\{X_A, X_B, X_{AB}\}$. So, it's two different parametrizations of the same model (or the same family of distributions, if you are more comfortable with that terminology).

I just saw that David Beede provided a very similar answer (apologies), but I thought I would leave this up for those who respond well to a linear algebra perspective.
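The span equality can be checked numerically in R (simulated factors; this only verifies the column-space claim, nothing more):

set.seed(11)
A <- factor(sample(1:3, 60, replace = TRUE))
B <- factor(sample(1:2, 60, replace = TRUE))

X_full <- model.matrix(~ A * B)      # intercept, main effects, interaction
X_int  <- model.matrix(~ A:B - 1)    # one column per cell of A x B

qr(X_full)$rank                      # rank of span{X_A, X_B, X_AB}
qr(X_int)$rank                       # rank of span{X_AB}
qr(cbind(X_full, X_int))$rank        # no increase: the two column spaces coincide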
T-test for non normal when N>50?
Normality assumption of a t-test

Consider a large population from which you could take many different samples of a particular size. (In a particular study, you generally collect just one of these samples.) The t-test assumes that the means of the different samples are normally distributed; it does not assume that the population is normally distributed. By the central limit theorem, means of samples from a population with finite variance approach a normal distribution regardless of the distribution of the population. Rules of thumb say that the sample means are basically normally distributed as long as the sample size is at least 20 or 30. For a t-test to be valid on a sample of smaller size, the population distribution would have to be approximately normal. The t-test is invalid for small samples from non-normal distributions, but it is valid for large samples from non-normal distributions.

Small samples from non-normal distributions

As Michael notes below, the sample size needed for the distribution of means to approximate normality depends on the degree of non-normality of the population. For approximately normal distributions, you won't need as large a sample as for a very non-normal distribution. Here are some simulations you can run in R to get a feel for this. First, here are a couple of population distributions.

curve(dnorm, xlim = c(-4, 4))               # Normal
curve(dchisq(x, df = 1), xlim = c(0, 30))   # Chi-square with 1 degree of freedom

Next are some simulations of samples from the population distributions. In each of these lines, "10" is the sample size, "100" is the number of samples and the function after that specifies the population distribution. They produce histograms of the sample means.

hist(colMeans(sapply(rep(10, 100), rnorm)), xlab = 'Sample mean', main = '')
hist(colMeans(sapply(rep(10, 100), rchisq, df = 1)), xlab = 'Sample mean', main = '')

For a t-test to be valid, these histograms should be normal.

require(car)
qqp(colMeans(sapply(rep(10, 100), rnorm)), xlab = 'Sample mean', main = '')
qqp(colMeans(sapply(rep(10, 100), rchisq, df = 1)), xlab = 'Sample mean', main = '')

Utility of a t-test

I have to note that all of the knowledge I just imparted is somewhat obsolete; now that we have computers, we can do better than t-tests. As Frank notes, you probably want to use Wilcoxon tests anywhere you were taught to run a t-test.
T-test for non normal when N>50?
The central limit theorem is less useful than one might think in this context. First, as someone pointed out already, one does not know if the current sample size is "large enough". Secondly, the CLT is more about achieving the desired type I error than about type II error. In other words, the t-test can be uncompetitive power-wise. That's why the Wilcoxon test is so popular. If normality holds, it is 95% as efficient as the t-test. If normality does not hold it can be arbitrarily more efficient than the t-test.
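A small simulation sketch of the power point in R (skewed data with a location-shift alternative; all numbers are arbitrary):

set.seed(13)
n <- 30
shift <- 0.5
sim <- replicate(2000, {
  x <- rexp(n) - 1            # skewed distribution with mean 0
  y <- rexp(n) - 1 + shift    # same shape, shifted location
  c(t = t.test(x, y)$p.value < 0.05,
    w = wilcox.test(x, y, exact = FALSE)$p.value < 0.05)
})
rowMeans(sim)   # empirical power; the Wilcoxon typically comes out ahead here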
T-test for non normal when N>50?
See my previous answer to a question on the robustness of the t-test. In particular, I recommend playing around with the onlinestatsbook applet. The simulation there is based on the following scenario:

- the null hypothesis is true
- fairly severe skewness
- same distribution in both groups
- same variance in both groups
- sample size per group of 5 (i.e., much less than 50 as per your question)

I pressed the 10,000 simulations button about 100 times to get up to over one million simulations. The simulations suggest that instead of a 5% Type I error rate, I was only getting a 4.5% Type I error rate. Whether you consider this robust depends on your perspective.
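For readers without the applet, here is a rough R analogue of that scenario (a chi-square distribution with 1 degree of freedom is used here as a stand-in for "fairly severe skewness"):

set.seed(8)
reject <- replicate(50000, {
  x <- rchisq(5, df = 1)   # same skewed distribution in both groups
  y <- rchisq(5, df = 1)   # the null hypothesis is true
  t.test(x, y, var.equal = TRUE)$p.value < 0.05
})
mean(reject)   # empirical Type I error rate, to compare with the nominal 0.05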
T-test for non normal when N>50?
In my experience with just the one-sample t-test, I have found that the skew of the distributions is more important than the kurtosis, say. For non-skewed but fat-tailed distributions (a t with 5 degrees of freedom, a Tukey h-distribution with $h=0.24999$, etc.), I have found that 40 samples has always been sufficient to get an empirical type I rate near the nominal. When the distribution is very skewed, however, you may need many, many more samples.

For example, suppose you were playing the lottery. With probability $p = 10^{-4}$ you will win 100 thousand dollars, and with probability $1-p$ you will lose one dollar. If you perform a t-test for the null that the mean return is zero based on a sample of one thousand draws of this process, I don't think you are going to achieve the nominal type I rate.

Edit: duh, per @whuber's catch in the comment, the example I gave did not have mean zero, so testing for mean zero has nothing to do with the type I rate. Because the lottery example often has a sample standard deviation of zero, the t-test chokes. So instead, I give a code example using Goerg's Lambert W x Gaussian distribution. The distribution I use here has a skew of around 1355.

#hey look! I'm learning R!
library(LambertW)

Gauss_input = create_LambertW_input("normal", beta = c(0, 1))
params = list(delta = c(0), gamma = c(2), alpha = 1)
LW.Gauss = create_LambertW_output(input = Gauss_input, theta = params)

#get the moments of this distribution
moms <- mLambertW(beta = c(0, 1), distname = c("normal"), delta = 0, gamma = 2, alpha = 1)

test_ttest <- function(sampsize) {
  samp <- LW.Gauss$rY(params)(n = sampsize)
  tval <- t.test(samp, mu = moms$mean)
  return(tval$p.value)
}

#to replicate randomness
set.seed(1)

pvals <- replicate(1024, test_ttest(50))
#how many rejects at the 0.05 level?
print(sum(pvals < 0.05) / length(pvals))

pvals <- replicate(1024, test_ttest(250))
#how many rejects at the 0.05 level?
print(sum(pvals < 0.05) / length(pvals))

pvals <- replicate(1024, test_ttest(1000))
#how many rejects at the 0.05 level?
print(sum(pvals < 0.05) / length(pvals))

pvals <- replicate(1024, test_ttest(2000))
#how many rejects at the 0.05 level?
print(sum(pvals < 0.05) / length(pvals))

This code gives the empirical reject rate at the nominal 0.05 level for different sample sizes. For a sample of size 50, the empirical rate is 0.40 (!); for sample size 250, 0.29; for sample size 1000, 0.21; for sample size 2000, 0.18. Clearly the one-sample t-test suffers from skew.
T-test for non normal when N>50?
The central limit theorem establishes (under the required conditions) that the numerator of the t-statistic is asymptotically normal. The t-statistic also has a denominator. To have a t-distribution you'd need the denominator to be independent and square-root-of-a-chi-square-on-its-df. And we know it won't be independent (that characterizes the normal!) Slutsky's theorem combined with the CLT would give you that the t-statistic is asymptotically normal (but not necessarily at a very useful rate). We would want a theorem that would establish that the t-statistic is approximately t-distributed when there's non-normality, and how fast it comes in. (Of course, eventually the t- will go close to the normal as well, and in that sense we will often tend to get about the right significance level, at least in sufficiently large samples for all these effects to "work" sufficiently for our individual requirements.) However, more important than the purported level-robustness of the $t$ (at larger sample sizes) is the effect on its power. Note that the asymptotic relative efficiency of the t-test relative to the Wilcoxon-Mann-Whitney (for example) may be 0 (i.e. as sample sizes become large while effect sizes you want to pick up shrink, you may require vastly larger samples for the t to have the same power as an obvious alternative). So while the t-test may end up having a nice normal-looking null distribution in many cases if $n$ is large enough, its performance under the null isn't really what people care most about -- it is the performance under the alternative -- and there it may not be so great, if you care about rejecting the null in cases where the effect is not so easy to pick up.
T-test for non normal when N>50?
Yes, the Central Limit Theorem tells us this is true. So long as you avoid extremely heavy-tailed traits, non-Normality presents no problems in moderate-to-large samples. Here's a helpful review paper: http://www.annualreviews.org/doi/pdf/10.1146/annurev.publhealth.23.100901.140546

The Wilcoxon test (mentioned by others) can have terrible power when the alternative is not a location shift of the original distribution. Furthermore, the way it measures differences between distributions is not transitive.
T-test for non normal when N>50?
About the use of the Wilcoxon-Mann-Whitney test as an alternative, I recommend the paper "The Wilcoxon-Mann-Whitney test under scrutiny". As a test of means or medians, the Wilcoxon-Mann-Whitney (WMW) test can be severely nonrobust for deviations from the pure shift model.

These are the recommendations of the authors of the paper:

The rank transformation can alter means, standard deviations, and skewnesses of the two samples differently. The only situation in which the rank transformation is guaranteed to achieve a beneficial effect is when distributions are identical and sample sizes are equal. For deviations from these rather strict assumptions, the effects of the rank transformation on sample moments are unpredictable. In the simulation study of the paper, the WMW test was compared with the Fligner-Policello test (FP), the Brunner-Munzel test (BM), the two-sample T test (T), the Welch U test (U), and the Welch U test on ranks (RU). The four rank-based tests (WMW, FP, BM, and RU) performed similarly, although the BM test was frequently a little better than the others. When the sample sizes were equal, the parametric tests (T and U) were superior to the rank-based tests under the null hypothesis of equal means, but not under the null hypothesis of equal medians. When the sample sizes were unequal, the BM, RU, and U tests performed best. For several settings, small changes in population properties led to large alterations in the performance of the tests.

In summary, the large-sample approximate WMW test can be a poor method for comparing the means or medians of two populations, unless the two distributions have equal shapes and equal scales. This problem also seems to apply in various degrees to the exact WMW test, the FP test, the BM test, and the Welch U test on ranks. When using the WMW test, the authors recommend that the properties of the ranked samples are thoroughly investigated for signs of skewness and variance heterogeneity. If such effects are present, the Welch U test is preferable in many situations.
1,566
What is the difference between zero-inflated and hurdle models?
Thank you for the interesting question!

Difference: One limitation of standard count models is that the zeros and the nonzeros (positives) are assumed to come from the same data-generating process. With hurdle models, these two processes are not constrained to be the same. The basic idea is that a Bernoulli probability governs the binary outcome of whether a count variate has a zero or positive realization. If the realization is positive, the hurdle is crossed, and the conditional distribution of the positives is governed by a truncated-at-zero count data model. With zero-inflated models, the response variable is modelled as a mixture of a Bernoulli distribution (or call it a point mass at zero) and a Poisson distribution (or any other count distribution supported on non-negative integers). For more detail and formulae, see, for example, Gurmu and Trivedi (1996) and Dalrymple, Hudson, and Ford (2003).

Example: Hurdle models can be motivated by sequential decision-making processes confronted by individuals. You first decide if you need to buy something, and then you decide on the quantity of that something (which must be positive). A situation in which you are allowed to (or can potentially) end up buying nothing even after deciding to buy something is one where a zero-inflated model is appropriate: zeros may come from two sources, a) no decision to buy, and b) wanted to buy but ended up buying nothing (e.g. out of stock).

Beta: The hurdle model is a special case of the two-part model described in Chapter 16 of Frees (2011). There, we see that for two-part models the amount of health care utilized may be a continuous as well as a count variable. So what has been somewhat confusingly termed the "zero-inflated beta distribution" in the literature in fact belongs to the class of two-part distributions and models (so common in actuarial science), which is consistent with the above definition of a hurdle model. This excellent book discusses zero-inflated models in section 12.4.1 and hurdle models in section 12.4.2, with formulas and examples from actuarial applications.

History: Zero-inflated Poisson (ZIP) models without covariates have a long history (see e.g. Johnson and Kotz, 1969). The general form of ZIP regression models incorporating covariates is due to Lambert (1992). Hurdle models were first proposed by the Canadian statistician Cragg (1971), and later developed further by Mullahy (1986). You may also consider Croston (1972), where positive geometric counts are used together with a Bernoulli process to describe an integer-valued process dominated by zeros.

R: Finally, if you use R, there is the package pscl ("Classes and Methods for R developed in the Political Science Computational Laboratory") by Simon Jackman, containing the hurdle() and zeroinfl() functions by Achim Zeileis (a minimal sketch of both follows the references below).

The following references have been consulted to produce the above:
Gurmu, S. & Trivedi, P. K. (1996). Excess zeros in count models for recreational trips. Journal of Business & Economic Statistics, 14, 469-477.
Johnson, N. & Kotz, S. (1969). Distributions in Statistics: Discrete Distributions. Houghton Mifflin, Boston.
Lambert, D. (1992). Zero-inflated Poisson regression with an application to defects in manufacturing. Technometrics, 34(1), 1-14.
Cragg, J. G. (1971). Some statistical models for limited dependent variables with application to the demand for durable goods. Econometrica, 39, 829-844.
Mullahy, J. (1986). Specification and testing of some modified count data models. Journal of Econometrics, 33, 341-365.
Frees, E. W. (2011). Regression Modeling with Actuarial and Financial Applications. Cambridge University Press.
Dalrymple, M. L., Hudson, I. L. & Ford, R. P. K. (2003). Finite mixture, zero-inflated Poisson and hurdle models with application to SIDS. Computational Statistics & Data Analysis, 41, 491-504.
Croston, J. D. (1972). Forecasting and stock control for intermittent demands. Operational Research Quarterly, 23, 289-303.
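Since the answer points to the pscl package, here is a minimal sketch of how zeroinfl() and hurdle() are typically called. The data are simulated placeholders, and the two-part formula (count model | zero model) as well as the dist and zero.dist choices are illustrative assumptions, not taken from any of the references above.

# install.packages("pscl")   # if not already installed
library(pscl)

set.seed(1)
n <- 500
x <- rnorm(n)
mu <- exp(0.5 + 0.8 * x)                         # Poisson mean for the count part
p0 <- plogis(-0.5 + 1.2 * x)                     # probability of a structural zero
y  <- ifelse(rbinom(n, 1, p0) == 1, 0, rpois(n, mu))
d  <- data.frame(y = y, x = x)

zi <- zeroinfl(y ~ x | x, data = d, dist = "poisson")                       # zero-inflated Poisson
hu <- hurdle(y ~ x | x, data = d, dist = "poisson", zero.dist = "binomial") # hurdle Poisson
summary(zi)
summary(hu)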
1,567
What is the difference between zero-inflated and hurdle models?
Hurdle models assume that there is only one process by which a zero can be produced, while zero-inflated models assume that there are 2 different processes that can produce a zero. Hurdle models assume 2 types of subjects: (1) those who never experience the outcome and (2) those who always experience the outcome at least once. Zero-inflated models conceptualize subjects as (1) those who never experience the outcome and (2) those who can experience the outcome but don't always.

In simple terms: both zero-inflated and hurdle models are described in two parts. The first is the on-off part, which is a binary process. The system is "off" with probability $\pi$ and "on" with probability $1-\pi$. (Here, $\pi$ is known as the inflation probability.) When the system is "off," only zero counts are possible. This part is the same for zero-inflated and hurdle models. The second part is the counting part, which occurs when the system is "on." This is where zero-inflated and hurdle models differ. In zero-inflated models, counts can still be zero. In hurdle models they must be nonzero. For this part, zero-inflated models use a "usual" discrete probability distribution while hurdle models use a zero-truncated discrete probability distribution function.

Example of a hurdle model: An automobile manufacturer wants to compare two quality control programs for its automobiles. It will compare them on the basis of the number of warranty claims filed. For each program, a set of randomly selected customers are followed for 1 year and the number of warranty claims they file is counted. The inflation probabilities for each of the two programs are then compared. The "off" state is "filed zero claims" while the "on" state is "filed at least one claim."

Example of a zero-inflated model: In the same study above, the researchers find out that some repairs on the automobiles were fixed without the filing of a warranty claim. In this way, the zeroes are a mixture of the absence of quality control problems as well as the presence of quality control problems that involved no warranty claims. The "off" state means "filed zero claims" while the "on" state means "filed at least one claim OR had repairs fixed without filing a claim."

See here for a study in which both types of models were applied to the same data set.
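A small simulation sketch (my own illustration, with arbitrary parameter values) makes the two stories visible: in the hurdle version all zeros come from the single "off" process, while in the zero-inflated version zeros come both from the "off" process and from the count distribution itself.

set.seed(7)
n <- 10000; pi0 <- 0.3; lambda <- 2

off <- rbinom(n, 1, pi0)                        # the binary on-off part
y.zip <- ifelse(off == 1, 0, rpois(n, lambda))  # "on" counts can still be zero
y.hur <- ifelse(off == 1, 0,                    # "on" counts are zero-truncated
                qpois(runif(n, dpois(0, lambda), 1), lambda))

mean(y.zip == 0)  # > pi0: zeros from both the off state and the Poisson part
mean(y.hur == 0)  # roughly pi0: zeros only from the off state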
1,568
What is the difference between zero-inflated and hurdle models?
In a ZIP model, $y_i = 0$ with probability $\pi$ and $y_i \sim \text{Poisson}(\lambda)$ with probability $1-\pi$; thus the ZIP model is a mixture model with two components, and:
$$\Pr (y_i = 0) = \pi + (1 - \pi) e^{-\lambda}$$
$$\Pr (y_i = k) = (1 - \pi) \frac{\lambda^{k} e^{-\lambda}} {k!},\qquad k \ge 1$$
In a hurdle model, $y_i = 0$ with probability $\pi$ and $y_i \sim$ zero-truncated Poisson($\lambda$) with probability $1-\pi$, and:
$$\Pr (y_i = 0) = \pi $$
$$\Pr (y_i = k) = \frac{1 - \pi} {1-e^{-\lambda}} \frac{\lambda^{k} e^{-\lambda}} {k!},\qquad k \ge 1$$
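As a quick numerical sanity check of these formulas (a sketch with arbitrary values of $\pi$ and $\lambda$), both probability functions sum to 1 and give the stated zero probabilities:

pi0 <- 0.4; lambda <- 3; k <- 1:100

p.zip    <- c(pi0 + (1 - pi0) * exp(-lambda), (1 - pi0) * dpois(k, lambda))
p.hurdle <- c(pi0, (1 - pi0) / (1 - exp(-lambda)) * dpois(k, lambda))

sum(p.zip)      # ~ 1
sum(p.hurdle)   # ~ 1
p.zip[1]        # ZIP:    Pr(y = 0) = pi + (1 - pi) * exp(-lambda)
p.hurdle[1]     # hurdle: Pr(y = 0) = pi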
1,569
What is the difference between zero-inflated and hurdle models?
Regarding hurdle models, here's a quote from Advances in Mathematical and Statistical Modeling (Arnold, Balakrishnan, Sarabia, & Mínguez, 2008):

The hurdle model is characterized by the process below the hurdle and the one above. Obviously, the most widely used hurdle model is the one that sets the hurdle at zero. Formally, the hurdle-at-zero model is expressed as:
$P(N_i=n_i)=f_1(0)$ for $n_i=0$
$P(N_i=n_i)=\frac{1-f_1(0)}{1-f_2(0)}f_2(n_i)=\phi f_2(n_i)$ for $n_i=1,2,...$
The variable $\phi$ can be interpreted as the probability of crossing the hurdle, or more precisely in the case of insurance, the probability to report at least one claim.

As for zero-inflated models, Wikipedia says:

A zero-inflated model is a statistical model based on a zero-inflated probability distribution, i.e. a distribution that allows for frequent zero-valued observations. The zero-inflated Poisson model concerns a random event containing excess zero-count data in unit time.$^{[1]}$ For example, the number of claims to an insurance company by any given covered person is almost always zero, otherwise substantial losses would cause the insurance company to go bankrupt. The zero-inflated Poisson (ZIP) model employs two components that correspond to two zero generating processes. The first process is governed by a binary distribution that generates structural zeros. The second process is governed by a Poisson distribution that generates counts, some of which may be zero. The two model components are described as follows:
$$\Pr (y_j = 0) = \pi + (1 - \pi) e^{-\lambda}$$
$$\Pr (y_j = h_i) = (1 - \pi) \frac{\lambda^{h_i} e^{-\lambda}} {h_i!},\qquad h_i \ge 1$$
where the outcome variable $y_j$ has any non-negative integer value, $\lambda_i$ is the expected Poisson count for the $i$th individual; $\pi$ is the probability of extra zeros.

From Arnold and colleagues (2008), I see that a hurdle-at-zero model is a special case of the more general class of hurdle models, but from a reference on Wikipedia (Hall, 2004), I also see that some zero-inflated models can be upper-bounded. I don't quite understand the difference in the formulas, but they seem to be quite similar (both even use a very similar example, insurance claims). I hope other answers can help explain any important difference(s), and that this answer will help set the stage for those.

Wikipedia's reference: Lambert, D. (1992). Zero-inflated Poisson regression, with an application to defects in manufacturing. Technometrics, 34(1), 1–14.
1,570
When should linear regression be called "machine learning"?
Answering your question with a question: what exactly is machine learning? Trevor Hastie, Robert Tibshirani and Jerome Friedman in The Elements of Statistical Learning, Kevin P. Murphy in Machine Learning: A Probabilistic Perspective, Christopher Bishop in Pattern Recognition and Machine Learning, Ian Goodfellow, Yoshua Bengio and Aaron Courville in Deep Learning, and a number of other machine learning "bibles" mention linear regression as one of the machine learning "algorithms". Machine learning is partly a buzzword for applied statistics, and the distinction between statistics and machine learning is often blurry.
1,571
When should linear regression be called "machine learning"?
Linear regression is definitely an algorithm that can be used in machine learning. But, reductio ad absurdum: anyone with a copy of Excel can fit a linear model. Even restricting ourselves to linear models, there are a few more things to consider when discussing machine learning:

Machine learning on business problems may involve a lot more data: "big data", if you want to use the buzzword. Cleaning and preparing the data may take more work than the actual modelling, and when the volume of data exceeds the capacity of a single machine to process it, the engineering challenges are as significant as the statistical challenges. (Rule of thumb: if it fits in main memory, it's not big data.)

Machine learning often involves many more explanatory variables (features) than traditional statistical models: perhaps dozens, sometimes even hundreds, some of which will be categorical variables with many levels. When these features can potentially interact (e.g. in a cross-effects model), the number of potential models to be fit grows rapidly. The machine learning practitioner is usually less concerned with the significance of individual features, and more concerned with squeezing as much predictive power as possible out of a model, using whichever combination of features does that. (P-values are associated with explanation, not prediction.)

With a large number of features, and various ways of engineering those features, model selection by hand becomes infeasible. In my opinion, the real challenge in machine learning is the automated selection of features (feature engineering) and other aspects of model specification. With a linear model there are various ways of doing this, usually variants of brute force, including stepwise regression, backward elimination, etc., all of which again require significant computing power. (Second rule of thumb: if you are selecting features by hand, you are doing statistics, not machine learning.)

When you automatically fit many models with many features, over-fitting is a serious potential issue. Dealing with this problem often involves some form of cross-validation: i.e. yet more brute-force computation!

The short answer, from my point of view, is that where machine learning deviates from traditional statistical modelling is in the application of brute force and numerical approaches to model selection, especially in domains with a large amount of data and a large number of explanatory variables, with a focus on predictive power, followed by more brute force for model validation.
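As a hedged illustration of the brute-force model-selection point above, here is a minimal R sketch using stepwise AIC search over many candidate features; the simulated data, the number of features, and the choice of step() are purely illustrative and not a claim about how any particular project should do it.

set.seed(1)
n <- 1000; p <- 20
X <- as.data.frame(replicate(p, rnorm(n)))
names(X) <- paste0("x", 1:p)
X$y <- 2 * X$x1 - 1.5 * X$x2 + 0.5 * X$x3 + rnorm(n)    # only three features matter

full <- lm(y ~ ., data = X)
sel  <- step(full, direction = "backward", trace = 0)   # automated backward elimination by AIC
formula(sel)                                            # which features survived the search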
1,572
When should linear regression be called "machine learning"?
I think Mitchell's definition provides a helpful way to ground the discussion of machine learning, a sort of first principle. As reproduced on Wikipedia:

A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.

This is helpful in a few ways. First, to your immediate question: regression is machine learning when its task is to provide an estimated value from predictive features in some application. Its performance should improve, as measured by mean squared (or absolute, etc.) held-out error, as it experiences more data.

Second, it helps delineate machine learning from related terms, and its use as a marketing buzzword. Contrast the task above with a standard, inferential regression, wherein an analyst interprets coefficients for significant relationships. Here the program returns a summary: coefficients, p-values, etc. The program cannot be said to improve this performance with experience; the task is elaborate calculation.

Finally, it helps unify machine learning subfields, both those commonly used in introductory exposition (supervised, unsupervised) and others like reinforcement learning or density estimation. (Each has a task, performance measure and concept of experience, if you think on them enough.) It provides, I think, a richer definition that helps delineate the two fields without unnecessarily reducing either. As an example, "ML is for prediction, statistics for inference" ignores both machine learning techniques outside supervised learning, and statistical techniques that focus on prediction.
1,573
When should linear regression be called "machine learning"?
There's no law that says that a cabinet maker can't use a barrel maker's saw. Machine learning and statistics are vague labels, but if well defined there is a lot of overlap between statistics and machine learning. And this goes for the methods of these two areas as well as (and separately) for the people who label themselves with these two areas. But as far as the math goes, machine learning is entirely within the field of statistics.

Linear regression is a very well defined mathematical procedure. I tend to associate it with the area of statistics and people who call themselves 'statisticians' and those who come out of academic programs with labels like 'statistics'. SVM (Support Vector Machines) is likewise a very well defined mathematical procedure that has some very similar inputs and outputs and solves similar problems. But I tend to associate it with the area of machine learning and people who call themselves computer scientists or people who work in artificial intelligence or machine learning, which tend to be considered part of computer science as a discipline. But some statisticians might use SVM and some AI people use logistic regression. Just to be clear, it is more likely that a statistician or AI researcher would develop a method than actually put it to practical use.

I put all the methods of machine learning squarely inside the domain of statistics, even such recent things as Deep Learning, RNNs, CNNs, LSTMs, CRFs. An applied statistician (biostatistician, agronomist) may well not be familiar with them. Those are all predictive modeling methods usually labeled with 'machine learning', and rarely associated with statistics. But they are predictive models, with the allowance that they can be judged using statistical methods. In the end, logistic regression must be considered part of machine learning.

But, yes, I see and often share your distaste for the misapplication of these words. Linear regression is such a fundamental part of things called statistics that it feels very strange and misleading to call its use 'machine learning'. To illustrate, logistic regression is identical mathematically to a Deep Learning network with no hidden nodes and the logistic function as the activation function for the single output node. I wouldn't call logistic regression a machine learning method, but it is certainly used in machine learning contexts.

It's mostly an issue of expectation. A: "I used machine learning to predict readmission to a hospital after heart surgery." B: "Oh yeah? Deep Learning? Random Forests?!!?" A: "Oh, no, nothing as fancy as that, just logistic regression." B: (extremely disappointed look). It's like saying, when washing a window with water, that you're using quantum chemistry. Well, yeah, sure, that's not technically wrong, but you're implying a lot more than what's needed. But really, that is exactly a culture difference vs. a substance difference: the connotations of a word and associations with groups of people (LR is totally not ML!) vs. the math and applications (LR is totally ML!).
1,574
When should linear regression be called "machine learning"?
Linear regression is a technique, while machine learning is a goal that can be achieved through different means and techniques. So regression performance is measured by how close it fits an expected line/curve, while machine learning is measured by how good it can solve a certain problem, with whatever means necessary.
1,575
When should linear regression be called "machine learning"?
It can be useful to call linear regression machine learning because doing so generally implies a couple of important things about how you went about solving your problem:

You decided it wasn't necessary to check causal assumptions and prior theory behind your explanatory variables. It signals that your model was not intended to explain but to predict. This is perfectly reasonable in a lot of settings, for example predicting email spam based on keywords. There isn't really a lot of literature on which words predict spam, and there are so many words it doesn't make sense to think through the theoretical significance of each word.

You didn't check for variable significance or use p-values, but instead likely opted for a holdout set or cross-validation to assess out-of-sample predictive performance. This can be perfectly valid if (back to the email spam example) all you really care about is producing a model that effectively predicts spam, even if this comes at the cost of including variables that might not pass traditional significance tests.

However, if your model is more intended to explain than predict, and you do rigorously check your model's theoretical causal assumptions, etc., then yes, it is rather silly to call it machine learning.
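To make the second point concrete, here is a minimal sketch of judging a linear regression by held-out predictive error rather than by p-values; the simulated data, the 80/20 split, and RMSE as the metric are arbitrary choices for illustration.

set.seed(123)
n <- 1000
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- 1 + 2 * d$x1 - d$x2 + rnorm(n)

train <- sample(n, size = 0.8 * n)                 # 80/20 train/holdout split
fit   <- lm(y ~ x1 + x2, data = d[train, ])
pred  <- predict(fit, newdata = d[-train, ])
sqrt(mean((d$y[-train] - pred)^2))                 # holdout RMSE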
1,576
When should linear regression be called "machine learning"?
A common view is that machine learning is made up of four areas: 1) dimensionality reduction, 2) clustering, 3) classification, and 4) regression. Linear regression is a regression method. Once the model is trained, it can be used for predictions, like any other, say, random forest regression.
1,577
When should linear regression be called "machine learning"?
I'll argue that the distinction between machine learning and statistical inference is clear. In short, machine learning = prediction of future observations; statistics = explanation.

Here is an example from my field of interest (medicine): when developing a drug, we search for the gene(s) which best explain a disease state, with the goal of targeting it/them with the drug. We use statistics for that. In contrast, when developing diagnostic tests, for example predicting whether the drug will help a patient, the goal is strictly finding the best predictor of the future outcome, even if it comprises many genes and is too complicated to understand. We use machine learning for this purpose. There are multiple published examples [1], [2], [3], [4] showing that presence of the drug target is not a good predictor of the treatment outcome, hence the distinction.

Based on this, it is fair to say that one is doing machine learning when the goal is strictly predicting the outcome of future/previously unseen observations. If the goal is understanding a particular phenomenon, then that is statistical inference, not machine learning. As others have pointed out, this is true regardless of the method involved.

To answer your question: in the specific research that you describe, the scientists were comparing the factor roles (weights) in different linear regression models, not comparing the models' accuracies. Therefore, it is not accurate to call their inference machine learning.

[1] Messersmith WA, Ahnen DJ. Targeting EGFR in Colorectal Cancer. The New England Journal of Medicine; 2008; 359; 17.
[2] Pogue-Geile KL et al. Predicting Degree of Benefit From Adjuvant Trastuzumab in NSABP Trial B-31. J Natl Cancer Inst; 2013; 105:1782-1788.
[3] Pazdur R. FDA Approval for Vemurafenib. https://www.cancer.gov/about-cancer/treatment/drugs/fda-vemurafenib. Updated July 3, 2013.
[4] Ray T. Two ASCO Studies Show Challenge of Using MET Signaling as Predictive Marker in NSCLC Drug Trials. GenomeWeb, June 11, 2014.
1,578
When should linear regression be called "machine learning"?
Admittedly, any answer to this question is more opinion than objective fact, but I will try to lay out my logic for why I think the answer is never. Any so-called machine learning expert or instructor only reveals their ignorance by representing linear regression as such.

Delineation of academic disciplines is more about delineation of communities than methods. Scientific disciplines borrow methods across disciplines all the time. Also, in the 19th century (when linear regression was developed) and prior to that, scientific disciplines were not so clearly delineated as they are today. So particularly when methods were developed in the 19th century or earlier, we should be careful about assigning them to a particular discipline.

That being said, one can look at the history of a discipline and reasonably conclude that particular methods "belong" to one discipline or another. No one would say today that calculus belongs to the field of physics, even though Newton, who was one of the inventors of calculus, was definitely trying to apply it to physics. Calculus clearly belongs to the discipline of mathematics, not physics. This is because calculus is a general mathematical method that can be used completely outside of a physics context. By the same reasoning, linear regression belongs to the discipline of statistics, even though it is commonly used as a simple example of fitting data to a model in the context of machine learning. Just as calculus can be used outside the context of physics, linear regression can be (and is) used outside the context of machine learning.

Machine learning instructors would be wise to point out that linear regression has been in use since the late 19th century, long before the modern notion of machine learning came into existence. They should also emphasize that machine learning utilizes many concepts from probability and statistics, as well as other disciplines (e.g. information theory). However, these concepts do not themselves represent machine learning or an "algorithm" of machine learning.
1,579
When should linear regression be called "machine learning"?
It's the Machine, Stupid!

I am neither a statistician nor a Big Data(TM) expert. However, I would say that the essential distinction is that "machine learning" requires "a machine". In particular, it implies agency. The result will not be consumed leisurely by a human. Rather, the result will be the input to a closed cycle whereby an automated system improves its performance.

Closed System

This is very much in line with Sean Easter's answer, but I just want to emphasize that in commercial applications, a machine is looking at the results and acting on them. A classic example is the CineMatch algorithm which was the target of the Netflix Prize. A human could look at the output of CineMatch and learn interesting features about movie viewers. But that is not why it exists. The purpose of CineMatch is to provide a mechanism whereby Netflix servers can suggest movies to customers that they will enjoy. The output of the statistical model goes into the recommender service, which ultimately produces more input as customers rate movies, some of which were selected on the advice of CineMatch.

Open System

On the other hand, if a researcher uses an algorithm to produce statistical results which are displayed in a presentation to other humans, then that researcher is most decidedly not engaging in machine learning. This is, quite obviously to me, human learning. The analysis is performed by a machine, but it is not a machine that is doing the learning, per se. Now, it is "machine learning" to the extent that a human brain did not experience all of the sample inputs and derive the statistical results "biologically". But I would call it "statistics" because this is exactly what statisticians have been doing since the field was invented.

Conclusion

Thus, I would answer this question by asking: "Who consumes the results?" If the answer is: "humans", then it's "statistics". If the answer is: "software", then it's "machine learning." And when we say that "software consumes the results", we don't mean that it stores them somewhere for later retrieval. We mean that it performs behavior which is determined by the results in a closed loop.
When should linear regression be called "machine learning"?
One thing I need to add (others have made great comments on the distinction between ML and statistics) is that, on the technical side, many of the classic assumptions of linear regression do not need to hold for predicting the mean response. Nobody in ML really cares about "homoscedasticity", or even whether the residuals are normal (at least as long as they are symmetric around zero), and people apply careless univariate transformations to the response, while these are all important problems in statistics. The main reason is that in ML you don't need to do any inference about the coefficients. Another related point, in terms of how the model is fit, is that in statistics you typically work with the design matrix and the closed-form least-squares solution (which is also the maximum-likelihood estimate under the usual assumptions). In ML, people instead find the coefficients by gradient descent, stochastic gradient descent, and so on. In the end, they only need the coefficients!
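To make the two routes concrete, here is a minimal sketch (NumPy assumed; the data and variable names are made up for illustration) that fits the same linear regression once with the closed-form least-squares solution and once with plain gradient descent on the squared-error loss; both recover essentially the same coefficients.

import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # design matrix with intercept
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# "statistics" route: closed-form least-squares solution
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# "ML" route: gradient descent on the mean squared error
beta_gd = np.zeros(3)
learning_rate = 0.01
for _ in range(5000):
    grad = 2 * X.T @ (X @ beta_gd - y) / n
    beta_gd -= learning_rate * grad

print(beta_ols)   # both should be close to [1.0, 2.0, -0.5]
print(beta_gd)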
When should linear regression be called "machine learning"?
In my opinion, one can speak of machine learning when a machine is programmed to infer the parameters of some model from some data. If a linear regression is done by a machine, it therefore qualifies; if done by hand, it does not. Definitions that hinge on the presence of some particular agent (like Excel), on iterative improvement (as Sean Easter suggests above), on somehow separating it from statistics, or on what is to be done with the results will, in my opinion, prove inconsistent.
What is the role of the logarithm in Shannon's entropy?
Shannon entropy is a quantity satisfying a set of relations. In short, the logarithm is there to make it grow linearly with system size and to make it "behave like information".

The first property means that the entropy of tossing a coin $n$ times is $n$ times the entropy of tossing a coin once: $$ - \sum_{i=1}^{2^n} \frac{1}{2^n} \log\left(\tfrac{1}{2^n}\right) = - \sum_{i=1}^{2^n} \frac{1}{2^n} n \log\left(\tfrac{1}{2}\right) = n \left( - \sum_{i=1}^{2} \frac{1}{2} \log\left(\tfrac{1}{2}\right) \right) = n. $$ Or, to see how it works when tossing two different coins (perhaps unfair, with heads with probability $p_1$ and tails $p_2$ for the first coin, and $q_1$ and $q_2$ for the second), $$ -\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \log(p_i q_j) = -\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \left( \log(p_i) + \log(q_j) \right) $$ $$ = -\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \log(p_i) -\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \log(q_j) = -\sum_{i=1}^2 p_i \log(p_i) - \sum_{j=1}^2 q_j \log(q_j), $$ so the properties of the logarithm (the logarithm of a product is the sum of the logarithms) are crucial. But Rényi entropy also has this property (it is an entropy parametrized by a real number $\alpha$, which becomes Shannon entropy for $\alpha \to 1$).

Here comes the second property: Shannon entropy is special, as it is related to information. To get some intuitive feeling, you can look at $$ H = \sum_i p_i \log \left(\tfrac{1}{p_i} \right) $$ as the average of $\log(1/p)$. We can call $\log(1/p)$ information. Why? Because if all events happen with probability $p$, it means that there are $1/p$ events. To tell which event has happened, we need to use $\log(1/p)$ bits (each bit doubles the number of events we can tell apart).

You may feel anxious: "OK, if all events have the same probability it makes sense to use $\log(1/p)$ as a measure of information. But if they are not, why does averaging information make any sense?" It is a natural concern, but it turns out that it does make sense: Shannon's source coding theorem says that a string of uncorrelated letters with probabilities $\{p_i\}_i$ of length $n$ cannot be compressed (on average) to a binary string shorter than $n H$. And in fact, we can use Huffman coding to compress the string and get very close to $n H$.

See also:
A nice introduction is Cosma Shalizi's Information theory entry
What is entropy, really? - MathOverflow
Dissecting the GZIP format
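As a quick numerical check of the additivity argument above, here is a minimal sketch (NumPy assumed; the two coin distributions are arbitrary examples) showing that the entropy of two independent coins equals the sum of their individual entropies.

import numpy as np

def entropy(p):
    # Shannon entropy in bits of a discrete distribution p
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                  # treat 0 * log(0) as 0
    return -np.sum(p * np.log2(p))

p = [0.7, 0.3]                    # a biased coin
q = [0.5, 0.5]                    # a fair coin
joint = np.outer(p, q).ravel()    # joint distribution of the two independent tosses

print(entropy(p) + entropy(q))    # sum of the individual entropies
print(entropy(joint))             # entropy of the joint toss: the same number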
What is the role of the logarithm in Shannon's entropy?
This is the same as the other answers, but I think the best way to explain it is to see what Shannon says in his original paper:

The logarithmic measure is more convenient for various reasons:
1. It is practically more useful. Parameters of engineering importance such as time, bandwidth, number of relays, etc., tend to vary linearly with the logarithm of the number of possibilities. For example, adding one relay to a group doubles the number of possible states of the relays. It adds 1 to the base 2 logarithm of this number. Doubling the time roughly squares the number of possible messages, or doubles the logarithm, etc.
2. It is nearer to our intuitive feeling as to the proper measure. This is closely related to (1) since we intuitively measure entities by linear comparison with common standards. One feels, for example, that two punched cards should have twice the capacity of one for information storage, and two identical channels twice the capacity of one for transmitting information.
3. It is mathematically more suitable. Many of the limiting operations are simple in terms of the logarithm but would require clumsy restatement in terms of the number of possibilities.

Source: Shannon, A Mathematical Theory of Communication (1948) [pdf].

Note that the Shannon entropy coincides with the Gibbs entropy of statistical mechanics, and there is also an explanation for why the log occurs in Gibbs entropy. In statistical mechanics, entropy is supposed to be a measure of the number of possible states $\Omega$ in which a system can be found. The reason why $\log \Omega$ is better than $\Omega$ is that $\Omega$ is usually a very fast-growing function of its arguments, and so cannot be usefully approximated by a Taylor expansion, whereas $\log \Omega$ can be. (I don't know whether this was the original motivation for taking the log, but it is explained this way in a lot of introductory physics books.)
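A tiny sketch of Shannon's first point (standard library only): each added relay doubles the number of possible states, while the base-2 logarithm of that number grows by exactly one.

from math import log2

for relays in range(1, 6):
    states = 2 ** relays
    print(relays, states, log2(states))   # log2(states) increases by 1 per added relay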
What is the role of the logarithm in Shannon's entropy?
Another way of looking at this is from an algorithmic point of view. Imagine that you are going to guess a number $x$, and the only information you have is that this number is in the interval $1 \leq x \leq N$. In this situation, the optimal algorithm for guessing the number is a simple binary search, which finds $x$ in $O(\log_2 N)$ steps. This formula intuitively says how many questions you need to ask to find out what $x$ is. For example, if $N=8$, you need to ask at most 3 questions to find the unknown $x$.

From the probabilistic perspective, when you declare $x$ to be equally likely to be any value in the range $1 \leq x \leq N$, it means $p(x) = 1/N$ for $1 \leq x \leq N$. Claude Shannon nicely showed that the information content of an outcome $x$ is defined as: \begin{equation} h(x) = \log_2 \frac{1}{p(x)} \end{equation} The reason for the base 2 in the logarithm is that here we're measuring the information in bits. You can also use the natural logarithm, which gives the information measure in nats. As an example, the information content of the outcome $x=4$ is $h(4) = 3$. This value is precisely equal to the number of steps in the binary search algorithm (or the number of IF statements in the algorithm). Therefore, the number of questions you need to ask to find out that $x$ equals $4$ is exactly the information content of the outcome $x=4$.

We can also analyze the performance of the binary search algorithm over all possible outcomes. One way of doing that is to find the expected number of questions over all values of $x$. Note that the number of required questions for guessing a value of $x$, as I discussed above, is $h(x)$. Therefore, the expected number of questions is by definition equal to: \begin{equation} \langle h(x) \rangle = \sum_{1 \leq x \leq N} p(x) h(x) \end{equation} The expected number of questions $\langle h(x) \rangle$ is just the same as the entropy of an ensemble $H(X)$, or entropy in short. Therefore, we can conclude that the entropy $H(X)$ quantifies the expected (or average) number of questions one needs to ask in order to guess an outcome, which is the computational complexity of the binary search algorithm.
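Here is a minimal sketch of that counting argument (the helper questions_needed is mine, not part of the answer): for $N=8$ a binary search asks exactly $\log_2 8 = 3$ yes/no questions for every outcome, so the average equals the entropy of a uniform $x$.

from math import log2

def questions_needed(x, lo, hi):
    # count the yes/no questions a binary search asks to pin down x in [lo, hi]
    count = 0
    while lo < hi:
        mid = (lo + hi) // 2
        count += 1
        if x <= mid:
            hi = mid
        else:
            lo = mid + 1
    return count

N = 8
counts = [questions_needed(x, 1, N) for x in range(1, N + 1)]
print(sum(counts) / N)   # average number of questions: 3.0
print(log2(N))           # h(x) = log2(1/p(x)) = log2(N) = 3.0 for uniform p(x) = 1/N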
What is the role of the logarithm in Shannon's entropy?
Here's an off-the-cuff explanation. You could say 2 books of the same size have twice as much information as 1 book, right? (Considering a book to be a string of bits.) Well, if a certain outcome has probability P, then you could say its information content is about the number of bits you need to write out 1/P. (e.g. if P=1/256, that's 8 bits.) Entropy is just the average of that information bit length, over all the outcomes.
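A short sketch of that averaging idea (standard library only; the example distribution is arbitrary): the information content of an outcome with probability P is log2(1/P) bits, and entropy is the probability-weighted average of those bit lengths.

from math import log2

def info_bits(p):
    return log2(1.0 / p)            # bits needed to single out one of 1/p equally likely outcomes

print(info_bits(1 / 256))           # 8.0 bits, as in the example above

probs = [0.5, 0.25, 0.125, 0.125]
entropy = sum(p * info_bits(p) for p in probs)
print(entropy)                      # 1.75 bits: the average information length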
What is the role of the logarithm in Shannon's entropy?
The purpose of $\log(p_i)$ appearing in Shannon's entropy is that $H(p_1, \ldots ,p_N) = -\sum_i p_i \log(p_i)$ is, up to a constant factor, the only function satisfying the basic set of properties that the entropy function is held to embody. Shannon provided a mathematical proof of this result that has been thoroughly picked over and widely accepted. The purpose and significance of the logarithm in the entropy equation is therefore self-contained within the assumptions and proof. This doesn't make it easy to understand, but it is ultimately the reason why the logarithm appears.

I have found the following references useful in addition to those listed elsewhere:
Probability Theory: The Logic of Science by E.T. Jaynes. Jaynes is one of the few authors who derives many results from scratch; see Chapter 11.
Information Theory, Inference and Learning Algorithms by David MacKay. Contains an in-depth analysis of Shannon's source coding theorem; see Chapter 4.
What is the role of the logarithm in Shannon's entropy?
Suppose we have a discrete information source that produces symbols from some finite alphabet $\Omega = \{\omega_1, \dotsc, \omega_n\}$ with probabilities $p_1, \dotsc, p_n$. Shannon defines the entropy as the measure $H(p_1, \dotsc, p_n)$ such that $H$ is continuous in its parameters, $H$ is monotone increasing in $n$ when $p_1 = \dots = p_n = \frac1n$ (since uncertainty is increasing), and $H$ is independent of how a choice is split into successive choices. For example, consider three events when rolling a black die and a white die: (1) the white die is odd, (2) the white die is even and the black die is less than three, and (3) otherwise. Either the dice are rolled together, or else the white die is rolled first, and maybe the black die if necessary. This requirement states that \begin{align} H\left(\frac12, \frac16, \frac13\right) &= H\left(\frac12, \frac12\right) + \frac12 H\left(\frac13, \frac23\right). \end{align} Shannon proves that the only $H$ satisfying the three requirements has the form \begin{align} H(p_1, \dotsc, p_n) &= -\sum_{i=1}^np_i\log_kp_i \end{align} where $k>1$ corresponds to an arbitrary information measurement unit. When $k=2$, this unit is the bit. For a proof, see appendix 2 in: C. E. Shannon. 2001. A mathematical theory of communication. SIGMOBILE Mob. Comput. Commun. Rev. 5, 1 (January 2001), 3–55.
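A quick numerical check of the dice example above (a minimal sketch, standard library only): the grouped and ungrouped entropies agree, as the third requirement demands.

from math import log2

def H(*p):
    # Shannon entropy in bits
    return -sum(pi * log2(pi) for pi in p)

lhs = H(1/2, 1/6, 1/3)
rhs = H(1/2, 1/2) + 0.5 * H(1/3, 2/3)
print(lhs, rhs)   # both approximately 1.459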
What is the role of the logarithm in Shannon's entropy?
Summary: Because it represents the average number of perfect questions you need answered in order to fully resolve all ambiguities in data you haven't seen yet. A perfect question with $n$ possible answers is one that, when answered, reduces the space of possibilities by a factor of $n$.

Example: Suppose that I rolled a $6$-faced fair die and you were to predict its outcome. The space of possibilities is $6$. You could ask me questions like this binary one, "is the outcome $1$?" (the answer is either yes or no, i.e. $n=2$), and my answer could be "nopies!". Then the space of possibilities is reduced by just $1$. So this question is not a good one to ask. Alternatively, you could ask better questions, such as this superior binary question, "is it greater than $3.5$?", and my answer would be "yeppies!" -- then boom, the space of possibilities is reduced down by half! I.e. there are just $6/2=3$ candidates left (out of the original 6). Hell yeah dude.

Now suppose that you keep recursively asking more of these good questions until you reach the case where the space of possibilities has only $1$ possibility, at which point -by definition- there is no ambiguity left (you know the answer). Let's do this:
$6$ possibilities. Q: Is the outcome $> 3.5$? A: Yes.
$6/2=3$ possibilities left. Q: is the outcome $\ge 5$? A: Yes.
$6/2/2=1.5$ possibilities left. Q: is the outcome $= 6$? A: Yes.
You conclude that the outcome must be number $6$, and you only needed to ask $3$ binary questions. I.e. $ceil(\log_2(6)) = ceil(2.58) = 3$.

Now, obviously, the number of binary questions is always a natural number. So why doesn't Shannon's entropy use the $ceil$ function? Because it spits out the average number of good questions that need to be asked. If you repeat this experiment (by writing some Python code), you might expect that on average you need to ask about $2.58$ perfect binary questions. Of course, if you ask binary questions, you set the base of the log to $2$, hence $\log_2(...)$ here. If you ask questions that expect $n$ possible answers, you set the base to $n$ instead of $2$, i.e. $\log_n(...)$.

Simulation:

import random

total_questions = 0
TOTAL_ROUNDS = 10000
for i in range(0, TOTAL_ROUNDS):
    outcome = random.randrange(1, 7)
    total_questions += 1              # Q1: is the outcome > 3.5?
    if outcome > 3.5:
        total_questions += 1          # Q2: is the outcome >= 5?
        if outcome >= 5:
            total_questions += 1      # Q3: is the outcome == 5?
            if outcome == 5:
                pass
            else:
                pass                  # must be 6! no need to ask
        else:
            pass                      # must be 4! no need to ask
    else:
        total_questions += 1          # Q2: is the outcome >= 2?
        if outcome >= 2:
            total_questions += 1      # Q3: is the outcome == 2?
            if outcome == 2:
                pass
            else:
                pass                  # must be 3! no need to ask
        else:
            pass                      # must be 1! no need to ask

print('total questions: ' + str(total_questions))
print('average questions per outcome: ' + str(total_questions / float(TOTAL_ROUNDS)))

Results:
total questions: 26634
average questions per outcome: 2.6634

Holy molly dude, $2.6634 \ne \log_2(6) \approx 2.58$. What's wrong? It's almost close, but not really as close as I hoped. Is it Python's PRNG trying to tell a slow joke? Or is it Shannon being wrong? Or is it -God forbid- my understanding that is wrong? The resolution: nothing is wrong with the simulation. When every question refers to a single roll, each outcome must take a whole number of questions, and the best possible average is $16/6 \approx 2.67$, which is exactly what the questioning scheme above achieves. The entropy $\log_2(6) \approx 2.58$ is a lower bound that is only approached when you ask questions about many rolls at once, i.e. when you amortize questions over blocks of outcomes, as the sketch below illustrates.
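Here is a minimal sketch (the helper huffman_average_length is mine; it uses the standard heapq trick that the expected codeword length of an optimal code equals the sum of the merged node probabilities) showing that the best average number of yes/no questions per roll drops from $8/3 \approx 2.67$ toward $\log_2 6 \approx 2.585$ as you ask about blocks of $k$ rolls at once.

import heapq
from math import log2

def huffman_average_length(probs):
    # expected codeword length of an optimal (Huffman) code:
    # it equals the sum of the probabilities of all merged internal nodes
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        p = heapq.heappop(heap) + heapq.heappop(heap)
        total += p
        heapq.heappush(heap, p)
    return total

for k in (1, 2, 3, 4):
    n = 6 ** k                                   # equally likely outcomes of k rolls
    avg = huffman_average_length([1.0 / n] * n)
    print(k, avg / k)                            # questions per roll: 2.667, 2.611, 2.605, ...

print(log2(6))                                   # the entropy lower bound: about 2.585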
What is the role of the logarithm in Shannon's entropy?
This question was raised two years ago and there have been many awesome answers already, but I'd like to add mine, which helped me a lot. The question is: what purpose does the logarithm serve in this equation? The logarithm (usually base 2) is there because of Kraft's inequality, $$\sum_{i=1}^m 2^{-l_i} \le 1.$$ We can intuit it this way: the quantities $2^{-l_i}$ behave like probabilities attached to the codewords of lengths $l_i$, and they sum to at most 1. From the inequality we can derive the following result: for every code length function $L(x)$ of a uniquely decodable code, there is a (sub-)distribution $P(x)$ such that $P(x) = 2^{-L(x)}$, and hence $L(x) = -\log_2 P(x)$, where $P(x)$ is the probability associated with the codeword of length $L(x)$. Shannon's entropy is the average codeword length under this distribution: since the codeword for $x$ has length $L(x) = -\log_2 P(x)$ and occurs with probability $P(x)$, the average length (i.e. Shannon's entropy) is $-\sum_x P(x)\log_2 P(x)$. An intuitive illustration and a visual answer (as you required, but more specifically for Kraft's inequality) is articulated in this paper: Code Tree, and Kraft's Inequality.
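A small sketch of that correspondence (standard library only; the example code lengths are arbitrary but satisfy Kraft's inequality with equality): the implied distribution $P(x) = 2^{-L(x)}$ makes the expected codeword length equal the Shannon entropy.

from math import log2

lengths = {'a': 2, 'b': 2, 'c': 2, 'd': 3, 'e': 3}      # e.g. codewords 00, 01, 10, 110, 111
print(sum(2 ** -l for l in lengths.values()))            # Kraft sum: 1.0 <= 1

P = {x: 2 ** -l for x, l in lengths.items()}             # implied distribution P(x) = 2^{-L(x)}
avg_len = sum(P[x] * lengths[x] for x in P)              # expected codeword length
H = -sum(p * log2(p) for p in P.values())                # Shannon entropy in bits
print(avg_len, H)                                        # both 2.25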
What is the role of the logarithm in Shannon's entropy?
A historical view may be interesting. Entropy is related to the concept of weight of evidence from information theory (note this is not the same as discussed here: Intuition behind Weight of Evidence and Information Value formula). This $\text{woe}$ is discussed in depth in this book by I.J. Good (much of the content of that book he certainly learnt while working with A. Turing at Bletchley Park). This other Good book can be easier to find, and contains some of the same material. But it goes back even further, for instance to this classic paper of C.S. Peirce, which discusses at length why the logarithm is used:

It is that our belief ought to be proportional to the weight of evidence, in this sense, that two arguments which are entirely independent, neither weakening nor strengthening each other, ought, when they concur, to produce a belief equal to the sum of the intensities of belief which either would produce separately. Now, we have seen that the chances of independent concurrent arguments are to be multiplied together to get the chance of their combination, and therefore the quantities which best express the intensities of belief should be such that they are to be added when the chances are multiplied in order to produce the quantity which corresponds to the combined chance. Now, the logarithm is the only quantity which fulfills this condition. There is a general law of sensibility, called Fechner's psychophysical law. It is that the intensity of any sensation is proportional to the logarithm of the external force which produces it. It is entirely in harmony with this law that the feeling of belief should be as the logarithm of the chance, this latter being the expression of the state of facts which produces the belief.

So Peirce compares to Fechner's law! Now, the weight of evidence for a hypothesis $H$ as compared to an alternative $\bar{H}$ on evidence $E$ (using notation from the Good book) is basically the logarithm of the likelihood ratio $$\DeclareMathOperator{\P}{\mathbb{P}} \text{woe}=\log\frac{\P(E | H)}{\P(E | \bar{H})} $$ The expectation of the weight of evidence is related to concepts of entropy. For instance $$\DeclareMathOperator{\KL}{KL} \DeclareMathOperator{\E}{\mathbb{E}} \E_H \text{woe} = \KL(\P(\cdot | H) \,||\, \P(\cdot | \bar{H}) ) $$ (and $\E_{\bar{H}} \text{woe} = -\KL(\P(\cdot | \bar{H}) \,||\, \P(\cdot | H)) \le 0$), where $\KL$ is the Kullback-Leibler divergence; see Intuition on the Kullback-Leibler (KL) Divergence. So arguments for the use of the logarithm must be valid for both or neither. I.J. Good comments about entropy:

While the manuscript was with the publishers an article appeared involving ideas that are related in some ways to those of the present chapter. Suppose that an event occurs whose probability on known evidence is p. It is desired to introduce a simple numerical definition for the amount of information that is thereby conveyed. We have already defined a measure for the weight of evidence in favour of a particular hypothesis, but we are now concerned with the amount of information as such, i.e. the amount from the point of view of a person who is interested merely in collecting information, without reference to any uncertain hypothesis. It is natural to make two demands on the measure (i) it should be a decreasing function of p, and (ii) the amount of information provided by two independent events should be the sum of the separate amounts. The only functions satisfying these conditions are of the form -log p, where the units are natural bels if the base of the logarithms is e. If the base is 2 then the unit may be called an "octave", a "binary digit" or (after J. Tukey) a "bit". For example, if a coin is spun and comes down heads then one bit of information is provided.

He then goes on to explain how concepts from the Shannon paper are related to his book.
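A minimal numeric sketch of the identity above (the two conditional distributions are made-up numbers): the expected weight of evidence under $H$ is exactly the Kullback-Leibler divergence.

from math import log

P_H    = {'E': 0.8, 'not E': 0.2}      # P(. | H), hypothetical numbers
P_Hbar = {'E': 0.3, 'not E': 0.7}      # P(. | H-bar), hypothetical numbers

def woe(e):
    # weight of evidence provided by outcome e, in natural log units (nats)
    return log(P_H[e] / P_Hbar[e])

expected_woe = sum(P_H[e] * woe(e) for e in P_H)
kl = sum(P_H[e] * log(P_H[e] / P_Hbar[e]) for e in P_H)
print(expected_woe, kl)                # identical: E_H[woe] = KL(P(.|H) || P(.|H-bar))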
What is the role of the logarithm in Shannon's entropy?
I don't think it is possible to give you a universal "intuitive" answer. I'll give you an answer that is intuitive for some people, such as physicists. The logarithm is there to get the average energy of the system (up to constants). Here are the details.

Shannon used the word "entropy" because he adapted the concept from statistical mechanics. In statistical mechanics there is a seminal distribution named after Boltzmann. Interestingly, it is now an important distribution in machine learning! The Boltzmann distribution can be written as $$P=e^{\frac{a-E} b}$$ where $a, b$ are constants, and $E$ is the energy of the system in a state $dV$ of the state space $V$. In classical thermodynamics $dV=dp\,dx$, where $x,p$ are a coordinate and momentum of the particle. It is a proper probability function when the constants $a,b$ are selected properly, i.e. $\int_VPdV=1$. Also, you may find it interesting that $b$ corresponds to the temperature of the system.

Now, notice that $\ln P\sim E$, i.e. the log of the probability is linear (affine) in the energy. Hence the following expression is essentially the expected value of the energy of the system, up to the constants $a$ and $b$: $$S\equiv -\int_VP\ln P\, dV=\frac{\langle E\rangle - a}{b}$$ This is what Gibbs did. So, Shannon took this quantity and discretized it as $$\eta=-\sum_i P_i\ln P_i$$ and called it "entropy," and we call this "Shannon entropy." There is no energy concept here any more, but maybe you could take $-\ln P_i$ of the probability of a state and call it the energy of that state?

Is this intuitive enough for you? It is for me, but I was a theoretical physicist in a past life. Also, you can go to a deeper level of intuition by linking to even older thermodynamics concepts such as temperature and the works of Boltzmann and Clausius.
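A small numerical sketch of that relation (NumPy assumed; the energies and the constant b are made-up values): for a discrete Boltzmann distribution, the Gibbs/Shannon entropy equals the expected energy up to the constants a and b.

import numpy as np

b = 1.5                                   # temperature-like constant (arbitrary)
E = np.array([0.0, 1.0, 2.0, 5.0])        # energies of four discrete states (arbitrary)
a = -b * np.log(np.sum(np.exp(-E / b)))   # normalising constant so the probabilities sum to 1
P = np.exp((a - E) / b)                   # Boltzmann probabilities

S = -np.sum(P * np.log(P))                # Gibbs/Shannon entropy (natural log)
mean_E = np.sum(P * E)
print(S, (mean_E - a) / b)                # equal: S is <E> up to the constants a and b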
What is the role of the logarithm in Shannon's entropy?
Entropy is defined as the logarithm of the geometric mean of the multinomial coefficient that expresses the number of states a system can be in: $$\log \sqrt[N]{N \choose n_1,\ldots,n_k}$$ The logarithms appear in the formula after using Stirling's approximation of the factorial (see this explanation)
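A quick numerical check of that statement (standard library only; the distribution p is an arbitrary example): $\frac{1}{N}\log{N \choose n_1,\ldots,n_k}$ approaches the Shannon entropy as $N$ grows, which is where Stirling's approximation comes in.

from math import lgamma, log

def log_multinomial(counts):
    # natural log of the multinomial coefficient N! / (n_1! ... n_k!)
    N = sum(counts)
    return lgamma(N + 1) - sum(lgamma(n + 1) for n in counts)

p = [0.5, 0.3, 0.2]
for N in (100, 1000, 10000):
    counts = [round(pi * N) for pi in p]
    print(N, log_multinomial(counts) / N)    # creeps up toward the entropy below

print(-sum(pi * log(pi) for pi in p))        # Shannon entropy (in nats): about 1.0297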
What is the role of the logarithm in Shannon's entropy?
Based on your not accepting any of the existing answers, I think what you are looking for is the reason why Shannon used the logarithm in his formula in the first place. In other words, the philosophy of it. Disclaimer: I have only been into this field for a week, and I came here because I had the same question as you. If you have more knowledge on this, please let me know. I had this question after reading one of the most important papers of Ulanowicz, Increasing Entropy: Heat death or perpetual harmonies?. This is the paragraph that explains why the formula has -log(p) instead of (1-p):

Before further unpacking the formal definition of entropy, one would be justified in asking why not simply choose (1 – p) instead of [–log(p)] as the most appropriate measure of nonexistence? The answer is that the resultant product with p (that is [p–p^2]) is perfectly symmetrical around the value p = 0.5. Calculations pursuant to such a symmetric combination would be capable of describing only a reversible universe. Boltzmann and Gibbs, however, were seeking to quantify an irreversible universe. By choosing the univariate convex logarithmic function, Boltzmann thereby imparted a bias to nonbeing over being. One notices, for example, that max[–xlog{x}] = {1/e} ≈ 0.37, so that the measure of indeterminacy is skewed towards lower values of pi.

It looks like Shannon chose the logarithm with no deeper justification; he just "smelt" that he should use the logarithm, much as one might ask why Newton chose multiplication in his formula F=m*a. Note that at the time, Shannon had not even settled on the name "entropy":

My greatest concern was what to call it. I thought of calling it ‘information’, but the word was overly used, so I decided to call it ‘uncertainty’. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.

So my answer is: there is no reason for this. He chose it because it just magically worked.
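A tiny sketch of the asymmetry mentioned in that quote (NumPy assumed): p(1-p) is symmetric and peaks at p = 0.5, while -p*log(p) peaks at p = 1/e, skewing the measure toward smaller probabilities.

import numpy as np

p = np.linspace(0.001, 0.999, 999)
symmetric = p * (1 - p)           # the rejected alternative: symmetric about 0.5
shannon_term = -p * np.log(p)     # the term actually used: peaks at 1/e ~ 0.368
print(p[symmetric.argmax()], p[shannon_term.argmax()], 1 / np.e)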
What is the role of the logarithm in Shannon's entropy?
The log comes from the derivation of a function H satisfying certain natural requirements. See pg. 3, Sec. 2 of this source: http://www.lptl.jussieu.fr/user/lesne/MSCS-entropy.pdf Given the axioms, if you carry out the optimization, you get a function that is unique (up to constants) and that has a log in it. All of the above answers are correct, except that they interpret the log but do not explain where it comes from.
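For completeness, here is a hedged sketch of the key step of that kind of derivation (my own paraphrase of the standard argument, not a quote from the linked paper): the requirement that the uncertainty of independent choices adds is what forces the logarithm. Writing $A(m) = H(1/m, \ldots, 1/m)$ for the entropy of a uniform choice among $m$ equally likely options, independence of an $m$-way and an $n$-way choice requires $$A(mn) = A(m) + A(n),$$ and the only monotone solutions of this functional equation are $A(m) = K \log m$ for a constant $K > 0$; the choice of base only fixes the constant.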
1,595
What is the role of the logarithm in Shannon's entropy?
I guess your question is more about the "meaning" of that logarithm and why each component contributes to the overall meaning of the formula, rather than the mere formalism showing that the definition is coherent with certain requirements. The idea in Shannon entropy is to evaluate the information of a message by looking at its FREQUENCY (i.e. $p(x)$) and at its GENERALITY (i.e. $-\log p(x)$): $p(x)$: the more "frequent" a message is, the less information it will carry (i.e. it is easier to predict). $-\log p(x)$: the more "general" a message is, the more information it will carry. The first term, $p(x)$, is about the frequency; the $-\log p(x)$ term is about its generality. From now on, I will discuss how the GENERALITY affects the final entropy formula. So, we can define how general (e.g. rain/not rain) or specific (e.g. light/avg/heavy/veryHeavy rain) a message is based on the number of bits needed to encode it: $$\log_2(x) = \text{number of bits needed to encode the messages}$$ Now, sit, relax and look at how beautifully Shannon's entropy does the trick: it is based on the (reasonable) assumption that messages which are more GENERAL are, consequently, more FREQUENT. E.g., I will say that it is raining whether it is an average, heavy or very heavy rain. Thus, he proposed to encode the GENERALITY of messages based on how FREQUENT they are... and there you go: $$\log_2 N = -\log_2 (1/N) = -\log_2 P$$ where the message $x$ occurs with frequency $P = 1/N$. The equation can be interpreted as: rare messages will have a longer encoding because they are less general, so they need more bits to be encoded and, from this perspective, are less informative. Therefore, having more specific and rare messages will contribute more to the entropy than having many general and frequent messages. In the final formulation, we want to consider two aspects. The first, $p(x)$, is that frequent messages are easier to predict, and from this perspective less informative (i.e. a longer encoding means higher entropy). The second, $-\log p(x)$, is that frequent messages are also general, and from this perspective more informative (i.e. a shorter encoding means lower entropy). The highest entropy is when we have a system with many rare and specific messages. The lowest entropy comes with frequent and general messages. In between, we have a spectrum of entropy-equivalent systems which might have both rare and general messages, or frequent but specific messages.
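To make the bit-counting concrete, here is a small R illustration (my own, with made-up message probabilities, not taken from the answer above): $-\log_2 p$ gives the code length of each message, and weighting those lengths by how often each message occurs gives the entropy.

p <- c(rain = 0.50, avg = 0.25, heavy = 0.15, veryHeavy = 0.10)  # hypothetical frequencies
data.frame(p = p, bits = -log2(p))   # rarer, more specific messages need more bits
sum(p * -log2(p))                    # Shannon entropy of this message source, in bits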
1,596
What is the role of the logarithm in Shannon's entropy?
There is a profound reason why the logarithm comes into the picture, and it is not randomly chosen. The relationship between $\log$ and information stems from this simple way of writing any number $m$ (the symbols don't have any meaning yet), and the discussion that follows. $$ m = \frac{1}{p} = 2^{i} \tag{1}$$ The above tells us that if we use exactly $i$ letters to encode a string, where each letter can take one of 2 values at a time, we'll get $m$ different strings. A 2-valued letter is nothing else but a bit. So writing any number $m$ in this way brings into the picture a property of the number, $i$, which can be used to construct the number $m$ again (uniquely) using bits. Now, it is easy to see that for a given outcome that has probability $p$, the number of other outcomes in the same event that have probabilities greater than $p$ will always be less than or equal to $\frac{1}{p}$. For detail on this, check here. This means that, as per $(1)$, $i=\log_2(\frac{1}{p})$ bits can be safely used to represent this outcome in an event unless there are lower-probability outcomes. But even if there are lower-probability outcomes, it is easy to see that we can still encode this outcome with $i=\log_2(\frac{1}{p})$ bits, and use more bits to encode the lower-$p$ outcomes. Check here for a detailed proof. In summary, $i=\log_2(\frac{1}{p})$ bits can be safely used to represent this outcome in any event. Now, the information about an outcome that goes from the sender to the receiver is actually the codeword that represents the outcome. And we just saw how the length of the codeword is determined by $\log_2(\frac{1}{p})$. So we choose to call this special length $i$ the information of the outcome, and that is how $\log$ comes into the picture naturally. $p=0.25 \Rightarrow i = 2$ means that we need $2$ bits to encode this outcome in any event. $p=0.125 \Rightarrow i = 3$ means that we need $3$ bits to encode this outcome in any event. Finally, what would be the information content of an event in total, that is, the information of all the outcomes combined? In other words, what is the information content of a system that can be in different states with different probabilities? The answer is that each outcome or state adds its information to the system, but only in proportion to how much of it is there, i.e. its probability. This is just a verbal statement of the entropy equation: $$\begin{align} H &= \sum_i p_i \, i_i \\[6pt] &= -\sum_i p_i \log_2(p_i) \end{align}$$ where $i_i = \log_2(1/p_i)$ is the information of outcome $i$. The above has been explained in more detail here.
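A small R sketch of the "safely used" claim (my own illustration; the probabilities are chosen as powers of 2 so the lengths come out exact): rounding $\log_2(1/p)$ up to whole bits always satisfies the Kraft inequality, so a prefix code with those lengths exists, and its expected length matches the entropy formula above.

p <- c(0.5, 0.25, 0.125, 0.125)   # hypothetical outcome probabilities
l <- ceiling(log2(1 / p))         # codeword lengths i = log2(1/p), rounded up to whole bits
sum(2^(-l))                       # Kraft inequality: <= 1, so a prefix code with these lengths exists
sum(p * l)                        # expected codeword length
-sum(p * log2(p))                 # the entropy H; equal here because the p are powers of 2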
1,597
What is the role of the logarithm in Shannon's entropy?
Ok, no maths. I too was curious to know the same thing, and I have understood the purpose served by using the logarithm in the entropy equation. Let's use a simple example. Suppose we are looking at 3 different containers. Each container has some triangles or circles. Let's focus on the first container, Container 1, which has 26 triangles and 4 circles. If you put your hand inside the container and picked one item, what is the chance that you pick a triangle (or a circle)? If you were allowed to pick until there is nothing left in Container 1, then after 30 picks you would have 26 triangles and 4 circles in hand. So you can say "The chance of getting a triangle is 26/30 and of not getting a triangle is 4/30". The exact opposite can be said about picking a circle. In other words, because the number of triangles is greater than the number of circles, there is a higher chance (likelihood/probability/confidence) that you will pick a triangle when you take something out of Container 1. This likelihood is what we usually term a probability. To put it another way, when picking one item at a time randomly from Container 1, you have less doubt (uncertainty/surprise) about picking a triangle. In the same way, you have more doubt (are less certain) about picking a circle. This uncertainty is what is usually termed entropy, which works in the opposite direction to probability. That means that for Container 1 you have a higher, ~87% chance to pick a triangle (26/30) and a lower, ~13% chance to pick a circle. You are more confident you'll pick a triangle, but if you do get a circle at times, you won't be as surprised, because the chances of getting a circle are low. In other words, the entropy (element of surprise, the chance that you get something other than what you expected) will be low for Container 1.

Similarly, Container 2 has a ~50% chance to pick a triangle (14/30) and a ~50% chance for a circle (16/30). By the same line of thought, from Container 2 it is equally likely that you pick a triangle or a circle on any pick; you have the same confidence about picking (or the same uncertainty about not picking) a triangle or a circle. So the entropy (element of surprise, the chance that you will get something other than what you expected) is higher for Container 2. Finally, for Container 3 there is no chance you'll pick a triangle (0/20) and a guaranteed chance you'll pick a circle (20/20). Thus, whenever you pick something out, you have absolutely no doubt (entropy is zero for Container 3) that it will be a circle.

What purpose does the logarithm serve in the entropy equation? How can one put a number on the probability and entropy of Container 1 (or Container 2, Container 3) as a whole? It would be ideal to have the number between 0 and 1 so that one could easily represent it as a percentage. Looking at Container 1, the ratio of the number of triangles to the total number of items is 26/30 (~0.87). For circles it is 4/30 (~0.13). Simply adding these two ratios doesn't make sense (because I get back 30/30 = 1). I need some way to scale them (make them bigger or smaller) between 0 and 1. This scale should be the same irrespective of the number of items, e.g. the same scale should work for Container 3 even if it has a different number of items compared to Containers 1 and 2. The logarithm provides this scale. Logarithms offer a way to represent numbers (especially if they are large) in reduced (or scaled-down) versions. If I multiply 26/30 by log(26/30), then I can scale it.

To ensure my scale stays independent (because the total number of items can vary across containers), one option is to use 2 as the base for the logarithms. Any base (2, 10, e) will suffice as long as one measures each container using the same base. Specific to our examples, we either pick a triangle or we don't (we pick a circle otherwise), so we have only 2 outcomes for any pick. Base 2 represents 0 and 1 as the 2 choices in computers, hence that has been the choice of base for the logarithm. Using this fact about logarithms, if we multiply each ratio by its log, we effectively scale it between 0 and 1. Using that background, we can calculate the entropy for each container as follows (I've avoided math so far, but now it will be plain and easy). Note: since we divide by log(total number of items) to keep the new scaled ratio between 0 and 1, and multiplying a ratio by its log gives a value less than 0, the negative sign is added to make it positive again. As you can see, the entropy (doubt, surprise, uncertainty) for Container 1 is lower (56%), but higher for Container 2 (99%). For Container 3, there is no (0%) entropy: you have a 100% chance to pick a circle. I hope this explanation helps to both visualize and build an intuition about entropy and the choice of logarithms: statistical functions often use logarithms to scale ratios and large numbers down to a common range representable between 0 and 1.
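For readers who want to reproduce those percentages, here is a short R check (my own code, not part of the original answer) of the three container entropies using base-2 logarithms:

shannon <- function(counts) {
  p <- counts / sum(counts)
  p <- p[p > 0]              # convention: 0 * log(0) counts as 0
  -sum(p * log2(p))
}
shannon(c(26, 4))    # Container 1: ~0.56
shannon(c(14, 16))   # Container 2: ~0.997
shannon(c(0, 20))    # Container 3: 0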
1,598
What is the role of the logarithm in Shannon's entropy?
Entropy measures how chaotic and unpredictable a given distribution is. For distributions with low entropy, certain values occur significantly more often than others. When we draw a sample, the probability density function tells us how surprised we are by the given outcome. Let's draw a sequence of $N$ independent samples from the distribution and compute their joint probability: $$ \prod_{i=1}^N p(x_i) $$ For simplicity, let's consider a discrete distribution with $K$ values. We can then group the samples by their discrete value. We will denote $p(k)$ as $p_k$ for brevity. If we make $N$ sufficiently large, then the count of each value will be approximately proportional to its probability, $N_k \approx p_k N$. We can also replace the product over identical factors with a power: $$ \prod_{k=1}^K \prod^{N_k} p_k = \prod_{k=1}^K p_k^{N_k} = \prod_{k=1}^K p_k^{p_k N} $$ For convenience, we move to the negative log-likelihood. This allows us to replace the product with a sum, using the facts that the log of a product is the sum of the logs and that $\log a^b = b \log a$: $$ -\log \prod_{k=1}^K p_k^{p_k N} = -\sum_{k=1}^K \log p_k^{p_k N} = -N \sum_{k=1}^K p_k \log p_k $$ Once we drop the constant $N$, we get Shannon's entropy formula. Replace the sum with an integral for continuous distributions. As a bonus, note that we assumed the probability $p(x)$ is the true data distribution. We could instead choose to compute the log-likelihood under this $p(x)$ while assuming that the true distribution is a different distribution $q(x)$. That would yield the cross-entropy formula: $$ - \sum_{k=1}^K q_k \log p_k $$ https://en.wikipedia.org/wiki/Cross_entropy#Relation_to_log-likelihood
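The same argument can be checked numerically; here is a small R sketch (my own, with an arbitrary three-valued distribution): the average negative log-likelihood of a large i.i.d. sample converges to the entropy of the distribution it was drawn from, which is exactly the "drop the constant $N$" step above.

set.seed(123)
p <- c(0.5, 0.3, 0.2)
N <- 1e5
x <- sample(seq_along(p), N, replace = TRUE, prob = p)
mean(-log(p[x]))     # empirical average of -log p(x_i) over the sample
-sum(p * log(p))     # Shannon entropy (natural log); the two numbers agree closely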
1,599
Validation Error less than training error?
It is difficult to be certain without knowing your actual methodology (e.g. cross-validation method, performance metric, data splitting method, etc.). Generally speaking though, training error will almost always underestimate your validation error. However, it is possible for the validation error to be less than the training error. You can think of it in two ways:

Your training set had many 'hard' cases to learn.
Your validation set had mostly 'easy' cases to predict.

That is why it is important that you really evaluate your model training methodology. If you don't split your data for training properly, your results will lead to confusing, if not simply incorrect, conclusions. I think of model evaluation in four different categories:

Underfitting – Validation and training error high
Overfitting – Validation error high, training error low
Good fit – Validation error low, slightly higher than the training error
Unknown fit – Validation error low, training error 'high'

I say 'unknown' fit because the result is counterintuitive to how machine learning works. The essence of ML is to predict the unknown. If you are better at predicting the unknown than what you have 'learned', AFAIK the data between training and validation must be different in some way. This could mean you either need to reevaluate your data splitting method, add more data, or possibly change your performance metric (are you actually measuring the performance you want?).

EDIT: To address the OP's reference to a previous Python lasagne question. This suggests that you have sufficient data not to require cross-validation and can simply use training, validation, and testing data subsets. Now, if you look at the lasagne tutorial you can see that the same behavior is seen at the top of the page. I would find it hard to believe the authors would post such results if they were strange, but instead of just assuming they are correct, let's look further. The section of most interest to us here is the training loop section; just above the bottom you will see how the loss values are calculated. The training loss is calculated over the entire training dataset. Likewise, the validation loss is calculated over the entire validation dataset. The training set is typically at least 4 times as large as the validation set (80-20). Given that the error is calculated over all samples, you could expect the training loss to be up to approximately 4X the loss measure of the validation set. You will notice, however, that the training loss and validation loss approach one another as training continues. This is intentional: if your training error began to drop below your validation error, you would be beginning to overfit your model!!! I hope this clarifies these errors.
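As a toy illustration of the "easy validation cases" point (a sketch of my own with simulated data, not connected to the OP's lasagne setup), a plain linear model with a random 80-20 split will occasionally show a lower validation error than training error purely through the luck of the split:

set.seed(123)
one_split <- function(n = 100) {
  x <- rnorm(n)
  y <- x + rnorm(n)
  d <- data.frame(x, y)
  idx <- sample(n, 0.8 * n)                      # 80% training, 20% validation
  fit <- lm(y ~ x, data = d[idx, ])
  c(train = mean(residuals(fit)^2),
    validation = mean((d$y[-idx] - predict(fit, newdata = d[-idx, ]))^2))
}
errs <- t(replicate(1000, one_split()))
mean(errs[, "validation"] < errs[, "train"])     # fraction of splits where validation error is lower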
1,600
Validation Error less than training error?
One possibility: if you are using a dropout regularization layer in your network, it is reasonable for the validation error to be smaller than the training error, because dropout is usually activated when training but deactivated when evaluating on the validation set. You get a smoother (which usually means better) function in the latter case.
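To see the mechanism without any deep learning library, here is a deliberately crude R sketch (entirely my own construction, not a real network): multiplying otherwise good predictions by inverted-dropout noise at "training" time inflates the training error relative to evaluating the same predictions with the noise switched off, mimicking how dropout is on during training and off during validation.

set.seed(123)
truth <- rnorm(1000)
pred  <- truth + rnorm(1000, sd = 0.1)    # a reasonably accurate "model"
keep  <- 0.8                              # keep probability of the dropout mask
mask  <- rbinom(1000, 1, keep) / keep     # inverted-dropout scaling, as applied at training time
mean((truth - pred * mask)^2)             # "training" MSE with the noise active: clearly larger
mean((truth - pred)^2)                    # "validation" MSE with the noise off: markedly smaller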