2,901 | Do the predictions of a Random Forest model have a prediction interval? | This is partly a response to @Sashikanth Dareddy (since it will not fit in a comment) and partly a response to the original post.
Remember what a prediction interval is: it is an interval or set of values where we predict that future observations will lie. Generally the prediction interval has 2 main pieces that determine its width: a piece representing the uncertainty about the predicted mean (or other parameter), which is the confidence interval part, and a piece representing the variability of the individual observations around that mean. The confidence interval is fairly robust due to the Central Limit Theorem, and in the case of a random forest the bootstrapping helps as well. But the prediction interval is completely dependent on the assumptions about how the data is distributed given the predictor variables; the CLT and bootstrapping have no effect on that part.
The prediction interval should be wider where the corresponding confidence interval would also be wider. Other things that would affect the width of the prediction interval are assumptions about equal variance or not; this has to come from the knowledge of the researcher, not the random forest model.
A prediction interval does not make sense for a categorical outcome (you could do a prediction set rather than an interval, but most of the time it would probably not be very informative).
We can see some of the issues around prediction intervals by simulating data where we know the exact truth. Consider the following data:
set.seed(1)
x1 <- rep(0:1, each=500)
x2 <- rep(0:1, each=250, length=1000)
y <- 10 + 5*x1 + 10*x2 - 3*x1*x2 + rnorm(1000)
This particular data follows the assumptions for a linear regression and is fairly straightforward for a random forest fit. We know from the "true" model that when both predictors are 0 the mean is 10; we also know that the individual points follow a normal distribution with a standard deviation of 1. This means that the 95% prediction interval based on perfect knowledge for these points would be from 8 to 12 (well, actually 8.04 to 11.96, but rounding keeps it simpler). Any estimated prediction interval should be wider than this (not having perfect information adds width to compensate) and include this range.
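As a quick check, these exact bounds follow directly from the normal quantiles:
qnorm(c(0.025, 0.975), mean=10, sd=1)
# [1]  8.040036 11.959964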
Let's look at the intervals from regression:
fit1 <- lm(y ~ x1 * x2)
newdat <- expand.grid(x1=0:1, x2=0:1)
(pred.lm.ci <- predict(fit1, newdat, interval='confidence'))
# fit lwr upr
# 1 10.02217 9.893664 10.15067
# 2 14.90927 14.780765 15.03778
# 3 20.02312 19.894613 20.15162
# 4 21.99885 21.870343 22.12735
(pred.lm.pi <- predict(fit1, newdat, interval='prediction'))
# fit lwr upr
# 1 10.02217 7.98626 12.05808
# 2 14.90927 12.87336 16.94518
# 3 20.02312 17.98721 22.05903
# 4 21.99885 19.96294 24.03476
We can see there is some uncertainty in the estimated means (confidence interval), and that gives us a prediction interval that is wider than (but includes) the 8 to 12 range.
Now let's look at the intervals based on the individual predictions of the individual trees (we should expect these to be wider, since the random forest does not benefit from the assumptions, which we know to be true for this data, that the linear regression does):
library(randomForest)
fit2 <- randomForest(y ~ x1 + x2, ntree=1001)
pred.rf <- predict(fit2, newdat, predict.all=TRUE)
pred.rf.int <- apply(pred.rf$individual, 1, function(x) {
c(mean(x) + c(-1, 1) * sd(x),
quantile(x, c(0.025, 0.975)))
})
t(pred.rf.int)
# 2.5% 97.5%
# 1 9.785533 13.88629 9.920507 15.28662
# 2 13.017484 17.22297 12.330821 18.65796
# 3 16.764298 21.40525 14.749296 21.09071
# 4 19.494116 22.33632 18.245580 22.09904
The intervals are wider than the regression prediction intervals, but they don't cover the entire range. They do include the true values and therefore may be legitimate as confidence intervals, but they are only predicting where the mean (predicted value) is, not the added piece for the distribution around that mean. For the first case, where x1 and x2 are both 0, the intervals don't go below 9.7; this is very different from the true prediction interval, which goes down to 8. If we generate new data points then there will be several points (much more than 5%) that are in the true and regression intervals but don't fall in the random forest intervals.
To generate a prediction interval you will need to make some strong assumptions about the distribution of the individual points around the predicted means. Then you could take the predictions from the individual trees (the bootstrapped confidence interval piece) and generate a random value from the assumed distribution with that center. The quantiles of those generated values may form the prediction interval (but I would still test it; you may need to repeat the process several more times and combine).
Here is an example of doing this by adding normal deviations (since we know the original data used a normal distribution) to the predictions, with the standard deviation based on the estimated MSE from that tree:
pred.rf.int2 <- sapply(1:4, function(i) {
tmp <- pred.rf$individual[i, ] + rnorm(1001, 0, sqrt(fit2$mse))
quantile(tmp, c(0.025, 0.975))
})
t(pred.rf.int2)
# 2.5% 97.5%
# [1,] 7.351609 17.31065
# [2,] 10.386273 20.23700
# [3,] 13.004428 23.55154
# [4,] 16.344504 24.35970
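As a rough sketch of the "repeat the process and combine" idea mentioned above (the number of repetitions, 25, is an arbitrary choice), the simulated values can be pooled over several repetitions before taking the quantiles:
pred.rf.int.pooled <- sapply(1:4, function(i) {
  # pool 25 repetitions of the noise-adding step, then take quantiles
  pooled <- replicate(25, pred.rf$individual[i, ] + rnorm(1001, 0, sqrt(fit2$mse)))
  quantile(pooled, c(0.025, 0.975))
})
t(pred.rf.int.pooled)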
These intervals contain those based on perfect knowledge, so they look reasonable. But they will depend greatly on the assumptions made (the assumptions are valid here because we used the knowledge of how the data was simulated; they may not be as valid in real data cases). I would still repeat the simulations several times for data that looks more like your real data (but simulated so you know the truth) before fully trusting this method.
2,902 | Do the predictions of a Random Forest model have a prediction interval? | If you use R you can easily produce prediction intervals for the predictions of a random forest regression: just use the package quantregForest (available on CRAN) and read the paper by N. Meinshausen on how conditional quantiles can be inferred with quantile regression forests and how they can be used to build prediction intervals. Very informative even if you don't work with R!
2,903 | Do the predictions of a Random Forest model have a prediction interval? | I realize this is an old post, but I have been running some simulations on this and thought I would share my findings.
@GregSnow made a very detailed post about this, but I believe that when calculating the interval using predictions from individual trees he was looking at $[\mu - \sigma,\ \mu + \sigma]$, which is only roughly a 68% prediction interval. We need to look at $[\mu - 1.96\sigma,\ \mu + 1.96\sigma]$ to get the 95% prediction interval.
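The nominal coverage of each choice can be checked directly from the normal CDF:
pnorm(1) - pnorm(-1)        # ~0.683, coverage of mu +/- 1 sd
pnorm(1.96) - pnorm(-1.96)  # ~0.950, coverage of mu +/- 1.96 sd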
Making this change to @GregSnow's code, we get the following results:
set.seed(1)
x1 <- rep( 0:1, each=500 )
x2 <- rep( 0:1, each=250, length=1000 )
y <- 10 + 5*x1 + 10*x2 - 3*x1*x2 + rnorm(1000)
library(randomForest)
fit2 <- randomForest(y~x1+x2)
newdat <- expand.grid(x1=0:1, x2=0:1)   # prediction grid for the four x1/x2 combinations
pred.rf <- predict(fit2, newdat, predict.all=TRUE)
pred.rf.int <- t(apply( pred.rf$individual, 1, function(x){
c( mean(x) + c(-1.96,1.96)*sd(x), quantile(x, c(0.025,0.975)) )}))
pred.rf.int
2.5% 97.5%
1 7.826896 16.05521 9.915482 15.31431
2 11.010662 19.35793 12.298995 18.64296
3 14.296697 23.61657 14.749248 21.11239
4 18.000229 23.73539 18.237448 22.10331
Now, comparing these with the intervals generated by adding normal deviations to the predictions, with the standard deviation based on the square root of the MSE as @GregSnow suggested, we get:
pred.rf.int2 <- sapply(1:4, function(i) {
tmp <- pred.rf$individual[i,] + rnorm(1000, 0, sqrt(fit2$mse))
quantile(tmp, c(0.025, 0.975))
})
t(pred.rf.int2)
2.5% 97.5%
[1,] 7.486895 17.21144
[2,] 10.551811 20.50633
[3,] 12.959318 23.46027
[4,] 16.444967 24.57601
The intervals from both these approaches now look very close. A plot of the prediction intervals for the three approaches against the error distribution in this case looks as below:
Black lines = prediction intervals from linear regression,
Red lines = Random forest intervals calculated on Individual Predictions,
Blue lines = Random forest intervals calculated by adding normal deviation to predictions
Now, let us re-run the simulation but this time increasing the variance of the error term. If our prediction interval calculations are good, we should end up with wider intervals than what we got above.
set.seed(1)
x1 <- rep( 0:1, each=500 )
x2 <- rep( 0:1, each=250, length=1000 )
y <- 10 + 5*x1 + 10*x2 - 3*x1*x2 + rnorm(1000,mean=0,sd=5)
fit1 <- lm(y~x1+x2)
newdat <- expand.grid(x1=0:1,x2=0:1)
predict(fit1,newdata=newdat,interval = "prediction")
fit lwr upr
1 10.75006 0.503170 20.99695
2 13.90714 3.660248 24.15403
3 19.47638 9.229490 29.72327
4 22.63346 12.386568 32.88035
set.seed(1)
fit2 <- randomForest(y~x1+x2,localImp=T)
pred.rf.int <- t(apply( pred.rf$individual, 1, function(x){
c( mean(x) + c(-1.96,1.96)*sd(x), quantile(x, c(0.025,0.975)) )}))
pred.rf.int
2.5% 97.5%
1 7.889934 15.53642 9.564565 15.47893
2 10.616744 18.78837 11.965325 18.51922
3 15.024598 23.67563 14.724964 21.43195
4 17.967246 23.88760 17.858866 22.54337
pred.rf.int2 <- sapply(1:4, function(i) {
tmp <- pred.rf$individual[i,] + rnorm(1000, 0, sqrt(fit2$mse))
quantile(tmp, c(0.025, 0.975))
})
t(pred.rf.int2)
2.5% 97.5%
[1,] 1.291450 22.89231
[2,] 4.193414 25.93963
[3,] 7.428309 30.07291
[4,] 9.938158 31.63777
Now, this makes it clear that calculating the prediction intervals by the second approach is far more accurate and yields results quite close to the prediction interval from linear regression.
Under the assumption of normality, there is another, easier way to compute the prediction intervals from a random forest. From each of the individual trees we have the predicted value ($\mu_i$) as well as the mean squared error ($MSE_i$). So the prediction from each individual tree can be thought of as $N(\mu_i, RMSE_i)$. Using the properties of the normal distribution, our prediction from the random forest would have the distribution $N(\sum \mu_i/n, \sum RMSE_i/n)$. Applying this to the example we discussed above, we get the results below:
mean.rf <- pred.rf$aggregate
sd.rf <- mean(sqrt(fit2$mse))
pred.rf.int3 <- cbind(mean.rf - 1.96*sd.rf, mean.rf + 1.96*sd.rf)
pred.rf.int3
1 1.332711 22.09364
2 4.322090 25.08302
3 8.969650 29.73058
4 10.546957 31.30789
These tally very well with the linear model intervals and also with the approach @GregSnow suggested. But note that the underlying assumption in all the methods we discussed is that the errors follow a normal distribution.
2,904 | Do the predictions of a Random Forest model have a prediction interval? | This is easy to solve with randomForest.
First let me deal with the regression task (assuming your forest has 1000 trees).
In the predict function, you have the option to return results from individual trees. This means that you will receive a 1000-column output. We can take the average of the 1000 columns for each row; this is the regular output RF would have produced anyway. Now, to get a prediction interval of, let's say, +/- 2 standard deviations, all you need to do is, for each row, calculate +/- 2 standard deviations from the 1000 values and make these the upper and lower bounds on your prediction.
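Here is a minimal sketch of that recipe on a small simulated data set (the data, variable names, and the choice of 1000 trees are just for illustration):
library(randomForest)
set.seed(42)
# small simulated training set, purely for illustration
train <- data.frame(x1 = runif(300), x2 = runif(300))
train$y <- 3 * train$x1 + sin(5 * train$x2) + rnorm(300, sd = 0.3)
new_x <- data.frame(x1 = runif(5), x2 = runif(5))
rf_fit <- randomForest(y ~ x1 + x2, data = train, ntree = 1000)
pred_all   <- predict(rf_fit, new_x, predict.all = TRUE)
tree_preds <- pred_all$individual        # 5 rows x 1000 columns (one per tree)
point_pred <- rowMeans(tree_preds)       # the usual random forest prediction
tree_sd    <- apply(tree_preds, 1, sd)   # spread of the individual tree predictions
cbind(lower = point_pred - 2 * tree_sd,
      fit   = point_pred,
      upper = point_pred + 2 * tree_sd)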
Second, in the case of classification, remember that each tree outputs either 1 or 0 (by default), and the sum over all 1000 trees divided by 1000 gives the class probability (in the case of binary classification). In order to put a prediction interval on the probability you need to modify the min. nodesize option (see the randomForest documentation for the exact name of that option); once you set it to a value >> 1, the individual trees will output numbers between 0 and 1. Now, from here on you can repeat the same process as described above for the regression task.
I hope that makes sense.
2,905 | Do the predictions of a Random Forest model have a prediction interval? | The problem of constructing prediction intervals for random forest predictions has been addressed in the following paper:
Zhang, Haozhe, Joshua Zimmerman, Dan Nettleton, and Daniel J. Nordman. "Random Forest Prediction Intervals." The American Statistician, 2019.
The R package "rfinterval" is its implementation available at CRAN.
Installation
To install the R package rfinterval:
#install.packages("devtools")
#devtools::install_github(repo="haozhestat/rfinterval")
install.packages("rfinterval")
library(rfinterval)
?rfinterval
Usage
Quickstart:
train_data <- sim_data(n = 1000, p = 10)
test_data <- sim_data(n = 1000, p = 10)
output <- rfinterval(y~., train_data = train_data, test_data = test_data,
method = c("oob", "split-conformal", "quantreg"),
symmetry = TRUE,alpha = 0.1)
### print the marginal coverage of OOB prediction interval
mean(output$oob_interval$lo < test_data$y & output$oob_interval$up > test_data$y)
### print the marginal coverage of Split-conformal prediction interval
mean(output$sc_interval$lo < test_data$y & output$sc_interval$up > test_data$y)
### print the marginal coverage of Quantile regression forest prediction interval
mean(output$quantreg_interval$lo < test_data$y & output$quantreg_interval$up > test_data$y)
Data example:
oob_interval <- rfinterval(pm2.5 ~ .,
train_data = BeijingPM25[1:1000, ],
test_data = BeijingPM25[1001:2000, ],
method = "oob",
symmetry = TRUE,
alpha = 0.1)
str(oob_interval)
2,906 | Do the predictions of a Random Forest model have a prediction interval? | I've tried some options (this is all WIP):
I actually made the dependent variable a classification problem with the results as ranges, instead of a single value. The results I got were poor, compared to using a plain value. I gave up this approach.
I then converted it to multiple classification problems, each of which was a lower bound for the range (the result of the model being whether it would cross the lower bound or not), and then ran all the models (~20) and combined the results to get a final answer as a range. This works better than 1 above but not as well as I need it to. I'm still working to improve this approach (a rough sketch of this idea appears after this list).
I used OOB and leave-one-out estimates to decide how good/bad my models are.
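A rough sketch of the idea in point 2 might look like this (simulated data and arbitrary thresholds; a real version would need tuning):
library(randomForest)
set.seed(1)
x <- data.frame(x1 = rnorm(500), x2 = rnorm(500))
y <- 5 + 2 * x$x1 - x$x2 + rnorm(500)
bounds <- quantile(y, probs = seq(0.05, 0.95, by = 0.05))  # ~20 candidate lower bounds
# one binary classifier per bound: "does y cross this lower bound?"
fits <- lapply(bounds, function(b) randomForest(x, factor(y > b)))
new_x <- data.frame(x1 = rnorm(3), x2 = rnorm(3))
# estimated P(y > bound) for each new point and each bound
p_exceed <- sapply(fits, function(f) predict(f, new_x, type = "prob")[, "TRUE"])
# crude ~90% range: from the largest bound exceeded with prob >= 0.95
# down to the smallest bound exceeded with prob <= 0.05
t(apply(p_exceed, 1, function(p) {
  c(lower = max(c(min(y), bounds[p >= 0.95])),
    upper = min(c(max(y), bounds[p <= 0.05])))
}))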
2,907 | How can an artificial neural network (ANN) be used for unsupervised clustering? | Neural networks are widely used in unsupervised learning in order to learn better representations of the input data. For example, given a set of text documents, a NN can learn a mapping from document to real-valued vector in such a way that the resulting vectors are similar for documents with similar content, i.e. distance preserving. This can be achieved using, for example, auto-encoders: a model that is trained to reconstruct the original vector from a smaller representation (the hidden layer activations), with the reconstruction error (distance from the identity function) as the cost function. This process doesn't give you clusters, but it creates meaningful representations that can be used for clustering. You could, for instance, run a clustering algorithm on the hidden layer's activations.
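As a minimal sketch of this idea, here is a tiny hand-rolled one-hidden-layer autoencoder trained by plain gradient descent on toy data, followed by k-means on the hidden activations (a real application would use a proper deep learning library; the data and hyperparameters are arbitrary):
set.seed(1)
# toy data: two noisy groups in 10 dimensions
X <- rbind(matrix(rnorm(100 * 10, mean = 0), ncol = 10),
           matrix(rnorm(100 * 10, mean = 3), ncol = 10))
n <- nrow(X); d <- ncol(X); h <- 2        # 2-unit bottleneck
W1 <- matrix(rnorm(d * h, sd = 0.1), d, h)
W2 <- matrix(rnorm(h * d, sd = 0.1), h, d)
lr <- 0.01
for (epoch in 1:500) {
  H  <- tanh(X %*% W1)                    # encoder
  Xh <- H %*% W2                          # linear decoder
  E  <- Xh - X                            # reconstruction error
  dW2 <- t(H) %*% E / n                   # gradient wrt decoder weights
  dH  <- (E %*% t(W2)) * (1 - H^2)        # backpropagate through tanh
  dW1 <- t(X) %*% dH / n                  # gradient wrt encoder weights
  W1 <- W1 - lr * dW1
  W2 <- W2 - lr * dW2
}
H <- tanh(X %*% W1)                       # learned low-dimensional representation
clusters <- kmeans(H, centers = 2)$cluster
table(clusters, true_group = rep(1:2, each = 100))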
Clustering: There are a number of different NN architectures specifically designed for clustering. The most widely known is probably the self-organizing map (SOM). A SOM is a NN that has a set of neurons connected to form a topological grid (usually rectangular). When some pattern is presented to a SOM, the neuron with the closest weight vector is considered the winner and its weights are adapted to the pattern, as well as the weights of its neighbourhood. In this way a SOM naturally finds data clusters. A somewhat related algorithm is growing neural gas (it is not limited to a predefined number of neurons).
Another approach is Adaptive Resonance Theory, where we have two layers: a "comparison field" and a "recognition field". The recognition field determines the best match (neuron) for the vector transferred from the comparison field, and it also has lateral inhibitory connections. Implementation details and exact equations can readily be found by googling the names of these models, so I won't put them here.
2,908 | How can an artificial neural network (ANN) be used for unsupervised clustering? | You want to look into self-organizing maps. Kohonen (who invented them) wrote a book about them. There are packages for this in R (som, kohonen), and there are implementations in other languages such as MATLAB.
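For example, a minimal sketch with the kohonen package (the iris data and the 5x5 grid are arbitrary choices):
library(kohonen)
set.seed(1)
X <- scale(as.matrix(iris[, 1:4]))
som_model <- som(X, grid = somgrid(xdim = 5, ydim = 5, topo = "hexagonal"), rlen = 200)
# each observation is assigned to its best-matching unit,
# which already acts as a (fine-grained) clustering of the data
table(unit = som_model$unit.classif, species = iris$Species)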
2,909 | How can an artificial neural network (ANN) be used for unsupervised clustering? | Maybe the Clustering with Neural Network and Index (CNNI) model is what you are looking for.
https://doi.org/10.31219/osf.io/ejxm6
CNNI uses a Neural Network to cluster data points. Training of the Neural Network mimics supervised learning, with an internal clustering evaluation index acting as the loss function. It successively adjusts the weights of the Neural Network to reduce the loss (improve the value of the index).
Structure of CNNI:
The structure of CNNI is simple: a Neural Network for supervised learning plus an internal clustering evaluation index. The index acts as the loss function, because there is no target output associated with each input data point in a clustering scenario.
The number of neurons in the input layer of CNNI equals the dimension of the data points given to the network. The number of neurons in the output layer of CNNI equals $K$ (the number of clusters we want to classify). By comparing the values of the output neurons, the label of a data point is obtained (e.g., by finding the output neuron with the maximum value).
Training of CNNI differs somewhat from that of other supervised-learning Neural Networks. We need to compute each data point's label according to the Neural Network's current state, then calculate the value of the clustering evaluation index according to the labels of all data points. Adjustment of the weights of the Neural Network is based on the value of the index.
An experiment shows that CNNI equipped with MMJ-SC achieves the first parametric (inductive) clustering model that can deal with non-convex shaped (non-flat geometry) data, which implies it is the first general-purpose parametric (inductive) clustering model.
2,910 | What makes the Gaussian kernel so magical for PCA, and also in general? | I think the key to the magic is smoothness. My long answer, which follows,
is simply an explanation of this smoothness. It may or may not be an answer you expect.
Short answer:
Given a positive definite kernel $k$, there exists its corresponding
space of functions $\mathcal{H}$. Properties of functions are determined
by the kernel. It turns out that if $k$ is a Gaussian kernel, the
functions in $\mathcal{H}$ are very smooth. So, a learned function
(e.g., a regression function, or principal components in an RKHS as in kernel
PCA) is very smooth. Usually a smoothness assumption is sensible for
most datasets we want to tackle. This explains why a Gaussian kernel
is magical.
Long answer for why a Gaussian kernel gives smooth functions:
A positive definite kernel $k(x,y)$ defines (implicitly) an inner
product $k(x,y)=\left\langle \phi(x),\phi(y)\right\rangle _{\mathcal{H}}$
for feature vector $\phi(x)$ constructed from your input $x$, and
$\mathcal{H}$ is a Hilbert space. The notation $\left\langle \phi(x),\phi(y)\right\rangle $
means an inner product between $\phi(x)$ and $\phi(y)$. For our purpose,
you can imagine $\mathcal{H}$ to be the usual Euclidean space but
possibly with an infinite number of dimensions. Imagine the usual vector
that is infinitely long like $\phi(x)=\left(\phi_{1}(x),\phi_{2}(x),\ldots\right)$.
In kernel methods, $\mathcal{H}$ is a space of functions called reproducing
kernel Hilbert space (RKHS). This space has a special property called
``reproducing property'' which is that $f(x)=\left\langle f,\phi(x)\right\rangle $.
This says that to evaluate $f(x)$, first you construct a feature
vector (infinitely long as mentioned) for $f$. Then you construct
your feature vector for $x$ denoted by $\phi(x)$ (infinitely long).
The evaluation of $f(x)$ is given by taking an inner product of the
two. Obviously, in practice, no one will construct an infinitely long vector. Since we only care about its inner product, we just directly evaluate the kernel $k$. Bypassing the computation of explicit features and directly computing its inner product is known as the "kernel trick".
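As a small illustration of working with the kernel directly, here is a from-scratch sketch of kernel PCA on toy data (the bandwidth and the data are arbitrary choices):
set.seed(1)
gauss_kernel <- function(X, sigma = 1) {
  D2 <- as.matrix(dist(X))^2          # squared Euclidean distances
  exp(-D2 / sigma^2)                  # k(x, y) = exp(-||x - y||^2 / sigma^2)
}
# toy data: two concentric rings (not linearly separable)
n <- 100
theta <- runif(2 * n, 0, 2 * pi)
r <- rep(c(1, 3), each = n) + rnorm(2 * n, sd = 0.1)
X <- cbind(r * cos(theta), r * sin(theta))
K  <- gauss_kernel(X, sigma = 2)
N  <- nrow(K)
One <- matrix(1 / N, N, N)
Kc <- K - One %*% K - K %*% One + One %*% K %*% One   # center in feature space
eig <- eigen(Kc, symmetric = TRUE)
# projections of the data onto the first two kernel principal components
scores <- eig$vectors[, 1:2] %*% diag(sqrt(pmax(eig$values[1:2], 0)))
head(scores)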
What are the features?
I kept saying features $\phi_{1}(x),\phi_{2}(x),\ldots$ without specifying
what they are. Given a kernel $k$, the features are not unique. But
$\left\langle \phi(x),\phi(y)\right\rangle $ is uniquely determined.
To explain smoothness of the functions, let us consider Fourier features.
Assume a translation invariant kernel $k$, meaning $k(x,y)=k(x-y)$
i.e., the kernel only depends on the difference of the two arguments.
Gaussian kernel has this property. Let $\hat{k}$ denote the Fourier
transform of $k$.
In this Fourier viewpoint, the features of $f$
are given by $f:=\left(\cdots,\hat{f}_{l}/\sqrt{\hat{k}_{l}},\cdots\right)$.
This is saying that the feature representation of your function $f$
is given by its Fourier transform divided by the Fourier transform
of the kernel $k$. The feature representation of $x$, which is $\phi(x)$
is $\left(\cdots,\sqrt{\hat{k}_{l}}\exp\left(-ilx\right),\cdots\right)$
where $i=\sqrt{-1}$. One can show that the reproducing property holds
(an exercise to readers).
As in any Hilbert space, all elements belonging to the space must
have a finite norm. Let us consider the squared norm of an $f\in\mathcal{H}$:
$
\|f\|_{\mathcal{H}}^{2}=\left\langle f,f\right\rangle _{\mathcal{H}}=\sum_{l=-\infty}^{\infty}\frac{\hat{f}_{l}^{2}}{\hat{k}_{l}}.
$
So when is this norm finite, i.e., when does $f$ belong to the space? It is
when $\hat{f}_{l}^{2}$ drops faster than $\hat{k}_{l}$ so that the
sum converges. Now, the Fourier transform of a Gaussian kernel $k(x,y)=\exp\left(-\frac{\|x-y\|^{2}}{\sigma^{2}}\right)$
is another Gaussian where $\hat{k}_{l}$ decreases exponentially fast
with $l$. So if $f$ is to be in this space, its Fourier transform
must drop even faster than that of $k$. This means the function will
have effectively only a few low frequency components with high weights.
A signal with only low frequency components does not ``wiggle''
much. This explains why a Gaussian kernel gives you a smooth function.
Extra: What about a Laplace kernel?
If you consider a Laplace kernel $k(x,y)=\exp\left(-\frac{\|x-y\|}{\sigma}\right)$,
its Fourier transform is a Cauchy distribution which drops much slower than the exponential function in the Fourier
transform of a Gaussian kernel. This means a function $f$ will have
more high-frequency components. As a result, the function given by
a Laplace kernel is ``rougher'' than that given by a Gaussian kernel.
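A quick numerical way to see this claim (a sketch; the data, bandwidth, and ridge penalty are arbitrary) is to fit kernel ridge regression with both kernels and compare the roughness of the fitted curves:
set.seed(1)
x <- seq(0, 1, length.out = 100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)
fit_krr <- function(kernel_fn, lambda = 0.1) {
  K <- outer(x, x, kernel_fn)
  alpha <- solve(K + lambda * diag(length(x)), y)   # kernel ridge regression solution
  as.vector(K %*% alpha)                            # fitted values at the x's
}
sigma <- 0.1
f_gauss   <- fit_krr(function(a, b) exp(-(a - b)^2 / sigma^2))
f_laplace <- fit_krr(function(a, b) exp(-abs(a - b) / sigma))
roughness <- function(f) sum(diff(f, differences = 2)^2)  # squared second differences
c(gaussian = roughness(f_gauss), laplace = roughness(f_laplace))
The Laplace fit typically comes out with the larger roughness value, in line with the discussion above.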
What is a property of the Gaussian kernel that other kernels do not have?
Regardless of the Gaussian width, one property is that the Gaussian kernel is ``universal''. Intuitively,
this means, given a bounded continuous function $g$ (arbitrary),
there exists a function $f\in\mathcal{H}$ such that $f$ and $g$
are close (in the sense of $\|\cdot\|_{\infty})$ up to arbitrary
precision needed. Basically, this means Gaussian kernel gives functions which can approximate "nice" (bounded, continuous) functions arbitrarily well. Gaussian and Laplace kernels are universal. A polynomial kernel, for
example, is not.
Why don't we put the norm through, say, a Cauchy PDF and expect the
same results?
In general, you can do anything you like as long as the resulting
$k$ is positive definite. Positive definiteness is defined as $\sum_{i=1}^{N}\sum_{j=1}^{N}k(x_{i},x_{j})\alpha_{i}\alpha_{j}\geq 0$
for all $\alpha_{i}\in\mathbb{R}$, $\{x_{i}\}_{i=1}^{N}$ and all
$N\in\mathbb{N}$ (set of natural numbers). If $k$ is not positive
definite, then it does not correspond to an inner product space. All
the analysis breaks because you do not even have a space of functions
$\mathcal{H}$ as mentioned. Nonetheless, it may work empirically. For example, the hyperbolic tangent kernel (see number 7 on this page)
$k(x,y) = \tanh(\alpha x^\top y + c)$
which is intended to imitate sigmoid activation units in neural networks, is only positive definite for some settings of $\alpha$ and $c$. Still it was reported that it works in practice.
What about other kinds of features?
I said features are not unique. For the Gaussian kernel, another set of features is given by the Mercer expansion. See Section 4.3.1 of the famous Gaussian process book. In this case, the features $\phi(x)$ are Hermite polynomials evaluated at $x$.
2,911 | What makes the Gaussian kernel so magical for PCA, and also in general? | I will do my best to answer this question not because I'm an expert on the topic (quite the opposite), but because I'm curious about the field and the topic, combined with an idea that it could be a good educational experience. Anyway, here's the result of my brief amateur research on the subject.
TL;DR: I would consider the following passage from the research paper "The connection between regularization operators and support vector kernels" as the short answer to this question:
Gaussian kernels tend to yield good performance under general
smoothness assumptions and should be considered especially if no
additional knowledge of the data is available.
Now, a detailed answer (to the best of my understanding; for math details, please use references).
As we know, principal component analysis (PCA) is a highly popular approach to dimensionality reduction, on its own and for subsequent classification of data: http://www.visiondummy.com/2014/05/feature-extraction-using-pca. However, in situations when the data carries non-linear dependencies (in other words, is not linearly separable), traditional PCA is not applicable (does not perform well). For those cases, other approaches can be used, and non-linear PCA is one of them.
Approaches where PCA is based on a kernel function are usually referred to by the umbrella term "kernel PCA" (kPCA). Using the Gaussian radial basis function (RBF) kernel is probably the most popular variation. This approach is described in detail in multiple sources, but I very much like an excellent explanation by Sebastian Raschka in this blog post. However, while mentioning the possibility of using kernel functions other than the Gaussian RBF, the post focuses on the latter due to its popularity. This nice blog post, introducing kernel approximations and the kernel trick, mentions one more possible reason for the Gaussian kernel's popularity for PCA: infinite dimensionality.
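For a quick hands-on impression, the kernlab package provides an RBF-kernel PCA out of the box (a sketch, assuming kernlab's kpca()/rotated() interface; the data set and sigma are arbitrary choices):
library(kernlab)
kp <- kpca(as.matrix(iris[, 1:4]), kernel = "rbfdot", kpar = list(sigma = 0.1), features = 2)
head(rotated(kp))   # the data projected onto the first two kernel principal components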
Additional insights can be found in several answers on Quora. In particular, reading this excellent discussion reveals several points on potential reasons for the Gaussian kernel's popularity, as follows.
Gaussian kernels are universal:
Gaussian kernels are universal kernels i.e. their use with appropriate
regularization guarantees a globally optimal predictor which minimizes
both the estimation and approximation errors of a classifier.
Gaussian kernels are circular (which leads to the above-mentioned infinite dimensionality?)
Gaussian kernels can represent "highly varying terrains"
The following point, supporting the main conclusion above, is better delivered by citing the author:
The Gaussian RBF kernel is very popular and makes a good default
kernel especially in absence of expert knowledge about data and domain
because it kind of subsumes polynomial and linear kernel as well.
Linear Kernels and Polynomial Kernels are a special case of Gaussian
RBF kernel. Gaussian RBF kernels are non-parametric model which
essentially means that the complexity of the model is potentially
infinite because the number of analytic functions are infinite.
Gaussian kernels are optimal (on smoothness, read more here - same author):
A Gaussian Kernel is just a band pass filter; it selects the most
smooth solution. [...] A Gaussian Kernel works best when the infinite
sum of high order derivatives converges fastest--and that happens for
the smoothest solutions.
Finally, additional points from this nice answer:
Gaussian kernels support infinitely complex models
Gaussian kernels are more flexible
NOTES:
The above-referenced point about the Gaussian kernel being an optimal choice, especially when there is no prior knowledge about the data, is supported by the following sentence from this CV answer:
In the absence of expert knowledge, the Radial Basis Function kernel
makes a good default kernel (once you have established it is a problem
requiring a non-linear model).
For those curious about non-essential differences between Gaussian RBF kernel and standard Gaussian kernel, this answer might be of interest: https://stats.stackexchange.com/a/79193/31372.
For those interested in implementing kPCA for pleasure or business, this nice blog post might be helpful. It is written by one of the authors (creators?) of Accord.NET - a very interesting .NET open source framework for statistical analysis, machine learning, signal processing and much more.
2,912 | What makes the Gaussian kernel so magical for PCA, and also in general? | Let me put in my two cents.
The way I think about Gaussian kernels is as nearest-neighbor classifiers in some sense. What a Gaussian kernel does is represent each point by its distance to all the other points in the dataset. Now think of classifiers with linear or polynomial boundaries: the boundaries are limited to certain shapes. However, when you look at nearest neighbor, the boundary can practically take any shape. That is, I think, why we view the Gaussian kernel as non-parametric, i.e., adjusting the boundary depending on the data. Another way to think of it is that the Gaussian kernel adjusts to the local shape in a region, similar to how a nearest neighbor locally adjusts the boundary by looking at the distance to other points in the local region.
I don't have a mathematical argument for this, but I think the fact that the Gaussian kernel in fact maps to an infinite dimensional space has something to do with its success. For the linear and polynomial kernels, the dot products are taken in finite dimensional spaces; hence it seems more powerful to do things in a larger space. I hope someone has a better grasp of these things. That also means that if we can find other kernels with infinite dimensional spaces, they should also be quite powerful. Unfortunately, I'm not familiar with any such kernel.
For your last point, I think Cauchy pdf or any other pdf that in some way measures the distance to other points should work equally well. Again, I don't have a good mathematical argument for it, but the connection to nearest neighbor makes this plausible.
Edit:
Here are some ideas on how to think of a classifier using Gaussian kernels as nearest-neighbor classifiers. First, let us think about what a nearest-neighbor classifier does. Essentially, a nearest neighbor classifier is a standard classifier that uses the distances between points as inputs. More formally, imagine we create a feature representation $\phi_i$ for each point $x_i$ in the dataset by calculating its distance to all the other points.
$$\phi_i = (d(x_i,x_1), d(x_i, x_2), \ldots, d(x_i, x_n))$$
Above, $d$ is a distance function. Then what a nearest neighbor classifier does is to predict the class label for a point based on this feature representation and class labels for the data.
$$ p_i = f(\phi_i, y) $$
where $p_i$ is the prediction for data point $x_i$ and $y$ is a vector of class labels for $x_1, x_2, \ldots, x_n$.
The way I think about kernels is that they do a similar thing; they create a feature representation of each point using its kernel values with other points in the dataset. Similar to the nearest neighbor case, more formally this would be
$$ \phi_i = (k(x_i, x_1), k(x_i, x_2), \ldots, k(x_i, x_n)) $$
Now the connection with nearest neighbor is quite obvious; if our kernel function is some measure that is related to the distance measures we use in nearest neighbor classifiers, our kernel based classifier will be similar to a nearest neighbor model.
Note: The classifiers we train using kernels do not work directly with these $\phi_i$ representations, but I think that is what they do implicitly.
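A small R sketch of this analogy (toy data, arbitrary bandwidth): the row of the Gaussian Gram matrix belonging to a point is just a monotone (decreasing) transformation of its distances to all other points, i.e. essentially the information a nearest-neighbor method works with.
set.seed(1)
X <- matrix(rnorm(40), ncol = 2)        # 20 toy points
D <- as.matrix(dist(X))                 # pairwise Euclidean distances
K <- exp(-D^2 / 2)                      # Gaussian kernel with sigma = 1 (arbitrary)
phi_dist   <- D[1, -1]                  # distance-based representation of point 1
phi_kernel <- K[1, -1]                  # kernel-based representation of point 1
cor(phi_dist, phi_kernel, method = "spearman")   # -1: same ranking of neighbors, just reversed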
2,913 | What makes the Gaussian kernel so magical for PCA, and also in general? | The reason is that the VC dimension for Gaussian kernels is infinite, and thus, given the correct values for the parameters (sigma), they can classify an arbitrarily large number of samples correctly.
RBFs work well because they ensure that the matrix $K(x_{i},x_{j})$ is full rank. The idea is that $K(x_{i},x_{i}) > 0$, and off-diagonal terms can be made arbitrarily small by decreasing the value of $\sigma$. Notice that the kernel corresponds to a dot product in the feature space. In this feature space, the dimension is infinite (by considering the series expansion of the exponential).
One could thus see this as projecting those points in different dimensions so that you can separate them.
Consider, by contrast, the case of linear kernels, which cannot even shatter four points in the plane (the VC dimension of linear classifiers in $\mathbb{R}^2$ is only 3).
You may take a look at this paper, though it's very technical. One of the standard books on SVMs should make this concept more accessible.
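A minimal R illustration of the full-rank point above (random toy points, arbitrary bandwidths): as $\sigma$ shrinks, the off-diagonal entries of the RBF Gram matrix die off and its smallest eigenvalue moves away from zero, so the matrix is comfortably full rank (for very large $\sigma$ the smallest eigenvalue can sit near machine precision).
set.seed(1)
X  <- matrix(rnorm(60), ncol = 2)       # 30 toy points
D2 <- as.matrix(dist(X))^2
for (sigma in c(5, 1, 0.2)) {           # arbitrary bandwidths
  K <- exp(-D2 / (2 * sigma^2))
  cat("sigma =", sigma, " min eigenvalue =", min(eigen(K, symmetric = TRUE)$values), "\n")
}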
2,914 | Maximum likelihood method vs. least squares method | I'd like to provide a straightforward answer.
What is the main difference between maximum likelihood estimation
(MLE) vs. least squares estimation (LSE) ?
As @TrynnaDoStat commented, minimizing squared error is equivalent to maximizing the likelihood in this case. As said in Wikipedia,
In a linear model, if the errors belong to a normal distribution the
least squares estimators are also the maximum likelihood estimators.
They can be viewed as essentially the same in your case, since the conditions of the least-squares method are these four: 1) linearity; 2) normally distributed residuals; 3) constant variability/homoscedasticity; 4) independence.
Let me detail it a bit. Since we know that the response variable $y$,
$$y=w^T X +\epsilon \quad\text{ where }\epsilon\thicksim N(0,\sigma^2)$$
follows a normal distribution (normal residuals),
$$P(y|w, X)=\mathcal{N}(y|w^TX, \sigma^2I)$$
then the likelihood function (by independence) is,
\begin{align}
L(y^{(1)},\dots,y^{(N)};w, X^{(1)},\dots,X^{(N)}) &= \prod_{i=1}^N \mathcal{N}(y^{(i)}|w^TX^{(i)}, \sigma^2I) \\ &=
\frac{1}{(2\pi)^{\frac{N}{2}}\sigma^N}\exp\left(\frac{-1}{2\sigma^2}\left(\sum_{i=1}^N(y^{(i)}-w^TX^{(i)})^2\right)\right).
\end{align}
Maximizing $L$ is equivalent to minimizing (since the other terms are all constants, by homoscedasticity)
$$\sum_{i=1}^n(y^{(i)}-w^TX^{(i)})^2.$$
That's the least-squares criterion: the sum of squared differences between the fitted $\hat{Y_i}$ and the actual $Y_i$.
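A quick numerical check of this equivalence in R (simulated toy data): maximizing the Gaussian log-likelihood with a general-purpose optimizer recovers essentially the same coefficients as lm(), which minimizes the sum of squares.
set.seed(1)
x <- rnorm(100)
y <- 1 + 2*x + rnorm(100)
coef(lm(y ~ x))                                   # least squares
negloglik <- function(p) {                        # p = (intercept, slope, log sigma)
  -sum(dnorm(y, mean = p[1] + p[2]*x, sd = exp(p[3]), log = TRUE))
}
optim(c(0, 0, 0), negloglik)$par[1:2]             # ML estimates, close to the lm() ones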
Why can't we use MLE for predicting $y$ values in linear regression
and vice versa?
As explained above, we are actually (more precisely, equivalently) using the MLE when predicting $y$ values. And if the response variable has an arbitrary distribution rather than the normal distribution, such as
the Bernoulli distribution or any other member of the exponential family, we map the linear predictor to the response distribution using a link function (chosen according to the response distribution); the likelihood function then becomes the product of all the outcome probabilities (values between 0 and 1) after the transformation. In linear regression the link function is simply the identity, since the linear predictor is already the mean of the response.
2,915 | Maximum likelihood method vs. least squares method | ML is a broader family of estimators which includes least absolute deviations ($L_1$-norm) and least squares ($L_2$-norm) as special cases. Under the hood the ML estimators share a wide range of common properties, like the (sadly) nonexistent breakdown point. In fact you can use the ML approach as a substitute to optimize a lot of things, including OLS, as long as you are aware of what you're doing.
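As a hedged R sketch of that point of view (simulated data with one gross outlier; the loss functions are written out by hand, nothing here is a production implementation): least squares, least absolute deviations and a Huber-type loss are all obtained by plugging different losses into the same optimization template.
set.seed(1)
x <- rnorm(50); y <- 1 + 2*x + rnorm(50)
y[1] <- y[1] + 20                                 # one gross outlier
fit_loss <- function(loss) {
  optim(c(0, 0), function(b) sum(loss(y - (b[1] + b[2]*x))))$par
}
huber <- function(r, k = 1.345) ifelse(abs(r) <= k, r^2/2, k*abs(r) - k^2/2)
rbind(L2    = fit_loss(function(r) r^2),          # ordinary least squares, pulled by the outlier
      L1    = fit_loss(abs),                      # least absolute deviations
      Huber = fit_loss(huber))                    # Huber's 1964 proposal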
The $L_2$-norm goes back to C. F. Gauss and is around 200 years old, while the modern ML/M-estimation approach goes back to (IMHO) Huber 1964. Many scientists are used to $L_2$-norms and their equations. The theory is well understood and there are a lot of published papers which can be seen as useful extensions, like:
data snooping
stochastic parameters
weak constraints
Professional applications don't just fit data, they check:
if the parameter are significant
if your dataset has outliers
which outlier can be tolerated since it does not cripple the performance
which measurement should be removed since it does not contribute to the degree of freedoms
Also, there is a huge number of specialized statistical hypothesis tests for the $L_2$ case. These do not necessarily carry over to all ML estimators, or at least that should be established with a proof.
Another mundane point is that the $L_2$-norm is very easy to implement and can be extended to Bayesian regularization or to algorithms like Levenberg-Marquardt.
Not to forget: performance. Not every least-squares setup, such as the Gauss-Markov model $\mathbf{X\beta}=\mathbf{L}+\mathbf{r}$, produces a symmetric positive definite normal-equation matrix $\mathbf{X}^{T}\mathbf{X}$. Therefore I use separate libraries for the $L_2$-norm case, where special optimizations are possible.
Feel free to ask for details.
2,916 | Maximum likelihood method vs. least squares method | Let's derive the equivalence through the Bayesian/PGM approach.
Here is the Bayesian network of linear regression:
We can factorize the joint distribution according to the above graph $\mathcal{G'}$:
$$P(y, w, X) = P(y|w, X)P(w)P(X)$$
Since the $P(X)$ is fixed we obtain this:
$$P(y, w, X) \propto P(y|w, X)P(w)$$
Maximum likelihood is a frequentist notion; from the perspective of Bayesian inference it is a special case of maximum a posteriori (MAP) estimation with a uniform prior on the parameters. We can therefore simply drop $P(w)$.
Then we get this: $P(y, w, X) \propto P(y|w, X)$, and we assume $P(y|w, X)=\mathcal{N}(y|w^TX, \sigma^2I)$ due to the normal residuals assumption.
Along the same lines as in this answer, we see that the least-squares method is equivalent to the maximum likelihood method in your case.
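A small numerical illustration in R (toy data; the error standard deviation is fixed at its true value of 1 and the Gaussian prior scale is arbitrary): with a flat prior the posterior mode coincides with the lm()/MLE solution, while a proper Gaussian prior on $w$ shrinks the estimates (ridge-style MAP).
set.seed(1)
x <- rnorm(100); y <- 1 + 2*x + rnorm(100)
X <- cbind(1, x)
neg_log_lik  <- function(w) -sum(dnorm(y, mean = X %*% w, sd = 1, log = TRUE))
neg_log_post <- function(w, prior_sd) neg_log_lik(w) - sum(dnorm(w, 0, prior_sd, log = TRUE))
rbind(OLS       = coef(lm(y ~ x)),
      MLE       = optim(c(0, 0), neg_log_lik)$par,                    # flat prior on w
      MAP_ridge = optim(c(0, 0), neg_log_post, prior_sd = 0.5)$par)   # Gaussian prior shrinks w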
2,917 | Model for predicting number of Youtube views of Gangnam Style | Aha, excellent question!!
I would also have naively proposed an S-shaped logistic curve, but this is obviously a poor fit. As far as I know, the constant increase is an approximation because YouTube counts unique views (one per IP address), so there cannot be more views than computers.
We could use an epidemiological model where people have different susceptibility. To make it simple, we could divide it in the high risk group (say the children) and the low risk group (say the adults). Let's call $x(t)$ the proportion of "infected" children and $y(t)$ the proportion of "infected" adults at time $t$. I will call $X$ the (unknown) number of individuals in the high risk group and $Y$ the (also unknown) number of individuals in the low risk group.
$$\dot{x}(t) = r_1(x(t)+y(t))(X-x(t))$$
$$\dot{y}(t) = r_2(x(t)+y(t))(Y-y(t)),$$
where $r_1 > r_2$. I don't know how to solve that system (maybe @EpiGrad would), but looking at your graphs, we could make a couple of simplifying assumptions. Because the growth does not saturate, we can assume that $Y$ is very large and $y$ is small, or
$$\dot{x}(t) = r_1x(t)(X-x(t))$$
$$\dot{y}(t) = r_2x(t),$$
which predicts linear growth once the high risk group is completely infected. Note that with this model there is no reason to assume $r_1 > r_2$, quite the contrary because the large term $Y-y(t)$ is now subsumed in $r_2$.
This system solves to
$$x(t) = X \frac{C_1e^{Xr_1t}}{1 + C_1e^{Xr_1t}}$$
$$y(t) = r_2 \int x(t)dt + C_2 = \frac{r_2}{r_1} \log(1+C_1e^{Xr_1t})+C_2,$$
where $C_1$ and $C_2$ are integration constants. The total "infected" population is then
$x(t) + y(t)$, which has 3 parameters and 2 integration constants (initial conditions). I don't know how easy it would be to fit...
Update: playing around with the parameters, I could not reproduce the shape of the top curve with this model, the transition from $0$ to $600,000,000$ is always sharper than above. Continuing with the same idea, we could again assume that there are two kinds of Internet users: the "sharers" $x(t)$ and the "loners" $y(t)$. The sharers infect each other, the loners bump into the video by chance. The model is
$$\dot{x}(t) = r_1x(t)(X-x(t))$$
$$\dot{y}(t) = r_2,$$
and solves to
$$x(t) = X \frac{C_1e^{Xr_1t}}{1 + C_1e^{Xr_1t}}$$
$$y(t) = r_2 t+C_2.$$
We could assume that $x(0) = 1$, i.e. that there is only patient 0 at $t=0$, which yields $C_1 = \frac{1}{X-1} \approx \frac{1}{X}$ because $X$ is a large number. $C_2 = y(0)$ so we can assume that $C_2 = 0$. Now only the 3 parameters $X$, $r_1$ and $r_2$ determine the dynamics.
Even with this model, the inflection seems very sharp; it is not a good fit, so the model must be wrong. That actually makes the problem very interesting. As an example, the figure below was built with $X = 600,000,000$, $r_1 = 3.667 \cdot 10^{-10}$ and $r_2 = 1,000,000$.
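For reference, this illustrative curve can be reproduced in a few lines of R from the closed-form solution above with the stated parameter values (a sketch of the figure only, not a fit to the real view counts):
X  <- 6e8                    # size of the "sharer" pool
r1 <- 3.667e-10              # sharer-to-sharer infection rate
r2 <- 1e6                    # daily views contributed by the "loners"
C1 <- 1 / X                  # from x(0) = 1
t  <- 0:100                  # days since upload
x  <- X * C1 * exp(X * r1 * t) / (1 + C1 * exp(X * r1 * t))
y  <- r2 * t
plot(t, x + y, type = "l", lwd = 2, xlab = "days", ylab = "total views x(t) + y(t)")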
Update: From the comments I gathered that Youtube counts views (in its secret way) and not unique IPs, which makes a big difference. Back to the drawing board.
To keep it simple, let's assume that the viewers are "infected" by the video. They come back to watch it regularly, until they clear the infection. One of the simplest models is the SIR (Susceptible-Infected-Resistant) which is the following:
$$\dot{S}(t) = -\alpha S(t)I(t)$$
$$\dot{I}(t) = \alpha S(t)I(t) - \beta I(t)$$
$$\dot{R}(t) = \beta I(t)$$
where $\alpha$ is the rate of infection and $\beta$ is the rate of clearance. The total view count $x(t)$ is such that $\dot{x}(t) = kI(t)$, where $k$ is the average views per day per infected individual.
In this model, the view count starts increasing abruptly some time after the onset of the infection, which is not the case in the original data, perhaps because videos also spread in a non viral (or meme) way. I am no expert in estimating the parameters of the SIR model. Just playing with different values, here is what I came up with (in R).
S0 = 1e7; a = 5e-8; b = 0.01; k = 1.2     # S0 susceptibles, a infection rate, b clearance rate, k views/day per infected
views = 0; S = S0; I = 1;                 # start with a single "infected" viewer
# Extrapolate 1 year after the onset (Euler steps of one day).
for (i in 1:365) {
  dS = -a*I*S;
  dI = a*I*S - b*I;
  S = S + dS;
  I = I + dI;
  views[i+1] = views[i] + k*I             # cumulative view count
}
par(mfrow=c(2,1))
plot(views[1:95], type='l', lwd=2, ylim=c(0,6e8))    # roughly the observed window
plot(views, type='n', lwd=2)
lines(views[1:95], type='l', lwd=2)
lines(96:365, views[96:365], type='l', lty=2)        # dashed part: the extrapolation
The model is obviously not perfect, and could be complemented in many sound ways. This very rough sketch predicts a billion views somewhere around March 2013, let's see...
2,918 | Model for predicting number of Youtube views of Gangnam Style | Probably the most common model for forecasting new product adoption is the Bass diffusion model, which - similar to @gui11aume's answer - models interactions between current and potential users. New product adoption is a pretty hot topic in forecasting, searching for this term should yield tons of info (which I unfortunately don't have the time to expand on here...).
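For a flavour of what the Bass model produces, here is a minimal R sketch of its closed-form cumulative adoption curve with purely illustrative coefficients (p is the innovation coefficient, q the imitation coefficient; neither value nor the market potential is fitted to any actual video):
bass_cdf <- function(t, p, q) (1 - exp(-(p + q) * t)) / (1 + (q / p) * exp(-(p + q) * t))
t <- 0:200                                        # days since release
M <- 1e9                                          # assumed market potential (eventual views)
plot(t, M * bass_cdf(t, p = 0.01, q = 0.1), type = "l", lwd = 2,
     xlab = "days", ylab = "cumulative adopters / views")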
2,919 | Model for predicting number of Youtube views of Gangnam Style | I think you need to separate phenomena like Gangnam Style, which owes much of its views to being a meme/viral thing, from Justin Bieber and Eminem, who are big artists in their own right and who would also spread widely in a traditional setting - JB or Eminem would sell a lot of singles too; I'm not sure that PSY would.
2,920 | Model for predicting number of Youtube views of Gangnam Style | I would look at the Gompertz growth curve.
The Gompertz curve is a 3-parameter (a,b,c) double-exponential formula with time, T, as an independent variable.
R code:
gompertz_growth <- function(a=a,b=b,c=c, t) { a*exp(b*exp(c*t)) }
The Gompertz growth formula is known to be good at describing many life-cycle phenomena where growth first accelerates and then tapers off, resulting in an asymmetric sigmoid curve whose derivative is steeper on the left of the peak than on the right. For example, the total number of articles on Wikipedia, which is also viral in nature, has been following a Gompertz growth curve (with certain a,b,c parameters) for many years with great accuracy.
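For instance, plugging purely illustrative parameter values into the function above (nothing here is fitted to real data) produces the characteristic asymmetric S-shape:
t <- 0:120                                        # e.g. days since upload
# a = upper asymptote; b < 0 sets the displacement; c < 0 sets the growth rate
plot(t, gompertz_growth(a = 1e9, b = -8, c = -0.05, t = t), type = "l", lwd = 2,
     xlab = "t", ylab = "cumulative views (illustrative)")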
Edit:
If the Gompertz curve isn't enough to approximate the shape you're looking for, you may want to add parameters d & θ as described in The Exponentiated Generalized Weibull Gompertz Distribution. Note that this paper uses x instead of t for the independent time parameter. Interestingly, Wikipedia also modified their best approximation by adding a single 4th parameter d, to account for a prediction divergence from the actual value after 2012. The modified 4-param Gompertz curve formula is:
gompertz_2 <- function(a=A,b=B,c=C,d=D, t) {a * exp(b * exp(c*t) + d*t)}
The Gompertz function is named after Benjamin Gompertz (1779-1865), a Gauss contemporary (just 2 years Gauss' junior), the first mathematician to describe it.
2,921 | Model for predicting number of Youtube views of Gangnam Style | OK guys, we need some stylised facts about the diffusion of youtube videos, which turn out to suggest patterns rather different from the usual product diffusion literature. Good place to start is
Meeyoung Cha, Haewoon Kwak, Pablo Rodriguez, Yong-Yeol Ahn, and Sue Moon, 2007, I Tube, You Tube, Everybody Tubes: Analyzing the World’s Largest User Generated Content Video System, Proceedings of the 7th ACM SIGCOMM conference on Internet measurement, ISBN: 978-1-59593-908-1.
and
X Cheng, C Dale, J Liu, 2008, Statistics and social network of youtube videos, in proceedings of International Workshop on Quality of Service (IWQoS), Enschede, the Netherlands, June.
2,922 | Model for predicting number of Youtube views of Gangnam Style | The model is obviously not perfect, and could be complemented in many
sound ways. This very rough sketch predicts a billion views somewhere
around March 2013, let's see...
Looking at the slowdown in views over the past week, the Mar-13 date looks like a decent bet. The majority of the new views appear to be already infected users that return multiple times per day.
With regards to complementing your model, one method that researchers use to track a virus' spread is to monitor its genome mutations - when and where it mutated can show researchers how fast a virus is transmitted and spread (see tracking West Nile Virus in USA).
In a practical sense, videos like Gangnam Style and Party Rock Anthem (by the group LMFAO) are more likely to 'mutate' into parodies, flash mobs, wedding dances, remixes and other video responses than say, Justin Bieber's Baby or Eminem's songs.
Researchers could analyse the number of video responses (and parodies in particular) as a proxy for mutations. Measuring the frequency and popularity of these mutations early in the life of the video could be useful in modelling its lifetime YouTube views.
2,923 | If A and B are correlated with C, why are A and B not necessarily correlated? | Because correlation is a mathematical property of multivariate distributions, some insight can be had purely through calculations, regardless of the statistical genesis of those distributions.
For the Pearson correlations, consider multinormal variables $X$, $Y$, $Z$. These are useful to work with because any non-negative definite matrix actually is the covariance matrix of some multinormal distributions, thereby resolving the existence question. If we stick to matrices with $1$ on the diagonal, the off-diagonal entries of the covariance matrix will be their correlations. Writing the correlation of $X$ and $Y$ as $\rho$, the correlation of $Y$ and $Z$ as $\tau$, and the correlation of $X$ and $Z$ as $\sigma$, we compute that
$1 + 2 \rho \sigma \tau - \left(\rho^2 + \sigma^2 + \tau^2\right) \ge 0$ (because this is the determinant of the correlation matrix and it cannot be negative).
When $\sigma = 0$ this implies that $\rho^2 + \tau^2 \le 1$. To put it another way: when both $\rho$ and $\tau$ are large in magnitude, $X$ and $Z$ must have nonzero correlation.
If $\rho^2 = \tau^2 = 1/2$, then any non-negative value of $\sigma$ (between $0$ and $1$ of course) is possible.
When $\rho^2 + \tau^2 \lt 1$, negative values of $\sigma$ are allowable. For example, when $\rho = \tau = 1/2$, $\sigma$ can be anywhere between $-1/2$ and $1$.
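These cases are easy to check numerically; here is a minimal R sketch that tests whether a proposed triple of correlations is attainable by looking at the smallest eigenvalue of the implied correlation matrix (ordering the variables as $X, Y, Z$, with $\rho$ the correlation of $X$ and $Y$, $\sigma$ of $X$ and $Z$, and $\tau$ of $Y$ and $Z$):
admissible <- function(rho, sigma, tau) {
  R <- matrix(c(1, rho, sigma,
                rho, 1, tau,
                sigma, tau, 1), nrow = 3)
  min(eigen(R, symmetric = TRUE)$values) >= 0    # non-negative definite?
}
admissible(rho = 0.8, sigma = 0,    tau = 0.8)   # FALSE: rho^2 + tau^2 > 1
admissible(rho = 0.5, sigma = -0.4, tau = 0.5)   # TRUE: sigma within [-1/2, 1]
admissible(rho = 0.5, sigma = -0.6, tau = 0.5)   # FALSE: below the lower bound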
These considerations imply there are indeed some constraints on the mutual correlations. The constraints (which depend only on the non-negative definiteness of the correlation matrix, not on the actual distributions of the variables) can be tightened depending on assumptions about the univariate distributions. For instance, it's easy to see (and to prove) that when the distributions of $X$ and $Y$ are not in the same location-scale family, their correlations must be strictly less than $1$ in size. (Proof: a correlation of $\pm 1$ implies $X$ and $Y$ are linearly related a.s.)
As far as Spearman rank correlations go, consider three trivariate observations $(1,1,2)$, $(2,3,1)$, and $(3,2,3)$ of $(X, Y, Z)$. Their mutual rank correlations are $1/2$, $1/2$, and $-1/2$. Thus even the sign of the rank correlation of $Y$ and $Z$ can be the reverse of the signs of the correlations of $X$ and $Y$ and $X$ and $Z$.
2,924 | If A and B are correlated with C, why are A and B not necessarily correlated? | I'm on an annual fishing trip right now. There is a correlation between the time of day I fish and the amount of fish I catch. There is also a correlation between the size of the bait I use and the amount of fish I catch. There is no correlation between the size of the bait and the time of day.
2,925 | If A and B are correlated with C, why are A and B not necessarily correlated? | As an add-on to whuber's answer: The presented formula
$1 + 2 \rho \sigma \tau - \left(\rho^2 + \sigma^2 + \tau^2\right) \ge 0$.
can be transformed into the following inequality (Olkin, 1981):
$\sigma\tau - \sqrt{(1-\sigma^2)(1-\tau^2)} \le \rho \le \sigma\tau + \sqrt{(1-\sigma^2)(1-\tau^2)}$
A graphical representation of the upper and lower limits for $\rho$ looks like:
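A short R sketch reproducing such a plot directly from the inequality (a grid over $\sigma$ for a few arbitrarily chosen fixed values of $\tau$):
sigma <- seq(-1, 1, length.out = 201)
plot(NA, xlim = c(-1, 1), ylim = c(-1, 1), xlab = expression(sigma), ylab = expression(rho))
for (tau in c(0, 0.5, 0.9)) {                    # illustrative values of tau
  bound <- sqrt((1 - sigma^2) * (1 - tau^2))
  lines(sigma, sigma * tau + bound, lty = 1)     # upper limit for rho
  lines(sigma, sigma * tau - bound, lty = 2)     # lower limit for rho
}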
Olkin, I. (1981). Range restrictions for product-moment correlation matrices. Psychometrika, 46, 469-472. doi:10.1007/BF02293804
2,926 | If A and B are correlated with C, why are A and B not necessarily correlated? | Correlation is the cosine of the angle between two vectors. In the situation described, (A,B,C) is a triple of observations, made n times, each observation being a real number. The correlation between A and B is the cosine of the angle between $V_A=A-E(A)$ and $V_B=B-E(B)$ as measured in n-dimensional euclidean space. So our situation reduces to considering 3 vectors $V_A$, $V_B$ and $V_C$ in n dimensional space. We have 3 pairs of vectors and therefore 3 angles. If two of the angles are small (high correlation) then the third one will also be small. But to say "correlated" is not much of a restriction: it means that the angle is between 0 and $\pi/2$. In general this gives no restriction at all on the third angle. Putting it another way, start with any angle less than $\pi$ between $V_A$ and $V_B$ (any correlation except -1). Let $V_C$ bisect the angle between $V_A$ and $V_B$. Then C will be correlated with both A and B. | If A and B are correlated with C, why are A and B not necessarily correlated? | Correlation is the cosine of the angle between two vectors. In the situation described, (A,B,C) is a triple of observations, made n times, each observation being a real number. The correlation between | If A and B are correlated with C, why are A and B not necessarily correlated?
Correlation is the cosine of the angle between two vectors. In the situation described, (A,B,C) is a triple of observations, made n times, each observation being a real number. The correlation between A and B is the cosine of the angle between $V_A=A-E(A)$ and $V_B=B-E(B)$ as measured in n-dimensional euclidean space. So our situation reduces to considering 3 vectors $V_A$, $V_B$ and $V_C$ in n dimensional space. We have 3 pairs of vectors and therefore 3 angles. If two of the angles are small (high correlation) then the third one will also be small. But to say "correlated" is not much of a restriction: it means that the angle is between 0 and $\pi/2$. In general this gives no restriction at all on the third angle. Putting it another way, start with any angle less than $\pi$ between $V_A$ and $V_B$ (any correlation except -1). Let $V_C$ bisect the angle between $V_A$ and $V_B$. Then C will be correlated with both A and B. | If A and B are correlated with C, why are A and B not necessarily correlated?
Correlation is the cosine of the angle between two vectors. In the situation described, (A,B,C) is a triple of observations, made n times, each observation being a real number. The correlation between |
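The bisector construction in the answer above (correlation as the cosine of the angle between centered vectors) can be checked numerically; a minimal R sketch, with the sample size and the random vectors being arbitrary choices for illustration:
set.seed(1)                                      # for reproducibility
n <- 1000
a <- scale(rnorm(n), scale = FALSE)[, 1]         # centered vector playing the role of V_A
b <- scale(rnorm(n), scale = FALSE)[, 1]         # centered vector playing the role of V_B
cvec <- a / sqrt(sum(a^2)) + b / sqrt(sum(b^2))  # bisector of the angle between V_A and V_B
cor(a, b)      # near zero
cor(a, cvec)   # clearly positive (around 0.7)
cor(b, cvec)   # clearly positive (around 0.7)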
2,927 | If A and B are correlated with C, why are A and B not necessarily correlated? | I think it's better to ask "why SHOULD they be correlated?" or, perhaps "Why should they have any particular correlation?"
The following R code shows a case where x1 and x2 are both correlated with Y, but have 0 correlation with each other
x1 <- rnorm(100)                      # first predictor
x2 <- rnorm(100)                      # second predictor, drawn independently of x1
y <- 3*x1 + 2*x2 + rnorm(100, 0, .3)  # response depends on both predictors
cor(x1,y)   # clearly positive
cor(x2,y)   # clearly positive
cor(x1,x2)  # close to zero, since x1 and x2 were generated independently
The correlation with $Y$ can be made stronger by reducing the .3 to .1 or whatever | If A and B are correlated with C, why are A and B not necessarily correlated? | I think it's better to ask "why SHOULD they be correlated?" or, perhaps "Why should have any particular correlation?"
The following R code shows a case where x1 and x2 are both correlated with Y, but | If A and B are correlated with C, why are A and B not necessarily correlated?
I think it's better to ask "why SHOULD they be correlated?" or, perhaps "Why should they have any particular correlation?"
The following R code shows a case where x1 and x2 are both correlated with Y, but have 0 correlation with each other
x1 <- rnorm(100)                      # first predictor
x2 <- rnorm(100)                      # second predictor, drawn independently of x1
y <- 3*x1 + 2*x2 + rnorm(100, 0, .3)  # response depends on both predictors
cor(x1,y)   # clearly positive
cor(x2,y)   # clearly positive
cor(x1,x2)  # close to zero, since x1 and x2 were generated independently
The correlation with $Y$ can be made stronger by reducing the .3 to .1 or whatever | If A and B are correlated with C, why are A and B not necessarily correlated?
I think it's better to ask "why SHOULD they be correlated?" or, perhaps "Why should have any particular correlation?"
The following R code shows a case where x1 and x2 are both correlated with Y, but |
2,928 | If A and B are correlated with C, why are A and B not necessarily correlated? | I will leave the statistical demonstration to those who are better suited than me for it... but intuitively say that event A generates a process X that contributes to the generation of event C. Then A is correlated to C (through X). B, on the other hand, generates Y, which also shapes C. Therefore A is correlated to C, B is correlated to C, but A and B are not correlated. | If A and B are correlated with C, why are A and B not necessarily correlated? | I will leave the statistical demonstration to those who are better suited than me for it... but intuitively say that event A generates a process X that contributes to the generation of event C. Then A | If A and B are correlated with C, why are A and B not necessarily correlated?
I will leave the statistical demonstration to those who are better suited than me for it... but intuitively say that event A generates a process X that contributes to the generation of event C. Then A is correlated to C (through X). B, on the other hand, generates Y, which also shapes C. Therefore A is correlated to C, B is correlated to C, but A and B are not correlated. | If A and B are correlated with C, why are A and B not necessarily correlated?
I will leave the statistical demonstration to those who are better suited than me for it... but intuitively say that event A generates a process X that contributes to the generation of event C. Then A |
2,929 | If A and B are correlated with C, why are A and B not necessarily correlated? | For those who want some intuition, a correlation can be seen as a cosine of some angle. So, consider three vectors in 3D, let's say A, B, and C, each corresponding to one variable. The question is to determine the range of possible angles between A and C when the angle between A and B as well as the angle between B and C are known. For that, you can play with an online tool without installing any software. Just go to the page http://www.montefiore.ulg.ac.be/~pierard/chained_correlations.php | If A and B are correlated with C, why are A and B not necessarily correlated? | For those who want some intuition, a correlation can be seen as a cosine of some angle. So, consider three vectors in 3D, let say A, B, and C, each corresponding to one variable. The question is to de | If A and B are correlated with C, why are A and B not necessarily correlated?
For those who want some intuition, a correlation can be seen as a cosine of some angle. So, consider three vectors in 3D, let's say A, B, and C, each corresponding to one variable. The question is to determine the range of possible angles between A and C when the angle between A and B as well as the angle between B and C are known. For that, you can play with an online tool without installing any software. Just go to the page http://www.montefiore.ulg.ac.be/~pierard/chained_correlations.php | If A and B are correlated with C, why are A and B not necessarily correlated?
For those who want some intuition, a correlation can be seen as a cosine of some angle. So, consider three vectors in 3D, let say A, B, and C, each corresponding to one variable. The question is to de |
2,930 | If A and B are correlated with C, why are A and B not necessarily correlated? | Let's take one example:
A={x1,x2,x3,x4,x5,x6,x7,x8,x9}
B={x1,x2,x3,0,0,0,0,0,0}
C={0,0,0,x4,x5,x6,0,0,0}
For some x, A and B will have a significant correlation; similarly, A and C will also have a significant correlation, but the correlation of B and C won't be significant.
So it's not necessarily true that if A and B are correlated and A and C are correlated, then B and C are also correlated.
Note: for a deeper understanding, think this example through on a large data set. | If A and B are correlated with C, why are A and B not necessarily correlated? | Lets take one example:
A={x1,x2,x3,x4,x5,x6,x7,x8,x9}
B={x1,x2,x3,0,0,0,0,0,0}
C={0,0,0,x4,x5,x6,0,0,0}
For some x, A and B will have significant correlation, similarly A and C will also have signi | If A and B are correlated with C, why are A and B not necessarily correlated?
Let's take one example:
A={x1,x2,x3,x4,x5,x6,x7,x8,x9}
B={x1,x2,x3,0,0,0,0,0,0}
C={0,0,0,x4,x5,x6,0,0,0}
For some x, A and B will have a significant correlation; similarly, A and C will also have a significant correlation, but the correlation of B and C won't be significant.
So it's not necessarily true that if A and B are correlated and A and C are correlated, then B and C are also correlated.
Note: for a deeper understanding, think this example through on a large data set. | If A and B are correlated with C, why are A and B not necessarily correlated?
Lets take one example:
A={x1,x2,x3,x4,x5,x6,x7,x8,x9}
B={x1,x2,x3,0,0,0,0,0,0}
C={0,0,0,x4,x5,x6,0,0,0}
For some x, A and B will have significant correlation, similarly A and C will also have signi |
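The set-based example above can be tried on a large sample, as the note suggests; a minimal R sketch in which the x's are drawn as standard normals (an arbitrary choice for illustration): B copies one block of A and C copies a disjoint block, so both correlate with A while remaining uncorrelated with each other.
set.seed(1)                                       # for reproducibility
x <- rnorm(900)                                   # the x's
A <- x
B <- ifelse(seq_along(x) <= 300, x, 0)            # B agrees with A on the first third, 0 elsewhere
C <- ifelse(seq_along(x) > 300 & seq_along(x) <= 600, x, 0)  # C agrees with A on the middle third
cor(A, B)   # around 0.58
cor(A, C)   # around 0.58
cor(B, C)   # essentially zero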
2,931 | If A and B are correlated with C, why are A and B not necessarily correlated? | Because correlation is not the same as causation.
If A has a causal effect on B (resulting in a positive correlation between A and B), and B has a causal effect on C (resulting in a positive correlation between B and C), then A has a causal effect on C via B (and there will be a positive correlation between A and C).
You might illustrate this causal structure the following way:
A -> B -> C
If we simply specify that B is correlated with both A and C, this might be due to the underlying causal structure specified above, in which case A and C will be correlated. However, specifying the correlations generally under-constrains the causal structure. Another possible causal structure that could produce the desired correlations is one where A and C are totally independent but both have a causal influence on B (producing the desired positive correlation between both B and A, and B and C, without requiring any correlation between A and C).
You might illustrate this causal structure the following way:
A -> B <- C | If A and B are correlated with C, why are A and B not necessarily correlated? | Because correlation is not the same as causation.
If A has a causal effect on B (resulting in a positive correlation between A and B), and B has a causal effect on C (resulting in a positive correlati | If A and B are correlated with C, why are A and B not necessarily correlated?
Because correlation is not the same as causation.
If A has a causal effect on B (resulting in a positive correlation between A and B), and B has a causal effect on C (resulting in a positive correlation between B and C), then A has a causal effect on C via B (and there will be a positive correlation between A and C).
You might illustrate this causal structure the following way:
A -> B -> C
If we simply specify that B is correlated with both A and C, this might be due to the underlying causal structure specified above, in which case A and C will be correlated. However, specifying the correlations generally under-constrains the causal structure. Another possible causal structure that could produce the desired correlations is one where A and C are totally independent but both have a causal influence on B (producing the desired positive correlation between both B and A, and B and C, without requiring any correlation between A and C).
You might illustrate this causal structure the following way:
A -> B <- C | If A and B are correlated with C, why are A and B not necessarily correlated?
Because correlation is not the same as causation.
If A has a causal effect on B (resulting in a positive correlation between A and B), and B has a causal effect on C (resulting in a positive correlati |
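The two causal structures above are easy to tell apart in a simulation; a minimal R sketch (the coefficients, noise levels and sample size are arbitrary choices):
set.seed(1)                      # for reproducibility
n <- 10000
A1 <- rnorm(n)                   # chain A -> B -> C
B1 <- 2 * A1 + rnorm(n)
C1 <- 2 * B1 + rnorm(n)
cor(A1, C1)                      # clearly positive: the association is passed along the chain
A2 <- rnorm(n)                   # collider A -> B <- C
C2 <- rnorm(n)                   # generated independently of A2
B2 <- A2 + C2 + rnorm(n)
cor(A2, B2)                      # positive
cor(C2, B2)                      # positive
cor(A2, C2)                      # essentially zero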
2,932 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | Probability spaces and Kolmogorov's axioms
A probability space $\mathcal{P}$ is by definition a triple $(\Omega, \mathcal{F}, \mathbb{P} )$ where $\Omega$ is a set of outcomes, $\mathcal{F}$ is a $\sigma$-algebra on the subsets of $\Omega$ and $\mathbb{P}$ is a probability measure that fulfills the axioms of Kolmogorov, i.e. $\mathbb{P}$ is a function from $\mathcal{F}$ to $[0,1]$ such that $\mathbb{P}(\Omega)=1$ and for disjoint $E_1, E_2, \dots$ in $\mathcal{F}$ it holds that $\mathbb{P} \left( \cup_{j=1}^\infty E_j \right)=\sum_{j=1}^\infty \mathbb{P}(E_j)$.
Within such a probability space one can, for two events $E_1, E_2$ in $\mathcal{F}$ define the conditional probability as $\mathbb{P}(E_1|_{E_2})\stackrel{def}{=}\frac{\mathbb{P}(E_1 \cap E_2)}{\mathbb{P}(E_2)}$
Note that:
this ''conditional probability'' is only defined when $\mathbb{P}$ is defined on $\mathcal{F}$, so we need a probability space to be able to define conditional probabilities.
A probability space is defined in very general terms (a set $\Omega$, a $\sigma$-algebra $\mathcal{F}$ and a probability measure $\mathbb{P}$), the only requirement is that certain properties should be fulfilled but apart from that these three elements can be ''anything''.
More detail can be found in this link
Bayes' rule holds in any (valid) probability space
From the definition of conditional probability it also holds that $\mathbb{P}(E_2|_{E_1})=\frac{\mathbb{P}(E_2 \cap E_1)}{\mathbb{P}(E_1)}$. And from the two latter equations we find Bayes' rule. So Bayes' rule holds (by definition of conditional probability) in any probability space (to show it, derive $\mathbb{P}(E_1 \cap E_2)$ and $\mathbb{P}(E_2 \cap E_1)$ from each equation and equate them (they are equal because intersection is commutative)).
As Bayes rule is the basis for Bayesian inference, one can do Bayesian analysis in any valid (i.e. fulfilling all conditions, a.o. Kolmogorov's axioms) probability space.
Frequentist definition of probability is a ''special case''
The above holds ''in general'', i.e. we have no specific $\Omega$, $\mathcal{F}$, $\mathbb{P}$ in mind as long as $\mathcal{F}$ is a $\sigma$-algebra on subsets of $\Omega$ and $\mathbb{P}$ fulfills Kolmogorov's axioms.
We will now show that a ''frequentist'' definition of $\mathbb{P}$ fulfills Kolmogorov's axioms. If that is the case then ''frequentist'' probabilities are only a special case of Kolmogorov's general and abstract probability.
Let's take an example and roll the dice. Then the set of all possible outcomes $\Omega$ is $\Omega=\{1,2,3,4,5,6\}$. We also need a $\sigma$-algebra on this set $\Omega$ and we take $\mathcal{F}$ the set of all subsets of $\Omega$, i.e. $\mathcal{F}=2^\Omega$.
We still have to define the probability measure $\mathbb{P}$ in a frequentist way. Therefore we define $\mathbb{P}(\{1\})$ as $\mathbb{P}(\{1\}) \stackrel{def}{=} \lim_{n \to +\infty} \frac{n_1}{n}$ where $n_1$ is the number of $1$'s obtained in $n$ rolls of the dice. Similarly for $\mathbb{P}(\{2\})$, ... $\mathbb{P}(\{6\})$.
In this way $\mathbb{P}$ is defined for all singletons in $\mathcal{F}$. For any other set in $\mathcal{F}$, e.g. $\{1,2\}$ we define $\mathbb{P}(\{1,2\})$ in a frequentist way i.e.
$\mathbb{P}(\{1,2\}) \stackrel{def}{=} \lim_{n \to +\infty} \frac{n_1+n_2}{n}$, but by the linearity of the 'lim', this is equal to $\mathbb{P}(\{1\})+\mathbb{P}(\{2\})$, which implies that Kolmogorov's axioms hold.
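As a quick illustration (the limit can of course only be approximated; the number of rolls below is an arbitrary finite choice), the simulated relative frequencies behave exactly as additively as this argument requires:
set.seed(1)                                 # for reproducibility
n <- 1e6                                    # an arbitrary, large number of rolls
rolls <- sample(1:6, n, replace = TRUE)     # a fair die rolled n times
p1  <- mean(rolls == 1)                     # relative frequency of the event {1}
p2  <- mean(rolls == 2)                     # relative frequency of the event {2}
p12 <- mean(rolls %in% c(1, 2))             # relative frequency of the event {1,2}
c(p1, p2, p1 + p2, p12)                     # p1 + p2 equals p12 exactly, and both are near 1/3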
So the frequentist definition of probability is only a special case of Kolmogorov's general and abstract definition of a probability measure.
Note that there are other ways to define a probability measure that fulfills Kolmogorov's axioms, so the frequentist definition is not the only possible one.
Conclusion
The probability in Kolmogorov's axiomatic system is ''abstract'', it has no real meaning, it only has to fulfill conditions called ''axioms''. Using only these axioms Kolmogorov was able to derive a very rich set of theorems.
The frequentist definition of probability fulfills the axioms; therefore, replacing the abstract, ''meaningless'' $\mathbb{P}$ by a probability defined in a frequentist way, all these theorems are valid because the ''frequentist probability'' is only a special case of Kolmogorov's abstract probability (i.e. it fulfills the axioms).
One of the properties that can be derived in Kolmogorov's general framework is Bayes rule. As it holds in the general and abstract framework, it will also hold (cfr supra) in the specific case that the probabilities are defined in a frequentist way (because the frequentist definition fulfills the axioms and these axioms were the only thing that is needed to derive all theorems). So one can do Bayesian analysis with a frequentist definition of probability.
Defining $\mathbb{P}$ in a frequentist way is not the only possibility, there are other ways to define it such that it fulfills the abstract axioms of Kolmogorov. Bayes' rule will also hold in these ''specific cases''. So one can also do Bayesian analysis with a non-frequentist definition of probability.
EDIT 23/8/2016
@mpiktas reaction to your comment:
As I said, the sets $\Omega, \mathcal{F}$ and the probability measure $\mathbb{P}$ have no particular meaning in the axiomatic system, they are abstract.
In order to apply this theory you have to give further definitions (so what you say in your comment "no need to muddle it further with some bizarre definitions'' is wrong, you need additional definitions).
Let's apply it to the case of tossing a fair coin. The set $\Omega$ in Kolmogorov's theory has no particular meaning, it just has to be ''a set''. So we must specify what this set is in case of the fair coin, i.e. we must define the set $\Omega$. If we represent head as H and tail as T, then the set $\Omega$ is by definition $\Omega\stackrel{def}{=}\{H,T\}$.
We also have to define the events, i.e. the $\sigma$-algebra $\mathcal{F}$. We define it as $\mathcal{F} \stackrel{def}{=} \{\emptyset, \{H\},\{T\},\{H,T\} \}$. It is easy to verify that $\mathcal{F}$ is a $\sigma$-algebra.
Next we must define for every event in $E \in \mathcal{F}$ its measure. So we need to define a map from $\mathcal{F}$ in $[0,1]$. I will define it in the frequentist way, for a fair coin, if I toss it a huge number of times, then the fraction of heads will be 0.5, so I define $\mathbb{P}(\{H\})\stackrel{def}{=}0.5$. Similarly I define $\mathbb{P}(\{T\})\stackrel{def}{=}0.5$, $\mathbb{P}(\{H,T\})\stackrel{def}{=}1$ and $\mathbb{P}(\emptyset)\stackrel{def}{=}0$. Note that $\mathbb{P}$ is a map from $\mathcal{F}$ in $[0,1]$ and that it fulfills Kolmogorov's axioms.
For a reference with the frequentist definition of probability see this link (at the end of the section 'definition') and this link. | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | Probability spaces and Kolmogorov's axioms
A probability space $\mathcal{P}$ is by definition a tripple $(\Omega, \mathcal{F}, \mathbb{P} )$ where $\Omega$ is a set of outcomes, $\mathcal{F}$ is a $\s | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
Probability spaces and Kolmogorov's axioms
A probability space $\mathcal{P}$ is by definition a triple $(\Omega, \mathcal{F}, \mathbb{P} )$ where $\Omega$ is a set of outcomes, $\mathcal{F}$ is a $\sigma$-algebra on the subsets of $\Omega$ and $\mathbb{P}$ is a probability measure that fulfills the axioms of Kolmogorov, i.e. $\mathbb{P}$ is a function from $\mathcal{F}$ to $[0,1]$ such that $\mathbb{P}(\Omega)=1$ and for disjoint $E_1, E_2, \dots$ in $\mathcal{F}$ it holds that $\mathbb{P} \left( \cup_{j=1}^\infty E_j \right)=\sum_{j=1}^\infty \mathbb{P}(E_j)$.
Within such a probability space one can, for two events $E_1, E_2$ in $\mathcal{F}$ define the conditional probability as $\mathbb{P}(E_1|_{E_2})\stackrel{def}{=}\frac{\mathbb{P}(E_1 \cap E_2)}{\mathbb{P}(E_2)}$
Note that:
this ''conditional probability'' is only defined when $\mathbb{P}$ is defined on $\mathcal{F}$, so we need a probability space to be able to define conditional probabilities.
A probability space is defined in very general terms (a set $\Omega$, a $\sigma$-algebra $\mathcal{F}$ and a probability measure $\mathbb{P}$), the only requirement is that certain properties should be fulfilled but apart from that these three elements can be ''anything''.
More detail can be found in this link
Bayes' rule holds in any (valid) probability space
From the definition of conditional probability it also holds that $\mathbb{P}(E_2|_{E_1})=\frac{\mathbb{P}(E_2 \cap E_1)}{\mathbb{P}(E_1)}$. And from the two latter equations we find Bayes' rule. So Bayes' rule holds (by definition of conditional probability) in any probability space (to show it, derive $\mathbb{P}(E_1 \cap E_2)$ and $\mathbb{P}(E_2 \cap E_1)$ from each equation and equate them (they are equal because intersection is commutative)).
As Bayes rule is the basis for Bayesian inference, one can do Bayesian analysis in any valid (i.e. fulfilling all conditions, a.o. Kolmogorov's axioms) probability space.
Frequentist definition of probability is a ''special case''
The above holds ''in general'', i.e. we have no specific $\Omega$, $\mathcal{F}$, $\mathbb{P}$ in mind as long as $\mathcal{F}$ is a $\sigma$-algebra on subsets of $\Omega$ and $\mathbb{P}$ fulfills Kolmogorov's axioms.
We will now show that a ''frequentist'' definition of $\mathbb{P}$ fulfills Kolmogorov's axioms. If that is the case then ''frequentist'' probabilities are only a special case of Kolmogorov's general and abstract probability.
Let's take an example and roll the dice. Then the set of all possible outcomes $\Omega$ is $\Omega=\{1,2,3,4,5,6\}$. We also need a $\sigma$-algebra on this set $\Omega$ and we take $\mathcal{F}$ the set of all subsets of $\Omega$, i.e. $\mathcal{F}=2^\Omega$.
We still have to define the probability measure $\mathbb{P}$ in a frequentist way. Therefore we define $\mathbb{P}(\{1\})$ as $\mathbb{P}(\{1\}) \stackrel{def}{=} \lim_{n \to +\infty} \frac{n_1}{n}$ where $n_1$ is the number of $1$'s obtained in $n$ rolls of the dice. Similarly for $\mathbb{P}(\{2\})$, ... $\mathbb{P}(\{6\})$.
In this way $\mathbb{P}$ is defined for all singletons in $\mathcal{F}$. For any other set in $\mathcal{F}$, e.g. $\{1,2\}$ we define $\mathbb{P}(\{1,2\})$ in a frequentist way i.e.
$\mathbb{P}(\{1,2\}) \stackrel{def}{=} \lim_{n \to +\infty} \frac{n_1+n_2}{n}$, but by the linearity of the 'lim', this is equal to $\mathbb{P}(\{1\})+\mathbb{P}(\{2\})$, which implies that Kolmogorov's axioms hold.
So the frequentist definition of probability is only a special case of Kolmogorov's general and abstract definition of a probability measure.
Note that there are other ways to define a probability measure that fulfills Kolmogorov's axioms, so the frequentist definition is not the only possible one.
Conclusion
The probability in Kolmogorov's axiomatic system is ''abstract'', it has no real meaning, it only has to fulfill conditions called ''axioms''. Using only these axioms Kolmogorov was able to derive a very rich set of theorems.
The frequentist definition of probability fulfills the axioms; therefore, replacing the abstract, ''meaningless'' $\mathbb{P}$ by a probability defined in a frequentist way, all these theorems are valid because the ''frequentist probability'' is only a special case of Kolmogorov's abstract probability (i.e. it fulfills the axioms).
One of the properties that can be derived in Kolmogorov's general framework is Bayes rule. As it holds in the general and abstract framework, it will also hold (cfr supra) in the specific case that the probabilities are defined in a frequentist way (because the frequentist definition fulfills the axioms and these axioms were the only thing that is needed to derive all theorems). So one can do Bayesian analysis with a frequentist definition of probability.
Defining $\mathbb{P}$ in a frequentist way is not the only possibility, there are other ways to define it such that it fulfills the abstract axioms of Kolmogorov. Bayes' rule will also hold in these ''specific cases''. So one can also do Bayesian analysis with a non-frequentist definition of probability.
EDIT 23/8/2016
@mpiktas reaction to your comment:
As I said, the sets $\Omega, \mathcal{F}$ and the probability measure $\mathbb{P}$ have no particular meaning in the axiomatic system, they are abstract.
In order to apply this theory you have to give further definitions (so what you say in your comment "no need to muddle it further with some bizarre definitions'' is wrong, you need additional definitions).
Let's apply it to the case of tossing a fair coin. The set $\Omega$ in Kolmogorov's theory has no particular meaning, it just has to be ''a set''. So we must specify what this set is in case of the fair coin, i.e. we must define the set $\Omega$. If we represent head as H and tail as T, then the set $\Omega$ is by definition $\Omega\stackrel{def}{=}\{H,T\}$.
We also have to define the events, i.e. the $\sigma$-algebra $\mathcal{F}$. We define it as $\mathcal{F} \stackrel{def}{=} \{\emptyset, \{H\},\{T\},\{H,T\} \}$. It is easy to verify that $\mathcal{F}$ is a $\sigma$-algebra.
Next we must define for every event in $E \in \mathcal{F}$ its measure. So we need to define a map from $\mathcal{F}$ in $[0,1]$. I will define it in the frequentist way, for a fair coin, if I toss it a huge number of times, then the fraction of heads will be 0.5, so I define $\mathbb{P}(\{H\})\stackrel{def}{=}0.5$. Similarly I define $\mathbb{P}(\{T\})\stackrel{def}{=}0.5$, $\mathbb{P}(\{H,T\})\stackrel{def}{=}1$ and $\mathbb{P}(\emptyset)\stackrel{def}{=}0$. Note that $\mathbb{P}$ is a map from $\mathcal{F}$ in $[0,1]$ and that it fulfills Kolmogorov's axioms.
For a reference with the frequentist definition of probability see this link (at the end of the section 'definition') and this link. | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
Probability spaces and Kolmogorov's axioms
A probability space $\mathcal{P}$ is by definition a tripple $(\Omega, \mathcal{F}, \mathbb{P} )$ where $\Omega$ is a set of outcomes, $\mathcal{F}$ is a $\s |
2,933 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | Stats is not Math
First, I steal @whuber's words from a comment in Stats is not maths? (applied in a different context, so I'm stealing words, not citing):
If you were to replace "statistics" by "chemistry," "economics," "engineering," or any other field that employs mathematics (such as home economics), it appears none of your argument would change.
All these fields are allowed to exist and to have questions that are not solved only by checking which theorems are correct. Though some answers at Stats is not maths? disagree, I think it is clear that statistics is not (pure) mathematics. If you want to do probability theory, a branch of (pure) mathematics, you may indeed ignore all debates of the kind you ask about. If you want to apply probability theory into modeling some real-world questions, you need something more to guide you than just the axioms and theorems of the mathematical framework. The remainder of the answer is rambling about this point.
The claim "if we want to be mathematically correct, shouldn't we disallow any interpretation of probability" also seems unjustified. Putting an interpretation on top of a mathematical framework does not make the mathematics incorrect (as long as the interpretation is not claimed to be a theorem in the mathematical framework).
The debate is not (mainly) about axioms
Though there are some alternative axiomatizations*, the(?) debate is not about disputing Kolmogorov axioms. Ignoring some subtleties with zero-measure conditioning events, leading to regular conditional probability etc., about which I don't know enough, the Kolmogorov axioms and conditional probability imply the Bayes rule, which no-one disputes. However, if $X$ is not even a random variable in your model (model in the sense of the mathematical setup consisting of a probability space or a family of them, random variables, etc.), it is of course not possible to compute the conditional distribution $P(X\mid Y)$. No-one also disputes that the frequency properties, if correctly computed, are consequences of the model. For example, the conditional distributions $p(y\mid \theta)$ in a Bayesian model define an indexed family of probability distributions $p(y; \theta)$ by simply letting $p(y \mid \theta) = p(y; \theta)$ and if some results hold for all $\theta$ in the latter, they hold for all $\theta$ in the former, too.
The debate is about how to apply the mathematics
The debates (as much as any exist**), are instead about how to decide what kind of probability model to set up for a (real-life, non-mathematical) problem and which implications of the model are relevant for drawing (real-life) conclusions. But these questions would exist even if all statisticians agreed. To quote from the blog post you linked to [1], we want to answer questions like
How should I design a roulette so my casino makes $? Does this fertilizer increase crop yield? Does streptomycin cure pulmonary tuberculosis? Does smoking cause cancer? What movie would this user enjoy? Which baseball player should the Red Sox give a contract to? Should this patient receive chemotherapy?
The axioms of probability theory do not even contain a definition of baseball, so it is obvious that "Red Sox should give a contract to baseball player X" is not a theorem in probability theory.
Note about mathematical justifications of the Bayesian approach
There are 'mathematical justifications' for considering all unknowns as probabilistic such as the Cox theorem that Jaynes refers to (though I hear it has mathematical problems, that may or may not have been fixed, I don't know, see [2] and references therein) or the (subjective Bayesian) Savage approach (I've heard this is in [3] but haven't ever read the book) that proves that under certain assumptions, a rational decision-maker will have a probability distribution over states of the world and select his action based on maximizing the expected value of a utility function. However, whether or not the manager of the Red Sox should accept the assumptions, or whether we should accept the theory that smoking causes cancer, cannot be deduced from any mathematical framework, so the debate cannot be (only) about the correctness of these justifications as theorems.
Footnotes
*I have not studied it, but I've heard de Finetti has an approach where conditional probabilities are primitives rather than obtained from the (unconditional) measure by conditioning. [4] mentions a debate between (Bayesians) José Bernardo, Dennis Lindley and Bruno de Finetti in a cosy French restaurant about whether $\sigma$-additivity is needed.
**as mentioned in the blog post you link to [1], there might be no clear-cut debate with every statistician belonging to one team and despising the other team. I have heard it said that we are all pragmatists nowadays and the useless debate is over. However, in my experience these differences exist in, for example, whether someone's first approach is to model all unknowns as random variables or not and how interested someone is in frequency guarantees.
References
[1] Simply Statistics, a statistical blog by Rafa Irizarry, Roger Peng, and Jeff Leek, "I declare the Bayesian vs. Frequentist debate over for data scientists", 13 Oct 2014, http://simplystatistics.org/2014/10/13/as-an-applied-statistician-i-find-the-frequentists-versus-bayesians-debate-completely-inconsequential/
[2] Dupré, M. J., & Tipler, F. J. (2009). New axioms for rigorous Bayesian probability. Bayesian Analysis, 4(3), 599-606. http://projecteuclid.org/download/pdf_1/euclid.ba/1340369856
[3] Savage, L. J. (1972). The foundations of statistics. Courier Corporation.
[4] Bernardo, J.M. The Valencia Story - Some details of the origin and development of the Valencia International Meetings on Bayesian Statistics. http://www.uv.es/bernardo/ValenciaStory.pdf | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | Stats is not Math
First, I steal @whuber's words from a comment in Stats is not maths? (applied in a different context, so I'm stealing words, not citing):
If you were to replace "statistics" by "ch | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
Stats is not Math
First, I steal @whuber's words from a comment in Stats is not maths? (applied in a different context, so I'm stealing words, not citing):
If you were to replace "statistics" by "chemistry," "economics," "engineering," or any other field that employs mathematics (such as home economics), it appears none of your argument would change.
All these fields are allowed to exist and to have questions that are not solved only by checking which theorems are correct. Though some answers at Stats is not maths? disagree, I think it is clear that statistics is not (pure) mathematics. If you want to do probability theory, a branch of (pure) mathematics, you may indeed ignore all debates of the kind you ask about. If you want to apply probability theory into modeling some real-world questions, you need something more to guide you than just the axioms and theorems of the mathematical framework. The remainder of the answer is rambling about this point.
The claim "if we want to be mathematically correct, shouldn't we disallow any interpretation of probability" also seems unjustified. Putting an interpretation on top of a mathematical framework does not make the mathematics incorrect (as long as the interpretation is not claimed to be a theorem in the mathematical framework).
The debate is not (mainly) about axioms
Though there are some alternative axiomatizations*, the(?) debate is not about disputing Kolmogorov axioms. Ignoring some subtleties with zero-measure conditioning events, leading to regular conditional probability etc., about which I don't know enough, the Kolmogorov axioms and conditional probability imply the Bayes rule, which no-one disputes. However, if $X$ is not even a random variable in your model (model in the sense of the mathematical setup consisting of a probability space or a family of them, random variables, etc.), it is of course not possible to compute the conditional distribution $P(X\mid Y)$. No-one also disputes that the frequency properties, if correctly computed, are consequences of the model. For example, the conditional distributions $p(y\mid \theta)$ in a Bayesian model define an indexed family of probability distributions $p(y; \theta)$ by simply letting $p(y \mid \theta) = p(y; \theta)$ and if some results hold for all $\theta$ in the latter, they hold for all $\theta$ in the former, too.
The debate is about how to apply the mathematics
The debates (as much as any exist**), are instead about how to decide what kind of probability model to set up for a (real-life, non-mathematical) problem and which implications of the model are relevant for drawing (real-life) conclusions. But these questions would exist even if all statisticians agreed. To quote from the blog post you linked to [1], we want to answer questions like
How should I design a roulette so my casino makes $? Does this fertilizer increase crop yield? Does streptomycin cure pulmonary tuberculosis? Does smoking cause cancer? What movie would this user enjoy? Which baseball player should the Red Sox give a contract to? Should this patient receive chemotherapy?
The axioms of probability theory do not even contain a definition of baseball, so it is obvious that "Red Sox should give a contract to baseball player X" is not a theorem in probability theory.
Note about mathematical justifications of the Bayesian approach
There are 'mathematical justifications' for considering all unknowns as probabilistic such as the Cox theorem that Jaynes refers to (though I hear it has mathematical problems, that may or may not have been fixed, I don't know, see [2] and references therein) or the (subjective Bayesian) Savage approach (I've heard this is in [3] but haven't ever read the book) that proves that under certain assumptions, a rational decision-maker will have a probability distribution over states of the world and select his action based on maximizing the expected value of a utility function. However, whether or not the manager of the Red Sox should accept the assumptions, or whether we should accept the theory that smoking causes cancer, cannot be deduced from any mathematical framework, so the debate cannot be (only) about the correctness of these justifications as theorems.
Footnotes
*I have not studied it, but I've heard de Finetti has an approach where conditional probabilities are primitives rather than obtained from the (unconditional) measure by conditioning. [4] mentions a debate between (Bayesians) José Bernardo, Dennis Lindley and Bruno de Finetti in a cosy French restaurant about whether $\sigma$-additivity is needed.
**as mentioned in the blog post you link to [1], there might be no clear-cut debate with every statistician belonging to one team and despising the other team. I have heard it said that we are all pragmatists nowadays and the useless debate is over. However, in my experience these differences exist in, for example, whether someone's first approach is to model all unknowns as random variables or not and how interested someone is in frequency guarantees.
References
[1] Simply Statistics, a statistical blog by Rafa Irizarry, Roger Peng, and Jeff Leek, "I declare the Bayesian vs. Frequentist debate over for data scientists", 13 Oct 2014, http://simplystatistics.org/2014/10/13/as-an-applied-statistician-i-find-the-frequentists-versus-bayesians-debate-completely-inconsequential/
[2] Dupré, M. J., & Tipler, F. J. (2009). New axioms for rigorous Bayesian probability. Bayesian Analysis, 4(3), 599-606. http://projecteuclid.org/download/pdf_1/euclid.ba/1340369856
[3] Savage, L. J. (1972). The foundations of statistics. Courier Corporation.
[4] Bernardo, J.M. The Valencia Story - Some details of the origin and development of the Valencia International Meetings on Bayesian Statistics. http://www.uv.es/bernardo/ValenciaStory.pdf | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
Stats is not Math
First, I steal @whuber's words from a comment in Stats is not maths? (applied in a different context, so I'm stealing words, not citing):
If you were to replace "statistics" by "ch |
2,934 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | The mathematical basis for the Bayesian vs frequentist debate is very simple. In Bayesian statistics the unknown parameter is treated as a random variable; in frequentist statistics it is treated as a fixed element. Since a random variable is a much more complicated mathematical object than a simple element of the set, the mathematical difference is quite evident.
However, it turns out that the actual results in terms of models can be surprisingly similar. Take linear regression for example. Bayesian linear regression with uninformative priors leads to a distribution of a regression parameter estimate, whose mean is equal to the estimate of the parameter of frequentist linear regression, which is a solution to a least squares problem, which is not even a problem from probability theory. Nevertheless the mathematics which was used to arrive at the similar solution is quite different, for the reason stated above.
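A small numerical illustration of that coincidence (a sketch under the usual flat-prior, known-error-variance assumptions, where the posterior mean of the coefficients reduces to the least-squares solution $(X^\top X)^{-1}X^\top y$; the simulated data are arbitrary):
set.seed(1)                                   # arbitrary simulated data
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
X <- cbind(1, x)                              # design matrix with an intercept column
beta_ols  <- coef(lm(y ~ x))                  # frequentist least-squares estimate
beta_flat <- solve(t(X) %*% X, t(X) %*% y)    # flat-prior posterior mean (known error variance)
cbind(beta_ols, beta_flat)                    # the two columns coincide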
Naturally, because of the difference in treatment of the unknown parameter's mathematical properties (random variable vs element of the set), both Bayesian and frequentist statistics hit on cases where it might seem that it is more advantageous to use a competing approach. Confidence intervals are a prime example. Not having to rely on MCMC to get a simple estimate is another. However, these are usually more matters of taste and not of mathematics. | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | The mathematical basis for the Bayesian vs frequentist debate is very simple. In Bayesian statistics the unknown parameter is treated as a random variable; in frequentist statistics it is treated as | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
The mathematical basis for the Bayesian vs frequentist debate is very simple. In Bayesian statistics the unknown parameter is treated as a random variable; in frequentist statistics it is treated as a fixed element. Since a random variable is a much more complicated mathematical object than a simple element of the set, the mathematical difference is quite evident.
However, it turns out that the actual results in terms of models can be surprisingly similar. Take linear regression for example. Bayesian linear regression with uninformative priors leads to a distribution of a regression parameter estimate, whose mean is equal to the estimate of the parameter of frequentist linear regression, which is a solution to a least squares problem, which is not even a problem from probability theory. Nevertheless the mathematics which was used to arrive at the similar solution is quite different, for the reason stated above.
Naturally, because of the difference in treatment of the unknown parameter's mathematical properties (random variable vs element of the set), both Bayesian and frequentist statistics hit on cases where it might seem that it is more advantageous to use a competing approach. Confidence intervals are a prime example. Not having to rely on MCMC to get a simple estimate is another. However, these are usually more matters of taste and not of mathematics. | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
The mathematical basis for the Bayesian vs frequentist debate is very simple. In Bayesian statistics the unknown parameter is treated as a random variable; in frequentist statistics it is treated as |
2,935 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | I don't like philosophy, but I do like math, and I want to work
exclusively within the framework of Kolmogorov's axioms.
How exactly would you apply Kolmogorov's axioms alone without any interpretation? How would you interpret probability? What would you say to someone who asked you "What does your estimate of probability $0.5$ mean?" Would you say that your result is a number $0.5$, which is correct since it follows the axioms? Without any interpretation you couldn't say that this suggests how often we would expect to see the outcome if we repeated our experiment. Nor could you say that this number tells you how certain you are about the chance of an event happening. Nor could you answer that this tells you how likely you believe the event to be. How would you interpret expected value - as some numbers multiplied by some other numbers and summed together that are valid since they follow the axioms and a few other theorems?
If you want to apply the mathematics to the real world, then you need to interpret it. The numbers alone without interpretations are... numbers. People do not calculate expected values to estimate expected values, but to learn something about reality.
Moreover, probability is abstract, while we apply statistics (and probability per se) to real world happenings. Take the most basic example: a fair coin. In the frequentist interpretation, if you threw such a coin a large number of times, you would expect the same number of heads and tails. However, in a real-life experiment this would almost never happen. So $0.5$ probability has really nothing to do with any particular coin thrown any particular number of times.
Probability does not exist
-- Bruno de Finetti | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | I don't like philosophy, but I do like math, and I want to work
exclusively within the framework of Kolmogorov's axioms.
How exactly would you apply Kolmogorov's axioms alone without any interpreta | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
I don't like philosophy, but I do like math, and I want to work
exclusively within the framework of Kolmogorov's axioms.
How exactly would you apply Kolmogorov's axioms alone without any interpretation? How would you interpret probability? What would you say to someone who asked you "What does your estimate of probability $0.5$ mean?" Would you say that your result is a number $0.5$, which is correct since it follows the axioms? Without any interpretation you couldn't say that this suggests how often we would expect to see the outcome if we repeated our experiment. Nor could you say that this number tells you how certain you are about the chance of an event happening. Nor could you answer that this tells you how likely you believe the event to be. How would you interpret expected value - as some numbers multiplied by some other numbers and summed together that are valid since they follow the axioms and a few other theorems?
If you want to apply the mathematics to the real world, then you need to interpret it. The numbers alone without interpretations are... numbers. People do not calculate expected values to estimate expected values, but to learn something about reality.
Moreover, probability is abstract, while we apply statistics (and probability per se) to real world happenings. Take the most basic example: a fair coin. In the frequentist interpretation, if you threw such a coin a large number of times, you would expect the same number of heads and tails. However, in a real-life experiment this would almost never happen. So $0.5$ probability has really nothing to do with any particular coin thrown any particular number of times.
Probability does not exist
-- Bruno de Finetti | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
I don't like philosophy, but I do like math, and I want to work
exclusively within the framework of Kolmogorov's axioms.
How exactly would you apply Kolmogorov's axioms alone without any interpreta |
2,936 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | My view of the contrast between Bayesian and frequentist inference is that the first issue is the choice of the event for which you want a probability. Frequentists assume what you are trying to prove (e.g., a null hypothesis) then compute the probability of observing something that you already observed, under that assumption. There is an exact analogy between such reverse-information flow-order probabilities and sensitivity and specificity in medical diagnosis, which have caused enormous misunderstandings and need to be bailed out by Bayes' rule to get forward probabilities ("post-test probabilities"). Bayesians compute the probability of an event, and absolute probabilities are impossible to compute without an anchor (the prior). The Bayesian probability of the veracity of a statement is much different from the frequentist probability of observing data under a certain unknowable assumption. The differences are more pronounced when the frequentist must adjust for other analyses that have been done or could have been done (multiplicity; sequential testing, etc.).
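As a small numerical illustration of that 'bail-out' by Bayes' rule, a post-test probability can be obtained from sensitivity, specificity and the pre-test probability; the numbers below are made up for illustration:
sens <- 0.90    # probability of a positive test given disease (hypothetical)
spec <- 0.95    # probability of a negative test given no disease (hypothetical)
pre  <- 0.02    # pre-test (prior) probability of disease (hypothetical)
post <- sens * pre / (sens * pre + (1 - spec) * (1 - pre))
post            # post-test probability of disease given a positive test, about 0.27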
So the discussion of the mathematical basis is very interesting and is a very appropriate discussion to have. But one has to make a fundamental choice of forwards vs. backwards probabilities. Hence what is conditioned upon, which isn't exactly math, is incredibly important. Bayesians believe that full conditioning on what you already know is key. Frequentists more often condition on what makes the mathematics simple. | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | My view of the contrast between Bayesian and frequentist inference is that the first issue is the choice of the event for which you want a probability. Frequentists assume what you are trying to prov | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
My view of the contrast between Bayesian and frequentist inference is that the first issue is the choice of the event for which you want a probability. Frequentists assume what you are trying to prove (e.g., a null hypothesis) then compute the probability of observing something that you already observed, under that assumption. There is an exact analogy between such reverse-information flow-order probabilities and sensitivity and specificity in medical diagnosis, which have caused enormous misunderstandings and need to be bailed out by Bayes' rule to get forward probabilities ("post-test probabilities"). Bayesians compute the probability of an event, and absolute probabilities are impossible to compute without an anchor (the prior). The Bayesian probability of the veracity of a statement is much different from the frequentist probability of observing data under a certain unknowable assumption. The differences are more pronounced when the frequentist must adjust for other analyses that have been done or could have been done (multiplicity; sequential testing, etc.).
So the discussion of the mathematical basis is very interesting and is a very appropriate discussion to have. But one has to make a fundamental choice of forwards vs. backwards probabilities. Hence what is conditioned upon, which isn't exactly math, is incredibly important. Bayesians believe that full conditioning on what you already know is key. Frequentists more often condition on what makes the mathematics simple. | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
My view of the contrast between Bayesian and frequentist inference is that the first issue is the choice of the event for which you want a probability. Frequentists assume what you are trying to prov |
2,937 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | I will break this up into two separate questions and answer each.
1.) Given the different philosophical views of what probability means
in a Frequentist and Bayesian perspective, are there mathematical rules of probability that apply to one interpretation and do not apply to another?
No. The rules of probability remain exactly the same between the two groups.
2.) Do Bayesians and Frequentists use the same mathematical models to analyze data?
Generally speaking, no. This is because the two different interpretations suggest that a researcher can gain insight from different sources. In particular, the Frequentist framework is often thought to suggest that one can make inference on the parameters of interest only from the data observed, while a Bayesian perspective suggests that one should also include independent expert knowledge about the subject. Different data sources means different mathematical models will be used for analysis.
It is also of note that there are plenty of divides between the models used by the two camps that are more related to what has been done than what can be done (i.e. many models that are traditionally used by one camp can be justified by the other camp). For example, BUGs models (Bayesian inference Using Gibbs sampling, a name that no longer accurately describes the set of models for many reasons) are traditionally analyzed with Bayesian methods, mostly due to the availability of great software packages to do this with (JAGs, Stan for example). However, there is nothing that says these models must be strictly Bayesian. In fact, I worked on the project NIMBLE that builds these models in the BUGs framework, but allows the user much more freedom on how to make inference on them. While the vast majority of the tools we provided were customizable Bayesian MCMC methods, one could also use maximum likelihood estimation, a traditionally Frequentist method, for these models as well. Similarly, priors are often thought of as what you can do with Bayesian that you cannot do with Frequentist models. However, penalized estimation can provide for the same models using regularizing parameter estimates (although the Bayesian framework provides an easier way of justifying and choosing regularization parameters, while Frequentists are left with, in a best-case scenario with lots of data, "we chose these regularization parameters because over a large number of cross-validated samples, they lowered the estimated out of sample error"...for better or for worse). | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | I will break this up into two separate questions and answer each.
1.) Given the different philosophical views of what probability means
in a Frequentist and Bayesian perspective, are there mathematic | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
I will break this up into two separate questions and answer each.
1.) Given the different philosophical views of what probability means
in a Frequentist and Bayesian perspective, are there mathematical rules of probability that apply to one interpretation and do not apply to another?
No. The rules of probability remain exactly the same between the two groups.
2.) Do Bayesians and Frequentists use the same mathematical models to analyze data?
Generally speaking, no. This is because the two different interpretations suggest that a researcher can gain insight from different sources. In particular, the Frequentist framework is often thought to suggest that one can make inference on the parameters of interest only from the data observed, while a Bayesian perspective suggests that one should also include independent expert knowledge about the subject. Different data sources means different mathematical models will be used for analysis.
It is also of note that there are plenty of divides between the models used by the two camps that are more related to what has been done than what can be done (i.e. many models that are traditionally used by one camp can be justified by the other camp). For example, BUGs models (Bayesian inference Using Gibbs sampling, a name that no longer accurately describes the set of models for many reasons) are traditionally analyzed with Bayesian methods, mostly due to the availability of great software packages to do this with (JAGs, Stan for example). However, there is nothing that says these models must be strictly Bayesian. In fact, I worked on the project NIMBLE that builds these models in the BUGs framework, but allows the user much more freedom on how to make inference on them. While the vast majority of the tools we provided were customizable Bayesian MCMC methods, one could also use maximum likelihood estimation, a traditionally Frequentist method, for these models as well. Similarly, priors are often thought of as what you can do with Bayesian that you cannot do with Frequentist models. However, penalized estimation can provide for the same models using regularizing parameter estimates (although the Bayesian framework provides an easier way of justifying and choosing regularization parameters, while Frequentists are left with, in a best-case scenario with lots of data, "we chose these regularization parameters because over a large number of cross-validated samples, they lowered the estimated out of sample error"...for better or for worse). | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
I will break this up into two separate questions and answer each.
1.) Given the different philosophical views of what probability means
in a Frequentist and Bayesian perspective, are there mathematic |
2,938 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | Bayesians and Frequentists think probabilities represent different things. Frequentists think they're related to frequencies and only make sense in contexts where frequencies are possible. Bayesians view them as ways to represent uncertainty. Since any fact can be uncertain, you can talk about the probability of anything.
The mathematical consequence is that Frequentists think the basic equations of probability only sometimes apply, and Bayesians think they always apply. So they view the same equations as correct, but differ on how general they are.
This has the following practical consequences:
(1) Bayesians will derive their methods from the basic equations of probability theory (of which Bayes Theorem is just one example), while Frequentists invent one intuitive ad-hoc approach after another to solve each problem.
(2) There are theorems indicating that if you reason from incomplete information you had better use the basic equations of probability theory consistently, or you'll be in trouble. Lots of people have doubts about how meaningful such theorems are, yet this is what we see in practice.
For example, it's possible for real world innocent looking 95% Confidence Intervals to consist entirely of values which are provably impossible (from the same info used to derive the Confidence Interval). In other words, Frequentist methods can contradict simple deductive logic. Bayesian methods derived entirely from the basic equations of probability theory don't have this problem.
(3) Bayesian is strictly more general than Frequentist. Since there can be uncertainty about any fact, any fact can be assigned a probability. In particular, if the facts you're working on are related to real world frequencies (either as something you're predicting or part of the data) then Bayesian methods can consider and use them just as they would any other real world fact.
Consequently, any problem Frequentists feel their methods apply to, Bayesians can also work on naturally. The reverse, however, is often not true unless Frequentists invent subterfuges to interpret their probability as a "frequency", such as, for example, imagining multiple universes, or inventing hypothetical repetitions out to infinity which are never performed and often couldn't be in principle. | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | Bayesians and Frequentists think probabilities represent different things. Frequentists think they're related to frequencies and only make sense in contexts where frequencies are possible. Bayesians | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
Bayesians and Frequentists think probabilities represent different things. Frequentists think they're related to frequencies and only make sense in contexts where frequencies are possible. Bayesians view them as ways to represent uncertainty. Since any fact can be uncertain, you can talk about the probability of anything.
The mathematical consequence is that Frequentists think the basic equations of probability only sometimes apply, and Bayesians think they always apply. So they view the same equations as correct, but differ on how general they are.
This has the following practical consequences:
(1) Bayesians will derive their methods from the basic equations of probability theory (of which Bayes Theorem is just one example), while Frequentists invent one intuitive ad-hoc approach after another to solve each problem.
(2) There are theorems indicating that if you reason from incomplete information you had better use the basic equations of probability theory consistently, or you'll be in trouble. Lots of people have doubts about how meaningful such theorems are, yet this is what we see in practice.
For example, it's possible for real world innocent looking 95% Confidence Intervals to consist entirely of values which are provably impossible (from the same info used to derive the Confidence Interval). In other words, Frequentist methods can contradict simple deductive logic. Bayesian methods derived entirely from the basic equations of probability theory don't have this problem.
(3) Bayesian is strictly more general than Frequentist. Since there can be uncertainty about any fact, any fact can be assigned a probability. In particular, if the facts you're working on are related to real world frequencies (either as something you're predicting or part of the data) then Bayesian methods can consider and use them just as they would any other real world fact.
Consequently, any problem Frequentists feel their methods apply to, Bayesians can also work on naturally. The reverse, however, is often not true unless Frequentists invent subterfuges to interpret their probability as a "frequency", such as, for example, imagining multiple universes, or inventing hypothetical repetitions out to infinity which are never performed and often couldn't be in principle. | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
Bayesians and Frequentists think probabilities represent different things. Frequentists think they're related to frequencies and only make sense in contexts where frequencies are possible. Bayesians |
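A tiny R illustration of the point above that Bayesian methods come straight from the basic equations of probability (my own toy example, not from the answer; the prior and the counts are made up): a conjugate Beta prior updated by Bayes' theorem for a success probability.

a0 <- 1; b0 <- 1                 # Beta(1, 1) prior, i.e. uniform belief about theta
successes <- 7; failures <- 3    # observed data

a1 <- a0 + successes             # the product/sum rules give another Beta posterior
b1 <- b0 + failures

qbeta(c(0.025, 0.975), a1, b1)   # central 95% posterior interval for theta
pbeta(0.5, a1, b1)               # posterior probability that theta <= 0.5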
2,939 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | Question: Then if we want to be mathematically correct, shouldn't we disallow any interpretation of probability? I.e., are both Bayesian and frequentism mathematically incorrect?
Yes, and this is exactly what people do both in Philosophy of Science and in Mathematics.
Philosophical approach. Wikipedia provides a compendium of interpretations/definitions of probability.
Mathematicians are not safe. In the past, the Kolmogorovian school had a monopoly on probability: a probability is defined as a finite measure that assigns 1 to the whole space ... This hegemony is no longer valid since there are new trends in defining probability such as Quantum probability and Free probability. | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | Question: Then if we want to be mathematically correct, shouldn't we disallow any interpretation of probability? I.e., are both Bayesian and frequentism mathematically incorrect?
Yes, and this is exa | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
Question: Then if we want to be mathematically correct, shouldn't we disallow any interpretation of probability? I.e., are both Bayesian and frequentism mathematically incorrect?
Yes, and this is exactly what people do both in Philosophy of Science and in Mathematics.
Philosophical approach. Wikipedia provides a compendium of interpretations/definitions of probability.
Mathematicians are not safe. In the past, the Kolmogorovian school had a monopoly on probability: a probability is defined as a finite measure that assigns 1 to the whole space ... This hegemony is no longer valid since there are new trends in defining probability such as Quantum probability and Free probability. | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
Question: Then if we want to be mathematically correct, shouldn't we disallow any interpretation of probability? I.e., are both Bayesian and frequentism mathematically incorrect?
Yes, and this is exa |
2,940 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | The bayes/frequentist debate is based on numerous grounds. If you are talking about mathematical basis, I don't think there is much.
They both need to apply various approximate methods for complex problems. Two examples are "bootstrap" for frequentist and "mcmc" for bayesian.
They both come with rituals/procedures for how to use them. A frequentist example is "propose an estimator of something and evaluate its properties under repeated sampling" while a bayesian example is "calculate probability distributions for what you don't know conditional on what you do know". There is no mathematical basis for using probabilities in this way.
The debate is more about application, interpretation, and ability to solve real world problems.
In fact, this is often used by people debating "their side" where they will use a specific "ritual/procedure" used by the "other side" to argue that the whole theory should be thrown away for theirs. Some examples include...
using stupid priors (and not checking them)
using stupid CIs (and not checking them)
confusing a computational technique with the theory (bayes is not mcmc!! Same goes for equating cross validation with machine learning)
talking about a problem with a specific application with one theory and not how the other theory would solve the specific problem "better" | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | The bayes/frequentist debate is based on numerous grounds. If you are talking about mathematical basis, I don't think there is much.
They both need to apply various approximate methods for complex pro | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
The bayes/frequentist debate is based on numerous grounds. If you are talking about mathematical basis, I don't think there is much.
They both need to apply various approximate methods for complex problems. Two examples are "bootstrap" for frequentist and "mcmc" for bayesian.
They both come with rituals/procedures for how to use them. A frequentist example is "propose an estimator of something and evaluate its properties under repeated sampling" while a bayesian example is "calculate probability distributions for what you don't know conditional on what you do know". There is no mathematical basis for using probabilities in this way.
The debate is more about application, interpretation, and ability to solve real world problems.
In fact, this is often used by people debating "their side" where they will use a specific "ritual/procedure" used by the "other side" to argue that the whole theory should be thrown away for theirs. Some examples include...
using stupid priors (and not checking them)
using stupid CIs (and not checking them)
confusing a computational technique with the theory (bayes is not mcmc!! Same goes for equating cross validation with machine learning)
talking about a problem with a specific application with one theory and not how the other theory would solve the specific problem "better" | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
The bayes/frequentist debate is based on numerous grounds. If you are talking about mathematical basis, I don't think there is much.
They both need to apply various approximate methods for complex pro |
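To make the "bootstrap for frequentists, MCMC for Bayesians" contrast above concrete, here is a deliberately simple base-R sketch (the data, the flat prior, and the known-variance shortcut standing in for a full MCMC run are all my own assumptions):

set.seed(1)
x <- rnorm(50, mean = 3, sd = 2)

# Frequentist approximation: nonparametric bootstrap of the sample mean
boot_means <- replicate(5000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))

# Bayesian approximation: draws from the posterior of the mean under a flat
# prior with the variance treated as known (a stand-in for MCMC)
post_means <- rnorm(5000, mean = mean(x), sd = sd(x) / sqrt(length(x)))
quantile(post_means, c(0.025, 0.975))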
2,941 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | So then wouldn't it follow that the only mathematically correct version of statistics is that which refuses to be anything but entirely agnostic with respect to Bayesianism and frequentism? If methods with both classifications are mathematically correct, then isn't it improper practice to prefer some over the others, because that would be prioritizing vague, ill-defined philosophy over precise, well-defined mathematics?
No. It does not follow. Individuals who are unable to feel their emotions are biologically incapable of making decisions, including decisions that appear to have only one objective solution. The reason is that rational decision making depends upon our emotional capacity and our preferences both cognitive and emotional. While that is scary, it is the empirical reality.
Gupta R, Koscik TR, Bechara A, Tranel D. The amygdala and decision making. Neuropsychologia. 2011;49(4):760-766. doi:10.1016/j.neuropsychologia.2010.09.029.
A person who prefers apples to oranges cannot defend this as it is a preference. Conversely, a person who prefers oranges to apples cannot defend this rationally as it is a preference. People who prefer apples will often eat oranges because the cost of apples is too great compared to the cost of oranges.
Much of the Bayesian and Frequentist debate, as well as the Likelihoodist and Frequentist debate, was centered around mistakes of understanding. Nonetheless, if we imagine that we have a person who is well trained in all methods, including minor or no longer used methods such as Carnapian probability or fiducial statistics, then it is only rational for them to prefer some tools over other tools.
Rationality only depends upon preferences; the behavior depends upon preferences and costs.
It may be the case that, from a purely mathematical perspective, one tool is better than the other, where better is defined using some cost or utility function, but unless there is a unique answer where only one tool could work, then both the costs and the preferences are to be weighed.
Consider the problem of a bookie considering offering a complex bet. Clearly, the bookie should use Bayesian methods in this case as they are coherent and have other nice properties, but also imagine that the bookie has a calculator only and not even a pencil and paper. It may be the case that the bookie, with the use of his calculator and by keeping track of things in his head can calculate the Frequentist solution and has no chance on Earth to calculate the Bayesian. If he is willing to take the risk of being "Dutch Booked," and also finds the potential cost small enough, then it is rational for him to offer bets using Frequentist methods.
It is rational for you to be agnostic because your emotional preferences find that to be better for you. It is not rational for the field to be agnostic unless you believe that all people share your emotional and cognitive preferences, which we know is not the case.
In short, I don't understand what the mathematical basis is for the Bayesian versus frequentist debate, and if there is no mathematical basis for the debate (which is what Wikipedia claims), I don't understand why it is tolerated at all in academic discourse.
The purpose of academic debate is to bring light to both old and new ideas. Much of the Bayesian versus Frequentist debate and the Likelihoodist versus Frequentist debate came from misunderstandings and sloppiness of thought. Some came from failing to call out preferences for what they are. A discussion of the virtues of an estimator being unbiased and noisy versus an estimator being biased and accurate is a discussion of emotional preferences, but until someone has it, it is quite likely that the thinking on it will remain muddy throughout the field.
I don't like philosophy, but I do like math, and I want to work exclusively within the framework of Kolmogorov's axioms.
Why? Because you prefer Kolmogorov's to Cox's, de Finetti's or Savage's? Is that preference sneaking in? Also, probability and statistics are not math, they use math. It is a branch of rhetoric. To understand why this may matter consider your statement:
if a method is mathematically correct, then it is valid to use the method when the assumptions of the underlying mathematics hold, otherwise, if it is not mathematically correct or if the assumptions do not hold, then it is invalid to use it.
This is not true. There is a nice article on confidence intervals and their abuse; its citation is:
Morey, Richard; Hoekstra, Rink; Rouder, Jeffrey; Lee, Michael; Wagenmakers, Eric-Jan. The fallacy of placing confidence in confidence intervals. Psychonomic Bulletin & Review, 2016, Vol. 23(1), pp. 103-123.
If you read the different potential confidence intervals in the article, each one is mathematically valid, but if you then evaluate their properties, they differ very substantially. Indeed, some of the confidence intervals provided could be thought of as having "bad" properties, though they meet all of the assumptions in the problem. If you drop the Bayesian interval from the list and focus only on the four Frequentist intervals, then if you do a deeper analysis as to when the intervals are wide or narrow, or constant, then you will find that the intervals may not be "equal" though each meets the assumptions and requirements.
It is not enough for it to be mathematically valid for it to be useful or, alternatively, as useful as possible. Likewise, it could be mathematically true, but harmful. In the article, there is an interval that is at its most narrow precisely when there is the least amount of information about the true location and widest when perfect knowledge or near perfect knowledge exists about the location of the parameter. Regardless, it meets the coverage requirements and satisfies the assumptions.
Math can never be enough. | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | So then wouldn't it follow that the only mathematically correct version of statistics is that which refuses to be anything but entirely agnostic with respect to Bayesianism and frequentism? If methods | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
So then wouldn't it follow that the only mathematically correct version of statistics is that which refuses to be anything but entirely agnostic with respect to Bayesianism and frequentism? If methods with both classifications are mathematically correct, then isn't it improper practice to prefer some over the others, because that would be prioritizing vague, ill-defined philosophy over precise, well-defined mathematics?
No. It does not follow. Individuals who are unable to feel their emotions are biologically incapable of making decisions, including decisions that appear to have only one objective solution. The reason is that rational decision making depends upon our emotional capacity and our preferences both cognitive and emotional. While that is scary, it is the empirical reality.
Gupta R, Koscik TR, Bechara A, Tranel D. The amygdala and decision making. Neuropsychologia. 2011;49(4):760-766. doi:10.1016/j.neuropsychologia.2010.09.029.
A person who prefers apples to oranges cannot defend this as it is a preference. Conversely, a person who prefers oranges to apples cannot defend this rationally as it is a preference. People who prefer apples will often eat oranges because the cost of apples is too great compared to the cost of oranges.
Much of the Bayesian and Frequentist debate, as well as the Likelihoodist and Frequentist debate, was centered around mistakes of understanding. Nonetheless, if we imagine that we have a person who is well trained in all methods, including minor or no longer used methods such as Carnapian probability or fiducial statistics, then it is only rational for them to prefer some tools over other tools.
Rationality only depends upon preferences; the behavior depends upon preferences and costs.
It may be the case that, from a purely mathematical perspective, one tool is better than the other, where better is defined using some cost or utility function, but unless there is a unique answer where only one tool could work, then both the costs and the preferences are to be weighed.
Consider the problem of a bookie considering offering a complex bet. Clearly, the bookie should use Bayesian methods in this case as they are coherent and have other nice properties, but also imagine that the bookie has a calculator only and not even a pencil and paper. It may be the case that the bookie, with the use of his calculator and by keeping track of things in his head can calculate the Frequentist solution and has no chance on Earth to calculate the Bayesian. If he is willing to take the risk of being "Dutch Booked," and also finds the potential cost small enough, then it is rational for him to offer bets using Frequentist methods.
It is rational for you to be agnostic because your emotional preferences find that to be better for you. It is not rational for the field to be agnostic unless you believe that all people share your emotional and cognitive preferences, which we know is not the case.
In short, I don't understand what the mathematical basis is for the Bayesian versus frequentist debate, and if there is no mathematical basis for the debate (which is what Wikipedia claims), I don't understand why it is tolerated at all in academic discourse.
The purpose of academic debate is to bring light to both old and new ideas. Much of the Bayesian versus Frequentist debate and the Likelihoodist versus Frequentist debate came from misunderstandings and sloppiness of thought. Some came from failing to call out preferences for what they are. A discussion of the virtues of an estimator being unbiased and noisy versus an estimator being biased and accurate is a discussion of emotional preferences, but until someone has it, it is quite likely that the thinking on it will remain muddy throughout the field.
I don't like philosophy, but I do like math, and I want to work exclusively within the framework of Kolmogorov's axioms.
Why? Because you prefer Kolmogorov's to Cox's, de Finetti's or Savage's? Is that preference sneaking in? Also, probability and statistics are not math, they use math. It is a branch of rhetoric. To understand why this may matter consider your statement:
if a method is mathematically correct, then it is valid to use the method when the assumptions of the underlying mathematics hold, otherwise, if it is not mathematically correct or if the assumptions do not hold, then it is invalid to use it.
This is not true. There is a nice article on confidence intervals and their abuse; its citation is:
Morey, Richard; Hoekstra, Rink; Rouder, Jeffrey; Lee, Michael; Wagenmakers, Eric-Jan. The fallacy of placing confidence in confidence intervals. Psychonomic Bulletin & Review, 2016, Vol. 23(1), pp. 103-123.
If you read the different potential confidence intervals in the article, each one is mathematically valid, but if you then evaluate their properties, they differ very substantially. Indeed, some of the confidence intervals provided could be thought of as having "bad" properties, though they meet all of the assumptions in the problem. If you drop the Bayesian interval from the list and focus only on the four Frequentist intervals, then if you do a deeper analysis as to when the intervals are wide or narrow, or constant, then you will find that the intervals may not be "equal" though each meets the assumptions and requirements.
It is not enough for it to be mathematically valid for it to be useful or, alternatively, as useful as possible. Likewise, it could be mathematically true, but harmful. In the article, there is an interval that is at its most narrow precisely when there is the least amount of information about the true location and widest when perfect knowledge or near perfect knowledge exists about the location of the parameter. Regardless, it meets the coverage requirements and satisfies the assumptions.
Math can never be enough. | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
So then wouldn't it follow that the only mathematically correct version of statistics is that which refuses to be anything but entirely agnostic with respect to Bayesianism and frequentism? If methods |
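The "mathematically valid but useless" point above can be made with a standard textbook construction (this is my own sketch, not one of the intervals from the cited article): a 95% procedure that reports the whole real line 95% of the time and an empty set otherwise has exact coverage, yet tells you nothing.

set.seed(1)
silly_ci <- function() {
  if (runif(1) < 0.95) c(-Inf, Inf) else c(NA, NA)   # NA pair = empty interval
}
covers <- replicate(10000, {
  ci <- silly_ci()
  !any(is.na(ci))    # (-Inf, Inf) always covers the true value; the empty set never does
})
mean(covers)         # close to 0.95 by construction, despite being useless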
2,942 | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | The following is taken from my manuscript on confidence distributions - Johnson, Geoffrey S. "Decision Making in Drug Development via Confidence Distributions." arXiv preprint arXiv:2005.04721 (2020).
In the Bayesian framework the population-level parameter of interest is considered an unrealized or unobservable realization of a random variable that depends on the observed data. This premise has cause and effect reversed. In order to overcome this the Bayesian approach reinterprets probability as measuring the subjective belief of the experimenter. Another interpretation is that the unknown fixed parameter, say theta, was randomly selected from a known collection or prevalence of theta's (prior distribution) and the observed data is used to subset this collection, forming the posterior. The unknown fixed true theta is now imagined to have instead been randomly selected from the posterior. Every time the prior or posterior is updated the sampling frame from where we obtained our unknown fixed true theta under investigation must be changed. A third interpretation is that all values of theta are true simultaneously. The truth exists in a superposition depending on the evidence observed (think Schrodinger's cat). Ascribing any of these interpretations to the posterior allows one to make philosophical probability statements about hypotheses given the data. While the p-value is typically not interpreted in the same manner, it does show us the plausibility of a hypothesis given the data - the ex-post sampling probability of the observed result or something more extreme if the hypothesis is true. This does not reverse cause and effect.
To the Bayesian, probability is axiomatic and measures the experimenter. To the frequentist, probability measures the experiment and must be verifiable. The Bayesian interpretation of probability as a measure of belief is unfalsifiable. Only if there exists a real-life mechanism by which we can sample values of theta can a probability distribution for theta be verified. In such settings probability statements about theta would have a purely frequentist interpretation (see the second interpretation of the posterior above). This may be a reason why frequentist inference is ubiquitous in the scientific literature.
The interpretation of frequentist inference is straight forward for non-statisticians by citing confidence levels, e.g. 'We are 15.9% confident that theta is less than or equal to theta_0.' Of course to fully appreciate this statement of confidence one must more fully define the p-value as a frequency probability of the experiment if the hypothesis is true (think of the proof by contradiction structure a prosecutor uses in a court room setting, innocent until proven guilty). A Bayesian approach may make it easy for some to interpret a posterior probability, e.g. 'There is 17.4% Bayesian belief probability that theta is less than or equal to theta_0.' Of course to fully appreciate this statement one must fully define Bayesian belief and make it clear this is not a verifiable statement about the actual parameter, the hypothesis, nor the experiment. If the prior distribution is chosen in such a way that the posterior is dominated by the likelihood or is proportional to the likelihood, Bayesian belief is more objectively viewed as confidence based on frequency probability of the experiment. In short, for those who subscribe to the frequentist interpretation of probability, the confidence distribution summarizes all probability statements about the experiment one can make. It is a matter of correct interpretation given the definition of probability and what constitutes a random variable. The posterior remains an incredibly useful tool and can be interpreted as a valid asymptotic confidence distribution. The frequentist framework can easily incorporate historical data through a fixed-effect meta-analysis. | Is there any *mathematical* basis for the Bayesian vs frequentist debate? | The following is taken from my manuscript on confidence distributions - Johnson, Geoffrey S. "Decision Making in Drug Development via Confidence Distributions." arXiv preprint arXiv:2005.04721 (2020). | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
The following is taken from my manuscript on confidence distributions - Johnson, Geoffrey S. "Decision Making in Drug Development via Confidence Distributions." arXiv preprint arXiv:2005.04721 (2020).
In the Bayesian framework the population-level parameter of interest is considered an unrealized or unobservable realization of a random variable that depends on the observed data. This premise has cause and effect reversed. In order to overcome this the Bayesian approach reinterprets probability as measuring the subjective belief of the experimenter. Another interpretation is that the unknown fixed parameter, say theta, was randomly selected from a known collection or prevalence of theta's (prior distribution) and the observed data is used to subset this collection, forming the posterior. The unknown fixed true theta is now imagined to have instead been randomly selected from the posterior. Every time the prior or posterior is updated the sampling frame from where we obtained our unknown fixed true theta under investigation must be changed. A third interpretation is that all values of theta are true simultaneously. The truth exists in a superposition depending on the evidence observed (think Schrodinger's cat). Ascribing any of these interpretations to the posterior allows one to make philosophical probability statements about hypotheses given the data. While the p-value is typically not interpreted in the same manner, it does show us the plausibility of a hypothesis given the data - the ex-post sampling probability of the observed result or something more extreme if the hypothesis is true. This does not reverse cause and effect.
To the Bayesian, probability is axiomatic and measures the experimenter. To the frequentist, probability measures the experiment and must be verifiable. The Bayesian interpretation of probability as a measure of belief is unfalsifiable. Only if there exists a real-life mechanism by which we can sample values of theta can a probability distribution for theta be verified. In such settings probability statements about theta would have a purely frequentist interpretation (see the second interpretation of the posterior above). This may be a reason why frequentist inference is ubiquitous in the scientific literature.
The interpretation of frequentist inference is straight forward for non-statisticians by citing confidence levels, e.g. 'We are 15.9% confident that theta is less than or equal to theta_0.' Of course to fully appreciate this statement of confidence one must more fully define the p-value as a frequency probability of the experiment if the hypothesis is true (think of the proof by contradiction structure a prosecutor uses in a court room setting, innocent until proven guilty). A Bayesian approach may make it easy for some to interpret a posterior probability, e.g. 'There is 17.4% Bayesian belief probability that theta is less than or equal to theta_0.' Of course to fully appreciate this statement one must fully define Bayesian belief and make it clear this is not a verifiable statement about the actual parameter, the hypothesis, nor the experiment. If the prior distribution is chosen in such a way that the posterior is dominated by the likelihood or is proportional to the likelihood, Bayesian belief is more objectively viewed as confidence based on frequency probability of the experiment. In short, for those who subscribe to the frequentist interpretation of probability, the confidence distribution summarizes all probability statements about the experiment one can make. It is a matter of correct interpretation given the definition of probability and what constitutes a random variable. The posterior remains an incredibly useful tool and can be interpreted as a valid asymptotic confidence distribution. The frequentist framework can easily incorporate historical data through a fixed-effect meta-analysis. | Is there any *mathematical* basis for the Bayesian vs frequentist debate?
The following is taken from my manuscript on confidence distributions - Johnson, Geoffrey S. "Decision Making in Drug Development via Confidence Distributions." arXiv preprint arXiv:2005.04721 (2020). |
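As a rough illustration of the confidence statements described above (my own toy example; the simulated data and the one-sample normal model with a t pivot are assumptions), a confidence distribution for a mean can be evaluated directly in R:

set.seed(1)
x <- rnorm(30, mean = 10, sd = 2)
n <- length(x); xbar <- mean(x); s <- sd(x)

# H(theta) is the confidence attached to "mu <= theta"; its quantiles are the
# usual one-sided confidence bounds
H <- function(theta) pt((theta - xbar) / (s / sqrt(n)), df = n - 1)

H(10)                                   # e.g. "we are H(10)*100% confident that mu <= 10"
xbar + qt(0.95, n - 1) * s / sqrt(n)    # 95% upper confidence bound, where H(.) = 0.95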
2,943 | Hold-out validation vs. cross-validation | NOTE: This answer is old, incomplete, and thoroughly out of date. It was only debatably correct when it was posted in 2014, and I'm not really sure how it got so many upvotes or how it became the accepted answer. I recommend this answer instead, written by an expert in the field (and with significantly more upvotes). I am leaving my answer here for historical/archival purposes only.
My only guess is that you can Hold-Out with three hours of programming experience; the other takes a week in principle and six months in practice.
In principle it's simple, but writing code is tedious and time-consuming. As Linus Torvalds famously said, "Bad programmers worry about the code. Good programmers worry about data structures and their relationships." Many of the people doing statistics are bad programmers, through no fault of their own. Doing k-fold cross validation efficiently (and by that I mean, in a way that isn't horribly frustrating to debug and use more than once) in R requires a vague understanding of data structures, but data structures are generally skipped in "intro to statistical programming" tutorials. It's like the old person using the Internet for the first time. It's really not hard, it just takes an extra half hour or so to figure out the first time, but it's brand new and that makes it confusing, so it's easy to ignore.
You have questions like this: How to implement a hold-out validation in R. No offense intended, whatsoever, to the asker. But many people just are not code-literate. The fact that people are doing cross-validation at all is enough to make me happy.
It sounds silly and trivial, but this comes from personal experience, having been that guy and having worked with many people who were that guy. | Hold-out validation vs. cross-validation | NOTE: This answer is old, incomplete, and thoroughly out of date. It was only debatably correct when it was posted in 2014, and I'm not really sure how it got so many upvotes or how it became the acc | Hold-out validation vs. cross-validation
NOTE: This answer is old, incomplete, and thoroughly out of date. It was only debatably correct when it was posted in 2014, and I'm not really sure how it got so many upvotes or how it became the accepted answer. I recommend this answer instead, written by an expert in the field (and with significantly more upvotes). I am leaving my answer here for historical/archival purposes only.
My only guess is that you can Hold-Out with three hours of programming experience; the other takes a week in principle and six months in practice.
In principle it's simple, but writing code is tedious and time-consuming. As Linus Torvalds famously said, "Bad programmers worry about the code. Good programmers worry about data structures and their relationships." Many of the people doing statistics are bad programmers, through no fault of their own. Doing k-fold cross validation efficiently (and by that I mean, in a way that isn't horribly frustrating to debug and use more than once) in R requires a vague understanding of data structures, but data structures are generally skipped in "intro to statistical programming" tutorials. It's like the old person using the Internet for the first time. It's really not hard, it just takes an extra half hour or so to figure out the first time, but it's brand new and that makes it confusing, so it's easy to ignore.
You have questions like this: How to implement a hold-out validation in R. No offense intended, whatsoever, to the asker. But many people just are not code-literate. The fact that people are doing cross-validation at all is enough to make me happy.
It sounds silly and trivial, but this comes from personal experience, having been that guy and having worked with many people who were that guy. | Hold-out validation vs. cross-validation
NOTE: This answer is old, incomplete, and thoroughly out of date. It was only debatably correct when it was posted in 2014, and I'm not really sure how it got so many upvotes or how it became the acc
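Since the answer above points to the question of how to do this in R at all, here is a minimal hold-out validation in base R (simulated data; the 70/30 split and all names are my own choices):

set.seed(1)
dat <- data.frame(x = rnorm(200))
dat$y <- 2 + 3 * dat$x + rnorm(200)

idx_train <- sample(nrow(dat), size = 0.7 * nrow(dat))   # random 70/30 split
train <- dat[idx_train, ]
test  <- dat[-idx_train, ]

fit  <- lm(y ~ x, data = train)          # fit on the training part only
pred <- predict(fit, newdata = test)
sqrt(mean((test$y - pred)^2))            # hold-out RMSE on the untouched cases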
2,944 | Hold-out validation vs. cross-validation | Hold-out is often used synonymously with validation with an independent test set, although there are crucial differences between splitting the data randomly and designing a validation experiment for independent testing.
Independent test sets can be used to measure generalization performance that cannot be measured by resampling or hold-out validation, e.g. the performance for unknown future cases (= cases that are measured later, after the training is finished). This is important in order to know how long an existing model can be used for new data (think e.g. of instrument drift).
More generally, this may be described as measuring the extrapolation performance in order to define the limits of applicability.
Another scenario where hold-out can actually be beneficial is: it is very easy to ensure that training and test data are properly separated - much easier than for resampling validation: e.g.
decide splitting (e.g. do random assignment of cases)
measure
measurement and reference data of the training cases => modeling
neither measurements nor reference of test cases is handed to the person who models.
final model + measurements of the held-out cases => prediction
compare predictions with reference for held-out cases.
Depending on the level of separation you need, each step may be done by someone else. As a first level, not handing over any data (not even the measurements) of the test cases to the modeler allows to be very certain that no test data leaks into the modeling process. At a second level, the final model and test case measurements could be handed over to yet someone else, and so on.
In some fields/cases/applications, we consider this obvious independence sufficiently important to prescribe that an independent organization is needed for validation*, e.g. in clinical chemistry (we also do that e.g. for vehicle safety: the one who safeties your car is not the same as your repair guy, and they are also in separate businesses).
(* I'm chemometrician/analytical chemist. To me, there is not much of a conceptual difference between validating a wet-lab method or an in-silico method (aka predictive model). And the difference will become even less with the advance of machine learning e.g. into medical diagnostics.)
Yes, you pay for that by the lower efficiency of the hold-out estimates compared to resampling validation. But I've seen many papers where I suspect that the resampling validation does not properly separate cases (in my field we have lots of clustered/hierarchical/grouped data).
I've learned my lesson on data leaks for resampling by retracting a manuscript a week after submission when I found out that I had a previously undetected (by running permutation tests alongside) leak in my splitting procedure (typo in index calculation).
Sometimes hold-out can be more efficient than finding someone who is willing to put in the time to check the resampling code (e.g. for clustered data) in order to gain the same level of certainty about the results.
However, IMHO it is usually not efficient to do this before you are in the stage where you anyways need to measure e.g. future performance (first point) - in other words, when you anyways need to set up a validation experiment for the existing model.
OTOH, in small sample size situations, hold-out is no option: you need to hold out enough test cases so that the test results are precise enough to allow the needed conclusion (remember: 3 correct out of 3 test cases for classification means a binomial 95% confidence interval that ranges well below 50:50 guessing!)
Frank Harrell would point to the rule of thumb that at least ca. 100 (test) cases are needed to properly measure a proportion [such as the fraction of correctly predicted cases] with a useful precision.
Update: there are situations where proper splitting is particularly hard to achieve, and cross validation becomes unfeasible. Consider a problem with a number of confounders. Splitting is easy if these confounders are strictly nested (e.g. a study with a number of patients has several specimen of each patient and analyses a number of cells of each specimen): you split at the highest level of the sampling hierarchy (patient-wise). But you may have independent confounders which are not nested, e.g. day-to-day variation or variance caused by different experimenters running the test. You then need to make sure the split is independent for all confounders on the highest level (the nested confounders will automatically be independent). Taking care of this is very difficult if some confounders are only identified during the study, and designing and performing a validation experiment may be more efficient than dealing with splits that leave almost no data neither for training nor for testing of the surrogate models. | Hold-out validation vs. cross-validation | Hold-out is often used synonymous with validation with independent test set, although there are crucial differences between splitting the data randomly and designing a validation experiment for indepe | Hold-out validation vs. cross-validation
Hold-out is often used synonymously with validation with an independent test set, although there are crucial differences between splitting the data randomly and designing a validation experiment for independent testing.
Independent test sets can be used to measure generalization performance that cannot be measured by resampling or hold-out validation, e.g. the performance for unknown future cases (= cases that are measured later, after the training is finished). This is important in order to know how long an existing model can be used for new data (think e.g. of instrument drift).
More generally, this may be described as measuring the extrapolation performance in order to define the limits of applicability.
Another scenario where hold-out can actually be beneficial is: it is very easy to ensure that training and test data are properly separated - much easier than for resampling validation: e.g.
decide splitting (e.g. do random assignment of cases)
measure
measurement and reference data of the training cases => modeling
neither measurements nor reference of test cases is handed to the person who models.
final model + measurements of the held-out cases => prediction
compare predictions with reference for held-out cases.
Depending on the level of separation you need, each step may be done by someone else. As a first level, not handing over any data (not even the measurements) of the test cases to the modeler allows to be very certain that no test data leaks into the modeling process. At a second level, the final model and test case measurements could be handed over to yet someone else, and so on.
In some fields/cases/applications, we consider this obvious independence sufficiently important to prescribe that an independent organization is needed for validation*, e.g. in clinical chemistry (we also do that e.g. for vehicle safety: the one who safeties your car is not the same as your repair guy, and they are also in separate businesses).
(* I'm chemometrician/analytical chemist. To me, there is not much of a conceptual difference between validating a wet-lab method or an in-silico method (aka predictive model). And the difference will become even less with the advance of machine learning e.g. into medical diagnostics.)
Yes, you pay for that by the lower efficiency of the hold-out estimates compared to resampling validation. But I've seen many papers where I suspect that the resampling validation does not properly separate cases (in my field we have lots of clustered/hierarchical/grouped data).
I've learned my lesson on data leaks for resampling by retracting a manuscript a week after submission when I found out that I had a previously undetected (by running permutation tests alongside) leak in my splitting procedure (typo in index calculation).
Sometimes hold-out can be more efficient than finding someone who is willing to put in the time to check the resampling code (e.g. for clustered data) in order to gain the same level of certainty about the results.
However, IMHO it is usually not efficient to do this before you are in the stage where you anyways need to measure e.g. future performance (first point) - in other words, when you anyways need to set up a validation experiment for the existing model.
OTOH, in small sample size situations, hold-out is no option: you need to hold out enough test cases so that the test results are precise enough to allow the needed conclusion (remember: 3 correct out of 3 test cases for classification means a binomial 95% confidence interval that ranges well below 50:50 guessing!)
Frank Harrell would point to the rule of thumb that at least ca. 100 (test) cases are needed to properly measure a proportion [such as the fraction of correctly predicted cases] with a useful precision.
Update: there are situations where proper splitting is particularly hard to achieve, and cross validation becomes unfeasible. Consider a problem with a number of confounders. Splitting is easy if these confounders are strictly nested (e.g. a study with a number of patients has several specimen of each patient and analyses a number of cells of each specimen): you split at the highest level of the sampling hierarchy (patient-wise). But you may have independent confounders which are not nested, e.g. day-to-day variation or variance caused by different experimenters running the test. You then need to make sure the split is independent for all confounders on the highest level (the nested confounders will automatically be independent). Taking care of this is very difficult if some confounders are only identified during the study, and designing and performing a validation experiment may be more efficient than dealing with splits that leave almost no data neither for training nor for testing of the surrogate models. | Hold-out validation vs. cross-validation
Hold-out is often used synonymous with validation with independent test set, although there are crucial differences between splitting the data randomly and designing a validation experiment for indepe |
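A small base-R sketch of the "split at the highest level of the sampling hierarchy" advice above (toy data and names; 30 patients with 10 specimens each is an invented design): hold out whole patients, not individual rows.

set.seed(1)
dat <- data.frame(patient = rep(1:30, each = 10),   # 30 patients, 10 specimens each
                  y = rnorm(300))

test_patients <- sample(unique(dat$patient), size = 6)   # hold out 20% of the patients
train <- dat[!dat$patient %in% test_patients, ]
test  <- dat[ dat$patient %in% test_patients, ]

length(unique(train$patient))   # 24 patients for training
length(unique(test$patient))    # 6 held-out patients, with no overlap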
2,945 | Hold-out validation vs. cross-validation | Just wanted to add some simple guidelines that Andrew Ng mentioned in our CS 229 class at Stanford regarding cross-validation. These are the practices that he follows in his own work.
Let $m$ be the number of samples in your dataset:
If $m\le 20$ use Leave-one-out cross validation.
If $20 < m \le 100$ use k-fold cross validation with a relatively large $k \le m$, keeping in mind computational cost.
If $100 < m \le 1,000,000$ use regular k-fold cross validation $(k = 5)$. Or, if there is not enough computational power and $m > 10,000$, use hold-out cross validation.
If $m \ge 1,000,000$ use hold-out cross validation, but if computational power is available you can use k-fold cross validation $(k = 5)$ if you want to squeeze that extra performance out of your model. | Hold-out validation vs. cross-validation | Just wanted to add some simple guidelines that Andrew Ng mentioned in our CS 229 class at Stanford regarding cross-validation. These are the practices that he follows in his own work.
Let $m$ be the n | Hold-out validation vs. cross-validation
Just wanted to add some simple guidelines that Andrew Ng mentioned in our CS 229 class at Stanford regarding cross-validation. These are the practices that he follows in his own work.
Let $m$ be the number of samples in your dataset:
If $m\le 20$ use Leave-one-out cross validation.
If $20 < m \le 100$ use k-fold cross validation with a relatively large $k \le m$, keeping in mind computational cost.
If $100 < m \le 1,000,000$ use regular k-fold cross validation $(k = 5)$. Or, if there is not enough computational power and $m > 10,000$, use hold-out cross validation.
If $m \ge 1,000,000$ use hold-out cross validation, but if computational power is available you can use k-fold cross validation $(k = 5)$ if you want to squeeze that extra performance out of your model. | Hold-out validation vs. cross-validation
Just wanted to add some simple guidelines that Andrew Ng mentioned in our CS 229 class at Stanford regarding cross-validation. These are the practices that he follows in his own work.
Let $m$ be the n |
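Whichever k the guidelines above lead to, a plain k-fold loop in base R makes it a single number to change (simulated data; the function and variable names are my own):

set.seed(1)
m <- 500
dat <- data.frame(x = runif(m))
dat$y <- 1 + 4 * dat$x + rnorm(m)

k <- 5
folds <- sample(rep(1:k, length.out = m))   # random fold assignment

cv_rmse <- sapply(1:k, function(i) {
  fit  <- lm(y ~ x, data = dat[folds != i, ])        # train on k-1 folds
  pred <- predict(fit, newdata = dat[folds == i, ])  # predict the held-out fold
  sqrt(mean((dat$y[folds == i] - pred)^2))
})
mean(cv_rmse)   # cross-validated RMSE estimate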
2,946 | Hold-out validation vs. cross-validation | It might be useful to clear up the terminology a little. If we let $k$ be some integer less than (or equal to) $n$ where $n$ is the sample size and we partition the sample into $k$ unique subsamples, then what you are calling Hold-out validation is really just 2-fold ($k$ = 2) cross-validation. Cross-validation is merely a tool for estimating the out-of-sample error rates (or generalizability) of a particular model. The need to estimate the out-of-sample error rate is a common one and has spawned an entire literature. See, for starters, chapter 7 of ESL.
So to answer the questions:
Why talk about it? Pedagogically. It's worthwhile to think of Hold-out validation as a special - and only occasionally useful - case of an otherwise quite useful method with many, many variations.
Why use it? If one is lucky enough to have a colossal dataset (in terms of observations, $n$), then splitting the data in half - training on one half and testing on the other - makes sense. This makes sense for computational reasons since all that is required is fitting once and predicting once (rather than $k$ times). And it makes sense from a "large-sample estimation" perspective since you have a ton of observations to fit your model to.
A rule-of-thumb I've learned is: when $n$ is large, $k$ can be small, but when $n$ is small, $k$ should be close to $n$. | Hold-out validation vs. cross-validation | It might be useful to clear up the terminology a little. If we let $k$ be some integer less than (or equal to) $n$ where $n$ is the sample size and we partition the sample into $k$ unique subsamples, | Hold-out validation vs. cross-validation
It might be useful to clear up the terminology a little. If we let $k$ be some integer less than (or equal to) $n$ where $n$ is the sample size and we partition the sample into $k$ unique subsamples, then what you are calling Hold-out validation is really just 2-fold ($k$ = 2) cross-validation. Cross-validation is merely a tool for estimating the out-of-sample error rates (or generalizability) of a particular model. The need to estimate the out-of-sample error rate is a common one and has spawned an entire literature. See, for starters, chapter 7 of ESL.
So to answer the questions:
Why talk about it? Pedagogically. It's worthwhile to think of Hold-out validation as a special - and only occasionally useful - case of an otherwise quite useful method with many, many variations.
Why use it? If one is lucky enough to have a colossal dataset (in terms of observations, $n$), then splitting the data in half - training on one half and testing on the other - makes sense. This makes sense for computational reasons since all that is required is fitting once and predicting once (rather than $k$ times). And it makes sense from a "large-sample estimation" perspective since you have a ton of observations to fit your model to.
A rule-of-thumb I've learned is: when $n$ is large, $k$ can be small, but when $n$ is small, $k$ should be close to $n$. | Hold-out validation vs. cross-validation
It might be useful to clear up the terminology a little. If we let $k$ be some integer less than (or equal to) $n$ where $n$ is the sample size and we partition the sample into $k$ unique subsamples, |
2,947 | Hold-out validation vs. cross-validation | If your model selection & fitting procedure can't be coded up because it's subjective, or partly so,—involving looking at graphs & the like—hold-out validation might be the best you can do. (I suppose you could perhaps use something like Mechanical Turk in each CV fold, though I've never heard of its being done.) | Hold-out validation vs. cross-validation | If your model selection & fitting procedure can't be coded up because it's subjective, or partly so,—involving looking at graphs & the like—hold-out validation might be the best you can do. (I suppose | Hold-out validation vs. cross-validation
If your model selection & fitting procedure can't be coded up because it's subjective, or partly so,—involving looking at graphs & the like—hold-out validation might be the best you can do. (I suppose you could perhaps use something like Mechanical Turk in each CV fold, though I've never heard of its being done.) | Hold-out validation vs. cross-validation
If your model selection & fitting procedure can't be coded up because it's subjective, or partly so,—involving looking at graphs & the like—hold-out validation might be the best you can do. (I suppose |
2,948 | Hold-out validation vs. cross-validation | Short answer:
I would recommend to always use CV with at least $k=5$ for:
complex models
final results that have to adhere validity constraints
You might relax this for:
training on really large datasets
training simple models
prototyping when time is an issue
Some of you mentioned that programming this in R might be an issue. I recommend having a look at the "mlr" package. It wraps different packages in a unified interface, also providing really advanced resampling and performance evaluation methods.
Have a look: http://mlr-org.github.io/mlr-tutorial/release/html/resample/
and: http://mlr-org.github.io/mlr-tutorial/release/html/performance/index.htm
Some more explanation - what CV really does is break the bias variance tradeoff:
Now, the problem that both approaches try to solve is to estimate the generalization error, which is conditional on the data that was used to train a model.
Holdout has a problem with bias and variance:
By making the amount of data that we test on smaller, we introduce variance to our estimated generalization error, as the test data might not represent the underlying distribution very well anymore. This itself does not introduce a bias though, as in expectation the estimated performance will be correct.
Making the training set smaller however introduces a pessimistic bias, as again the underlying distribution is not well represented in the data and the model cannot fit the data as well. Making the training set very small introduces variance as well.
As size of training and test set determine each other, this leaves us with a tradeoff: pessimistic bias vs high variance.
$k$-fold Cross validation tackles this problem by keeping the training set large (a fraction of $\frac{k-1}{k}$ of the data is used for training in every iteration) and dealing with the variance of the test error by resampling. After all iterations, we have tested performance on every observation of the dataset with one learner. Obviously, this requires more computation time than simple holdout.
Cross-validating is especially important for more complex (high variance) learners. Those usually are more expensive computationally as well, which can make the whole process quite time intensive. | Hold-out validation vs. cross-validation | Short answer:
I would recommend to always use CV with at least $k=5$ for:
complex models
final results that have to adhere validity constraints
You might relax this for:
training on really large da | Hold-out validation vs. cross-validation
Short answer:
I would recommend to always use CV with at least $k=5$ for:
complex models
final results that have to adhere validity constraints
You might relax this for:
training on really large datasets
training simple models
prototyping when time is an issue
Some of you mentioned that programming this in R might be an issue. I recommend having a look at the "mlr" package. It wraps different packages in a unified interface, also providing really advanced resampling and performance evaluation methods.
Have a look: http://mlr-org.github.io/mlr-tutorial/release/html/resample/
and: http://mlr-org.github.io/mlr-tutorial/release/html/performance/index.htm
Some more explanation - what CV really does is break the bias variance tradeoff:
Now, the problem that both approaches try to solve is to estimate the generalization error, which is conditional on the data that was used to train a model.
Holdout has a problem with bias and variance:
By making the amount of data that we test on smaller, we introduce variance to our estimated generalization error, as the test data might not represent the underlying distribution very well anymore. This itself does not introduce a bias though, as in expectation the estimated performance will be correct.
Making the training set smaller however introduces a pessimistic bias, as again the underlying distribution is not well represented in the data and the model cannot fit the data as well. Making the training set very small introduces variance as well.
As size of training and test set determine each other, this leaves us with a tradeoff: pessimistic bias vs high variance.
$k$-fold Cross validation tackles this problem by keeping the training set large (a fraction of $\frac{k-1}{k}$ of the data is used for training in every iteration) and dealing with the variance of the test error by resampling. After all iterations, we have tested performance on every observation of the dataset with one learner. Obviously, this requires more computation time than simple holdout.
Cross-validating is especially important for more complex (high variance) learners. Those usually are more expensive computationally as well, which can make the whole process quite time intensive. | Hold-out validation vs. cross-validation
Short answer:
I would recommend to always use CV with at least $k=5$ for:
complex models
final results that have to adhere validity constraints
You might relax this for:
training on really large da |
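A small simulation of the variance point above, in base R rather than the mlr package mentioned in the answer (the data, the split fraction, and the number of repeats are all invented): repeating a single hold-out split on the same data shows how much the estimate moves with the split, while k-fold averages over folds.

set.seed(1)
dat <- data.frame(x = rnorm(150))
dat$y <- dat$x + rnorm(150)

holdout_rmse <- function() {
  idx <- sample(nrow(dat), 0.7 * nrow(dat))
  fit <- lm(y ~ x, data = dat[idx, ])
  sqrt(mean((dat$y[-idx] - predict(fit, dat[-idx, ]))^2))
}
sd(replicate(200, holdout_rmse()))   # split-to-split variability of the hold-out estimate

folds <- sample(rep(1:5, length.out = nrow(dat)))
mean(sapply(1:5, function(i) {
  fit <- lm(y ~ x, data = dat[folds != i, ])
  sqrt(mean((dat$y[folds == i] - predict(fit, dat[folds == i, ]))^2))
}))                                  # one 5-fold CV estimate for comparison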
2,949 | Hold-out validation vs. cross-validation | All these are useful comments. Just take one more into account. When you have enough data, using Hold-Out is a way to assess a specific model (a specific SVM model, a specific CART model, etc), whereas if you use other cross-validation procedures you are assessing methodologies (under your problem conditions) rather than models (SVM methodology, CART methodology, etc).
Hope this is helpful! | Hold-out validation vs. cross-validation | All these are useful comments. Just take one more into account. When you have enough data, using Hold-Out is a way to assess a specific model (a specific SVM model, a specific CART model, etc), wherea | Hold-out validation vs. cross-validation
All these are useful comments. Just take one more into account. When you have enough data, using Hold-Out is a way to assess a specific model (a specific SVM model, a specific CART model, etc), whereas if you use other cross-validation procedures you are assessing methodologies (under your problem conditions) rather than models (SVM methodology, CART methodology, etc).
Hope this is helpful! | Hold-out validation vs. cross-validation
All these are useful comments. Just take one more into account. When you have enough data, using Hold-Out is a way to assess a specific model (a specific SVM model, a specific CART model, etc), wherea |
2,950 | Hold-out validation vs. cross-validation | Simply put: time. With cross-validation you run the training routine k times (i.e. once for each hold-out set). If you have large data, then it might take many hours or even days to train the model for just one data set, so you multiply that by k when using cross-validation.
So although cross-validation is the best method, in certain circumstances it's not feasible, and the time it would take might have been better spent modeling the data in different ways, or trying out different loss functions in order to get a better model.
My personal preference is to take validation data from throughout the data set, so rather than take a single 10% chunk from the head or tail of the data, I take 2% from 5 points in the data set. That makes the validation data a bit more representative of the data as a whole. | Hold-out validation vs. cross-validation | Simply put; time. Cross-validation you run the training routine k times (i.e. once for each hold-out set). If you have large data, then you it might take many hours or even days to train the model for | Hold-out validation vs. cross-validation
Simply put; time. Cross-validation you run the training routine k times (i.e. once for each hold-out set). If you have large data, then you it might take many hours or even days to train the model for just one data set, so you multiply that by k when using cross-validation.
So although cross-validation is the best method, in certain circumstances it's not feasible, and the time it would take might have been better spent modeling the data in different ways, or trying out different loss functions in order to get a better model.
My personal preference is to take validation data from throughout the data set, so rather than take a single 10% chunk from the head or tail of the data, I take 2% from 5 points in the data set. That makes the validation data a bit more representative of the data as a whole. | Hold-out validation vs. cross-validation
Simply put; time. Cross-validation you run the training routine k times (i.e. once for each hold-out set). If you have large data, then you it might take many hours or even days to train the model for |
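The "2% from 5 points" idea in the answer above can be sketched as follows; the data size, block size and number of blocks are arbitrary placeholders:
n      <- 10000
block  <- round(0.02 * n)                                 # each block is 2% of the data
starts <- round(seq(1, n - block + 1, length.out = 5))    # 5 evenly spaced block starts
val_idx   <- unlist(lapply(starts, function(s) s:(s + block - 1)))
train_idx <- setdiff(seq_len(n), val_idx)
length(val_idx) / n                                       # about 10% held out, spread through the data set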
2,951 | Hold-out validation vs. cross-validation | Modeling with time series data is an exception for me. K-fold cannot work in some cases when you need to predict the future based on the previous data. The test sets have to be the future data, and you can never touch them in the training phase, e.g. predicting sales or the stock market. Hold-out is useful in those cases. | Hold-out validation vs. cross-validation | Modeling with time series data is an exception for me. K-fold cannot work in some cases when you need to predict the future based on the previous data. The test sets have to be the future data, and y | Hold-out validation vs. cross-validation
Modeling with time series data is an exception for me. K-fold cannot work in some cases when you need to predict the future based on the previous data. The test sets have to be the future data, and you can never touch them in the training phase, e.g. predicting sales or the stock market. Hold-out is useful in those cases. | Hold-out validation vs. cross-validation
Modeling with time series data is an exception for me. K-fold cannot work in some cases when you need to predict the future based on the previous data. The test sets have to be the future data, and y
2,952 | Hold-out validation vs. cross-validation | I'm aware this question is old but I landed here from Google anyway, and the accepted answer isn't very pleasing, as no one needs to program CV themselves - this is handled by the corresponding libraries.
For a good answer, the terms in scope must first be defined. My answer focuses on machine learning ("classical" as in regression, random forest, etc., and not deep learning). The hold-out set or test set is the part of the labeled data set that is split off at the beginning of the model building process. (And the best way to split, in my opinion, is by acquisition date of the data, with the newest data being the hold-out set, because that exactly mimics future use of the model.)
A crucial aspect to consider is that your model isn't just the chosen algorithm and parameters but the whole process you use to build it, from feature selection to parameter optimization. That is why the hold-out set gets split off at the start, so that under the above definition the model has never seen that data in any way.
k-fold cross-validation is used within your model (reminder: model = your whole pipeline), for example within parameter optimization or feature selection. You need to use CV here because otherwise you optimize your model for one specific data split instead of getting the more general optimization CV provides. At the end of this pipeline you can also run another CV with the final model settings for an approximate guess of the model's performance, but be aware that this will almost always be better than the truth, because the model building process has already seen the data. It still gives you a rough estimate and especially a hint about the variance.
After you have your model you apply it to this hold-out set which if done correctly is 100% new to the model. This should give you a correct indication about your models performance and this as said above will almost always be worse than what you get with CV. | Hold-out validation vs. cross-validation | I'm aware this question is old but I landed here from Google anyway and the accepted answer isn't very pleasing as no one needs to programming CV themselves as this is handled by according libraries.
| Hold-out validation vs. cross-validation
I'm aware this question is old but I landed here from Google anyway and the accepted answer isn't very pleasing as no one needs to programming CV themselves as this is handled by according libraries.
For a good answer first the scope terms must be defined. My answer focuses on machine learning ("classical" as in regression, random forest, etc... and not deep learning). The hold-out set or test set is part of the labeled data set, that is split of at the beginning of the model building process. (And the best way to split in my opinion is by acquisition date of the data with newest data being the hold-out set because that exactly mimics future use of the model)
A crucial aspect to consider that your model isn't just the used algorithm and parameters but the whole process you use to build it from feature selection to parameter optimization. That is why the hold-out set gets split off at the start so that in above definition the model has never seen that data in any way.
k-fold cross-validation is used within your model (reminder: model = your whole pipeline) for example within parameter optimization or feature selection. You need to use CV here because else you optimize your model for 1 specific data split instead of a more general optimization you get with CV. At the end of this pipeline you can also do another CV with the final model settings for an approximate guess of the models performance but be aware that this will almost always be better than the truth because the model during the model building process has already seen the data. It still gives you a rough estimate and especially a hint on the variance.
After you have your model you apply it to this hold-out set which if done correctly is 100% new to the model. This should give you a correct indication about your models performance and this as said above will almost always be worse than what you get with CV. | Hold-out validation vs. cross-validation
I'm aware this question is old but I landed here from Google anyway and the accepted answer isn't very pleasing as no one needs to programming CV themselves as this is handled by according libraries.
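A hedged sketch of the workflow described in this answer, on a small simulated data set with a date column and a plain linear model standing in for the whole pipeline (all names, sizes and the 80/20 split are illustrative assumptions, not part of the original answer):
set.seed(42)
n <- 500
d <- data.frame(date = seq.Date(as.Date("2020-01-01"), by = "day", length.out = n),
                x1 = rnorm(n), x2 = rnorm(n))
d$target <- 1 + 2 * d$x1 - d$x2 + rnorm(n)
d <- d[order(d$date), ]                          # order by acquisition date
n_build <- floor(0.8 * n)
build   <- d[seq_len(n_build), ]                 # older data: used for the whole model building process
holdout <- d[-seq_len(n_build), ]                # newest data: touched exactly once at the end
k    <- 5
fold <- sample(rep(1:k, length.out = nrow(build)))
cv_rmse <- sapply(1:k, function(i) {
  fit <- lm(target ~ x1 + x2, data = build[fold != i, ])
  sqrt(mean((predict(fit, build[fold == i, ]) - build$target[fold == i])^2))
})
mean(cv_rmse)                                    # in-pipeline CV estimate (tends to be optimistic)
final <- lm(target ~ x1 + x2, data = build)
sqrt(mean((predict(final, holdout) - holdout$target)^2))   # one-shot hold-out estimate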
|
2,953 | Hold-out validation vs. cross-validation | It should be noted that it's not always possible to apply the cross-validation. Consider the time-dependent datasets such that you want to use the historical data to train a predictive model for the future behaviour. In this case, you have to apply hold-out validation. | Hold-out validation vs. cross-validation | It should be noted that it's not always possible to apply the cross-validation. Consider the time-dependent datasets such that you want to use the historical data to train a predictive model for the f | Hold-out validation vs. cross-validation
It should be noted that it's not always possible to apply the cross-validation. Consider the time-dependent datasets such that you want to use the historical data to train a predictive model for the future behaviour. In this case, you have to apply hold-out validation. | Hold-out validation vs. cross-validation
It should be noted that it's not always possible to apply the cross-validation. Consider the time-dependent datasets such that you want to use the historical data to train a predictive model for the f |
2,954 | Hold-out validation vs. cross-validation | Imagine you are predicting if a given chemical mixture of two components will explode or not, based on the properties of the two components. A certain component A may appear in diverse observations: you can have it in a mixture of A+B, A+C, A+D, etc. Now, imagine that you use k-fold validation. When the model is predicting for the A+C mixture, maybe it was already trained with the observation "A+B", therefore, it will be biased towards the output of that observation (because half of the variables of the two observations are the same: in one you have the properties of A and the properties of C, and in the other one you gave the properties of A and the properties of B).
In the case described, hold out validation would give you a way less biased result than k-fold cross validation! | Hold-out validation vs. cross-validation | Imagine you are predicting if a given chemical mixture of two components will explode or not, based on the properties of the two components. A certain component A may appear in diverse observations: y | Hold-out validation vs. cross-validation
Imagine you are predicting if a given chemical mixture of two components will explode or not, based on the properties of the two components. A certain component A may appear in diverse observations: you can have it in a mixture of A+B, A+C, A+D, etc. Now, imagine that you use k-fold validation. When the model is predicting for the A+C mixture, maybe it was already trained with the observation "A+B", therefore, it will be biased towards the output of that observation (because half of the variables of the two observations are the same: in one you have the properties of A and the properties of C, and in the other one you gave the properties of A and the properties of B).
In the case described, hold out validation would give you a way less biased result than k-fold cross validation! | Hold-out validation vs. cross-validation
Imagine you are predicting if a given chemical mixture of two components will explode or not, based on the properties of the two components. A certain component A may appear in diverse observations: y |
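One way to make the concern in this answer concrete is a component-wise split, sketched below on toy data: every mixture containing a held-out component goes to the test side, so the model is never evaluated on a component it has already seen during training. The component names and data layout are made up purely for illustration:
mix <- expand.grid(comp1 = LETTERS[1:6], comp2 = LETTERS[1:6], stringsAsFactors = FALSE)
mix <- mix[mix$comp1 < mix$comp2, ]             # unordered component pairs (A+B, A+C, ...)
held_out  <- c("A", "B")                        # components reserved for testing
test_rows <- mix$comp1 %in% held_out | mix$comp2 %in% held_out
train <- mix[!test_rows, ]
test  <- mix[test_rows, ]
nrow(train); nrow(test)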
2,955 | Random Forest - How to handle overfitting | To avoid over-fitting in random forest, the main thing you need to do is optimize a tuning parameter that governs the number of features that are randomly chosen to grow each tree from the bootstrapped data. Typically, you do this via $k$-fold cross-validation, where $k \in \{5, 10\}$, and choose the tuning parameter that minimizes test sample prediction error. In addition, growing a larger forest will improve predictive accuracy, although there are usually diminishing returns once you get up to several hundreds of trees. | Random Forest - How to handle overfitting | To avoid over-fitting in random forest, the main thing you need to do is optimize a tuning parameter that governs the number of features that are randomly chosen to grow each tree from the bootstrappe | Random Forest - How to handle overfitting
To avoid over-fitting in random forest, the main thing you need to do is optimize a tuning parameter that governs the number of features that are randomly chosen to grow each tree from the bootstrapped data. Typically, you do this via $k$-fold cross-validation, where $k \in \{5, 10\}$, and choose the tuning parameter that minimizes test sample prediction error. In addition, growing a larger forest will improve predictive accuracy, although there are usually diminishing returns once you get up to several hundreds of trees. | Random Forest - How to handle overfitting
To avoid over-fitting in random forest, the main thing you need to do is optimize a tuning parameter that governs the number of features that are randomly chosen to grow each tree from the bootstrappe |
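A minimal sketch of tuning mtry (the number of randomly chosen features per tree) by 5-fold CV, as suggested in the answer above; mtcars, the 1:5 grid and the tree count are placeholders for your own data and search grid:
library(randomForest)
set.seed(7)
k    <- 5
fold <- sample(rep(1:k, length.out = nrow(mtcars)))
cv_mse <- sapply(1:5, function(m) {
  mean(sapply(1:k, function(i) {
    rf <- randomForest(mpg ~ ., data = mtcars[fold != i, ], mtry = m, ntree = 301)
    mean((predict(rf, mtcars[fold == i, ]) - mtcars$mpg[fold == i])^2)
  }))
})
which.min(cv_mse)   # mtry value with the smallest cross-validated error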
2,956 | Random Forest - How to handle overfitting | How are you getting that 99% AUC on your training data? Be aware that there's a difference between
predict(model)
and
predict(model, newdata=train)
when getting predictions for the training dataset. The first option gets the out-of-bag predictions from the random forest. This is generally what you want, when comparing predicted values to actuals on the training data.
The second treats your training data as if it was a new dataset, and runs the observations down each tree. This will result in an artificially close correlation between the predictions and the actuals, since the RF algorithm generally doesn't prune the individual trees, relying instead on the ensemble of trees to control overfitting. So don't do this if you want to get predictions on the training data. | Random Forest - How to handle overfitting | How are you getting that 99% AUC on your training data? Be aware that there's a difference between
predict(model)
and
predict(model, newdata=train)
when getting predictions for the training dataset. | Random Forest - How to handle overfitting
How are you getting that 99% AUC on your training data? Be aware that there's a difference between
predict(model)
and
predict(model, newdata=train)
when getting predictions for the training dataset. The first option gets the out-of-bag predictions from the random forest. This is generally what you want, when comparing predicted values to actuals on the training data.
The second treats your training data as if it was a new dataset, and runs the observations down each tree. This will result in an artificially close correlation between the predictions and the actuals, since the RF algorithm generally doesn't prune the individual trees, relying instead on the ensemble of trees to control overfitting. So don't do this if you want to get predictions on the training data. | Random Forest - How to handle overfitting
How are you getting that 99% AUC on your training data? Be aware that there's a difference between
predict(model)
and
predict(model, newdata=train)
when getting predictions for the training dataset. |
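A small sketch of the difference described above, on simulated data: predict(rf) returns the out-of-bag predictions, while predict(rf, newdata = ...) sends the training rows down every tree and looks artificially good (the data and sizes here are arbitrary):
library(randomForest)
set.seed(2)
dat   <- data.frame(x1 = runif(300), x2 = runif(300))
dat$y <- 5 + 3 * dat$x1 - 2 * dat$x2 + rnorm(300)
rf    <- randomForest(y ~ x1 + x2, data = dat, ntree = 501)
oob   <- predict(rf)                   # out-of-bag predictions
refit <- predict(rf, newdata = dat)    # training data treated as if it were new data
cor(oob, dat$y)                        # honest correlation with the actuals
cor(refit, dat$y)                      # artificially close to 1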
2,957 | Random Forest - How to handle overfitting | For decision trees there are two ways of handling overfitting: (a) don't grow the trees to their entirety (b) prune
The same applies to a forest of trees - don't grow them too much and prune.
I don't use randomForest much, but to my knowledge, there are several parameters that you can use to tune your forests:
nodesize - minimum size of terminal nodes
maxnodes - maximum number of terminal nodes
mtry - number of variables used to build each tree (thanks @user777) | Random Forest - How to handle overfitting | For decision trees there are two ways of handling overfitting: (a) don't grow the trees to their entirety (b) prune
The same applies to a forest of trees - don't grow them too much and prune.
I don' | Random Forest - How to handle overfitting
For decision trees there are two ways of handling overfitting: (a) don't grow the trees to their entirety (b) prune
The same applies to a forest of trees - don't grow them too much and prune.
I don't use randomForest much, but to my knowledge, there are several parameters that you can use to tune your forests:
nodesize - minimum size of terminal nodes
maxnodes - maximum number of terminal nodes
mtry - number of variables used to build each tree (thanks @user777) | Random Forest - How to handle overfitting
For decision trees there are two ways of handling overfitting: (a) don't grow the trees to their entirety (b) prune
The same applies to a forest of trees - don't grow them too much and prune.
I don' |
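As a hedged illustration, here is where the parameters listed above go in a randomForest() call; mtcars and the specific values are arbitrary examples, not recommendations:
library(randomForest)
fit <- randomForest(mpg ~ ., data = mtcars,
                    ntree    = 501,
                    mtry     = 3,      # number of variables tried at each split
                    nodesize = 5,      # minimum size of terminal nodes
                    maxnodes = 16)     # maximum number of terminal nodes
fit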
2,958 | Random Forest - How to handle overfitting | Try to tune max_depth parameter in ranges of [5, 15] but not more than this because if you take large depth there is a high chance of overfitting. | Random Forest - How to handle overfitting | Try to tune max_depth parameter in ranges of [5, 15] but not more than this because if you take large depth there is a high chance of overfitting. | Random Forest - How to handle overfitting
Try to tune max_depth parameter in ranges of [5, 15] but not more than this because if you take large depth there is a high chance of overfitting. | Random Forest - How to handle overfitting
Try to tune max_depth parameter in ranges of [5, 15] but not more than this because if you take large depth there is a high chance of overfitting. |
2,959 | What's the difference between feed-forward and recurrent neural networks? | Feed-forward ANNs allow signals to travel one way only: from input to output. There are no feedback (loops); i.e., the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.
Feedback (or recurrent or interactive) networks can have signals traveling in both directions by introducing loops in the network. Feedback networks are powerful and can get extremely complicated. Computations derived from earlier input are fed back into the network, which gives them a kind of memory. Feedback networks are dynamic; their 'state' is changing continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found.
Feedforward neural networks are ideally suitable for modeling relationships between a set of predictor or input variables and one or more response or output variables. In other words, they are appropriate for any functional mapping problem where we want to know how a number of input variables affect the output variable. The multilayer feedforward neural networks, also called multi-layer perceptrons (MLP), are the most widely studied and used neural network model in practice.
As an example of a feedback network, I can recall Hopfield’s network. The main use of Hopfield’s network is as associative memory. An associative memory is a device which accepts an input pattern and generates an output as the stored pattern which is most closely associated with the input. The function of the associative memory is to recall the corresponding stored pattern, and then produce a clear version of the pattern at the output. Hopfield networks are typically used for those problems with binary pattern vectors and the input pattern may be a noisy version of one of the stored patterns. In the Hopfield network, the stored patterns are encoded as the weights of the network.
Kohonen’s self-organizing maps (SOM) represent another neural network type that is markedly different from the feedforward multilayer networks. Unlike training in the feedforward MLP, the SOM training or learning is often called unsupervised because there are no known target outputs associated with each input pattern in SOM and during the training process, the SOM processes the input patterns and learns to cluster or segment the data through adjustment of weights (that makes it an important neural network model for dimension reduction and data clustering). A two-dimensional map is typically created in such a way that the orders of the interrelationships among inputs are preserved. The number and composition of clusters can be visually determined based on the output distribution generated by the training process. With only input variables in the training sample, SOM aims to learn or discover the underlying structure of the data.
(The diagrams are from Dana Vrajitoru's C463 / B551 Artificial Intelligence web site.) | What's the difference between feed-forward and recurrent neural networks? | Feed-forward ANNs allow signals to travel one way only: from input to output. There are no feedback (loops); i.e., the output of any layer does not affect that same layer. Feed-forward ANNs tend to be | What's the difference between feed-forward and recurrent neural networks?
Feed-forward ANNs allow signals to travel one way only: from input to output. There are no feedback (loops); i.e., the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.
Feedback (or recurrent or interactive) networks can have signals traveling in both directions by introducing loops in the network. Feedback networks are powerful and can get extremely complicated. Computations derived from earlier input are fed back into the network, which gives them a kind of memory. Feedback networks are dynamic; their 'state' is changing continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found.
Feedforward neural networks are ideally suitable for modeling relationships between a set of predictor or input variables and one or more response or output variables. In other words, they are appropriate for any functional mapping problem where we want to know how a number of input variables affect the output variable. The multilayer feedforward neural networks, also called multi-layer perceptrons (MLP), are the most widely studied and used neural network model in practice.
As an example of feedback network, I can recall Hopfield’s network. The main use of Hopfield’s network is as associative memory. An associative memory is a device which accepts an input pattern and generates an output as the stored pattern which is most closely associated with the input. The function of the associate memory is to recall the corresponding stored pattern, and then produce a clear version of the pattern at the output. Hopfield networks are typically used for those problems with binary pattern vectors and the input pattern may be a noisy version of one of the stored patterns. In the Hopfield network, the stored patterns are encoded as the weights of the network.
Kohonen’s self-organizing maps (SOM) represent another neural network type that is markedly different from the feedforward multilayer networks. Unlike training in the feedforward MLP, the SOM training or learning is often called unsupervised because there are no known target outputs associated with each input pattern in SOM and during the training process, the SOM processes the input patterns and learns to cluster or segment the data through adjustment of weights (that makes it an important neural network model for dimension reduction and data clustering). A two-dimensional map is typically created in such a way that the orders of the interrelationships among inputs are preserved. The number and composition of clusters can be visually determined based on the output distribution generated by the training process. With only input variables in the training sample, SOM aims to learn or discover the underlying structure of the data.
(The diagrams are from Dana Vrajitoru's C463 / B551 Artificial Intelligence web site.) | What's the difference between feed-forward and recurrent neural networks?
Feed-forward ANNs allow signals to travel one way only: from input to output. There are no feedback (loops); i.e., the output of any layer does not affect that same layer. Feed-forward ANNs tend to be |
2,960 | What's the difference between feed-forward and recurrent neural networks? | What George Dontas writes is correct, however the use of RNNs in practice today is restricted to a simpler class of problems: time series / sequential tasks.
While feedforward networks are used to learn datasets like $(i, t)$ where $i$ and $t$ are vectors, e.g. $i \in \mathcal{R}^n$, for recurrent networks $i$ will always be a sequence, e.g. $i \in (\mathcal{R}^n)^*$.
RNNs have been shown by Hammer to be able to represent any measurable sequence-to-sequence mapping.
Thus, RNNs are being used nowadays for all kinds of sequential tasks: time series prediction, sequence labeling, sequence classification etc. A good overview can be found on Schmidhuber's page on RNNs. | What's the difference between feed-forward and recurrent neural networks? | What George Dontas writes is correct, however the use of RNNs in practice today is restricted to a simpler class of problems: time series / sequential tasks.
While feedforward networks are used to lea | What's the difference between feed-forward and recurrent neural networks?
What George Dontas writes is correct, however the use of RNNs in practice today is restricted to a simpler class of problems: time series / sequential tasks.
While feedforward networks are used to learn datasets like $(i, t)$ where $i$ and $t$ are vectors, e.g. $i \in \mathcal{R}^n$, for recurrent networks $i$ will always be a sequence, e.g. $i \in (\mathcal{R}^n)^*$.
RNNs have been shown to be able to represent any measureable sequence to sequence mapping by Hammer.
Thus, RNNs are being used nowadays for all kinds of sequential tasks: time series prediction, sequence labeling, sequence classification etc. A good overview can be found on Schmidhuber's page on RNNs. | What's the difference between feed-forward and recurrent neural networks?
What George Dontas writes is correct, however the use of RNNs in practice today is restricted to a simpler class of problems: time series / sequential tasks.
While feedforward networks are used to lea |
2,961 | What's the difference between feed-forward and recurrent neural networks? | What is really interesting in asking this question?
Instead of saying that RNN and FNN differ in their names and are therefore different, I think the more interesting question is: in terms of modeling a dynamical system, does RNN differ much from FNN?
Background
There has been a debate, for modeling dynamical systems, between the recurrent neural network and the feedforward neural network with additional features in the form of previous time delays (FNN-TD).
From my knowledge after reading papers from the 90's to the 2010's, the majority of the literature prefers vanilla RNN over FNN, on the grounds that RNN uses a dynamic memory while FNN-TD is a static memory.
However, there aren't many numerical studies comparing the two. An early one [1] showed that for modeling a dynamical system, FNN-TD shows performance comparable to vanilla RNN in the noise-free case, while performing a bit worse when there is noise. In my experience with modeling dynamical systems, I often find FNN-TD good enough.
What is the key difference in how to treat memory effects between RNN and FNN-TD?
Unfortunately, I haven't seen any publication that theoretically shows the difference between these two, which is quite interesting. Let's consider a simple case: using a scalar sequence $X_n, X_{n-1},\ldots,X_{n-k}$ to predict $X_{n+1}$. So it is a sequence-to-scalar task.
FNN-TD is the most general, comprehensive way of treating the so-called memory effects. Since it is brute force, it theoretically covers any kind of memory effect. The only downside is that it takes too many parameters in practice.
The memory in RNN is represented as a general "convolution" of the previous information. We all know that convolution between two scalar sequences is in general not a reversible process, and deconvolution is most often ill-posed.
My conjecture is "degree of freedom" in such convolution process is determined by the number of hidden units in the RNN state $s$. And it is important for some dynamical systems. Note that the "degree of freedom" can be extended by time delay embedding of states[2] while keeping the same number of hidden units.
Therefore, RNN is actually compressing the previous memory information with loss by doing convolution, while FNN-TD is just exposing them in a sense with no loss of memory information. Note that you can reduce the information loss in convolution by increasing the number of hidden units or using more time delays than vanilla RNN. In this sense, RNN is more flexible than FNN-TD. RNN can achieve no memory loss as FNN-TD and it can be trivial to show the number of parameters are on the same order.
I know someone might want to mention that RNN carries long-time effects while FNN-TD cannot. For this, I just want to mention that for a continuous autonomous dynamical system, Takens' embedding theorem makes it a generic property that an embedding exists, so FNN-TD with seemingly short-time memory can achieve the same performance as the seemingly long-time memory in RNN. This explains why RNN and FNN-TD did not differ much in the continuous dynamical system examples of the early 90's.
Now I will mention the benefit of RNN. For the task of an autonomous dynamical system, using more previous terms, although in theory effectively the same as using FNN-TD with fewer previous terms, is numerically helpful in that it is more robust to noise. The result in [1] is consistent with this opinion.
Reference
[1] Gençay, Ramazan, and Tung Liu. "Nonlinear modelling and prediction with feedforward and recurrent networks." Physica D: Nonlinear Phenomena 108.1-2 (1997): 119-134.
[2]Pan, Shaowu, and Karthik Duraisamy. "Data-driven Discovery of Closure Models." arXiv preprint arXiv:1803.09318 (2018). | What's the difference between feed-forward and recurrent neural networks? | What is really interesting in asking this question?
Instead of saying RNN and FNN is different in their name. So they are different., I think what is more interesting is in terms of modeling dynamical | What's the difference between feed-forward and recurrent neural networks?
What is really interesting in asking this question?
Instead of saying RNN and FNN is different in their name. So they are different., I think what is more interesting is in terms of modeling dynamical system, does RNN differ much from FNN?
Background
There has been a debate for modeling dynamical system between Recurrent neural network and Feedforward neural network with additional features as previous time delays (FNN-TD).
From my knowledge after reading those papers on 90's~2010's. The majority of the literature prefer that vanilla RNN is better than FNN in that RNN uses a dynamic memory while FNN-TD is a static memory.
However, there isn't much numerical studies comparing those two. The one [1]on the early showed that for modeling dynamical system, FNN-TD shows comparable performance to vanilla RNN when it is noise-free while perform a bit worse when there is noise. In my experiences on modeling dynamical systems, I often see FNN-TD is good enough.
What is the key difference in how to treat memory effects between RNN and FNN-TD?
Unfortunately, I don't see anywhere and any publication theoretically showed the difference between these two. It is quite interesting. Let's consider a simple case, using a scalar sequence $X_n, X_{n-1},\ldots,X_{n-k}$ to predict $X_{n+1}$. So it is a sequence-to-scalar task.
FNN-TD is the most general, comprehensive way to treating the so called memory effects. Since it is brutal, it covers any kind, any sort, any memory effect theoretically. The only down side is that it just takes too much parameters in practice.
The memory in RNN is nothing but represented as a general "convolution" of the previous information. We all know that convolution between two scalar sequence in general is not an reversible process and deconvolution is most often ill-posed.
My conjecture is "degree of freedom" in such convolution process is determined by the number of hidden units in the RNN state $s$. And it is important for some dynamical systems. Note that the "degree of freedom" can be extended by time delay embedding of states[2] while keeping the same number of hidden units.
Therefore, RNN is actually compressing the previous memory information with loss by doing convolution, while FNN-TD is just exposing them in a sense with no loss of memory information. Note that you can reduce the information loss in convolution by increasing the number of hidden units or using more time delays than vanilla RNN. In this sense, RNN is more flexible than FNN-TD. RNN can achieve no memory loss as FNN-TD and it can be trivial to show the number of parameters are on the same order.
I know someone might want to mention that RNN is carrying the long time effect while FNN-TD can not. For this, I just want to mention that for a continuous autonomous dynamical system, from Takens embedding theory it is a generic property for the embedding to exists for FNN-TD with the seemingly short time memory to achieve the same performance as the seemingly long time memory in RNN. It explains why RNN and FNN-TD does not differ a lot in continuous dynamical system example in the early 90's.
Now I will mention the benefit of RNN. For the task of autonomous dynamical system, using more previous term, although effectively would be the same as using FNN-TD with less previous terms in theory, numerically it would be helpful in that it is more robust to noise. Result in [1] is consistent with this opinion.
Reference
[1]Gençay, Ramazan, and Tung Liu. "Nonlinear modelling and prediction with feedforward and recurrent networks." Physica D: Nonlinear Phenomena 108.1-2 (1997): 119-134.
[2]Pan, Shaowu, and Karthik Duraisamy. "Data-driven Discovery of Closure Models." arXiv preprint arXiv:1803.09318 (2018). | What's the difference between feed-forward and recurrent neural networks?
What is really interesting in asking this question?
Instead of saying RNN and FNN is different in their name. So they are different., I think what is more interesting is in terms of modeling dynamical |
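As a small sketch of the FNN-TD setup discussed above, the time-delay inputs $X_n,\ldots,X_{n-k}$ for predicting $X_{n+1}$ can be built with base R's embed(); a linear model stands in here for the feedforward net, and the toy series and k = 3 are arbitrary choices:
set.seed(8)
x <- sin(seq(0, 20, by = 0.1)) + rnorm(201, sd = 0.05)   # toy scalar series
k <- 3
E <- embed(x, k + 2)                  # row t: X_t, X_{t-1}, ..., X_{t-k-1}
target <- E[, 1]                      # plays the role of X_{n+1}
lags   <- E[, -1, drop = FALSE]       # plays the role of X_n, ..., X_{n-k}
fnn_td <- lm(target ~ lags)           # a linear stand-in for the feedforward net
summary(fnn_td)$r.squared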
2,962 | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them? | The first sentence of the current 2015 editorial to which the OP links, reads:
The Basic and Applied Social Psychology (BASP) 2014 Editorial
*emphasized* that the null hypothesis significance testing procedure
(NHSTP) is invalid...
(my emphasis)
In other words, for the editors it is an already proven scientific fact that "null hypothesis significance testing" is invalid, and the 2014 editorial only emphasized so, while the current 2015 editorial just implements this fact.
The misuse (even maliciously so) of NHSTP is indeed well discussed and documented. And it is not unheard of in human history that "things get banned" because it has been found that after all said and done, they were misused more than put to good use (but shouldn't we statistically test that?). It can be a "second-best" solution, to cut what on average (inferential statistics) has come to losses, rather than gains, and so we predict (inferential statistics) that it will be detrimental also in the future.
But the zeal revealed behind the wording of the above first sentence makes this look exactly like a zealot approach rather than a cool-headed decision to cut the hand that tends to steal rather than offer. If one reads the one-year-older editorial mentioned in the above quote (DOI:10.1080/01973533.2014.865505), one will see that this is only part of an overhaul of the Journal's policies by a new Editor.
Scrolling down the editorial, they write
...On the contrary, we believe that the p<.05 bar is too easy to pass and
sometimes serves as an excuse for lower quality research.
So it appears that their conclusion related to their discipline is that null-hypotheses are rejected "too-often", and so alleged findings may acquire spurious statistical significance. This is not the same argument as the "invalid" dictum in the first sentence.
So, to answer to the question, it is obvious that for the editors of the journal, their decision is not only wise but already late in being implemented: they appear to think that they cut out what part of statistics has become harmful, keeping the beneficial parts -they don't seem to believe that there is anything here that needs replacing with something "equivalent".
Epistemologically, this is an instance where scholars of a social science partially retract back from an attempt to make their discipline more objective in its methods and results by using quantitative methods, because they have arrived at the conclusion (how?) that, in the end, the attempt created "more bad than good". I would say that this is a very important matter, in principle possible to have happened, and one that would require years of work to demonstrate it "beyond reasonable doubt" and really help your discipline. But just one or two editorials and papers published will most probably (inferential statistics) just ignite a civil war.
The final sentence of the 2015 editorial reads:
We hope and anticipate that banning the NHSTP will have the effect of
increasing the quality of submitted manuscripts by liberating authors
from the stultified structure of NHSTP thinking thereby eliminating an
important obstacle to creative thinking. The NHSTP has dominated
psychology for decades; we hope that by instituting the first NHSTP
ban, we demonstrate that psychology does not need the crutch of the
NHSTP, and that other journals follow suit. | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them? | The first sentence of the current 2015 editorial to which the OP links, reads:
The Basic and Applied Social Psychology (BASP) 2014 Editorial
*emphasized* that the null hypothesis significance testi | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them?
The first sentence of the current 2015 editorial to which the OP links, reads:
The Basic and Applied Social Psychology (BASP) 2014 Editorial
*emphasized* that the null hypothesis significance testing procedure
(NHSTP) is invalid...
(my emphasis)
In other words, for the editors it is an already proven scientific fact that "null hypothesis significance testing" is invalid, and the 2014 editorial only emphasized so, while the current 2015 editorial just implements this fact.
The misuse (even maliciously so) of NHSTP is indeed well discussed and documented. And it is not unheard of in human history that "things get banned" because it has been found that after all said and done, they were misused more than put to good use (but shouldn't we statistically test that?). It can be a "second-best" solution, to cut what on average (inferential statistics) has come to losses, rather than gains, and so we predict (inferential statistics) that it will be detrimental also in the future.
But the zeal revealed behind the wording of the above first sentence, makes this look -exactly, as a zealot approach rather than a cool-headed decision to cut the hand that tends to steal rather than offer. If one reads the one-year older editorial mentioned in the above quote (DOI:10.1080/01973533.2014.865505), one will see that this is only part of a re-hauling of the Journal's policies by a new Editor.
Scrolling down the editorial, they write
...On the contrary, we believe that the p<.05 bar is too easy to pass and
sometimes serves as an excuse for lower quality research.
So it appears that their conclusion related to their discipline is that null-hypotheses are rejected "too-often", and so alleged findings may acquire spurious statistical significance. This is not the same argument as the "invalid" dictum in the first sentence.
So, to answer to the question, it is obvious that for the editors of the journal, their decision is not only wise but already late in being implemented: they appear to think that they cut out what part of statistics has become harmful, keeping the beneficial parts -they don't seem to believe that there is anything here that needs replacing with something "equivalent".
Epistemologically, this is an instance where scholars of a social science partially retract back from an attempt to make their discipline more objective in its methods and results by using quantitative methods, because they have arrived at the conclusion (how?) that, in the end, the attempt created "more bad than good". I would say that this is a very important matter, in principle possible to have happened, and one that would require years of work to demonstrate it "beyond reasonable doubt" and really help your discipline. But just one or two editorials and papers published will most probably (inferential statistics) just ignite a civil war.
The final sentence of the 2015 editorial reads:
We hope and anticipate that banning the NHSTP will have the effect of
increasing the quality of submitted manuscripts by liberating authors
from the stultified structure of NHSTP thinking thereby eliminating an
important obstacle to creative thinking. The NHSTP has dominated
psychology for decades; we hope that by instituting the first NHSTP
ban, we demonstrate that psychology does not need the crutch of the
NHSTP, and that other journals follow suit. | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them?
The first sentence of the current 2015 editorial to which the OP links, reads:
The Basic and Applied Social Psychology (BASP) 2014 Editorial
*emphasized* that the null hypothesis significance testi |
2,963 | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them? | I feel that banning hypothesis tests is a great idea except for a select few "existence" hypotheses, e.g. testing the null hypothesis that there is not extra-sensory perception where all one would need to demonstrate to have evidence that ESP exists is non-randomness. But I think the journal missed the point that the main driver of poor research in psychology is the use of a threshold on $P$-values. It has been demonstrated in psychology and most other fields that a good deal of gaming goes on to arrive at $P < 0.05$. This includes hypothesis substitution, removing of observations, and subsetting data. It is thresholds that should be banned first.
The banning of confidence intervals is also overboard, but not for the reasons others have stated. Confidence intervals are useful only if one misinterprets them as Bayesian credible intervals (for suitable non-informative priors). But they are still useful. The fact that their exact frequentist interpretation leads to nothing but confusion implies that we need to "get out of Dodge" and go to the Bayesian or likelihood school. But useful results can be obtained by misinterpreting good old confidence limits.
It is a shame that the editors of the journal misunderstood Bayesian statistics and don't know of the existence of pure likelihood inference. What they are seeking can be easily provided by Bayesian posterior distributions using slightly skeptical priors. | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them? | I feel that banning hypothesis tests is a great idea except for a select few "existence" hypotheses, e.g. testing the null hypothesis that there is not extra-sensory perception where all one would nee | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them?
I feel that banning hypothesis tests is a great idea except for a select few "existence" hypotheses, e.g. testing the null hypothesis that there is not extra-sensory perception where all one would need to demonstrate to have evidence that ESP exists is non-randomness. But I think the journal missed the point that the main driver of poor research in psychology is the use of a threshold on $P$-values. It has been demonstrated in psychology and most other fields that a good deal of gaming goes on to arrive at $P < 0.05$. This includes hypothesis substitution, removing of observations, and subsetting data. It is thresholds that should be banned first.
The banning of confidence intervals is also overboard, but not for the reasons others have stated. Confidence intervals are useful only if one misinterprets them as Bayesian credible intervals (for suitable non-information priors). But they are still useful. The fact that their exact frequentist interpretation leads to nothing but confusion implies that we need to "get out of Dodge" and go Bayesian or likelihood school. But useful results can be obtained by misinterpreting good old confidence limits.
It is a shame that the editors of the journal misunderstood Bayesian statistics and don't know of the existence of pure likelihood inference. What they are seeking can be easily provided by Bayesian posterior distributions using slightly skeptical priors. | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them?
I feel that banning hypothesis tests is a great idea except for a select few "existence" hypotheses, e.g. testing the null hypothesis that there is not extra-sensory perception where all one would nee |
2,964 | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them? | I see this approach as an attempt to address the inability of social psychology to replicate many previously published 'significant findings.'
Its disadvantages are:
that it doesn't address many of the factors leading to spurious effects. E.g.,
A) People can still peek at their data and stop running their studies when an effect size strikes them as being sufficiently large to be of interest.
B) Large effect sizes will still appear to have large power in retrospective assessments of power.
C) People will still fish for interesting and big effects (testing a bunch of hypotheses in an experiment and then reporting the one that popped up) or
D) pretend that an unexpected weird effect was expected all along.
Shouldn't efforts be made to address these issues first?
As a field going forwards it will make a review of past findings pretty awful. There is no way to quantitatively assess the believability of different studies. If every journal implemented this approach, you'll have a bunch of social scientists saying there is evidence for X when it is totally unclear how believable X is and scientists arguing about how to interpret a published effect or arguing about whether it is important or worth talking about. Isn't this the point of having stats? To provide a consistent way to assess numbers. In my opinion, this new approach would cause a mess if it was widely implemented.
This change does not encourage researchers to submit the results of studies with small effect sizes so it doesn't really address the file-drawer effect (or are they going to publish findings with large n's regardless of effect size?). If we published all results of carefully designed studies, then even though the believability of results of individual studies may be uncertain, meta-analyses and reviews of studies that supplied statistical analysis would do a much better job at identifying the truth. | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them? | I see this approach as an attempt to address the inability of social psychology to replicate many previously published 'significant findings.'
Its disadvantages are:
that it doesn't address many of | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them?
I see this approach as an attempt to address the inability of social psychology to replicate many previously published 'significant findings.'
Its disadvantages are:
that it doesn't address many of the factors leading to spurious effects. E.g.,
A) People can still peek at their data and stop running their studies when an effect size strikes them as being sufficiently large to be of interest.
B) Large effects sizes will still appear to have large power in retrospective assessments of power.
C) People will still fish for interesting and big effects (testing a bunch of hypotheses in an experiment and then reporting the one that popped up) or
D) pretend that an unexpected weird effect was expected all along.
Shouldn't efforts be made to address these issues first?
As a field going forwards it will make a review of past findings pretty awful. There is no way to quantitatively assess the believability of different studies. If every journal implemented this approach, you'll have a bunch of social scientists saying there is evidence for X when it is totally unclear how believable X is and scientists arguing about how to interpret a published effect or arguing about whether it is important or worth talking about. Isn't this the point of having stats? To provide a consistent way to assess numbers. In my opinion, this new approach would cause a mess if it was widely implemented.
This change does not encourage researchers to submit the results of studies with small effect sizes so it doesn't really address the file-drawer effect (or are they going to publish findings with large n's regardless of effect size?). If we published all results of carefully designed studies, then even though the believability of results of individual studies may be uncertain, meta-analyses and reviews of studies that supplied statistical analysis would do a much better job at identifying the truth. | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them?
I see this approach as an attempt to address the inability of social psychology to replicate many previously published 'significant findings.'
Its disadvantages are:
that it doesn't address many of |
2,965 | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them? | I came across a wonderful quote that almost argues for the same point, but not quite -- since it is an opening paragraph in a textbook that is mostly about frequentist statistics and hypothesis testing.
It is widely held by non-statisticians, like the author, that if you do good experiments statistics are not necessary. They are quite right. [...] The snag, of course, is that doing good experiments is difficult. Most people need all the help they can get to prevent them making fools of themselves by claiming that their favourite theory is substantiated by observations that do nothing of the sort. And the main function of that section of statistics that deals with tests of significance is to prevent people making fools of themselves. From this point of view, the function of significance tests is to prevent people publishing experiments, not to encourage them. Ideally, indeed, significance tests should never appear in print, having been used, if at all, in the preliminary stages to detect inadequate experiments, so that the final experiments are so clear that no justification is needed.
-- David Colquhoun, Lectures on biostatistics, 1971 | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them? | I came across a wonderful quote that almost argues for the same point, but not quite -- since it is an opening paragraph in a textbook that is mostly about frequentist statistics and hypothesis testin | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them?
I came across a wonderful quote that almost argues for the same point, but not quite -- since it is an opening paragraph in a textbook that is mostly about frequentist statistics and hypothesis testing.
It is widely held by non-statisticians, like the author, that if you do good experiments statistics are not necessary. They are quite right. [...] The snag, of course, is that doing good experiments is difficult. Most people need all the help they can get to prevent them making fools of themselves by claiming that their favourite theory is substantiated by observations that do nothing of the sort. And the main function of that section of statistics that deals with tests of significance is to prevent people making fools of themselves. From this point of view, the function of significance tests is to prevent people publishing experiments, not to encourage them. Ideally, indeed, significance tests should never appear in print, having been used, if at all, in the preliminary stages to detect inadequate experiments, so that the final experiments are so clear that no justification is needed.
-- David Colquhoun, Lectures on biostatistics, 1971 | A psychology journal banned p-values and confidence intervals; is it indeed wise to stop using them?
I came across a wonderful quote that almost argues for the same point, but not quite -- since it is an opening paragraph in a textbook that is mostly about frequentist statistics and hypothesis testin |
2,966 | How to statistically compare two time series? | As others have stated, you need to have a common frequency of measurement (i.e. the time between observations). With that in place I would identify a common model that would reasonably describe each series separately. This might be an ARIMA model or a multiply-trended Regression Model with possible Level Shifts or a composite model integrating both memory (ARIMA) and dummy variables. This common model could be estimated globally and separately for each of the two series and then one could construct an F test to test the hypothesis of a common set of parameters. | How to statistically compare two time series? | As others have stated, you need to have a common frequency of measurement (i.e. the time between observations). With that in place I would identify a common model that would reasonably describe each s | How to statistically compare two time series?
As others have stated, you need to have a common frequency of measurement (i.e. the time between observations). With that in place I would identify a common model that would reasonably describe each series separately. This might be an ARIMA model or a multiply-trended Regression Model with possible Level Shifts or a composite model integrating both memory (ARIMA) and dummy variables. This common model could be estimated globally and separately for each of the two series and then one could construct an F test to test the hypothesis of a common set of parameters. | How to statistically compare two time series?
As others have stated, you need to have a common frequency of measurement (i.e. the time between observations). With that in place I would identify a common model that would reasonably describe each s |
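As a hedged illustration of estimating the common model globally and separately and then F-testing for a common parameter set, here is a sketch in which a trend-plus-lag regression stands in for whatever common model (ARIMA, regression with level shifts, ...) has been identified; the simulated series and the model form are assumptions made only for this example:
set.seed(3)
n  <- 80
s1 <- as.numeric(arima.sim(list(ar = 0.5), n)) + 0.05 * (1:n)
s2 <- as.numeric(arima.sim(list(ar = 0.5), n)) + 0.10 * (1:n)
d  <- data.frame(y    = c(s1[-1], s2[-1]),
                 ylag = c(s1[-n], s2[-n]),
                 t    = rep(2:n, 2),
                 id   = factor(rep(1:2, each = n - 1)))
fit_common   <- lm(y ~ t + ylag, data = d)           # one common set of parameters
fit_separate <- lm(y ~ id / (t + ylag), data = d)    # separate parameters per series
anova(fit_common, fit_separate)                      # F test of the common-parameter hypothesis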
2,967 | How to statistically compare two time series? | Consider the grangertest() in the lmtest library.
It is a test to see if one time series is useful in forecasting another.
A couple references to get you started:
https://spia.uga.edu/faculty_pages/monogan/teaching/ts/
https://spia.uga.edu/faculty_pages/monogan/teaching/ts/Kgranger.pdf
http://en.wikipedia.org/wiki/Granger_causality | How to statistically compare two time series? | Consider the grangertest() in the lmtest library.
It is a test to see if one time series is useful in forecasting another.
A couple references to get you started:
https://spia.uga.edu/faculty_pages/mo | How to statistically compare two time series?
Consider the grangertest() in the lmtest library.
It is a test to see if one time series is useful in forecasting another.
A couple references to get you started:
https://spia.uga.edu/faculty_pages/monogan/teaching/ts/
https://spia.uga.edu/faculty_pages/monogan/teaching/ts/Kgranger.pdf
http://en.wikipedia.org/wiki/Granger_causality | How to statistically compare two time series?
Consider the grangertest() in the lmtest library.
It is a test to see if one time series is useful in forecasting another.
A couple references to get you started:
https://spia.uga.edu/faculty_pages/mo |
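A minimal usage sketch of grangertest() from lmtest; x and y are simulated so that lagged x helps forecast y, and order sets the number of lags used (all values here are illustrative):
library(lmtest)
set.seed(4)
x <- as.numeric(arima.sim(list(ar = 0.6), 200))
y <- c(0, 0.7 * head(x, -1)) + rnorm(200, sd = 0.5)   # y driven by lagged x
d <- data.frame(x = x, y = y)
grangertest(y ~ x, order = 2, data = d)   # does x help forecast y beyond y's own lags?
grangertest(x ~ y, order = 2, data = d)   # and the reverse direction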
2,968 | How to statistically compare two time series? | Just came across this. Your first step is plotting the two sets on the same scale (timewise) to see the differences visually. You have done this and can easily see there are some glaring differences.
The next step is to use simple correlation analysis and see how well the two series are related, using the correlation coefficient (r). If r is small, your conclusion would be that they are only weakly related, so comparisons are of little value; a larger value of r would suggest good agreement between the two series.
The third step, where there is good correlation, is to test the statistical significance of r. The usual test assumes the two series are approximately normally distributed, which you can check with the Shapiro-Wilk test (null hypothesis: the series is normally distributed; alternative hypothesis: it is not).
There are other tests you can do but let me hope my answer helps. | How to statistically compare two time series? | Just came across this. Your first answer us plotting g the two sets the same scale (timewise) to see the differences visually. You have done this and can easily see there are some glaring differences. | How to statistically compare two time series?
Just came across this. Your first step is plotting the two sets on the same scale (timewise) to see the differences visually. You have done this and can easily see there are some glaring differences.
The next step is to use simple correlation analysis and see how well the two series are related, using the correlation coefficient (r). If r is small, your conclusion would be that they are weakly related and a comparison is of limited value; a larger value of r would suggest that the two series compare well.
The third step, where there is good correlation, is to test the statistical significance of r. The usual test assumes the two series are approximately normally distributed, which you can check with the Shapiro-Wilk test (null hypothesis: the series is normally distributed; alternative hypothesis: it is not).
There are other tests you can do but let me hope my answer helps. | How to statistically compare two time series?
Just came across this. Your first answer us plotting g the two sets the same scale (timewise) to see the differences visually. You have done this and can easily see there are some glaring differences. |
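A minimal R sketch of steps two and three of the answer above, on simulated series (hypothetical data; note that for strongly trending or autocorrelated series the nominal p-values can be badly misleading):
set.seed(1)
time <- seq(0, 4 * pi, length.out = 60)
x <- sin(time) + rnorm(60, sd = 0.3)       # simulated series 1
y <- sin(time) + rnorm(60, sd = 0.3)       # simulated series 2, measured at the same times
cor(x, y)                                  # step 2: correlation coefficient r
cor.test(x, y)                             # step 3: significance test of r
shapiro.test(x); shapiro.test(y)           # Shapiro-Wilk check of the normality assumption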
2,969 | How to statistically compare two time series? | I want to propose another approach. This is to test whether two time series are the same. This approach is only suitable for infrequently sampled data where autocorrelation is low.
If time series x is similar to time series y then the variance of x-y should be less than the variance of x. We can test this using a one sided F test for variance. If the ratio var(x-y)/var(x) is significantly less than one then y explains a significant proportion of the variance of x.
We also need to check that x-y is not significantly different to 0. This can be done with a one sample two sided t.test.
x <- cumsum(runif(10)-0.5)
t <- seq_along(x)
y <- x + rnorm(10, 0, 0.2)
# y <- x + rnorm(10, 0.2, 0.2)
plot(t,x, "b", col = "red")
points(t,y, "b", col = "blue")
var.test(x-y, x, alternative = "less") # does y improve variance of x?
#>
#> F test to compare two variances
#>
#> data: x - y and x
#> F = 0.27768, num df = 9, denom df = 9, p-value = 0.03496
#> alternative hypothesis: true ratio of variances is less than 1
#> 95 percent confidence interval:
#> 0.0000000 0.8827118
#> sample estimates:
#> ratio of variances
#> 0.277679
t.test(x-y) # check that x-y does not have an offset
#>
#> One Sample t-test
#>
#> data: x - y
#> t = -0.0098369, df = 9, p-value = 0.9924
#> alternative hypothesis: true mean is not equal to 0
#> 95 percent confidence interval:
#> -0.1660619 0.1646239
#> sample estimates:
#> mean of x
#> -0.0007189834
Created on 2021-09-02 by the reprex package (v2.0.0)
I think it should be possible to extend this approach to test whether two time series are linearly correlated, using x-lm(x ~ y) instead of x-y.
Edit: Dealing with autocorrelation I think could be done by finding a suitable Effective Degrees of Freedom for the tests, c.f., https://doi.org/10.1016/j.neuroimage.2019.05.011 | How to statistically compare two time series? | I want to propose another approach. This is to test whether two time series are the same. This approach is only suitable for infrequently sampled data where autocorrelation is low.
If time series x is | How to statistically compare two time series?
I want to propose another approach. This is to test whether two time series are the same. This approach is only suitable for infrequently sampled data where autocorrelation is low.
If time series x is similar to time series y then the variance of x-y should be less than the variance of x. We can test this using a one sided F test for variance. If the ratio var(x-y)/var(x) is significantly less than one then y explains a significant proportion of the variance of x.
We also need to check that x-y is not significantly different to 0. This can be done with a one sample two sided t.test.
x <- cumsum(runif(10)-0.5)
t <- seq_along(x)
y <- x + rnorm(10, 0, 0.2)
# y <- x + rnorm(10, 0.2, 0.2)
plot(t,x, "b", col = "red")
points(t,y, "b", col = "blue")
var.test(x-y, x, alternative = "less") # does y improve variance of x?
#>
#> F test to compare two variances
#>
#> data: x - y and x
#> F = 0.27768, num df = 9, denom df = 9, p-value = 0.03496
#> alternative hypothesis: true ratio of variances is less than 1
#> 95 percent confidence interval:
#> 0.0000000 0.8827118
#> sample estimates:
#> ratio of variances
#> 0.277679
t.test(x-y) # check that x-y does not have an offset
#>
#> One Sample t-test
#>
#> data: x - y
#> t = -0.0098369, df = 9, p-value = 0.9924
#> alternative hypothesis: true mean is not equal to 0
#> 95 percent confidence interval:
#> -0.1660619 0.1646239
#> sample estimates:
#> mean of x
#> -0.0007189834
Created on 2021-09-02 by the reprex package (v2.0.0)
I think it should be possible to extend this approach to test whether two time series are linearly correlated, using x-lm(x ~ y) instead of x-y.
Edit: Dealing with autocorrelation I think could be done by finding a suitable Effective Degrees of Freedom for the tests, c.f., https://doi.org/10.1016/j.neuroimage.2019.05.011 | How to statistically compare two time series?
I want to propose another approach. This is to test whether two time series are the same. This approach is only suitable for infrequently sampled data where autocorrelation is low.
If time series x is |
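A small sketch of the extension mentioned at the end of the answer above, reusing x and y as simulated in its code: replace x - y by the residuals of lm(x ~ y), so that y only has to track x up to a linear transformation.
r <- resid(lm(x ~ y))                  # residuals play the role of x - y
var.test(r, x, alternative = "less")   # does the fit to y explain a significant part of var(x)?
# no offset check is needed here: least-squares residuals have mean zero by construction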
2,970 | How to statistically compare two time series? | Fit a straight line to both time series signals using polyfit. Then compute the root-mean-square error (RMSE) of each fit. The value obtained for the red line would be considerably less than the one obtained for the gray line.
Also make the readings on some common frequency. | How to statistically compare two time series? | Fit a straight line to both the time series signals using polyfit. Then compute root-mean-square-error (RMSE) for both the lines. The obtained value for the red-line would be quite less than the one o | How to statistically compare two time series?
Fit a straight line to both time series signals using polyfit. Then compute the root-mean-square error (RMSE) of each fit. The value obtained for the red line would be considerably less than the one obtained for the gray line.
Also make the readings on some common frequency. | How to statistically compare two time series?
Fit a straight line to both the time series signals using polyfit. Then compute root-mean-square-error (RMSE) for both the lines. The obtained value for the red-line would be quite less than the one o |
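The answer refers to polyfit (MATLAB/NumPy); a minimal R equivalent of the same idea using lm(), with simulated stand-ins for the two series from the question:
set.seed(1)
t <- 1:100
red  <- 0.05 * t + rnorm(100, sd = 0.3)    # hypothetical low-noise "red" series
gray <- 0.05 * t + rnorm(100, sd = 2.0)    # hypothetical noisier "gray" series
rmse <- function(y, t) sqrt(mean(resid(lm(y ~ t))^2))   # straight-line fit, then RMSE of the residuals
rmse(red, t)
rmse(gray, t)                              # noticeably larger than rmse(red, t)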
2,971 | How to statistically compare two time series? | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
For question 1, I believe a cluster permutation test would suit you best. | How to statistically compare two time series? | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| How to statistically compare two time series?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
For question 1, I believe a cluster permutation test would suit you best. | How to statistically compare two time series?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
2,972 | How to split the dataset for cross validation, learning curve, and final evaluation? | I'm not sure what you want to do in step 7a. As I understand it right now, it doesn't make sense to me.
Here's how I understand your description: in step 7, you want to compare the hold-out performance with the results of a cross validation embracing steps 4 - 6. (so yes, that would be a nested setup).
The main points why I don't think this comparison makes much sense are:
This comparison cannot detect two of the main sources of overoptimistic validation results I encounter in practice:
data leaks (dependence) between training and test data which is caused by a hierarchical (aka clustered) data structure, and which is not accounted for in the splitting. In my field, we have typically multiple (sometimes thousands) of readings (= rows in the data matrix) of the same patient or biological replicate of an experiment. These are not independent, so the validation splitting needs to be done at patient level. However, if such a data leak occurs, you'll have it both in the splitting for the hold out set and in the cross validation splitting. Hold-out would then be just as optimistically biased as cross validation.
Preprocessing of the data done on the whole data matrix, where the calculations are not independent for each row but many/all rows are used to calculate parameters for the preprocessing. A typical example would be a PCA projection before the "actual" classification.
Again, that would affect both your hold-out and the outer cross validation, so you cannot detect it.
For the data I work with, both errors can easily cause the fraction of misclassifications to be underestimated by an order of magnitude!
If you are restricted to this counted fraction of test cases type of performance, model comparisons need either extremely large numbers of test cases or ridiculously large differences in true performance. Comparing 2 classifiers with unlimited training data may be a good start for further reading.
However, comparing the model quality the inner cross validation claims for the "optimal" model and the outer cross validation or hold out validation does make sense: if the discrepancy is high, it is questionable whether your grid search optimization did work (you may have skimmed variance due to the high variance of the performance measure). This comparison is easier in that you can spot trouble if you have the inner estimate being ridiculously good compared to the other - if it isn't, you don't need to worry that much about your optimization. But in any case, if your outer (7) measurement of the performance is honest and sound, you at least have a useful estimate of the obtained model, whether it is optimal or not.
IMHO measuring the learning curve is yet a different problem. I'd probably deal with that separately, and I think you need to define more clearly what you need the learning curve for (do you need the learning curve for a data set of the given problem, data, and classification method or the learning curve for this data set of the given problem, data, and classification method), and a bunch of further decisions (e.g. how to deal with the model complexity as a function of the training sample size? Optimize all over again, use fixed hyperparameters, decide on a function to fix hyperparameters depending on training set size?)
(My data usually has too few independent cases to get the measurement of the learning curve sufficiently precise to use it in practice - but you may be better off if your 1200 rows are actually independent)
update: What is "wrong" with the the scikit-learn example?
First of all, nothing is wrong with nested cross validation here. Nested validation is of utmost importance for data-driven optimization, and cross validation is a very powerful approach (particularly if iterated/repeated).
Then, whether anything is wrong at all depends on your point of view: as long as you do an honest nested validation (keeping the outer test data strictly independent), the outer validation is a proper measure of the "optimal" model's performance. Nothing wrong with that.
But several things can and do go wrong with grid search of these proportion-type performance measures for hyperparameter tuning of SVM. Basically they mean that you (probably) cannot rely on the optimization. Nevertheless, as long as your outer split was done properly, even if the model is not the best possible, you have an honest estimate of the performance of the model you got.
I'll try to give intuitive explanations why the optimization may be in trouble:
Mathematically/statistically speaking, the problem with the proportions is that measured proportions $\hat p$ are subject to a huge variance due to finite test sample size $n$ (depending also on the true performance of the model, $p$):
$Var (\hat p) = \frac{p (1 - p)}{n}$
You need ridiculously huge numbers of cases (at least compared to the numbers of cases I can usually have) in order to achieve the needed precision (bias/variance sense) for estimating recall, precision (machine learning performance sense). This of course applies also to ratios you calculate from such proportions. Have a look at the confidence intervals for binomial proportions. They are shockingly large! Often larger than the true improvement in performance over the hyperparameter grid.
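As a small illustration with hypothetical numbers - 97 % correct predictions observed on 400 independent test cases:
binom.test(x = 388, n = 400)$conf.int
# the exact 95 % interval spans several percentage points around 0.97,
# often more than the differences one is trying to resolve across a tuning grid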
And statistically speaking, grid search is a massive multiple comparison problem: the more points of the grid you evaluate, the higher the risk of finding some combination of hyperparameters that accidentally looks very good for the train/test split you are evaluating. This is what I mean with skimming variance. The well known optimistic bias of the inner (optimization) validation is just a symptom of this variance skimming.
Intuitively, consider a hypothetical change of a hyperparameter, that slowly causes the model to deteriorate: one test case moves towards the decision boundary. The 'hard' proportion performance measures do not detect this until the case crosses the border and is on the wrong side. Then, however, they immediately assign a full error for an infinitely small change in the hyperparameter.
In order to do numerical optimization, you need the performance measure to be well behaved. That means: the proportion-type performance measure is unsuitable for the optimization both because of its jumpy (not continuously differentiable) behaviour and because, apart from that jump, actually occurring changes are not detected at all.
Proper scoring rules are defined in a way that is particularly suitable for optimization. They have their global maximum when the predicted probabilities match the true probabilities for each case to belong to the class in question.
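A tiny illustration of the difference (hypothetical numbers): for a truly positive case whose predicted probability slowly deteriorates, the hard 0/1 measure does not move until the 0.5 boundary is crossed, while the Brier score (a proper scoring rule) degrades smoothly:
p.hat   <- seq(0.9, 0.45, by = -0.05)   # predicted probability drifting towards the boundary
correct <- as.numeric(p.hat > 0.5)      # hard classification: stays 1, then jumps to 0
brier   <- (1 - p.hat)^2                # Brier score: increases smoothly with the drift
cbind(p.hat, correct, brier)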
For SVMs you have the additional problem that not only the performance measures but also the model reacts in this jumpy fashion: small changes of the hyperparameter will not change anything. The model changes only when the hyperparameters are changed enough to cause some case to either stop being a support vector or to become a support vector. Again, such models are hard to optimize.
Literature:
Brown, L.; Cai, T. & DasGupta, A.: Interval Estimation for a Binomial Proportion, Statistical Science, 16, 101-133 (2001).
Cawley, G. C. & Talbot, N. L. C.: On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation, Journal of Machine Learning Research, 11, 2079-2107 (2010).
Gneiting, T. & Raftery, A. E.: Strictly Proper Scoring Rules, Prediction, and Estimation, Journal of the American Statistical Association, 102, 359-378 (2007). DOI: 10.1198/016214506000001437
Brereton, R.: Chemometrics for pattern recognition, Wiley, (2009).
points out the jumpy behaviour of the SVM as a function of the hyperparameters.
Update II: Skimming variance
what you can afford in terms of model comparison obviously depends on the number of independent cases.
Let's make some quick and dirty simulation about the risk of skimming variance here:
scikit.learn says that there are 1797 cases in the digits data.
assume that 100 models are compared, e.g. a $10 \times 10$ grid for 2 parameters.
assume that both parameter (ranges) do not affect the models at all,
i.e., all models have the same true performance of, say, 97 % (typical performance for the digits data set).
Run $10^4$ simulations of "testing these models" with sample size = 1797 rows in the digits data set
p.true = 0.97 # hypothetical true performance for all models
n.models = 100 # 10 x 10 grid
n.rows = 1797 # rows in scikit digits data
sim.test <- replicate (expr= rbinom (n= n.models, size= n.rows, prob= p.true),
n = 1e4)
sim.test <- matrixStats::colMaxs (sim.test) # take best model of each simulated run (colMaxs is from the matrixStats package)
hist (sim.test / n.rows,
breaks = (round (p.true * n.rows) : n.rows) / n.rows + 1 / 2 / n.rows,
col = "black", main = 'Distribution max. observed performance',
xlab = "max. observed performance", ylab = "n runs")
abline (v = p.true, col = "red") # true performance of all models
Here's the distribution for the best observed performance:
The red line marks the true performance of all our hypothetical models. On average, we observe only 2/3 of the true error rate for the seemingly best of the 100 compared models (for the simulation we know that they all perform equally with 97% correct predictions).
This simulation is obviously very much simplified:
In addition to the test sample size variance there is at least the variance due to model instability, so we're underestimating the variance here
Tuning parameters affecting the model complexity will typically cover parameter sets where the models are unstable and thus have high variance.
For the UCI digits from the example, the original data base has ca. 11000 digits written by 44 persons. What if the data is clustered according to the person who wrote? (I.e. is it easier to recognize an 8 written by someone if you know how that person writes, say, a 3?) The effective sample size then may be as low as 44.
Tuning model hyperparameters may lead to correlation between the models (in fact, that would be considered well behaved from a numerical optimization perspective). It is difficult to predict the influence of that (and I suspect this is impossible without taking into account the actual type of classifier).
In general, however, both low number of independent test cases and high number of compared models increase the bias. Also, the Cawley and Talbot paper gives empirical observed behaviour. | How to split the dataset for cross validation, learning curve, and final evaluation? | I'm not sure what you want to do in step 7a. As I understand it right now, it doesn't make sense to me.
Here's how I understand your description: in step 7, you want to compare the hold-out performan | How to split the dataset for cross validation, learning curve, and final evaluation?
I'm not sure what you want to do in step 7a. As I understand it right now, it doesn't make sense to me.
Here's how I understand your description: in step 7, you want to compare the hold-out performance with the results of a cross validation embracing steps 4 - 6. (so yes, that would be a nested setup).
The main points why I don't think this comparison makes much sense are:
This comparison cannot detect two of the main sources of overoptimistic validation results I encounter in practice:
data leaks (dependence) between training and test data which is caused by a hierarchical (aka clustered) data structure, and which is not accounted for in the splitting. In my field, we have typically multiple (sometimes thousands) of readings (= rows in the data matrix) of the same patient or biological replicate of an experiment. These are not independent, so the validation splitting needs to be done at patient level. However, if such a data leak occurs, you'll have it both in the splitting for the hold out set and in the cross validation splitting. Hold-out would then be just as optimistically biased as cross validation.
Preprocessing of the data done on the whole data matrix, where the calculations are not independent for each row but many/all rows are used to calculate parameters for the preprocessing. A typical example would be a PCA projection before the "actual" classification.
Again, that would affect both your hold-out and the outer cross validation, so you cannot detect it.
For the data I work with, both errors can easily cause the fraction of misclassifications to be underestimated by an order of magnitude!
If you are restricted to this counted fraction of test cases type of performance, model comparisons need either extremely large numbers of test cases or ridiculously large differences in true performance. Comparing 2 classifiers with unlimited training data may be a good start for further reading.
However, comparing the model quality the inner cross validation claims for the "optimal" model and the outer cross validation or hold out validation does make sense: if the discrepancy is high, it is questionable whether your grid search optimization did work (you may have skimmed variance due to the high variance of the performance measure). This comparison is easier in that you can spot trouble if you have the inner estimate being ridiculously good compared to the other - if it isn't, you don't need to worry that much about your optimization. But in any case, if your outer (7) measurement of the performance is honest and sound, you at least have a useful estimate of the obtained model, whether it is optimal or not.
IMHO measuring the learning curve is yet a different problem. I'd probably deal with that separately, and I think you need to define more clearly what you need the learning curve for (do you need the learning curve for a data set of the given problem, data, and classification method or the learning curve for this data set of the given problem, data, and classification method), and a bunch of further decisions (e.g. how to deal with the model complexity as a function of the training sample size? Optimize all over again, use fixed hyperparameters, decide on a function to fix hyperparameters depending on training set size?)
(My data usually has too few independent cases to get the measurement of the learning curve sufficiently precise to use it in practice - but you may be better off if your 1200 rows are actually independent)
update: What is "wrong" with the the scikit-learn example?
First of all, nothing is wrong with nested cross validation here. Nested validation is of utmost importance for data-driven optimization, and cross validation is a very powerful approach (particularly if iterated/repeated).
Then, whether anything is wrong at all depends on your point of view: as long as you do an honest nested validation (keeping the outer test data strictly independent), the outer validation is a proper measure of the "optimal" model's performance. Nothing wrong with that.
But several things can and do go wrong with grid search of these proportion-type performance measures for hyperparameter tuning of SVM. Basically they mean that you (probably) cannot rely on the optimization. Nevertheless, as long as your outer split was done properly, even if the model is not the best possible, you have an honest estimate of the performance of the model you got.
I'll try to give intuitive explanations why the optimization may be in trouble:
Mathematically/statistically speaking, the problem with the proportions is that measured proportions $\hat p$ are subject to a huge variance due to finite test sample size $n$ (depending also on the true performance of the model, $p$):
$Var (\hat p) = \frac{p (1 - p)}{n}$
You need ridiculously huge numbers of cases (at least compared to the numbers of cases I can usually have) in order to achieve the needed precision (bias/variance sense) for estimating recall, precision (machine learning performance sense). This of course applies also to ratios you calculate from such proportions. Have a look at the confidence intervals for binomial proportions. They are shockingly large! Often larger than the true improvement in performance over the hyperparameter grid.
And statistically speaking, grid search is a massive multiple comparison problem: the more points of the grid you evaluate, the higher the risk of finding some combination of hyperparameters that accidentally looks very good for the train/test split you are evaluating. This is what I mean with skimming variance. The well known optimistic bias of the inner (optimization) validation is just a symptom of this variance skimming.
Intuitively, consider a hypothetical change of a hyperparameter, that slowly causes the model to deteriorate: one test case moves towards the decision boundary. The 'hard' proportion performance measures do not detect this until the case crosses the border and is on the wrong side. Then, however, they immediately assign a full error for an infinitely small change in the hyperparameter.
In order to do numerical optimization, you need the performance measure to be well behaved. That means: the proportion-type performance measure is unsuitable for the optimization both because of its jumpy (not continuously differentiable) behaviour and because, apart from that jump, actually occurring changes are not detected at all.
Proper scoring rules are defined in a way that is particularly suitable for optimization. They have their global maximum when the predicted probabilities match the true probabilities for each case to belong to the class in question.
For SVMs you have the additional problem that not only the performance measures but also the model reacts in this jumpy fashion: small changes of the hyperparameter will not change anything. The model changes only when the hyperparameters are changed enough to cause some case to either stop being a support vector or to become a support vector. Again, such models are hard to optimize.
Literature:
Brown, L.; Cai, T. & DasGupta, A.: Interval Estimation for a Binomial Proportion, Statistical Science, 16, 101-133 (2001).
Cawley, G. C. & Talbot, N. L. C.: On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation, Journal of Machine Learning Research, 11, 2079-2107 (2010).
Gneiting, T. & Raftery, A. E.: Strictly Proper Scoring Rules, Prediction, and Estimation, Journal of the American Statistical Association, 102, 359-378 (2007). DOI: 10.1198/016214506000001437
Brereton, R.: Chemometrics for pattern recognition, Wiley, (2009).
points out the jumpy behaviour of the SVM as a function of the hyperparameters.
Update II: Skimming variance
what you can afford in terms of model comparison obviously depends on the number of independent cases.
Let's make some quick and dirty simulation about the risk of skimming variance here:
scikit.learn says that there are 1797 cases in the digits data.
assume that 100 models are compared, e.g. a $10 \times 10$ grid for 2 parameters.
assume that both parameter (ranges) do not affect the models at all,
i.e., all models have the same true performance of, say, 97 % (typical performance for the digits data set).
Run $10^4$ simulations of "testing these models" with sample size = 1797 rows in the digits data set
p.true = 0.97 # hypothetical true performance for all models
n.models = 100 # 10 x 10 grid
n.rows = 1797 # rows in scikit digits data
sim.test <- replicate (expr= rbinom (n= n.models, size= n.rows, prob= p.true),
n = 1e4)
sim.test <- matrixStats::colMaxs (sim.test) # take best model of each simulated run (colMaxs is from the matrixStats package)
hist (sim.test / n.rows,
breaks = (round (p.true * n.rows) : n.rows) / n.rows + 1 / 2 / n.rows,
col = "black", main = 'Distribution max. observed performance',
xlab = "max. observed performance", ylab = "n runs")
abline (v = p.true, col = "red") # true performance of all models
Here's the distribution for the best observed performance:
The red line marks the true performance of all our hypothetical models. On average, we observe only 2/3 of the true error rate for the seemingly best of the 100 compared models (for the simulation we know that they all perform equally with 97% correct predictions).
This simulation is obviously very much simplified:
In addition to the test sample size variance there is at least the variance due to model instability, so we're underestimating the variance here
Tuning parameters affecting the model complexity will typically cover parameter sets where the models are unstable and thus have high variance.
For the UCI digits from the example, the original data base has ca. 11000 digits written by 44 persons. What if the data is clustered according to the person who wrote? (I.e. is it easier to recognize an 8 written by someone if you know how that person writes, say, a 3?) The effective sample size then may be as low as 44.
Tuning model hyperparameters may lead to correlation between the models (in fact, that would be considered well behaved from a numerical optimization perspective). It is difficult to predict the influence of that (and I suspect this is impossible without taking into account the actual type of classifier).
In general, however, both low number of independent test cases and high number of compared models increase the bias. Also, the Cawley and Talbot paper gives empirical observed behaviour. | How to split the dataset for cross validation, learning curve, and final evaluation?
I'm not sure what you want to do in step 7a. As I understand it right now, it doesn't make sense to me.
Here's how I understand your description: in step 7, you want to compare the hold-out performan |
2,973 | Good GUI for R suitable for a beginner wanting to learn programming in R? | I would second @Shane's recommendation for Deducer, and would also recommend the R Commander by John Fox. The CRAN package is here. It's called the R "Commander" because it returns the R commands associated with the point-and-click menu selections, which can be saved and run later from the command prompt.
In this way, if you don't know how to do something then you can find it in the menus and get an immediate response for the proper way to do something with R code. It looks like Deducer operates similarly, though I haven't played with Deducer for a while.
The base R Commander is designed for beginner-minded tasks, but there are plugins available for some more sophisticated analyses (Deducer has plugins, too). Bear in mind, however, that no GUI can do everything, and at some point the user will need to wean him/herself from pointing-and-clicking. Some people (myself included) think that is a good thing. | Good GUI for R suitable for a beginner wanting to learn programming in R? | I would second @Shane's recommendation for Deducer, and would also recommend the R Commander by John Fox. The CRAN package is here. It's called the R "Commander" because it returns the R commands as | Good GUI for R suitable for a beginner wanting to learn programming in R?
I would second @Shane's recommendation for Deducer, and would also recommend the R Commander by John Fox. The CRAN package is here. It's called the R "Commander" because it returns the R commands associated with the point-and-click menu selections, which can be saved and run later from the command prompt.
In this way, if you don't know how to do something then you can find it in the menus and get an immediate response for the proper way to do something with R code. It looks like Deducer operates similarly, though I haven't played with Deducer for a while.
The base R Commander is designed for beginner-minded tasks, but there are plugins available for some more sophisticated analyses (Deducer has plugins, too). Bear in mind, however, that no GUI can do everything, and at some point the user will need to wean him/herself from pointing-and-clicking. Some people (myself included) think that is a good thing. | Good GUI for R suitable for a beginner wanting to learn programming in R?
I would second @Shane's recommendation for Deducer, and would also recommend the R Commander by John Fox. The CRAN package is here. It's called the R "Commander" because it returns the R commands as |
2,974 | Good GUI for R suitable for a beginner wanting to learn programming in R? | You can also try the brand-new RStudio. Reasonably full-featured IDE with easy set-up. I played with it yesterday and it seems nice.
Update
I now like RStudio even more. They actively implement feature requests, and it shows in the little things getting better and better. It also includes Git support (including remote syncing so Github integration is seamless).
A bunch of big names just joined so hopefully things will continue getting even better.
Update again
And indeed things have only gotten better, in rapid fashion. Package build-check cycles are now point-and-click, and the little stuff continues to improve as well. It now comes with an integrated debugging environment, too. | Good GUI for R suitable for a beginner wanting to learn programming in R? | You can also try the brand-new RStudio. Reasonably full-featured IDE with easy set-up. I played with it yesterday and it seems nice.
Update
I now like RStudio even more. They actively implement fea | Good GUI for R suitable for a beginner wanting to learn programming in R?
You can also try the brand-new RStudio. Reasonably full-featured IDE with easy set-up. I played with it yesterday and it seems nice.
Update
I now like RStudio even more. They actively implement feature requests, and it shows in the little things getting better and better. It also includes Git support (including remote syncing so Github integration is seamless).
A bunch of big names just joined so hopefully things will continue getting even better.
Update again
And indeed things have only gotten better, in rapid fashion. Package build-check cycles are now point-and-click, and the little stuff continues to improve as well. It now comes with an integrated debugging environment, too. | Good GUI for R suitable for a beginner wanting to learn programming in R?
You can also try the brand-new RStudio. Reasonably full-featured IDE with easy set-up. I played with it yesterday and it seems nice.
Update
I now like RStudio even more. They actively implement fea |
2,975 | Good GUI for R suitable for a beginner wanting to learn programming in R? | This has been answered several times on StackOverflow. The top selections on there seem to consistently be Eclipse with StatET or Emacs with ESS.
I wouldn't say that there are any good gui's to make it easier to learn the language. The closest thing would be deducer from Ian Fellows. But there are plenty of other resources (books, papers, blogs, packages, etc.) available for learning. | Good GUI for R suitable for a beginner wanting to learn programming in R? | This has been answered several times on StackOverflow. The top selections on there seem to consistently be Eclipse with StatET or Emacs with ESS.
I wouldn't say that there are any good gui's to make | Good GUI for R suitable for a beginner wanting to learn programming in R?
This has been answered several times on StackOverflow. The top selections on there seem to consistently be Eclipse with StatET or Emacs with ESS.
I wouldn't say that there are any good gui's to make it easier to learn the language. The closest thing would be deducer from Ian Fellows. But there are plenty of other resources (books, papers, blogs, packages, etc.) available for learning. | Good GUI for R suitable for a beginner wanting to learn programming in R?
This has been answered several times on StackOverflow. The top selections on there seem to consistently be Eclipse with StatET or Emacs with ESS.
I wouldn't say that there are any good gui's to make |
2,976 | Good GUI for R suitable for a beginner wanting to learn programming in R? | I think that the command line is the best interface, and especially for the beginners. The sooner you'll start with console, the sooner you'll find out that this is the fastest, the most comfortable and what's most important the only fully non-limiting way of using R. | Good GUI for R suitable for a beginner wanting to learn programming in R? | I think that the command line is the best interface, and especially for the beginners. The sooner you'll start with console, the sooner you'll find out that this is the fastest, the most comfortable a | Good GUI for R suitable for a beginner wanting to learn programming in R?
I think that the command line is the best interface, and especially for the beginners. The sooner you'll start with console, the sooner you'll find out that this is the fastest, the most comfortable and what's most important the only fully non-limiting way of using R. | Good GUI for R suitable for a beginner wanting to learn programming in R?
I think that the command line is the best interface, and especially for the beginners. The sooner you'll start with console, the sooner you'll find out that this is the fastest, the most comfortable a |
2,977 | Good GUI for R suitable for a beginner wanting to learn programming in R? | At least on linux, RKWard offers the best functionality. The new RStudio appears quite promising as well. | Good GUI for R suitable for a beginner wanting to learn programming in R? | At least on linux, RKWard offers the best functionality. The new RStudio appears quite promising as well. | Good GUI for R suitable for a beginner wanting to learn programming in R?
At least on linux, RKWard offers the best functionality. The new RStudio appears quite promising as well. | Good GUI for R suitable for a beginner wanting to learn programming in R?
At least on linux, RKWard offers the best functionality. The new RStudio appears quite promising as well. |
2,978 | Good GUI for R suitable for a beginner wanting to learn programming in R? | Personally ESS, but as stated above i have found Rcmdr very easy to use. | Good GUI for R suitable for a beginner wanting to learn programming in R? | Personally ESS, but as stated above i have found Rcmdr very easy to use. | Good GUI for R suitable for a beginner wanting to learn programming in R?
Personally ESS, but as stated above i have found Rcmdr very easy to use. | Good GUI for R suitable for a beginner wanting to learn programming in R?
Personally ESS, but as stated above i have found Rcmdr very easy to use. |
2,979 | Good GUI for R suitable for a beginner wanting to learn programming in R? | I used Rattle to both learn how to use R and for quick and dirty data mining tasks. | Good GUI for R suitable for a beginner wanting to learn programming in R? | I used Rattle to both learn how to use R and for quick and dirty data mining tasks. | Good GUI for R suitable for a beginner wanting to learn programming in R?
I used Rattle to both learn how to use R and for quick and dirty data mining tasks. | Good GUI for R suitable for a beginner wanting to learn programming in R?
I used Rattle to both learn how to use R and for quick and dirty data mining tasks. |
2,980 | Good GUI for R suitable for a beginner wanting to learn programming in R? | GUI != Programming
Asking which GUI will help you learn programming is like asking which grocery store will help you learn how to hunt for your own food. Using a GUI is not a way to learn programming.
The power of R is that it's not GUI driven; it uses scripts, which fundamentally allow for more reproducible results.
GUIs to demonstrate specific topics / Brief backpedaling
That having been said, I do think it's useful to use a GUI to demonstrate a single specific topic.
The latticist package is awesome for creating lattice graphs and learning lattice syntax.
The PBSmodelling package has some wonderful examples of GUIs that allow you to run specific models, such as MCMC models.
The TeachingDemos package also seems to have some good demos of specific topics.
Roll your own GUI
The PBSmodelling package also has tools that allow you to make your own GUIs. This includes some amazing tools for project organization and documentation. Thank you Pacific Biological Station!
Also, by using Rook and Apache you can also make powerful web-based GUI applications.
Making your own GUI is not appropriate for beginners or the direct answer to your question. However, if you're an advanced user then you might want to consider making a GUI to demonstrate a particular topic.
The installed "R" is a GUI (technically)
It's worth noting that the installed version of R is a shortcut to Rgui.exe.
I know that you're asking for a GUI that lets you access all of the base functionality of R by pointing and clicking, not a glorified wrapper for the command line.
However, it's important to realize that a GUI wrapper for the command line is a GUI, and it's a valid answer to your question. The command line is the only way that you can get access to the rapidly evolving functionality and power of R and the fresh packages authored daily.
So...
Again, the best GUI is R Studio
The best interface for R is definitely R Studio.
For some people the StatET / Eclipse interface is important for its powerful features, but R Studio is rapidly overtaking those features and adding new ones.
Revolution R (the commercial version) also has a GUI, but it's not so great unless you are deeply passionate about the design of MS Visual Studio. However, you can access Revolution's build of R through R Studio or Eclipse, so that's a pretty neat trick too. | Good GUI for R suitable for a beginner wanting to learn programming in R? | GUI != Programming
Asking which GUI will help you learn programming is like asking which grocery store will help you learn how to hunt for your own food. Using a GUI is not a way to learn programmin | Good GUI for R suitable for a beginner wanting to learn programming in R?
GUI != Programming
Asking which GUI will help you learn programming is like asking which grocery store will help you learn how to hunt for your own food. Using a GUI is not a way to learn programming.
The power of R is that it's not GUI driven; it uses scripts, which fundamentally allow for more reproducible results.
GUIs to demonstrate specific topics / Brief backpedaling
That having been said, I do think it's useful to use a GUI to demonstrate a single specific topic.
The latticist package is awesome for creating lattice graphs and learning lattice syntax.
The PBSmodelling package has some wonderful examples of GUIs that allow you to run specific models, such as MCMC models.
The TeachingDemos package also seems to have some good demos of specific topics.
Roll your own GUI
The PBSmodelling package also has tools that allow you to make your own GUIs. This includes some amazing tools for project organization and documentation. Thank you Pacific Biological Station!
Also, by using Rook and Apache you can also make powerful web-based GUI applications.
Making your own GUI is not appropriate for beginners or the direct answer to your question. However, if you're an advanced user then you might want to consider making a GUI to demonstrate a particular topic.
The installed "R" is a GUI (technically)
It's worth noting that the installed version of R is a shortcut to Rgui.exe.
I know that you're asking for a GUI that lets you access all of the base functionality of R by pointing and clicking, not a glorified wrapper for the command line.
However, it's important to realize that a GUI wrapper for the command line is a GUI, and it's a valid answer to your question. The command line is the only way that you can get access to the rapidly evolving functionality and power of R and the fresh packages authored daily.
So...
Again, the best GUI is R Studio
The best interface for R is definitely R Studio.
For some people the StatET / Eclipse interface is important for its powerful features, but R Studio is rapidly overtaking those features and adding new ones.
Revolution R (the commercial version) also has a GUI, but it's not so great unless you are deeply passionate about the design of MS Visual Studio. However, you can access Revolution's build of R through R Studio or Eclipse, so that's a pretty neat trick too. | Good GUI for R suitable for a beginner wanting to learn programming in R?
GUI != Programming
Asking which GUI will help you learn programming is like asking which grocery store will help you learn how to hunt for your own food. Using a GUI is not a way to learn programmin |
2,981 | Good GUI for R suitable for a beginner wanting to learn programming in R? | I used JGR for a short while, until it became apparent it would quickly consume all the memory on my system. I have not used it since, and recommend you do not use it. | Good GUI for R suitable for a beginner wanting to learn programming in R? | I used JGR for a short while, until it became apparent it would quickly consume all the memory on my system. I have not used it since, and recommend you do not use it. | Good GUI for R suitable for a beginner wanting to learn programming in R?
I used JGR for a short while, until it became apparent it would quickly consume all the memory on my system. I have not used it since, and recommend you do not use it. | Good GUI for R suitable for a beginner wanting to learn programming in R?
I used JGR for a short while, until it became apparent it would quickly consume all the memory on my system. I have not used it since, and recommend you do not use it. |
2,982 | Good GUI for R suitable for a beginner wanting to learn programming in R? | I recommend Tinn-R (Which is the acronym for Tinn is not Notepad) | Good GUI for R suitable for a beginner wanting to learn programming in R? | I recommend Tinn-R (Which is the acronym for Tinn is not Notepad) | Good GUI for R suitable for a beginner wanting to learn programming in R?
I recommend Tinn-R (Which is the acronym for Tinn is not Notepad) | Good GUI for R suitable for a beginner wanting to learn programming in R?
I recommend Tinn-R (Which is the acronym for Tinn is not Notepad) |
2,983 | Good GUI for R suitable for a beginner wanting to learn programming in R? | Despite all of the good recommendations, I've not found anything radically better than the default Mac GUI. R-Studio shows promise, but it's not currently that much more customizable or featureful than R and, say, BBEdit to edit. | Good GUI for R suitable for a beginner wanting to learn programming in R? | Despite all of the good recommendations, I've not found anything radically better than the default Mac GUI. R-Studio shows promise, but it's not currently that much more customizable or featureful tha | Good GUI for R suitable for a beginner wanting to learn programming in R?
Despite all of the good recommendations, I've not found anything radically better than the default Mac GUI. R-Studio shows promise, but it's not currently that much more customizable or featureful than R and, say, BBEdit to edit. | Good GUI for R suitable for a beginner wanting to learn programming in R?
Despite all of the good recommendations, I've not found anything radically better than the default Mac GUI. R-Studio shows promise, but it's not currently that much more customizable or featureful tha |
2,984 | Good GUI for R suitable for a beginner wanting to learn programming in R? | I would recommend having a look at AirXcell. It's an online (Web 2.0) calculation software based on R which provides a quite usable R GUI with a command line interface (The R console) a code editor, and various other things (data frame editor, etc.), all online from within the web browser.
See Use AirXcell as an online R console. | Good GUI for R suitable for a beginner wanting to learn programming in R? | I would recommend having a look at AirXcell. It's an online (Web 2.0) calculation software based on R which provides a quite usable R GUI with a command line interface (The R console) a code editor, a | Good GUI for R suitable for a beginner wanting to learn programming in R?
I would recommend having a look at AirXcell. It's an online (Web 2.0) calculation software based on R which provides a quite usable R GUI with a command line interface (The R console) a code editor, and various other things (data frame editor, etc.), all online from within the web browser.
See Use AirXcell as an online R console. | Good GUI for R suitable for a beginner wanting to learn programming in R?
I would recommend having a look at AirXcell. It's an online (Web 2.0) calculation software based on R which provides a quite usable R GUI with a command line interface (The R console) a code editor, a |
2,985 | Good GUI for R suitable for a beginner wanting to learn programming in R? | Having worked with the
(Base) R
RStudio
Revolution R Enterprise
in Windows environment, I strongly suggest "Revolution R Enterprise".
I accept that installing it takes a little longer (it is 600-700MB) compared with base R and RStudio, but the Object Browser of Revo R, the ease of the package installation procedure, the management of variables, and many other things make Revo R, in my view, the best of the three.
That said, Revo R being purchased by Microsoft is - to me - one of its drawbacks, since MS is ultimately a profit-oriented firm and may change its free nature sooner or later.
(Base) R
RStudio
Revolution R Enterprise
in Windows environment, I strongly suggest "Revolution R Enterprise".
I accept that its installing takes little longer (it is | Good GUI for R suitable for a beginner wanting to learn programming in R?
Having worked with the
(Base) R
RStudio
Revolution R Enterprise
in Windows environment, I strongly suggest "Revolution R Enterprise".
I accept that installing it takes a little longer (it is 600-700MB) compared with base R and RStudio, but the Object Browser of Revo R, the ease of the package installation procedure, the management of variables, and many other things make Revo R, in my view, the best of the three.
That said, Revo R being purchased by Microsoft is - to me - one of its drawbacks, since MS is ultimately a profit-oriented firm and may change its free nature sooner or later.
Having worked with the
(Base) R
RStudio
Revolution R Enterprise
in Windows environment, I strongly suggest "Revolution R Enterprise".
I accept that its installing takes little longer (it is |
2,986 | Good GUI for R suitable for a beginner wanting to learn programming in R? | Quadstat is a free browser-based front-end to R and also an open-source statistical web application framework. After submitting a computing query, the user is presented with output from the request and also the R commands used. Prior to submission, the R help file is clearly displayed so that the user may understand some of the internals of R. I am the project maintainer and hope you find the software useful.
https://quadstat.com
https://www.drupal.org/project/quadstat | Good GUI for R suitable for a beginner wanting to learn programming in R? | Quadstat is a free browser-based front-end to R and also an open-source statistical web application framework. After submitting a computing query, the user is presented with output from the request an | Good GUI for R suitable for a beginner wanting to learn programming in R?
Quadstat is a free browser-based front-end to R and also an open-source statistical web application framework. After submitting a computing query, the user is presented with output from the request and also the R commands used. Prior to submission, the R help file is clearly displayed so that the user may understand some of the internals of R. I am the project maintainer and hope you find the software useful.
https://quadstat.com
https://www.drupal.org/project/quadstat | Good GUI for R suitable for a beginner wanting to learn programming in R?
Quadstat is a free browser-based front-end to R and also an open-source statistical web application framework. After submitting a computing query, the user is presented with output from the request an |
2,987 | Good GUI for R suitable for a beginner wanting to learn programming in R? | If you don't want to code R, but want graphical user interface like SPSS, there is a new cloud based software, Number Analytics (). It is based on cloud so you don't need to install the program. It is freemium model starting free. It is for beginners who don't have much knowledge about statistics. The biggest selling point is that it does interpret the statistical results. Color table, and built-in graphs also helps. | Good GUI for R suitable for a beginner wanting to learn programming in R? | If you don't want to code R, but want graphical user interface like SPSS, there is a new cloud based software, Number Analytics (). It is based on cloud so you don't need to install the program. It is | Good GUI for R suitable for a beginner wanting to learn programming in R?
If you don't want to code R, but want graphical user interface like SPSS, there is a new cloud based software, Number Analytics (). It is based on cloud so you don't need to install the program. It is freemium model starting free. It is for beginners who don't have much knowledge about statistics. The biggest selling point is that it does interpret the statistical results. Color table, and built-in graphs also helps. | Good GUI for R suitable for a beginner wanting to learn programming in R?
If you don't want to code R, but want graphical user interface like SPSS, there is a new cloud based software, Number Analytics (). It is based on cloud so you don't need to install the program. It is |
2,988 | Having a job in data-mining without a PhD | I believe actually the opposite of your conclusion is true. In The Disposable Academic, several pointers are given about the low wage premium in applied math, math, and computer science for PhD holders over master's degree holders. In part, this is because companies are realizing that master's degree holders usually have just as much theoretical depth, better programming skills, and are more pliable and can be trained for their company's specific tasks. It's not easy to get an SVM disciple, for instance, to appreciate your company's infrastructure that relies on decision trees, say. Often, when someone has dedicated tons of time to a particular machine learning paradigm, they have a hard time generalizing their productivity to other domains.
Another problem is that a lot of machine learning jobs these days are all about getting things done, and not so much about writing papers or developing new methods. You can take a high risk approach to developing new mathematical tools, studying VC-dimensional aspects of your method, its underlying complexity theory, etc. But in the end, you might not get something that practitioners will care about.
Meanwhile, look at something like poselets. Basically no new math arises from poselets at all. It's entirely inelegant, clunky, and lacks any mathematical sophistication. But it scales up to large data sets amazingly well and it's looking like it will be a staple in pose recognition (especially in computer vision) for some time to come. Those researchers did a great job and their work is to be applauded, but it's not something most people associate with a machine learning PhD.
With a question like this, you'll get tons of different opinions, so by all means consider them all. I am currently a PhD student in computer vision, but I've decided to leave my program early with a master's degree, and I'll be working for an asset management company doing natural language machine learning, computational statistics, etc. I also considered ad-based data mining jobs at several large TV companies, and a few robotics jobs. In all of these domains, there are plenty of jobs for someone with mathematical maturity and a knack for solving problems in multiple programming languages. Having a master's degree is just fine. And, according to that Economist article, you'll be paid basically just as well as someone with a PhD. And if you work outside of academia, bonuses and getting to promotions faster than someone who spends extra years on a PhD can often mean your overall lifetime earnings are higher.
As Peter Thiel once said, "Graduate school is like hitting the snooze button on the alarm clock of life..." | Having a job in data-mining without a PhD | I believe actually the opposite of your conclusion is true. In The Disposable Academic, several pointers are given about the low wage premium in applied math, math, and computer science for PhD holder | Having a job in data-mining without a PhD
I believe actually the opposite of your conclusion is true. In The Disposable Academic, several pointers are given about the low wage premium in applied math, math, and computer science for PhD holders over master's degree holders. In part, this is because companies are realizing that master's degree holders usually have just as much theoretical depth, better programming skills, and are more pliable and can be trained for their company's specific tasks. It's not easy to get an SVM disciple, for instance, to appreciate your company's infrastructure that relies on decision trees, say. Often, when someone has dedicated tons of time to a particular machine learning paradigm, they have a hard time generalizing their productivity to other domains.
Another problem is that a lot of machine learning jobs these days are all about getting things done, and not so much about writing papers or developing new methods. You can take a high risk approach to developing new mathematical tools, studying VC-dimensional aspects of your method, its underlying complexity theory, etc. But in the end, you might not get something that practitioners will care about.
Meanwhile, look at something like poselets. Basically no new math arises from poselets at all. It's entirely inelegant, clunky, and lacks any mathematical sophistication. But it scales up to large data sets amazingly well, and it looks like it will be a staple in pose recognition (especially in computer vision) for some time to come. Those researchers did a great job and their work is to be applauded, but it's not something most people associate with a machine learning PhD.
With a question like this, you'll get tons of different opinions, so by all means consider them all. I am currently a PhD student in computer vision, but I've decided to leave my program early with a master's degree, and I'll be working for an asset management company doing natural language machine learning, computational statistics, etc. I also considered ad-based data mining jobs at several large TV companies, and a few robotics jobs. In all of these domains, there are plenty of jobs for someone with mathematical maturity and a knack for solving problems in multiple programming languages. Having a master's degree is just fine. And, according to that Economist article, you'll be paid basically just as well as someone with a PhD. And if you work outside of academia, bonuses and being promoted earlier than someone who spends extra years on a PhD can often mean your overall lifetime earnings are higher.
As Peter Thiel once said, "Graduate school is like hitting the snooze button on the alarm clock of life..." | Having a job in data-mining without a PhD
I believe actually the opposite of your conclusion is true. In The Disposable Academic, several pointers are given about the low wage premium in applied math, math, and computer science for PhD holder |
2,989 | Having a job in data-mining without a PhD | Disclaimer: I have a Ph.D. and work in machine learning. Having said that, I think other than becoming an academic, you don't need a Ph.D. to work in any field. Getting a Ph.D. helps you develop certain research skills, but
You don't need those research skills for most jobs.
You can acquire those skills without getting a Ph.D. degree.
Martin Wolf, the chief economic correspondent for the Financial Times, doesn't have a Ph.D. (he has a Master's degree), but his word carries a lot more weight than that of most Ph.D. graduates. I think to succeed in any field (including machine learning), you have to know how to learn and think things through on your own. A Ph.D. will help you practice those skills, but it's not an end in itself. Anyone who isn't willing to interview you just because you don't have a Ph.D. is probably not worth working for anyway. | Having a job in data-mining without a PhD | Disclaimer: I have a Ph.D. and work in machine learning. Having said that, I think other than becoming an academic, you don't need a Ph.D. to work in any field. Getting a Ph.D. helps you develop certa | Having a job in data-mining without a PhD
Disclaimer: I have a Ph.D. and work in machine learning. Having said that, I think other than becoming an academic, you don't need a Ph.D. to work in any field. Getting a Ph.D. helps you develop certain research skills, but
You don't need those research skills for most jobs.
You can acquire those skills without getting a Ph.D. degree.
Martin Wolf, the chief economic correspondent for the Financial Times, doesn't have a Ph.D. (he has a Master's degree), but his word carries a lot more weight than that of most Ph.D. graduates. I think to succeed in any field (including machine learning), you have to know how to learn and think things through on your own. A Ph.D. will help you practice those skills, but it's not an end in itself. Anyone who isn't willing to interview you just because you don't have a Ph.D. is probably not worth working for anyway. | Having a job in data-mining without a PhD
Disclaimer: I have a Ph.D. and work in machine learning. Having said that, I think other than becoming an academic, you don't need a Ph.D. to work in any field. Getting a Ph.D. helps you develop certa |
2,990 | Having a job in data-mining without a PhD | Disclaimer: I do not have a PhD in CS, nor do I work in machine learning; I am generalizing from other knowledge and experience.
I think there are several good answers here, but, in my honest opinion, they do not yet quite make the main issue explicit. I will attempt to do so, but recognize that I don't think I'm saying something radically different. The main issue here pertains to
skill development vs. signaling.
With respect to skill development, ultimately you want (as an employee) to be able to get the job done, done well and done quickly, and the employer wants (or presumably ought to want) such a person. Thus, the question here is how much extra skill development you get with the extra couple of years of academic training. Certainly you should be gaining something, but recognize that people who don't continue with graduate school probably don't just sit on their duff until they would have graduated. Thus, you are comparing one set of experiences (academic) vs. another (work). A good bit depends on the quality and nature of the Ph.D. program, your intrinsic interests, how self-directed you are, and what kind of opportunities and support would be available in your first job.
Outside of the effect continued academic training has on skill development, there is the question of the effect and value of the signal (i.e., of having "Ph.D." appended after your name). The signal can help in two ways: First, it can help you land your initial job, and that shouldn't be dismissed--it can be very important. Research has shown that people who are obliged to start out in a first position that isn't as appropriate for them tend not to do as well (career-wise, on average) as people who get to start in a job that is a good match for their abilities and interests. On the other hand, the consensus seems to be that after your first job, your future prospects are much more strongly influenced by your performance in your previous job than by your academic credentials.
The second aspect of the signal has to do with the relationship between the analyst and the consumer of the analysis. @EMS does a good job of bringing this point out in a comment. There are a lot of small consulting shops, and they love to have Ph.D.'s to show off to potential clients: in initial meetings trying to land a contract, on letterhead, in presentations of finished work-products, etc., the Ph.D.'s are always there. It's easy to be cynical about this, but I do think there's legitimate value for the consulting firm and the consumer (who may not know much about these matters and can use credentials to help them select a firm that will do a good job for them). Behind the scenes, some of the work may be farmed out to competent people with fewer credentials, but they want the Ph.D. for the front end, and to sign off on the work-product before it's delivered. I could see something analogous happening with a start-up if they're trying to attract capital and want to reassure investors. On the other hand, if you are going to do work for internal consumption, and your boss is capable of evaluating it, this doesn't matter. | Having a job in data-mining without a PhD | Disclaimer: I do not have a PhD in CS, nor do I work in machine learning; I am generalizing from other knowledge and experience.
I think there are several good answers here, but, in my honest opinio | Having a job in data-mining without a PhD
Disclaimer: I do not have a PhD in CS, nor do I work in machine learning; I am generalizing from other knowledge and experience.
I think there are several good answers here, but, in my honest opinion, they do not yet quite make the main issue explicit. I will attempt to do so, but recognize that I don't think I'm saying something radically different. The main issue here pertains to
skill development vs. signaling.
With respect to skill development, ultimately you want (as an employee) to be able to get the job done, done well and done quickly, and the employer wants (or presumably ought to want) such a person. Thus, the question here is how much extra skill development you get with the extra couple of years of academic training. Certainly you should be gaining something, but recognize that people who don't continue with graduate school probably don't just sit on their duff until they would have graduated. Thus, you are comparing one set of experiences (academic) vs. another (work). A good bit depends on the quality and nature of the Ph.D. program, your intrinsic interests, how self-directed you are, and what kind of opportunities and support would be available in your first job.
Outside of the effect continued academic training has on skill development, there is the question of the effect and value of the signal (i.e., of having "Ph.D." appended after your name). The signal can help in two ways: First, it can help you land your initial job, and that shouldn't be dismissed--it can be very important. Research has shown that people who are obliged to start out in a first position that isn't as appropriate for them tend not to do as well (career-wise, on average) as people who get to start in a job that is a good match for their abilities and interests. On the other hand, the consensus seems to be that after your first job, your future prospects are much more strongly influenced by your performance in your previous job than by your academic credentials.
The second aspect of the signal has to do with the relationship between the analyst and the consumer of the analysis. @EMS does a good job of bringing this point out in a comment. There are a lot of small consulting shops, and they love to have Ph.D.'s to show off to potential clients: in initial meetings trying to land a contract, on letterhead, in presentations of finished work-products, etc., the Ph.D.'s are always there. It's easy to be cynical about this, but I do think there's legitimate value for the consulting firm and the consumer (who may not know much about these matters and can use credentials to help them select a firm that will do a good job for them). Behind the scenes, some of the work may be farmed out to competent people with fewer credentials, but they want the Ph.D. for the front end, and to sign off on the work-product before it's delivered. I could see something analogous happening with a start-up if they're trying to attract capital and want to reassure investors. On the other hand, if you are going to do work for internal consumption, and your boss is capable of evaluating it, this doesn't matter. | Having a job in data-mining without a PhD
Disclaimer: I do not have a PhD in CS, nor do I work in machine learning; I am generalizing from other knowledge and experience.
I think there are several good answers here, but, in my honest opinio |
2,991 | Having a job in data-mining without a PhD | I agree with most that has been said here, but I want to introduce a few practical issues that arise when applying for jobs in finance. Often you will see ads stating that a PhD in statistics or mathematics is required to apply for a particular trading or quantitative developer position. I know there are some particular reasons for this. Mind, I'm not saying this is right, but it is what happens in practice:
There are many applicants for the job, especially at the most well-known companies, and the employer can't possibly dedicate enough time to each candidate. Filtering applications based on academic background shrinks the pool to a more manageable number. Yes, there will be misses. Yes, it's not the best way to find productive individuals. But on average you are looking at skilled professionals who have dedicated years to learning the craft. They should at least have the discipline to undertake a complex research project.
The team and the company will be enriched by a number of PhDs to showcase to investors and clients. This will give the company an image of "oracular" knowledge and benefit its reputation. The company's intangible valuation can rise. The average investor will be more confident entrusting their capital to such a knowledgeable team of scientists. You can make a similar point about MBAs.
Finally, sometimes corporate policies dictate that those with higher academic achievements should have a preferential career path and compensation. I believe this is true for most corporations in different industries, not just finance. It is hard to see John with a BS in computer science managing PhDs in mathematics. | Having a job in data-mining without a PhD | I agree with most that has been said here, but I want to introduce a few practical issues that arise when applying for jobs in finance. Often you will see ads stating that a PhD in statistics or mathe | Having a job in data-mining without a PhD
I agree with most that has been said here, but I want to introduce a few practical issues that arise when applying for jobs in finance. Often you will see ads stating that a PhD in statistics or mathematics is required to apply for a particular trading or quantitative developer position. I know there are some particular reasons for this. Mind, I'm not saying this is right, but it is what happens in practice:
There are many applicants for the job, especially at the most well-known companies, and the employer can't possibly dedicate enough time to each candidate. Filtering applications based on academic background shrinks the pool to a more manageable number. Yes, there will be misses. Yes, it's not the best way to find productive individuals. But on average you are looking at skilled professionals who have dedicated years to learning the craft. They should at least have the discipline to undertake a complex research project.
The team and the company will be enriched by a number of PhDs to showcase to investors and clients. This will give the company an image of "oracular" knowledge and benefit its reputation. The company's intangible valuation can rise. The average investor will be more confident entrusting their capital to such a knowledgeable team of scientists. You can make a similar point about MBAs.
Finally, sometimes corporate policies dictate that those with higher academic achievements should have a preferential career path and compensation. I believe this is true for most corporations in different industries, not just finance. It is hard to see John with a BS in computer science managing PhDs in mathematics. | Having a job in data-mining without a PhD
I agree with most that has been said here, but I want to introduce a few practical issues that arise when applying for jobs in finance. Often you will see ads stating that a PhD in statistics or mathe |
2,992 | Having a job in data-mining without a PhD | Disclaimer: I'm a recruiter and have been since 1982 so I understand your question very well. Let me break it down this way. Your resume is a screening out device. Companies get tons of resumes so they're reading resumes with one question in mind, "Why don't I want to talk to this person?" That reduces their pile to a few candidates who have the best chance of meeting their needs. So if you're getting interviews and your resume doesn't show a PhD, then there's something else going on here. I say that because, just as a resume is a screening OUT device, the interview is a screening IN device. Once they've invited you to an interview, they've already concluded you have enough "on paper" to do the job. So when you walk into the interview, the only question they're really asking is "why should I hire you?" The person they hire will be the individual who best shows how they can serve the company's needs.
My advice as a recruiter is to ask questions throughout the interview to identify their deeper needs. Believe me, the job description rarely resembles the truth so you'll want to probe for their hot buttons then sell directly to those issues. Don't allow the interview to feel like an interrogation, waiting for the end to ask questions. You'll go down in flames and end up being told "you don't have a PhD". Be respectful yet show your willingness to help them solve their problem.
My favorite question is: "What are the traits of the best person you've ever known in this role?" Everyone has a dream team in mind so it's important to figure out what traits they feel are necessary to succeed in this role. Keep in mind, this isn't a question about experience, backgrounds or degrees. See, I can always find a mediocre PhD with tons of experience so this isn't the holy grail. It's just what companies continue to think is best because IMO they don't know how else to write a job description that captures the essence of the person they need. | Having a job in data-mining without a PhD | Disclaimer: I'm a recruiter and have been since 1982 so I understand your question very well. Let me break it down this way. Your resume is a screening out device. Companies get tons of resumes so the | Having a job in data-mining without a PhD
Disclaimer: I'm a recruiter and have been since 1982 so I understand your question very well. Let me break it down this way. Your resume is a screening out device. Companies get tons of resumes so they're reading resumes with one question in mind, "Why don't I want to talk to this person?" That reduces their pile to a few candidates who have the best chance of meeting their needs. So if you're getting interviews and your resume doesn't show a PhD, then there's something else going on here. I say that because, just as a resume is a screening OUT device, the interview is a screening IN device. Once they've invited you to an interview, they've already concluded you have enough "on paper" to do the job. So when you walk into the interview, the only question they're really asking is "why should I hire you?" The person they hire will be the individual who best shows how they can serve the company's needs.
My advice as a recruiter is to ask questions throughout the interview to identify their deeper needs. Believe me, the job description rarely resembles the truth so you'll want to probe for their hot buttons then sell directly to those issues. Don't allow the interview to feel like an interrogation, waiting for the end to ask questions. You'll go down in flames and end up being told "you don't have a PhD". Be respectful yet show your willingness to help them solve their problem.
My favorite question is: "What are the traits of the best person you've ever known in this role?" Everyone has a dream team in mind so it's important to figure out what traits they feel are necessary to succeed in this role. Keep in mind, this isn't a question about experience, backgrounds or degrees. See, I can always find a mediocre PhD with tons of experience so this isn't the holy grail. It's just what companies continue to think is best because IMO they don't know how else to write a job description that captures the essence of the person they need. | Having a job in data-mining without a PhD
Disclaimer: I'm a recruiter and have been since 1982 so I understand your question very well. Let me break it down this way. Your resume is a screening out device. Companies get tons of resumes so the |
2,993 | Having a job in data-mining without a PhD | My 2 cents: No, I don't think so. A PhD per se does not entitle one to be better at data mining or ML. Take kaggle's own Jeremy Howard. I would even go as far as saying that a PhD says not much about any qualification, as there is huge variability in the quality of programs. Perhaps the only thing a PhD proves is that the holder has a high tolerance for frustration.
Bottom line: If you are interested in that area, knowledgeable, creative and hard-working, why would you need a PhD? It is you that should count, not your titles. | Having a job in data-mining without a PhD | My 2 cents: No, I don't think so. A PhD per se does not entitle one to be better at data mining or ML. Take kaggle's own Jeremy Howard. I would even go as far as saying that a PhD says not much ab | Having a job in data-mining without a PhD
My 2 cents: No, I don't think so. A PhD per se does not entitle one to be better at data mining or ML. Take kaggle's own Jeremy Howard. I would even go as far as saying that a PhD says not much about any qualification, as there is huge variability in the quality of programs. Perhaps the only thing a PhD proves is that the holder has a high tolerance for frustration.
Bottom line: If you are interested in that area, knowledgeable, creative and hard-working, why would you need a PhD? It is you that should count, not your titles. | Having a job in data-mining without a PhD
My 2 cents: No, I don't think so. A PhD per se does not entitle one to be better at data mining or ML. Take kaggle's own Jeremy Howard. I would even go as far as saying that a PhD says not much ab |
2,994 | Having a job in data-mining without a PhD | Whether a job requires a PhD or not depends on the level of responsibility and the perception of the employer and/or his clients. I do not think there is a discipline that requires a PhD. Certainly data mining can be learned and an employee can do productive work without a PhD. This depends more on the person--his or her ability to learn quickly, adapt, and understand the literature--than on previous education. This is especially true for data mining, which is an evolving field. So even the data miners with PhDs will have more to learn as time goes on. | Having a job in data-mining without a PhD | Whether a job requires a PhD or not depends on the level of responsibility and the perception of the employer and/or his clients. I do not think there is a discipline that requires a PhD. Certainly data | Having a job in data-mining without a PhD
Whether a job requires a PhD or not depends on the level of responsibility and the perception of the employer and/or his clients. I do not think there is a discipline that requires a PhD. Certainly data mining can be learned and an employee can do productive work without a PhD. This depends more on the person--his or her ability to learn quickly, adapt, and understand the literature--than on previous education. This is especially true for data mining, which is an evolving field. So even the data miners with PhDs will have more to learn as time goes on. | Having a job in data-mining without a PhD
Whether a job requires a PhD or not depends on the level of responsibility and the perception of the employer and/or his clients. I do not think there is a discipline that requires a PhD. Certainly data |
2,995 | Having a job in data-mining without a PhD | I have a master's degree in Applied Statistics and worked in Europe as a Data Miner. When I came to the UK nobody had even heard of data mining, let alone studied for such a degree. Now it is commonplace and employers feel that a PhD is necessary for this job. However, it is the statistical knowledge and the modelling aspect which is important for this job. In my experience, most IT people do not understand statistics and are therefore unable to do the job well. I went into teaching and now am registering to do a PhD in Applied Statistics to satisfy these employers. I probably know more than most PhD graduates, having studied for my Master's degree in the 1980s when the level was very high. I think to be a good data miner, one has to have a background in Statistics. | Having a job in data-mining without a PhD | I have a master's degree in Applied Statistics and worked in Europe as a Data Miner. When I came to the UK nobody had even heard of data mining, let alone studied for such a degree. Now it is common pl | Having a job in data-mining without a PhD
I have a master's degree in Applied Statistics and worked in Europe as a Data Miner. When I came to the UK nobody had even heard of data mining, let alone studied for such a degree. Now it is commonplace and employers feel that a PhD is necessary for this job. However, it is the statistical knowledge and the modelling aspect which is important for this job. In my experience, most IT people do not understand statistics and are therefore unable to do the job well. I went into teaching and now am registering to do a PhD in Applied Statistics to satisfy these employers. I probably know more than most PhD graduates, having studied for my Master's degree in the 1980s when the level was very high. I think to be a good data miner, one has to have a background in Statistics. | Having a job in data-mining without a PhD
I have a master's degree in Applied Statistics and worked in Europe as a Data Miner. When I came to the UK nobody had even heard of data mining, let alone studied for such a degree. Now it is common pl |
2,996 | Having a job in data-mining without a PhD | This totally depends on the job at hand. In my experience (I have a PhD), there are 3 types of jobs. First, as it has been said, most industry jobs these days are oriented towards applied machine learning, i.e. apply-tweak of existing ML algorithms to the domain-specific problem in question. These are by far the most common ML jobs and a master's degree is more than sufficient for these kinds of jobs. A smaller number of jobs, which happen to be in the research wings of companies, universities, and institutions, are apply-tweak-create ML jobs for the domain-specific problem. The experience of creating a new method by looking at existing methods using new mathematics typically takes some time, and these experiences are typically gained during the PhD, as the new theoretical result should be sufficiently robust to gain the acceptance of one's peers (a publication). Last, and probably the hardest, highest-risk and most uncommon, type of job is the purely theoretical work going on at research universities, where the focus is to come up with an entirely new algorithm, or to better understand the mathematical properties of existing algorithms (which also has to be good enough to be published). This too is experience typically gained during a PhD. While a PhD student might have had some exposure to all three types of jobs during his/her training (purely due to the length of the program and the fact that there are no immediate product deadlines as in a real job), the MS student typically is well trained for the first job and would probably have had only minor exposure to the 2nd and 3rd types of jobs. Each one of these jobs is equally important. | Having a job in data-mining without a PhD | This totally depends on the job at hand. In my experience (I have a PhD), there are 3 types of jobs. First, as it has been said, most industry jobs these days are oriented towards applied machine lear | Having a job in data-mining without a PhD
This totally depends on the job at hand. In my experience (I have a PhD), there are 3 types of jobs. First, as it has been said, most industry jobs these days are oriented towards applied machine learning, i.e. apply-tweak of existing ML algorithms to the domain-specific problem in question. These are by far the most common ML jobs and a master's degree is more than sufficient for these kinds of jobs. A smaller number of jobs, which happen to be in the research wings of companies, universities, and institutions, are apply-tweak-create ML jobs for the domain-specific problem. The experience of creating a new method by looking at existing methods using new mathematics typically takes some time, and these experiences are typically gained during the PhD, as the new theoretical result should be sufficiently robust to gain the acceptance of one's peers (a publication). Last, and probably the hardest, highest-risk and most uncommon, type of job is the purely theoretical work going on at research universities, where the focus is to come up with an entirely new algorithm, or to better understand the mathematical properties of existing algorithms (which also has to be good enough to be published). This too is experience typically gained during a PhD. While a PhD student might have had some exposure to all three types of jobs during his/her training (purely due to the length of the program and the fact that there are no immediate product deadlines as in a real job), the MS student typically is well trained for the first job and would probably have had only minor exposure to the 2nd and 3rd types of jobs. Each one of these jobs is equally important. | Having a job in data-mining without a PhD
This totally depends on the job at hand. In my experience (I have a PhD), there are 3 types of jobs. First, as it has been said, most industry jobs these days are oriented towards applied machine lear |
2,997 | Having a job in data-mining without a PhD | I don't think that a PhD is required for any machine learning position. A good master's and an inquisitive mind with mathematical curiosity are all that is needed. A PhD biases your approach towards your specialization, which is undesirable. I work on core machine learning algorithms and code most of them the way I want. And I have seen a lot of PhD people with the wrong mindset. PhDs are mostly motivated by pure theoretical problems, unlike in industry, where the focus is on working solutions in quick time. | Having a job in data-mining without a PhD | I don't think that a PhD is required for any machine learning position. A good master's and an inquisitive mind with mathematical curiosity are all that is needed. A PhD biases your approach towards your sp | Having a job in data-mining without a PhD
I don't think that a PhD is required for any machine learning position. A good master's and an inquisitive mind with mathematical curiosity are all that is needed. A PhD biases your approach towards your specialization, which is undesirable. I work on core machine learning algorithms and code most of them the way I want. And I have seen a lot of PhD people with the wrong mindset. PhDs are mostly motivated by pure theoretical problems, unlike in industry, where the focus is on working solutions in quick time. | Having a job in data-mining without a PhD
I don't think that a PhD is required for any machine learning position. A good master's and an inquisitive mind with mathematical curiosity are all that is needed. A PhD biases your approach towards your sp |
2,998 | Having a job in data-mining without a PhD | People who look down on PhD training either don't know what a PhD means at all, or just intentionally make untrue comments; most master's training cannot compare with PhD training by any means. The intensity and rigor of PhD training require unimaginable dedication, self-discipline, the ability to learn under great pressure, and solid skill sets; a PhD title already proves all of those, and a regular master's degree here in America is simply not at the same level. | Having a job in data-mining without a PhD | People who look down on PhD training either don't know what a PhD means at all, or just intentionally make untrue comments; most master's training cannot compare with PhD training by any means. The intens | Having a job in data-mining without a PhD
People who look down on PhD training either don't know what a PhD means at all, or just intentionally make untrue comments; most master's training cannot compare with PhD training by any means. The intensity and rigor of PhD training require unimaginable dedication, self-discipline, the ability to learn under great pressure, and solid skill sets; a PhD title already proves all of those, and a regular master's degree here in America is simply not at the same level. | Having a job in data-mining without a PhD
People who look down on PhD training either don't know what a PhD means at all, or just intentionally make untrue comments; most master's training cannot compare with PhD training by any means. The intens |
2,999 | Which activation function for output layer? | Regression: linear (because values are unbounded)
Classification: softmax (simple sigmoid works too but softmax works better)
Use simple sigmoid only if your output admits multiple "true" answers, for instance, a network that checks for the presence of various objects in an image. In other words, the output is not a probability distribution (does not need to sum to 1). | Which activation function for output layer? | Regression: linear (because values are unbounded)
Classification: softmax (simple sigmoid works too but softmax works better)
Use simple sigmoid only if your output admits multiple "true" answers, fo | Which activation function for output layer?
Regression: linear (because values are unbounded)
Classification: softmax (simple sigmoid works too but softmax works better)
Use simple sigmoid only if your output admits multiple "true" answers, for instance, a network that checks for the presence of various objects in an image. In other words, the output is not a probability distribution (does not need to sum to 1). | Which activation function for output layer?
Regression: linear (because values are unbounded)
Classification: softmax (simple sigmoid works too but softmax works better)
Use simple sigmoid only if your output admits multiple "true" answers, fo |
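To make the three cases in the answer above concrete, here is a minimal sketch in R (added for illustration; the function names are made up, and z stands for the vector of raw outputs, i.e. logits, of the last layer):

# Regression: identity/linear output, values stay unbounded.
linear_out  <- function(z) z

# Multi-class classification: softmax turns the logits into a probability
# distribution that sums to 1 (subtracting max(z) for numerical stability).
softmax_out <- function(z) { e <- exp(z - max(z)); e / sum(e) }

# Multi-label ("multiple true answers"): an independent sigmoid per output,
# so each output lies in (0, 1) but the outputs need not sum to 1.
sigmoid_out <- function(z) 1 / (1 + exp(-z))

z <- c(2.0, -1.0, 0.5)
sum(softmax_out(z))   # 1
sum(sigmoid_out(z))   # generally not 1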
3,000 | Which activation function for output layer? | I might be late to the party, but it seems that there are some things that need to be cleared out here.
First of all: the activation function $g(x)$ at the output layer often depends on your cost function. This is done to make the derivative $\frac{\partial C}{\partial z}$ of the cost function $C$ with respect to the inputs $z$ at the last layer easy to compute.
As an example, we could use the mean squared error loss $C(y, g(z)) = \frac{1}{2} (y - g(z))^2$ in a regression setting. By setting $g(x) = x$ (linear activation function), we find for the derivative
$$\begin{align*}
\frac{\partial C(y,g(z))}{\partial z} & = \frac{\partial C(y, g(z))}{\partial g(z)} \cdot \frac{\partial g(z)}{\partial z} \\
& = \frac{\partial}{\partial g(z)}\left(\frac{1}{2} (y - g(z))^2\right) \cdot \frac{\partial}{\partial z}\left(z\right) \\
& = - (y-g(z)) \cdot 1 \\
& = g(z) - y
\end{align*}$$
You get the same, easy expression for $\frac{\partial C}{\partial z}$ if you combine cross-entropy loss with the logistic sigmoid or softmax activation functions.
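For instance (this derivation is added here as a check, using the same notation as above), with the binary cross-entropy loss $C(y, g(z)) = -\left[y \ln g(z) + (1-y) \ln\left(1-g(z)\right)\right]$ and the logistic sigmoid $g(z) = \frac{1}{1+e^{-z}}$, for which $g'(z) = g(z)\left(1-g(z)\right)$:
$$\begin{align*}
\frac{\partial C(y,g(z))}{\partial z} & = \frac{\partial C(y, g(z))}{\partial g(z)} \cdot \frac{\partial g(z)}{\partial z} \\
& = -\left(\frac{y}{g(z)} - \frac{1-y}{1-g(z)}\right) \cdot g(z)\left(1-g(z)\right) \\
& = -\left(y\left(1-g(z)\right) - (1-y)\,g(z)\right) \\
& = g(z) - y
\end{align*}$$
which is exactly the same simple form as in the squared-error/linear case.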
This is the reason why linear activations are often used for regression and logistic/softmax activations for binary/multi-class classification. However, nothing keeps you from trying out different combinations. Although the expression for $\frac{\partial C}{\partial z}$ will probably not be as nice, that does not imply that your activation function will perform worse.
Second, I would like to add that there are plenty of activation functions that can be used for the hidden layers. Sigmoids (like the logistic function and hyperbolic tangent) have proven to work well indeed, but as indicated by Jatin, these suffer from vanishing gradients when your networks become too deep. In that case, ReLUs have become popular. What I would like to emphasise, though, is that there are plenty more activation functions available, and different researchers keep looking for new ones (e.g. Exponential Linear Units (ELUs), Gaussian Error Linear Units (GELUs), ...) with different/better properties.
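As a small illustration (an added sketch, not part of the original answer; the GELU here uses the common tanh approximation rather than the exact Gaussian CDF form), these hidden-layer activations are each only a line of R:

relu <- function(x) pmax(0, x)                                          # zero for negative inputs
elu  <- function(x, alpha = 1) ifelse(x > 0, x, alpha * (exp(x) - 1))   # smooth, saturates at -alpha
gelu <- function(x) 0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3)))  # tanh approximation

x <- seq(-3, 3, length.out = 7)
rbind(relu = relu(x), elu = elu(x), gelu = gelu(x))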
To conclude: When looking for the best activation functions, just be creative. Try out different things and see what combinations lead to the best performance.
Addendum: For more pairs of loss functions and activations, you probably want to look for (canonical) link functions | Which activation function for output layer? | I might be late to the party, but it seems that there are some things that need to be cleared out here.
First of all: the activation function $g(x)$ at the output layer often depends on your cost func | Which activation function for output layer?
I might be late to the party, but it seems that there are some things that need to be cleared out here.
First of all: the activation function $g(x)$ at the output layer often depends on your cost function. This is done to make the derivative $\frac{\partial C}{\partial z}$ of the cost function $C$ with respect to the inputs $z$ at the last layer easy to compute.
As an example, we could use the mean squared error loss $C(y, g(z)) = \frac{1}{2} (y - g(z))^2$ in a regression setting. By setting $g(x) = x$ (linear activation function), we find for the derivative
$$\begin{align*}
\frac{\partial C(y,g(z))}{\partial z} & = \frac{\partial C(y, g(z))}{\partial g(z)} \cdot \frac{\partial g(z)}{\partial z} \\
& = \frac{\partial}{\partial g(z)}\left(\frac{1}{2} (y - g(z))^2\right) \cdot \frac{\partial}{\partial z}\left(z\right) \\
& = - (y-g(z)) \cdot 1 \\
& = g(z) - y
\end{align*}$$
You get the same, easy expression for $\frac{\partial C}{\partial z}$ if you combine cross-entropy loss with the logistic sigmoid or softmax activation functions.
This is the reason why linear activations are often used for regression and logistic/softmax activations for binary/multi-class classification. However, nothing keeps you from trying out different combinations. Although the expression for $\frac{\partial C}{\partial z}$ will probably not be as nice, that does not imply that your activation function will perform worse.
Second, I would like to add that there are plenty of activation functions that can be used for the hidden layers. Sigmoids (like the logistic function and hyperbolic tangent) have proven to work well indeed, but as indicated by Jatin, these suffer from vanishing gradients when your networks become too deep. In that case, ReLUs have become popular. What I would like to emphasise, though, is that there are plenty more activation functions available, and different researchers keep looking for new ones (e.g. Exponential Linear Units (ELUs), Gaussian Error Linear Units (GELUs), ...) with different/better properties.
To conclude: When looking for the best activation functions, just be creative. Try out different things and see what combinations lead to the best performance.
Addendum: For more pairs of loss functions and activations, you probably want to look for (canonical) link functions | Which activation function for output layer?
I might be late to the party, but it seems that there are some things that need to be cleared out here.
First of all: the activation function $g(x)$ at the output layer often depends on your cost func |