Dataset columns: idx (int64, 1 to 56k); question (string, lengths 15 to 155); answer (string, lengths 2 to 29.2k); question_cut (string, lengths 15 to 100); answer_cut (string, lengths 2 to 200); conversation (string, lengths 47 to 29.3k); conversation_cut (string, lengths 47 to 301).
49,701
Formula of the Chebyshev's inequality for an asymmetric interval
The reference for the formula for Chebyshev's inequality for the asymmetric two-sided case, $$ \mathrm{Pr}( l < X < h ) \ge \frac{ 4 [ ( \mu - l )( h - \mu ) - \sigma^2 ] }{ ( h - l )^2 } , $$ points to the paper by Steliga and Szynal (2010). I've done some further research: Steliga and Szynal cite Ferentinos (1982), and it turns out that Ferentinos states that the above formula is part of Selberg's inequality (1940), which I could not access (and it's written in German). Nevertheless, Ferentinos provides enough information to understand what is happening. Ferentinos states that $$ \mathrm{Pr}( l < X < h ) \ge \begin{cases} \displaystyle \frac{ 4 [ ( \mu - l )( h - \mu ) - \sigma^2 ] }{ ( h - l )^2 } & \text{if } (\mu - l) (h - \mu) \geq \sigma^2 \text{ and } (\mu - l) (h - \mu) - k^2 \leq 2 \sigma^2 , \\ \displaystyle \frac{k^2}{\sigma^2 + k^2} & \text{if } (\mu - l) (h - \mu) - k^2 \geq 2 \sigma^2 , \end{cases} $$ where $k = \min(\mu-l, h-\mu)$ and $l < \mu < h$.

Therefore, provided $\mu-l$ and $h-\mu$ are large enough, but not too large, the top formula applies; when either $\mu-l$ or $h-\mu$ is too large, the second expression should be used. Hence, this second expression is the relevant one for the examples in the question. Specifically, for the first example we get $$\mathrm{Pr}( -1 < X < h ) \ge \frac{1}{2} \ , $$ and for the second example we get $$\mathrm{Pr}( l < X < h ) \ge \frac{l^2}{l^2+1} \ . $$

As I was writing this text, the answer by Ben was posted, which provided me further insight.
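For concreteness, here is a small R helper that evaluates the two cases quoted above. It is my own sketch of the Ferentinos/Selberg bound, not code from the cited papers, and the final call assumes the question's first example uses $\mu = 0$, $\sigma = 1$, $l = -1$ and a large $h$.

# sketch of the two-case bound quoted above
chebyshev_asym <- function(l, h, mu, sigma) {
  stopifnot(l < mu, mu < h)
  a <- mu - l
  b <- h - mu
  k <- min(a, b)
  if (a * b >= sigma^2 && a * b - k^2 <= 2 * sigma^2) {
    4 * (a * b - sigma^2) / (h - l)^2
  } else if (a * b - k^2 >= 2 * sigma^2) {
    k^2 / (sigma^2 + k^2)
  } else {
    NA  # neither stated condition holds
  }
}

# first example: mu = 0, sigma = 1, l = -1, h large; the second case gives 1/2
chebyshev_asym(l = -1, h = 100, mu = 0, sigma = 1)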
49,702
Bootstrap hypothesis test for median of differences
Bonett & Price (2002) proposed a closed-form solution for this, so not requiring bootstrapping. At the very least, you could compare this to your bootstrapping to see how often the two methods agree. Forgive me for the verbose code; I added a lot of comments and did it step-by-step for the sake of readability.

# test from table 3 of b&p 2002
x1 <- c(77, 87, 88, 114, 151, 210, 219, 246, 253, 262, 296, 299, 306, 376, 428, 515, 666, 1310, 2611)
x2 <- c(59, 106, 174, 207, 219, 237, 313, 365, 458, 497, 515, 529, 557, 615, 625, 645, 973, 1065, 3215)

# sort vectors
x1 <- sort(x1)
x2 <- sort(x2)

# get medians
x1_mdn <- median(x1)
x2_mdn <- median(x2)

# quantities needed to calculate the variance of the medians
x1_n <- length(x1)
x2_n <- length(x2)
x1_aj <- round((x1_n + 1) / 2 - x1_n ^ (1 / 2))
x2_aj <- round((x2_n + 1) / 2 - x2_n ^ (1 / 2))
z <- 1.855 # from table 1 of b&p 2002, see p. 376

# calculate variances
x1_var <- ((x1[x1_n - x1_aj + 1] - x1[x1_aj]) / (2 * z)) ^ 2
x2_var <- ((x2[x2_n - x2_aj + 1] - x2[x2_aj]) / (2 * z)) ^ 2

# contrast coefficients, such that it's median(d) - median(dg)
x1_cj <- 1
x2_cj <- -1

# median difference
mdn_diff <- x1_mdn * x1_cj + x2_mdn * x2_cj

# standard error
mdn_diff_se <- (((x1_cj ^ 2) * x1_var) + ((x2_cj ^ 2) * x2_var)) ^ (1 / 2)

# 95% confidence interval
lb <- mdn_diff - 1.96 * mdn_diff_se
ub <- mdn_diff + 1.96 * mdn_diff_se

# within rounding error of p. 376 of b&p 2002
paste0(mdn_diff, " [", round(lb), ", ", round(ub), "]")

Reference
Bonett, D. G., & Price, R. M. (2002). Statistical inference for a linear function of medians: Confidence intervals, hypothesis testing, and sample size requirements. Psychological Methods, 7(3), 370–383. doi:10.1037/1082-989x.7.3.370
49,703
Proper syntax for coding group level variables in mixed effect model using GLMER
There is often confusion about "the level at which a variable varies" in mixed effects models, when someone comes from a multilevel background and/or has used software such as HLM or MLwiN where (from what I remember) it is necessary to specify the level at which a variable varies. This is not the case with mixed effects models. The software "knows" the level at which a variable varies, and all that is needed is to include the variable in the fixed effects part of the formula. The model formula you used looks absolutely fine.
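As a minimal sketch of this point (with made-up variable names, not those from your model): x_ind varies within groups, x_grp is constant within each group, and both simply go into the fixed-effects part of an lme4::glmer call.

library(lme4)

set.seed(1)
n_groups <- 30
n_per    <- 20
g     <- factor(rep(seq_len(n_groups), each = n_per))
x_ind <- rnorm(n_groups * n_per)           # individual-level predictor
x_grp <- rnorm(n_groups)[g]                # group-level predictor (constant within g)
eta   <- -0.5 + 0.8 * x_ind + 0.6 * x_grp + rnorm(n_groups, 0, 0.5)[g]
y     <- rbinom(n_groups * n_per, 1, plogis(eta))
dat   <- data.frame(y, x_ind, x_grp, g)

# no special syntax is needed for the group-level variable x_grp
m <- glmer(y ~ x_ind + x_grp + (1 | g), family = binomial, data = dat)
summary(m)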
49,704
Is there a bootstrap 're-sampling the residuals' equivalent for GLM?
Yes. The generalization is most clear if you think of linear regression not with an error term, but as a model for the conditional distribution of $Y \mid X$: $$ Y \mid X \sim Normal(X \beta, \sigma) $$ When using the parametric bootstrap, we can think of our new $y_i$'s as samples from these conditional distributions, one for each $x_i$. This generalizes directly to generalized linear models. For example, logistic regression: $$ Y \mid X \sim Bernoulli \left( p=\frac{1}{1 + e^{-X \beta}} \right) $$ or Poisson regression: $$ Y \mid X \sim Poisson(\lambda = e^{X \beta}) $$ In each case the parametric bootstrap is the same: we sample a new $y_i$ from the estimated conditional distribution of $Y \mid x_i$. In the case of linear regression, this is mathematically equivalent to sampling from an error distribution and then adding on the linear predictor, but this error-term distribution fails to generalize past the linear regression case (and, I'd argue, this makes the error term description of linear regression somewhat inferior to the conditional distribution description).
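A minimal sketch of what this looks like in R for a Poisson regression (the simulated data and variable names are my own, purely for illustration): each bootstrap replicate draws new responses from the fitted conditional distributions and refits the model.

set.seed(1)
dat <- data.frame(x = runif(200))
dat$y <- rpois(200, lambda = exp(0.5 + 1.2 * dat$x))

fit <- glm(y ~ x, family = poisson, data = dat)

B <- 1000
boot_coefs <- replicate(B, {
  d <- dat
  # draw new y_i from the estimated conditional distribution of Y | x_i
  d$y <- rpois(nrow(d), lambda = fitted(fit))
  coef(glm(y ~ x, family = poisson, data = d))
})

apply(boot_coefs, 1, sd)   # parametric bootstrap standard errors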
49,705
What's the added value of SD line over regression line when examining association between 2 variables?
The SD line is a didactic and visual aid that helps to see the relation for the slope of the regular regression line: $$\text {slope regression } = r_{xy} \, \frac {\sigma_y}{\sigma_x} = r_{xy} \, \text {slope SD line} $$ The SD line shows how x and y are varying, and this can give a more or less steep or flat line depending on the ratio $ \frac {\sigma_y}{\sigma_x}$. The regression line will always have a smaller slope than the SD line (you might relate this to regression to the mean). By how much smaller depends on the correlation. The SD line helps to see and get this view/interpretation of the regression line. The higher $R^2$, the more the model explains the variance in the data, and the closer the regression line will be to the SD line.

The image below may illustrate how the SD line helps/works. For data with $\sigma_x = \sigma_y = 1$ but with different correlations, the SD line and the regression line are drawn. Note that the regression line is closer to the SD line for larger correlations (but still always with a smaller slope).

# random data
set.seed(1)
x <- rnorm(100, 0, 1)
y <- rnorm(100, 0, 1)

# normalizing
x <- (x - mean(x)) / sd(x)
y <- (y - mean(y)) / sd(y)

# making x and y uncorrelated
x <- x - cor(x, y) * y
cor(x, y)
x <- x / sd(x)

# plotting cases with sd_x = sd_y = 1 and different correlations
for (rho in c(0.1, 0.3, 0.5, 0.7)) {
  b <- sqrt(1 / (1 - rho^2) - 1)
  z <- (y + b * x) / sqrt(1 + b^2)
  plot(x, z, xlim = c(-5, 5), ylim = c(-5, 5), pch = 21, col = 1, bg = 1, cex = 0.7)
  title(bquote(rho == .(rho)), line = 1)
  lines(c(-10, 10), c(-10, 10), lty = 2)       # SD line
  lines(c(-10, 10), c(-10, 10) * rho)          # regression line
  if (rho == 0.1) {
    legend(-5, 5, c("sd line", "regression line"), lty = c(2, 1), cex = 0.9)
  }
}

Similar descriptions:
http://www.jerrydallal.com/LHSP/regeff.htm
https://books.google.ch/books?id=fW_9BV5Wpf8C&pg=PA18 (Statistical Models: Theory and Practice by David A. Freedman)

Related: Is the average of betas from Y ~ X and X ~ Y valid?
49,706
Increasing multicollinearity in multilevel/hierarchical modeling?
Since you say:

However, the relationship between $\textbf{y}$ and $\textbf{X}$ is not strictly linear over the entire domain, but could be better modeled as such within several subgroups $g$ (and coefficients are more meaningful if defined for each subgroup):

it sounds to me very much like a mixed effects model (of which multilevel and hierarchical models are special cases) with random slopes for $\textbf{X}$ in subgroups $g$. This will have the general form: $$y = \textbf{X}\beta+\textbf{Z}u+e$$ where $\beta$ is a vector of fixed effects, $X$ and $Z$ are model matrices for the fixed effects and random effects respectively, and $u$ and $e$ are vectors of random effects such that $E(u) = E(e) = 0$.

In R you could fit such a model with, for example:

m <- func(y ~ X1 + X2 + (X1 + X2 | g), ...)

where func will be the relevant function from whatever package you choose, e.g., lme4 or GLMMAdaptive. Note that some packages, e.g., nlme, use different syntax. This will estimate fixed effects (slopes) for X1 and X2, random slopes for X1 and X2, and random intercepts for each group. If you do not want random intercepts - i.e., you wish to allow the slopes to vary by group, but all pass through the same point on the y axis - then you would use:

m <- func(y ~ X1 + X2 + (X1 + X2 + 0 | g), ...)
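To make this concrete, here is a runnable sketch with lme4 and simulated data (the data, variable names and effect sizes are made up for illustration):

library(lme4)

set.seed(42)
n_groups <- 10
n_per    <- 50
g  <- factor(rep(seq_len(n_groups), each = n_per))
X1 <- rnorm(n_groups * n_per)
X2 <- rnorm(n_groups * n_per)
# group-specific deviations around the fixed slopes 2 and -1
b1 <- rnorm(n_groups, 0, 0.5)[g]
b2 <- rnorm(n_groups, 0, 0.5)[g]
y  <- 1 + (2 + b1) * X1 + (-1 + b2) * X2 + rnorm(n_groups * n_per)
dat <- data.frame(y, X1, X2, g)

# random intercepts and random slopes for X1 and X2 within g
m1 <- lmer(y ~ X1 + X2 + (X1 + X2 | g), data = dat)

# random slopes only, no random intercepts
m2 <- lmer(y ~ X1 + X2 + (X1 + X2 + 0 | g), data = dat)

summary(m1)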
49,707
How to statistically compare machine learning "regression" models?
Because my last answer was downvoted, I'm going to provide a full example. You don't want to compare the residuals, you want to compare losses.

Let's say that your regression looks like this. Let's compare two models on RMSE: a linear model and a generalized additive model. Clearly, the linear model will have larger loss because it has high bias and low variance. Let's take a look at the histogram of loss values. We have lots of data, so we can use the central limit theorem to help us make inference. When we have "enough" data, the sampling distribution for the mean is normal with expectation equal to the population mean and standard deviation $\sigma/\sqrt{n}$. So all we have to do is perform a t test on the loss values (and not the residuals), and that will allow us to determine which model has smaller expected loss. Using the data I generated:

> t.test(loss2, loss1)

	Welch Two Sample t-test

data:  loss2 and loss1
t = -7.8795, df = 1955, p-value = 5.408e-15
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.2717431 -0.1634306
sample estimates:
mean of x mean of y 
0.3761796 0.5937665 

The mean loss of the gam model is 0.37 while the mean loss of the linear model is 0.6. The t test tells us that if the sampling distributions of the mean had the same expectation (that is, if the losses for the two models were the same), then a difference in means this large would be incredibly unlikely to observe by chance alone. Thus, we reject the null. A paired method might help, but usually we have so much data that the loss in power is really not a problem. Does that clarify things?
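The original data-generating code is not shown above; the following is my own minimal sketch of the same kind of comparison (a linear model versus a GAM via mgcv, per-observation squared-error losses on held-out data, then a t test), so the numbers will not reproduce the output quoted above.

library(mgcv)

set.seed(1)
n <- 2000
x <- runif(n, -3, 3)
y <- sin(2 * x) + rnorm(n, sd = 0.5)     # clearly nonlinear signal

train  <- sample(n, n / 2)
dat_tr <- data.frame(x = x[train],  y = y[train])
dat_te <- data.frame(x = x[-train], y = y[-train])

fit_lm  <- lm(y ~ x, data = dat_tr)
fit_gam <- gam(y ~ s(x), data = dat_tr)

# per-observation squared-error losses on the held-out data
loss1 <- (dat_te$y - predict(fit_lm,  dat_te))^2   # linear model
loss2 <- (dat_te$y - predict(fit_gam, dat_te))^2   # GAM

t.test(loss2, loss1)      # Welch t test on the losses
t.test(loss2 - loss1)     # paired version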
49,708
How to statistically compare machine learning "regression" models?
The usual way of evaluating a regressor in machine learning is by evaluating the error on a different dataset from the one used in training. I will assume that your sentence "All algorithms are trained and evaluated on the exact same dataset." means that there are 2 different data sets, one for training and one for testing, and they are the same for both regressors (XGBoost and NN, for example). If you are using only one dataset for both training and testing (which is usual for people who use linear regression and talk about Q-Q plots of residuals), please DON'T. Split your single dataset into, say, 80% training and 20% testing.

The distributions of the residuals (which in machine learning are similar to errors - more on this below) on the testing set are irrelevant. There is no assumption that errors/residuals for the regressor are normal, so Q-Q plots are irrelevant. And so on. The main difference between residuals and errors is that errors are always positive - from a prediction point of view, predicting a y that is 5 above or below the correct value is equally wrong. There are two usual transformations from residuals to errors - the MSE and MAE referred to by user11852: MSE takes the square of the residuals and MAE takes the absolute value.

Now you have two sets of measures of ERRORS (be it MAE or MSE or others) and you want the regressor with the least mean or median error. Errors are seldom normally distributed, so a non-parametric test is the more canonical approach, but if you have thousands of data points in your test data set, then the central limit theorem as referred to by Demetri Pananos applies, so you can use a parametric test.

Finally, there is an important point. The two sets of error measures (for each regressor) are PAIRED - that is, each measure in one set has a corresponding measure in the other set. So you should use a paired test for that.

To summarize: a) use 2 data sets, b) compute the error from the residuals - use the square transform, it is more frequently used, c) choose the regressor with the least median error, and d) if you want to argue that the one you chose is "really" (or significantly) better than the other one, then use the Wilcoxon signed rank test (the paired version of the Wilcoxon rank-sum).
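In R, the paired test in step d) is a single call; the error vectors below are placeholders standing in for the per-observation test-set errors of your two regressors (same test set, same order):

set.seed(1)
# placeholder error vectors; in practice these would be (y_test - pred)^2 or abs(y_test - pred)
err_xgb <- rexp(500, rate = 2)
err_nn  <- err_xgb * runif(500, 0.8, 1.3)   # correlated with err_xgb, as paired errors usually are

median(err_xgb)
median(err_nn)

# paired, non-parametric comparison of the two regressors' errors
wilcox.test(err_xgb, err_nn, paired = TRUE)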
49,709
Create Spline from Coefficients and Knots in GAMLSS
The pb() function fits P-splines as described by Eilers and Marx (1996): B-splines on equally spaced knots with finite difference penalties. In the same paper there are some code chunks that show how to fit the model (in the appendix, if I remember well). I do not know the details of the fitted.gamlss method, but the code below should be helpful (look at the blue line in the plot). To compute the B-spline bases, I use the function splineDesign from the splines package (the same function is used in the reference I mentioned above, if I remember well). To get $\hat{y}$ for a new value of $x$, you can just compute the corresponding value of the splineDesign basis and use the coefficients estimated before (see the last lines of the code and the green dot).

library(splines)

# B-spline stuff - observed xs (xs, intercept, weight and coefficients come from your fitted model)
ndx = 20
deg = 3
xr = max(xs)
xl = min(xs)
xmax = xr + 0.01 * (xr - xl)
xmin = xl - 0.01 * (xr - xl)
dt = (xmax - xmin) / ndx
knots = seq(xmin - deg * dt, xmax + deg * dt, by = dt)
B = splineDesign(knots = knots, x = xs, ord = deg + 1, derivs = 0, outer.ok = TRUE)

# Compute smooth
ff = intercept + weight * xs + B %*% coefficients

# New x-value
xn = 35
Bn = splineDesign(knots = knots, x = xn, ord = deg + 1, derivs = 0, outer.ok = TRUE)
fn = intercept + weight * xn + Bn %*% coefficients

# Plot results
lines(xs[order(xs)], ff[order(xs)], col = 'blue', lty = 2, lwd = 2)
points(xn, fn, col = 'green', pch = 16, cex = 1.5)
49,710
Create Spline from Coefficients and Knots in GAMLSS
Here's a function call to generate new $y$ values for any $x$:

predict(m1, newdata = data.frame(xs = new_x), data = data.frame(xs = xs))

Note that my xs comes from the xs in your example. In general, you'll need to modify this to provide the original data to the predict function, with the original names.
49,711
L1-regularization enforces sparsity for convex functions
Regarding your question about general convex functions, you will get a sparse solution given that you apply a sparsity-inducing norm (the $\ell_1$ norm being one such norm). For further information, read up to and including section 1.3 here: https://hal.archives-ouvertes.fr/hal-00613125v1/document

In general (taken from the link): if you have a convex optimization problem like $$\min_{\omega \in \mathbb{R}^p} f(\mathbf{\omega}) + \lambda \Omega(\mathbf{\omega})$$ where $f:\mathbb{R}^p \rightarrow \mathbb{R}$ is a convex differentiable function and $\Omega:\mathbb{R}^p \rightarrow\mathbb{R}$ is a sparsity-inducing norm, you will get a sparse solution. You can see it if you write the problem as a constrained problem: $$\min_{\omega \in \mathbb{R}^p} f(\mathbf{\omega}) \quad \textrm{such that} \quad \Omega(\mathbf{\omega}) \leq \mu, \\ \textrm{for some}\;\mu \in \mathbb{R}_{+}$$ At optimality, the gradient of $f$ evaluated at any solution $\hat{\mathbf{\omega}}$ of the above equation is known to belong to the normal cone of the ball $\mathcal{B} =\{\mathbf{\omega} \in \mathbb{R}^p; \; \Omega(\mathbf{\omega}) \leq \mu\}$ at $\hat{\mathbf{\omega}}$. In other words, for sufficiently small values of $\mu$, i.e., so that the constraint is active, the level set of $f$ for the value $f(\hat{\mathbf{\omega}})$ is tangent to $\mathcal{B}$. As a consequence, the geometry of the ball $\mathcal{B}$ is directly related to the properties of the solutions $\hat{\mathbf{\omega}}$. When $\Omega$ is the $\ell_1$-norm, for example, $\mathcal{B}$ corresponds to a diamond-shaped pattern in two dimensions, and to a pyramid in three dimensions. In particular, $\mathcal{B}$ is anisotropic and exhibits some singular points due to the non-smoothness of $\Omega$. These singular points are located along the axes of $\mathbb{R}^p$, so that if the level set of $f$ happens to be tangent at one of these points, sparse solutions are obtained.

Francis Bach, Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski. Optimization with Sparsity-Inducing Penalties. 2011. hal-00613125v1
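A quick numerical illustration of this statement, using glmnet (my own example, not from the cited paper): with the same regularization strength, the $\ell_1$ penalty zeroes out most coefficients while the $\ell_2$ penalty does not.

library(glmnet)

set.seed(1)
n <- 100
p <- 20
X <- matrix(rnorm(n * p), n, p)
beta <- c(3, -2, 1.5, rep(0, p - 3))       # only 3 truly nonzero coefficients
y <- X %*% beta + rnorm(n)

fit_l1 <- glmnet(X, y, alpha = 1)          # lasso: l1 penalty (sparsity-inducing)
fit_l2 <- glmnet(X, y, alpha = 0)          # ridge: l2 penalty (not sparsity-inducing)

lam <- 0.5
sum(coef(fit_l1, s = lam)[-1] != 0)        # few nonzero coefficients
sum(coef(fit_l2, s = lam)[-1] != 0)        # essentially all nonzero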
49,712
L1-regularization enforces sparsity for convex functions
No, $f(x) = (|x|-1)^2$ is convex, and has an infinite number of zeros.
49,713
Can you delete the missing values before the analysis of a time series dataset?
On the one hand, you should not be deleting anything. Information is power. On the other hand, I am highly skeptical about your ability to perform missing data imputation for the missing river-days. It is not clear how different rivers are related, and missing data imputation adds value only if different variables (rivers) are highly co-dependent.

What you can do, however, is write a parametric model for all the rivers. In this model, different rivers are allowed to be correlated, and any particular river is described by, say, an ARIMA process with its own parameters. Then you can estimate the model using the version of Full Information Maximum Likelihood (FIML) for time series. You would need to look up a standard reference on FIML and read it carefully. In a nutshell, FIML does the following. The log-likelihood function has only terms corresponding to non-missing values. All non-missing values are used in at least one term (of the log-likelihood). Suppose, in time series $X_j$, the values $X_j(t)$ and $X_j(t+3)$ are observed but the values $X_j(t+1)$ and $X_j(t+2)$ are missing. Then the log-likelihood function contains the term $$ ... + \log(f_{M,3}(X_j(t+3) \mid X_j(t))) + ... $$ where $f_{M,3}(X_j(t+3) \mid X_j(t))$ is the model-implied conditional density of $X_j(t+3)$ given $X_j(t)$. You see that we are skipping the missing cells $X_j(t+1)$ and $X_j(t+2)$.
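For a single series, base R's arima() already works in this spirit: the Gaussian likelihood is evaluated in state-space form by the Kalman filter, and missing time points simply contribute no term. The multivariate, correlated-rivers model described above would need a dedicated FIML or state-space implementation, but the one-series case gives the idea:

set.seed(1)
x <- arima.sim(model = list(ar = 0.7), n = 200)
x[c(50:55, 120)] <- NA                          # some missing river-days

# exact maximum likelihood via the Kalman filter; the NA time points are skipped
fit <- arima(x, order = c(1, 0, 0), method = "ML")
fit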
49,714
Can you delete the missing values before the analysis of a time series dataset?
I would leave all of the data in, but then I would dig through to make sure there were no patterns that could mislead my analysis (e.g., that all of the new rivers are smaller because the state has been adding monitoring sites but already had the major rivers covered). The other caveat is that you may need to use precipitation in the preceding X days as a predictor. In that case, you would need to delete the first X days each time observations start, or be able to determine the precipitation from some other source (e.g., if it comes from a local weather station that was recording, even though the river monitor was not).
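A small sketch of building such a "precipitation in the preceding X days" predictor for one river (the column names are made up; adapt them to your data). The first X rows have no complete history and would either be dropped or filled from another source, as described above.

set.seed(1)
X <- 7
dat <- data.frame(date   = seq(as.Date("2020-01-01"), by = "day", length.out = 120),
                  precip = rgamma(120, shape = 0.5, scale = 5),
                  level  = rnorm(120, 10, 2))

# rolling sum over the current and previous X - 1 days, then lag by one day so the
# predictor covers the X days strictly before each observation
roll <- stats::filter(dat$precip, rep(1, X), sides = 1)
dat$precip_Xd <- c(NA, head(as.numeric(roll), -1))

head(dat, X + 2)   # the first X rows of precip_Xd are NA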
49,715
Unbiased estimator of the ratio of variances
Assuming the two samples are independent of each other. Use the fact that $\frac{(m-1)S_1^2}{\sigma_1^2}\sim \chi^2_{m-1}$ and $\frac{(n-1)S_2^2}{\sigma_2^2}\sim \chi^2_{n-1}$, where $S_1^2=\frac1{m-1}\sum\limits_{i=1}^m (X_i-\overline X)^2$ and $S_2^2=\frac1{n-1}\sum\limits_{i=1}^n (Y_i-\overline Y)^2$ are the sample variances. Since the samples are independent, so are $S_1^2$ and $S_2^2$. A reasonable guess for an estimator of the ratio of variances is the ratio of the sample variances. Getting unbiasedness from there is simple. We already have $E(S_2^2)=\sigma_2^2$. So find an unbiased estimator of $1/\sigma_1^2$ based on $S_1^2$. For that, start with $E[1/S_1^2]$ to get $E\left[\frac{c}{S_1^2}\right]=\frac{1}{\sigma_1^2}$ for some constant $c$. Use the independence of $S_1^2$ and $S_2^2$ to eventually get $$E\left[\frac{cS_2^2}{S_1^2}\right]=E\left[S_2^2\right]E\left[\frac{c}{S_1^2}\right]=\frac{\sigma_2^2}{\sigma_1^2}$$ The easier way to do this is to construct an F distribution from the independent chi-square distributions. We have $$\frac{(n-1)S_2^2/(\sigma_2^2(n-1))}{(m-1)S_1^2/(\sigma_1^2(m-1))}=\frac{S_2^2/\sigma_2^2}{S_1^2/\sigma_1^2}\sim F_{n-1,m-1}$$ Taking expectation immediately leads to the answer.
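A quick Monte Carlo check of the expectation used in the last step (my own sketch): the ratio of sample variances is $(\sigma_2^2/\sigma_1^2)$ times an $F_{n-1,m-1}$ variable, so its mean is inflated by the factor $E[F_{n-1,m-1}] = (m-1)/(m-3)$.

set.seed(1)
m <- 10
n <- 15
sigma1 <- 2
sigma2 <- 3

ratios <- replicate(1e5, {
  s1_sq <- var(rnorm(m, sd = sigma1))
  s2_sq <- var(rnorm(n, sd = sigma2))
  s2_sq / s1_sq
})

mean(ratios)                                  # Monte Carlo estimate of E[S2^2 / S1^2]
(sigma2^2 / sigma1^2) * (m - 1) / (m - 3)     # theoretical value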
49,716
Unbiased estimator of the ratio of variances
Using Slutsky's theorem from large-sample theory you will get an asymptotically unbiased estimator! Otherwise, if you want an estimator of it, find the MLEs and use the invariance property!
49,717
Viral topics. How to describe and characterize bumps in time series of tag-activity on Meta Stack Exchange
I have read your linked meta post, and it is an interesting situation. If I understand the issue correctly, there has been a proposal to create a tag specific to this particular controversy (such that all posts pertaining to the controversy would be easily identifiable together), and that proposal has been rejected. Instead, you have identified a number of different tags that pertain to certain aspects of the controversy (e.g., moderators, community, etiquette, code-of-conduct, etc.), and those tags have been used inconsistently on posts relating to the controversy.

Notwithstanding the rejection of the proposal for a specific tag for this controversy, I see no reason, in principle, that would prevent you from going through all the posts that might potentially be related to the controversy (e.g., all posts in the relevant time period with any of the listed tags) and categorising whether they are or are not related to that controversy, according to some set criteria. This is essentially the same exercise you would do if you were re-tagging the posts with a new tag, only you don't actually get to re-tag them, since the proposal for this was rejected.

Now, essentially, this is an exercise in "qualitative data analysis", where you take a qualitative variable (a textual post) and transform it into a binary variable (or perhaps a broader categorical variable) to identify posts that meet your chosen criteria for being "precipitated by the Monica Cellio controversy". Your specific question asks, "[a]re there certain standard common ways to categorize this sort of data ... ?" The broad answer is yes, and they are contained in the literature on qualitative data analysis (specifically on the subject of categorising qualitative data). The literature on this subject is quite large, so I will not try to offer a summary here. In any case, yours seems like a fairly simple problem, insofar as it is likely to be quite clear (in most cases) whether a post is precipitated by the Monica Cellio controversy or not.

Assuming you are able to categorise the posts in this period as either "precipitated by the Monica Cellio controversy" or not, it would then be fairly simple to quantify the extent to which the observed spike in traffic is attributable to these posts, and answers on these posts. I suspect that the underlying qualitative analysis would be fairly straightforward, but it would be time-consuming, and a lot of work.
49,718
Density estimation as an optimization problem
(I'll answer the second question, about the results of your maximum likelihood with fixed differential entropy procedure; I don't know the answer to the first, about whether KDE can be phrased as optimizing some objective functional.) Without additional constraints such as bounds on the derivatives, the problem is ill-posed: you can achieve any finite likelihood with fixed differential entropy. Let $f_0$ be any density over $\Omega$ with differential entropy $h$. If $\log \lvert \Omega \rvert < h$, this is impossible and the problem is unsatisfiable; otherwise, take it to be e.g. uniform over some subset of $\Omega$ with appropriate volume. Then define $$ f(x) = \begin{cases} q & \text{if } x = x_i \text{ for some } i \\ f_0(x) & \text{otherwise} \end{cases} $$ for any finite $q$. This has likelihood $q^m$, where $m$ is the number of points $x_i$; since we've only modified $f_0$ on a set of measure zero, $f$ still has integral 1 and differential entropy $h$. So you can make the likelihood as high as you want. Of course, changing the density function on a set of measure zero is in some ways "not fair"; the distribution itself won't be altered, for example. But we can achieve essentially the same thing with continuous, or even $C^\infty$ density functions; just make a very narrow bump at each $x_i$. The next section shows formally that we can do that, but it's just this intuition in equations. Let $f_0$ be a continuous (or $C^\infty$) density of differential entropy $Q$ whose support does not include any of the $x_i$ (or a ball around them). This makes the calculations much easier, as we don't have to worry about interactions between $f_0$ and the bumps we'll put at each $x_i$, but is not fundamental to the approach. Let $\ell < \min_{i,j} \lvert x_i - x_j \rvert$ and also smaller than the distance from any $x_i$ to the support of $f_0$. Define $$f_i(x) = \frac1\ell g\left( \frac{x - x_i}{\ell} \right),$$ where $g$ is the pdf of a mean-zero distribution with continuous (or $C^\infty$) density and support on a subset of the unit ball. Many kernel functions would work, for example. Let $g(0) = g_0$ so that $f_i(x_i) = \frac1\ell g_0$, and let $H[g] = \int g(x) \log g(x) \,\mathrm d x = \eta$ so that $H[f_i] = \eta - \log \ell$. We'll take a mixture between $f_0$ and the $f_i$, for some $\alpha \in (0, 1)$ to be determined later: $$ f(x) = (1-\alpha) f_0(x) + \frac{\alpha}{m} \sum_{i=1}^m f_i(x) .$$ Then we have $$ \int f(x) \,\mathrm d x = (1 - \alpha) \int f_0(x) \,\mathrm d x + \frac{\alpha}{m} \sum_{i=1}^m \int f_i(x) \,\mathrm d x = 1 ,$$ and $$L(f) = \left( \frac{\alpha}{m \ell} g_0 \right)^m = \left( \frac{g_0}{m} \right)^m \left( \frac{\alpha}{\ell} \right)^m .$$ The first term is constant; we want to choose $\alpha$, $\ell$ so as to maximize $\alpha / \ell$ and hence $L(f)$. 
Our constraint on $\alpha$ and $\ell$ is determined by \begin{align} \int f(x) \log f(x) \,\mathrm{d}x &= \int (1-\alpha) f_0(x) \log\left( (1-\alpha) f_0(x) \right) \,\mathrm{d}x + \sum_{i=1}^m \int \frac{\alpha}{m} f_i(x) \log\left( \frac{\alpha}{m} f_i(x) \right) \mathrm d x \\&= (1-\alpha) H[f_0] + (1-\alpha) \log(1 - \alpha) + \frac\alpha m \sum_{i=1}^m H[f_i] + \frac\alpha m \sum_{i=1}^m \log \frac\alpha m \\&= (1-\alpha) Q + \alpha \eta - \alpha \log \ell + (1-\alpha) \log(1 - \alpha) + \alpha \log \alpha - \alpha \log m .\end{align} Setting this equal to $h$, dividing both sides by $\alpha$, and moving some terms, we get $$ \frac{Q - h}{\alpha} + \frac{1-\alpha}{\alpha} \log(1 - \alpha) + \log \frac\alpha\ell = Q - \eta + \log m .$$ We could directly solve this for $\ell$, but the expression is a little messy and we don't care about the particular value of $\ell$. It's sufficient to note that as we drive $\ell \to 0$, we'll have $\alpha \to 0$ as well; we care about the relative rate. We can use $$\log(1-\alpha) = - \alpha - \mathcal O(\alpha^2)$$ so that $$ \frac{1-\alpha}{\alpha} \log(1-\alpha) = - 1 + \alpha - \mathcal O(\alpha) = -1 \pm \mathcal O(\alpha) .$$ Thus $$ \frac{Q - h}{\alpha} \pm \mathcal O(\alpha) + \log \frac\alpha\ell = 1 + Q + \log m - \eta $$ and so $$ \frac\alpha\ell = \underbrace{\exp\left( 1 + Q + \log m - \eta \right)}_\text{constant} \exp\left( \frac{h - Q}{\alpha} \pm \mathcal{O}(\alpha) \right) .$$ As long as $Q < h$, we have $\frac\alpha\ell \to \infty$ as $\alpha \to 0$, so $L(f) \to \infty$ as $\alpha \to 0$. In the limit $\ell \to 0$, this converges to $f_0$ plus delta functions at each of the $x_i$ (as before). But for any finite $\ell$, we know we can balance the entropy of each nearly-a-delta-function spike with the entropy of $f_0$ appropriately so that the total is $h$. Thus, as before, for any finite target likelihood we can find a distribution with that likelihood and differential entropy $h$. A constraint such as a Lipschitz density would correspond in this setting to a constraint on $\ell$, preventing us from driving the density to infinity. It might be possible to derive the maximum-likelihood estimator among $L$-Lipschitz densities with differential entropy $h$. This might look something like: Put peaks as high as possible at each $x_i$. Decrease away from each peak uniformly with slope $L$. Stop decreasing at some base level uniform over $\Omega$. This likely would not actually be the MLE in this family, but you might be able to get a linear program or something that gives a solution kind of like this.
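As a rough numerical illustration of this construction (everything here is an arbitrary toy choice of mine: the interval, the uniform base density, the Epanechnikov kernel, the data points, and the target value of $\int f \log f$), the sketch below solves the entropy constraint for $\alpha$ at a few bump widths $\ell$ and shows $\alpha/\ell$, and hence the likelihood, blowing up as $\ell$ shrinks:

import numpy as np
from scipy.optimize import brentq

xs = np.array([1.0, 2.0, 3.0])            # data points x_i
m = len(xs)
target = -1.2                             # target value of int f log f (same sign convention as above)
grid = np.linspace(0.0, 10.0, 20001)      # fine grid over Omega = [0, 10]
dx = grid[1] - grid[0]

def f0(x):                                # base density: uniform on [5, 10], away from the x_i
    return np.where((x >= 5.0) & (x <= 10.0), 0.2, 0.0)

def kernel(u):                            # Epanechnikov kernel, support [-1, 1], g(0) = 0.75
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def mixture(x, alpha, ell):
    bumps = sum(kernel((x - xi) / ell) / ell for xi in xs) / m
    return (1.0 - alpha) * f0(x) + alpha * bumps

def f_log_f(alpha, ell):                  # numerical int f log f on the grid
    f = mixture(grid, alpha, ell)
    return np.sum(np.where(f > 0, f * np.log(f), 0.0)) * dx

for ell in [0.5, 0.1, 0.02]:
    # choose alpha in (0, 1) so that int f log f hits the target
    alpha = brentq(lambda a: f_log_f(a, ell) - target, 1e-8, 1 - 1e-8)
    loglik = np.sum(np.log(mixture(xs, alpha, ell)))
    print(f"ell={ell:5.2f}  alpha={alpha:.3f}  alpha/ell={alpha / ell:7.2f}  log-likelihood={loglik:7.2f}")

With the target held fixed, shrinking $\ell$ needs a smaller $\alpha$ but gives a larger $\alpha/\ell$, so the likelihood at the data points keeps growing, matching the derivation.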
49,719
What is really the Scale or Sigma parameter of a tobit regression?
Tobit regression assumes that there is a latent dependent variable that can take any value. It is part of a normal linear regression equation with a normally distributed error term. The standard deviation of that error term is the scale parameter. However, a tobit model assumes that this latent variable is only observed when it exceeds a specific lower bound (often 0); otherwise the observed variable gets the value of that lower bound. So, yes, the scale parameter is analogous to the standard deviation of the residuals, but you cannot get an estimate of those residuals by subtracting the predicted values from the observed dependent variable. That is probably what you did, and it would underestimate the variance, which is what you observed.
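A quick simulation of this point (the coefficients, censoring point and sample size are toy numbers of my own choosing): the standard deviation of "residuals" computed from the censored outcome understates the latent error SD, while the latent residuals recover it.

import numpy as np

rng = np.random.default_rng(0)
n, b0, b1, sigma = 10_000, -1.0, 2.0, 1.5
x = rng.uniform(0, 2, n)
y_star = b0 + b1 * x + rng.normal(0, sigma, n)   # latent outcome
y_obs = np.maximum(y_star, 0.0)                  # observed outcome, censored at 0

linpred = b0 + b1 * x                            # true linear predictor
print("latent sigma:               ", sigma)
print("sd of latent residuals:     ", np.std(y_star - linpred))  # close to sigma
print("sd of observed 'residuals': ", np.std(y_obs - linpred))   # noticeably smaller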
49,720
Isolation Forest and average/expected depth formula
I’ll answer myself with help from these other questions in the math section: https://math.stackexchange.com/questions/3333220/expected-average-depth-in-random-binary-tree-constructed-top-to-bottom https://math.stackexchange.com/questions/52572/do-harmonic-numbers-have-a-closed-form-expression From the answer above, the actual expected depth when recursively splitting a remaining sample of size $n$ at random is given by $2 (H(n) - 1)$, where $H(n)$ is the harmonic number for $n$. The paper gives the formula $2 H(n - 1) - 2 \frac{n-1}{n}$, which gives the same result (https://en.wikipedia.org/wiki/Harmonic_number#Identities_involving_harmonic_numbers). In addition, the paper uses an approximation for the harmonic number $H(n)$ which is the limit as $n$ tends to infinity (https://en.wikipedia.org/wiki/Harmonic_number#Calculation). As such, if the sample size for which the approximate depth is determined is large, the actual and approximated results will be very close, and it’s not a big deal to use it for the expected average depth (from which the scores are calculated as $2^{-\frac{E[d]}{c}}$) in the whole data. However, the paper also uses it to adjust the depth of each node that reaches the height limit, based on the sample size remaining at that node, which will be a smaller number and might be within a range in which the approximation is far from the real result; this usage of the approximation seems quite wrong to me. The paper’s approximation is very close to the true value in relative terms for large $n$ (the relative difference is on the order of $10^{-5}$ for $n$ around 5000, and below $10^{-3}$ for $n > 500$), but for $n = 4$, which is reasonable to expect when reaching a height limit, the difference is rather large ($2.166667 - 1.85 = 0.317$).
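To reproduce the $n = 4$ figures quoted above:

import numpy as np

n = 4
H = np.sum(1.0 / np.arange(1, n + 1))          # exact harmonic number H(4)
c_exact = 2 * (H - 1)                          # 2 * (H(n) - 1)
c_paper = 2 * (np.log(n - 1) + np.euler_gamma) - 2 * (n - 1) / n   # paper's approximation
print(c_exact, c_paper, c_exact - c_paper)     # about 2.167, 1.852, and a gap of roughly 0.32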
49,721
Isolation Forest and average/expected depth formula
UPDATE: Now it seems clear that there are 3 ways to calculate it:
$c(3) = 2H(3-1) - 2(3-1)/3 = 2(H(3)-1) = 2(1+1/2+1/3-1) = 1.667$ (method 1)
$c(3) = 2H(3-1) - 2(3-1)/3 = 2(\log{(2)} + 0.57721566) - 4/3 = 1.20739$ (method 2)
$c(3) = 2(H(3)-1) = 2(\log 3 + 0.57721566 - 1) = 2(1.0986 + 0.57721566 - 1) = 1.35$ (method 3)
Method 1 is the exact result. Method 2 is the approximation used in the IForest papers and implemented in scikit-learn. Method 3 is the simplified formula, with the harmonic number then approximated. When the sample size in a leaf node is 3, the exact value is $H(3) = 1+1/2+1/3 = 1.833$, while the approximation $H(3) \approx \log(3) + 0.57721566$ can be computed as np.log(3)+np.euler_gamma, which gives 1.675827. All of these differ from your computation; did I miss anything?
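The three numbers can be checked directly:

import numpy as np

H3 = 1 + 1/2 + 1/3                                                   # exact H(3)
c_method1 = 2 * (H3 - 1)                                             # exact
c_method2 = 2 * (np.log(3 - 1) + np.euler_gamma) - 2 * (3 - 1) / 3   # paper / scikit-learn
c_method3 = 2 * (np.log(3) + np.euler_gamma - 1)                     # simplified, then approximated
print(round(c_method1, 3), round(c_method2, 5), round(c_method3, 3))
# 1.667 1.20739 1.352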
49,722
Peeking Inside the Black Box, can Feature importance indicate overfit?
I don't think permutation feature importance can identify overfitting per se; it could, however, hint that a good amount of noise has been modelled (overfitting) when features that you know for a fact to be significant aren't deemed significant (pretty much your case). If you go with splitting, a significantly smaller training error than test error indicates overfitting. You could try k-fold the way you suggested, but you could also try leave-one-out cross-validation. In both cases you'll get average training and test set errors so you can compare them. Since your goal is both inference and prediction, it might be best to use a separate model for each (as suggested in Harrell's Regression Modeling Strategies). To check the statistical significance of your predictors you could use anything from a classic GLM with asymptotic p-values to a Bayesian regression. If you want to combine feature significance with prediction without having to split your data for cross-validation, I would suggest permuted Random Forests.
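A hedged sketch of the train-versus-test comparison suggested above (the synthetic data and the random forest are my own stand-ins for your data and model):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_validate

X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)
cv = cross_validate(RandomForestRegressor(n_estimators=200, random_state=0),
                    X, y, cv=5, scoring="neg_mean_squared_error",
                    return_train_score=True)

print("mean train MSE:", -cv["train_score"].mean())
print("mean test  MSE:", -cv["test_score"].mean())   # a much larger test error points to overfitting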
49,723
R Multivariate Normal CDF
The first pmvnorm call calculates the probability that variable 1 <= 13 AND variable 2 <= 15 AND variable 3 <= 12 all occur at the same time. The probability that each individual variable fulfills its criterion will be 0.5, but the joint probability will not be 0.5. If we use an example where all variables are uncorrelated,
mu=c(13,15,12)
co=matrix(c(1,0.0,0,0,1,0,0,0,1),nrow=3,byrow=T)
pmvnorm(lower=-Inf, upper=mu, mean=mu, sigma = co)
this returns
[1] 0.125
This probability comes from the probability of three uncorrelated events happening at the same time, i.e. 0.5^3 = 0.125. In your data the variables seem to be correlated, which is the reason the probability is greater than 0.125. Basically, the reason the sum of probabilities does not equal one is that the calculations above don't include all possible coordinates. For example the coordinates could be (10, 17, 14), which isn't included. As for the difference in the estimate, I believe that pmvnorm uses some sort of sampling algorithm, which may lead to small differences in the computed probability.
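The same check can be mirrored in Python with scipy (the 0.8 correlation in the second block is just an arbitrary illustration of the correlated case):

import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([13.0, 15.0, 12.0])
print(multivariate_normal(mean=mu, cov=np.eye(3)).cdf(mu))   # ~ 0.5**3 = 0.125

corr = np.array([[1.0, 0.8, 0.8],
                 [0.8, 1.0, 0.8],
                 [0.8, 0.8, 1.0]])
print(multivariate_normal(mean=mu, cov=corr).cdf(mu))        # larger than 0.125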
49,724
Probability that all cards have been drawn [duplicate]
I think the solution would go something like this: Let $P(m, n, k, x, y)$ be the probability of seeing exactly $x$ cards at least once over $k$ draws, once $y$ cards have already been seen. Then $P(m, n, k, x, 0) = \sum_{i=0}^{x}P(m, n, k-1, x-i, 0) * P(m, n, 1, i, x-i)$. I think the possibilities under the sum sign are all unique, so we can sum them. Your solution is then $P(m, n, k, m, 0)$. What is left is to define $P(m, n, 1, i, x-i)$ and we should be able to solve the problem recursively. Edit: full solution in Python, implementing the method above:

import numpy as np
from scipy.special import comb
import matplotlib.pyplot as plt

m = 10
n = 4

def P(k, x, y):
    # probability of seeing exactly x new cards over k draws, given y already seen
    if k == 1:
        return (comb(m - y, x) * comb(y, n - x)) / comb(m, n)
    else:
        prob = 0
        for i in range(x):
            prob += P(k - 1, x - i, y) * P(1, i, y + x - i)
        return prob

def P_MC(k, x, y):
    # Monte Carlo estimate of the same probability
    sims = 10000
    good = 0
    for s in range(sims):
        ar = np.arange(m)
        seen = set(np.arange(y))
        for draw in range(k):
            np.random.shuffle(ar)
            for el in ar[:n]:
                seen.add(el)
        if len(seen) == (x + y):
            good += 1
    return good / sims

ests = []
acts = []
for k in range(1, 16):
    ests.append(P_MC(k, m, 0))
    acts.append(P(k, m, 0))

plt.plot(range(1, 16), ests)
plt.plot(range(1, 16), acts)
plt.grid()
plt.legend(['Simulated', 'Actual'], loc='lower right')
49,725
Probability that all cards have been drawn [duplicate]
This is a simple variant of the coupon collector's problem, which uses the classical occupancy distribution. Let $K$ be the number of distinct cards that have been drawn, distributed according to the classical occupancy distribution. With $m$ distinct cards in the deck, and $n \geqslant m$ random draws with replacement, the probability of drawing each card at least once is: $$\mathbb{P}(\text{All cards drawn}) = \mathbb{P}(K=m) = \frac{m! \cdot S(n,m)}{m^n},$$ where the function $S$ denotes the Stirling numbers of the second kind. This can be written in explicit form as: $$\mathbb{P}(K=m) = \sum_{i=0}^m (-1)^i {m \choose i} \bigg( 1-\frac{i}{m} \bigg)^n.$$
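A quick numerical check that the two expressions agree (the values $m = 10$ and $n = 30$ are arbitrary choices of mine):

from math import comb, factorial

def p_all_drawn(m, n):
    # explicit inclusion-exclusion form
    return sum((-1)**i * comb(m, i) * (1 - i / m)**n for i in range(m + 1))

def stirling2(n, m):
    # Stirling number of the second kind, S(n, m)
    return sum((-1)**i * comb(m, i) * (m - i)**n for i in range(m + 1)) // factorial(m)

m, n = 10, 30
print(p_all_drawn(m, n))
print(factorial(m) * stirling2(n, m) / m**n)   # matches the line above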
49,726
What is meant by existence of a (discrete-time) stochastic process?
Could anyone offer a simple example of an existent process? A well-known theorem which guarantees the existence of a stochastic process ${(X_n)}_{n \geqslant 1}$, say with $\mathbb{R}$-valued random variables $X_n$, is the Daniell-Kolmogorov extension theorem. A typical application of this theorem gives the existence of a sequence of independent random variables ${(X_n)}_{n \geqslant 1}$, with $X_n$ following any probability law (possibly depending on $n$). Here is the statement of this theorem. Denote by $\mathcal{B}_n$ the Borel $\sigma$-field on $\mathbb{R}^n$. Suppose that for every $n \geqslant 1$ we have a probability measure $\mu_n$ on $(\mathbb{R}^n, \mathcal{B}_n)$. Suppose that the sequence of probability measures ${(\mu_n)}_{n \geqslant 1}$ is consistent, in the sense that $\mu_{n+1}(A \times \mathbb{R}) = \mu_n(A)$ for every $n \geqslant 1$ and every $A \in \mathcal{B}_n$. Then the theorem asserts that there exists a probability measure $\mu$ on $(\mathbb{R}^\mathbb{N}, \mathcal{B}_\infty)$ which extends all the $\mu_n$, in the sense that $\mu(A \times \mathbb{R}^\mathbb{N}) = \mu_n(A)$ for every $n \geqslant 1$ and every $A \in \mathcal{B}_n$. Let's see how to apply this theorem to show the existence of a sequence of independent random variables ${(X_n)}_{n \geqslant 1}$ with $X_n \sim \nu_n$, where ${(\nu_n)}_{n \geqslant 1}$ is a given sequence of probability measures on $\mathbb{R}$. One takes the product measure $\mu_n = \nu_1 \otimes \cdots \otimes \nu_n$ for every $n \geqslant 1$. Then the consistency condition of ${(\mu_n)}_{n \geqslant 1}$ is easy to check. Then the Daniell-Kolmogorov extension theorem provides a probability measure $\mu$ on $\mathbb{R}^\mathbb{N}$ which extends the $\mu_n$. Take the probability space $$ (\Omega, \mathcal{A}, \mathbb{P}) = (\mathbb{R}^\mathbb{N}, \mathcal{B}_\infty, \mu). $$ An element $\omega$ of $\Omega$ is a sequence of real numbers $(\omega_1, \omega_2, \ldots)$. Then it suffices to define for each $n \geqslant 1$ the random variable $X_n$ on $(\Omega, \mathcal{A}, \mathbb{P})$ by $X_n(\omega) = \omega_n$. In other words the random sequence ${(X_n)}_{n \geqslant 1}$ is a $\mathbb{R}^\mathbb{N}$-valued random variable whose probability distribution is $\mu = \nu_1 \otimes \nu_2 \otimes \cdots$. The theorem guarantees the existence of this infinite product measure. Could anyone offer a simple example of a nonexistent process? I like this example: there does not exist a "non-trivial" martingale ${(M_n)}_{n \geqslant 1}$ such that $M_n$ takes its values in $\{0,1\}$ for every $n \geqslant 1$. Indeed that would mean that $M_n = \mathbf{1}_{A_n}$ for a certain event $A_n$, for every $n \geqslant 1$. The martingale condition is $$ \mathbb{E}[M_{n+1} \mid M_1, \ldots, M_n] = \mathbb{E}[M_{n+1} \mid M_n] = M_n. $$ We have $$ \begin{align} \mathbb{E}\bigl[(\mathbf{1}_{A_{n+1}}-\mathbf{1}_{A_{n}})^2\bigr] & =\mathbb{E}[\mathbf{1}_{A_{n+1}}^2]+\mathbb{E}[\mathbf{1}_{A_{n}}^2]- 2\mathbb{E}[\mathbf{1}_{A_{n+1}}\mathbf{1}_{A_{n}}] \\ & =\mathbb{E}[\mathbf{1}_{A_{n+1}}]+\mathbb{E}[\mathbf{1}_{A_{n}}]- 2\mathbb{E}[\mathbf{1}_{A_{n+1}}\mathbf{1}_{A_{n}}]. 
\end{align} $$ But $$ \mathbb{E}[\mathbf{1}_{A_{n+1}}] = \mathbb{E}\bigl[\mathbb{E}[\mathbf{1}_{A_{n+1}} \mid \mathbf{1}_{A_{n}}]\bigr] = \mathbb{E}[\mathbf{1}_{A_{n}}] $$ and $$ \mathbb{E}[\mathbf{1}_{A_{n+1}}\mathbf{1}_{A_{n}}] = \mathbb{E}\bigl[\mathbb{E}[\mathbf{1}_{A_{n+1}}\mathbf{1}_{A_{n}} \mid \mathbf{1}_{A_{n}}]\bigr] = \mathbb{E}\bigl[\mathbf{1}_{A_{n}}\mathbb{E}[\mathbf{1}_{A_{n+1}} \mid \mathbf{1}_{A_{n}}]\bigr] = \mathbb{E}[\mathbf{1}_{A_{n}}^2] = \mathbb{E}[\mathbf{1}_{A_{n}}]. $$ Finally, $\mathbb{E}\bigl[(\mathbf{1}_{A_{n+1}}-\mathbf{1}_{A_{n}})^2\bigr] = 0$, which means that $A_{n+1} = A_n$ (almost surely): our martingale is "trivial".
49,727
What is meant by existence of a (discrete-time) stochastic process?
In this answer I collect and summarize some bits of insight I have received through comments and gathered myself. Considerable credit goes to @IsabellaGhement and @whuber. What is meant by existence of a (discrete-time) stochastic process? A stochastic process exists if the relevant mathematical objects demanded of the axioms of probability exist: a set of outcomes (sample paths), a collection of measurable events (which generate a sigma field), and a valid probability function on those events. How do I know whether a process exists or not? To show existence of a stochastic process, the objects listed above have to be exhibited. How come some processes do not exist? A stochastic process is commonly defined by designating a set of properties that its distribution and sample paths must satisfy. If these properties are contradictory, no triplet of the form (outcomes, sigma field, probability function) can satisfy them, hence the process does not exist. Could anyone offer a simple example of an existent and another of a nonexistent process? I am still looking for such simple examples of processes that do exist and those that do not. (I suppose it is not too difficult to construct a nonexistent process and show why it is such. The interesting part is to find an example where the contradiction leading to nonexistence is not immediately obvious, so that the example is pedagogically useful.)
49,728
Comparing ways to create a composite score
Question 1: You are incorrect that "we don't need any distributional assumption for the biomarkers in logistic regressions." A single-predictor logistic regression specifically assumes that the log-odds of the binary outcome are linearly related to the values of the predictor. So if you are using, say, RNAseq data as predictors you will get different results (for coefficients and p-values) if you use sequence counts instead of log-transformed counts. It is not at all surprising that logistic regression, with that strong parametric assumption, and the non-parametric rank-based Wilcoxon test are giving different p-values. Logistic regression might be more powerful (better ability to detect true significant associations) when the linearity assumption is met, but not when the assumption is violated. The validity of the linearity assumption might differ among predictors. That said, you should be wary of using any set of single-predictor tests to select components for your composite score. Logistic regression has an inherent omitted-variable bias such that if you omit any predictor related to outcome from a model you will bias the coefficients of the included predictors. See this answer and its links as one of many on this site that discuss these dangers. Question 2: Based on the above, your Method 3 has substantial problems as it relies on a whole set of logistic regressions each of which omits many predictors related to outcome. The fourth method would be preferable, but a related approach described below could be even better, depending on the scale of your problem. Question 3: Although AUC is better than some measures of model performance, it has significant drawbacks for model comparison. The best way to evaluate a model that predicts a probability of an outcome is to use a proper scoring rule like the Brier score. You also need to be thorough in how you perform your comparisons. You should be evaluating each entire model-building process starting from the initial data, with bootstrapping or cross-validation, particularly when your modeling used the outcomes to select the predictors. Alternate approaches: These depend on whether you are evaluating a few dozen potential predictors (as in some clinical studies) or thousands of them (as in RNAseq studies). In the first case you should consider approaches like those recommended by Harrell's Regression Modeling Strategies. Chapter 11 of the second edition is a clinical case study that illustrates how to perform data reduction (including linear and nonlinear principal components), selection among modeling variations, backward variable selection from a full model to simplify, and model evaluation for logistic regression. In the second case you should be using a principled way to select and weight the predictors for a composite score. LASSO comes immediately to mind. This can be thought of as starting with the best individual predictor but then adding additional predictors in a way that avoids the overfitting seen in standard stepwise approaches. An Introduction to Statistical Learning provides one accessible presentation in Chapter 6 with a worked example for standard linear regression, but the glmnet() function illustrated there (of the R package having the same name) also allows for logistic regression. This would provide you with something similar to your Method 4 in Question 2, but with a more reliable basis.
You might also consider the Elastic Net, a combination of LASSO and ridge regression, that minimizes the instability in LASSO predictor selection when there are multiple correlated predictors. Statistical Learning with Sparsity describes Elastic Net starting in Chapter 4. Elastic Net can also be implemented via glmnet(). A final warning: if you are going to use any of these linear regression approaches you need to document the linear relationship between the predictors and the log-odds of outcome. I suspect that a failure of that linear relationship for some of your candidate predictors led to your original question about different results with logistic regression and Wilcoxon test results, so you don't want to face that problem again farther down the road.
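For the LASSO route sketched above, a rough Python analogue of the glmnet workflow (scikit-learn instead of R; the synthetic data and all settings below are my own choices, not part of the answer):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           random_state=0)
X = StandardScaler().fit_transform(X)      # put candidate biomarkers on one scale

lasso_logit = LogisticRegressionCV(penalty="l1", solver="saga", Cs=20, cv=5,
                                   max_iter=5000, random_state=0)
lasso_logit.fit(X, y)

coefs = lasso_logit.coef_.ravel()
print("selected predictors:", np.flatnonzero(coefs))
composite_score = X @ coefs + lasso_logit.intercept_[0]   # the weighted composite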
49,729
Hard in calculating predictor‘s Relative Importance for GAM?
The caret package provides one answer. With the default tuneGrid and trainControl,
library(caret)
data("mtcars")
gam1 <- train(
  mpg ~ drat + wt + qsec,
  data = mtcars,
  method = "gam"
)
you can then apply varImp:
varImp(gam1)
## gam variable importance
##      Overall
## wt     100.0
## qsec    26.4
## drat     0.0
For the percentage idea that you wanted, you can rescale the returned object:
library(dplyr)
x <- varImp(gam1)
x$importance %>%
  mutate(
    Variable = rownames(.),
    Overall = Overall / sum(Overall) * 100
  ) %>%
  arrange(desc(Overall)) %>%
  select(Variable, Overall)
##   Variable Overall
## 1       wt   79.09
## 2     qsec   20.91
## 3     drat    0.00
Because the default will not tune splines or degrees of freedom, you should check how to do these in the caret package. The method = 'gam' option will call the mgcv package, but there are plenty of other options. For instance, if you used method = 'gamSpline', it would tune over the degrees of freedom and give a different varImp result. Be wary of what caret is doing under the hood, however---if there are not many distinct values in a predictor, it may treat the term as linear.
What is *common variance* in factor analysis and how is it estimated?
1. According to Mulaik (2009) p. 133-134, given a factor model \begin{aligned} Y_1&=\lambda_{11}\xi_1+\dots+\lambda_{1r}\xi_r+\Psi_1\varepsilon_1 \\ Y_2&=\lambda_{21}\xi_1+\dots+\lambda_{2r}\xi_r+\Psi_2\varepsilon_2 \\ &\dots \\ Y_n&=\lambda_{n1}\xi_1+\dots+\lambda_{nr}\xi_r+\Psi_n\varepsilon_n \\ \end{aligned} where $\text{Var}(\xi_i)=1 \ \forall \ i$ and $\text{Var}(\varepsilon_j)=1 \ \forall \ j$, common variance a.k.a. communality of variable $Y_j$ is $$ \text{Var}(\lambda_{j1}\xi_1+\dots+\lambda_{jr}\xi_r), $$ that is, it is the variance of the part of $Y_j$ that is explained by the factors $\xi_1$ to $\xi_r$. If the factors are uncorrelated, then the common variance becomes $\sum_{i=1}^r\lambda_{ji}^2$. 2. According to Mulaik (2009) p. 184, the $R^2$ of the regression of $Y_j$ on all the other $Y$s ($Y_{-j}$) is the lower bound for the communality of $Y_j$. My impression from reading Mulaik (2009) Chapter 8 is that this could be used as an initial estimate of communality (e.g. equation 8.51 on p. 196). References Mulaik, S. A. (2009). Foundations of factor analysis. Chapman and Hall/CRC.
What is *common variance* in factor analysis and how is it estimated?
1. According to Mulaik (2009) p. 133-134, given a factor model \begin{aligned} Y_1&=\lambda_{11}\xi_1+\dots+\lambda_{1r}\xi_r+\Psi_1\varepsilon_1 \\ Y_2&=\lambda_{21}\xi_1+\dots+\lambda_{2r}\xi_r+\Psi
What is *common variance* in factor analysis and how is it estimated? 1. According to Mulaik (2009) p. 133-134, given a factor model \begin{aligned} Y_1&=\lambda_{11}\xi_1+\dots+\lambda_{1r}\xi_r+\Psi_1\varepsilon_1 \\ Y_2&=\lambda_{21}\xi_1+\dots+\lambda_{2r}\xi_r+\Psi_2\varepsilon_2 \\ &\dots \\ Y_n&=\lambda_{n1}\xi_1+\dots+\lambda_{nr}\xi_r+\Psi_n\varepsilon_n \\ \end{aligned} where $\text{Var}(\xi_i)=1 \ \forall \ i$ and $\text{Var}(\varepsilon_j)=1 \ \forall \ j$, common variance a.k.a. communality of variable $Y_j$ is $$ \text{Var}(\lambda_{j1}\xi_1+\dots+\lambda_{jr}\xi_r), $$ that is, it is the variance of the part of $Y_j$ that is explained by the factors $\xi_1$ to $\xi_r$. If the factors are uncorrelated, then the common variance becomes $\sum_{i=1}^r\lambda_{ji}^2$. 2. According to Mulaik (2009) p. 184, the $R^2$ of the regression of $Y_j$ on all the other $Y$s ($Y_{-j}$) is the lower bound for the communality of $Y_j$. My impression from reading Mulaik (2009) Chapter 8 is that this could be used as an initial estimate of communality (e.g. equation 8.51 on p. 196). References Mulaik, S. A. (2009). Foundations of factor analysis. Chapman and Hall/CRC.
What is *common variance* in factor analysis and how is it estimated? 1. According to Mulaik (2009) p. 133-134, given a factor model \begin{aligned} Y_1&=\lambda_{11}\xi_1+\dots+\lambda_{1r}\xi_r+\Psi_1\varepsilon_1 \\ Y_2&=\lambda_{21}\xi_1+\dots+\lambda_{2r}\xi_r+\Psi
49,731
Decomposing $R^2$ into independent variables
I struggled with this same problem--decomposing variance in high-dimensional prediction problems without limiting myself to fitting many, many linear regression models--and came up with the following solution: Shapley Decomposition of R-Squared in Machine Learning Models (with an R implementation). I would say that the trick in an applied setting is to first find classes of models that support fast Shapley value approximation calculations--e.g., neural networks and gradient boosted decision trees (see the shap Python package)--or use model-agnostic Monte Carlo Shapley value approximations like those implemented in the R package iml: Explaining prediction models and individual predictions with feature contributions. With the trained model, then run the package's predict() or feature importance function to get the resulting n_samples * n_features Shapley value matrix and decompose it according to the first link in my post. And be careful to take note of feature correlation as high correlations can make the feature-level attributions unstable. The authors of the shapr R package have come up with a few interesting approaches to adjusting Shapley values for feature correlations.
Decomposing $R^2$ into independent variables
I struggled with this same problem--decomposing variance in high-dimensional prediction problems without limiting myself to fitting many, many linear regression models--and came up with the following
Decomposing $R^2$ into independent variables I struggled with this same problem--decomposing variance in high-dimensional prediction problems without limiting myself to fitting many, many linear regression models--and came up with the following solution: Shapley Decomposition of R-Squared in Machine Learning Models (with an R implementation). I would say that the trick in an applied setting is to first find classes of models that support fast Shapley value approximation calculations--e.g., neural networks and gradient boosted decision trees (see the shap Python package)--or use model-agnostic Monte Carlo Shapley value approximations like those implemented in the R package iml: Explaining prediction models and individual predictions with feature contributions. With the trained model, then run the package's predict() or feature importance function to get the resulting n_samples * n_features Shapley value matrix and decompose it according to the first link in my post. And be careful to take note of feature correlation as high correlations can make the feature-level attributions unstable. The authors of the shapr R package have come up with a few interesting approaches to adjusting Shapley values for feature correlations.
Decomposing $R^2$ into independent variables I struggled with this same problem--decomposing variance in high-dimensional prediction problems without limiting myself to fitting many, many linear regression models--and came up with the following
49,732
Expectation of inverse of sum of positive iid variables
You cannot bound that expectation in $\sigma, n$. That's because there is the distinct possibility that the expectation does not exist at all (or is $\infty$). See I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that? If the conditions given there are fulfilled for the density of $X_1$, they will also be fulfilled for the density of $\bar{X}_n$. If densities do not exist, but probability mass functions do, it is simpler, since your assumptions prohibit a probability atom at zero, but a probability density can still be positive at zero even if $P(X >0)=1$. For a useful bound you will at least need to restrict the common distribution of $X_1, \dotsc, X_n$ much more. EDIT After your new information, and with $v_1>0$, the expectation of $1/\bar{X}_n$ certainly will exist (irrespective of whether $K$ is finite or not). And, since the function $x\mapsto 1/x$ is convex for $x>0$, we can use the Jensen inequality to conclude that $\DeclareMathOperator{\E}{\mathbb{E}}\E 1/\bar{X}_n \ge 1/\E \bar{X}_n$.
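A quick Monte Carlo illustration of the Jensen bound (the Uniform(0.5, 2.5) distribution and the sample size are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(0)
n, reps = 20, 200_000
X = rng.uniform(0.5, 2.5, size=(reps, n))   # positive, bounded away from zero
xbar = X.mean(axis=1)

print("E[1/Xbar] (Monte Carlo):", np.mean(1.0 / xbar))
print("1/E[Xbar]:              ", 1.0 / np.mean(xbar))   # smaller, as Jensen predicts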
Expectation of inverse of sum of positive iid variables
You cannot bound that expectation in $\sigma, n$. That's because there is the distinct possibility that the expectation do not exist at all (or, is $\infty$.) See I've heard that ratios or inverses of
Expectation of inverse of sum of positive iid variables You cannot bound that expectation in $\sigma, n$. That's because there is the distinct possibility that the expectation do not exist at all (or, is $\infty$.) See I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that?. If the conditions given there is fulfilled for the density of $X_1$, it will so be for the density of $\bar{X}_n$. If densities do not exist, but probability mass functions do, it is simpler, since your assumptions prohibit a probability atom at zero, but a probability density can still be positive at zero even if $P(X >0)=1$. For a useful bound you will at least need to restrict the common distribution of $X_1, \dotsc, X_n$ much more. EDIT After your new information, and with $v_1>0$, the expectation of $1/\bar{X}_n$ certainly will exist (irrespective if $K$ is finite or not.) And, since the function $x\mapsto 1/x$ is convex for $x>0$, we can use the Jensen Inequality to conclude that $\DeclareMathOperator{\E}{\mathbb{E}}\E 1/\bar{X}_n \ge 1/\E \bar{X}_n$.
Expectation of inverse of sum of positive iid variables You cannot bound that expectation in $\sigma, n$. That's because there is the distinct possibility that the expectation do not exist at all (or, is $\infty$.) See I've heard that ratios or inverses of
49,733
Expectation of inverse of sum of positive iid variables
I think I have the gist of it. Given that $f(x)=1/x$ is infinitely differentiable at 1, Taylor's theorem tells us: there exists $\varepsilon>0$ such that $f(x) = f(1) + f'(1) (x-1)+ \frac{f''(1)(x-1)^2}{2} + \frac{f'''(\varepsilon) (x-1)^2}{2}$. In our case, if $X_i$ belongs to the domain $[v_1,+\infty)$, then $\bar{X}_n$ has the same domain and we have $\varepsilon \geq v_1$. Hence $\mathbb{E}(1/\bar{X}_n) = \mathbb{E}\left (1 - (\bar{X}_n-1) + \frac{(\bar{X}_n-1)^2}{4}+ \frac{f'''(\varepsilon) (\bar{X}_n-1)^2}{2} \right)$, and \begin{align*} \mathbb{E}(1/\bar{X}_n) &= 1 + \frac{f'''(\varepsilon) \mathbb{E}\left ((\bar{X}_n-1)^2\right )}{2} = 1 +\frac{ V(\bar{X}_n)}{4} - \frac{ V(\bar{X}_n)}{12 \varepsilon^4}\\ \end{align*} and hence $$1 + \frac{ \sigma^2}{4 n}- \frac{\sigma^2}{12 v_1^4 n} \leq \mathbb{E}(1/\bar{X}_n) \leq 1 + \frac{ \sigma^2}{4 n}.$$ For the case where the $X_i$ do not admit a minimum but have an unlimited number of moments, one can do a similar transformation using the full Taylor expansion: \begin{align*} \mathbb{E}(1/\bar{X}_n) &= \sum_{i=0}^{+\infty} \frac{f^{(i)}(1)}{i!}\mathbb{E}\left((\bar{X}_n-1)^i\right)\\ &= \sum_{i=0}^{+\infty} \frac{(-1)^i}{i!i!}\mathbb{E}\left((\bar{X}_n-1)^i\right) \end{align*} Now if we can say something about the $k^{\text{th}}$ moment of $\bar{X}_n-1$ being $O(1/n^{k/2})$, this validates that $\mathbb{E}(1/\bar{X}_n) \approx 1 + \frac{ \sigma^2}{4 n}$.
Expectation of inverse of sum of positive iid variables
I think I have the gist of it. Given that $f(x)=1/x$ is infinitely differentiable in 1. Taylor's theorem tells us: There exists $\varepsilon>0$ such that $f(x) = f(1) + f'(1) (x-1)+ \frac{f''(1)(x-1)^
Expectation of inverse of sum of positive iid variables I think I have the gist of it. Given that $f(x)=1/x$ is infinitely differentiable in 1. Taylor's theorem tells us: There exists $\varepsilon>0$ such that $f(x) = f(1) + f'(1) (x-1)+ \frac{f''(1)(x-1)^2}{2} + \frac{f'''(\varepsilon) (x-1)^2}{2}$. In our case, if $X_i$ belongs in the domaine $[v_1;+\infty[$, then $\bar{X}_n$ has the same domain and we have $\varepsilon \geq v_1$. Hence $\mathbb{E}(1/\bar{X}_n) = \mathbb{E}\left (1 - (\bar{X}_n-1) + \frac{(x-1)^2}{4}+ \frac{f'''(\varepsilon) (\bar{X}_n-1)^2}{2} \right)$, and \begin{align*} \mathbb{E}(1/\bar{X}_n) &= 1 + \frac{f'''(\varepsilon) \mathbb{E}\left ((\bar{X}_n-1)^2\right )}{2} = 1 +\frac{ V(\bar{X}_n)}{4} - \frac{ V(\bar{X}_n)}{12 \varepsilon^4}\\ \end{align*} and hence $$1 + \frac{ \sigma^2}{4 n}- \frac{\sigma^2}{12 v_1^4 n} \leq \mathbb{E}(1/\bar{X}_n) \leq 1 + \frac{ \sigma^2}{4 n}.$$ For the case where $X_i$ do not admit a minimum but has an unlimited number of moments, one can do a similar transformation using the full taylor expansion: \begin{align*} \mathbb{E}(1/\bar{X}_n) &= \sum_{i=0}^{+\infty} \frac{f^{(i)}(1)}{i!}\mathbb{E}\left((\bar{X}_n-1)^i\right)\\ &= \sum_{i=0}^{+\infty} \frac{(-1)^i}{i!i!}\mathbb{E}\left((\bar{X}_n-1)^i\right) \end{align*} Now if we can say something about the $k^{th}$ moment of $\tilde{X}_n$ being $O(1/n^{k/2})$ this validates that $\mathbb{E}(1/\bar{X}_n) \approx 1 + \frac{ \sigma^2}{4 n}$.
Expectation of inverse of sum of positive iid variables I think I have the gist of it. Given that $f(x)=1/x$ is infinitely differentiable in 1. Taylor's theorem tells us: There exists $\varepsilon>0$ such that $f(x) = f(1) + f'(1) (x-1)+ \frac{f''(1)(x-1)^
49,734
What is element-wise max pooling?
In the paper you reference, they do the following: Assign lidar points to voxels, such that each voxel contains several points. Augment the point representation by stacking the offset from the voxel centroid to the original representation (denoted $\hat p$ in the text). Pass each point belonging to a voxel $\hat p_i$ through a neural net, getting some $m$-dimensional representation $f_i$ of that point. Compute the maximum of each dimension over all points $f_i$ belonging to the same voxel, getting $m$-dimensional vector $\tilde f$ representing the voxel. In terms of programming, you would have vectors $f_i$ in an array f of shape (n_points_in_voxel, m) and compute np.max(f, axis=0).
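As a quick numerical illustration of the last step, here is a tiny NumPy sketch (the shapes are made up) of the per-dimension maximum over the point features belonging to one voxel.

# Element-wise (per-dimension) max pooling over the points in one voxel.
import numpy as np

rng = np.random.default_rng(0)
n_points_in_voxel, m = 5, 4
f = rng.normal(size=(n_points_in_voxel, m))  # one m-dimensional f_i per row

f_tilde = np.max(f, axis=0)                  # m-dimensional voxel feature
print(f.round(2))
print("element-wise max:", f_tilde.round(2))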
What is element-wise max pooling?
In the paper you reference, they do the following: Assign lidar points to voxels, such that each voxel contains several points. Augment the point representation by stacking the offset from the voxel
What is element-wise max pooling? In the paper you reference, they do the following: Assign lidar points to voxels, such that each voxel contains several points. Augment the point representation by stacking the offset from the voxel centroid to the original representation (denoted $\hat p$ in the text). Pass each point belonging to a voxel $\hat p_i$ through a neural net, getting some $m$-dimensional representation $f_i$ of that point. Compute the maximum of each dimension over all points $f_i$ belonging to the same voxel, getting $m$-dimensional vector $\tilde f$ representing the voxel. In terms of programming, you would have vectors $f_i$ in an array f of shape (n_points_in_voxel, m) and compute np.max(f, axis=0).
What is element-wise max pooling? In the paper you reference, they do the following: Assign lidar points to voxels, such that each voxel contains several points. Augment the point representation by stacking the offset from the voxel
49,735
What is element-wise max pooling?
It's a particular case of 1D max pooling where the pool size and stride are the same as the length of each y_i where 1 <= i <= k. Unfortunately there don't seem to be many implementations or definitions of this to use as reference. At least in here they define it as you are using it. Here is how the asker defined element-wise max pooling, loosely: given the vector of vectors Y=y_1,...,y_k, the result would be a vector z where the jth element in z would be the maximum of the jth elements of the y_i.
What is element-wise max pooling?
It's a particular case of 1D max pooling where the pool size and stride are the same as the length of each y_i where 1 <= i <= k. Unfortunately there doesn't seem to be many implementations or definit
What is element-wise max pooling? It's a particular case of 1D max pooling where the pool size and stride are the same as the length of each y_i where 1 <= i <= k. Unfortunately there doesn't seem to be many implementations or definitions of this to use as reference. At least in here they define it as you are using it. Here how the issuer defined element-wise max pooling, loosely: Given the vector of vectors Y=y_1,...,y_k, the result would be a vector z where the kth element in z would be the maximum of the kth elements in Y.
What is element-wise max pooling? It's a particular case of 1D max pooling where the pool size and stride are the same as the length of each y_i where 1 <= i <= k. Unfortunately there doesn't seem to be many implementations or definit
49,736
Can you statistically compare one subset of a population with the whole population?
The data is usually organized by placing each variable value along a column and each individual (student) in a row. I'm assuming that each student was only evaluated once. The variables could start with student ID (e.g., 1,2,3,...), year, school (that can be coded as 1 for school A, 2 for school B, etc.), Math grade, English grade, Social Studies grade, Science grade, and Overall Score. If you know how to use a spreadsheet program this should be easy to do, and before you start putting data into your database, don't forget to use the first line to write the names of the variables. It will be even better if you know how to use some statistics software. To start with, you could compute the mean and the standard deviation for each school and for each grade. This would give you a feel for what is going on with the data, e.g., which school has the highest grade in Math, or which school has the highest grade in English. If you want to do some statistical tests, you could start by evaluating the normality of each variable. Then, since you want to compare several independent groups (the schools) you should use an ANOVA (if that variable follows approximately a normal distribution) or the Kruskal-Wallis test (if that variable does not follow a normal distribution). The ANOVA tests will compare the means between the schools, while the Kruskal-Wallis test compares the distributions between schools, but you can use the median grade of each school to visualise the differences. To raise the complexity one step further, assuming that normality of the grades can be assumed, you can do a MANOVA, using as dependent variables all the grades (possibly with the exception of the Overall Score, if it is just a function of the other grades) and using as independent factors the school and the year (I'm assuming that there is not a monotonic relation between the year and each grade). With this test you can observe not only if there are differences between schools, but also if there are differences between the years. I tried to present several ways of analysing the data, with different levels of difficulty.
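If you end up doing this in software rather than by hand, a minimal Python sketch of the simpler comparisons might look like the following; the column names and the grades are made up for illustration.

# Descriptives per school, then one-way ANOVA and Kruskal-Wallis with scipy.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "school": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "math":   [72, 80, 68, 75, 81, 85, 79, 88, 60, 66, 70, 64],
})

groups = [g["math"].values for _, g in df.groupby("school")]

print(df.groupby("school")["math"].agg(["mean", "std"]))  # per-school descriptives
print("one-way ANOVA:", stats.f_oneway(*groups))           # compares means
print("Kruskal-Wallis:", stats.kruskal(*groups))           # compares distributions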
Can you statistically compare one subset of a population with the whole population?
The data is usually organized by placing each variable value along a column and each individual (student) in a row. I'm assuming that each student was only evaluated once. The variables could start wi
Can you statistically compare one subset of a population with the whole population? The data is usually organized by placing each variable value along a column and each individual (student) in a row. I'm assuming that each student was only evaluated once. The variables could start with student ID (e.g., 1,2,3,...), year, school (that can be coded as 1 for school A, 2 for school B, etc.), Math grade, English grade, Social Studies grade, Science grade, and Overall Score. If you know how to use a spreadsheet program this should be easy to do, and before you start putting data on your database, don't forget to use the first line to write the names of the variables. It will be even better if you know how to use some statistics software. To start with, you could compute the mean and the standard deviation for each school and for each grade. This would give you some feeling on what is going on with the data. I.e., which school has the highest grade on Math, or which school has the highest grade in English. If you want to do some statistical tests, you could start by evaluating the normality of each variable. Then, since you want to compare several independent groups (the schools) you should use an ANOVA (if that variable follows approximately a normal distribution) or the Kruskal-Wallis test (if that variable does not follow a normal distribution). The ANOVA tests will compare the means between the schools, while the Kruskal-Wallis tests compares the distributions between schools, but you can use the median grade of each school to visualise the differences. To raise the complexity one step further, and assuming that the normality of the grades can be assumed, you can do a MANOVA, using as dependent variables all the grades (eventually with the exception of the Overall Score, if it is just a function of the other grades) and using as independent factors the school and the year (I'm assuming that there is not a monotonous relation between the year and each grade). With this test you can observe not only if there are differences between schools, but also if there are differences between the years. I tried to present several ways of analysing the data, with different levels of difficulty.
Can you statistically compare one subset of a population with the whole population? The data is usually organized by placing each variable value along a column and each individual (student) in a row. I'm assuming that each student was only evaluated once. The variables could start wi
49,737
Adaptive Parzen Estimator
I was also having some trouble understanding everything when reading the paper. To the specific questions you listed here, I might provide my understanding: Parzen estimators (or Parzen windows) are indeed kernel density estimators. In the linked wiki page, it is also stated that if the bandwidth is not fixed, the estimator is referred to as being adaptive, which gives rise to adaptive Parzen estimators (a.k.a. variable kernel density estimation). Since $\mathcal{B} \subset \mathcal{H}$, it seems that they only use a subset of the history to do the kernel density estimation. In this case, $K$ would be the number of observations that are used for the kernel density estimation. That seems to be correct. If we take a look at the implementation, it seems to me as if every variable is treated independently of all other values (i.e. no multivariate distributions for continuous variables either). Although it might be that I misinterpreted the code. I might have answered your questions at this point, but this does not necessarily make things much clearer. E.g. it is still unclear whether balloon or pointwise estimators were used or how to sample $\mathcal{B}$ from $\mathcal{H}$. Of course, if we take a look at the reference implementation of TPE, these details can be found (spoiler: the bandwidth is the bounded maximum difference to the nearest neighbours). However, even Yoshua Bengio (last author) acknowledges that this should have been cited properly in the paper.
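To make the bandwidth idea concrete, here is a rough one-dimensional NumPy sketch of an adaptive Parzen estimator in this spirit: one Gaussian kernel per observation, with a per-observation bandwidth taken from the spacing to its neighbours and clipped to a bounded range. The exact clipping rule, the kernel weights, and the prior point used in the TPE reference implementation differ in the details.

# Rough adaptive (variable-bandwidth) Parzen estimator on a bounded interval.
import numpy as np

def adaptive_parzen_pdf(x_grid, obs, lo, hi):
    obs = np.sort(np.asarray(obs, dtype=float))
    # Neighbour spacings, padded with the distances to the domain bounds.
    left = np.diff(obs, prepend=lo)
    right = np.diff(obs, append=hi)
    bw = np.maximum(left, right)
    bw = np.clip(bw, (hi - lo) / (10 * len(obs)), hi - lo)  # bounded bandwidth
    # Equal-weight mixture of Gaussians, one kernel per observation.
    z = (x_grid[:, None] - obs[None, :]) / bw[None, :]
    kernels = np.exp(-0.5 * z**2) / (bw[None, :] * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)

grid = np.linspace(0.0, 1.0, 201)
pdf = adaptive_parzen_pdf(grid, obs=[0.1, 0.12, 0.5, 0.8], lo=0.0, hi=1.0)
print(pdf.sum() * (grid[1] - grid[0]))  # close to 1, minus the mass outside [0, 1]

The point of the neighbour-based bandwidth is that kernels get narrow where observations are dense and wide where they are sparse, which is what distinguishes the adaptive estimator from a fixed-bandwidth Parzen window.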
Adaptive Parzen Estimator
I was also having some trouble understanding everything when reading the paper. To the specific questions you listed here, I might provide my understanding: Parzen estimators (or Parzen windows) are
Adaptive Parzen Estimator I was also having some trouble understanding everything when reading the paper. To the specific questions you listed here, I might provide my understanding: Parzen estimators (or Parzen windows) are indeed kernel density estimators. In the linked wiki page, it is also stated that if the bandwidth is not fixed, the estimator is referred to as being adaptive, which gives rise to adaptive Parzen estimators (a.k.a. variable kernel density estimation). Since $\mathcal{B} \subset \mathcal{H}$, it seems that they only use a subset of the history to do the kernel density estimation. In this case, $K$ would be the number of observations that are used for the kernel density estimation. That seems to be correct. If we take a look at the implementation, it seems to me as if every variable is treated independently of all other values (i.e. no multivariate distributions for continuous variables either). Although it might be that I misinterpreted the code. I might have answered your questions at this point, but this does not necessarily make things much clearer. E.g. it is still unclear whether balloon or pointwise estimators were used or how to sample $\mathcal{B}$ from $\mathcal{H}$. Of course, if we take a look at the reference implementation of TPE, these details can be found (spoiler: the bandwidth is the bounded maximum difference to the nearest neighbours). However, even Yoshua Bengio (last author) acknowledges that this should have been cited properly in the paper.
Adaptive Parzen Estimator I was also having some trouble understanding everything when reading the paper. To the specific questions you listed here, I might provide my understanding: Parzen estimators (or Parzen windows) are
49,738
Why not normalizing mutual information with harmonic mean of entropies?
The reason the Harmonic Mean is never mentioned as a normalization for MI is that it renders the measure equivalent to another commonly used clustering comparison method, the normalized Variation of Information (VI), where the regular VI is defined as: \begin{equation} VI(X,Y) = H(X|Y) + H(Y|X) \end{equation} and the normalized VI is defined as (see Appendix B from Lancichinetti et al; 2009): \begin{equation} VI_{\text{norm}}(X,Y) = \frac{1}{2}\left(\frac{H(X|Y)}{H(X)} + \frac{H(Y|X)}{H(Y)}\right) \end{equation} To see this, we use the relationship for conditional entropy, $H(Y|X) = H(X,Y) - H(X)$: \begin{align} I_{HM}(X,Y) &= \frac{H(X) + H(Y)-H(X,Y)}{\frac{2}{\frac{1}{H(X)}+\frac{1}{H(Y)}}} \\ &= \frac{H(X) + H(Y)-H(X,Y)}{2H(X)}+\frac{H(X) + H(Y)-H(X,Y)}{2H(Y)} \\ &= \frac{1}{2} - \frac{ H(X,Y) - H(Y)}{2H(X)}+\frac{1}{2} - \frac{ H(X,Y) - H(X)}{2H(Y)}\\ &= 1 - \frac{1}{2}\left(\frac{H(X|Y)}{H(X)} + \frac{H(Y|X)}{H(Y)}\right) \\ &= 1 - VI_{\text{norm}}(X,Y) \end{align}
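As a sanity check, the identity can be verified numerically on any discrete joint distribution; a small NumPy sketch (natural logs throughout, random joint probabilities):

# Check: MI normalised by the harmonic mean of the entropies equals 1 - VI_norm.
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((4, 5))
p /= p.sum()                          # joint distribution of (X, Y)
px, py = p.sum(axis=1), p.sum(axis=0)

def H(q):
    q = q[q > 0]
    return -(q * np.log(q)).sum()

Hx, Hy, Hxy = H(px), H(py), H(p.ravel())
MI = Hx + Hy - Hxy
HxgY, HygX = Hxy - Hy, Hxy - Hx       # conditional entropies

I_hm = MI / (2 / (1 / Hx + 1 / Hy))   # harmonic-mean normalisation
VI_norm = 0.5 * (HxgY / Hx + HygX / Hy)

print(I_hm, 1 - VI_norm)              # agree to floating-point precision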
Why not normalizing mutual information with harmonic mean of entropies?
The reason the Harmonic Mean is never mentioned as a normalization for MI is that it renders the measure equivalent to another commonly used clustering comparison method, the normalized Variation of I
Why not normalizing mutual information with harmonic mean of entropies? The reason the Harmonic Mean is never mentioned as a normalization for MI is that it renders the measure equivalent to another commonly used clustering comparison method, the normalized Variation of Information (VI), where the regular VI is defined as: \begin{equation} VI(X,Y) = H(X|Y) + H(Y|X) \end{equation} and the normalized VI is defined as (see Appendix B from Lancichinetti et al; 2009): \begin{equation} VI_{\text{norm}}(X,Y) = \frac{1}{2}\left(\frac{H(X|Y)}{H(X)} + \frac{H(Y|X)}{H(Y)}\right) \end{equation} To see this, we use the relationship for condition entropy: $H(Y|X) = H(X,Y) - H(X)$ \begin{array} aI_{HM}(X,Y) &= \frac{H(X) + H(Y)-H(X,Y)}{\frac{2}{\frac{1}{H(X)}+\frac{1}{H(Y)}}} \\ &= \frac{H(X) + H(Y)-H(X,Y)}{2H(X)}+\frac{H(X) + H(Y)-H(X,Y)}{2H(Y)} \\ &= \frac{1}{2} - \frac{ H(X,Y) - H(Y)}{2H(X)}+\frac{1}{2} - \frac{ H(X,Y) - H(X)}{2H(Y)}\\ &= 1 - \frac{1}{2}\left(\frac{H(X|Y)}{H(X)} + \frac{H(Y|X)}{H(Y)}\right) \\ &= 1 - VI_{\text{norm}}(X,Y) \end{array}
Why not normalizing mutual information with harmonic mean of entropies? The reason the Harmonic Mean is never mentioned as a normalization for MI is that it renders the measure equivalent to another commonly used clustering comparison method, the normalized Variation of I
49,739
How can one interpret a heat map plot
Each square shows the correlation between the variables on each axis. Correlation ranges from -1 to +1. Values closer to zero mean there is no linear trend between the two variables. The closer to 1 the correlation is, the more positively correlated they are; that is, as one increases so does the other, and the closer to 1, the stronger this relationship is. A correlation closer to -1 is similar, but instead of both increasing, one variable will decrease as the other increases. The diagonals are all 1/dark green because those squares are correlating each variable to itself (so it's a perfect correlation). For the rest, the larger the number and the darker the color, the higher the correlation between the two variables. The plot is also symmetrical about the diagonal since the same two variables are being paired together in those squares.
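For reference, a minimal sketch of how such a plot is typically produced (made-up data and column names); the diagonal of ones and the symmetry described above are visible directly in df.corr().

# Build a small correlated data set and render its correlation matrix.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 4)), columns=["a", "b", "c", "d"])
df["b"] = df["a"] * 0.8 + rng.normal(scale=0.5, size=200)  # induce a correlation

corr = df.corr()                  # symmetric matrix with ones on the diagonal
sns.heatmap(corr, annot=True, vmin=-1, vmax=1, cmap="RdYlGn")
plt.show()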
How can one interpret a heat map plot
Each square shows the correlation between the variables on each axis. Correlation ranges from -1 to +1. Values closer to zero means there is no linear trend between the two variables. The close to 1
How can one interpret a heat map plot Each square shows the correlation between the variables on each axis. Correlation ranges from -1 to +1. Values closer to zero means there is no linear trend between the two variables. The close to 1 the correlation is the more positively correlated they are; that is as one increases so does the other and the closer to 1 the stronger this relationship is. A correlation closer to -1 is similar, but instead of both increasing one variable will decrease as the other increases. The diagonals are all 1/dark green because those squares are correlating each variable to itself (so it's a perfect correlation). For the rest the larger the number and darker the color the higher the correlation between the two variables. The plot is also symmetrical about the diagonal since the same two variables are being paired together in those squares.
How can one interpret a heat map plot Each square shows the correlation between the variables on each axis. Correlation ranges from -1 to +1. Values closer to zero means there is no linear trend between the two variables. The close to 1
49,740
How can one interpret a heat map plot
A heat map is an eye-catcher, nothing more. It gives extreme colors to extreme values so they are easily visible to the naked eye. Apart from that, it's just a matrix of numbers, no special interpretation required.
How can one interpret a heat map plot
A heat map is an eye-catcher, nothing more. It gives extreme colors to extreme values so they are easily visible to the naked eye. Apart from that, it's just a matrix of numbers, no special interpreta
How can one interpret a heat map plot A heat map is an eye-catcher, nothing more. It gives extreme colors to extreme values so they are easily visible to the naked eye. Apart from that, it's just a matrix of numbers, no special interpretation required.
How can one interpret a heat map plot A heat map is an eye-catcher, nothing more. It gives extreme colors to extreme values so they are easily visible to the naked eye. Apart from that, it's just a matrix of numbers, no special interpreta
49,741
How can one interpret a heat map plot
A heat map is a two-dimensional representation of data in which values are represented by colors. A correlation heat map is a two-dimensional plot of the amount of correlation (a measure of dependence) between variables, represented by colors; the varying intensity of color represents the strength of the correlation. Correlation is a measure of the linear relationship between two variables. Correlation between two variables can also be examined using a scatter plot of those two variables. When there are multiple variables, and we want to find the correlation between all of them, a matrix data structure called a correlation matrix is used. Correlation values range from -1 to +1. Using the Seaborn package in Python, the heatmap can be plotted, and the corr() method of pandas can be used to compute the correlations: import seaborn as sns; sns.heatmap(df.corr())
How can one interpret a heat map plot
A heat map is a two-dimensional representation of data in which values are represented by colors. Correlation Heat map is a two dimensional plot of the amount of correlation (measure of dependence) be
How can one interpret a heat map plot A heat map is a two-dimensional representation of data in which values are represented by colors. Correlation Heat map is a two dimensional plot of the amount of correlation (measure of dependence) between variables represented by colors. The varying intensity of color represents the measure of correlation. Correlation is a measure of linear relationship between two variables. Correlation between two variables can also be determined using scatter plot between these two variables. When there are multiple variables, and we want to find the correlation between all of them, a matrix data structure called correlation matrix is used. Correlation values ranges from -1 to +1. Using Seaborn package of Python heatmap can be plotted. To determine the correlation corr() method of pandas can be used. sns.heatmap(df.corr())
How can one interpret a heat map plot A heat map is a two-dimensional representation of data in which values are represented by colors. Correlation Heat map is a two dimensional plot of the amount of correlation (measure of dependence) be
49,742
How can I calculate the critical t-values of a linear regression model?
I'm going to construe the question as meaning: What is the t-statistic and what is its probability distribution? You have \begin{align} Y_i = {} & \beta_0 +\beta_1 x_i + \varepsilon_i \quad \text{for } i=1,\ldots,n \\[10pt] & \varepsilon_1,\ldots,\varepsilon_n \sim \operatorname{iid} \operatorname N(0,\sigma^2). \\[10pt] \widehat\beta_1 = {} & \frac{\sum_{i=1}^n (Y_i-\overline Y)(x_i-\overline x)}{\sum_{i=1}^n (x_i - \overline x)^2} \\[10pt] & \text{where } \overline Y = (Y_1+\cdots+Y_n)/n, \\ & \text{ and } \overline x = (x_1 + \cdots + x_n)/n. \\[10pt] \text{and } \overline Y = {} & \widehat\beta_0 + \widehat\beta_1 \overline x. \quad (\text{This defines } \widehat\beta_0.) \\[10pt] \widehat{\varepsilon\,}_i = {} & Y_i -\left( \widehat\beta_0 + \widehat\beta_1 x_i \right). \end{align} Then $\widehat\beta_1 \sim \operatorname N\left( \beta_1, \dfrac{\sigma^2}{\sum_{i=1}^n (x_i - \overline x)^2} \right)$ $\dfrac{\widehat{\varepsilon\,}_1^2 + \cdots + \widehat{\varepsilon\,}_n^2}{\sigma^2} \sim \chi^2_{n-2}.$ $\widehat\beta_1$ and $\widehat\sigma^2 = \dfrac{\widehat{\varepsilon\,}_1^2 + \cdots + \widehat{\varepsilon\,}_n^2}{n-2}$ are independent. From these it follows that $$ \frac{\widehat\beta_1 - \beta_1}{\widehat\sigma\left/\sqrt{\textstyle\sum_{i=1}^n (x_i - \overline x)^2}\right.} \sim t_{n-2}. $$ Therefore $$ \widehat\beta_1 \pm A \frac{\widehat\sigma}{\sqrt{\sum_{i=1}^n (x_i - \overline x)^2}} $$ are the endpoints of a confidence interval for $\beta_1,$ where $A$ is a suitable percentage point of the $t_{n-2}$ distribution. Here I have not included proofs of the points following the three typographical bullets above. Possibly proofs of those have been posted here before.
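A small numerical illustration of these formulas on simulated data (two-sided 5% level) might look like this:

# Manual t statistic and critical value for simple linear regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)

sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((y - y.mean()) * (x - x.mean())) / sxx
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
sigma_hat = np.sqrt(np.sum(resid ** 2) / (n - 2))
se_b1 = sigma_hat / np.sqrt(sxx)

t_stat = b1 / se_b1                    # test of H0: beta1 = 0
t_crit = stats.t.ppf(0.975, df=n - 2)  # two-sided 5% critical value
print(t_stat, t_crit, b1 - t_crit * se_b1, b1 + t_crit * se_b1)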
How can I calculate the critical t-values of a linear regression model?
I'm going to construe the question as meaning: What is the t-statistic and what is its probability distribution? You have \begin{align} Y_i = {} & \beta_0 +\beta_1 x_i + \varepsilon_i \quad \text{for
How can I calculate the critical t-values of a linear regression model? I'm going to construe the question as meaning: What is the t-statistic and what is its probability distribution? You have \begin{align} Y_i = {} & \beta_0 +\beta_1 x_i + \varepsilon_i \quad \text{for } i=1,\ldots,n \\[10pt] & \varepsilon_1,\ldots,\varepsilon_n \sim \operatorname{iid} \operatorname N(0,\sigma^2). \\[10pt] \widehat\beta_1 = {} & \frac{\sum_{i=1}^n (Y_i-\overline Y)(x_i-\overline x)}{\sum_{i=1}^n (x_i - \overline x)^2} \\[10pt] & \text{where } \overline Y = (Y_1+\cdots+Y_n)/n, \\ & \text{ and } \overline x = (x_1 + \cdots + x_n)/n. \\[10pt] \text{and } \overline Y = {} & \widehat\beta_0 + \widehat\beta_1 \overline x. \quad (\text{This defines } \widehat\beta_0.) \\[10pt] \widehat{\varepsilon\,}_i = {} & Y_i -\left( \widehat\beta_0 + \widehat\beta_1 x_i \right). \end{align} Then $\widehat\beta_1 \sim \operatorname N\left( \beta_1, \dfrac{\sigma^2}{\sum_{i=1}^n (x_i - \overline x)^2} \right)$ $\dfrac{\widehat{\varepsilon\,}_1^2 + \cdots + \widehat{\varepsilon\,}_n^2}{\sigma^2} \sim \chi^2_{n-2}.$ $\widehat\beta_1$ and $\widehat\sigma^2 = \dfrac{\widehat{\varepsilon\,}_1^2 + \cdots + \widehat{\varepsilon\,}_n^2}{n-2}$ are independent. From these it follows that $$ \frac{\widehat\beta_1 - \beta_1}{\widehat\sigma/\sqrt{n-2}} \sim t_{n-2}. $$ Therefore $$ \widehat\beta_1 \pm A \frac{\widehat\sigma}{\sqrt n} $$ are the endpoints of a confidence interval for $\beta_1,$ where $A$ is a suitable percentage point of the $t_{n-2}$ distribution. Here I have not included proofs of the points following the three typographical bullets above. Possibly proofs of those have been posted here before.
How can I calculate the critical t-values of a linear regression model? I'm going to construe the question as meaning: What is the t-statistic and what is its probability distribution? You have \begin{align} Y_i = {} & \beta_0 +\beta_1 x_i + \varepsilon_i \quad \text{for
49,743
How can I calculate the critical t-values of a linear regression model?
That answer by Michael Hardy gives you the formulae for manual calculation for simple linear regression. Since you have implemented your model in R the easiest thing would be just to generate the outputs using standard commands in that program: #Fit a linear regression model #Substitute the actual names of your data frame and variables MODEL <- lm(y ~ x1 + ... + xm, data = DATA); #Print structure of model str(MODEL); #Print summary output summary(MODEL); #Print ANOVA table anova(MODEL); #Generate studentised residuals RESID <- rstudent(MODEL); The summary table in the R output contains the coefficient estimates table, which includes the t-statistics and associated p-values for each of the terms in the model.
How can I calculate the critical t-values of a linear regression model?
That answer by Michael Hardy gives you the formulae for manual calculation for simple linear regression. Since you have implemented your model in R the easiest thing would be just to generate the out
How can I calculate the critical t-values of a linear regression model? That answer by Michael Hardy gives you the formulae for manual calculation for simple linear regression. Since you have implemented your model in R the easiest thing would be just to generate the outputs using standard commands in that program: #Fit a linear regression model #Substitute the actual names of your data frame and variables MODEL <- lm(y ~ x1 + ... + xm, data = DATA); #Print structure of model str(MODEL); #Print summary output summary(MODEL); #Print ANOVA table anova(MODEL); #Generate studentised residuals RESID <- resid(MODEL); The summary table in the R output contains the coefficient estimates table, which includes the t-statistics and associated p-values for each of the terms in the model.
How can I calculate the critical t-values of a linear regression model? That answer by Michael Hardy gives you the formulae for manual calculation for simple linear regression. Since you have implemented your model in R the easiest thing would be just to generate the out
49,744
Gauss Original Paper
According to the author of several books on the history of statistics, Stephen M. Stigler (1981), the method of least squares was first described by Carl Friedrich Gauss in 1809 in his book Theoria Motus Corporum Coelestium [Theory of the Motion of Heavenly Bodies]. In the same work Gauss proposed a form of the normal distribution that is similar to what we use today. It seems to be the book you are referring to.
Gauss Original Paper
According to author of several books on history of statistics, Stephen M. Stigler (1981), the method of least squares was first described by Carl Friedrich Gauss in 1809 in his Theoria Modus Corporum
Gauss Original Paper According to author of several books on history of statistics, Stephen M. Stigler (1981), the method of least squares was first described by Carl Friedrich Gauss in 1809 in his Theoria Modus Corporum Coelestium [Theory of the Motion of Heavenly Bodies] book. In the same work Gauss proposed a form of normal distribution that is similar to what we use today. It seems to be the book you are referring to.
Gauss Original Paper According to author of several books on history of statistics, Stephen M. Stigler (1981), the method of least squares was first described by Carl Friedrich Gauss in 1809 in his Theoria Modus Corporum
49,745
Error on mean from measurements made from a distribution with a possible long tail
The behavior that the mean keeps increasing when more samples are added typically means that the mean time is infinite. You could for example be facing a Pareto distribution with $\alpha\le 1$. You may not be facing this exact distribution, but it is the most standard example of a fairly simple distribution with infinite mean, so it is worth taking a look at; and it will have the same behavior as you are describing: it only takes finite values (mostly 1 or slightly above) so the mean of any finite sample will be finite, but it will grow as the number of samples grows. To fortify my argument, I will explain why your "I expect that if I were to continue to run my simulations, the mean would eventually asymptotically reach some value" must be wrong. Let's say your mean after the first $n$ simulations is $\mu_1$, and after the first $2n$ simulations it is $\mu_2<\mu_1$. Then, the mean among simulations $n+1,\ldots,2n$ was $\mu_3=2\mu_2-\mu_1<\mu_2$, meaning we have $\mu_3<\mu_2<\mu_1$. However, since your simulations are independent, $E[\mu_3]=E[\mu_1]$, hence you cannot consistently find this behavior if the mean were finite and well-defined.
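A quick simulation of this behavior, assuming a Pareto distribution with $\alpha = 0.9$ and minimum 1: the running sample mean keeps drifting upward instead of settling down.

# Running mean of a heavy-tailed (infinite-mean) Pareto sample.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.9
x = 1.0 + rng.pareto(alpha, size=1_000_000)   # Pareto with minimum value 1

running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
for k in (10**3, 10**4, 10**5, 10**6):
    print(k, running_mean[k - 1])             # keeps growing with k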
Error on mean from measurements made from a distribution with a possible long tail
The behavior that the mean keeps increasing when more samples are added, typically means that the mean time is infinite. You could for example be facing a pareto distribution with $\alpha\le 1$. You
Error on mean from measurements made from a distribution with a possible long tail The behavior that the mean keeps increasing when more samples are added, typically means that the mean time is infinite. You could for example be facing a pareto distribution with $\alpha\le 1$. You may not be facing this exact distribution, but it is the most standard example of a fairly simple distribution with infinite mean, so it is worth taking a look at; and it will have the same behavior as you are describing: it only takes finite values (mostly 1 or slightly above) so the mean of any finite sample will be finite, but it will grow as the number of samples grows. To fortify my argument, I will explain why your "I expect that if I were to continue to run my simulations, the mean would eventually asymptotically reach some value" must be wrong. Let's say your mean after the first $n$ simulations is $\mu_1$, and after the first $2n$ simulations it is $\mu_2<\mu_1$. Then, the mean among simulations $n+1,\ldots,2n$ was $\mu_3=2\mu_2-\mu_1<\mu_2$, meaning we have $\mu_3<\mu_2<\mu_1$. However, since your simulations are independent, $E[\mu_3]=E[\mu_1]$, hence you cannot consistently find this behavior if the mean was finite and well-defined.
Error on mean from measurements made from a distribution with a possible long tail The behavior that the mean keeps increasing when more samples are added, typically means that the mean time is infinite. You could for example be facing a pareto distribution with $\alpha\le 1$. You
49,746
Error on mean from measurements made from a distribution with a possible long tail
The difference of the empirical means of two sets of samples might well be larger than the standard error of the mean. Asymptotically, the standard error defines a confidence interval to the level 0.67 or so. If the distribution has long tails and the sample mean is unstable, the sample standard deviation should be large. As long as the underlying distribution has a well-defined mean and variance, things should work out more or less. Finally, if the sample mean is unstable, maybe it makes more sense to estimate and report a more robust measure of location, such as the median.
Error on mean from measurements made from a distribution with a possible long tail
The difference of the empirical means of two sets of samples might well be larger than the standard error of the mean. Asymptotically, the standard error defines a confidence interval to the level 0.6
Error on mean from measurements made from a distribution with a possible long tail The difference of the empirical means of two sets of samples might well be larger than the standard error of the mean. Asymptotically, the standard error defines a confidence interval to the level 0.67 or so. If the distribution has long tails and the sample mean is unstable, the sample standard deviation should be large. As long as the underlying distribution has a well-defined mean and variance, things should work out more or less. Finally, if the sample mean is unstable, maybe it makes more sense to estimate and report a more robust measure of location, such as the median.
Error on mean from measurements made from a distribution with a possible long tail The difference of the empirical means of two sets of samples might well be larger than the standard error of the mean. Asymptotically, the standard error defines a confidence interval to the level 0.6
49,747
Error on mean from measurements made from a distribution with a possible long tail
I think the answer to your problem may be more related to your field of study (say, molecular dynamics) than statistics. Statistics deals with probability distributions of random variables, and it does not matter what your distribution is when applying the central limit theorem to make inferences about the population (with large samples). However, since you have mentioned that such simulation does not work in your case, with the mean changing randomly every time, it is quite possible that the 'time spent in the region' may be a random variable with no finite mean at all (for example, a random normal distribution assumes that, even though the variable is random, the variable will tend towards a central value, i.e. the mean; so is the case with other known standard distributions which mimic some natural phenomenon). But if the 'time spent' variable is completely random, whose value is affected by other variables in your model, you cannot expect 'any mean value' whatever time you spend in simulation. For example, the movement of stock prices over a time interval is completely random in nature; it cannot be modeled under any known probability distribution and it cannot have any mean related to it. If your theory, however, indicates that there should be a finite mean for time spent (which I suspect is very unlikely for molecular movements) and if you are sure about it, then you should work on finding out the distribution function and its moment generating functions etc., to arrive at the mean function. Note: If a finite mean is a sure theoretical possibility, another statistically sound method to estimate the mean is MLE (maximum likelihood) estimation (especially in the data science paradigm). But MLE methods, which estimate population parameters, require an assumption about the nature of the population distribution and its density function (which is not required for CLT-based inferential statistics that make inferences from the sample). There could be other techniques available in the data science paradigm which estimate population parameters, but they may rely on quantitative and computationally intensive algorithms.
Error on mean from measurements made from a distribution with a possible long tail
I think the answer to your problem may be more related to your field of study (say, molecular dynamics) than statistics. Statistics deals with probability distribution of random variables and, it does
Error on mean from measurements made from a distribution with a possible long tail I think the answer to your problem may be more related to your field of study (say, molecular dynamics) than statistics. Statistics deals with probability distribution of random variables and, it does not matter what your distribution is, for applying central limit theorem, for making inferences about population (with large samples). However, since you have mentioned that such simulation does not work in your case with mean changing randomly every time, it is quite possible that the 'time spent in the region' may be a random variable with no finite mean at all (for example, a random normal distribution assumes that, even though the variable is random, the variable will tend towards a central value i.e. mean, so is the case with other known standard distributions which mimic some natural phenomenon). But if the 'time spent' variable is completely random, whose value is affected by other variables in your model, you cannot expect 'any mean value' whatever time you spend in simulation. For example, movement of stock prices over a time interval are completely random in nature, and it cannot be modeled under any known probability distribution and it cannot have any mean related to it. If your theory, however, indicates that there should be finite mean for time spent (which I intuit very unlikely for molecular movements) and if you are sure about it, then you should work on finding out the distribution function and its moment generating functions etc., to arrive at the mean function. Note: If finite mean is a sure theoretical possibility, another statistically sound method to estimate the mean is MLE estimation methods (especially in data science paradigm). But, MLE methods, that estimate population parameters, mandatorily require an assumption about the nature of population distribution and its density function (which is not required for CLT based inferential statistics that makes inferences on sample). But there could be other techniques available in data science paradigm, which could estimate population parameters, but they may rely on quantitative and computationally intensive algorithms.
Error on mean from measurements made from a distribution with a possible long tail I think the answer to your problem may be more related to your field of study (say, molecular dynamics) than statistics. Statistics deals with probability distribution of random variables and, it does
49,748
Error on mean from measurements made from a distribution with a possible long tail
EDIT: If the mean is changing significantly when you add more data then you may not have a finite mean, or the distribution may not be unimodal; try hist(c(rnorm(100, 0, 1), rnorm(100, 3, 1))) to see what a bimodal mixture looks like. But I'm guessing you've already made a histogram of your data or have been able to visualize it. So here's what I would do. 1.) Calculate the mean with an increasing number of data samples. The more you can do the better but only do what is feasible. 2.) Look to see if the means follow some type of infinite series that converges. Maybe a geometric series, p-series or something like that. Look for some kind of pattern; $const.\frac{n}{n+1}$ would be a simple one. Even if your mean is moving up and down and it doesn't seem to follow a pattern you may have $(-1)^{i}$ in the sequence or something similar. I hope this is a little closer to what you were looking for. When you say simulations I'm assuming that you are either running Monte Carlo simulations or constructing bootstrap samples. Either way here is what you want to do. 1. run B simulations 2. during each bth simulation draw a random sample that is the size of the data set with replacement. 3. Calculate the mean of the sample drawn. 4. The simulation is complete so now you have an array with B sample means. Take the mean of that array and this is your mean. You can also create credible regions (when you run a simulation confidence intervals no longer apply because your regions are empirical). B <- 1000 n <- length(data) ind <- seq_len(n) means_vector <- double(B) for ( b in seq_len(B)) { rand_samp <- sample(ind, n, replace = TRUE) means_vector[b] <- mean(data[rand_samp]) } mean(means_vector) quantile(means_vector, c(0.025, 0.975)) This is the bootstrap method. It's pretty simple, and there's an R function "boot()" that will make things crazy fast.
Error on mean from measurements made from a distribution with a possible long tail
EDIT: If the mean is changing significantly when you add more data then you may not have a finite mean or not unimodal. hist(c(rnorm(100, 0, 1) + rnorm(100, 3, 1)))`. But I'm guessing you've already m
Error on mean from measurements made from a distribution with a possible long tail EDIT: If the mean is changing significantly when you add more data then you may not have a finite mean or not unimodal. hist(c(rnorm(100, 0, 1) + rnorm(100, 3, 1)))`. But I'm guessing you've already made a histogram of your data or have been able to visualize it. So Here's what I would do. 1.) Calculate the mean with an increasing number of data samples. The more you can do the better but only do what is feasible. 2.) Look to see if the means follow some type of infinite series that converge. Maybe a geometric series, p-series or something like that. If you can find that the mean follows some kind of pattern like $const.\frac{n}{n+1}$ would be a simple one. Even if you're mean is moving up and down and it doesn't seem to follow a pattern you may have $-1^{i}$ in the sequence or something similar. I hope this is a little closer to what you were looking for. When you say simulations I'm assuming that you are either running Monte Carlo simulations or constructing bootstrap samples. Either way here is what you want to do. 1. run b simulations 2. during each bth simulation draw a random sample that is the size of the data set with replacement. 3. Calculate the mean of the sample drawn. 4. The simulation is complete so now you have an array with b sample means. Take the mean of that array and this is your mean. You can also create credible regions (when you run a simulation confidence intervals no longer apply because your regions are empirical). B <- 1000 n <- dim(data)[1] ind <- seq_len(n) means_vector <- double(B) for ( b in seq_len(B)) { rand_samp <- sample(ind, n, replace = TRUE) means_vector[B] <- mean(data[rand_samp]) } mean(means_vector) quantile(means_vector, c(0.25, 0.975)) This is the bootstrap method. It's pretty simple, and there's an R function "boot()" that will make things crazy fast.
Error on mean from measurements made from a distribution with a possible long tail EDIT: If the mean is changing significantly when you add more data then you may not have a finite mean or not unimodal. hist(c(rnorm(100, 0, 1) + rnorm(100, 3, 1)))`. But I'm guessing you've already m
49,749
Ratio of Unbiased Estimators
No, it will not be unbiased (unless the estimator of the denominator has zero variance.) And it will not help if the numerator and denominator are independent. In general, if $\hat{\theta}$ is an unbiased estimator of $\theta$ and $g$ is some nonlinear function, it would be a rare case that $g(\hat{\theta})$ is an unbiased estimator for $g(\theta)$. There is more information in this related post: Test Statistic for a ratio of regression coefficients?.
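A quick simulated example of this bias, with an independent, unbiased numerator and denominator (the distributions below are chosen arbitrarily for illustration): because $x \mapsto 1/x$ is convex on the positive axis, $E[1/\bar{Y}] > 1/E[\bar{Y}]$, so the ratio overshoots the ratio of the true means.

# Simulation: E[Xbar/Ybar] differs from E[X]/E[Y] even with unbiased parts.
import numpy as np

rng = np.random.default_rng(0)
reps, n = 200_000, 10
xbar = rng.normal(loc=2.0, scale=1.0, size=(reps, n)).mean(axis=1)     # mean 2
ybar = rng.gamma(shape=5.0, scale=1.0, size=(reps, n)).mean(axis=1)    # mean 5, positive

print("E[Xbar/Ybar] ~", (xbar / ybar).mean())   # slightly above 0.4
print("E[X]/E[Y]    =", 2.0 / 5.0)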
Ratio of Unbiased Estimators
No, it will not be unbiased (unless the estimator of the denominator have zero variance.) And it will not help if the numerator and denominator are independent. In general, if $\hat{\theta}$ is an un
Ratio of Unbiased Estimators No, it will not be unbiased (unless the estimator of the denominator have zero variance.) And it will not help if the numerator and denominator are independent. In general, if $\hat{\theta}$ is an unbiased estimator of $\theta$ and $g$ is some nonlinear function, it would be a rare case that $g(\hat{\theta})$ is unbiased estimator for $g(\theta)$. There is more information in this related post: Test Statistic for a ratio of regression coefficients?.
Ratio of Unbiased Estimators No, it will not be unbiased (unless the estimator of the denominator have zero variance.) And it will not help if the numerator and denominator are independent. In general, if $\hat{\theta}$ is an un
49,750
Which is the right way to handle imbalanced data in a regression problem?
If I have understood you correctly, the issue here is that you wish to fit your regression in such a way that it performs equally well on both cities, by which you mean that you want to minimise the weighted sum-of-squares, with weights that ensure equal total weight to the data from each city. If that is correct, then this should be a fairly simple problem, where you can use weighted least-squares estimation. For this type of estimation, you have an $n \times n$ diagonal weighting matrix $\mathbf{w}$, and the coefficient estimator is: $$\hat{\boldsymbol{\beta}} = (\mathbf{x}^\text{T} \mathbf{w} \mathbf{x})^{-1} (\mathbf{x}^\text{T} \mathbf{w} \mathbf{y}).$$ Now, suppose that you have $n_\text{NY}$ data points from New York and $n_\text{LA}$ data points from Los Angeles (so that $n= n_\text{NY}+n_\text{LA}$). Then you would use weights $w_\text{NY} = 1/n_\text{NY}$ and $w_\text{LA} = 1/n_\text{LA}$ in your weighting matrix, and this would ensure that the two cities are equally weighted in the aggregate. As a result of this weighting, more weight would be given to data points from the city that has been sampled less. Now, I will also deal with your further complication, which is that you say there is more variance in one city than the other. My suggestion here would be to fit a first-pass model where you use weighted-least-squares, with a weight of unity on one city, and a free parameter to weight the other city. This will give you an estimate of the relative sizes of the error variance for the two cities. You can then take that estimate and apply it as an additional weight when you do your main weighted analysis (as described above). So, for example, if we let $\hat{\delta} \equiv \hat{\sigma}_\text{NY}^2 / \hat{\sigma}_\text{LA}^2$ denote the estimated relative error variance, then we would use the subsequent weightings $w_\text{NY} = 1/n_\text{NY}$ and $w_\text{LA} = \hat{\delta}/n_\text{LA}$ in your weighted analysis. This should allow you to incorporate both the different error variance of the two cities, and also apply your own weighting to force the analysis to give "equal weight" (after adjustment for error variance) to the two cities.
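A rough sketch of this weighting scheme using scikit-learn's sample_weight argument; all names and numbers below are made up for illustration, and delta_hat stands in for the estimate from the first-pass fit described above.

# Weighted least squares with per-city weights via sample_weight.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_ny, n_la = 800, 200
X = rng.normal(size=(n_ny + n_la, 3))
city = np.array(["NY"] * n_ny + ["LA"] * n_la)
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(
    scale=np.where(city == "NY", 1.0, 2.0))   # LA noisier than NY

delta_hat = 1.0 / 4.0   # assumed estimate of sigma2_NY / sigma2_LA from a first pass
w = np.where(city == "NY", 1.0 / n_ny, delta_hat / n_la)

model = LinearRegression().fit(X, y, sample_weight=w)
print(model.coef_, model.intercept_)

The equal-total-weight part comes from the 1/n terms; the delta_hat factor downweights the noisier city so that, after adjusting for error variance, the two cities contribute equally.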
Which is the right way to handle imbalanced data in a regression problem?
If I have understood you correctly, the issue here is that you wish to fit your regression in such a way that it performs equally well on both cities, by which you means that you want to minimise the
Which is the right way to handle imbalanced data in a regression problem? If I have understood you correctly, the issue here is that you wish to fit your regression in such a way that it performs equally well on both cities, by which you means that you want to minimise the weighted sum-of-squares, with weights that ensure equal total weight to the data from each city. If that is correct, then this should be a fairly simple problem, where you can use weighted least-squares estimation. For this type of estimation, you have an $n \times n$ diagonal weighting matrix $\mathbf{w}$, and the coefficient estimator is: $$\hat{\boldsymbol{\beta}} = (\mathbf{x}^\text{T} \mathbf{w} \mathbf{x})^{-1} (\mathbf{x}^\text{T} \mathbf{w} \mathbf{y}).$$ Now, suppose that you have $n_\text{NY}$ data points from New York and $n_\text{LA}$ data points from Los Angeles (so that $n= n_\text{NY}+n_\text{LA}$). Then you would use weights $w_\text{NY} = 1/n_\text{NY}$ and $w_\text{LA} = 1/n_\text{LA}$ in your weighting matrix, and this would ensure that the two cities are equally weighted in the aggregate. As a result of this weighting, more weight would be given to data points from the city that has been sampled less. Now, I will also deal with your further complication, which is that you say there is more variance in one city than the other. My suggestion here would be to fit a first-pass model where you use weighted-least-squares, with a weight of unity on one city, and a free parameter to weight the other city. This will give you an estimate of the relative sizes of the error variance for the two cities. You can then take that estimate and apply it as an additional weight when you do your main weighted analysis (as described above). So, for example, if we let $\hat{\delta} \equiv \hat{\sigma}_\text{NY}^2 / \hat{\sigma}_\text{LA}^2$ denote the estimated relative error variance, then we would use the subsequent weightings $w_\text{NY} = 1/n_\text{NY}$ and $w_\text{LA} = \hat{\delta}/n_\text{LA}$ in your weighted analysis. This should allow you to incorporate both the different error variance of the two cities, and also apply your own weighting to force the analysis to give "equal weight" (after adjustment for error variance) to the two cities.
Which is the right way to handle imbalanced data in a regression problem? If I have understood you correctly, the issue here is that you wish to fit your regression in such a way that it performs equally well on both cities, by which you means that you want to minimise the
49,751
Frequentist vs. Bayesian bias-variance decomposition
Yes, your analysis is correct. For $l_2$, the optimal estimator in the Bayesian setting will always be the posterior mean $\left\langle \theta\right\rangle _{p\left(\theta\mid x\right)}$. In the case of absolute loss we would have the posterior median, and in the discrete case of 0-1 loss we would have the posterior mode. Now, for which one is more mainstream, my gut feeling tells me that the frequentist one is still more popular as it is usually taught first and it's very intuitive. Nevertheless, I believe that probably this gap is narrower than it used to be. The practical differences, now this is a wide topic. The first thing to know is that the more data you have and/or the more vague your prior is, the more the results of the two analyses converge. Now, the practical differences start to emerge when you depart from the 2 things I mentioned before. Especially for the prior. The main advantage and at the same time drawback of the Bayesian analysis is when you start to include your personal beliefs and you make your prior less vague. In that case you may end up with very subjective/questionable/unreliable results or sometimes much more accurate ones! For example, here is how a Bayesian approach (via an informative prior) helped with the location of a plane crash. My suggestion would be to know and understand both and pick the one that best suits the problem. For a closed form of the frequentist minimal MSE, I'm afraid that there is not one for all distributions because of the nature of the problem. What I mean is that, under mild regularity conditions, the variance will always be greater than or equal to the Cramér-Rao Lower Bound, which makes it similar to the Uncertainty principle in physics: the more you try to lower one quantity the other one increases. So, what we usually try to do is to fix one quantity (the bias to 0 usually) and we try to minimise the variance. That said, the only thing that I can think of which is similar to a "closed form" (roughly speaking) is a theorem that says that an estimator $\hat{g}(\textbf{X})$ of $g(\theta)$ attains the Cramér-Rao Lower Bound if and only if we can write the score $U(\textbf{X}) = \frac{\partial \log p_{\textbf{X}}(\textbf{X}\mid\theta)}{\partial \theta}$ in the form $U(\textbf{X}) = \alpha(\theta)\hat{g}(\textbf{X}) +\beta(\theta)$ where $\alpha$ and $\beta$ are arbitrary functions of $\theta$.
Frequentist vs. Bayesian bias-variance decomposition
Yes, your analysis is correct. For $l_2$, the optimal estimator in the Bayesian setting will always the posterior mean $\left\langle \theta\right\rangle _{p\left(\theta\mid x\right)}$. In the case of
Frequentist vs. Bayesian bias-variance decomposition Yes, your analysis is correct. For $l_2$, the optimal estimator in the Bayesian setting will always the posterior mean $\left\langle \theta\right\rangle _{p\left(\theta\mid x\right)}$. In the case of absolute loss we would have the posterior median and in the discrete case of constant loss we would have the posterior mode. Now, for which one is more mainstream, my gut feeling tells me that the frequentist one is still more popular as it is usually taught first and it's very intuitive. Nevertheless, I believe that probably this gap is narrower than it used to be. The practical differences, now this is a wide topic. The first thing to know is that the more data you have and/or the more vague your prior is then the results of the analyses converge. Now, the practical differences start to emerge when you depart from the 2 things I mentioned before. Especially for the prior. The main advantage and at the same time drawback of the Bayesian analysis is when you start to include your personal beliefs and you make your prior less vague. In that case you may end up with very subjective/questionable/unreliable results or sometimes much more accurate ones! For example, here is how a Bayesian approach (via an informative prior) helped the location of the a plane crash. My suggestion would be to know and understand both and pick the one that suits best the problem. For a closed form of the Frequentist Minimial MSE, I'm afraid that there is not one for all distributions because of the nature of the problem. What I mean is that, under mild regularity conditions, the variance will always be higher or equal than the Cramér-Rao Lower Bound which makes it similar to the Uncertainty principle in physics: the more you try to lower one quantity the other one increases. So, what we usually try to do is to fix one quantity (the bias to 0 usually) and we try to minimise the variance. That said, the only thing that I can think of which is similar to a "closed form" (roughly speaking) is a theorem that says that: There is an estimator $\hat{g}(\textbf{X})$ of $g(\theta)$ Which attains the Cramér-Rao Lower Bound if and only if we can write $U(\textbf{X}) = \frac{\partial \log_{p_{\textbf{x}}}(\textbf{X}\vert\theta)}{\partial \theta}$ in the form $U(\textbf{X}) = \alpha(\theta)g(\hat{\theta}) +\beta(\theta)$ where $\alpha$ and $\beta$ are arbitrary functions of $\theta$.
Frequentist vs. Bayesian bias-variance decomposition Yes, your analysis is correct. For $l_2$, the optimal estimator in the Bayesian setting will always the posterior mean $\left\langle \theta\right\rangle _{p\left(\theta\mid x\right)}$. In the case of
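To illustrate the point above about Bayes estimators under different losses, here is a minimal R sketch; everything in it is hypothetical and not part of the original exchange, and the gamma "posterior" is an arbitrary illustrative choice. It checks numerically that the posterior expected squared loss is minimised at the posterior mean while the posterior expected absolute loss is minimised at the posterior median.

# Hypothetical posterior draws; gamma(shape = 3, rate = 2) is purely illustrative.
set.seed(1)
theta_post <- rgamma(1e5, shape = 3, rate = 2)

risk_l2 <- function(a) mean((theta_post - a)^2)   # posterior expected squared loss
risk_l1 <- function(a) mean(abs(theta_post - a))  # posterior expected absolute loss

a_grid <- seq(0, 5, by = 0.001)
a_grid[which.min(sapply(a_grid, risk_l2))]  # close to mean(theta_post) = 1.5
a_grid[which.min(sapply(a_grid, risk_l1))]  # close to median(theta_post)
c(mean(theta_post), median(theta_post))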
49,752
Is the Quadratic Approximation of Log-Likelihood Equivalent to the Normal Approximation of the MLE?
How can I show this is the case? Since you have full specification of the sampling distribution of your observations, you can get the explicit form of the log-likelihood. Treating $\sigma$ as fixed and removing additive constants we have: $$\ell_\mathbf{x}(\theta) = -\frac{1}{2 \sigma^2} \sum_{i=1}^n (x_i - \theta)^2 \quad \quad \quad \text{for all } \theta \in \mathbb{R}.$$ From this function it is possible to derive the score function, the information function and the MLE, which means that you should be able to directly verify the equation by substituting all these items. (I will leave this work as an exercise.) Isn't the score of the MLE always zero? To understand when the score of the MLE is zero, think back to your early calculus classes. When you maximise a continuous differentiable function, this often gives a maximising value at a critical point of the function. But the maximising value is not always at a critical point. In some cases it may be at a boundary point of the function. Now, in the context of maximum-likelihood, it is common for the log-likelihood function to be strictly concave, so that there is a unique MLE at the critical point of the function --- i.e., when the score function equals zero. However, we still need to be careful that this is the case, and it is possible in some cases that the MLE will occur at a boundary point. Remember that there is nothing special about maximum likelihood analysis --- mathematically it is just a standard optimisation problem involving a log-likelihood function, and it is solved via ordinary optimisation techniques. Now, in this particular case, it turns out that the above log-likelihood function is strictly concave (show this by looking at its second-derivative) and so the MLE occurs at the unique critical point of the function. Thus, in this case, it is indeed correct that we find the MLE by setting the score function to zero (and so obviously the score of the MLE is equal to zero in this case). When statisticians deal with maximum-likelihood theory, they often assume "regularity conditions" which are the conditions required to allow the log-likelihood to be expanded into a Taylor expansion, and to ensure that the MLE falls at a critical point. So if you read material on the properties of MLEs, you will often find that they are of the form, "Under such-and-such regularity conditions, such-and-such a result occurs". Do these results depend on the data actually being normally distributed? In these kinds of problems, the log-likelihood function is taken to be the derived from the distribution we think the data follows. So even if the distribution of the data turns out not to be normal, the context of the problem suggest that we think it is normal, so this is the log-likelihood function we use for our analysis. Similarly, we derive the MLE as if the data were normal, even if they turn out not to be. In this particular case, all of the relevant equations you have should follow directly from the assumed form of the log-likelihood function, for all possible outcomes of the data. However, it is important to remember that the MLE is a function of the data, and so its probabilistic behaviour depends on the true distribution of the data, which might not be our assumed form. Thus, if you were to make some probabilistic statement about the MLE (e.g., that it will fall within a certain interval with a certain probability) then this would generally depend on the behaviour of the data, which would depend on its true distribution.
Is the Quadratic Approximation of Log-Likelihood Equivalent to the Normal Approximation of the MLE?
How can I show this is the case? Since you have full specification of the sampling distribution of your observations, you can get the explicit form of the log-likelihood. Treating $\sigma$ as fixed
Is the Quadratic Approximation of Log-Likelihood Equivalent to the Normal Approximation of the MLE? How can I show this is the case? Since you have full specification of the sampling distribution of your observations, you can get the explicit form of the log-likelihood. Treating $\sigma$ as fixed and removing additive constants we have: $$\ell_\mathbf{x}(\theta) = -\frac{1}{2 \sigma^2} \sum_{i=1}^n (x_i - \theta)^2 \quad \quad \quad \text{for all } \theta \in \mathbb{R}.$$ From this function it is possible to derive the score function, the information function and the MLE, which means that you should be able to directly verify the equation by substituting all these items. (I will leave this work as an exercise.) Isn't the score of the MLE always zero? To understand when the score of the MLE is zero, think back to your early calculus classes. When you maximise a continuous differentiable function, this often gives a maximising value at a critical point of the function. But the maximising value is not always at a critical point. In some cases it may be at a boundary point of the function. Now, in the context of maximum-likelihood, it is common for the log-likelihood function to be strictly concave, so that there is a unique MLE at the critical point of the function --- i.e., when the score function equals zero. However, we still need to be careful that this is the case, and it is possible in some cases that the MLE will occur at a boundary point. Remember that there is nothing special about maximum likelihood analysis --- mathematically it is just a standard optimisation problem involving a log-likelihood function, and it is solved via ordinary optimisation techniques. Now, in this particular case, it turns out that the above log-likelihood function is strictly concave (show this by looking at its second-derivative) and so the MLE occurs at the unique critical point of the function. Thus, in this case, it is indeed correct that we find the MLE by setting the score function to zero (and so obviously the score of the MLE is equal to zero in this case). When statisticians deal with maximum-likelihood theory, they often assume "regularity conditions" which are the conditions required to allow the log-likelihood to be expanded into a Taylor expansion, and to ensure that the MLE falls at a critical point. So if you read material on the properties of MLEs, you will often find that they are of the form, "Under such-and-such regularity conditions, such-and-such a result occurs". Do these results depend on the data actually being normally distributed? In these kinds of problems, the log-likelihood function is taken to be the derived from the distribution we think the data follows. So even if the distribution of the data turns out not to be normal, the context of the problem suggest that we think it is normal, so this is the log-likelihood function we use for our analysis. Similarly, we derive the MLE as if the data were normal, even if they turn out not to be. In this particular case, all of the relevant equations you have should follow directly from the assumed form of the log-likelihood function, for all possible outcomes of the data. However, it is important to remember that the MLE is a function of the data, and so its probabilistic behaviour depends on the true distribution of the data, which might not be our assumed form. 
Thus, if you were to make some probabilistic statement about the MLE (e.g., that it will fall within a certain interval with a certain probability) then this would generally depend on the behaviour of the data, which would depend on its true distribution.
Is the Quadratic Approximation of Log-Likelihood Equivalent to the Normal Approximation of the MLE? How can I show this is the case? Since you have full specification of the sampling distribution of your observations, you can get the explicit form of the log-likelihood. Treating $\sigma$ as fixed
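As a complement to the answer above, here is a small R sketch (simulated data, illustrative names only) showing that for the normal model with known $\sigma$ the log-likelihood is exactly quadratic, so the expansion around the MLE, where the score is zero, reproduces it without error.

set.seed(42)
sigma <- 2
x <- rnorm(20, mean = 5, sd = sigma)

loglik <- function(theta) -sum((x - theta)^2) / (2 * sigma^2)  # additive constants dropped
mle  <- mean(x)                   # unique critical point of the score
info <- length(x) / sigma^2       # observed information (constant in theta here)

theta_grid <- seq(3, 7, by = 0.1)
quad <- loglik(mle) - 0.5 * info * (theta_grid - mle)^2        # the score term vanishes at the MLE
max(abs(sapply(theta_grid, loglik) - quad))                    # essentially zero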
49,753
Calculating confidence intervals of marginal means in linear mixed models
I have just checked your example (using the just released version 0.8.0 of ggeffects), however, instead of using effect() or emmeans() directly, I use the functions from ggeffects, which actually wrap around these functions (ggeffect() and ggemmeans()). ## effect c4 <- ggeffect(fit, terms = term) c4 <- c4[,4:5] ## emmeans,'kenward-roger' c5 <- ggemmeans(fit, terms = term) c5 <- c5[,4:5] Here, the CI's are similar. Also, after exactly running your examples, the plot is identical, with no deviation for effects. ## effect c4 <- with(effect(term,fit),cbind(lower,upper)) ## emmeans,'kenward-roger' c5 <- with(summary(emmeans(fit,spec=term)),cbind(lower.CL,upper.CL)) Have you set some options for the effects-package that might change the ci-level? It seems to me that all these methods produce confidence intervals, not prediction intervals. If you use type = "re" in ggpredict(), the intervals are much larger (except for this model, because the variance in the random effects is almost zero).
Calculating confidence intervals of marginal means in linear mixed models
I have just checked your example (using the just released version 0.8.0 of ggeffects), however, instead of using effect() or emmeans() directly, I use the functions from ggeffects, which actually wrap
Calculating confidence intervals of marginal means in linear mixed models I have just checked your example (using the just released version 0.8.0 of ggeffects), however, instead of using effect() or emmeans() directly, I use the functions from ggeffects, which actually wrap around these functions (ggeffect() and ggemmeans()). ## effect c4 <- ggeffect(fit, terms = term) c4 <- c4[,4:5] ## emmeans,'kenward-roger' c5 <- ggemmeans(fit, terms = term) c5 <- c5[,4:5] Here, the CI's are similar. Also, after exactly running your examples, the plot is identical, with no deviation for effects. ## effect c4 <- with(effect(term,fit),cbind(lower,upper)) ## emmeans,'kenward-roger' c5 <- with(summary(emmeans(fit,spec=term)),cbind(lower.CL,upper.CL)) Have you set some options for the effects-package that might change the ci-level? It seems to me that all these methods produce confidence intervals, not prediction intervals. If you use type = "re" in ggpredict(), the intervals are much larger (except for this model, because the variance in the random effects is almost zero).
Calculating confidence intervals of marginal means in linear mixed models I have just checked your example (using the just released version 0.8.0 of ggeffects), however, instead of using effect() or emmeans() directly, I use the functions from ggeffects, which actually wrap
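For readers who want to reproduce a comparison like the one above without the poster's data, the following hedged R sketch builds a toy lmer model and requests intervals from both packages. It assumes lme4, effects and emmeans (plus pbkrtest for Kenward-Roger) are installed; exact output column names can differ across package versions, so treat this as an outline rather than a recipe.

library(lme4); library(effects); library(emmeans)
set.seed(1)
d <- data.frame(g = factor(rep(1:10, each = 10)),
                x = factor(rep(c("a", "b"), 50)))
d$y <- 1 + 0.5 * (d$x == "b") + rnorm(10)[d$g] + rnorm(100)   # toy data with a group effect

fit <- lmer(y ~ x + (1 | g), data = d)
summary(effect("x", fit))                              # effects-package intervals
summary(emmeans(fit, ~ x, lmer.df = "kenward-roger"))  # Kenward-Roger intervals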
49,754
What is Dimension of basis in splines
The easiest way of thinking of a basis is as the building blocks or the set of prototypes used to create something more complex. So when we set $k$ to a low number we indirectly force a relatively simple overall shape. Putting the notion of thin plate regression splines aside for a moment, one of the most trivial bases we use is the polynomial basis. Assuming that our predictor $x$ spans $[-1,1]$, if we use a polynomial basis of degree 4 we have something like the following: x = seq(-1,1, by=0.01) matplot(x, poly(x, degree = 4, raw = TRUE), t='l', lwd = 3, lty=1, panel.first=grid()); legend("bottomright",col = 1:4, lwd=3, legend = paste0("Degree:", 1:4)) This means we allow the behaviour of our predictor $x$ to vary according to any valid linear combination of these four basis functions. Great, right? But maybe we want to ensure we have periodic behaviour. Say, our $x$ represents the day of the year of something that clearly is periodic. The previous basis shown, the polynomial one, is not constrained to "start and finish" with the same value. The good thing is that a certain mathematical giant named Joseph Fourier turned the concept of representing a periodic function (or signal) as a weighted sum of sines and cosines into a field of its own (called Fourier Analysis). The important bit here is that we can thus represent a periodic signal using a series of sines and cosines and be certain that any combination of them will be periodic. x = seq(1,365, by=1) plot( fda::create.fourier.basis(range(x), nbasis = 3), lwd= 3, lty=1) grid(ny = 12) legend("bottomright",col = 1:4, lwd=3, legend = paste0("N-th basis:", 1:3)) And we can easily see here that all basis functions shown start and finish on the same value. This brings us to $k$. While the basis shown is great in terms of periodicity, we cannot represent a signal/function that oscillates more than twice in this yearly period. The "black prototype" allows us to move the baseline of the signal up or down, the "green prototype" allows us to have a slow year-long oscillation pattern and the "red prototype" allows us to have a six-month counter-balancing oscillation pattern. But what if we have monthly periodicity? Using this proposed basis with just three basis functions we cannot model it. Simple as that. If we increase $k$ we can have fast oscillations, i.e. model more frequent changes. For example $k = 7$ would allow us to use a more extensive collection of building blocks like this: (Note that this is effectively what s does when we define the basis type to be a cyclic spline s(..., bs = 'cc').) OK, so finally back to the thin-plate smoothing spline, which is what mgcv::gam uses by default. The thin-plate smoothing spline estimates a smoothing function $f$ that minimizes a penalised least squares criterion $g$: $g(x,y,\lambda) = \sum^N_{i=1} (y_i - f(x_i))^2 + \lambda J(f)$ where $x$ is our explanatory variable, $y$ is our response, $\lambda$ is our penalisation parameter, $N$ is the number of available data points and $J$ is a functional that penalises how complex/wiggly the function $f$ is. Now, without going into any gory details, $k$ controls the number of building blocks for $f$ and thus is another way to make the overall fit simpler/less complex. The aptly named 2003 paper "Thin plate regression splines" by S. Wood (the lead developer of mgcv) goes through the exact mechanics of how low-rank (i.e. not very large $k$) thin plate splines can be used as smoothers. 
Specifically for mgcv::gam, as Gavin mentioned (+1 if you have not already), $k$ is an upper limit and not necessarily what will be used. So to conclude, the dimension of the basis means the number of building blocks we are allowed to use. Smaller numbers constrain us to simpler variational patterns, while higher numbers allow us to account for finer details. And this is the main methodological burden that GAM estimation tries to alleviate; we want to allow for the right amount of complexity: too little (too small $k$) and we under-fit, missing important information; too much (too high $k$) and we over-fit, finding patterns that are not really there.
What is Dimension of basis in splines
The easiest way of thinking of a basis is like the building blocks or the set of prototypes used to create something more complex. So when we are setting $k$ to a low number we indirectly force a rela
What is Dimension of basis in splines The easiest way of thinking of a basis is like the building blocks or the set of prototypes used to create something more complex. So when we are setting $k$ to a low number we indirectly force a relatively simple overall shape. Putting the notion of thing plate regression splines aside for a moment, one of the most trivial basis we use are the polynomial basis. Assuming that our predictor $x$ spans from $[-1,1]$ if we use a polynomial basis of degree 4 we have something like the following: x = seq(-1,1, by=0.01) matplot(x, poly(x, degree = 4, raw = TRUE), t='l', lwd = 3, lty=1, panel.first=grid()); legend("bottomright",col = 1:4, lwd=3, legend = paste0("Degree:", 1:4)) Which means we can allow of the behaviour of our predictor $x$ to vary according to any valid linear combination of these four basis functions. Great, right? But maybe we want to ensure we have periodic behaviour. Say, our $x$ represents the day of the year of something that clearly is periodic. The previous basis shown, the polynomial one, is not constrained to "start and finish" with the same value. Good thing is that a certain mathematical giant named Joseph Fourier made the concept of representing a periodic function (or signal) into the weighted sum of sines and cosines into a field of its own (called Fourier Analysis). The important bit here is that we can thus represent a periodic signal using a series of sines and cosines and thus be certain that any combination of them will be period. x = seq(1,365, by=1) plot( fda::create.fourier.basis(range(x), nbasis = 3), lwd= 3, lty=1) grid(ny = 12) legend("bottomright",col = 1:4, lwd=3, legend = paste0("N-th basis:", 1:3)) And we can easily see here that all basis shown will start and finish on the same value. This brings us to $k$. While the basis shown is great in terms of periodicity, we cannot represent a signal/function that oscillates more than twice in this yearly period. The "black prototype" will allows to move the baseline of the signal up or down, the "green prototype" will allows to have a slow year-long oscillation pattern and the "red prototype" will allows to have a six-month counter-balancing oscillation pattern. But what if we have monthly periodicity? Using this proposed basis with just three basis function we cannot model it. Simple as that. If we increase $k$ we can have fast oscillations, i.e. model more frequent changes. For example $k = 7$ would allows us to use a more extensive collection of building blocks like this: (Note that this effectively what s does when we define the basis type to be a cyclic spline s(..., bs = 'cc').) OK, so finally back to the thin-plate smoothing spline which is what mgcv::gam uses by default. The thin-plate smoothing spline estimates a smoothing function $f$ that minimizes a penalised least squares function $g$: $g(x,y,\lambda) = \sum^N_{i=1} (y_i - f(x_i)^2) + \lambda J(f)$ where $x$ is our explanatory variable, $y$ is our response, $\lambda$ is our penalisation parameter, $N$ is the number of available data-points and $J$ is a function that penalised how complex/wiggly the function $f$ is. Now, without going to any gory details, $k$ controls the number of building blocks for $f$ and thus is another way to make the overall fit be simpler/less complex. The aptly named 2003 paper "Thin plate regression splines" by S. Wood (the lead developer of mgcv), goes through the exact mechanics of how low-rank (i.e. not very large $k$) thin plate spline can be used as smoothers. 
Specifically for mgcv::gam, as Gavin mentioned (+1 if you have not already), $k$ is an upper limit and not necessary what will be used. So to conclude, the dimension of basis function means the number of building blocks we are allowed to used. Smaller number constrain us to simpler variational patterns while higher numbers allows to account for finer details. And this is the main methodological burden that GAM estimation tries to alleviate; we want to allow for the right amount of complexity; too little (too small $k$) and we under-fit missing important information, too high (too high $k$) and we over-fit finding patterns that are not really there.
What is Dimension of basis in splines The easiest way of thinking of a basis is like the building blocks or the set of prototypes used to create something more complex. So when we are setting $k$ to a low number we indirectly force a rela
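To make the role of $k$ concrete, here is a short R sketch (simulated data, not from the original post) fitting the same mgcv::gam smooth with a small and a generous basis dimension; the effective degrees of freedom show that $k$ only caps the available wiggliness.

library(mgcv)
set.seed(2)
x <- seq(0, 1, length.out = 200)
y <- sin(2 * pi * 3 * x) + rnorm(200, sd = 0.3)   # three full cycles plus noise

fit_small <- gam(y ~ s(x, k = 4))    # too few building blocks for three cycles
fit_large <- gam(y ~ s(x, k = 20))   # generous upper limit; the penalty picks the edf
summary(fit_small)$edf
summary(fit_large)$edf
AIC(fit_small, fit_large)            # the under-fitted model is clearly worse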
49,755
What is Dimension of basis in splines
The dimension of the basis is the number of basis functions in the basis. Typically, k reflects how many basis functions are created initially, but identifiability constraints may lower the number of basis functions per smooth that are actually used to fit the model. k sets some upper limit on the number of basis functions, but typically some of the basis functions will be removed when identifiability constraints are applied. For example, the default k with the default smooth type (thin plate regression splines) is 10 for a univariate smooth. However, as one of those basis functions is a flat function, the model becomes unidentifiable when there is an intercept in the model; this horizontal function is the same thing as the model intercept, so you could add any value to the coefficient for the intercept and subtract the same value from the coefficient for the horizontal basis function and get the same fit but via a different model (the coefficients are different). As such, the flat/horizontal basis function is removed from the basis, resulting in 9 basis functions used to fit the model.
What is Dimension of basis in splines
The dimension of the basis is the number of basis function in the basis. Typically, k reflects how many basis functions are created initially, but identifiability constraints may lower the number of b
What is Dimension of basis in splines The dimension of the basis is the number of basis function in the basis. Typically, k reflects how many basis functions are created initially, but identifiability constraints may lower the number of basis functions per smooth that are actually used to fit the model. k sets some upper limit on the number of basis functions, but typically some of the basis functions will be removed when identifiability constraint are applied. For example, the default k with the default smooth type (Thin plate regression splines) is 10 for a univariate smooth. However, as one of those basis functions is a flat function, the model become unidentifiable when there is an intercept in the model; this horizontal function is the same thing as the model intercept, so you could add any value to the coef for the intercept and subtract the same value from the coef for the horizontal basis function and get the same fit but via a different model (the coefs are different). As such, the flat/horizontal basis function is removed from the basis, resulting in 9 basis functions used to fit the model.
What is Dimension of basis in splines The dimension of the basis is the number of basis function in the basis. Typically, k reflects how many basis functions are created initially, but identifiability constraints may lower the number of b
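The counting described above can be checked directly in R; the sketch below (toy data, illustrative names) fits a default thin-plate smooth with k = 10 and confirms that one basis function is absorbed by the intercept, leaving 9 coefficients for the smooth.

library(mgcv)
set.seed(3)
dat <- data.frame(x = runif(100))
dat$y <- sin(2 * pi * dat$x) + rnorm(100, sd = 0.1)

m <- gam(y ~ s(x), data = dat)   # default bs = "tp", k = 10
length(coef(m))                  # 10 coefficients: 1 intercept + 9 for the smooth
m$smooth[[1]]$bs.dim             # the requested basis dimension (10)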
49,756
Does cross validation say anything about parsimony?
First of all, I agree with @HEITZ: if all we have is equal cross validation performance, then that's all we have and it does not allow further distinction. Also, one model may be just as badly underfit as the other is overfit... As usual, this is where external (independent) knowledge about the situation at hand helps a lot, e.g. in judging what is going on. I'm thinking of, say, a discriminative classifier vs. a one-class classifier that both yield the same predictions and thus the same error/performance measure. The one-class classifier is more complex - but the decision one-class classification vs. discriminative classifier should anyway be based on the nature of the data/application. And yet, there may be situations where one concludes that one-class classification would be needed but the available data demands a more restrictive model (with important differences in the CV-measured performance). However, I'd like to point out that it is possible to measure some symptoms of overfitting (namely, instability of predictions under exchange of a few training cases) by iterated/repeated cross validation even if the chosen error measure per se does not penalize complexity. Therefore, I reserve the right to not believe that the complex model is not overfit unless results are presented that clearly show that possible overfitting was checked and found to be absent, and that exclude the possibility of reporting a lucky cross validation split (particularly if the complex model has hyperparameters that are aggressively optimized). On the other hand, resampling validation cannot guard against drift in the underlying population - and such drift may either need a more complex model (the human brain can correct for such drift in an amazing fashion!) or a less complex model (that doesn't overfit, so data drifting slightly out of the training space will not be subject to totally weird predictions). Secondly, I'd like to argue that the usual optimization approaches we typically take from numeric optimization are meant for rather different situations than what we have here. Searching for the (= one) best model may or may not be appropriate. A situation with a true global optimum may be expected when optimizing the complexity of essentially the same model (say, the ridge parameter) - a situation that may be described as selecting one of a continuous family of models. But if the compared models span a variety of model families, I don't think a finding that a number of model families can achieve the same performance should be too surprising at all. In fact, if I found a logistic regression, LDA and linear SVM to perform equally well, the conclusion would be "linear classification works" rather than thinking about how these models differ in their stability depending on the training cases. And still, I don't see why a non-linear model shouldn't perform as well if sufficient training data is available. From a philosophical point of view, I'd say there's nothing that keeps nature from having tons of influencing factors and interactions between them. So parsimony doesn't make the model more true, it just guards against overfitting. So iff the model validation is done properly on independent cases, we don't need this safeguard, as overfitting is suitably penalized. In practice, however, cross validation frequently doesn't achieve as independent a splitting as we'd like to believe - so an additional safeguard is a very sensible precaution in practice. In theory, there is no difference between theory and practice. 
In practice, there is. In that sense, I think that Occam's Razor is more important for us (modeling folk) than for the models: we humans are known to be notoriously bad at detecting overfitting. I'm an optimist, though, and think that detecting overfitting can be learned. :-D Parsimony also allows us to construct predictive models that achieve reasonable prediction based on a few input variates (possibly easier assessment), and that are possibly easier to study, say, in terms of which parts of the input and model space are actually populated by our data. In addition, such models may be more easily correlated with (or augmented by) independent/external knowledge.
Does cross validation say anything about parsimony?
First of all, I agree with @HEITZ: if all we have is equal cross validation performance, then that's all we have and it does not allow further distinction. Also, one model may be just as badly underfi
Does cross validation say anything about parsimony? First of all, I agree with @HEITZ: if all we have is equal cross validation performance, then that's all we have and it does not allow further distinction. Also, one model may be just as badly underfit as the other is overfit... As usual, this is where external (independent) knowledge about the situation at hand helps a lot, e.g. in judging what is going on. I'm thinking of, say, a discriminative classifer vs. a one-class classifer that both yield the same predictions and thus the same error/performance measure. The one-class classifier is more complex - but the decision one-class classification vs. discriminative classifier should anyways be based on the nature of the data/application. And yet, there may be situations where one concludes that one-class classification would be needed but the available data does need a more restictive model (with important differences in the CV-measured performance). However, I'd like to point out that it is possible to measure some symptoms of overfitting (namely, instability of predictions based on exchanging a few training cases) by iterated/repeated cross validation even if the chosen error measure per se does not penalize complexity. Therefore, I reserve the right to not believe that the complex model is not overfit unless results are presented that clearly show that possible overfitting was checked and found to be absent and that excludes the possibility of reporting a lucky cross validation split (particularly if the complex model has hyperparameters that are aggressively optimized). On the other hand, resampling validation cannot guard against drift in the underlying population - and such drift may either need a more complex model (human brain can correct for such drift in an amazing fashion!) or less complex model (that doesn't overfitt, so data drifting slightly out of the training space will not be subject to totally weird predictions). Secondly, I'd like to argue that the usual optimization approaches we typically take from numeric optimization is meant for rather different situations than what we have here. Searching the (=one) best model may or may not be appropriate. A situation with a true global optimum may be expected when optimizing the complexity of essentially the same model (say, the ridge parameter). Thus, a situation that may be described as selecting one of a continuous family of models. But if the compared models span a variety of model families, I don't think a finding that a number of model families can achieve the same performance should be too surprising at all. In fact, if I found a logistic regression, LDA and linear SVM to perform equally well, the conclusion would be "linear classification works" rather than thinking how these models differ in their stability depending on the training cases. And still, I don't see why a non-linear model shouldn't perform as well if sufficient training data is available. From a philosophical point of view, I'd say there's nothing that keeps nature from having tons of influencing factors and interactions between them. So parsimony doesn't make the model more true, it just guards against overfitting. So iff the model valiation is done properly on independent cases, we don't need this safeguard as overfitting is suitably penalized. 
In practice, however, cross validation frequently doesn't achieve as independent splitting as we'd like to believe - so an additional safeguard is a very sensible precaution in practice In theory, there is no difference beteween theory and practice. In practice, there is. In that sense, I think that Occam's Razor is more important for us (modeling folk) than for the models: we humans are known to be notoriously bad at detecting overfitting. I'm an optimist, though, and think that detecting overfitting can be learned. :-D it also allows us to construct predictive models that achieve reasonable prediction based on a few input variates (possibly easier assessment), and that are possibly easier to study, say, in terms of what part of the input and model space are actually populated by our data. In addition, such models may be more easily correlated (or augmented) by independent/external knowledge.
Does cross validation say anything about parsimony? First of all, I agree with @HEITZ: if all we have is equal cross validation performance, then that's all we have and it does not allow further distinction. Also, one model may be just as badly underfi
49,757
Does cross validation say anything about parsimony?
In the absence of a controlled experiment, it is impossible to favor one of two models with equivalent fit, if parsimony is disregarded. It may be more interesting to interpret the more complex model, but that is story telling. From an agnostic point of view, both yield the same picture of reality.
Does cross validation say anything about parsimony?
In the absence of a controlled experiment, it is impossible to favor one of two models with equivalent fit, if parsimony is disregarded. It may be more interesting to interpret the more complex model,
Does cross validation say anything about parsimony? In the absence of a controlled experiment, it is impossible to favor one of two models with equivalent fit, if parsimony is disregarded. It may be more interesting to interpret the more complex model, but that is story telling. From an agnostic point of view, both yield the same picture of reality.
Does cross validation say anything about parsimony? In the absence of a controlled experiment, it is impossible to favor one of two models with equivalent fit, if parsimony is disregarded. It may be more interesting to interpret the more complex model,
49,758
Two-sample t-test with one the samples having only 1 element?
Here is an example of a two sample t test of $H_0: \mu_1 = \mu_2$ against $H_1: \mu_1 \ne \mu_2.$ As you specify, the first sample consists of five observations from a normal distribution and the second sample has only one observation from a normal distribution with a different mean (and the same variance). [Simulation in R.] set.seed(1122) x1 = rnorm(5, 100, 5); x2 = rnorm(1, 70, 5) I chose the means relatively far apart in order to give an example in which the null hypothesis is rejected. (See the Comment by @Glen_b.) The simulated data are: x1; x2 [1] 92.88224 102.24980 99.64795 109.80224 97.24648 [1] 74.83476 t.test(x1, x2, var.eq = T) Two Sample t-test data: x1 and x2 t = 3.6985, df = 4, p-value = 0.02086 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: 6.364756 44.697198 sample estimates: mean of x mean of y 100.36574 74.83476 Notes: (1) The assumption of equal variances is important because that provides the only information available about the variance of the single-observation second sample. In fact, it is not possible to do a Welch (separate-variances) t test with such data. Without the parameter var.eq=T, R will not perform a two-sample t.test. t.test(x1, x2) Error in t.test.default(x1, x2) : not enough 'y' observations (2) Another, somewhat different, approach would be to make a 95% prediction interval for additional observations from the population associated with the sample of size 5. Such an interval would be $\bar X_1 \pm 2.7764\,S_1\sqrt{1 + \frac{1}{5}},$ where $\bar X_1$ and $S_1$ are the sample mean and SD, respectively, of that sample. Then you might be suspicious whether the additional value is from the same population as the sample of size five, if it does not fall within the prediction interval. qt(.975, 4) [1] 2.776445
Two-sample t-test with one the samples having only 1 element?
Here is an example of a two sample t test of $H_0: \mu_1 = \mu_2$ against $H_1: \mu_1 \ne \mu_2.$ As you specify, the first sample consists of five observations from a normal distribution and the seco
Two-sample t-test with one the samples having only 1 element? Here is an example of a two sample t test of $H_0: \mu_1 = \mu_2$ against $H_1: \mu_1 \ne \mu_2.$ As you specify, the first sample consists of five observations from a normal distribution and the second sample has only one observation from a normal distribution with a different mean (and the same variance). [Simulation in R.] set.seed(1122) x1 = rnorm(5, 100, 5); x2 = rnorm(1, 70, 5) I chose the means relatively far apart in order to give an example in which the null hypothesis is rejected. (See the Comment by @Glen_b.) The simulated data are: x1; x2 [1] 92.88224 102.24980 99.64795 109.80224 97.24648 [1] 74.83476 t.test(x1, x2, var.eq = T) Two Sample t-test data: x1 and x2 t = 3.6985, df = 4, p-value = 0.02086 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: 6.364756 44.697198 sample estimates: mean of x mean of y 100.36574 74.83476 Notes: (1) The assumption of equal variances is important because that provides the only information available about the variance of the single-observation second sample. In fact, it is not possible to do a Welch (separate-variances) t test with such data. Without the parameter var.eq=T, R will not perform a two-sample t.test. t.test(x1, x2) Error in t.test.default(x1, x2) : not enough 'y' observations (2) Another, somewhat different, approach would be to make a 95% prediction interval for additional observations from the population associated with the sample of size 5. Such an interval would be $\bar X_1 \pm 2.7764\,S_1\sqrt{1 + \frac{1}{5}},$ where $\bar X_1$ and $S_1$ are the sample mean and SD, respectively, of that sample. Then you might be suspicious whether the additional value is from the same population as the sample of size five, if it does not fall within the prediction interval. qt(.975, 4) [1] 2.776445
Two-sample t-test with one the samples having only 1 element? Here is an example of a two sample t test of $H_0: \mu_1 = \mu_2$ against $H_1: \mu_1 \ne \mu_2.$ As you specify, the first sample consists of five observations from a normal distribution and the seco
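As a follow-up to note (2) above, this short R sketch (reusing the simulated x1 and x2 from the answer) computes the 95% prediction interval explicitly so the single observation can be checked against it.

m1 <- mean(x1); s1 <- sd(x1)
half_width <- qt(0.975, df = 4) * s1 * sqrt(1 + 1/5)
c(lower = m1 - half_width, upper = m1 + half_width)
x2   # 74.83 lies below the interval, casting doubt on a common population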
49,759
Finding MLE of $p$ where $X_1\sim\text{Bernoulli}(p)$ and $X_2\sim\text{Bernoulli}(3p)$
We have $$\frac{\partial^2\mathscr{L}}{\partial p^2}=-\frac{x_1}{p^2}-\frac{1-x_1}{(1-p)^2}-\frac{x_2}{p^2}-\frac{9(1-x_2)}{(1-3p)^2}$$ which is clearly negative for any $p\in[0,1/3]$ so $\hat{p}$ is a global maximum.
Finding MLE of $p$ where $X_1\sim\text{Bernoulli}(p)$ and $X_2\sim\text{Bernoulli}(3p)$
We have $$\frac{\partial^2\mathscr{L}}{\partial p^2}=-\frac{x_1}{p^2}-\frac{1-x_1}{(1-p)^2}-\frac{x_2}{p^2}-\frac{9(1-x_2)}{(1-3p)^2}$$ which is clearly negative for any $p\in[0,1/3]$ so $\hat{p}$ is
Finding MLE of $p$ where $X_1\sim\text{Bernoulli}(p)$ and $X_2\sim\text{Bernoulli}(3p)$ We have $$\frac{\partial^2\mathscr{L}}{\partial p^2}=-\frac{x_1}{p^2}-\frac{1-x_1}{(1-p)^2}-\frac{x_2}{p^2}-\frac{9(1-x_2)}{(1-3p)^2}$$ which is clearly negative for any $p\in[0,1/3]$ so $\hat{p}$ is a global maximum.
Finding MLE of $p$ where $X_1\sim\text{Bernoulli}(p)$ and $X_2\sim\text{Bernoulli}(3p)$ We have $$\frac{\partial^2\mathscr{L}}{\partial p^2}=-\frac{x_1}{p^2}-\frac{1-x_1}{(1-p)^2}-\frac{x_2}{p^2}-\frac{9(1-x_2)}{(1-3p)^2}$$ which is clearly negative for any $p\in[0,1/3]$ so $\hat{p}$ is
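A quick numerical check of the claim above (helper names are hypothetical): the second derivative is negative throughout (0, 1/3) for every possible (x1, x2) pattern, so any interior critical point is indeed a maximum.

d2 <- function(p, x1, x2)
  -x1 / p^2 - (1 - x1) / (1 - p)^2 - x2 / p^2 - 9 * (1 - x2) / (1 - 3 * p)^2

p <- seq(0.01, 0.32, by = 0.01)
patterns <- list(c(0, 0), c(0, 1), c(1, 0), c(1, 1))
sapply(patterns, function(x) max(d2(p, x[1], x[2])))   # all strictly negative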
49,760
Finding MLE of $p$ where $X_1\sim\text{Bernoulli}(p)$ and $X_2\sim\text{Bernoulli}(3p)$
Though your final answer is correct, I think the derivation of the MLE is much more simpler. Given $x_1,x_2$, the likelihood function is \begin{align} L(p\mid x_1,x_2)&=3^{x_2}p^{x_1+x_2}(1-p)^{1-x_1}(1-3p)^{1-x_2}\mathbf1_{x_1,x_2\in\{0,1\}} \\&=\begin{cases}3p(1-p)&,\text{ if }x_1=0,x_2=1 \\ p(1-3p)&,\text{ if }x_1=1,x_2=0 \\ (1-p)(1-3p)&,\text{ if }x_1=x_2=0 \\ 3p^2&,\text{ if }x_1=x_2=1 \qquad\qquad,\,0\le p\le\frac{1}{3} \end{cases} \end{align} Study the cases separately: Observe that the curve $3p(1-p)$ is a parabola which is increasing in $p\in[0,1/3]$, so its maximum is reached at the boundary point $p=1/3$. Rewrite $p(1-3p)$ as $\frac{1}{12}-3(p-\frac{1}{6})^2$, so that it is maximized at $p=1/6$. It is obvious that $(1-p)(1-3p)$ is maximized for the minimum possible value of $p$, and $3p^2$ is maximized for the maximum possible value of $p$. So the maximization again occurs at the boundary points. So maximizing $L(p\mid x_1,x_2)$ in each case subject to the constraint $0\le p\le 1/3$ yields the maximum likelihood estimator of $p$ : \begin{align} \hat p(x_1,x_2)&=\begin{cases}\frac{1}{3}&,\text{ if }(x_1,x_2)=(0,1)\text{ or }(x_1,x_2)=(1,1) \\ \frac{1}{6}&,\text{ if }(x_1,x_2)=(1,0) \\ 0&,\text{ if }(x_1,x_2)=(0,0) \end{cases} \end{align} This agrees with the functional form of $\hat p$ you had obtained. And I don't think it is required to express the MLE in a functional form. It is perfectly okay to give the answer as above.
Finding MLE of $p$ where $X_1\sim\text{Bernoulli}(p)$ and $X_2\sim\text{Bernoulli}(3p)$
Though your final answer is correct, I think the derivation of the MLE is much more simpler. Given $x_1,x_2$, the likelihood function is \begin{align} L(p\mid x_1,x_2)&=3^{x_2}p^{x_1+x_2}(1-p)^{1-x_1}
Finding MLE of $p$ where $X_1\sim\text{Bernoulli}(p)$ and $X_2\sim\text{Bernoulli}(3p)$ Though your final answer is correct, I think the derivation of the MLE is much more simpler. Given $x_1,x_2$, the likelihood function is \begin{align} L(p\mid x_1,x_2)&=3^{x_2}p^{x_1+x_2}(1-p)^{1-x_1}(1-3p)^{1-x_2}\mathbf1_{x_1,x_2\in\{0,1\}} \\&=\begin{cases}3p(1-p)&,\text{ if }x_1=0,x_2=1 \\ p(1-3p)&,\text{ if }x_1=1,x_2=0 \\ (1-p)(1-3p)&,\text{ if }x_1=x_2=0 \\ 3p^2&,\text{ if }x_1=x_2=1 \qquad\qquad,\,0\le p\le\frac{1}{3} \end{cases} \end{align} Study the cases separately: Observe that the curve $3p(1-p)$ is a parabola which is increasing in $p\in[0,1/3]$, so its maximum is reached at the boundary point $p=1/3$. Rewrite $p(1-3p)$ as $\frac{1}{12}-3(p-\frac{1}{6})^2$, so that it is maximized at $p=1/6$. It is obvious that $(1-p)(1-3p)$ is maximized for the minimum possible value of $p$, and $3p^2$ is maximized for the maximum possible value of $p$. So the maximization again occurs at the boundary points. So maximizing $L(p\mid x_1,x_2)$ in each case subject to the constraint $0\le p\le 1/3$ yields the maximum likelihood estimator of $p$ : \begin{align} \hat p(x_1,x_2)&=\begin{cases}\frac{1}{3}&,\text{ if }(x_1,x_2)=(0,1)\text{ or }(x_1,x_2)=(1,1) \\ \frac{1}{6}&,\text{ if }(x_1,x_2)=(1,0) \\ 0&,\text{ if }(x_1,x_2)=(0,0) \end{cases} \end{align} This agrees with the functional form of $\hat p$ you had obtained. And I don't think it is required to express the MLE in a functional form. It is perfectly okay to give the answer as above.
Finding MLE of $p$ where $X_1\sim\text{Bernoulli}(p)$ and $X_2\sim\text{Bernoulli}(3p)$ Though your final answer is correct, I think the derivation of the MLE is much more simpler. Given $x_1,x_2$, the likelihood function is \begin{align} L(p\mid x_1,x_2)&=3^{x_2}p^{x_1+x_2}(1-p)^{1-x_1}
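The case-by-case maximisation above can also be verified numerically; the R sketch below (illustrative code, not part of the original answer) evaluates the likelihood on a grid over [0, 1/3] and recovers the same maximisers.

lik <- function(p, x1, x2)
  3^x2 * p^(x1 + x2) * (1 - p)^(1 - x1) * (1 - 3 * p)^(1 - x2)

p <- seq(0, 1/3, length.out = 1000)
patterns <- list(c(0, 0), c(0, 1), c(1, 0), c(1, 1))
sapply(patterns, function(x) p[which.max(lik(p, x[1], x[2]))])
# approximately 0, 1/3, 1/6 and 1/3, matching the MLE derived above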
49,761
Difference of notation between cross entropy and joint entropy
Note that this notation for cross-entropy is non-standard. The normal notation is H(p,q). This notation is horrible for two reasons. Firstly, the exact same notation is also used for joint entropy. Secondly, it makes it seem like cross-entropy is symmetric. This is ridiculous, and I’ll be writing Hq(p) instead. Reference from link: http://colah.github.io/posts/2015-09-Visual-Information/#fn4 Also, possible duplicate of : https://math.stackexchange.com/questions/2505015/relation-between-cross-entropy-and-joint-entropy
Difference of notation between cross entropy and joint entropy
Note that this notation for cross-entropy is non-standard. The normal notation is H(p,q). This notation is horrible for two reasons. Firstly, the exact same notation is also used for joint entropy. Se
Difference of notation between cross entropy and joint entropy Note that this notation for cross-entropy is non-standard. The normal notation is H(p,q). This notation is horrible for two reasons. Firstly, the exact same notation is also used for joint entropy. Secondly, it makes it seem like cross-entropy is symmetric. This is ridiculous, and I’ll be writing Hq(p) instead. Reference from link: http://colah.github.io/posts/2015-09-Visual-Information/#fn4 Also, possible duplicate of : https://math.stackexchange.com/questions/2505015/relation-between-cross-entropy-and-joint-entropy
Difference of notation between cross entropy and joint entropy Note that this notation for cross-entropy is non-standard. The normal notation is H(p,q). This notation is horrible for two reasons. Firstly, the exact same notation is also used for joint entropy. Se
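To make the distinction concrete, here is a tiny R sketch (arbitrary illustrative distributions) computing the cross-entropy $H_q(p)$, which is asymmetric, alongside the joint entropy of a two-dimensional distribution, the other quantity that the overloaded notation H(p, q) can denote.

p <- c(0.7, 0.2, 0.1)
q <- c(0.1, 0.3, 0.6)

cross_entropy <- function(p, q) -sum(p * log2(q))
cross_entropy(p, q)            # H_q(p)
cross_entropy(q, p)            # H_p(q): different, so cross-entropy is not symmetric

joint <- outer(p, q)           # a joint pmf (independent marginals, for illustration)
-sum(joint * log2(joint))      # joint entropy H(X, Y); here it equals H(p) + H(q)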
49,762
What to do if time series are non-stationary? [closed]
Take a look at my comments What "more" does differencing (d>0) do in ARIMA than detrend? suggesting alternative approaches ( best suggested by the data ) to evidented non-stationarity symptoms. Classicly differencing is more approptraite for stock market data. Often times power transforms like logs are attempted to deal with non-constant error variance When (and why) should you take the log of a distribution (of numbers)? . A viable alternative in many cases is Weighted least Squares as suggested by Tsay to deal with stock price data . See page 13 of https://pdfs.semanticscholar.org/09c4/ba8dd3cc88289caf18d71e8985bdd11ad21c.pdf to follow this thought.
What to do if time series are non-stationary? [closed]
Take a look at my comments What "more" does differencing (d>0) do in ARIMA than detrend? suggesting alternative approaches ( best suggested by the data ) to evidented non-stationarity symptoms. Class
What to do if time series are non-stationary? [closed] Take a look at my comments What "more" does differencing (d>0) do in ARIMA than detrend? suggesting alternative approaches ( best suggested by the data ) to evidented non-stationarity symptoms. Classicly differencing is more approptraite for stock market data. Often times power transforms like logs are attempted to deal with non-constant error variance When (and why) should you take the log of a distribution (of numbers)? . A viable alternative in many cases is Weighted least Squares as suggested by Tsay to deal with stock price data . See page 13 of https://pdfs.semanticscholar.org/09c4/ba8dd3cc88289caf18d71e8985bdd11ad21c.pdf to follow this thought.
What to do if time series are non-stationary? [closed] Take a look at my comments What "more" does differencing (d>0) do in ARIMA than detrend? suggesting alternative approaches ( best suggested by the data ) to evidented non-stationarity symptoms. Class
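For readers wanting to try the transformations mentioned above, here is a hedged R sketch on a simulated price-like series (not the poster's index data): a log transform to stabilise the variance, followed by differencing to remove the stochastic trend.

set.seed(4)
price <- 100 * exp(cumsum(rnorm(500, mean = 0, sd = 0.01)))  # random-walk-style "prices"
log_returns <- diff(log(price))                              # log then first difference

op <- par(mfrow = c(1, 2))
plot.ts(price);  plot.ts(log_returns)
par(op)
acf(log_returns)   # informal check of the remaining autocorrelation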
49,763
What to do if time series are non-stationary? [closed]
We actually do not need statistics itself to look at stationarity, and the first test for that is common sense. There is no stationarity because there are external factors that have a major influence on the data. For example, consider the market crash of Sept. 29, 2008 (in gray in the figure below) and the Dow Jones Industrial Average over that period of time (from Dow Jones - 100 Year Historical Chart, https://www.macrotrends.net/1319/dow-jones-100-year-historical-chart). Certainly, these two markets (Dow Jones and OMSX) are highly correlated. So it is likely more of interest which outperforms which than trying to predict either one from auto-correlation. Also, note that the Dow Jones has a futures market, so its futures market may predict the OMSX index futures better than it can be predicted from its own data. I would, as a first step, detrend for external influence, e.g., look at OMSX/Dow Jones, before examining autocorrelation or other trending. For example, one potential way to make money might be to take advantage of any reliable ARIMA time delay between the markets. Finally, analyzing this type of data has a lot to do with what the objective of the analysis is.
What to do if time series are non-stationary? [closed]
We actually do not need statistics itself to look at stationarity, and the first test for that is common sense. There is no stationarity because there are external factors that have a major influence
What to do if time series are non-stationary? [closed] We actually do not need statistics itself to look at stationarity, and the first test for that is common sense. There is no stationarity because there are external factors that have a major influence on the data. For example, the market crash of Sept. 29, 2008 (in gray in the figure below), and the Dow Jones Industrial Average over that period of time. From https://www.macrotrends.net/1319/dow-jones-100-year-historical-chart'>Dow Jones - 100 Year Historical Chart Certainly, these two markets (Dow Jones and OMSX) are highly correlated. So it is likely more of interest as to which outperforms which than trying to predict either one from auto-correlation. Also, note that Dow Jones has a futures market, so that its futures market may predict the OMSX index futures better than it can be predicted from its own data. I would, as a first step, detrend for external influence, e.g., look at OMSX/Dow Jones, before examining autocorrelation or other trending. For example, one potential way to make money might be take advantage of any reliable ARIMA time delay between the markets. Finally, analyzing this type of data has a lot to do with what the objective of the analysis is.
What to do if time series are non-stationary? [closed] We actually do not need statistics itself to look at stationarity, and the first test for that is common sense. There is no stationarity because there are external factors that have a major influence
49,764
Why does bagging increase bias?
In principle bagging is performed to reduce variance of fitted values as it increases the stability of the fitted values. In addition, as a rule of thumb I would say that: "the magnitudes of the bias are roughly the same for the bagged and the original procedure" (Bühlmann & Yu, 2002). That is because bagging allows us to approximate relative complex response surfaces by practically smoothing over the learners' decision boundaries. That said, you raise a good point about bagging "using less data"; my understanding is that this is a problem when the learners are potentially weak. Having less data makes the learning task more difficult. An obvious example would be an imbalanced dataset where a positive example is rather rare; in that case a simple majority rule for the bagging ensemble will probably be more harmful than helpful as indeed it will be more likely to misclassify the rare class - Berk's "Statistical Learning from a Regression Perspective", Sect. 4.4. on "Some Limitations of Bagging" touches upon this too. Let me note that this deteriorated performance is not totally surprising; bagging or any other procedure is not a silver bullet so it is expected that there will be cases that an otherwise helpful procedure (here bagging) makes things worse. I think that the Bühlmann & Yu, 2002 paper: "Analyzing bagging" is a canonical reference on the matter if you want to explore further. I also liked the Strobl et al., 2007 paper: "Bias in random forest variable importance measures: Illustrations, sources and a solution", it focuses mostly on variable selection but makes a good point about how bagging affects the bias in that task.
Why does bagging increase bias?
In principle bagging is performed to reduce variance of fitted values as it increases the stability of the fitted values. In addition, as a rule of thumb I would say that: "the magnitudes of the bi
Why does bagging increase bias? In principle bagging is performed to reduce variance of fitted values as it increases the stability of the fitted values. In addition, as a rule of thumb I would say that: "the magnitudes of the bias are roughly the same for the bagged and the original procedure" (Bühlmann & Yu, 2002). That is because bagging allows us to approximate relative complex response surfaces by practically smoothing over the learners' decision boundaries. That said, you raise a good point about bagging "using less data"; my understanding is that this is a problem when the learners are potentially weak. Having less data makes the learning task more difficult. An obvious example would be an imbalanced dataset where a positive example is rather rare; in that case a simple majority rule for the bagging ensemble will probably be more harmful than helpful as indeed it will be more likely to misclassify the rare class - Berk's "Statistical Learning from a Regression Perspective", Sect. 4.4. on "Some Limitations of Bagging" touches upon this too. Let me note that this deteriorated performance is not totally surprising; bagging or any other procedure is not a silver bullet so it is expected that there will be cases that an otherwise helpful procedure (here bagging) makes things worse. I think that the Bühlmann & Yu, 2002 paper: "Analyzing bagging" is a canonical reference on the matter if you want to explore further. I also liked the Strobl et al., 2007 paper: "Bias in random forest variable importance measures: Illustrations, sources and a solution", it focuses mostly on variable selection but makes a good point about how bagging affects the bias in that task.
Why does bagging increase bias? In principle bagging is performed to reduce variance of fitted values as it increases the stability of the fitted values. In addition, as a rule of thumb I would say that: "the magnitudes of the bi
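The variance-reduction intuition above can be illustrated with a small simulation; the R sketch below (assuming rpart is available; all names are illustrative) compares a single regression tree with a bagged ensemble of bootstrap trees on data from a known smooth function. It illustrates bagging in general, not the specific results of the papers cited in the answer.

library(rpart)
set.seed(5)
f <- function(x) sin(2 * pi * x)
train <- data.frame(x = runif(200))
train$y <- f(train$x) + rnorm(200, sd = 0.3)
test <- data.frame(x = seq(0, 1, length.out = 200))

single <- predict(rpart(y ~ x, data = train), test)
bagged <- rowMeans(replicate(100, {
  idx <- sample(nrow(train), replace = TRUE)          # bootstrap resample
  predict(rpart(y ~ x, data = train[idx, ]), test)
}))
c(single = mean((single - f(test$x))^2),              # error against the true function
  bagged = mean((bagged - f(test$x))^2))              # typically smaller: variance shrinks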
49,765
Is a single neuron in a Neural Network a GLM?
GLM (generalized linear model) are "general" in the sense, that they can use any kind of transformation on the results of the linear combination. So GLMs don't necessary have the form $y=e^{\theta^T*x}$. They just have the form $y=f(\theta^T*x)$ where $f$ is almost any monotonous and differentiable function (link function). $f$ transforms the result of the linear combination to the mean of a distribution from the exponential family. This family also contains the Bernoulli distribution and for this distribution, the link function is $f(z)=\frac{1}{1+e^{-z}}$ (inverse logistic function). So, the sigmoid function sometimes used in neurons of artificial networks does in fact turn that neuron into a GLM. About your question on how to handle count data (I assume as input) in an artificial neural network: First try to just input the data as is. The link function transforms the output, no the input. You are correct, that GLMs can be used to analyze count data (usually via Poisson based link function), but in that case, the count data is the output to be fitted by the GLM. Only if your networks does not learn well, you could think about other methods of transforming your input data. NNs usually handle transforming input data quite well. They are universal function approximators after all. But in some cases they can use some help to find the correct transformation, and they may learn faster if given suitably transformed data.
Is a single neuron in a Neural Network a GLM?
GLM (generalized linear model) are "general" in the sense, that they can use any kind of transformation on the results of the linear combination. So GLMs don't necessary have the form $y=e^{\theta^T*x
Is a single neuron in a Neural Network a GLM? GLM (generalized linear model) are "general" in the sense, that they can use any kind of transformation on the results of the linear combination. So GLMs don't necessary have the form $y=e^{\theta^T*x}$. They just have the form $y=f(\theta^T*x)$ where $f$ is almost any monotonous and differentiable function (link function). $f$ transforms the result of the linear combination to the mean of a distribution from the exponential family. This family also contains the Bernoulli distribution and for this distribution, the link function is $f(z)=\frac{1}{1+e^{-z}}$ (inverse logistic function). So, the sigmoid function sometimes used in neurons of artificial networks does in fact turn that neuron into a GLM. About your question on how to handle count data (I assume as input) in an artificial neural network: First try to just input the data as is. The link function transforms the output, no the input. You are correct, that GLMs can be used to analyze count data (usually via Poisson based link function), but in that case, the count data is the output to be fitted by the GLM. Only if your networks does not learn well, you could think about other methods of transforming your input data. NNs usually handle transforming input data quite well. They are universal function approximators after all. But in some cases they can use some help to find the correct transformation, and they may learn faster if given suitably transformed data.
Is a single neuron in a Neural Network a GLM? GLM (generalized linear model) are "general" in the sense, that they can use any kind of transformation on the results of the linear combination. So GLMs don't necessary have the form $y=e^{\theta^T*x
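The equivalence described above is easy to verify numerically; in the R sketch below (simulated data, hypothetical names) a single sigmoid "neuron" fitted by maximum likelihood recovers essentially the same weights as glm() with a binomial family, since both are the same logistic-regression model.

set.seed(6)
x <- matrix(rnorm(400), ncol = 2)
p <- 1 / (1 + exp(-(0.5 + x %*% c(1, -2))))
y <- rbinom(200, 1, p)

neg_loglik <- function(w) {            # neuron: sigmoid(w0 + w1*x1 + w2*x2)
  z <- w[1] + x %*% w[-1]
  -sum(y * z - log(1 + exp(z)))        # negative Bernoulli log-likelihood
}
optim(c(0, 0, 0), neg_loglik, method = "BFGS")$par
coef(glm(y ~ x, family = binomial))    # essentially the same estimates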
49,766
What are models for count data where the variance is not related to the sample size?
One attractive class of models is obtained by discretizing any non-negative random variable $X.$ Let $X$ be in a distribution family $\mathcal{F}.$ Let $\alpha\gt 0$ be a number and set $$Y = \lfloor X/\alpha \rfloor.$$ This is the process of "binning" by assigning $X$ to one of the intervals $[0,\alpha),$ $[\alpha,2\alpha),$ and so on, beginning the counting at $0.$ Computing the probability functions of $Y$. Let $F_X$ be the distribution function of $X,$ defined by $F_X(x)=\Pr(X\le x)$ for any real number $x.$ Then for any natural number $k,$ the definitions imply $$F_Y(k) = \Pr(Y \le k) = \Pr(X/\alpha \lt k+1) = \Pr(X \lt (k+1)\alpha) = 1 - S_X((k+1)\alpha)$$ where $S_X$ is the survival function for $X,$ $$S_X(x) = \Pr(X \ge x).$$ (When $X$ is continuous at $x,$ $S_X(x) = 1-F_X(x).$) This makes it almost as easy to work with $Y$ as with $X$ for analysis, estimation, and so on. Moreover, $Y$ has a probability mass function $$p_Y(k) = \Pr(Y=k) = \Pr(Y\le k) - \Pr(Y\le k-1)= S_X(k\alpha) - S_X((k+1)\alpha).$$ Controlling the mean and variance of $Y.$ This binning approximately multiplies $X$ by $1/\alpha.$ Thus, approximately (with the error due only to the discretization), the $n^\text{th}$ (raw) moment of $Y,$ $\mu_Y^n,$ must be close to a scaled moment of $X:$ $$\mu_Y^n \approx \alpha^{-n}\mu_X^n.$$ In particular, $\operatorname{Var}(Y) = \mu_Y^2 - (\mu_Y^1)^2 \approx \alpha^{-2}\operatorname{Var}(X).$ $E[Y] = \mu_Y^1 \approx \alpha^{-1}\mu_X^1 = \alpha^{-1}E[X].$ Consequently, if $\operatorname{Var}(X) = f(E[X])$ for $X\in\mathcal{F},$ then $$\operatorname{Var}(Y) \approx \alpha^{-2}f(\alpha E[Y]).$$ In particular, when $f$ is a power function $f(x) = x^p,$ then $$\operatorname{Var}(Y) \approx \alpha^{-2}\left(\alpha E[Y]\right)^p = \alpha^{p-2}f(E[Y]).$$ When $\alpha \gt 1$ this means the variance of $Y$ tends to be relatively smaller, compared to its expectation, than $X.$ Examples. Finally, everything works even when $X$ is a count variable (the "underlying count model" of the question). For instance, $X$ could be a Poisson$(\lambda)$ variable, for which $\operatorname{Var}(X)=E[X]=\lambda.$ If you fix $\alpha,$ then $Y$ is still parameterized by $\lambda$ and $$\operatorname{Var}(Y) \approx \alpha^{-2}\lambda = \alpha^{-1}\left(\alpha^{-1}\lambda\right) = \alpha^{-1}E[Y].$$ If you set $\alpha=\lambda^{1/2},$ then $Y$ is still parameterized by $\lambda$ but now the variance of $Y$ is approximately constant. If you set $\alpha=\lambda,$ the expectation of $Y$ is approximately $1$ but its variance decreases asymptotically to $0$ as $\lambda$ grows. (You can find exact, closed formulas for the moments of $Y$ when $X$ is Poisson by applying the techniques illustrated at https://stats.stackexchange.com/a/35138/919. ) These models all have simple, natural interpretations, especially when $\alpha$ is a whole number: you are still counting things (that's what $X$ does), but are grouping those things into bunches of size $\alpha$ (the last bunch will have somewhere between $1$ and $\alpha$ things in it) and reporting how many bunches there are (which is $Y+1$).
What are models for count data where the variance is not related to the sample size?
One attractive class of models is obtained by discretizing any non-negative random variable $X.$ Let $X$ be in a distribution family $\mathcal{F}.$ Let $\alpha\gt 0$ be a number and set $$Y = \lflo
What are models for count data where the variance is not related to the sample size? One attractive class of models is obtained by discretizing any non-negative random variable $X.$ Let $X$ be in a distribution family $\mathcal{F}.$ Let $\alpha\gt 0$ be a number and set $$Y = \lfloor X/\alpha \rfloor.$$ This is the process of "binning" by assigning $X$ to one of the intervals $[0,\alpha),$ $[\alpha,2\alpha),$ and so on, beginning the counting at $0.$ Computing the probability functions of $Y$. Let $F_X$ be the distribution function of $X,$ defined by $F_X(x)=\Pr(X\le x)$ for any real number $x.$ Then for any natural number $k,$ the definitions imply $$F_Y(k) = \Pr(Y \le k) = \Pr(X/\alpha \lt k+1) = \Pr(X \lt (k+1)\alpha) = 1 - S_X((k+1)\alpha)$$ where $S_X$ is the survival function for $X,$ $$S_X(x) = \Pr(X \ge x).$$ (When $X$ is continuous at $x,$ $S_X(x) = 1-F_X(x).$) This makes it almost as easy to work with $Y$ as with $X$ for analysis, estimation, and so on. Moreover, $Y$ has a probability mass function $$p_Y(k) = \Pr(Y=k) = \Pr(Y\le k) - \Pr(Y\le k-1)= S_X(k\alpha) - S_X((k+1)\alpha).$$ Controlling the mean and variance of $Y.$ This binning approximately multiplies $X$ by $1/\alpha.$ Thus, approximately (with the error due only to the discretization), the $n^\text{th}$ (raw) moment of $Y,$ $\mu_Y^n,$ must be close to a scaled moment of $X:$ $$\mu_Y^n \approx \alpha^{-n}\mu_X^n.$$ In particular, $\operatorname{Var}(Y) = \mu_Y^2 - (\mu_Y^1)^2 \approx \alpha^{-2}\operatorname{Var}(X).$ $E[Y] = \mu_Y^1 \approx \alpha^{-1}\mu_X = \alpha^{-1}E[X].$ Consequently, if $\operatorname{Var}(X) = f(E[X])$ for $X\in\mathcal{F},$ then $$\operatorname{Var}(Y) \approx \alpha^{-2}f(\alpha E[Y]).$$ In particular, when $f$ is a power function $f(x) = x^p,$ then $$\operatorname{Var}(Y) \approx \alpha^{-2}\left(\alpha^ E[Y]\right)^p = \alpha^{p-2}f(E[Y]).$$ When $\alpha \gt 1$ this means the variance of $Y$ tends to be relatively smaller, compared to its expectation, than $X.$ Examples Finally, everything works even when $X$ is a count variable (the "underlying count model" of the question). For instance, $X$ could be a Poisson$(\lambda)$ variable, for which $\operatorname{Var}(X)=E[X]=\lambda.$ If you fix $\alpha,$ then $Y$ is still parameterized by $\lambda$ and $$\operatorname{Var}(Y) \approx \alpha^{-2}\lambda = \alpha^{-1}\left(\alpha^{-1}\lambda\right) = \alpha^{-1}E[Y].$$ If you set $\alpha=\lambda^{1/2},$ then $Y$ is still parameterized by $\lambda$ but now the variance of $Y$ is approximately constant. If you set $\alpha=\lambda,$ the expectation of $Y$ is approximately $1$ but its variance decreases asymptotically to $0$ as $\lambda$ grows. (You can find exact, closed formulas for the moments of $Y$ when $X$ is Poisson by applying the techniques illustrated at https://stats.stackexchange.com/a/35138/919. ) These models all have simple, natural interpretations, especially when $\alpha$ is a whole number: you are still counting things (that's what $X$ does), but are grouping those things into bunches of size $\alpha$ (the last bunch will have somewhere between $1$ and $\alpha$ things in it) and reporting how many bunches there are (which is $Y+1$).
What are models for count data where the variance is not related to the sample size? One attractive class of models is obtained by discretizing any non-negative random variable $X.$ Let $X$ be in a distribution family $\mathcal{F}.$ Let $\alpha\gt 0$ be a number and set $$Y = \lflo
49,767
Train on balanced datasets, used for imbalanced datasets?
You can definitely train a model on an imbalanced dataset; however, this often leads to deteriorated performance. Personally, I always do this so that I can measure the effect of applying SMOTE or related techniques. In many cases, the minority class has too few representative samples to model it accurately, leading to poor generalization performance of the resulting model. This is what methods such as SMOTE attempt to correct. When testing your resulting model, you should not use your re-balanced set, as this introduces bias and all sorts of other nasty issues. You want the test set to resemble the data your model will have to process in the wild as closely as possible. For more in-depth reading, I suggest the following paper: Learning from Imbalanced Data by He.
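A base-R sketch of this workflow on simulated data (no SMOTE package assumed; down-sampling the majority class stands in for any rebalancing method): the training set is rebalanced, while the test set is left imbalanced so it resembles the data seen in the wild.

set.seed(1)
n <- 5000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-3 + 1.5 * x))                       # rare positive class
train <- sample(n, 0.7 * n); test <- setdiff(1:n, train)

tr1 <- intersect(train, which(y == 1))                        # minority cases in training
tr0 <- sample(intersect(train, which(y == 0)), length(tr1))   # down-sampled majority
bal <- c(tr0, tr1)                                            # rebalanced training set

m_imbal <- glm(y ~ x, family = binomial, subset = train)
m_bal   <- glm(y ~ x, family = binomial, subset = bal)

# evaluate both on the untouched, imbalanced test set (log-loss here)
logloss <- function(m) {
  p <- predict(m, newdata = data.frame(x = x[test]), type = "response")
  -mean(y[test] * log(p) + (1 - y[test]) * log(1 - p))
}
c(imbalanced = logloss(m_imbal), rebalanced = logloss(m_bal))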
Train on balanced datasets, used for imbalanced datasets?
You can definitely train a model on an imbalanced dataset, however this often leads to deteriorated performance. Personally, I always do this so that I can measure the effect of applying SMOTE or rela
Train on balanced datasets, used for imbalanced datasets? You can definitely train a model on an imbalanced dataset, however this often leads to deteriorated performance. Personally, I always do this so that I can measure the effect of applying SMOTE or related techniques. In many cases, the minority class has too few representative samples to accurately model it leading to poor generalization performance of the resulting model. This is what methods such as SMOTE attempt to correct. When testing your resulting model, you should not use your re-balanced set as this introduces bias and all sorts of other nasty issues. You want the test set to resemble the data your model will have to process in the wild as closely as possible For more in-depth reading, I suggest the following paper: Learning from Imbalanced Data by He.
Train on balanced datasets, used for imbalanced datasets? You can definitely train a model on an imbalanced dataset, however this often leads to deteriorated performance. Personally, I always do this so that I can measure the effect of applying SMOTE or rela
49,768
Applications of ARCH models
At the very least, we have the original paper by Engle (1982, Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation), which includes an application that looks at the variance of UK inflation. The ARCH(p) model, but more often a GARCH(1,1) model, is used as a benchmark in many papers that try to extend the original ARCH and GARCH framework. See e.g. the GARCH paper by Bollerslev (1986, Generalized autoregressive conditional heteroskedasticity). However, you are right that ARCH(p) models are rarely applied as the only model. The GARCH(1,1) model is often considered a good benchmark model - the title of this paper may explain why (2005, A forecast comparison of volatility models: does anything beat a GARCH(1,1)?).
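For readers who want to try this in R, here is a hedged sketch using the tseries package (fGarch or rugarch offer similar functionality); the simulated white-noise series below is only a placeholder for a real demeaned return series.

# install.packages("tseries")
library(tseries)

set.seed(1)
r <- rnorm(1000)                        # placeholder for a demeaned return series

arch1   <- garch(r, order = c(0, 1))    # ARCH(1): order = c(GARCH lags, ARCH lags)
garch11 <- garch(r, order = c(1, 1))    # the usual GARCH(1,1) benchmark
summary(garch11)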
Applications of ARCH models
At least, we have the original paper of Engle (1982, Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation), which includes an application that looks
Applications of ARCH models At least, we have the original paper of Engle (1982, Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation), which includes an application that looks at the variance of UK inflation. The ARCH(p) model, but more often a GARCH(1,1) model, is used as a benchmark in many papers that try to extend the original ARCH and GARCH framework. See e.g. the GARCH paper of Bollerslev (1986, Generalized autoregressive conditional heteroskedasticity). However, you are right in the statement that ARCH(p) models rarely are applied as the only model. The GARCH(1,1) model is often considered a good benchmark model - the title of the this paper may explain why (2005, A forecast comparison of volatility models: does anything beat a GARCH(1,1)?).
Applications of ARCH models At least, we have the original paper of Engle (1982, Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation), which includes an application that looks
49,769
Applications of ARCH models
Introduction to (Generalized) Autoregressive Conditional Heteroskedasticity Models in Time Series Econometrics (Wong, 2014) is a good introduction to ARCH/GARCH models.
Applications of ARCH models
Introduction to (Generalized) Autoregressive Conditional Heteroskedasticity Models in Time Series Econometrics (Wong, 2014) is a good introduction to ARCH/GARCH models.
Applications of ARCH models Introduction to (Generalized) Autoregressive Conditional Heteroskedasticity Models in Time Series Econometrics (Wong, 2014) is a good introduction to ARCH/GARCH models.
Applications of ARCH models Introduction to (Generalized) Autoregressive Conditional Heteroskedasticity Models in Time Series Econometrics (Wong, 2014) is a good introduction to ARCH/GARCH models.
49,770
How many natural parameters are really in the exponential family conjugate prior?
It is a correct remark that a conjugate family of priors for an exponential family of distributions with a two-dimensional parameter $(\theta_1,\theta_2)$ can be defined by three parameters and hence that for a Normal family of distributions $$f(x|\theta_1,\theta_2)\propto \exp\left\{\frac{-x^2}{2\theta_2}+\frac{x\theta_1}{\theta_2}-\frac{\theta_1^2}{2\theta_2}-\frac{\log(\theta_2)}{2}\right\}$$ a conjugate family of priors is defined by $$\pi(\theta_1,\theta_2|\alpha_1,\alpha_2,\lambda)\propto \exp\left\{\frac{-\alpha_1}{2\theta_2}+\frac{\alpha_2\theta_1}{\theta_2}-\lambda\frac{\theta_1^2}{2\theta_2}-\lambda\frac{\log(\theta_2)}{2}\right\}$$ However it is also possible to define conjugate priors with more hyperparameters in the sense that priors and posteriors belong to the same family. In our book Bayesian Essentials with R, we do resort to the four parameter version: We now consider the general case of an iid sample $\mathscr{D}_n=(x_1,\ldots,x_n)$ from the normal distribution $\mathscr{N}(\mu,\sigma^2)$ and $\theta=(\mu,\sigma^2)$. This setting also allows a conjugate prior since the normal distribution remains an exponential family when both parameters are unknown. It is of the form $$ (\sigma^2)^{-\lambda_\sigma-3/2}\,\exp\left\{-\left(\lambda_\mu(\mu-\xi)^2+\alpha\right)/2\sigma^2\right\} $$ since \begin{eqnarray}\label{eq:conjunor} \pi((\mu,\sigma^2)|\mathscr{D}_n) & \propto & (\sigma^2)^{-\lambda_\sigma-3/2}\, \exp\left\{-\left(\lambda_\mu (\mu-\xi)^2 + \alpha \right)/2\sigma^2\right\}\nonumber\\ && \times (\sigma^2)^{-n/2}\,\exp \left\{-\left(n(\mu-\overline{x})^2 + s_x^2 \right)/2\sigma^2\right\} \\ &\propto& (\sigma^2)^{-\lambda_\sigma(\mathscr{D}_n)}\exp\left\{-\left(\lambda_\mu(\mathscr{D}_n) (\mu-\xi(\mathscr{D}_n))^2+\alpha(\mathscr{D}_n)\right)/2\sigma^2\right\}\,,\nonumber \end{eqnarray} where $s_x^2 = \sum_{i=1}^n (x_i-\overline{x})^2$. Therefore, the conjugate prior on $\theta$ is the product of an inverse gamma distribution on $\sigma^2$, $\mathscr{IG}(\lambda_\sigma,\alpha/2)$, and, conditionally on $\sigma^2$, a normal distribution on $\mu$, $\mathscr{N} (\xi,\sigma^2/\lambda_\mu)$. But there is no particular reason for doing so, except if a different degree of prior precision ($\lambda_\mu\ne\lambda_\sigma$) is available on $\mu$ and on $\sigma$.
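A minimal sketch of simulating from the conjugate prior described in the quoted passage, assuming the usual shape/scale convention for the inverse gamma distribution; the hyperparameter values are arbitrary illustrations, not recommendations.

set.seed(1)
lambda_sigma <- 3; alpha <- 2; xi <- 0; lambda_mu <- 5    # hypothetical hyperparameters
n_draws <- 1e4

# sigma^2 ~ IG(lambda_sigma, alpha/2), drawn as 1 / Gamma(shape, rate)
sigma2 <- 1 / rgamma(n_draws, shape = lambda_sigma, rate = alpha / 2)
# mu | sigma^2 ~ N(xi, sigma^2 / lambda_mu)
mu <- rnorm(n_draws, mean = xi, sd = sqrt(sigma2 / lambda_mu))

summary(sigma2); summary(mu)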
How many natural parameters are really in the exponential family conjugate prior?
It is a correct remark that a conjugate family of priors for an exponential family of distributions with a two-dimensional parameter $(\theta_1,\theta_2)$ can be defined by three parameters and hence
How many natural parameters are really in the exponential family conjugate prior? It is a correct remark that a conjugate family of priors for an exponential family of distributions with a two-dimensional parameter $(\theta_1,\theta_2)$ can be defined by three parameters and hence that for a Normal family of distributions $$f(x|\theta_1,\theta_2)\propto \exp\left\{\frac{-x^2}{2\theta_2}+\frac{x\theta_1}{\theta_2}-\frac{\theta_1^2}{2\theta_2}-\frac{\log(\theta_2)}{2}\right\}$$ a conjugate family of priors is defined by $$\pi(\theta_1,\theta_2|\alpha_1,\alpha_2,\lambda)\propto \exp\left\{\frac{-\alpha_1}{2\theta_2}+\frac{\alpha_2\theta_1}{\theta_2}-\lambda\frac{\theta_1^2}{2\theta_2}-\lambda\frac{\log(\theta_2)}{2}\right\}$$ However it is also possible to defined conjugate priors with more hyperparameters in the sense that priors and posteriors belong to the same family. In our book Bayesian Essentials with R, we do resort to the four parameter version: We now consider the general case of an iid sample $\mathscr{D}_n=(x_1,\ldots,x_n)$ from the normal distribution $\mathscr{N}(\mu,\sigma^2)$ and $\theta=(\mu,\sigma^2)$. This setting also allows a conjugate prior since the normal distribution remains an exponential family when both parameters are unknown. It is of the form $$ (\sigma^2)^{-\lambda_\sigma-3/2}\,\exp\left\{-\left(\lambda_\mu(\mu-\xi)^2+\alpha\right)/2\sigma^2\right\} $$ since \begin{eqnarray}\label{eq:conjunor} \pi((\mu,\sigma^2)|\mathscr{D}_n) & \propto & (\sigma^2)^{-\lambda_\sigma-3/2}\, \exp\left\{-\left(\lambda_\mu (\mu-\xi)^2 + \alpha \right)/2\sigma^2\right\}\nonumber\\ && \times (\sigma^2)^{-n/2}\,\exp \left\{-\left(n(\mu-\overline{x})^2 + s_x^2 \right)/2\sigma^2\right\} \\ &\propto& (\sigma^2)^{-\lambda_\sigma(\mathscr{D}_n)}\exp\left\{-\left(\lambda_\mu(\mathscr{D}_n) (\mu-\xi(\mathscr{D}_n))^2+\alpha(\mathscr{D}_n)\right)/2\sigma^2\right\}\,,\nonumber \end{eqnarray} where $s_x^2 = \sum_{i=1}^n (x_i-\overline{x})^2$. Therefore, the conjugate prior on $\theta$ is the product of an inverse gamma distribution on $\sigma^2$, $\mathscr{IG}(\lambda_\sigma,\alpha/2)$, and, conditionally on $\sigma^2$, a normal distribution on $\mu$, $\mathscr{N} (\xi,\sigma^2/\lambda_\mu)$. but there is not particular reason for doing so, except if a different degree of prior precision ($\lambda_\mu\ne\lambda_\sigma$) is available on $\mu$ and on $\sigma$.
How many natural parameters are really in the exponential family conjugate prior? It is a correct remark that a conjugate family of priors for an exponential family of distributions with a two-dimensional parameter $(\theta_1,\theta_2)$ can be defined by three parameters and hence
49,771
Reinforcement Learning split in training and test set necessary?
A train/test split, including all the model optimization that goes with it, might be helpful, and it is worth following the best practices of general supervised machine learning. On the other hand, the ultimate measure of quality in Reinforcement Learning is the discounted reward. To quantify the quality, you should run several episodes after all your learning, tuning, and optimization - to see the ultimate quality of the solution. The fact is that even a model that is not very accurate in terms of MSE, MAPE, etc. can distinguish good and bad actions quite well. Consider adding a constant to the true Q functions: the estimates will be bad in terms of MSE, MAPE, etc., but they will still allow optimal decisions.
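A tiny illustration of that last point, with made-up action values: shifting the Q function by a constant ruins the MSE but leaves the greedy action, and hence the policy, unchanged.

Q_true <- c(left = 1.2, right = 0.7, stay = 0.9)    # hypothetical action values in one state
Q_hat  <- Q_true + 10                               # heavily biased estimate

mean((Q_hat - Q_true)^2)                            # MSE = 100, a terrible fit
names(which.max(Q_true)); names(which.max(Q_hat))   # same greedy action: "left"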
Reinforcement Learning split in training and test set necessary?
The split in training, including all optimization of models might be helpful and it is worth to use best practices in general supervised machine learning. On the other hand, the ultimate measure of qu
Reinforcement Learning split in training and test set necessary? The split in training, including all optimization of models might be helpful and it is worth to use best practices in general supervised machine learning. On the other hand, the ultimate measure of quality in Reinforcement Learning is the discounted reward. To quantify the quality, you shall run several episodes after all your learning, tuning, and optimization - to see the ultimate quality of the solution. The fact is that even a model that is not very accurate in terms of MSE, MAPE, etc. can pretty well distinguish good and bad actions. Consider that you add a constant to true Q functions. They will be bad in terms of MSE, MAPE, etc., but they will be still able to decide optimally.
Reinforcement Learning split in training and test set necessary? The split in training, including all optimization of models might be helpful and it is worth to use best practices in general supervised machine learning. On the other hand, the ultimate measure of qu
49,772
How to constrain gradient boosting predictions to be non-negative?
One way is to transform your data in such a way that negative values of your "real" variable are impossible. For instance, suppose you have a response variable Y. Define W(Y) = log(Y), then do a regression with W as your response variable. Then take your predicted values and exponentiate them to get back to Y. It's possible that your model will return a negative value of W, but e^W will always be positive. Of course, this will raise other issues, such as heteroskedasticity and not being linear (although if you're constraining the results to be positive, then it wasn't linear to begin with). You can play around with other functions; you just need two functions $f$ and $g$ such that they are inverses of each other and one of them has a codomain of only nonnegative numbers. You should think about why you expect Y to always be positive to guide you in deciding what function to use.
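A hedged sketch of the transform-and-back-transform idea, using a plain lm() as a stand-in for whatever gradient-boosting implementation you actually use (the wrapper around the learner is the point, not the learner itself). Note that exponentiating a prediction of the mean of log(Y) targets the median of Y rather than its mean.

set.seed(1)
x <- runif(300, 0, 5)
y <- exp(0.3 + 0.5 * x + rnorm(300, sd = 0.3))   # strictly positive response

fit_w <- lm(log(y) ~ x)                          # model W = log(Y)
w_hat <- predict(fit_w, newdata = data.frame(x = c(0, 2.5, 5)))
y_hat <- exp(w_hat)                              # back-transform: always positive
y_hat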
How to constrain gradient boosting predictions to be non-negative?
One way is to transform your data in such a way that negative values of your "real" variable are impossible. For instance, suppose you have a response variable Y. Define W(Y) = log(Y), then do a regre
How to constrain gradient boosting predictions to be non-negative? One way is to transform your data in such a way that negative values of your "real" variable are impossible. For instance, suppose you have a response variable Y. Define W(Y) = log(Y), then do a regression with W as your response variable. The take your predicted values, and exponentiate them to get back to Y. It's possible that your model will return a negative value of W, but e^W will always be positive. Of course, this will raise other issues, such as heteroskedasticity and not being linear (although if you're constraining the results to be positive, then it wasn't linear to begin with). You can play around with other functions; you just need two functions $f$ and $g$ such that they are inverses of each other and one of them has a codomain of only nonnegative numbers. You should think about why you expect Y to always be positive to guide you in deciding what function to use.
How to constrain gradient boosting predictions to be non-negative? One way is to transform your data in such a way that negative values of your "real" variable are impossible. For instance, suppose you have a response variable Y. Define W(Y) = log(Y), then do a regre
49,773
What is the distribution of min{0,X} when X follows some general normal distribution?
Suppose $X\sim N(\mu, \sigma^2)$ with CDF $\Phi(\cdot; \mu, \sigma^2)$. Then you can calculate the probability $p$ that $X>0$, by taking $$ p = 1- \Phi(0; \mu, \sigma^2) $$ Then $Y:=\min\{0,X\}$ is a mixture between a point mass at $0$ with weight $p$ and a truncated normal distribution with parameters $\mu$ and $\sigma^2$, truncated from above at $0$, with weight $(1-p)$. This is also known as a "censored normal distribution". You may be interested in previous threads tagged both "censoring" and "normal-distribution". We have a thread on the mean and variance in the multidimensional case.
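A quick numerical check of this mixture description, with arbitrary values of $\mu$ and $\sigma$:

set.seed(1)
mu <- 1; sigma <- 2
X <- rnorm(1e5, mu, sigma)
Y <- pmin(0, X)                       # Y = min(0, X)

mean(Y == 0)                          # empirical point mass at 0
1 - pnorm(0, mu, sigma)               # p = Pr(X > 0), its theoretical weight
mean(Y[Y < 0])                        # mean of the truncated-normal part (weight 1 - p)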
What is the distribution of min{0,X} when X follows some general normal distribution?
Suppose $X\sim N(\mu, \sigma^2)$ with CDF $\Phi(\cdot; \mu, \sigma^2)$. Then you can calculate the probability $p$ that $X>0$, by taking $$ p = 1- \Phi(0; \mu, \sigma^2) $$ Then $Y:=\min\{0,X\}$ is a
What is the distribution of min{0,X} when X follows some general normal distribution? Suppose $X\sim N(\mu, \sigma^2)$ with CDF $\Phi(\cdot; \mu, \sigma^2)$. Then you can calculate the probability $p$ that $X>0$, by taking $$ p = 1- \Phi(0; \mu, \sigma^2) $$ Then $Y:=\min\{0,X\}$ is a mixture between a point mass at $0$ with weight $p$ and a truncated normal distribution with parameters $\mu$ and $\sigma^2$, truncated from above at $0$, with weight $(1-p)$. This is also known as a "censored normal distribution". You may be interested in previous threads tagged both "censoring" and "normal-distribution". We have a thread on the mean and variance in the multidimensional case.
What is the distribution of min{0,X} when X follows some general normal distribution? Suppose $X\sim N(\mu, \sigma^2)$ with CDF $\Phi(\cdot; \mu, \sigma^2)$. Then you can calculate the probability $p$ that $X>0$, by taking $$ p = 1- \Phi(0; \mu, \sigma^2) $$ Then $Y:=\min\{0,X\}$ is a
49,774
What is the distribution of min{0,X} when X follows some general normal distribution?
We have a simple answer for the distribution of $\min(0,X)$ for any random variable $X$ in terms of its distribution function. Suppose $Y=\min(0,X)$. That is, $$Y=\begin{cases}X&,\text{ if }X\le 0\\0&,\text{ if }X>0\end{cases}$$ Trivially, a plot of $y=\min(0,x)$ for real $x$ would look like the line $y=x$ for $x\le 0$, flattening out at $y=0$ for $x>0$. Then the distribution function (DF) of $Y$ is simply $$P(Y\le y)=\begin{cases}P(X\le y)&,\text{ if }y<0\\1&,\text{ if }y\ge 0\end{cases}$$ For $X\sim\mathcal N(\mu,\sigma^2)$, we get \begin{align} P(Y\le y)=\begin{cases}\Phi\left(\frac{y-\mu}{\sigma}\right)&,\text{ if }y<0\\1&,\text{ if }y\ge 0\end{cases} \end{align} , where $\Phi$ as usual is the DF of the standard normal distribution. As already mentioned, $Y$ has a mixed distribution as it contains the mass point $0$ for $X>0$ as well as having a continuous part for $X\le 0$.
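The piecewise distribution function above is easy to code and check against simulation ($\mu$ and $\sigma$ below are arbitrary):

F_Y <- function(y, mu, sigma) ifelse(y < 0, pnorm(y, mu, sigma), 1)

set.seed(1)
mu <- 1; sigma <- 2
Y <- pmin(0, rnorm(1e5, mu, sigma))
emp <- ecdf(Y)

ys <- c(-3, -1, -0.1, 0, 1)
cbind(theory = F_Y(ys, mu, sigma), empirical = emp(ys))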
What is the distribution of min{0,X} when X follows some general normal distribution?
We have a simple answer for the distribution of $\min(0,X)$ for any random variable $X$ in terms of its distribution function. Suppose $Y=\min(0,X)$. That is, $$Y=\begin{cases}X&,\text{ if }X\le 0\\0&
What is the distribution of min{0,X} when X follows some general normal distribution? We have a simple answer for the distribution of $\min(0,X)$ for any random variable $X$ in terms of its distribution function. Suppose $Y=\min(0,X)$. That is, $$Y=\begin{cases}X&,\text{ if }X\le 0\\0&,\text{ if }X>0\end{cases}$$ Trivially, a plot of $y=\min(0,x)$ for real $x$ would look like Then distribution function (DF) of $Y$ is simply $$P(Y\le y)=\begin{cases}P(X\le y)&,\text{ if }y<0\\1&,\text{ if }y\ge 0\end{cases}$$ For $X\sim\mathcal N(\mu,\sigma^2)$, we get \begin{align} P(Y\le y)=\begin{cases}\Phi\left(\frac{y-\mu}{\sigma}\right)&,\text{ if }y<0\\1&,\text{ if }y\ge 0\end{cases} \end{align} , where $\Phi$ as usual is the DF of standard normal distribution. As already mentioned, $Y$ has a mixed distribution as it contains the mass point $0$ for $X>0$ as well as having a continuous part for $X\le 0$.
What is the distribution of min{0,X} when X follows some general normal distribution? We have a simple answer for the distribution of $\min(0,X)$ for any random variable $X$ in terms of its distribution function. Suppose $Y=\min(0,X)$. That is, $$Y=\begin{cases}X&,\text{ if }X\le 0\\0&
49,775
Suitable distance metric for time-series clustering with respect to location of shapes
Minor point: DTW is not a metric, just a measure. The measure you need is constrained DTW (or cDTW). With enough data you can learn the constraint, but here it is clear it should be about 1/3 the length of the time series. This tutorial will tell you all you need to know: http://www.cs.unm.edu/~mueen/DTW.pdf eamonn
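A hedged sketch of constrained DTW with a Sakoe-Chiba band of roughly one third of the series length, using the dtw package (the two noisy sine waves are just illustrative data):

# install.packages("dtw")
library(dtw)

set.seed(1)
x <- sin(seq(0, 2 * pi, length.out = 90)) + rnorm(90, sd = 0.1)
y <- sin(seq(0, 2 * pi, length.out = 90) - 0.5) + rnorm(90, sd = 0.1)   # shifted copy

d <- dtw(x, y, window.type = "sakoechiba", window.size = 30)   # band of ~1/3 the length
d$distance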
Suitable distance metric for time-series clustering with respect to location of shapes
Minor point, DTW is not a metric, just a measure. The measure you need is constrained DTW (or cDTW), With enough data you can learn the constraint, but here is it clear it should be about 1/3 the leng
Suitable distance metric for time-series clustering with respect to location of shapes Minor point, DTW is not a metric, just a measure. The measure you need is constrained DTW (or cDTW), With enough data you can learn the constraint, but here is it clear it should be about 1/3 the length of the time series. This tutorial will tell you all you need to know http://www.cs.unm.edu/~mueen/DTW.pdf eamonn
Suitable distance metric for time-series clustering with respect to location of shapes Minor point, DTW is not a metric, just a measure. The measure you need is constrained DTW (or cDTW), With enough data you can learn the constraint, but here is it clear it should be about 1/3 the leng
49,776
Difference between NA and None in biological dataset
You are dealing with a situation where you have variables (distance to log, diameter of nearest log) that only have meaning in the presence of a previous variable condition (log is present). You can deal with this by coding your variables so that the conditional variables are set to zero if there is no log present, so that the log-indicator variable captures the entire "effect" of the absence of a log. For data points where you could not observe the area, you would code both variables as NA and treat these as missing data. Here is an example of what I mean. $$\begin{array} {|r|r|r|r|} \hline \text{Outcome} & \text{Log (Indicator)} & \text{Distance (m)} & \text{Diameter (m)} \\ \hline \text{Inaccessible site} & \text{NA} & \text{NA} & \text{NA} \\ \hline \text{No log on site} & 0 & 0 & 0 \\ \hline \text{Log on site} & 1 & 2.34 & 1.21 \\ \hline \end{array}$$ If you were to put this kind of data into a regression model, the indicator variable for the presence of a log would allow the model to give a different prediction for no log than it would give for a very small log that is very nearby (distance and diameter near zero). This would allow you to interpret the parameters in your model so that you get a predictive result when there is no log, and a result where there is a log, with this latter effect depending on the distance and diameter. If you want to make sure your model does not allow an outcome with no log, but with positive effects for the other conditional variables, you could code the conditional variables as interaction effects as follows: Response ~ Intercept + Log + Log:Distance + Log:Diameter If you code your model this way then a value of $\text{Log} = 0$ means that there are no coefficients other than the intercept in that outcome. You would then treat NA values as missing data, using standard techniques (imputation, etc.) to deal with these.
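A small sketch of this coding scheme with made-up data, using the interaction-only formula from the answer: Distance and Diameter contribute nothing whenever Log = 0, so the absence of a log is captured entirely by the indicator.

set.seed(1)
n <- 60
Log      <- rbinom(n, 1, 0.6)                      # 1 = log present on site
Distance <- ifelse(Log == 1, runif(n, 0, 5), 0)    # coded 0 when no log
Diameter <- ifelse(Log == 1, runif(n, 0.2, 2), 0)
Response <- 1 + 0.8 * Log - 0.2 * Log * Distance + 0.5 * Log * Diameter + rnorm(n, sd = 0.3)

fit <- lm(Response ~ Log + Log:Distance + Log:Diameter)
summary(fit)

# Inaccessible sites would simply be rows with NA in all three predictors,
# to be handled as missing data (na.omit, imputation, ...).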
Difference between NA and None in biological dataset
You are dealing with a situation where you have variables (distance to log, diameter of nearest log) that only have meaning in the presence of a previous variable condition (log is present). You can
Difference between NA and None in biological dataset You are dealing with a situation where you have variables (distance to log, diameter of nearest log) that only have meaning in the presence of a previous variable condition (log is present). You can deal with this by coding your variables so that the conditional variables are set to zero if there is no log present, so that the log-indicator variable captures the entire "effect" of the absence of a log. For data points where you could not observe the area, you would code both variables as NA and treat these as missing data. Here is an example of what I mean. $$\begin{array} {|r|r|r|} \hline \text{Outcome} & \text{Log (Indicator)} & \text{Distance (m)} & \text{Diameter (m)} \\ \hline \text{Inaccessible site} & \text{NA} & \text{NA} & \text{NA} \\ \hline \text{No log on site} & 0 & 0 & 0 \\ \hline \text{Log on site} & 1 & 2.34 & 1.21 \\ \hline \end{array}$$ If you were to put this kind of data into a regression model, the indicator variable for the presence of a log would allow the model to give a different prediction for no log than it would give for a very small log that is very nearby (distance and diameter near zero). This would allow you to interpret the parameters in your model so that you get a predictive result when there is no log, and a result where there is a log, with this latter effect depending on the distance and diameter. If you want to make sure your model does not allow an outcome with no log, but with positive effects for the other conditional variables, you could code the conditional variables as interaction effects as follows: Response ~ Intercept + Log + Log:Distance + Log:Diameter If you code your model this way then a value of $\text{Log} = 0$ means that there are no coefficients other than the intercept in that outcome. You would then treat NA values as missing data, using standard techniques (imputation, etc.) to deal with these.
Difference between NA and None in biological dataset You are dealing with a situation where you have variables (distance to log, diameter of nearest log) that only have meaning in the presence of a previous variable condition (log is present). You can
49,777
Joint raw moments of multivariate normal
To close this one, as whuber's comment pointed out, let a multivariate distribution be symmetric in the sense $$(X_1, \ldots, X_m) \sim_d (-X_1, \ldots, -X_m)$$ Suppose also that moments exist. Then $$E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m] = E[(-X_1)^{r_1}\cdot ... \cdot (-X_m)^{r_m}]$$ $$\implies E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m] = (-1)^{r_1+\cdots+r_m}\cdot E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m]$$ If $r_1+\cdots+r_m$, the sum of moment-orders, is an even number, then $(-1)^{r_1+\cdots+r_m}=1$ and the relation always holds. But if $r_1+\cdots+r_m$ is an odd number then we have $$E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m] = - E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m]$$ and this can only hold if $$E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m] = 0$$
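A quick Monte Carlo illustration of the symmetry argument for a centred bivariate normal (the covariance values are arbitrary; MASS::mvrnorm is used for the simulation): an odd total moment order gives approximately zero, an even one does not.

# install.packages("MASS")
library(MASS)

set.seed(1)
Sigma <- matrix(c(1, 0.6, 0.6, 2), 2, 2)
Z <- mvrnorm(1e6, mu = c(0, 0), Sigma = Sigma)

mean(Z[, 1]^2 * Z[, 2])      # r1 + r2 = 3 (odd)  -> approximately 0
mean(Z[, 1]^2 * Z[, 2]^2)    # r1 + r2 = 4 (even) -> clearly non-zero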
Joint raw moments of multivariate normal
To close this one, as whuber's comment pointed out, let a multivariate distribution be symmetric in the sense $$(X_1, \ldots, X_m) \sim_d (-X_1, \ldots, -X_m)$$ Suppose also that moments exist. Then
Joint raw moments of multivariate normal To close this one, as whuber's comment pointed out, let a multivariate distribution be symmetric in the sense $$(X_1, \ldots, X_m) \sim_d (-X_1, \ldots, -X_m)$$ Suppose also that moments exist. Then $$E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m] = E[(-X_1)^{r_1}\cdot ... \cdot (-X_m)^{r_m}]$$ $$\implies E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m] = (-1)^{r_1+\cdots+r_m}\cdot E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m]$$ If $r_1+\cdots+r_m $, the sum of moment-orders, is an even number, then $(-1)^{r_1+\cdots+r_m}=1$ and the relation holds always. But if $r_1+\cdots+r_m $ is an odd number then we have $$E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m] = - E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m]$$ and this can only hold if $$E[X_1^{r_1}\cdot ... \cdot X^{r_m}_m] = 0$$
Joint raw moments of multivariate normal To close this one, as whuber's comment pointed out, let a multivariate distribution be symmetric in the sense $$(X_1, \ldots, X_m) \sim_d (-X_1, \ldots, -X_m)$$ Suppose also that moments exist. Then
49,778
How to find out if two time series correlated? they are of not equal length and with unknown delay between cause and effect, irregular log time?
Before tackling correlation of measurement values, it may be worthwhile to explore the correlation of measurement presence. In other words, there may be information to extract from measurement co-occurrence. For instance, it is possible that you measure temperature more often when you eat sweets. If you discover this is true, you can isolate time periods where you have eaten sweets and analyze your temperature during those times. Here is where I would start, given the nature of your dataset and the sort of exploration you are trying to do. Decide on an appropriate window size, say, 10 minutes. Create time intervals corresponding to these windows, starting from the timestamp of your first measurement and ending at the timestamp of your final measurement. Construct a pandas dataframe, where each row corresponds to a time interval, and each column to a measurement type ("feature"). For each interval, compute the average (mean or median) value for each feature corresponding to that interval. If there was no measurement for a certain type during the interval, enter "NaN" for that column. You may also want to record other information. For instance, if you took 3 temperature measurements during a time interval, that could become another feature for you to analyze. (See the example in the edit below.) Make a scatter plot where time intervals are along the x axis, and feature values (the averages from step #2) are the y axis. Each feature should be a different color dot. If the feature is NaN for a time interval, don't plot it. Also, mind your y axis scaling, as it may be hard to visualize the data without doing some normalization first. This sort of plot will give you a first look at all of your features at once. There may be interesting trends that you can hone in on. Or, it may still be too difficult to visualize. That's ok, we are exploring! Use a tool like missingno to analyze your data completeness. There is a lot of cool stuff in this package, and I am not too familiar with it, so I will leave you to explore its possibilities. Personally, I'd take a look at missingno.matrix, missingno.heatmap, and missingno.dendrogram. By that stage, you may have already observed interesting trends. Keep exploring! You don't necessarily need to correlate the time series themselves to uncover interesting stuff. If you are really interested in computing similarity between time series with different scales, look into dynamic time warping. If you computed the pairwise DTW similarity between all of your features, you may be able to infer which features tend to go together. I have seen this applied to financial data to analyze stocks that trend together. However, DTW doesn't solve the problem of missing data. For that, you will want to look into data imputation. But keep in mind, no amount of imputation can create data for you! It can only fill gaps based on what you think belongs there. EDIT You asked for clarification on the averaging process in step 2. Here is an example with body temperature: Assume you are interested in time intervals of width $w$. From time $t_j$ to time $t_{j+1}=t_j+w$, let's say you have measured your temperature $T$ a total of $n_j$ times: $T_1, \ldots, T_{n_j}$. Then, instead of plotting a point for each of the $n_j$ temperature measurements, plot a single point corresponding to the mean temperature $\bar{T}_j=\frac{1}{n_j}\sum_{i=1}^{n_j} T_i$. The goal is simply to create a plot with less clutter. For categorical variables, it would make more sense to plot the median than the mean. By reducing the fidelity of your x-axis to have fewer points, the plot may be easier to look at. But you are dealing with more features than I expected, so the utility of this approach is limited. I would play around with missingno some more --- understanding the feature co-occurrence may be the first step in understanding cause and effect relationships between features. Good luck!
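The windowing step is described in terms of pandas above, but the same idea can be sketched in base R (the column names, window width, and simulated measurement log are purely illustrative):

set.seed(1)
obs <- data.frame(
  time    = as.POSIXct("2024-01-01 08:00") + sort(runif(40, 0, 6 * 3600)),
  feature = sample(c("temperature", "pulse"), 40, replace = TRUE),
  value   = rnorm(40, 70, 10)
)

obs$window <- cut(obs$time, breaks = "10 min")                  # 10-minute intervals
avg  <- aggregate(value ~ window + feature, data = obs, FUN = mean)
wide <- reshape(avg, idvar = "window", timevar = "feature", direction = "wide")
head(wide)    # one row per interval; NA where a feature was not measured in that window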
How to find out if two time series correlated? they are of not equal length and with unknown delay b
Before tackling correlation of measurement values, it may be worthwhile to explore the correlation of measurement presence. In other words, there may be information to extract from measurement co-occu
How to find out if two time series correlated? they are of not equal length and with unknown delay between cause and effect, irregular log time? Before tackling correlation of measurement values, it may be worthwhile to explore the correlation of measurement presence. In other words, there may be information to extract from measurement co-occurence. For instance, it is possible that you measure temperature more often when you eat sweets. If you discover it is true, you can isolate time periods where you have eaten sweets and analyze your temperature during those times. Here is where I would start, given the nature of your dataset and the sort of exploration you are trying to do. Decide on an appropriate window size, say, 10 minutes. Create time intervals corresponding to these windows, starting from the timestamp of your first measurement and ending at the timestamp of your final measurement. Construct a pandas dataframe, where each row corresponds to a time interval, and each column to a measurement type ("feature"). For each interval, compute the average (mean or median) value for each feature corresponding to that interval. If there was no measurement for a certain type during the interval, enter "NaN" for that column. You may also want to record other information. For instance, if you took 3 temperature measurements during a time interval, that could become another feature for you to analyze. (see example in edit below)* Make a scatter plot where time intervals are along the x axis, and feature values (the averages from step #2) are the y axis. Each feature should be a different color dot. If the feature is NaN for a time interval, don't plot it. Also, mind your y axis scaling, as it may be hard to visualize the data without doing some normalization first. This sort of plot will give you a first look at all of your features at once. There may be interesting trends that you can hone in on. Or, it may still be too difficult to visualize. That's ok, we are exploring! Use a tool like missingno to analyze your data completeness. There is a lot of cool stuff in this package, and I am not too familiar with it, so I will leave you to explore its possibilities. Personally, I'd take a look at missingno.matrix, missingno.heatmap, and missingno.dendrogram By that stage, you may have already observed interesting trends. Keep exploring! You don't necessarily need to correlate the time series themselves to uncover interesting stuff. If you are really interested in computing similarity between time series with different scales, look into dynamic time warping. If you computed the pairwise DTW similarity between all of your features, you may be able to infer which features tend to go together. I have seen this applied to financial data to analyze stocks that trend together. However, DTW doesn't solve the problem of missing data. For that, you will want to look into data imputation. But keep in mind, no amount of imputation can create data for you! It can only fill gaps based on what you think belongs there. EDIT You asked for clarification on the averaging process in step 2. Here is an example with body temperature: Assume you are interested in time intervals of width $w$. From time $t_j$ to time $t_{j+1}=t_j+w$, let's say you have measured your temperature $T$ a total of $n_j$ times: $T_1, \ldots, T_{n_j}$. Then, instead of plotting a point for each of the $n_j$ temperature measurements, plot a single point corresponding to the mean temperature $\bar{T}_j=1/{n_j}\sum_{i=1}^{n_j} T_i$. 
The goal is simply to create a plot with less clutter. For categorical variables, it would make more sense to plot the median than the mean. By reducing the fidelity of your x-axis to have fewer points, the plot may be easier to look at. But you are dealing with more features than I expected, so the utility of this approach is limited. I would play around with missingno some more --- understanding the feature co-occurrence may be the first step in understanding cause and effect relationships between features. Good luck!
How to find out if two time series correlated? they are of not equal length and with unknown delay b Before tackling correlation of measurement values, it may be worthwhile to explore the correlation of measurement presence. In other words, there may be information to extract from measurement co-occu
49,779
How to find out if two time series correlated? they are of not equal length and with unknown delay between cause and effect, irregular log time?
You're interested in Granger causality. As for missing data, this will prove costly if you try to use Granger causality, as it uses linear regression (which is quite sensitive to missing data). You can try imputing the mean temperature to replace the missing values, but this comes with its own issues -- for more on this, refer to imputation.
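A hedged sketch of a Granger-causality test with the lmtest package, on regularly spaced series (i.e. after any aggregation or imputation); the simulated series are only illustrative.

# install.packages("lmtest")
library(lmtest)

set.seed(1)
x <- as.numeric(arima.sim(list(ar = 0.5), n = 200))
y <- 0.4 * c(0, 0, head(x, -2)) + rnorm(200)      # y depends on x lagged by 2

grangertest(y ~ x, order = 2)                     # does lagged x help predict y?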
How to find out if two time series correlated? they are of not equal length and with unknown delay b
You're interested in Granger causality. As for missing data, this will prove costly if you try to use Granger causality, as it uses linear regression (which is quite sensitive to missing data). You ca
How to find out if two time series correlated? they are of not equal length and with unknown delay between cause and effect, irregular log time? You're interested in Granger causality. As for missing data, this will prove costly if you try to use Granger causality, as it uses linear regression (which is quite sensitive to missing data). You can try inputting the mean temperature to replace the missing values, but this comes with its own issues -- for more on this, refer to imputation.
How to find out if two time series correlated? they are of not equal length and with unknown delay b You're interested in Granger causality. As for missing data, this will prove costly if you try to use Granger causality, as it uses linear regression (which is quite sensitive to missing data). You ca
49,780
Bootstrap yielding counterintuitive results (Update: but not anymore)
I think the bootstrap is not wrong. What's wrong is the test statistic you're using. The residual sum of squares is calculated using internal validation. The GAM is overfitting the curve. Polynomial approximations obtained from least squares are known to fit relatively badly in the tails, which is actually where a reasonable amount of the ethanol exposure measures are concentrated. Try modifying your approach to use split-sample validation. I get a much more conservative estimate of the incremental predictive accuracy of the GAM using this suggested modification of your program: require(lattice) library(mgcv) data(ethanol) set.seed(123) ## 80/20 test/valid nr <- nrow(ethanol) split0 <- sample(1:nr, floor(0.8*nr)) split1 <- setdiff(1:nr, split0) ## to generate data under the null M0 <- lm(NOx~E+I(E^2)+I(E^3)+I(E^4)+I(E^5),data=ethanol[split0,]) res0 <- residuals(M0) fitntest <- function(ethanol, split0, split1) { M0 <- lm(NOx~E+I(E^2)+I(E^3)+I(E^4)+I(E^5),data=ethanol[split0,]) M1 <- gam(NOx~s(E),data=ethanol[split0,]) RSS0 <- sum({ethanol[split1, 'NOx']-predict(M0, newdata=ethanol[split1, ])}^2) RSS1 <- sum({ethanol[split1, 'NOx']-predict(M1, newdata=ethanol[split1, ])}^2) (RSS0-RSS1)/RSS1 } R <- fitntest(ethanol, split0, split1) Rstar <- replicate(1000, { ethanol$NOx <- predict(M0, newdata=ethanol) + rnorm(nr, 0, sd(res0)) fitntest(ethanol, split0, split1) }) cat("p-value:",mean(Rstar>R),"\n")
Bootstrap yielding counterintuitive results (Update: but not anymore)
I think the bootstrap is not wrong. What's wrong is the test statistic you're using. The residual sum of squares is calculated using internal validation. The GAM is overfitting the curve. Polynomials
Bootstrap yielding counterintuitive results (Update: but not anymore) I think the bootstrap is not wrong. What's wrong is the test statistic you're using. The residual sum of squares is calculated using internal validation. The GAM is overfitting the curve. Polynomials approximations obtained from least squares are known to fit relatively badly in the tails, which is actually where a reasonable amount of the ethanol exposure measures are concentrated. Try modifying your approach to use split-sample validation. I get a much more conservative estimate of the incremental predictive accuracy of the GAM using this suggested modification of your program: require(lattice) library(mgcv) data(ethanol) set.seed(123) ## 80/20 test/valid nr <- nrow(ethanol) split0 <- sample(1:nr, floor(0.8*nr)) split1 <- setdiff(1:nr, split0) ## to generate data under the null M0 <- lm(NOx~E+I(E^2)+I(E^3)+I(E^4)+I(E^5),data=ethanol[split0,]) res0 <- residuals(M0) fitntest <- function(ethanol, split0, split1) { M0 <- lm(NOx~E+I(E^2)+I(E^3)+I(E^4)+I(E^5),data=ethanol[split0,]) M1 <- gam(NOx~s(E),data=ethanol[split0,]) RSS0 <- sum({ethanol[split1, 'NOx']-predict(M0, newdata=ethanol[split1, ])}^2) RSS1 <- sum({ethanol[split1, 'NOx']-predict(M1, newdata=ethanol[split1, ])}^2) (RSS0-RSS1)/RSS1 } R <- fitntest(ethanol, split0, split1) Rstar <- replicate(1000, { ethanol$NOx <- predict(M0, newdata=ethanol) + rnorm(nr, 0, sd(res0)) fitntest(ethanol, split0, split1) }) cat("p-value:",mean(Rstar>R),"\n")
Bootstrap yielding counterintuitive results (Update: but not anymore) I think the bootstrap is not wrong. What's wrong is the test statistic you're using. The residual sum of squares is calculated using internal validation. The GAM is overfitting the curve. Polynomials
49,781
Coefficient of determination invariant to centering and rescaling of variables
Denote $$\tilde{y}=\frac{y-\bar{y}}{\sigma_y}=\frac{Hy}{\sigma_y}$$ and $$\tilde{X}=HX\Sigma_x$$ the scaled variables (other than the scaled constant, which is evidently zero and drops out), where $H=I-\iota\iota'/n$ denotes the demeaning matrix and $\Sigma_x$ is a diagonal matrix with $1/\sigma_{x_j}$ on the main diagonal, $j=1,\ldots,K$. Step 1: In the denominator of the $R^2$ of the regression of the scaled variables, we have $$||\tilde{y}-\overline{\tilde{y}}||_2^2=(\tilde{y}-\bar{\tilde{y}})'(\tilde{y}-\bar{\tilde{y}})=(y-\bar{y})'(y-\bar{y})/\sigma_y^2,$$ as demeaning the demeaned variable again will evidently do nothing. Step 2: The numerator gives the sum of squared residuals of the regression. For the standard regression of $y$ on $X$, this well-known expression reads, in matrix notation, as $$y'(I-X(X'X)^{-1}X')y.\tag{1}$$ For the regression of the scaled variables, i.e., $\tilde{y}$ on $\tilde{X}$, we thus have $$ ||\tilde{y}-\hat{\tilde{y}}||_2^2=\frac{1}{\sigma_y^2}y'H[I-HX\Sigma_x(\Sigma_xX'HX\Sigma_x)^{-1}\Sigma_xX'H]Hy, $$ where it was used that $H$ is symmetric and idempotent and that $\Sigma_x$ is symmetric. Next, since $\Sigma_x$ is also invertible, the expression in square brackets simplifies to $$ I-HX(X'HX)^{-1}X'H, $$ as $(\Sigma_xX'HX\Sigma_x)^{-1}=\Sigma_x^{-1}(X'HX)^{-1}\Sigma_x^{-1}$. (Incidentally, this argument reveals that scaling $HX$ by any invertible matrix $A$ rather than just $\Sigma_x$ would do the trick, as would dividing $Hy$ by any $c\neq0$.) Thus, we obtain $$ ||\tilde{y}-\hat{\tilde{y}}||_2^2=\frac{1}{\sigma_y^2}y'H[I-HX(X'HX)^{-1}X'H]Hy, $$ Here, in analogy to (1), $$ y'H[I-HX(X'HX)^{-1}X'H]Hy $$ denotes the sum of squared residuals of a regression of $Hy$ on $HX$. By the FWL theorem, these residuals are the same as those of a regression of $y$ on $X$ (provided $X$ contains a constant, or at least has columns that can be combined into a constant), i.e. $$ ||y-\hat{y}||_2^2=y'H[I-HX(X'HX)^{-1}X'H]Hy. $$ Hence, \begin{eqnarray*} R^2_{scaled}&:=&1-\dfrac{||\tilde{y}-\hat{\tilde{y}}||_2^2}{||\tilde{y}-\overline{\tilde{y}}||_2^2}\\ &=&1-\dfrac{||y-\hat{y}||_2^2/\sigma_y^2}{||y-\overline{y}||_2^2/\sigma_y^2}\\ &=&R^2 \end{eqnarray*} For those who, like me, like numerical illustrations of formal results: y <- runif(10) X <- runif(10) reg <- lm(y~X) y.s <- scale(y) # scaled y X.s <- scale(X) # scaled X scale.reg <- lm(y.s~X.s) y.d <- scale(y,scale=F) # demeaned y X.d <- scale(X,scale=F) # demeaned X > all.equal(summary(reg)$r.squared, summary(scale.reg)$r.squared) # R^2s are the same [1] TRUE > all.equal(sum(resid(reg)^2)/sd(y)^2, sum(resid(scale.reg)^2)) # comparison numerators [1] TRUE > all.equal(sum((y-mean(y))^2)/sd(y)^2, sum((y.s-mean(y.s))^2)) # comparison denominators [1] TRUE > all.equal(resid(reg), resid(lm(y.d~X.d-1)), check.attributes = F) # residuals of regression and demeaned regression are the same [1] TRUE
Coefficient of determination invariant to centering and rescaling of variables
Denote $$\tilde{y}=\frac{y-\bar{y}}{\sigma_y}=\frac{Hy}{\sigma_y}$$ and $$\tilde{X}=HX\Sigma_x$$ the scaled variables (other than the scaled constant, which is evidently zero and drops out), where $H=
Coefficient of determination invariant to centering and rescaling of variables Denote $$\tilde{y}=\frac{y-\bar{y}}{\sigma_y}=\frac{Hy}{\sigma_y}$$ and $$\tilde{X}=HX\Sigma_x$$ the scaled variables (other than the scaled constant, which is evidently zero and drops out), where $H=I-\iota\iota'/n$ denotes the demeaning matrix and $\Sigma_x$ is a diagonal matrix with $1/\sigma_{x_j}$ on the main diagonal, $j=1,\ldots,K$. Step 1: In the denominator of the $R^2$ of the regression of the scaled variables, we have $$||\tilde{y}-\overline{\tilde{y}}||_2^2=(\tilde{y}-\bar{\tilde{y}})'(\tilde{y}-\bar{\tilde{y}})=(y-\bar{y})'(y-\bar{y})/\sigma_y^2,$$ as demeaning the demeaned variable again will evidently do nothing. Step 2: The numerator gives the sum of squared residuals of the regression. For the standard regression of $y$ on $X$, this well-known expression reads, in matrix notation, as $$y'(I-X(X'X)^{-1}X')y.\tag{1}$$ For the regression of the scaled variables, i.e., $\tilde{y}$ on $\tilde{X}$, we thus have $$ ||\tilde{y}-\hat{\tilde{y}}||_2^2=\frac{1}{\sigma_y^2}y'H[I-HX\Sigma_x(\Sigma_xX'HX\Sigma_x)^{-1}\Sigma_xX'H]Hy, $$ where it was used that $H$ is symmetric and idempotent and that $\Sigma_x$ is symmetric. Next, since $\Sigma_x$ is also invertible, the expression in square brackets simplifies to $$ I-HX(X'HX)^{-1}X'H, $$ as $(\Sigma_xX'HX\Sigma_x)^{-1}=\Sigma_x^{-1}(X'HX)^{-1}\Sigma_x^{-1}$. (Incidentally, this argument reveals that scaling $HX$ by any invertible matrix $A$ rather than just $\Sigma_x$ would do the trick, as would dividing $Hy$ by any $c\neq0$.) Thus, we obtain $$ ||\tilde{y}-\hat{\tilde{y}}||_2^2=\frac{1}{\sigma_y^2}y'H[I-HX(X'HX)^{-1}X'H]Hy, $$ Here, in analogy to (1), $$ y'H[I-HX(X'HX)^{-1}X'H]Hy $$ denotes the sum of squared residuals of a regression of $Hy$ on $HX$. By the FWL theorem, these residuals are the same as those of a regression of $y$ on $X$ (provided $X$ contains a constant, or at least has columns that can be combined into a constant), i.e. $$ ||y-\hat{y}||_2^2=y'H[I-HX(X'HX)^{-1}X'H]Hy. $$ Hence, \begin{eqnarray*} R^2_{scaled}&:=&1-\dfrac{||\tilde{y}-\hat{\tilde{y}}||_2^2}{||\tilde{y}-\overline{\tilde{y}}||_2^2}\\ &=&1-\dfrac{||y-\hat{y}||_2^2/\sigma_y^2}{||y-\overline{y}||_2^2/\sigma_y^2}\\ &=&R^2 \end{eqnarray*} For those who, like me, like numerical illustrations of formal results: y <- runif(10) X <- runif(10) reg <- lm(y~X) y.s <- scale(y) # scaled y X.s <- scale(X) # scaled X scale.reg <- lm(y.s~X.s) y.d <- scale(y,scale=F) # demeaned y X.d <- scale(X,scale=F) # demeaned X > all.equal(summary(reg)$r.squared, summary(scale.reg)$r.squared) # R^2s are the same [1] TRUE > all.equal(sum(resid(reg)^2)/sd(y)^2, sum(resid(scale.reg)^2)) # comparison numerators [1] TRUE > all.equal(sum((y-mean(y))^2)/sd(y)^2, sum((y.s-mean(y.s))^2)) # comparison denominators [1] TRUE > all.equal(resid(reg), resid(lm(y.d~X.d-1)), check.attributes = F) # residuals of regression and demeaned regression are the same [1] TRUE
Coefficient of determination invariant to centering and rescaling of variables Denote $$\tilde{y}=\frac{y-\bar{y}}{\sigma_y}=\frac{Hy}{\sigma_y}$$ and $$\tilde{X}=HX\Sigma_x$$ the scaled variables (other than the scaled constant, which is evidently zero and drops out), where $H=
49,782
Cohen's d from a linear regression model
In your regression model, $c_1$ (an unstandardized regression coefficient for a 0/1 indicator variable) is an adjusted mean difference, adjusting for the other variables in the model. As such, you can use $c_1$ as the numerator in the Cohen's $d$ computation, such as: $$ d = \frac{B}{s_{pooled}} ~, $$ where $B$ is the coefficient $c_1$, and $s_{pooled}$ is: $$ s_{pooled} = \sqrt{ \frac{ s_1^2 \left( n_1-1 \right) + s_2^2 \left( n_2-1 \right) } { n_1 + n_2 - 2 } } ~ $$ The subscripts in the above are for the two groups that are part of $c_1$. The challenge is that unless you have the raw data (which you might but someone conducting a meta-analysis might not), these values are typically not reported for a complex regression model. However, $s_{pooled}^2$ is simply the overall variance for $y$, minus the variance attributed to the treatment effect (e.g., $c_1$). Thus, assuming you have both the overall standard deviation for $y$ and the sample sizes for the two groups created by the $c_1$ indicator variable, $s_{pooled}$ can be computed as follows: $$ s_{pooled} = \sqrt{ \frac{s_y^2(N-1) - B^2\left(\frac{n_1n_2}{n_1+n_2}\right) } {N-2}} ~. $$
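A sketch of this computation on simulated data: take the adjusted mean difference $B$ from the fitted model and recover $s_{pooled}$ from the overall SD of $y$, $B$, and the two group sizes, exactly as in the last formula above.

set.seed(1)
n1 <- 60; n2 <- 40; N <- n1 + n2
group <- rep(c(0, 1), c(n1, n2))                  # the 0/1 indicator (c1's variable)
covar <- rnorm(N)                                 # one additional covariate
y     <- 1 + 0.6 * group + 0.3 * covar + rnorm(N)

B <- coef(lm(y ~ group + covar))["group"]         # adjusted mean difference

s_pooled <- sqrt((sd(y)^2 * (N - 1) - B^2 * (n1 * n2 / (n1 + n2))) / (N - 2))
d <- unname(B / s_pooled)                         # Cohen's d
d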
Cohen's d from a linear regression model
In your regression model, $c_1$ (an unstandardized regression coefficient for a 0/1 indicator variable) is an adjusted mean difference, adjusting for the other variables in the model. As such, you can
Cohen's d from a linear regression model In your regression model, $c_1$ (an unstandardized regression coefficient for a 0/1 indicator variable) is an adjusted mean difference, adjusting for the other variables in the model. As such, you can use $c_1$ as the numerator in the Cohen's $d$ computation, such as: $$ d = \frac{B}{s_{pooled}} ~, $$ where $B$ is the coefficient $c_i$, and $s_{pooled}$ is: $$ s_{pooled} = \sqrt{ \frac{ s_1^2 \left( n_1-1 \right) + s_2^2 \left( n_2-1 \right) } { n_1 + n_2 - 2 } } ~ $$ The subscripts in the above are for the two groups that are part of $c_i$. The challenge is that unless you have the raw data (which you might but someone conducting a meta-analysis might not), these values are typically not reported for a complex regression model. However, $s_{pooled}^2$ is simply the overall variance for $y$, minus the variance attributed to the treatment effect (e.g., $c_i$). Thus, assuming you have both the overall standard deviation for $y$ and the sample sizes for the two groups created by the $c_i$ indicator variable, $s_{pooled$} can be computed as follows: $$ s_{pooled} = \sqrt{ \frac{s_y^2(N-1) - B^2\left(\frac{n_1n_2}{n_1+n_2}\right) } {N-2}} ~. $$
Cohen's d from a linear regression model In your regression model, $c_1$ (an unstandardized regression coefficient for a 0/1 indicator variable) is an adjusted mean difference, adjusting for the other variables in the model. As such, you can
49,783
Cohen's d from a linear regression model
So if you aren't making a statistical argument, what needs to "hold"? Are you asking if it accurately measures the effect of $c_1$ when controlling for other variables? Probably not, since in its standard formulation Cohen's $d$ is the mean difference normalized by an estimate of the pooled standard deviation. On the other hand, $\eta^2$/$\omega^2$, and their partial versions, are explicitly designed to express the effect size(s) you are looking for. Here is a fairly well cited paper about a generalization that works for many common experimental design structures. This article, however, discusses ways to use pairwise t-scores to calculate Cohen's $d$ while controlling for covariates.
Cohen's d from a linear regression model
So if you aren't making an statistical arguments, what needs to "hold"? Are you asking if it accurately measures the effect of $c_1$ when controlling for other variables? Probably not, since in its st
Cohen's d from a linear regression model So if you aren't making an statistical arguments, what needs to "hold"? Are you asking if it accurately measures the effect of $c_1$ when controlling for other variables? Probably not, since in its standard formulation Cohen's $d$ is the mean difference normalized by an estimate of the pooled variance. On the other hand, $\eta^2$/$\omega^2$, and their partial versions, are explicitly designed to express the effect size(s) you are looking for. Here is a fairly well cited paper about a generalization that works for many common experimental design structures. This article, however, discusses ways to use pairwise t-scores to calculate Cohen's $d$ while controlling for covariates.
Cohen's d from a linear regression model So if you aren't making an statistical arguments, what needs to "hold"? Are you asking if it accurately measures the effect of $c_1$ when controlling for other variables? Probably not, since in its st
49,784
Rupees from Bushes in Majora's Mask, what distribution to use?
So I contacted an old Statistics professor I had in college about this problem, and he pointed me to the Multinomial distribution, the exact solution for the odds of getting any specific count of each drop type. I used the Multinomial function in the form of N! / (n1! * n2! * n3! * n4!) * (p1^n1 * p2^n2 * p3^n3 * p4^n4), where $N$ is the total number of shrubs being cut, with the $n$'s representing each possible drop outcome, and the $p$'s the probability of each outcome occurring. With this, I could calculate the exact odds of getting 30 green rupees x1, 10 green rupees x3, 5 blue rupees, and nothing 27 times. However, since the multinomial doesn't have a straightforward CDF, and I wanted a CDF of total rupees, not simply counts, I still needed to calculate all of the possible combinations of drops I could get - and not just the total number, but actually each individual combination. This problem is called a multichoose, or stars and bars problem, and it's essentially a problem where, rather than selecting a smaller group from a larger group, as is typical in combination problems, you're figuring out every way you can place a large number of items into a smaller number of buckets. The total number of possibilities can be calculated by $$ \left( \! \binom{k}{n} \! \right) = \binom{k + n - 1}{n}, $$ where $k$ is the number of buckets, and $n$ is the number of items being placed into the buckets. (Or classically, the number of bars to divide with and the number of stars.) Now, this formula can get me the total number of possibilities, but not the possibilities themselves; for that I needed to brute-force the problem programmatically. While I began by trying to write my own algorithm, I found a repository of multichoose generation functions in various languages by ekg. I used the Python version to output every possible multiset to a CSV file. From there, in Excel I totaled up the Rupee value for each multiset, and ran the numbers through the multinomial function to get the probability of each multiset occurring. Then I used a Pivot Table to total up the probabilities for all multisets with the same Rupee totals, and then simply used that data to get a cumulative probability for the Rupee totals. I also did a Monte Carlo simulation of 10 million attempts in R to confirm that the numbers were correct, and they matched very neatly (closely enough in fact that I think I'd recommend a Monte Carlo solution to anyone dealing with a problem like this). Here's the Monte Carlo code: combos <- matrix(0, nrow=10000000, ncol=4) for(trial in 1:10000000) { outcomes <- sample(c(1, 2, 3, 4), size = 72, replace = TRUE, prob = c(5/16, 1/16, 1/16, 9/16)) combos[trial,] <- tabulate(outcomes, nbins=4) } rupeeSimulations <- apply(combos, 1, function(row) row[1]*1+row[2]*3+row[3]*5) table(rupeeSimulations) Graphing it in R, the result was this: I showed this to Majora's Mask speedrunners, and the numbers didn't seem to line up with their experience. This had me dig further into the Drop Table behavior, and I discovered that Majora's Mask has a drop type that I now call "Mask Drops," where the game actually changes the drop depending on what mask the player is currently using. Turns out, the Deku Scrub mask would turn that drop into green rupees, and on my original table I had tested everything as Link, with no mask, so I got Arrows rather than rupees for that type of drop. 
So in the end, I discovered something completely new about the game, had to update my Drop Table Infographic (new version here), and finally was able to get an accurate chart for the original problem. I learned a lot: I had never seen the multichoose before, and I didn't recall the multinomial function from my studies. Thank you to everyone who looked at this, and big thanks to my college professor for pointing me in the right direction.
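As a quick sanity check on the numbers above, the probability of that single example combination and the total number of distinct drop multisets can be computed directly in base R (a sketch; the drop probabilities 5/16, 1/16, 1/16, 9/16 and the counts 30/10/5/27 are the ones used in the Monte Carlo code and the example combination above):

# P(exactly 30 "green x1", 10 "green x3", 5 "blue", 27 "nothing" out of 72 shrubs)
dmultinom(x = c(30, 10, 5, 27), prob = c(5/16, 1/16, 1/16, 9/16))

# Number of distinct drop multisets for k = 4 outcomes and n = 72 shrubs (stars and bars)
choose(4 + 72 - 1, 72)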
49,785
Rupees from Bushes in Majora's Mask, what distribution to use?
You can compute the exact distribution by using a repeated convolution. Here is an example with R code:

n = 10

### a vector of probabilities for the number of rupees
p = rep(0, n*5+5+1)
p[1+c(0,1,3,5)] = c(11,3,1,1)/16

### loop some number of times for the number of shrubs
for (i in 2:n) {
  p_new = p*0
  ### go through all values of the vector of probabilities and compute the new probabilities
  for (j in 1:(n*5+1)) {
    p_new[j+c(0,1,3,5)] = p_new[j+c(0,1,3,5)] + p[j]*c(11,3,1,1)/16
  }
  p = p_new
}

plot(0:(n*5), cumsum(p[1:(n*5+1)]),
     xlab = "number of rupees", ylab = "cumulative probability")
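As a usage sketch, the same code can be rerun with n set to 72 (the number of shrubs used in the other answer's Monte Carlo simulation); the resulting vector p then contains the exact probability of every possible rupee total, from which summaries and tail probabilities can be read off directly. The threshold of 30 rupees below is just an arbitrary illustration:

# after running the code above with n = 72
total <- 0:(n * 5)
sum(p[1:(n * 5 + 1)] * total)    # expected total number of rupees
sum(p[(30 + 1):(n * 5 + 1)])     # P(total >= 30 rupees); 30 is an arbitrary example threshold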
49,786
Stopping rules in group sequential designs
What's the point of having a futility hypothesis? Government regulations and medical ethics affect the design of medical studies and govern the conduct of clinical trials by describing good clinical practices (GCPs) for studies with both human and non-human animal subjects. Examples and supporting quotes from the articles: "When inferiority meets non-inferiority: implications for interim analyses" (July 13 2012) by Bratton DJ, et al. Clin Trials. 2012 Oct;9(5):605-9. doi: 10.1177/1740774512453220. Epub 2012 Jul 13. "Method Conditional power is used to assess futility or the inability of the trial to show non-inferiority at the final analysis, by calculating the probability of demonstrating non-inferiority in the final analysis conditional on the observed results and upon assumptions on the future results of the trial. The Bullous Pemphigoid Steroids and Tetracyclines Study (BLISTER) trial is an example of a trial where a simultaneous inferior and non-inferior result could occur. A method for calculating conditional power for non-inferiority using simulations is described and applied at a hypothetical interim analysis of this trial. Results Stopping guidelines for futility based on conditional power are shown to be better suited to non-inferiority trials than the typical methods used in superiority trials. Simulations are a straightforward and flexible way of calculating conditional power. Conclusions Conditional power is an appropriate tool for defining stopping guidelines for futility in non-inferiority trials, particularly those with large non-inferiority margins.". "Noninferiority trials" by Steven M Snapinn (2000 Jul 31) doi: 10.1186/cvm-1-1-019 "Abstract Noninferiority trials are intended to show that the effect of a new treatment is not worse than that of an active control by more than a specified margin. These trials have a number of inherent weaknesses that superiority trials do not: no internal demonstration of assay sensitivity, no single conservative analysis approach, lack of protection from bias by blinding, and difficulty in specifying the noninferiority margin. Noninferiority trials may sometimes be necessary when a placebo group can not be ethically included, but it should be recognized that the results of such trials are not as credible as those from a superiority trial. Terminology 'Noninferiority' is a relatively new term that has not been universally adopted, and in the past noninferiority and equivalence trials, which have an important distinction, have both been referred to as 'equivalence trials'. To make the confusion even worse, both of these terms are somewhat misleading. It is fundamentally impossible to prove that two treatments have exactly equivalent effects. Equivalence trials, therefore, aim to show that the effects differ by no more than a specific amount. This tolerance is known as the equivalence margin, and is often denoted by the symbol δ. In an equivalence trial, if the effects of the two treatments differ by more than the equivalence margin in either direction, then equivalence does not hold. Noninferiority trials, on the other hand, aim to show that an experimental treatment is not worse than an active control by more than the equivalence margin. An improvement of any size fits within the definition of noninferiority. 
Bioequivalence trials are true equivalence trials, but it is difficult to imagine any trial comparing the clinical effects of an experimental treatment and active control that would not more appropriately be termed a noninferiority trial.". "Stopping clinical trials early for futility: retrospective analysis of several randomised clinical studies" (Dec 13 2011) by Mark Jitlal, Iftekhar Khan, Siow Ming Lee, and Allan Hackshaw. doi: 10.1186/1745-6215-12-S1-A53. "Background Many clinical trials show no overall benefit. We examined futility analyses applied to trials with different effect sizes. Methods Ten randomised cancer trials were retrospectively analysed; target sample size reached in all. The hazard ratio indicated no overall benefit (n=5), or moderate (n=4) or large (n=1) treatment effects. Futility analyses were applied after 25, 50 and 75% of events were observed, or patients were recruited. Outcomes were conditional power (CP), and time and cost savings. Results Futility analyses could stop some trials with no benefit, but not all. After observing 50% of the target number of events, 3 out of 5 trials with no benefit could be stopped early (low CP ≤ 15%). Trial duration for two studies could be reduced by 4-24 months, saving £44 000-231 000, but the third had already stopped recruiting, hence no savings were made. However, of concern was that 2 of the 4 trials with moderate treatment effects could be stopped early at some point, although they eventually showed worthwhile benefits. Conclusions Careful application of futility can lead to future patients in a trial not being given an ineffective treatment, and should therefore be used more often. A secondary consideration is that it could shorten trial duration and reduce costs. However, studies with modest treatment effects could be inappropriately stopped early. Unless there is very good evidence for futility, it is often best to continue to the planned end.". "Understanding noninferiority trials" (Nov 23 2012) by Seokyung Hahn, doi: 10.3345/kjp.2012.55.11.403. "Abstract Noninferiority trials test whether a new experimental treatment is not unacceptably less efficacious than an active control treatment already in use. With continuous improvements in health technologies, standard care, and clinical outcomes, the incremental benefits of newly developed treatments may be only marginal over existing treatments. Sometimes assigning patients to a placebo is unethical. In such circumstances, there has been increasing emphasis on the use of noninferiority trial designs. Noninferiority trials are more complex to design, conduct, and interpret than typical superiority trials. This paper reviews the concept of noninferiority trials and discusses some important issues related to them.". Please continue reading at the links provided. "... I don't care too much whether there's harm or futility: I'm not going to sell this drug." There's a big difference between harm and futility; the placebo effect is a necessary consideration in some types of trials. These tests are done long before sales are approved, and in some cases drugs that are harmful in certain circumstances are life-saving in others. That is why some designs call for three categories; later in the trials only effective drugs are used, as harmful or placebo products have already been removed from the trial. Use of conditional power alone to guide stopping would probably be incorrect. Years of education and study of trial methods are necessary to design competent clinical trials.
There are too many things to consider, and guidance by a team of experts is necessary; reliance on a Stack Exchange answer alone is ill advised. Indeed, neither this site nor this author offers medical advice; this is merely an attempt to provide a brief answer for laypersons who share an interest in the question and seek guidance for months of further reading. For a discussion of other statistical methods, see the following paper, where the authors explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. "The utility of Bayesian predictive probabilities for interim monitoring of clinical trials" (May 28 2014) by Benjamin R. Saville, Jason T. Connor, Gregory D Ayers, and JoAnn Alvarez. DOI: 10.1177/1740774514531352 Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. ... Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. ... Two common types of questions addressed by interim analyses include 1) Is there convincing evidence in favor of the null or alternative hypotheses? and 2) Is the trial likely to show convincing evidence in favor of the alternative hypothesis if additional data are collected? The first question pertains to the evidence presently shown by the data, and is best addressed using estimation, p-values, or Bayesian posterior probabilities. The second deals with prediction of what evidence will be available at later stages in the study, and is best addressed using stochastic curtailment methods such as conditional power or predictive probability. Conditional power is often criticized for assuming the unknown parameters are fixed at specific values. In contrast, Bayesian predictive probabilities average the probability of trial success over the variability in the parameter estimates, and can be used whether the trial’s primary analysis is frequentist or Bayesian. They are most often used to predict trial success at a final pre-determined sample size, but can also be used to predict trial success based on an interim sample size with extended follow-up. Predictive probabilities have been widely discussed in the literature. However, the literature lacks a general discussion of the advantages of predictive probabilities for interim monitoring. This article contrasts predictive probabilities versus traditional methods for interim monitoring of clinical trials. We illustrate settings in which predictive probabilities have advantages over more traditional methods of interim monitoring, particularly in the context of futility monitoring and efficacy monitoring with lagged outcomes (Sections 2-4). We also explore the relationship between the predictive probability and posterior probability (Section 5) and conclude with a discussion.".
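Purely as an illustration of the kind of calculation that "conditional power" refers to (and not the method of any of the papers quoted above), here is a generic textbook-style R sketch for a one-sided z-test under the common "current trend" assumption; the interim z-values and information fractions below are made up:

# Conditional power of a one-sided z-test under the "current trend" assumption:
# z_interim = interim z-statistic, info_frac = fraction of total information observed,
# alpha     = one-sided significance level
conditional_power <- function(z_interim, info_frac, alpha = 0.025) {
  z_crit <- qnorm(1 - alpha)
  # under the current-trend assumption, the drift is estimated as z_interim / sqrt(info_frac)
  1 - pnorm((z_crit - z_interim / sqrt(info_frac)) / sqrt(1 - info_frac))
}

conditional_power(z_interim = 0.5, info_frac = 0.5)  # weak interim signal -> low conditional power
conditional_power(z_interim = 2.0, info_frac = 0.5)  # strong interim signal -> high conditional power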
49,787
Trying to implement the Jensen-Shannon Divergence for Multivariate Gaussians
Actually, using the answer in https://stackoverflow.com/questions/26079881/kl-divergence-of-two-gmms (and the fact that the author factored out the 1/2 from the logarithm and made the Monte Carlo approximation sample from both distributions to average the result), I would say that the symmetrized numerical code for the Jensen-Shannon divergence using Monte Carlo integration, even for general scipy.stats distributions (_p and _q), should look like this:

import numpy as np
import scipy.stats as st

def distributions_js(distribution_p, distribution_q, n_samples=10 ** 5):
    # Jensen-Shannon divergence (the Jensen-Shannon distance is the square root of the divergence).
    # All logarithms are base 2 (information entropy in bits).
    X = distribution_p.rvs(n_samples)
    p_X = distribution_p.pdf(X)
    q_X = distribution_q.pdf(X)
    log_mix_X = np.log2(p_X + q_X)

    Y = distribution_q.rvs(n_samples)
    p_Y = distribution_p.pdf(Y)
    q_Y = distribution_q.pdf(Y)
    log_mix_Y = np.log2(p_Y + q_Y)

    return (np.log2(p_X).mean() - (log_mix_X.mean() - np.log2(2))
            + np.log2(q_Y).mean() - (log_mix_Y.mean() - np.log2(2))) / 2

print("should be different:")
print(distributions_js(st.norm(loc=10000), st.norm(loc=0)))
print("should be same:")
print(distributions_js(st.norm(loc=0), st.norm(loc=0)))

For discrete distributions, replace .pdf with the corresponding probability mass function (.pmf) to get the probabilities of the samples.
49,788
What is the appropriate zero-correlation parameter model for factors in lmer?
SUMMARY: Which zero-correlation model is most appropriate depends on the data. There is no universally correct choice.

I will consider the same Machines data set. It has several Workers, each repeatedly tested on all three Machines. The maximal mixed model is thus lmer(score ~ 1 + Machine + (0 + Machine | Worker), d) which fits a $3\times 3$ covariance matrix of the random effects. The fixed effects define the mean score for each Machine; there are three Machines, so this is a three-dimensional vector $\mu$. On top of that, each Worker $i$ deviates from this $\mu$ by some "random" three-dimensional vector $\mu_i$. These $\mu_i$ are random vectors with mean zero $(0,0,0)$ and some $3\times 3$ covariance matrix $\Sigma$. Such a covariance matrix has 6 parameters: $$\Sigma=\begin{bmatrix}\sigma^2_A&\sigma^2_{AB} &\sigma^2_{AC}\\\sigma^2_{AB}&\sigma^2_B&\sigma^2_{BC}\\\sigma^2_{AC}&\sigma^2_{BC}&\sigma^2_C\end{bmatrix}.$$ Note that lmer(score ~ 1 + Machine + (1 + Machine | Worker), d) yields an equivalent model, only parameterized differently. The exact parametrization can also depend on the chosen contrasts, but I find it easiest to discuss this with dummy contrasts, hence my (0 + Machine | Worker) specification above.

The crucial point here is that every model that simplifies the random effect structure can be understood as imposing some specific constraints on $\Sigma$. The random intercept (1 | Worker) model corresponds to $$\Sigma=\begin{bmatrix}\sigma^2_w&\sigma^2_w &\sigma^2_w\\\sigma^2_w&\sigma^2_w&\sigma^2_w\\\sigma^2_w&\sigma^2_w&\sigma^2_w\end{bmatrix}.$$ Here each Worker gets a random scalar intercept $m_i$, i.e. $\mu_i = (m_i, m_i, m_i)$; the entries of $\mu_i$ are correlated with correlation 1. The random interaction (1 | Worker:Machine) model corresponds to $$\Sigma=\begin{bmatrix}\sigma^2_{wm}&0&0\\0&\sigma^2_{wm}&0\\0&0&\sigma^2_{wm}\end{bmatrix}.$$ Here $\mu_i$ has three entries with the same variance that are assumed to be uncorrelated. In the following, let A, B, and C be dummy variables for the three Machines. Then the (0 + A | Worker) model corresponds to $$\Sigma=\begin{bmatrix}\sigma^2_A&0&0\\0&0&0\\0&0&0\end{bmatrix}.$$ Here $\mu_i$ has only one non-zero entry with variance $\sigma^2_A$. Similarly for (0 + B | Worker) and (0 + C | Worker).

The second crucial thing to realize is that a sum of uncorrelated multivariate Gaussians with $\Sigma_1$ and $\Sigma_2$ has covariance matrix $\Sigma_1+\Sigma_2$. So to understand what happens with more complicated random structures, we can simply add up the covariance matrices written above.
For example, lmer(score ~ 1 + Machine + (1 | Worker) + (1 | Worker:Machine), d) fits a covariance matrix with 2 parameters (this form of the covariance matrix is known as "compound symmetry"): $$\Sigma=\begin{bmatrix}\sigma^2_{wm}+\sigma^2_w&\sigma^2_w &\sigma^2_w\\\sigma^2_w&\sigma^2_{wm}+\sigma^2_w&\sigma^2_w\\\sigma^2_w&\sigma^2_w&\sigma^2_{wm}+\sigma^2_w\end{bmatrix}.$$ The model that Rune Christensen recommends for uncorrelated factors, lmer(score ~ 1 + Machine + (1 + A + B + C || Worker), d), fits a model with 4 parameters that is a bit more general than compound symmetry (and is only 2 parameters away from the maximal model): $$\Sigma=\begin{bmatrix}\sigma^2_A+\sigma^2_w&\sigma^2_w &\sigma^2_w\\\sigma^2_w&\sigma^2_B+\sigma^2_w&\sigma^2_w\\\sigma^2_w&\sigma^2_w&\sigma^2_C+\sigma^2_w\end{bmatrix}.$$ The model that you "until recently" had in mind (your m2) is the model that Reinhold Kliegl recommends as the zero-correlation model: lmer(score ~ 1 + Machine + (1 + c1 + c2 || Worker), d) If c1 and c2 were produced using the default treatment contrasts (with A being the reference level), then this model can be written as lmer(score ~ 1 + Machine + (1 + B + C || Worker), d) I agree with Rune that it is a somewhat unreasonable model because it treats factor levels differently: B and C get their own variance but A does not (the corresponding $\Sigma$ would look the same as the one above but without $\sigma^2_A$), whereas all three machines should arguably be treated on the same footing. Thus, the most reasonable sequence of nested models seems to be: max model --> comp symmetry w/ unequal vars --> comp symmetry --> rand. intercept

A note on marginal distributions

This post was inspired by Rune Christensen's email here https://stat.ethz.ch/pipermail/r-sig-mixed-models/2018q2/026847.html. He talks about $9\times 9$ marginal covariance matrices for individual observations within each Worker. I find this more difficult to think about, compared to my presentation above. The covariance matrix from Rune's email can be obtained from any $\Sigma$ as $$\Sigma_\text{marginal} = \mathbf{1}_{m\times m} \otimes \Sigma + \sigma^2 I,$$ where $\mathbf{1}_{m\times m}$ is the $m\times m$ matrix of ones, $m$ is the number of repetitions per Worker/Machine combination (in this dataset $m=3$), and $\sigma^2$ is the residual variance.

A note on sum contrasts

@statmerkur is asking about sum contrasts. Indeed, it is often recommended to use sum contrasts (contr.sum), especially when there are interactions in the model. I feel that this does not affect anything that I wrote above. E.g. the maximal model will still fit an unconstrained $\Sigma$, but the interpretation of its entries is going to be different (variances and covariances of the grand mean and of the deviations of A and B from the grand mean). The $\Sigma$ in m2 defined using contr.sum will have the same form as in (1+A+B || Worker) above, but again, with a different interpretation of the entries. Two further comments:
1. Rune's critique of m2 still applies: this random effect structure does not treat A, B, and C on the same footing.
2. The recommendation to use sum contrasts makes sense for the fixed effects (in the presence of interactions). I don't see a reason to necessarily prefer sum contrasts for the random effects, so I think, if one wants to, one can safely use (1+A+B+C || Worker) even if the fixed part uses sum contrasts.

A note on custom contrasts

I had an email exchange with Reinhold Kliegl about this answer.
Reinhold says that in his applied work he prefers (1+c1+c2 || subject) over (1+A+B+C || subject) because he chooses c1 and c2 to be meaningful contrasts. He wants to be able to interpret $\Sigma$, and he wants its entries to correspond to c1 and c2. This basically means that Reinhold is fine with rejecting the assumption (that I made above) that the factor levels should be treated equally; he does not care about the individual factor levels at all! If so, then of course it is fine to use (1+c1+c2 || subject). He gives his paper https://www.frontiersin.org/articles/10.3389/fpsyg.2010.00238/full as an example. There, a four-level factor is coded with 3 custom contrasts c1, c2, c3, and the grand mean as the intercept. These specific contrasts are of interest, not the individual factor levels A to D. In this situation I agree that (1+c1+c2+c3 || subject) makes total sense. But one should be clear that while (1+c1+c2+c3 | subject) does treat factor levels A to D equally (and merely re-parametrizes $\Sigma$ in terms of particular contrasts), (1+c1+c2+c3 || subject) will fail to treat the factor levels equally.
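To make the nested sequence discussed above concrete, here is a rough R sketch (assuming the Machines data that ships with the nlme or MEMSS package and the lme4 package; the dummies A, B, C are built by hand, and the maximal model may well produce convergence warnings on such a small data set):

library(lme4)
data("Machines", package = "nlme")   # Worker, Machine, score (also available in MEMSS)
d <- as.data.frame(Machines)
d$A <- as.numeric(d$Machine == "A")  # dummy variables for the three machines
d$B <- as.numeric(d$Machine == "B")
d$C <- as.numeric(d$Machine == "C")

m_max <- lmer(score ~ 1 + Machine + (0 + Machine | Worker), d)               # unconstrained Sigma (6 parameters)
m_csu <- lmer(score ~ 1 + Machine + (1 + A + B + C || Worker), d)            # compound symmetry with unequal variances
m_cs  <- lmer(score ~ 1 + Machine + (1 | Worker) + (1 | Worker:Machine), d)  # compound symmetry
m_int <- lmer(score ~ 1 + Machine + (1 | Worker), d)                         # random intercept only

# Likelihood-ratio tests of the nested sequence (anova() refits with ML;
# p-values for variances tested on the boundary are conservative)
anova(m_int, m_cs, m_csu, m_max)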
49,789
$R^2$ of a regression predicting noise
As per this question, we have $$R^2 \sim \text{Beta}\left (\frac {p-1}{2}, \frac {n-p}{2}\right)$$ in view of your assumption of error normality (the assumption that the regressors are also multivariate normal is not necessary for this result). The answers there also show that the mode of this distribution (you might of course also want to look at the mean or other characteristics of the distribution) is $$\text{mode}\,R^2 = \frac {\frac {p-1}{2}-1}{\frac {p-1}{2}+ \frac {n-p}{2}-2} =\frac {p-3}{n-5}. $$ For the distribution to have a unique and finite mode we must have $$p> 3,\;n >p+2. $$ Hence, we see that, for fixed $p$, the mode decreases to zero at rate $O(1/n)$, but modes quite a bit away from zero are to be expected for "overfitted" models for which $p$ is large relative to $n$.

n <- seq(10, 100, 10)
p <- seq(4, 30, 3)
modes <- outer(n, p, function(n, p) ifelse(n > p + 2, (p - 3)/(n - 5), NA))

library(plotly)
plot_ly(x = n, y = p, z = t(modes), type = "surface") %>%
  layout(scene = list(xaxis = list(title = "n"),
                      yaxis = list(title = "p"),
                      zaxis = list(title = "R^2")))
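A quick simulation check of this Beta result (a sketch; n and p below are arbitrary illustrative values): regress pure noise on pure-noise regressors many times and compare the empirical distribution of $R^2$ with the $\text{Beta}\left(\frac{p-1}{2}, \frac{n-p}{2}\right)$ density.

set.seed(1)
n <- 50
p <- 10                               # number of coefficients, including the intercept
r2 <- replicate(5000, {
  y <- rnorm(n)
  X <- matrix(rnorm(n * (p - 1)), n)  # p - 1 noise regressors plus the intercept
  summary(lm(y ~ X))$r.squared
})
hist(r2, breaks = 40, freq = FALSE, main = "R^2 under pure noise")
curve(dbeta(x, (p - 1)/2, (n - p)/2), add = TRUE, lwd = 2)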
49,790
Questions on Bayesian Softmax Regression [closed]
Logistic regression models the probability of success $\mathbb{P}(Y=1 \mid X)$ and uses a Bernoulli likelihood function $$ Y \sim \mathcal{B}\big(g^{-1}(X\beta)\big) $$ with, most often, a logistic inverse link function $g^{-1}$. For more than two categories, we use the categorical distribution and multinomial (softmax) regression (recall that the categorical distribution is a special case of the multinomial distribution with $n=1$) to model the probability of observing the $j$-th category, $\mathbb{P}(Y=j\mid X)$. The likelihood is $$ Y \sim \mathcal{C}\big(\pi_1,\pi_2,\dots,\pi_K\big) $$ where the probabilities are calculated using the softmax as a link function $$ \pi_j = \frac{ e^{x^T w_j} }{ \sum_{k=1}^K e^{x^T w_k} }. $$ Since the vector of parameters $w$ does not really differ from the parameters we'd see in linear regression, the priors won't differ either, so you can use any priors that you find reasonable (e.g. normal distributions). You can find many examples of such models implemented in Stan or PyMC3, which could help with your concerns about implementing it. As for a possible distribution for the probabilities $\pi_1,\pi_2,\dots,\pi_K$ themselves, two natural choices would be the Dirichlet distribution or, in this case (softmax applied to Gaussian linear predictors), the multivariate logistic-normal (logit-normal) distribution; but this is not something you need to specify, since these probabilities do not enter the model definition directly. Said differently, you don't put a prior on those probabilities, since they are determined by the data and the regression parameters; you put priors on the regression parameters. For the intercept-only model, you could use the conjugate Dirichlet-multinomial model.
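As a minimal illustration of the softmax link with a normal prior on the weights (a sketch only, not a full Bayesian fit; the numbers and dimensions are made up):

softmax <- function(eta) { e <- exp(eta - max(eta)); e / sum(e) }  # numerically stable softmax

set.seed(1)
K <- 3                                       # number of categories
x <- c(1, 0.5, -1.2)                         # one observation: intercept plus two predictors
W <- matrix(rnorm(length(x) * K), ncol = K)  # one weight vector per category, drawn from a N(0, 1) prior
pi <- softmax(as.vector(t(W) %*% x))         # category probabilities pi_1, ..., pi_K
pi
sum(pi)                                      # equals 1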
49,791
Derivatives of features
The first derivative is often referred to as the delta MFCC, and the second derivative as the delta-delta MFCC. The same concepts can also be applied to mel spectrograms. The derivatives of the MFCCs model change: how much variation there is between frames (per filter band). A constant sound would have a high summarized mean MFCC but a low summarized mean delta-MFCC. This has been shown to improve results on speech classification tasks, for instance. The delta MFCC is computed per frame: for each frame, it is the current MFCC values minus the previous frame's MFCC values. In practice it is common to also apply a smoothing filter, as the difference operation is naturally sensitive to noise. For example in Python, one can use librosa to compute the MFCC and its deltas:

import librosa

y, sr = librosa.load(librosa.util.example_audio_file())
mfcc = librosa.feature.mfcc(y=y, sr=sr)
mfcc_delta = librosa.feature.delta(mfcc, order=1)
mfcc_delta2 = librosa.feature.delta(mfcc, order=2)
49,792
GANS: Using Discriminator for prediction
The reason why this approach does not work is that at the end of training, the discriminator converges to a state in which both real and generated data are classified with the same probability, i.e. the discriminator is unable to distinguish between them. Consider the opposite: if the discriminator could still distinguish between real and generated data, you could simply continue improving the generator using this information. This is nicely described in Figure 1 of the original paper (Goodfellow et al., 2014: Generative Adversarial Nets): Figure 1: Generative adversarial nets are trained by simultaneously updating the discriminative distribution ($D$, blue, dashed line) so that it discriminates between samples from the data generating distribution (black, dotted line) $p_x$ from those of the generative distribution $p_g$ ($G$) (green, solid line). The lower horizontal line is the domain from which $z$ is sampled, in this case uniformly. The horizontal line above is part of the domain of $x$. The upward arrows show how the mapping $x = G(z)$ imposes the non-uniform distribution $p_g$ on transformed samples. $G$ contracts in regions of high density and expands in regions of low density of $p_g$. (a) Consider an adversarial pair near convergence: $p_g$ is similar to $p_\mathrm{data}$ and $D$ is a partially accurate classifier. (b) In the inner loop of the algorithm $D$ is trained to discriminate samples from data, converging to $D^∗ (x) = \frac{p_\mathrm{data}(x)}{p_\mathrm{data}(x)+p_g(x)}$. (c) After an update to $G$, gradient of $D$ has guided $G(z)$ to flow to regions that are more likely to be classified as data. (d) After several steps of training, if $G$ and $D$ have enough capacity, they will reach a point at which both cannot improve because $p_g = p_\mathrm{data}$. The discriminator is unable to differentiate between the two distributions, i.e. $D(x) = \frac{1}{2}$.
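To see what the caption's formula for the optimal discriminator implies, here is a small numeric sketch with two made-up one-dimensional densities (the means and standard deviations are arbitrary illustrative choices):

# Optimal discriminator D*(x) = p_data(x) / (p_data(x) + p_g(x)) for two example densities
x <- seq(-4, 5, length.out = 200)
p_data <- dnorm(x, mean = 0, sd = 1)   # hypothetical data density
p_g    <- dnorm(x, mean = 1, sd = 1)   # hypothetical (imperfect) generator density
D_star <- p_data / (p_data + p_g)

plot(x, D_star, type = "l", ylab = "D*(x)")
abline(h = 0.5, lty = 2)
# If p_g were exactly equal to p_data, D*(x) would be identically 1/2,
# i.e. the converged discriminator carries no usable information.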
49,793
GANS: Using Discriminator for prediction
I have a different opinion from the one above, and I think you were misunderstood. I believe it is possible. Training of a GAN is done when the generator fools the discriminator. At that point the generator can be used as a source of synthetic data - there are already papers on this topic https://iopscience.iop.org/article/10.1088/1742-6596/1577/1/012027/pdf - though this is not what you are interested in. Simple example: you had a dataset of cats, and the generator learned to create pictures of cats so well that the discriminator cannot distinguish them anymore. Logically, if you take the discriminator from the trained GAN and feed it pictures of dogs and cats, it should be capable of saying NO, the picture of the dog is not a cat! As your post was 4 years ago, how did it go? :)
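A toy sketch of the thresholding logic behind this idea, using simulated discriminator scores; the score distributions below are assumptions standing in for a real trained discriminator's outputs, so this illustrates the logic only and is not evidence that a trained discriminator actually behaves this way.
# Simulated scores only; a real experiment would use the trained discriminator's outputs.
set.seed(7)
cat_scores <- rbeta(100, 20, 20)   # in-distribution (cat) images: scores near 0.5
dog_scores <- rbeta(100, 2, 20)    # out-of-distribution (dog) images: assumed to score low
near_half <- function(s, lo = 0.3, hi = 0.7) s > lo & s < hi
mean(near_half(cat_scores))        # fraction of cat images flagged as "cat-like"
mean(near_half(dog_scores))        # fraction of dog images flagged as "cat-like"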
GANS: Using Discriminator for prediction
I have a different opinion than above and I think you got misunderstood. I believe it is possible. Training of GAN is done when the generator fool the discriminator. The Generator is useless at this p
GANS: Using Discriminator for prediction I have a different opinion than above and I think you got misunderstood. I believe it is possible. Training of GAN is done when the generator fool the discriminator. The Generator is useless at this point and can be used as a generator of synthetic data - there are already papers on this topic https://iopscience.iop.org/article/10.1088/1742-6596/1577/1/012027/pdf Though this is not what you are interested in. Simple example: You had dataset of cats and generator learned to create pics of cats so well that discriminator cannot distinguish it anymore. Logically when you take the Discriminator from the trained GAN and you feed it with pictures of a dog and cats it should be capable to say NO the pic of the dog is not a cat! As your post was 4 years ago, how did go? :)
GANS: Using Discriminator for prediction I have a different opinion than above and I think you got misunderstood. I believe it is possible. Training of GAN is done when the generator fool the discriminator. The Generator is useless at this p
49,794
How to calculate the confidence that a trend in a time-series is positive?
The data you describe is what might be considered summary statistics. Whether somebody likes apples can be considered a binary outcome (1 = likes apples, 0 = doesn't like apples). Therefore, if you survey 10,000 people over 6 years then you have 60,000 responses measuring the binary outcome of whether each individual likes apples. The probability that somebody likes apples is actually the mean value of the binomial outcome. This is important for two primary reasons: (1) the sample size is much larger than the summary statistics you have presented suggest, and therefore you should have much higher confidence in your results than linear regression of the summary statistics would indicate; and (2) the binary nature of your data should lead you to consider logistic regression. I don't have the complete data since you only provided summary statistics. To illustrate the approach I will simulate some data assuming your probabilities of liking apples are in fact the "true" probabilities. library(dplyr) library(ggplot2) library(tidyr) # Simulate some data ------------------------------------------------------ # Number of individuals n.individuals <- 1e4 # Years of study years <- 2013:2018 # Number of years n.years <- length(years) # Probability of liking apples pLikeApples = c(0.044, 0.041, 0.055, 0.063, 0.054, 0.060) # Matrix of data # Rows = individuals # Columns = years likesApples <- matrix(rep(NA_integer_, n.individuals*n.years), nrow = n.individuals) for (j in 1:n.years) { for (i in 1:n.individuals) { likesApples[i,j] <- rbinom(1, 1, pLikeApples[j]) } } # Name columns colnames(likesApples) <- years # Convert to data frame appleData <- as_tibble(likesApples) %>% mutate(ID = row_number()) %>% gather(year, likesApples, -ID) %>% mutate(year = as.integer(year)) If we perform simple linear regression on the summary statistics then we obtain results as shown below # Summary data ------------------------------------------------------------ summaryAppleData <- appleData %>% group_by(year) %>% summarize(pLikeApples = mean(likesApples)) # Plot data --------------------------------------------------------------- ggplot(summaryAppleData, aes(x = year, y = pLikeApples)) + geom_point() + geom_smooth(method = "lm") + labs(title = "Standard linear regression \nof summary data", y = "probability of liking apples") + ylim(0, 0.1) If we use the complete data and apply logistic regression then we obtain results as shown below # Logistic regression ----------------------------------------------------- # Fit logistic regression model myMod <- glm(data = appleData, formula = likesApples ~ year, family = "binomial") # Summarize model summary(myMod) # Inverse link function linkInv <- myMod$family$linkinv # Predict response on linear scale predLink <- predict.glm(myMod, newdata = summaryAppleData, type = "link", se.fit = TRUE) # Backtransform predictions pred <- summaryAppleData %>% mutate(pred = linkInv(predLink$fit), LL = linkInv(predLink$fit - 1.96*predLink$se.fit), UU = linkInv(predLink$fit + 1.96*predLink$se.fit)) # Plot logistic regression ggplot() + geom_point(data = summaryAppleData, aes(x = year, y = pLikeApples)) + geom_line(data = pred, aes(x = year, y = pred)) + geom_ribbon(data = pred, aes(x = year, y = pred, ymin = LL, ymax = UU), alpha = 0.3) + labs(title = "Logistic regression", y = "probability of liking apples") + ylim(0, 0.1) Finally, to address your first question: based on the summary of the logistic regression model we see that the year variable is significant, with a significance code of ***, indicating a confidence level of essentially 100%. You need to run the analysis on your data and not the simulated data as I have done. Also, as discussed by @orcmor, you should use caution when interpreting the confidence level. However, this indicates a significant increase in the probability of liking apples over time. Your second question is slightly more complicated. You are interested in how much the response Y has changed for a range of X values. A ballpark estimate might be to take the upper end of the 95% confidence interval for the mean probability in 2013 and the lower end of the confidence interval for 2018 to get an approximate lower bound on the total change. From the data I simulated, this is about a 1.4 percentage point increase.
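A minimal sketch of that ballpark calculation, assuming the pred data frame built in the code above has already been created:
# Rough lower bound on the 2013-2018 change, as described above.
upper_2013 <- pred$UU[pred$year == 2013]   # upper 95% limit in 2013
lower_2018 <- pred$LL[pred$year == 2018]   # lower 95% limit in 2018
lower_2018 - upper_2013                    # approximate lower bound on the increase (probability scale)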
How to calculate the confidence that a trend in a time-series is positive?
The data you describe is what might be considered summary statistics. Whether somebody likes apples can be considered a binary outcome (1 = likes apples, 0 = doesn't like apples). Therefore, if you su
How to calculate the confidence that a trend in a time-series is positive? The data you describe is what might be considered summary statistics. Whether somebody likes apples can be considered a binary outcome (1 = likes apples, 0 = doesn't like apples). Therefore, if you survey 10,000 people over 6 years then you have 60,000 responses measuring the binary outcome of whether each individual likes apples. The probability that somebody likes apples is actually the mean value of the binomial outcome. This is important for two primary reasons: The sample size is much larger than summary statistics you have presented, and therefore you should have much higher confidence in your results than linear regression of summary statistics would indicate The binary nature of your data should lead you to consider logistic regression. I don't have the complete data since you only provided summary statistics. To illustrate the approach I will simulate some data assuming your probabilities of liking apples are in fact the "true" probabilities. library(dplyr) library(ggplot2) library(tidyr) # Simulate some data ------------------------------------------------------ # Number of individuals n.individuals <- 1e4 # Years of study years <- 2013:2018 # Numbre of years n.years <- length(years) # Probability of liking apples pLikeApples = c(0.044, 0.041, 0.055, 0.063, 0.054, 0.060) # Matrix of data # Rows = individuals # Columns = years likesApples <- matrix(rep(NA_integer_, n.individuals*n.years), nrow = n.individuals) for (j in 1:n.years) { for (i in 1:n.individuals) { likesApples[i,j] <- rbinom(1, 1, pLikeApples[j]) } } # Name columns colnames(likesApples) <- years # Convert to data frame appleData <- as_tibble(likesApples) %>% mutate(ID = row_number()) %>% gather(year, likesApples, -ID) %>% mutate(year = as.integer(year)) If we perform simple linear regression on summary statistics then we obtain results as shown below # Summary data ------------------------------------------------------------ summaryAppleData <- appleData %>% group_by(year) %>% summarize(pLikeApples = mean(likesApples)) # Plot data --------------------------------------------------------------- ggplot(summaryAppleData, aes(x = year, y = pLikeApples)) + geom_point() + geom_smooth(method = "lm") + labs(title = "Standard linear regression \nof summary data", y = "probability of liking apples") + ylim(0, 0.1) If we use the complete data and apply logistic regression then we obtain results as shown below # Logistic regression ----------------------------------------------------- # Fit logistic regression model myMod <- glm(data = appleData, formula = likesApples ~ year, family = "binomial") # Summarize model summary(myMod) # Inverse link function linkInv <- myMod$family$linkinv # Predict response on linear scale predLink <- predict.glm(myMod, newdata = summaryAppleData, type = "link", se.fit = TRUE) # Backtransform predictions pred <- summaryAppleData %>% mutate(pred = linkInv(predLink$fit), LL = linkInv(predLink$fit - 1.96*predLink$se.fit), UU = linkInv(predLink$fit + 1.96*predLink$se.fit)) # Plot logistic regression ggplot() + geom_point(data = summaryAppleData, aes(x = year, y = pLikeApples)) + geom_line(data = pred, aes(x = year, y = pred)) + geom_ribbon(data = pred, aes(x = year, y = pred, ymin = LL, ymax = UU), alpha = 0.3) + labs(title = "Logistic regression", y = "probability of liking apples") + ylim(0, 0.1) Finally to address your first question, based on the summary of the logistic regression model we see that the year variable is 
significant with a significant code of *** indicating a confidence level of basically 100%. You need to run the analysis on your data and not simulated data as I have done. Also, as discussed by @orcmor you should use caution when interpreting the confidence level. However, this indicates a significant increase in the probability of liking apples over time. Your second question is slightly more complicated. You are interested in how much the response Y has changed for a range of X values. A ballpark estimate might be to take the upper end of the 95% confidence interval for the mean probability in 2013 and take the lower end of the confidence interval for 2018 to get an approximate lower bound on the total change. From the data I simulated, this is about a 1.4 percentage point increase.
How to calculate the confidence that a trend in a time-series is positive? The data you describe is what might be considered summary statistics. Whether somebody likes apples can be considered a binary outcome (1 = likes apples, 0 = doesn't like apples). Therefore, if you su
49,795
How to calculate the confidence that a trend in a time-series is positive?
I suggest you take a quick look at hypothesis testing concepts, if you have not already done so, to understand what is meant by confidence. In hypothesis testing, you make the distinction that your observed sample is one realization of your data-generating process and is therefore subject to random deviations from the exact function. Consequently, your estimates of the coefficients could be off due to random error as well. Confidence, in this context, usually refers to how unlikely it is that your sample (and your conclusions) arose purely from random error. You basically want to say: given this sample, there is an X% chance that I did not arrive at this result by random error. In order to be able to say this, you have to make an assumption about the distribution of the errors so that you can derive the distribution of your estimates. If your function is: $$ appleLiking = \beta_0 + \beta_1 t + \epsilon $$ then you can answer both of your questions if you know the mean and variance of your $\beta_1$ estimate. For your first question, you are interested in how far your estimated $\hat\beta_1$ is from $0$, given your estimated $\hat\beta_1$ variance. This is commonly referred to as the significance of $\hat\beta_1$. For your second one, you are interested in how far your estimated $\hat\beta_1$ is from $Y$, given your estimated $\hat\beta_1$ variance. Luckily, once you assume the distributions, the estimations are relatively straightforward statistical procedures. You calculate the confidence levels from the usual p-values using standard normal distributions. For the estimation procedure, since you are asking about a time series, you may go with ARIMA estimation to account for autocorrelation in the error term. If you are using R, the following will provide an estimate (auto.arima() is from the forecast package): library(forecast); appleLiking = as.ts(c(0.044, 0.041, 0.055, 0.063, 0.054, 0.060)); auto.arima(appleLiking, xreg=1:length(appleLiking)) Notice the standard errors displayed. Those values are the ones you need for the confidence calculations. See the answers here for more details on time series issues.
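A minimal sketch of turning a slope estimate and its standard error into an approximate one-sided confidence statement via a normal approximation; beta_hat and se_hat below are placeholders only, to be replaced with the xreg coefficient and its standard error from the auto.arima() output for your data.
# Normal-approximation sketch; the two numbers below are placeholders, not real estimates.
beta_hat <- 0.0035                        # placeholder slope estimate
se_hat   <- 0.0012                        # placeholder standard error
z        <- beta_hat / se_hat             # z-statistic for H0: slope = 0
p_pos    <- pnorm(z, lower.tail = FALSE)  # one-sided p-value for the alternative slope > 0
1 - p_pos                                 # informal "confidence" that the trend is positive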
How to calculate the confidence that a trend in a time-series is positive?
I suggest you take a quick look at hypothesis testing concepts if you already had not done it for understanding what is meant by confidence. In hypothesis testing, you make the distinction that your o
How to calculate the confidence that a trend in a time-series is positive? I suggest you take a quick look at hypothesis testing concepts if you already had not done it for understanding what is meant by confidence. In hypothesis testing, you make the distinction that your observed sample is a realization of your data generating process and is susceptible to random deviations from the exact function. Consequently, your estimation of the coefficients could be off due to random errors as well. Confidence, in this context, usually refers to the unlikeliness of your sample being deviated by random errors. You basically want to say that, given this sample, there is a X% chance that I did not observe this sample (and my conclusions) by random error. In order to be able to say this, you have to make an assumption on the distribution of errors so that you can calculate the distribution of your estimations. If your function is: $$ appleLiking = \beta_0 + \beta_1 t + \epsilon $$ then you can answer both of your question if you know the mean and variance of your $\beta_1$ estimation. For your first question, you are interested in how close your estimated $\hat\beta_1$ is to $0$, given your estimated $\hat\beta_1$ variance. This is commonly referred as the significance of $\hat\beta_1$. For your second one, you are interested in how close your estimated $\hat\beta_1$ is to $Y$, given your estimated $\hat\beta_1$ variance. Luckily, once you assume the distributions, the estimations are relatively straightforward statistical procedures. You calculate the confidence levels by common p-values using standard normal distributions. For the estimation procedure, since you are asking a time-series, you may go with ARIMA estimations to account for autocorrelations in error term. If you are using R, the following will provide an estimation: appleLiking = as.ts(c(0.044, 0.041, 0.055, 0.063, 0.054, 0.060)) auto.arima(appleLiking, xreg=1:length(appleLiking)) Notice the standard errors displayed. Those values are the ones you need for confidence calculations. See the answers here for more details on time series issues.
How to calculate the confidence that a trend in a time-series is positive? I suggest you take a quick look at hypothesis testing concepts if you already had not done it for understanding what is meant by confidence. In hypothesis testing, you make the distinction that your o
49,796
How to calculate the confidence that a trend in a time-series is positive?
A more data-driven analysis, i.e. one less presumptive of the form of the model, detects the presence of a level shift and NO TREND. This uses 3 parameters rather than the two used in the simple and presumptive trend-in-time model, and leads to the detection of a pulse at 2016. In summary, building and identifying a model with 6 observations is tough at best. The whole idea is to form a model that is more representative of the data. You asked whether you "can be X% confident that the percentage of people who like apples increased between 2013 and 2018"; the answer is yes, with 99.49% confidence.
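A rough sketch of what a 3-parameter level-shift-plus-pulse specification could look like for these six values, fitted with plain lm(); the shift point used here (2015) is an assumption, since its location is not stated above, and this is only an approximation of the intervention-detection procedure described.
# Hedged sketch; the shift location is assumed, not taken from the analysis above.
y    <- c(0.044, 0.041, 0.055, 0.063, 0.054, 0.060)
year <- 2013:2018
shift2015 <- as.numeric(year >= 2015)   # level-shift dummy (location assumed)
pulse2016 <- as.numeric(year == 2016)   # pulse at 2016, as reported above
trend_fit        <- lm(y ~ year)                   # 2-parameter trend model
intervention_fit <- lm(y ~ shift2015 + pulse2016)  # 3-parameter level-shift + pulse model
summary(intervention_fit)
AIC(trend_fit, intervention_fit)        # compare the two specifications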
How to calculate the confidence that a trend in a time-series is positive?
A more data-driven analysis i.e. less presumptive of the form of the model... detecting the presence of a level shift and NO TREND ... this uses 3 parameters rather than two which was used in the simp
How to calculate the confidence that a trend in a time-series is positive? A more data-driven analysis i.e. less presumptive of the form of the model... detecting the presence of a level shift and NO TREND ... this uses 3 parameters rather than two which was used in the simple and presumptive trend model in time. leading to the detection of a pulse at 2016 enter image description here In summary building and identifying a model with 6 observations is tough at best .. The whole idea is to form a model that is more representative of the data. You asked "can be X% confident that the percentage of people who like apples increased between 2013 and 2018." .. the answer is yes with 99.49 % confidence .
How to calculate the confidence that a trend in a time-series is positive? A more data-driven analysis i.e. less presumptive of the form of the model... detecting the presence of a level shift and NO TREND ... this uses 3 parameters rather than two which was used in the simp
49,797
Levene's test assumptions
Levene's test is just a $t$-test of the absolute values of the deviations of the data from their group means. (If you have more than 2 groups, you use the ANOVA; you can also use Brown-Forsythe version of the test, which takes the absolute values of the deviations from the group medians instead.) At any rate, it is a $t$-test, so the assumptions are based on that. Specifically, you are assuming that the absolute values of the deviations are independent, normally distributed and with equal variances between the groups. It's worth noting that these can't really be quite true. For instance, the normal distribution extends to negative infinity, but absolute values do not. That said, if those assumptions hold for the original data, and your sample size is sufficient, they may be close enough for Levene's test to be reliable.
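A minimal sketch with simulated data (the group sizes and variances are arbitrary assumptions) showing Levene's test as an ANOVA on the absolute deviations from the group means, and its equivalence to a pooled-variance t-test when there are only two groups. The car package's leveneTest() provides the same mean-centred version (center = mean) as well as the median-centred Brown-Forsythe variant.
# Simulated two-group example; base R only.
set.seed(123)
g <- factor(rep(c("A", "B"), each = 30))
x <- c(rnorm(30, sd = 1), rnorm(30, sd = 2))
abs_dev <- abs(x - ave(x, g))            # |x_ij - group mean|
anova(lm(abs_dev ~ g))                   # Levene's test (mean-centred version)
t.test(abs_dev ~ g, var.equal = TRUE)    # with two groups: equivalent pooled t-test (F = t^2)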
Levene's test assumptions
Levene's test is just a $t$-test of the absolute values of the deviations of the data from their group means. (If you have more than 2 groups, you use the ANOVA; you can also use Brown-Forsythe vers
Levene's test assumptions Levene's test is just a $t$-test of the absolute values of the deviations of the data from their group means. (If you have more than 2 groups, you use the ANOVA; you can also use Brown-Forsythe version of the test, which takes the absolute values of the deviations from the group medians instead.) At any rate, it is a $t$-test, so the assumptions are based on that. Specifically, you are assuming that the absolute values of the deviations are independent, normally distributed and with equal variances between the groups. It's worth noting that these can't really be quite true. For instance, the normal distribution extends to negative infinity, but absolute values do not. That said, if those assumptions hold for the original data, and your sample size is sufficient, they may be close enough for Levene's test to be reliable.
Levene's test assumptions Levene's test is just a $t$-test of the absolute values of the deviations of the data from their group means. (If you have more than 2 groups, you use the ANOVA; you can also use Brown-Forsythe vers
49,798
Proof that the Hotelling T$^2$ statistic is invariant under the choice of contrast matrices
To test the null hypothesis $H_{0}\!:\mu_{1}=\mu_{2}=\ldots=\mu_{q}$ via Hotelling's one-sample $T^2$ statistic, $\mathbf{C} \in \mathbb R^{(q-1) \times q}$ can't be an arbitrary matrix with $\mathbf{C}\mathbf1=\mathbf{0}$. It certainly must have $q-1$ linearly independent rows and can be a matrix $\mathbf{C}=\mathbf{A}\tilde{\mathbf{C}}$ obtained by transforming $$ \tilde{\mathbf{C}}= \begin{pmatrix} 1 & -1 \\ & 1 & -1 \\ & & \; \ddots & \ddots \\ & & & 1 & -1 \end{pmatrix} \in \mathbb R^{(q-1) \times q} $$ by some invertible matrix $\mathbf{A} \in \mathbb R^{(q-1) \times (q-1)}$ since $$ \mathbf{C}\boldsymbol{\mu} = \mathbf{0} \iff \tilde{\mathbf{C}}\boldsymbol{\mu} = \mathbf{0} $$ and $$ \begin{align} \left(\mathbf{C}\bar{\mathbf{x}}\right)^\top\left(\mathbf{C}\mathbf{S}\mathbf{C}^\top\right)^{-1}\left(\mathbf{C}\bar{\mathbf{x}}\right) =\,&\left(\left(\mathbf{A}\tilde{\mathbf{C}}\right)\bar{\mathbf{x}}\right)^\top\left(\left(\mathbf{A}\tilde{\mathbf{C}}\right)\mathbf{S}\left(\mathbf{A}\tilde{\mathbf{C}}\right)^\top\right)^{-1}\left(\left(\mathbf{A}\tilde{\mathbf{C}}\right)\bar{\mathbf{x}}\right)\\ =\,&\bar{\mathbf{x}}^\top\tilde{\mathbf{C}}^\top\mathbf{A}^\top\left(\mathbf{A}\left(\tilde{\mathbf{C}}\mathbf{S}\tilde{\mathbf{C}}^\top\right)\mathbf{A}^\top\right)^{-1}\mathbf{A}\tilde{\mathbf{C}}\bar{\mathbf{x}}\\ =\,&\bar{\mathbf{x}}^\top\tilde{\mathbf{C}}^\top\mathbf{A}^\top\left(\mathbf{A}^\top\right)^{-1}\left(\tilde{\mathbf{C}}\mathbf{S}\tilde{\mathbf{C}}^\top\right)^{-1}\mathbf{A}^{-1}\mathbf{A}\tilde{\mathbf{C}}\bar{\mathbf{x}}\\ =\,&\left(\tilde{\mathbf{C}}\bar{\mathbf{x}}\right)^\top\left(\tilde{\mathbf{C}}\mathbf{S}\tilde{\mathbf{C}}^\top\right)^{-1}\left(\tilde{\mathbf{C}}\bar{\mathbf{x}}\right). \end{align} $$
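A quick numerical check of this invariance, using arbitrary randomly generated inputs in place of real data:
# Random inputs only; this verifies the algebra, not any particular data set.
set.seed(42)
q <- 4
Ctilde <- cbind(diag(q - 1), 0) - cbind(0, diag(q - 1))  # successive-difference contrasts
A <- matrix(rnorm((q - 1)^2), q - 1)                     # random (almost surely invertible) transform
Cmat <- A %*% Ctilde
xbar <- rnorm(q)                                         # arbitrary "sample mean"
M <- matrix(rnorm(q * q), q)
S <- crossprod(M) + diag(q)                              # arbitrary positive-definite "covariance"
qform <- function(Cm) drop(t(Cm %*% xbar) %*% solve(Cm %*% S %*% t(Cm), Cm %*% xbar))
all.equal(qform(Cmat), qform(Ctilde))                    # TRUE: the quadratic form is unchanged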
Proof that the Hotelling T$^2$ statistic is invariant under the choice of contrast matrices
To test the null hypothesis $H_{0}\!:\mu_{1}=\mu_{2}=\ldots=\mu_{q}$ via Hotelling's one-sample $T^2$ statistic, $\mathbf{C} \in \mathbb R^{(q-1) \times q}$ can't be an arbitrary matrix with $\mathbf{
Proof that the Hotelling T$^2$ statistic is invariant under the choice of contrast matrices To test the null hypothesis $H_{0}\!:\mu_{1}=\mu_{2}=\ldots=\mu_{q}$ via Hotelling's one-sample $T^2$ statistic, $\mathbf{C} \in \mathbb R^{(q-1) \times q}$ can't be an arbitrary matrix with $\mathbf{C}\mathbf1=\mathbf{0}$. It certainly must have $q-1$ linearly independent rows and can be a matrix $\mathbf{C}=\mathbf{A}\tilde{\mathbf{C}}$ obtained by transforming $$ \tilde{\mathbf{C}}= \begin{pmatrix} 1 & -1 \\ & 1 & -1 \\ & & \; \ddots & \ddots \\ & & & 1 & -1 \end{pmatrix} \in \mathbb R^{(q-1) \times q} $$ by some invertible matrix $\mathbf{A} \in \mathbb R^{(q-1) \times (q-1)}$ since $$ \mathbf{C}\boldsymbol{\mu} = \mathbf{0} \iff \tilde{\mathbf{C}}\boldsymbol{\mu} = \mathbf{0} $$ and $$ \begin{align} \left(\mathbf{C}\bar{\mathbf{x}}\right)^\top\left(\mathbf{C}\mathbf{S}\mathbf{C}^\top\right)^{-1}\left(\mathbf{C}\bar{\mathbf{x}}\right) =\,&\left(\left(\mathbf{A}\tilde{\mathbf{C}}\right)\bar{\mathbf{x}}\right)^\top\left(\left(\mathbf{A}\tilde{\mathbf{C}}\right)\mathbf{S}\left(\mathbf{A}\tilde{\mathbf{C}}\right)^\top\right)^{-1}\left(\left(\mathbf{A}\tilde{\mathbf{C}}\right)\bar{\mathbf{x}}\right)\\ =\,&\bar{\mathbf{x}}^\top\tilde{\mathbf{C}}^\top\mathbf{A}^\top\left(\mathbf{A}\left(\tilde{\mathbf{C}}\mathbf{S}\tilde{\mathbf{C}}^\top\right)\mathbf{A}^\top\right)^{-1}\mathbf{A}\tilde{\mathbf{C}}\bar{\mathbf{x}}\\ =\,&\bar{\mathbf{x}}^\top\tilde{\mathbf{C}}^\top\mathbf{A}^\top\left(\mathbf{A}^\top\right)^{-1}\left(\tilde{\mathbf{C}}\mathbf{S}\tilde{\mathbf{C}}^\top\right)^{-1}\mathbf{A}^{-1}\mathbf{A}\tilde{\mathbf{C}}\bar{\mathbf{x}}\\ =\,&\left(\tilde{\mathbf{C}}\bar{\mathbf{x}}\right)^\top\left(\tilde{\mathbf{C}}\mathbf{S}\tilde{\mathbf{C}}^\top\right)^{-1}\left(\tilde{\mathbf{C}}\bar{\mathbf{x}}\right). \end{align} $$
Proof that the Hotelling T$^2$ statistic is invariant under the choice of contrast matrices To test the null hypothesis $H_{0}\!:\mu_{1}=\mu_{2}=\ldots=\mu_{q}$ via Hotelling's one-sample $T^2$ statistic, $\mathbf{C} \in \mathbb R^{(q-1) \times q}$ can't be an arbitrary matrix with $\mathbf{
49,799
How to choose a correlation measure when Pearson is close to 1 and Spearman close to -1?
Your last graph ($y$ vs $x$) provides the insight: Pearson's correlation coefficient is an inappropriate measure of association here, because your data are not linearly associated, which violates a fundamental assumption of Pearson's measure. Spearman's correlation coefficient is also inappropriate, because your data are not monotonically associated: in your last graph, values of $y$ tend to both increase and decrease as a function of $x$, and if you swap the $x$ and $y$ axes on your graph, you will see that $x$ first decreases, then increases as a function of $y$, precisely where you said the highest concentration of points is massed. In a monotonic association one variable either never increases or never decreases as a function of the other, but you have both increases and decreases happening, which violates a fundamental assumption of Spearman's rank correlation coefficient. Visually, your variables are obviously very strongly associated (if "associated" means "knowing something about one variable tells you something about the other variable"). I think the solution to your problem is not to use a correlation coefficient to characterize this relationship, but rather: a nonparametric regression (e.g., a generalized additive model), a parametric curve-fitting regression algorithm (e.g., fractional polynomial regression), nonlinear least squares regression to fit a parametric approximation to the previous two approaches, or even a theory-driven model.
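A minimal sketch of the first suggestion (a GAM via mgcv) on simulated data; the U-shaped relationship below is an arbitrary assumption used only to show how a smooth fit can capture a strong but non-monotonic association that both correlation coefficients miss.
# Simulated non-monotone example; the shape is an assumption, not the asker's data.
library(mgcv)
set.seed(1)
x <- runif(300, 0, 10)
y <- (x - 5)^2 + rnorm(300, sd = 1)    # strong U-shaped association
cor(x, y, method = "pearson")          # near zero despite the strong association
cor(x, y, method = "spearman")         # also near zero
fit <- gam(y ~ s(x))                   # smooth term captures the U-shape
summary(fit)$r.sq                      # high adjusted R^2: strong association
plot(fit)                              # inspect the fitted smooth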
How to choose a correlation measure when Pearson is close to 1 and Spearman close to -1?
Your last graph—$y$ vs $x$—provides the insight: Pearson's correlation coefficient is an inappropriate measure of association here, because your data are not linearly associated, which violates a fun
How to choose a correlation measure when Pearson is close to 1 and Spearman close to -1? Your last graph—$y$ vs $x$—provides the insight: Pearson's correlation coefficient is an inappropriate measure of association here, because your data are not linearly associated, which violates a fundamental assumption of Pearson's measure. Spearman's correlation coefficient is also inappropriate, because your data are not monotonically associated: in your last graph values of $y$ tend to both increase and decrease as a function of $x$, and if you swap $x$ and $y$ axes on you graph (as below), you will see that $x$ first decreases, then increases as a function of $y$ precisely where you said the highest concentration of points are massed. In monotonic function one variable either only fails to increase, or fails to decrease as a function of the other, but you have both—increase and decrease—happening, which violates a fundamental assumption of Spearman's rank correlation coefficient. Visually, your variables are obviously very strongly associated (if "associated" means "knowing something about one variable tells you something about the other variable"). I think the solution to your problem is not to use a correlation coefficient to characterize this relationship, but: a nonparametric regression (e.g., a generalized additive model), a parametric curve-fitting regression algorithm (e.g., fractional polynomial regression), or using nonlinear least squares regression to fit a parametric approximation to the previous two methods, or even a theory-driven model.
How to choose a correlation measure when Pearson is close to 1 and Spearman close to -1? Your last graph—$y$ vs $x$—provides the insight: Pearson's correlation coefficient is an inappropriate measure of association here, because your data are not linearly associated, which violates a fun
49,800
Exact details of how word2vec (Skip-gram and CBOW) generate input word pairs
I did more research and found answers: 1. Why is downsampling used in the skip-gram case? Quote from the paper: "We found that increasing the range improves quality of the resulting word vectors, but it also increases the computational complexity. Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples." So by choosing a random window size within the specified one, it effectively gives less weight to more distant words. 2. The way CBOW treats input and output. Suppose the training example is [(I, like, very, much), soup]; the input one-hot vectors are summed instead of averaged. See the diagram in the CBOW paper. Also, it does not matter whether you sum the one-hot vectors first and then multiply by the embedding matrix, or multiply first and then sum; the two are mathematically equivalent. 3. Can we just do the exact opposite of the Skip-gram way of generating word pairs? No, because that is not how CBOW works, as described above. However, it might be interesting to try this and see how the results would differ from Skip-gram; in a way, such a scheme appears more similar to the Skip-gram model than to CBOW. For CBOW, see more details in the answer to another thread, Tensorflow: Word2vec CBOW model.
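The equivalence claimed in point 2 is easy to verify numerically; the toy vocabulary size, embedding dimension, and context word indices below are arbitrary assumptions for illustration.
# Check that summing one-hot vectors before multiplying by the embedding matrix
# equals summing the corresponding embedding rows.
set.seed(1)
V <- 6; d <- 3                                      # toy vocabulary size and embedding dimension
E <- matrix(rnorm(V * d), nrow = V)                 # toy embedding matrix
context_ids <- c(1, 3, 4, 5)                        # e.g. the context (I, like, very, much)
one_hots <- diag(V)[context_ids, , drop = FALSE]    # one-hot row vectors
h_sum_first  <- colSums(one_hots) %*% E                     # sum one-hots, then multiply
h_lookup_sum <- colSums(E[context_ids, , drop = FALSE])     # look up rows, then sum
all.equal(as.numeric(h_sum_first), h_lookup_sum)            # TRUE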
Exact details of how word2vec (Skip-gram and CBOW) generate input word pairs
I did more research and found answers: 1. Why is downsampling used in the skip-gram case? Quote from its paper, We found that increasing the range improves quality of the resulting word vectors, but
Exact details of how word2vec (Skip-gram and CBOW) generate input word pairs I did more research and found answers: 1. Why is downsampling used in the skip-gram case? Quote from its paper, We found that increasing the range improves quality of the resulting word vectors, but it also increases the computational complexity. Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples. So by choosing a random window size within the specified one, it effectively gave less weight to more distant words. 2. The way CBOW treats input and output. Suppose the training example is [(I, like, very, much), soup], the input one hot vectors are summed instead of averaged. See the diagram from the CBOW paper. Also, it doesn't matter to sum the one hot vectors first and then multiply with embedding matrix, or multiple first and then sum, they are equivalent mathematically. 3. Can we just do the exact opposite to the Skip-gram way of generating word pairs No, because this is not the way how CBOW worked as described above. However, it might be interesting to try this way and see how the results would differ from Skip-gram. In a way, it appears to be more similar to the Skip-gram model than CBOW. For CBOW, see more details in the answer to another thread, Tensorflow: Word2vec CBOW model.
Exact details of how word2vec (Skip-gram and CBOW) generate input word pairs I did more research and found answers: 1. Why is downsampling used in the skip-gram case? Quote from its paper, We found that increasing the range improves quality of the resulting word vectors, but