53,801
Why do the 95% confidence limits in ARIMA models widen at the forecasts?
In short, because in time series, the observations depend on previous observations/errors. Consider just a simple AR(1), for example. The next observation depends on the previous one. When you're predicting time $t+1$ from time $t$, you know $y_t$, but you have an uncertain prediction because of variation in the process and the uncertainty in the parameters. Each time you go out another time step, you have not only the variation in the process and the uncertainty in the parameters, you compound the uncertainty because you don't know where you were on the previous time step - on average you were at the previous prediction (e.g. at $t+2$ you're on average going from $\hat y_{t+1}$), but you may have been higher or lower (because the prediction at $t+1$ had that parameter uncertainty and process variation). So it compounds as you go - each step your previous value was uncertain, and so the next value is less certain still.
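To see the widening numerically, here is a minimal R sketch; the AR(1) series is simulated, but any fitted ARIMA model would show the same behaviour:

set.seed(1)
y   <- arima.sim(model = list(ar = 0.8), n = 200)   # simulate an AR(1) process
fit <- arima(y, order = c(1, 0, 0))                 # fit the AR(1)
fc  <- predict(fit, n.ahead = 10)                   # forecasts and standard errors

fc$se                     # the forecast standard errors grow with the horizon
fc$pred + 1.96 * fc$se    # approximate upper 95% limits widen accordingly
fc$pred - 1.96 * fc$se    # approximate lower 95% limits widen accordingly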
53,802
Building the connection between cosine similarity and correlation in R
The answer is really right there in your linked articles. From the first, here are the formulae for cosine and correlation (lightly edited for brevity and clarity): \begin{align} {\rm CosSim}(x,y) &= \frac{\sum_i x_i y_i}{ \sqrt{ \sum_i x_i^2} \sqrt{ \sum_i y_i^2 } } \\ {\rm Corr}(x,y) &= \frac{ \sum_i (x_i-\bar{x}) (y_i-\bar{y}) }{ \sqrt{\sum_i (x_i-\bar{x})^2} \sqrt{ \sum_i (y_i-\bar{y})^2 } } \\ {\rm Corr}(x,y) &= {\rm CosSim}(x-\bar{x},\ y-\bar{y}) \end{align} So the simplest adaptation is just to subtract the means from your input vectors:

library(MASS)   # we need this package to generate correlated data below
set.seed(2641)  # this makes the example exactly reproducible
# now I generate correlated data:
X <- mvrnorm(1000, mu=c(100, 150), Sigma=rbind(c(30, 17),
                                               c(17, 50) ) )
# I adapted the function somewhat, as the original was keyed to its context
cos.sim <- function(X, corr=FALSE){
  if(corr){
    A = X[,1] - mean(X[,1])
    B = X[,2] - mean(X[,2])
  } else {
    A = X[,1]
    B = X[,2]
  }
  return( t(A)%*%B / sqrt(sum(A^2)*sum(B^2)) )
}
cos.sim(X)
#           [,1]
# [1,] 0.9985756
cos.sim(X, corr=TRUE)
#           [,1]
# [1,] 0.4604822
cor(X)
#           [,1]      [,2]
# [1,] 1.0000000 0.4604822
# [2,] 0.4604822 1.0000000

Here is a matrix version:

set.seed(6616)
X3 <- mvrnorm(1000, mu=c(100, 150, 175), Sigma=rbind(c(30, 17, 12),
                                                     c(17, 50, 29),
                                                     c(12, 29, 46) ))
cos.sim.mat <- function(X, corr=FALSE){
  if(corr){
    X = apply(X, 2, function(x){ x-mean(x) })
  }
  denom = solve(diag(sqrt(diag(t(X)%*%X))))
  return( denom%*%(t(X)%*%X)%*%denom )
}
cos.sim.mat(X3)
#           [,1]      [,2]      [,3]
# [1,] 1.0000000 0.9984552 0.9983700
# [2,] 0.9984552 1.0000000 0.9992154
# [3,] 0.9983700 0.9992154 1.0000000
cos.sim.mat(X3, corr=TRUE)
#           [,1]      [,2]      [,3]
# [1,] 1.0000000 0.3990872 0.2584569
# [2,] 0.3990872 1.0000000 0.5900067
# [3,] 0.2584569 0.5900067 1.0000000
cor(X3)
#           [,1]      [,2]      [,3]
# [1,] 1.0000000 0.3990872 0.2584569
# [2,] 0.3990872 1.0000000 0.5900067
# [3,] 0.2584569 0.5900067 1.0000000
53,803
Building the connection between cosine similarity and correlation in R
The only line that has to be added is

X <- apply(X, 2, function(x){ x-mean(x) })

which just subtracts the mean column-wise:

X <- apply(X, 2, function(x){ x-mean(x) })
cos.sim <- function(ix) {
  A = X[,ix[1]]
  B = X[,ix[2]]
  return(t(A)%*%B/sqrt(sum(A^2)*sum(B^2)))
}
n   <- ncol(X)
cmb <- expand.grid(i=1:n, j=1:n)
C   <- matrix(apply(cmb,1,cos.sim),n,n)
53,804
Neural network packages which allow shared weights and parallel training
It sounds like Pylearn2 may do what you want. It has two implementations of convolutional networks, which require some amount of weight sharing. Furthermore, one of these implementations is optimized for use on a GPU via CUDA using Theano. See the documentation for more information. I believe the associated code is pylearn2.models.maxout.MaxoutConvC01B.
53,805
Neural network packages which allow shared weights and parallel training
I can't add a comment to @Emre's answer because I don't have enough points. You can train shared-weight networks in torch, with or without CUDA. Weight sharing is supported for any tensor type. Training is done in parallel when you wrap the two shared modules in an nn.Parallel container. We use this in torch quite a lot to build siamese networks.
53,806
Neural network packages which allow shared weights and parallel training
I don't have practical experience with torch7, but I think the idea is to do the parallelization on the GPU, cf. cunn and cutorch. There's also https://github.com/clementfarabet/lua---parallel
53,807
Permutation tests and multiple testing
You'll have to either (a) increase the number of permutations so the p-values won't have a lower bound as high as 0.01 (even with a single measure I'd use at least 1000 permutations!) or (b) incorporate multiple testing correction within the permutation testing. This can be achieved by the Max-T method: in each permutation, record the maximal statistic across all the 300 measures. You'll end up with a single empirical sampling distribution that describes the maximal statistic across all measures, given the null hypothesis. Then, for each measure in the unpermuted set, determine its p-value according to this distribution. The resulting 300 p-values will be already corrected for family-wise error. Reference for the max-t/min-p method: Westfall, P. H., & Young, S. S. (1993). Resampling-based multiple testing: Examples and methods for p-value adjustment (Vol. 279). John Wiley & Sons.
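Here is a hedged sketch of the max-T procedure in R; the data matrix X (subjects by 300 measures) and the group factor grp are simulated placeholders for your own data, and the two-sample t statistic is just one possible choice of test statistic:

set.seed(1)
n <- 40; m <- 300
X   <- matrix(rnorm(n * m), n, m)   # placeholder data: n subjects x m measures
grp <- gl(2, n / 2)                 # placeholder two-group labels

# observed |t| statistic for each measure
obs.t <- apply(X, 2, function(x) abs(t.test(x ~ grp)$statistic))

B <- 1000                           # at least 1000 permutations
max.t <- replicate(B, {
  g <- sample(grp)                  # permute the group labels
  max(apply(X, 2, function(x) abs(t.test(x ~ g)$statistic)))  # maximal statistic
})

# FWER-adjusted p-value per measure: proportion of permutation maxima
# at least as large as the observed statistic
p.adj <- sapply(obs.t, function(t0) mean(max.t >= t0))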
53,808
Dummies instead of the Chow test
Let's first create some fake data with a break-point at 3.

> x=seq(1,5,length=100)
> y=numeric(100)
> y[1:50]=2*x[1:50]
> y[51:100]=rep(2*x[51],50)
> z=rnorm(100,0,.15)
> y=y+z
> plot(x,y)

Now I will perform a Chow test using the package strucchange in R to test whether 3 is a break-point or not.

> require(strucchange)
> sctest(y ~ x, type = "Chow", point = 3)

        Chow test
data:  y ~ x
F = 3.4086, p-value = 0.03714

So based on this test, the point x=3 is a break-point. Now I will create a dummy variable dum.x and define it as 0 when $x\geq 3$ and 1 otherwise.

> dum.x=rep(1,100)
> dum.x[x>=3]=0

Next I fit a linear regression using the dummy variable I created, with an interaction term, and take the summary. So the model I am fitting here is $Y=\beta_0+\beta_1x+\beta_2\,dum.x+\beta_3\,x \times dum.x$.

> M=lm(y~x*dum.x)
> summary(M)

Call:
lm(formula = y ~ x * dum.x)

Residuals:
     Min       1Q   Median       3Q      Max
-0.35089 -0.09929 -0.01161  0.08907  0.40424

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  6.32411    0.14728   42.94   <2e-16 ***
x           -0.07234    0.03634   -1.99   0.0494 *
dum.x       -6.32203    0.16544  -38.21   <2e-16 ***
x:dum.x      2.06979    0.05140   40.27   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.1498 on 96 degrees of freedom
Multiple R-squared: 0.9877, Adjusted R-squared: 0.9874
F-statistic: 2579 on 3 and 96 DF, p-value: < 2.2e-16

Note that when $x\geq 3$, then dum.x=0 so $$Y=\beta_0+\beta_1x,$$ and when $x<3$ then dum.x=1 so $$Y=\beta_0+\beta_1x+\beta_2+\beta_3x=(\beta_0+\beta_2)+(\beta_1+\beta_3)x.$$ This means that I am actually changing both the intercept and slope by dividing my dataset and fitting the above linear regression. According to the summary output, the p-values for dum.x and x:dum.x are less than 0.05. So we reject $H_0:\beta_3=0$ vs. $H_1:\beta_3\ne 0$ at the 5% significance level. This means that the slope is changing at $x=3$, which confirms the Chow test we had before. Finally, let's regenerate $y$ so that there is no break-point at 3.

> y=2*x+rnorm(100)
> plot(x,y)

Using the Chow test again, we have:

> sctest(y ~ x, type = "Chow", point = 3)

        Chow test
data:  y ~ x
F = 2.1406, p-value = 0.1232

Therefore, x=3 is not a break-point, as expected. Let's fit a linear model using our dummy variable:

> M2=lm(y~x*dum.x)
> summary(M2)

Call:
lm(formula = y ~ x * dum.x)

Residuals:
     Min       1Q   Median       3Q      Max
-2.50938 -0.64484 -0.03025  0.67947  2.21949

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.1014     0.9875   1.115    0.267
x             1.7388     0.2437   7.135 1.83e-10 ***
dum.x        -1.5082     1.1093  -1.360    0.177
x:dum.x       0.3508     0.3446   1.018    0.311
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.005 on 96 degrees of freedom
Multiple R-squared: 0.8595, Adjusted R-squared: 0.8551
F-statistic: 195.8 on 3 and 96 DF, p-value: < 2.2e-16

As you can see from the summary output, neither the slope nor the intercept is changing at 3, which again confirms the Chow test.
53,809
What is the minimum viable cell size for 2x2 ANOVA?
There isn't really any absolute minimum except in a trivial sense (if you won't try to test for the interaction, the minimum $n_{ij}$ will be $1$; if you do want to test for interactions, the minimum cell size might be $2$). Instead, there are two issues here: The first is the question of the robustness of the ANOVA to the violation of assumptions. Like all linear models (regression, $t$-tests, etc.), the ANOVA assumes that the data within each cell (i.e., the residuals) are independent, have equal (homogeneous) variance, and are normally distributed. In truth, some of these assumptions are less necessary than others. For instance, with enough data you don't really need the within-cell distributions to be normal. However, what constitutes 'enough data' depends on how far from normal those distributions are. Thus, the further you are from normality, the more data you need. But there is another twist here, namely that with fewer data it is harder (or even impossible) to assess whether the assumptions of the ANOVA are met. So with fewer data per cell, you really are going by blind faith. If the assumptions are not met, then you can have increased type I error rates. The second issue is the question of power. The probability of getting significance is a function of the size of the effect and the amount of data you have. If the effects are large enough, you will have good power even if you have only 1 datum per cell. I suspect effects that large are uncommon, though. Thus, you need to determine how large an effect you want to be able to detect with what power (etc.), and calculate your $N$ accordingly.
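To make the power point concrete, here is a hedged simulation sketch in R estimating the power to detect an interaction in a balanced 2x2 ANOVA; the cell means, SD, and n per cell are all placeholder values that you would replace with the smallest effect you care about:

power.2x2 <- function(n, means = c(0, 0, 0, 0.5), sd = 1, nsim = 2000) {
  # cell means are ordered (A1B1, A2B1, A1B2, A2B2); the default puts the
  # whole effect in one cell, i.e., an interaction
  A <- gl(2, 1, 4); B <- gl(2, 2, 4)
  mean(replicate(nsim, {
    y <- rnorm(4 * n, mean = rep(means, each = n), sd = sd)
    d <- data.frame(y = y, A = factor(rep(A, each = n)), B = factor(rep(B, each = n)))
    anova(lm(y ~ A * B, data = d))["A:B", "Pr(>F)"] < 0.05   # interaction significant?
  }))
}
power.2x2(n = 20)   # estimated power with 20 observations per cell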
53,810
E[g(Y)] proof question
1. The fact that $$ \sum_{i=1}^m g_i \left\{ \sum_{\substack{\text{all $y_j$ such that} \\ g(y_j)=g_i}} p(y_j) \right\} = \sum_{i=1}^m \sum_{\substack{\text{all $y_j$ such that} \\ g(y_j)=g_i}} g_i p(y_j) $$ is just the distributive property of the sum. Here is a more transparent example: \begin{align*} & \sum_i \left\{ a_i \sum_j b_j \right\} \\ & = \sum_i \left\{ \vphantom{\sum_i} a_i \; (b_1 + b_2 + \dotsb)\right\} \\ & = \sum_i \left\{ \vphantom{\sum_i} (a_i b_1 + a_i b_2 + \dotsb) \right\} \\ & = \sum_i \left\{ \sum_j a_ib_j\right\} \end{align*} 2. $$ \sum_{i=1}^m \sum_{\substack{\text{all $y_j$ such that} \\ g(y_j)=g_i}} g_i p(y_j) = \sum_{j=1}^n g(y_j) p(y_j) $$ As the 2nd sum is over all $y_j$ such that $g(y_j) = g_i$, you can replace $g_i$ by $g(y_j)$ in its summand: $$ \sum_{i=1}^m \sum_{\substack{\text{all $y_j$ such that} \\ g(y_j)=g_i}} g_i p(y_j) = \sum_{i=1}^m \sum_{\substack{\text{all $y_j$ such that} \\ g(y_j)=g_i}} g(y_j) p(y_j) $$ Note that $i$ only appears in the index of the 2nd sum. For $i=1$, the 2nd sum is over all $j$'s such that $g(y_j)=g_1$. For $i=2$, the 2nd sum is over all $j$'s (different from before) such that $g(y_j)=g_2$. Etc. Thus, you can replace the double summation by a single summation running over all $j$'s.
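To make the bookkeeping concrete, here is a quick numeric check in R with $g(y)=y^2$; the values $y_j$ and probabilities $p(y_j)$ are arbitrary:

y <- c(-2, -1, 0, 1, 2)
p <- c(0.1, 0.2, 0.4, 0.2, 0.1)
g <- function(y) y^2

# single sum over all y_j of g(y_j) p(y_j)
sum(g(y) * p)

# double sum: group the y_j by the distinct values g_i of g(y_j)
gi <- unique(g(y))
sum(sapply(gi, function(v) v * sum(p[g(y) == v])))
# both give 1.2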
53,811
Conflict in confidence intervals for mean difference and confidence interval for Cohen's d effect size
There are effect size measures analogous to Cohen's d for paired data, sometimes called the "standardized mean change" or "standardized mean gain". This is computed with $$d = \frac{\bar{x}_1 - \bar{x}_2}{SD_D} = \frac{\bar{x}_D}{SD_D},$$ where $\bar{x}_1$ is the mean at time 1 (or under condition 1), $\bar{x}_2$ is the mean at time 2 (or under condition 2), $\bar{x}_D$ is the mean of the change/difference scores, and $SD_D$ is the standard deviation of the change/difference scores. This is the standardized mean change using "change score standardization". There is also the standardized mean change using "raw score standardization", but the former more directly relates to your use of the dependent samples t-test. You can use the metafor package to compute this (and the corresponding CI):

summary(escalc(measure="SMCC", m1i=mean(a), sd1i=sd(a), m2i=mean(b), sd2i=sd(b), ni=length(a), ri=cor(a,b)))

yields:

      yi     vi    sei     zi  ci.lb  ci.ub
1 0.4961 0.0401 0.2003 2.4769 0.1035 0.8886

So, now the CI doesn't include 0 anymore, which is consistent with the results from the t-test. (Note: the value under yi is the d-value above, but after applying a slight bias correction.) Some references if you want to read more about this:

Morris, S. B., & DeShon, R. P. (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychological Methods, 7, 105-125.

Viechtbauer, W. (2007). Approximate confidence intervals for standardized effect sizes in the two-independent and two-dependent samples design. Journal of Educational and Behavioral Statistics, 32, 39-60.

Update: Getting the exact CI for d. In rare cases, it can happen that the results of the t-test (and the CI for the mean difference) yield a different conclusion than the CI for d obtained above (i.e., the CI for the mean difference includes the value 0 while the CI for d does not, or vice versa). This is due to the fact that the CI for d is based on an asymptotic approximation using the normal distribution. One can compute an exact CI for the standardized mean change, but this requires iterative methods (see Viechtbauer, 2007, and the references given therein). The advantage of the exact CI is that it will always agree 100% with the results from the t-test and the CI for the mean difference in its conclusion. Instead of letting the computer do the iterative work for us (which can be done in a few lines of code), one can also just do this manually by trial and error. For the data given in http://pastebin.com/12J7UghC, the bounds of the exact CI for d can be obtained with:

tval <- t.test(a, b, paired=TRUE)$statistic
pt(tval, df=length(a)-1, ncp=-0.00265265 * sqrt(length(a)), lower.tail=TRUE)
pt(tval, df=length(a)-1, ncp=-0.77193310 * sqrt(length(a)), lower.tail=FALSE)

Essentially, we just need to find those two values of the non-centrality parameter of the t-distribution such that the observed t-value cuts off .025 in the lower and upper tails of the distribution. With a bit of trial and error (and starting with the CI bounds obtained earlier), we find the exact 95% CI for d to be $(-0.772, -0.003)$. And now things are consistent again: the t-test rejects (just barely, with $p=.048$), the CI for the mean difference excludes 0 (just barely), and the CI for d excludes 0 (just barely).
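For completeness, here is a hedged sketch of how that trial-and-error search could be automated with uniroot(); it assumes the same vectors a and b as above and simply searches for the two non-centrality parameters at which the observed t-value cuts off .025 in each tail:

tval <- t.test(a, b, paired = TRUE)$statistic
n    <- length(a)

# lower bound of d: ncp at which the observed t cuts off .025 in the upper tail
lo <- uniroot(function(ncp) pt(tval, df = n - 1, ncp = ncp, lower.tail = FALSE) - 0.025,
              interval = c(-3, 3) * sqrt(n))$root / sqrt(n)
# upper bound of d: ncp at which the observed t cuts off .025 in the lower tail
hi <- uniroot(function(ncp) pt(tval, df = n - 1, ncp = ncp, lower.tail = TRUE) - 0.025,
              interval = c(-3, 3) * sqrt(n))$root / sqrt(n)
round(c(lo, hi), 3)   # should reproduce the bounds found by trial and error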
53,812
Conflict in confidence intervals for mean difference and confidence interval for Cohen's d effect size
mes does not account for the pairedness of the data, so it is a different test; usually, paired tests are more powerful, so it is not surprising that the paired test was significant and the unpaired one was not.
53,813
What is the expectation of one random variable divided over another (both independent)?
Basically, if $X$ and $Y$ are independent, then also $f(X)$ and $g(Y)$ are independent if $f$ and $g$ are measurable functions: $$\eqalign{ P(f(X) \in A,\ g(Y) \in B) &= P\left(X \in f^{-1}(A),\ Y \in g^{-1}(B)\right) \\ & = P\left(X \in f^{-1}(A)\right) \ P\left(Y \in g^{-1}(B)\right) \\ & = P\left(f(X) \in A\right) \ P\left(g(Y) \in B\right). }$$ In particular all continuous functions (like the $f(x)=1/x$ and $f(x)=x^2$ in your examples) are Borel-measurable, and hence also $X$ and $1/Y$ as well as $X^2$ and $Y^2$ are independent.
53,814
What is the expectation of one random variable divided over another (both independent)?
Let $Z=\frac{1}{Y}$. Then we have: \begin{equation} E(XZ) = \int \int x\, z\, p(X=x, Z=z) \,\mathrm{d}x \,\mathrm{d}z \end{equation} but $X$ and $Z$ are independent, so $p(X,Z) = p(X)p(Z)$ and we have \begin{equation} E(XZ) = \int \int x\, z\, p(X=x)\, p(Z=z) \,\mathrm{d}x \,\mathrm{d}z \end{equation} which can be rearranged as \begin{equation} E(XZ) = \int x\, p(X=x) \,\mathrm{d}x \int z\, p(Z=z) \,\mathrm{d}z = E(X)E(Z). \end{equation} Using the same argument you can show that $E(X^2Y^2) = E(X^2)E(Y^2)$. Does that help?
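As a quick sanity check, here is a small Monte Carlo illustration in R; the distributions are arbitrary choices, with $Y$ bounded away from zero so that $E(1/Y)$ exists:

set.seed(1)
x <- rexp(1e6, rate = 1)           # X ~ Exp(1), independent of Y
y <- runif(1e6, min = 1, max = 2)  # Y ~ Uniform(1, 2)

mean(x / y)              # E(X/Y) estimated directly
mean(x) * mean(1 / y)    # E(X) * E(1/Y)
# the two agree up to simulation error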
53,815
ARIMA model identification [duplicate]
1) A pure AR(p) will have a cut off at lag p in the PACF: [Figure: ACF and PACF of a long AR(3) process]

2) A pure MA(q) will have a cut off at lag q in the ACF: [Figure: ACF and PACF of a long MA(3) process]

3) ARMA(p,q) will (eventually) have a decay in both; you often can't immediately tell p and q from empirical ACF and PACF -- though with some practice you can get better at it.

As Graeme Walsh points out, model identification is part of an iterative process (explicitly so in Box and Jenkins). The ACF plot above suggests perhaps an MA(4) while the PACF plot might suggest an AR(5). One might instead try say an ARMA(1,1) and see what was "left over". There are other tools than the ACF and PACF, but they're usually even harder to interpret in practice (and may require even larger sample sizes to give a reliable indication).
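If you want to reproduce plots of that kind, here is a small R sketch; the AR and MA coefficients are arbitrary (but stationary) choices, not the ones behind the figures above:

set.seed(1)
ar3 <- arima.sim(model = list(ar = c(0.5, -0.3, 0.2)), n = 5000)  # long AR(3)
ma3 <- arima.sim(model = list(ma = c(0.6, -0.4, 0.3)), n = 5000)  # long MA(3)

par(mfrow = c(2, 2))
acf(ar3,  main = "ACF, AR(3)")    # tails off
pacf(ar3, main = "PACF, AR(3)")   # cuts off at lag 3
acf(ma3,  main = "ACF, MA(3)")    # cuts off at lag 3
pacf(ma3, main = "PACF, MA(3)")   # tails off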
53,816
ARIMA model identification [duplicate]
About the ACF and PACF of an ARMA(p,q) one can say: the ACF tails off after lag (q-p) and the PACF tails off after lag (p-q) [e.g., Wei (2005), p. 109], which makes it difficult to identify the orders p and q. Usually one uses information criteria like the AIC, BIC, FPE, .... One estimates several models with different orders p and q and selects the one with the smallest value of the respective criterion.
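A minimal sketch of such a search in R, using the AIC over a small grid of orders; the series y is simulated here as a placeholder for your own data, and fits that fail to converge are simply skipped:

set.seed(1)
y <- arima.sim(model = list(ar = 0.6, ma = 0.4), n = 500)   # placeholder series

ic <- expand.grid(p = 0:3, q = 0:3)
ic$AIC <- apply(ic, 1, function(pq)
  tryCatch(AIC(arima(y, order = c(pq["p"], 0, pq["q"]))),
           error = function(e) NA))   # skip non-convergent fits
ic[which.min(ic$AIC), ]               # order combination with the smallest AIC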
53,817
Which Distribution Does the Data Point Belong to?
(My answer looks like I am assuming univariate distributions, but the underlying ideas carry over to the case with more variates.) If you had population distributions ($F$, $G$) rather than samples and a point $x_{new}$, you could compare the height of the density (or probability function in the case of discrete random variables) to find the distribution with the greater likelihood of producing the observation. i.e. compare $f(x_{new})$ with $g(x_{new})$. However, you only have samples. With large samples, you could make some assumptions (such as "the original population densities are smooth") and use (say) kernel density estimates* ($\hat f$ and $\hat g$), and then compare heights of those at $x_{new}$ - though of course the estimated probabilities are dependent on things like your choice of bandwidth and kernel, and are subject to random variation (a new sample would result in different relative density estimates at each $x$, though in large random samples they should be looking something like the population densities). * or logspline density estimates, or whatever There are some other things you might do but it will pretty much boil down to 'what are you prepared to assume?' (Or did you want to take a Bayesian approach?)
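Here is a hedged R sketch of the kernel-density comparison; the two samples and the new point are simulated placeholders, and the default bandwidth and kernel of density() are used:

set.seed(1)
x.f   <- rnorm(1000, 0, 1)     # placeholder sample from the first population
x.g   <- rnorm(1000, 2, 1.5)   # placeholder sample from the second population
x.new <- 1.2                   # the new observation

f.hat <- density(x.f)          # kernel density estimates
g.hat <- density(x.g)

# estimated density heights at x.new, by linear interpolation of the estimates
f.new <- approx(f.hat$x, f.hat$y, xout = x.new)$y
g.new <- approx(g.hat$x, g.hat$y, xout = x.new)$y
c(f = f.new, g = g.new)        # assign x.new to whichever is larger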
53,818
Which Distribution Does the Data Point Belong to?
Not being a professional statistician, I see the situation as follows. We have two (or more) overlapping distributions; the task often appears in spectroscopy. So here you have approximately a 0.25 probability of belonging to one distribution and 0.1 to the other, respectively.
53,819
Which Distribution Does the Data Point Belong to?
I am trying to solve a completely analogous problem. What I have started using as a basic approach is hypothesis testing. If the two distributions are fairly spread out, then you can check whether the data point lies outside the bottom 95th percentile of the data for disliked websites. If it does, you can conclude with an error of 5% that the user liked the website. Similarly, if the data point lies outside the top 95th percentile of the data for liked websites, you can say that the user disliked the website. This way you get a number "d" and another "l": if the time spent is below d, they disliked it, and if the time spent is above l, they liked it. What to do between d and l is something I would like to know as well; i.e., we have never used the value of the expected mixture (approximately 4% of people dislike the website), and neither am I able to use it in my problem.
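For what it's worth, a minimal sketch of those two cut-offs in R; time.disliked and time.liked are placeholder vectors of times spent on disliked and liked websites:

set.seed(1)
time.disliked <- rexp(200, rate = 1/30)    # placeholder: times on disliked sites
time.liked    <- rexp(800, rate = 1/120)   # placeholder: times on liked sites

d <- unname(quantile(time.liked,    0.05))  # below d: reject "liked", call it disliked
l <- unname(quantile(time.disliked, 0.95))  # above l: reject "disliked", call it liked
c(d = d, l = l)    # times between d and l remain ambiguous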
53,820
Should normalization completely weed out correlation?
Instead of normalizing by the length, I would suggest normalizing by the maximum possible number of subsequences for the given length $\ell$. The maximum is reached for a sequence made of the repetition of the alphabet. E.g., if the alphabet is {A,B,C}, the maximum for a sequence of length 6 would be the number of subsequences of A-B-C-A-B-C. An upper bound on this maximum is $2^\ell$; it is the exact maximum as long as $\ell$ does not exceed the size of the alphabet.
53,821
Should normalization completely weed out correlation?
Admittedly, this is just an operationalization of @whuber's comment (thanks!), but I still find it helpful to document it.

normalization <- function(order, length){
  data  <- as.data.frame(cbind(order, length))
  model <- lm(order ~ length, data)
  order_normalized <- order - (model$coefficients[1] + (model$coefficients[2]*length))
  (order_normalized)
}

If you input the two variables into the function, it will normalize the former by the latter using least-squares.
53,822
Which one to use Splines / Interaction or both?
Without going into detail:

- cubic splines have advantages over linear splines, namely more properly reflecting smooth underlying relationships and being less sensitive to knot placement
- knots can be placed using subject matter knowledge or using the observed data density (e.g., put knots at fixed quantiles of a predictor)
- getting nonlinear main effects right by modeling them flexibly is necessary before considering interactions
- interactions between regression splines are called tensor splines and can give you a smooth interaction surface; this involves taking all products of terms making up the regression splines in the main effects
- think about how the number of purchases guides the estimation of how many parameters (degrees of freedom) to spend on each predictor and on the interactions
- the R rms package has a restricted interaction operator %ia% to create tensor splines with fewer terms; the restriction is that the interactions are not doubly nonlinear (a sketch follows below)
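A hedged sketch of what the two variants might look like with rms; the data, the variable names x1/x2, and the knot counts are all placeholders, so consult the rms documentation for the exact behaviour of rcs() and %ia%:

library(rms)
set.seed(1)
d <- data.frame(x1 = runif(200), x2 = runif(200))
d$y <- with(d, sin(3 * x1) + x2 + 0.5 * x1 * x2 + rnorm(200, 0, 0.3))  # toy outcome

# full tensor spline: all products of the restricted cubic spline terms
f.full  <- ols(y ~ rcs(x1, 4) * rcs(x2, 4), data = d)

# restricted interaction via %ia%: fewer parameters, not doubly nonlinear
f.restr <- ols(y ~ rcs(x1, 4) + rcs(x2, 4) + rcs(x1, 4) %ia% rcs(x2, 4), data = d)

length(coef(f.full))    # compare how many parameters each specification spends
length(coef(f.restr))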
53,823
Statistical significance in canonical correlation analysis
Let $p_x$ and $p_y$ be the number of variables in your sets $X$ and $Y$. $N$ is the sample size. You have obtained $m=\min(p_x,p_y)$ canonical correlations $\gamma_1, \gamma_2,...,\gamma_m$. Testing them usually goes as follows. Given $\gamma_j$, its corresponding eigenvalue is $\lambda_j= \frac{1}{1-\gamma_j^2}-1$. The Wilks' lambda statistic for it is $w_j= \frac{1}{1+\lambda_j}w_{j+1}$. So, first compute $w_m$, which is $\frac{1}{1+\lambda_m}$, then compute $w_{m-1}$ using $w_m$, etc., backwards. This statistic has approximately a Chi-square distribution (under assumptions of normality and large $N$) with $df_j= (p_x-j+1)(p_y-j+1)$. To recalculate Wilks' lambda into the Chi-square: $\chi_j^2= -\ln(w_j)(N-(p_x+p_y+3)/2)$. So, substitute $\chi_j^2$ into the Chi-square cdf with $df_j$ degrees of freedom, subtract from 1, and you have the p-value for correlation $\gamma_j$. What does this p-value mean in fact? A nonsignificant p-value for $\gamma_1$ tells you that all canonical correlations $\gamma_1$ through $\gamma_m$ are not significant (i.e. the hypothesis that they are all zero should not be rejected). A significant p-value for $\gamma_1$ and a nonsignificant p-value for $\gamma_2$ tells you that $\gamma_1$ is significant (likely to be nonzero in the population), while the rest, $\gamma_2$ through $\gamma_m$, are all not significant; etc. Sometimes the p-value for $\gamma_{j+1}$ is lower than for $\gamma_{j}$. That should not be taken to mean "$\gamma_{j+1}$ is more significant", because a more junior correlation cannot be more significant than a more senior one. As said already, if $\gamma_{j}$ is not significant for you, all the remaining junior correlations must automatically be considered not significant too. For an algorithm of CCA, look here.
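A sketch of that procedure in R, using simulated data and cancor() from the base stats package to obtain the canonical correlations; the data-generating step is purely illustrative:

set.seed(1)
N  <- 200
X  <- matrix(rnorm(N * 3), N, 3)                              # p_x = 3
Y  <- cbind(X[, 1] + rnorm(N), matrix(rnorm(N * 2), N, 2))    # p_y = 3
px <- ncol(X); py <- ncol(Y)

gamma <- cancor(X, Y)$cor            # canonical correlations, largest first
m     <- length(gamma)

# w_j = prod_{k >= j} (1 - gamma_k^2), computed backwards as described above
w     <- rev(cumprod(rev(1 - gamma^2)))
chisq <- -log(w) * (N - (px + py + 3) / 2)
df    <- (px - seq_len(m) + 1) * (py - seq_len(m) + 1)
pval  <- pchisq(chisq, df, lower.tail = FALSE)
cbind(gamma, chisq, df, pval)        # p-value per canonical correlation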
Statistical significance in canonical correlation analysis
Let $p_x$ and $p_y$ be the number of variables in your sets $X$ and $Y$. $N$ is the sample size. You have obtained $m=\min(p_x,p_y)$ canonical correlations $\gamma_1, \gamma_2,...,\gamma_m$. Testing t
Statistical significance in canonical correlation analysis Let $p_x$ and $p_y$ be the number of variables in your sets $X$ and $Y$. $N$ is the sample size. You have obtained $m=\min(p_x,p_y)$ canonical correlations $\gamma_1, \gamma_2,...,\gamma_m$. Testing them usually goes as follows. Given $\gamma_j$, its corresponding eigenvalue is $\lambda_j= \frac{1}{1-\gamma_j^2}-1$. Wilk's lambda statistic for it is $w_j= \frac{1}{1+\lambda_j}w_{j+1}$. So, first compute $w_m$ which is $\frac{1}{1+\lambda_m}$, then compute $w_{m-1}$ using $w_m$, etc., backwards. This statistic has approximately Chi-square distribution (under assumptions of normality and large $N$) with $df_j= (p_x-j+1)(p_y-j+1)$. To recalculate Wilk's into the Chi-square: $\chi_j^2= -\ln(w_j)(N-(p_x+p_y+3)/2)$. So, substitute $\chi_j^2$ in Chi-square cdf distribution with $df_j$, subtract from 1, and have the p-value for correlation $\gamma_j$. What does this p-value mean in fact? Nonsignificant p-value for $\gamma_1$ tells that all canonical correlations $\gamma_1$ through $\gamma_m$ are not significant (i.e. the hypothesis that they all are zero should not be rejected). Significant p-value for $\gamma_1$ and nonsignificant p-value for $\gamma_2$ tells that $\gamma_1$ is significant (likely to be nonzero in the population), while the rest $\gamma_2$ through $\gamma_m$ are all not significant; etc. Sometimes, p-value for $\gamma_{j+1}$ is lower than for $\gamma_{j}$. That should not be taken in the sense "$\gamma_{j+1}$ is more significant" because a more junior correlation cannot be more significant than more senior one. As said already, if $\gamma_{j}$ is not significant for you, all the remaining junior correlations must automatically be considered not significant too. For an algorithm of CCA, look here.
Statistical significance in canonical correlation analysis Let $p_x$ and $p_y$ be the number of variables in your sets $X$ and $Y$. $N$ is the sample size. You have obtained $m=\min(p_x,p_y)$ canonical correlations $\gamma_1, \gamma_2,...,\gamma_m$. Testing t
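The recipe above translates almost line for line into R. The function below is only a sketch written from the formulas in this answer (the name cca_test and the example numbers are invented); it returns one row per canonical correlation with its Wilks statistic, chi-square value, degrees of freedom and p-value.
cca_test <- function(gamma, N, px, py) {
  m      <- length(gamma)
  lambda <- 1 / (1 - gamma^2) - 1                     # eigenvalue for each gamma_j
  w      <- numeric(m)
  w[m]   <- 1 / (1 + lambda[m])                       # start with w_m ...
  if (m > 1) for (j in (m - 1):1) w[j] <- w[j + 1] / (1 + lambda[j])   # ... and work backwards
  j      <- seq_len(m)
  df     <- (px - j + 1) * (py - j + 1)
  chi2   <- -log(w) * (N - (px + py + 3) / 2)
  data.frame(gamma = gamma, wilks = w, chi2 = chi2, df = df,
             p.value = 1 - pchisq(chi2, df))
}
cca_test(gamma = c(0.6, 0.4, 0.1), N = 200, px = 5, py = 3)
Read the output top to bottom and stop at the first nonsignificant row, exactly as described above.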
53,824
KL-divergence between two categorical/multinomial distributions gives negative values?
Let’s remove the two categories with probability $0$ in both distributions. Your example is $P = (0.9, 0.1)$ and $Q = (1,0)$. The KL divergence is $KL(P||Q) = \sum_i p_i \log\left( {p_i \over q_i }\right)$. It is not $$ 0.9 \times \log\, 0.9 + 0 $$ but $$ 0.9 \times \log\, 0.9 + 0.1 \times ( +\infty ) = + \infty.$$
KL-divergence between two categorical/multinomial distributions gives negative values?
Let’s remove the two categories with probability $0$ in both distributions. Your example is $P = (0.9, 0.1)$ and $Q = (1,0)$. The KL divergence is $KL(P||Q) = \sum_i p_i \log\left( {p_i \over q_i }\r
KL-divergence between two categorical/multinomial distributions gives negative values? Let’s remove the two categories with probability $0$ in both distributions. Your example is $P = (0.9, 0.1)$ and $Q = (1,0)$. The KL divergence is $KL(P||Q) = \sum_i p_i \log\left( {p_i \over q_i }\right)$. It is not $$ 0.9 \times \log\, 0.9 + 0 $$ but $$ 0.9 \times \log\, 0.9 + 0.1 \times ( +\infty ) = + \infty.$$
KL-divergence between two categorical/multinomial distributions gives negative values? Let’s remove the two categories with probability $0$ in both distributions. Your example is $P = (0.9, 0.1)$ and $Q = (1,0)$. The KL divergence is $KL(P||Q) = \sum_i p_i \log\left( {p_i \over q_i }\r
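A two-line R check, using the example vectors from this answer with the two all-zero categories already dropped:
P <- c(0.9, 0.1); Q <- c(1, 0)
sum(P * log(P / Q))   # 0.9*log(0.9) is finite, but 0.1*log(0.1/0) = +Inf, so the sum is Inf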
53,825
Implementing a Metropolis Hastings Algorithm in R
It is indeed a very poor idea to start learning a topic just from an on-line code with no explanation. Better read a book (like our Introduction to Monte Carlo methods with R!) or an introductory paper and write your own code. As written, this code proposes a random walk on the parameter $(a,b)$ which in your case could be $$(\text{logit}(\mu),\log(\tau))=\left(\log\left\{\dfrac{\mu}{1-\mu}\right\},\log(\tau)\right)$$ in order to account for the restricted supports of $\mu$ and $\tau$ that are incompatible with a random walk proposal. This random walk proposal is deduced from the lines theta<-2*3.1415926*runif(1) anew <- a[i] + s*cos(theta) # random walk bnew <- b[i] + 2*s*sin(theta) # random walk which correspond to simulating a bivariate normal $\text{N}(0,1)$ (a direct call to rnorm(2) would work as well!) and scaling the first component by s and the second by 2s. (There is no reason for this difference, each component requiring its own scale based on acceptance rates.) The line s <- s0*runif(1) that changes the scale at each step is actually incorrect because it makes the usual Metropolis-Hastings ratio invalid: in this ratio r <- pdf(anew,bnew)/pdf(a[i],b[i])# acceptance ratio pdf means the density of the target distribution which, in your case, could be coded as pdf <-function(a,b){ dbeta(1/(1+exp(-a)),2,2)*dnorm(b,1,10)*prod( dnorm(x,mean=1/(1+exp(-a)),sd=exp(b)))* exp(a)/(1+exp(a))^2} which multiplies the prior by the likelihood by the Jacobian. This may sound complicated but the explanation is that The random walk operates on the parametrisation $(\text{logit}(\mu),\log(\tau))$ and the Metropolis-Hastings ratio is the ratio of the targets for this parametrisation; The prior is defined for the parametrisation $(\mu,\tau)$ or $(\mu,\log\,\tau)$ and must be transformed into a prior for the same parametrisation $(\text{logit}(\mu),\log(\tau))$, which means using the Normal model on $\log(\tau)$ and the logit transform of the Beta(2,2) prior on $\mu$, hence the Jacobian of the inverse logit transform:$$\dfrac{\exp(a)}{(1+\exp(a))^2}$$ Once you define this function, your program should run, at least from a theoretical perspective. If the acceptance rate is too high or too low you need to calibrate the scale s.
Implementing a Metropolis Hastings Algorithm in R
It is indeed a very poor idea to start learning a topic just from an on-line code with no explanation. Better read a book (like our Introduction to Monte Carlo methods with R!) or an introductory pape
Implementing a Metropolis Hastings Algorithm in R It is indeed a very poor idea to start learning a topic just from an on-line code with no explanation. Better read a book (like our Introduction to Monte Carlo methods with R!) or an introductory paper and write your own code. As written, this code proposes a random walk on the parameter $(a,b)$ which in your case could be $$(\text{logit}(\mu),\log(\tau))=\left(\log\left\{\dfrac{\mu}{1-\mu}\right\},\log(\tau)\right)$$ in order to account for the restricted supports of $\mu$ and $\tau$ that are incompatible with a random walk proposal. This random walk proposal is deduced from the lines theta<-2*3.1415926*runif(1) anew <- a[i] + s*cos(theta) # random walk bnew <- b[i] + 2*s*sin(theta) # random walk which correspond to simulating a bivariate normal $\text{N}(0,1)$ (a direct call to rnorm(2) would work as well!) and scaling the first component by s and the second by 2s. (There is no reason for this difference, each component requiring its own scale based on acceptance rates.) The line s <- s0*runif(1) that changes the scale at each step is actually incorrect because it makes the usual Metropolis-Hasting ratio invalid: in this ratio r <- pdf(anew,bnew)/pdf(a[i],b[i])# acceptance ratio pdf means the density of the target distribution which, in your case, could be coded as pdf <-function(a,b){ dbeta(1/(1+exp(-a)),2,2)*dnorm(b,1,10)*prod( dnorm(x,mean=1/(1+exp(-a)),sd=exp(b)))* exp(a)/(1+exp(a))^2} which multiplies the prior by the likelihood by the Jacobian. This may sound complicated but the explanation is that The random walk operates on the parametrisation $(\text{logit}(\mu),\log(\tau))$ and the Metropolis-Hastings ratio is the ratio of the targets for this parametrisation; The prior is defined for the parametrisation $(\mu,\tau)$ or $(\mu,\log\,\tau)$ and must be transformed into a prior for the same parametrisation $(\text{logit}(\mu),\log(\tau))$, which means using the Normal model on $\log(\tau))$ and the logit transform of the Beta(2,2) prior on $\mu$, hence the Jacobian of the inverse logit transform:$$\dfrac{\exp(a)}{(1+\exp(a))^2}$$ Once you define this function, your program should run, at least from a theoretical perspective. If the acceptance rate is too high or too low you need to calibrate the scale s.
Implementing a Metropolis Hastings Algorithm in R It is indeed a very poor idea to start learning a topic just from an on-line code with no explanation. Better read a book (like our Introduction to Monte Carlo methods with R!) or an introductory pape
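For concreteness, here is a minimal sketch of the sampler with the corrections suggested in this answer: a fixed scale s, a Gaussian random-walk proposal via rnorm(2), and the pdf function (prior multiplied by likelihood and Jacobian) exactly as defined above. The data vector x, the starting values and s = 0.5 are invented for illustration; s would need calibrating against the acceptance rate on real data.
set.seed(1)
x <- rnorm(20, mean = 0.4, sd = 1.5)                    # hypothetical data
pdf <- function(a, b) {
  dbeta(1/(1 + exp(-a)), 2, 2) * dnorm(b, 1, 10) *
    prod(dnorm(x, mean = 1/(1 + exp(-a)), sd = exp(b))) *
    exp(a) / (1 + exp(a))^2
}
n <- 5000; s <- 0.5
a <- b <- numeric(n)                                    # start at logit(mu) = 0, log(tau) = 0
for (i in 2:n) {
  prop <- c(a[i - 1], b[i - 1]) + s * rnorm(2)          # random-walk proposal with fixed scale
  r <- pdf(prop[1], prop[2]) / pdf(a[i - 1], b[i - 1])  # Metropolis-Hastings ratio
  if (runif(1) < r) {
    a[i] <- prop[1]; b[i] <- prop[2]
  } else {
    a[i] <- a[i - 1]; b[i] <- b[i - 1]
  }
}
mu <- 1/(1 + exp(-a)); tau <- exp(b)                    # back-transform the draws to (mu, tau)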
53,826
LIBSVM "Warning: using -h 0 may be faster"
This means that the optimization algorithm detected that, with high probability (not in the strict, mathematical sense), you can speed up your training by setting the -h 0 flag in your options. Basically, -h controls the shrinking heuristics implemented in the libsvm package, which for some data significantly reduce the number of required computations, while for others they make training slower. There is no general rule, but there are some heuristics regarding this heuristic which "detect" when it could help - and this is exactly the reason for this message - one of these "metaheuristics" detected that, for your particular data and parameters, it could be more valuable to set the -h 0 flag. If you are interested in more details regarding this possible optimization speed-up, you could refer to the source paper: T. Joachims, Transductive Inference for Text Classification using Support Vector Machines. International Conference on Machine Learning (ICML), 1999.
LIBSVM "Warning: using -h 0 may be faster"
This means, that optimization algorithm detected that with high probability (not in the strict, mathematical sense) you can speed up your training by turning the -h 0 flag in your options. Basically,
LIBSVM "Warning: using -h 0 may be faster" This means, that optimization algorithm detected that with high probability (not in the strict, mathematical sense) you can speed up your training by turning the -h 0 flag in your options. Basically, -h is the shrinking heuristics, implemented in the libsvm package which for some data significantly reduces number of required computations, while in others - makes it slower. There is no general rule, but there are some heuristics regarding this heuristic, which "detect" when it could help - and this is exactly the reason for this message - one of such "metaheuristics" detected, that for your particular data and parameters - it could be more valuable to set the -h 0 flag. If you are interested in more details regarding this possible optimization speed up you could refer to the source paper: T. Joachims, Transductive Inference for Text Classification using Support Vector Machines. International Conference on Machine Learning (ICML), 1999.
LIBSVM "Warning: using -h 0 may be faster" This means, that optimization algorithm detected that with high probability (not in the strict, mathematical sense) you can speed up your training by turning the -h 0 flag in your options. Basically,
53,827
Linear regression best polynomial (or better approach to use)?
Because brightness is a response with independent random error and it is expected to taper off with distance from the optimal point according to a Gaussian function, a quick nonlinear regression ought to do a good job. The model is $$y = b + a \exp\left(-\frac{1}{2}\left(\frac{x-m}{s}\right)^2\right) + \varepsilon$$ where $\varepsilon$ represents the errors in measuring the brightness, modeled here as random quantities. The peak occurs at $m$; $s\gt 0$ quantifies the rate at which the curve tapers off; $a\gt 0$ reflects the overall magnitudes of the relative $y$ values, and $b$ is a baseline. Let's try it with the sample data (using R). By including the middle ($m$) among the parameters, the software will automatically output its estimate and a standard error for it: y <- c(-190279, -191971, -108325, 65298, 292274, 292274, 81548, -104653, -136166)/9 x <- (-4:4)*10 # # Define a Gaussian function (of four parameters). f <- function(x, theta) { m <- theta[1]; s <- theta[2]; a <- theta[3]; b <- theta[4]; a*exp(-0.5*((x-m)/s)^2) + b } # # Estimate some starting values. m.0 <- x[which.max(y)]; s.0 <- (max(x)-min(x))/4; b.0 <- min(y); a.0 <- (max(y)-min(y)) # # Do the fit. (It takes no time at all.) fit <- nls(y ~ f(x,c(m,s,a,b)), data.frame(x,y), start=list(m=m.0, s=s.0, a=a.0, b=b.0)) # # Display the estimated location of the peak and its SE. summary(fit)$parameters["m", 1:2] The tricky part with nonlinear fits usually is finding good starting values for the parameters: this code shows one (crude) approach. Its output, Estimate Std. Error 5.3161940 0.4303487 gives the estimate of the peak ($5.32$) and its standard error ($0.43$). It's always a good idea to plot the fit and compare it to the data: par(mfrow=c(1,1)) plot(c(x,0),c(y,f(coef(fit)["m"],coef(fit))), main="Data", type="n", xlab="x", ylab="Brightness") curve(f(x, coef(fit)), add=TRUE, col="Red", lwd=2) points(x,y, pch=19) That's what we expected: the data appear to fit the Gaussian pretty well. For a more incisive look at the fit, plot the residuals: plot(x, resid(fit), main="Residuals") You want to check that most residuals are as small as the (known?) variation in the brightness measurement and that there is no important trend or pattern in them. We might be a little concerned about the high residual at $x=40$, but rerunning the procedure with this last data point removed does not appreciably change the estimate of $m$ (which is now $5.25$ with a standard error of $0.17$, not distinguishable from the previous estimate). The new residuals bounce up and down, tend to get smaller as $x$ gets larger, but otherwise tend to be less than $1000$ or so in absolute value: there's no sign here that more effort is needed to pin down $m$.
Linear regression best polynomial (or better approach to use)?
Because brightness is a response with independent random error and it is expected to taper off with distance from the optimal point according to a Gaussian function, a quick nonlinear regression ought
Linear regression best polynomial (or better approach to use)? Because brightness is a response with independent random error and it is expected to taper off with distance from the optimal point according to a Gaussian function, a quick nonlinear regression ought to do a good job. The model is $$y = b + a \exp\left(-\frac{1}{2}\left(\frac{x-m}{s}\right)^2\right) + \varepsilon$$ where $\varepsilon$ represents the errors in measuring the brightness, modeled here as random quantities. The peak occurs at $m$; $s\gt 0$ quantifies the rate at which the curve tapers off; $a\gt 0$ reflects the overall magnitudes of the relative $y$ values, and $b$ is a baseline. Let's try it with the sample data (using R). By including the middle ($m$) among the parameters, the software will automatically output its estimate and a standard error for it: y <- c(-190279, -191971, -108325, 65298, 292274, 292274, 81548, -104653, -136166)/9 x <- (-4:4)*10 # # Define a Gaussian function (of four parameters). f <- function(x, theta) { m <- theta[1]; s <- theta[2]; a <- theta[3]; b <- theta[4]; a*exp(-0.5*((x-m)/s)^2) + b } # # Estimate some starting values. m.0 <- x[which.max(y)]; s.0 <- (max(x)-min(x))/4; b.0 <- min(y); a.0 <- (max(y)-min(y)) # # Do the fit. (It takes no time at all.) fit <- nls(y ~ f(x,c(m,s,a,b)), data.frame(x,y), start=list(m=m.0, s=s.0, a=a.0, b=b.0)) # # Display the estimated location of the peak and its SE. summary(fit)$parameters["m", 1:2] The tricky part with nonlinear fits usually is finding good starting values for the parameters: this code shows one (crude) approach. Its output, Estimate Std. Error 5.3161940 0.4303487 gives the estimate of the peak ($5.32$) and its standard error ($0.43$). It's always a good idea to plot the fit and compare it to the data: par(mfrow=c(1,1)) plot(c(x,0),c(y,f(coef(fit)["m"],coef(fit))), main="Data", type="n", xlab="x", ylab="Brightness") curve(f(x, coef(fit)), add=TRUE, col="Red", lwd=2) points(x,y, pch=19) That's what we expected: the data appear to fit the Gaussian pretty well. For a more incisive look at the fit, plot the residuals: plot(x, resid(fit), main="Residuals") You want to check that most residuals are as small as the (known?) variation in the brightness measurement and that there is no important trend or pattern in them. We might be a little concerned about the high residual at $x=40$, but rerunning the procedure with this last data point removed does not appreciably change the estimate of $m$ (which is now $5.25$ with a standard error of $0.17$, not distinguishable from the previous estimate). The new residuals bounce up and down, tend to get smaller as $x$ gets larger, but otherwise tend to be less than $1000$ or so in absolute value: there's no sign here that more effort is needed to pin down $m$.
Linear regression best polynomial (or better approach to use)? Because brightness is a response with independent random error and it is expected to taper off with distance from the optimal point according to a Gaussian function, a quick nonlinear regression ought
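One small addition, not part of the original answer: an approximate Wald-type 95% interval for the peak location can be read straight off the same summary table used above (it leans on approximate normality of the estimator, so treat it as a rough interval).
est <- summary(fit)$parameters["m", 1:2]        # estimate and standard error of m
est[1] + c(-1, 1) * qnorm(0.975) * est[2]       # rough 95% confidence interval for the peak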
53,828
Linear regression best polynomial (or better approach to use)?
I assume the column of values you gave correspond to some kind of time series, and there is an implicit "time" column with evenly spaced values that you've not mentioned. Given that, the question remains if you intend to fit a polynomial curve for prediction or forecasting. If the former, then the use of splines can be achieved using adaptive bases and cross validation to determine optimal breakpoints. Splines estimate piecewise polynomial trends, depending on the polynomial degree you specify (and the number of breakpoints). On the other hand, forecasting does not warrant the use of splines with breakpoints because the trend cannot be extrapolated beyond where the next breakpoint would lie had you observed the data. In either case, are you determining the $R^2$ value using external validation with an independent dataset? If so, then how did you arrive at a choice of 80% variation to determine an adequate predictive model? It seems arbitrary to me and you're more likely to fit noise with little generalization by doing so.
Linear regression best polynomial (or better approach to use)?
I assume the column of values you gave correspond to some kind of time series, and there is an implicit "time" column with evenly spaced values that you've not mentioned. Given that, the question rema
Linear regression best polynomial (or better approach to use)? I assume the column of values you gave correspond to some kind of time series, and there is an implicit "time" column with evenly spaced values that you've not mentioned. Given that, the question remains if you intend to fit a polynomial curve for prediction or forecasting. If the former, then the use of splines can be achieved using adaptive bases and cross validation to determine optimal breakpoints. Splines estimate piecewise polynomial trends, depending on the polynomial degree you specify (and the number of breakpoints). On the other hand, forecasting does not warrant the use of splines with breakpoints because the trend cannot be extrapolated beyond where the next breakpoint would lie had you observed the data. In either case, are you determining the $R^2$ value using external validation with an independent dataset? If so, then how did you arrive at a choice of 80% variation to determine an adequate predictive model? It seems arbitrary to me and you're more likely to fit noise with little generalization by doing so.
Linear regression best polynomial (or better approach to use)? I assume the column of values you gave correspond to some kind of time series, and there is an implicit "time" column with evenly spaced values that you've not mentioned. Given that, the question rema
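A sketch of the spline-fitting route mentioned above; the data frame d with columns y and time, and the candidate degrees of freedom, are hypothetical placeholders rather than anything from the question.
library(splines)
fit <- lm(y ~ ns(time, df = 4), data = d)    # natural cubic spline for the trend
# compare candidate flexibilities by AIC (or cross-validation) instead of chasing a fixed R^2
sapply(2:8, function(k) AIC(lm(y ~ ns(time, df = k), data = d)))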
53,829
Linear regression best polynomial (or better approach to use)?
If your data should be a bell curve, you should fit that instead of the quadratic you have in the question. However, you can play with "other polynomials", for example in Python using mean squared error minimization with numpy.polyfit: import numpy as np import pylab as pl # generate bell curve data X = np.sort((10 * np.random.rand(50, 1)-5), axis=0) y = ((np.exp(-(X**2)/2))/(2*np.pi)).ravel() # add some noise y[::5] += (0.1 * np.random.rand(len(y) // 5)) x = X.ravel() # fit a degree-4 polynomial and get the coefficients z = np.polyfit(x,y,4) # get a callable p = np.poly1d(z) # plot hf = pl.figure() ax = hf.add_subplot(1,1,1) xlab = range(0,len(x)) ax.scatter(x, y, c='k', label='data') ax.plot(X,p(X), c='g', label='fit') ax.set_xticklabels(xlab) ax.set_xlabel('time') ax.set_ylabel('brightness') ax.set_title('Polyfit') ax.legend() pl.show() Output:
Linear regression best polynomial (or better approach to use)?
If your data should be a bell curve, you should fit that instead of the quadratic you have in the question. However, you can play with "other polynomials", for example in Python using mean squared
Linear regression best polynomial (or better approach to use)? If your data should be a bell curve, you should fit that instead of the quadratic you have in the question. However, you can play with "other polynomials", for example in Python using mean squared error minimization with numpy.polyfit: import numpy as np import pylab as pl # generate bell curve data X = np.sort((10 * np.random.rand(50, 1)-5), axis=0) y = ((np.exp(-(X**2)/2))/(2*np.pi)).ravel() # add some noise y[::5] += (0.1 * np.random.rand(len(y)/5)) x = X.ravel() # do the fit with 4 inflections and get the coefficients z = np.polyfit(x,y,4) # get a callable p = np.poly1d(z) # plot hf = pl.figure() ax = hf.add_subplot(1,1,1) xlab = xrange(0,len(x)) ax.scatter(x, y, c='k', label='data') ax.plot(X,p(X), c='g', label='fit') ax.set_xticklabels(xlab) ax.set_xlabel('time') ax.set_ylabel('brightness') ax.set_title('Polyfit') ax.legend() pl.show() Output:
Linear regression best polynomial (or better approach to use)? If your data should be a bell curve, you should fit that instead of the quadratic you have in the question. However, you can play with "other polynomials", for example in Python using mean squared
53,830
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from the PCs? [duplicate]
Each of the principal components projects the whole original feature space onto several dimensions, which I will call the latent features. The more an original feature contributes to a latent feature, the more important it is for that feature. Thus, look at the absolute values of the Eigenvectors' components corresponding to the $k$ largest Eigenvalues. The larger they are, the more a specific feature contributes to that principal component. Mind, however, that these will typically be dense. If you want to find some kind of minimal feature space that explains most of the data, you might be interested in sparse pca.
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from t
Each of the principal components projects the whole original feature space onto several dimensions, which I will call the latent features. The more an original feature contributes to a latent feature,
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from the PCs? [duplicate] Each of the principal components projects the whole original feature space onto several dimensions, which I will call the latent features. The more an original feature contributes to a latent feature, the more important it is for that feature. Thus, look at the absolute values of the Eigenvectors' components corresponding to the $k$ largest Eigenvalues. The larger they are, the more a specific feature contributes to that principal component. Mind, however, that these will typically be dense. If you want to find some kind of minimal feature space that explains most of the data, you might be interested in sparse pca.
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from t Each of the principal components projects the whole original feature space onto several dimensions, which I will call the latent features. The more an original feature contributes to a latent feature,
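A short illustration of inspecting loading magnitudes as suggested above; the matrix X is random noise, included only to make the snippet self-contained.
set.seed(1)
X <- matrix(rnorm(200 * 10), 200, 10)                  # hypothetical data: 200 observations, 10 features
colnames(X) <- paste0("f", 1:10)
p <- prcomp(X, scale. = TRUE)
k <- 3                                                 # number of retained components
round(abs(p$rotation[, 1:k]), 2)                       # |loading| of each original feature on PC1..PCk
sort(abs(p$rotation[, 1]), decreasing = TRUE)[1:5]     # features contributing most to PC1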
53,831
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from the PCs? [duplicate]
You can answer the last question by looking at the loadings of that PC. You could also do this for each of the other 5 PCs. I am not sure what you could do about the overall importance of particular variables.
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from t
You can answer the last question by looking at the loadings of that PC. You could also do this for each of the other 5 PCs. I am not sure what you could do about the overall importance of particular v
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from the PCs? [duplicate] You can answer the last question by looking at the loadings of that PC. You could also do this for each of the other 5 PCs. I am not sure what you could do about the overall importance of particular variables.
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from t You can answer the last question by looking at the loadings of that PC. You could also do this for each of the other 5 PCs. I am not sure what you could do about the overall importance of particular v
53,832
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from the PCs? [duplicate]
Since the eigenvectors are linear combinations of the original variables, I don't think you can safely determine from PCA which of the original variables you can do away with. I feel this is more like a Factor Analysis question (related to latent variables).
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from t
Since eigen vectors are from the linear combination of the original variables, I don't think you can safely determine which of the original variables you can do away with from PCA. I felt this is mor
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from the PCs? [duplicate] Since eigen vectors are from the linear combination of the original variables, I don't think you can safely determine which of the original variables you can do away with from PCA. I felt this is more like a Factor Analysis question (related to latent variables).
Perform PCA. Extract PCs. Can one then tell what the most important _original_ features were, from t Since eigen vectors are from the linear combination of the original variables, I don't think you can safely determine which of the original variables you can do away with from PCA. I felt this is mor
53,833
degrees of freedom in ARIMA model
Let's say you have an $ARIMA(2,0,0)$ and the series length is $n=100$; then you have $98$ degrees of freedom. The first two values of the variable you are modelling are "used" by the two AR terms, so you are left with $98$ fitted values and hence $98$ degrees of freedom.
degrees of freedom in ARIMA model
Let's say you have an $ARIMA(2,0,0)$, and the length $n=100$, then you have $98$ degrees of freedom. Then the first two values of the variable you are modelling are "used", and you will have $98$ fitt
degrees of freedom in ARIMA model Let's say you have an $ARIMA(2,0,0)$, and the length $n=100$, then you have $98$ degrees of freedom. Then the first two values of the variable you are modelling are "used", and you will have $98$ fitted values, and $98$ degrees of freedom.
degrees of freedom in ARIMA model Let's say you have an $ARIMA(2,0,0)$, and the length $n=100$, then you have $98$ degrees of freedom. Then the first two values of the variable you are modelling are "used", and you will have $98$ fitt
53,834
degrees of freedom in ARIMA model
Working with AR in R: for an ARIMA(2,0,0), df = N - 2; including an intercept, df = N - 3; including both an intercept and a time trend (i.e. xreg = 1:N), df = N - 4. In the code below, an AR(2) is fit using maximum likelihood to estimate the coefficients. In other languages (such as SAS) the sum of squared errors and mean squared error is calculated automatically using the appropriate number of degrees of freedom. You can tell that the degrees of freedom are N + X (where X = -4) because: Start out with all points free (X = 0). It is AR(2), thus two points are not free (X = X - 2). An intercept is included, so another point has lost freedom (X = X - 1). An external regressor (here, the time component, 1 : length(y), i.e. 1 to the number of y's) is included, so another point has lost freedom (X = X - 1). Thus X = -4 because 4 points have lost freedom. y_ar <- arima(y, order = c(2,0,0), # AR(2) method = "ML", # maximum likelihood include.mean = TRUE, # include intercept # see the following link for details on irregularities in R # given this parameter # https://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm transform.pars = FALSE, # best practice for method = "ML" xreg = 1:length(y)) # includes time trend as external regressors sse = sum(y_ar$residuals^2) mse = sum(y_ar$residuals^2)/(length(y) - 4) # 4 parameters
degrees of freedom in ARIMA model
Working with AR in R: for an ARIMA(2,0,0) the df = N - 2 including an intercept df = N - 3 including both an intercept and a time trend (i.e. xreg = 1:N) the df = N - 4. in the below code, an
degrees of freedom in ARIMA model Working with AR in R: for an ARIMA(2,0,0) the df = N - 2 including an intercept df = N - 3 including both an intercept and a time trend (i.e. xreg = 1:N) the df = N - 4. in the below code, an AR(2) is done using maximum likelihood to estimate the coefficients. In other languages (such as SAS) the sum of squared errors and mean squared error is calculated automatically using the appropriate number of degrees of freedom. You can tell that the degrees of freedom are N + X (where X = -4) because: Start out with all points are free. X = 0 It is AR(2), thus two points are not free. X = X - 2 An intercept is included, another point has lost freedom: X = X - 1 An external regressor (here, the time component, 1 : length(y), i.e. 1 to the number of y's) is included, another point has lost freedom: X = X - 1 Thus X = -4 because 4 points have lost freedom. y_ar <- arima(y, order = c(2,0,0), # AR(2) method = "ML, # maximum likelihood include.mean = TRUE, # include intercept # see the following link for details on irregularities in R # given this parameter # https://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm transform.pars = FALSE, # best practice for method = "ML" xreg = 1:length(y)) # includes time trend as external regressors sse = sum(y_ar$residuals^2) mse = sum(y_ar$residuals^2)/(length(y) - 4) # 4 parameters
degrees of freedom in ARIMA model Working with AR in R: for an ARIMA(2,0,0) the df = N - 2 including an intercept df = N - 3 including both an intercept and a time trend (i.e. xreg = 1:N) the df = N - 4. in the below code, an
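To make the snippet above runnable end to end, one way is to simulate a hypothetical y first; the AR coefficients, intercept and trend slope below are arbitrary.
set.seed(42)
N <- 100
y <- 10 + 0.02 * (1:N) + arima.sim(list(ar = c(0.6, -0.2)), n = N)   # intercept + trend + AR(2) noise
y_ar <- arima(y, order = c(2, 0, 0), method = "ML", include.mean = TRUE,
              transform.pars = FALSE, xreg = 1:N)
mse <- sum(residuals(y_ar)^2) / (N - 4)   # N - 4 degrees of freedom, as argued above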
53,835
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large?
There are a couple of good answers here already, but none which directly implement the problem as suggested in the question. The steps to do this are: Sample 9 out of the 109 numbers from 0.01 to 1.09: sample(seq(0.01, 1.09, by=0.01), 9) Find the differences between them: Sort them: sort Place them in a vector between the minimum and maximum possible values: c(0, ..., 1.1) Difference: diff Remove 0.01 from each resulting value, so that 0s are possible: - 0.01 To do this N times, the code can be wrapped up in sapply. The resulting code is: N = 1e6 y <- sapply( 1:N, function(i) { diff( c(0, sort(sample(seq(0.01, 1.09, by=0.01), 9)), 1.1) ) - 0.01 } ) mean(y) hist(y, breaks=seq(-0.005, 1.005, by=0.01), freq=FALSE, main="Histogram of partial sums", xlab="Value") Taking the mean and producing a histogram is suggested by @whuber and yields a mean of 0.1 and the following histogram, consistent with his answer. On my laptop (core-i7, 2.3Ghz), wrapping a system.time call around y <- ... gave an elapsed time of 124s, i.e. just over 2 minutes.
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large?
There are a couple of good answers here already, but none which directly implement the problem as suggested in the question. The steps to do this are: Sample 9 out of the 109 numbers from 0.01 to 1.0
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large? There are a couple of good answers here already, but none which directly implement the problem as suggested in the question. The steps to do this are: Sample 9 out of the 109 numbers from 0.01 to 1.09: sample(seq(0.01, 1.09, by=0.01), 9) Find the differences between them: Sort them: sort Place them in a vector between the minimum and maximum possible values: c(0, ..., 1.1) Difference: diff Remove 0.01 from each resulting value, so that 0s are possible: - 0.01 To do this N times, the code can be wrapped up in sapply. The resulting code is: N = 1e6 y <- sapply( 1:N, function(i) { diff( c(0, sort(sample(seq(0.01, 1.09, by=0.01), 9)), 1.1) ) - 0.01 } ) mean(y) hist(y, breaks=seq(-0.005, 1.005, by=0.01), freq=FALSE, main="Histogram of partial sums", xlab="Value") Taking the mean and producing a histogram is suggested by @whuber and yields a mean of 0.1 and the following histogram, consistent with his answer. On my latop (core-i7, 2.3Ghz), wrapping a system.time call around y->... gave an elapsed time of 124s, i.e. just over 2 minutes.
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large? There are a couple of good answers here already, but none which directly implement the problem as suggested in the question. The steps to do this are: Sample 9 out of the 109 numbers from 0.01 to 1.0
53,836
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large?
Based on the edit it sounds like what is desired is to generate, independently and uniformly at random, a partition of the $N=100$-vector $(1/100, 1/100, \ldots, 1/100)$ into $k=10$ pieces and to output the sums of elements in each piece. More generally we might ask how to do this for an arbitrary vector $x$. For example, the solution below can be used in the form set.seed(3) z <- rpart(10^(8:0), 6) sapply(z$sums, function(i) sprintf("%09d", i)); z$cuts to generate the $9$-vector $(100000000, 10000000, 1000000, 100000, 10000, 1000, 100, 10, 1)$ and partition it into six (possibly empty) parts. Its output--which although random is reproducible by setting the seed to $3$--is [1] "000000000" "100000000" "011100000" "000011100" "000000000" "000000011" [1] 0 1 4 7 7 This partition was generated by cutting the vector at positions $0, 1, 4, 7,$ and $7$. Visually this can be represented by placing vertical bars at both ends of the vector and the (random) bars inside to designate the cuts, $$(\vert\ \vert x_1 \vert x_2, x_3, x_4 \vert x_5, x_6, x_7 \vert \vert x_8, x_9\ \vert).$$ The partial sums are $0$ (for the initial empty cut between the first two bars), $100000000$ for the unary sum $x_1$, $011100000$ for the sum $x_2+x_3+x_4$, $000011100$ for the sum $x_5+x_6+x_7$, $0$ for the empty sum between two successive bars following position $7$, and $000000011$ for $x_8+x_9$. The method to generate these is a standard mechanism for generating random permutations of length $k-1$ (the for loop below): to permute an $n$-vector $a$, $a_i$ is swapped with $a_j$ where $j$ is uniformly selected in the range $[i, n]$ inclusive. This is done for $i=1, 2, \ldots, k-1$. The prefix $(a_1, a_2, \ldots, a_{k-1})$ contains the desired random permutation. The rest is just reprocessing the data into the desired form of output. To accommodate the possibility of adjacent cuts (leading to some partial sums of zero), we use a well-known method of considering the $k-1$ internal vertical bars as if they, too, were elements of $x$. This extends $x$ from an $N$-vector to an $n=N+k-1$-vector. The random permutation selects the $k-1$ positions among $1, \ldots, n$ at which the internal vertical bars will be placed. The post-processing subtracts $1$ from the first index, $2$ from the second, ..., and $k-1$ from the last so that they point to the positions within $x$ itself that occur just before each cut. To represent the two outside vertical bars it adjoins $0$ and $N$ to these indexes (but does not explicitly output them). A difference of cumulative sums (via cumsum and diff) is an R-centric way to obtain the partial sums within each partition. As a check, let's generate (say) $10,000$ such partitions and look at histograms of (a) the $10,000 \times 10=100,000$ partial sums in the original problem and (b) the cut indices. We expect the former to average exactly $10 \times 1/100 = 1/10$ and the latter to be uniformly distributed among all possible values $0, 1, \ldots, 100$. First the code to do it: x <- rep(1/100, 100); k <- 10 par(mfrow=c(1,2)) set.seed(83) sample <- replicate(1e4, unlist(rpart(x, k)$sums)) hist(sample, breaks=seq(-0.5, 100.5, 1)/100, freq=FALSE, main="Histogram of partial sums", xlab="Value") mean(as.vector(sample)) set.seed(83) sample <- replicate(1e4, unlist(rpart(x, k)$cuts)) hist(sample, breaks=seq(-0.5, 100.5, 1), main="Histogram of cutpoint indices", xlab="Index") mean(as.vector(sample)) Now the results: The figures look appropriate. 
The left hand histogram in particular provides a reference for any other attempted solution: if it does not produce the same distribution of partial sums (up to sampling error), then that solution is not generating uniformly distributed partitions. By the way, although this code is not well-suited for R implementation, it has a reasonable speed, taking about $N\times k / 50000$ seconds on this machine (two seconds for the example). That would be three to four minutes to solve the problem posed in the question with $N=10^6$ iterations and $k=10$. Because these partitions can be generated one at a time, there is no RAM limitation. R code # # Generate a random partition of vector `x` into `k`>0 (possibly empty) pieces # and return the vector of k partial sums of `x` within those pieces as well as # the k-1 cutpoints between them. (Each partial sum starts *after* the # preceding cut and ends at the current cut.) # # Output is a list with the sums and the corresponding cutpoints. # rpart <- function(x, k) { if (k <= 1) return(list(sums=sum(x), cuts=numeric(0))) n <- length(x)+k-1 a <- 1:n for (i in 1:(k-1)) { j <- sample(i:n, 1) y <- a[i]; a[i] <- a[j]; a[j] <- y } cuts <- c(0, sort(a[1:(k-1)]) - (1:(k-1)), length(x)) psums <- c(0, cumsum(x)) return(list(sums=diff(psums[cuts+1]), cuts=cuts[2:k])) }
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large?
Based on the edit it sounds like what is desired is to generate, independently and uniformly at random, a partition of the $N=100$-vector $(1/100, 1/100, \ldots, 1/100)$ into $k=10$ pieces and to outp
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large? Based on the edit it sounds like what is desired is to generate, independently and uniformly at random, a partition of the $N=100$-vector $(1/100, 1/100, \ldots, 1/100)$ into $k=10$ pieces and to output the sums of elements in each piece. More generally we might ask how to do this for an arbitrary vector $x$. For example, the solution below can be used in the form set.seed(3) z <- rpart(10^(8:0), 6) sapply(z$sums, function(i) sprintf("%09d", i)); z$cuts to generate the $9$-vector $(100000000, 10000000, 1000000, 100000, 10000, 1000, 100, 10, 1)$ and partition it into six (possibly empty) parts. Its output--which although random is reproducible by setting the seed to $3$--is [1] "000000000" "100000000" "011100000" "000011100" "000000000" "000000011" [1] 0 1 4 7 7 This partition was generated by cutting the vector at positions $0, 1, 4, 7,$ and $7$. Visually this can be represented by placing vertical bars at both ends of the vector and the (random) bars inside to designate the cuts, $$(\vert\ \vert x_1 \vert x_2, x_3, x_4 \vert x_5, x_6, x_7 \vert \vert x_8, x_9\ \vert).$$ The partial sums are $0$ (for the initial empty cut between the first two bars), $100000000$ for the unary sum $x_1$, $011100000$ for the sum $x_2+x_3+x_4$, $000011100$ for the sum $x_5+x_6+x_7$, $0$ for the empty sum between two successive bars following position $7$, and $000000011$ for $x_8+x_9$. The method to generate these is a standard mechanism for generating random permutations of length $k-1$ (the for loop below): to permute an $n$-vector $a$, $a_i$ is swapped with $a_j$ where $j$ is uniformly selected in the range $[i, n]$ inclusive. This is done for $i=1, 2, \ldots, k-1$. The prefix $(a_1, a_2, \ldots, a_{k-1})$ contains the desired random permutation. The rest is just reprocessing the data into the desired form of output. To accommodate the possibility of adjacent cuts (leading to some partial sums of zero), we use a well-known method of considering the $k-1$ internal vertical bars as if they, too, were elements of $x$. This extends $x$ from an $N$-vector to an $n=N+k-1$-vector. The random permutation selects the $k-1$ positions among $1, \ldots, n$ at which the internal vertical bars will be placed. The post-processing subtracts $1$ from the first index, $2$ from the second, ..., and $k-1$ from the last so that they point to the positions within $x$ itself that occur just before each cut. To represent the two outside vertical bars it adjoins $0$ and $N$ to these indexes (but does not explicitly output them). A difference of cumulative sums (via cumsum and diff) is an R-centric way to obtain the partial sums within each partition. As a check, let's generate (say) $10,000$ such partitions and look at histograms of (a) the $10,000 \times 10=100,000$ partial sums in the original problem and (b) the cut indices. We expect the former to average exactly $10 \times 1/100 = 1/10$ and the latter to be uniformly distributed among all possible values $0, 1, \ldots, 100$. 
First the code to do it: x <- rep(1/100, 100); k <- 10 par(mfrow=c(1,2)) set.seed(83) sample <- replicate(1e4, unlist(rpart(x, k)$sums)) hist(sample, breaks=seq(-0.5, 100.5, 1)/100, freq=FALSE, main="Histogram of partial sums", xlab="Value") mean(as.vector(sample)) set.seed(83) sample <- replicate(1e4, unlist(rpart(x, k)$cuts)) hist(sample, breaks=seq(-0.5, 100.5, 1), main="Histogram of cutpoint indices", xlab="Index") mean(as.vector(sample)) Now the results: The figures look appropriate. The left hand histogram in particular provides a reference for any other attempted solution: if it does not produce the same distribution of partial sums (up to sampling error), then that solution is not generating uniformly distributed partitions. By the way, although this code is not well-suited for R implementation, it has a reasonable speed, taking about $N\times k / 50000$ seconds on this machine (two seconds for the example). That would be three to four minutes to solve the problem posed in the question with $N=10^6$ iterations and $k=10$. Because these partitions can be generated one at a time, there is no RAM limitation. R code # # Generate a random partition of vector `x` into `k`>0 (possibly empty) pieces # and return the vector of k partial sums of `x` within those pieces as well as # the k-1 cutpoints between them. (Each partial sum starts *after* the # preceding cut and ends at the current cut.) # # Output is a list with the sums and the corresponding cutpoints. # rpart <- function(x, k) { if (k <= 1) return(list(sums=sum(x), cuts=numeric(0))) n <- length(x)+k-1 a <- 1:n for (i in 1:(k-1)) { j <- sample(i:n, 1) y <- a[i]; a[i] <- a[j]; a[j] <- y } cuts <- c(0, sort(a[1:(k-1)]) - (1:(k-1)), length(x)) psums <- c(0, cumsum(x)) return(list(sums=diff(psums[cuts+1]), cuts=cuts[2:k])) }
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large? Based on the edit it sounds like what is desired is to generate, independently and uniformly at random, a partition of the $N=100$-vector $(1/100, 1/100, \ldots, 1/100)$ into $k=10$ pieces and to outp
53,837
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large?
To hold the set of vectors that combn(109,9) would return if it could, by my estimate you would need upward of 32 petabytes of memory, which exceeds how much R can allocate. Are the stated criteria the only ones that matter to you? Or do the elements of each vector also have to be unique? I went ahead and assumed yes. If so, this should do it (in probably less than half an hour, I only tried it for 1e5 vectors). foo<-replicate(1e6,{oo<-0;while(sum(oo)!=1||length(unique(oo))!=length(oo)) oo<-round((ii<-runif(10))/sum(ii),2);oo}) Obviously, for vectors of 20 elements, substitute runif(20) and so on. Now, if you don't care about uniqueness, the following will run much faster (a little over a minute on my computer): foo<-replicate(1e6,{oo<-0;while(sum(oo)!=1) oo<-round((ii<-runif(10))/sum(ii),2);oo}) If you don't care about them adding up to exactly 1, the following should also run faster: foo<-replicate(1e6,{oo<-0;while(length(unique(oo))!=length(oo)) oo<-round((ii<-runif(10))/sum(ii),2);oo}) And if you just want real numbers that add up to 1, and put in the stipulation of two decimal places because you thought it would make the task computationally easier, it doesn't. The easiest and fastest way to generate vectors that each add up to 1 is this (21 seconds on my computer): foo<-replicate(1e6,(ii<-runif(10))/sum(ii)) Actually, this is even faster (less than two seconds): foo <-matrix(runif(1e7),nrow=10); foo<-foo/rbind(colSums(foo))[rep(1,10),] ...the uniqueness and summing up to exactly 1 take care of themselves when you relax the two decimal places requirement. By the way, in all the above examples, foo is a matrix, each of whose columns is a vector $v_k$. I'm sorry if the question I answered was not the one you meant to ask. Most people on the StackOverflow sites will generally just flame you for not providing all relevant information (as if it's always obvious what is relevant to answering a question and what isn't). So, I believe that it's more constructive to instead fill in the blanks with what I would reasonably expect you to mean, and answer that question, thus either guessing right or sending you a positive message to refine the question. Now I have a question, for my own curiosity. What procedure or simulation could possibly require numbers that fit these criteria? Are you trying to empirically get the probability density function of a uniform distribution or something? Because there's probably an easier way.
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large?
To hold the set of vectors that combn(109,9) would return if it could, by my estimate you would need upward of 32 petabytes of memory, which exceeds how much R can allocate. Are the stated criteria th
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large? To hold the set of vectors that combn(109,9) would return if it could, by my estimate you would need upward of 32 petabytes of memory, which exceeds how much R can allocate. Are the stated criteria the only ones that matter to you? Or do the elements of each vector also have to be unique? I went ahead and assumed yes. If so, this should do it (in probably less than half an hour, I only tried it for 1e5 vectors). foo<-replicate(1e6,{oo<-0;while(sum(oo)!=1||length(unique(oo))!=length(oo)) oo<-round((ii<-runif(10))/sum(ii),2);oo}) Obviously, for vectors of 20 elements, substitutes runif(20) and so on. Now, if you don't care about uniqueness, the following will run much faster (a little over a minute on my computer): foo<-replicate(1e6,{oo<-0;while(sum(oo)!=1) oo<-round((ii<-runif(10))/sum(ii),2);oo}) If you don't care about them adding up to exactly 1, the following should also run faster: foo<-replicate(1e6,{oo<-0;while(length(unique(oo))!=length(oo)) oo<-round((ii<-runif(10))/sum(ii),2);oo}) And if you just want real numbers that add up to 1, and put in the stipulation of two decimal places because you thought it would make the task computationally easier, it doesn't. The easiest and fastest way to generate vectors that each add up to 1 is this (21 seconds on my computer): foo<-replicate(1e6,(ii<-runif(10))/sum(ii)) Actually, this is even faster (less than two seconds): foo <-matrix(runif(1e7),nrow=10); foo<-foo/rbind(colSums(foo))[rep(1,10),] ...the uniqueness and summing up to exactly 1 take care of themselves when you relax the two decimal places requirement. By the way, in all the above examples, foo is a matrix, each of whose columns is a vector $v_k$. I'm sorry if the question I answered was not the one you meant to ask. Most people on the StackOverflow sites will generally just flame you for not providing all relevant information (as if it's always obvious what is relevant to answering a question and waht isn't). So, I believe that it's more constructive to instead fill in the blanks with what I would reasonably expect you to mean, and answer that question, thus either guessing right or sending you a positive message to refine the question. Now I have a question, for my own curiousity. What procedure or simulation could possibly require numbers that fit these criteria? Are you trying to empirically get the probability density function of a uniform distribution or something? Because there's probably an easier way.
In R, how to sample from the output of combn(a,b) if the "a choose b" is too large? To hold the set of vectors that combn(109,9) would return if it could, by my estimate you would need upward of 32 petabytes of memory, which exceeds how much R can allocate. Are the stated criteria th
53,838
Interpreting plot of PCA results (from 3 to 2 dimensions)
The previous answer correctly explains what you see, but I would like to add that one of the properties of PCA is that it makes all variables uncorrelated. So after doing PCA each component is guaranteed to be uncorrelated with any other component, and you should not be surprised. If you are a bit familiar with mathematics behind PCA, you know that it is equivalent to diagonalizing the covariance matrix of the data. If the covariance matrix is diagonal, then covariance (and also correlation) between any two different components is zero.
Interpreting plot of PCA results (from 3 to 2 dimensions)
The previous answer correctly explains what you see, but I would like to add that one of the properties of PCA is that it makes all variables uncorrelated. So after doing PCA each component is guarant
Interpreting plot of PCA results (from 3 to 2 dimensions) The previous answer correctly explains what you see, but I would like to add that one of the properties of PCA is that it makes all variables uncorrelated. So after doing PCA each component is guaranteed to be uncorrelated with any other component, and you should not be surprised. If you are a bit familiar with mathematics behind PCA, you know that it is equivalent to diagonalizing the covariance matrix of the data. If the covariance matrix is diagonal, then covariance (and also correlation) between any two different components is zero.
Interpreting plot of PCA results (from 3 to 2 dimensions) The previous answer correctly explains what you see, but I would like to add that one of the properties of PCA is that it makes all variables uncorrelated. So after doing PCA each component is guarant
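A quick numerical check of this property; the data are random and invented purely for the demonstration.
set.seed(1)
X <- cbind(a = rnorm(500), b = rnorm(500), c = rnorm(500))
X[, "b"] <- X[, "a"] + 0.3 * X[, "b"]      # make two of the columns strongly correlated
round(cor(X), 2)                           # original variables: clearly correlated
round(cor(prcomp(X)$x), 10)                # principal component scores: identity matrix, i.e. uncorrelated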
53,839
Interpreting plot of PCA results (from 3 to 2 dimensions)
Your initial data was rotated in the existing three dimensions such that the bulk of the variance was along the X axis, then rotated again such that the remaining variance was predominantly along the Y axis. Then the Z axis was flattened so only the new X and Y axes remained. This article goes into a really good and accessible explanation of what is going on in PCA, I recommend you check it out: http://www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf UPDATE: Regarding your comment below, here's a simple example illustrating why your PCA transformed data doesn't look like the data projected into the X-Y plane: set.seed(123) X = rnorm(100) Y = X + rnorm(100,0,.5) Z = rnorm(100) df = data.frame(X,Y,Z) df.xz = as.matrix(cbind(Y,Z)) plot(df) cor(X,Y) # .8786993 cor(X,Z) # -0.129176 cor(Y,Z) # -0.1019494 # X and Y are tightly correlated. Z is uncorrelated to both pc.sd1 = princomp(df) # plot of first two components looks random. plot(pc.sd1$scores[,1:2]) It makes sense that the resulting plot looks random: it is random. The principal components capture the variance in the data. The idea is that if two variables are tightly correlated, we probably aren't adding much information by including both in our model: we really only need one, especially if one variable is actually a function of the other. Principal components is an easy way to ignore those relationships. The plot in this example is capturing the relationship between X/Y and Z because there's a lot of variance in that (those) relationship(s). There's very little variance in the relationship between X and Y, so PCA allows us to combine those two dimensions into a single dimension so we can focus on the more complex relationship between this new dimension and Z. Let's look at how each point was transformed in each pair of dimensions to further illustrate this. # The end point of each line signifies the end point of the PCA transformation. PCA_transform_plot=function(dims){ plot(df[,dims], main=paste(names(df)[dims], collapse="")) sapply(1:nrow(df), function(i){ lines(rbind(df[i,dims] ,pc.sd1$scores[i,1:2] ) ,col='red' ) }) } par(mfrow=c(1,3)) PCA_transform_plot(c(1,2)) PCA_transform_plot(c(1,3)) PCA_transform_plot(c(2,3)) As you can see, the points are displaced significantly relative to their initial positions in the XY plane, but much less so from their positions in the XZ and YZ planes. Moreover, the transformations in the XZ and YZ planes look very similar. In fact, the starting positions in the XZ and YZ planes look very similar. This isn't surprising: in this example, X and Y are so tightly correlated, they're practically interchangeable. PCA is a technique that lets us say (in this example), "Hey, these variables are so close, we don't really need both. Let's pretend our data is two dimensional instead of three dimensional, because it may as well be."
Interpreting plot of PCA results (from 3 to 2 dimensions)
Your initial data was rotated in the existing three dimensions such that the bulk of the variance was along the X axis, then rotated again such that the remaining variance was predominantly along the
Interpreting plot of PCA results (from 3 to 2 dimensions) Your initial data was rotated in the existing three dimensions such that the bulk of the variance was along the X axis, then rotated again such that the remaining variance was predominantly along the Y axis. Then the Z axis was flattened so only the new X and Y axes remained. This article goes into a really good and accessible explanation of what is going on in PCA, I recommend you check it out: http://www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf UPDATE: Regarding your comment below, here's a simple example illustrating why your PCA transformed data doesn't look like the data projected into the X-Y plane: set.seed(123) X = rnorm(100) Y = X + rnorm(100,0,.5) Z = rnorm(100) df = data.frame(X,Y,Z) df.xz = as.matrix(cbind(Y,Z)) plot(df) cor(X,Y) # .8786993 cor(X,Z) # -0.129176 cor(Y,Z) # -0.1019494 # X and Y are tightly correlated. Z is uncorrelated to both pc.sd1 = princomp(df) # plot of first two components looks random. plot(pc.sd1$scores[,1:2]) It makes sense that the resulting plot looks random: it is random. The principal components capture the variance in the data. The idea is that if two variables are tightly correlated, we probably aren't adding much information by including both in our model: we really only need one, especially if one variable is actually a function of the other. Principal components is an easy way to ignore those relationships. The plot in this example is capturing the relationship between X/Y and Z because there's a lot of variance in that (those) relationship(s). There's very little variance in the relationship between X and Y, so PCA allows us to combine those two dimensions into a single dimension so we can focus on the more complex relationship between this new dimension and Z. Let's look at how each point was transformed in each pair of dimensions to further illustrate this. # The end point of each line signifies the end point of the PCA transformation. PCA_transform_plot=function(dims){ plot(df[,dims], main=paste(names(df)[dims], collapse="")) sapply(1:nrow(df), function(i){ lines(rbind(df[i,dims] ,pc.sd1$scores[i,1:2] ) ,col='red' ) }) } par(mfrow=c(1,3)) PCA_transform_plot(c(1,2)) PCA_transform_plot(c(1,3)) PCA_transform_plot(c(2,3)) As you can see, the points are displaced significantly relative to their initial positions in the XY plane, but much less so from their positions in the XZ and YZ planes. Moreover, the transformations in the XZ and YZ planes look very similar. In fact, the starting positions in the XZ and YZ planes look very similar. This isn't surprising: in this example, X and Y are so tightly correlated, their practically interchangeable. PCA is a technique that let's us say (in this example), "Hey, these variables are so close, we don't really need both. Let's pretend our data is two dimensional instead of three dimensional, because it may as well be."
Interpreting plot of PCA results (from 3 to 2 dimensions) Your initial data was rotated in the existing three dimensions such that the bulk of the variance was along the X axis, then rotated again such that the remaining variance was predominantly along the
53,840
Location, scale, and shape parameters of the lognormal (with notation ambiguity)
Naturally I can't speak authoritatively about any unreferenced papers you (don't) cite, but you appear to be encountering common conventions, apart from one mistake. You are being given location and scale on the back-transformed (logged) scale; things are simplest on that scale, as the equivalent normal has: mean, median and mode identical at $\mu$; a statistically natural scale parameter, namely the standard deviation of logged values $\sigma$; and simple shape properties, those of the normal. When the distribution is exponentiated to become lognormal, mean, median and mode now differ. $e^\mu$ is the median, so cannot be regarded as a scale parameter. But the mean and mode and variance all depend on both $\mu$ and $\sigma$. Skewness and kurtosis depend on $\sigma$, so to the best of my knowledge there is no reason to seek or to quote a shape parameter, shape being determined by $\sigma$. http://en.wikipedia.org/wiki/Lognormal is useful. (The entries on distributions in Wikipedia seem of higher quality than some of the other statistical content.) (The usual terminology of lognormal is entrenched, but it is, I suggest, backwards. A lognormal is an exponentiated normal. Perhaps "expnormal" sounded too ugly, but I guess "lognormal" sounded ugly when introduced. It is also true, naturally, that a lognormal is defined by the fact that if you take logarithms, you get a normal, but it seems more fundamental to define a distribution by how it is generated, not by how you should want to transform it.)
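A quick simulation makes these points concrete. The sketch below is only an illustration: the values of mu and sigma are arbitrary choices, not taken from any paper, and the skewness function is a rough moment-based version.
set.seed(1)
mu <- 1; sigma <- 0.8
x <- rlnorm(1e6, meanlog = mu, sdlog = sigma)
c(sample_median = median(x), exp_mu = exp(mu))           # median of the lognormal is exp(mu)
c(sample_mean = mean(x), theory = exp(mu + sigma^2/2))   # mean depends on both mu and sigma
skew <- function(z) mean((z - mean(z))^3) / sd(z)^3      # crude sample skewness
skew(x)                                                  # changing mu leaves skewness alone...
skew(rlnorm(1e6, meanlog = 5, sdlog = sigma))            # ...because shape is set by sigma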
Location, scale, and shape parameters of the lognormal (with notation ambiguity)
Naturally I can't speak authoritatively about any unreferenced papers you (don't) cite, but you appear to be encountering common conventions, apart from one mistake. You are being given location and
Location, scale, and shape parameters of the lognormal (with notation ambiguity) Naturally I can't speak authoritatively about any unreferenced papers you (don't) cite, but you appear to be encountering common conventions, apart from one mistake. You are being given location and scale on the back-transformed (logged) scale; things are simplest on that scale as the equivalent normal has Mean, median and mode identical at $\mu$. A statistically natural scale parameter, namely the standard deviation of logged values $\sigma$. Simple shape properties, those of the normal. When the distribution is exponentiated to become lognormal, mean, median and mode now differ. $e^\mu$ is the median, so cannot be regarded as a scale parameter. But the mean and mode and variance all depend on both $\mu$ and $\sigma$. Skewness and kurtosis depend on $\sigma$, so to the best of my knowledge there is no reason to seek or to quote a shape parameter, shape being determined by $\sigma$. http://en.wikipedia.org/wiki/Lognormal is useful. (The entries on distributions in Wikipedia seem of higher quality than some of the other statistical content.) (The usual terminology of lognormal is entrenched, but it is, I suggest, backwards. A lognormal is an exponentiated normal. Perhaps "expnormal" sounded too ugly, but I guess "lognormal" sounded ugly when introduced. It is also true, naturally, that a lognormal is defined by the fact that if you take logarithms, you get a normal, but it seems more fundamental to define a distribution by how it is generated, not by how you should want to transform it.)
Location, scale, and shape parameters of the lognormal (with notation ambiguity) Naturally I can't speak authoritatively about any unreferenced papers you (don't) cite, but you appear to be encountering common conventions, apart from one mistake. You are being given location and
53,841
Does the Holt-Winters algorithm for exponential smoothing in time series modelling require the normality assumption in residuals?
There is no normality assumption in fitting an exponential smoothing model. Even if maximum likelihood estimation is used with a Gaussian likelihood, the estimates will still be good under almost all residual distributions. There is also no normality assumption when producing point forecasts from an exponential smoothing model. However, there is often a normality assumption when producing prediction intervals from an exponential smoothing model. But this assumption is easily removed by using bootstrap prediction intervals. For example, if you use R, then the following code produces forecasts from a multiplicative Holt-Winters model with no normality assumption for the prediction intervals: library(forecast) fcast <- hw(x, seasonal="multiplicative", bootstrap=TRUE, simulate=TRUE) By the way, you don't need to worry about normality of residuals in regression either, except if you produce prediction intervals. As with most statistical models, the estimates are more efficient if the residuals are normally distributed, but they are still consistent and unbiased if the residuals are non-normal.
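As a sketch of how this looks in practice, one might compare the normal-theory and bootstrap intervals on a convenient series; here the built-in AirPassengers data is used purely for illustration (it is not the series from the question), and the arguments are the same ones used in the call above.
library(forecast)
fc_normal <- hw(AirPassengers, seasonal = "multiplicative", h = 24)
fc_boot   <- hw(AirPassengers, seasonal = "multiplicative", h = 24,
                bootstrap = TRUE, simulate = TRUE)
# the 95% upper limits differ once the Gaussian assumption is dropped
cbind(normal = fc_normal$upper[1:6, "95%"], bootstrap = fc_boot$upper[1:6, "95%"])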
Does the Holt-Winters algorithm for exponential smoothing in time series modelling require the norma
There is no normality assumption in fitting an exponential smoothing model. Even if maximum likelihood estimation is used with a Gaussian likelihood, the estimates will still be good under almost all
Does the Holt-Winters algorithm for exponential smoothing in time series modelling require the normality assumption in residuals? There is no normality assumption in fitting an exponential smoothing model. Even if maximum likelihood estimation is used with a Gaussian likelihood, the estimates will still be good under almost all residual distributions. There is also no normality assumption when producing point forecasts from an exponential smoothing model. However, there is often a normality assumption when producing prediction intervals from an exponential smoothing model. But this assumption is easily removed by using bootstrap prediction intervals. For example, if you use R, then the following code produces forecasts from a multiplicative Holt-Winters model with no normality assumption for the prediction intervals: library(forecast) fcast <- hw(x, seasonal="multiplicative", bootstrap=TRUE, simulate=TRUE) By the way, you don't need to worry about normality of residuals in regression either, except if you produce prediction intervals. As with most statistical models, the estimates are more efficient if the residuals are normally distributed, but they are still consistent and unbiased if the residuals are non-normal.
Does the Holt-Winters algorithm for exponential smoothing in time series modelling require the norma There is no normality assumption in fitting an exponential smoothing model. Even if maximum likelihood estimation is used with a Gaussian likelihood, the estimates will still be good under almost all
53,842
Negative inner fence value in a box and whiskers plot?
You just calculated the limits on where the whiskers can go, not the whisker ends themselves (which are limited to the range of the data). The actual whiskers go out to the furthest-out data value inside the (inner) fences. The fences can go outside the data, but aren't actually plotted. (for more details see this image, which is from this page)
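A small R sketch with made-up skewed data shows the distinction: the fence can be negative even though the whisker itself stops at an observed value.
set.seed(7)
x <- rexp(50)                          # strictly positive, right-skewed data
s <- boxplot.stats(x)$stats            # whisker end, lower hinge, median, upper hinge, whisker end
iqr <- s[4] - s[2]
c(lower_fence = s[2] - 1.5 * iqr, upper_fence = s[4] + 1.5 * iqr)  # lower fence may be < 0
s[c(1, 5)]                             # actual whisker ends: most extreme data inside the fences
range(x)                               # the whiskers never extend beyond the data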
Negative inner fence value in a box and whiskers plot?
You just calculated the limits on where the whiskers can go, not the whisker ends themselves (which are limited to the range of the data). The actual whiskers go out to the furthest-out data value in
Negative inner fence value in a box and whiskers plot? You just calculated the limits on where the whiskers can go, not the whisker ends themselves (which are limited to the range of the data). The actual whiskers go out to the furthest-out data value inside the (inner) fences. The fences can go outside the data, but aren't actually plotted. (for more details see this image, which is from this page)
Negative inner fence value in a box and whiskers plot? You just calculated the limits on where the whiskers can go, not the whisker ends themselves (which are limited to the range of the data). The actual whiskers go out to the furthest-out data value in
53,843
Given the below dependent variable description, should I choose either Ordered or Multinomial logistic regressions?
Let's start with the good news: the proportional odds assumption does not require that the distances between categories are the same. So what does the proportional odds assumption imply? Say we have three ordered outcomes (1, 2, 3), then we could model the choice of 1 versus 2 or 3 and the choice of 2 versus 3. The proportional odds assumption says that the effects of all explanatory/independent/x-variables are the same across these two choice comparisons. This is not a trivial assumption, but you obviously can't get the simplification of getting one set of parameters for multiple outcomes without making some strong assumptions. Tests exist for this assumption. The one I most often come across is an approximate test by Brant (1990). Alternatives are the closely related Wolfe and Gould (1998) test and the usual Wald, likelihood ratio and score tests. Richard Williams and I have been doing simulations comparing these different tests, to investigate under which circumstances these tests perform well and when they break down. We recently presented first results at the 2013 German Stata Users' meeting, and the slides are here: http://www.maartenbuis.nl/presentations/gsug13.pdf Brant, R. (1990). Assessing proportionality in the proportional odds model for ordinal logistic regression. Biometrics, 46(4):1171–1178. http://www.jstor.org/stable/2532457 Wolfe, R. and Gould, W. W. (1998). An approximate likelihood-ratio test for ordinal response models. Stata Technical Bulletin, 7(42):24–27. http://www.stata.com/products/stb/journals/stb42.pdf
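To see what the assumption means in practice, here is a rough sketch with simulated data (all numbers are invented): fit the two cumulative binary logits separately and compare the slopes, which is essentially the comparison the Brant test formalizes.
set.seed(42)
n <- 500
x <- rnorm(n)
eta <- 1.2 * x                                   # data generated from a proportional-odds model
u <- runif(n)
y <- ifelse(u < plogis(-0.5 - eta), 1,
     ifelse(u < plogis( 0.8 - eta), 2, 3))       # ordered outcome with 3 categories
fit_23 <- glm(I(y >= 2) ~ x, family = binomial)  # category 1 versus {2, 3}
fit_3  <- glm(I(y >= 3) ~ x, family = binomial)  # categories {1, 2} versus 3
c(coef(fit_23)["x"], coef(fit_3)["x"])           # similar slopes, as proportional odds implies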
Given the below dependent variable description, should I choose either Ordered or Multinomial logist
Lets start with the good news: The proportional odds assumptions does not require that the distances between categories are the same. So what does the proportional odds assumption imply? Say we have
Given the below dependent variable description, should I choose either Ordered or Multinomial logistic regressions? Lets start with the good news: The proportional odds assumptions does not require that the distances between categories are the same. So what does the proportional odds assumption imply? Say we have three ordered outcomes (1, 2, 3), then we could model the choice of 1 versus 2 or 3 and the choice of 2 versus 3. The proportional odds assumption says that the effects of all explanatory/independent/x-variables are the same across these two choice comparisons. This is not a trivial assumption, but you obviously can't get the simplification of getting one set of parameters for multiple outcomes without making some strong assumptions. Test exist for this assumption. The one I most often come across is an approximate test by Brant (1990). Alternatives are the closely related Wolfe and Gould (1998) test and the usual Wald, likelihood ratio and score test. Richard Williams and I have been doing simulations comparing these different tests, to investigate under which circumstances these tests perform well and when they break down. We recently presented first results at the 2013 German Stata Users' meeting, and the slides are here: http://www.maartenbuis.nl/presentations/gsug13.pdf Brant, R. (1990). Assessing proportionality in the proportional odds model for ordinal logistic regression. Biometrics, 46(4):1171–1178. http://www.jstor.org/stable/2532457 Wolfe, R. and Gould, W. W. (1998). An approximate likelihood-ratio test for ordinal response models. Stata Technical Bulletin, 7(42):24–27. http://www.stata.com/products/stb/journals/stb42.pdf
Given the below dependent variable description, should I choose either Ordered or Multinomial logist Lets start with the good news: The proportional odds assumptions does not require that the distances between categories are the same. So what does the proportional odds assumption imply? Say we have
53,844
is this interpretation of the p-value legit?
The null hypothesis in this case is that the proportions are equal. One alternative hypothesis that the two are not equal. The statement "there is a 79.76% chance that that the proportions are different" is a statement about the probability of the alternative hypothesis. This is not the correct interpretation of the p-value. Recall that a p-value is the probability that, purely due to the features of random sampling under the null model, you would observe a test statistic at least as extreme as that calculated for your particular sample. All the p-value tells you is the probability of observing the data, given the null model.
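A small simulation (with invented proportions) illustrates the correct reading: when the null model is true, the p-value simply reflects sampling variation, so roughly 5% of replications fall below 0.05 and the p-value says nothing about the probability of the alternative.
set.seed(3)
pvals <- replicate(2000, {
  a <- rbinom(1, 200, 0.3)                    # both samples share the same true proportion
  b <- rbinom(1, 200, 0.3)
  prop.test(c(a, b), c(200, 200))$p.value
})
mean(pvals <= 0.05)                           # close to 0.05 under the null
hist(pvals)                                   # roughly uniform; a p-value is not P(H1 is true)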
is this interpretation of the p-value legit?
The null hypothesis in this case is that the proportions are equal. One alternative hypothesis that the two are not equal. The statement "there is a 79.76% chance that that the proportions are differe
is this interpretation of the p-value legit? The null hypothesis in this case is that the proportions are equal. One alternative hypothesis that the two are not equal. The statement "there is a 79.76% chance that that the proportions are different" is a statement about the probability of the alternative hypothesis. This is not the correct interpretation of the p-value. Recall that a p-value is the probability that, purely due to the features of random sampling under the null model, you would observe a test statistic at least as extreme as that calculated for your particular sample. All the p-value tells you is the probability of observing the data, given the null model.
is this interpretation of the p-value legit? The null hypothesis in this case is that the proportions are equal. One alternative hypothesis that the two are not equal. The statement "there is a 79.76% chance that that the proportions are differe
53,845
Purpose of ScheffΓ©'s confidence intervals
This is a classic case of simultaneous inference used for selective inference (see [1]). I will explain. "Selective (marginal) inference" is when you want to make inference on a data-driven subset of parameters. In the ANOVA case, the subset would be several contrasts. "Simultaneous inference" is when you want to make inference on a particular vector of parameters. Naturally, simultaneous entails selective, as the joint truthfulness of a vector entails the truthfulness of each of its (selected) elements. Scheffé's view of the problem is the following: since he does not know a priori the contrast the researcher will study, he will offer simultaneous control over all possible contrasts. In this case, no matter what contrast the researcher chooses, data-driven or not, he is already controlling for it. In conclusion: if using Scheffé's method for inference or CI, there is no problem in inferring on selected parameters. The problem is that for a particular parameter/contrast, it is excessively conservative (i.e., low powered). [1] Cox, D. R. 1965. "A Remark on Multiple Comparison Methods." Technometrics 7 (2): 223–224. doi:10.1080/00401706.1965.10490250.
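A rough R sketch of the point, using simulated one-way data with arbitrary group means: the Scheffé interval below remains valid even though the contrast is chosen after looking at the group means, at the price of being wide.
set.seed(9)
g <- gl(4, 10)                                     # 4 groups, 10 observations each
y <- rnorm(40, mean = rep(c(0, 0, 0, 1), each = 10))
fit <- aov(y ~ g)
m   <- tapply(y, g, mean); n_i <- tapply(y, g, length)
mse <- deviance(fit) / df.residual(fit)
w <- c(-1, -1, -1, 3) / 3                          # contrast suggested by the data
est <- sum(w * m)
se  <- sqrt(mse * sum(w^2 / n_i))
crit <- sqrt((4 - 1) * qf(0.95, 4 - 1, df.residual(fit)))  # Scheffe critical value
est + c(-1, 1) * crit * se                         # simultaneous 95% interval, wider than t-based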
Purpose of ScheffΓ©'s confidence intervals
This is a classic case of simultaneous inference used for selective inference (see [1]). I will explain. "Selective (marginal) inference" is when you want to make inference on a data-driven subset of
Purpose of ScheffΓ©'s confidence intervals This is a classic case of simultaneous inference used for selective inference (see [1]). I will explain. "Selective (marginal) inference" is when you want to make inference on a data-driven subset of parameters. In the ANOVA case, the subset would be several contrasts. "Simultaneous inference" is when you want to make inference on a particular vector of parameters. Naturally, simultaneous entails selective, as the joint truthfulness of a vector, entails the truthfulness of each of its (selected) elements. Sheffe's view of the problem is the following: since he does not know a-priori the contrast the researcher will study, he will offer simultaneous control over all possible contrasts. In this case, no matter what contrast the researcher chooses, data driven or not, he is already controlling for it. In conclusion: if using Scheffe's method for inference or CI, there is no problem in inferring on selected parameters. The problem is that for a particular parameter/contrast, it is excessively conservative (i.e., low powered). [1] Cox, D. R. 1965. β€œA Remark on Multiple Comparison Methods.” Technometrics 7 (2): 223–224. doi:10.1080/00401706.1965.10490250.
Purpose of ScheffΓ©'s confidence intervals This is a classic case of simultaneous inference used for selective inference (see [1]). I will explain. "Selective (marginal) inference" is when you want to make inference on a data-driven subset of
53,846
Purpose of ScheffΓ©'s confidence intervals
"A test of contrasts will only have the correct distribution under the null when the set of contrasts that might be tested is specified before you see the data." would have been a better way of putting it. ScheffΓ©'s method then specifies all possible contrasts between the means of pre-defined groups & I doubt there's any remaining disagreement. I think the point here was that you seemed to be proposing the application of a new contrast-based test suggested by particular data to that data itself; and the experimentwise error rate is not really controlled in such a situation because the set of tests you might have invented is not well defined, even though once you've defined the test the set of contrasts is.
Purpose of ScheffΓ©'s confidence intervals
"A test of contrasts will only have the correct distribution under the null when the set of contrasts that might be tested is specified before you see the data." would have been a better way of puttin
Purpose of ScheffΓ©'s confidence intervals "A test of contrasts will only have the correct distribution under the null when the set of contrasts that might be tested is specified before you see the data." would have been a better way of putting it. ScheffΓ©'s method then specifies all possible contrasts between the means of pre-defined groups & I doubt there's any remaining disagreement. I think the point here was that you seemed to be proposing the application of a new contrast-based test suggested by particular data to that data itself; and the experimentwise error rate is not really controlled in such a situation because the set of tests you might have invented is not well defined, even though once you've defined the test the set of contrasts is.
Purpose of ScheffΓ©'s confidence intervals "A test of contrasts will only have the correct distribution under the null when the set of contrasts that might be tested is specified before you see the data." would have been a better way of puttin
53,847
Can time series data have both unit root and structural break?
Others have already answered about the wrong idea of hypothesis testing. Regarding your question: yes, it is possible. Take for example this time series. It is the price series of a stock of the S&P 500 (therefore, real data) in a time span of more than 10 years. As you can see, the series presents what seems to be a structural change. The ADF, with constant and constant plus trend, gives p-values of 0.003629 and 0.01257 respectively, therefore we can reject the null hypothesis of a unit root (if we assume a level of significance of 5%). Since we suspect the presence of a break, I also ran a unit root test with an unknown breakpoint, whose computed p-value is equal to 0.0239; once again we can reject the null of a unit root. The second figure shows an OLS-based CUSUM test with alternative boundaries and alpha equal to 0.05, plotted with the R package "strucchange", which shows you the contemporaneous presence of a break.
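The original stock series is not available here, but a hedged sketch on a simulated series with a level shift shows the same machinery; the strucchange calls follow the plot described above, and urca::ur.za is one option for a unit-root test allowing an unknown break (both are assumptions about tooling, not the exact code used for the figures).
library(tseries)
library(strucchange)
set.seed(11)
y <- arima.sim(list(ar = 0.5), n = 300) + rep(c(0, 4), each = 150)  # stationary AR(1) plus level shift
adf.test(y)                                       # ADF test (fits constant and trend)
cus <- efp(y ~ 1, type = "OLS-CUSUM")
plot(cus, alpha = 0.05, alt.boundary = TRUE)      # boundary crossing suggests a break
sctest(cus)
# urca::ur.za(y, model = "intercept")             # unit-root test with one unknown breakpoint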
Can time series data have both unit root and structural break?
Others have already answered about the wrong idea of hypothesis testing. Regarding your question : yes it is possible. Take for example this time series. It is the price series of a stock of the S&P
Can time series data have both unit root and structural break? Others have already answered about the wrong idea of hypothesis testing. Regarding your question : yes it is possible. Take for example this time series. It is the price series of a stock of the S&P 500 (therefore, real data) in a time span of more than 10 years. As you can see the series presents what seems to be a structural change. The ADF, with constant and constant plus trend, gives a p-value respectively of 0.003629 and 0.01257, therefore we can reject the null hypothesis of a unit root (if we assume a level of significance of 5%). Since we suspect the presence of a break, I run also a unit root test with an unknown breakpoint, which computed p-value is equal to 0.0239, once again we can reject the null of a unit root. The second figure shows an OLS-based CUSUM test with alternative boundaries and alfa equals to 0.05 plotted with the R package "strucchange", which shows you the contemporaneous precence of a break.
Can time series data have both unit root and structural break? Others have already answered about the wrong idea of hypothesis testing. Regarding your question : yes it is possible. Take for example this time series. It is the price series of a stock of the S&P
53,848
Can time series data have both unit root and structural break?
The two older answers already give you a good sense, yet I would like to add some points about the nature of unit roots and unit root testing. Please refer to the following question: What is the difference between a stationary test and a unit root test? Here is the part of my answer which matters for you: How unit-root tests and stationarity tests complement each other. If you have a time series data set as it usually appears in econometric time series, I propose you apply both a unit root test ((Augmented) Dickey-Fuller or Phillips-Perron, depending on the structure of the underlying data) and a KPSS test. Case 1: Unit root test: you can't reject $H_0$; KPSS test: reject $H_0$. Both imply that the series has a unit root. Case 2: Unit root test: reject $H_0$; KPSS test: don't reject $H_0$. Both imply that the series is stationary. Case 3: If we can't reject in both tests: the data do not give enough observations. Case 4: Reject unit root, reject stationarity: both hypotheses are component hypotheses; heteroskedasticity in the series may make a big difference, and if there is a structural break it will affect inference. Power problem: if there is a small random walk component (small variance $\sigma^2_u$), we can't reject the unit root and can't reject stationarity. Your data does not reject a unit root or reject stationarity. Certain hypothesis tests reject a unit root or reject stationarity. Maybe you can go deeper into hypothesis testing for unit roots and stationarity and there won't be any contradiction between unit roots and stationarity anymore. If your data has a structural break it means that a pattern in the data "breaks" exactly one time. If your data has a unit root it means that a pattern in the data "breaks" in every time period. However there are various possibilities between "one break" and a "break in every time period". The paper Multiple Trend Breaks And The Unit-Root Hypothesis by Lumsdaine and Papell examines the possibility of having more than one structural break in the data. They also show a test for testing for more than one structural break.
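A minimal sketch of running the two tests side by side on simulated series (one stationary AR(1), one random walk), just to show how the cases above arise; the data are invented for illustration.
library(tseries)
set.seed(5)
x_stat <- arima.sim(list(ar = 0.5), n = 300)       # stationary
x_rw   <- cumsum(rnorm(300))                       # random walk (unit root)
c(adf = adf.test(x_stat)$p.value, kpss = kpss.test(x_stat)$p.value)  # typically Case 2
c(adf = adf.test(x_rw)$p.value,   kpss = kpss.test(x_rw)$p.value)    # typically Case 1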
Can time series data have both unit root and structural break?
The two older answers already give you a good sense, yet I would like to add some points about the nature of unit-roots and unit root testing. Please refer to the following question in order What is
Can time series data have both unit root and structural break? The two older answers already give you a good sense, yet I would like to add some points about the nature of unit-roots and unit root testing. Please refer to the following question in order What is the difference between a stationary test and a unit root test? Here is the part of my answer which matters for you: How unit-root test and stationarity-test complement each other If you have a time series data set how it usually appears in econometric time series I propose you should apply both a Unit root test: (Augmented) Dickey Fuller or Phillips-Perron depending on the structure of the underlying data and a KPSS test. Case 1 Unit root test: you can’t reject $H_0$; KPSS test: reject $H_0$. Both imply that series has unit root. Case 2 Unit root test: Reject $H_0$. KPSS test: don`t reject $H_0$. Both imply that series is stationary. Case 3 If we can’t reject both test: data give not enough observations. Case 4 Reject unit root, reject stationarity: both hypothesis are component hypothesis – heteroskedasticity in series may make a big difference; if there is structural break it will affect inference. Power problem: if there is small random walk component (small variance $Οƒ^2$ u ), we can’t reject unit root and can’t reject stationarity. Your data does not reject a unit root or reject stationarity. Certain hypothesis tests reject a unit root or reject stationarity. Maybe you can go deeper into hypothesis testing for unit roots and stationarity and there won't be any contradiction between unit roots and stationarity anymore. If your data has a structural break it means that a pattern in the data "breaks" exactly one time. If your data has a unit root it means that a pattern in the data "breaks" in every time period. However there are various possibilities between "one break" and a "break in every time period". The paper Multiple Trend Breaks And The Unit-Root Hypothesis by Lumsdaine and Papell examines the possibility of having more than one structural break in the data. They also show a test for testing for more than one structural break.
Can time series data have both unit root and structural break? The two older answers already give you a good sense, yet I would like to add some points about the nature of unit-roots and unit root testing. Please refer to the following question in order What is
53,849
Can time series data have both unit root and structural break?
Yes, it can. Nothing prevents the following DGP from existing: $$ y_t = \begin{cases} c,& \text{if } t = 0 \\ y_{t-1} + e_t,& \text{if } 0 < t < t_1\\ -y_{t-1} + e_t,& \text{if } t_1 \leq t \end{cases} $$ where $e_t \sim N(0,\sigma^2)$. An implementation in R: remove(list = ls()) set.seed(1) i <- 1 n1 <- 50 n2 <- 100 y <- seq(10, 10, length.out=n2) e <- rnorm(n2,0,6) for(i in 2:n1){ y[i] <- y[i - 1] + e[i] } for(j in n1:n2){ y[j] <- -y[j - 1] + e[j] } plot(y, type="o") library(tseries) adf.test(y) The plot shows the simulated series with its break, and the test result is: Augmented Dickey-Fuller Test data: y Dickey-Fuller = -2.564, Lag order = 4, p-value = 0.343 alternative hypothesis: stationary
Can time series data have both unit root and structural break?
Yes, it can. Nothing restricts the following DGP to exist: $$ y_t = \begin{cases} c,& \text{if } t = 0 \\ y_{t-1} + e_t,& \text{if } 0 < t < t_1\\ -y_{t-1} + e_t,& \text{if } t_1 \leq t \
Can time series data have both unit root and structural break? Yes, it can. Nothing restricts the following DGP to exist: $$ y_t = \begin{cases} c,& \text{if } t = 0 \\ y_{t-1} + e_t,& \text{if } 0 < t < t_1\\ -y_{t-1} + e_t,& \text{if } t_1 \leq t \end{cases} $$ where $e_t \sim N(0,\sigma^2)$. An implementation in R: remove(list = ls()) set.seed(1) i <- 1 n1 <- 50 n2 <- 100 y <- seq(10, 10, length.out=n2) e <- rnorm(n2,0,6) for(i in 2:n1){ y[i] <- y[i - 1] + e[i] } for(j in n1:n2){ y[j] <- -y[j - 1] + e[j] } plot(y, type="o") library(tseries) adf.test(y) The graph is and the test result is: Augmented Dickey-Fuller Test data: y Dickey-Fuller = -2.564, Lag order = 4, p-value = 0.343 alternative hypothesis: stationary
Can time series data have both unit root and structural break? Yes, it can. Nothing restricts the following DGP to exist: $$ y_t = \begin{cases} c,& \text{if } t = 0 \\ y_{t-1} + e_t,& \text{if } 0 < t < t_1\\ -y_{t-1} + e_t,& \text{if } t_1 \leq t \
53,850
Can time series data have both unit root and structural break?
Partially answered in comments: Your data doesn't reject anything, it's just numbers. I assume you applied some test. Wouldn't a structural break invalidate the test assumptions? But yes, you can write models that have a unit root and a structural break, so data could have both. – Glen_b
Can time series data have both unit root and structural break?
Partially answered in comments: Your data doesn't reject anything, it's just numbers. I assume you applied some test. Wouldn't a structural break invalidate the test assumptions? But yes, you can wri
Can time series data have both unit root and structural break? Partially answered in comments: Your data doesn't reject anything, it's just numbers. I assume you applied some test. Wouldn't a structural break invalidate the test assumptions? But yes, you can write models that have a unit root and a structural break, so data could have both. – Glen_b
Can time series data have both unit root and structural break? Partially answered in comments: Your data doesn't reject anything, it's just numbers. I assume you applied some test. Wouldn't a structural break invalidate the test assumptions? But yes, you can wri
53,851
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable?
Whatever software you used was evidently reporting coefficients to 3 d.p. So 0.000 just meant <0.0005. It makes perfect sense to use units of measurement that yield coefficients that aren't inconveniently large or small. No statistical principle is violated thereby. You don't need a reference or authority to back this up: it is fine to choose (e.g.) mm or m or km or miles or feet or inches for lengths depending on a problem and which units are familiar in your field. In your case, how about dividing by 60 to get counts per second? Do people in your field ever use that as a unit?
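A tiny sketch with invented counts shows that nothing changes except the size of the coefficient: the slope scales by 60 and the p-value is untouched.
set.seed(2)
counts_per_min <- rpois(100, 600)
y <- 0.0004 * counts_per_min + rnorm(100, sd = 0.05)
fit_min <- lm(y ~ counts_per_min)
fit_sec <- lm(y ~ I(counts_per_min / 60))            # same data, expressed as counts per second
c(coef(fit_min)[2], coef(fit_sec)[2])                # second slope is exactly 60 times the first
c(summary(fit_min)$coefficients[2, 4],               # identical p-values
  summary(fit_sec)$coefficients[2, 4])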
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable?
Whatever software you used was evidently reporting coefficients to 3 d.p. So 0.000 just meant <0.0005. It makes perfect sense to use units of measurement that yield coefficients that aren't inconveni
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable? Whatever software you used was evidently reporting coefficients to 3 d.p. So 0.000 just meant <0.0005. It makes perfect sense to use units of measurement that yield coefficients that aren't inconveniently large or small. No statistical principle is violated thereby. You don't need a reference or authority to back this up: it is fine to choose (e.g.) mm or m or km or miles or feet or inches for lengths depending on a problem and which units are familiar in your field. In your case, how about dividing by 60 to get counts per second? Do people in your field ever use that as a unit?
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable? Whatever software you used was evidently reporting coefficients to 3 d.p. So 0.000 just meant <0.0005. It makes perfect sense to use units of measurement that yield coefficients that aren't inconveni
53,852
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable?
One way to look at it is this. If you change the units of all independent variables (while keeping the same units for the dependent variable) then you should expect the regression coefficients to change. The smaller the units, implying larger values, the smaller the coefficients. No academic paper would make such a basic point, but if you need a justification you could refer to the matrix-form coefficient formula: $\beta = (X^{T}X)^{-1}X^{T}{\bf{y}}$ Here the matrix of values of independent variables $X$ appears as the inverse of a squared term times an unsquared term, the net effect being that smaller units, implying larger values within $X$, reduce the value of coefficients. In practice things are a little more complicated where the regression includes a constant term, requiring a column of 1's within $X$. Changing the units of the independent variables will not change the constant (which would change only if the units of the dependent variable were changed). By contrast, the p-value of a coefficient is a unit-free measure of significance (of its difference from zero) and will not change when the units of the variable are changed.
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable?
One way to look at it is this. If you change the units of all independent variables (while keeping the same units for the dependent variable) then you should expect the regression coefficients to cha
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable? One way to look at it is this. If you change the units of all independent variables (while keeping the same units for the dependent variable) then you should expect the regression coefficients to change. The smaller the units, implying larger values, the smaller the coefficients. No academic paper would make such a basic point, but if you need a justification you could refer to the matrix-form coefficient formula: $\beta = (X^{T}X)^{-1}X^{T}{\bf{y}}$ Here the matrix of values of independent variables $X$ appears as the inverse of a squared term times an unsquared term, the net effect being that smaller units, implying larger values within $X$, reduce the value of coefficients. In practice things are a little more complicated where the regression includes a constant term, requiring a column of 1's within $X$. Changing the units of the independent variables will not change the constant (which would change only if the units of the dependent variable were changed). By contrast, the p-value of a coefficient is a unit-free measure of significance (of its difference from zero) and will not change when the units of the variable are changed.
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable? One way to look at it is this. If you change the units of all independent variables (while keeping the same units for the dependent variable) then you should expect the regression coefficients to cha
53,853
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable?
I like to think of this as a problem in choosing representative or otherwise sensible values of x1 and x2 for a predictor X, and getting predicted Y when X=x2 minus predicted Y when X=x1. It is easy to get a confidence interval for such a difference (in R this is a feature of the rms package). This handles nonlinearities in the X effect. I don't like to recode data just to make it easier to get predicted values. I like to fully specify exactly what I'm estimating. Often I choose x1=25th percentile of X, x2=75th percentile.
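Here is a base-R sketch of the same idea with simulated data (the rms package automates this, including the interval): compare predictions at the quartiles of X and attach a confidence interval to the difference. The model and numbers are made up for illustration.
set.seed(8)
x <- rnorm(200)
y <- 1 + 0.5 * x - 0.2 * x^2 + rnorm(200)
fit <- lm(y ~ x + I(x^2))                      # handles the nonlinearity in x
q <- quantile(x, c(.25, .75))                  # x1 = 25th percentile, x2 = 75th percentile
Xq <- cbind(1, q, q^2)
l  <- Xq[2, ] - Xq[1, ]                        # contrast: prediction at x2 minus prediction at x1
est <- sum(l * coef(fit))
se  <- drop(sqrt(t(l) %*% vcov(fit) %*% l))
est + c(-1, 0, 1) * qt(.975, df.residual(fit)) * se   # lower limit, estimate, upper limit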
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable?
I like to think of this as a problem in choosing representative or otherwise sensible values of x1 and x2 for a predictor X, and getting predicted Y when X=x2 minus predicted Y when X=x1. It is easy
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable? I like to think of this as a problem in choosing representative or otherwise sensible values of x1 and x2 for a predictor X, and getting predicted Y when X=x2 minus predicted Y when X=x1. It is easy to get a confidence intervals for such a difference (in R this is a feature of the rms package). This handles nonlinearities in the X effect. I don't like to recode data just to make it easier to get predicted values. I like to fully specify exactly what I'm estimating. Often I choose x1=25th percentile of X, x2=75th percentile.
Is it valid to divide data by a constant to make the estimated beta larger / more interpretable? I like to think of this as a problem in choosing representative or otherwise sensible values of x1 and x2 for a predictor X, and getting predicted Y when X=x2 minus predicted Y when X=x1. It is easy
53,854
Monotonic and non-monotonic patterns of missing values: how do they look like?
The definition of monotone missingness is that once the subject drops out, he stays out forever, while for non-monotone missingness the subject may come back and then be missing again. For example, if we follow one subject for five years and he drops out in the third year, a monotone pattern looks like o o m m m, and one kind of non-monotone pattern is o o m o m, where o indicates observed and m indicates missing. So the third o in the non-monotone pattern is like an island. This is just a way of classifying the pattern of missingness, and generally monotone missingness is easier to handle.
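A small helper function (hypothetical, written just for illustration) makes the classification concrete: a subject's follow-up pattern is monotone if nothing is observed after the first missing visit.
is_monotone <- function(obs) {            # obs: logical vector, TRUE = observed
  if (all(obs)) return(TRUE)
  first_miss <- which(!obs)[1]
  !any(obs[first_miss:length(obs)])       # no observed value after the first dropout
}
is_monotone(c(TRUE, TRUE, FALSE, FALSE, FALSE))  # o o m m m -> TRUE
is_monotone(c(TRUE, TRUE, FALSE, TRUE, FALSE))   # o o m o m -> FALSE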
Monotonic and non-monotonic patterns of missing values: how do they look like?
The definition of monotonic missing is that, once the subject dropped out he will drop out forever, while for non-monotonic missing the subject may come back or be missing again. For example, if we fo
Monotonic and non-monotonic patterns of missing values: how do they look like? The definition of monotonic missing is that, once the subject dropped out he will drop out forever, while for non-monotonic missing the subject may come back or be missing again. For example, if we follow one subject for five years and he dropped out in the third year, monotonic missing is like o o m m m, and one kind of non-monotonic missing can be o o m o m, where o indicates observed, m indicates missing. So the third o in the non-monotonic missing is like an island. This is just to classify the pattern of missing, and generally the monotonic missing is easier to handle.
Monotonic and non-monotonic patterns of missing values: how do they look like? The definition of monotonic missing is that, once the subject dropped out he will drop out forever, while for non-monotonic missing the subject may come back or be missing again. For example, if we fo
53,855
Monotonic and non-monotonic patterns of missing values: how do they look like?
Actually the accepted answer is not completely correct. According to the Statistical Analysis with Missing Data book, a monotone pattern is: monotone missing data: where the variables can be arranged so that all Y(j+1) ... Y(K) are missing for cases where Y(j) is missing, for all j = 1, ..., K-1. So the Y's may be taken in any order. This means that o o m o m can also be a monotone pattern, as long as every other case has either a subset or a superset of those variables missing.
Monotonic and non-monotonic patterns of missing values: how do they look like?
Actually the accepted answer is not completely correct. According to the Statistical Analysis with Missing Data book a monotone pattern is: monotone missing data: where the variables can be arranged
Monotonic and non-monotonic patterns of missing values: how do they look like? Actually the accepted answer is not completely correct. According to the Statistical Analysis with Missing Data book a monotone pattern is: monotone missing data: where the variables can be arranged so that all Y(j+1) ... Y(K) are missing for cases where Y(j) is missing, for all J = 1 ... K-1. So for ANY order of Y. This means that also o o m o m can be a monotone pattern as long as the rest of the data has either a subset of a superset of the parameters missing.
Monotonic and non-monotonic patterns of missing values: how do they look like? Actually the accepted answer is not completely correct. According to the Statistical Analysis with Missing Data book a monotone pattern is: monotone missing data: where the variables can be arranged
53,856
Proof of normal equation in regression using tensor notation
A "modern" (post-1900) view of tensors is a geometric one. Perhaps the clearest notation would therefore be a figure, because the "normal equations" are nothing more than a familiar, ages-old theorem of plane geometry! As a point of departure, let's begin with an expression from your previous question. The setting is Ordinary least squares (OLS). The data are the "design matrix" $(x_{i,j})$, $1 \le i \le n$, $1 \le j \le p$ of "independent variables" and a vector of values of the "dependent variable" $y = (y_i)$, $1 \le i \le n$; each dependent value $y_i$ is associated with the $p$-tuple of independent values $(x_{i,1}, x_{i,2}, \ldots, x_{i,p}).$ OLS aims to find a vector $\beta = (\beta_1, \beta_2, \ldots, \beta_p)$ minimizing the sum of squares of residuals $$\sum_{i=1}^n\left(y_i - \sum_{j=1}^p \left(x_{i,j}\beta_j\right)\right)^2.$$ Geometrically, the inner sum of products represents the application of the linear transformation determined by the matrix $x = (x_{i,j})$ to the $p$-vector $\beta$. As such we conceive of $x$ as representing a map $x: \mathbb{R}^p\to \mathbb{R}^n$. The outer sum is the usual Pythagorean formula for the Euclidean distance. Geometrically, then, OLS seeks a vector $\hat\beta \in \mathbb{R}^p$ for which $x\hat\beta\in \mathbb{R}^n$ is as close to $y$ as possible. The test of this is to take any other vector $\beta'\in \mathbb{R}^p$ and to compare distances within the triangle formed by the three points $y$, $x\hat\beta$, and $x\beta'$: According to Euclid, three (distinct) points determine a plane, so the figure is utterly faithful: all our work is done within this plane. We also know from Euclid that the ray $y - x\hat\beta$ must be perpendicular to the ray $x\beta' - x\hat\beta$. (This is the heart of the matter; in making this observation, we are really done and all that remains is to write down an equation expressing this perpendicularity.) Modern geometry has several notations for perpendicularity, which involves the (usual) "dot product" or positive-definite linear form (a "tensor of type $(2,0)$" if you wish). I will write it as $\langle\ ,\ \rangle$, whence we have deduced $$\langle x\beta', y-x\hat\beta\rangle = 0$$ for all $\beta\in\mathbb{R}^p$. Because the usual dot product of two vectors in $\mathbb{R}^n$ is $$\langle u, v\rangle = u^\intercal v,$$ the preceding may be written $$\beta'^\intercal x^\intercal (y - x\hat\beta) = 0.$$ Geometrically, this says that the linear transformation from $\mathbb{R}^{p*}\to\mathbb{R}^n$ defined by applying $x^\intercal (y - x\hat\beta)$ to $\beta'^\intercal$ is the zero transformation, whence its matrix representation must be identically zero: $$x^\intercal (y - x\hat\beta) = 0.$$ We are done; the conventional notation $$\hat\beta = \left(x^\intercal x\right)^{-}\left(x^\intercal y\right)$$ (where "$\left(\right)^{-}$" represents a generalized inverse) is merely an algebraically suggestive way of stating the same thing. Most computations do not actually compute the (generalized) inverse but instead solve the preceding equation directly. If, as an educational exercise, you really, really want to write things out in coordinates, then I would recommend adopting coordinates appropriate to this configuration. Choosing an orthonormal basis for $\mathbb{R}^n$ adapted to the plane determined by $y$ and $x\beta'$ for some arbitrary $\beta'$ allows you to forget the remaining $n-2$ coordinates, because they will be constantly $0$. 
Whence $y$ can be written $(\eta_1, \eta_2)$, $x$ can be written $(\xi_1, \xi_2)$, and $\beta'$ can be considered a number. We seek to minimize $$\sum_{i=1}^n\left(y_i - \sum_{j=1}^p \left(x_{i,j}\beta_j\right)\right)^2=\left(\eta_1 - \xi_1 \beta\right)^2 + \left(\eta_2 - \xi_2 \beta\right)^2.$$ OLS claims that this is achieved for $$\hat\beta = \left(x^\intercal x\right)^{-}\left(x^\intercal y\right)=\left(\xi_1^2 + \xi_2^2\right)^{-1}\left(\xi_1 \eta_1 + \xi_2 \eta_2\right)$$ provided $\xi_1^2 + \xi_2^2 \ne 0$. An elementary way to show this is to expand the former expression into powers of $\beta$ and apply the Quadratic Formula.
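As a numeric sanity check (with a simulated design matrix and response, not data from the text), the perpendicularity condition $x^\intercal(y - x\hat\beta) = 0$ can be verified directly in R:
set.seed(4)
n <- 50; p <- 3
x <- cbind(1, matrix(rnorm(n * (p - 1)), n))      # design matrix with an intercept column
y <- drop(x %*% c(2, -1, 0.5)) + rnorm(n)
betahat <- solve(t(x) %*% x, t(x) %*% y)          # solve the normal equations directly
crossprod(x, y - x %*% betahat)                   # essentially zero: the residual is perpendicular
                                                  # to every column of x
coef(lm(y ~ x - 1))                               # lm agrees with betahat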
Proof of normal equation in regression using tensor notation
A "modern" (post-1900) view of tensors is a geometric one. Perhaps the clearest notation would therefore be a figure, because the "normal equations" are nothing more than a familiar, ages-old theorem
Proof of normal equation in regression using tensor notation A "modern" (post-1900) view of tensors is a geometric one. Perhaps the clearest notation would therefore be a figure, because the "normal equations" are nothing more than a familiar, ages-old theorem of plane geometry! As a point of departure, let's begin with an expression from your previous question. The setting is Ordinary least squares (OLS). The data are the "design matrix" $(x_{i,j})$, $1 \le i \le n$, $1 \le j \le p$ of "independent variables" and a vector of values of the "dependent variable" $y = (y_i)$, $1 \le i \le n$; each dependent value $y_i$ is associated with the $p$-tuple of independent values $(x_{i,1}, x_{i,2}, \ldots, x_{i,p}).$ OLS aims to find a vector $\beta = (\beta_1, \beta_2, \ldots, \beta_p)$ minimizing the sum of squares of residuals $$\sum_{i=1}^n\left(y_i - \sum_{j=1}^p \left(x_{i,j}\beta_j\right)\right)^2.$$ Geometrically, the inner sum of products represents the application of the linear transformation determined by the matrix $x = (x_{i,j})$ to the $p$-vector $\beta$. As such we conceive of $x$ as representing a map $x: \mathbb{R}^p\to \mathbb{R}^n$. The outer sum is the usual Pythagorean formula for the Euclidean distance. Geometrically, then, OLS seeks a vector $\hat\beta \in \mathbb{R}^p$ for which $x\hat\beta\in \mathbb{R}^n$ is as close to $y$ as possible. The test of this is to take any other vector $\beta'\in \mathbb{R}^p$ and to compare distances within the triangle formed by the three points $y$, $x\hat\beta$, and $x\beta'$: According to Euclid, three (distinct) points determine a plane, so the figure is utterly faithful: all our work is done within this plane. We also know from Euclid that the ray $y - x\hat\beta$ must be perpendicular to the ray $x\beta' - x\hat\beta$. (This is the heart of the matter; in making this observation, we are really done and all that remains is to write down an equation expressing this perpendicularity.) Modern geometry has several notations for perpendicularity, which involves the (usual) "dot product" or positive-definite linear form (a "tensor of type $(2,0)$" if you wish). I will write it as $\langle\ ,\ \rangle$, whence we have deduced $$\langle x\beta', y-x\hat\beta\rangle = 0$$ for all $\beta\in\mathbb{R}^p$. Because the usual dot product of two vectors in $\mathbb{R}^n$ is $$\langle u, v\rangle = u^\intercal v,$$ the preceding may be written $$\beta'^\intercal x^\intercal (y - x\hat\beta) = 0.$$ Geometrically, this says that the linear transformation from $\mathbb{R}^{p*}\to\mathbb{R}^n$ defined by applying $x^\intercal (y - x\hat\beta)$ to $\beta'^\intercal$ is the zero transformation, whence its matrix representation must be identically zero: $$x^\intercal (y - x\hat\beta) = 0.$$ We are done; the conventional notation $$\hat\beta = \left(x^\intercal x\right)^{-}\left(x^\intercal y\right)$$ (where "$\left(\right)^{-}$" represents a generalized inverse) is merely an algebraically suggestive way of stating the same thing. Most computations do not actually compute the (generalized) inverse but instead solve the preceding equation directly. If, as an educational exercise, you really, really want to write things out in coordinates, then I would recommend adopting coordinates appropriate to this configuration. Choosing an orthonormal basis for $\mathbb{R}^n$ adapted to the plane determined by $y$ and $x\beta'$ for some arbitrary $\beta'$ allows you to forget the remaining $n-2$ coordinates, because they will be constantly $0$. 
Whence $y$ can be written $(\eta_1, \eta_2)$, $x$ can be written $(\xi_1, \xi_2)$, and $\beta'$ can be considered a number. We seek to minimize $$\sum_{i=1}^n\left(y_i - \sum_{j=1}^p \left(x_{i,j}\beta_j\right)\right)^2=\left(\eta_1 - \xi_1 \beta\right)^2 + \left(\eta_2 - \xi_2 \beta\right)^2.$$ OLS claims that this is achieved for $$\hat\beta = \left(x^\intercal x\right)^{-}\left(x^\intercal y\right)=\left(\xi_1^2 + \xi_2^2\right)^{-1}\left(\xi_1 \eta_1 + \xi_2 \eta_2\right)$$ provided $\xi_1^2 + \xi_2^2 \ne 0$. An elementary way to show this is to expand the former expression into powers of $\beta$ and apply the Quadratic Formula.
Proof of normal equation in regression using tensor notation A "modern" (post-1900) view of tensors is a geometric one. Perhaps the clearest notation would therefore be a figure, because the "normal equations" are nothing more than a familiar, ages-old theorem
53,857
Proof of normal equation in regression using tensor notation
The proof for the normal equation (in matrix notation): Let $J$ be the cost function for the linear regression given by $$ \begin{aligned} J &= \vert\vert {\bf X}{\bf w} - {\bf t}\vert\vert^2\\ &= \left({\bf X}{\bf w} - {\bf t}\right)^T\left({\bf X}{\bf w} - {\bf t}\right)\\ &= \left({\bf w}^T{\bf X}^T - {\bf t}^T\right)\left({\bf X}{\bf w} - {\bf t}\right)\\ &= {\bf w}^T{\bf X}^T{\bf X}{\bf w} - 2{\bf w}^T{\bf X}^T{\bf t}+{\bf t}^T{\bf t}\\ &= \text{Tr}\left({\bf w}{\bf w}^T{\bf X}^T{\bf X}\right)- 2{\bf w}^T{\bf X}^T{\bf t}+{\bf t}^T{\bf t}\\ \end{aligned} $$ Noting that $\frac{\partial}{\partial {\bf w}}\text{Tr}\left({\bf w}{\bf w}^T{\bf A}\right) = \left({\bf A} + {\bf A}^T\right){\bf w}$ and $\frac{\partial}{\partial {\bf x}}{\bf x}^T{\bf a} ={\bf a}$, we get $$ \begin{aligned} \frac{\partial J}{\partial {\bf w}} &= \frac{\partial}{\partial {\bf w}}\left( \text{Tr}\left({\bf w}{\bf w}^T{\bf X}^T{\bf X}\right)- 2{\bf w}^T{\bf X}^T{\bf t}+{\bf t}^T{\bf t}\right)\\ &= \left({\bf X}^T{\bf X} + ({\bf X}^T{\bf X})^T\right){\bf w} - 2{\bf X}^T{\bf t}\\ &= 2{\bf X}^T{\bf X}{\bf w} - 2{\bf X}^T{\bf t}\\ \end{aligned} $$ Setting this last expression to $\bf 0$ and solving for ${\bf w}$ we get: $$ {\bf w} =({\bf X}^T{\bf X})^{-1}{\bf X}^T{\bf t} $$ As we can see from the proof above, consistent layout conventions exist for matrix calculus, just as layout conventions exist for ordinary calculus on the real line. The key to understanding matrix derivatives, as with derivatives of functions $f:\mathbb{R}\to\mathbb{R}$, is practice. Take a look at the Matrix Cookbook for further reference.
Proof of normal equation in regression using tensor notation
The proof for the normal equation (in matrix notation): Let $J$ be the cost function for the linear regression given by $$ \begin{aligned} J &= \vert\vert {\bf X}{\bf w} - {\bf t}\vert\vert^2\\ &= \
Proof of normal equation in regression using tensor notation The proof for the normal equation (in matrix notation): Let $J$ be the cost function for the linear regression given by $$ \begin{aligned} J &= \vert\vert {\bf X}{\bf w} - {\bf t}\vert\vert^2\\ &= \left({\bf X}{\bf w} - {\bf t}\right)^T\left({\bf X}{\bf w} - {\bf t}\right)\\ &= \left({\bf w}^T{\bf X}^T - {\bf t}^T\right)\left({\bf X}{\bf w} - {\bf t}\right)\\ &= {\bf w}^T{\bf X}^T{\bf X}{\bf w} - 2{\bf w}^T{\bf X}^T{\bf t}-{\bf t}^T{\bf t}\\ &= \text{Tr}\left({\bf w}{\bf w}^T{\bf X}^T{\bf X}\right)- 2{\bf w}^T{\bf X}^T{\bf t}-{\bf t}^T{\bf t}\\ \end{aligned} $$ Noting that $\frac{\partial \text{Tr}(AB)}{A} = B^T$, and $\frac{\partial}{\partial {\bf x}}{\bf x}^T{\bf a} ={\bf a}$. We get $$ \begin{aligned} \frac{\partial J}{\partial {\bf w}} &= \frac{\partial}{\partial {\bf w}}\left( \text{Tr}\left({\bf w}{\bf w}^T{\bf X}^T{\bf X}\right)- 2{\bf w}^T{\bf X}^T{\bf t}-{\bf t}^T{\bf t}\right)\\ &= ({\bf X}^T{\bf X})\frac{\partial {\bf w}{\bf w}^T}{{\bf w}} - 2{\bf X}^T{\bf t}\\ &= ({\bf X}^T{\bf X})2{\bf w} - 2{\bf X}^T{\bf t}\\ \end{aligned} $$ Setting this last value to $\bf 0$ and solving for ${\bf w}$ we get: $$ {\bf w} =({\bf X}^T{\bf X})^{-1}{\bf X}^T{\bf t} $$ As we can see from the proof above, there exists a consistent layout conventions in dealing with matrix calculus, the same way that there exist a layout convention in calculus under the real line. The key to understand matrix derivatives, as with derivatives for functions $f:\mathbb{R}\to\mathbb{R}$ is practice. Take a look at the Matrix Cookbook for further reference.
Proof of normal equation in regression using tensor notation The proof for the normal equation (in matrix notation): Let $J$ be the cost function for the linear regression given by $$ \begin{aligned} J &= \vert\vert {\bf X}{\bf w} - {\bf t}\vert\vert^2\\ &= \
53,858
additive and non-additive(multiplicative) interactions - soft question -
we use models with multiplicative interaction effects when relationship between independent variable and dependent variable are non-additive. My question is, Are all models with multiplicative interaction effects non-linear? and all models with additive interaction effects linear? The answer to such a question depends on what you mean when you say 'linear' and 'nonlinear', and what domain of models you're restricting yourself to. Usually the terms 'linear' and 'nonlinear' in statistical models refers to linearity in the parameters, not the variables. So for example, $y = \alpha x^2 +\epsilon$ is linear in $\alpha$ though not in $x$, while $y = \exp(-\alpha) x +\epsilon$ is non-linear in $\alpha$, though it is in $x$. In usual parlance, the first is a linear model and the second is not. However, in those cases at least both may be turned into models that are linear in both the parameters and the predictors - in the first case by the transformation $x^* = x^2$, giving a model that has a linear relationship between $y$ and $x^*$, and in the second case by the reparameterization $\alpha^* = \exp(-\alpha)$. As such a standard general linear model (regression-type model) with multiplicative interaction is linear in the parameters, even though it's not linear in either predictor (IV). However, note that even in terms of the IVs, it is conditionally linear - fix one of the IVs and the relationship is linear in the other. [Minor mathematical aside: It should be noted that when we're taking about the relationships of $y$ and some $x$ being linear (in this sense rather than the 'makes a straight-line' sense), if we recognize we're using homogeneous co-ordinates in regression, it is linear. I mention it because I have seen people with enough mathematical background to be familiar with the mathematical definition of linearity object that 'linear regression is not linear'.] all models with additive interaction effects linear? If I understand what you're even asking with 'additive interaction effects', there's really no such thing. If it's additive it's already in the main effects and there's nothing left over for some notional 'interaction'. Also, With non-linearity, the effect of independent variable on dependent variable depends on the value of independent variable, Only if you think of 'effect' as inherently linear in effect, independent variable somehow interacts with itself. This way lies much confusion. Why not just think of there being a relationship that's described by some curve rather than by a straight line? -- Edit to address followup questions: What do you mean when you say "what domain of models you're restricting yourself to"? When you said "all models with multiplicative interaction effects" you presumably meant 'all models' in some class, such as regression models, or general linear models, or generalized linear models, or ... the list could go on for some time. Thanks! for noting about linearity. For the longest time, even I thought being linear meant the relationship was a straight line. Me too. This does clear some doubts, but raises a few questions. So, if we recognize we're using homogeneous co-ordinates in regression, it is linear. In terms of $x$'s - the actual columns of the $X$-matrix in a regression - it's linear in that linked mathematical sense if you realize you're working with homogeneous co-ordinates. A multiple linear regression is already linear in the mean parameters (i.e. 
the $\beta$ vector, the parameters other than $\sigma^2$), without any such need to invoke homogeneous co-ordinates. That I was referring to the relationship with the $x$'s when I raised homogeneous co-ordinates was explicitly stated. Also, Did you mean to say "Only if you think of 'effect' as inherently nonlinear instead of linear"? Nope. The way you phrased the question I was responding to only makes sense if you take the word 'effect' to imply linearity; otherwise the whole notion of 'interaction with itself' seems to be utterly meaningless. How is one to interpret the phrase? What I meant to ask was: I read somewhere that "with non-linearity, the effect of X on Y depends on the value of X and X somehow interacts with itself". I regard the statement as an unhelpful attempt at analogy, and, as already explained, I think you should not think about it this way. Not everything that someone writes down is useful. Does this mean that X interacts with itself (X)? or does it mean that X interacts with other variables (X, W, etc.), if any? I'm not going to make any further attempt to interpret something that doesn't really make sense as a bare, general statement without first having more clarification of its intent. I've suggested a way to interpret it that makes at least a little sense. If you want to interpret it more generally, explaining what it means would be up to you - or the original author of it. I expect if you were to ask "what, exactly, does it mean?", you would receive an answer that contained a number of hidden premises, and one of those premises would rely, directly or indirectly, on taking the underlying meaning of 'effects' to be linear, when we have no good reason to do that.
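A short sketch of the two examples above, with simulated data and arbitrary parameter values: both become ordinary linear fits after a transformation or a reparameterization.
set.seed(6)
x <- runif(100, 0, 3)
y1 <- 2 * x^2 + rnorm(100, sd = 0.3)          # y = alpha * x^2 + e: already linear in alpha
coef(lm(y1 ~ I(x^2) - 1))                     # estimates alpha directly (about 2)
alpha <- 1.5
y2 <- exp(-alpha) * x + rnorm(100, sd = 0.05) # y = exp(-alpha) * x + e
b <- coef(lm(y2 ~ x - 1))                     # fit the reparameterized slope alpha* = exp(-alpha)
-log(b)                                       # recover alpha from the linear fit (about 1.5)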
additive and non-additive(multiplicative) interactions - soft question -
we use models with multiplicative interaction effects when relationship between independent variable and dependent variable are non-additive. My question is, Are all models with multiplicative intera
additive and non-additive(multiplicative) interactions - soft question - we use models with multiplicative interaction effects when relationship between independent variable and dependent variable are non-additive. My question is, Are all models with multiplicative interaction effects non-linear? and all models with additive interaction effects linear? The answer to such a question depends on what you mean when you say 'linear' and 'nonlinear', and what domain of models you're restricting yourself to. Usually the terms 'linear' and 'nonlinear' in statistical models refers to linearity in the parameters, not the variables. So for example, $y = \alpha x^2 +\epsilon$ is linear in $\alpha$ though not in $x$, while $y = \exp(-\alpha) x +\epsilon$ is non-linear in $\alpha$, though it is in $x$. In usual parlance, the first is a linear model and the second is not. However, in those cases at least both may be turned into models that are linear in both the parameters and the predictors - in the first case by the transformation $x^* = x^2$, giving a model that has a linear relationship between $y$ and $x^*$, and in the second case by the reparameterization $\alpha^* = \exp(-\alpha)$. As such a standard general linear model (regression-type model) with multiplicative interaction is linear in the parameters, even though it's not linear in either predictor (IV). However, note that even in terms of the IVs, it is conditionally linear - fix one of the IVs and the relationship is linear in the other. [Minor mathematical aside: It should be noted that when we're taking about the relationships of $y$ and some $x$ being linear (in this sense rather than the 'makes a straight-line' sense), if we recognize we're using homogeneous co-ordinates in regression, it is linear. I mention it because I have seen people with enough mathematical background to be familiar with the mathematical definition of linearity object that 'linear regression is not linear'.] all models with additive interaction effects linear? If I understand what you're even asking with 'additive interaction effects', there's really no such thing. If it's additive it's already in the main effects and there's nothing left over for some notional 'interaction'. Also, With non-linearity, the effect of independent variable on dependent variable depends on the value of independent variable, Only if you think of 'effect' as inherently linear in effect, independent variable somehow interacts with itself. This way lies much confusion. Why not just think of there being a relationship that's described by some curve rather than by a straight line? -- Edit to address followup questions: What do you mean when you say "what domain of models you're restricting yourself to"? When you said "all models with multiplicative interaction effects" you presumably meant 'all models' in some class, such as regression models, or general linear models, or generalized linear models, or ... the list could go on for some time. Thanks! for noting about linearity. For the longest time, even I thought being linear meant the relationship was a straight line. Me too. This does clear some doubts, but raises a few questions. So, if we recognize we're using homogeneous co-ordinates in regression, it is linear. In terms of $x$'s - the actual columns of the $X$-matrix in a regression - it's linear in that linked mathematical sense if you realize you're working with homogeneous co-ordinates. A multiple linear regression is already linear in the mean parameters (i.e. 
the $\beta$ vector, the parameters other than $\sigma^2$), without any such need to invoke homogeneous co-ordinates. That I was referring to the relationship with the $x$'s when I raised homogeneous co-ordinates was explicitly stated. Also, Did you mean to say "Only if you think of 'effect' as inherently nonlinear instead of linear? Nope. The way you phrased the question I was responding to only makes sense if you take the word 'effect' to imply linearity, otherwise the whole notion of 'interaction with itself' seems to be utterly meaningless. How is one to interpret the phrase? What I meant to ask was. I read somewhere that "with non-linearity, the effect of X on Y depends on the value of X and X somehow interacts with itself". I regard the statement as an unhelpful attempt at analogy, and, as already explained, I think you should not think about it this way. Not everything that someone writes down is useful. Does this mean that X interacts with itself(X)? or does it mean that X interact with other variables(X,W etc.) if any? I'm not going to make any further attempt to interpret something that doesn't really make sense as a bare, general, statement without first having more clarification of its intent. I've suggested a way to interpret it that makes at least a little sense. If you want to interpret it more generally, explaining what it means would be up to you - or the original author of it. I expect if you were to ask "what, exactly, does it mean*, you would receive an answer that contained a number of hidden premises, and one of those premises would rely, directly or indirectly, on taking underlying meaning of 'effects' to be linear, when we have no good reason to do that.
additive and non-additive(multiplicative) interactions - soft question - we use models with multiplicative interaction effects when relationship between independent variable and dependent variable are non-additive. My question is, Are all models with multiplicative intera
53,859
additive and non-additive(multiplicative) interactions - soft question -
To understand this, you must first be clear about the distinction between features and parameters. Features - these are basically the input columns of your training data. Parameters - these are basically the weights of your model that you want to learn. For example, for a linear regression model with two features, you make the assumption that the response depends on the features through the following relationship: $$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$$ The features are $X_1$ and $X_2$, whereas $\beta_0, \beta_1, \beta_2$ are the parameters. Linearity Linearity is meant in terms of the parameters of the model. So if the response (or a function of the response) can be written as being linear in the parameters, we say the model is linear. E.g. linear regression with two features shown above; polynomial regression - $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1^2+ \beta_4 X_2^2 + \beta_5 X_1 X_2 + \epsilon$; logistic regression - $\log (\frac{p_i}{1-p_i}) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$. Additivity Additivity is meant in terms of the features of the model. So if the response (or a function of the response) can be written as being additive in the features (or in functions of the features), we say the model is additive. E.g. linear regression with two features shown above; logistic regression - $\log (\frac{p_i}{1-p_i}) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$, because $g(Y) = f_1(X_1) + f_2(X_2) + \epsilon$. Models can be (linear & additive), (linear & not additive), (nonlinear & additive), (nonlinear & not additive). The following model is linear & not additive: polynomial regression - $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1^2+ \beta_4 X_2^2 + \beta_5 X_1 X_2 + \epsilon$, because the $X_1 X_2$ term cannot be split into separate additive functions of $X_1$ and $X_2$. The following model is nonlinear & additive: $Y = \exp(\beta_1 X_1) + \beta_2 X_2 + \epsilon$, since this is not linear in the parameters. The following model is nonlinear & not additive: $Y = \exp(\beta_1 X_1) + \beta_2 X_2 + \beta_3 X_1 X_2 + \epsilon$, since it is neither linear in the parameters nor can it be split into separate additive functions of $X_1$ and $X_2$.
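Below is a small R sketch (not part of the original answer) showing how each of the four combinations could be fit; the variable names, the simulated data and the starting values are all assumptions made purely for illustration.
set.seed(42)
n  <- 200
x1 <- runif(n); x2 <- runif(n)
y  <- exp(1.0 * x1) + 2 * x2 + 0.5 * x1 * x2 + rnorm(n, sd = 0.1)   # made-up data

fit_lin_add    <- lm(y ~ x1 + x2)                                    # linear & additive
fit_lin_nonadd <- lm(y ~ x1 + x2 + I(x1^2) + I(x2^2) + x1:x2)        # linear in parameters, not additive
fit_nonlin_add <- nls(y ~ exp(b1 * x1) + b2 * x2,
                      start = list(b1 = 0.5, b2 = 1))                # nonlinear & additive
fit_nonlin_non <- nls(y ~ exp(b1 * x1) + b2 * x2 + b3 * x1 * x2,
                      start = list(b1 = 0.5, b2 = 1, b3 = 0))        # nonlinear & not additive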
additive and non-additive(multiplicative) interactions - soft question -
To understand this, you must first be clear about the disctinction between features and parameters. Features - this is basically the input columns of your training data. Parameters - these are basical
additive and non-additive(multiplicative) interactions - soft question - To understand this, you must first be clear about the disctinction between features and parameters. Features - this is basically the input columns of your training data. Parameters - these are basically the weights of your model that you want to learn For example, for a linear regression model with two features, you make the assumption that the response depends on the features through the following relationship: $$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$$ The features are $X_1$ and $X_2$ whereas $\beta_0, \beta_1, \beta_2$ are the parameters Linearity Linearity is meant in terms of the parameters of the model. So if the response(or a function of the response) can be written as being linear in the parameters, we say the model is linear. Eg. linear regression with two features shown above polynomial regression - $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1^2+ \beta_4 X_2^2 + \beta_5 X_1 X_2 + \epsilon$ logistic regression - $log (\frac{p_i}{1-p_i}) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$ Additivity Additivity is meant in terms of the features of the model. So if the response(or a function of the response) can be written as being additive in the features (or a function of features), we say the model is additive. Eg. linear regression with two features shown above logistic regression - $log (\frac{p_i}{1-p_i}) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$ because $g(Y) = f(X_1) + g(X_2) + \epsilon$ Models can be (linear & additive), (linear & not additive), (nonlinear & additive), (nonlinear & not additive) The following models are linear & not additive: polynomial regression - $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1^2+ \beta_4 X_2^2 + \beta_5 X_1 X_2 + \epsilon$ because $X_1, X_2$ terms cannot be split into separate additive functions of $X_1$ and $X_2$ The following models are nonlinear & additive: $Y = exp(\beta_1 X_1) + \beta_2 X_2 + \epsilon$ since this is not linear in the parameters The following models are nonlinear & not additive: $Y = exp(\beta_1 X_1) + \beta_2 X_2 + \beta_3 X_1 X_2 + \epsilon$ neither linear in parameters nor can it be split into separate additive functions of $X_1$ and $X_2$
additive and non-additive(multiplicative) interactions - soft question - To understand this, you must first be clear about the disctinction between features and parameters. Features - this is basically the input columns of your training data. Parameters - these are basical
53,860
What does a confidence interval around a p-value mean when using a permutation test?
The theory of permutation tests (and bootstrapping) is based on the idea of looking at every possible permutation. If you can look at every possible permutation then you have an exact p-value. In most cases it is impractical to compute every possible permutation, so instead we sample from the "population" of possible permutations and compute a p-value from this "sample". Since we take a sample from the population, we can use standard techniques (the binomial proportion in this case) to calculate a confidence interval for the true p-value that we would have calculated from every possible permutation. How to use this confidence interval: If the value of alpha (your significance cutoff or probability of type I error) is in the interval then you are not sure whether to reject or not, since a different sample of permutations could have given a different result. This means that you should probably rerun the analysis using a much larger number of permutations to get a better estimate of the true p-value. If the entire interval is far from your alpha value then you can be confident in your rejection or nonrejection of the null hypothesis. Some people will report the bound of the confidence interval that is closest to alpha as a conservative p-value. If you have done a large number of permutations then generally the bounds will be close enough together that it will not matter much either way.
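As an illustration (not from the original answer), here is a hedged R sketch of the idea: a Monte-Carlo permutation p-value for a two-sample mean difference, with a binomial confidence interval around it. The data, the test statistic and the number of permutations are all made up for the demo.
set.seed(123)
x <- rnorm(30, mean = 0.4)     # hypothetical "treatment" group
y <- rnorm(30, mean = 0.0)     # hypothetical "control" group
obs <- mean(x) - mean(y)

B <- 5000                      # number of sampled permutations
pooled <- c(x, y)
perm <- replicate(B, {
  idx <- sample(length(pooled), length(x))
  mean(pooled[idx]) - mean(pooled[-idx])
})

hits  <- sum(abs(perm) >= abs(obs))    # permutations at least as extreme as observed
p_hat <- (hits + 1) / (B + 1)          # Monte-Carlo p-value with the usual +1 correction

# 99% confidence interval for the "true" permutation p-value,
# treating 'hits' as a binomial count out of B sampled permutations
binom.test(hits, B, conf.level = 0.99)$conf.int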
What does a confidence interval around a p-value mean when using a permutation test?
The theory of permutation tests (and bootstraping) is based on the idea of looking at every possible permutation. If you can look at every possible permutation then you have an exact p-value. For mos
What does a confidence interval around a p-value mean when using a permutation test? The theory of permutation tests (and bootstraping) is based on the idea of looking at every possible permutation. If you can look at every possible permutation then you have an exact p-value. For most cases it is impractical to compute every possible permutation so instead we sample from the "population" of possible permutations and compute a p-value from this "sample". So since we take a sample from the population we can use standard techniques (the binomial proportion in this case) to calculate a confidence interval for the true p-value that we would have calculated from every possible permutation. How to use this confidence interval: If the value of alpha (your significance cutoff or probability of type I error) is in the interval then you are not sure whether to reject or not since a different sample of permutations could have given a different result. This means that you should probably rerun the analysis using a much larger number of permutations to get a better estimate of the true p-value. If the entire interval is far from your alpha value then you can be confident in your rejection or nonrejection of the null hypothesis. Some people will report the bound of the confidence interval that is closest to alpha as a conservitive p-value. If you have done a large number of permutations then generally the bounds will be close enough together that it will not matter much either way.
What does a confidence interval around a p-value mean when using a permutation test? The theory of permutation tests (and bootstraping) is based on the idea of looking at every possible permutation. If you can look at every possible permutation then you have an exact p-value. For mos
53,861
What does a confidence interval around a p-value mean when using a permutation test?
The pure frequentist interpretation: if you were to repeat this experiment $N$ times, you should expect that in $X$ percent of the repetitions (99% in your case), i.e. roughly $N \times X$ times, the interval constructed this way would contain the true $p$-value.
What does a confidence interval around a p-value mean when using a permutation test?
The pure frequentist interpretation: If your were to do this experiment $N$ times, you should expect $X$ percent of the time (99% in your case) or $N * X$ times, that the true $p$ value falls inside t
What does a confidence interval around a p-value mean when using a permutation test? The pure frequentist interpretation: If your were to do this experiment $N$ times, you should expect $X$ percent of the time (99% in your case) or $N * X$ times, that the true $p$ value falls inside the range provided by this 99% confidence interval that percentage or number of times.
What does a confidence interval around a p-value mean when using a permutation test? The pure frequentist interpretation: If your were to do this experiment $N$ times, you should expect $X$ percent of the time (99% in your case) or $N * X$ times, that the true $p$ value falls inside t
53,862
Constrained least squares estimation
One way to accomplish this task is to alter your model slightly. Rather than fitting the typical model of $$Y=\beta_0+\sum_{i=1}^{p}\beta_iX_i+\varepsilon$$ instead fit the model $$Y=e^{\beta_0}+\sum_{i=1}^{p}e^{\beta_i}X_i+\varepsilon$$ This can still be fit by least squares fairly easily in R and it has the effect of forcing the coefficients to be non-negative. Edit Here's some code to accomplish the model fitting. Using nls gives both parameter estimates and variances. # generate data set.seed(1) x <- cbind(1, matrix(rnorm(900, 1:9), nrow=100)) beta <- runif(10, 0, 1) y <- x %*% beta + rnorm(100) # try using nls function ests <- nls(y ~ exp(b0) + exp(b1)*x[,2] + exp(b2)*x[,3] + exp(b3)*x[,4] + exp(b4)*x[,5] + exp(b5)*x[,6] + exp(b6)*x[,7] + exp(b7)*x[,8] + exp(b8)*x[,9] + exp(b9)*x[,10], start = list(b0 = runif(1), b1 = runif(1), b2 = runif(1), b3 = runif(1), b4 = runif(1), b5 = runif(1), b6 = runif(1), b7 = runif(1), b8 = runif(1), b9 = runif(1))) summary(ests) exp(coef(ests))
Constrained least squares estimation
One way to accomplish this task is to alter your model slightly. Rather than fitting the typical model of $$Y=\beta_0+\sum_{i=1}^{p}\beta_iX_i+\varepsilon$$ instead fit the model $$Y=e^{\beta_0}+\sum_
Constrained least squares estimation One way to accomplish this task is to alter your model slightly. Rather than fitting the typical model of $$Y=\beta_0+\sum_{i=1}^{p}\beta_iX_i+\varepsilon$$ instead fit the model $$Y=e^{\beta_0}+\sum_{i=1}^{p}e^{\beta_i}X_i+\varepsilon$$ This can still be fit by least squares fairly easily in R and it has the effect of forcing the coefficients to be non-negative. Edit Here's some code to accomplish the model fitting. Using nls gives both parameter estimates and variances. # generate data set.seed(1) x <- cbind(1, matrix(rnorm(900, 1:9), nrow=100)) beta <- runif(10, 0, 1) y <- x %*% beta + rnorm(100) # try using nls function ests <- nls(y ~ exp(b0) + exp(b1)*x[,2] + exp(b2)*x[,3] + exp(b3)*x[,4] + exp(b4)*x[,5] + exp(b5)*x[,6] + exp(b6)*x[,7] + exp(b7)*x[,8] + exp(b8)*x[,9] + exp(b9)*x[,10], start = list(b0 = runif(1), b1 = runif(1), b2 = runif(1), b3 = runif(1), b4 = runif(1), b5 = runif(1), b6 = runif(1), b7 = runif(1), b8 = runif(1), b9 = runif(1))) summary(ests) exp(coef(ests))
Constrained least squares estimation One way to accomplish this task is to alter your model slightly. Rather than fitting the typical model of $$Y=\beta_0+\sum_{i=1}^{p}\beta_iX_i+\varepsilon$$ instead fit the model $$Y=e^{\beta_0}+\sum_
53,863
Constrained least squares estimation
1) You could fit it by passing box-type constraints to an optimizer that can deal with them - see ?optim and ?nlminb, for example. 2) You could use code like this: http://www.stat.colostate.edu/~meyer/constrparam.R 3) You could use a package designed for positivity-constrained models, such as the package penalized. 4) You could use nonlinear LS in various ways to ensure non-negative or strictly positive coefficients.
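For example, option (1) might look roughly like the following sketch; the design matrix, response and starting values are placeholders, not from the original question.
set.seed(1)
X <- matrix(abs(rnorm(100 * 9)), ncol = 9)    # 9 nonnegative "investment" columns (made up)
y <- X %*% runif(9, 0.2, 1) + rnorm(100)

rss <- function(b) sum((y - X %*% b)^2)       # objective: residual sum of squares

fit <- optim(par    = rep(0.5, 9),
             fn     = rss,
             method = "L-BFGS-B",
             lower  = rep(0, 9))              # nonnegativity via box constraints
fit$par                                       # constrained coefficient estimates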
Constrained least squares estimation
1) You could fit it by putting box-type constraints to an optimizer that can deal with them - see ?optim and ?nlminb, for example. 2) You could use code like this: http://www.stat.colostate.edu/~meyer
Constrained least squares estimation 1) You could fit it by putting box-type constraints to an optimizer that can deal with them - see ?optim and ?nlminb, for example. 2) You could use code like this: http://www.stat.colostate.edu/~meyer/constrparam.R 3) You could use a package designed to have positivity-constrained models, such as the package penalized 4) you can use nonlinear LS in various ways to ensure non-negative or strictly positive regressions.
Constrained least squares estimation 1) You could fit it by putting box-type constraints to an optimizer that can deal with them - see ?optim and ?nlminb, for example. 2) You could use code like this: http://www.stat.colostate.edu/~meyer
53,864
Constrained least squares estimation
This is a quadratic programming problem: Let $\mathbf I\in\mathbb R^{n\times 9}$ be your design matrix ($n$ is the number of measurements, so each row contains one measurement with the investments $I_1,\ldots, I_9$), and $\boldsymbol \beta\in\mathbb R^9$ your linear coefficients, which you want to be nonnegative (at least some of them). Then you want to minimize the cost function $c(\mathbf I, \boldsymbol \beta)$, which is the sum of squared residuals, under the nonnegativity constraint on $\boldsymbol\beta$: $$ \begin{align} c(\mathbf I, \boldsymbol\beta) &:= \sum_{i=1}^n \left(\sum_{k=1}^9 I_{ik}\beta_k - Y_i\right)^2\\ \beta_i &\ge 0,\quad i=1,\ldots,9. \end{align} $$ Since $c$ is a quadratic function of $\boldsymbol \beta$ and you have inequality constraints on $\boldsymbol \beta$, this is a quadratic programming problem, and there is software for solving it.
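One possible R implementation of this QP (a sketch only, using the quadprog package as one of several available solvers; the simulated I and Y are placeholders):
library(quadprog)

set.seed(2)
n <- 100
I <- matrix(abs(rnorm(n * 9)), ncol = 9)
Y <- I %*% runif(9) + rnorm(n)

Dmat <- t(I) %*% I              # solve.QP minimises (1/2) b'Db - d'b, which here has
dvec <- as.vector(t(I) %*% Y)   # the same minimiser as the residual sum of squares
Amat <- diag(9)                 # constraints t(Amat) %*% b >= bvec, i.e. beta_i >= 0
bvec <- rep(0, 9)

fit <- solve.QP(Dmat, dvec, Amat, bvec, meq = 0)
fit$solution                    # nonnegative least-squares coefficients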
Constrained least squares estimation
This is a quadratic programming problem: Let $\mathbf I\in\mathbb R^{n\times 9}$, be your design matrix ($n$ is the number of measurements, so each row contains one measurement with the investments $I
Constrained least squares estimation This is a quadratic programming problem: Let $\mathbf I\in\mathbb R^{n\times 9}$, be your design matrix ($n$ is the number of measurements, so each row contains one measurement with the investments $I_1,\ldots, I_9$), and $\boldsymbol \beta\in\mathbb R^9$ your linear coefficients, which you want to be nonnegative (at least some of them). Then you want to minimize the cost function $c(\mathbf I, \boldsymbol \beta)$, which is the sum of the quadratic residuals, under the nonnegativity constraint on $\boldsymbol\beta$: $$ \begin{align} c(\mathbf X, \boldsymbol\beta) &:= \sum_{i=1}^n \left(\sum_{k=1}^9 I_{ik}\beta_k - Y_i\right)^2\\ \beta_i &\ge 0,\quad i=1,\ldots,9. \end{align} $$ Since $c$ is a quadratic function of $\boldsymbol \beta$ and you have inequality constraints on $\boldsymbol \beta$, this is a quadratic programming problem, and there is software for solving it.
Constrained least squares estimation This is a quadratic programming problem: Let $\mathbf I\in\mathbb R^{n\times 9}$, be your design matrix ($n$ is the number of measurements, so each row contains one measurement with the investments $I
53,865
Constrained least squares estimation
The pcls() function in the mgcv package does partially constrained least squares. You can include equality and/or inequality constraints on some or all of the parameter estimates, and in fact on linear combinations of them. (It also allows for quadratic penalties, which are probably not relevant in your case.)
Constrained least squares estimation
The pcls() function in the mgcv package does partially constrained least squares. You can include equality and/or inequality constraints on some or all of the parameter estimates, and in fact on linea
Constrained least squares estimation The pcls() function in the mgcv package does partially constrained least squares. You can include equality and/or inequality constraints on some or all of the parameter estimates, and in fact on linear combinations of them. (It also allows for quadratic penalties, which are probably not relevant in your case.)
Constrained least squares estimation The pcls() function in the mgcv package does partially constrained least squares. You can include equality and/or inequality constraints on some or all of the parameter estimates, and in fact on linea
53,866
Probability of picking a biased coin
Your answer is right. The solution can be derived using Bayes' Theorem: $P(A|B) = \frac{P(B|A)P(A)}{P(B)}$ You want to know $P(\text{biased coin}|\text{three heads})$. What do we know? There are $100$ coins. $99$ are fair, $1$ is biased with both sides as heads. With a fair coin, the probability of three heads is $0.5^3 = 1/8$. The probability of picking the biased coin: $P(\text{biased coin}) = 1/100$. The probability that all three tosses are heads: $P(\text{three heads}) = \frac{1 \times 1+ 99 \times \frac{1}{8}}{100}$. The probability of three heads given the biased coin is trivial: $P(\text{three heads}|\text{biased coin}) = 1$. If we use Bayes' Theorem from above, we can calculate $$P(\text{biased coin}|\text{three heads}) = \frac{1 \times 1/100}{\frac{1 + 99 \times \frac{1}{8}}{100}} = \frac{1}{1 + 99 \times \frac{1}{8}} = \frac{8}{107} \approx 0.07476636$$
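For readers who want to check this numerically, here is a small R sketch (not part of the original answer) that reproduces the calculation directly and by simulation:
prior_biased <- 1 / 100
prior_fair   <- 99 / 100
lik_biased   <- 1          # P(3 heads | two-headed coin)
lik_fair     <- 0.5^3      # P(3 heads | fair coin)

post_biased <- prior_biased * lik_biased /
               (prior_biased * lik_biased + prior_fair * lik_fair)
post_biased                # 8/107 = 0.07476636

# quick simulation check
set.seed(7)
N     <- 1e6
coin  <- sample(c("biased", "fair"), N, replace = TRUE, prob = c(1, 99) / 100)
heads <- ifelse(coin == "biased", 3, rbinom(N, 3, 0.5))
mean(coin[heads == 3] == "biased")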
Probability of picking a biased coin
Your answer is right. The solution can be derived using Bayes' Theorem: $P(A|B) = \frac{P(B|A)P(A)}{P(B)}$ You want to know the probability of $P(\text{biased coin}|\text{three heads})$. What do we kn
Probability of picking a biased coin Your answer is right. The solution can be derived using Bayes' Theorem: $P(A|B) = \frac{P(B|A)P(A)}{P(B)}$ You want to know the probability of $P(\text{biased coin}|\text{three heads})$. What do we know? There are $100$ coins. $99$ are fair, $1$ is biased with both sides as heads. With a fair coin, the probability of three heads is $0.5^3 = 1/8$. The probability of picking the biased coin: $P(\text{biased coin}) = 1/100$. The probability of all three tosses is heads: $P(\text{three heads}) = \frac{1 \times 1+ 99 \times \frac{1}{8}}{100}$. The probability of three heads given the biased coin is trivial: $P(\text{three heads}|\text{biased coin}) = 1$. If we use Bayes' Theorem from above, we can calculate $$P(\text{biased coin}|\text{three heads}) = \frac{1 \times 1/100}{\frac{1 + 99 \times \frac{1}{8}}{100}} = \frac{1}{1 + 99 \times \frac{1}{8}} = \frac{8}{107} \approx 0.07476636$$
Probability of picking a biased coin Your answer is right. The solution can be derived using Bayes' Theorem: $P(A|B) = \frac{P(B|A)P(A)}{P(B)}$ You want to know the probability of $P(\text{biased coin}|\text{three heads})$. What do we kn
53,867
Probability of picking a biased coin
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted. Here is a write that describes something very similar to that. The Bayes approach is the right way to proceed.
Probability of picking a biased coin
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Probability of picking a biased coin Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted. Here is a write that describes something very similar to that. The Bayes approach is the right way to proceed.
Probability of picking a biased coin Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
53,868
Probability of picking a biased coin
In general, $P(\text{biased coin}\,|\,k \text{ heads}) = \frac{2^k}{2^k + 99}$, where $k$ is the number of consecutive heads. So if the chosen coin shows heads on all three tosses, $\frac{2^3}{2^3 + 99} = \frac{8}{8+99} = \frac{8}{107} \approx 0.075$.
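The same posterior can be wrapped in a tiny R function (an illustrative sketch; the function name is made up):
p_biased <- function(k, n_coins = 100) 2^k / (2^k + (n_coins - 1))

p_biased(1:5)
# 0.0198 0.0388 0.0748 0.1391 0.2443   (k = 3 reproduces 8/107)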
Probability of picking a biased coin
In general, P(biased coin|k heads) = (2^k)/[(2^k) + 99] Where k is no of consective heads so if the trick coin was tossed 3 times (2^3)/[(2^3) + 99] = 8/(8+99) = 8/107 = 0.07
Probability of picking a biased coin In general, P(biased coin|k heads) = (2^k)/[(2^k) + 99] Where k is no of consective heads so if the trick coin was tossed 3 times (2^3)/[(2^3) + 99] = 8/(8+99) = 8/107 = 0.07
Probability of picking a biased coin In general, P(biased coin|k heads) = (2^k)/[(2^k) + 99] Where k is no of consective heads so if the trick coin was tossed 3 times (2^3)/[(2^3) + 99] = 8/(8+99) = 8/107 = 0.07
53,869
Time series forecast in R with yearly frequency
Try using the tbats() function in the forecast package for R: x <- ts(myts, frequency=365) fit <- tbats(x) plot(forecast(fit)) TBATS is a generalization of ETS models designed to deal with high frequency data. See http://robjhyndman.com/papers/complex-seasonality/ for the JASA paper behind it.
Time series forecast in R with yearly frequency
Try using the tbats() function in the forecast package for R: x <- ts(myts, frequency=365) fit <- tbats(x) plot(forecast(fit)) TBATS is a generalization of ETS models designed to deal with high frequ
Time series forecast in R with yearly frequency Try using the tbats() function in the forecast package for R: x <- ts(myts, frequency=365) fit <- tbats(x) plot(forecast(fit)) TBATS is a generalization of ETS models designed to deal with high frequency data. See http://robjhyndman.com/papers/complex-seasonality/ for the JASA paper behind it.
Time series forecast in R with yearly frequency Try using the tbats() function in the forecast package for R: x <- ts(myts, frequency=365) fit <- tbats(x) plot(forecast(fit)) TBATS is a generalization of ETS models designed to deal with high frequ
53,870
Time series forecast in R with yearly frequency
R may be of little use to you due to the complexity of your problem. We recently developed forecasts for a daily series that looks "similar" but "different" to yours. . It required combining Intervention Detection , Causal Variables (Holidays , Promotions etc.) and ARIMA structure. We used AUTOBOX ( a product that I have helped develop ) to do this. You can post your data on the board and I will post some results. This is indeed one of the most challenging time series that we have seen ..... and we have been looking for a long time ! It might be interesting to see how other thought leaders would analyze these data and compare the results. EDITED After receiving the data from the OP Data like this requires incorporation of Fixed Events i.e. Monthly/Daily/Holidays along with specially formed days-of-year where significant activity can be detected around the event. The OP posted data and asked for forecasts 60 periods precisely prior to the Super Bowl. He delivered 1827 daily values starting at 1/1/12008. There were three origins for the forecast. Only data up to the point of the origin was used to develop the model/parameters. The three origins were 12/9/2010 (1074 values used), 12/8/2011 (1438 values used) and 12/5/2012(1801 values used). The three Actual-Fit-Forecast Graphs are presented here.First with 1065 values Then with 1438 values and finally using all the data (1801 values) The complete analyses can be found at http://www.autobox.com/1074.zip http://www.autobox.com/1438.zip http://www.autobox.com/1801.zip Each file contains an xls/xlsx file containing the 60 forecasts and other files showing all the analysis. Each equation is different because the number of observations used to identify the model changed. Following is the equation used from period 12/5/2012 to predict the 60 days leading up to Super Bowl Sunday. The equation uses daily indicators reflecting the buildup before Super Bowl Sunday M_SB is a 0/1 variable denoting the day for the Super Bowl , while M_1DB through M_22DB are the days before Super Bowl Sunday and M_1DA through M_3DA are for the days after Super Bowl. In addition there significant responses around 4 other holidays. Jan and Feb have a significant impact along with 6 daily indicators (N10107 through N10607). A significant ARIMA structure was found to round out the model. Y(T) = -12757. +[X1(T)][(+ 20512. )] M_SB +[X2(T)][(+ 8680.2 )] M_1DB +[X3(T)][(+ 1688.0 )] M_2DB +[X4(T)][(+ 2778.1 )] M_3DB +[X5(T)][(+ 1906.4 )] M_4DB +[X6(T)][(+ 1222.0 )] M_5DB +[X7(T)][(+ 829.06 )] M_6DB +[X8(T)][(+ 948.29 )] M_7DB +[X9(T)][(+ 397.93 )] M_8DB +[X10(T)[(+ 509.42 )] M_10DF +[X11(T)[(+ 804.90 )] M_11DB +[X12(T)[(+ 1102.0 )] M_12DB +[X13(T)[(+ 1867.1 )] M_13DB +[X14(T)[(+ 10258. )] M_14DB +[X15(T)[(+ 754.71 )] M_15DB +[X16(T)[(+ 328.09 )] M_17DB +[X17(T)[(+ 10116. )] M_21DB +[X18(T)[(+ 1467.6 )] M_22DB +[X19(T)[(+ 1113.0 )] M_1DA +[X20(T)[(- 673.57 )] M_2DA +[X21(T)[(+ 601.89 )] M_3DA +[X22(T)[(+ 584.44 B**-2+ 1669.4 B**-1+ 808.45 + 345.02 B** 1)] M_MARDIGRAS +[X23(T)[(- 7812.7 )] M_MARTINLKING +[X24(T)[(- 541.22 )] M_NEWYEARS +[X25(T)[(- 529.21 - 389.18 B** 1)] M_PRESIDENTS +[X26(T)[(+ 705.02 )] MONTH_EFF01 +[X27(T)[(+ 605.10 )] MONTH_EFF02 +[X28(T)[(+ 13116. )] FIXED_EFF_N10107 +[X29(T)[(+ 13017. )] FIXED_EFF_N10207 +[X30(T)[(+ 12971. )] FIXED_EFF_N10307 +[X31(T)[(+ 12974. )] FIXED_EFF_N10407 +[X32(T)[(+ 12917. )] FIXED_EFF_N10507 +[X33(T)[(+ 13036. )] FIXED_EFF_N10607 + [(1- .626B** 1)(1- .249B** 7)]**-1 [A(T)]
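AUTOBOX itself is commercial software and its exact procedure is not reproduced here, but a very rough open-source analogue of the general idea - event/holiday dummies used as regressors together with an ARIMA error structure - might be sketched with the forecast package as below. The series, the event dates and the regressor construction are all placeholders invented purely for illustration.
library(forecast)

days  <- seq(as.Date("2008-01-01"), as.Date("2012-12-31"), by = "day")
sales <- ts(pmax(0, 1000 + rnorm(length(days), sd = 200)), frequency = 7)   # fake daily series

superbowl <- as.Date(c("2008-02-03", "2009-02-01", "2010-02-07",
                       "2011-02-06", "2012-02-05"))
xreg <- cbind(sb    = as.integer(days %in% superbowl),          # day of the event
              sb_m1 = as.integer(days %in% (superbowl - 1)),    # day before
              sb_m7 = as.integer(days %in% (superbowl - 7)))    # a week before

fit <- auto.arima(sales, xreg = xreg)     # regression on event dummies with ARIMA errors
summary(fit)
# Forecasting the 60 days before the next Super Bowl would require the future
# values of the dummies: forecast(fit, xreg = future_xreg, h = 60)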
Time series forecast in R with yearly frequency
R may be of little use to you due to the complexity of your problem. We recently developed forecasts for a daily series that looks "similar" but "different" to yours. . It required combining Interve
Time series forecast in R with yearly frequency R may be of little use to you due to the complexity of your problem. We recently developed forecasts for a daily series that looks "similar" but "different" to yours. . It required combining Intervention Detection , Causal Variables (Holidays , Promotions etc.) and ARIMA structure. We used AUTOBOX ( a product that I have helped develop ) to do this. You can post your data on the board and I will post some results. This is indeed one of the most challenging time series that we have seen ..... and we have been looking for a long time ! It might be interesting to see how other thought leaders would analyze these data and compare the results. EDITED After receiving the data from the OP Data like this requires incorporation of Fixed Events i.e. Monthly/Daily/Holidays along with specially formed days-of-year where significant activity can be detected around the event. The OP posted data and asked for forecasts 60 periods precisely prior to the Super Bowl. He delivered 1827 daily values starting at 1/1/12008. There were three origins for the forecast. Only data up to the point of the origin was used to develop the model/parameters. The three origins were 12/9/2010 (1074 values used), 12/8/2011 (1438 values used) and 12/5/2012(1801 values used). The three Actual-Fit-Forecast Graphs are presented here.First with 1065 values Then with 1438 values and finally using all the data (1801 values) The complete analyses can be found at http://www.autobox.com/1074.zip http://www.autobox.com/1438.zip http://www.autobox.com/1801.zip Each file contains an xls/xlsx file containing the 60 forecasts and other files showing all the analysis. Each equation is different because the number of observations used to identify the model changed. Following is the equation used from period 12/5/2012 to predict the 60 days leading up to Super Bowl Sunday. The equation uses daily indicators reflecting the buildup before Super Bowl Sunday M_SB is a 0/1 variable denoting the day for the Super Bowl , while M_1DB through M_22DB are the days before Super Bowl Sunday and M_1DA through M_3DA are for the days after Super Bowl. In addition there significant responses around 4 other holidays. Jan and Feb have a significant impact along with 6 daily indicators (N10107 through N10607). A significant ARIMA structure was found to round out the model. Y(T) = -12757. +[X1(T)][(+ 20512. )] M_SB +[X2(T)][(+ 8680.2 )] M_1DB +[X3(T)][(+ 1688.0 )] M_2DB +[X4(T)][(+ 2778.1 )] M_3DB +[X5(T)][(+ 1906.4 )] M_4DB +[X6(T)][(+ 1222.0 )] M_5DB +[X7(T)][(+ 829.06 )] M_6DB +[X8(T)][(+ 948.29 )] M_7DB +[X9(T)][(+ 397.93 )] M_8DB +[X10(T)[(+ 509.42 )] M_10DF +[X11(T)[(+ 804.90 )] M_11DB +[X12(T)[(+ 1102.0 )] M_12DB +[X13(T)[(+ 1867.1 )] M_13DB +[X14(T)[(+ 10258. )] M_14DB +[X15(T)[(+ 754.71 )] M_15DB +[X16(T)[(+ 328.09 )] M_17DB +[X17(T)[(+ 10116. )] M_21DB +[X18(T)[(+ 1467.6 )] M_22DB +[X19(T)[(+ 1113.0 )] M_1DA +[X20(T)[(- 673.57 )] M_2DA +[X21(T)[(+ 601.89 )] M_3DA +[X22(T)[(+ 584.44 B**-2+ 1669.4 B**-1+ 808.45 + 345.02 B** 1)] M_MARDIGRAS +[X23(T)[(- 7812.7 )] M_MARTINLKING +[X24(T)[(- 541.22 )] M_NEWYEARS +[X25(T)[(- 529.21 - 389.18 B** 1)] M_PRESIDENTS +[X26(T)[(+ 705.02 )] MONTH_EFF01 +[X27(T)[(+ 605.10 )] MONTH_EFF02 +[X28(T)[(+ 13116. )] FIXED_EFF_N10107 +[X29(T)[(+ 13017. )] FIXED_EFF_N10207 +[X30(T)[(+ 12971. )] FIXED_EFF_N10307 +[X31(T)[(+ 12974. )] FIXED_EFF_N10407 +[X32(T)[(+ 12917. )] FIXED_EFF_N10507 +[X33(T)[(+ 13036. )] FIXED_EFF_N10607 + [(1- .626B** 1)(1- .249B** 7)]**-1 [A(T)]
Time series forecast in R with yearly frequency R may be of little use to you due to the complexity of your problem. We recently developed forecasts for a daily series that looks "similar" but "different" to yours. . It required combining Interve
53,871
Mixed model specification with nlme in R
If I understand your question correctly, you can specify your model with nested random effects like this: fit.1 <- lme(Change ~ Dose*Time, random=~1|ID/Dose, data=mydata) To specify the covariance structure, e.g. a simple compound symmetry form, try this: fit.2 <- lme(Change ~ Dose*Time, random=~1|ID/Dose, data=mydata, cor=corCompSymm()) To look at the estimated parameters try: summary(fit.1) To get all estimated coefficients try: coef(fit.1) To get the p-values then use: anova(fit.1) Notice that if you need to specify the covariance structure of the residuals, you'll have to use nlme as although lme4 (i.e. the lmer function) is a more advanced package, currently it does not support that feature.
Mixed model specification with nlme in R
If I understand your question correctly, you can specify your model with nested random effects like this: fit.1 <- lme(Change ~ Dose*Time, random=~1|ID/Dose, data=mydata) To specify the covariance st
Mixed model specification with nlme in R If I understand your question correctly, you can specify your model with nested random effects like this: fit.1 <- lme(Change ~ Dose*Time, random=~1|ID/Dose, data=mydata) To specify the covariance structure, e.g. a simple compound symmetry form, try this: fit.2 <- lme(Change ~ Dose*Time, random=~1|ID/Dose, data=mydata, cor=corCompSymm()) To look at the estimated parameters try: summary(fit.1) To get all estimated coefficients try: coef(fit.1) To get the p-values then use: anova(fit.1) Notice that if you need to specify the covariance structure of the residuals, you'll have to use nlme as although lme4 (i.e. the lmer function) is a more advanced package, currently it does not support that feature.
Mixed model specification with nlme in R If I understand your question correctly, you can specify your model with nested random effects like this: fit.1 <- lme(Change ~ Dose*Time, random=~1|ID/Dose, data=mydata) To specify the covariance st
53,872
Mixed model specification with nlme in R
Looks like you want to fit a mixed effects model with Dose-Time interaction with random intercepts for each animale (ID) e.g. lmer(Change ~ Dose*Time + ( 1 | ID ), data=data) A model that allows for random slopes (over Time) in addition to random intercepts would look like this: lmer(Change ~ Dose*Time + ( Time | ID ), data=data) Also have a look here
Mixed model specification with nlme in R
Looks like you want to fit a mixed effects model with Dose-Time interaction with random intercepts for each animale (ID) e.g. lmer(Change ~ Dose*Time + ( 1 | ID ), data=data) A model that allows for
Mixed model specification with nlme in R Looks like you want to fit a mixed effects model with Dose-Time interaction with random intercepts for each animale (ID) e.g. lmer(Change ~ Dose*Time + ( 1 | ID ), data=data) A model that allows for random slopes (over Time) in addition to random intercepts would look like this: lmer(Change ~ Dose*Time + ( Time | ID ), data=data) Also have a look here
Mixed model specification with nlme in R Looks like you want to fit a mixed effects model with Dose-Time interaction with random intercepts for each animale (ID) e.g. lmer(Change ~ Dose*Time + ( 1 | ID ), data=data) A model that allows for
53,873
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature selection?
You write From what I understand, if the p-values are above a certain threshold for a given variable (e.g. p-value > 0.05) as is the case with variable x1's pvalue=0.975, then one can say that this particular regression doesn't gain any additional information from having this variable in there. This is not correct. Every variable adds some information, unless it's just random noise. A p-value has a very specific meaning: If, in the population from which this sample was randomly drawn, the effect size associated with this p-value was 0, what would be the probability that, in a sample of this size, we would get a test statistic this far from 0 or farther? You also write: What else is confusing me is that same variable, x1, when I run a regression with just x1 and x5, x1's p-value=0.05. I'm guessing that I interpret this as, x1 has some useful information, but when compared with the information carried by x2, x2 and x4 together, x1 isn't useful. Rather than looking at p-values, I would look at effect sizes and changes in them. In addition, the phrase "when compared with" is not right. When you have multiple independent variables, each effect is estimated after controlling for the others; that is, holding the other variables in the regression constant, what are the effects of this variable? Another thing I just noticed: You have only 23 observations! You should be looking at a maximum of 2 variables at a time. The model that you present (with 5 variables) is almost surely overfit.
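To see the "holding the other variables constant" point concretely, here is a small simulated R sketch (not from the original answer; all variables are invented) in which a predictor looks significant only while a correlated predictor is left out:
set.seed(10)
n  <- 23                                  # same small sample size as in the question
x2 <- rnorm(n)
x1 <- 0.9 * x2 + rnorm(n, sd = 0.3)       # x1 strongly correlated with x2
x5 <- rnorm(n)
y  <- 2 * x2 + x5 + rnorm(n)              # y really depends on x2 and x5, not x1

coef(summary(lm(y ~ x1 + x5)))            # x1 looks "significant" when x2 is omitted
coef(summary(lm(y ~ x1 + x2 + x5)))       # ... but not once x2 is controlled for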
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature se
You write From what I understand, if the p-values are above a certain threshold for a given variable (e.g. p-value > 0.05) as is the case with variable x1's pvalue=0.975, then one can say that th
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature selection? You write From what I understand, if the p-values are above a certain threshold for a given variable (e.g. p-value > 0.05) as is the case with variable x1's pvalue=0.975, then one can say that this particular regression doesn't gain any additional information from having this variable in there. This is not correct. Every variable adds some information, unless it's just random noise. A p-value has a very specific meaning: If, in the population from which this sample was randomly drawn, the effect size associated with this p-value was 0, what would be the probability that, in a sample of this size, we would get a test statistic this far from 0 or farther? You also write: What else is confusing me is that same variable, x1, when I run a regression with just x1 and x5, x1's p-value=0.05. I'm guessing that I interpret this as, x1 has some useful information, but when compared with the information carried by x2, x2 and x4 together, x1 isn't useful. Rather than looking at p-values, I would look at effect sizes and changes in them. In addition, the phrase "when compare with" is not right. When you have multiple indepdent variables the effects are each after controlling for the others; that is, holding the other variables in the regression constant, what are the effects of this variable? Another thing I just noticed: You have only 23 observations! You should be looking at a maximum of 2 variables at a time. The model that you present (with 5 variables) is almost surely overfit.
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature se You write From what I understand, if the p-values are above a certain threshold for a given variable (e.g. p-value > 0.05) as is the case with variable x1's pvalue=0.975, then one can say that th
53,874
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature selection?
I would interpret those t statistics with caution. The beta coefficients in the regression model are calculated by taking all of the variables into account, and the t statistics are based on all of the variables, so it can be misleading to interpret them individually. Each variable's estimate is influenced by the presence of others, and can be masked by the other variables. For example, two variables could be non-significant (when viewed in the full model), but either one individually could be significant and important for the model. Better would be to try a function that searches and tests for optimal subsets of regression variables - like regsubsets in R. In effect, what happens is that one tests a variety of models and tests them against each other using some criterion (like $R^2$ or AIC). The documentation of your variable hunting function/program should explain what procedure is being used.
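As a hedged illustration of the suggested subset search (the data frame and variable names below are placeholders), the leaps package's regsubsets might be used roughly like this:
library(leaps)

set.seed(3)
dat <- data.frame(matrix(rnorm(23 * 5), ncol = 5))        # made-up predictors
names(dat) <- paste0("x", 1:5)
dat$y <- 1 + 0.8 * dat$x2 + 0.5 * dat$x4 + rnorm(23)      # made-up response

all_subsets <- regsubsets(y ~ ., data = dat, nvmax = 5)
summ <- summary(all_subsets)
summ$which      # which variables enter the best model of each size
summ$bic        # BIC for each size; pick the minimum
summ$adjr2      # adjusted R^2, if you prefer that criterion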
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature se
I would interpret those t statistics with caution. The beta coefficients in the regression model are calculated by taking all of the variables into account, and the t statistics are based on all of th
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature selection? I would interpret those t statistics with caution. The beta coefficients in the regression model are calculated by taking all of the variables into account, and the t statistics are based on all of the variables, so it can be misleading to interpret them individually. Each variable's estimate is influenced by the presence of others, and can be masked by the other variables. For example, two variables could be non-significant (when viewed in the full model), but either one individually could be significant and important for the model. Better would be to try a function that searches and tests for optimal subsets of regression variables - like regsubsets in R. In effect, what happens is that one tests a variety of models and tests them against each other using some criterion (like $R^2$ or AIC). The documentation of your variable hunting function/program should explain what procedure is being used.
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature se I would interpret those t statistics with caution. The beta coefficients in the regression model are calculated by taking all of the variables into account, and the t statistics are based on all of th
53,875
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature selection?
Regarding feature selection: I would personally argue against basing feature selection on p-values. Use backwards elimination; start by fitting a model with all the variables of interest and calculate your "favourite" information criterion (e.g. AIC), then sequentially exclude one of your features and recalculate the relevant AIC. Caveat: AIC is comparable among nested models. Given your rather small dataset in the current case I might even consider BIC (which is somewhat more stringent than AIC). Additionally you might want to consider bootstrapping your sample and repeating this procedure. Regarding p-values, to quote MacKay (Chapt. 37): p-value is the probability, given a null hypothesis for the probability distribution of the data, that the outcome would be as extreme as, or more extreme than, the observed outcome. This is practically a (rather abstract) restatement of Peter Flom's answer. I would guess your initial statement would be valid if you said "enough information" instead of "any additional information". Adding features does decrease the degrees of freedom of your model.
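One common way to run this kind of backwards elimination in R is base R's step(); the sketch below is only an illustration with made-up data, using k = 2 for AIC and k = log(n) for the more stringent BIC:
set.seed(4)
n   <- 23
dat <- data.frame(matrix(rnorm(n * 5), ncol = 5))   # placeholder predictors
names(dat) <- paste0("x", 1:5)
dat$y <- 1 + 0.8 * dat$x2 + rnorm(n)                # placeholder response

full <- lm(y ~ ., data = dat)
step(full, direction = "backward", k = 2)           # AIC-based elimination
step(full, direction = "backward", k = log(n))      # BIC-based (more stringent)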
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature se
Regarding feature selection: I would personally argue not to be based on p-values regarding feature selection. Use back-wards elimination; start with fitting a model with all the variables of interest
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature selection? Regarding feature selection: I would personally argue not to be based on p-values regarding feature selection. Use back-wards elimination; start with fitting a model with all the variables of interest and calculate your "favourite" information criterion (eg. AIC), then sequentially exclude one of your feature and recalculate the relevant AIC. Caveat: AIC is comparable among nested models. Given your rather small dataset in the current case I might even consider BIC (which is somewhat more stringent than AIC). Additionally you might want to consider bootstrapping your sample and recalculating this procedure. Regarding p-values, to quote Mackay (Chapt. 37) : p-value is the probability, given a null hypothesis for the probability distribution of the data, that the outcome would be as extreme as, or more extreme than, the observed outcome. Which is practically a (rather abstract) restatement of Peter Flom's answer. I would guess your initial statement would be valid if you said "enough information" instead of "any additional information". Adding features does decrease the degrees of freedom of your model.
How should I interpret the p-values (i.e. t-tests) in regressions, and can I use them for feature se Regarding feature selection: I would personally argue not to be based on p-values regarding feature selection. Use back-wards elimination; start with fitting a model with all the variables of interest
53,876
Ordinal regression vs. summated score
Ordinal regression is appropriate whenever the dependent variable is ordinal: That is, when you can assume that the levels are in order, but not that the gaps between the levels are equal. If you analyze individual Likert items (each scored 1-5 or 1-7 or whatever) then this could be a very reasonable assumption. But other methods might be good too; this post on my blog might be helpful. However, if you sum a bunch (say 10) Likert items (each, say, 1-5) then your total ranges from 10 to 50 and ordinal regression, while technically correct, will be hard to implement and interpret (there would be a huge number of parameters). In this case, I suggest that OLS regression could be the default. You could, certainly, do ordinal logistic regression on each of the 10 Likert items, but then you have 10 regressions to interpret, which might lead to confusion. In any case, you should start with a bunch of graphs to see what's going on. Before summing (or otherwise combining) the Likert items, I'd also check how they relate to each other using factor analysis and Cronbach's alpha.
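As a rough sketch of the two approaches (one option among several; the simulated items and predictor are assumptions for the demo), a single Likert item could be modelled with MASS::polr while the summated score goes into OLS, with Cronbach's alpha from the psych package:
library(MASS)     # polr()
library(psych)    # alpha()

set.seed(5)
n      <- 300
x      <- rnorm(n)
latent <- 0.8 * x + matrix(rnorm(n * 10), ncol = 10)                        # 10 noisy "items"
items  <- apply(latent, 2, function(z) cut(z, breaks = 5, labels = FALSE))  # scored 1..5

one_item <- factor(items[, 1], ordered = TRUE)
fit_ord  <- polr(one_item ~ x)        # proportional-odds model for a single item

total   <- rowSums(items)             # summated score, 10 to 50
fit_ols <- lm(total ~ x)              # OLS as the default for the summed scale

alpha(as.data.frame(items))           # internal consistency of the 10 items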
Ordinal regression vs. summated score
Ordinal regression is appropriate whenever the dependent variable is ordinal: That is, when you can assume that the levels are in order, but not that the gaps between the levels are equal. If you ana
Ordinal regression vs. summated score Ordinal regression is appropriate whenever the dependent variable is ordinal: That is, when you can assume that the levels are in order, but not that the gaps between the levels are equal. If you analyze individual Likert items (each scored 1-5 or 1-7 or whatever) then this could be a very reasonable assumption. But other methods might be good too; this post on my blog might be helpful. However, if you sum a bunch (say 10) Likert items (each, say, 1-5) then your total ranges from 10 to 50 and ordinal regression, while technically correct, will be hard to implement and interpret (there would be a huge number of parameters). In this case, I suggest that OLS regression could be the default. You could, certainly, do ordinal logistic regression on each of the 10 Likert items, but then you have 10 regressions to interpret, which might lead to confusion. In any case, you should start with a bunch of graphs to see what's going on. Before summing (or otherwise combining) the Likert items, I'd also check how they relate to each other using factor analysis and Cronbach's alpha.
Ordinal regression vs. summated score Ordinal regression is appropriate whenever the dependent variable is ordinal: That is, when you can assume that the levels are in order, but not that the gaps between the levels are equal. If you ana
53,877
Plotting logistic regression interaction (categorical) in R
You could plot fitted probabilities by placement, separately for each level of ethnic. Create some dummy data and fit your model: set.seed(1) nn <- 100 placements <- seq(0,10) foo <- data.frame(disorder=sample(c(0,1),size=nn,replace=TRUE), placement=sample(placements,size=nn,replace=TRUE), ethnic=sample(c("White","Black","Other"),size=nn,replace=TRUE)) M <- glm(disorder~placement*ethnic, family=binomial, data=foo) Plot the fits: plot(placements, predict(M,newdata=data.frame(ethnic="White",placement=placements),type="response"), type="o",pch=21,bg="white",ylab="",ylim=c(0,1)) points(placements, predict(M,newdata=data.frame(ethnic="Black",placement=placements),type="response"), type="o",pch=21,bg="black") points(placements, predict(M,newdata=data.frame(ethnic="Other",placement=placements),type="response"), type="o",pch=21,col="red",bg="red") legend(x="topleft",pch=21,col=c("black","black","red"),pt.bg=c("white","black","red"), legend=c("White","Black","Other")) The result: A nice additional touch would be to bootstrap your data to get confidence intervals around the fits.
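One possible way to add those bootstrap intervals (a sketch, not part of the original answer) is to refit the model on resampled data and take percentile bands of the fitted curves; shown here only for the "White" curve, reusing the foo and placements objects defined above:
set.seed(2)
B <- 500
boot_fits <- replicate(B, {
  bs <- foo[sample(nrow(foo), replace = TRUE), ]                  # resample rows
  m  <- glm(disorder ~ placement * ethnic, family = binomial, data = bs)
  predict(m, newdata = data.frame(ethnic = "White", placement = placements),
          type = "response")
})
ci <- apply(boot_fits, 1, quantile, probs = c(0.025, 0.975))

lines(placements, ci[1, ], lty = 2)   # lower 95% band for the "White" curve
lines(placements, ci[2, ], lty = 2)   # upper 95% band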
Plotting logistic regression interaction (categorical) in R
You could plot fitted probabilities by placement, separately for each level of ethnic. Create some dummy data and fit your model: set.seed(1) nn <- 100 placements <- seq(0,10) foo <- data.frame(disord
Plotting logistic regression interaction (categorical) in R You could plot fitted probabilities by placement, separately for each level of ethnic. Create some dummy data and fit your model: set.seed(1) nn <- 100 placements <- seq(0,10) foo <- data.frame(disorder=sample(c(0,1),size=nn,replace=TRUE), placement=sample(placements,size=nn,replace=TRUE), ethnic=sample(c("White","Black","Other"),size=nn,replace=TRUE)) M <- glm(disorder~placement*ethnic, family=binomial, data=foo) Plot the fits: plot(placements, predict(M,newdata=data.frame(ethnic="White",placement=placements),type="response"), type="o",pch=21,bg="white",ylab="",ylim=c(0,1)) points(placements, predict(M,newdata=data.frame(ethnic="Black",placement=placements),type="response"), type="o",pch=21,bg="black") points(placements, predict(M,newdata=data.frame(ethnic="Other",placement=placements),type="response"), type="o",pch=21,col="red",bg="red") legend(x="topleft",pch=21,col=c("black","black","red"),pt.bg=c("white","black","red"), legend=c("White","Black","Other")) The result: A nice additional touch would be to bootstrap your data to get confidence intervals around the fits.
Plotting logistic regression interaction (categorical) in R You could plot fitted probabilities by placement, separately for each level of ethnic. Create some dummy data and fit your model: set.seed(1) nn <- 100 placements <- seq(0,10) foo <- data.frame(disord
53,878
Plotting logistic regression interaction (categorical) in R
I suggest a mosaic plot, available in the vcd package. There is a vignette and a few papers; typing ??mosaic will show them. A very simple example is library(vcd) data(HairEyeColor) HairEye <- margin.table(HairEyeColor, c(1,2)) mosaic(HairEye, main = "Basic Mosaic Display of Hair Eye Color data") Another option is the plot function from @FrankHarrell 's effects package, e.g. library(effects) titanic <- glm(survived ~ (passengerClass + sex + age)^2, data=Titanic, family=binomial) titanic.all <- allEffects(titanic, typical=median, given.values=c(passengerClass2nd=1/3, passengerClass3rd=1/3, sexmale=0.5)) plot(titanic.all, ticks=list(at=c(.01, .05, seq(.1, .9, by=.2), .95, .99)), ask=FALSE) although that might be more appropriate for ordinal or continuous IVs.
Plotting logistic regression interaction (categorical) in R
I suggest a mosaic plot, available in the vcd package. There is a vignette and a few papers; typing ??mosaic will show them. A very simple example is library(vcd) data(HairEyeColor) HairEye <- margi
Plotting logistic regression interaction (categorical) in R I suggest a mosaic plot, available in the vcd package. There is a vignette and a few papers; typing ??mosaic will show them. A very simple example is library(vcd) data(HairEyeColor) HairEye <- margin.table(HairEyeColor, c(1,2)) mosaic(HairEye, main = "Basic Mosaic Display of Hair Eye Color data") Another option is the plot function from @FrankHarrell 's effects package, e.g. library(effects) titanic <- glm(survived ~ (passengerClass + sex + age)^2, data=Titanic, family=binomial) titanic.all <- allEffects(titanic, typical=median, given.values=c(passengerClass2nd=1/3, passengerClass3rd=1/3, sexmale=0.5)) plot(titanic.all, ticks=list(at=c(.01, .05, seq(.1, .9, by=.2), .95, .99)), ask=FALSE) although that might be more appropriate for ordinal or continuous IVs.
Plotting logistic regression interaction (categorical) in R I suggest a mosaic plot, available in the vcd package. There is a vignette and a few papers; typing ??mosaic will show them. A very simple example is library(vcd) data(HairEyeColor) HairEye <- margi
53,879
Plotting logistic regression interaction (categorical) in R
@Stephan Kolassa's answer was very good. Here I build in two other plotting ideas: It's helpful to see the relative N sizes at each level of the x-variable, Placements. These sizes can be indicated using hash marks. It can be convenient to show group (here, Ethnicity) labels alongside their lines, rather than in a separate legend. This becomes more and more applicable with larger numbers of groups. In this case I have set the random seed at 6 and have bypassed the "glm()" and "predict()" commands. Instead I've plotted moving-average values using a lowess smoother. I've also changed N from 100 to 200 to better show plot features. library(Hmisc) plsmo(foo$placement, foo$disorder, datadensity = T, group = foo$ethnic, col=c('black', 'red','tan'), xlab = 'Placements', ylab ='Incidence of Disorder', ylim = c(0,1))
Plotting logistic regression interaction (categorical) in R
@Stephan Kolassa's answer was very good. Here I build in two other plotting ideas: It's helpful to see the relative N sizes at each level of the x-variable, Placements. These sizes can be indicated
Plotting logistic regression interaction (categorical) in R @Stephan Kolassa's answer was very good. Here I build in two other plotting ideas: It's helpful to see the relative N sizes at each level of the x-variable, Placements. These sizes can be indicated using hash marks. It can be convenient to show group (here, Ethnicity) labels alongside their lines, rather than in a separate legend. This becomes more and more applicable with larger numbers of groups. In this case I have set the random seed at 6 and have bypassed the "glm()" and "predict()" commands. Instead I've plotted moving-average values using a lowess smoother. I've also changed N from 100 to 200 to better show plot features. library(Hmisc) plsmo(foo$placement, foo$disorder, datadensity = T, group = foo$ethnic, col=c('black', 'red','tan'), xlab = 'Placements', ylab ='Incidence of Disorder', ylim = c(0,1))
Plotting logistic regression interaction (categorical) in R @Stephan Kolassa's answer was very good. Here I build in two other plotting ideas: It's helpful to see the relative N sizes at each level of the x-variable, Placements. These sizes can be indicated
53,880
What is a WISARD Neural Network?
IIRC WISARD was a RAM based neural network method developed by Igor Aleksander at Imperial College in the 1980s and early 1990s. RAM based neural networks essentially use look up tables to store the function computed by each neuron, and hence are easily implemented in digital hardware and have efficient training algorithms. They don't seem to be used much these days. I think the best source of information would be Aleksander and Morton's book "Introduction to neural computing" (sadly I can't find my copy). I don't think these sorts of networks are used much these days, which is a pity as they are rather interesting; my favourite variant was the "Single layer look up perceptron", which is related to n-tuple classifiers (see this paper, and this one, for details). They seem reminiscent in some ways of random forest methods and also "extreme learning" methods (as a linear model is trained using a fixed set of random basis functions). A Google Scholar search for Igor Aleksander WISARD should find most of the relevant material.
What is a WISARD Neural Network?
IIRC WISARD was a RAM based neural network method developed by Igor Aleksander at Imperial College in the 1980s and early 1990s. RAM based neural networks essentially use look up tables to store the
What is a WISARD Neural Network? IIRC WISARD was a RAM based neural network method developed by Igor Aleksander at Imperial College in the 1980s and early 1990s. RAM based neural networks essentially use look up tables to store the function computed by each neuron, and hence are easily implemented in digital hardware and have efficient training algorithms. They don't seem to be used much these days. I think the best source of information would be Aleksander and Morton's book "Introduction to neural computing" (sadly I can't find my copy). I don't think these sorts of networks are used much these days, which is a pity as they are rather interesting; my favourite variant was the "Single layer look up perceptron", which is related to n-tuple classifiers (see this paper, and this one, for details). They seem reminiscent in some ways of random forest methods and also "extreme learning" methods (as a linear model is trained using a fixed set of random basis functions). A Google Scholar search for Igor Aleksander WISARD should find most of the relevant material.
What is a WISARD Neural Network? IIRC WISARD was a RAM based neural network method developed by Igor Aleksander at Imperial College in the 1980s and early 1990s. RAM based neural networks essentially use look up tables to store the
53,881
What is a WISARD Neural Network?
A major working development to date (1985) in this area however is the N-tuple classifier: "WISARD" (WIlkie, Stoneham and Aleksander, Recognition Device), which has been developed at Brunel University and is produced commercially by Computer Recognition Systems Limited. The WISARD system acts upon images from standard grey scale TV cameras working at 30 fps. For each frame, the image is binarised. Then each of the pixels of the stored image are pseudo randomly sampled (mapped - ie chosen from the whole frame area using a pseudo random number to choose each pixel in the frame) and formed into K groups (tuples) of N (where N is an integer - normally ranged between 1 .. . . .. 8). The data pattern formed by each N-tuple is applied as an address to a Random Access Memory (RAM) element. (Note: a pseudo random number generator will always deliver an identical non repeating stream of numbers starting from the same seed number). The size of this memory element is 2^N bits where N is the size of the tuple being used (thus a 4-tuple will use a 16 bit sized memory element and an 8-tuple will use a 256-bit memory element), and a "1" will be written to the bit in this memory space corresponding to the N sized address formed by the tuple (all other spaces initially set to zero). The memory space of the machine is partitioned into C groups of K addresses (where K = the number of pixels in the image and K*N = the vector space of one stored image). Each of these groups of memory addresses is termed a Discriminator. During the training stage, each object that is to be recognised by the system in the classification mode is sampled a number of times (a training set) and allocated one of the C groups of memory and becomes a discriminator for that object. If during the repeated sampling (using the identical pseudo random mapping) the addresses formed by the N-tuples are slightly different, additional "1"s are placed in the memory elements (it is expected that this will occur otherwise the method will be little better than Template matching). When the machine is used for recognition, using identical pseudo random mapping the generated N-tuples from the object to be recognised are applied to their corresponding memory elements within all the trained discriminators. Where there is a match (of β€œ1”s) a memory element will return a "1" or otherwise "0". For each discriminator a summation is taken and a histogram produced of the results. A discriminator producing a high response is likely to be the object being classified. from my 1985 Dissertation β€œA study of the INMOS Transputer in robotic vision” pp7 – 11
What is a WISARD Neural Network?
A major working development to date (1985) in this area however is the N-tuple classifier: "WISARD" (WIlkie, Stoneham and Aleksander, Recognition Device), which has been developed at Brunel University
What is a WISARD Neural Network? A major working development to date (1985) in this area however is the N-tuple classifier: "WISARD" (WIlkie, Stoneham and Aleksander, Recognition Device), which has been developed at Brunel University and is produced commercially by Computer Recognition Systems Limited. The WISARD system acts upon images from standard grey scale TV cameras working at 30 fps. For each frame, the image is binarised. Then each of the pixels of the stored image are pseudo randomly sampled (mapped - ie chosen from the whole frame area using a pseudo random number to choose each pixel in the frame) and formed into K groups (tuples) of N (where N is an integer - normally ranged between 1 .. . . .. 8). The data pattern formed by each N-tuple is applied as an address to a Random Access Memory (RAM) element. (Note: a pseudo random number generator will always deliver an identical non repeating stream of numbers starting from the same seed number). The size of this memory element is 2^N bits where N is the size of the tuple being used (thus a 4-tuple will use a 16 bit sized memory element and an 8-tuple will use a 256-bit memory element), and a "1" will be written to the bit in this memory space corresponding to the N sized address formed by the tuple (all other spaces initially set to zero). The memory space of the machine is partitioned into C groups of K addresses (where K = the number of pixels in the image and K*N = the vector space of one stored image). Each of these groups of memory addresses is termed a Discriminator. During the training stage, each object that is to be recognised by the system in the classification mode is sampled a number of times (a training set) and allocated one of the C groups of memory and becomes a discriminator for that object. If during the repeated sampling (using the identical pseudo random mapping) the addresses formed by the N-tuples are slightly different, additional "1"s are placed in the memory elements (it is expected that this will occur otherwise the method will be little better than Template matching). When the machine is used for recognition, using identical pseudo random mapping the generated N-tuples from the object to be recognised are applied to their corresponding memory elements within all the trained discriminators. Where there is a match (of β€œ1”s) a memory element will return a "1" or otherwise "0". For each discriminator a summation is taken and a histogram produced of the results. A discriminator producing a high response is likely to be the object being classified. from my 1985 Dissertation β€œA study of the INMOS Transputer in robotic vision” pp7 – 11
What is a WISARD Neural Network? A major working development to date (1985) in this area however is the N-tuple classifier: "WISARD" (WIlkie, Stoneham and Aleksander, Recognition Device), which has been developed at Brunel University
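To make the n-tuple scheme described above concrete, here is a small R sketch of a single discriminator. It is an illustration only, not the WISARD hardware: the image size (16 pixels), tuple size (N = 4), training set and random mapping are all invented for the example.

# Toy n-tuple discriminator, loosely following the description above
set.seed(1)
n_pix <- 16; N <- 4; K <- n_pix / N
mapping <- matrix(sample(n_pix), nrow = K)   # fixed pseudo-random pixel mapping
tuple_addr <- function(img) {
  # the address formed by each N-tuple, as an integer in 1..2^N
  apply(mapping, 1, function(idx) sum(img[idx] * 2^(0:(N - 1))) + 1)
}
train <- function(imgs) {                    # imgs: list of 0/1 pixel vectors
  ram <- matrix(0L, nrow = K, ncol = 2^N)
  for (img in imgs) ram[cbind(seq_len(K), tuple_addr(img))] <- 1L
  ram
}
score <- function(ram, img) sum(ram[cbind(seq_len(K), tuple_addr(img))])
proto <- rep(c(1, 0), each = 8)              # prototype "image"
noisy <- replicate(20, {x <- proto; i <- sample(n_pix, 2); x[i] <- 1 - x[i]; x},
                   simplify = FALSE)
ram <- train(noisy)                          # one discriminator for this class
score(ram, proto)                            # high response -> likely this class
score(ram, sample(proto))                    # scrambled image -> lower response

In the full WISARD setup there would be one such discriminator per class, and the class giving the highest summed response wins.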
53,882
What is a WISARD Neural Network?
WISARD was actually built by Bruce Wilkie, with Stonham as reader and Aleksander getting the funding; Eric Conan was one of the technicians building it. It was presented at the Royal Institution, and Maggie had a look. WI = Wilkie, S = Stonham, A = Aleksander, R = recognition, D = discriminator - two 19 inch racks of electronics - massive transformer with massive copper busbars and smoothing capacitor - linear regulator - with TV camera input to TV display. Sets of RAM - images to compare. Randomly selecting content (bit wise) and counting hits to misses. D
What is a WISARD Neural Network?
WISARD was actually built by Bruce Wilkie, with Stonham as reader and Aleksander getting the funding; Eric Conan was one of the technicians building it. It was presented at the Royal Institution,
What is a WISARD Neural Network? WISARD was actually built by Bruce Wilkie, with Stonham as reader and Aleksander getting the funding; Eric Conan was one of the technicians building it. It was presented at the Royal Institution, and Maggie had a look. WI = Wilkie, S = Stonham, A = Aleksander, R = recognition, D = discriminator - two 19 inch racks of electronics - massive transformer with massive copper busbars and smoothing capacitor - linear regulator - with TV camera input to TV display. Sets of RAM - images to compare. Randomly selecting content (bit wise) and counting hits to misses. D
What is a WISARD Neural Network? WISARD was actually built by Bruce Wilkie, with Stonham as reader and Aleksander getting the funding; Eric Conan was one of the technicians building it. It was presented at the Royal Institution,
53,883
What is a WISARD Neural Network?
WISARD stands for WILKIE, STONHAM AND ALEKSANDER. A neural network based on hardware RAMs hardwired to be a perceptron system. This is described in one of ALEKSANDER'S own books. The difference that it made was to bring back the industry's attention to Neural Networks after the disastrous paper issued by Prof. Marvin Minsky on the linearity of Neural Networks, in other words that any problem they could solve would have to be linear rather than non-linear. The WISARD system proved that NNs work and it started a new silent revolution at the time.
What is a WISARD Neural Network?
WISARD stands for WILKIE, STONHAM AND ALEKSANDER. A neural network based on hardware RAMs hardwired to be a perceptron system. This is described in one of ALEKSANDER'S own books. The difference that
What is a WISARD Neural Network? WISARD stands for WILKIE, STONHAM AND ALEKSANDER. A neural network based on hardware RAMs hardwired to be a perceptron system. This is described in one of ALEKSANDER'S own books. The difference that it made was to bring back the industry's attention to Neural Networks after the disastrous paper issued by Prof. Marvin Minsky on the linearity of Neural Networks, in other words that any problem they could solve would have to be linear rather than non-linear. The WISARD system proved that NNs work and it started a new silent revolution at the time.
What is a WISARD Neural Network? WISARD stands for WILKIE, STONHAM AND ALEKSANDER. A neural network based on hardware RAMs hardwired to be a perceptron system. This is described in one of ALEKSANDER'S
53,884
Strict Exogeneity and Seasonal Dummy Variables
Strict exogeneity means that the error $u_t$ is uncorrelated with all past and future values of the seasonal dummies. This means that such variables cannot react to shocks to $y$ in the past or the future. Suppose consumers feel worried about the economy in December and such sentiments are unobserved. This means there was a negative shock to Amazon sales that month as people cut back on presents. Big negative error. I don't get my pony. But Amazon just cannot decide to have a Christmas season again in January. Contrast this to the effect of police on crime example from earlier in that chapter. If there was a gang war in December, the police force would jump $n$ months later as the mayor gets tough on crime and the cadets graduate. Now that would violate the strict exogeneity assumption.
Strict Exogeneity and Seasonal Dummy Variables
Strict exogeneity means that the error $u_t$ is uncorrelated with all past and future values of the seasonal dummies. This means that such variables cannot react to shocks to $y$ in the past or the fu
Strict Exogeneity and Seasonal Dummy Variables Strict exogeneity means that the error $u_t$ is uncorrelated with all past and future values of the seasonal dummies. This means that such variables cannot react to shocks to $y$ in the past or the future. Suppose consumers feel worried about the economy in December and such sentiments are unobserved. This means there was a negative shock to Amazon sales that month as people cut back on presents. Big negative error. I don't get my pony. But Amazon just cannot decide to have a Christmas season again in January. Contrast this to the effect of police on crime example from earlier in that chapter. If there was a gang war in December, the police force would jump $n$ months later as the mayor gets tough on crime and the cadets graduate. Now that would violate the strict exogeneity assumption.
Strict Exogeneity and Seasonal Dummy Variables Strict exogeneity means that the error $u_t$ is uncorrelated with all past and future values of the seasonal dummies. This means that such variables cannot react to shocks to $y$ in the past or the fu
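A small simulation can illustrate the contrast drawn above; the numbers are invented and the variable names (dec, police) are just placeholders, not real data.

# Sketch: a seasonal dummy cannot react to past shocks to y,
# but a policy variable that responds to lagged shocks can.
set.seed(123)
TT <- 240
month <- rep(1:12, length.out = TT)
dec <- as.integer(month == 12)                   # December dummy: fixed in advance
u <- rnorm(TT)                                   # shocks to the outcome
u_lag <- c(0, head(u, -1))
police <- 5 + 0.8 * u_lag + rnorm(TT, sd = 0.1)  # reacts to last period's shock
cor(dec, u_lag)                                  # about 0: consistent with strict exogeneity
cor(police, u_lag)                               # clearly nonzero: strict exogeneity fails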
53,885
Strict Exogeneity and Seasonal Dummy Variables
A seasonal dummy is nonrandom: Whatever sample you draw, winter is winter, never summer. The covariance of a random variable and a constant is zero.
Strict Exogeneity and Seasonal Dummy Variables
A seasonal dummy is nonrandom: Whatever sample you draw, winter is winter, never summer. The covariance of a random variable and a constant is zero.
Strict Exogeneity and Seasonal Dummy Variables A seasonal dummy is nonrandom: Whatever sample you draw, winter is winter, never summer. The covariance of a random variable and a constant is zero.
Strict Exogeneity and Seasonal Dummy Variables A seasonal dummy is nonrandom: Whatever sample you draw, winter is winter, never summer. The covariance of a random variable and a constant is zero.
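Spelling that covariance argument out as a short worked step (the notation is chosen here, not taken from the answer): if the dummy value $d_s$ in any period is a fixed number, then $$ {\rm Cov}(u_t, d_s) = E(u_t d_s) - E(u_t)E(d_s) = d_s E(u_t) - d_s E(u_t) = 0 , $$ which holds for any constant regressor, whatever the distribution of $u_t$.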
53,886
Within model with plm package
The two estimators are computed differently, but are numerically identical, so essentially it doesn't matter. The within estimator is computationally easier since it keeps the size of the design matrix in check, and I would think that is how the within estimator is implemented. Here is some R code to demonstrate this library(plm) data("Produc", package = "plm") plmResults <- plm(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp, data = Produc, index = c("state","year")) summary(plmResults) regResults <- lm(log(gsp) ~ as.factor(state) + log(pcap) + log(pc) + log(emp) + unemp, data = Produc) summary(regResults) Or, if you prefer, some Stata code, webuse nlswork xtset idcode xtreg ln_w grade c.age##c.age c.ttl_exp##c.ttl_exp c.tenure##c.tenure /// 2.race not_smsa south, fe areg ln_w grade c.age##c.age c.ttl_exp##c.ttl_exp c.tenure##c.tenure /// 2.race not_smsa south, absorb(idcode) A proof using the Frisch-Waugh-Lovell theorem can easily be given. Note one crucial point that for a large number of groups, that is, $n\to \infty$, the estimates of the coefficients on the group dummies are not consistent.
Within model with plm package
The two estimators are computed differently, but are numerically identical, so essentially it doesn't matter. The within estimator is computationally easier since it keeps the size of the design matri
Within model with plm package The two estimators are computed differently, but are numerically identical, so essentially it doesn't matter. The within estimator is computationally easier since it keeps the size of the design matrix in check, and I would think that is how the within estimator is implemented. Here is some R code to demonstrate this library(plm) data("Produc", package = "plm") plmResults <- plm(log(gsp) ~ log(pcap) + log(pc) + log(emp) + unemp, data = Produc, index = c("state","year")) summary(plmResults) regResults <- lm(log(gsp) ~ as.factor(state) + log(pcap) + log(pc) + log(emp) + unemp, data = Produc) summary(regResults) Or, if you prefer, some Stata code, webuse nlswork xtset idcode xtreg ln_w grade c.age##c.age c.ttl_exp##c.ttl_exp c.tenure##c.tenure /// 2.race not_smsa south, fe areg ln_w grade c.age##c.age c.ttl_exp##c.ttl_exp c.tenure##c.tenure /// 2.race not_smsa south, absorb(idcode) A proof using the Frisch-Waugh-Lovell theorem can easily be given. Note one crucial point that for a large number of groups, that is, $n\to \infty$, the estimates of the coefficients on the group dummies are not consistent.
Within model with plm package The two estimators are computed differently, but are numerically identical, so essentially it doesn't matter. The within estimator is computationally easier since it keeps the size of the design matri
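The within/dummy-variable equivalence above can also be checked by demeaning the data by hand (a Frisch-Waugh-Lovell style check). This sketch reuses the Produc data from the answer; the point estimates should match plmResults and regResults, although the naive lm() standard errors would need a degrees-of-freedom correction.

# Within transformation by hand: subtract state means, then run OLS
library(plm)
data("Produc", package = "plm")
vars <- with(Produc, data.frame(lgsp = log(gsp), lpcap = log(pcap),
                                lpc = log(pc), lemp = log(emp), unemp = unemp))
demeaned <- as.data.frame(lapply(vars, function(x) x - ave(x, Produc$state)))
coef(lm(lgsp ~ lpcap + lpc + lemp + unemp - 1, data = demeaned))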
53,887
What are pure random sampling and orthogonal sampling?
Latin Hypercube (LHC) sampling is a sampling method that ensures that each sampling space dimension is roughly evenly sampled. Orthogonal sampling is basically an extension to Latin hypercube sampling that ensures that each subspace is evenly sampled. This also ensures that correlation between sampling dimensions is minimised. In the following image, from Wikipedia, you can see that in the LHC example (II), the two dimensions have a strong (negative) correlation, and also the combinations of (low A, low B) and (high A, high B) (the bottom left and top right corners) are not sampled at all. This may confound any results, since it will be difficult to tell whether the A or B variable is causing the effects you're seeing. In contrast, in the Orthogonal sampling example (III), there is very little correlation between A and B, and each of the 4 subspaces is evenly sampled. Note that the subspaces are somewhat arbitrary, but larger than a single sample, and smaller than the entire sample space. Sounds to me like you actually implemented orthogonal sampling, rather than plain LHC, but depending on the project, there is no disadvantage to doing that. The usual process for generating a plain LHC is to divide the sample space up evenly into n divisions in each dimension, and then randomly allocate n samples to those divisions, ensuring that no two samples share the same division in a given dimension (which also ensures, by the pigeonhole principle, that all the divisions are sampled).
What are pure random sampling and orthogonal sampling?
Latin Hypercube (LHC) sampling is a sampling method that ensures that each sampling space dimension is roughly evenly sampled. Orthogonal sampling is basically an extension to Latin hypercube sampling
What are pure random sampling and orthogonal sampling? Latin Hypercube (LHC) sampling is a sampling method that ensures that each sampling space dimension is roughly evenly sampled. Orthogonal sampling is basically an extension to Latin hypercube sampling that ensures that each subspace is evenly sampled. This also ensures that correlation between sampling dimensions is minimised. In the following image, from Wikipedia, you can see that in the LHC example (II), the two dimensions have a strong (negative) correlation, and also the combinations of (low A, low B) and (high A, high B) (the bottom left and top right corners) are not sampled at all. This may confound any results, since it will be difficult to tell whether the A or B variable is causing the effects you're seeing. In contrast, in the Orthogonal sampling example (III), there is very little correlation between A and B, and each of the 4 subspaces is evenly sampled. Note that the subspaces are somewhat arbitrary, but larger than a single sample, and smaller than the entire sample space. Sounds to me like you actually implemented orthogonal sampling, rather than plain LHC, but depending on the project, there is no disadvantage to doing that. The usual process for generating a plain LHC is to divide the sample space up evenly into n divisions in each dimension, and then randomly allocate n samples to those divisions, ensuring that no two samples share the same division in a given dimension (which also ensures, by the pigeonhole principle, that all the divisions are sampled).
What are pure random sampling and orthogonal sampling? Latin Hypercube (LHC) sampling is a sampling method that ensures that each sampling space dimension is roughly evenly sampled. Orthogonal sampling is basically an extension to Latin hypercube sampling
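The construction described in the last paragraph takes only a few lines of R; this is an illustrative sketch (2 dimensions, n = 10 samples), not the code the poster used.

# Plain Latin hypercube: one random permutation of the n strata per dimension,
# plus a random position within each stratum
lhc <- function(n, d) sapply(seq_len(d), function(j) (sample(n) - runif(n)) / n)
set.seed(42)
x <- lhc(10, 2)
# each column has exactly one point in each of the 10 equal-width bins
apply(x, 2, function(col) table(cut(col, breaks = seq(0, 1, by = 0.1))))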
53,888
Can one have two random variables, perfectly correlated, but with different variances (as percent of their mean)?
Sure you can, e.g. set.seed(38821010) x <- rnorm(100) y <- 5*x + 3 var(y)/mean(y) var(x)/mean(x) cor(x,y) as an example.
Can one have two random variables, perfectly correlated, but with different variances (as percent of
Sure you can, e.g. set.seed(38821010) x <- rnorm(100) y <- 5*x + 3 var(y)/mean(y) var(x)/mean(x) cor(x,y) as an example.
Can one have two random variables, perfectly correlated, but with different variances (as percent of their mean)? Sure you can, e.g. set.seed(38821010) x <- rnorm(100) y <- 5*x + 3 var(y)/mean(y) var(x)/mean(x) cor(x,y) as an example.
Can one have two random variables, perfectly correlated, but with different variances (as percent of Sure you can, e.g. set.seed(38821010) x <- rnorm(100) y <- 5*x + 3 var(y)/mean(y) var(x)/mean(x) cor(x,y) as an example.
53,889
Can one have two random variables, perfectly correlated, but with different variances (as percent of their mean)?
To put what Peter said in simple terms: if $Y$ is a linear function of $X$, the absolute value of the Pearson correlation is 1, but the variances will differ unless the coefficient of $X$ is 1. This is simply due to the fact that ${\rm Var}(cX)=c^2{\rm Var}(X)$ and ${\rm Var}(X+a)={\rm Var}(X)$ for any constants $a$ and $c$. Now adding the idea that they differ with regard to the ratio of variance to mean, we see that $E(cX+a)=cE(X)+a$. So ${\rm Var}(Y)/E(Y)=c^2{\rm Var}(X)/(cE(X)+a)$. Take $a=0$, $c>0$ and $c$ not equal to 1; then ${\rm Var}(Y)/E(Y)=c\,{\rm Var}(X)/E(X)$, which is not equal to ${\rm Var}(X)/E(X)$.
Can one have two random variables, perfectly correlated, but with different variances (as percent of
I think to put what Peter said in simple terms. If Y is a linear function of X the absolute value of the Pearson correlation is 1. But the variances will differ unless the coefficient of X is 1. Thi
Can one have two random variables, perfectly correlated, but with different variances (as percent of their mean)? To put what Peter said in simple terms: if $Y$ is a linear function of $X$, the absolute value of the Pearson correlation is 1, but the variances will differ unless the coefficient of $X$ is 1. This is simply due to the fact that ${\rm Var}(cX)=c^2{\rm Var}(X)$ and ${\rm Var}(X+a)={\rm Var}(X)$ for any constants $a$ and $c$. Now adding the idea that they differ with regard to the ratio of variance to mean, we see that $E(cX+a)=cE(X)+a$. So ${\rm Var}(Y)/E(Y)=c^2{\rm Var}(X)/(cE(X)+a)$. Take $a=0$, $c>0$ and $c$ not equal to 1; then ${\rm Var}(Y)/E(Y)=c\,{\rm Var}(X)/E(X)$, which is not equal to ${\rm Var}(X)/E(X)$.
Can one have two random variables, perfectly correlated, but with different variances (as percent of I think to put what Peter said in simple terms. If Y is a linear function of X the absolute value of the Pearson correlation is 1. But the variances will differ unless the coefficient of X is 1. Thi
53,890
Contradiction between significant effect in multiple regression, but non-significant t-test on its own
The multiple regression model controls for other sources of variability in the DV, whereas in the t-test, all of that variability is lumped into the error term. Thus, the t-test has lower statistical power to detect the effect. Under the assumption that the effect is real, however, the t-test would show 'significance' with a sample that was large enough.
Contradiction between significant effect in multiple regression, but non-significant t-test on its o
The multiple regression model controls for other sources of variability in the DV, whereas in the t-test, all of that variability is lumped into the error term. Thus, the t-test has lower statistical
Contradiction between significant effect in multiple regression, but non-significant t-test on its own The multiple regression model controls for other sources of variability in the DV, whereas in the t-test, all of that variability is lumped into the error term. Thus, the t-test has lower statistical power to detect the effect. Under the assumption that the effect is real, however, the t-test would show 'significance' with a sample that was large enough.
Contradiction between significant effect in multiple regression, but non-significant t-test on its o The multiple regression model controls for other sources of variability in the DV, whereas in the t-test, all of that variability is lumped into the error term. Thus, the t-test has lower statistical
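A quick simulation sketches the point about statistical power; all numbers and variable names here are invented, not taken from the question.

# Small gender effect plus a strong covariate: the t-test buries the
# covariate-explained variance in its error term, the regression does not.
set.seed(7)
n <- 120
gender <- rep(0:1, each = n / 2)
covar <- rnorm(n)
complaining <- 0.5 * gender + 3 * covar + rnorm(n)
t.test(complaining ~ gender)$p.value                                # typically not significant
summary(lm(complaining ~ gender + covar))$coefficients["gender", ]  # typically significant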
53,891
Contradiction between significant effect in multiple regression, but non-significant t-test on its own
Gung already gave a good answer. I would also add that in a model with 10 covariates, it's very easy to obtain small, sometimes spurious, effects, just because your other variables are absorbing so much variance. I would examine some effect size metrics (such as delta R^2) for your gender effect to help you determine whether your gender effect is real.
Contradiction between significant effect in multiple regression, but non-significant t-test on its o
Gung already gave a good answer. I would also add that in a model with 10 covariates, it's very easy to obtain small, sometimes spurious, effects, just because your other variables are absorbing so m
Contradiction between significant effect in multiple regression, but non-significant t-test on its own Gung already gave a good answer. I would also add that in a model with 10 covariates, it's very easy to obtain small, sometimes spurious, effects, just because your other variables are absorbing so much variance. I would examine some effect size metrics (such as delta R^2) for your gender effect to help you determine whether your gender effect is real.
Contradiction between significant effect in multiple regression, but non-significant t-test on its o Gung already gave a good answer. I would also add that in a model with 10 covariates, it's very easy to obtain small, sometimes spurious, effects, just because your other variables are absorbing so m
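One way to get the delta R^2 mentioned above is to compare nested models with and without gender. The data frame and column names below are hypothetical stand-ins for the poster's 10-covariate model.

# dat: hypothetical data frame with complaining, gender and covariates x1..x9
full <- lm(complaining ~ gender + x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9, data = dat)
reduced <- update(full, . ~ . - gender)
summary(full)$r.squared - summary(reduced)$r.squared  # delta R^2 for gender
anova(reduced, full)                                  # F test of the increment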
53,892
Contradiction between significant effect in multiple regression, but non-significant t-test on its own
In the model, gender is one of 10 covariates. It has some influence on the response (complaining) in conjunction with the other covariates. But by itself it makes less of a difference, and that difference is not statistically significant.
Contradiction between significant effect in multiple regression, but non-significant t-test on its o
In the model gender is one of 10 covariates. It has some influence on the response complaining in conjunction with the other covariates. But by itself it makes less of a difference and is not a stat
Contradiction between significant effect in multiple regression, but non-significant t-test on its own In the model, gender is one of 10 covariates. It has some influence on the response (complaining) in conjunction with the other covariates. But by itself it makes less of a difference, and that difference is not statistically significant.
Contradiction between significant effect in multiple regression, but non-significant t-test on its o In the model gender is one of 10 covariates. It has some influence on the response complaining in conjunction with the other covariates. But by itself it makes less of a difference and is not a stat
53,893
Two-sided permutation test vs. two one-sided [duplicate]
Here is an R example of permutation testing. ## dummy data set.seed(1) x <- runif(20) y <- 0.5 * x y <- y + rnorm(20) ## set up for the permutation, compute observed R nullR <- numeric(length = 1000) nullR[1] <- cor(x, y) ## observed R in [1] N <- length(x) ## permutation test for(i in seq_len(999) + 1) { nullR[i] <- cor(x[sample(N)], y) } hist(nullR) ## histogram of R under H0 Now we can compute the permutation $p$ from the permutation distribution for the various tailed tests you consider: > ## one side H1 R > 0 > sum(nullR >= nullR[1]) / length(nullR) [1] 0.908 > ## one side H1 R < 0 > sum(nullR <= nullR[1]) / length(nullR) [1] 0.093 > ## two sided > sum(abs(nullR) >= abs(nullR[1])) / length(nullR) [1] 0.177
Two-sided permutation test vs. two one-sided [duplicate]
Here is an R example of permutation testing. ## dummy data set.seed(1) x <- runif(20) y <- 0.5 * x y <- y + rnorm(20) ## set up for the permutation, compute observed R nullR <- numeric(length = 1000)
Two-sided permutation test vs. two one-sided [duplicate] Here is an R example of permutation testing. ## dummy data set.seed(1) x <- runif(20) y <- 0.5 * x y <- y + rnorm(20) ## set up for the permutation, compute observed R nullR <- numeric(length = 1000) nullR[1] <- cor(x, y) ## observed R in [1] N <- length(x) ## permutation test for(i in seq_len(999) + 1) { nullR[i] <- cor(x[sample(N)], y) } hist(nullR) ## histogram of R under H0 Now we can compute the permutation $p$ from the permutation distribution for the various tailed tests you consider: > ## one side H1 R > 0 > sum(nullR >= nullR[1]) / length(nullR) [1] 0.908 > ## one side H1 R < 0 > sum(nullR <= nullR[1]) / length(nullR) [1] 0.093 > ## two sided > sum(abs(nullR) >= abs(nullR[1])) / length(nullR) [1] 0.177
Two-sided permutation test vs. two one-sided [duplicate] Here is an R example of permutation testing. ## dummy data set.seed(1) x <- runif(20) y <- 0.5 * x y <- y + rnorm(20) ## set up for the permutation, compute observed R nullR <- numeric(length = 1000)
53,894
Two-sided permutation test vs. two one-sided [duplicate]
Your formula for $p_2$ only works if you have a distribution symmetric around $0$, which at least your illustration does not seem to have. A better estimate is given by $p_2(t) = {\rm min}(2p_{+}, 2p_{-})$, using your definitions of $p_{+}$ and $p_{-}$.
Two-sided permutation test vs. two one-sided [duplicate]
Your formula for $p_2$ only works if you have a distribution symmetric around $0$, which at least your illustration does not seem to have. A better estimate is given by $p_2(t) = {\rm min}(2p_{+}, 2p_{-}
Two-sided permutation test vs. two one-sided [duplicate] Your formula for $p_2$ only works if you have a distribution symmetric around $0$, which at least your illustration does not seem to have. A better estimate is given by $p_2(t) = {\rm min}(2p_{+}, 2p_{-})$, using your definitions of $p_{+}$ and $p_{-}$.
Two-sided permutation test vs. two one-sided [duplicate] Your formula for $p_2$ only works if you have a distribution symmetric around $0$, which at least your illustration does not seem to have. A better estimate is given by $p_2(t) = {\rm min}(2p_{+}, 2p_{-}
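In code, this rule could be applied to the nullR vector built in the other answer on this page (so the snippet assumes that object already exists):

pPlus  <- sum(nullR >= nullR[1]) / length(nullR)
pMinus <- sum(nullR <= nullR[1]) / length(nullR)
2 * min(pPlus, pMinus)   # = min(2 p+, 2 p-); one might additionally cap this at 1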
53,895
Why does my naive Bayes classifier only give me probabilities near 0?
Naive Bayes generally uses a decision rule like $$ \text{argmax}_{C_i} P(C_i)P(D|C_i), $$ which comes from the fact we can write $$ P(C_i|D) = \frac{P(C_i)P(D|C_i)}{P(D)} $$ and drop the denominator $P(D)$ since it does not depend on the class. However, since $P(D) \ll 1$ (i.e. there are many possible documents), neglecting it will cause the output of your algorithm to be quite small, so this isn't necessarily an indication that your implementation is incorrect. A practical tip: One thing you can and should do is work with sums of log probabilities rather than products of probabilities to avoid underflow errors. Rather than doing $$ P(D|C_i) = \prod_{w_j \in D} P(w_j|C_i), $$ do $$ \log P(D|C_i) = \sum_{w_j \in D} \log P(w_j|C_i). $$ You'll also need to deal with unseen words, since zero probabilities will give you problems.
Why does my naive Bayes classifier only give me probabilities near 0?
Naive Bayes generally uses a decision rule like $$ \text{argmax}_{C_i} P(C_i)P(D|C_i), $$ which comes from the fact we can write $$ P(C_i|D) = \frac{P(C_i)P(D|C_i)}{P(D)}. $$ and drop the denominator
Why does my naive Bayes classifier only give me probabilities near 0? Naive Bayes generally uses a decision rule like $$ \text{argmax}_{C_i} P(C_i)P(D|C_i), $$ which comes from the fact we can write $$ P(C_i|D) = \frac{P(C_i)P(D|C_i)}{P(D)} $$ and drop the denominator $P(D)$ since it does not depend on the class. However, since $P(D) \ll 1$ (i.e. there are many possible documents), neglecting it will cause the output of your algorithm to be quite small, so this isn't necessarily an indication that your implementation is incorrect. A practical tip: One thing you can and should do is work with sums of log probabilities rather than products of probabilities to avoid underflow errors. Rather than doing $$ P(D|C_i) = \prod_{w_j \in D} P(w_j|C_i), $$ do $$ \log P(D|C_i) = \sum_{w_j \in D} \log P(w_j|C_i). $$ You'll also need to deal with unseen words, since zero probabilities will give you problems.
Why does my naive Bayes classifier only give me probabilities near 0? Naive Bayes generally uses a decision rule like $$ \text{argmax}_{C_i} P(C_i)P(D|C_i), $$ which comes from the fact we can write $$ P(C_i|D) = \frac{P(C_i)P(D|C_i)}{P(D)}. $$ and drop the denominator
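A minimal R sketch of the log-space scoring described above, with add-one (Laplace) smoothing for unseen words; the count matrix, priors and tokenised document here are invented placeholders, and skipping out-of-vocabulary words is just one possible choice.

# counts: matrix of word counts (rows = classes, cols = vocabulary)
# prior:  vector of class prior probabilities; doc: character vector of words
log_scores <- function(doc, counts, prior, alpha = 1) {
  V <- ncol(counts)
  logLik <- sapply(rownames(counts), function(cl) {
    p_w <- (counts[cl, ] + alpha) / (sum(counts[cl, ]) + alpha * V)  # smoothed P(w|C)
    idx <- match(doc, colnames(counts))
    sum(log(p_w[idx[!is.na(idx)]]))   # out-of-vocabulary words skipped here
  })
  log(prior) + logLik                 # unnormalised log posterior per class
}
counts <- rbind(spam = c(viagra = 5, meeting = 1), ham = c(viagra = 0, meeting = 7))
prior <- c(spam = 0.5, ham = 0.5)
log_scores(c("viagra", "viagra", "meeting"), counts, prior)  # pick the largest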
53,896
Likert scale question divided into different group. How to calculate mean of different group?
See this question: Analyzing Likert scales Agresti does a lot of this ordinal data analysis (e.g., "Analysis of Ordinal Categorical Data"). For your particular problem, I would suggest looking at three methods: multiple hypothesis testing http://en.wikipedia.org/wiki/Multiple_comparisons, mixed effects models http://en.wikipedia.org/wiki/Mixed_model package lme4 function lmer() in R, and cumulative link mixed models http://cran.r-project.org/web/packages/ordinal/vignettes/clmm2_tutorial.pdf package ordinal function clmm() in R. In general, I wouldn't recommend doing traditional multiple testing since that assumes the data is ratio (rather than ordinal like you have). If you want to make that assumption though, you can just test to see which questions have an average response different from the center of the Likert scale, and then use a correction to take into account the fact that you did 9+6+7+2+4+2 tests. For the mixed effects models use random effects, and treat each group of questions separately ("utility of the program", etc.). Treat each question as a random effect (there is a population of possible questions you could have chosen, and you happened to pick these 9 questions about utility), and treat the respondent as a random effect (there is a population of possible people who you want to gather opinions about, and you happened to sample this group). Hence, the model is $y_{ij}=\mu + a_i + b_j + e_{ij}$ where $y_{ij}$ is the response of person $i$ to question $j$, $a_i$ is the random effect due to person $i$ (you have 16 people), $b_j$ is the random effect due to question $j$ (you have 9 questions in the group "utility"), and $e_{ij}$ is the error of how much person $i$'s response to question $j$ differed from the model. Using the lme4 package, you can estimate $\mu$ and test if it is significantly different from the center of the Likert scale. Using the ordinal package, you can do this more carefully taking into account that your data is ordinal instead of ratio, but you lose some of the interpretability of the linear mixed effects model. Those packages use a sort of funny notation. Suppose your data is in a dataframe called dat with columns response, question, person. Then you can implement this as follows: require(lme4) lmer(response ~ 1 + (1 | question) + (1 | person), data=dat) require(ordinal) clmm(ordered(response) ~ 1 + (1 | question) + (1 | person), data=dat)
Likert scale question divided into different group. How to calculate mean of different group?
See this question: Analyzing Likert scales Agresti does a lot of this ordinal data analysis (e.g., "Analysis of Ordinal Categorical Data"). For your particular problem, I would suggest looking at thre
Likert scale question divided into different group. How to calculate mean of different group? See this question: Analyzing Likert scales Agresti does a lot of this ordinal data analysis (e.g., "Analysis of Ordinal Categorical Data"). For your particular problem, I would suggest looking at three methods: multiple hypothesis testing http://en.wikipedia.org/wiki/Multiple_comparisons, mixed effects models http://en.wikipedia.org/wiki/Mixed_model package lme4 function lmer() in R, and cumulative link mixed models http://cran.r-project.org/web/packages/ordinal/vignettes/clmm2_tutorial.pdf package ordinal function clmm() in R. In general, I wouldn't recommend doing traditional multiple testing since that assumes the data is ratio (rather than ordinal like you have). If you want to make that assumption though, you can just test to see which questions have an average response different from the center of the Likert scale, and then use a correction to take into account the fact that you did 9+6+7+2+4+2 tests. For the mixed effects models use random effects, and treat each group of questions separately ("utility of the program", etc.). Treat each question as a random effect (there is a population of possible questions you could have chosen, and you happened to pick these 9 questions about utility), and treat the respondent as a random effect (there is a population of possible people who you want to gather opinions about, and you happened to sample this group). Hence, the model is $y_{ij}=\mu + a_i + b_j + e_{ij}$ where $y_{ij}$ is the response of person $i$ to question $j$, $a_i$ is the random effect due to person $i$ (you have 16 people), $b_j$ is the random effect due to question $j$ (you have 9 questions in the group "utility"), and $e_{ij}$ is the error of how much person $i$'s response to question $j$ differed from the model. Using the lme4 package, you can estimate $\mu$ and test if it is significantly different from the center of the Likert scale. Using the ordinal package, you can do this more carefully taking into account that your data is ordinal instead of ratio, but you lose some of the interpretability of the linear mixed effects model. Those packages use a sort of funny notation. Suppose your data is in a dataframe called dat with columns response, question, person. Then you can implement this as follows: require(lme4) lmer(response ~ 1 + (1 | question) + (1 | person), data=dat) require(ordinal) clmm(ordered(response) ~ 1 + (1 | question) + (1 | person), data=dat)
Likert scale question divided into different group. How to calculate mean of different group? See this question: Analyzing Likert scales Agresti does a lot of this ordinal data analysis (e.g., "Analysis of Ordinal Categorical Data"). For your particular problem, I would suggest looking at thre
53,897
Likert scale question divided into different group. How to calculate mean of different group?
Often questions are grouped according to domains (your groups are like domains) and comparisons are made on the domains by summing the individual question scores (equivalent to averaging). This makes sense if the scoring is consistent from one question to another.
Likert scale question divided into different group. How to calculate mean of different group?
Often questions are grouped according to domains (your groups are like domains) and comparisons are made on the domains by summing the individual question scores (equivalent to averaging). This makes
Likert scale question divided into different group. How to calculate mean of different group? Often questions are grouped according to domains (your groups are like domains) and comparisons are made on the domains by summing the individual question scores (equivalent to averaging). This makes sense if the scoring is consistent from one question to another.
Likert scale question divided into different group. How to calculate mean of different group? Often questions are grouped according to domains (your groups are like domains) and comparisons are made on the domains by summing the individual question scores (equivalent to averaging). This makes
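For example, if the 16 responses sit in a data frame with one column per question (the column names here are invented), the domain scores are just row means:

# resp: 16 rows (respondents), columns util1..util9, org1..org6, ...
utility_score <- rowMeans(resp[, paste0("util", 1:9)])
org_score <- rowMeans(resp[, paste0("org", 1:6)])
c(utility = mean(utility_score), organisation = mean(org_score))  # group means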
53,898
Likert scale question divided into different group. How to calculate mean of different group?
The mean would be totally acceptable. Some people like to think that it doesn't make sense because the distance between two values (e.g. 2 - 3 vs. 4 - 5) are not necessarily homogeneous. Those aren't the kinds of issues that keep me awake at night. Another option is to consider proportions of top-box responses. In survey design, you can look at the binomial outcome of getting a 5 or 4 or above as Y/N indicators of achieving a desired level of satisfaction. You would easily be able to cluster within sets of questions by taking the average number of top-box responses. A 100% here would indicate that the trainer achieved top-box responses on all questions whereas 66% indicates that 4 out of 6 achieved satisfactory levels.
Likert scale question divided into different group. How to calculate mean of different group?
The mean would be totally acceptable. Some people like to think that it doesn't make sense because the distance between two values (e.g. 2 - 3 vs. 4 - 5) are not necessarily homogeneous. Those aren't
Likert scale question divided into different group. How to calculate mean of different group? The mean would be totally acceptable. Some people like to think that it doesn't make sense because the distance between two values (e.g. 2 - 3 vs. 4 - 5) are not necessarily homogeneous. Those aren't the kinds of issues that keep me awake at night. Another option is to consider proportions of top-box responses. In survey design, you can look at the binomial outcome of getting a 5 or 4 or above as Y/N indicators of achieving a desired level of satisfaction. You would easily be able to cluster within sets of questions by taking the average number of top-box responses. A 100% here would indicate that the trainer achieved top-box responses on all questions whereas 66% indicates that 4 out of 6 achieved satisfactory levels.
Likert scale question divided into different group. How to calculate mean of different group? The mean would be totally acceptable. Some people like to think that it doesn't make sense because the distance between two values (e.g. 2 - 3 vs. 4 - 5) are not necessarily homogeneous. Those aren't
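A sketch of the top-box idea, using the same kind of hypothetical resp data frame as above and treating a rating of 4 or 5 as top-box:

topbox <- resp[, paste0("util", 1:9)] >= 4   # TRUE/FALSE per item and respondent
rowMeans(topbox)        # per respondent: share of the 9 utility items in the top box
mean(colMeans(topbox))  # overall top-box rate for the utility group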
53,899
Likert scale question divided into different group. How to calculate mean of different group?
With 16 respondents, you may be better served just by reading each person's survey responses individually, and following up with them to understand any particularly high or low scores :-) But, since this is a statistical Q&A site, I'll discuss the statistical approach. Ideally, the survey would ask an overall question about the utility of the program and use that as your "success" metric, and the 9 individual questions as your "diagnostic" metrics, and so on for each overall topic. Lacking this, I would recommend just using the same measures you use for the individual questions. Mean, Median, Top-box score, etc. run on the entire set of responses to all 9 questions. The drawback to this approach is that it assumes that each of the 9 questions are independent and equally important. In reality, with 9 questions about the utility of the program, you probably have several questions asking basically the same thing, placing undue weight on those topics in your analysis. You also probably have a couple of questions asking about things the respondent doesn't actually associate with more overall utility. You could somewhat correct for overlapping topics if you had unlimited amounts of time through factor analysis (identifying groups of dimensions that can be combined into a single combined measure), although even this approach is limited as it can be tricked by coincidences. You can't correct for the relative importance of different topics, because there is nothing in the data that tells you what is more important.
Likert scale question divided into different group. How to calculate mean of different group?
With 16 respondents, you may be better served just by reading each person's survey responses individually, and following up with them to understand any particularly high or low scores :-) But, since t
Likert scale question divided into different group. How to calculate mean of different group? With 16 respondents, you may be better served just by reading each person's survey responses individually, and following up with them to understand any particularly high or low scores :-) But, since this is a statistical Q&A site, I'll discuss the statistical approach. Ideally, the survey would ask an overall question about the utility of the program and use that as your "success" metric, and the 9 individual questions as your "diagnostic" metrics, and so on for each overall topic. Lacking this, I would recommend just using the same measures you use for the individual questions. Mean, Median, Top-box score, etc. run on the entire set of responses to all 9 questions. The drawback to this approach is that it assumes that each of the 9 questions are independent and equally important. In reality, with 9 questions about the utility of the program, you probably have several questions asking basically the same thing, placing undue weight on those topics in your analysis. You also probably have a couple of questions asking about things the respondent doesn't actually associate with more overall utility. You could somewhat correct for overlapping topics if you had unlimited amounts of time through factor analysis (identifying groups of dimensions that can be combined into a single combined measure), although even this approach is limited as it can be tricked by coincidences. You can't correct for the relative importance of different topics, because there is nothing in the data that tells you what is more important.
Likert scale question divided into different group. How to calculate mean of different group? With 16 respondents, you may be better served just by reading each person's survey responses individually, and following up with them to understand any particularly high or low scores :-) But, since t
53,900
Likert scale question divided into different group. How to calculate mean of different group?
You are getting some good responses here. I will see if I can organize some of this information and add some other bits to create a fuller picture for you. Your project appears to be entirely descriptive rather than inferential, so I think you don't have to worry about as much. For the most part, as several others suggest, I think you can probably just average the ratings from the 9 questions regarding utility (since that's what you're interested in) for the 16 participants. People are often concerned about Likert items being ordinal rather than interval in nature, but when combining lots of Likert items into a scale, it can often be reasonable to consider the scale as roughly interval. Is yours truly interval? We'll never really know, but it probably doesn't matter in this context. (Here's a great discussion of the issues on CV.) Furthermore, you can calculate the standard deviation, from that the standard error (i.e., $SD/\sqrt{16}$), and then a 95% confidence interval can be approximated by multiplying the SE by 2 and adding (& subtracting) the product from your mean. This interval can serve as a measure of how much the mean might be likely to bounce around if you were to do this again. Of course, be sure all your items are scored the same way before you do all of this. As for the rest of your topics, it sounds like you are less interested in them. You could always follow this procedure with them anyway, but you really need to have several items for this approach to be reasonable. Four seems like a bare minimum, and I definitely would not want to combine two items. If you haven't administered the survey already, consider coming up with other items to probe people's assessments of those topics. I would generally prefer 5-8. In the long run, if you wanted a more sophisticated approach to developing an instrument that would allow you to measure satisfaction with the program, you should look into factor analysis and related methods.
Likert scale question divided into different group. How to calculate mean of different group?
You are getting some good responses here. I will see if I can organize some of this information and add some other bits to create a fuller picture for you. Your project appears to be entirely descri
Likert scale question divided into different group. How to calculate mean of different group? You are getting some good responses here. I will see if I can organize some of this information and add some other bits to create a fuller picture for you. Your project appears to be entirely descriptive rather than inferential, so I think you don't have to worry about as much. For the most part, as several others suggest, I think you can probably just average the ratings from the 9 questions regarding utility (since that's what you're interested in) for the 16 participants. People are often concerned about Likert items being ordinal rather than interval in nature, but when combining lots of Likert items into a scale, it can often be reasonable to consider the scale as roughly interval. Is yours truly interval? We'll never really know, but it probably doesn't matter in this context. (Here's a great discussion of the issues on CV.) Furthermore, you can calculate the standard deviation, from that the standard error (i.e., $SD/\sqrt{16}$), and then a 95% confidence interval can be approximated by multiplying the SE by 2 and adding (& subtracting) the product from your mean. This interval can serve as a measure of how much the mean might be likely to bounce around if you were to do this again. Of course, be sure all your items are scored the same way before you do all of this. As for the rest of your topics, it sounds like you are less interested in them. You could always follow this procedure with them anyway, but you really need to have several items for this approach to be reasonable. Four seems like a bare minimum, and I definitely would not want to combine two items. If you haven't administered the survey already, consider coming up with other items to probe people's assessments of those topics. I would generally prefer 5-8. In the long run, if you wanted a more sophisticated approach to developing an instrument that would allow you to measure satisfaction with the program, you should look into factor analysis and related methods.
Likert scale question divided into different group. How to calculate mean of different group? You are getting some good responses here. I will see if I can organize some of this information and add some other bits to create a fuller picture for you. Your project appears to be entirely descri
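In R, the mean, standard error and rough 95% interval described above could be computed along these lines, with utility_score standing in for the 16 per-respondent averages of the utility items (the name is invented):

m <- mean(utility_score)
se <- sd(utility_score) / sqrt(length(utility_score))  # SD / sqrt(16)
c(lower = m - 2 * se, mean = m, upper = m + 2 * se)    # approximate 95% CI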