Does inference from a heteroskedasticity-consistent covariance matrix follow the t-distribution or the normal?
Prof. Cribari-Neto was kind enough to suggest an article for this topic: F. Cribari-Neto and M. da G. A. Lima, "Heteroskedasticity-consistent interval estimators," Journal of Statistical Computation and Simulation, vol. 79, no. 6, pp. 787–803, 2009. In this article he uses the normal distribution for the confidence intervals. As I wrote in my question, this is also supported by my resampling study on the subject, where the t-distribution gives considerable over-coverage (96-97 %) for samples $\leq 50$.

A quick review of the t-distribution

Unfortunately I'm not exactly certain why this is. From what I understand, Gosset's Student t-distribution is based upon the uncertainty of $S$ as an estimate of $\sigma$. Two quantities are vital to the t-distribution:
$$Z = (\bar{X}_n-\mu)\frac{\sqrt{n}}{\sigma}$$
$$V = (n-1)\frac{S^2_n}{\sigma^2}$$
where he noted that $V$ follows the $\chi^2$-distribution with $\nu$ degrees of freedom. Now if we combine these two to get the t-value:
$$T = \frac{Z}{\sqrt{V/\nu}} = \frac{(\bar{X}_n-\mu)\frac{\sqrt{n}}{\sigma}}{\sqrt{(n-1)\frac{S^2_n}{\sigma^2}/\nu}} = \frac{(\bar{X}_n-\mu)\frac{\sqrt{n}}{\sigma}}{\frac{S_n}{\sigma\sqrt{\nu}}\sqrt{n-1}}=\frac{(\bar{X}_n-\mu)\sqrt{n}\sqrt{\nu}}{S_n\sqrt{n-1}}$$
Using $\nu = n - 1$, this further simplifies to:
$$T = \frac{(\bar{X}_n-\mu)\sqrt{n}}{S_n}$$
I guess the role of $V$ changes in this case: if there is no uncertainty in $S_n$, then we're left with $Z$ alone, or in formula language:
$$V = \lim_{S_n \to \sigma} (n-1)\frac{S^2_n}{\sigma^2} = (n-1)$$
Hmm... I hope that I got the formulas right. Wikipedia often minimizes the simplification steps, and I thought it would be nice to have them here just to follow exactly what is happening (for those of us not working with maths on an everyday basis, the connection between the raw form and the simplified form can be a little hard to digest). It would be great if someone here could explain exactly why this is happening with the HC estimate.
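As a quick illustration of the two interval choices being compared here, a minimal R sketch (my own simulated data, and my own choice of the HC3 estimator from the sandwich package) that builds a heteroskedasticity-consistent confidence interval both ways:

library(sandwich)  # for vcovHC()

set.seed(1)
n <- 30
x <- runif(n)
y <- 1 + 2 * x + rnorm(n, sd = 0.5 + 2 * x)  # heteroskedastic errors

fit   <- lm(y ~ x)
se_hc <- sqrt(diag(vcovHC(fit, type = "HC3")))["x"]
b     <- coef(fit)["x"]

# normal-based interval (what Cribari-Neto & Lima use)
b + qnorm(c(0.025, 0.975)) * se_hc
# t-based interval with n - 2 degrees of freedom (wider, hence the over-coverage)
b + qt(c(0.025, 0.975), df = n - 2) * se_hc

Repeating this over many simulated samples and recording how often each interval covers the true slope would reproduce the kind of coverage comparison described above.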
SARIMA estimation
Your acf of the seasonally differenced series strongly suggests the need for a regular difference. What follows is the acf of the doubly differenced series (your series regularly differenced). This acf suggests an autoregressive seasonal factor, as the partial acf at lags 12 and 24 are apparently significant, although since the seasonal coefficient will be between -.4 and +.4 it really won't matter if you call it a seasonal MA at lag 12. When models including an MA(1) are tried, statistical significance for the MA(1) is rejected, and it is thus not included. A good final model should include some indicators for anomalous data. So, in conclusion, your preferred model is, in your terms, (0,1,0)(1,1,0) or (0,1,0)(0,1,1).
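For reference, a minimal R sketch of fitting the two candidate models named at the end; monthly data is assumed (seasonal period 12), and series is a placeholder for the asker's data:

# (0,1,0)(1,1,0)[12]: seasonal AR(1) on the doubly differenced series
fit_sar <- arima(series, order = c(0, 1, 0),
                 seasonal = list(order = c(1, 1, 0), period = 12))
# (0,1,0)(0,1,1)[12]: the seasonal MA(1) alternative mentioned above
fit_sma <- arima(series, order = c(0, 1, 0),
                 seasonal = list(order = c(0, 1, 1), period = 12))
AIC(fit_sar, fit_sma)   # with a small seasonal coefficient, the two fit about equally well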
SARIMA estimation
In order to get correct results from the auto.arima() command, try using it as shown below: auto.arima(timeseries, stepwise = FALSE, approximation = FALSE) Hope it helps.
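For context, a minimal self-contained sketch showing that call inside a full workflow with the forecast package; AirPassengers is a built-in monthly series used here purely for illustration, not the asker's data:

library(forecast)

fit <- auto.arima(AirPassengers, stepwise = FALSE, approximation = FALSE)
summary(fit)                  # selected SARIMA order and coefficients
plot(forecast(fit, h = 12))   # 12-month-ahead forecast

Turning off the stepwise search and the approximation makes auto.arima() evaluate many more candidate models exactly, which is slower but less likely to miss the best-fitting order.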
Python packages for numerical data imputation [closed]
scikit-learn has an "Imputer" class you should look into. API Overview Demo EDIT: Looks like this class doesn't support imputing from a linear model. You could always try modifying the sklearn code to support it, maybe even submit a pull request: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/imputation.py EDIT2: Looks like you might have better luck with the more versatile imputation faculties available in the Orange library.
How to simulate an unreplicated factorial design?
The key to generating random data like yours in R is to use ?rnorm. You may also want to set the random seed to a fixed value, via ?set.seed, so that your simulation can be exactly replicated in the future. For convenience, you may prefer to use ?expand.grid to create your factor combinations, although you can do it manually as well (and could conceivably prefer that for the clarity it affords). Once you have set up your simulation, you will want to run it many times to get a sense of its long run behavior; this can be done using ?replicate, I believe, but I usually just nest it in a for loop because I specialize in writing comically inefficient code. Here is an example:

> set.seed(1)
> A <- c(0,1)
> B <- c(0,1)
> C <- c(0,1)
> D <- c(0,1)
> Xmat <- expand.grid(A=A, B=B, C=C, D=D)
> Xmat
   A B C D
1  0 0 0 0
2  1 0 0 0
3  0 1 0 0
4  1 1 0 0
5  0 0 1 0
6  1 0 1 0
7  0 1 1 0
8  1 1 1 0
9  0 0 0 1
10 1 0 0 1
11 0 1 0 1
12 1 1 0 1
13 0 0 1 1
14 1 0 1 1
15 0 1 1 1
16 1 1 1 1
> y <- 3 + 1.5*Xmat$A + 1.5*Xmat$B + 1.5*Xmat$C + 1.5*Xmat$B*Xmat$C +
+      rnorm(n=16, mean=0, sd=1)
> y
 [1] 2.373546 4.683643 3.664371 7.595281 4.829508 5.179532 7.987429 9.738325
 [9] 3.575781 4.194612 6.011781 6.389843 3.878759 3.785300 8.624931 8.955066
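Following up on the point about running the simulation many times, here is a minimal sketch (my own wrapper and analysis step, not part of the original answer) of how ?replicate could be used instead of a for loop:

# simulate one data set and return the fitted coefficients
sim_once <- function() {
  Xmat <- expand.grid(A = 0:1, B = 0:1, C = 0:1, D = 0:1)
  y <- with(Xmat, 3 + 1.5*A + 1.5*B + 1.5*C + 1.5*B*C + rnorm(16))
  fit <- lm(y ~ A*B*C*D, data = Xmat)   # saturated model: no residual df left
  coef(fit)
}

# 1000 replications; each column holds one simulation's coefficients
res <- replicate(1000, sim_once())
rowMeans(res)   # long-run average of each estimated effect

Note that the fully saturated A*B*C*D model uses all 16 observations, so there are no residual degrees of freedom for tests; that is exactly the unreplicated-design issue the question is about, and in practice one would drop higher-order terms or use a half-normal plot of the effects.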
The vcov function cannot be applied?
Actually you already have the S.E. for zeta and delta, calculated from log(zeta) and log(delta), in summary(hyperbfitalv): see Variance-covariance matrix of the parameter estimates wrongly calculated? Let's look at it more closely and see how it was done. lzeta is log(zeta). What you already have, from solve(hyperbfitalv$hessian), is
$$ \sigma _{lzeta}^{2} $$
What you now want to calculate is
$$ \sigma _{zeta}^{2} $$
Notice that
$$ zeta=e^{lzeta} $$
So, applying the delta method:
$$ \sigma _{zeta}^{2}\approx \left(\frac{d\,zeta}{d\, lzeta}\right)^2\cdot \sigma _{lzeta}^{2}=\left(\frac{d\,e^{lzeta}}{d\, lzeta}\right)^2\cdot \sigma _{lzeta}^{2}=(e^{lzeta})^2\cdot\sigma _{lzeta}^{2}=zeta^2\cdot\sigma _{lzeta}^{2} $$
which is essentially how the S.E. of zeta and delta were calculated in the 5th line of the summary(hyperbfitalv) output in Variance-covariance matrix of the parameter estimates wrongly calculated?, e.g.:
>>> sqrt(1.5261031428)*0.002035  # for delta
0.0025139483860139073
@COOLSerdash's answer is great, but he probably didn't realize you already have $\sigma _{lzeta}^{2}$.
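In R terms, the computation described above is only a couple of lines; this sketch assumes the Hessian is on the log scale for zeta and delta, and the index names and the lzeta_hat object are assumptions to be matched to your own fit:

# covariance matrix on the log scale, as in the answer above
vc_log <- solve(hyperbfitalv$hessian)

# suppose lzeta_hat holds the estimate of log(zeta) and its variance is element [2, 2]
# (these positions/names are assumptions; check them against your object)
se_lzeta <- sqrt(vc_log[2, 2])
zeta_hat <- exp(lzeta_hat)
se_zeta  <- zeta_hat * se_lzeta   # delta method: Var(e^x) is approx. (e^x)^2 * Var(x)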
The vcov function cannot be applied?
The reason that the parameters passed to the optimizer are pi, log(zeta), log(delta), mu rather than pi, zeta, delta, mu is most likely to constrain zeta and delta to the positive half-line during optimization. If you need the Hessian with respect to pi, zeta, delta and mu rather than pi, log(zeta), log(delta), and mu, you can write your own objective function, parametrized directly in pi, zeta, delta and mu, and then calculate its second derivatives at the MLE, which is given by the hyperbFit result. But why? Are you trying to get confidence limits for zeta and delta?
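A minimal sketch of that "write your own function" route, using the numDeriv package for the numerical second derivatives; the density call dens() and the *_hat names are placeholders (assumptions), to be replaced with the hyperbolic density and estimates from your own fit:

library(numDeriv)

# negative log-likelihood in the original parametrization (pi, zeta, delta, mu)
negll <- function(par, dat) {
  # replace dens() with the hyperbolic density your fitting package provides
  -sum(log(dens(dat, pi = par[1], zeta = par[2], delta = par[3], mu = par[4])))
}

mle <- c(pi_hat, zeta_hat, delta_hat, mu_hat)   # taken from the hyperbFit result
H   <- hessian(negll, mle, dat = mydata)        # observed information at the MLE
vc  <- solve(H)                                 # approximate covariance on the original scale
sqrt(diag(vc))                                  # standard errors for pi, zeta, delta, mu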
Very large theta values using glm.nb in R - alternative approaches?
It doesn't necessarily mean that there is overdispersion (though it could), just that a saturated model may be a better fit. If you only have 7-9 observations, it will be very difficult to accurately test for overdispersion unless you have some values that are just way out there under a Poisson assumption. Another option you might look into is using the Poisson model but using a transformed value of your predictor rather than a linear fit on the raw variable. If it looks like the larger values of the predictor are where the Y-values are off more, you could try using something like a squared value of the predictor, or if it's the opposite then maybe a log-transform of the predictor. Thinking about overdispersion in a count model is always a good idea, but it does introduce complexity into the model. With so few data points, your best approach might be to keep it as simple as possible.
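As a small illustration of the transformed-predictor idea, a hedged R sketch (the data frame and variable names are placeholders for your own data, and the log transform assumes x > 0):

# Poisson fits with the raw, squared, and log-transformed predictor
fit_raw <- glm(y ~ x,      family = poisson, data = dat)
fit_sq  <- glm(y ~ I(x^2), family = poisson, data = dat)
fit_log <- glm(y ~ log(x), family = poisson, data = dat)
AIC(fit_raw, fit_sq, fit_log)   # with 7-9 points, treat this comparison cautiously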
What is smoothing in gaussian processes
One way to think of Gaussian processes is as kernel density estimation with a fixed, finite number of kernels that are not placed at the data points. In this interpretation, the arguments for why KDEs are smoothing apply.
What is smoothing in gaussian processes
From the book Gaussian Processes for Machine Learning by Rasmussen and Williams: if you're doing GP regression and you want to predict a value at a point $\mathbf{x}^*$, the posterior predictive mean is given by
\begin{align*} \overline{f}_{*} = \mathbf{k}^T_* (K + \sigma^2_n I)^{-1} \mathbf{y} \end{align*}
where $\mathbf{y}$ is the vector of observed outputs. Note this is a linear combination of the observed values $\mathbf{y}$; that is, it can be rewritten as
\begin{align*} \overline{f}_{*} = \sum_{c =1}^{n} \beta_{c} y^{(c)} \end{align*}
As I understand it, using a linear combination of the observed values as your predicted mean is a sort of smoothing.
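To make the "linear combination of $\mathbf{y}$" point concrete, here is a small R sketch with my own toy data and a squared-exponential kernel (not code from the book):

# toy 1-D data
set.seed(2)
x <- sort(runif(20, 0, 5))
y <- sin(x) + rnorm(20, sd = 0.2)
xstar <- 2.5

rbf <- function(a, b, ell = 1) exp(-0.5 * outer(a, b, "-")^2 / ell^2)

K      <- rbf(x, x)
kstar  <- rbf(xstar, x)              # 1 x n
sigma2 <- 0.2^2

# weights beta such that the predictive mean is sum(beta * y)
beta <- kstar %*% solve(K + sigma2 * diag(length(x)))
fbar <- as.numeric(beta %*% y)       # posterior predictive mean at xstar
drop(beta)                           # the beta_c in the sum above; they need not sum to 1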
Incidence Rate Ratio (IRR) in R from linear regression using log-transformed data?
Well, if your numerator is directly interpreted as counts, then both the poisson regression and the log transformed outcome linear regression will be consistent for the same parameters. The only discrepancy in this case is exactly how the observations are weighted (see paragraph 2). If your outcome is rates and you have measured (variable) denominators (such as 1-3 $\mu$gs of biopsied tumor, or 1-20 ccs blood), you need to use some alternative approaches to account for the various weighting differences in the two groups. In both linear regression and Poisson regression, this comes about in the form of an offset. I'm curious whether this should be a consideration in your problem. In OLS, the mean is independent of the variance (under classical assumptions), so your fitted model will have the minimum squared residuals, which will be largely driven by large counts. In the Poisson GLM, large counts are significantly downweighted by inverse variance reweighting. An inspection of the distribution of the data using one or more scatterplots (depending on the number of adjustment variables) and fitted curves is a very important consideration indeed. You will certainly need to verify high leverage / high influence observations to validate the alternative modeling approaches you've proposed. Using robust standard errors (one particular form of robust regression) does not assume that mean is independent of variance, but it does use such a working probability model, so while robust standard errors will be consistent, your point estimates will be unstable, and your inference will be of lower power (than when you can assume a better working probability model for the data). Although R warns you about non-integral counts in Poisson GLMs, there are plenty of sane regression models, especially in, say, ecology, where non-integral Poisson outcomes come about such as plankton concentration in a cubic meter of sampled water from various watersheds, or flow cytometry assessed mRNA concentration in biopsied tumor tissue.
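A minimal R sketch of the offset idea mentioned above; the variable names are hypothetical (counts y, measured denominators denom, a 0/1 exposure indicator group), and the log-outcome model requires y > 0:

# Poisson GLM for a rate: log E[y] = log(denom) + b0 + b1*group
fit_pois <- glm(y ~ group + offset(log(denom)), family = poisson, data = dat)
exp(coef(fit_pois)["group"])   # incidence rate ratio

# log-transformed-outcome analogue; the denominator enters the outcome directly
fit_ols <- lm(log(y / denom) ~ group, data = dat)
exp(coef(fit_ols)["group"])    # ratio of geometric mean rates

Whether the two estimates target the same quantity hinges on the weighting and distributional caveats discussed in the answer above.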
Issues with using Expectation Maximization algorithm
Three possibilities that come to mind:

1. Numerical stability problems: due to the scale of the numbers you are dealing with, the floating point representation causes small inaccuracies that may become significant. This usually happens when combining numbers on very different scales.
2. Incorrect likelihood/gradient: you may have calculated the likelihood or gradient incorrectly.
3. Incorrect implementation of EM: have you implemented EM before? There may be some gap in understanding of the algorithm (no disrespect meant).

From your description it sounds like it is probably numerical issues.
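On the numerical-stability point: a standard remedy in EM implementations is to keep component densities on the log scale and combine them with the log-sum-exp trick. A minimal R sketch (generic, not tied to the asker's model):

# log-sum-exp: computes log(sum(exp(logx))) without underflow/overflow
logsumexp <- function(logx) {
  m <- max(logx)
  m + log(sum(exp(logx - m)))
}

# E-step responsibilities for one observation, given log component weights
# and log densities (both length K), computed entirely on the log scale
log_post <- function(log_w, log_dens) {
  lj <- log_w + log_dens
  lj - logsumexp(lj)    # log responsibilities; exponentiate only at the very end
}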
Proper way to match a reference population for survival analysis
I will try to provide an answer that is relevant to your second question: is there a better [or at least more mainstream] way to obtain a matched reference population?

Relative survival analysis

The standard approach to comparing the survival in a certain subgroup to that in a wider (and much larger) population is to use relative survival analysis. An accessible introduction is available in this article, which is focused on explaining the implementation in the R package relsurv. The essential idea is that the hazard experienced in the specific group under study can be decomposed into two components: one contributed by the experience of the wider population and the other contributed by the characteristics unique to the group under study. One common approach is to view the effect of these components as additive: $\lambda_O = \lambda_P + \lambda_E$, where $\lambda_P$ is the baseline hazard experienced by the population and $\lambda_E$ is the excess hazard attributable to the particular exposures of the group under study. Regression can be performed which allows the difference between $\lambda_O$ and $\lambda_P$ to vary with age, sex, year of birth, etc. I won't reiterate more from the article linked, because I can explain it no better.

Other comments

I would encourage you to match to the general population not just on age but also on other aspects such as sex and year of birth, if you are able to. This is advisable because women and men have markedly different life expectancies; and, since life expectancy has increased substantially in most countries over the 20th century, it may not be reasonable to compare the survival of a 40-year-old born in 1920 with that of a 40-year-old born in 1970, for example.
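For orientation only, a rough sketch of what fitting an additive excess-hazard model looks like with the relsurv package; the data set, variable names, and the exact form of the rate-table mapping here are assumptions from memory, so check them against the package documentation and the article linked above:

library(relsurv)   # also loads survival for Surv()

# slopop is a population rate table shipped with relsurv; for real work you
# would build or obtain one matching your own reference population
fit <- rsadd(Surv(time, status) ~ sex + age,
             rmap = list(age = age * 365.241, sex = sex, year = diag_date),
             ratetable = slopop, data = mydata)
summary(fit)   # coefficients describe the excess hazard lambda_E, not the overall hazard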
How to choose the scaling matrix in ABC (without cheating!)?
One solution to your problem, which you may or may not deem "cheating", is to use dimension reduction before running your sampler (see this review). The most straightforward approach that comes to mind (and is also discussed in the aforementioned review) is partial least squares regression (PLS). The first use of PLS in ABC (at least that I know of) was by Wegmann et al. In this approach, one introduces a "calibration" step before running whatever ABC algorithm (rejection sampling, MCMC, SMC, etc.) you like: a large number of summary statistics are simulated based on draws from the prior, and PLS is used to calculate a transform of the summary statistics explaining the most variance in the parameter space. The observed summary statistics are then transformed, and in the ensuing ABC rejection steps the distance between transformed simulated and observed summary statistics is used instead of the distance between the original summary statistics. In this way, you avoid having to calculate (or invert) $\Sigma_\theta$. The calibration step can be outlined as follows:

1. Sample $B$ draws $\theta' \sim P(\theta)$
2. Conduct $B$ simulations from $S \sim P(S|\theta')$
3. Estimate the PLS transform of $S$ to $S'$

PLS is available for R, Python, and C++.
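A schematic of that calibration step in R, using the pls package; the prior, the simulator simulate_summaries(), the observed summaries s_obs, and the number of components are all placeholders for your own problem:

library(pls)

B <- 5000
theta_prior <- matrix(runif(B * 2), ncol = 2)          # draws from an assumed 2-parameter prior
S <- t(apply(theta_prior, 1, simulate_summaries))      # your simulator (hypothetical)

d   <- data.frame(theta = I(theta_prior), S = I(S))
cal <- plsr(theta ~ S, ncomp = 5, data = d, scale = TRUE)

scores_sim <- scores(cal)                              # PLS scores of the simulated summaries
scores_obs <- predict(cal, type = "scores",
                      newdata = data.frame(S = I(matrix(s_obs, nrow = 1))))
# Euclidean distances between scores_sim and scores_obs then replace distances
# between the raw summary statistics in the ABC rejection step.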
How to interpret the significant interaction of two non-significant main predictors?
You seem to have the right intuition in your last paragraph. It is possible for variables x and z in a regression to appear non-significant even though they have some effect on the dependent variable y. The following small reproducible example illustrates that fact.

set.seed(890)
x <- rnorm(1000, mean=10, sd=3)
z <- rnorm(1000, mean=25, sd=6)
y <- ifelse(z>30, sqrt(x), 0) + rnorm(1000, mean=12, sd=10)
m1 <- lm(y ~ x + z)
m2 <- lm(y ~ x*z)
summary(m1)
summary(m2)

This produces the following output (truncated for readability):

            Estimate Std. Error t value Pr(>|t|)
(Intercept) 10.61151    1.79312   5.918 4.48e-09 ***
x           -0.00765    0.11085  -0.069    0.945
z            0.08651    0.05514   1.569    0.117
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 10.34 on 997 degrees of freedom
Multiple R-squared: 0.002464, Adjusted R-squared: 0.000463
F-statistic: 1.231 on 2 and 997 DF, p-value: 0.2923

            Estimate Std. Error t value Pr(>|t|)
(Intercept) 18.59305    5.11233   3.637  0.00029 ***
x           -0.79087    0.48273  -1.638  0.10167
z           -0.22747    0.19625  -1.159  0.24669
x:z          0.03077    0.01846   1.667  0.09584 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 10.33 on 996 degrees of freedom
Multiple R-squared: 0.005239, Adjusted R-squared: 0.002243
F-statistic: 1.749 on 3 and 996 DF, p-value: 0.1554

As you can see, y depends on x for some levels of z (this is your significant interaction). However, in m1 there does not appear to be a significant effect of either x or z on y when you only include the main effects. In m2 the interaction becomes significant (albeit barely). Note that neither m1 nor m2 is a very good model for the data.

In terms of interpretation, you would probably say that x has a significant effect on y for some values of z. There are several ways of testing for this. The one you mention in the last paragraph, excluding part of your sample based on the score of observations on a certain variable, is usually referred to as "split-sample" analysis in the social sciences. Other ways of testing for this involve calculating the marginal effect of an interaction, depending on the values of one of the two interacted variables; one way to do that is sketched below.
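Continuing the simulated example above, a minimal sketch of the marginal-effect calculation: in the model y = b0 + b1*x + b2*z + b3*x*z, the effect of x is b1 + b3*z, with variance Var(b1) + z^2*Var(b3) + 2*z*Cov(b1, b3). The grid of z values is my own choice:

b  <- coef(m2)
V  <- vcov(m2)
zs <- seq(10, 40, by = 5)                 # values of z at which to evaluate

me    <- b["x"] + b["x:z"] * zs           # marginal effect of x at each z
se_me <- sqrt(V["x", "x"] + zs^2 * V["x:z", "x:z"] + 2 * zs * V["x", "x:z"])
cbind(z = zs, effect = me, lower = me - 1.96 * se_me, upper = me + 1.96 * se_me)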
2x2x5 repeated measures ANOVA: significant 3-way interaction
I'm not sure what you plan to test, but looking at your first graph it seems pretty clear: there's generally an effect of colour, but on the left side and at larger offsets it disappears. I'm guessing you wanted to test all of the effects of colour to see where they were significant and where they weren't. If you found they were all significant, or all not, it wouldn't give you more information than your interaction (and the interaction is independent of such findings). If you found some were and some weren't, it doesn't test your interaction, because the difference between significant and not significant is not itself significant. I suppose you could try to explore something else about the difference in patterns, but given that offset seems like a continuous variable that should have some clear function if it is doing anything, I'm not sure establishing anything else different about the waviness of the lines on the two sides would be where you'd want to go.

UPDATE after viewing comments

The explanation for your interaction would be that the effects of colour and side are consistent until offsets 4 and 5, where the effect of colour only exists for one of the sides. That's just a recasting of what I said before in line with your hypotheses. Keep in mind what you'd have to test post hoc or in planned contrasts to really demonstrate this. Finding an effect just at large offsets is useless because colour itself is interacting with side; therefore you have to show it interacts with offsets to show they're having an effect as well. That's what your ANOVA is telling you. It's already the planned contrast you want. Is there anything else there that could be making that interaction occur? Do you need to explain anything else? If you do the ANOVA at just 4 and 5 you likely won't get an interaction with offset, just one between colour and side, which will be substantially less evidence for what you want to say, not more. Keep in mind, interactions mean something. Look at your data and figure out what they mean before considering further statistical tests. If they're relatively clear, as in this case, then you're done.
2x2x5 repeated measures ANOVA: significant 3-way interaction
Responding to the last question: at each offset, compute the color x side interaction contrast score for each subject and do a t-test on the mean. Then all you have to worry about is adjusting for multiplicity. I think a stepwise Bonferroni will suffice, but others may think differently. On second thought, there is something else to worry about. Finding that the two-way interaction is significant at one offset and not at another does not by itself justify concluding that the two offsets have different two-way interactions. For that, you need to test and reject the 2 x 2 x 2 three-way interaction involving those two offsets. That means 10 more tests. The fact that the overall 2 x 2 x 5 three-way interaction is significant means only that some contrast of the five two-way interactions is significant. The logic here is much the same as in a one-way design where the overall test is significant and you want to know which means differ from which other means.
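A sketch of the per-subject contrast approach in R, assuming a long-format data frame d with columns subject, color, side, offset, and RT, and exactly two levels each of color and side (all names here are placeholders):

# interaction contrast per subject at one offset, then a one-sample t-test on it
one_offset <- function(d, off) {
  cell <- with(subset(d, offset == off),
               tapply(RT, list(subject, color, side), mean))
  # (color1,side1 - color2,side1) - (color1,side2 - color2,side2)
  contrast <- (cell[, 1, 1] - cell[, 2, 1]) - (cell[, 1, 2] - cell[, 2, 2])
  t.test(contrast)$p.value
}

p <- sapply(sort(unique(d$offset)), function(off) one_offset(d, off))
p.adjust(p, method = "holm")   # step-down ("stepwise") Bonferroni adjustment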
2x2x5 repeated measures ANOVA: significant 3-way interaction
Generally speaking, in the presence of significant non-additivity ("interaction"), main effects and lower-order interactions are of less importance. I usually find that by the time we have teased out a three-factor interaction, most of what is going on in the lower-order effects is explained. I (tentatively) concur with your analysis of the second set of plots. The same pattern is present in the first set, but it isn't so obvious. But, as you asked, how to show this? You can show this via single-degree-of-freedom contrasts. This is the sort of situation where over-parameterized models (like the effects model you fitted) are a real pain. What I would do is the following:

1. Refit the model as RT ~ 0 + Color*Side*Offset;
2. On a separate sheet of paper, lay out the 20 cells in the model;
3. Define the Color main effect contrast, the Side main effect contrast, and the Offset main effect contrasts, using the Helmert sequence stepping up from 0 offset;
4. Use those main effect contrasts to derive the four degrees of freedom for the Color x Side x Offset interaction;
5. Do a step-up sequence of F-tests to see where the non-additivity becomes significant.

No, actually what I would do is the same thing using SAS, because I understand how SAS handles linear models better than I grok R.
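For what it's worth, a rough R sketch of the contrast setup in steps 1-3; the data frame and factor names are placeholders, the repeated-measures error structure is ignored for brevity, and an intercept model is used for the contrast-coded fit because R's contrast machinery is easier to read that way:

d$Offset <- factor(d$Offset)
contrasts(d$Offset) <- contr.helmert(5)   # Helmert sequence stepping up from the 0 offset
contrasts(d$Color)  <- contr.sum(2)       # +/-1 main-effect coding
contrasts(d$Side)   <- contr.sum(2)

m <- lm(RT ~ Color * Side * Offset, data = d)
anova(m)   # the Color:Side:Offset line collects the four single-df interaction contrasts

# the layout of the 20 cells in step 2 can be obtained directly as cell means:
m_cells <- lm(RT ~ 0 + Color:Side:Offset, data = d)
coef(m_cells)   # 20 cell means, against which custom contrasts can be tested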
What is the proper way to measure error in collected data for calibration purposes?
I am suggesting the following: try to find the pattern of the error produced by your sensor, ranging from 10-ton to 100-ton objects. You would need as many samples as possible to capture the variation of the error in all classes of weight. Let's suppose you could find a pattern of error (see the figure below). Just as an example, look at series "d". As you said, the absolute error is constant (the % errors are not). If you find a pattern in the error, you can build a model from regression techniques to describe the error as a function of true weight. Then, somehow (don't ask me why), you would adapt the calibration curve to the sensor: if it reads a value "x", the sensor offsets it to the true value "y", according to the calibration curve produced. Of course, the scatter plot is just a very simplified example to illustrate the patterns, but to build the model you would need lots of observations (I do not know how many) to sample all classes of weight (e.g. 18 classes with 5-ton intervals).
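A minimal R sketch of that idea, with made-up numbers standing in for paired (reference weight, sensor reading) observations:

# paired observations: true weight from a reference scale, and the sensor's reading
true_w   <- c(10, 20, 30, 40, 50, 60, 70, 80, 90, 100)   # tons (illustrative)
sensor_w <- true_w + rnorm(10, mean = 1.5, sd = 0.5)     # assumed roughly constant absolute bias

# model the error as a function of the reading, then apply it as a correction
cal     <- lm(I(true_w - sensor_w) ~ sensor_w)
correct <- function(reading) reading + predict(cal, data.frame(sensor_w = reading))
correct(55)   # calibrated estimate for a new sensor reading of 55 tons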
MLE of multinomial distribution with missing values
One way to square your intuition with ML is to recognize that ML estimates are often biased. The ML estimate of $N$ looks like it's biased a little low. There are only two parameters, $N$ and $p=p_1$, because $p_3=p_1=p$ and $p_2 = 1-p_1-p_3 = 1-2p$. The log likelihood for observations $(a,b)$ is $$\log(\Lambda) = \log\binom{N}{a,b,N-a-b} + (N-b)\log(p) + b\log(1-2p)$$ which for any $N$ is maximized at the unique zero of its derivative, $p = (N-b)/(2N)$. It is apparent, too, that as a function of $N$ this is concave downward. Thus to obtain the MLE for $N$ we can scan over $N=a+b, a+b+1, \ldots$ until finding a maximum. (There are more efficient ways, but this works well enough.) For example, with $(a,b)=(3,6)$, the maximum occurs for $\widehat{N}=11$ where $\hat{p} = 5/22$. Here is a histogram of $10^5$ iid draws of this MLE from a Multinomial$(12; 1/4, 1/2, 1/4)$ distribution: The shift to a peak at $\widehat{N}=11$ is clear. For larger $N$ the shift still appears to be leftwards by only a small amount. Here is a histogram from a simulation with a Multinomial$(120; 1/4, 1/2, 1/4)$ distribution: The bias looks like a shift of $1$ or $2$ leftwards (the peak is at $119$ and the mean is $118.96$), but certainly there is not a proportional shift to $11/12 * 120 = 110$. Although (because $N$ must be integral) some care should be used in applying standard ML results, the mathematical formulation of the likelihood makes sense for non-integral $N$ (via Gamma functions), so you're probably ok using the usual ML-based confidence intervals, etc.
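A small R sketch of the scan described above, profiling out $p$ at its conditional MLE $(N-b)/(2N)$ for each candidate $N$ (it reproduces $\widehat{N}=11$ for $(a,b)=(3,6)$):

a <- 3; b <- 6

# profile log-likelihood in N, with p set to (N - b) / (2N)
profile_ll <- function(N) {
  p <- (N - b) / (2 * N)
  lchoose(N, b) + lchoose(N - b, a) +     # log multinomial coefficient
    (N - b) * log(p) + b * log(1 - 2 * p)
}

Ns <- (a + b):200
ll <- sapply(Ns, profile_ll)
Ns[which.max(ll)]   # MLE of N; returns 11 for these data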
Boltzmann machines and the asymmetry between 0 and 1
Yes, there is an asymmetry. I haven't thought deeply about it, but I do know of two papers that relate to the issue: 1, 2 Hope these help!
Boltzmann machines and the asymmetry between 0 and 1
Yes, there is an asymmetry. I haven't thought deeply about it, but I do know of two papers that relate to the issue: 1, 2 Hope these help!
Boltzmann machines and the asymmetry between 0 and 1 Yes, there is an asymmetry. I haven't thought deeply about it, but I do know of two papers that relate to the issue: 1, 2 Hope these help!
Boltzmann machines and the asymmetry between 0 and 1 Yes, there is an asymmetry. I haven't thought deeply about it, but I do know of two papers that relate to the issue: 1, 2 Hope these help!
50,122
Energy function of RBM
You only need to verify that the graphical model that represents an RBM fulfills the definition of an MRF as given in the document you refer to. See here for a picture. Then it is guaranteed that you can write it as a factorized product of positive functions defined on cliques that cover all the nodes and edges of G. Now, the theorem does not say that the decomposition is unique. It is mathematically convenient and meaningful to find a factorial decomposition through an energy function like that one. What you can do is verify that this energy function gives you a valid factorial decomposition of the probability distribution in terms of positive functions which depend on the maximal cliques of the graph. In this case the (maximal) cliques are rather trivial: just the pairs of nodes (again, take a look at the graph). Each pair has a corresponding term in the sum. When you exponentiate $E$, the sum turns into a product of positive functions (the exponential is a positive function), where each function depends only on two variables corresponding to each of the cliques.
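As a rough illustration of that last point, here is a small R sketch (toy sizes and random parameters of my own choosing, sign convention of the energy ignored) checking that exponentiating the energy yields a product of positive pairwise factors, one per clique:
set.seed(1)
nv <- 3; nh <- 2
W <- matrix(rnorm(nv * nh), nv, nh)
bias.v <- rnorm(nv); bias.h <- rnorm(nh)
v <- sample(0:1, nv, replace = TRUE); h <- sample(0:1, nh, replace = TRUE)
E <- sum(bias.v * v) + sum(bias.h * h) + as.numeric(t(v) %*% W %*% h)    # energy as a sum over cliques
direct  <- exp(E)
factors <- prod(exp(outer(v, h) * W)) * prod(exp(bias.v * v)) * prod(exp(bias.h * h))
all.equal(direct, factors)                                               # TRUE: one positive factor per clique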
Energy function of RBM
You only need to verify that the graphical model that represents a RBM fulfills the definition of a MRF as given in the document you refer to. See here for a picture. Then it is warranted that you can
Energy function of RBM You only need to verify that the graphical model that represents a RBM fulfills the definition of a MRF as given in the document you refer to. See here for a picture. Then it is warranted that you can write it as a factorized product of positive functions defined on cliques that cover all the nodes and edges of G. Now, the theorem does not says that the decomposition is unique. It is mathematically convenient and meaningful to find a factorial decomposition through a energy function like that one. What you can do is very that this energy function gives you a valid factorial decomposition of the probability distribution in terms of positive functions which depend on the maximal cliques of the graph. In this case the (maximal) cliques are rather trivial: just the pairs of nodes (again, take a look at the graph). Each pair has a corresponding term in the sum. When raising the exponential to $E$, the sum turns to a product of positive functions (the exponential is a positive function), where each function depends only on two variables corresponding to each of the cliques.
Energy function of RBM You only need to verify that the graphical model that represents a RBM fulfills the definition of a MRF as given in the document you refer to. See here for a picture. Then it is warranted that you can
50,123
Trying to understand formula for the Survival Function (survival analysis)
All of these terms are standard in actuarial science and all of them apply to all distributions (but when I have seen these terms in studying for exams, we're almost always talking about distributions that are defined only for nonnegative reals). $H(t)$ is the cumulative hazard function, and for any distribution is defined as $$H(t) = \int_0^t h(x) \,dx.$$ Notice the name makes perfect sense with this definition, since we are "adding" up the hazard function up to a certain point to get the cumulative hazard function. Now, since $$f(t) = F'(t) = -S'(t)$$ then we have $$h(t) = \frac{f(t)}{S(t)} = \frac{-S'(t)}{S(t)} = -\frac{d}{dt} (\ln S(t)).$$ Finally, that means we have $$H(t) = \int_0^t -\frac{d}{dx} (\ln S(x)) \,dx = -\ln S(t)$$ since $S(0)$ is usually required to be 1 and thus $\ln S(0) = 0$.
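A quick numerical check of $H(t) = -\ln S(t)$ in R, using a Weibull distribution purely as an assumed example:
shape <- 1.5; scale <- 2; t0 <- 3
h <- function(x) dweibull(x, shape, scale) / pweibull(x, shape, scale, lower.tail = FALSE)
H.integrated <- integrate(h, 0, t0)$value                                      # integral of the hazard up to t0
H.closed     <- -pweibull(t0, shape, scale, lower.tail = FALSE, log.p = TRUE)  # -log S(t0)
c(H.integrated, H.closed)                                                      # agree up to numerical error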
Trying to understand formula for the Survival Function (survival analysis)
All of these terms are standard in actuarial science and all of them apply to all distributions (but when I have seen these terms in studying for exams, we're almost always talking about distributions
Trying to understand formula for the Survival Function (survival analysis) All of these terms are standard in actuarial science and all of them apply to all distributions (but when I have seen these terms in studying for exams, we're almost always talking about distributions that are defined only for nonnegative reals). $H(t)$ is the cumulative hazard function, and for any distribution is defined as $$H(t) = \int_0^t h(x) \,dx.$$ Notice the name makes perfect sense with this definition, since we are "adding" up the hazard function up to a certain point to get the cumulative hazard function. Now, since $$f(t) = F'(t) = -S'(t)$$ then we have $$h(t) = \frac{f(t)}{S(t)} = \frac{-S'(t)}{S(t)} = -\frac{d}{dt} (\ln S(t)).$$ Finally, that means we have $$H(t) = \int_0^t -\frac{d}{dx} (\ln S(x)) \,dx = -\ln S(t)$$ since $S(0)$ is usually required to be 1 and thus $\ln S(0) = 0$.
Trying to understand formula for the Survival Function (survival analysis) All of these terms are standard in actuarial science and all of them apply to all distributions (but when I have seen these terms in studying for exams, we're almost always talking about distributions
50,124
Trying to understand formula for the Survival Function (survival analysis)
Yes, it goes for any hazard function. The hazard function is defined as $$h(t)=\frac{f(t)}{S(t)}$$ where $f(t)$ is the probability density function with respect to time, & the survival function is $$S(t)=1-F(t)$$ where $F(t)$ is the cumulative distribution function. So integrate the first expression, & you get the cumulative hazard $$H(t)=-\log S(t)$$
Trying to understand formula for the Survival Function (survival analysis)
Yes, it goes for any hazard function. The hazard function is defined as $$h(t)=\frac{f(t)}{S(t)}$$ where $f(t)$ is the probability density function with respect to time, & the survival function is $$
Trying to understand formula for the Survival Function (survival analysis) Yes, it goes for any hazard function. The hazard function is defined as $$h(t)=\frac{f(t)}{S(t)}$$ where $f(t)$ is the probability density function with respect to time, & the survival function is $$S(t)=1-F(t)$$ where $F(t)$ is the cumulative distribution function. So integrate the first expression, & you get the cumulative hazard $$H(t)=-\log S(t)$$
Trying to understand formula for the Survival Function (survival analysis) Yes, it goes for any hazard function. The hazard function is defined as $$h(t)=\frac{f(t)}{S(t)}$$ where $f(t)$ is the probability density function with respect to time, & the survival function is $$
50,125
The way an MA(q) model works
I found it out, but I can't tell you the exact reason for it: the problem is that the initialization is unknown/strange. If you work through an example with more values, you will see that simple MA(1) forecasting according to the following rule works (R's notation for the MA part is slightly different from yours; the sign of the theta is different): \begin{align}\hat{X}_{T|T-1}=E(c+a_T+\theta a_{T-1})=c+\theta a_{T-1}\end{align} You can calculate these values manually; consider the following example: z<-c(2,5,3,4,3,4,5,4.3,4.3,4.5,4.3,4.5,3.4,5.3,4.2,3.4,2.3,2.3,4.5,3.4,5,5.4,5.4,3.4,5.43,5.64,5.6,3.4,5.3,5,6.3,4.5) m<-arima(z,order=c(0,0,1)) This gives Series: z ARIMA(0,0,1) with non-zero mean Coefficients: ma1 intercept 0.1162 4.2748 s.e. 0.1500 0.2076 sigma^2 estimated as 1.11: log likelihood=-47.09 AIC=100.18 AICc=101.03 BIC=104.57 The values of the output can be accessed via m$coef[1] and m$coef[2] So you now compare the values with the following code: m$coef[2]+ m$coef[1]*(2-4.259580) fitted(m)[2] m$coef[2]+ m$coef[1]*(5-4.013978) fitted(m)[3] m$coef[2]+ m$coef[1]*(3-4.389402) fitted(m)[4] m$coef[2]+ m$coef[1]*(4-4.113285) fitted(m)[5] m$coef[2]+ m$coef[1]*(3-4.261626) fitted(m)[6] You will notice that at the beginning there is a small difference, while at the end the values are the same! So the simple forecasting rule of an MA(1) does hold, but R seems to do some specific initialization. I know that Stata uses a certain Kalman filter setting; maybe R is doing the same. I hope this helps. If it did, you can accept my answer by clicking on the check mark to the left of my post.
The way an MA(q) model works
I found it out, but I can't tell you the exact reason for it: The problem is, that the initialization is unkown/strange, if you do an example with more values, you will see, that a simple MA(1) foreca
The way an MA(q) model works I found it out, but I can't tell you the exact reason for it: The problem is, that the initialization is unkown/strange, if you do an example with more values, you will see, that a simple MA(1) forecasting according to the following rule will work (notation in R of the MA is slightly different to yours, the sign of the theta is different): \begin{align}\hat{X}_{T|T-1}=E(c+a_T+\theta∗a_{T−1})=c+\theta*a_{T-1}\end{align} You can calculate these values manually, consider the following example: z<-c(2,5,3,4,3,4,5,4.3,4.3,4.5,4.3,4.5,3.4,5.3,4.2,3.4,2.3,2.3,4.5,3.4,5,5.4,5.4,3.4,5.43,5.64,5.6,3.4,5.3,5,6.3,4.5) m<-arima(z,order=c(0,0,1)) This gives Series: z ARIMA(0,0,1) with non-zero mean Coefficients: ma1 intercept 0.1162 4.2748 s.e. 0.1500 0.2076 sigma^2 estimated as 1.11: log likelihood=-47.09 AIC=100.18 AICc=101.03 BIC=104.57 The values of the output can be used via m$coef[1] and m$coef[1] So you now compare the values with the following code: m$coef[2]+ m$coef[1]*(2-4.259580) fitted(m)[2] m$coef[2]+ m$coef[1]*(5-4.013978) fitted(m)[3] m$coef[2]+ m$coef[1]*(3-4.389402) fitted(m)[4] m$coef[2]+ m$coef[1]*(4-4.113285) fitted(m)[5] m$coef[2]+ m$coef[1]*(3-4.261626) fitted(m)[6] You will notice, that at the beginning, there is a small difference, at the end, the values are the same! So the simple forecasting rule of a MA(1) does hold, but R seems to do some specific initialization. I know that STATA uses a certain Kalman filter setting, maybe R is doing the same. I hope this helps. If it did help you, you can accept my answer by clicking on the hook to the left of my post.
The way an MA(q) model works I found it out, but I can't tell you the exact reason for it: The problem is, that the initialization is unkown/strange, if you do an example with more values, you will see, that a simple MA(1) foreca
50,126
The way an MA(q) model works
In terms of where those particular numbers come from, this seems to do the trick: > x - m$residuals Time Series: Start = 1 End = 3 Frequency = 1 [1] 3.060660 4.387627 3.000000 Or > x + (m$residuals * ma1) Time Series: Start = 1 End = 3 Frequency = 1 [1] 3.060660 4.387627 3.000000 I don't know in what sense these are meant to be the 'fitted' values (maybe someone else can help?). For an ARIMA model our explanatory data are also our fitted values. Our estimator for $Y_k$ given the history up to time $T$, $H_T$, where $T>k$, is $\hat y_k = \mathbb E [Y_k | H_T ] = y_k$, i.e. we already have the answer in the process history. So we have to ask, our fitted values anchored at what time? For a length $T$ series I guess there would be $T$ possible fitted values for each point. If anyone asks, based on having the data $y_{1:T}$ what's the best estimate of $y_k$ - seems there's only one pragmatic answer, $y_k$. The residuals are interesting in the sense that if they're massive relative to the data, that's saying something about the magnitude of the process variance.
The way an MA(q) model works
In terms of where those particular numbers come from, this seems to do the trick: > x - m$residuals Time Series: Start = 1 End = 3 Frequency = 1 [1] 3.060660 4.387627 3.000000 Or > x + (m$residual
The way an MA(q) model works In terms of where those particular numbers come from, this seems to do the trick: > x - m$residuals Time Series: Start = 1 End = 3 Frequency = 1 [1] 3.060660 4.387627 3.000000 Or > x + (m$residuals * ma1) Time Series: Start = 1 End = 3 Frequency = 1 [1] 3.060660 4.387627 3.000000 I don't know in what sense these are meant to be the 'fitted' values (maybe someone else can help?). For an ARIMA model our explanatory data are also our fitted values. Our estimator for $Y_k$ given the history up to time $T$, $H_T$, where $T>k$, is $\hat y_k = \mathbb E [Y_k | H_T ] = y_k$, i.e. we already have the answer in the process history. So we have to ask, our fitted values anchored at what time? For a length $T$ series I guess there would be $T$ possible fitted values for each point. If anyone asks, based on having the data $y_{1:T}$ what's the best estimate of $y_k$ - seems there's only one pragmatic answer, $y_k$. The residuals are interesting in the sense that if they're massive relative to the data, that's saying something about the magnitude of the process variance.
The way an MA(q) model works In terms of where those particular numbers come from, this seems to do the trick: > x - m$residuals Time Series: Start = 1 End = 3 Frequency = 1 [1] 3.060660 4.387627 3.000000 Or > x + (m$residual
50,127
Using MatchIt to match groups in a retrospective analysis
You can use the Matching package instead, in the following steps (sketched in code below): Use any package (preferably the MICE package) to impute n complete data sets; you will need to impute at least 5 data sets. Extract the n complete data sets from the imputation. Calculate a propensity score (for treatment) in each complete data set. Average the propensity scores across all n data sets and use that to retrieve your matched cohort with the Matching package, in which you can use externally obtained propensity scores for creating matched groups. This is one of the alternatives for doing multiply imputed matching in R.
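A hedged sketch of that workflow in R (the data frame df, treatment indicator treat, and covariates x1, x2 are placeholder names of my own, not from the question):
library(mice)
library(Matching)
imp <- mice(df, m = 5)                                         # steps 1-2: five imputed data sets
ps  <- sapply(1:5, function(i) {
  d <- complete(imp, i)
  fitted(glm(treat ~ x1 + x2, family = binomial, data = d))    # step 3: propensity score per data set
})
ps.avg <- rowMeans(ps)                                          # step 4: average the scores
m.out  <- Match(Tr = df$treat, X = ps.avg, estimand = "ATT")    # step 5: match on the averaged score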
Using MatchIt to match groups in a retrospective analysis
You can use the Matching package instead in the following steps: Use any package (preferably the MICE package) to impute n complete data sets. You will need to impute at least 5 data sets. Extract n
Using MatchIt to match groups in a retrospective analysis You can use the Matching package instead in the following steps: Use any package (preferably the MICE package) to impute n complete data sets. You will need to impute at least 5 data sets. Extract n complete data sets from the imputation. Calculate a propensity score (for treatment) in each complete data set. Average the propensity score in all n data sets and use that to retrieve your matched cohort by using the Matching package, in which you can use externally obtained propensity scores for creating matched groups. This is one of the alternatives for doing multiply imputed matching in R.
Using MatchIt to match groups in a retrospective analysis You can use the Matching package instead in the following steps: Use any package (preferably the MICE package) to impute n complete data sets. You will need to impute at least 5 data sets. Extract n
50,128
Exponential family in testing and estimation
This is a rather broad question, but I will try to give an informal reply. Generally, these statements establish the fact that the exponential family is a "well behaved" parametric family of distributions. The emphasis in the first is that the exponential family satisfies the regularity conditions needed so that a uniformly most powerful test exists. This means there is a test statistic and a rejection region that have a type I error rate no greater than $\alpha$, and no other statistic and rejection region have more power than these. Now to the second. The Cramer-Rao bound, or the "information bound", states that any unbiased estimator cannot be arbitrarily precise, i.e., its variance is bounded from below. So the novelty in the second statement is that, in those cases where the MLE is actually unbiased, it is also efficient in the sense that it has the minimal possible variance.
Exponential family in testing and estimation
This is a rather broad question, but I will try to give in informal reply. Generally, these statements establish the fact that the exponential family is a "well behaved" parametric family of distribut
Exponential family in testing and estimation This is a rather broad question, but I will try to give in informal reply. Generally, these statements establish the fact that the exponential family is a "well behaved" parametric family of distributions. The emphasis in the first is in that the exponential family satisfies the regularity conditions needed so that a uniformly most powerful test exist. This means there is a test statistic and a rejection region, that have a type I error rate no greater than $\alpha$, and no other statistic nor rejection region have more power than these. Now to the second. The Cramer-Rao bound, or the "information bound" states that any unbiased estimator, cannot be arbitrarily precise, i.e., it's variance is bounded from below. So the novelty in the second statement is that, in those cases where the MLE is actually unbiased, it is also efficient in the sense it has the minimal possible variance.
Exponential family in testing and estimation This is a rather broad question, but I will try to give in informal reply. Generally, these statements establish the fact that the exponential family is a "well behaved" parametric family of distribut
50,129
Exponential family in testing and estimation
Here is a partial answer; maybe someone could complement it. "Statistical sufficiency" means that no other statistic uses more information from the sample; see the definition of sufficiency. The maximum likelihood estimate $\hat{\theta}$ is a function of a sufficient statistic. The Cramer-Rao lower bound states the lowest attainable value (lower bound) for the variance of an unbiased estimator; it is computed from the Fisher information, which (if it exists) can be obtained from the second derivative of the log-likelihood. When you use an estimator, you expect it to have the lowest variance, be unbiased, and be sufficient. A UMVUE is a Uniformly Minimum Variance Unbiased Estimator. In this way, the Cramer-Rao lower bound provides the minimum value for the variance of an estimator. By the way, the MLE for $\sigma^2$ is $\hat{\sigma}^2=\dfrac{1}{n}\sum(X_i-\bar{X})^2$, which is a biased estimate.
Exponential family in testing and estimation
Here is a partial answer, maybe someone could complement it. "Statistical sufficency" means that no other statistic uses more information from the sample. Definition of sufficiency. Maximum likelihood
Exponential family in testing and estimation Here is a partial answer, maybe someone could complement it. "Statistical sufficency" means that no other statistic uses more information from the sample. Definition of sufficiency. Maximum likelihood estimate $\hat{\theta}$ is a sufficient statistic. Cramer-Rao lower bound states the lowest value (lower bound) for the variance. It is computed from the second differentiation (if it exists) of Fisher's information . When you use an estimate, you expect it to have the lowest variance, be unbiased, and sufficient. UMVUE estimate are Uniformly, Minimum Variance, Unbiased, estimator. This way, Cramer-Rao lower bound provides the minimum value for the variance of an estimator. By the way, MLE for $\sigma^2$ is $\hat{\sigma}^2=\dfrac{1}{n}\sum(X-E\{X\})^2$, which is a biased estimate.
Exponential family in testing and estimation Here is a partial answer, maybe someone could complement it. "Statistical sufficency" means that no other statistic uses more information from the sample. Definition of sufficiency. Maximum likelihood
50,130
IRT/Rasch modeling with very large N
It is possible to do in R with mirt, though it's still going to be a little slow (maybe 5-10 minutes) and you'll need a good amount of RAM (16+ GB...but with 6 million cases this should be expected). I just tested this and it seems to run okay: library(mirt) dat <- matrix(sample(0:1, 6e6 * 15, TRUE), ncol = 15) mod <- mirt(dat, 1, itemtype = 'Rasch', D = 1, calcNull = FALSE) Iteration: 4, Log-Lik: -64486844, Max-Change: 0e-04 If standard errors aren't interesting, nor is the comparison to the NULL model, then the above options should be fine. Since a large part of the problem is with sorting the data, there is a large = TRUE argument that can be passed so that sorting isn't repeated in each run.
IRT/Rasch modeling with very large N
It is possible to do in R with mirt, though it's still going to be a little slow (maybe 5-10 minutes) and you'll need a good amount of RAM (16+ GB...but with 6 million cases this should be expected).
IRT/Rasch modeling with very large N It is possible to do in R with mirt, though it's still going to be a little slow (maybe 5-10 minutes) and you'll need a good amount of RAM (16+ GB...but with 6 million cases this should be expected). I just tested this and it seems to run okay: library(mirt) dat <- matrix(sample(0:1, 6e6 * 15, TRUE), ncol = 15) mod <- mirt(dat, 1, itemtype = 'Rasch', D = 1, calcNull = FALSE) Iteration: 4, Log-Lik: -64486844, Max-Change: 0e-04 If standard errors aren't interesting, nor is the comparison to the NULL model, then the above options should be fine. Since a large part of the problem is with sorting the data, there is a large = TRUE argument that can be passed so that sorting isn't repeated in each run.
IRT/Rasch modeling with very large N It is possible to do in R with mirt, though it's still going to be a little slow (maybe 5-10 minutes) and you'll need a good amount of RAM (16+ GB...but with 6 million cases this should be expected).
50,131
Is it appropriate to use bootstrapping to measure variance?
Rather than representing a problem with the bootstrap, this feature is sometimes used to estimate the bias in your original estimator; see for example chapter 10 of Bradley Efron and Robert Tibshirani (1993), "An Introduction to the Bootstrap", Chapman & Hall/CRC.
Is it appropriate to use bootstrapping to measure variance?
Rather than representing problem in the bootstrap, this feature is sometimes used to estimate the bias in your original estimator, see for example chapter 10 of Bradley Efron and Robert Tibshirani (19
Is it appropriate to use bootstrapping to measure variance? Rather than representing problem in the bootstrap, this feature is sometimes used to estimate the bias in your original estimator, see for example chapter 10 of Bradley Efron and Robert Tibshirani (1993) "An Introduction to the Bootstrap". Chapman & Hall/CRC.
Is it appropriate to use bootstrapping to measure variance? Rather than representing problem in the bootstrap, this feature is sometimes used to estimate the bias in your original estimator, see for example chapter 10 of Bradley Efron and Robert Tibshirani (19
50,132
Percentiles of a distribution
Since you are bootstrapping, why not take all of your observed medians and calculate the sample standard deviation $s$ and use that as your estimator of $\sigma$?
Percentiles of a distribution
Since you are bootstrapping, why not take all of your observed medians and calculate the sample standard deviation $s$ and use that as your estimator of $\sigma$?
Percentiles of a distribution Since you are bootstrapping, why not take all of your observed medians and calculate the sample standard deviation $s$ and use that as your estimator of $\sigma$?
Percentiles of a distribution Since you are bootstrapping, why not take all of your observed medians and calculate the sample standard deviation $s$ and use that as your estimator of $\sigma$?
50,133
Percentiles of a distribution
If I am understanding your intent, then the answer is "no", don't divide. The sample size is taken into account as part of the bootstrapping process. Of course the values that you are using will mostly be meaningful if everything is normally distributed, and the fact that you are bootstrapping makes that seem an unlikely assumption; skewness could really throw this off. Are you dividing the distance from the 16th to the 84th percentile by 2? If not, that could be why you think you need to divide by something. In general (and good to do here to check whether I understood you and whether this method will work for your case) you can answer questions like this through a little simulation, as sketched below. Simulate a data set and do the above analysis, calculate the value both ways, and see which is closest to the "true" value from the simulation. Repeat the process with a different sample size, and also simulate from different distributions. It should become fairly clear after a few simulations whether to divide or not.
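For example, a tiny R sketch of that check (with an assumed normal sample just for illustration): the half-distance between the 16th and 84th percentiles of the bootstrapped medians should already be on the scale of the standard error, with no extra division needed.
set.seed(1)
x <- rnorm(200)
meds <- replicate(2000, median(sample(x, replace = TRUE)))
half.width <- unname(diff(quantile(meds, c(0.16, 0.84))) / 2)   # spread of the bootstrap medians
c(half.width = half.width, sd.boot = sd(meds))                  # similar magnitudes; don't divide again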
Percentiles of a distribution
If I am understanding your intent, then the answer is "no", don't divide. The sample size is taken into account as part of the bootstrapping process. Of course the values that are using will mostly
Percentiles of a distribution If I am understanding your intent, then the answer is "no", don't divide. The sample size is taken into account as part of the bootstrapping process. Of course the values that are using will mostly be meaningful if everything is normally distributed, the fact that you are bootstrapping makes that seem an unlikely assumption. Skewness could really throw this off. Are you dividing the distance from 16th to 84th by 2? If not that could be why you think you need to dived by something. In general (and good to do here to check if I understood you and even if this method will work for your case) you can answer questions like this through a little simulation. Simulate a data set and do the above analysis, calculate the value both ways and see which is closest to the "true" value from the simulation. Repeat the process with a different sample size, also simulate from different distributions. It should become fairly clear after a few simulations whether to divide or not.
Percentiles of a distribution If I am understanding your intent, then the answer is "no", don't divide. The sample size is taken into account as part of the bootstrapping process. Of course the values that are using will mostly
50,134
Percentiles of a distribution
Following along the lines developed by Greg and soakley, here's a function that calculates the (bootstrapped) standard errors of an estimate of the median: median.w.se = function(vec,B){ # Inputs: vector of data (vec) # number of bootstrap replicates (B) # Outputs: list with estimates of median and standard error empty.vec = rep(NA,B) for(i in 1:B){ curr.sample = sample(vec,length(vec),replace = TRUE) curr.med = median(curr.sample) empty.vec[i] = curr.med } lst = list(median = mean(empty.vec), se = sd(empty.vec)) return(lst) } data(iris) median.w.se(iris$Sepal.Length,1000)
Percentiles of a distribution
Following along the lines developed by Greg and soakley, here's a function that calculates the (bootstrapped) standard errors of an estimate of the median: median.w.se = function(vec,B){ # Inputs:
Percentiles of a distribution Following along the lines developed by Greg and soakley, here's a function that calculates the (bootstrapped) standard errors of an estimate of the median: median.w.se = function(vec,B){ # Inputs: vector of data (vec) # number of bootstrap replicates (B) # Outputs: list with estimates of median and standard error empty.vec = rep(NA,B) for(i in 1:B){ curr.sample = sample(vec,length(vec),replace = TRUE) curr.med = median(curr.sample) empty.vec[i] = curr.med } lst = list(median = mean(empty.vec), se = sd(empty.vec)) return(lst) } data(iris) median.w.se(iris$Sepal.Length,1000)
Percentiles of a distribution Following along the lines developed by Greg and soakley, here's a function that calculates the (bootstrapped) standard errors of an estimate of the median: median.w.se = function(vec,B){ # Inputs:
50,135
Logistic regression algorithm in Ruby
As regression problems go, it's actually a fairly complicated algorithm. The answer to your question depends a lot on whether you have access to a reliable general-purpose CG optimization algorithm. If you do, the problem becomes somewhat simpler. If you don't, I wouldn't recommend re-implementing logistic regression from scratch (though others have tried; see here for a minimal R implementation without a CG routine) for the reasons explained here. At any rate, the underlying likelihood surface can be nearly flat, so you have to be careful about the fine print of the implementation and test it on many corner cases (situations where the $X$ are highly correlated or where the two groups are nearly perfectly separable). A possible (quick and dirty) alternative is to rescale all your $X$'s to be in $[0,1]$ --for example by applying the inverse logit function to each of them individually (after they have been standardized first to have mean 0 and unit variance)-- and estimate a fit by OLS (this approach is called the linear probability model). It will not be the same model and the coefficients won't be comparable, but the results will be better than doing OLS on the raw data. The advantage here is that implementing OLS is trivial, assuming you have access to a good Ruby linear algebra library (googling around, I found quix/linalg).
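For what it's worth, here is an R sketch of that quick-and-dirty alternative on simulated data (the Ruby version would only need the same matrix algebra); everything here is illustrative, not a drop-in replacement for a proper logistic fit:
set.seed(1)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(0.5 + x1 - 0.5 * x2))    # simulated binary outcome
Z  <- plogis(scale(cbind(x1, x2)))                  # standardize, then inverse logit -> (0, 1)
X  <- cbind(1, Z)                                   # add an intercept column
beta.ols <- solve(t(X) %*% X, t(X) %*% y)           # OLS via the normal equations
head(cbind(y = y, fitted = X %*% beta.ols))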
Logistic regression algorithm in Ruby
As regression problems go, it's actually a fairly complicated algorithm. The answer to your question depends a lot on whether you have access to a reliable general-purpose CG optimization algorithm.
Logistic regression algorithm in Ruby As regression problems go, it's actually a fairly complicated algorithm. The answer to your question depends a lot on whether you have access to a reliable general-purpose CG optimization algorithm. If you do, the problem becomes somewhat simpler. If you don't, I wouldn't recommend re-implementing logistic regression from scratch (though others have tried, see here for a minimal R implementation without a GC routine) for the reasons explained here. At any rate, the underlying likelihood surface can be nearly flat so you have to be careful about the small prints of the implementation and test it on many corner cases (these are situations where the $X$ are highly correlated or when the two groups are nearly perfectly separable). A possible (quick and dirty) alternative is to rescale all your $X$'s to be in $[0,1]$ --for example by using the inverse logit function on each of them individually (after they have been standardized first to have mean 0 and unit variance)-- and estimate a fit by OLS (this approach is called the linear probability model). It will not be the same model and the coefficients won't be comparable but the results will be better than doing OLS on the raw data. The advantage here is that implementing OLS is trivial, assuming you have access to a good ruby linear algebra library (googling around I have found quix/linalg)
Logistic regression algorithm in Ruby As regression problems go, it's actually a fairly complicated algorithm. The answer to your question depends a lot on whether you have access to a reliable general-purpose CG optimization algorithm.
50,136
Closed form Karhunen-Loeve/PCA expansion for gaussian/squared-exponential covariance
The eigenfunctions of SE kernel under Gaussian measure can be written using Hermite polynomials (see references below). If instead Lebesgue measure is used, it's more complicated. C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, the MIT Press, 2006, ISBN 026218253X. http://www.gaussianprocess.org/gpml (p. 115) Zhu, H., Williams, C. K. I., Rohwer, R. J., and Morciniec, M. (1998). Gaussian Regression and Optimal Finite Dimensional Linear Models. In Bishop, C. M., editor, Neural Networks and Machine Learning. Springer-Verlag, Berlin.
Closed form Karhunen-Loeve/PCA expansion for gaussian/squared-exponential covariance
The eigenfunctions of SE kernel under Gaussian measure can be written using Hermite polynomials (see references below). If instead Lebesgue measure is used, it's more complicated. C. E. Rasmussen & C
Closed form Karhunen-Loeve/PCA expansion for gaussian/squared-exponential covariance The eigenfunctions of SE kernel under Gaussian measure can be written using Hermite polynomials (see references below). If instead Lebesgue measure is used, it's more complicated. C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, the MIT Press, 2006, ISBN 026218253X. http://www.gaussianprocess.org/gpml (p. 115) Zhu, H., Williams, C. K. I., Rohwer, R. J., and Morciniec, M. (1998). Gaussian Regression and Optimal Finite Dimensional Linear Models. In Bishop, C. M., editor, Neural Networks and Machine Learning. Springer-Verlag, Berlin.
Closed form Karhunen-Loeve/PCA expansion for gaussian/squared-exponential covariance The eigenfunctions of SE kernel under Gaussian measure can be written using Hermite polynomials (see references below). If instead Lebesgue measure is used, it's more complicated. C. E. Rasmussen & C
50,137
Closed form Karhunen-Loeve/PCA expansion for gaussian/squared-exponential covariance
You could try first following the standard way of deriving the solution for $k(s,t) = e^{-|s-t|}$ given for instance in [1], and then try to reapply this to your case. If you don't have access to [1] I can give an outline here. Generally you have to carefully differentiate $\int k_{SE}(s,t)f_i(s)\,ds = \lambda_i f_i(t)$ with respect to $t$ and see if it simplifies to an ODE for $f_i$. Your ODE is going to be more complicated but should be solvable (I haven't done the calculation myself!). [1] Ghanem, R. and Spanos, P. "Stochastic Finite Elements: A Spectral Approach", Springer, 1991, pp. 29-33.
Closed form Karhunen-Loeve/PCA expansion for gaussian/squared-exponential covariance
You could try following first the standard way of deriving the solution for $k(s,t) = e^{-|s-t|}$ given for instance in [1] and then try to reapply this to your case. If you don't have the access to [
Closed form Karhunen-Loeve/PCA expansion for gaussian/squared-exponential covariance You could try following first the standard way of deriving the solution for $k(s,t) = e^{-|s-t|}$ given for instance in [1] and then try to reapply this to your case. If you don't have the access to [1] I can give an outline here. Generally you have to carefully differentiate $\int k_{SE}(s,t)f_i(s)dt = \lambda_i f_i(s)$ with respect to $t$ and see if it simplifies to an ODE for $f_i$. Your ODE is going to be more complicated but should be solvable (haven't done the calculation myself!). [1] Ghanem, R. and Spanos, P. "Stochastic Finite Elements: A Spectral Approach", 1991 Springer, pp 29-33
Closed form Karhunen-Loeve/PCA expansion for gaussian/squared-exponential covariance You could try following first the standard way of deriving the solution for $k(s,t) = e^{-|s-t|}$ given for instance in [1] and then try to reapply this to your case. If you don't have the access to [
50,138
Hypothesis test on data with confounding spatial clustering
According to this paper, OLS is consistent in the presence of spatial autocorrelation, but standard errors are incorrect and need to be adjusted. Solomon Hsiang provides stata and matlab code for doing so. Unfortunately I'm not familiar with any R code for this. There are certainly other approaches to this sort of problem in spatial statistics that explicitly model spatial processes. This one just inflates the standard errors.
Hypothesis test on data with confounding spatial clustering
According to this paper, OLS is consistent in the presence of spatial autocorrelation, but standard errors are incorrect and need to be adjusted. Solomon Hsiang provides stata and matlab code for doi
Hypothesis test on data with confounding spatial clustering According to this paper, OLS is consistent in the presence of spatial autocorrelation, but standard errors are incorrect and need to be adjusted. Solomon Hsiang provides stata and matlab code for doing so. Unfortunately I'm not familiar with any R code for this. There are certainly other approaches to this sort of problem in spatial statistics that explicitly model spatial processes. This one just inflates the standard errors.
Hypothesis test on data with confounding spatial clustering According to this paper, OLS is consistent in the presence of spatial autocorrelation, but standard errors are incorrect and need to be adjusted. Solomon Hsiang provides stata and matlab code for doi
50,139
How are categorical variables used when fitting a decision tree in scikit-learn?
To encode categorical features as a scipy.sparse matrix you can use the DictVectorizer class. Then call the .toarray() method on the result to convert it to a contiguous numpy array, as the scikit-learn trees do not support sparse input yet.
How are categorical variables used when fitting a decision tree in scikit-learn?
To encode categorical feature as a scipy.sparse matrix you can use the DictVectorizer class. Then call the .toarray() method on the result to convert it to an contiguous numpy array as the scikit-lear
How are categorical variables used when fitting a decision tree in scikit-learn? To encode categorical feature as a scipy.sparse matrix you can use the DictVectorizer class. Then call the .toarray() method on the result to convert it to an contiguous numpy array as the scikit-learn trees do not support sparse input yet.
How are categorical variables used when fitting a decision tree in scikit-learn? To encode categorical feature as a scipy.sparse matrix you can use the DictVectorizer class. Then call the .toarray() method on the result to convert it to an contiguous numpy array as the scikit-lear
50,140
What to do for missing data in time series [closed]
This is a good application for the EM algorithm of Shumway and Stoffer. First you need to specify your ARIMA model, then you can use the Kalman filter because it can handle missing values (see the Durbin and Koopman textbook). For starting parameter values you compute the expected value of the missing values, then you maximize over the parameters given the missing values, and you iterate. This is very well explained in http://www.amazon.com/Time-Analysis-Its-Applications-Statistics/dp/144197864X on page 344, in the state-space model chapter. I hope it helps!
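In R specifically, the built-in arima() already runs the estimation through a state-space/Kalman-filter representation and accepts NA values in the series, so a minimal sketch (simulated AR(1) data, my own example) looks like:
set.seed(1)
x <- arima.sim(list(ar = 0.7), n = 100)
x[c(20, 21, 55)] <- NA                     # introduce some missing observations
fit <- arima(x, order = c(1, 0, 0))        # the Kalman filter handles the NAs internally
predict(fit, n.ahead = 5)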
What to do for missing data in time series [closed]
This is a good application for the EM algorithm of Shumway and Stoffer. First you need to specify your arima model, then you can use the Kalman Filter because is can handle missing values (see the Dur
What to do for missing data in time series [closed] This is a good application for the EM algorithm of Shumway and Stoffer. First you need to specify your arima model, then you can use the Kalman Filter because is can handle missing values (see the Durbin and Koopman textbook). For starting parameter values you can compute the expected value of the missing values, then you maximize over the parameters given the missing values and you iterate. This is very well explained in http://www.amazon.com/Time-Analysis-Its-Applications-Statistics/dp/144197864X page 344 in the State-Space model chapter. I hope it helps!
What to do for missing data in time series [closed] This is a good application for the EM algorithm of Shumway and Stoffer. First you need to specify your arima model, then you can use the Kalman Filter because is can handle missing values (see the Dur
50,141
What to do for missing data in time series [closed]
I would suggest looking into multiple imputation algorithms. Gary King has produced a package for R called Amelia that does very sophisticated multiple imputation and can handle time series quite well. Amelia treats all of the data as multivariate normal and performs multiple random draws from that distribution until the draws converge on the final imputed value. But there are a number of customization features that you can incorporate so that you can sculpt Amelia runs to your particular problem, such as including lags, leads, and polynomial expressions.
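A hedged sketch of such a call (the data frame df and the column names year, country, and y are placeholders, and the exact arguments should be checked against the Amelia documentation):
library(Amelia)
a.out <- amelia(df, m = 5, ts = "year", cs = "country",
                lags = "y", leads = "y", polytime = 2)   # lags, leads and polynomial time trends as mentioned above
summary(a.out)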
What to do for missing data in time series [closed]
I would suggest looking into multiple imputation algorithms. Gary King has produced a package for R called Amelia that does very sophisticated multiple imputation and can handle time series quite well
What to do for missing data in time series [closed] I would suggest looking into multiple imputation algorithms. Gary King has produced a package for R called Amelia that does very sophisticated multiple imputation and can handle time series quite well. Amelia treats all of the data as multivariate normal and performs multiple random draws from that distribution until the draws converge on the final imputed value. But there are a number of customization features that you can incorporate so that you can sculpt Amelia runs to your particular problem, such as including lags, leads, and polynomial expressions.
What to do for missing data in time series [closed] I would suggest looking into multiple imputation algorithms. Gary King has produced a package for R called Amelia that does very sophisticated multiple imputation and can handle time series quite well
50,142
Relationship between sample mean and sample survival probability
The survival function at time $t$ for an exponential distribution with true mean $\mu$ is $$ S(t) = e^{-t/\mu} \quad , \quad t > 0. $$ According to http://en.wikipedia.org/wiki/Exponential_distribution#Parameter_estimation one over the sample mean is the maximum likelihood estimator of $1/\mu$, so that $$ \hat S(t) = e^{-t /\hat \mu} $$ serves as a natural estimator of $S$. A different way would be to drop the parametric assumption and use 1 minus the empirical distribution function instead. For both approaches, stratification by gender or other factors works as well.
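A small R illustration with simulated survival times (made-up numbers): the parametric plug-in estimate next to its nonparametric counterpart.
set.seed(1)
time <- rexp(50, rate = 1/10)        # simulated survival times with true mean 10
t0 <- 5
exp(-t0 / mean(time))                # parametric estimate of S(5) via the sample mean
1 - ecdf(time)(t0)                   # empirical (distribution-free) counterpart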
Relationship between sample mean and sample survival probability
The survival function at time $t$ for an exponential distribution with true mean $\mu$ is $$ S(t) = e^{-t/\mu} \quad , \quad t > 0. $$ According to http://en.wikipedia.org/wiki/Exponential_distributio
Relationship between sample mean and sample survival probability The survival function at time $t$ for an exponential distribution with true mean $\mu$ is $$ S(t) = e^{-t/\mu} \quad , \quad t > 0. $$ According to http://en.wikipedia.org/wiki/Exponential_distribution#Parameter_estimation one over the sample mean is the maximum likelihood estimator of $1/\mu$, so that $$ \hat S(t) = e^{-t /\hat \mu} $$ serves as a natural estimator of $S$. A different way would be to drop the parametric assumption and use 1 minus the empirical distribution function instead. For both approaches, stratification by gender or other factors works as well.
Relationship between sample mean and sample survival probability The survival function at time $t$ for an exponential distribution with true mean $\mu$ is $$ S(t) = e^{-t/\mu} \quad , \quad t > 0. $$ According to http://en.wikipedia.org/wiki/Exponential_distributio
50,143
Is it possible to compare model fit for a gaussian vs binomial glm
Well, first things first: you can try eyeballing the residuals and see if one or the other model makes them more even, but given that you haven't seen a clear winner, I suspect that won't split them either. In that case I think this really comes down to your definition of "better". Because you are using the same explanatory variables for both models, and the same number of parameters etc., it is simply a matter of comparing the goodness of fit of the two models. You simply have to define what you mean by a good fit. For example, if your utility is to minimise the mean square error of predictions, then pick the model that does that best - which should be the gaussian. If on the other hand you want the one that minimises the mean square log odds error, then pick the one that does that best (this should be the binomial). Of course, since you are choosing a model for empirical, pragmatic reasons, you needn't be restricted to this. You could choose the model that minimises the chi-squared error, or something even more exotic. What are you going to use the model for when it is fitted? Really you want to think about what the cost to the user of the model is when the model is wrong. This is related in spirit to this question about cost functions: Cost function for validating Poisson regression models
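Concretely, something along these lines in R (placeholder formulas and data frame names, since the original model isn't shown): fit both models and score them on whichever loss you have decided matters.
fit.gaus  <- glm(prop ~ x1 + x2, family = gaussian, data = df)
fit.binom <- glm(cbind(success, failure) ~ x1 + x2, family = binomial, data = df)
mse <- function(fit, obs) mean((obs - fitted(fit))^2)           # or any other agreed-upon loss
c(gaussian = mse(fit.gaus, df$prop), binomial = mse(fit.binom, df$prop))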
Is it possible to compare model fit for a gaussian vs binomial glm
Well first things first, you can try eyeballing the residuals and see if one or other model makes them more even, but I suspect given that you haven't seen a clear winner, that won't split them either
Is it possible to compare model fit for a gaussian vs binomial glm Well first things first, you can try eyeballing the residuals and see if one or other model makes them more even, but I suspect given that you haven't seen a clear winner, that won't split them either. In which case I think this really comes down to your definition of "better". Because you are using the same explanatory variables for both models, and the same number of parameters etc. then it is simply a matter of comparing the goodness of fit of the two models. You simply have to define what you mean by a good fit. For example, if your utility is to minimise the mean square error of predictions, then pick the model that does that best - which should be the gaussian. If on the otherhand you want the one that minimses the mean square log odds error, then pick the one that does that best (this should be the binomial). Of course, since you are choosing a model for empirical pragmatic reasons, you needn't be restricted to this. You could choose the model that minimises the chi-squared error, or something even more exotic. What are you going to use the model for when it is fitted? Really you want to think about what the cost to the user of the model is, when the model is wrong. This is related in spirit to this question about cost functions: Cost function for validating Poisson regression models
Is it possible to compare model fit for a gaussian vs binomial glm Well first things first, you can try eyeballing the residuals and see if one or other model makes them more even, but I suspect given that you haven't seen a clear winner, that won't split them either
50,144
Robust parameter estimation for Exponentially modified Gaussian distribution
If it seems like most of the outliers are to the far right you could decide on a threshold including most of the datapoints to the left and censor all values to the right of that threshold. It would be akin to trimming but without introducing a bias. I don't know how you would run such an analysis in a classical statistics framework but it is "pretty easy" using Bayesian statistics
Robust parameter estimation for Exponentially modified Gaussian distribution
If it seems like most of the outliers are to the far right you could decide on a threshold including most of the datapoints to the left and censor all values to the right of that threshold. It would b
Robust parameter estimation for Exponentially modified Gaussian distribution If it seems like most of the outliers are to the far right you could decide on a threshold including most of the datapoints to the left and censor all values to the right of that threshold. It would be akin to trimming but without introducing a bias. I don't know how you would run such an analysis in a classical statistics framework but it is "pretty easy" using Bayesian statistics
Robust parameter estimation for Exponentially modified Gaussian distribution If it seems like most of the outliers are to the far right you could decide on a threshold including most of the datapoints to the left and censor all values to the right of that threshold. It would b
50,145
Cannibalization of product sales
If you wish to determine the impact of sales of product B on product A, you must look at the conditional effect. The conditions that you might need to consider are 1) day-of-the-week; 2) week-of-the-year; 3) month-of-the-year; 4) specific days-of-the-month; 5) lead and lag effects around each holiday/event; 6) Monday-after-a-Friday event; 7) Friday-before-a-Monday event; 8) particular weeks-in-the-month; 9) ARIMA structure; 10) level shifts, local time trends, seasonal pulses, pulses; 11) changes in parameters over time; 12) changes in variance over time; 13) impact of price/promotions ... to name a few. I have written a paper on this subject; please see http://www.autobox.com/cms/index.php/news/131-102706-white-paper-on-cannibalization-qtesting-market-hypothesisq-by-john-c-pickett-david-p-reilly-view
Cannibalization of product sales
If you wish to determine the impact of sales of product B on Product A , you must look at the conditional effect. The conditions that you might need to consider are 1) day-of-the-week ; 2) week-of-the
Cannibalization of product sales If you wish to determine the impact of sales of product B on Product A , you must look at the conditional effect. The conditions that you might need to consider are 1) day-of-the-week ; 2) week-of-the-year ; 3) month-of-the-year 4) specific days-of-the-month ; 5) lead and lag effects around each holiday/event 5) Monday-after-a Friday event ; 6) Friday-before a Monday event ; 7) particular-weeks-in-the-month ; 8) ARIMA structure ; 9) Level Shifts, Local Time Trends , Seasonal Pulses, Pulses ; 9) changes in parameters over time; 10) changes in variance over time ; 11) impact of price/promotions ...... to name a few. I have written a paper on this subject, please see http://www.autobox.com/cms/index.php/news/131-102706-white-paper-on-cannibalization-qtesting-market-hypothesisq-by-john-c-pickett-david-p-reilly-view
Cannibalization of product sales If you wish to determine the impact of sales of product B on Product A , you must look at the conditional effect. The conditions that you might need to consider are 1) day-of-the-week ; 2) week-of-the
50,146
Correlated bivariate normal distribution: finding percentage of data which is 2 standard deviations above the mean?
If I have understood your question correctly, you want the CDF of the bivariate normal distribution. That is, for the standardized case: $$ \Phi(\mathrm{\pmb{b}},\rho) = \frac{1}{2\pi\sqrt{1-\rho^{2}}}\int_{-\infty}^{b_{1}}{\int_{-\infty}^{b_{2}}}\exp\left[-(x^{2}-2\rho xy+y^{2})/(2(1-\rho^{2}))\right]\mathrm{d}y\mathrm{d}x $$ This has no closed form solution and must be integrated numerically. With modern software, this is quite trivial. Here is an example in R with perfectly correlated normal distributions (i.e. $\rho = 1$) with $\pmb{\mu}=(100, 40)^\intercal$ and covariance matrix $\Sigma = \begin{bmatrix} 225 & 75 \\ 75 & 25 \end{bmatrix}$. We calculate the probability of both variables being above $2$ SD: library(mvtnorm) library(MASS) corr.mat <- matrix(c(1, 1, 1, 1), 2, 2, byrow = TRUE) # correlations sd.mat <- matrix(c(15, 0, 0, 5), 2, 2, byrow = TRUE) # standard deviations cov.mat <- sd.mat %*% corr.mat %*% sd.mat # covariance matrix mu <- c(100, 40) # means pmvnorm(lower = mu + 2*diag(sd.mat), upper = Inf, mean = mu, sigma = cov.mat) [1] 0.02275013 attr(,"error") [1] 2e-16 attr(,"msg") [1] "Normal Completion" As you rightly said: When they are perfectly correlated, the probability is about 2.3%. What about a correlation of -0.21? corr.mat <- matrix(c(1, -0.21, -0.21, 1), 2, 2, byrow = TRUE) # correlations sd.mat <- matrix(c(15, 0, 0, 5), 2, 2, byrow = TRUE) # standard deviations cov.mat <- sd.mat %*% corr.mat %*% sd.mat # covariance matrix mu <- c(100, 40) # means pmvnorm(lower = mu + 2*diag(sd.mat), upper = Inf, mean = mu, sigma = cov.mat) [1] 0.0001228342 attr(,"error") [1] 1e-15 attr(,"msg") [1] "Normal Completion" The probability is much lower, namely 0.0001228342. We can verify our calculations by simulation. For the example above: set.seed(21) dat <- rmvnorm(1e7, mean = c(100, 40), sigma = cov.mat) sum(dat[, 1] > (100+2*15) & dat[, 2] > (40+2*5))/dim(dat)[1] [1] 0.0001261 This is very close to the result from numerical integration. These calculations can easily be extended to multivariate normal distributions.
Correlated bivariate normal distribution: finding percentage of of data which is 2 standard deviatio
If I have understood your question correctly, you want the CDF of the bivariate normal distribution. That is, for the standardized case: $$ \Phi(\mathrm{\pmb{b}},\rho) = \frac{1}{2\pi\sqrt{1-\rho^{2}}
Correlated bivariate normal distribution: finding percentage of of data which is 2 standard deviations above the mean? If I have understood your question correctly, you want the CDF of the bivariate normal distribution. That is, for the standardized case: $$ \Phi(\mathrm{\pmb{b}},\rho) = \frac{1}{2\pi\sqrt{1-\rho^{2}}}\int_{-\infty}^{b_{1}}{\int_{-\infty}^{b_{2}}}\exp\left[{-(x^{2}-2\rho xy+y^{2}})/(2(1-\rho^{2})\right]\mathrm{d}y\mathrm{d}x $$ This has no closed form solution and must be integrated numerically. With modern software, this is quite trivial. Here is an example in R with perfectly correlated normal distributions (i.e. $\rho = 1$) with $\pmb{\mu}=(100, 40)^\intercal$ and covariance matrix $\Sigma = \begin{bmatrix} 225 & 75 \\ 75 & 25 \end{bmatrix}$. We calculate the probabiltiy of both variables being above $2$ SD: library(mvtnorm) library(MASS) corr.mat <- matrix(c(1, 1, 1, 1), 2, 2, byrow = TRUE) # correlations sd.mat <- matrix(c(15, 0, 0, 5), 2, 2, byrow = TRUE) # standard deviations cov.mat <- sd.mat %*% corr.mat %*% sd.mat # covariance matrix mu <- c(100, 40) # means pmvnorm(lower = mu + 2*diag(sd.mat), upper = Inf, mean = mu, sigma = cov.mat) [1] 0.02275013 attr(,"error") [1] 2e-16 attr(,"msg") [1] "Normal Completion" As you rightly said: When they are perfectly correlated, the probability is about 2.3%. What about a correlation of -0.21? corr.mat <- matrix(c(1, -0.21, -0.21, 1), 2, 2, byrow = TRUE) # correlations sd.mat <- matrix(c(15, 0, 0, 5), 2, 2, byrow = TRUE) # standard deviations cov.mat <- sd.mat %*% corr.mat %*% sd.mat # covariance matrix mu <- c(100, 40) # means pmvnorm(lower = mu + 2*diag(sd.mat), upper = Inf, mean = mu, sigma = cov.mat) [1] 0.0001228342 attr(,"error") [1] 1e-15 attr(,"msg") [1] "Normal Completion" The probability is much lower, namely 0.0001228342. We can verify our calculations by simulation. For the example above: set.seed(21) dat <- rmvnorm(1e7, mean = c(100, 40), sigma = cov.mat) sum(dat[, 1] > (100+2*15) & dat[, 2] > (40+2*5))/dim(dat)[1] [1] 0.0001261 This is very close to the result from numerical integration. These calculations can easily be extended to multivariate normal distributions.
Correlated bivariate normal distribution: finding percentage of of data which is 2 standard deviatio If I have understood your question correctly, you want the CDF of the bivariate normal distribution. That is, for the standardized case: $$ \Phi(\mathrm{\pmb{b}},\rho) = \frac{1}{2\pi\sqrt{1-\rho^{2}}
50,147
Correlated bivariate normal distribution: finding percentage of data which is 2 standard deviations above the mean?
Good question. Your intuitions and approaches are right. There is, however, no simple analytic formula for the joint probability you're after. One could write a short program in R to compute the probability but it would necessarily use numerical integration.
Correlated bivariate normal distribution: finding percentage of of data which is 2 standard deviatio
Good question. Your intuitions and approaches are right. There is, however, no simple analytic formula for the joint probability you're after. One could write a short program in R to compute the proba
Correlated bivariate normal distribution: finding percentage of of data which is 2 standard deviations above the mean? Good question. Your intuitions and approaches are right. There is, however, no simple analytic formula for the joint probability you're after. One could write a short program in R to compute the probability but it would necessarily use numerical integration.
Correlated bivariate normal distribution: finding percentage of of data which is 2 standard deviatio Good question. Your intuitions and approaches are right. There is, however, no simple analytic formula for the joint probability you're after. One could write a short program in R to compute the proba
50,148
Correlated bivariate normal distribution: finding percentage of data which is 2 standard deviations above the mean?
Here is a useful explanation for #3: Multivariate Normal distribution. See 'bivariate normal distribution'; in that section you can see the pdf for a bivariate normal distribution with the correlation coefficient. You can integrate the pdf from $-\infty$ to $\mu + 2\sigma$ (and subtract from 1 to get the upper tail). Or perhaps this is enough info to find an equation for the CDF, which can be evaluated at $\mu + 2\sigma$ to give you what you need.
50,149
does it make sense for non-negative data to subtract the mean and divide by the std dev?
First of all, there have been several questions on standardization already, e.g. Variables are often adjusted (e.g. standardised) before making a model - when is this a good idea, and when is it a bad one? When should you center your data & when should you standardize? Subtracting the mean is one way of centering your data: The average becomes the new origin in the "point cloud description" of the data (each case is a point in $p$ dimensions; for RGB images, $p = 3$). Properly centered data can lead to numerically more stable models, and centering may also help in the interpretation of data and models: it sets a "baseline", and the centered data records deviations from this. Whether this is a sensible idea depends on your data: for some data it does make sense, for other data another center may be more appropriate, and yet other data sets already have a useful center. E.g. in the example of star photographs, you may want to find out the average background color and subtract that. Dividing by the standard deviation (or the variance) standardizes the data. This can be useful to achieve equal weights for all input channels in the subsequent data analysis. In other cases, it is not sensible. The latter may very well be the case for your data: your variates already share their physical unit. However, you may want to calibrate them to correct the wavelength dependence of the camera's sensitivity (white-light correction). You may also want to adjust all channels together: that would be adjusting contrast and brightness, which are also a way to center and standardize.
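As a quick illustration of the difference between centering and full standardization, here is a small R sketch on made-up three-channel data (the numbers are arbitrary):
set.seed(1)
X <- cbind(R = rnorm(100, 200, 20), G = rnorm(100, 150, 15), B = rnorm(100, 100, 10))
X_centered <- scale(X, center = TRUE, scale = FALSE)   # subtract column means only
X_standard <- scale(X, center = TRUE, scale = TRUE)    # also divide by column SDs
colMeans(X_centered)                                   # all approximately 0
apply(X_standard, 2, sd)                               # all exactly 1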
50,150
does it make sense for non-negative data to subtract the mean and divide by the std dev?
Since you mention sparse coding, I assume you are referring to natural images. For natural images, standardization is often carried out because natural image patches have pretty stable statistical properties once you have subtracted the constant part (and whitened them; see below). You may look at it like this: a natural image patch $p$ has a mean luminance (the mean of the patch) and a contrast (the standard deviation of the patch). If you are interested in the content of the patch, then it is a good idea to subtract the mean luminance and divide out the contrast to map all image patches with the same content onto the same point. Natural image patches have pretty stable statistical properties after subtracting the mean (also often called the DC component). For reference you could look at papers by David Field, Bruno Olshausen, David Ruderman, Eero Simoncelli, Matthias Bethge, or Aapo Hyvaerinen. Interestingly, the statistics of the DC component vary a lot from image to image (if you sample many patches from one image), but the statistical properties of the patches are quite stable. This is true in particular for whitened patches, i.e. when you divide by the standard deviation in the PCA basis (a whitening matrix is not unique, but the PCA version is one possible choice). Note that many sparse coding models are actually trained on DC-subtracted and whitened natural image patches. In short: for natural images you like to do the standardization because the probability models fitted to the standardized patches generalize better from image to image.
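Below is a rough R sketch of this preprocessing pipeline (DC subtraction, contrast normalization, PCA whitening) on made-up data; 'patches' stands in for a matrix with one flattened image patch per row.
set.seed(42)
patches  <- matrix(rnorm(500 * 64), nrow = 500, ncol = 64)   # placeholder patches
dc       <- rowMeans(patches)                   # mean luminance (DC component) per patch
centered <- sweep(patches, 1, dc, "-")          # remove the DC component
contrast <- apply(centered, 1, sd)
normed   <- sweep(centered, 1, contrast, "/")   # divide out the contrast
pc       <- prcomp(normed)                      # rotate into the PCA basis
whitened <- scale(pc$x, center = FALSE, scale = pc$sdev)  # unit variance per component
round(cov(whitened)[1:3, 1:3], 3)               # approximately the identity matrix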
50,151
Latent class model with both continuous and categorical indicators in R [closed]
Regarding question 2, another similar question here (Which R package to use to conduct a latent class growth analysis (LCGA) / growth mixture model (GMM)?) suggested OpenMx for "advanced structural equation modeling" in R. If it can be used to estimate growth mixture models, I'm betting it can be used to model a hybrid latent class/profile model. Haven't played with it myself yet though.
50,152
Question about inverse in a two-step estimator as a joint GMM-estimators approach
First of all, the formula for the inverse of $J$ has nothing to do with Jacobians, it is just the general inverse of a $2 \times 2$ matrix. Second, using the scalar expressions in the matrix context is extremely dangerous, as the matrix multiplication is not commutative, unlike the scalar multiplication (i.e., generally $AB \neq BA$). If you are not aware of this, you have very serious gaps in your math preparedness to tackle econometrics. The inverse of a block matrix $$ M = \left(\begin{array}{cc}A & B \\ C & D\end{array}\right) $$ is $$ M^{-1} = \left(\begin{array}{cc} A^{-1} + A^{-1} B P^{-1} C A^{-1} & - A^{-1} B P^{-1} \\ - P^{-1} C A^{-1} & P^{-1} \end{array}\right), \quad P = D - CA^{-1} B $$ or $$ M^{-1} = \left(\begin{array}{cc} Q^{-1} & - Q^{-1} B D^{-1} \\ - D^{-1} C Q^{-1} & D^{-1} + D^{-1} C Q^{-1} B D^{-1} \end{array}\right), \quad Q = A - BD^{-1} C $$ depending on which is easier to tackle. See Section 1.11 of Magnus & Neudecker (1999) and/or Exercise 5.13 of Abadir & Magnus (2005). Once again, an econometrician must have at least one of these books within an arm's length.
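As a quick numerical sanity check of the first block-inverse formula, here is a small R sketch with random matrices (it assumes A and the Schur complement P are invertible):
set.seed(1)
A <- matrix(rnorm(9), 3, 3);  B <- matrix(rnorm(6), 3, 2)
C <- matrix(rnorm(6), 2, 3);  D <- matrix(rnorm(4), 2, 2) + 3 * diag(2)
M <- rbind(cbind(A, B), cbind(C, D))
Ai <- solve(A)
P  <- D - C %*% Ai %*% B                       # Schur complement of A
Pi <- solve(P)
Minv <- rbind(cbind(Ai + Ai %*% B %*% Pi %*% C %*% Ai, -Ai %*% B %*% Pi),
              cbind(-Pi %*% C %*% Ai,                   Pi))
max(abs(Minv - solve(M)))                      # zero up to rounding error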
50,153
What kind of experiment might Hiawatha have designed?
I will make a stab at a helpful answer (the teacher in me thinks that you will learn more by using Google or Wikipedia or other resources to learn about truncated normals and who Bancroft is than by reading a Cliff's Notes version; the realist in me realizes that I would have to look up Bancroft to be sure of giving a good answer, and I probably will later). The poem is not critical of any specific experimental design; rather it is critical of focusing too much on one aspect of the research without considering other important questions. I remember one friend (my professor at the time) telling me of the first experiment he designed after graduating. It was a beautiful experiment and would have given great answers to many questions if it had ever been carried out, but it was too expensive and too complicated, so the people he designed it for never did it. Telling the story many years later, he talked about ways he could have designed the experiment so that they would easily have run it; while those designs were inferior to his original one on paper, any experiment that yields actual data is better than one that is never run. When I meet with a client at the beginning of a project, the first hour is usually spent working out exactly what question(s) they are trying to answer. In some cases that is all I need to do: help them understand what their question really is, and then the experiment and analysis are simple enough that they can do it themselves. While I really would like to do some of the complicated designs, more often I find myself helping them find a simpler design that answers a specific question well and is doable (I mostly do low-budget, often no-budget, projects). Hiawatha in the poem had his own goal of being unbiased, quick, and powerful, but did not take time to find out that the main goal was to actually hit the target. The poem is critical of using a design or technique because it is cool, rather than finding out what the real question is and finding the design and technique that will answer that question. I have often seen in students, new graduates, or clients who have had enough stats to be dangerous (I have been in all these categories and was guilty of the same sins) a tendency to do every analysis that the textbook/teacher taught. When I ask them why they did a particular test, the responses are often along the lines of "that is what the text did" or "that is what they did in this paper", but when I ask what question that test answers I often get the deer-in-the-headlights look (hopefully followed by the realization that there is no point in doing a test that does not answer a question you are interested in). I think Hiawatha falls in this category: he was doing fancy things for the sake of doing fancy things rather than focusing on what the goal really was (hitting the target). The poem is not saying that particular designs are bad; for some questions the more complicated designs are appropriate, while for others the simpler designs are better. We need to find the right design for the question and the science involved, not focus on our favorite regardless of whether it is appropriate.
Sometimes it is important to reduce or eliminate bias; at other times a little bias can be lived with if it gains us something else, such as reduced variance or a reduction of costs to where things can actually be done (and my wife finds bias very useful, though to her bias means fabric that has been cut into diagonal strips, which gives it more flexibility and stretch). Hopefully this is more helpful than just rambling. Maybe this will remind others of specific cases they can add, where a tool that could be useful in other settings was misused in a particular case.
50,154
Combining posterior distributions
Unfortunately, you cannot combine posterior chains in that way. From your description, what you have are independent draws from MCMC chains for the following posterior distributions: $$p(\beta|\boldsymbol{x}_i, \pi_i),$$ where $\boldsymbol{x}_i$ is the data-set and $\pi_i$ is a prior in model $i = 1, ..., n$. You want a single posterior: $$p(\beta|\boldsymbol{x}_1, ..., \boldsymbol{x}_n, \pi),$$ taken using the data from all $n$ models, where $\pi$ is the chosen prior for your analysis. Assuming that all models used the same prior that you want to use (i.e., assuming that $\pi = \pi_1 = ... = \pi_n$) gives some simplification to this problem, but it is still not enough. Unfortunately, even with this simplifying assumption, there is no function that transforms the original posteriors to the posterior you want, and similarly, no way to map the set of original MCMC chains to an appropriate MCMC chain for the posterior distribution you want. Instead, you will need to go back to first-principles with the data sets from those models. You can either take each of the $n$ data-sets jointly, and build a single model to give an MCMC chain for the posterior from all this data, or you could do this iteratively, by updating one data-set at a time. When updating evidence in Bayesian statistics, the following recursive relationship holds: $$p(\beta|\boldsymbol{x}_1, ..., \boldsymbol{x}_n, \pi) \propto p(\boldsymbol{x}_n|\beta) \cdot p(\beta|\boldsymbol{x}_1, ..., \boldsymbol{x}_{n-1}, \pi).$$ What this says is that the posterior given $\boldsymbol{x}_1, ..., \boldsymbol{x}_{n}$ can be obtained by using the posterior given $\boldsymbol{x}_1, ..., \boldsymbol{x}_{n-1}$ as your prior, and then updating this with the newly observed data $\boldsymbol{x}_{n}$. When using MCMC chains, this means that you would use the following looping procedure. For $i = 1, ..., n-1$ do the following: Take the posterior MCMC chain from the distribution $p(\beta|\boldsymbol{x}_1, ..., \boldsymbol{x}_i, \pi)$. Use this as a chain of prior values for $\beta$. Input this into your MCMC calculations updating for the new data-set $\boldsymbol{x}_{i+1}$. This gives you a posterior MCMC chain from $p(\beta|\boldsymbol{x}_1, ..., \boldsymbol{x}_{i+1}, \pi)$. Generate the new posterior MCMC chain. Running through this iterative process gives you a posterior MCMC chain for the distribution $p(\beta|\boldsymbol{x}_1, ..., \boldsymbol{x}_n, \pi)$, which is of interest.
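To make the recursive relationship concrete, here is a minimal conjugate sketch in R (a normal mean with known variance, all numbers made up) showing that updating one data set at a time reproduces the posterior from all the data pooled; in practice you would re-run the MCMC step described above rather than use closed-form updates.
set.seed(7)
datasets <- list(rnorm(20, 1, 2), rnorm(30, 1, 2), rnorm(25, 1, 2))
sigma2 <- 4                      # assumed known observation variance
m <- 0; v <- 100                 # prior mean and variance for beta
for (x in datasets) {            # posterior after data set i becomes the next prior
  v_new <- 1 / (1 / v + length(x) / sigma2)
  m     <- v_new * (m / v + sum(x) / sigma2)
  v     <- v_new
}
c(sequential_mean = m, sequential_var = v)
x_all <- unlist(datasets)        # one-shot update with the pooled data is identical
v_all <- 1 / (1 / 100 + length(x_all) / sigma2)
c(pooled_mean = v_all * sum(x_all) / sigma2, pooled_var = v_all)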
50,155
Categorical features in linear machine learning algorithms
What you describe for categorical features is called feature binarization. That is extremely easy and commonly used, and it is basically the only workable approach if each feature consists of a single word. What is used in the Vowpal Wabbit package (as well as in most packages I know, like scipy, scikit) is a somewhat more general approach called Bag of Words. Suppose your features consist of two or more words: "red wood" or "blue iron". Then you can count the frequencies of the individual words, not the phrases, and for each sample you will now have a frequency vector of the words it contains. If you normalize it by the overall frequencies, you get the Bag of Words representation. A slightly more detailed description is given, for example, here. As for the Vowpal Wabbit package, they implement it in the function tokenize in the file parse_primitives.cc. Hope it helps.
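As a small illustration (not the Vowpal Wabbit code itself), here is an R sketch of binarizing a categorical feature and building a word-count (bag-of-words) matrix for a few made-up documents:
color  <- factor(c("red", "blue", "red", "green"))
sapply(levels(color), function(l) as.integer(color == l))   # feature binarization
docs   <- c("red wood", "blue iron", "red iron red")
tokens <- strsplit(docs, " ")
vocab  <- sort(unique(unlist(tokens)))
bow    <- t(sapply(tokens, function(tk) table(factor(tk, levels = vocab))))
bow                          # one row per document, one column (count) per word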
50,156
Ordinal Probit model in plain English
The reason one would use an OP is to study a categorical variable that is ordered, but where the actual values reflect only a ranking. For example, take bond ratings. There's an underlying variable that is unobserved, called creditworthiness, that some agency has divided into bins, which range from AAA, AA, A, BBB, and so on down to D. You can imagine coding these as 12, 11, 10,.... Now AAA is better than AA, and AA is better than A, but the two differences are not equivalent. In your case, AAA is like "strong growth" and D is "no growth". This unobserved creditworthiness (or BF growth) is a function of the explanatory variables (like sodium), parameters $\beta$, and a normally* distributed error $\varepsilon$. Each bond rating corresponds to a specific range of the creditworthiness. These ranges are not necessarily the same length. Suppose a firm is now at AA and becomes more creditworthy. Eventually, it would pass over the boundary between AA and AAA and the firm would get a new ranking. The ordered probit would estimate the parameters $\beta$ using MLE, together with the values of the boundaries (aka cut values) defining the bins of the creditworthiness. The interpretation of the parameters is a bit tricky since they are identified up to scale only. It's fairly easy to compare a ratio of two parameters to decide which one is more important. For a more involved exercise, you can also take the differences of adjacent cut values and divide by the sodium slope. This tells you the maximum change in sodium necessary to move to the next bin. Alternatively, you can also look at the change in the probability of being in a specific bin caused by a change in sodium. *If the error has a logistic distribution, you would have the ordered logit instead of the probit.
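A hedged R sketch of fitting an ordered probit with MASS::polr on simulated data (the variable names, cut points, and data-generating values are invented for illustration, not taken from the question):
library(MASS)
set.seed(1)
n      <- 300
sodium <- runif(n, 0, 10)
latent <- 0.5 * sodium + rnorm(n)                     # unobserved "growth" scale
growth <- cut(latent, breaks = c(-Inf, 1, 3, Inf),
              labels = c("none", "moderate", "strong"), ordered_result = TRUE)
fit <- polr(growth ~ sodium, method = "probit", Hess = TRUE)
summary(fit)                           # slope for sodium plus the estimated cut values
fit$zeta                               # the bin boundaries on the latent scale
diff(fit$zeta) / coef(fit)["sodium"]   # change in sodium needed to cross a bin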
50,157
Do categorical variables have to be dummy coded in SVM?
Actually, when you look at the model.matrix documentation, you will find that when a formula is specified it automatically dummy codes the factor variables. You can specify explicitly, via the contrasts options, what to do with the factor variables. Hope that helps!
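A brief sketch of what that looks like in practice (the data frame here is made up):
df <- data.frame(y = rnorm(6), grp = factor(c("a", "b", "c", "a", "b", "c")))
model.matrix(y ~ grp, data = df)                                           # default treatment (dummy) coding
model.matrix(y ~ grp, data = df, contrasts.arg = list(grp = "contr.sum"))  # alternative coding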
50,158
Analysis of Deviance in R - Which test?
As @caracal points out, anova(mod2,test="Chisq") should return the desired test. We can't tell what went wrong with 1-pchisq() as we don't know what you put inside the brackets; but one guess would be that you accidentally tested for the significance of all explanatory variables in the model together.
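For reference, a small simulated example showing the anova() call and the equivalent hand calculation with 1 - pchisq() for a single added term (the model and variable names are placeholders):
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- rbinom(100, 1, plogis(0.8 * d$x1))
mod2 <- glm(y ~ x1 + x2, family = binomial, data = d)
anova(mod2, test = "Chisq")                           # sequential chi-squared test per term
mod1 <- glm(y ~ x1, family = binomial, data = d)
1 - pchisq(deviance(mod1) - deviance(mod2), df = 1)   # same p-value as the x2 row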
50,159
Analysis of Deviance in R - Which test?
I have to say, I've been doing statistics for 10 years and I've never heard anyone call it Analysis of Deviance before. Where is that from? Just wondering, it is a logical title though of course "Analysis of Variance" is the more universal term, as the portmanteau ANOVA implies. At any rate, the difference is that in the first ANOVA you were comparing two models (the probability of them being different) and in the second you did an ANOVA on one model to get the p-values of the terms in the model. If you want to know the significance of a particular term, then you're probably looking for the second choice. Further, just FYI, there's never a need to calculate your own p-value. This ANOVA function gives it to you in the output (Pr > F) as do all the much more common ways of invoking ANOVA in R. Cheers.
50,160
How do you predict the value of new instance, when the training data were normalized?
The normalization takes place with the following steps: The mean of each variable is subtracted from that variable. Each variable is then divided by the standard deviation (stddev) of that variable. So you have the mean and stddev from the unnormalized X. The most common way is to use this mean and stddev to normalize the new data, applying the same steps to your new data to be predicted. Thus, your last suggestion is both correct and common. In addition, you can add an intercept term to the BETA matrix/vector obtained from the PLS model to represent the mean-centering step. Additionally, you can rescale BETA once more to account for the division by the stddev. Adding new data to the training set (your original X matrix) and then normalizing it, however, is not a correct way. This makes the scaling and, more importantly, your mean, which acts like an intercept term, dependent on the training data AND your new data combined. It would lead to unreliable results. You can also test and see that these two methods produce different results.
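A minimal R sketch of that workflow (the matrices are made up; the key point is that the new data are scaled with the training mean and stddev, not their own):
set.seed(1)
train <- matrix(rnorm(200, mean = 50, sd = 10), ncol = 4)
new_x <- matrix(rnorm(12,  mean = 50, sd = 10), ncol = 4)
mu  <- colMeans(train)
sds <- apply(train, 2, sd)
train_scaled <- scale(train, center = mu, scale = sds)
new_scaled   <- scale(new_x, center = mu, scale = sds)   # reuse training mu and sds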
50,161
How do you predict the value of new instance, when the training data were normalized?
I normalized the sample data based on the values in the training dataset: sample_data['col1'] = (sample_data['col1'] - training_data['col1'].min()) / (training_data['col1'].max() - training_data['col1'].min()) sample_data['col2'] = (sample_data['col2'] - training_data['col2'].min()) / (training_data['col2'].max() - training_data['col2'].min()) sample_data['col3'] = (sample_data['col3'] - training_data['col3'].min()) / (training_data['col3'].max() - training_data['col3'].min()) ... and so on. Your sample data will be normalized according to your training data and you can start doing your predictions.
50,162
Random block design ANOVA in R
That looks right to me! There's a really good tutorial that I use in teaching R courses: Baron's Using R for Psychology Experiments. Even if you're not doing psych experiments, I think Baron does a great job explaining ANOVA and the proper use of the Error() function.
50,163
Random block design ANOVA in R
Just note aov( response ~ item + restaurant ) gives the same Anova (same DF and sums of squares for restaurant, item and residuals, so same F test) as aov( response ~ item + Error( restaurant / item ) )
50,164
Fisher Distance for feature selection
'N', or the number of samples, is usually the number of cases; this can be the number of subjects (assuming you have one measurement of each feature per subject) or the number of measurements per feature. If you have multiple measurements per feature per subject, you will need to account for this. Generally, EEG channels should be considered separately, as different activity patterns can be observed in different locations, i.e. occipital alpha activity may have different characteristics to frontal alpha for a given subject. For this reason I would recommend calculating each feature separately for each channel. If it is justifiable for the application, you can average each feature across all channels for each subject. One solution could be to take the mean value of each feature across all four trials, calculated for each of the 3 channels (electrodes), giving 9 features for each subject. You can use hypothesis testing such as a t-test or rank-sum test to examine how well each feature distinguishes between the two classes. However, if you are performing classification, feature selection should be performed within each fold of cross-validation.
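For illustration, here is a hedged R sketch of computing a Fisher-type score, (mean difference)^2 / (sum of variances), separately for each channel and feature; the array dimensions and group labels are invented placeholders.
set.seed(3)
feat <- array(rnorm(40 * 3 * 3), dim = c(40, 3, 3))   # subjects x channels x features
grp  <- rep(c("A", "B"), each = 20)
fisher_score <- function(x, g) {
  m <- tapply(x, g, mean); v <- tapply(x, g, var)
  (m[1] - m[2])^2 / (v[1] + v[2])
}
scores <- apply(feat, c(2, 3), fisher_score, g = grp)
round(scores, 3)                                      # rows = channels, columns = features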
50,165
Fisher Distance for feature selection
You may experiment with two representations of the problem. 1- Group ECG channels of one class together and label class A as 1 and class B as 2. So, the target variable is composed of two classes only. Now, you may apply some feature ranking methods which are independent of the classification process; for example, there are mRMR and JMI. 2- Every channel is a separate class and, for each channel, there are two groups, i.e. normal and abnormal signals. This can be viewed as a hierarchical solution: you start by classifying the signal into one of the 3 channels, and after that you progress by classifying the signal into A or B. Note that the target variable differs between the two representations, and that may lead to different subsets of the selected features.
50,166
How to interpret regression coefficients when outcome variable was transformed to $1 / \sqrt{Y}$?
The model is $$\frac{1}{\sqrt{Y}} = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p + \varepsilon$$ where $Y$ is the original outcome, the $X_i$ are the explanatory variables, the $\beta_i$ are the coefficients, and $\varepsilon$ are iid, mean-zero error terms. Writing $b_i$ for the estimated value of $\beta_i$, we see that a one-unit change in $X_i$ adds $b_i$ to the right hand side. Starting from any baseline set of values $(x_1, \ldots, x_p)$, this induces a change in predicted values from $\widehat{1/\sqrt{y}} = b_0 + b_1 x_1 + \cdots + b_p x_p$ to $\widehat{1/\sqrt{y'}} = b_0 + b_1 x_1 + \cdots + b_p x_p + b_i$. Subtracting the first equation from the second gives $$\frac{1}{\sqrt{\hat{y'}}} - \frac{1}{\sqrt{\hat{y}}} = b_i.$$ Solving for $\hat{y'}$ gives $$\hat{y'} = \frac{\hat{y}}{1 + 2b_i\sqrt{\hat{y}} + b_i^2 \hat{y}}.$$ One may stop here, but often we seek simpler expressions: the behavior of this one might not be any easier to understand than the original model. Simplification can be achieved provided $b_i$ is very small. If necessary, we can contemplate a tiny change in $X_i$, say by an amount $\delta$, which would replace $b_i$ in the preceding equation by $\delta b_i$. Using a sufficiently small value of $\delta$ will assure the denominator is close to $1$. When it is, $$\frac{\hat{y}}{1 + 2\delta b_i\sqrt{\hat{y}} + \delta^2 b_i^2 \hat{y}} \approx \hat{y}(1 - 2\delta b_i\sqrt{\hat{y}} - \delta^2 b_i^2 \hat{y}),$$ whence the change in predicted values is $$\hat{y'} - \hat{y} \approx -\delta\, \hat{y}\left(2b_i\sqrt{\hat{y}} + \delta b_i^2 \hat{y}\right).$$ Taking $\delta$ to be so small that $\delta b_i^2 \hat{y} \ll 2 b_i\sqrt{\hat{y}}$ allows us to drop the second term in the right hand side. That is, for very tiny changes, the predicted outcome changes by approximately $-2b_i\hat{y}^{3/2}$ times the amount of change in $x_i$; equivalently, the proportional change in $\hat{y}$ is about $-2b_i\sqrt{\hat{y}}$ per unit change in $x_i$. Comments The appearance of the negative sign indicates that an increase in $X_i$ will decrease $Y$ when $b_i$ is positive and increase $Y$ when $b_i$ is negative. Normally, we avoid this (potentially confusing) sign reversal by using $-1/\sqrt{Y}$ instead of $1/\sqrt{Y}$ when making a reciprocal square root transformation (or any other transformation that reverses the order of numbers). This solution method is always applicable no matter how $Y$ is re-expressed, but it can lead to complicated algebra for other transformations of $Y$. Those who know the basics of differential calculus will recognize that all we're doing here is approximating the change in $\hat{y}$ to first order using its derivative with respect to $x_i$, so they will be able to avoid most of the algebraic manipulations.
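The following R sketch (with an invented data-generating process) checks this interpretation numerically: the finite-difference change in the prediction matches the first-order approximation $-2 b_i \hat{y}^{3/2}$.
set.seed(2)
x <- runif(200, 1, 5)
y <- 1 / (0.2 + 0.1 * x + rnorm(200, sd = 0.01))^2    # so 1/sqrt(y) is linear in x
fit <- lm(I(1 / sqrt(y)) ~ x)
b   <- coef(fit)
yhat  <- 1 / (b[1] + b[2] * 3)^2                      # prediction at x = 3
delta <- 0.01
yhat2 <- 1 / (b[1] + b[2] * (3 + delta))^2            # prediction at x = 3 + delta
(yhat2 - yhat) / delta                                # observed change per unit x
-2 * b[2] * yhat^(3 / 2)                              # first-order approximation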
50,167
Dose response and lethal dose 50 analysis
Both analyses (non-linear regression and binomial GLM) have their advantages and disadvantages. They will produce quite similar LD50s if the data aren't variable, but can produce wildly different results if the data are quite variable or if the bottom of the curve is not found (e.g. 100% lethality in the highest treatment). I'd recommend trying both to see what makes the most sense with your data. Personally I find nonlinear regression gives a more realistic fit of the data given the experiment is a controlled lab-based assay, and it can therefore better meet some of the requirements such as normality of residuals, homogeneity of variances, minimal deviation from the model (i.e., run/replicate tests), and reasonably balanced sample units. Usually at least 5 treatment levels are needed for model convergence. Appropriate constraints are also needed. In addition to the drc package, a very easy to use program is GraphPad Prism, which requires no coding. I'd recommend using its free online Help section to better understand non-linear regression. On the other hand, binomial GLMs (logistic regressions) need to meet fewer criteria and may be stronger for unbalanced designs and smaller sample sizes, utilising the underlying probability distribution (the binomial) to inform estimates. Random variation, which is common in biological data, can also be incorporated using a binomial GLMM. They also don't need the artificial constraints that non-linear regression often does. Hope this helps.
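To give a flavour of the GLM route, here is a hedged R sketch with made-up assay counts; the LD50 falls where the linear predictor is zero, and MASS::dose.p returns the same quantity (on the log-dose scale here) together with a standard error.
dose  <- c(0.5, 1, 2, 4, 8, 16)
total <- rep(30, 6)
dead  <- c(1, 4, 9, 18, 26, 29)                       # invented mortality counts
fit   <- glm(cbind(dead, total - dead) ~ log(dose), family = binomial)
ld50  <- exp(-coef(fit)[1] / coef(fit)[2])            # logit = 0 at 50% mortality
ld50
MASS::dose.p(fit, p = 0.5)                            # log(LD50) with its standard error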
Dose response and lethal dose 50 analysis
Both analyses (non-linear regression and binomial GLM) have their advantages and disadvantages. They will produce quite similar LD50s if the data aren't variable but can produce wildly different resul
Dose response and lethal dose 50 analysis Both analyses (non-linear regression and binomial GLM) have their advantages and disadvantages. They will produce quite similar LD50s if the data aren't variable but can produce wildly different results if the data are quite variable or if the bottom of the curve is not found (e.g. 100% lethality in the highest treatment). I'd recommend trying both to see what makes the most sense with your data. Personally I find nonlinear regression gives a more realistic fit of the data given the experiment is a controlled lab-based assay, and therefore can better meet some of the requirements such as normality of residuals, homogeneity of variances, minimal deviation from the model (i.e run/replicate tests), reasonably balanced sample units. Usually at least 5 treatments levels are needed for model convergence. Appropriate constraints are also needed. In addition to the drc package, a very easy to use program is Graphpad Prism which requires no coding. I'd recommend using it's free online Help-section to better understand non-linear regression. On the other hand, binomial GLMs (logistic regressions) need to meet fewer criteria and may be stronger for an unbalanced designs and smaller sample sizes, utilising the underlying probability distribution (the binomial) to inform estimates. Random variation, which is common in biological data, can also be incorporated using a binomial GLMM. Also they also don't need artificial constraints like non-linear regression often do. Hope this helps.
Dose response and lethal dose 50 analysis Both analyses (non-linear regression and binomial GLM) have their advantages and disadvantages. They will produce quite similar LD50s if the data aren't variable but can produce wildly different resul
50,168
Dose response and lethal dose 50 analysis
Let's assume non-linear regression means non-linear least squares, which is a common way of fitting pharmacokinetic curves. That is the intent of the package "drc", whose main fitting function is "drm". However, this is not the right methodology to measure fatality. Pharmacokinetic curves predict the concentration of a drug, or its conjugates or metabolites, in the right medium (blood, urine) as a function of time and/or subject characteristics. They typically require multiple measurements over time, multiple administrations of drug (possibly at several levels), and the resulting curves may be "fed forward" in a hierarchy or network to predict, e.g. symptom resolution, tumor response, or death. If you are administering one-and-done dosing with no in-vivo markers of drug activity like repeated biospecimen sampling, a typical failure-time (survival) analysis will be the right way to go. You should plot the survival curves of the species using a stratified Kaplan-Meier curve and test either with a parametric survival or Cox model / log-rank test. These confer more power than a simple logistic regression because we additionally consider the time until death.
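A minimal sketch of that workflow with the survival package (the data frame dat and its columns time, status, and dose are hypothetical placeholders):

library(survival)
fit <- survfit(Surv(time, status) ~ factor(dose), data = dat)
plot(fit)                                                 # stratified Kaplan-Meier curves
survdiff(Surv(time, status) ~ factor(dose), data = dat)   # log-rank test across dose groups
coxph(Surv(time, status) ~ log(dose), data = dat)         # Cox model with dose as a covariate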
Dose response and lethal dose 50 analysis
Let's assume non-linear regression means non-linear least squares which is a common way of measuring pharmacokinetic curves. That is the intent of the package "drc" which has been renamed "drm". Howev
Dose response and lethal dose 50 analysis Let's assume non-linear regression means non-linear least squares which is a common way of measuring pharmacokinetic curves. That is the intent of the package "drc" which has been renamed "drm". However this is not the right methodology to measure fatality. Pharmacokinetic curves predicts the concentration of drug, or it's conjugates or metabolites, in the right medium (blood, urine) as a function of time and/or subject characteristics. They typically require multiple measurements over time, multiple administrations of drug (possibly at several levels), and the resulting curves may be "fed forward" in a heirarchy or network to predict, e.g. symptom resolution, tumor response, or death. If you are administering one-and-done dosing with no in-vivo markers of drug activity like repeated biospecimen sampling, a typical failure analysis will be the right way to go. You should plot the survival curves of the species using a stratified Kaplan-Meier curve and test either with a parametric survival or Cox-model / log-rank test. These confer more power than a simple logistic regression because we additionally consider the time until death.
Dose response and lethal dose 50 analysis Let's assume non-linear regression means non-linear least squares which is a common way of measuring pharmacokinetic curves. That is the intent of the package "drc" which has been renamed "drm". Howev
50,169
Wald vs. LR $\chi^2$ tests in SPSS generalized linear models
It's hard to be definitive without knowing all the details of your model (such as sample size), but I would remark that the likelihood ratio, Wald, and score tests are only asymptotically equivalent. That is, they agree as N $\rightarrow$ $\infty$. The Wald test is generally considered to be the least reliable in terms of Type I and Type II error. In GLM applications, the likelihood ratio is to be preferred. Additionally, the Wald test is not invariant to transformations (reparameterizations) of the model. Numerous studies confirm these features, and I have even run simulations of my own using ecological data. Regarding the limitations of the Wald test, see, for example: Fears, Thomas R.; Benichou, Jacques; and Gail, Mitchell H. (1996); A reminder of the fallibility of the Wald statistic, The American Statistician 50:226–7 However, there are situations in which the Wald might behave better. I am not an expert on these statistical approaches, but here's an example: Yanqing Yi and Xikui Wang (2011). Comparison of Wald, Score, and Likelihood Ratio Tests for Response Adaptive Designs, Journal of Statistical Theory and Applications, 10(4): 553-569 Hope that helps, Brenden
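To see the two flavours side by side outside SPSS, here is an R sketch with a hypothetical model and data frame (y binary, x1 and x2 are made-up predictors): summary() reports Wald z tests, while drop1() and anova() give likelihood-ratio tests for the same terms.

fit <- glm(y ~ x1 + x2, family = binomial, data = dat)
summary(fit)                      # Wald z tests for each coefficient
drop1(fit, test = "LRT")          # likelihood-ratio test for dropping each term
fit0 <- update(fit, . ~ . - x2)   # or an explicit nested comparison
anova(fit0, fit, test = "Chisq")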
Wald vs. LR $\chi^2$ tests in SPSS generalized linear models
It's hard to be definitive without knowing all the details of your model (such as sample size), but I would remark that the likelihood ratio, Wald, and score estimators are only asymptotically equival
Wald vs. LR $\chi^2$ tests in SPSS generalized linear models It's hard to be definitive without knowing all the details of your model (such as sample size), but I would remark that the likelihood ratio, Wald, and score estimators are only asymptotically equivalent. That is, they agree as N $\rightarrow$ $\infty$. The Wald estimator is generally considered to be the least reliable in terms of Type I and Type II error. In GLM applications, the likelihood ratio is to be preferred. Additionally, the Wald is not always consistent under transformations. Numerous studies abound that confirm these features, and I have even run simulations of my own using ecological data. Regarding the limitations of the Wald test, see, for example: Fears, Thomas R.; Benichou, Jacques; and Gail, Mitchell H. (1996); A reminder of the fallibility of the Wald statistic, The American Statistician 50:226–7 However, there are situations in which the Wald might behave better. I am not an expert on these statistical approaches, but here's an example: Yanqing Yi and Xikui Wang (2011). Comparison of Wald, Score, and Likelihood Ratio Tests for Response Adaptive Designs, Journal of Statistical Theory and Applications, 10(4): 553-569 Hope that helps, Brenden
Wald vs. LR $\chi^2$ tests in SPSS generalized linear models It's hard to be definitive without knowing all the details of your model (such as sample size), but I would remark that the likelihood ratio, Wald, and score estimators are only asymptotically equival
50,170
Follow-up updates in case-cohort designs
From your question, I gather you have two concerns: What happens to the control population when new cases are observed in a case-cohort study? What happens when not all covariates measured in some case or control subjects can be feasibly measured in all case or control subjects? First, a very brief overview of case-cohort study design: In a cohort study, the cohort population estimates the exposure distribution of the source population (e.g. all patients at risk for myocardial infarction in the USA, or all patients at risk for myocardial infarction who would be admitted to the coronary care unit at the Mayo Clinic if they experienced an infarction). In a nested case-control study, the control population is sub-sampled from the cohort population and can be thought of as a more efficient (in terms of research resources) estimate of the exposure distribution of the source population. A case-cohort study is a special design of the nested case-control study where the control population is a random sample of the cohort population at time $t_0$, and thus sampling is independent of an individual's contributed person-time or outcome (e.g. disease) status. Since the case-cohort study's control population includes all subjects at risk of the outcome at the beginning of follow-up, the odds-ratios from the case-cohort study estimate the risk-ratios from a cohort study. Regarding re-sampling of the control population as new cases are observerd Individual membership in the control population of a case-cohort study does not change during study follow-up. Any individual in the control population who develops the outcome at time $t$ remains a member of the control population, while also becoming a member of the case population. Thus, if at time $t_1$ $A_{t=t_1}$ individuals in the cohort have developed the outcome, the odds ratio estimating the risk of developing outcome given exposure over $t_1$ time is $OR_{t=t_1}$. If at time $t_2$ $A_{t=t_2}$ individuals in the cohort have developed the outcome, the odds ratio over $t_2$ time is $OR_{t=t_2}$. There is no need to re-sample the control population. For more see Kupper LL et al. A Hybrid Epidemiologic Study Design Useful in Estimating Relative Risk. J Am Stat Assoc. Vol. 70, No. 351, Sep., 1975. Regarding the inability to measure all covariates of the new cases and controls A variant of case-control studies is the two-stage or two-phase sampling design. My understanding of two-phase sampling is limited, but I cautiously offer the following brief summary: In a two-phase case-control study, inexpensive covariates are measured for all subjects included in the control and case populations. Expensive covariates are measured only on a sub-sample of control and case populations. Analytical methods attempt to treat inexpensive covariates as surrogates for expensive covariates, which can allow statistically-efficient estimates of associations between expensive covariates and outcome. Here are a few references, of which I have only glanced over: McNamee R. Optimal design and efficiency of two-phase case-control studies with error-prone and error-free exposure measures. Biostatistics. 2005 Oct;6(4):590-603. Epub 2005 Apr 28. Breslow NE, et al. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology. Stat Biosci. 2009 May 1;1(1):32. Kulich M and DY Lin. Improving the Efficiency of Relative-Risk Estimation in Case-Cohort Studies. J Am Stat Assoc. 2004 September Vol. 99, No. 
99, No. 467. The twophase() function from the R survey package implements two-phase analysis methods. A short vignette on two-phase analyses is available at the survey webpage on CRAN. If you have not already read it, an excellent reference for study design and other concerns in epidemiology is Rothman, Greenland, and Lash's Modern Epidemiology.
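A hedged sketch of a case-cohort analysis with twophase() (the data frame cc and its columns id, in.subcohort, case, time, and x are hypothetical; phase 2 is taken as the union of the subcohort and the cases, stratified on case status):

library(survey)
design <- twophase(id = list(~id, ~id), strata = list(NULL, ~case),
                   subset = ~I(in.subcohort | case == 1), data = cc)
svycoxph(Surv(time, case) ~ x, design = design)   # hazard-ratio estimates with design-based SEs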
Follow-up updates in case-cohort designs
From your question, I gather you have two concerns: What happens to the control population when new cases are observed in a case-cohort study? What happens when not all covariates measured in some ca
Follow-up updates in case-cohort designs From your question, I gather you have two concerns: What happens to the control population when new cases are observed in a case-cohort study? What happens when not all covariates measured in some case or control subjects can be feasibly measured in all case or control subjects? First, a very brief overview of case-cohort study design: In a cohort study, the cohort population estimates the exposure distribution of the source population (e.g. all patients at risk for myocardial infarction in the USA, or all patients at risk for myocardial infarction who would be admitted to the coronary care unit at the Mayo Clinic if they experienced an infarction). In a nested case-control study, the control population is sub-sampled from the cohort population and can be thought of as a more efficient (in terms of research resources) estimate of the exposure distribution of the source population. A case-cohort study is a special design of the nested case-control study where the control population is a random sample of the cohort population at time $t_0$, and thus sampling is independent of an individual's contributed person-time or outcome (e.g. disease) status. Since the case-cohort study's control population includes all subjects at risk of the outcome at the beginning of follow-up, the odds-ratios from the case-cohort study estimate the risk-ratios from a cohort study. Regarding re-sampling of the control population as new cases are observerd Individual membership in the control population of a case-cohort study does not change during study follow-up. Any individual in the control population who develops the outcome at time $t$ remains a member of the control population, while also becoming a member of the case population. Thus, if at time $t_1$ $A_{t=t_1}$ individuals in the cohort have developed the outcome, the odds ratio estimating the risk of developing outcome given exposure over $t_1$ time is $OR_{t=t_1}$. If at time $t_2$ $A_{t=t_2}$ individuals in the cohort have developed the outcome, the odds ratio over $t_2$ time is $OR_{t=t_2}$. There is no need to re-sample the control population. For more see Kupper LL et al. A Hybrid Epidemiologic Study Design Useful in Estimating Relative Risk. J Am Stat Assoc. Vol. 70, No. 351, Sep., 1975. Regarding the inability to measure all covariates of the new cases and controls A variant of case-control studies is the two-stage or two-phase sampling design. My understanding of two-phase sampling is limited, but I cautiously offer the following brief summary: In a two-phase case-control study, inexpensive covariates are measured for all subjects included in the control and case populations. Expensive covariates are measured only on a sub-sample of control and case populations. Analytical methods attempt to treat inexpensive covariates as surrogates for expensive covariates, which can allow statistically-efficient estimates of associations between expensive covariates and outcome. Here are a few references, of which I have only glanced over: McNamee R. Optimal design and efficiency of two-phase case-control studies with error-prone and error-free exposure measures. Biostatistics. 2005 Oct;6(4):590-603. Epub 2005 Apr 28. Breslow NE, et al. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology. Stat Biosci. 2009 May 1;1(1):32. Kulich M and DY Lin. Improving the Efficiency of Relative-Risk Estimation in Case-Cohort Studies. J Am Stat Assoc. 2004 September Vol. 
99, No. 467. The twophase() function from the R survey package implements two-phase analysis methods. A short vignette on two-phase analyses is available at the survey webpage on CRAN. If you have not already read it, an excellent reference for study design and other concerns in epidemiology is Rothman, Greenland, and Lash's Modern Epidemiology.
Follow-up updates in case-cohort designs From your question, I gather you have two concerns: What happens to the control population when new cases are observed in a case-cohort study? What happens when not all covariates measured in some ca
50,171
BIC or AIC to determine the optimal number of clusters in a scale-free graph?
AIC and BIC are used for constraining the complexity of a range of models competing to explain the same data. Following on the heels of results in complexity theory and machine learning, there is reason to believe that in some scenarios, using these proxies for complexity gives useful info about the model that explains the data best while having the smallest generalization error. In this case, you would first need to propose some model that produces the graph spectra you are seeing. You could postulate some different components that produce the graph you see, such as a mixture model. For example, if you were working with the common telephone data set from Belgium (where there is a trivial cut to make between those who speak French and those who speak Dutch), you would need to postulate the mechanism (language preference correlated to region or something) that actually causes the two different segments of the graph. Then, once you have a model, you would use AIC or BIC as a means for optimizing the parameter choices for fitting that model with your observed graph data. If the number of different causal clusters is something you include in the model, then it's something your optimization routine will spit back out under AIC or BIC constraints. But it's not really the same thing as normalized cut, which does not exactly model anything about the graph. Normalized cut (and other spectral segmentation methods) proposes a cost function that may or may not be related to anything about your graph, and then makes cuts so as to minimize the cost. If the cost functional used by the segmentation method does not meaningfully correspond to aspects of the data generating process that gives rise to your graph, then the segmentation performance can be arbitrarily bad. As you say, this is a reason why one-size-fits-all graph cutting procedures must be used delicately, with careful consideration of what the cost functional means for your data. I have a few sources for this, and I'll come back and document it later this evening.
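One concrete way to act on this: if you are willing to model some numeric representation of the nodes (say, a spectral embedding of the graph) as a finite mixture, the mclust package fits Gaussian mixtures over a range of component counts and picks the number by BIC. A sketch, where X is a hypothetical n-by-d matrix of node coordinates:

library(mclust)
fit <- Mclust(X, G = 1:15)     # fits mixtures with 1..15 components and compares them by BIC
fit$G                          # BIC-selected number of clusters
plot(fit, what = "BIC")        # how BIC varies with the number of components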
BIC or AIC to determine the optimal number of clusters in a scale-free graph?
AIC and BIC are used for constraining the complexity of a range of models competing to explain the same data. Following on the heels of results in complexity theory and machine learning, there is reas
BIC or AIC to determine the optimal number of clusters in a scale-free graph? AIC and BIC are used for constraining the complexity of a range of models competing to explain the same data. Following on the heels of results in complexity theory and machine learning, there is reason to believe that in some scenarios, using these proxies for complexity gives useful info about the model that explains the data best while having the smallest generalization error. In this case, you would first need to propose some model that produces the graph spectra you are seeing. You could postulate some different components that produce the graph you see, such as a mixture model. For example, if you were working with the common telephone data set from Belgium (where there is a trivial cut to make between those who speak French and those who speak Dutch), you would need to postulate the mechanism (language preference correlated to region or something) that actually causes the two different segments of the graph. Then, once you have a model, you would use AIC or BIC as a means for optimizing the parameter choices for fitting that model with your observed graph data. If the number of different causal clusters is something you include in the model, then it's something your optimization routine will spit back out under AIC or BIC constraints. But it's not really the same thing as normalized cut, which does not exactly model anything about the graph. Normalized cut (and other spectral segmentation methods) proposes a cost function that may or may not be related to anything about your graph, and then makes cuts so as to minimize the cost. If the cost functional used by the segmentation method does not meaningfully correspond to aspects of the data generating process that gives rise to your graph, then the segmentation performance can be arbitrarily bad. As you say, this is a reason why one-size-fits-all graph cutting procedures must be used delicately, with careful consideration of what the cost functional means for your data. I have a few sources for this, and I'll come back and document it later this evening.
BIC or AIC to determine the optimal number of clusters in a scale-free graph? AIC and BIC are used for constraining the complexity of a range of models competing to explain the same data. Following on the heels of results in complexity theory and machine learning, there is reas
50,172
Minimum sample size to achieve a specific confidence level and interval
This may or may not be helpful for you, but Díaz-Emparanza (1996, 2000) has published on how many replications a Monte Carlo study needs so that a confidence interval for a parameter achieves a certain width. Here are the two papers, which can both be accessed at http://ideas.repec.org/e/pda47.html Ignacio Díaz-Emparanza, 2000. "Is a small Monte Carlo analysis a good analysis? Checking the size, power and consistency of a simulation-based test," Econometrics 0004005, EconWPA. Ignacio Díaz-Emparanza, 1996. "Selecting the Number of Replications in a Simulation Study," Econometrics 9612006, EconWPA.
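In the same spirit, a quick back-of-the-envelope rule: if a pilot run gives an estimate s of the standard deviation of the quantity you are simulating, the number of replications needed for a 95% confidence interval of half-width delta is roughly (1.96 s / delta)^2. A sketch (my_sim() is a stand-in for your own simulation and delta is an arbitrary choice):

pilot <- replicate(200, my_sim())               # small pilot run
s     <- sd(pilot)
delta <- 0.01                                   # desired half-width of the 95% CI
R     <- ceiling((qnorm(0.975) * s / delta)^2)  # replications needed
R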
Minimum sample size to achieve a specific confidence level and interval
this may or may not be helpful for you, but Diaz-Emperanza (1996, 2000) has published on how many replications a Monte Carlo study has so that a certain width of a confidence interval for a parameter
Minimum sample size to achieve a specific confidence level and interval This may or may not be helpful for you, but Díaz-Emparanza (1996, 2000) has published on how many replications a Monte Carlo study needs so that a confidence interval for a parameter achieves a certain width. Here are the two papers, which can both be accessed at http://ideas.repec.org/e/pda47.html Ignacio Díaz-Emparanza, 2000. "Is a small Monte Carlo analysis a good analysis? Checking the size, power and consistency of a simulation-based test," Econometrics 0004005, EconWPA. Ignacio Díaz-Emparanza, 1996. "Selecting the Number of Replications in a Simulation Study," Econometrics 9612006, EconWPA.
Minimum sample size to achieve a specific confidence level and interval this may or may not be helpful for you, but Diaz-Emperanza (1996, 2000) has published on how many replications a Monte Carlo study has so that a certain width of a confidence interval for a parameter
50,173
Time series analysis on login data to forecast CPU demand using R
Exponential smoothing is just a special case of an ARIMA model. If there is a benefit to fitting a general ARIMA model it is because of its generality and not that it handles gaps in the data any better than exponential smoothing. I don't see any reason for throwing out February 29th. Individual dates would not have any appreciable effect on seasonality if there is some periodic component to the series. The time unit for time series analysis can be whatever time unit you measure the data in (it could be days, weeks, or years). You can aggregate data to create longer time intervals for the time series model. The fact that a date in one year falls on a different day of the week than in another year has nothing to do with its utility. If there are weekly effects, these can show up in a 7-day periodic component. Gaps in the data do hurt your ability to fit the model. But if a single ARIMA model would have fit well to the complete series, you probably can identify it by piecing together the available portions of the series, keeping count through the time index of the number of days missing at each gap. I don't understand why you can't have days as the time units. Is there a problem with having a long series? It seems to me that the time unit only affects the number of time points in the series.
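One practical way to keep a single daily model despite the gaps is to retain every calendar day in the index and leave the unobserved days as NA; arima() in R handles missing values via the Kalman filter. A sketch (dat with columns date and logins is hypothetical, and the ARIMA orders are placeholders, not a recommendation):

all_days <- seq(min(dat$date), max(dat$date), by = "day")
y <- dat$logins[match(all_days, dat$date)]      # NA wherever a day is missing
fit <- arima(ts(y, frequency = 7), order = c(1, 0, 1), seasonal = c(0, 1, 1))
fit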
Time series analysis on login data to forecast CPU demand using R
Exponential smoothing is just a special case of an ARIMA model. If there is a benefit to fitting a general ARIMA model it is because of its generality and not that it handles gaps in the data any bet
Time series analysis on login data to forecast CPU demand using R Exponential smoothing is just a special case of an ARIMA model. If there is a benefit to fitting a general ARIMA model it is because of its generality and not that it handles gaps in the data any better than exponetial smoothing. I don't see any reason for throwing out February 29th. Individual dates would not have any appriciable effect on seasonality if there is some periodic component to the series. The time unit for time series analysis can be whatever time unit you measure the data in (it could be days weeks or years). You can cumulate data to create longer time intervals for the time series model. The fact that a date in one year falls on a different day of the week than in another year has nothing to do with its utility. If there are weekly effects this can show up in a 7 day periodic component. Gaps in the data does hurt your ability to fit the model. But if a single ARIMA model would have fit well to the complete series you probably can identify it piecing together the available portions of the series keeping count through the time index of the number of days missing at each gap. I don't understand why you can't have days as the time units. Is there a problem with having a long series because it seems to me that the time unit only affects the number of time points im the series?
Time series analysis on login data to forecast CPU demand using R Exponential smoothing is just a special case of an ARIMA model. If there is a benefit to fitting a general ARIMA model it is because of its generality and not that it handles gaps in the data any bet
50,174
Time series analysis on login data to forecast CPU demand using R
First of all, you should check out the auto.arima and ets functions in the forecast package. Secondly, you should consider which frequency is most appropriate for the data. Do you really expect that the logins on 11/1/2011 will be directly related to the logins on 11/1/2001, 11/1/2002...11/1/2010? Make some seasonplots of your data at different frequencies, and see if you can identify what seems to be the most "natural." I would suggest trying frequency=7, as it makes intuitive sense to me that the number of logins on a Sunday will be directly related to the number of logins last Sunday, and the Sunday before that. Thirdly, if you use an arima model, you can use the xreg term to pass dummy variables to represent other seasonal periods, such as day of the month, day of the year, month of the year, and holidays. The holidayNYSE function in timeDate is useful for finding holidays. Fourthly, if you start fitting any neural networks, make sure you cross-validate them! Cross-validating time series is difficult, but here's some R code to help you out. Finally, before you go too crazy with the analysis, consider your overall goal. For example, if your single goal is to accurately forecast peak CPU demand 6 months in advance, perhaps you should aggregate all your data to the monthly level before making your forecasts. You could fit an arima model to CPU demand, and use logins and registrations as your external regressors.
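A sketch of that recipe (the logins vector and date column in dat are hypothetical, and the month dummies are just one example of an xreg; future_xreg would be built the same way for the forecast horizon):

library(forecast)
y <- ts(dat$logins, frequency = 7)                      # daily data, weekly seasonality
seasonplot(y)                                           # eyeball the day-of-week pattern
xreg <- model.matrix(~ factor(months(dat$date)))[, -1]  # month-of-year dummies
fit  <- auto.arima(y, xreg = xreg)
fc   <- forecast(fit, xreg = future_xreg)               # future_xreg: dummies for the horizon
plot(fc)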
Time series analysis on login data to forecast CPU demand using R
First of all, you should check out the auto.arima and ets functions in the forecast package. Secondly, you should consider which frequency is most appropriate for the data. Do you really expect that
Time series analysis on login data to forecast CPU demand using R First of all, you should check out the auto.arima and ets functions in the forecast package. Secondly, you should consider which frequency is most appropriate for the data. Do you really expect that the logins on 11/1/2011 will be directly related to the logins on 11/1/2001, 11/1/2002...11/1/2010? Make some seasonplots of your data at different frequencies, and see if you can identify what seems to be the most "natural." I would suggest trying frequency=7, as it makes intuitive sense to me that the number of logins on a Sunday will be directly related to the number of logins last Sunday, and the Sunday before that. Thirdly, if you use an arima model, you can use the xreg term to pass dummy variables to represent other seasonal periods, such as day of the month, day of the year, month of the year, and holidays. The holidayNYSE function in timeDate is useful for finding holidays. Fourthly, if you start fitting any neural networks, make sure you cross-validate them! Cross-validating time series is difficult, but here's some R code to help you out. Finally, before you go too crazy with the analysis, consider your overall goal. For example, if your single goal is to accurately forecast peak CPU demand 6 months in advance, perhaps you should aggregate all your data to the monthly level before making your forecasts. You could fit an arima model to CPU demand, and use logins and registrations as your external regressors.
Time series analysis on login data to forecast CPU demand using R First of all, you should check out the auto.arima and ets functions in the forecast package. Secondly, you should consider which frequency is most appropriate for the data. Do you really expect that
50,175
Confidence interval for the number of trials before you've observed each outcome in the sample space?
What you want is more of a prediction interval than a confidence interval (you are not estimating a population parameter). Here is a simulation approach:

tmpfun <- function() {
    book <- numeric(200)          # how many times each of the 200 outcomes has been seen
    while (any(book == 0)) {      # keep sampling until every outcome has appeared at least once
        tmp <- sample(200, 1)     # draw one outcome at random
        book[tmp] <- book[tmp] + 1
    }
    return(sum(book))             # total number of trials needed
}

> out <- replicate(100000, tmpfun())
> mean(out)
[1] 1175.593
> var(out)
[1] 64697.3
> quantile(out, c(0.025, 0.975))
 2.5% 97.5% 
  806  1792
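For this particular case the simulated mean also has a simple analytic cross-check: this is the coupon-collector problem, so the expected number of trials to see all 200 equally likely outcomes is 200*(1 + 1/2 + ... + 1/200).

sum(200 / (1:200))   # about 1175.6, matching the simulated mean above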
Confidence interval for the number of trials before you've observed each outcome in the sample space
What you want is more of a prediction interval than a confidence interval (you are not estimating a population parameter). Here is a simulation approach: tmpfun <- function() { book <- numeric(200
Confidence interval for the number of trials before you've observed each outcome in the sample space? What you want is more of a prediction interval than a confidence interval (you are not estimating a population parameter). Here is a simulation approach: tmpfun <- function() { book <- numeric(200) while( any(book==0) ) { tmp <- sample(200,1) book[tmp] <- book[tmp] + 1 } return(sum(book)) } > out <- replicate( 100000, tmpfun() ) > mean(out) [1] 1175.593 > var(out) [1] 64697.3 > quantile(out, c(0.025, 0.975) ) 2.5% 97.5% 806 1792
Confidence interval for the number of trials before you've observed each outcome in the sample space What you want is more of a prediction interval than a confidence interval (you are not estimating a population parameter). Here is a simulation approach: tmpfun <- function() { book <- numeric(200
50,176
dummy variables with overlapping categories?
Question: Can Dummy variables have overlapping categories? Answer: No. Explanation: Dummy variables arise when you try to recode Categorical variables with more than two categories into a series of binary variables. Since these categories partition your dataset (i.e. each observation can be assigned to one and only one of these 'k' categories), there is no way that there can be any "overlapping". Now, with respect to the actual example you provide, there are two issues you should be aware of since they probably would otherwise screw up your analysis entirely: The binary variables which you describe are based, more or less, on arbitrary distinctions (for instance, would astroturf--more or less a rug covering concrete--really qualify as "soft" ground?). There's a good chance your model (as described in the OP) suffers from Multicollinearity (that is, that a linear combination of two or more of your independent variables are highly correlated). Just something you should keep in mind the next time you run a regression... Anyway, hope this helps.
dummy variables with overlapping categories?
Question: Can Dummy variables have overlapping categories? Answer: No. Explanation: Dummy variables arise when you try to recode Categorical variables with more than two categories into a series of bi
dummy variables with overlapping categories? Question: Can Dummy variables have overlapping categories? Answer: No. Explanation: Dummy variables arise when you try to recode Categorical variables with more than two categories into a series of binary variables. Since these categories partition your dataset (i.e. each observation can be assigned to one and only one of these 'k' categories), there is no way that there can be any "overlapping". Now, with respect to the actual example you provide, there are two issues you should be aware of since they probably would otherwise screw up your analysis entirely: The binary variables which you describe are based, more or less, on arbitrary distinctions (for instance, would astroturf--more or less a rug covering concrete--really qualify as "soft" ground?). There's a good chance your model (as described in the OP) suffers from Multicollinearity (that is, that a linear combination of two or more of your independent variables are highly correlated). Just something you should keep in mind the next time you run a regression... Anyway, hope this helps.
dummy variables with overlapping categories? Question: Can Dummy variables have overlapping categories? Answer: No. Explanation: Dummy variables arise when you try to recode Categorical variables with more than two categories into a series of bi
50,177
dummy variables with overlapping categories?
As others pointed out, dummy variables cannot be overlapping. What you have is several categorical variables, each of which will generate its own dummy variables. But you are worried (justifiably) about collinearity. There are several things you can do here. 1) If you are using R you can check the extent of collinearity with the perturb package, which deals nicely with categorical variables. 2) If there is collinearity, you could investigate ridge regression, a common method when the IVs are collinear. I don't think I've seen it used with exclusively categorical variables, so you might want to check if that is possible with ridge regression, but I don't see why not. You could start here and here. 3) You should also look at the sports you have and how they line up with regard to the "facets" that you think are theoretically important. Unless you have an amazing number of sports, you are probably going to have a lot of blank combinations. 4) Some of your "categories" don't seem like categorical variables to me. Is the ball "hard"? That's not yes or no. You could come up (fairly easily) with a measure of the hardness of the ball. Similarly for the ground. These should be continuous. I mean, it goes from water polo (what's softer than water?) to ice hockey (not much is harder than ice) with (say) volleyball in between (but beach volleyball at a different spot).
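A quick sketch of points 1) and 2), with a hypothetical sports data frame containing an outcome y and categorical facets ball, ground, and contact; I use car::vif here rather than perturb simply as an alternative check, not as a claim about perturb's interface.

library(car)
vif(lm(y ~ ball + ground + contact, data = sports))    # generalized VIFs flag collinear facets

library(glmnet)
X <- model.matrix(~ ball + ground + contact, data = sports)[, -1]  # dummy-coded design matrix
ridge <- cv.glmnet(X, sports$y, alpha = 0)             # alpha = 0 gives the ridge penalty
coef(ridge, s = "lambda.min")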
dummy variables with overlapping categories?
As others pointed out, dummy variables cannot be overlapping. What you have is several categorical variables, each of which will generate its own dummy variables. But you are worried (justifiably) abo
dummy variables with overlapping categories? As others pointed out, dummy variables cannot be overlapping. What you have is several categorical variables, each of which will generate its own dummy variables. But you are worried (justifiably) about collinearity. There are several things you can do here. 1) If you are using R you can check the extent of collinearity with the perturb package which deals nicely with categorical variables. 2) If there is collinearity, you could investigate ridge regression, a common method when the IVs are colinear. I don't think I've seen it used with exclusively categorical variables so you might want to check if that is possible with ridge regression, but I don't see why not. You could start here and here. 3) You should also look at the sports you have and how they line up with regard to the "facets" that you think are theoretically important. Unless you have an amazing number of sports, you are probably going to have a lot of blank combinations. 4) Some of your "categories" don't seem like categorical variables to me. Is the ball "hard"? That's not yes or no. You could come (fairly easily) with a measure of the hardness of the ball. Similarly for the ground. These should be continuous. I mean, it goes from water polo (what's softer than water?) to ice hockey (not much is harder than ice) with (say) volleyball in between (but beach volleyball at a different spot).
dummy variables with overlapping categories? As others pointed out, dummy variables cannot be overlapping. What you have is several categorical variables, each of which will generate its own dummy variables. But you are worried (justifiably) abo
50,178
Comparing rates of binomial outcome response in large datasets
I think the answer is very simple. The large sample size is a blessing. Don't be upset with it. The standard errors are realistic. The problem you have is that you are thinking of a traditional null hypothesis that the difference is exactly 0 and the alternative that it is different from 0. But you are not stuck with that null hypothesis! Say that only a difference greater than 5% is clinically meaningful. Then the null hypothesis is |p1-p2| <= 0.05 versus the alternative that it is greater than 0.05. I took 0.05 hypothetically. Pick any delta that you consider to be clinically meaningful. You are blessed with enough data to reach a meaningful conclusion!
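A sketch of how that plays out with a confidence interval (x1, n1, x2, n2 are hypothetical counts; the CI-based rule below is the two-one-sided-tests logic in disguise):

ci <- prop.test(c(x1, x2), c(n1, n2))$conf.int    # 95% CI for p1 - p2
ci
# declare a clinically meaningful difference only if the whole CI lies outside (-0.05, 0.05):
all(ci > 0.05) || all(ci < -0.05)
# conversely, if the whole 90% CI lies inside (-0.05, 0.05), the rates are equivalent in practice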
Comparing rates of binomial outcome response in large datasets
I think the answer is very simple. The large sample size is a blessing. Don't be upset with it. The standard errors are realistic. The problem you have is that you are think of a traditional null
Comparing rates of binomial outcome response in large datasets I think the answer is very simple. The large sample size is a blessing. Don't be upset with it. The standard errors are realistic. The problem you have is that you are thinking of a traditional null hypothesis that the difference is exactly 0 and the alternative that it is different from 0. But you are not stuck with that null
Comparing rates of binomial outcome response in large datasets I think the answer is very simple. The large sample size is a blessing. Don't be upset with it. The standard errors are realistic. The problem you have is that you are think of a traditional null
50,179
Random generation of n-dimensional data with possibly correlated variables
One approach to generating multivariate random data with correlations is to use a copula. Basically you generate n-dimensional data with uniform margins and a correlation structure, then transform the data to the marginal distribution of interest (binary variables can be generated by simply seeing if the value is greater than a cut-off). This does not guarantee an exact correlation among the transformed variables, but it does give a general correlation structure and can be used with any distribution with an inverse CDF. This could be used to generate a dataset with a mixture of binary variables and continuous variables from other distributions (or other discrete variables as well). You can simulate several datasets and check the level of correlation, and if it is not close to what you were hoping for, go back and adjust the copula accordingly and try again.
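A minimal Gaussian-copula sketch done by hand (the target correlation matrix and the margins below are only illustrative):

library(MASS)
Sigma <- matrix(c(1.0, 0.5, 0.3,
                  0.5, 1.0, 0.4,
                  0.3, 0.4, 1.0), 3, 3)
Z <- mvrnorm(1000, mu = rep(0, 3), Sigma = Sigma)   # correlated normals
U <- pnorm(Z)                                       # correlated uniform margins
x1 <- qexp(U[, 1], rate = 2)                        # continuous margin
x2 <- qpois(U[, 2], lambda = 3)                     # discrete margin
x3 <- as.integer(U[, 3] > 0.7)                      # binary margin (cut-off at 0.7)
cor(cbind(x1, x2, x3))                              # check, then adjust Sigma and repeat if needed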
Random generation of n-dimensional data with possibly correlated variables
One approach to generating multivariate random data with correlations is to use a copula. Basically you generate n-dimensional data with uniform margins and a correlation structure, then transform th
Random generation of n-dimensional data with possibly correlated variables One approach to generating multivariate random data with correlations is to use a copula. Basically you generate n-dimensional data with uniform margins and a correlation structure, then transform the data to the marginal distribution of interest (binary variables can be generated by simply seeing if the value is greater than a cut-off). This does not guarentee an exact correlation with the transformed variables, but does give a general correlation structure and can be used with any distribution with an inverse CDF. This could be used to generate a dataset with a mixture of binary variables and continuous variables from other distributions (or other discrete variables as well). You can simulate several datasets and check the level of correlation and if it is not close to what you were hoping for, go back and adjust the copula accordingly and try again.
Random generation of n-dimensional data with possibly correlated variables One approach to generating multivariate random data with correlations is to use a copula. Basically you generate n-dimensional data with uniform margins and a correlation structure, then transform th
50,180
Tests for normed vector
You could transform your data points to spherical coordinates so that you get $p-1$ angles. Your null hypothesis is equivalent to the fact that those angles are independent and uniformly distributed. So you can do a goodness-of-fit test. Now there is the complication that you have a $p-1$-dimensional distribution. This paper shows a multidimensional version of the Smirnov statistic that may come in handy. Otherwise, another idea that comes to my mind would be to scale the angles to $(0,1)$, apply the inverse erf function and do a test for multivariate normality. Erratum: it is not true that the angles will be uniform as shown in this answer. One angle is uniform between $0$ and $2\pi$; in three dimensions the cosine of the remaining (polar) angle is uniform between $-1$ and $1$, while in higher dimensions the remaining angles have densities proportional to powers of $\sin$, so they are not uniform either.
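In three dimensions, for instance, those two marginal facts give a quick check (x is assumed to be an n-by-3 matrix of unit-norm vectors; this only tests the marginals, not the full joint null):

ks.test(x[, 3], "punif", -1, 1)                     # cosine of the polar angle
ks.test(atan2(x[, 2], x[, 1]), "punif", -pi, pi)    # azimuthal angle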
Tests for normed vector
You could transform your data points to spherical coordinates so that you get $p-1$ angles. Your null hypothesis is equivalent to the fact that those angles are independent and uniformly distributed.
Tests for normed vector You could transform your data points to spherical coordinates so that you get $p-1$ angles. Your null hypothesis is equivalent to the fact that those angles are independent and uniformly distributed. So you can do a goodness-of-fit test. Now there is the complication that you have a $p-1$-dimensional distribution. This paper shows a multidimensional version of the Smirnov statistic that may come in handy. Otherwise, another idea that comes to my mind would be to scale the angles to $(0,1)$, apply the inverse erf function and do a test for multivariate normality. Erratum: it is not true that the angles will be uniform as shown in this answer. One angle is uniform between $0$ and $2\pi$; in three dimensions the cosine of the remaining (polar) angle is uniform between $-1$ and $1$, while in higher dimensions the remaining angles have densities proportional to powers of $\sin$, so they are not uniform either.
Tests for normed vector You could transform your data points to spherical coordinates so that you get $p-1$ angles. Your null hypothesis is equivalent to the fact that those angles are independent and uniformly distributed.
50,181
How to visualize both total counts of categories and proportions of subcategories in a plot?
This example of embedded/layered bar plots may represent one alternative. The three main categories are represented by individual bars, then embedded within are subcategory bars (created in ggplot2). Blog Link (Learning R)
How to visualize both total counts of categories and proportions of subcategories in a plot?
This example of embedded/layered bar plots may represent one alternative. The three main categories are represented by individual bars, then embedded within are subcategory bars (created in ggplot2).
How to visualize both total counts of categories and proportions of subcategories in a plot? This example of embedded/layered bar plots may represent one alternative. The three main categories are represented by individual bars, then embedded within are subcategory bars (created in ggplot2). Blog Link (Learning R)
How to visualize both total counts of categories and proportions of subcategories in a plot? This example of embedded/layered bar plots may represent one alternative. The three main categories are represented by individual bars, then embedded within are subcategory bars (created in ggplot2).
50,182
Is cross-validation an effective approach for feature/model selection for microarray data?
The answer really seems to be that cross-validation is not great because its results are extremely variable but it remains the best option available. The only other competitive approach seems to be the 0.632 bootstrap estimator which has slightly lower variance but also under-estimates the true performance. See Is cross-validation valid for small-sample microarray classification?. Also of relevance - (perhaps obvious) - the more features that are included, the higher the variance of the cv-estimates.
Is cross-validation an effective approach for feature/model selection for microarray data?
The answer really seems to be that cross-validation is not great because its results are extremely variable but it remains the best option available. The only other competitive approach seems to be t
Is cross-validation an effective approach for feature/model selection for microarray data? The answer really seems to be that cross-validation is not great because its results are extremely variable but it remains the best option available. The only other competitive approach seems to be the 0.632 bootstrap estimator which has slightly lower variance but also under-estimates the true performance. See Is cross-validation valid for small-sample microarray classification?. Also of relevance - (perhaps obvious) - the more features that are included, the higher the variance of the cv-estimates.
Is cross-validation an effective approach for feature/model selection for microarray data? The answer really seems to be that cross-validation is not great because its results are extremely variable but it remains the best option available. The only other competitive approach seems to be t
50,183
Is cross-validation an effective approach for feature/model selection for microarray data?
I think the problem may be that your training set is too small and therefore not representative of the entire population, and if you test it on even smaller test sets these data can be very different. This is more of a general large-p, small-n problem and pertains to that type of problem whether it is genetics or not. It has nothing to do with how well genes predict outcomes in breast cancer. In fact I think there are several biomarkers that are useful for estimating the probability of recurrence for patients who had the tumor completely removed.
Is cross-validation an effective approach for feature/model selection for microarray data?
I think the problem may be that your training set is too small and therefore not representative of the entire population and if you test it on even smaller tests sets these data can be very different.
Is cross-validation an effective approach for feature/model selection for microarray data? I think the problem may be that your training set is too small and therefore not representative of the entire population, and if you test it on even smaller test sets these data can be very different. This is more of a general large-p, small-n problem and pertains to that type of problem whether it is genetics or not. It has nothing to do with how well genes predict outcomes in breast cancer. In fact I think there are several biomarkers that are useful for estimating the probability of recurrence for patients who had the tumor completely removed.
Is cross-validation an effective approach for feature/model selection for microarray data? I think the problem may be that your training set is too small and therefore not representative of the entire population and if you test it on even smaller tests sets these data can be very different.
50,184
How to cluster LDA/LSI topics generated by gensim?
This is an example. You need copy matutils.py and utils.py from gensim first, and the directory should like the pic blow. The code blow should be in doc_similar.py. Then just move your data_file into directory data and change fname in function main. #coding:utf-8 from gensim import corpora, models, similarities import cPickle import logging import utils import os import numpy as np import scipy import matutils from collections import defaultdict data_dir = os.path.join(os.getcwd(), 'data') work_dir = os.path.join(os.getcwd(), 'model', os.path.basename(__file__).rstrip('.py')) if not os.path.exists(work_dir): os.mkdir(work_dir) os.chdir(work_dir) logger = logging.getLogger('text_similar') logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) # convert to unicode def to_unicode(text): if not isinstance(text, unicode): text = text.decode('utf-8') return text class TextSimilar(utils.SaveLoad): def __init__(self): self.conf = {} def _preprocess(self): docs = [to_unicode(doc.strip()).split()[1:] for doc in file(self.fname)] cPickle.dump(docs, open(self.conf['fname_docs'], 'wb')) dictionary = corpora.Dictionary(docs) dictionary.save(self.conf['fname_dict']) corpus = [dictionary.doc2bow(doc) for doc in docs] corpora.MmCorpus.serialize(self.conf['fname_corpus'], corpus) return docs, dictionary, corpus def _generate_conf(self): fname = self.fname[self.fname.rfind('/') + 1:] self.conf['fname_docs'] = '%s.docs' % fname self.conf['fname_dict'] = '%s.dict' % fname self.conf['fname_corpus'] = '%s.mm' % fname def train(self, fname, is_pre=True, method='lsi', **params): self.fname = fname self.method = method self._generate_conf() if is_pre: self.docs, self.dictionary, corpus = self._preprocess() else: self.docs = cPickle.load(open(self.conf['fname_docs'])) self.dictionary = corpora.Dictionary.load(self.conf['fname_dict']) corpus = corpora.MmCorpus(self.conf['fname_corpus']) if params is None: params = {} logger.info("training TF-IDF model") self.tfidf = models.TfidfModel(corpus, id2word=self.dictionary) corpus_tfidf = self.tfidf[corpus] if method == 'lsi': logger.info("training LSI model") self.lsi = models.LsiModel(corpus_tfidf, id2word=self.dictionary, **params) self.similar_index = similarities.MatrixSimilarity(self.lsi[corpus_tfidf]) self.para = self.lsi[corpus_tfidf] elif method == 'lda_tfidf': logger.info("training LDA model") self.lda = models.LdaMulticore(corpus_tfidf, id2word=self.dictionary, workers=8, **params) self.similar_index = similarities.MatrixSimilarity(self.lda[corpus_tfidf]) self.para = self.lda[corpus_tfidf] elif method == 'lda': logger.info("training LDA model") self.lda = models.LdaMulticore(corpus, id2word=self.dictionary, workers=8, **params) self.similar_index = similarities.MatrixSimilarity(self.lda[corpus]) self.para = self.lda[corpus] elif method == 'logentropy': logger.info("training a log-entropy model") self.logent = models.LogEntropyModel(corpus, id2word=self.dictionary) self.similar_index = similarities.MatrixSimilarity(self.logent[corpus]) self.para = self.logent[corpus] else: msg = "unknown semantic method %s" % method logger.error(msg) raise NotImplementedError(msg) def doc2vec(self, doc): bow = self.dictionary.doc2bow(to_unicode(doc).split()) if self.method == 'lsi': return self.lsi[self.tfidf[bow]] elif self.method == 'lda': return self.lda[bow] elif self.method == 'lda_tfidf': return self.lda[self.tfidf[bow]] elif self.method == 'logentropy': return self.logent[bow] def find_similar(self, doc, n=10): vec = self.doc2vec(doc) 
sims = self.similar_index[vec] sims = sorted(enumerate(sims), key=lambda item: -item[1]) for elem in sims[:n]: idx, value = elem print ' '.join(self.docs[idx]), value def get_vectors(self): return self._get_vector(self.para) def _get_vector(self, corpus): def get_max_id(): maxid = -1 for document in corpus: maxid = max(maxid, max([-1] + [fieldid for fieldid, _ in document])) # [-1] to avoid exceptions from max(empty) return maxid num_features = 1 + get_max_id() index = np.empty(shape=(len(corpus), num_features), dtype=np.float32) for docno, vector in enumerate(corpus): if docno % 1000 == 0: print("PROGRESS: at document #%i/%i" % (docno, len(corpus))) if isinstance(vector, np.ndarray): pass elif scipy.sparse.issparse(vector): vector = vector.toarray().flatten() else: vector = matutils.unitvec(matutils.sparse2full(vector, num_features)) index[docno] = vector return index def cluster(vectors, ts, k=30): from sklearn.cluster import k_means X = np.array(vectors) cluster_center, result, inertia = k_means(X.astype(np.float), n_clusters=k, init="k-means++") X_Y_dic = defaultdict(set) for i, pred_y in enumerate(result): X_Y_dic[pred_y].add(''.join(ts.docs[i])) print 'len(X_Y_dic): ', len(X_Y_dic) with open(data_dir + '/cluser.txt', 'w') as fo: for Y in X_Y_dic: fo.write(str(Y) + '\n') fo.write('{word}\n'.format(word='\n'.join(list(X_Y_dic[Y])[:100]))) def main(is_train=True): fname = data_dir + '/brand' num_topics = 100 method = 'lda' ts = TextSimilar() if is_train: ts.train(fname, method=method ,num_topics=num_topics, is_pre=True, iterations=100) ts.save(method) else: ts = TextSimilar().load(method) index = ts.get_vectors() cluster(index, ts, k=num_topics) if __name__ == '__main__': is_train = True if len(sys.argv) > 1 else False main(is_train)
How to cluster LDA/LSI topics generated by gensim?
This is an example. You need copy matutils.py and utils.py from gensim first, and the directory should like the pic blow. The code blow should be in doc_similar.py. Then just move your data_file into
How to cluster LDA/LSI topics generated by gensim? This is an example. You need copy matutils.py and utils.py from gensim first, and the directory should like the pic blow. The code blow should be in doc_similar.py. Then just move your data_file into directory data and change fname in function main. #coding:utf-8 from gensim import corpora, models, similarities import cPickle import logging import utils import os import numpy as np import scipy import matutils from collections import defaultdict data_dir = os.path.join(os.getcwd(), 'data') work_dir = os.path.join(os.getcwd(), 'model', os.path.basename(__file__).rstrip('.py')) if not os.path.exists(work_dir): os.mkdir(work_dir) os.chdir(work_dir) logger = logging.getLogger('text_similar') logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) # convert to unicode def to_unicode(text): if not isinstance(text, unicode): text = text.decode('utf-8') return text class TextSimilar(utils.SaveLoad): def __init__(self): self.conf = {} def _preprocess(self): docs = [to_unicode(doc.strip()).split()[1:] for doc in file(self.fname)] cPickle.dump(docs, open(self.conf['fname_docs'], 'wb')) dictionary = corpora.Dictionary(docs) dictionary.save(self.conf['fname_dict']) corpus = [dictionary.doc2bow(doc) for doc in docs] corpora.MmCorpus.serialize(self.conf['fname_corpus'], corpus) return docs, dictionary, corpus def _generate_conf(self): fname = self.fname[self.fname.rfind('/') + 1:] self.conf['fname_docs'] = '%s.docs' % fname self.conf['fname_dict'] = '%s.dict' % fname self.conf['fname_corpus'] = '%s.mm' % fname def train(self, fname, is_pre=True, method='lsi', **params): self.fname = fname self.method = method self._generate_conf() if is_pre: self.docs, self.dictionary, corpus = self._preprocess() else: self.docs = cPickle.load(open(self.conf['fname_docs'])) self.dictionary = corpora.Dictionary.load(self.conf['fname_dict']) corpus = corpora.MmCorpus(self.conf['fname_corpus']) if params is None: params = {} logger.info("training TF-IDF model") self.tfidf = models.TfidfModel(corpus, id2word=self.dictionary) corpus_tfidf = self.tfidf[corpus] if method == 'lsi': logger.info("training LSI model") self.lsi = models.LsiModel(corpus_tfidf, id2word=self.dictionary, **params) self.similar_index = similarities.MatrixSimilarity(self.lsi[corpus_tfidf]) self.para = self.lsi[corpus_tfidf] elif method == 'lda_tfidf': logger.info("training LDA model") self.lda = models.LdaMulticore(corpus_tfidf, id2word=self.dictionary, workers=8, **params) self.similar_index = similarities.MatrixSimilarity(self.lda[corpus_tfidf]) self.para = self.lda[corpus_tfidf] elif method == 'lda': logger.info("training LDA model") self.lda = models.LdaMulticore(corpus, id2word=self.dictionary, workers=8, **params) self.similar_index = similarities.MatrixSimilarity(self.lda[corpus]) self.para = self.lda[corpus] elif method == 'logentropy': logger.info("training a log-entropy model") self.logent = models.LogEntropyModel(corpus, id2word=self.dictionary) self.similar_index = similarities.MatrixSimilarity(self.logent[corpus]) self.para = self.logent[corpus] else: msg = "unknown semantic method %s" % method logger.error(msg) raise NotImplementedError(msg) def doc2vec(self, doc): bow = self.dictionary.doc2bow(to_unicode(doc).split()) if self.method == 'lsi': return self.lsi[self.tfidf[bow]] elif self.method == 'lda': return self.lda[bow] elif self.method == 'lda_tfidf': return self.lda[self.tfidf[bow]] elif self.method == 'logentropy': return self.logent[bow] def 
find_similar(self, doc, n=10): vec = self.doc2vec(doc) sims = self.similar_index[vec] sims = sorted(enumerate(sims), key=lambda item: -item[1]) for elem in sims[:n]: idx, value = elem print ' '.join(self.docs[idx]), value def get_vectors(self): return self._get_vector(self.para) def _get_vector(self, corpus): def get_max_id(): maxid = -1 for document in corpus: maxid = max(maxid, max([-1] + [fieldid for fieldid, _ in document])) # [-1] to avoid exceptions from max(empty) return maxid num_features = 1 + get_max_id() index = np.empty(shape=(len(corpus), num_features), dtype=np.float32) for docno, vector in enumerate(corpus): if docno % 1000 == 0: print("PROGRESS: at document #%i/%i" % (docno, len(corpus))) if isinstance(vector, np.ndarray): pass elif scipy.sparse.issparse(vector): vector = vector.toarray().flatten() else: vector = matutils.unitvec(matutils.sparse2full(vector, num_features)) index[docno] = vector return index def cluster(vectors, ts, k=30): from sklearn.cluster import k_means X = np.array(vectors) cluster_center, result, inertia = k_means(X.astype(np.float), n_clusters=k, init="k-means++") X_Y_dic = defaultdict(set) for i, pred_y in enumerate(result): X_Y_dic[pred_y].add(''.join(ts.docs[i])) print 'len(X_Y_dic): ', len(X_Y_dic) with open(data_dir + '/cluser.txt', 'w') as fo: for Y in X_Y_dic: fo.write(str(Y) + '\n') fo.write('{word}\n'.format(word='\n'.join(list(X_Y_dic[Y])[:100]))) def main(is_train=True): fname = data_dir + '/brand' num_topics = 100 method = 'lda' ts = TextSimilar() if is_train: ts.train(fname, method=method ,num_topics=num_topics, is_pre=True, iterations=100) ts.save(method) else: ts = TextSimilar().load(method) index = ts.get_vectors() cluster(index, ts, k=num_topics) if __name__ == '__main__': is_train = True if len(sys.argv) > 1 else False main(is_train)
How to cluster LDA/LSI topics generated by gensim? This is an example. You need copy matutils.py and utils.py from gensim first, and the directory should like the pic blow. The code blow should be in doc_similar.py. Then just move your data_file into
50,185
Minimisation algorithm for a mix of discrete and continuous parameters?
It sounds like you need a derivative-free method (since it seems you can't supply a gradient for your objective function). I guess your parameter space is not bounded, since you are currently using Nelder-Mead. I'm not sure what software you're using, but you should check out NOMAD, which is "freely distributed under the GNU Lesser General Public License." NOMAD handles nonlinear functions of integers and/or real variables. In R the crs package provides an interface, see this.
Minimisation algorithm for a mix of discreet and continuous parameters?
It sounds like you need a derivative free method (since it seems you can't supply a gradient for your objective function). I guess your parameter space is not bounded since you are currently using Ne
Minimisation algorithm for a mix of discreet and continuous parameters? It sounds like you need a derivative free method (since it seems you can't supply a gradient for your objective function). I guess your parameter space is not bounded since you are currently using Nelder-Mead. I'm not sure what software you're using but you should check out NOMAD which is "freely distributed under the GNU Lesser General Public License." NOMAD handles nonlinear functions of integers and/or real variables. In R the crs package provides an interface, see this.
Minimisation algorithm for a mix of discreet and continuous parameters? It sounds like you need a derivative free method (since it seems you can't supply a gradient for your objective function). I guess your parameter space is not bounded since you are currently using Ne
50,186
Visualising generalized linear model
From StatSoft, without links: Diagnostics in the generalized linear model. The two basic types of residuals are the so-called Pearson residuals and deviance residuals. Pearson residuals are based on the difference between observed responses and the predicted values; deviance residuals are based on the contribution of the observed responses to the log-likelihood statistic. In addition, leverage scores, studentized residuals, generalized Cook's D, and other observational statistics (statistics based on individual observations) can be computed. For a description and discussion of these statistics, see Hosmer and Lemeshow (1989). If you are using R: lrfit <- glm(cbind(using, notUsing) ~ age * noMore + hiEduc, family = binomial) summary(lrfit) plot(lrfit) Quoted from Germán Rodríguez: R follows the popular custom of flagging significant coefficients with one, two or three stars depending on their p-values. Try plot(lrfit). You get the same plots as in a linear model, but adapted to a generalized linear model; for example the residuals plotted are deviance residuals (the square root of the contribution of an observation to the deviance, with the same sign as the raw residual). The functions that can be used to extract results from the fit include: residuals or resid, for the deviance residuals; fitted or fitted.values, for the fitted values (estimated probabilities); predict, for the linear predictor (estimated logits); coef or coefficients, for the coefficients; and deviance, for the deviance. Some of these functions have optional arguments; for example, you can extract five different types of residuals, called "deviance", "pearson", "response" (response - fitted value), "working" (the working dependent variable in the IRLS algorithm - linear predictor), and "partial" (a matrix of working residuals formed by omitting each term in the model). You specify the one you want using the type argument, for example residuals(lrfit, type = "pearson"). Depending on your type of study, there might be corrections to apply.
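A minimal, self-contained R sketch of the extractors mentioned above, using simulated binary data rather than the contraceptive-use data quoted from Rodríguez (the variables x and y and the simulated sample are assumptions for illustration only):

set.seed(1)
n <- 200
x <- rnorm(n)
y <- rbinom(n, size = 1, prob = plogis(-0.5 + 1.2 * x))
fit <- glm(y ~ x, family = binomial)

head(residuals(fit, type = "deviance"))   # default GLM residuals
head(residuals(fit, type = "pearson"))    # Pearson residuals
par(mfrow = c(2, 2)); plot(fit); par(mfrow = c(1, 1))   # standard diagnostic panel, adapted to the GLM

# A common hand-made plot: deviance residuals against the linear predictor
plot(predict(fit), residuals(fit, type = "deviance"),
     xlab = "Linear predictor (logit)", ylab = "Deviance residual")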
Visualising generalized linear model
From StatSoft, without links: Diagnostics in the generalized linear model. The two basic types of residuals are the so-called Pearson residuals and deviance residuals. Pearson residuals are based on
Visualising generalized linear model From StatSoft, without links: Diagnostics in the generalized linear model. The two basic types of residuals are the so-called Pearson residuals and deviance residuals. Pearson residuals are based on the difference between observed responses and the predicted values; deviance residuals are based on the contribution of the observed responses to the log-likelihood statistic. In addition, leverage scores, studentized residuals, generalized Cook's D, and other observational statistics (statistics based on individual observations) can be computed. For a description and discussion of these statistics, see Hosmer and Lemeshow (1989). If you are using R: lrfit <- glm( cbind(using,notUsing) ~ age * noMore + hiEduc , family=binomial) summary(lrfit) plot(lrfit) Quoted from Germán Rodríguez: R follows the popular custom of flagging significant coefficients with one, two or three stars depending on their p-values. Try plot(lrfit). You get the same plots as in a linear model, but adapted to a generalized linear model; for example the residuals plotted are deviance residuals (the square root of the contribution of an observation to the deviance, with the same sign as the raw residual). The functions that can be used to extract results from the fit include residuals or resid, for the deviance residuals fitted or fitted.values, for the fitted values (estimated probabilities) predict, for the linear predictor (estimated logits) coef or coefficients, for the coefficients, and deviance, for the deviance. Some of these functions have optional arguments; for example, you can extract five different types of residuals, called "deviance", "pearson", "response" (response - fitted value), "working" (the working dependent variable in the IRLS algorithm - linear predictor), and "partial" (a matrix of working residuals formed by omitting each term in the model). You specify the one you want using the type argument, for example residuals(lrfit,type="pearson"). Depending on your type of study, there might be corrections to apply.
Visualising generalized linear model From StatSoft, without links: Diagnostics in the generalized linear model. The two basic types of residuals are the so-called Pearson residuals and deviance residuals. Pearson residuals are based on
50,187
SVM and non-linear predictive models - feature selection
I'd say that it is generally a bad idea to use feature selection for non-linear support vector machines. The reason for using a maximal margin classifier is that it is an approximate implementation of a theoretical performance bound that is independent of the dimensionality of the feature space in which it is constructed. This means that over-fitting can be controlled via structural risk minimisation by carefully selecting the regularisation parameter, $C$. HOWEVER, and it is a rather big "however", this bit of computational learning theory only applies to the construction of the maximal margin classifier in a feature space induced by a fixed kernel. Any attempt to tune the kernel function, e.g. by adapting the kernel parameters, or via feature selection will invalidate the bounds on performance and in doing so potentially circumvent the structural risk minimisation principle. The other reason why it is generally a bad idea is that feature selection is difficult. If you perform feature selection by minimising some statistic computed on a finite sample of data, it is all too easy to over-fit the selection criterion, and end up with a worse model than you started with. This is especially true where there are many degrees of freedom in the optimisation problem (e.g. one binary parameter for the inclusion of each feature). The use of regularisation instead gives one (continuous) degree of freedom, which makes over-fitting more difficult. I have been performing some experiments on feature selection for non-linear kernel machines, and the basic message is that in general efforts at feature selection will result in lower generalisation performance. It helps on some datasets (sometimes it helps a lot), but usually it makes things worse (sometimes much worse).
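As a hedged illustration of relying on the regularisation parameter rather than feature selection, here is a sketch using the e1071 package; the simulated data, the cost grid, and the radial kernel are assumptions, not part of the original answer:

library(e1071)
set.seed(1)
n <- 150; p <- 20
x <- matrix(rnorm(n * p), n, p)                          # only the first two columns carry signal
y <- factor(ifelse(x[, 1] + x[, 2] + rnorm(n) > 0, "a", "b"))
dat <- data.frame(y = y, x)

# Keep all features and the kernel fixed; tune only the regularisation parameter C
tuned <- tune(svm, y ~ ., data = dat, kernel = "radial",
              ranges = list(cost = 2^(-3:7)))
summary(tuned)          # cross-validated error across the cost grid
tuned$best.model        # model refitted at the selected C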
SVM and non-linear predictive models - feature selection
I'd say that it is generally a bad idea to use feature selection for non-linear support vector machines. The reason for using a maximal margin classifier is that it is an approximate implementation o
SVM and non-linear predictive models - feature selection I'd say that it is generally a bad idea to use feature selection for non-linear support vector machines. The reason for using a maximal margin classifier is that it is an approximate implementation of a theoretical performance bound that is independent of the dimensionality of the feature space in which it is constructed. This means that over-fitting can be controlled via structural risk minimisation by carefully selecting the regularisation parameter, $C$. HOWEVER, and it is a rather big "however", this bit of computational learning theory only applies to the construction of the maximal margin classifier in a feature space induced by a fixed kernel. Any attempt to tune the kernel function, e.g. by adapting the kernel parameters, or via feature selection will invalidate the bounds on performance and in doing so potentially circumvent the structural risk minimisation principle. The other reason why it is generally a bad idea is that feature selection is difficult. If you perform feature selection by minimising some statistic computed on a finite sample of data, it is all too easy to over-fit the selection criterion, and end up with a worse model that you started with. This is especially true where there are many degrees of freedom in the optimisation problem (e.g. one binary parameter for the inclusion of each feature). The use of regularisation instead gives one (continuous) degree of freedom, which makes over-fitting more difficult. I have been performing some experiments for feature selection for non-linear kernel machines, and the basic message is that in general efforts at feature selection will result in lower generalisation performance. It helps on some datasets (sometimes it helps a lot), but usually it makes things worse (sometimes much worse).
SVM and non-linear predictive models - feature selection I'd say that it is generally a bad idea to use feature selection for non-linear support vector machines. The reason for using a maximal margin classifier is that it is an approximate implementation o
50,188
SVM and non-linear predictive models - feature selection
The whole point of deploying a non-linear kernel is to increase the dimensionality of the space, hoping that in the high-dimensional space there will be a linear solution to the problem. If you have a lot of features, hopefully each will offer a little extra information that can be used to build a better model. Doing feature selection, i.e. killing original features, and then deploying a non-linear kernel, i.e. creating artificial features from combinations of the leftovers, is... strange. I am not sure about your problem, but in practice it may help if the features you filter out are not offering any extra information to the solution, e.g. this may happen if they are so correlated that all the useful information already exists in the remaining features. In practice feature selection may also help by reducing the training time required (though it is probably easier and more practical to just go for a linear kernel). PS. As I was writing this I realised how old this question is :)
SVM and non-linear predictive models - feature selection
The whole point of deploying a non-linear kernel is to increase the dimensionality of the space, hoping that in the high dimensional space, there would be a linear solution to the problem. If you hav
SVM and non-linear predictive models - feature selection The whole point of deploying a non-linear kernel is to increase the dimensionality of the space, hoping that in the high dimensional space, there would be a linear solution to the problem. If you have lot of features, hopefully, they will offer some little extra information that can be used to build a better model. Doing feature selection, i.e. killing original features, and then deploying non-linear kernel, i.e. creating artificial features by combinations of the leftovers, is... strange. Not sure about your problem but in practice it may help if the features you filter out are not offering any extra information to the solution, e.g. may happen if they are so correlated that indeed all useful information exists in remaining features. In practice feature selection may help by reducing the training time required (probably far easier and practical to go for linear kernel). PS. and as I was writing this I realised how old is this question :)
SVM and non-linear predictive models - feature selection The whole point of deploying a non-linear kernel is to increase the dimensionality of the space, hoping that in the high dimensional space, there would be a linear solution to the problem. If you hav
50,189
Find the probability of having k red balls after d days
Hmm, tough for an interview question. Condition the probability of being finished on the d+1-th draw on already having exactly k-1 red balls in the bag. The probability of being finished with the next draw is $1-\frac{k-1}{n}$. Now we just need to find the probability of having exactly k-1 red balls after d draws and multiply. This is identical to the probability of drawing exactly k-1 different balls from an urn with n balls after d draws with replacement. The probability of drawing at most k-1 different balls should be given by $\frac{(k-1)^d}{n^d}$. Drawing exactly k-1 is then given by subtracting the chance of drawing at most k-2, so we get $\frac{(k-1)^d}{n^d}-\frac{(k-2)^d}{n^d}$. So we get a total probability of $(1-\frac{k-1}{n})(\frac{(k-1)^d}{n^d}-\frac{(k-2)^d}{n^d})$. Note that this only holds for $d>=k$. I might also have made a mistake along the way. There is also probably a faster way to do this. And one might be able to simplify the result. Might help anyway.
Find the probability of having k red balls after d days
Hmm, tough for an interview question. Condition the probability of being finished on the d+1-th draw on already having exactly k-1 red balls in the bag. The probability of being finished with the next
Find the probability of having k red balls after d days Hmm, tough for an interview question. Condition the probability of being finished on the d+1-th draw on already having exactly k-1 red balls in the bag. The probability of being finished with the next draw is $1-\frac{k-1}{n}$. Now we just need to find the probability of having exactly k-1 red balls after d draws and multiply. This is identical to the probability of drawing exactly k-1 different balls from an urn with n balls after d draws with replacement. The probability of drawing at most k-1 different balls should be given by $\frac{(k-1)^d}{n^d}$. Drawing exactly k-1 is then given by subtracting the chance of drawing at most k-2, so we get $\frac{(k-1)^d}{n^d}-\frac{(k-2)^d}{n^d}$. So we get a total probability of $(1-\frac{k-1}{n})(\frac{(k-1)^d}{n^d}-\frac{(k-2)^d}{n^d})$. Note that this only holds for $d>=k$. I might also have made a mistake along the way. There is also probably a faster way to do this. And one might be able to simplify the result. Might help anyway.
Find the probability of having k red balls after d days Hmm, tough for an interview question. Condition the probability of being finished on the d+1-th draw on already having exactly k-1 red balls in the bag. The probability of being finished with the next
50,190
Find the probability of having k red balls after d days
I think you have to set up this problem using Markov chain theory; are you familiar with it? The system you are considering is a discrete system which can be in any of the (n+1) states { [0], [1], [2], ..., [n] }, where [j] is the state with j red balls. The system starts from the state [0] and evolves over time with a transition matrix whose entry $T_{ij}$ is the probability of going from state j to state i: $T_{ij}=\frac{n-j}{n}$ if i=j+1, $T_{ij}=\frac{j}{n}$ if i=j, and $T_{ij}=0$ otherwise. This Markov chain is time-homogeneous and you know the initial distribution (a delta function peaked on the state [0]). You can get the distribution after d days (i.e. after d steps) by multiplying the transition matrix by itself d times and then multiplying the result by the initial distribution: $p_d=T^d p_0$, where $p_d$ is the vector of probabilities (indexed by the state) at time d and $p_0$ is the vector of probabilities at time 0 (the delta function at [0]). Now, since the product of a matrix ($T^d$) with a vector ($p_0$) which has a 1 at the first element and 0 everywhere else (your delta function) is just the first column of that matrix, and since you are interested in the probability of $k$ red balls, what you are looking for is the matrix element $(T^d)_{k+1,\,1}$; I've put k+1 because we take the state with [0] red balls to be the first entry. So all you have to do is compute the matrix power and look up that element. Possibly, since the matrix is nearly diagonal, you will find that you can calculate it analytically; give it a go
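A sketch of this computation in R, assuming the process is: each day one ball is drawn uniformly at random, painted red if it is not already red, and returned (the values of n, d and k are arbitrary assumptions):

n <- 10; d <- 7; k <- 5
# Column-stochastic transition matrix: Tm[i + 1, j + 1] = P(j red -> i red)
Tm <- matrix(0, n + 1, n + 1)
for (j in 0:n) {
  Tm[j + 1, j + 1] <- j / n                     # draw a ball that is already red
  if (j < n) Tm[j + 2, j + 1] <- (n - j) / n    # draw a new ball and paint it red
}
p0 <- c(1, rep(0, n))                           # delta function on state [0]
p  <- p0
for (s in seq_len(d)) p <- Tm %*% p             # p = T^d p0
p[k + 1]                                        # P(exactly k red balls after d days)

# Monte Carlo check: red balls after d days = distinct balls seen in d draws
sim <- replicate(1e5, length(unique(sample.int(n, d, replace = TRUE))))
mean(sim == k)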
Find the probability of having k red balls after d days
I think you have to set up this problem using Markov Chains theory, are you familiar with it? The system you are considering is a discrete system which can be in any of the (n+1) states { [0], [1], [2
Find the probability of having k red balls after d days I think you have to set up this problem using Markov Chains theory, are you familiar with it? The system you are considering is a discrete system which can be in any of the (n+1) states { [0], [1], [2], ... ,[n] } where [j] is the state with j red balls. The system starts from the state [0] and evolves over time with a transition matrix, (probability of going from state i to state j) $T_{ij}=\frac{n-i}{n}$ if j=i+i $T_{ij}=\frac{i}{n}$ if j=i $T_{ij}=0$ otherwise Now this Markov chain is stationary and you know the initial distribution (which is a delta function peaked on the state [0]) You can get the distribution at a future time k (in this case you are interested in time k, which means after k steps) by multiplying the transition matrix by itself k times and then multiplying the result times the initial distribution $p_k=T^k p_0$ where $p_k$ is the vector of probabilities (indexed by the state) at time k and $p_0$ is the vector of probabilities at time 0 (delta function at 0). Now, since the product fo a matrix ($T^k$) times a vector ($p_0$) which has a 1 at the first element and 0 everywhere else (your delta function) is just the first column of the matrix and since you are interested in the probability of $d$ red balls then what you are looking for is the matrix element $(T^k)_{d+1,0}$ I've put d+1 because we take [0] balls to be the first element. So all you have to do is to do the matrix product and look for that element. Possibly, since the matrix is nearly diagonal, you will find that you can calculate it analytically, give it a go
Find the probability of having k red balls after d days I think you have to set up this problem using Markov Chains theory, are you familiar with it? The system you are considering is a discrete system which can be in any of the (n+1) states { [0], [1], [2
50,191
Find the probability of having k red balls after d days
This is a Markov chain exercise. Consider the vector $P_{d} = (p_d^0, p_d^1, \ldots, p_d^n) \in [0,1]^{n+1}$ where $p_d^k$ is the probability that there are exactly $k$ red balls in the bag after $d$ draws. Now, since $p_{d+1}^{k} = \frac{n-(k-1)}{n}p_{d}^{k-1} + \frac{k}{n}p_{d}^{k}$ one can immediately find a matrix $A \in M_{{n+1} \times {n+1}}$ such that $P_{d+1}=A \cdot P_d$ and solve the problem by computing $P_d = A^d P_0$ with $P_0=(1,0,0,\ldots,0)$. In conclusion, an algorithm to compute $p_d^k$ with complexity $\mathcal{O}(dk)$ is the following. Define $(p_0^0, p_0^1, \ldots, p_0^k)=(1,0,\ldots,0)$. Compute $(p_{j+1}^0, p_{j+1}^1, \ldots, p_{j+1}^k)$ using the formula $p_{j+1}^{i} = \frac{n-(i-1)}{n}p_{j}^{i-1} + \frac{i}{n}p_{j}^{i}$ for $j=0, \ldots, d-1$. The result is given by $p^k_d$.
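A direct R implementation of the recursion above (n, d and k are assumed example values; the state space is truncated at k, which is safe because the number of red balls never decreases):

n <- 10; d <- 7; k <- 5
p <- c(1, rep(0, k))                      # p[i + 1] = P(i red balls); start at 0 red
for (j in seq_len(d)) {
  p_new <- numeric(k + 1)
  for (i in 0:k) {
    gain <- if (i > 0) ((n - (i - 1)) / n) * p[i] else 0   # came from i - 1 red balls
    stay <- (i / n) * p[i + 1]                             # drew an already-red ball
    p_new[i + 1] <- gain + stay
  }
  p <- p_new
}
p[k + 1]                                  # p_d^k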
Find the probability of having k red balls after d days
This is a Markov chain exercise. Consider the vector $P_{d} = (p_d^0, p_d^1, \ldots, p_d^n) \in [0,1]^{n+1}$ where $p_d^k$ is the probability that there are exactly $k$ red ball in the bag after $d$ d
Find the probability of having k red balls after d days This is a Markov chain exercise. Consider the vector $P_{d} = (p_d^0, p_d^1, \ldots, p_d^n) \in [0,1]^{n+1}$ where $p_d^k$ is the probability that there are exactly $k$ red ball in the bag after $d$ draws. Now, since $p_{d+1}^{k} = \frac{n-(k-1)}{n}p_{d}^{k-1} + \frac{k}{n}p_{d}^{k}$ one can immediately find a matrix $A \in M_{{n+1} \times {n+1}}$ such that $P_{d+1}=A \cdot P_d$ and solve the problem by computing $P_d = A^d P_0$ with $P_0=(1,0,0,\ldots,0)$. In conclusion, an algorithm to compute $p_d^k$ with complexity $\mathcal{O}(pk)$ is the following. Define $(p_0^0, p_0^1, \ldots, p_0^k)=(1,0,\ldots,0)$. Compute $(p_{j+1}^0, p_{j+1}^1, \ldots, p_{j+1}^k)$ using the formula $p_{j+1}^{i} = \frac{n-(i-1)}{n}p_{j}^{i-1} + \frac{i}{n}p_{i}^{j}$ for $j=0, \ldots, d-1$. The result is given by $p^k_d$.
Find the probability of having k red balls after d days This is a Markov chain exercise. Consider the vector $P_{d} = (p_d^0, p_d^1, \ldots, p_d^n) \in [0,1]^{n+1}$ where $p_d^k$ is the probability that there are exactly $k$ red ball in the bag after $d$ d
50,192
Repeated-measures ANCOVA in R?
It seems like you're saying that covariate B is correlated with predictor A. In that case, that is not a situation where you can use an ANCOVA. In an ANCOVA your covariate B would have to be correlated only with RT, not with the other predictors. If you ever do find a situation where an ANCOVA is appropriate, it would just be... data.aov <- aov(RT ~ B + A + Error(subject/A), data = data)
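A minimal runnable sketch of that call, with simulated data in which the covariate B is, by construction, unrelated to the within-subject factor A; the data frame is called d here and all effect sizes are arbitrary assumptions:

set.seed(1)
nsub <- 20
d <- expand.grid(subject = factor(1:nsub), A = factor(c("a1", "a2")))
d$B  <- rnorm(nrow(d))                                   # covariate, independent of A
subj_eff <- rnorm(nsub, sd = 10)                         # subject-level random shifts
d$RT <- 300 + 20 * (d$A == "a2") + 15 * d$B +
        subj_eff[as.integer(d$subject)] + rnorm(nrow(d), sd = 5)

data.aov <- aov(RT ~ B + A + Error(subject/A), data = d)
summary(data.aov)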
Repeated-measures ANCOVA in R?
It seems like you're saying that covariate B is correlated with predictor A. In that case, that is not a situation where you can use an ANCOVA. In an ANCOVA your factor B would have to be correlated
Repeated-measures ANCOVA in R? It seems like you're saying that covariate B is correlated with predictor A. In that case, that is not a situation where you can use an ANCOVA. In an ANCOVA your factor B would have to be correlated only with the RT, not the other predictors. If you ever do find a situation where an ANCOVA is appropriate it would just be... data.aov <- aov(RT~ B + A + Error(subject/A), data = data)
Repeated-measures ANCOVA in R? It seems like you're saying that covariate B is correlated with predictor A. In that case, that is not a situation where you can use an ANCOVA. In an ANCOVA your factor B would have to be correlated
50,193
Compound distribution in Bayesian sense vs. compound distribution as random sum
Indeed the term compound is overloaded in statistics with both definitions. I prefer to describe the latter scenario as " a random sum of random variables" rather than a compound distribution and the former as a "continuous mixture" distribution but the term compound is also used and common for both. The only relationship between the two compound types I am aware of is with a Bernoulli random variable. If the number of summands in the random sum is Bernoulli then the distribution of the sum is $0$ if the Bernoulli was $0$ and $X_1$ if the Bernoulli was $1$. This is equivalently written $Y\stackrel{d}{=}Z_1X_1\stackrel{d}{=}\sum_{i=1}^{Z_1}X_1.$ Now the distribution of $Y$ after evaluating the sum has a zero-inflated distribution where the zero inflation is $\Pr(Z_1=0)$. This is the same distribution as if you "compounded" because the product representation above can be interpreted as a scale mixture where you mix between a $0$ value and a value given by the scale parameter of the distribution of $X_1$. The marginal distribution here would require compounding. In this case the compounding is not an integral but a finite sum.
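A quick R simulation of the Bernoulli case, where the random-sum and product (mixture) constructions coincide; the Gamma distribution for X and the value of p are illustrative assumptions only:

set.seed(1)
N <- 1e5
p <- 0.3                              # Pr(Z = 1)
Z <- rbinom(N, size = 1, prob = p)
X <- rgamma(N, shape = 2, rate = 1)
Y <- Z * X                            # equivalently sum_{i=1}^{Z} X_i with Z in {0, 1}

mean(Y == 0)                          # zero inflation, close to 1 - p = 0.7
# The non-zero part has the distribution of X (here Gamma(2, 1))
qqplot(Y[Y > 0], rgamma(sum(Y > 0), shape = 2, rate = 1),
       xlab = "non-zero part of Y", ylab = "Gamma(2, 1) sample")
abline(0, 1)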
Compound distribution in Bayesian sense vs. compound distribution as random sum
Indeed the term compound is overloaded in statistics with both definitions. I prefer to describe the latter scenario as " a random sum of random variables" rather than a compound distribution and the
Compound distribution in Bayesian sense vs. compound distribution as random sum Indeed the term compound is overloaded in statistics with both definitions. I prefer to describe the latter scenario as " a random sum of random variables" rather than a compound distribution and the former as a "continuous mixture" distribution but the term compound is also used and common for both. The only relationship between the two compound types I am aware of is with a Bernoulli random variable. If the number of summands in the random sum is Bernoulli then the distribution of the sum is $0$ if the Bernoulli was $0$ and $X_1$ if the Bernoulli was $1$. This is equivalently written $Y\stackrel{d}{=}Z_1X_1\stackrel{d}{=}\sum_{i=1}^{Z_1}X_1.$ Now the distribution of $Y$ after evaluating the sum has a zero-inflated distribution where the zero inflation is $\Pr(Z_1=0)$. This is the same distribution as if you "compounded" because the product representation above can be interpreted as a scale mixture where you mix between a $0$ value and a value given by the scale parameter of the distribution of $X_1$. The marginal distribution here would require compounding. In this case the compounding is not an integral but a finite sum.
Compound distribution in Bayesian sense vs. compound distribution as random sum Indeed the term compound is overloaded in statistics with both definitions. I prefer to describe the latter scenario as " a random sum of random variables" rather than a compound distribution and the
50,194
Using ordinal regression to evaluate predictor "importance"?
I think that you are thinking about this in a good way. In addition to using generalized $R^2$ to gauge predictive discrimination you can use formal likelihood ratio $\chi^2$ tests for added information as well as something I talk about in my book Regression Modeling Strategies called the "adequacy index". This is just the ratio of the likelihood ratio $\chi^2$ for a sub-model to the overall $\chi^2$ for the largest model.
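A rough sketch of both ideas with the rms package and a simulated ordinal outcome; the data-generating model is an assumption, and the component name "Model L.R." is how lrm stores the likelihood ratio chi-square as I recall it, so it should be checked against the package documentation:

library(rms)
set.seed(1)
n  <- 300
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y  <- cut(x1 + 0.5 * x2 + rnorm(n), c(-Inf, -1, 0, 1, Inf), labels = FALSE)  # 4-level ordinal outcome

full <- lrm(y ~ x1 + x2 + x3)    # proportional-odds model with all predictors
sub  <- lrm(y ~ x1)              # sub-model of interest

lrtest(sub, full)                                        # LR chi-square test for added information
sub$stats["Model L.R."] / full$stats["Model L.R."]       # adequacy index of the sub-model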
Using ordinal regression to evaluate predictor "importance"?
I think that you are thinking about this in a good way. In addition to using generalized $R^2$ to gauge predictive discrimination you can use formal likelihood ratio $\chi^2$ tests for added informat
Using ordinal regression to evaluate predictor "importance"? I think that you are thinking about this in a good way. In addition to using generalized $R^2$ to gauge predictive discrimination you can use formal likelihood ratio $\chi^2$ tests for added information as well as something I talk about in my book Regression Modeling Strategies called the "adequacy index". This is just the ratio of the likelihood ratio $\chi^2$ for a sub-model to the overall $\chi^2$ for the largest model.
Using ordinal regression to evaluate predictor "importance"? I think that you are thinking about this in a good way. In addition to using generalized $R^2$ to gauge predictive discrimination you can use formal likelihood ratio $\chi^2$ tests for added informat
50,195
Public databases of learned HMM models for NLP
Python's NLTK has a dataset called hmm_treebank_pos_tagger that you can download here. Stanford has a POS tagger described here. You can download it along with the training data.
Public databases of learned HMM models for NLP
Python NLTK has a dataset called hmm_treebank_pos_tagger that you can download here. Stanford has a POS tagger described here. You can download it along with the training data
Public databases of learned HMM models for NLP Python NLTK has a dataset called hmm_treebank_pos_tagger that you can download here. Stanford has a POS tagger described here. You can download it along with the training data
Public databases of learned HMM models for NLP Python NLTK has a dataset called hmm_treebank_pos_tagger that you can download here. Stanford has a POS tagger described here. You can download it along with the training data
50,196
Public databases of learned HMM models for NLP
http://wordnet.princeton.edu/ is not directly what you are after, but it might be useful. It has a large list of words, stems, and many different linkages between them. It was useful to me as a resource when creating an NLP engine.
Public databases of learned HMM models for NLP
http://wordnet.princeton.edu/ not directly what you are after, but might be useful. It has a large list of words, stems and many different linkages between them. It was useful to me as a resource cre
Public databases of learned HMM models for NLP http://wordnet.princeton.edu/ not directly what you are after, but might be useful. It has a large list of words, stems and many different linkages between them. It was useful to me as a resource creating an NLP engine
Public databases of learned HMM models for NLP http://wordnet.princeton.edu/ not directly what you are after, but might be useful. It has a large list of words, stems and many different linkages between them. It was useful to me as a resource cre
50,197
How to compare weighted multivariate linear models?
I recommend calculating AIC model weights for each of these. This gives you the flexibility of being able to choose the AIC-best model for inference, or, if there is considerable structural uncertainty, to obtain a model-weighted estimate of the effect of x2.
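A sketch in base R; the simulated response, the weighting variable w, and the convention of averaging the x2 coefficient over models (using 0 where x2 is absent) are all assumptions made for illustration:

set.seed(1)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
w  <- runif(n, 0.5, 2)                                   # observation weights
y  <- 1 + 0.5 * x1 + 0.3 * x2 + rnorm(n, sd = 1 / sqrt(w))

m1 <- lm(y ~ x1,      weights = w)
m2 <- lm(y ~ x2,      weights = w)
m3 <- lm(y ~ x1 + x2, weights = w)

aic   <- AIC(m1, m2, m3)$AIC
delta <- aic - min(aic)
wts   <- exp(-delta / 2) / sum(exp(-delta / 2))          # Akaike weights
data.frame(model = c("m1", "m2", "m3"), AIC = aic, delta = delta, weight = round(wts, 3))

# Model-averaged estimate of the x2 effect
b_x2 <- c(0, coef(m2)["x2"], coef(m3)["x2"])
sum(wts * b_x2)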
How to compare weighted multivariate linear models?
I recommend calculating AIC model weights for each of these. This gives you the flexibility of being able to choose the AIC-best model for inference, or if there is considerable structural uncertainty
How to compare weighted multivariate linear models? I recommend calculating AIC model weights for each of these. This gives you the flexibility of being able to choose the AIC-best model for inference, or if there is considerable structural uncertainty, to obtain a model-weighted estimate of the effect of x2.
How to compare weighted multivariate linear models? I recommend calculating AIC model weights for each of these. This gives you the flexibility of being able to choose the AIC-best model for inference, or if there is considerable structural uncertainty
50,198
Normalized correlation with a constant vector
Let $\boldsymbol{x}$ and $\boldsymbol{y}$ be your two vectors and let $\boldsymbol{\bar{x}} \equiv \bar{x} \boldsymbol{1}$ and $\boldsymbol{\bar{y}} \equiv \bar{y} \boldsymbol{1}$ be constant vectors for the means of the two original vectors. The components of the sample correlation are: $$\begin{matrix} s_{x,y} = \frac{1}{n-1} (\boldsymbol{x} - \boldsymbol{\bar{x}}) \cdot (\boldsymbol{y} - \boldsymbol{\bar{y}}) & & s_x = \frac{1}{\sqrt{n-1}} ||\boldsymbol{x} - \boldsymbol{\bar{x}}|| & & s_y = \frac{1}{\sqrt{n-1}} ||\boldsymbol{y} - \boldsymbol{\bar{y}}||. \end{matrix}$$ The sample correlation between $\boldsymbol{x}$ and $\boldsymbol{y}$ is just the cosine of the angle between the vectors $\boldsymbol{x} - \boldsymbol{\bar{x}}$ and $\boldsymbol{y} - \boldsymbol{\bar{y}}$. Letting this angle be $\theta$ we have: $$\rho_{x,y} = \frac{(\boldsymbol{x} - \boldsymbol{\bar{x}}) \cdot (\boldsymbol{y} - \boldsymbol{\bar{y}})}{||\boldsymbol{x} - \boldsymbol{\bar{x}}|| \cdot ||\boldsymbol{y} - \boldsymbol{\bar{y}}||} = \cos \theta.$$ Since rescaling either vector scales its covariance and its standard deviation equivalently, the correlation is unaffected by scale. It is not correct to say that a low standard deviation gives a high correlation. What matters for correlation is the angle between the vectors, not their lengths. In the special case where $\boldsymbol{y} \propto \boldsymbol{1}$ (i.e., $\boldsymbol{y}$ is a constant vector) you have $\boldsymbol{y} - \boldsymbol{\bar{y}} = \boldsymbol{0}$, which then gives $s_{x,y} = 0$ and $s_{y} = 0$. In this case the correlation is undefined. Geometrically this occurs because there is no defined angle with the zero vector.
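A quick R check of the two claims (scale invariance and the undefined constant-vector case); the numbers are arbitrary:

set.seed(1)
x <- rnorm(20)
y <- 3 * x + rnorm(20)

cor(x, y)
cor(x, 10 * y)        # identical: rescaling does not change the correlation

z <- rep(5, 20)       # constant vector
sd(z)                 # 0
cor(x, z)             # NA (with a zero-standard-deviation warning): undefined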
Normalized correlation with a constant vector
Let $\boldsymbol{x}$ and $\boldsymbol{y}$ be your two vectors and let $\boldsymbol{\bar{x}} \equiv \bar{x} \boldsymbol{1}$ and $\boldsymbol{\bar{y}} \equiv \bar{y} \boldsymbol{1}$ be constant vectors
Normalized correlation with a constant vector Let $\boldsymbol{x}$ and $\boldsymbol{y}$ be your two vectors and let $\boldsymbol{\bar{x}} \equiv \bar{x} \boldsymbol{1}$ and $\boldsymbol{\bar{y}} \equiv \bar{y} \boldsymbol{1}$ be constant vectors for the means of the two original vectors. The components of the sample correlation are: $$\begin{matrix} s_{x,y}^2 = \frac{1}{n-1} (\boldsymbol{x} - \boldsymbol{\bar{x}}) \cdot (\boldsymbol{y} - \boldsymbol{\bar{y}}) & & s_x = \frac{1}{n-1} ||\boldsymbol{x} - \boldsymbol{\bar{x}}|| & & s_y = \frac{1}{n-1} ||\boldsymbol{y} - \boldsymbol{\bar{y}}||. \end{matrix}$$ The sample correlation between $\boldsymbol{x}$ and $\boldsymbol{y}$ is just the cosine of the angle between the vectors $\boldsymbol{x} - \boldsymbol{\bar{x}}$ and $\boldsymbol{y} - \boldsymbol{\bar{y}}$. Letting this angle be $\theta$ we have: $$\rho_{x,y} = \frac{(\boldsymbol{x} - \boldsymbol{\bar{x}}) \cdot (\boldsymbol{y} - \boldsymbol{\bar{y}})}{||\boldsymbol{x} - \boldsymbol{\bar{x}}|| \cdot ||\boldsymbol{y} - \boldsymbol{\bar{y}}||} = \cos \theta.$$ Since scaling of either vector scales the covariance and standard deviation equivalently, this means that correlation is unaffected by scale. It is not correct to say that a low standard deviation gives a high correlation. What matters for correlation is the angle between the vectors, not their lengths. In the special case where $\boldsymbol{y} \propto \boldsymbol{1}$ (i.e., $\boldsymbol{y}$ is a constant vector) you have $\boldsymbol{y} - \boldsymbol{\bar{y}} = \boldsymbol{0}$ which then gives $s_{x,y}^2 = 0$ and $s_{y} = 0$. In this case the correlation is undefined. Geometrically this occurs because there is no defined angle with the zero vector.
Normalized correlation with a constant vector Let $\boldsymbol{x}$ and $\boldsymbol{y}$ be your two vectors and let $\boldsymbol{\bar{x}} \equiv \bar{x} \boldsymbol{1}$ and $\boldsymbol{\bar{y}} \equiv \bar{y} \boldsymbol{1}$ be constant vectors
50,199
Restricted permutations
If env is the variable you are interested in then we need to test the "effect" of this variable in terms of the variance explained by env. To do the test using a permutation we need to think about what is and isn't exchangeable under the Null hypothesis we are testing. In this case, the Null is no effect of env so under this the sites are freely exchangeable. Within site, the samples could be permuted but we should use a cyclic shift permutation rather than free permutation to preserve the autocorrelations. If the three time points are the same across all sites we might also wish to restrict the permutation so that the same permutation is used within all sites, thus preserving those correlations across site. If time is a nuisance variable that you are not interested in, I would do a partial analysis, conditioning on time to remove it and look at the effect of env. To do this, include + Condition(time) in the model formula. > mod <- capscale(repdf[, -c(1,11:12)] ~ env + Condition(time), data = repdf) > mod Call: capscale(formula = repdf[, -c(1, 11:12)] ~ env + Condition(time), data = repdf) Inertia Proportion Rank Total 38.1148 1.0000 Conditional 0.0000 0.0000 1 Constrained 16.5286 0.4337 1 Unconstrained 21.5862 0.5663 9 Inertia is mean squared Euclidean distance Eigenvalues for constrained axes: CAP1 16.53 Eigenvalues for unconstrained axes: MDS1 MDS2 MDS3 MDS4 MDS5 MDS6 8.890e+00 7.312e+00 4.611e+00 4.793e-01 1.560e-01 1.370e-01 MDS7 MDS8 MDS9 7.542e-06 5.719e-06 3.341e-06 To permute strata (here site), you can use permute. To shuffle just the strata as blocks of three samples, a suitable control object would be: ctrl <- with(repdf, permControl(strata = site, within = Within(type = "none"), blocks = Blocks(type = "free"))) If you wanted to permute within site we would use this ctrl <- with(repdf, permControl(strata = site, within = Within(type = "series", constant = TRUE), blocks = Blocks(type = "free"))) The constant = TRUE forces the same random cyclic shift to be applied across each site. Even controlling for time (by conditioning on it in partial analysis) this only removes a linear effect of time so there could be some remaining autocorrelation in the residuals that are then used as the response data in the model testing the effect of env, hence the reason for using cyclic permutations even though we controlled for time in the model. Something to be aware of is that when permuting strata, you have to have a balanced design, as here, with the same number of samples within each level of strata. The code that generates permutations from control objects does not enforce this as it is designed to be very general purpose. There is a function permCheck() that exists to check a given control object and balance is one of the checks performed. When we interface permute with vegan we will enforce balance using permCheck() but I don't think permCheck() works correctly in the current version of permute. Of course, as we (well, I) haven't yet interfaced permute and vegan yet, you'll need to hack the anova() method to generate the relevant permutations. [Apologies for not responding sooner despite being alerted to this by @chlalanne on Twitter - my reply has nothing to do with you awarding a bounty, I had a poorly 11-month-old asleep on my chest most of the past few days that curtailed my typing ;-)]
Restricted permutations
If env is the variable you are interested in then we need to test the "effect" of this variable in terms of the variance explained by env. To do the test using a permutation we need to think about wha
Restricted permutations If env is the variable you are interested in then we need to test the "effect" of this variable in terms of the variance explained by env. To do the test using a permutation we need to think about what is and isn't exchangeable under the Null hypothesis we are testing. In this case, the Null is no effect of env so under this the sites are freely exchangeable. Within site, the samples could be permuted but we should use a cyclic shift permutation rather than free permutation to preserve the autocorrelations. If the three time points are the same across all sites we might also wish to restrict the permutation so that the same permutation is used within all sites, thus preserving those correlations across site. If time is a nuisance variable that you are not interested in, I would do a partial analysis, conditioning on time to remove it and look at the effect of env. To do this, include + Condition(time) in the model formula. > mod <- capscale(repdf[, -c(1,11:12)] ~ env + Condition(time), data = repdf) > mod Call: capscale(formula = repdf[, -c(1, 11:12)] ~ env + Condition(time), data = repdf) Inertia Proportion Rank Total 38.1148 1.0000 Conditional 0.0000 0.0000 1 Constrained 16.5286 0.4337 1 Unconstrained 21.5862 0.5663 9 Inertia is mean squared Euclidean distance Eigenvalues for constrained axes: CAP1 16.53 Eigenvalues for unconstrained axes: MDS1 MDS2 MDS3 MDS4 MDS5 MDS6 8.890e+00 7.312e+00 4.611e+00 4.793e-01 1.560e-01 1.370e-01 MDS7 MDS8 MDS9 7.542e-06 5.719e-06 3.341e-06 To permute strata (here site), you can use permute. To shuffle just the strata as blocks of three samples, a suitable control object would be: ctrl <- with(repdf, permControl(strata = site, within = Within(type = "none"), blocks = Blocks(type = "free"))) If you wanted to permute within site we would use this ctrl <- with(repdf, permControl(strata = site, within = Within(type = "series", constant = TRUE), blocks = Blocks(type = "free"))) The constant = TRUE forces the same random cyclic shift to be applied across each site. Even controlling for time (by conditioning on it in partial analysis) this only removes a linear effect of time so there could be some remaining autocorrelation in the residuals that are then used as the response data in the model testing the effect of env, hence the reason for using cyclic permutations even though we controlled for time in the model. Something to be aware of is that when permuting strata, you have to have a balanced design, as here, with the same number of samples within each level of strata. The code that generates permutations from control objects does not enforce this as it is designed to be very general purpose. There is a function permCheck() that exists to check a given control object and balance is one of the checks performed. When we interface permute with vegan we will enforce balance using permCheck() but I don't think permCheck() works correctly in the current version of permute. Of course, as we (well, I) haven't yet interfaced permute and vegan yet, you'll need to hack the anova() method to generate the relevant permutations. [Apologies for not responding sooner despite being alerted to this by @chlalanne on Twitter - my reply has nothing to do with you awarding a bounty, I had a poorly 11-month-old asleep on my chest most of the past few days that curtailed my typing ;-)]
Restricted permutations If env is the variable you are interested in then we need to test the "effect" of this variable in terms of the variance explained by env. To do the test using a permutation we need to think about wha
50,200
How to detect changes in amplitude?
Try package ‘changepoint’, described here: http://www.lancs.ac.uk/~killick/Pub/KillickEckley2011.pdf It is able to detect changepoints in both mean and variance.
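A short sketch with the changepoint package; the simulated signal (a jump in standard deviation, i.e. amplitude, halfway through) and the choice of the PELT method are assumptions for illustration:

library(changepoint)
set.seed(1)
x <- c(rnorm(200, mean = 0, sd = 1), rnorm(200, mean = 0, sd = 4))

fit_var <- cpt.var(x, method = "PELT")        # changes in variance only
cpts(fit_var)                                 # estimated changepoint location(s)
plot(fit_var)

fit_mv <- cpt.meanvar(x, method = "PELT")     # changes in mean and/or variance
cpts(fit_mv)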
How to detect changes in amplitude?
Try package ‘changepoint’, described here: http://www.lancs.ac.uk/~killick/Pub/KillickEckley2011.pdf It is able to detected changepoints in both mean and variance.
How to detect changes in amplitude? Try package ‘changepoint’, described here: http://www.lancs.ac.uk/~killick/Pub/KillickEckley2011.pdf It is able to detected changepoints in both mean and variance.
How to detect changes in amplitude? Try package ‘changepoint’, described here: http://www.lancs.ac.uk/~killick/Pub/KillickEckley2011.pdf It is able to detected changepoints in both mean and variance.