How to handle both text and numbers for PCA in R?
Make variables. Make your text values nominal variables using factor, or ordinal variables using ordered. For example, if you have a data frame called mydata, then mydata <- transform(mydata, race = factor(race)) will convert a textual race variable to a nominal one; see ?transform for details. This way R will show you labels but construct the appropriate dummy variables in the background when you fit any models. You can change the baseline category in the default contrast coding using relevel. If you used read.csv to get your data in, this conversion may have already happened without your requesting it. You can check the classes of each variable using lapply(mydata, class); things that are factors list their levels. Indeed, half the battle with R is to persuade it not to do these sorts of 'handy' variable conversions.

Scale variables? If you are thinking of PCA then you are, I suppose, trying to scale this data somehow. If so, you might use a factor analysis approach that can deal with mixed variable types; one such method is MCMCmixfactanal in the MCMCpack R package. On the other hand, you might ask yourself what space you would get out of a big scaling model that contains so much disparate stuff. Perhaps a wiser plan would be to keep the nominal variables as things to condition on or stratify with, in the context of fitting a regular multivariate model. If you'd like to elaborate on the purpose of your PCA plan we might be able to provide some more alternatives. What information are you trying to get out of the data by scaling?
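For instance, a minimal sketch of the conversion step, with a made-up data frame (the column names race, height, and weight are hypothetical):

    mydata <- data.frame(race   = c("a", "b", "a", "c"),    # hypothetical text variable
                         height = c(170, 182, 165, 176),    # hypothetical numeric variables
                         weight = c(65, 80, 58, 72))
    mydata <- transform(mydata, race = factor(race))        # text -> nominal factor
    lapply(mydata, class)                                   # check the class of each column
    mydata$race <- relevel(mydata$race, ref = "b")          # change the baseline category
    model.matrix(~ race + height + weight, data = mydata)   # the dummy coding R builds behind the scenes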
How to check whether maximum likelihood estimation optimizer has converged in R?
If you're using the arima function as mod <- arima(rates, c(p, d, q)), then the convergence status from the underlying optim routine is mod$code. If this is 0, then you converged, at least as far as optim was concerned; see ?optim for details. This is all in the help page: ?arima
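As a sketch, with a simulated series standing in for your rates and an arbitrary choice of orders:

    set.seed(1)
    rates <- arima.sim(model = list(ar = 0.5), n = 200)  # simulated stand-in for your series
    mod <- arima(rates, order = c(1, 0, 0))              # fit ARIMA(p, d, q)
    mod$code                                             # 0 means optim reported convergence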
How to check whether maximum likelihood estimation optimizer has converged in R?
Assuming you have a globally convex likelihood and a regular parameter space, the optimizer may not have converged when it reaches a boundary of the parameter space. This yields parameter estimates which explode and information matrices which are singular. Usually any graphical method is good at diagnosing that. Some estimation problems have unique solutions which are on the boundary of the parameter space, though, such as separation of outcomes in univariate logistic regression, giving odds ratios which are $\infty$ or 0.
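A small sketch of that boundary behaviour in the logistic case, with made-up, perfectly separated data:

    x <- c(1, 2, 3, 4, 5, 6)
    y <- c(0, 0, 0, 1, 1, 1)             # outcome perfectly separated by x
    fit <- glm(y ~ x, family = binomial) # R typically warns about fitted probabilities of 0 or 1
    coef(fit)                            # a huge slope: the MLE is running off to the boundary
    summary(fit)$coefficients            # enormous standard errors from a near-singular information matrix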
Should Kevin Durant be benched?
There has actually been some research on this(!). Here's a paper by Maymin et al. (2012) (SSRN link). They actually look at win-loss statistics instead of just trying to maximize the number of minutes a player plays. They argue that (1) getting into foul trouble causes a player to play measurably worse, and so (2) it's rational to yank a player with Q+1 fouls (where Q is the present quarter). There's a shorter piece on Slate, which is where I found the original reference. Enjoy!
Should Kevin Durant be benched?
If we stick rigidly to the assumption "any minute is as likely to draw a foul as the next", then we can model this simply as a Poisson process. The results are shown in the figure below: the vertical axis shows the probability that Durant has not fouled out after playing the number of minutes shown on the horizontal axis. Reading off the numbers at each time asked, there is a 74% probability of Durant going the distance if he attempts to play all 20 minutes. If he is benched for 5 minutes, he will only be able to play for a maximum of 15 minutes, and the probability that he will not foul out in that time increases to 83%. As for part c), there is no trade-off involved: Durant should never be benched if the goal is to maximise the number of minutes played. For what it's worth, I don't think the assumption of fouls being a Poisson process is true, but then the result of the analysis would entirely depend on what other assumptions are being made.
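A sketch of the calculation under one possible parameterisation (that Durant can absorb at most one more foul and that fouls arrive at roughly one per 20 minutes; these two numbers are assumptions chosen only to roughly reproduce the figures quoted above):

    rate <- 1 / 20.5                      # assumed fouls per minute
    max_extra_fouls <- 1                  # assumed number of fouls he can still absorb
    p_survive <- function(minutes) ppois(max_extra_fouls, rate * minutes)
    p_survive(20)   # roughly 0.74
    p_survive(15)   # roughly 0.83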
Performance metric for algorithm predicting probability of low probability events
Mean squared error as suggested by Lakret will certainly work; however, I'd like to propose a method which captures the uncertainty of the click rates of the ads (which are not known exactly, but only estimated from historic data). Let's say we have an ad in our validation set with 10,000 impressions and 10 clicks, i.e. the maximum likelihood estimate for the click rate $p$ is $0.001$. Furthermore, we predicted a click rate of $\hat{p}$ for this ad. Now, instead of comparing the predicted $\hat{p}$ with $p$, we check whether $\hat{p}$ is in the confidence interval of $p$. Using the Beta distribution, i.e. the Bayesian approach, to calculate the confidence interval (then called a credible interval), in R we get alpha <- 0.05; qbeta(c(alpha/2, 1-alpha/2), 10+1, 10000-10+1), which results in [1] 0.000549185 0.001838080. For other methods to calculate binomial confidence intervals see e.g. the R package confint. Now, the error for the prediction of a single ad is 0 if $\hat{p}$ is in the confidence interval of $p$, and 1 otherwise. Starting from here, one can calculate binomial metrics like precision, or just the average error across multiple click-rate predictions. In a more sophisticated approach one could calculate the error as the distance to the nearest confidence interval bound (if outside the confidence interval), to make the error less discrete.
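A short sketch of the whole procedure, with hypothetical vectors of impressions, clicks, and predicted rates (all numbers made up):

    alpha <- 0.05
    impressions <- c(10000, 5000, 20000)     # hypothetical validation data
    clicks      <- c(10, 2, 35)
    p_hat       <- c(0.0012, 0.0009, 0.0035) # hypothetical predicted click rates

    ci_low  <- qbeta(alpha / 2,     clicks + 1, impressions - clicks + 1)
    ci_high <- qbeta(1 - alpha / 2, clicks + 1, impressions - clicks + 1)
    err <- ifelse(p_hat >= ci_low & p_hat <= ci_high, 0, 1)  # 0 inside the interval, 1 outside
    mean(err)                                                # average error across ads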
Performance metric for algorithm predicting probability of low probability events
Since your actual data is binary (click/no click), you should make your predictions discrete as well (e.g. if the predicted probability is more than some threshold, then assign it 1). Then you can employ metrics like precision or accuracy.
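For instance, a minimal sketch with hypothetical predictions and outcomes, thresholding at an arbitrary cutoff:

    pred   <- c(0.002, 0.0015, 0.01, 0.0001)  # hypothetical predicted click probabilities
    click  <- c(1, 0, 1, 0)                   # observed outcomes
    cutoff <- 0.001                           # arbitrary threshold, a tuning choice
    label  <- as.integer(pred > cutoff)
    accuracy  <- mean(label == click)
    precision <- sum(label == 1 & click == 1) / sum(label == 1)
    c(accuracy = accuracy, precision = precision)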
Performance metric for algorithm predicting probability of low probability events
Mean absolute error and mean squared error may be helpful.
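For example, with hypothetical predicted probabilities and observed outcomes (all numbers made up):

    pred  <- c(0.002, 0.0015, 0.01, 0.0001)  # hypothetical predictions
    click <- c(1, 0, 1, 0)                   # observed click / no click
    mae <- mean(abs(pred - click))           # mean absolute error
    mse <- mean((pred - click)^2)            # mean squared error (the Brier score in this binary setting)
    c(MAE = mae, MSE = mse)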
Performance metric for algorithm predicting probability of low probability events
Why not simply use the correlation coefficient between the predicted click probability and the click event (0 or 1)? The higher the correlation, the better the algorithm.
Performance metric for algorithm predicting probability of low probability events
I found this approach works well. If there are $N$ examples for which we have to calculate CTRs, the following is a metric for the performance of the algorithm. Let our algorithm predict a CTR of $ctr(i)$ for example $i$: $$ v = \sum_{i=1}^{N}\frac{p(i)}{N} $$ where $p(i) = ctr(i)$ if there actually was a click, and $p(i) = 1 - ctr(i)$ if there was not. Now, $v$ will range from 0 to 1, and will be higher for better prediction algorithms.
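In R this is a one-liner, assuming a vector of predicted rates ctr and a 0/1 click indicator (both hypothetical here):

    ctr   <- c(0.002, 0.0015, 0.01, 0.0001)      # hypothetical predicted click rates
    click <- c(1, 0, 1, 0)                       # observed outcomes
    v <- mean(ifelse(click == 1, ctr, 1 - ctr))  # between 0 and 1, higher is better
    v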
Fitting an exponential mixture model with interval constraints on the mixture weights
The best solution is not one in Excel: spreadsheets are not a good environment for data analysis (see http://www.burns-stat.com/pages/Tutor/spreadsheet_addiction.html). The 'nls' function in R would be one choice.
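A sketch of what an nls call could look like for a two-component exponential model; the simulated data, starting values, and bounds below are placeholders you would replace with your own, and the "port" algorithm is one way to impose interval constraints on the weights:

    set.seed(1)
    x <- seq(0, 5, length.out = 100)
    y <- 0.7 * exp(-1.5 * x) + 0.3 * exp(-0.3 * x) + rnorm(100, sd = 0.01)  # fake data from the model

    fit <- nls(y ~ a * exp(b * x) + c * exp(d * x),
               start = list(a = 0.5, b = -1, c = 0.5, d = -0.2),  # starting guesses
               algorithm = "port",                                # allows box constraints
               lower = c(0, -Inf, 0, -Inf),                       # weights a, c kept in [0, 1]
               upper = c(1, 0, 1, 0))                             # decay rates b, d kept negative
    coef(fit)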
Fitting an exponential mixture model with interval constraints on the mixture weights
I'll show you how to analyze your data with Mathematica. First I'll use your model as requested: data = Import["Desktop/data.csv"]; y = NonlinearModelFit[data, a Exp[b x] + c Exp[d x] + e, {a, b, c, d, e}, x]; Mathematica returns this error: NonlinearModelFit::cvmit: Failed to converge to the requested accuracy or precision within 1000 iterations. This means that your model might not be ideal. Let's consider the data to see if we can come up with a better one. The log-plot of the negative of your series (negated so we can take the logarithm) shows a fairly polynomial curve (generated using ListLogPlot[# {1, -1} & /@ data]). This suggests that instead of an exponential mixture, we should use a log-linear model: $ \hat y = -\exp(a+bx+cx^2 + \dots)$. Let's try a cubic polynomial: nlm = LinearModelFit[{#[[1]], Log[-#[[2]]]} & /@ data, {x, x^2, x^3}, x]. Mathematica returns 71.6838 - 391.293 x + 764.791 x^2 - 501.198 x^3. In other words, $ \hat y = -\exp(71.6838 - 391.293 x + 764.791 x^2 - 501.198 x^3)$. Here is a plot of the data, the mixture model (red), and the log-linear model (green), generated using Show[{ListPlot@data, Plot[y[x], {x, 0.38, 0.57}, PlotStyle -> {Thick, Red}], Plot[-Exp@nlm@x, {x, .35, 0.6}, PlotStyle -> {Thick, Green}]}]. You can get an even better fit with a quartic polynomial, but you get the idea. The residuals (plotted with ListPlot@nlm["FitResiduals"]) show structure, which means that the model has not squeezed all the information out of the data. Fortunately, the structure virtually disappears by the time you raise the polynomial order to six.
Fitting an exponential mixture model with interval constraints on the mixture weights
Are you looking for methods or software to implement it? It looks like a typical nonlinear regression problem. In SAS this is implemented using proc nlin.
How to reduce autocorrelation in Metropolis algorithm?
First of all, for your model to be hierarchical, you need hyperpriors for $\alpha$ and $\beta$ (as already explained by Procrastinator). For the sake of simplicity, let's assume uniform priors on the positive part of the real axis, so that we have a hierarchical model as follows: $$y_{i}\mid \lambda_{i}\sim \mathrm{Poisson}(\lambda_{i})$$ $$\lambda_{i}\mid\alpha, \beta \sim \mathrm{Gamma}(\alpha,\beta)$$ $$\pi(\alpha,\beta)\propto 1_{[0,+\infty)}$$ Now you have two groups of parameters, $\{\lambda_{i}\}_{i=1,\dots,N}$ and $\{\alpha, \beta\}$, and you need to draw random samples of both groups from your posterior distribution. Although the model itself is not very complicated, you might get stuck with very slowly mixing chains for your parameters, especially if your $N$ is very large (say 1000). You can choose a normal distribution as your proposal (just remember to properly reject negative proposed values) for the whole set of parameters, in which case you would need an ($N+2$)-variate normal distribution as a proposal, or you could use an $N$-variate normal distribution for the lambdas and a bivariate normal distribution for $\alpha$ and $\beta$. I would suggest first of all proposing the lambdas and the Gamma parameters separately, i.e. using a Metropolis-within-Gibbs sampler; this would allow you to slightly decouple those chains. In this investigatory step I would use a covariance matrix with no correlations (i.e. a diagonal matrix) for the $N$-variate normal proposal. If that does not work, I would introduce a proposal covariance matrix where the correlations are not equal to zero; this should improve the mixing. And if this does not produce well-mixing chains, I would turn to Hamiltonian Monte Carlo. But first of all try to play with different covariance matrices for the multivariate normal proposal. I would also suggest modifying your model: instead of the Poisson intensity $\lambda_{i}$, use $\exp(\lambda_{i})$ and then use a normal distribution for $\lambda_{i}$ with unknown mean and variance parameters.
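A rough R sketch of a Metropolis-within-Gibbs scheme for this model. Note one simplification relative to the scheme above: each $\lambda_i$ is drawn exactly from its conjugate Gamma full conditional rather than with a normal proposal, while $(\alpha,\beta)$ get a bivariate normal random-walk step with negative proposals rejected. The data y, the proposal scale, and the iteration count are placeholders:

    set.seed(42)
    y <- rpois(50, lambda = rgamma(50, shape = 2, rate = 0.5))  # hypothetical data
    N <- length(y)
    n_iter <- 5000
    sigma_prop <- 0.3                      # random-walk proposal sd, to be tuned

    # log posterior of (alpha, beta) given the current lambdas (flat prior on alpha, beta > 0)
    log_post_ab <- function(a, b, lam) {
      if (a <= 0 || b <= 0) return(-Inf)
      N * (a * log(b) - lgamma(a)) + (a - 1) * sum(log(lam)) - b * sum(lam)
    }

    alpha <- 1; beta <- 1                  # initial values
    draws <- matrix(NA, n_iter, 2, dimnames = list(NULL, c("alpha", "beta")))

    for (it in 1:n_iter) {
      # Gibbs step: lambda_i | y_i, alpha, beta ~ Gamma(alpha + y_i, rate = beta + 1)
      lam <- rgamma(N, shape = alpha + y, rate = beta + 1)

      # Metropolis step for (alpha, beta) with a symmetric normal random walk
      prop <- c(alpha, beta) + rnorm(2, sd = sigma_prop)
      log_ratio <- log_post_ab(prop[1], prop[2], lam) - log_post_ab(alpha, beta, lam)
      if (log(runif(1)) < log_ratio) { alpha <- prop[1]; beta <- prop[2] }

      draws[it, ] <- c(alpha, beta)
    }

    acf(draws[, "alpha"])                  # inspect the autocorrelation of the chain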
How to reduce autocorrelation in Metropolis algorithm?
Can't seem to figure out how to add a comment, so this is part comment, part answer. First, it sounds from the comments that you are using a random walk Metropolis step to update $\alpha$ and $\beta$ jointly (or possibly separately), and then a Gibbs step to update each $\lambda_i$ conditional on the current values of $\alpha$ and $\beta$. You say you're using a truncated normal proposal in the (random walk) Metropolis step: it probably won't make much difference in your case, but technically I think you need to propose from a full Gaussian, not a truncated one; otherwise your proposal distribution isn't symmetric, so $q(y|x) \neq q(x|y) \: \forall x,y$, and the Metropolis acceptance ratio doesn't produce a Markov chain which will converge to the desired density. Alternatively you could keep the truncated $q$ and use the full Metropolis-Hastings acceptance ratio. Secondly, as Procrastinator says, you presumably have a posterior for $(\alpha, \beta)$, so you must have set a prior for these. What is it? Gamma priors with fixed hyperparameters (something like shape=1, rate=0.001) would probably do the trick. Now, to attempt to answer some of your question: assuming you've done everything right and you're getting large autocorrelation because each $(\alpha^{(i)},\beta^{(i)})$ is highly correlated with $(\alpha^{(i-1)},\beta^{(i-1)})$, then maybe the random walk Metropolis isn't the best method to draw $\alpha$ and $\beta$. Since your MH step is in only two dimensions, an independence sampler might work better. Slightly more advanced methods like the Metropolis-adjusted Langevin algorithm and Hamiltonian Monte Carlo are also designed to reduce autocorrelation, but they might be overkill here. So maybe try an independence sampler (don't forget to use the full Metropolis-Hastings ratio $\min\left(1,\frac{\pi(y)q(x|y)}{\pi(x)q(y|x)} \right)$ for acceptance).
SVM using RBF and nearest neighbor classification method
The decision boundary for the 1-NN algorithm is the union of the Voronoi cells of the training instances. For SVM, when you use an RBF kernel and there is no regularization, the decision boundary will also be an approximation of the union of the Voronoi cells. So in this case the methods are essentially the same in terms of performance, but depending on the structure of the dataset their complexities can be very different: if the number of support vectors in the SVM is high, then both the training and testing complexities of the SVM will be much higher than those of 1-NN. The power of SVM, however, becomes clear when you do regularization.
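A small sketch comparing the two on a toy data set; the package choice (class for knn, e1071 for svm) and the very large cost value, standing in for "little regularization", are my own illustration:

    library(class)   # knn
    library(e1071)   # svm with an RBF kernel

    set.seed(1)
    n <- 100
    train_x <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
    train_y <- factor(ifelse(train_x$x1 + train_x$x2 > 0, "A", "B"))
    test_x  <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
    test_y  <- factor(ifelse(test_x$x1 + test_x$x2 > 0, "A", "B"))

    pred_knn <- knn(train_x, test_x, cl = train_y, k = 1)

    fit_svm  <- svm(train_x, train_y, kernel = "radial", gamma = 1, cost = 1000)  # large cost ~ weak regularization
    pred_svm <- predict(fit_svm, test_x)

    mean(pred_knn == test_y)   # 1-NN test accuracy
    mean(pred_svm == test_y)   # RBF-SVM test accuracy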
SVM using RBF and nearest neighbor classification method
They are not that similar, but they are related. The point is that both kNN and RBF are non-parametric methods to estimate the probability density of your data. To see this, let us first consider the case of kernel methods. Say you consider a region of the feature space $R$. If you draw sample points from the actual probability distribution, $p(x)$, independently, then the probability of drawing a sample from that region is $$ P = \int_{R} p(x) \, dx $$ What if you have $N$ points? The probability that $K$ of those $N$ points fall in the region $R$ follows the binomial distribution, $$ \mathrm{Prob}(K) = {{N} \choose {K}}P^{K}(1-P)^{N-K} $$ As $N \to \infty$ this distribution is sharply peaked, so the probability can be approximated by its mean value $\frac{K}{N}$. An additional approximation is that the probability distribution over $R$ remains approximately constant, so that one can approximate the integral by $$ P = \int_{R} p(x) \, dx \approx p(x)V $$ where $V$ is the total volume of the region. Under these approximations, $p(x) \approx \frac{K}{NV}$. The idea of kernel methods is to split the feature space into several regions, estimate the counts for each region, and use those point estimates to interpolate across the whole feature space. That may sound like gibberish, so let us see how we can rewrite the estimate for the probability. Let $\{x_{i}\}_{1}^{N}$ be your data set. Consider a region $V = h^{d}$ which corresponds to a hypercube of side length $h$ in the $d$-dimensional feature space. Define the window function $$ H(x) = \begin{cases} 1, & \text{if } |x| < 1/2 \\ 0, & \text{otherwise} \end{cases} $$ Then we can write the total number of points that fall within the hypercube $V$ centered at $x$ as $$ K = \sum_{n}H\left(\frac{x-x_{n}}{h}\right) $$ In other words, $V$ is the neighborhood centered at $x$ and $K$ is the number of points close to $x$. Substituting back into $p(x) \approx \frac{K}{NV}$ gives $$ p(x) \approx \frac{1}{N}\sum_{n} \frac{1}{h^{d}}H\left(\frac{x-x_{n}}{h}\right) $$ The RBF case is a smoothed version of this, where $H$ is taken to be a Gaussian. For the case of kNN, please refer to this other question. Notice that these two algorithms approach the same problem differently: kernel methods fix the size of the neighborhood ($h$) and then calculate $K$, whereas kNN fixes the number of points, $K$, and then determines the region in space which contains those points. So yes, they are related, but not equivalent. P.S. SVM does not estimate the probability density but finds the separating hyperplane; here I have only compared kernel methods and kNN for density estimation. Even for novelty detection, SVM does not estimate the p.d.f. but its support (those points for which $p(x) \gt 0$), which is a different story.
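To make the fixed-$h$ versus fixed-$K$ contrast concrete, here is a small one-dimensional illustration of my own (the bandwidth $h$, the value of $K$, and the standard normal data are arbitrary choices):

    set.seed(1)
    x_data <- rnorm(200)                    # sample from a standard normal
    grid   <- seq(-3, 3, length.out = 100)

    # Kernel (Parzen) estimate: fix the window width h, weight points near each grid value
    h <- 0.4
    p_kernel <- sapply(grid, function(x) mean(dnorm((x - x_data) / h)) / h)

    # kNN estimate: fix K, let the "volume" be set by the distance to the K-th nearest data point
    K <- 10
    p_knn <- sapply(grid, function(x) {
      r <- sort(abs(x - x_data))[K]         # radius containing K points
      K / (length(x_data) * 2 * r)          # K / (N * volume), volume = 2r in one dimension
    })

    plot(grid, dnorm(grid), type = "l", ylab = "density")  # true density for reference
    lines(grid, p_kernel, col = "blue")
    lines(grid, p_knn, col = "red")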
Definition of "Natural Images" in the context of machine learning
I suspect it depends on the context. In the example you gave, they are contrasting handwritten characters with photographs. Both obviously have covariance structure. However, there are a finite number of letters (26, 52, etc.) and the covariance structure of a letter is, by convention, pretty tightly constrained. The number of possible photographs is much larger than 26, and there are considerably weaker constraints on the structure of the covariance matrix. In this case, I think the authors are suggesting that belief nets work well on a tightly constrained problem (map pixels onto [a-zA-Z]), but their method is superior when there is less a priori knowledge about the images. In other cases, "natural images" is essentially used as shorthand for "images which have a rich local covariance structure." This is particularly true in visual neuroscience, where the contrast is often between simple parametric stimuli (e.g., sine waves or white noise), which have somewhat boring covariance structure, and movies/photographs, which don't. Under this definition, I think it'd be completely reasonable to call paintings, video game screenshots, and the like "natural." I have noticed that people often hedge and call them "naturalistic" instead of "natural", but I think the point stands. People have compared the characteristics of these naturalistic scenes to arbitrary visual stimuli; there's a good review of natural scene statistics by Geisler (2008). For example, the power spectra of natural(istic) scenes have a stereotypical 1/f shape, and you could use that criterion to distinguish between arbitrary input and "naturalistic" input. There has been a lot of interest in whether/how the visual system has evolved to represent naturalistic input; I'd be happy to provide more pointers if you're interested. Geisler, W. S. (2008). Visual perception and the statistical properties of natural scenes. Annu. Rev. Psychol., 59, 167-192.
Definition of "Natural Images" in the context of machine learning
I suspect it depends on the context. In the example you gave, they are contrasting handwritten characters with photographs. Both obviously have covariance structure. However, there are a finite number
Definition of "Natural Images" in the context of machine learning I suspect it depends on the context. In the example you gave, they are contrasting handwritten characters with photographs. Both obviously have covariance structure. However, there are a finite number of letters (26, 52, etc) and the covariance structure of a letter is, by convention, pretty tightly constrained. The number of possible photographs is much larger than 26, and there are considerably weaker constraints on the structure of the covariance matrix. In this case, I think the authors are suggesting that belief nets work well on a tightly constrained problem (map pixels onto [a-zA-Z]), but their method is superior when there is less a priori knowledge about the images. In other cases, "natural images" is essentially used as shorthand for "images which have a rich local covariance structure." This is particularly true in visual neuroscience, where the contrast is often between simple parametric stimuli (e.g., sine waves or white noise), which have somewhat boring covariance structure, and movies/photographs, which don't. Under this definition, I think it'd be completely reasonable to call paintings, video game screenshots, and the like "natural." I have noticed that people often hedge and call them "naturalistic" instead of "natural", but I think the point stands. People have compared the characteristics of these naturalistic scenes to arbitrary visual stimuli. There's a good review on natural scene statistics by Geisler (2008). For example, the power spectra of natural(istic) scenes has a stereotypical 1/f shape. You could use that criteria to distinguish between arbitrary input and "naturalistic" input. There has been a lot of interest in whether/how the visual system has evolved to represent naturalistic input; I'd be happy to provide you with more pointers if you're interested. Geisler, W. S. (2008). Visual perception and the statistical properties of natural scenes. Annu. Rev. Psychol., 59, 167-192.
Definition of "Natural Images" in the context of machine learning I suspect it depends on the context. In the example you gave, they are contrasting handwritten characters with photographs. Both obviously have covariance structure. However, there are a finite number
55,419
What model should I use for this research?
An example. If you're happy with R you might want to look at the Seatbelts dataset that's built into the base R distribution. It sounds very similar to your problem: it consists of several time series of counts (front-seat casualties), an intervention/treatment (imposition of a seatbelt law) that affects only one of them (the front-seat passenger series), and a bunch of covariates (seasons, petrol prices, etc.). On the help page you can see what a tiny analysis would look like. You'll notice several things there. With count data it's often helpful to log the outcome: that lets you work with proportional increases rather than absolute ones, which is usually what you want, and it also allows you to fake a log-linear link without the hassle of fitting a non-linear model. The basic visual analysis consists of building some model, an ARIMA model in the example, on the series before the intervention, projecting it forward, and comparing it in a plot to what actually happened after the treatment; the code to do so is the line with ts.plot.

Time series modeling. A simple linear modelling approach, such as you see on the last line of the example, adds an intervention variable to indicate the period, a seasonal component, and an autoregressive term to deal with the autocorrelation. When you are content that the non-intervention variables are well set up, particularly that you have captured any seasonal components, Christmas or weekend effects, or whatever it is in your movie sales domain that systematically drives variation in sales without being related to the intervention you are interested in, then you may want to interpret your intervention variable as a causal effect. In that simple analysis the intervention causes a shift in average level; however, it might have different effects that you'd want to model differently. What else you'd want to do depends on the model class. For ARIMA modelling the basic issues are 'stationarity' and the AR, MA, and seasonal 'orders'. It is also possible to take a state space approach. Any good time series text will discuss these possibilities; I quite like Shumway and Stoffer (2006) and Commandeur (2007), but there are many good choices and lots of web material. Time series analysis can get quite complex quite fast, so taking a graphical exploratory approach first is very sensible, so you know how much time it's worth investing in figuring out these more complex parametric models.

Regression approaches. An alternative, non-time-series approach is to treat the problem as a regression discontinuity design: there you compare the period either side of the intervention to see the causal effect of the intervention. Morgan and Winship (2007) discuss the pros and cons of this approach.
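A compressed sketch of that sort of analysis with Seatbelts; the ARIMA orders and the choice of covariates here are placeholders to be settled by diagnostics, not recommendations:

    data(Seatbelts)
    front <- log(Seatbelts[, "front"])                 # log front-seat casualties
    xreg  <- Seatbelts[, c("law", "PetrolPrice")]      # intervention dummy plus a covariate

    # Pre-intervention model projected forward, for the visual check
    pre  <- window(front, end = c(1982, 12))           # the law comes into force in early 1983
    fit0 <- arima(pre, order = c(1, 0, 0),
                  seasonal = list(order = c(1, 0, 0), period = 12))
    fc <- predict(fit0, n.ahead = 24)
    ts.plot(front, fc$pred, col = c("black", "red"))

    # Simple intervention model: seasonal + AR structure with the law dummy as a regressor
    fit1 <- arima(front, order = c(1, 0, 0),
                  seasonal = list(order = c(1, 0, 0), period = 12),
                  xreg = xreg)
    fit1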
What model should I use for this research?
First, I would try to put both series on a comparable scale; for example, you might look at the revenue divided by the population of movie-going age over time. I would be more confident if the two series looked very similar before the event but then diverged after. Another approach might be the difference-in-differences estimator.
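A minimal sketch of the difference-in-differences idea, with a hypothetical long-format data frame (one row per country and period; the columns revenue, treated, and post are made up):

    df <- data.frame(
      revenue = c(100, 105, 110, 90, 140, 95, 100, 104, 98, 102),
      treated = c(1, 1, 1, 0, 1, 0, 1, 0, 0, 0),   # country affected by the event
      post    = c(0, 0, 1, 0, 1, 0, 1, 1, 0, 1)    # observation after the event
    )
    did <- lm(revenue ~ treated * post, data = df)
    coef(did)["treated:post"]     # the difference-in-differences estimate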
What model should I use for this research?
To explore the data you could use descriptive statistics, and it's very important to plot the data to get a visual representation and a feel for it. It sounds like a regression could work for you, using dummies for the events and the different countries; you could also decide to give the countries different slopes and intercepts. Finally, with time-series data, be careful of violations of the assumptions of regression, most specifically autocorrelation. Good luck!
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other?
I think the Mann-Whitney/Wilcoxon rank-sum test is the appropriate test. The KS test is specifically for comparing continuous distributions; your ratings are ordinal, so it doesn't seem appropriate here. The t-test and the Wilcoxon rank-sum test differ in that the t-test compares the means of the two distributions, while the Wilcoxon compares their 'locations' by looking at how the values of the two distributions compare when ranked. When your entire ratings distribution has only two values, one group has only ratings of 4, and your sample size is 14, the t-test seems less appropriate to me. It works, but I just have a harder time with it conceptually; this data is more binomial than it is continuous! Here's how I'd do all of that in R, which is freely available software for statistical computing (a step up from using websites to compute tests, I think...):

    # A vector of data for people with smartphone experience
    smartphone <- c(4, 4, 4, 4)
    # A vector of data for people without smartphone experience
    dumbphone <- c(4, 3, 4, 3, 3, 3, 3, 3, 3, 3)
    # The Mann-Whitney/Wilcoxon rank-sum test
    wilcox.test(x = smartphone, y = dumbphone)
    # t-test for comparison
    t.test(x = smartphone, y = dumbphone)
    # And, why not, a test of proportions
    # Consider 4 as the event, comparing 4/4 to 2/10
    prop.test(x = c(4, 2), n = c(4, 10))
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other?
I think the Mann-Whitney/Wilcoxon ranked-sum test is the appropriate test. The KS test is specifically for comparing continuous distributions - your ratings are ordinal, so it doesn't seem appropriat
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other? I think the Mann-Whitney/Wilcoxon ranked-sum test is the appropriate test. The KS test is specifically for comparing continuous distributions - your ratings are ordinal, so it doesn't seem appropriate here. The t-test and the Wilcoxon ranked-sum differ in that the t-test is comparing the means of the two distributions, while the Wilcoxon is comparing the 'locations' by looking at how the values of the two distributions compare when ranked. When your entire ratings distribution has only two values, one group has only ratings of 4, and your total sample size is 14, the t-test seems less appropriate to me. It works, but I just have a harder time with it conceptually. This data is more binomial than it is continuous! Here's how I'd do all of that in R, which is freely available software for statistical computing (a step up from using websites to compute tests, I think...) # A vector of data for people with smartphone experience smartphone <- c(4, 4, 4, 4) # A vector of data for people without smartphone experience dumbphone <- c(4, 3, 4, 3, 3, 3, 3, 3, 3, 3) # The Mann-Whitney/Wilcoxon ranked-sum test wilcox.test(x = smartphone, y = dumbphone) # t-test for comparison t.test(x = smartphone, y = dumbphone) # And, why not, a test of proportions # Consider 4 as the event, comparing 4/4 to 2/10 prop.test(x = c(4, 2), n = c(4, 10))
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other? I think the Mann-Whitney/Wilcoxon ranked-sum test is the appropriate test. The KS test is specifically for comparing continuous distributions - your ratings are ordinal, so it doesn't seem appropriat
55,423
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other?
Plainly there's a difference between the groups - the locations are completely different, with no overlap in the groups. Any sensible test will reject even at this small sample size. I assume you want to ask a different question of the data than that.
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other?
Plainly there's a difference between the groups - the locations are completely different, with no overlap in the groups. Any sensible test will reject even at this small sample size. I assume you want
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other? Plainly there's a difference between the groups - the locations are completely different, with no overlap in the groups. Any sensible test will reject even at this small sample size. I assume you want to ask a different question of the data than that.
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other? Plainly there's a difference between the groups - the locations are completely different, with no overlap in the groups. Any sensible test will reject even at this small sample size. I assume you want
55,424
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other?
You should t-test the mean difference between smart phone users and non-smart phone users. Pearson's R is not appropriate for binary variables because it assumes that both variables are normally distributed (and X, in your case, cannot be because it is dichotomous). A t-test will ask if the mean score on the outcome (Y in this case) is significantly different across the two categories of X. This assumes that your Y variable is something that is appropriate for averaging. In other words it should at least be ordinal (meaning rank-ordered categories) with 5 or so possible categories; an interval level of measurement is technically required, but in practice ordinal is fine. There are several statistical tests and correlation coefficients that can be calculated, and what you choose will probably be a function of whatever the convention is in your field for the sort of data you have. The t-test is pretty common across all fields, which is why I suggest it. Note also that the t-test doesn't give you an effect size - you are comparing the means of the two groups under the null hypothesis that the difference between them is 0 in the population. A statistically significant finding means only that the means are not the same in the population. You might also consider some effect size statistic - that's what Pearson's R is, but again that's not appropriate for binary variables.
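Since the t-test itself gives no effect size, here is a small sketch in R that adds Cohen's d for the two rating groups quoted elsewhere in this thread (the vectors are taken from that earlier answer):
smartphone <- c(4, 4, 4, 4)
dumbphone  <- c(4, 3, 4, 3, 3, 3, 3, 3, 3, 3)
t.test(smartphone, dumbphone)            # Welch t-test of the mean difference
# Cohen's d using the pooled standard deviation
n1 <- length(smartphone); n2 <- length(dumbphone)
sp <- sqrt(((n1 - 1) * var(smartphone) + (n2 - 1) * var(dumbphone)) / (n1 + n2 - 2))
(mean(smartphone) - mean(dumbphone)) / sp   # standardized mean difference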
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other?
You should t-test the mean difference between smart phone users and non-smart phone users. Pearson's R is not appropriate for binary variables because it assumes that both variables are normally dist
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other? You should t-test the mean difference between smart phone users and non-smart phone users. Pearson's R is not appropriate for binary variables because it assumes that both variables are normally distributed (and X, in your case, cannot be because it is dichotomous). A t-test will ask if the mean score on the outcome (Y in this case) is significantly different across the two categories of X. This assumes that your Y variable is something that is appropriate for averaging. In other words it should at least be ordinal (meaning rank-ordered categories) with 5 or so possible categories; an interval level of measurement is technically required, but in practice ordinal is fine. There are several statistical tests and correlation coefficients that can be calculated, and what you choose will probably be a function of whatever the convention is in your field for the sort of data you have. The t-test is pretty common across all fields, which is why I suggest it. Note also that the t-test doesn't give you an effect size - you are comparing the means of the two groups under the null hypothesis that the difference between them is 0 in the population. A statistically significant finding means only that the means are not the same in the population. You might also consider some effect size statistic - that's what Pearson's R is, but again that's not appropriate for binary variables.
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other? You should t-test the mean difference between smart phone users and non-smart phone users. Pearson's R is not appropriate for binary variables because it assumes that both variables are normally dist
55,425
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other?
Another option is Pearson's Chi-square test, which is appropriate for categorical variables. You can use it to test whether the variable Y is independent of X, in other words, whether X and Y are associated at all. However, you would need a sample larger than 10 values: for a 2x2 contingency table the usual concern is not the number of cells but the minimum expected count per cell (commonly 5), which such a small sample is unlikely to meet; Fisher's exact test is the standard alternative in that case.
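For completeness, a sketch of the chi-square and Fisher tests in R on the 2x2 table implied by the counts quoted earlier in this thread (4 of 4 versus 2 of 10 ratings of 4); treat the layout as an assumption about the data:
tab <- matrix(c(4, 0, 2, 8), nrow = 2, byrow = TRUE,
              dimnames = list(group = c("smartphone", "no smartphone"), rating = c("4", "3")))
chisq.test(tab)    # R will warn that the chi-square approximation may be poor here
fisher.test(tab)   # exact test, preferable with such small expected counts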
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other?
Another option is Pearson's Chi-square test, which is appropriate for categorical variables. You can use it to test whether the variable Y is independent of X, in other words, whether Y has any influe
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other? Another option is Pearson's Chi-square test, which is appropriate for categorical variables. You can use it to test whether the variable Y is independent of X, in other words, whether X and Y are associated at all. However, you would need a sample larger than 10 values: for a 2x2 contingency table the usual concern is not the number of cells but the minimum expected count per cell (commonly 5), which such a small sample is unlikely to meet; Fisher's exact test is the standard alternative in that case.
Choosing the right test when comparing two groups: Mann-Whitney, Kolmogorov-Smirnov or other? Another option is Pearson's Chi-square test, which is appropriate for categorical variables. You can use it to test whether the variable Y is independent of X, in other words, whether Y has any influe
55,426
Panel data model estimation with dummy variables
A possible reason might be that your dummies do not vary over time. In this case, the fixed effects estimator and first differencing will remove such variables from the model. The reason is that these estimators cannot identify variables that do not vary over time because those variables will be eliminated together with the unobserved fixed effects. As an alternative you can try to use random effects and use a Hausman test to see whether random effects gives significantly different results from fixed effects. If not, you can use random effects. One problem with this approach is that the models are only comparable if they include the same variables. So for the Hausman test you should first exclude the dummies from the random effects model, too.
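A hedged sketch of this workflow with the plm package, using its bundled Grunfeld data as a stand-in for your own panel (your variable and index names will differ):
library(plm)
data("Grunfeld", package = "plm")
fe <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "within")
re <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "random")
phtest(fe, re)   # Hausman test: a small p-value argues against the random effects model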
Panel data model estimation with dummy variables
A possible reason might be that your dummies do not vary over time. In this case, the fixed effects estimator and first differencing will remove such variables from the model. The reason is that these
Panel data model estimation with dummy variables A possible reason might be that your dummies do not vary over time. In this case, the fixed effects estimator and first differencing will remove such variables from the model. The reason is that these estimators cannot identify variables that do not vary over time because those variables will be eliminated together with the unobserved fixed effects. As an alternative you can try to use random effects and use a Hausman test to see whether random effects gives significantly different results from fixed effects. If not, you can use random effects. One problem with this approach is that the models are only comparable if they include the same variables. So for the Hausman test you should first exclude the dummies from the random effects model, too.
Panel data model estimation with dummy variables A possible reason might be that your dummies do not vary over time. In this case, the fixed effects estimator and first differencing will remove such variables from the model. The reason is that these
55,427
Panel data model estimation with dummy variables
Another possibility is that the dummy variables are (very close to) collinear. In this case, plm will automatically exclude the collinear variables from its output. One way to check this is to run the model, and check the aliased object: model.FE <- plm(income ~ area + weight + dproduct + dummy1 + dummy2 + dummy3, data = panel, model = "within") model.FE$aliased If model.FE$aliased reports TRUE for any of the dummies, then they are linearly dependent (or very close to being so). In that case, go back and check that dummy1, dummy2, and dummy3 are sufficiently different.
Panel data model estimation with dummy variables
Another possibility is that the dummy variables are (very close to) co-linear. In this case, plm will automatically exclude the co-linear variables from its output. One way to check this is to run the
Panel data model estimation with dummy variables Another possibility is that the dummy variables are (very close to) collinear. In this case, plm will automatically exclude the collinear variables from its output. One way to check this is to run the model, and check the aliased object: model.FE <- plm(income ~ area + weight + dproduct + dummy1 + dummy2 + dummy3, data = panel, model = "within") model.FE$aliased If model.FE$aliased reports TRUE for any of the dummies, then they are linearly dependent (or very close to being so). In that case, go back and check that dummy1, dummy2, and dummy3 are sufficiently different.
Panel data model estimation with dummy variables Another possibility is that the dummy variables are (very close to) co-linear. In this case, plm will automatically exclude the co-linear variables from its output. One way to check this is to run the
55,428
Problem in discrete valued time series forecasting
Returning to the discarded HMM approach. An HMM is just a state space model that assumes a discrete hidden state. I think you have dismissed the state space approach too soon. The forecasting question you bring up is quite general for such models, which I'll discuss in a discrete time formulation, since that's what you've got. (If you happen to know how Kalman filters work, use that intuition; they're used for very similar state space models when Gaussian assumptions apply.) State and the role of past observations The basic assumption of any state space model is that the future is independent of the past given the current (hidden) state, and that the observations at a point depend only on the hidden state. Consequently when you have the state you don't need the preceding observed values or states. (And if that isn't enough, you need a formulation with more state...) The state is intended to represent all the relevant information. Forecasting Conceptually speaking, to forecast, you start with a distribution over the current hidden state, and you use the estimated transition function -- a transition matrix in the HMM case -- to propagate this distribution forward in time. Once you've moved forward to the point in time where you want to forecast you have a (wider) distribution over the future state. (Alternatively in the HMM case you can find the most likely state to be in then, but you will have neglected quite a bit of uncertainty.) You then generate a predictive distribution over observables by taking a weighted average of the conditional distribution of the observations where the weights are from the distribution over the hidden state. Basically it's just like filtering, only without tightening up the state estimate on the basis of observations. On a cursory glance, the packages you mention don't seem to have a function for more than one-step-ahead forecasting (one-step-ahead prediction is there because it's needed to compute the likelihood), but now that you know what you want to do, you may be able to persuade them to do it, or write the relevant function yourself. This is described in mathematical detail in a bunch of places, e.g. ch. 5 of Zucchini and MacDonald (2008) for the HMM case. They also offer an R implementation. Petris and Petrone (2011) is a more general review of state space model packages in R, including those that work with non-discrete state models. These would be good places to start. Multivariate state space modelling is not particularly straightforward, but most of the difficulties are numerical and good implementations for the classic models are increasingly available.
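A numeric sketch of that forecasting recipe for a toy 2-state HMM in R; the transition matrix, Poisson emission means, and filtered state distribution below are all made up for illustration:
P      <- matrix(c(0.9, 0.1,
                   0.2, 0.8), nrow = 2, byrow = TRUE)   # assumed transition matrix
lambda <- c(1, 5)                                        # assumed Poisson emission means
phi    <- c(0.7, 0.3)                                    # assumed P(state | data so far)
h <- 3                                                   # forecast horizon
state_dist <- phi
for (i in seq_len(h)) state_dist <- state_dist %*% P     # propagate through the transitions
state_dist <- as.numeric(state_dist)
y <- 0:15                                                # support over which to evaluate
pred <- state_dist[1] * dpois(y, lambda[1]) + state_dist[2] * dpois(y, lambda[2])
round(pred, 3)                                           # predictive distribution over counts
sum(y * pred)                                            # predictive mean as a point forecast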
Problem in discrete valued time series forecasting
Returning to the discarded HMM approach. An HMM is just a state space model that assumes discrete hidden state. I think you have dismissed the state space approach too soon. The forecasting question
Problem in discrete valued time series forecasting Returning to the discarded HMM approach. An HMM is just a state space model that assumes a discrete hidden state. I think you have dismissed the state space approach too soon. The forecasting question you bring up is quite general for such models, which I'll discuss in a discrete time formulation, since that's what you've got. (If you happen to know how Kalman filters work, use that intuition; they're used for very similar state space models when Gaussian assumptions apply.) State and the role of past observations The basic assumption of any state space model is that the future is independent of the past given the current (hidden) state, and that the observations at a point depend only on the hidden state. Consequently when you have the state you don't need the preceding observed values or states. (And if that isn't enough, you need a formulation with more state...) The state is intended to represent all the relevant information. Forecasting Conceptually speaking, to forecast, you start with a distribution over the current hidden state, and you use the estimated transition function -- a transition matrix in the HMM case -- to propagate this distribution forward in time. Once you've moved forward to the point in time where you want to forecast you have a (wider) distribution over the future state. (Alternatively in the HMM case you can find the most likely state to be in then, but you will have neglected quite a bit of uncertainty.) You then generate a predictive distribution over observables by taking a weighted average of the conditional distribution of the observations where the weights are from the distribution over the hidden state. Basically it's just like filtering, only without tightening up the state estimate on the basis of observations. On a cursory glance, the packages you mention don't seem to have a function for more than one-step-ahead forecasting (one-step-ahead prediction is there because it's needed to compute the likelihood), but now that you know what you want to do, you may be able to persuade them to do it, or write the relevant function yourself. This is described in mathematical detail in a bunch of places, e.g. ch. 5 of Zucchini and MacDonald (2008) for the HMM case. They also offer an R implementation. Petris and Petrone (2011) is a more general review of state space model packages in R, including those that work with non-discrete state models. These would be good places to start. Multivariate state space modelling is not particularly straightforward, but most of the difficulties are numerical and good implementations for the classic models are increasingly available.
Problem in discrete valued time series forecasting Returning to the discarded HMM approach. An HMM is just a state space model that assumes discrete hidden state. I think you have dismissed the state space approach too soon. The forecasting question
55,429
Problem in discrete valued time series forecasting
The airline series is a count series, i.e. the number of people flying per month. You could consider your series to be a count series as well (in reality it is an attribute/class series), but at a much lower level (1, 2, 3, 4, 5, 6). Now if you were to identify and estimate an ARIMA model, the problem arises insofar as the fitted values (1-period-out predictions) are not constrained to be integers, which represents a slight problem. I once wrote software to constrain the fitted values to be integers, but parameter convergence disappeared. I have studied rare disease data on a monthly basis that was similar to your "count series" and have been able to identify useful seasonal patterns yielding a prediction that will be a non-integer. Rounding the forecast to an integer serves the purpose of providing predictions that will be integers. One can identify level shifts and/or local time trends in this kind of data and also, to some extent, anomalies. What I am suggesting is imperfect, but until someone tells me a better way, here I stand. I would be more than happy to provide you with an example using existing software if you posted an actual historical series. REVISED with data analysis: The plot of the 34 data points and the ACF of the data suggest that the data might be daily, as the ACF at lag 7 suggests structure. A model was automatically identified with an autoregressive memory of order 2 and an indicator variable for day 1 of each week, with two pulses (anomalies) identified at periods 14 and 11. The plot of actual versus cleansed values (not reproduced here) highlights the two unusual data points. The residuals from the model appear visually to be "random", and the ACF of the residuals appears to support that conclusion. The actual/fit/forecast plot (not reproduced here) shows the fitted model and its projections. This model can be expressed in pure right-hand-side form, like a regression equation. Note that when predicting, the pulse indicators play no role, as future values of these indicators are expected to be 0, while the future value of the day-1-of-week indicator will be zero except on the first day of each week. For example, if we are predicting out 14 days from period 34, this would mean 0,1,0,0,0,0,0,1,0,0,0,0,0,0. Note that all models are wrong but some models are useful (attributed to G.E.P. Box). THE RIGHT-HAND SIDE CONSTANT IS 3.5646; the lag-1 coefficient .649306 applied to Y(34) = 6.000000 contributes 3.895837; the lag-2 coefficient -.539719 applied to Y(33) = 5.000000 contributes -2.698596; NET PREDICTION FOR Y(35) = 4.761866. The above is a prediction for period 35. Note that you would have to round off the forecasts in order to meet the integer requirement; thus the prediction would be "5". I hope this helps. In closing, a summary table of the model was also produced (not reproduced here).
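As a hedged illustration of the fit-then-round idea in R (using the forecast package as a stand-in for the software used above, and a made-up integer series since the actual 34 observations are not reproduced here):
library(forecast)
set.seed(3)
y   <- ts(pmax(1, round(3 + sin(2 * pi * (1:34) / 7) + rnorm(34, sd = 0.7))), frequency = 7)
fit <- auto.arima(y)        # automatic identification; not the same engine as above
fc  <- forecast(fit, h = 14)
round(fc$mean)              # round the point forecasts to meet the integer requirement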
Problem in discrete valued time series forecasting
The airline series is a count series i.e the number of people flying per month. You could consider your series to be also a count series ( in reality it is an attribute/class series) but at a much low
Problem in discrete valued time series forecasting The airline series is a count series, i.e. the number of people flying per month. You could consider your series to be a count series as well (in reality it is an attribute/class series), but at a much lower level (1, 2, 3, 4, 5, 6). Now if you were to identify and estimate an ARIMA model, the problem arises insofar as the fitted values (1-period-out predictions) are not constrained to be integers, which represents a slight problem. I once wrote software to constrain the fitted values to be integers, but parameter convergence disappeared. I have studied rare disease data on a monthly basis that was similar to your "count series" and have been able to identify useful seasonal patterns yielding a prediction that will be a non-integer. Rounding the forecast to an integer serves the purpose of providing predictions that will be integers. One can identify level shifts and/or local time trends in this kind of data and also, to some extent, anomalies. What I am suggesting is imperfect, but until someone tells me a better way, here I stand. I would be more than happy to provide you with an example using existing software if you posted an actual historical series. REVISED with data analysis: The plot of the 34 data points and the ACF of the data suggest that the data might be daily, as the ACF at lag 7 suggests structure. A model was automatically identified with an autoregressive memory of order 2 and an indicator variable for day 1 of each week, with two pulses (anomalies) identified at periods 14 and 11. The plot of actual versus cleansed values (not reproduced here) highlights the two unusual data points. The residuals from the model appear visually to be "random", and the ACF of the residuals appears to support that conclusion. The actual/fit/forecast plot (not reproduced here) shows the fitted model and its projections. This model can be expressed in pure right-hand-side form, like a regression equation. Note that when predicting, the pulse indicators play no role, as future values of these indicators are expected to be 0, while the future value of the day-1-of-week indicator will be zero except on the first day of each week. For example, if we are predicting out 14 days from period 34, this would mean 0,1,0,0,0,0,0,1,0,0,0,0,0,0. Note that all models are wrong but some models are useful (attributed to G.E.P. Box). THE RIGHT-HAND SIDE CONSTANT IS 3.5646; the lag-1 coefficient .649306 applied to Y(34) = 6.000000 contributes 3.895837; the lag-2 coefficient -.539719 applied to Y(33) = 5.000000 contributes -2.698596; NET PREDICTION FOR Y(35) = 4.761866. The above is a prediction for period 35. Note that you would have to round off the forecasts in order to meet the integer requirement; thus the prediction would be "5". I hope this helps. In closing, a summary table of the model was also produced (not reproduced here).
Problem in discrete valued time series forecasting The airline series is a count series i.e the number of people flying per month. You could consider your series to be also a count series ( in reality it is an attribute/class series) but at a much low
55,430
How to retrieve model-specific information when using the caret package?
The train function in caret returns an S3 list object. One of the items in this list is called finalModel. This item is just the fitted object in its native form. So if you trained a gbm using train and stored it in an object called my.fit, then summary(my.fit$finalModel) would use the summary.gbm method from the gbm package and show you the relative variable importance.
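A minimal, hedged example (the simulated data and tuning settings are arbitrary; it only illustrates that finalModel is the native gbm object):
library(caret)
library(gbm)
set.seed(4)
d <- twoClassSim(200)        # simulated two-class data shipped with caret
my.fit <- train(Class ~ ., data = d, method = "gbm", verbose = FALSE,
                trControl = trainControl(method = "cv", number = 3))
summary(my.fit$finalModel)   # dispatches to summary.gbm: relative variable influence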
How to retrieve model-specific information when using the caret package?
The train function in caret returns an S3 list object. On of the items in this list is called finalModel. This item is just the fitted object in it's native form. So if you trained a gbm using trai
How to retrieve model-specific information when using the caret package? The train function in caret returns an S3 list object. One of the items in this list is called finalModel. This item is just the fitted object in its native form. So if you trained a gbm using train and stored it in an object called my.fit, then summary(my.fit$finalModel) would use the summary.gbm method from the gbm package and show you the relative variable importance.
How to retrieve model-specific information when using the caret package? The train function in caret returns an S3 list object. On of the items in this list is called finalModel. This item is just the fitted object in it's native form. So if you trained a gbm using trai
55,431
How can I discern whether a classifier's outcome is significantly different?
It's hard to derive anything of meaning from what is essentially 5 data points. I would use a Hamming distance from a reference vector and accept any results that are less than n away from that. It's not perfect, and it's probably not publishable, but you can get an idea for your own benchmarking and perhaps cross-validation purposes.
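A tiny sketch of that check in R, with made-up 0/1 outcome vectors and an arbitrary tolerance n:
reference <- c(1, 0, 1, 1, 0)        # reference classifier's outcomes (invented)
candidate <- c(1, 1, 1, 0, 0)        # another classifier's outcomes (invented)
hamming   <- sum(reference != candidate)
n <- 2                               # tolerance you choose for your own benchmarking
hamming <= n                         # accept the candidate if it is within n disagreements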
How can I discern whether a classifier's outcome is significantly different?
It's hard to derive anything of meaning from what is essentially 5 data points. I would use a Hamming distance from a reference vector and accept any results that are less than n away from that. I
How can I discern whether a classifier's outcome is significantly different? It's hard to derive anything of meaning from what is essentially 5 data points. I would use a Hamming distance from a reference vector and accept any results that are less than n away from that. It's not perfect, and it's probably not publishable, but you can get an idea for your own benchmarking and perhaps cross-validation purposes.
How can I discern whether a classifier's outcome is significantly different? It's hard to derive anything of meaning from what is essentially 5 data points. I would use a Hamming distance from a reference vector and accept any results that are less than n away from that. I
55,432
How can I discern whether a classifier's outcome is significantly different?
Firstly I'd agree with @jonsca - there's no way to derive anything from so few data points. When you posted on stats.SE, you made a comment that: I would like to do a statistic test to test whether those results are different from each other ( which I don't know exactly how to do, this is why I'm asking here....). from the classifier which are different from each other I would like to construct an Ensemble base system as described here: users.rowan.edu/~polikar/RESEARCH/PUBLICATIONS/csm06.pdf am I missing something here ? So I'll try to answer from that point of view. If you're going to train an ensemble classifier, such as AdaBoost, you only need to worry about statistical differences between classifiers if you're worried that there isn't enough diversity in the set of weak learners. If all of the weak learners are essentially the same, then clearly the strongest learner you can create is only as strong as the strongest weak learner. In your case, with so few base classifiers, and what looks like a diverse set, I wouldn't worry about it and I would just try plugging it into AdaBoost. However, again I would say that with so few data points you are unlikely to get anything meaningful out.
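One informal way to eyeball diversity before boosting - not from the paper above, just a sketch with invented prediction vectors - is the pairwise disagreement rate between the base classifiers on the same test cases:
preds <- rbind(clf1 = c(1, 0, 1, 1, 0),
               clf2 = c(1, 1, 1, 0, 0),
               clf3 = c(0, 0, 1, 1, 1))   # rows: classifiers, columns: test cases (invented)
k <- nrow(preds)
disagree <- matrix(0, k, k, dimnames = list(rownames(preds), rownames(preds)))
for (i in 1:k) for (j in 1:k) disagree[i, j] <- mean(preds[i, ] != preds[j, ])
disagree   # larger off-diagonal values suggest more diverse weak learners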
How can I discern whether a classifier's outcome is significantly different?
Firstly I'd agree with @jonsca - there's now way to derive anything from so few data points. When you posted on stats.SE, you made a comment that: I would like to do a statistic test to test whether
How can I discern whether a classifier's outcome is significantly different? Firstly I'd agree with @jonsca - there's no way to derive anything from so few data points. When you posted on stats.SE, you made a comment that: I would like to do a statistic test to test whether those results are different from each other ( which I don't know exactly how to do, this is why I'm asking here....). from the classifier which are different from each other I would like to construct an Ensemble base system as described here: users.rowan.edu/~polikar/RESEARCH/PUBLICATIONS/csm06.pdf am I missing something here ? So I'll try to answer from that point of view. If you're going to train an ensemble classifier, such as AdaBoost, you only need to worry about statistical differences between classifiers if you're worried that there isn't enough diversity in the set of weak learners. If all of the weak learners are essentially the same, then clearly the strongest learner you can create is only as strong as the strongest weak learner. In your case, with so few base classifiers, and what looks like a diverse set, I wouldn't worry about it and I would just try plugging it into AdaBoost. However, again I would say that with so few data points you are unlikely to get anything meaningful out.
How can I discern whether a classifier's outcome is significantly different? Firstly I'd agree with @jonsca - there's now way to derive anything from so few data points. When you posted on stats.SE, you made a comment that: I would like to do a statistic test to test whether
55,433
Convergence in distribution, probability, and 2nd mean
To prove that $X_n \stackrel{P}{\rightarrow} X$ you have to show that, for any given $\epsilon>0$, the sequence of real numbers $\{p_n\}_{n\geq 1}$ defined by $$ p_n = P\{\omega:|X_n(\omega)-X(\omega)|\geq \epsilon\} $$ has limit $0$. What happens at the ``beginning'' of the sequence does not change its limit. For $n\geq \log(1+\epsilon)$, we have $$ P\{\omega:|X_n(\omega)-X(\omega)|\geq \epsilon\} = P\{\omega:X_n(\omega)=e^n\}=\frac{1}{n} \, . $$ Hence, $p_n\to 0$, and therefore $X_n \stackrel{P}{\rightarrow} X$. Since convergence in probability implies convergence in distribution, we also have $X_n \leadsto X$. Now, since $$ \begin{align*} \mathbb{E}[(X_n-X)^2]&=0^2\times\left(1-\frac{1}{n}\right) + (e^n+1)^2\times\frac{1}{n}\times\frac{1}{2}+(e^n-1)^2\times\frac{1}{n}\times\frac{1}{2}\\ & = \frac{(e^{2n}+1)}{n}\to\infty \, , \end{align*} $$ we don't have convergence in quadratic mean.
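A small Monte Carlo sketch of both conclusions in R, under the setup the calculation above appears to assume (X equal to +1 or -1 with probability 1/2 each, and X_n equal to e^n with probability 1/n and equal to X otherwise); if the exercise defines the variables differently, adjust accordingly:
sim <- function(n, reps = 1e5) {
  X  <- sample(c(-1, 1), reps, replace = TRUE)
  Xn <- ifelse(runif(reps) < 1 / n, exp(n), X)
  c(prob = mean(abs(Xn - X) >= 0.5),   # estimates P(|X_n - X| >= eps) for eps = 0.5
    msqe = mean((Xn - X)^2))           # estimates E[(X_n - X)^2]
}
set.seed(7)
t(sapply(c(5, 20, 80), sim))
# the first column shrinks like 1/n (convergence in probability),
# while the second column explodes (no convergence in quadratic mean)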
Convergence in distribution, probability, and 2nd mean
To prove that $X_n \stackrel{P}{\rightarrow} X$ you have to show that, for any given $\epsilon>0$, the sequence of real numbers $\{p_n\}_{n\geq 1}$ defined by $$ p_n = P\{\omega:|X_n(\omega)-X(\omeg
Convergence in distribution, probability, and 2nd mean To prove that $X_n \stackrel{P}{\rightarrow} X$ you have to show that, for any given $\epsilon>0$, the sequence of real numbers $\{p_n\}_{n\geq 1}$ defined by $$ p_n = P\{\omega:|X_n(\omega)-X(\omega)|\geq \epsilon\} $$ has limit $0$. What happens at the ``beginning'' of the sequence does not change its limit. For $n\geq \log(1+\epsilon)$, we have $$ P\{\omega:|X_n(\omega)-X(\omega)|\geq \epsilon\} = P\{\omega:X_n(\omega)=e^n\}=\frac{1}{n} \, . $$ Hence, $p_n\to 0$, and therefore $X_n \stackrel{P}{\rightarrow} X$. Since convergence in probability implies convergence in distribution, we also have $X_n \leadsto X$. Now, since $$ \begin{align*} \mathbb{E}[(X_n-X)^2]&=0^2\times\left(1-\frac{1}{n}\right) + (e^n+1)^2\times\frac{1}{n}\times\frac{1}{2}+(e^n-1)^2\times\frac{1}{n}\times\frac{1}{2}\\ & = \frac{(e^{2n}+1)}{n}\to\infty \, , \end{align*} $$ we don't have convergence in quadratic mean.
Convergence in distribution, probability, and 2nd mean To prove that $X_n \stackrel{P}{\rightarrow} X$ you have to show that, for any given $\epsilon>0$, the sequence of real numbers $\{p_n\}_{n\geq 1}$ defined by $$ p_n = P\{\omega:|X_n(\omega)-X(\omeg
55,434
Create positive-definite 3x3 covariance matrix given specified correlation values
To follow up on @cardinal's comment: your $x$, $y$, and $z$ define a $(3 \times 3)$ correlation matrix $R$. Since a correlation matrix also is a possible covariance matrix (of standardized variables), it has to be positive definite. This is the case if all eigenvalues are $> 0$. If $R$ is indeed positive definite, then all vectors $\boldsymbol{s}$ of variances (i.e., numbers $> 0$) will turn $\boldsymbol{R}$ into a positive definite covariance matrix $\boldsymbol{\Sigma} = \boldsymbol{D}_{s}^{1/2} \boldsymbol{R} \boldsymbol{D}_{s}^{1/2}$, where $\boldsymbol{D}_{s}^{1/2}$ is the square root of the diagonal matrix made from $\boldsymbol{s}$. So just construct $R$ from $x, y, z$, and check if the eigenvalues are all $> 0$. If so, you're good, and you can transform any set of data to have a corresponding covariance matrix with arbitrary variances: x <- 0.5 y <- 0.3 # changing this to -0.6 makes it not pos.def. z <- 0.4 R <- matrix(numeric(3*3), nrow=3) # will be the correlation matrix diag(R) <- 1 # set diagonal to 1 R[upper.tri(R)] <- c(x, y, z) # fill in x, y, z to upper right R[lower.tri(R)] <- c(x, y, z) # fill in x, y, z to lower left eigen(R)$values # get eigenvalues to check if pos.def. gives [1] 1.8055810 0.7124457 0.4819732 So our $\boldsymbol{R}$ here is positive definite. Now construct the corresponding covariance matrix from arbitrary variances. vars <- c(4, 16, 9) # the variances Sigma <- diag(sqrt(vars)) %*% R %*% diag(sqrt(vars)) Generate some data matrix $\boldsymbol{X}$ that we will transform to later have exactly that covariance matrix. library(mvtnorm) # for rmvnorm() N <- 100 # number of simulated observations mu <- c(1, 2, 3) # some arbitrary centroid X <- round(rmvnorm(n=N, mean=mu, sigma=Sigma)) To do that, we first orthonormalize matrix $\boldsymbol{X}$, giving matrix $\boldsymbol{Y}$ with covariance matrix $\boldsymbol{I}$ (identity). orthGS <- function(X) { # implement Gram-Schmidt algorithm Id <- diag(nrow(X)) for(i in 2:ncol(X)) { A <- X[ , 1:(i-1), drop=FALSE] Q <- qr.Q(qr(A)) P <- tcrossprod(Q) X[ , i] <- (Id-P) %*% X[ , i] } scale(X, center=FALSE, scale=sqrt(colSums(X^2))) } Xctr <- scale(X, center=TRUE, scale=FALSE) # centered version of X Y <- orthGS(Xctr) # Y is orthonormal Transform matrix $\boldsymbol{Y}$ to have covariance matrix $\boldsymbol{\Sigma}$ and centroid $\boldsymbol{\mu}$. Edit: what's going on here: Do a spectral decomposition $\boldsymbol{\Sigma} = \boldsymbol{G} \boldsymbol{D} \boldsymbol{G}^{t}$, where $\boldsymbol{G}$ is the matrix of normalized eigenvectors of $\boldsymbol{\Sigma}$, and $\boldsymbol{D}$ is the corresponding matrix of eigenvalues. Now matrix $\boldsymbol{G} \boldsymbol{D}^{1/2} \boldsymbol{Y}$ has covariance matrix $\boldsymbol{G} \boldsymbol{D}^{1/2} Cov(\boldsymbol{Y}) \boldsymbol{D}^{1/2} \boldsymbol{G}^{t} = \boldsymbol{G} \boldsymbol{D} \boldsymbol{G}^{t} = \boldsymbol{\Sigma}$, as $Cov(\boldsymbol{Y}) = \boldsymbol{I}$. eig <- eigen(Sigma) A <- eig$vectors %*% sqrt(diag(eig$values)) XX1ctr <- t(A %*% t(Y)) * sqrt(nrow(Y)) XX1 <- sweep(XX1ctr, 2, mu, "+") # move centroid to mu Check that the correlation matrix is really $\boldsymbol{R}$. > all.equal(cor(XX1), R) [1] TRUE For other purposes, the question might now be: How do I find a positive definite matrix that is "very similar" to a pre-specified one that is not positive definite. That I don't know. Edit: corrected some square roots
Create positive-definite 3x3 covariance matrix given specified correlation values
To follow up on @cardinal's comment: your $x$, $y$, and $z$ define a $(3 \times 3)$ correlation matrix $R$. Since a correlation matrix also is a possible covariance matrix (of standardized variables),
Create positive-definite 3x3 covariance matrix given specified correlation values To follow up on @cardinal's comment: your $x$, $y$, and $z$ define a $(3 \times 3)$ correlation matrix $R$. Since a correlation matrix also is a possible covariance matrix (of standardized variables), it has to be positive definite. This is the case if all eigenvalues are $> 0$. If $R$ is indeed positive definite, then all vectors $\boldsymbol{s}$ of variances (i.e., numbers $> 0$) will turn $\boldsymbol{R}$ into a positive definite covariance matrix $\boldsymbol{\Sigma} = \boldsymbol{D}_{s}^{1/2} \boldsymbol{R} \boldsymbol{D}_{s}^{1/2}$, where $\boldsymbol{D}_{s}^{1/2}$ is the square root of the diagonal matrix made from $\boldsymbol{s}$. So just construct $R$ from $x, y, z$, and check if the eigenvalues are all $> 0$. If so, you're good, and you can transform any set of data to have a corresponding covariance matrix with arbitrary variances: x <- 0.5 y <- 0.3 # changing this to -0.6 makes it not pos.def. z <- 0.4 R <- matrix(numeric(3*3), nrow=3) # will be the correlation matrix diag(R) <- 1 # set diagonal to 1 R[upper.tri(R)] <- c(x, y, z) # fill in x, y, z to upper right R[lower.tri(R)] <- c(x, y, z) # fill in x, y, z to lower left eigen(R)$values # get eigenvalues to check if pos.def. gives [1] 1.8055810 0.7124457 0.4819732 So our $\boldsymbol{R}$ here is positive definite. Now construct the corresponding covariance matrix from arbitrary variances. vars <- c(4, 16, 9) # the variances Sigma <- diag(sqrt(vars)) %*% R %*% diag(sqrt(vars)) Generate some data matrix $\boldsymbol{X}$ that we will transform to later have exactly that covariance matrix. library(mvtnorm) # for rmvnorm() N <- 100 # number of simulated observations mu <- c(1, 2, 3) # some arbitrary centroid X <- round(rmvnorm(n=N, mean=mu, sigma=Sigma)) To do that, we first orthonormalize matrix $\boldsymbol{X}$, giving matrix $\boldsymbol{Y}$ with covariance matrix $\boldsymbol{I}$ (identity). orthGS <- function(X) { # implement Gram-Schmidt algorithm Id <- diag(nrow(X)) for(i in 2:ncol(X)) { A <- X[ , 1:(i-1), drop=FALSE] Q <- qr.Q(qr(A)) P <- tcrossprod(Q) X[ , i] <- (Id-P) %*% X[ , i] } scale(X, center=FALSE, scale=sqrt(colSums(X^2))) } Xctr <- scale(X, center=TRUE, scale=FALSE) # centered version of X Y <- orthGS(Xctr) # Y is orthonormal Transform matrix $\boldsymbol{Y}$ to have covariance matrix $\boldsymbol{\Sigma}$ and centroid $\boldsymbol{\mu}$. Edit: what's going on here: Do a spectral decomposition $\boldsymbol{\Sigma} = \boldsymbol{G} \boldsymbol{D} \boldsymbol{G}^{t}$, where $\boldsymbol{G}$ is the matrix of normalized eigenvectors of $\boldsymbol{\Sigma}$, and $\boldsymbol{D}$ is the corresponding matrix of eigenvalues. Now matrix $\boldsymbol{G} \boldsymbol{D}^{1/2} \boldsymbol{Y}$ has covariance matrix $\boldsymbol{G} \boldsymbol{D}^{1/2} Cov(\boldsymbol{Y}) \boldsymbol{D}^{1/2} \boldsymbol{G}^{t} = \boldsymbol{G} \boldsymbol{D} \boldsymbol{G}^{t} = \boldsymbol{\Sigma}$, as $Cov(\boldsymbol{Y}) = \boldsymbol{I}$. eig <- eigen(Sigma) A <- eig$vectors %*% sqrt(diag(eig$values)) XX1ctr <- t(A %*% t(Y)) * sqrt(nrow(Y)) XX1 <- sweep(XX1ctr, 2, mu, "+") # move centroid to mu Check that the correlation matrix is really $\boldsymbol{R}$. > all.equal(cor(XX1), R) [1] TRUE For other purposes, the question might now be: How do I find a positive definite matrix that is "very similar" to a pre-specified one that is not positive definite. That I don't know. Edit: corrected some square roots
Create positive-definite 3x3 covariance matrix given specified correlation values To follow up on @cardinal's comment: your $x$, $y$, and $z$ define a $(3 \times 3)$ correlation matrix $R$. Since a correlation matrix also is a possible covariance matrix (of standardized variables),
55,435
Create positive-definite 3x3 covariance matrix given specified correlation values
If you're not given a complete collection of correlations, but only a partial set, then you can construct a semidefinite programming feasibility problem that can be used to determine whether or not there is a correlation matrix that has the specified correlations. This can be solved by various SDP solvers such as SeDuMi and SDPT3.
Create positive-definite 3x3 covariance matrix given specified correlation values
If you're not given a complete collection of correlations, but only a partial set, then you can construct a semidefinite programming feasibility problem that can be used to determine whether or not th
Create positive-definite 3x3 covariance matrix given specified correlation values If you're not given a complete collection of correlations, but only a partial set, then you can construct a semidefinite programming feasibility problem that can be used to determine whether or not there is a correlation matrix that has the specified correlations. This can be solved by various SDP solvers such as SeDuMi and SDPT3.
Create positive-definite 3x3 covariance matrix given specified correlation values If you're not given a complete collection of correlations, but only a partial set, then you can construct a semidefinite programming feasibility problem that can be used to determine whether or not th
55,436
How to avoid and handle survey non-response?
In some cases it is possible to make adjustments and "correct" survey non-response. The topic is discussed, among other places, in this post. However, your situation seems to be a bit special. You have a list of users, but no information about their profiles. The best thing to do about non-response, be it in your particular case or in a more general case, is to avoid it. When designing a survey a couple of measures can be taken. What follows is just some general advice. You have to see what can be applied in your particular situation. When designing the questionnaire you have to stick firmly to the "KISS" principle: keep it short and simple! Only ask for the items and characteristics you really need. Only ask for the detail you really need. For instance, rather than asking for an exact birthday, ask for the age in years; rather than asking for a city, ask for a broader region; and so on. Make sure that your questions are intelligible and rather straightforward to answer. The collection mode can also have an impact on the response rate. Some people might claim that face-to-face interviews yield the best rates. However, depending on the topic, respondents might feel more at ease with more anonymous online or telephone surveys. Incentives may increase the response rate. The respondents can either be paid for taking part, or a prize draw can be organized among them. If interviewers are used, they have to convince people to respond. Similarly, follow-up and reminders are very important. If, after a first contact, a person did not categorically refuse to take part in the survey, one or more further attempts should be made. Trust is crucial. If you want to obtain some more confidential information, or information that is perceived as such by the respondents, you have to be trustworthy. You need to guarantee that confidential data will be treated as such.
How to avoid and handle survey non-response?
In some cases it is possible to make adjustments and "correct" survey non-response. The topic is i.a. discussed in this post. However, your situation seems to be a bit special. You have a list of user
How to avoid and handle survey non-response? In some cases it is possible to make adjustments and "correct" survey non-response. The topic is discussed, among other places, in this post. However, your situation seems to be a bit special. You have a list of users, but no information about their profiles. The best thing to do about non-response, be it in your particular case or in a more general case, is to avoid it. When designing a survey a couple of measures can be taken. What follows is just some general advice. You have to see what can be applied in your particular situation. When designing the questionnaire you have to stick firmly to the "KISS" principle: keep it short and simple! Only ask for the items and characteristics you really need. Only ask for the detail you really need. For instance, rather than asking for an exact birthday, ask for the age in years; rather than asking for a city, ask for a broader region; and so on. Make sure that your questions are intelligible and rather straightforward to answer. The collection mode can also have an impact on the response rate. Some people might claim that face-to-face interviews yield the best rates. However, depending on the topic, respondents might feel more at ease with more anonymous online or telephone surveys. Incentives may increase the response rate. The respondents can either be paid for taking part, or a prize draw can be organized among them. If interviewers are used, they have to convince people to respond. Similarly, follow-up and reminders are very important. If, after a first contact, a person did not categorically refuse to take part in the survey, one or more further attempts should be made. Trust is crucial. If you want to obtain some more confidential information, or information that is perceived as such by the respondents, you have to be trustworthy. You need to guarantee that confidential data will be treated as such.
How to avoid and handle survey non-response? In some cases it is possible to make adjustments and "correct" survey non-response. The topic is i.a. discussed in this post. However, your situation seems to be a bit special. You have a list of user
55,437
How to avoid and handle survey non-response?
Make it optional to answer the gender question. That way, it's more likely to be accurate. You will never know for sure the exact number of males and females because there will always be some sort of sampling error in your data -- you can't avoid that unless you interviewed every single customer. Selecting how many samples you need for the correct confidence interval requires a calculator (like this one http://www.macorr.com/sample-size-calculator.htm).
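For reference, a sketch in R of the calculation such sample-size calculators perform for a proportion (the confidence level, margin of error, and population size below are assumptions):
conf <- 0.95; moe <- 0.05; p <- 0.5            # p = 0.5 is the most conservative guess
z <- qnorm(1 - (1 - conf) / 2)
n <- ceiling(z^2 * p * (1 - p) / moe^2)        # about 385 without a finite population correction
N <- 2000                                      # hypothetical number of customers
n_fpc <- ceiling(n / (1 + (n - 1) / N))        # with the finite population correction
c(n = n, n_fpc = n_fpc)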
How to avoid and handle survey non-response?
Make it optional to answer the gender question. That way, it's more likely to be accurate. You will never know for sure the exact number of males and females because there will always be some sort of
How to avoid and handle survey non-response? Make it optional to answer the gender question. That way, it's more likely to be accurate. You will never know for sure the exact number of males and females because there will always be some sort of sampling error in your data -- you can't avoid that unless you interviewed every single customer. Selecting how many samples you need for the correct confidence interval requires a calculator (like this one http://www.macorr.com/sample-size-calculator.htm).
How to avoid and handle survey non-response? Make it optional to answer the gender question. That way, it's more likely to be accurate. You will never know for sure the exact number of males and females because there will always be some sort of
55,438
How to avoid and handle survey non-response?
You called them customers. What is the nature of the relationship? Is it possible to gather the information for billing purposes? Of course if you are dealing with businesses the contact/billing person is the choice of the business, they may not be the decision maker.
How to avoid and handle survey non-response?
You called them customers. What is the nature of the relationship? Is it possible to gather the information for billing purposes? Of course if you are dealing with businesses the contact/billing perso
How to avoid and handle survey non-response? You called them customers. What is the nature of the relationship? Is it possible to gather the information for billing purposes? Of course if you are dealing with businesses the contact/billing person is the choice of the business, they may not be the decision maker.
How to avoid and handle survey non-response? You called them customers. What is the nature of the relationship? Is it possible to gather the information for billing purposes? Of course if you are dealing with businesses the contact/billing perso
55,439
How to avoid and handle survey non-response?
This may be obvious, but I didn't notice it mentioned. I'd look for gender differences on every item on the survey--through comparisons of means, correlations, crosstabs with chi-square tests, even perhaps with logistic regression. If you find no substantial differences, great: gender doesn't matter to representativeness, at least when it comes to this survey's set of variables. If you do find big differences, you can weight your data so that aggregate results, at least, will be more in line with the proportion of (fe)males that you estimate (if you can) as existing in the larger population.
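A hedged sketch of simple post-stratification weighting by gender in R; the respondent data are simulated and the 50/50 population split is an assumption you would replace with your own estimate:
set.seed(5)
resp <- data.frame(gender = sample(c("F", "M"), 200, replace = TRUE, prob = c(0.7, 0.3)),
                   score  = rnorm(200, mean = 3.5))
pop_share    <- c(F = 0.5, M = 0.5)                      # assumed population proportions
sample_share <- prop.table(table(resp$gender))           # observed sample proportions
resp$w <- as.numeric(pop_share[as.character(resp$gender)] / sample_share[as.character(resp$gender)])
weighted.mean(resp$score, resp$w)                        # compare with the raw mean(resp$score)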
How to avoid and handle survey non-response?
This may be obvious, but I didn't notice it mentioned. I'd look for gender differences on every item on the survey--through comparisons of means, correlations, crosstabs with chi-square tests, even p
How to avoid and handle survey non-response? This may be obvious, but I didn't notice it mentioned. I'd look for gender differences on every item on the survey--through comparisons of means, correlations, crosstabs with chi-square tests, even perhaps with logistic regression. If you find no substantial differences, great: gender doesn't matter to representativeness, at least when it comes to this survey's set of variables. If you do find big differences, you can weight your data so that aggregate results, at least, will be more in line with the proportion of (fe)males that you estimate (if you can) as existing in the larger population.
How to avoid and handle survey non-response? This may be obvious, but I didn't notice it mentioned. I'd look for gender differences on every item on the survey--through comparisons of means, correlations, crosstabs with chi-square tests, even p
55,440
P values seem to be the wrong way around
The p value is the probability of making a certain observation - or a more extreme one - due to chance, given that the observation is drawn from the original distribution. Normally (haha) you would expect observations to be pretty close to the distribution's arithmetic mean. If an observation is far from the mean it can be because of two things: a) the observation occurred due to chance - this has to be possible; otherwise the distribution's probability density function would be 0 away from the mean. Or b) the observation is different because it is generated by a different distribution with a mean that differs from the original one. So a small p value means that an observation that far from the original distribution's mean is super unlikely, if it was generated by that distribution. Totally possible but really unlikely (i.e. with probability p). Now there is a convention in one branch of the statistics community to consider p values under a certain threshold to be evidence that the observation is from a distribution different from the original one. That threshold is called "alpha" and is set prior to an experiment. So if alpha is, e.g., 0.05 and p is below that, we take this as evidence that the observation is from a different distribution. That's the story for p values for means. When it comes to correlations, the strength of the correlation (which is the observation in this case) is given by a different quantity - e.g. r. Having a high r means the correlation is strong. But even high values of r can have different values of p - high ones and low ones. With a low p value we think that it is unlikely to get that kind of r due to chance. A high p value can be a sign of a great amount of noise in the data, so that the correlation itself is high but the probability that this is due to chance is also high. You can also have very low values of r with high certainty (low values of p). So the strength of correlations is not measured by p.
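To see the r-versus-p distinction concretely, here is a small simulation in R (the slopes and sample sizes are arbitrary):
set.seed(6)
big   <- data.frame(x = rnorm(1000)); big$y   <- 0.15 * big$x + rnorm(1000)
small <- data.frame(x = rnorm(6));    small$y <- 0.80 * small$x + rnorm(6)
cor.test(big$x, big$y)       # r is modest, but p is tiny because n is large
cor.test(small$x, small$y)   # r may be large, yet p can stay well above 0.05 with 6 points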
P values seem to be the wrong way around
The p value is the probability of making a certain observation - or more extreme - due to chance, given that the observation is drawn from the original distribution. Normally (haha) you would expect
P values seem to be the wrong way around The p value is the probability of making a certain observation - or a more extreme one - due to chance, given that the observation is drawn from the original distribution. Normally (haha) you would expect observations to be pretty close to the distribution's arithmetic mean. If an observation is far from the mean it can be because of two things: a) the observation occurred due to chance - this has to be possible; otherwise the distribution's probability density function would be 0 away from the mean. Or b) the observation is different because it is generated by a different distribution with a mean that differs from the original one. So a small p value means that an observation that far from the original distribution's mean is super unlikely, if it was generated by that distribution. Totally possible but really unlikely (i.e. with probability p). Now there is a convention in one branch of the statistics community to consider p values under a certain threshold to be evidence that the observation is from a distribution different from the original one. That threshold is called "alpha" and is set prior to an experiment. So if alpha is, e.g., 0.05 and p is below that, we take this as evidence that the observation is from a different distribution. That's the story for p values for means. When it comes to correlations, the strength of the correlation (which is the observation in this case) is given by a different quantity - e.g. r. Having a high r means the correlation is strong. But even high values of r can have different values of p - high ones and low ones. With a low p value we think that it is unlikely to get that kind of r due to chance. A high p value can be a sign of a great amount of noise in the data, so that the correlation itself is high but the probability that this is due to chance is also high. You can also have very low values of r with high certainty (low values of p). So the strength of correlations is not measured by p.
P values seem to be the wrong way around The p value is the probability of making a certain observation - or more extreme - due to chance, given that the observation is drawn from the original distribution. Normally (haha) you would expect
55,441
Including day of week in a logit model
The model with day as a categorical variable with seven levels does account for each single day; you don't need to do it "by hand", so to speak. For example: library(MASS) # Construct sample data: 700 observations, 100 on each of 7 days of week Day <- factor(rep(c("Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"), 100), levels=c("Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"), ordered=TRUE) Day.effect <- rep(rnorm(7), 100) y <- rbinom(700, 1, 1/(1+exp(-Day.effect))) # Estimate logit model without intercept (captures each day's effect) foo <- summary(glm(y~Day-1, family=binomial)) # compare actuals to estimates coefs <- foo$coefficients coefs <- cbind(Day.effect, coefs) colnames(coefs)[1] <- "Actual" options(digits=3) > coefs Actual Estimate Std. Error z value Pr(>|z|) DayMonday 0.520 0.490 0.206 2.376 1.75e-02 DayTuesday -0.230 -0.323 0.203 -1.593 1.11e-01 DayWednesday -0.247 -0.447 0.205 -2.182 2.91e-02 DayThursday -1.156 -1.266 0.241 -5.243 1.58e-07 DayFriday 0.282 0.160 0.201 0.799 4.24e-01 DaySaturday -0.383 -0.405 0.204 -1.986 4.70e-02 DaySunday -0.357 -0.447 0.205 -2.182 2.91e-02 This would appear to do just what you want.
55,442
What kind of cut-offs are used for publication bias tests in meta-analysis?
The use of any fail-safe N method is deprecated. Please see Becker (2005) and check out the Conclusions section. Here, she states: "Given the other approaches that now exist for dealing with publication bias, the fail-safe N should be abandoned in favour of other, more informative analyses [...]". You also might want to read her statement in Summary of the Examples: "The incredible range of values shown here reveals one of the greatest weaknesses of the failsafe N computations - it is difficult to interpret the values without a statistical criterion." The "trim and fill" method is a good starting point. However, you also might want to give Egger's regression method a try, which, for instance, is described in Sterne/Egger (2005) (please note that Egger et al. (1997) suggest a weighted least squares regression, while Sterne and Egger (2005: 101) recommend a simple OLS regression). See also this CV question on Egger's linear regression method intercept in meta analysis. For a general overview of methods to detect publication bias see Sutton et al. (2000) or these slides by Birgit Schrödle.

References
Egger, M., G. Davey Smith, M. Schneider, C. Minder (1997), Bias in meta-analysis detected by a simple, graphical test. British Medical Journal 315: 629-634.
Sterne, J.A.C., M. Egger (2005), Regression Methods to Detect Publication and Other Bias in Meta-Analysis. Pp. 99-110 in: H.R. Rothstein, A.J. Sutton, M. Borenstein (eds.), Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Chichester: Wiley.
Sutton, A.J., F. Song, S.M. Gilbody, K.R. Abrams (2000), Modelling publication bias in meta-analysis: a review. Statistical Methods in Medical Research 9: 421-445.
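For readers who want to try these diagnostics in practice, here is a minimal, hypothetical R sketch using the metafor package; the effect sizes yi, sampling variances vi and data frame dat are placeholders for whatever your meta-analysis actually contains, not values from this answer.

library(metafor)

# yi = observed effect sizes, vi = sampling variances (placeholders)
res <- rma(yi, vi, data = dat)

funnel(res)      # funnel plot for a first visual impression
regtest(res)     # Egger-type regression test for funnel plot asymmetry
trimfill(res)    # trim-and-fill estimate adjusted for putatively missing studies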
55,443
Can structural equation models be used to derive clinical formulae?
If you can build a regression model for something, it means to me that this is a measurable quantity. While a linear regression model is a special case of the general SEM, the greater strength of SEM is, arguably, being able to accommodate latent variables and measurement error in predictors. If you want to build a structural equation model for the latent variable BLAH (renal function), you need to have a study where several different ways to measure it have been undertaken (serum creatinine levels, inulin clearance). If, in turn, these variables are obtained from "clinical formulae" (another regression model, as far as I understand), that messes up the model quite a bit, and you need to figure out exactly which of the variables affect the measurement process, and which one affects the underlying latent variable (without knowing anything about the biochemistry of whatever it is that you are interested in, I would dare suggest that age and gender affect the latent variable, rather than the measurement process). Ideally, you would want your equations linking the latent variables with the measured variables to be linear in parameters, so you would need to apply the typical transformations, such as logs. The standard language of structural equation models is path diagrams; mainstream statisticians tend to stare at them with little understanding of what's going on, but social scientists have found them to be very handy in explaining the relations between variables. I think this paper explains it quite well, although I don't know how closely it follows the biomedical language (it is written by psychometricians): http://www.citeulike.org/user/ctacmo/article/2663951.
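As a purely hypothetical sketch of the model structure described above, here is what it might look like in R with the lavaan package. The variable names (log_creatinine, log_inulin, age, sex) and the data frame mydata are placeholders, and a two-indicator latent variable is only a minimal illustration, not a recommended measurement design.

library(lavaan)

# Renal function as a latent variable measured by two (log-transformed) markers,
# with age and sex affecting the latent variable rather than the measurements.
model <- '
  renal =~ log_creatinine + log_inulin   # measurement model
  renal ~ age + sex                      # structural part
'
fit <- sem(model, data = mydata)         # mydata is a placeholder data frame
summary(fit, standardized = TRUE)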
55,444
Can structural equation models be used to derive clinical formulae?
Yes, SEM can do that. You enter the measured and latent variables into the model, specify their relationships, and then you get quite a lot of output. This output includes a structural equation (which looks like a regression equation, with coefficients, standard errors, etc.) and an $R^2$ result. SEM also allows you to specify other relationships, such as allowing covariance between serum creatinine and inulin.
55,445
Handling outliers when comparing two means in a repeated measures design
I will take these out of order. If it is possible to establish a correspondence between the measurements in the first set and the measurements in the second set (for example, Bob's score at time 1 and Bob's score at time 2 correspond because they both came from Bob), then you should do a paired t-test. That is, you should not calculate means for each time, but take differences, and calculate the mean and standard deviation of the differences. The standard error of the differences (i.e., the denominator of the t-statistic) is that standard deviation divided by $\sqrt{n}$. If some students did not participate at one of the occasions, then their scores should be set aside. Furthermore, you do not care if a score is more than 2 s.d.'s higher than the mean, although you may care if one of your differences is more than 2 s.d.'s above the mean of the differences.

The definition of an outlier is a data point that came from a different population than the one you want to be studying; it is not a data point that is far away from the rest of your data. However, we almost never know whether or not a data point came from a different distribution than the rest of our data, except that it looks really different. If you should ever spend much time conducting simulations, you will come to notice that every so often a data point that you know comes from the same distribution (because you wrote the simulation code) looks quite a bit different from the rest. This is an uncomfortable fact, but it is nonetheless true. Ultimately, you need to decide whether you believe that data point belongs there or not. There are some (potentially) helpful guidelines:

- With ~20 data points, a z-score with an absolute value greater than 2 is pretty unlikely (although it wouldn't be if you had, say, 100 data points).
- You can look at a plot of your data (e.g., a histogram) to see if the larger value is contiguous with the rest of your data, or if there is a large break between it and the rest.
- It can help to run your analysis both with the potential outlier and without it; often you will get the same answer both ways, and that's reassuring (a short sketch of this check follows this answer).
- A final possibility is to use 'trimmed samples', that is, exclude the top and bottom 2 data points (given that you have ~20, this would be a 10% trimmed sample); note that this lessens your power, but many people think it's more even-handed.

In the end, I'm afraid, you will still have to make a decision.

Lastly, you should know that the question of 2-tailed vs. 1-tailed t-tests has long been a contentious topic. It is probably not as important as people have made it out to be, but that is the nature of these things. Personally, I'm against 1-tailed tests, but my opinion is really unimportant. A question you could ask yourself is: what if I find that the mean decreased by a very large amount? Would I say, 'Nope, there was no change', or would I say there was a change? If it would be possible for you to believe a negative change if the data supported it, then you really should be using a 2-tailed test, but if there is no way you would ever believe the mean went down, then a 1-tailed test is probably fine, and you just let the old grumps (like me) harrumph about it. What you should not do is run the test both ways and pick the one that gives you the result you like best (or run a 1-tailed test, notice that the mean went down by a lot, then run a 2-tailed test and call it 'significant').
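Here is a minimal R sketch of the paired analysis and the with/without-outlier check mentioned above; the scores are simulated placeholders, not anyone's real data, and the size 20 mirrors the "~20 data points" in the answer.

set.seed(42)
time1 <- rnorm(20, mean = 50, sd = 10)
time2 <- time1 + rnorm(20, mean = 3, sd = 5)   # simulated second occasion

d <- time2 - time1                 # work with the differences
t.test(d)                          # equivalent to t.test(time2, time1, paired = TRUE)

# With and without the most extreme difference
suspect <- which.max(abs(scale(d)))
t.test(d[-suspect])                # does the conclusion change?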
55,446
What is a good academic citation for cross-validation?
I find chapter 7 of Hastie, Tibshirani, Friedman's Elements of Statistical Learning to be a good reference for CV and how and why it is used.
55,447
Obtaining random number from a mixture of two normal distributions
If you want to sample unequally (with probability 0.7 and 0.3) from two gaussians with parameters $(\mu_1,\sigma_1^2)$ and $(\mu_2,\sigma_2^2)$, then you can probably try something like that:

n <- 100
yn <- rbinom(n, 1, .7)
# draw n units from a mixture of N(0,1) and N(100,3^2)
s <- rnorm(n, 0 + 100*yn, 1 + 2*yn)

In fact, this is one of the illustrations provided in Modern Applied Statistics with S, by Venables and Ripley (Springer, 2002; §5.2, pp. 110-111). With different parameters, you can use an ifelse expression to select the mean and SD according to the binomial sequence given in yn, e.g. rnorm(n, mean=ifelse(yn, 21, 26), sd=ifelse(yn, 3.3, 4)). (No need to cast yn to a logical with as.logical.)
55,448
Obtaining random number from a mixture of two normal distributions
To accomplish the goal of sampling from an uneven mixture of distributions, the most straightforward approach is to sample separately, in proportion to the desired ratio:

p <- 0.70        # P(from N(mu1, sd1))
n.samps <- 10000
mu1 <- 0
sd1 <- 1
mu2 <- 100
sd2 <- 10
x <- vector()
for(i in 1:n.samps){
  b <- runif(1, 0, 1)
  if(b < p){
    x[i] <- rnorm(1, mu1, sd1)
  } else {
    x[i] <- rnorm(1, mu2, sd2)
  }
}

This can be done ~50x faster by vectorizing (note that the comparison must be < p so that a proportion p of the draws comes from the first component):

binary <- runif(n.samps, 0, 1) < p
x <- c(rnorm(sum(binary), mu1, sd1), rnorm(sum(!binary), mu2, sd2))

Then, to draw a single value:

sample(x, 1)

or to reshuffle the whole vector:

x <- sample(x, n.samps)
55,449
Issues with the Huynh-Feldt values using either Anova or ezANOVA in R?
If there were an error in Anova it would also be in ezANOVA, because Mike Lawrence built ez on John Fox's Anova package. And the Anova package in turn uses the standard R anova command. R is getting this correct. Peter Dalgaard (pp. 3-4) pointed out that SAS and SPSS get the H-F calculation wrong, and have for decades. It's because they use the original published formula and not the corrected equation (Lecoutre, 1991) that R uses. (My speculation about why they haven't fixed it is similar to the reason Excel remains buggy... they have a greater obligation to major customers to be bug compatible than to be correct. Fixing this would be admitting they were wrong as well. Although, if it were done in 1992 it would have been reasonable. Now it's just a great argument for open source. Alternatively, they just might not care. I wonder if one simply tests its output against the other?)
55,450
Expectation of length of a confidence interval
I'm aware that this is a 'homework' question but it has gone unanswered for over six months so I figure the homework has been turned in by now. Also, the hints in the comments (which I use directly here) are useful until you get to the point where you have to calculate the expected value of the sample standard deviation (a non-trivial exercise), which I give a link to in this answer. For a sample $X_1, X_2, ..., X_n$ from a $N(\mu,\sigma^2)$ population, the $95 \%$ confidence interval for $\mu$ when the variance $\sigma^2$ is unknown is $$ \overline{X} \pm t_{n-1} \cdot \frac{s}{\sqrt{n}} $$ where $\overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$ and $ s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2}$ and $t_{n-1}$ is the $97.5$th percentile of the $t$-distribution with $n-1$ degrees of freedom. From the expression for the confidence interval, we can see its width is the random variable $$W = 2t_{n-1} \cdot \frac{s}{\sqrt{n}}$$ The only random part of $W$ is $s$, therefore the expected width is $ E(W) = \frac{2t_{n-1}}{\sqrt{n}} \cdot E(s) $, which reduces the problem to that of calculating $E(s)$ (which is calculated in this thread) and doing some algebra: $$ E(W) = \sigma \cdot t_{n-1} \cdot \sqrt{ \frac{8}{n(n-1)} } \cdot \frac{ \Gamma(n/2) }{\Gamma( \frac{n-1}{2} ) } $$ From that formula, we can plug in $n=9$ and find that $ E(W) \approx 1.49 \sigma $. We can also plot $E(W)$ as a function of $n$.
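As a quick check of that formula, here is a small, purely illustrative R sketch that evaluates the expression at n = 9 and compares it with a simulation (sigma is set to 1, so the answer is in units of sigma):

ew <- function(n) {
  qt(0.975, n - 1) * sqrt(8 / (n * (n - 1))) * gamma(n / 2) / gamma((n - 1) / 2)
}
ew(9)          # about 1.49

# Simulation check with sigma = 1
set.seed(1)
n <- 9
widths <- replicate(1e5, {
  x <- rnorm(n)
  2 * qt(0.975, n - 1) * sd(x) / sqrt(n)
})
mean(widths)   # should also be about 1.49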
55,451
How to calculate quartiles with only standard deviation and mean assuming normal distribution?
The quantile or probit function, as you can see from the link (see "Computation"), is computed with the inverse of the Gaussian error function, $erf^{-1}$, which I hope is downloadable for calculators like the TI-89. Look here for instance.
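If R (rather than a calculator) is available, the quartiles follow directly from the normal quantile function; the mean of 100 and standard deviation of 15 here are just example values:

qnorm(0.25, mean = 100, sd = 15)   # first quartile, about 89.9
qnorm(0.75, mean = 100, sd = 15)   # third quartile, about 110.1
qnorm(0.75)                        # about 0.6745, the 0.675 factor used in the answers below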
55,452
How to calculate quartiles with only standard deviation and mean assuming normal distribution?
Press 2nd Vars (Distr) > "InvNorm". Next, subtract the upper-tail percentage from 1 and enter the result into InvNorm along with your mean and standard deviation. Example: find the third quartile Q3, which is the IQ score separating the top 25% from the others, with a mean of 100 and a standard deviation of 15. Since 1 - .25 = .75, InvNorm(.75, 100, 15) = 110. My answer is 110.
55,453
How to calculate quartiles with only standard deviation and mean assuming normal distribution?
Finding Q1 and Q3 for a bell curve: my textbook gives the formulas Q1 = M - (.675)SD and Q3 = M + (.675)SD, where M is the median (which equals the mean for a normal distribution) and SD is the standard deviation, so Q1 subtracts and Q3 adds. Quartiles of a normal distribution are found this way, which is different from computing quartiles directly from a data set or frequency chart.
55,454
Plotting scatterplot with non-dominated front in R
Presumably "runtime" is better when it is low and "solution quality" is better when it is high. To make these variables more consistent in how they represent the attributes, first re-express them so that high numbers always correspond to better values. For this question, the reciprocal of runtime (which could be interpreted as number of runs per second) would be a good choice, but the negative of runtime would work fine too. With this convention established, we may plot each record in the dataset by associating the re-expressed "runtime" value with the x coordinate and the "solution quality" value with the y coordinate, creating a scatterplot.

One record $(x,y)$ dominates another $(x',y')$ when simultaneously $x\ge x'$, $y\ge y'$, and $(x,y) \ne (x',y')$. No rational actor (using solely information about "runtime" and "solution quality") would choose the dominated option $(x',y')$ when $(x,y)$--which is at least as good on both attributes and better on at least one--is available. Therefore such an actor would focus their interest on the non-dominated options and would be free to neglect all others.

All points dominated by the solid red point are shown in gray. All possible points that it can dominate constitute the (closed) rectangle shown in pink. Because evidently no points in the scatterplot dominate the solid point, it must lie on the non-dominated frontier.

The set of non-dominated options forms the vertices of part of the "quasi-convex hull" or "non-dominated frontier" of the scatterplot. This is an analog of the better-known convex hull. Indeed, the quasi-convex hull will be invariant under all increasing monotonic re-expressions of the variables, $f(x)$ and $g(y)$, and one can always find such re-expressions for which the quasi-convex hull actually becomes the convex hull. This connection, and the fact that optimal construction of the convex hull of $n$ points requires $O(n\log(n))$ computation, suggests that we seek an algorithm for the quasi-convex hull that performs at least as well. Such algorithms exist. Here is one, given in R as requested, which is written to port easily to other computing platforms:

hull.qc <- function(xy) {
  i <- order(xy[, 1], xy[, 2], decreasing=TRUE)
  frontier <- rep(0, length(i))
  k <- 0; y <- -Inf
  for (j in i) {
    if (xy[j, 2] > y) {
      frontier[k <- k+1] <- j
      y <- xy[j, 2]
    }
  }
  return(frontier[1:k])
}

It performs a decreasing lexicographic sort of the coordinates (time: $O(n\log(n))$) and then scans through the first coordinate (time: $O(n)$), putting into effect a line scan algorithm. The points where a new larger value of the second coordinate is found are recorded. Their indexes within the input array xy are returned.

We can prove this algorithm is correct by induction. The initial point, chosen to have a maximal first coordinate and (among such points) a maximal second coordinate, obviously is not dominated. Using it we can eliminate all other points that it does dominate, including itself. The first non-eliminated point next encountered in the algorithm necessarily has a larger value of its second coordinate. It obviously is not dominated by any point because if it were, that other point would already have been encountered. Therefore the algorithm indeed selects non-dominated points and (by induction on the number of points) it finds them all, QED.

To illustrate these concepts and this algorithm, here is a plot of a dataset of $16$K records (all with integral values, shown jittered) along with its quasi-convex hull (a dark line at the upper right), its vertices marked as requested in the question, and the "dominance rectangles" of those vertices colored in.

The code to do the calculations, make a scatterplot, and mark the vertices of its quasi-convex hull appears below. It includes a slightly faster R-centric version of hull.qc. It processes a million points in approximately two seconds on this machine.

#
# A faster solution (for R).
#
hull.qc <- function(xy) {
  i <- order(xy[, 1], xy[, 2], decreasing=TRUE)
  y <- xy[i, 2]
  frontier <- which(cummax(y) <= y)
  #
  # Eliminate interior points along edges of the hull.
  #
  y.0 <- y[frontier]
  frontier <- frontier[c(TRUE, y.0[-1] != y.0[-length(y.0)])]
  return(i[frontier])
}
#
# Generate data.
#
library(MASS)
set.seed(17)
n <- 2^14
xy <- floor(mvrnorm(n, c(1,1), matrix(c(1, -1/2, -1/2, 1), 2))^2)
#
# Do the work.
#
system.time(frontier <- hull.qc(xy))
xy.f <- xy[frontier, , drop=FALSE]
#
# Visualization.
#
plot(xy, xlab="X", ylab="Y", main="Quasiconvex Hull")
points(xy.f, pch=19, col="Red")
55,455
How do you calculate eigenvector/value of a non-square matrix in R?
According to the man page, svd returns a list with the following elements: d: a vector containing the singular values of x, of length min(n, p). u: a matrix whose columns contain the left singular vectors of x, present if nu > 0. Dimension c(n, nu). v: a matrix whose columns contain the right singular vectors of x, present if nv > 0. Dimension c(p, nv). Wikipedia describes the relation between Eigendecomposition and SVD as follows: Given the SVD of $M$ $$M = U \; \Sigma \; V^*,$$ then The left singular vectors of $M$ are eigenvectors of $MM^*$. The right singular vectors of $M$ are eigenvectors of $M^*M$. The non-zero singular values of $\Sigma$ are the square roots of the non-zero eigenvalues of $M^*M$ or $MM^*$. Does that help answer your question on how to identify the results returned by svd?
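A short R sketch that checks these relations numerically on an arbitrary non-square example matrix (the 4x2 size and the seed are just illustrative choices):

set.seed(1)
M <- matrix(rnorm(8), nrow = 4, ncol = 2)   # non-square example

s <- svd(M)
e <- eigen(t(M) %*% M)      # for a real matrix, M* is just t(M)

s$d^2                       # squared singular values ...
e$values                    # ... equal the eigenvalues of t(M) %*% M

abs(s$v)                    # right singular vectors ...
abs(e$vectors)              # ... match these eigenvectors up to sign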
55,456
One-sided Fisher's exact test and its complement
The particular table helps a lot. The Fisher's exact test assigns probabilities to tables with these particular marginals using the hypergeometric distribution. In this case, we're thinking of drawing 9 balls from an urn (the cases) with 2852 white balls (exposed) and 2861 black balls (not exposed). The number of white balls drawn is the count for exposed cases. The distribution is:

k      0     1     2     3     4     5     6     7     8     9
P(k)   0.002 0.018 0.071 0.165 0.247 0.246 0.163 0.070 0.017 0.002

The one-sided test in your output is giving the probability of 2 or fewer:

0.002 + 0.018 + 0.071 = 0.0904

The one-sided test in the other direction would give the probability of 2 or more, which is 1 minus the probability of 0 or 1:

1 - (0.002 + 0.018) = 0.98

Note that the two-sided test is the probability of 0, 1, 2, 7, 8, or 9, which does come to 0.179. So the p-values for the two one-tailed tests don't add to one, because they each include the particular observed value and the distribution is discrete.
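These numbers can be reproduced in R with the hypergeometric functions, using the counts quoted above (2852 exposed, 2861 unexposed, 9 cases, 2 of them exposed):

probs <- dhyper(0:9, m = 2852, n = 2861, k = 9)
round(probs, 3)             # the distribution shown above

sum(probs[0:2 + 1])         # P(X <= 2), lower one-sided p, about 0.09
sum(probs[2:9 + 1])         # P(X >= 2), upper one-sided p, about 0.98
phyper(2, 2852, 2861, 9)    # same lower tail via the CDF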
55,457
One-sided Fisher's exact test and its complement
Fisher's test has the null hypothesis that A is the same as B. Suppose that we have 50 observations each for A and B. Fisher's test puts all 100 observations in a bag, shakes it, pulls out 50, calling them A, then pulls out the remaining 50 and calls them B. Calculate the average of A - B. Repeat this permutation many times to get a distribution of the averages of A - B. Since our alternative is that A is superior to B, we reject if we got a really high value for A - B in the true sample. We count how many permutations were bigger than the one from the actual sample. You said 98% were. This means that A - B from the true sample is really low---only 2% of the permutations were lower. If the alternative hypothesis was that B is superior to A, then we are hypothesizing that A - B is small; this would have a p-value of 0.02---only 2% of the permutations were smaller. We could reject the null that A is the same as B. I caution against forming hypotheses after you see the data, however, as this biases your test statistics.
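The resampling scheme described in this answer can be sketched in R roughly as follows. The data here are simulated placeholders with 50 observations per group; for an actual 2x2 table the enumeration is done exactly by fisher.test rather than by random permutation.

set.seed(7)
A <- rnorm(50, mean = 0.2)   # placeholder group A
B <- rnorm(50)               # placeholder group B

obs <- mean(A) - mean(B)
pooled <- c(A, B)

perm <- replicate(10000, {
  shuffled <- sample(pooled)                       # put everything in the bag and redraw
  mean(shuffled[1:50]) - mean(shuffled[51:100])
})

mean(perm >= obs)   # one-sided p-value for "A is superior to B"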
55,458
Interaction suppresses the main effect? How to interpret it?
The main effects went from "significant" to "not", but the evidence really didn't change all that much. For example, p=0.047 to p=0.063 for rok isn't, to me, a remarkable change. And a lack of evidence for a coefficient being non-zero isn't the same as saying it is 0. In considering the coefficient for obdobinehn when the interaction is included, you need to pay careful attention to the factor contrasts that are being used, as the meaning of the coefficient changes and depends on those contrasts. Note also that if a covariate is involved in an important interaction, then it does have an effect on the outcome, even if it shows no main effect. I agree with John's comment that it's useful, with factor covariates, to look at an ANOVA table.
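As a hypothetical illustration of that last suggestion, the factor and its interaction can be assessed as a whole by comparing nested models. The predictor names follow the question; the response y, the data frame dat, and the binomial family are placeholders/assumptions, not details taken from this answer.

fit0 <- glm(y ~ rok + kraj + obdobinehn,
            family = binomial, data = dat)
fit1 <- glm(y ~ rok + kraj + obdobinehn + kraj:obdobinehn,
            family = binomial, data = dat)

anova(fit0, fit1, test = "Chisq")   # does the interaction improve the fit?
anova(fit1, test = "Chisq")         # sequential deviance table for all terms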
55,459
Interaction suppresses the main effect? How to interpret it?
You'll notice that in your ANOVAs (deviance tables) of the models there is no difference in the main effects with, or without, the interaction. You don't have to know how to interpret the deviance table, just recognize that there's no difference! Keep in mind that your "Estimate" column in the regression is about the magnitude of the slope and the associated tests are of that magnitude. When you add interactions you can change how the slope is calculated and change its significance. That doesn't mean it went away, it just means it's qualified by an interaction. So, a short answer is that, if you had a main effect without the interaction then you have a main effect. It's very common to do the additive and then additive+interaction models separately so you can see where your main effects are and then look at your interactions. The fact that it went away gives you some clues about the kind of interaction that you have, but it's hard for someone to answer the whole thing with just what you've reported. Your next step is to start making some graphs. For example, make one with obdobinehn at different levels of kraj. You should really look at a paper on interpreting these interaction effects. A complete answer for your query is far too difficult to give based on what you've provided, and even guidance about where to go next is very involved. Read the linked paper, see how far you get, and get back to the SE with more questions at that time.
55,460
Automatic text quality grading
There are two basic components: technical analysis (grammar, sentence length, etc.) and machine learning/statistical analysis. The best (IMHO) papers on this are those involving automated essay grading. It's a little dated, but this paper covers the techniques employed by all of the major vendors. ETS (the people behind the TOEFL and the GRE) have put a LOT of work into this area, and they are surprisingly open about their research. The wall they run up against is that of length and specificity. They can only grade papers (with a high degree of accuracy) that are short and have specific prompts. The biggest problem I see is that ALL of these grading facilities are based on machine learning. Even ETS uses their graders as backups for human graders; they pay one human to grade, and if that grade doesn't match the ML grade, the essay is handed to another human grader. It essentially cuts their costs in half, but it doesn't replace human graders. If you really want to try analysis on ad-hoc text you will probably need to use the API of someone that has access to a lot of unstructured text, like OpenCalais or AlchemyAPI or even Zamenta, for the categorizing, and then roll your own quality analysis. However, that's a bit out of my pay range. Update: Jonny's answer is really not getting enough credit. If you are really concerned about content quality, analyzing network connections and other measurements of human behavior are REALLY valuable. Again, I am no expert in this field! It would help if you could tell us what your accuracy/validity requirements are.
55,461
Automatic text quality grading
Actually, if you want to judge the quality of works, you may want to approach this with something like Google PageRank. So, for example, if a particular paper is footnoted or referenced in many other papers, then there is a greater chance that it is a well-respected paper than a paper that is never footnoted. Another factor which I believe would have a high correlation with quality would be the author(s). While not perfect, it is probably reasonable to assume that works by Nobel laureates within their area of expertise would tend to be of higher quality than the first work of someone who has never published before. Third, you could consider the publisher. Articles from peer-reviewed journals should tend to be of higher quality than those from tabloids. The problem with some of the items you mentioned is that they are either irrelevant to quality (length, for example; by that measure Lincoln's Gettysburg Address would fail) or virtually impossible to score. Suppose I wrote a paper about the solar system and gave the distance to the moon as 500,000 miles and said the sun was at half that distance with a temperature of 27 degrees kelvin. Would your program be able to verify or disprove those "facts"? It would score high on elaboration (lots of details, all completely wrong). As for keywords, we all know about keyword stuffing. Some things, however, you could easily score with a program, such as reading level (the fog index); a rough sketch is given below.
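To make the "easily scored" point concrete, here is a rough sketch of the Gunning fog index in R. The syllable count is a crude vowel-group heuristic rather than a proper hyphenation algorithm, so treat the numbers as approximate; the example sentence is just an illustration.

fog_index <- function(text) {
  sentences <- unlist(strsplit(text, "[.!?]+"))
  sentences <- sentences[nchar(trimws(sentences)) > 0]
  words <- unlist(strsplit(tolower(text), "[^a-z']+"))
  words <- words[nchar(words) > 0]
  # approximate syllables by counting runs of vowels in each word
  syllables <- sapply(words, function(w) length(gregexpr("[aeiouy]+", w)[[1]]))
  complex_words <- sum(syllables >= 3)
  # Gunning fog: 0.4 * (average sentence length + percentage of complex words)
  0.4 * (length(words) / length(sentences) + 100 * complex_words / length(words))
}
fog_index("The cat sat on the mat. It was a remarkably uneventful afternoon.")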
55,462
Automatic text quality grading
More a comment than an answer, but I suspect that trying to capture the quality of a particular document in a single (non-adaptive) score is the canonical example of YMMV. For example, in a technical context users might consider quality to mean in-depth treatment of a specific problem, a survey of different techniques, position statements from experts, or any of various other measures of quality, each depending on how they want to use the information they get from the document. Furthermore, depending on the users' levels of technical sophistication, different users can be expected to rate the same documents differently, even if they have similar aims. On the other hand this is exactly the sort of problem faced (and probably in the current state of the art partly solved) by recommender systems (Netflix ratings, Amazon recommendations, and so on). I'm no expert on these sorts of approaches, but no doubt others here are. For your problem I'd probably start my literature search in that domain.
55,463
When using stratified randomization in an RCT, how many envelopes per stratum should be prepared?
There are no consequences to having too many random assignments left over at the end, and pretty large consequences to having your randomization scheme fail mid-assignment for a trial. Given the cost of an envelope and a piece of paper, I'd vote on the side of safety and make 300 of each. If you're trying to trim the budget a bit, I'd go with 150 or so, based on whuber's answer, and be in the "absurdly unlikely" category.
55,464
When using stratified randomization in an RCT, how many envelopes per stratum should be prepared?
The stratum populations have a multinomial distribution. 300 is large enough that the Normal approximation will be accurate, implying the population in any given stratum has a Normal$(100, \sqrt{\frac{1}{3}\cdot\frac{2}{3}\cdot 300})$ distribution. Although all three counts are correlated, it's still a good approximation to assume they are not and apply a Bonferroni correction: that is, to be assured of a $1-\alpha$ chance of not exhausting the envelopes for any stratum, find the upper $1-\alpha/3$ quantile of this distribution. (For $\alpha=.01$, for example, this quantile equals 122.2, which we round up to 123.) Create this many envelopes for each stratum. If you mistrust these approximations, it's simple to simulate many experiments. In 1,000,000 trials, 10,064 had a stratum with 123 or more patients, indicating that 122 envelopes would have been sufficient 98.99+% of the time. You probably don't even want to run a 1/100 risk. If you prepare 128 envelopes, the risk of running out is only 1/1000; with 133 envelopes, it drops to 1/10,000. Simulation of 4,000,000 experiments bears out this expectation.
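For anyone who wants to rerun a simulation along these lines, a minimal sketch in R (not the original code; the seed and number of trials are arbitrary):

set.seed(1)
n_sims <- 100000
# for each simulated trial, assign 300 patients to 3 equally likely strata
# and record the largest stratum count
max_count <- replicate(n_sims, max(rmultinom(1, size = 300, prob = rep(1/3, 3))))
mean(max_count >= 123)                        # should be roughly 0.01
quantile(max_count, c(0.99, 0.999, 0.9999))   # envelopes needed at various risk levels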
55,465
Is SPSS giving the wrong residuals for a repeated measures design?
SPSS is giving the residuals from the group means without correcting for individual error.

> my.lm <- lm(value ~ group, DFlong)
> round(matrix(residuals(my.lm), ncol = 2), 2)
      [,1]  [,2]
[1,] -3.78 -4.78
[2,]  0.27  1.76
[3,] -4.82  3.02
[4,]  7.33  2.20
[5,]  1.00 -2.20

The residuals after correcting for individual error are as follows; they're not what you're finding in the aov fit either.

> my.lm <- lm(value ~ group + participant, DFlong)
> round(matrix(residuals(my.lm), ncol = 2), 2)
      [,1]  [,2]
[1,]  0.50 -0.50
[2,] -0.74  0.74
[3,] -3.92  3.92
[4,]  2.56 -2.56
[5,]  1.60 -1.60

The aov fit doesn't have a residuals method, which is a big hint that the residuals it's calculating are probably not what most end users want.
55,466
Is SPSS giving the wrong residuals for a repeated measures design?
I've read more now on the repeated measures general linear model, and can explain how the residuals that R reports in my.aov are calculated. A repeated-measures ANOVA is applied to a transformation of the within-subject variables, and the residuals of this model are the ones returned from the aov fit. The transformation multiplies the original variables by a transformation matrix, which can be output in SPSS by adding /PRINT = TEST(MMATRIX) to the GLM command. SPSS doesn't seem to allow the actual transformed variables to be saved, but we can calculate them ourselves in R; see below. The way the transformation matrix is generated follows certain rules, outlined here: http://www.uccs.edu/~faculty/lbecker/SPSS/glm_1withn.htm i.e. the sum of the coefficients should be zero and the sum of squares of the coefficients should be 1. For now I don't generate the transformation matrix myself; I simply apply the matrix output from SPSS to our original variables to create the transformed variable, and then check that it is the model being used by aov by comparing the residuals from my.aov to an lm call on our own model.

# As before, continuing from the code in the original question:
# my.aov <- aov(value ~ group + Error( participant / group ), DFlong)
summary(my.aov)

# Create our transformed variable (there will always be one less new variable
# than the number of within-subject variables).
# We are applying here the transformation matrix that is output from SPSS.
DF$DV <- sqrt(0.5) * DF$first + -sqrt(0.5) * DF$second
residuals(lm(DV ~ 1, DF))

# Our self-calculated residuals should be the same as the ones returned by the aov fit.
# They are the same except for the last one, which is the negative of the residual calculated by aov.
my.aov$"participant:group"$residuals

# Again the fitted values are the same except for the *-1 at the end.
fitted.values(lm(DV ~ 1, DF))
my.aov$"participant:group"$fitted.values

I am not sure how aov calculates the participant residuals, but presumably that is why they have the *-1, because I have not included individual differences in the model above. In summary, the residuals saved by SPSS are not from the repeated-measures GLM.
55,467
Prediction with GLS
Suppose we have a GLS model: $$y=X\beta+u,$$ with $$Euu'=\Omega.$$ Suppose we want to predict $y^*$: $$y^*=x^*\beta+u^*.$$ Goldberger proved that the best linear unbiased prediction of $y^*$ is $$\hat{y}=x^*\hat{\beta}+w'\Omega^{-1}\hat{u},$$ where $$\hat\beta=(X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y,\quad \hat{u}=y-X\hat\beta$$ and $$w=E(uu^*)$$ is the vector of covariances between the sample disturbances $u$ and the future disturbance $u^*$. So the answer to your first question is that if you use the simple prediction, your prediction will not be optimal. On the other hand, to use this formula you need to know $w$, and for that you need to know more about $\Omega$. Goldberger discusses several special cases in his article. As for your second question, it is a bit unclear to me what you are trying to achieve. The problem with the GLS model is that if we use the OLS standard errors of the coefficients, they are biased. The formulas you give are for calculating the standard error of the error term, but this only makes sense for the OLS model, since for a GLS model the error term in general does not have a single common variance. If you are after the prediction variance, then @whuber's comment holds: you cannot calculate it in this setup. The basic problem is that you predict one observation, so you get one number, and the variance of one number is zero. What you can calculate is the theoretical prediction variance, but this then depends on the model you are trying to test. If you want to calculate PRESS (the sum of squared residuals from a jackknife procedure) and weight them with $\Omega$, I think you will run into the same problem of how to calculate $\Omega$ out of sample.
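A minimal sketch of Goldberger's predictor in R, assuming $\Omega$ and $w$ are known (or have been estimated separately); y, X, and xstar below are placeholders for your own data:

gls_blup <- function(y, X, xstar, Omega, w) {
  Oinv     <- solve(Omega)
  beta_hat <- solve(t(X) %*% Oinv %*% X, t(X) %*% Oinv %*% y)  # GLS coefficients
  u_hat    <- y - X %*% beta_hat                               # GLS residuals
  drop(xstar %*% beta_hat + t(w) %*% Oinv %*% u_hat)           # Goldberger's BLUP of y*
}

Here xstar is the 1 x p row of predictor values for the new observation, and w is the n-vector of covariances between the sample disturbances and the future disturbance.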
55,468
Prediction with GLS
The $\beta$'s that come out of GLS are estimates of the predictor effects that contribute to the mean response, and the assumption of GLS is that the model for the mean of $Y|X$ is the same for each data point; only non-constant variance and serial correlation are allowed for. So, to predict a new data point, yes, you would just plug the predictor values into the linear predictor. The observed prediction error will usually take a form like $$ \sum_{i} (Y_{i} - \hat{Y}_{i} )^2, $$ which is potentially a sum of non-independent random variables if the test data points are heteroskedastic/autocorrelated in the same way the training data were. Fortunately, the linearity of expectation, $$ E \big( \sum_{i} X_{i} \big) = \sum_{i} E(X_{i}), $$ is true regardless of whether or not the $X_{i}$ are independent, so your estimate of the prediction error will not be biased.
55,469
Zero regression coefficient when correlations are not zero
Yes, that is possible. Take these data for example:

      x        y        xy
  .2217    .5000     .1108
  .3048   -.9787    -.2983
-1.6445    .3512    -.5775
 -.2461   -.4866     .1197
 -.3170   -.0954     .0302
-1.1603   1.8352   -2.1294
 -.8720    .1372    -.1196
-1.7852   -.2160     .3856
 1.0100    .0165     .0166
  .3000   -.3251    -.0975

$XY$ is the product of $X$ and $Y$. Multiple regression of $X$ on $Y$ and $XY$ yields $b$ for $XY$ equal to 0 and $b$ for $Y$ equal to -.444; the constant is -.386. Note the theoretical prerequisite for this: $b_{XY}$ will be 0 if and only if $r_{X.XY}$ (i.e. the correlation between $X$ and $XY$; "." here means "with") $= r_{X.Y} \cdot r_{Y.XY}$. Here, $.280 = (-.361)\times(-.776)$.
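A quick way to reproduce the check in R (the vectors below are just the data from the table; the outputs should match the reported numbers up to rounding):

x  <- c( .2217,  .3048, -1.6445, -.2461, -.3170, -1.1603, -.8720, -1.7852, 1.0100,  .3000)
y  <- c( .5000, -.9787,   .3512, -.4866, -.0954,  1.8352,  .1372,  -.2160,  .0165, -.3251)
xy <- x * y
coef(lm(x ~ y + xy))                  # the coefficient on xy should come out (essentially) zero
cor(x, xy); cor(x, y) * cor(y, xy)    # the two sides of the r-identity above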
55,470
What techniques are used for empirical, stochastic simulation of a time series?
Would dynamic linear models be applicable? (State Space formulation, Kalman filter, etc.) The dlm package has some nice tools to create and simulate from models.
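A minimal sketch along those lines with the dlm package (the variance parameters here are arbitrary placeholders rather than estimates, and y_obs stands for your observed series):

library(dlm)
mod  <- dlmModPoly(order = 1, dV = 0.5, dW = 0.1)   # a simple local level model
filt <- dlmFilter(y_obs, mod)                       # filter the observed series
fc   <- dlmForecast(filt, nAhead = 20, sampleNew = 10)
str(fc$newObs)                                      # 10 simulated future sample paths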
55,471
What techniques are used for empirical, stochastic simulation of a time series?
Conceptually, the question fits into the framework of functional data analysis; see, for instance, Applied Functional Data Analysis by Ramsay and Silverman. The usual assumption here is that we have a data set of independent, and perhaps even identically distributed, smooth curves. By fitting an FDA model to your data over $[0, t_{\max}]$ you are able to predict a future $y_1(t)$ based on the fitted model, or, in principle, the continuation of $y_1(t)$ for $t > t'$ based on the conditional distribution, from the fitted model, of $y_1(t)$ for $t > t'$ given $y_1(t)$ for $0 \leq t \leq t'$. However, this may be easier said than done. A simple example is when your curves can be modeled via an ordinary differential equation (with random initial values). Then you can predict $y_1(t)$ for $t > t'$ by solving the ODE with the observed initial condition $y_1(t')$. I would recommend that you take a look at the book above, and perhaps also its more theoretical companion, Functional Data Analysis, or their web page for inspiration. I think there are many ways to proceed, depending on the relative impact of inhomogeneity in the $t$-variable and on how much information the partial observation of $y_1$ carries about the continuation of the curve.
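To make the ODE idea concrete, here is a toy sketch in R with deSolve, using an assumed exponential-decay model (nothing about your data implies this particular equation; t_prime and y_at_tprime are hypothetical observed values):

library(deSolve)
decay <- function(t, y, parms) list(-parms["k"] * y)   # dy/dt = -k * y
t_prime <- 5; y_at_tprime <- 2.3                       # last observed time and value
pred <- ode(y = c(y1 = y_at_tprime), times = seq(t_prime, 10, by = 0.1),
            func = decay, parms = c(k = 0.4))
head(pred)                                             # predicted continuation of y1(t) for t > t'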
55,472
How to derive a k-means objective function in matrix form?
Given an $m$ by $n$ matrix $X$, the algorithm seeks to group its $n$ columns, thought of as $m$-vectors, into a specified number of groups, $k$. This can be represented by an $n$ by $k$ matrix $A$ having entries in $\{0,1\}$ and one column for each of the $k$ groups. Column $j$ indicates which vectors in $X$ belong to group $j$; that is, $a_{ij} = 1$ if and only if column $i$ of $X$ is assigned to group $j$. Let $1_k$ be the column vector of $k$ 1's and $1_n$ the column vector of $n$ 1's. $A$ is constrained to satisfy $A\ 1_k = 1_n$, reflecting the assignment of each column of $X$ to exactly one group. The $m$ by $k$ matrix whose columns are the group centroids can be constructed as $$C = X\ A\ \textrm{diagonal}(1_n^{'}\ A)^{-1}.$$ The differences between the columns of $X$ and their associated centroids $C\ A^{'}$ are $$D = X - C\ A^{'},$$ also an $m$ by $n$ matrix, whence the objective function can be expressed as the number $$tr(D^{'}\ D)$$ (which is the sum of squares of the entries of $D$). For instance, consider forming two clusters of the points $(1,0), (-1,0), (0,2), (0,3), (0,4)$ in the plane ($k=2$, $m=2$, $n=5$). Then we can let $$X = \left( \begin{array}{ccccc} 1 & -1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 3 & 4 \end{array} \right).$$ To assign the first two points to the first cluster and the last three points to the second cluster, set $$A = \left( \begin{array}{cc} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \\ 0 & 1 \end{array} \right).$$ The centroids of these clusters are $\left((1,0)+(-1,0)\right)/2 = (0,0)$ and $\left((0,2)+(0,3)+(0,4)\right)/3 = (0,3)$, respectively, whence $$C = X\ A\ \textrm{diagonal}(2,3)^{-1} = \left( \begin{array}{cc} 0 & 0 \\ 0 & 3 \end{array} \right).$$ Thus $$C\ A^{'} = \left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 3 & 3 \end{array} \right)$$ (the columns give the centroids associated with the columns of $X$) and $$D = X - C\ A^{'} = \left( \begin{array}{ccccc} 1 & -1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 1 \end{array} \right)$$ and, finally, the value of the objective function equals the sum of squares of its entries, $4$.
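A short R check of the worked example (X and A are exactly the matrices above); the last line should print 4, the value of the objective function:

X <- matrix(c(1, 0, -1, 0, 0, 2, 0, 3, 0, 4), nrow = 2)   # columns are the five points
A <- matrix(c(1, 1, 0, 0, 0, 0, 0, 1, 1, 1), nrow = 5)    # cluster assignment matrix
C <- X %*% A %*% diag(1 / colSums(A))                     # group centroids, m x k
D <- X - C %*% t(A)                                       # differences from assigned centroids
sum(diag(t(D) %*% D))                                     # objective = sum of squares = 4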
55,473
How do you use draws from a Markov chain (Monte Carlo) for (bayesian) inference?
Bayesian statistics does not depend on Markov chains (well, in theory); Markov chain Monte Carlo is a method for making the computations in Bayesian statistics easier (doable). Generally we want to generate data from the posterior distribution, which we can easily compute parts of, but not always all of (the normalizing constant is usually the hard part). MCMC methods will generate data from the posterior without needing to know some of the parts that are harder to find; that is the benefit of MCMC with Bayesian stats (though there are some problems where the posterior is known exactly and MCMC is not needed at all). It is very rare that the posterior is uniform, so your example is not realistic, but it can still help with understanding. The short answer to the length of burn-in and how long to run the chain is "we don't know". The slightly longer (and more useful) answer is to run the chain for a while, then look at it to see if it looks good (yes, that is subjective, but experience helps, and if we knew the exact answer we would know enough not to need MCMC), then often run longer to be sure. I have seen cases where the chain was run for quite a few iterations and looked good, but the researchers decided to run it four times longer to be sure, and near the end it suddenly jumped to a different part of the distribution that had not been covered before. More dependency does mean that the chain needs to be run for longer (both burn-in and total runs), but as long as the chain has run long enough the dependency no longer matters. Think of a simple random sample with all values independent; now sort the values into order statistics. They are no longer independent, but the mean and SD are the same and estimate the population just as well. The fact that MCMC iterations are dependent does not matter so long as there are enough iterations to fully represent the distribution. Your uniform example shows the risks of running for too short a time.
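A minimal random-walk Metropolis sketch in R, using a toy standard-normal target (not your problem), just to make the points about burn-in and dependence concrete:

set.seed(1)
n_iter <- 20000
x <- numeric(n_iter); x[1] <- 5                  # deliberately poor starting value
for (i in 2:n_iter) {
  prop <- x[i - 1] + rnorm(1, sd = 1)            # symmetric random-walk proposal
  accept <- log(runif(1)) < dnorm(prop, log = TRUE) - dnorm(x[i - 1], log = TRUE)
  x[i] <- if (accept) prop else x[i - 1]
}
keep <- x[-(1:2000)]                             # discard a burn-in period
mean(keep)                                       # close to the true posterior mean, 0
acf(keep, plot = FALSE)$acf[2]                   # draws are autocorrelated, yet the mean is fine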
55,474
Multiple imputation on single subscale item or subscale scores?
I basically concur with everything wolf.rauch said here, and would like to discuss some alternatives that might be available to you. My understanding is that AMOS had had FIML (full information maximum likelihood) for continuous data for at least ten years before it was acquired by IBM -- see http://www.smallwaters.com/amos/faq/faqa-missdat.html, and that is an old FAQ by one of the original developers, who left the project around 2000. If you are willing to ignore the ordinal nature of your items, you can just use this method and won't need to bother with multiple imputation at all. If you don't like this solution and you want to retain the categorical nature of the data, you would need to find the chained equations method with ordinal links (if SPSS has it at all). If SPSS only imputes draws from a multivariate normal distribution, then you are back to the situation of ignoring the ordinal nature of the data, and in no way better off than with AMOS' FIML. (I've no clue what's available in SPSS; you'd have to figure it out. In the end, everything would be fruitless if AMOS does not support multiple imputation -- and that, again, I don't know.) If you are willing to consider Stata, there is a chance you'd be able to conduct your analysis in it, with all the bells and whistles of both multiple imputation for ordinal data, using either Patrick Royston's ice or the official mi, and then the new sem suite. Alternatively, you could run gllamm to obtain FIML estimates for ordinal data (although it would probably take an eternity to converge).
55,475
Multiple imputation on single subscale item or subscale scores?
Q1: I suppose you're not asking whether the variable names have anything to do with imputation. I suppose you're asking whether to do multiple imputation on your raw data or on "scales", which means sum scores of raw data variables that are assumed to measure the same construct. Given that you are concerned about the right method (which is good), well, how would you compute scale scores before you do anything about the missing data? Answer: you need to work with your raw data. Q2: Multiple imputation works by estimating missing values from the values of the other variables in your data set (this being a non-technical explanation). So it does not matter whether you recode your variables or not, if by recoding you mean changing the sign of the relation of one variable to the other variables. Still, I would recommend recoding before the imputation so that you don't get confused afterwards. Q3: I don't know why AMOS does not calculate estimates. I don't really know AMOS, but the user guide seems to say it should be able to calculate estimates for multiply imputed data sets. But there is a general answer anyway: the whole point of multiple imputation is that you get point estimates but also incorporate the additional variation introduced by the imputation process. This means you should not use a single "averaged" data set. Instead you work with all of your imputed data sets. First, you get point estimates for your model parameters by running your model (I suppose a structural equation model) on each of the data sets and taking the mean of the point estimates; then you compute standard errors by combining the between-imputation variance (the variance of the parameter estimates between imputed data sets) with the within-imputation variance (the mean of the squared standard errors, i.e. the sampling variances, from the different models). See any text on multiple imputation. A short and freely available tutorial is here: http://rhowell.ba.ttu.edu/Enders-MissingHancock.pdf
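A small sketch of that pooling step (Rubin's rules) in R, assuming you already have the point estimates and standard errors for one parameter from each of the m imputed data sets (the numbers in the last line are made up for illustration):

pool_mi <- function(est, se) {
  m     <- length(est)
  qbar  <- mean(est)                # pooled point estimate
  wbar  <- mean(se^2)               # within-imputation variance
  b     <- var(est)                 # between-imputation variance
  total <- wbar + (1 + 1 / m) * b   # total variance
  c(estimate = qbar, se = sqrt(total))
}
pool_mi(est = c(0.41, 0.38, 0.44, 0.40, 0.43), se = c(0.10, 0.11, 0.10, 0.12, 0.10))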
55,476
Multiple imputation on single subscale item or subscale scores?
There's a great article comparing item-level vs. scale-level imputation by Craig Enders and colleagues (see here; sadly, one needs institutional access). Basically, the authors conclude that both item-level and scale-level imputation are similar in the level of bias they introduce in scale estimates, but do differ in the efficiency (e.g., power), with scale-level imputation suffering a greater loss in power.
55,477
Bayesian analysis of data
If you just want to predict $Y$ then an explicit joint model for $X$ and $Y$ when $X$ is 4 dimensional is probably overkill. This is especially true if you truly don't have any information about the distribution of the $X_i$'s or the relationship between $Y$ and $X$. An exception would be if there is significant missingness in $X$, or perhaps measurement error in $X$. Bayesian approaches to binary regression will appear in any decent textbook on applied Bayesian methods; my usual recommendation is Bayesian Data Analysis by Gelman, Carlin, Stern & Rubin.
55,478
Looking for a name for a "mean-influencing" statistic
The mean is the coefficient in the regression of the data against the constant $1$. Your statistic, in this regression context, is the simplest possible example of the DFBETA diagnostic defined in Belsley, Kuh, & Welsch, Regression Diagnostics (J Wiley & Sons, 1980): ...we look first at the change in the estimated regression coefficients that would occur if the $i^\text{th}$ row were deleted. Denoting the coefficients estimated with the $i^\text{th}$ row deleted by $\mathbf{b}(i)$, this change is easily computed from the formula $$DFBETA_i = \mathbf{b} - \mathbf{b}(i) = \frac{(X^T X)^{-1} x_i^T e_i}{1 - h_i}$$ where $$ h_i = x_i (X^T X)^{-1} x_i^T \ldots$$ [pp 12-13, formulas (2.1) and (2.2)]. In this case the design matrix $X$ is the $n$ by $1$ matrix of ones, whence $(X^T X)^{-1} = 1/n$. The numbers $e_i$ are the residuals, $$e_i = x_i - \bar{x}.$$ Therefore $$\eqalign{ DFBETA_i &= \frac{x_i - \bar{x}}{n - 1} = \frac{1}{n-1}\left(x_i - \frac{1}{n}\sum_{j=1}^n x_j \right) \\ &= \frac{1}{n}\sum_{j=1}^n x_j - \frac{1}{n-1}\sum_{j \ne i} x_j \text{.} }$$
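A quick numerical check of this identity in R (x is just arbitrary example data):

set.seed(1)
x <- rnorm(8)
n <- length(x)
dfb_formula <- (x - mean(x)) / (n - 1)                                 # (x_i - xbar) / (n - 1)
dfb_loo     <- sapply(seq_len(n), function(i) mean(x) - mean(x[-i]))   # xbar minus leave-one-out mean
all.equal(dfb_formula, dfb_loo)                                        # TRUE, up to floating point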
55,479
Looking for a name for a "mean-influencing" statistic
It's close to (if not exactly) local influence and/or Cook's distance. JRSS B, Vol. 48, No. 2, 1986, p.133-169 is the classic paper. A bit dense but a place to start looking in the literature.
55,480
Covariates in regression models
Correlation with the dependent variable is a definite plus (especially for linear regression, where there are close links between the coefficients and covariance with the dependent variable). Correlation with the other covariates/predictors is somewhat more subtle and depends on your goal. Generally, it is considered good practice to include as many variables as feasible at first (especially ones that you have some reason to believe could be relevant from previous research or the like), and then try to optimize with some criterion (e.g. AIC or simple likelihood ratio tests) or through some optimizing algorithm (LASSO etc.); a minimal sketch is given below. I make an exception for perfectly correlated variables: there is no use in leaving them both in. You should always be careful about leaving covariates out, though, as leaving the wrong one(s) out could bias your coefficient estimates! Maybe you can ask your question somewhat more explicitly? Specify what your goals are in your research, and then we may be able to give more specific advice.
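A minimal sketch of criterion-based selection in R (mydata, y, and x1 through x4 are placeholders for your own variables, not anything from the question):

full <- lm(y ~ x1 + x2 + x3 + x4, data = mydata)   # start from a full additive model
slim <- step(full, direction = "backward")          # backward selection by AIC
AIC(full, slim)                                      # compare the two fits
anova(slim, full)                                    # F test comparing the nested models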
Covariates in regression models
Correlation with the dependent variable is a definite plus (especially for linear regression where there are close links between the coefficients and covariance with the dependent variable). Correlati
Covariates in regression models Correlation with the dependent variable is a definite plus (especially for linear regression where there are close links between the coefficients and covariance with the dependent variable). Correlation with the other covariates/predictors is somewhat more subtle and depends on your goal. Generally, it is considered good practice to include as many variables as feasible at first (especially ones that you have some reason to believe could be relevant from previous research or the like), and then try to optimize with some criterion (e.g. AIC or simple likelihood ratio tests) or through some optimizing algorithm (LASSO etc.). I make an exception for perfectly correlated variables: there is no use in leaving them both in. You should always be careful about leaving covariates out, though, as leaving the wrong one(s) out could bias your coefficient estimates! Maybe you can ask your question somewhat more explicitly? Specify what your goals are in your research, and then we may be better able to give specific advice.
Covariates in regression models Correlation with the dependent variable is a definite plus (especially for linear regression where there are close links between the coefficients and covariance with the dependent variable). Correlati
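As a rough illustration of the advice above (start broad, then prune with a criterion), here is a hedged R sketch; the data frame dat and the covariates x1, x2, x3 are placeholders, not taken from the question.

# Hypothetical data frame 'dat' with response y and candidate covariates x1, x2, x3
full  <- lm(y ~ x1 + x2 + x3, data = dat)   # start with everything plausibly relevant
small <- lm(y ~ x1, data = dat)             # a candidate reduced model
AIC(full, small)                            # lower AIC is preferred
anova(small, full)                          # F test of the dropped terms
step(full, direction = "backward")          # automated AIC-based pruning (use with care)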
55,481
Covariates in regression models
This really depends on the scientific question being asked. If you are interested in whether there is a relationship between x1 and y, then do the regression between x1 and y. If you are interested in whether x1 helps predict y above and beyond the effects of x2, x3, etc. then you need to include the other x's in the model. For example: suppose that Y is length of stay in the hospital and x1 is the dose (amount) of medication for a given set of patients. Now you may want to adjust for the severity of illness: if severity is measured at the time of entry into the hospital/study and the treatment starts just after that measurement, then you would probably want to include severity in the model. But if your severity of illness measure is the most severe the illness gets during their stay and the medicine reduces the severity and therefore shortens the length of stay, then including severity of illness will hide the effectiveness of the treatment rather than give any useful information. Before doing a regression it can be helpful to draw out a path diagram, write out the names of all the variables that could go into the regression, then draw arrows where you know, or strongly believe, that there are relationships and how the causality most likely goes. Then draw an arrow or arrows in another color or style to show what relationships you think might exist but don't know and want to test. Considering this diagram can be useful in thinking about what models make the most sense to fit and compare to each other.
Covariates in regression models
This really depends on the scientific question being asked. If you are interested in whether there is a relationship between x1 and y, then do the regression between x1 and y. If you are interested in whether
Covariates in regression models This really depends on the scientific question being asked. If you are interested in whether there is a relationship between x1 and y, then do the regression between x1 and y. If you are interested in whether x1 helps predict y above and beyond the effects of x2, x3, etc. then you need to include the other x's in the model. For example: suppose that Y is length of stay in the hospital and x1 is the dose (amount) of medication for a given set of patients. Now you may want to adjust for the severity of illness: if severity is measured at the time of entry into the hospital/study and the treatment starts just after that measurement, then you would probably want to include severity in the model. But if your severity of illness measure is the most severe the illness gets during their stay and the medicine reduces the severity and therefore shortens the length of stay, then including severity of illness will hide the effectiveness of the treatment rather than give any useful information. Before doing a regression it can be helpful to draw out a path diagram, write out the names of all the variables that could go into the regression, then draw arrows where you know, or strongly believe, that there are relationships and how the causality most likely goes. Then draw an arrow or arrows in another color or style to show what relationships you think might exist but don't know and want to test. Considering this diagram can be useful in thinking about what models make the most sense to fit and compare to each other.
Covariates in regression models This really depends on the scientific question being asked. If you are interested in whether there is a relationship between x1 and y, then do the regression between x1 and y. If you are interested in whether
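To make the "above and beyond" point concrete, a minimal sketch along the lines of the hospital example (the data frame hosp and its columns los, dose and severity are assumptions) compares the dose coefficient with and without the baseline adjustment.

# Hypothetical data: los = length of stay, dose = medication dose, severity = baseline severity
unadjusted <- lm(los ~ dose, data = hosp)
adjusted   <- lm(los ~ dose + severity, data = hosp)
coef(summary(unadjusted))["dose", ]   # effect of dose ignoring severity
coef(summary(adjusted))["dose", ]     # effect of dose holding baseline severity fixed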
55,482
Sliding window validation for time series
Your understanding about sliding window analysis is generally correct. You may find it helpful to separate the model validation process from the actual forecasting. In model validation, you use $k$ instances to train a model that predicts "one step" forward. Make sure each of your $k$ instances uses only information available at that particular time. This can be subtle, because it is easy to accidentally peek ahead into the future and pollute your out-of-sample test. For example, you might accidentally use the entire time series history in feature selection, and then use those features to test the model at every step of time. This is cheating, and will give you an overestimate of accuracy. This is mentioned in Elements of Statistical Learning, but outside the sliding window time series context. It is also easy to accidentally pollute with future information if some of your independent variables are asset returns. Say I use the return on an asset from time $t=21$ days to $t=28$ days to test at $t=21$ days. In this case, I have also polluted the out-of-sample test. Instead I would want to train with instances up to $t=21$ days, and test with one step at $t=28$ days. When you have validated your model, and are happy with the parameters and feature selection, then you typically train with all of your data and forecast into the actual future.
Sliding window validation for time series
Your understanding about sliding window analysis is generally correct. You may find it helpful to separate the model validation process from the actual forecasting. In model validation, you use $k$ in
Sliding window validation for time series Your understanding about sliding window analysis is generally correct. You may find it helpful to separate the model validation process from the actual forecasting. In model validation, you use $k$ instances to train a model that predicts "one step" forward. Make sure each of your $k$ instances uses only information available at that particular time. This can be subtle, because it is easy to accidentally peek ahead into the future and pollute your out-of-sample test. For example, you might accidentally use the entire time series history in feature selection, and then use those features to test the model at every step of time. This is cheating, and will give you an overestimate of accuracy. This is mentioned in Elements of Statistical Learning, but outside the sliding window time series context. It is also easy to accidentally pollute with future information if some of your independent variables are asset returns. Say I use the return on an asset from time $t=21$ days to $t=28$ days to test at $t=21$ days. In this case, I have also polluted the out-of-sample test. Instead I would want to train with instances up to $t=21$ days, and test with one step at $t=28$ days. When you have validated your model, and are happy with the parameters and feature selection, then you typically train with all of your data and forecast into the actual future.
Sliding window validation for time series Your understanding about sliding window analysis is generally correct. You may find it helpful to separate the model validation process from the actual forecasting. In model validation, you use $k$ in
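A bare-bones rolling-origin loop might look like the following R sketch; the AR(1) model, the simulated series and the window length k = 100 are illustrative choices, not part of the question. The key point is that each fit only sees data available at the forecast origin.

set.seed(1)
y <- arima.sim(list(ar = 0.6), n = 200)   # stand-in for the real series
k <- 100                                  # training window length
errors <- sapply(k:(length(y) - 1), function(t) {
  train <- y[(t - k + 1):t]               # only information available at time t
  fit   <- arima(train, order = c(1, 0, 0))
  fc    <- predict(fit, n.ahead = 1)$pred # one-step-ahead forecast
  y[t + 1] - fc                           # out-of-sample error
})
sqrt(mean(errors^2))                      # rolling one-step RMSE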
55,483
How can I generate correlated timeseries made up of 0s and 1s?
A standard method is to begin by generating an autocorrelated Gaussian process $z_i$. (It doesn't have to be Gaussian, but such processes are easy to generate.) Take the logistic (inverse logit) of the values, producing a series of numbers $p_i = 1/\left(1 + \exp(-z_i)\right)$ in the interval $(0,1)$. Independently draw values from Bernoulli($p_i$) distributions to create a series of $0$ and $1$ values. Clustering will tend to occur with positive autocorrelation. As a bonus, this procedure allows you to perform two stages of simulation: you can fix the underlying realization of the Gaussian process and iterate the second stage of Bernoulli draws. Or you can generate a separate realization of the Gaussian process each time. There are probably R packages to do all this directly. The geoRGLM package performs this simulation in two dimensions (using a Matern autocorrelation function, which includes exponential and Gaussian autocorrelations as special cases); you could simulate along a straight line (or $1$ by $n$ grid) to obtain a time series.
How can I generate correlated timeseries made up of 0s and 1s?
A standard method is to begin by generating an autocorrelated Gaussian process $z_i$. (It doesn't have to be Gaussian, but such processes are easy to generate.) Take the logistic (inverse logit) of
How can I generate correlated timeseries made up of 0s and 1s? A standard method is to begin by generating an autocorrelated Gaussian process $z_i$. (It doesn't have to be Gaussian, but such processes are easy to generate.) Take the logistic (inverse logit) of the values, producing a series of numbers $p_i = 1/\left(1 + \exp(-z_i)\right)$ in the interval $(0,1)$. Independently draw values from Bernoulli($p_i$) distributions to create a series of $0$ and $1$ values. Clustering will tend to occur with positive autocorrelation. As a bonus, this procedure allows you to perform two stages of simulation: you can fix the underlying realization of the Gaussian process and iterate the second stage of Bernoulli draws. Or you can generate a separate realization of the Gaussian process each time. There are probably R packages to do all this directly. The geoRGLM package performs this simulation in two dimensions (using a Matern autocorrelation function, which includes exponential and Gaussian autocorrelations as special cases); you could simulate along a straight line (or $1$ by $n$ grid) to obtain a time series.
How can I generate correlated timeseries made up of 0s and 1s? A standard method is to begin by generating an autocorrelated Gaussian process $z_i$. (It doesn't have to be Gaussian, but such processes are easy to generate.) Take the logistic (inverse logit) of
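In base R the recipe can be sketched directly: simulate an AR(1) Gaussian series, pass it through the inverse logit, and make independent Bernoulli draws. The AR coefficient 0.9 and the scale factor 2 are arbitrary choices.

set.seed(42)
n   <- 500
z   <- 2 * arima.sim(list(ar = 0.9), n = n)   # autocorrelated Gaussian process
p   <- 1 / (1 + exp(-z))                      # inverse logit: probabilities in (0, 1)
x01 <- rbinom(n, size = 1, prob = p)          # independent Bernoulli draws given p
acf(x01)                                      # the 0/1 series inherits positive autocorrelation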
55,484
Assessing rater bias where one rater has given one very high rating and the remainder very low ratings
Overview Usually when I think of multiple raters assessing multiple objects, I think of "bias" as a mean difference in expected rating of a particular judge from the mean of a hypothetical population of judges. This is a rather statistical definition of bias, which does not necessarily correspond to everyday definitions of bias, which would presumably also include the notion of failure to impartially apply relevant standards. Basic ideas Bearing in mind that there is probably an established literature on this, these are the ideas that came to my mind: Compare mean rating of each judge is a given judge harsher or more lenient on average? Compare standard deviation or variance of each judge is the judge differentiating to the extent that is expected or in ways consistent with other judges? For each judge, correlate that judge's ratings with the mean of all other judges, and use the correlation as an index of the validity of that judge's ratings is the judge identifying quality in the same way as other judges? Build a model predicting ratings for contestant i by judge j and record the residuals; large absolute residuals could be excluded from some overall rating. The model could be as simple as an ANOVA predicting response for contestant i by judge j using just the main effects (no interaction effects). is a judge responding in an uncharacteristic manner for a particular contestant? The mean approach is what I think of as bias. The residuals approach will capture what you are interested in. Basic implementation in R I hacked this out in a few minutes, so hopefully there aren't any bugs (but use at your own risk). # Import data x <- structure(list(contestant = c(1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L), judge = c(1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, 5L), rating = c(83.03, 67.15, 72.05, 86.95, 44, 96.5, 89.9, 84.6, 93.3, 65.15, 88.5, 85.36, 78.95, 88, 52.45, 90.5, 89.85, 85, 94.1, 96.05)), .Names = c("contestant", "judge", "rating" ), class = "data.frame", row.names = c(NA, -20L)) > # Mean: Judge's Mean rating - i.e., bias round(tapply(x$rating, x$judge, function(X) mean(X)), 1) 1 2 3 4 5 89.6 83.1 80.2 90.6 64.4 This shows that judge 5 is harsh and perhaps also that judge 1 and 4 maybe too lenient. > # SD: Judge's SD rating i.e., excessive or insufficient variability in ratings round(tapply(x$rating, x$judge, function(X) sd(X)), 1) 1 2 3 4 5 5.6 10.8 6.1 3.6 22.8 This shows that judge 5 is vastly more variable, but equally the other judges vary in their variability quite a lot also. > # Correlation judgecor <- list() for (i in unique(x$judge)) { contestant_mean <- tapply( x[x$judge != i, "rating"], x[x$judge != i, "contestant"], function(X) mean(X)) judgecor[[as.character(i)]] <- cor(x[x$judge == i, "rating"], contestant_mean) } round(unlist(judgecor), 2) 1 2 3 4 5 0.70 0.84 0.96 0.95 0.73 Judge 1 and 5 are less consistent with the other judges. > # Residuals fit <- lm(rating~factor(judge)+factor(contestant), x) xres <- data.frame(x, res=residuals(fit)) xres$absres <- abs(xres$res) # Overview of problematic ratings head(xres[order(xres$absres, decreasing=TRUE), ], 5) contestant judge rating res absres 20 4 5 96.05 22.107 22.107 5 1 5 44.00 -9.479 9.479 15 3 5 52.45 -9.045 9.045 16 4 1 90.50 -8.663 8.663 4 1 4 86.95 7.296 7.296 This shows the largest five absolute residuals in ratings after taking out mean contestant and mean rater effects. 
It shows clearly that the rating by judge 5 on contestant 4 was an extreme outlier, relative to the other residuals.
Assessing rater bias where one rater has given one very high rating and the remainder very low ratin
Overview Usually when I think of multiple raters assessing multiple objects, I think of "bias" as a mean difference in expected rating of a particular judge from the mean of a hypothetical population
Assessing rater bias where one rater has given one very high rating and the remainder very low ratings Overview Usually when I think of multiple raters assessing multiple objects, I think of "bias" as a mean difference in expected rating of a particular judge from the mean of a hypothetical population of judges. This is a rather statistical definition of bias, which does not necessarily correspond to everyday definitions of bias, which would presumably also include the notion of failure to impartially apply relevant standards. Basic ideas Bearing in mind that there is probably an established literature on this, these are the ideas that came to my mind: Compare mean rating of each judge is a given judge harsher or more lenient on average? Compare standard deviation or variance of each judge is the judge differentiating to the extent that is expected or in ways consistent with other judges? For each judge, correlate that judge's ratings with the mean of all other judges, and use the correlation as an index of the validity of that judge's ratings is the judge identifying quality in the same way as other judges? Build a model predicting ratings for contestant i by judge j and record the residuals; large absolute residuals could be excluded from some overall rating. The model could be as simple as an ANOVA predicting response for contestant i by judge j using just the main effects (no interaction effects). is a judge responding in an uncharacteristic manner for a particular contestant? The mean approach is what I think of as bias. The residuals approach will capture what you are interested in. Basic implementation in R I hacked this out in a few minutes, so hopefully there aren't any bugs (but use at your own risk). # Import data x <- structure(list(contestant = c(1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L), judge = c(1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, 5L), rating = c(83.03, 67.15, 72.05, 86.95, 44, 96.5, 89.9, 84.6, 93.3, 65.15, 88.5, 85.36, 78.95, 88, 52.45, 90.5, 89.85, 85, 94.1, 96.05)), .Names = c("contestant", "judge", "rating" ), class = "data.frame", row.names = c(NA, -20L)) > # Mean: Judge's Mean rating - i.e., bias round(tapply(x$rating, x$judge, function(X) mean(X)), 1) 1 2 3 4 5 89.6 83.1 80.2 90.6 64.4 This shows that judge 5 is harsh and perhaps also that judge 1 and 4 maybe too lenient. > # SD: Judge's SD rating i.e., excessive or insufficient variability in ratings round(tapply(x$rating, x$judge, function(X) sd(X)), 1) 1 2 3 4 5 5.6 10.8 6.1 3.6 22.8 This shows that judge 5 is vastly more variable, but equally the other judges vary in their variability quite a lot also. > # Correlation judgecor <- list() for (i in unique(x$judge)) { contestant_mean <- tapply( x[x$judge != i, "rating"], x[x$judge != i, "contestant"], function(X) mean(X)) judgecor[[as.character(i)]] <- cor(x[x$judge == i, "rating"], contestant_mean) } round(unlist(judgecor), 2) 1 2 3 4 5 0.70 0.84 0.96 0.95 0.73 Judge 1 and 5 are less consistent with the other judges. 
> # Residuals fit <- lm(rating~factor(judge)+factor(contestant), x) xres <- data.frame(x, res=residuals(fit)) xres$absres <- abs(xres$res) # Overview of problematic ratings head(xres[order(xres$absres, decreasing=TRUE), ], 5) contestant judge rating res absres 20 4 5 96.05 22.107 22.107 5 1 5 44.00 -9.479 9.479 15 3 5 52.45 -9.045 9.045 16 4 1 90.50 -8.663 8.663 4 1 4 86.95 7.296 7.296 This shows the largest five absolute residuals in ratings after taking out mean contestant and mean rater effects. It shows clearly that the rating by judge 5 on contestant 4 was an extreme outlier, relative to the other residuals.
Assessing rater bias where one rater has given one very high rating and the remainder very low ratin Overview Usually when I think of multiple raters assessing multiple objects, I think of "bias" as a mean difference in expected rating of a particular judge from the mean of a hypothetical population
55,485
Assessing rater bias where one rater has given one very high rating and the remainder very low ratings
You could measure agreement in ratings across judges with inter-rater reliability statistics. This would tell you whether the judging of contestants is consistent across judges. There may be a more sophisticated way of doing this, but I might naively try dropping out each of the five judges individually looking at how the reliability changes. But with such a small sample, I don't think you'll get particularly strong answers whatever you do.
Assessing rater bias where one rater has given one very high rating and the remainder very low ratin
You could measure agreement in ratings across judges with inter-rater reliability statistics. This would tell you whether the judging of contestants is consistent across judges. There may be a more so
Assessing rater bias where one rater has given one very high rating and the remainder very low ratings You could measure agreement in ratings across judges with inter-rater reliability statistics. This would tell you whether the judging of contestants is consistent across judges. There may be a more sophisticated way of doing this, but I might naively try dropping out each of the five judges individually looking at how the reliability changes. But with such a small sample, I don't think you'll get particularly strong answers whatever you do.
Assessing rater bias where one rater has given one very high rating and the remainder very low ratin You could measure agreement in ratings across judges with inter-rater reliability statistics. This would tell you whether the judging of contestants is consistent across judges. There may be a more so
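One crude base-R version of the drop-one-judge idea, using the contestants-by-judges ratings quoted in the other answers, is to recompute the average pairwise inter-judge correlation with each judge removed; packages such as irr or psych provide proper ICC estimates, which are not shown here.

# Ratings matrix: contestants in rows, judges 1-5 in columns (data from the question)
R <- matrix(c(83.03, 67.15, 72.05, 86.95, 44,
              96.5,  89.9,  84.6,  93.3,  65.15,
              88.5,  85.36, 78.95, 88,    52.45,
              90.5,  89.85, 85,    94.1,  96.05),
            nrow = 4, byrow = TRUE)
avg_cor <- function(M) { cc <- cor(M); mean(cc[lower.tri(cc)]) }  # mean pairwise correlation
avg_cor(R)                                        # all five judges
sapply(1:ncol(R), function(j) avg_cor(R[, -j]))   # agreement with judge j dropped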
55,486
Assessing rater bias where one rater has given one very high rating and the remainder very low ratings
You won't be able to demonstrate bias, but you can try to establish whether the 96.05 is an outlier using Dixon's Test for Outliers. If these judges went on to judge these same contestants on another task/domain, you could test for the replicability of this unusual result for Judge 5 and Contestant 4.
Assessing rater bias where one rater has given one very high rating and the remainder very low ratin
You won't be able to demonstrate bias, but you can try to establish whether the 96.05 is an outlier using Dixon's Test for Outliers. If these judges went on to judge these same contestants on another
Assessing rater bias where one rater has given one very high rating and the remainder very low ratings You won't be able to demonstrate bias, but you can try to establish whether the 96.05 is an outlier using Dixon's Test for Outliers. If these judges went on to judge these same contestants on another task/domain, you could test for the replicability of this unusual result for Judge 5 and Contestant 4.
Assessing rater bias where one rater has given one very high rating and the remainder very low ratin You won't be able to demonstrate bias, but you can try to establish whether the 96.05 is an outlier using Dixon's Test for Outliers. If these judges went on to judge these same contestants on another
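If the outliers package is available, Dixon's test can be run directly on judge 5's four ratings (values taken from the question's data); treat this as a sketch only, since n = 4 gives the test very little power.

# install.packages("outliers")  # if not installed
library(outliers)
judge5 <- c(44, 65.15, 52.45, 96.05)   # judge 5's four ratings
dixon.test(judge5)                     # tests the most extreme value (here 96.05)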
55,487
Assessing rater bias where one rater has given one very high rating and the remainder very low ratings
You could think of this as a test of variances. Judge 5's scores will get more weight because the variability of the scores is higher. This test would be for the equality of two variances. It's in most intro stat books, and even in Excel, which provides the following results for judge 5 versus judges 1-4 F-Test Two-Sample for Variances Variable 1 Variable 2 Mean 64.4125 85.85875 Variance 520.415625 60.13891833 Observations 4 16 df 3 15 F 8.653558119 P(F<=f) one-tail 0.001424952 F Critical one-tail 3.287382105 This does show judge 5 is significantly more variable than the other judges, but frankly I would be careful of a result like this because of the amount of "fishing" involved. You're looking at this after-the-fact, with several possible hypotheses available (just to start, there are equivalent tests of judge 1 against 2,3,4,5, judge 2 against 1,3,4,5, etc.) It's also possible that even if you are observing something, it might not be what you think. You might not be observing bias for/against contestants but rather a consistent tendency to view things in a different way -- sort of like umpires being willing to call a high strike in American baseball, with various pitchers tending to use/not use a high strike. If you had more contest results, you could compare judge 5 (and others) versus some overall norm. That gets around the fact that with a small sample of judges and ratings (and a posthoc analysis!) you can't really get above the suspicion level.
Assessing rater bias where one rater has given one very high rating and the remainder very low ratin
You could think of this as a test of variances. Judge 5's scores will get more weight because the variability of the scores is higher. This test would be for the equality of two variances. It's in mo
Assessing rater bias where one rater has given one very high rating and the remainder very low ratings You could think of this as a test of variances. Judge 5's scores will get more weight because the variability of the scores is higher. This test would be for the equality of two variances. It's in most intro stat books, and even in Excel, which provides the following results for judge 5 versus judges 1-4 F-Test Two-Sample for Variances Variable 1 Variable 2 Mean 64.4125 85.85875 Variance 520.415625 60.13891833 Observations 4 16 df 3 15 F 8.653558119 P(F<=f) one-tail 0.001424952 F Critical one-tail 3.287382105 This does show judge 5 is significantly more variable than the other judges, but frankly I would be careful of a result like this because of the amount of "fishing" involved. You're looking at this after-the-fact, with several possible hypotheses available (just to start, there are equivalent tests of judge 1 against 2,3,4,5, judge 2 against 1,3,4,5, etc.) It's also possible that even if you are observing something, it might not be what you think. You might not be observing bias for/against contestants but rather a consistent tendency to view things in a different way -- sort of like umpires being willing to call a high strike in American baseball, with various pitchers tending to use/not use a high strike. If you had more contest results, you could compare judge 5 (and others) versus some overall norm. That gets around the fact that with a small sample of judges and ratings (and a posthoc analysis!) you can't really get above the suspicion level.
Assessing rater bias where one rater has given one very high rating and the remainder very low ratin You could think of this as a test of variances. Judge 5's scores will get more weight because the variability of the scores is higher. This test would be for the equality of two variances. It's in mo
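The same F test is one line in R with var.test(); the vectors below are the ratings from the question (judge 5's four scores against the sixteen scores of judges 1-4), so the output should reproduce the Excel F statistic and one-tailed p-value up to rounding. The caveat about post-hoc fishing applies just as much here.

judge5 <- c(44, 65.15, 52.45, 96.05)
others <- c(83.03, 67.15, 72.05, 86.95,   # contestant 1, judges 1-4
            96.5,  89.9,  84.6,  93.3,    # contestant 2
            88.5,  85.36, 78.95, 88,      # contestant 3
            90.5,  89.85, 85,    94.1)    # contestant 4
var.test(judge5, others, alternative = "greater")   # one-sided F test of equal variances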
55,488
Expected value of a transformation
Yes. In general if $X\sim f(x)$ then for a function $g(x)$ you have $E(g(X)) = \int g(x)f(x)dx$. You can verify this for simple cases by deriving the distribution of the transformed variable. The completely general result takes some more advanced math which you can probably safely avoid :)
Expected value of a transformation
Yes. In general if $X\sim f(x)$ then for a function $g(x)$ you have $E(g(X)) = \int g(x)f(x)dx$. You can verify this for simple cases by deriving the distribution of the transformed variable. The comp
Expected value of a transformation Yes. In general if $X\sim f(x)$ then for a function $g(x)$ you have $E(g(X)) = \int g(x)f(x)dx$. You can verify this for simple cases by deriving the distribution of the transformed variable. The completely general result takes some more advanced math which you can probably safely avoid :)
Expected value of a transformation Yes. In general if $X\sim f(x)$ then for a function $g(x)$ you have $E(g(X)) = \int g(x)f(x)dx$. You can verify this for simple cases by deriving the distribution of the transformed variable. The comp
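As a concrete check of $E(g(X)) = \int g(x)f(x)dx$, R's integrate() can be compared against a known answer; here $X$ is standard normal and $g(x) = x^2$, so the integral should come out to 1.

g <- function(x) x^2
f <- dnorm                                      # standard normal density
integrate(function(x) g(x) * f(x), -Inf, Inf)   # E[X^2] = Var(X) + E(X)^2 = 1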
55,489
Expected value of a transformation
Another approach if you are happy with a numerical estimate (as opposed to the theoretical exact value) is to generate a bunch of data from the distribution, do the transformation, then take the mean of the transformed data as the estimate of the expected value. This avoids integration, which can be nice in ugly cases, but it does not give the theory, relationship, or exact value.
Expected value of a transformation
Another approach if you are happy with a numerical estimate (as opposed to the theoretical exact value) is to generate a bunch of data from the distribution, do the transformation, then take the mean
Expected value of a transformation Another approach if you are happy with a numerical estimate (as opposed to the theoretical exact value) is to generate a bunch of data from the distribution, do the transformation, then take the mean of the transformed data as the estimate of the expected value. This avoids integration, which can be nice in ugly cases, but it does not give the theory, relationship, or exact value.
Expected value of a transformation Another approach if you are happy with a numerical estimate (as opposed to the theoretical exact value) is to generate a bunch of data from the distribution, do the transformation, then take the mean
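The Monte Carlo version of the same check is equally short; with enough draws the sample mean of the transformed values approaches the exact value of 1, up to simulation error.

set.seed(123)
x <- rnorm(1e6)   # draws from the distribution of X
mean(x^2)         # Monte Carlo estimate of E[X^2]; exact value is 1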
55,490
Calculating the distribution of maximal value of $n$ draws from a normal distribution [duplicate]
If $\zeta_j = \sigma \xi_j$ for some $\sigma > 0$, then $$E[\max|\zeta_j|] = E[\max|\sigma \xi_j|] = E[\sigma \max|\xi_j|] = \sigma E[\max|\xi_j|]$$ and this tells us how to move from a standard normal with mean $0$ and standard deviation $1$ to a normal distribution with mean $0$ and standard deviation $\sigma$.
Calculating the distribution of maximal value of $n$ draws from a normal distribution [duplicate]
If $\zeta_j = \sigma \xi_j$ for some $\sigma > 0$, then $$E[\max|\zeta_j|] = E[\max|\sigma \xi_j|] = E[\sigma \max|\xi_j|] = \sigma E[\max|\xi_j|]$$ and this tells us how to move fr
Calculating the distribution of maximal value of $n$ draws from a normal distribution [duplicate] If $\zeta_j = \sigma \xi_j$ for some $\sigma > 0$, then $$E[\max|\zeta_j|] = E[\max|\sigma \xi_j|] = E[\sigma \max|\xi_j|] = \sigma E[\max|\xi_j|]$$ and this tells us how to move from a standard normal with mean $0$ and standard deviation $1$ to a normal distribution with mean $0$ and standard deviation $\sigma$.
Calculating the distribution of maximal value of $n$ draws from a normal distribution [duplicate] If $\zeta_j = \sigma \xi_j$ for some $\sigma > 0$, then $$E[\max|\zeta_j|] = E[\max|\sigma \xi_j|] = E[\sigma \max|\xi_j|] = \sigma E[\max|\xi_j|]$$ and this tells us how to move fr
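A quick simulation makes the scaling argument tangible; n = 10 draws per replicate and sigma = 3 are arbitrary choices, and the ratio of the two estimates should come out close to sigma.

set.seed(1)
n     <- 10     # draws per replicate
sigma <- 3
reps  <- 1e5
m_std    <- replicate(reps, max(abs(rnorm(n))))               # standard normal
m_scaled <- replicate(reps, max(abs(rnorm(n, sd = sigma))))   # N(0, sigma^2)
mean(m_scaled) / mean(m_std)                                  # should be close to sigma = 3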
55,491
Calculating the distribution of maximal value of $n$ draws from a normal distribution [duplicate]
Intuitively: values of the standard normal distribution (including the absolute maximum) 'tend to be' 1 SD = 1 away from 0. In non-standard zero-mean normal, the data 'tend to be' 1 SD = sigma away from 0. You could say that as long as you're doing linear stuff, all distances to zero will blow up by a factor sigma.
Calculating the distribution of maximal value of $n$ draws from a normal distribution [duplicate]
Intuitively: values of the standard normal distribution (including the absolute maximum) 'tend to be' 1 SD = 1 away from 0. In non-standard zero-mean normal, the data 'tend to be' 1 SD = sigma away fr
Calculating the distribution of maximal value of $n$ draws from a normal distribution [duplicate] Intuitively: values of the standard normal distribution (including the absolute maximum) 'tend to be' 1 SD = 1 away from 0. In non-standard zero-mean normal, the data 'tend to be' 1 SD = sigma away from 0. You could say that as long as you're doing linear stuff, all distances to zero will blow up by a factor sigma.
Calculating the distribution of maximal value of $n$ draws from a normal distribution [duplicate] Intuitively: values of the standard normal distribution (including the absolute maximum) 'tend to be' 1 SD = 1 away from 0. In non-standard zero-mean normal, the data 'tend to be' 1 SD = sigma away fr
55,492
Counterexample for interaction and parallel curves?
This depends on what is meant by "interaction". If the data have no noise - the plot is literally just two parallel lines, then there is certainly no interaction, we know this deductively, without any need for statistics. Secondly if the lines are not parallel, then we know deductively that there is interaction. So there is no counter example if there is no noise. But if there is noise (or error), then there is basically more than one possible place that the "noiseless" or "true" lines could be. It is also possible for the true lines to be parallel but if the noise is big enough and you get an "unlucky" sample of noise, then the noisy lines will cross. Just how unlucky depends on how "non-parallel" the two "true lines" are and how many units have been sampled. Consider the OLS case, the lines are generated by: $$y_{i}=x_{i}^{T}\beta_{true}+n_{i}$$ Where $\beta_{true}$ is a 4-D vector with the intercept for group 1, the offset for group 2, the slope for group 1 and the offset to the slope for group 2. Now you fit an OLS to the observed data, and you get $$\beta_{OLS}=(X^{T}X)^{-1}X^{T}Y=(X^{T}X)^{-1}X^{T}(X\beta_{true}+n)=\beta_{true}+(X^{T}X)^{-1}X^{T}n$$ So by a careful choice of the noise we can make the OLS estimates be basically anything. So I don't have to invert a $4\times 4$ matrix, I will specialise to the case where both intercepts are equal to zero, and we have $$y_{ij}=\beta_{1}x_{ij}+\beta_{2}x_{i2}I(j=2)$$ And then $$(X^{T}X)^{-1}=\frac{1}{\left(\sum_{i}x_{i1}^{2}\right)\left(\sum_{i}x_{i2}^{2}\right)}\begin{pmatrix} \sum_{i}x_{i2}^{2} & -\sum_{i}x_{i2}^{2} \\ -\sum_{i}x_{i2}^{2} & \sum_{i}x_{i1}^{2}+\sum_{i}x_{i2}^{2} \end{pmatrix}$$ $$=\frac{1}{\sum_{i}x_{i1}^{2}}\begin{pmatrix} 1 & -1 \\ -1 & 1\end{pmatrix} +\frac{1}{\sum_{i}x_{i2}^{2}}\begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix}$$ Now for $X^{T}n$ we have: $$X^{T}n=\sum_{i}x_{i2}n_{i2}\begin{pmatrix} 1\\1\end{pmatrix} +\sum_{i}x_{i1}n_{i1}\begin{pmatrix} 1\\0\end{pmatrix}$$ And so the total error from the regression is: $$\frac{\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i1}^{2}}\begin{pmatrix} 1\\-1\end{pmatrix} +\frac{\sum_{i}x_{i2}n_{i2}}{\sum_{i}x_{i2}^{2}}\begin{pmatrix} 0\\1\end{pmatrix}$$ Now if the true slopes are parallel, so that $\beta_{2,true}=0$, then the OLS estimates will be: $$\hat{\beta}_{1}=\beta_{1,true}+\frac{\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i1}^{2}}$$ $$\hat{\beta}_{2}=\frac{\sum_{i}x_{i2}n_{i2}}{\sum_{i}x_{i2}^{2}}-\frac{\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i1}^{2}}$$ Now this shows that the OLS estimate can indeed lead to erroneous interactions, just choose the "true" noise such that it is highly correlated with $x_{i1}$ - essentially you need to violate one of the assumptions of OLS, homoscedasticity of the noise. So if you generate data according to: $$y_{i1}=x_{i1}(\beta_{1,true}+n_{i1})$$ $$y_{i2}=x_{i2}\beta_{1,true}+n_{i2}$$ And then try to fit an interaction model using OLS of $y$ on $x$ with an interaction, you will find a significant result, even though the true betas are the same. The plots will cross because of the fanning in the first group. One example data set (true beta is 2 and noise was generated from standard normal). You get a t-statistic above 10 for the interaction effect: $$\begin{array}{c|cc} group & y & x \\ 1 & 1.282817715 & 1 \\ 1 & 2.026032115 & 2 \\ 1 & 5.9786882 & 3 \\ 1 & 22.1588319 & 7 \\ 2 & 16.28587668 & 9 \\ 2 & 15.12007527 & 6 \\ 2 & 9.566273403 & 5 \\ \end{array}$$
Counterexample for interaction and parallel curves?
This depends on what is meant by "interaction". If the data have no noise - the plot is literally just two parallel lines, then there is certainly no interaction, we know this deductively, without an
Counterexample for interaction and parallel curves? This depends on what is meant by "interaction". If the data have no noise - the plot is literally just two parallel lines, then there is certainly no interaction, we know this deductively, without any need for statistics. Secondly if the lines are not parallel, then we know deductively that there is interaction. So there is no counter example if there is no noise. But if there is noise (or error), then there is basically more than one possible place that the "noiseless" or "true" lines could be. It is also possible for the true lines to be parallel but if the noise is big enough and you get an "unlucky" sample of noise, then the noisy lines will cross. Just how unlucky depends on how "non-parallel" the two "true lines" are and how many units have been sampled. Consider the OLS case, the lines are generated by: $$y_{i}=x_{i}^{T}\beta_{true}+n_{i}$$ Where $\beta_{true}$ is a 4-D vector with the intercept for group 1, the offset for group 2, the slope for group 1 and the offset to the slope for group 2. Now you fit an OLS to the observed data, and you get $$\beta_{OLS}=(X^{T}X)^{-1}X^{T}Y=(X^{T}X)^{-1}X^{T}(X\beta_{true}+n)=\beta_{true}+(X^{T}X)^{-1}X^{T}n$$ So by a careful choice of the noise we can make the OLS estimates be basically anything. So I don't have to invert a $4\times 4$ matrix, I will specialise to the case where both intercepts are equal to zero, and we have $$y_{ij}=\beta_{1}x_{ij}+\beta_{2}x_{i2}I(j=2)$$ And then $$(X^{T}X)^{-1}=\frac{1}{\left(\sum_{i}x_{i1}^{2}\right)\left(\sum_{i}x_{i2}^{2}\right)}\begin{pmatrix} \sum_{i}x_{i2}^{2} & -\sum_{i}x_{i2}^{2} \\ -\sum_{i}x_{i2}^{2} & \sum_{i}x_{i1}^{2}+\sum_{i}x_{i2}^{2} \end{pmatrix}$$ $$=\frac{1}{\sum_{i}x_{i1}^{2}}\begin{pmatrix} 1 & -1 \\ -1 & 1\end{pmatrix} +\frac{1}{\sum_{i}x_{i2}^{2}}\begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix}$$ Now for $X^{T}n$ we have: $$X^{T}n=\sum_{i}x_{i2}n_{i2}\begin{pmatrix} 1\\1\end{pmatrix} +\sum_{i}x_{i1}n_{i1}\begin{pmatrix} 1\\0\end{pmatrix}$$ And so the total error from the regression is: $$\frac{\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i1}^{2}}\begin{pmatrix} 1\\-1\end{pmatrix} +\frac{\sum_{i}x_{i2}n_{i2}}{\sum_{i}x_{i2}^{2}}\begin{pmatrix} 0\\1\end{pmatrix}$$ Now if the true slopes are parallel, so that $\beta_{2,true}=0$, then the OLS estimates will be: $$\hat{\beta}_{1}=\beta_{1,true}+\frac{\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i1}^{2}}$$ $$\hat{\beta}_{2}=\frac{\sum_{i}x_{i2}n_{i2}}{\sum_{i}x_{i2}^{2}}-\frac{\sum_{i}x_{i1}n_{i1}}{\sum_{i}x_{i1}^{2}}$$ Now this shows that the OLS estimate can indeed lead to erroneous interactions, just choose the "true" noise such that it is highly correlated with $x_{i1}$ - essentially you need to violate one of the assumptions of OLS, homoscedasticity of the noise. So if you generate data according to: $$y_{i1}=x_{i1}(\beta_{1,true}+n_{i1})$$ $$y_{i2}=x_{i2}\beta_{1,true}+n_{i2}$$ And then try to fit an interaction model using OLS of $y$ on $x$ with an interaction, you will find a significant result, even though the true betas are the same. The plots will cross because of the fanning in the first group. One example data set (true beta is 2 and noise was generated from standard normal). You get a t-statistic above 10 for the interaction effect: $$\begin{array}{c|cc} group & y & x \\ 1 & 1.282817715 & 1 \\ 1 & 2.026032115 & 2 \\ 1 & 5.9786882 & 3 \\ 1 & 22.1588319 & 7 \\ 2 & 16.28587668 & 9 \\ 2 & 15.12007527 & 6 \\ 2 & 9.566273403 & 5 \\ \end{array}$$
Counterexample for interaction and parallel curves? This depends on what is meant by "interaction". If the data have no noise - the plot is literally just two parallel lines, then there is certainly no interaction, we know this deductively, without an
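The example data set quoted at the end can be checked in R. The answer does not say exactly which parameterisation its t statistic refers to, so the sketch below simply fits both the no-intercept specialisation used in the derivation and the conventional interaction model; I have not tried to reproduce the quoted number.

d <- data.frame(
  group = factor(c(1, 1, 1, 1, 2, 2, 2)),
  y = c(1.282817715, 2.026032115, 5.9786882, 22.1588319,
        16.28587668, 15.12007527, 9.566273403),
  x = c(1, 2, 3, 7, 9, 6, 5)
)
d$x2 <- d$x * (d$group == "2")         # x * I(group == 2), the interaction column
summary(lm(y ~ 0 + x + x2, data = d))  # no-intercept model from the derivation
summary(lm(y ~ x * group, data = d))   # conventional model with intercepts and interaction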
55,493
Counterexample for interaction and parallel curves?
To me it seems like you (and many books probably) are confusing the empirical level with the theoretical level: The null hypothesis of an interaction effect in a two-way ANOVA is defined on the theoretical level using the cell expected values $\mu_{jk}$ (and not response values): there is an interaction if (and only if) the lines connecting the $\mu_{jk}$ in a diagram are not exactly parallel. Note that "not parallel" is not the same as "lines cross". On the empirical side, we do not have the $\mu_{jk}$, but can only plot their estimates, the cell means $M_{jk}$. Even if the null hypothesis is true, their connecting lines will almost never be exactly parallel due to measurement error. Conversely, even if the alternative hypothesis is true, they could be almost parallel for the same reason. A measure for the degree to which deviation from parallelism of the $M_{jk}$ indicates interaction is the ANOVA's corresponding F-value.
Counterexample for interaction and parallel curves?
To me it seems like you (and many books probably) are confusing the empirical level with the theoretical level: The null hypothesis of an interaction effect in a two-way ANOVA is defined on the theore
Counterexample for interaction and parallel curves? To me it seems like you (and many books probably) are confusing the empirical level with the theoretical level: The null hypothesis of an interaction effect in a two-way ANOVA is defined on the theoretical level using the cell expected values $\mu_{jk}$ (and not response values): there is an interaction if (and only if) the lines connecting the $\mu_{jk}$ in a diagram are not exactly parallel. Note that "not parallel" is not the same as "lines cross". On the empirical side, we do not have the $\mu_{jk}$, but can only plot their estimates, the cell means $M_{jk}$. Even if the null hypothesis is true, their connecting lines will almost never be exactly parallel due to measurement error. Conversely, even if the alternative hypothesis is true, they could be almost parallel for the same reason. A measure for the degree to which deviation from parallelism of the $M_{jk}$ indicates interaction is the ANOVA's corresponding F-value.
Counterexample for interaction and parallel curves? To me it seems like you (and many books probably) are confusing the empirical level with the theoretical level: The null hypothesis of an interaction effect in a two-way ANOVA is defined on the theore
55,494
Counterexample for interaction and parallel curves?
Yes, if the true (hypothetical) responses are not parallel there is interaction. Not parallel, however, does not necessarily mean that the segments cross. When you investigate interaction the sampling error may lead to different results in the sample than in the population, so it's useful to calculate confidence intervals or credibility intervals for the extent of the possible interaction. The extent of the interaction depends on the scales of the variables; in special cases (removable interaction) there is a transformation under which the effects are additive and there is no interaction.
Counterexample for interaction and parallel curves?
Yes, if the true (hypothetical) responses are not parallel there is interaction. Not parallel, however, does not necessarily mean that the segments cross. When you investigate interaction the sampling
Counterexample for interaction and parallel curves? Yes, if the true (hypothetical) responses are not parallel there is interaction. Not parallel, however, does not necessarily mean that the segments cross. When you investigate interaction the sampling error may lead to different results in the sample than in the population, so it's useful to calculate confidence intervals or credibility intervals for the extent of the possible interaction. The extent of the interaction depends on the scales of the variables; in special cases (removable interaction) there is a transformation under which the effects are additive and there is no interaction.
Counterexample for interaction and parallel curves? Yes, if the true (hypothetical) responses are not parallel there is interaction. Not parallel, however, does not necessarily mean that the segments cross. When you investigate interaction the sampling
55,495
Monte Carlo experiment to estimate coverage probability
I disagree with Henry - I think you should be dividing by sqrt(n), because it's a confidence interval for the mean. You also have to add a df = n-1 argument to your qt calls. And the last line should be mean(LCL < 2 & UCL > 2). This is because 2 is the true mean, and you're interested in the condition that 2 is in the confidence interval.
Monte Carlo experiment to estimate coverage probability
I disagree with Henry - I think you should be dividing by sqrt(n), because it's a confidence interval for the mean. You also have to add a df = n-1 argument to your qt calls. And the last line should
Monte Carlo experiment to estimate coverage probability I disagree with Henry - I think you should be dividing by sqrt(n), because it's a confidence interval for the mean. You also have to add a df = n-1 argument to your qt calls. And the last line should be mean(LCL < 2 & UCL > 2). This is because 2 is the true mean, and you're interested in the condition that 2 is in the confidence interval.
Monte Carlo experiment to estimate coverage probability I disagree with Henry - I think you should be dividing by sqrt(n), because it's a confidence interval for the mean. You also have to add a df = n-1 argument to your qt calls. And the last line should
55,496
Monte Carlo experiment to estimate coverage probability
You have several issues with your code: Your mean(UCL < 0 & LCL > 0) is decidedly strange, and in particular is failing because UCL is coming out positive so you are taking the mean of an empty set. A $\chi^2$ distribution takes only positive values. (since solved) You have UCL less than LCL, which is a slightly odd use of upper and lower You do not need semicolons in R unless you want more than one instruction on the same line (Wrong - as pointed out by mark999) You are dividing by sqrt(n). This wrongly narrows your confidence intervals: it is for finding the standard error of the mean, but you care about the original distribution. The question tells you to use "t-interval" but you are using a normal distribution. You might try typing ?qt into R Try this alpha = 0.05 n = 20 m = 1000 UCL = numeric(m) LCL = numeric(m) for(i in 1:m) { x = rchisq(n, 2) # compare with x = rnorm(n) + 2 LCL[i] = mean(x) - qt(alpha / 2, df=n-1, lower.tail = FALSE)*sd(x)/sqrt(n) UCL[i] = mean(x) + qt(alpha / 2, df=n-1, lower.tail = FALSE)*sd(x)/sqrt(n) } mean(LCL < 2 & UCL > 2)
Monte Carlo experiment to estimate coverage probability
You have several issues with your code: Your mean(UCL < 0 & LCL > 0) is decidedly strange, and in particular is failing because UCL is coming out positive so you are taking the mean of an empty set.
Monte Carlo experiment to estimate coverage probability You have several issues with your code: Your mean(UCL < 0 & LCL > 0) is decidedly strange, and in particular is failing because UCL is coming out positive so you are taking the mean of an empty set. A $\chi^2$ distribution takes only positive values. (since solved) You have UCL less than LCL, which is a slightly odd use of upper and lower You do not need semicolons in R unless you want more than one instruction on the same line (Wrong - as pointed out by mark999) You are dividing by sqrt(n). This wrongly narrows your confidence intervals: it is for finding the standard error of the mean, but you care about the original distribution. The question tells you to use "t-interval" but you are using a normal distribution. You might try typing ?qt into R Try this alpha = 0.05 n = 20 m = 1000 UCL = numeric(m) LCL = numeric(m) for(i in 1:m) { x = rchisq(n, 2) # compare with x = rnorm(n) + 2 LCL[i] = mean(x) - qt(alpha / 2, df=n-1, lower.tail = FALSE)*sd(x)/sqrt(n) UCL[i] = mean(x) + qt(alpha / 2, df=n-1, lower.tail = FALSE)*sd(x)/sqrt(n) } mean(LCL < 2 & UCL > 2)
Monte Carlo experiment to estimate coverage probability You have several issues with your code: Your mean(UCL < 0 & LCL > 0) is decidedly strange, and in particular is failing because UCL is coming out positive so you are taking the mean of an empty set.
55,497
Correlation analysis and correcting $p$-values for multiple testing
In my opinion you should test if your variables are distributed normally and choose a suitable test accordingly. Concerning the correction for alpha inflation: What you are doing is data mining. You have experimental data and now you are digging in it to find ... anything. Do that. But also know that anything you might find is just an observation and as such not reliable. Perform that exploratory thing, pick some promising pairs of variables and conduct new experiments to test these pairs for correlation.
Correlation analysis and correcting $p$-values for multiple testing
In my opinion you should test if your variables are distributed normally and choose a suitable test accordingly. Concerning the correction for alpha inflation: What you are doing is data mining. You h
Correlation analysis and correcting $p$-values for multiple testing In my opinion you should test if your variables are distributed normally and choose a suitable test accordingly. Concerning the correction for alpha inflation: What you are doing is data mining. You have experimental data and now you are digging in it to find ... anything. Do that. But also know that anything you might find is just an observation and as such not reliable. Perform that exploratory thing, pick some promising pairs of variables and conduct new experiments to test these pairs for correlation.
Correlation analysis and correcting $p$-values for multiple testing In my opinion you should test if your variables are distributed normally and choose a suitable test accordingly. Concerning the correction for alpha inflation: What you are doing is data mining. You h
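If the exploratory correlations are reported anyway, adjusting the p-values is a one-liner with p.adjust(). The data frames setA and setB below are placeholders for the two groups of measured parameters; Spearman's correlation is used as one option that does not assume normality.

# setA, setB: hypothetical data frames holding the two sets of parameters (same rows = same samples)
pvals <- sapply(names(setA), function(a)
  sapply(names(setB), function(b)
    cor.test(setA[[a]], setB[[b]], method = "spearman")$p.value))
p.adjust(as.vector(pvals), method = "holm")   # family-wise error control
p.adjust(as.vector(pvals), method = "BH")     # Benjamini-Hochberg false discovery rate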
55,498
Correlation analysis and correcting $p$-values for multiple testing
You want to perform a canonical correlation analysis. This will provide information about correlations among linear combinations of your sets of parameters, potentially uncovering stronger information of the type you seek. The Wikipedia article explains the theory, provides the equations, and presents the appropriate hypothesis test. It requires solving a 7 by 7 eigenvalue problem determined by the correlation and cross-correlation matrices of the variables, which is fast and straightforward. The R package CCorA will do it, for instance. (I haven't tested this package.) It might help first to re-express your original variables so they are each approximately symmetrically distributed. There are many ways to do this using, for example, Box-Cox transformations.
Correlation analysis and correcting $p$-values for multiple testing
You want to perform a canonical correlation analysis. This will provide information about correlations among linear combinations of your sets of parameters, potentially uncovering stronger informatio
Correlation analysis and correcting $p$-values for multiple testing You want to perform a canonical correlation analysis. This will provide information about correlations among linear combinations of your sets of parameters, potentially uncovering stronger information of the type you seek. The Wikipedia article explains the theory, provides the equations, and presents the appropriate hypothesis test. It requires solving a 7 by 7 eigenvalue problem determined by the correlation and cross-correlation matrices of the variables, which is fast and straightforward. The R package CCorA will do it, for instance. (I haven't tested this package.) It might help first to re-express your original variables so they are each approximately symmetrically distributed. There are many ways to do this using, for example, Box-Cox transformations.
Correlation analysis and correcting $p$-values for multiple testing You want to perform a canonical correlation analysis. This will provide information about correlations among linear combinations of your sets of parameters, potentially uncovering stronger informatio
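For a first look, base R's cancor() computes the canonical correlations without extra packages (the CCorA function in the vegan package adds permutation tests). X and Y below stand for the two blocks of variables, whatever their actual names; a Box-Cox or rank re-expression could be applied first, as suggested above.

# X, Y: matrices with samples in rows, one block of parameters each (placeholder names)
cc <- cancor(scale(X), scale(Y))
cc$cor     # canonical correlations, largest first
cc$xcoef   # weights defining the canonical variates of the X block
cc$ycoef   # weights for the Y block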
55,499
How to correct uneven sampling distribution when calculating the mean?
The answer depends on the characteristics of $f$. Regardless, its average (by definition) is $\frac{1}{100}\int_0^{100}{f(x)dx}$, so your problem is one of estimating that integral from values at a discrete set of points. For a highly discontinuous function you can use any Riemann sum; for a differentiable function use the Trapezoidal Rule, for which you can estimate the error in terms of derivatives of $f$; for a thrice differentiable function use Simpson's Rule, etc. If necessary, you can combine the error estimates of these rules with statistical estimates of the error in computing each value of $f$ to obtain an estimate of the error in the average.
How to correct uneven sampling distribution when calculating the mean?
The answer depends on the characteristics of $f$. Regardless, its average (by definition) is $\frac{1}{100}\int_0^{100}{f(x)dx}$, so your problem is one of estimating that integral from values at a d
How to correct uneven sampling distribution when calculating the mean? The answer depends on the characteristics of $f$. Regardless, its average (by definition) is $\frac{1}{100}\int_0^{100}{f(x)dx}$, so your problem is one of estimating that integral from values at a discrete set of points. For a highly discontinuous function you can use any Riemann sum; for a differentiable function use the Trapezoidal Rule, for which you can estimate the error in terms of derivatives of $f$; for a thrice differentiable function use Simpson's Rule, etc. If necessary, you can combine the error estimates of these rules with statistical estimates of the error in computing each value of $f$ to obtain an estimate of the error in the average.
How to correct uneven sampling distribution when calculating the mean? The answer depends on the characteristics of $f$. Regardless, its average (by definition) is $\frac{1}{100}\int_0^{100}{f(x)dx}$, so your problem is one of estimating that integral from values at a d
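For unevenly spaced samples the trapezoidal rule takes a few lines of R; x stands for the sorted sample locations in [0, 100] and y for the corresponding values of f. The sanity check uses a linear f, for which the rule is exact.

trap_mean <- function(x, y) {
  stopifnot(length(x) == length(y), !is.unsorted(x))
  area <- sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)  # trapezoidal rule
  area / (max(x) - min(x))                                # average of f over the sampled range
}
# Sanity check: the mean of f(x) = x on [0, 100] is 50, whatever the spacing
x <- sort(c(0, runif(50, 0, 100), 100))
trap_mean(x, x)   # 50 (the trapezoidal rule is exact for linear f)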
55,500
Modeling vacancy rate
One of the tricks in modelling percentages is to use the logit transformation. Then instead of modelling percentage $p_i$ as linear function you model the logit transform of this percentage: \begin{align} y_i=\log\frac{p_i}{1-p_i} \end{align} In R you will need to create new transformed variable and use it as a dependent variable in lmer. You might look into modelling directly the number of empty houses instead of percentages, then you will not have a problem with non-sensical values. I suggest using log transformation for that. This of course means that you might get more non-vacant houses than there are houses, but this can be used as an indicator of model inadequacies. If on the other hand you have for some regions full booking in historical data meaning that demand was larger than supply, you might want to look into censored regression models.
Modeling vacancy rate
One of the tricks in modelling percentages is to use the logit transformation. Then instead of modelling percentage $p_i$ as linear function you model the logit transform of this percentage: \begin{al
Modeling vacancy rate One of the tricks in modelling percentages is to use the logit transformation. Then instead of modelling percentage $p_i$ as linear function you model the logit transform of this percentage: \begin{align} y_i=\log\frac{p_i}{1-p_i} \end{align} In R you will need to create new transformed variable and use it as a dependent variable in lmer. You might look into modelling directly the number of empty houses instead of percentages, then you will not have a problem with non-sensical values. I suggest using log transformation for that. This of course means that you might get more non-vacant houses than there are houses, but this can be used as an indicator of model inadequacies. If on the other hand you have for some regions full booking in historical data meaning that demand was larger than supply, you might want to look into censored regression models.
Modeling vacancy rate One of the tricks in modelling percentages is to use the logit transformation. Then instead of modelling percentage $p_i$ as linear function you model the logit transform of this percentage: \begin{al
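A minimal sketch of the logit route follows; the data frame, its column names and the random-effects structure are assumptions for illustration, not taken from the question. Rates of exactly 0 or 1 are pushed slightly inward so the logit stays finite.

library(lme4)
# d$vac: vacancy rate in [0, 1]; hypothetical fixed effect 'season' and random effect 'region'
eps     <- 1e-4
p       <- pmin(pmax(d$vac, eps), 1 - eps)
d$logit <- log(p / (1 - p))                            # logit transform of the rate
fit <- lmer(logit ~ season + (1 | region), data = d)
inv_logit <- function(z) 1 / (1 + exp(-z))
head(inv_logit(fitted(fit)))                           # fitted values back on the rate scale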