Dataset columns: idx (int64, 1 to 56k); question (string, 15 to 155 chars); answer (string, 2 to 29.2k chars); question_cut (string, 15 to 100 chars); answer_cut (string, 2 to 200 chars); conversation (string, 47 to 29.3k chars); conversation_cut (string, 47 to 301 chars).
51,101
Fisher information of a statistic
There is no Fisher information of the estimator, only the Fisher information that a random sample $X$ carries about the parameter $\theta$. Wikipedia says: "In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable $X$ carries about an unknown parameter $\theta$ upon which the probability of $X$ depends." So Fisher information is a relationship between an observable random variable and a parameter, not a property of some estimator, which is a function of $X$.
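For reference (an addition of mine, not part of the original answer), the quantity in question is usually defined as the variance of the score:

$$ \mathcal{I}(\theta) = \operatorname{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{\!2}\right] = -\operatorname{E}_\theta\!\left[\frac{\partial^2}{\partial\theta^2}\log f(X;\theta)\right], $$

where the second equality holds under the usual regularity conditions. The definition involves only the distribution of $X$ and the parameter $\theta$, not any particular estimator.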
51,102
Fisher information of a statistic
I'm pretty sure you've got some terminology mixed up. Fisher's information is a function of the data, just like an estimator such as $\bar{X}_{n}$; it gives you an idea of how much information about the parameter of interest is contained in the sample you've acquired. You can evaluate Fisher's information at an estimator (this is usually done because the F.I. depends on the unknown parameter being estimated), and typically we use the plug-in estimate consisting of the F.I. evaluated at the MLE.
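A minimal R sketch of that plug-in idea (mine, not from the original answer), assuming an i.i.d. exponential sample with rate $\lambda$, for which the per-observation Fisher information is $1/\lambda^2$:

    set.seed(42)
    x <- rexp(100, rate = 2)      # simulated data with true rate 2

    lambda_hat <- 1 / mean(x)     # MLE of the rate

    # Observed information: negative second derivative of the log-likelihood,
    # computed numerically and evaluated at the MLE (the plug-in step).
    loglik <- function(lambda) sum(dexp(x, rate = lambda, log = TRUE))
    h <- 1e-4
    obs_info <- -(loglik(lambda_hat + h) - 2 * loglik(lambda_hat) + loglik(lambda_hat - h)) / h^2

    # Compare with the expected information n / lambda^2, also evaluated at the MLE.
    c(observed = obs_info, expected_at_mle = length(x) / lambda_hat^2)

For the exponential model the two agree exactly; in general the observed and expected information can differ, but both are evaluated at the MLE in the same plug-in fashion.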
51,103
Estimating distribution from censored data
In R:

    estimate = function(y, z, u = 1e-9) {
      ys = sort(unique(y))
      # Inf signifies x's never observed (as they are higher than max y)
      zs = c(sort(unique(z))[-1], Inf)
      counts = xtabs(~ z + y)
      observed = rbind(counts[-1, ], rep(0, length(ys)))
      marginalHidden = counts[1, ]
      m = sapply(seq(ys), function(i) zs > ys[i])
      d = rep(1 / length(zs), length(zs))
      while (T) {
        # allocate hidden data according to current parameters
        p = apply(m * d, 2, function(v) v / sum(v))
        # can result in fractional counts
        hidden = sweep(p, 2, marginalHidden, '*')
        total = observed + hidden
        d2 = apply(total, 1, sum) / sum(total)
        msd = mean((d2 - d)^2)
        if (msd < u^2) break
        d = d2
      }
      d
    }

    xSupport = c(3, 5, 7)
    xDistribution = c(1/4, 1/2, 1/4)
    x = sample(xSupport, 1000, replace = T, prob = xDistribution)
    ySupport = c(4, 6)
    yDistribution = c(1/2, 1/2)
    y = sample(ySupport, length(x), replace = T, prob = yDistribution)
    z = ifelse(x < y, x, 0)
    estimate(y, z)
    table(x)

Edit: A direct (non-iterative) solution, compatible with the one given above. The idea is to start with the values of $Z$ that are never hidden (lower than $\min(Y)$), and estimate their probability from proportions. After that, both these values and $\min(Y)$ can be removed from the problem. Thus, the problem becomes smaller and smaller.

    estimate = function(y, z, u = 1e-9) {
      ys = sort(unique(y))
      # Inf signifies x's never observed (as they are higher than max y)
      z[z == 0] = Inf
      zs = sort(unique(z))
      counts = xtabs(~ z + y)
      s = c()
      r = 1
      while (ncol(counts) > 0) {
        # zs < min(ys) are all observed, so can be estimated from counts
        mzi = which(zs < min(ys))
        ds = r * apply(counts[mzi, , drop = F], 1, sum) / sum(counts)
        s = c(s, ds)
        # reduce probability remaining for the hidden cases
        r = r - sum(ds)
        # reduce the problem by removing the solved levels of zs, and the min(ys)
        counts = counts[-mzi, -1, drop = F]
        zs = zs[-mzi]
        ys = ys[-1]
      }
      c(s, r)
    }
51,104
Multiple measures mediation analysis
At least one method for doing this has been published in: Judd et al. Estimating and testing mediation and moderation in within-subject designs. Psychological methods (2001) vol. 6 (2) pp. 115-134 http://www.ncbi.nlm.nih.gov/pubmed/11411437 I haven't yet found a link to code that implements this method though.
51,105
Disadvantages of the Kullback-Leibler divergence
I'd like to add a first (if perhaps unsatisfying) answer to this question through the lens of deep learning, mostly in NLP.

First, on the disadvantages of the Kullback-Leibler divergence. Consider the definition (in the terms of your question): $$ KL(q\|p)=\sum_s q(s)\log \frac{q(s)}{p(s)}. $$ When $p(s) > 0$ and $q(s)\to 0$, the corresponding term of the sum goes to 0, which means that maximum-likelihood estimation assigns an extremely low cost to scenarios in which the model generates samples that do not lie on the data distribution. Concretely, even if the corpus at hand covered every sample in existence, $q(s)\to 0$ would mean that $s$ occurs very rarely in the corpus, yet the model may still assign it a high probability (for example because of samples that look alike but are in fact different or even opposite in meaning). Because such categories are barely seen during training while retaining high probability under the model, rare samples that do not lie on the data distribution may be generated at test or validation time.

For your sub-questions:

Is the Kullback-Leibler divergence the best f-divergence to consider as error? You can refer to this answer, which states that "cross-entropy is preferred for classification, while mean squared error is one of the best choices for regression". Note that training with cross-entropy is equivalent to training with relative entropy; for the details, please refer to this.

Does the usage of the Kullback-Leibler divergence entail any kind of issue? If I understand your question correctly, a counterexample is the loss function used for SVMs; please refer to this question and this answer. The Kullback-Leibler divergence cannot solve all problems in estimation.
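To make the first point concrete, here is a small numeric illustration (my addition, not the original author's): terms where $q(s)$ is tiny contribute almost nothing to the KL sum, even when $p(s)$ is large.

    q <- c(0.699, 0.3, 0.001)   # "data" distribution: the last category is very rare
    p <- c(0.4,   0.3, 0.3)     # model that puts substantial mass on the rare category

    kl_terms <- q * log(q / p)
    round(kl_terms, 4)          # the third term is essentially zero despite p(s) >> q(s)
    sum(kl_terms)               # total KL(q || p)

The model is barely penalised for the mass it places on the rare category, which is exactly the behaviour described above.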
51,106
Perfect separation error message for glm with binomial but not with quasibinomial family
That sounds strange; I would guess it is a numerical coincidence. The only difference in R's glm between the binomial and quasibinomial families is in the calculation of standard errors; the estimation process is exactly the same. Alternatively, the difference in how standard errors are calculated might cause differences in the criteria for declaring convergence. In any case, you should not trust the model for hypothesis testing: the standard errors (in both the binomial and quasibinomial case) are bogus. See my answer at "Why does logistic regression become unstable when classes are well-separated?" for some ideas of what to do in this case of separation.
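A small R sketch (my own, not from the original answer) that you could use to reproduce the comparison: it builds a perfectly separated data set and fits both families, so you can inspect the warnings and the reported standard errors yourself.

    x <- c(seq(-3, -1, length.out = 20), seq(1, 3, length.out = 20))
    y <- rep(0:1, each = 20)      # y is fully determined by the sign of x: perfect separation

    fit_bin   <- glm(y ~ x, family = binomial)       # typically warns about fitted probabilities of 0 or 1
    fit_quasi <- glm(y ~ x, family = quasibinomial)

    summary(fit_bin)$coefficients     # note the huge coefficients and standard errors
    summary(fit_quasi)$coefficients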
51,107
Trees and Cross Validation - # misclass
It returns the sum of deviances from each of the 10 fits, for a range of complexity parameters.

From the reference manual: "A copy of FUN applied to object, with component dev replaced by the cross-validated results from the sum of the dev components of each fit."

From the code:

    cvdev <- 0
    for (i in unique(rand)) {
        tlearn <- tree(model = m[rand != i, , drop = FALSE])
        plearn <- do.call(FUN, c(list(tlearn,
                                      newdata = m[rand == i, , drop = FALSE],
                                      k = init$k),
                                 extras))
        cvdev <- cvdev + plearn$dev
    }

Notice that plearn$dev is summed across folds.
51,108
Graphical representation of variance
I found this paper to be helpful; hopefully you will find it useful too: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1467-9639.2010.00426.x

Paraphrasing what's explained in the article: if you understand SD, you can imagine that the squared deviation $(x - \bar{x})^2$ can be represented graphically as a square (area = side length squared), where each side is the deviation from the mean for that observation. The numerator in the formula for variance is the sum of all $n$ squared deviations, also known as SS, which graphically is the sum of the areas of all these squares. This sum can itself be represented as one big square made up of smaller rectangles, all with the same width (the square root of SS) but different heights (the squared deviation for that observation divided by the square root of SS). Again, the area of this bigger square is SS. If you then divide this bigger square into $(n-1)$ equal squares, the area of each of these smaller squares is the variance (SS/(n-1), i.e. the formula for variance). Interestingly, the side length of each of these variance squares is exactly the standard deviation!
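A quick numeric check of that geometric picture (my addition, with made-up data): the area of each of the $(n-1)$ equal squares is the variance, and its side length is the SD.

    x  <- c(2, 4, 4, 7, 9)
    SS <- sum((x - mean(x))^2)    # total area of the squared-deviation squares
    n  <- length(x)

    SS / (n - 1)                  # area of one of the (n - 1) equal squares = variance
    var(x)                        # matches R's variance

    sqrt(SS / (n - 1))            # side length of that square = standard deviation
    sd(x)                         # matches R's sd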
51,109
Graphical representation of variance
One method that is helpful is to visualize the center point of the data, the mean, and how much each raw data point's distance varies around that mean. One way of achieving this is simply drawing lines that start from the mean (where the data on average is centrally located) and end at the raw value. I show how using R below.

First we can load the tidyverse package for plotting, then create some x values and an index (for the order of raw data points). Then we plot the data on a scatterplot that draws segments from the mean to the raw data points.

    #### Load Library ####
    library(tidyverse)

    #### Create X Value and Index ####
    x <- c(10,30,20,40,50,10,10,20,50,40)
    df <- data.frame(Index = 1:10, X = x)

    #### Plot Variation from Mean ####
    df %>%
      ggplot(aes(x = Index, y = X)) +
      geom_point() +
      geom_segment(aes(xend = Index, yend = mean(x))) +
      geom_hline(yintercept = mean(x), color = "red") +
      theme_classic() +
      scale_x_continuous(n.breaks = 10)

In the resulting plot, we can see that the distance from the mean is not the same for each data point. Some are very close to the mean ($x = 30$), while others are quite far from it ($x = 10$ and $x = 50$). Something you could also consider is drawing grid lines where each standard deviation below/above the mean is located, to get a sense of where the bulk of the data should be contained within this variation.

    #### Add SD Lines ####
    df %>%
      ggplot(aes(x = Index, y = X)) +
      geom_point() +
      geom_segment(aes(xend = Index, yend = mean(x))) +
      geom_hline(yintercept = mean(x), color = "red", size = 1) +
      geom_hline(yintercept = mean(x) - 1*sd(x), color = "blue", linetype = "dashed") +
      geom_hline(yintercept = mean(x) + 1*sd(x), color = "blue", linetype = "dashed") +
      geom_hline(yintercept = mean(x) - 2*sd(x), color = "blue", linetype = "dashed") +
      geom_hline(yintercept = mean(x) + 2*sd(x), color = "blue", linetype = "dashed") +
      theme_classic() +
      scale_x_continuous(n.breaks = 10)

Here all of the data is contained within 2 SD above and below the mean, and a fair amount of data hovers just below or above the 1 SD grid lines.
51,110
Unsupervised pre-training for Reinforcement Learning
I think you should look into the Learning from Demonstration direction. The idea is pretty simple: let's say we want to teach a bot to play a video game. We record a human playing the game and then give the model this data in order to pre-train it. There are lots of possible ways of using this data.

If you are interested in unsupervised pre-training, you should look into the Inverse Reinforcement Learning (IRL) direction. In short, the method tries to approximate the reward function and does the usual RL with this approximation along the way. I'm not aware of pre-training in IRL, but it should be possible, and interesting to investigate, from my point of view. Some well-known works to start with: "Algorithms for Inverse Reinforcement Learning" by Andrew Ng and Stuart Russell, and "Apprenticeship Learning via Inverse Reinforcement Learning" by Pieter Abbeel and Andrew Ng.

In case you have the reward function in addition to actions, you can do a lot of interesting things with it, including pre-training. This work is one of the latest on the topic, and you will find much more interesting material in its references.
51,111
Agreement in clustered sample data
What you are doing is introducing multiple comparisons. For a confirmatory analysis, we usually specify our primary analysis, and we may fit some post-hoc or secondary analyses not to confirm the prior findings but to understand limitations in the data. Without any description of the various analyses you have conducted, I can't make any clear recommendations, but I suspect you are applying incorrect methods in several ways.

The intraclass correlation coefficient (ICC) is a measure of the proportion of variance within a cluster, and it can be used to motivate a mixed-modeling approach for the analysis of longitudinal or panel data. You seem to describe applying the ICC to individual analyses (such as regression or classification models), which doesn't make sense and is not in line with the intended purpose of the measure.

The concordance correlation coefficient (CCC) is a measure of calibration of statistical risk prediction models which, to be clear, involves a single risk prediction per participant and requires separate test/training datasets. The CCC can compare several risk models, but I emphasize: risk modeling in panel data is very nuanced, and I don't get the sense that that's what you're doing here.

"Agreement", or interrater agreement, is yet another type of finding, which has to do with evaluating several replications of a test applied in a large population. While statistical testing does have some relationship with classification, it is not correct to apply measures of "agreement" in this setting, because statistical tests have no source of variability outside of the data themselves. An example of a setting where agreement would be applied is one in which multiple radiologists classify different screens as benign versus possible cancer.

So I can't really find a place to begin with your problem, aside from reminding you of the correct approach to statistics:

1. Decide (a priori) on the single analytic approach which measures an outcome of interest in a way that is understandable by the general community.
2. Fit any subsequent models as a way of assessing sensitivity in the first model, such as to loss-to-follow-up, unmeasured sources of variation, and/or autoregressive effects.
3. Describe any possible limitations after reporting the main findings.
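As a hedged illustration of the ICC point above (my own sketch, not part of the original answer), the ICC for clustered data is usually obtained from the variance components of a single mixed model rather than from individual regression fits. A minimal lme4 example with simulated data:

    library(lme4)

    set.seed(1)
    dat <- data.frame(cluster = factor(rep(1:20, each = 5)))
    dat$y <- rnorm(20, sd = 2)[dat$cluster] + rnorm(100)   # between-cluster shift + within-cluster noise

    # Random-intercept model: the variance is split into between-cluster and residual parts.
    fit <- lmer(y ~ 1 + (1 | cluster), data = dat)

    vc  <- as.data.frame(VarCorr(fit))
    icc <- vc$vcov[vc$grp == "cluster"] / sum(vc$vcov)     # between / (between + within)
    icc                                                    # roughly 4 / (4 + 1) with these settings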
51,112
Agreement in clustered sample data
Yes, you probably cannot assume data from the same patient are independent, but do you actually detect a within-patient effect (try ANOVA)? If you want to compare across patients, you could try to normalise to control for per-patient effects. If you fit a linear mixed model with the patient as a random effect, then you can get significance values for the betas. PS: no idea what ICC or CCC are.
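A minimal sketch of that last suggestion (mine; the variable names and the toy data are placeholders, not from the question), using lmerTest so the fixed-effect table includes p-values:

    library(lmerTest)   # wraps lme4::lmer and adds p-values for the fixed effects

    # Hypothetical layout: repeated measurements nested in patients, two groups.
    set.seed(7)
    patient <- factor(rep(1:10, each = 4))
    dat <- data.frame(
      patient = patient,
      group   = rep(c("control", "treated"), each = 20),
      y       = rnorm(10)[patient] + rnorm(40)   # patient-level shift + noise
    )

    fit <- lmer(y ~ group + (1 | patient), data = dat)
    summary(fit)   # fixed-effect table includes estimates, SEs, and p-values for the betas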
51,113
Constructing an interval estimate for a multivariate output
[I know this doesn't answer the question, nor provide sources; this was going to be a comment but got too long]

I think your problem here is that you haven't clearly defined "95% confidence interval", and your problem presents more than one way of interpreting that. If you decide exactly what you mean by "95% confidence interval", you will probably answer your own question. For example, do you mean:

- If I reran my experiment many times, 95% of the regions so generated would completely contain the true path.
- If I reran my experiment many times, the regions would on average contain 95% of the true points.
- If I reran my experiment many times, then for any given true $X_i$, the corresponding true $Y_i$ would be contained in the vertical region above $X_i$ 95% of the time.
- If I reran my experiment many times, then for any given true $Y_i$, the corresponding true $X_i$ would be contained in the horizontal region across from $Y_i$ 95% of the time.
- A region containing 95% of the posterior probability density for a point $i$ picked at random. This is probably closest to what you have generated.

Or perhaps some other variation.

Your problem is that your estimate is strictly a very high-dimensional vector of all the $(X_i, Y_i)$ pairs, so your "standard" confidence region would be a high-dimensional ellipsoid (a sketch of the joint form is given below). Remember that the uncertainty around each point in the process need not be independent of the previous points: if one step is way too high, the next will probably be as well. Because this is a Gaussian process, you should be able to work out the full $2N \times 2N$ correlation matrix, and so you can work out a lot of different confidence regions depending on what interests you.
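For concreteness, here is a standard form of the joint region alluded to above (my addition, assuming a jointly Gaussian estimate $\hat{z}$ of the stacked vector $z = (X_1, Y_1, \ldots, X_N, Y_N)$ with covariance matrix $\Sigma$):

$$ \left\{\, z : (z - \hat{z})^{\top} \Sigma^{-1} (z - \hat{z}) \le \chi^2_{2N,\,0.95} \,\right\}, $$

where $\chi^2_{2N,\,0.95}$ is the 95% quantile of a chi-squared distribution with $2N$ degrees of freedom. Marginal, pointwise intervals come instead from the diagonal of $\Sigma$, which is one reason the different interpretations listed above lead to different regions.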
51,114
A mixed-effects model for repeated measurements vs multiple time point-wise comparisons with a simpler test
Taking your second point first: your analysis of Day looked at the aggregate across days, and there is no Day effect on average. There might be one on Day 2, but you really should have a justification for believing Day 2 more than the other days.

Point 1, that Day 1 isn't significant while Day 2 does show an effect, is a meaningless point to make. Ignoring the correlation and analysis techniques, even if what your colleagues claim is true, it's not useful. The implied argument is that the effect of group on Day 1 is different from that on Day 2, and that wasn't tested. That's what your interaction tested, and it's not significant.

Finally, from the tenor of this report it sounds like there's a lot of being hung up on what's significant and what's not. For example, if the Day 1 and Day 2 effects are both in the same direction but one is significant and one is not, are they really contradictory? Think about that.
51,115
A mixed-effects model for repeated measurements vs multiple time point-wise comparisons with a simpler test
This may raise more questions for you than it answers, but consider that you can directly test A2 vs B2 by looking directly at the coefficients (plus their standard errors) of your model and linearly combining them:

    > summary(M)
    Linear mixed-effects model fit by REML
    ...
    Fixed effects: BW ~ Day * Group
                      Value Std.Error DF  t-value p-value
    (Intercept)   2499.0736  66.30508 18 37.69053  0.0000
    DayD2         -164.1803  93.76955 18 -1.75089  0.0970
    GroupB         -14.4246  93.76955 18 -0.15383  0.8795
    DayD2:GroupB   252.6805 132.61017 18  1.90544  0.0728

In particular, the difference in group means on day 2 between A and B is 238.26 (= -14.4246 + 252.6805), which when tested:

    > anova(M, L = c(0, 0, 1, 1))
    F-test for linear combination(s)
          GroupB DayD2:GroupB
               1            1
      numDF denDF  F-value p-value
    1     1    18 6.456002  0.0205

    > t.test(subset(dat, Day=="D2" & Group=="B")$BW, subset(dat, Day=="D2" & Group=="A")$BW)
    ...
    t = 2.2832, df = 11.332, p-value = 0.04264
    ...
    mean of x mean of y
     2573.149  2334.893
51,116
Johansen test loading matrix
The loading matrix is the adjustment matrix (the $\alpha$ matrix). The elements of $\alpha$ determine the speed of adjustment to the long-run equilibrium. Please see p. 4 of this article to understand the relationship between the adjustment matrix and the cointegrating vectors.
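For context (my addition, in standard notation rather than anything specific to the linked article), the vector error-correction model behind the Johansen test can be written as

$$ \Delta y_t = \alpha \beta^{\top} y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \, \Delta y_{t-i} + \varepsilon_t , $$

where the columns of $\beta$ are the cointegrating vectors, so $\beta^{\top} y_{t-1}$ measures the deviation from long-run equilibrium, and the loading matrix $\alpha$ gives the speed at which each series adjusts back toward that equilibrium.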
51,117
Rolling out trial in phases, but not stepped wedge
Assuming your inferential focus is the treatment-control contrast, the solution you propose is a multicentre parallel randomised controlled trial design, where the centres are the communities, and there is no clear need to account for measurement times.

A particular feature of the design you propose is that all participants from one cluster are accrued at the same time. Another feature is that the accrual times are pre-set and randomised. This step is not necessary for bias control, since period-specific effects will be equally balanced between the two arms. The randomisation of the accrual times may, however, help control bias under more complex time-community-intervention interactions.

In all communities your primary endpoint is at three months (from randomisation). There is no need to account for the fact that endpoints are measured at different calendar times, just as in a single-centre parallel RCT design that may accrue participants over several years. The power implications of your design are those of a multicentre parallel RCT. (This addresses a statement in your post.)

The clustered stepped wedge design does not require a measurement at each time interval for every participant. Every participant would usually be assessed at baseline, and then at any later time point set for a primary or secondary endpoint. The related analytical feature of the stepped wedge design is that the time period at which the participant is recruited into the study enters the analysis as a fixed or random effect (see Hussey & Hughes, 2007, section 3.1 and Discussion). This is to alleviate potential confounding with time, as might typically occur with a simple pre-post design. Such confounding will not arise in your design.
51,118
Uncertainty propagation in linear interpolation
Typically the uncertainties from counting measurements are computed using Poisson statistics: the standard deviation of such a distribution is just the square root of the counts. I am not sure why your uncertainties are slightly off from this.

Why do you need to interpolate the data at all? I am not too familiar with Rietveld refinement, but why don't you just merge the datasets? You gain no statistical power by adding together two data points. For example, even if you had two independent measurements at the same x value, there is nothing gained by averaging the counts other than making prettier plots of the diffraction data.

Finally, are the x positions actually different? I assume you ran the same scan twice. The slight differences in the x values are either from the instrument having higher accuracy in measuring position than in its motors, or just a result of always writing out the data to a precision higher than merited. Either way, treating the x-values as the same introduces very little effect on your uncertainties. If you treat the range of x-values as the uncertainty in the x-position and estimate its contribution to the uncertainty in y by looking at the local slope, you'll see that it is about 3 orders of magnitude lower than the counting uncertainty.

Were I to do this, I would merge the data and just have twice the data points. If for some reason you cannot do that, I would just take the average x-value, as doing so introduces negligible error.
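As a worked note on that slope argument (my addition, not from the original answer): to first order, an uncertainty $\sigma_x$ in position propagates into the counts as

$$ \sigma_{y,\,x} \approx \left| \frac{dy}{dx} \right| \, \sigma_x , $$

so you can compare this term directly with the Poisson counting uncertainty $\sigma_{y,\,\mathrm{count}} = \sqrt{y}$ to verify that it is negligible for your data.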
51,119
Lesser-known but powerful probabilistic inference algorithms
Indirect inference

According to The New Palgrave Dictionary of Economics, Second Edition (entry by Anthony A. Smith, Jr):

"Indirect inference is a simulation-based method for estimating the parameters of economic models. Its hallmark is the use of an auxiliary model to capture aspects of the data upon which to base the estimation. The parameters of the auxiliary model can be estimated using either the observed data or data simulated from the economic model. Indirect inference chooses the parameters of the economic model so that these two estimates of the parameters of the auxiliary model are as close as possible. The auxiliary model need not be correctly specified; when it is, indirect inference is equivalent to maximum likelihood."

Another (more technical) reference is Gourieroux et al., "Indirect inference" (1993), Journal of Applied Econometrics, 8(S1). Indirect inference is also mentioned in the thread "Parameter Estimation for intractable Likelihoods / Alternatives to approximate Bayesian computation".
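A toy R sketch of the idea (mine, not from the dictionary entry): the "economic model" is an MA(1) process with unknown $\theta$, the auxiliary model is a lag-1 autoregression, and we pick $\theta$ so that the auxiliary coefficient estimated on simulated data matches the one estimated on the observed data.

    set.seed(1)

    # "Observed" data from the structural model: an MA(1) with true theta = 0.5
    theta_true <- 0.5
    e_obs <- rnorm(1001)
    y_obs <- e_obs[-1] + theta_true * e_obs[-1001]

    # Auxiliary model: lag-1 autoregression without intercept; return its coefficient
    aux <- function(y) unname(coef(lm(y[-1] ~ y[-length(y)] - 1)))
    beta_obs <- aux(y_obs)

    # Simulator reusing fixed innovations (common random numbers keep the objective smooth)
    e_sim <- rnorm(10001)
    simulate_ma1 <- function(theta) e_sim[-1] + theta * e_sim[-10001]

    # Indirect inference: minimise the distance between the two auxiliary estimates
    obj <- function(theta) (aux(simulate_ma1(theta)) - beta_obs)^2
    theta_hat <- optimize(obj, interval = c(-0.9, 0.9))$minimum
    theta_hat   # should land reasonably close to 0.5

The auxiliary model is deliberately misspecified (an AR(1) fit to MA(1) data), which is exactly the situation the definition above describes.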
51,120
Prioritizing data collection
Have you thought about modelling this problem with decision Bayesian networks, i.e. influence diagrams (http://en.wikipedia.org/wiki/Influence_diagrams)? You would define some utility function that you can optimise given the decision you make, that is, which chemicals should be tested. This is the kind of problem where you make decisions under uncertainty, which looks like the situation you describe. I know it's quite an abstract response, but maybe it will guide you in the right direction.
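A very small sketch of the underlying expected-utility calculation (my own toy numbers, a simplified stand-in rather than a full influence-diagram implementation): for each candidate chemical, weigh the probability that testing it reveals a hazard against the cost of running the test.

    # Toy numbers: probability each chemical is hazardous, value of catching a hazard,
    # and the cost of running its test. All figures are made up for illustration.
    p_hazard   <- c(A = 0.10, B = 0.40, C = 0.25)
    value_info <- 100     # benefit of detecting a hazard before deployment
    test_cost  <- c(A = 30, B = 15, C = 20)

    expected_gain <- p_hazard * value_info - test_cost
    sort(expected_gain, decreasing = TRUE)   # prioritise tests with the highest expected gain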
51,121
Kendall's coefficient of concordance (W) for ratings with a lot of ties
I don't have anything specific to say about Kendall's W, but I don't get this concern about the ICC, the F test and the sample size. Your sample is not so small that testing would necessarily be impossible, but why would you want to do such a test? To see if agreement is different from 0? This is quite a low bar and should be evident from the data. If you have doubts about that, these ratings certainly don't form a good measure of anything the raters agree on, so worrying about which specific measure of inter-rater agreement you are using and the niceties of the relevant tests would not really be your main concern.

On the other hand, anything you compute on a sample this small will obviously be subject to a lot of sampling variability and uncertainty. That's a rather basic fact that has nothing to do with the ICC or the F-test specifically, and there is no miracle inter-rater agreement index that would allow you to get around it.

At the end of the day, I think the underlying issue is that you seem to be asking many rather abstract questions in search of the "true" inter-rater agreement and some sort of fail/pass test that would tell you if it is "good enough". Such a thing simply does not exist in my opinion, and published thresholds are really quite arbitrary. Instead of trying to interpret every bit of advice recommending one index or another, I think it could be more fruitful to read broadly about inter-rater agreement measures (see the references provided in other questions on this topic) and think about what each of them reveals about your data, rather than focus solely on whether agreement is "good" or not.
Kendall's coefficient of concordance (W) for ratings with a lot of ties
I don't have anything specific to say about Kendall's W but I don't get this concern about the ICC, the F test and the sample size. Your sample is not so small that testing would necessarily be imposs
Kendall's coefficient of concordance (W) for ratings with a lot of ties I don't have anything specific to say about Kendall's W but I don't get this concern about the ICC, the F test and the sample size. Your sample is not so small that testing would necessarily be impossible but why would you want to do such a test? To see if agreement is different from 0? This is quite a low bar and should be evident from the data. If you have doubts about that, these ratings certainly don't form a good measure of anything the raters agree on, so worrying about which specific measure of inter-rater agreement you are using and the niceties of the relevant tests would not really be your main concern. On the other hand, anything you compute on a sample this small will obviously be subject to a lot of sampling variability and uncertainty. It's a rather basic fact that has nothing to do with the ICC or the F-test specifically and there is no miracle inter-rater agreement index that would allow you to get around that. At the end of the day, I think the underlying issue is that you seem to be asking many rather abstract questions in search of the “true” inter-rater agreement and some sort of fail/pass test that would tell you if it is “good enough”. Such a thing simply does not exist in my opinion and published thresholds are really quite arbitrary. Instead of trying to interpret every bit of advice recommending one index or another, I think it could be more fruitful to read broadly about inter-rater agreement measures (see the references provided in other questions on this topic) and think about what each of them reveals about your data rather than focus solely on whether agreement is “good” or not.
Kendall's coefficient of concordance (W) for ratings with a lot of ties I don't have anything specific to say about Kendall's W but I don't get this concern about the ICC, the F test and the sample size. Your sample is not so small that testing would necessarily be imposs
51,122
How to simulate a Cox proportional hazards model with change point and code it in R
Well, after a little research I found the answer to my question, so here is the code that allows the simulation: simulation <- function(n,lambda,changepoint, surv.df=TRUE) { # Define the covariate, restrictions and parameters. X <- rbinom(n,prob=1/3,size=1) E <- rexp(n,1) EL <- rexp(n,lambda) # Define the piecewise function. Y <- ifelse(X==0,E, ifelse(E<=changepoint,E,changepoint+EL)) ## Construction of the data frame. if (surv.df) data.frame(Y,X) else cbind(Y,X) } Now, if we would like to do m Monte Carlo replicas of this data frame: m = "number of replicas" survive.df <- replicate(m, simulation(n = 100 ,lambda = 1,changepoint = 1), simplify=FALSE) To view the i-th dataset: survive.df[[i]] Fit a Cox proportional hazards model to the i-th dataset (this needs the survival package): library(survival) coxph(Surv(Y)~X , data=survive.df[[i]]) # Model. cox.zph(coxph(Surv(Y)~X , data=survive.df[[i]])) # Schoenfeld residuals. Any suggestions?
How to simulate a Cox proportional hazards model with change point and code it in R
Well, after a little research I found the answer to my question, so here is the code that allows the simulation: simulation <- function(n,lambda,changepoint, surv.df=TRUE) { # Define the covariate, r
How to simulate a Cox proportional hazards model with change point and code it in R Well, after a little research I found the answer to my question, so here is the code that allows the simulation: simulation <- function(n,lambda,changepoint, surv.df=TRUE) { # Define the covariate, restrictions and parameters. X <- rbinom(n,prob=1/3,size=1) E <- rexp(n,1) EL <- rexp(n,lambda) # Define the piecewise function. Y <- ifelse(X==0,E, ifelse(E<=changepoint,E,changepoint+EL)) ## Construction of the data frame. if (surv.df) data.frame(Y,X) else cbind(Y,X) } Now, if we would like to do m Monte Carlo replicas of this data frame: m = "number of replicas" survive.df <- replicate(m, simulation(n = 100 ,lambda = 1,changepoint = 1), simplify=FALSE) To view the i-th dataset: survive.df[[i]] Fit a Cox proportional hazards model to the i-th dataset (this needs the survival package): library(survival) coxph(Surv(Y)~X , data=survive.df[[i]]) # Model. cox.zph(coxph(Surv(Y)~X , data=survive.df[[i]])) # Schoenfeld residuals. Any suggestions?
How to simulate a Cox proportional hazards model with change point and code it in R Well, after a little research I found the answer to my question, so here is the code that allows the simulation: simulation <- function(n,lambda,changepoint, surv.df=TRUE) { # Define the covariate, r
51,123
When should I use errors-in-variables?
Describing the nature and extent of measurement error is a very good practice for statisticians. The next important implication for practice is to interpret the results and findings in a contextually appropriate manner, which may confuse the reader somewhat, but doesn't convey false associations. For instance, suppose we observe in-patient admissions to a mental hospital and measure pain sensitivity and successful rehabilitation from addictive substances. In survey items about the frequency and intensity of pain, we are careful to refer to associations as describing self-reported pain. If such patients are being considered as eligible for benzos or opioids, we might expect exaggeration of the pain response among those less likely to respond to rehab therapies. In epidemiology, the above example could be referred to as differential misclassification. Even non-differential misclassification has implications for the measured associations. In particular, attenuation is a problem. However, in most circumstances, associations can be measured in the presence of attenuation. Hence we are capable of making inference on associations, but predicted values in non-linear trends are biased, and confidence intervals for effects may be too large. For inference, attenuation is less of a problem than it seems. If we had an estimate of the measurement error, then it's possible to extend maximum likelihood methods to obtain unbiased estimates of associations. In particular, this would involve the use of the EM algorithm. It turns out this is equivalent to a latent variable approach where a latent variable is added between your measurement-errored variable and y.
When should I use errors-in-variables?
Describing the nature and extent of measurement error is a very good practice for statisticians. The next important implication for practice is to interpret the results and findings in a contextually
When should I use errors-in-variables? Describing the nature and extent of measurement error is a very good practice for statisticians. The next important implication for practice is to interpret the results and findings in a contextually appropriate manner, which may confuse the reader somewhat, but doesn't convey false associations. For instance, suppose we observe in-patient admissions to a mental hospital and measure pain sensitivity and successful rehabilitation from addictive substances. In survey items about the frequency and intensity of pain, we are careful to refer to associations as describing self-reported pain. If such patients are being considered as eligible for benzos or opioids, we might expect exaggeration of the pain response among those less likely to respond to rehab therapies. In epidemiology, the above example could be referred to as differential misclassification. Even non-differential misclassification has implications for the measured associations. In particular, attenuation is a problem. However, in most circumstances, associations can be measured in the presence of attenuation. Hence we are capable of making inference on associations, but predicted values in non-linear trends are biased, and confidence intervals for effects may be too large. For inference, attenuation is less of a problem than it seems. If we had an estimate of the measurement error, then it's possible to extend maximum likelihood methods to obtain unbiased estimates of associations. In particular, this would involve the use of the EM algorithm. It turns out this is equivalent to a latent variable approach where a latent variable is added between your measurement-errored variable and y.
When should I use errors-in-variables? Describing the nature and extent of measurement error is a very good practice for statisticians. The next important implication for practice is to interpret the results and findings in a contextually
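A small simulated sketch of the attenuation discussed above, and of the kind of correction that becomes possible once the measurement-error variance is known; this uses a simple method-of-moments/regression-calibration style correction rather than the EM approach, and all numbers are made up:

set.seed(1)
n <- 5000
x_true <- rnorm(n)                                     # latent covariate
sigma_u <- 0.8                                         # assumed known measurement-error SD
x_obs <- x_true + rnorm(n, sd = sigma_u)               # error-prone measurement
y <- 2 + 1.5 * x_true + rnorm(n)
b_naive <- unname(coef(lm(y ~ x_obs))["x_obs"])        # attenuated towards zero
reliability <- (var(x_obs) - sigma_u^2) / var(x_obs)   # estimated attenuation factor
c(naive = b_naive, corrected = b_naive / reliability)  # corrected estimate is close to 1.5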
51,124
Statistics version of "What is..." columns in Notices
These days you often find such stuff in blogs! Here are two favourites of mine: andrewgelman.com normaldeviate.wordpress.com/
Statistics version of "What is..." columns in Notices
These days you often find such stuff in blogs! Here are two favourites of mine: andrewgelman.com normaldeviate.wordpress.com/
Statistics version of "What is..." columns in Notices These days you often find such stuff in blogs! Here are two favourites of mine: andrewgelman.com normaldeviate.wordpress.com/
Statistics version of "What is..." columns in Notices These days you often find such stuff in blogs! Here are two favourites of mine: andrewgelman.com normaldeviate.wordpress.com/
51,125
Comparison of two classifiers based on precision/recall/F1 only?
If all you have is P/R/F1 scores for the two systems/classifiers there's no way of testing whether the difference between the two is statistically significant. For the McNemar's test, as you suggested, you would need the predictions of the two systems. If you have other labeled data and the implementation of the two systems, you can test those on the data (shuffling and 5- or 10-fold cross-validating several times) so that you can perform a statistical test.
Comparison of two classifiers based on precision/recall/F1 only?
If all you have is P/R/F1 scores for the two systems/classifiers there's no way of testing whether the difference between the two is statistically significant. For the McNemar's test, as you suggested
Comparison of two classifiers based on precision/recall/F1 only? If all you have is P/R/F1 scores for the two systems/classifiers there's no way of testing whether the difference between the two is statistically significant. For the McNemar's test, as you suggested, you would need the predictions of the two systems. If you have other labeled data and the implementation of the two systems, you can test those on the data (shuffling and 5- or 10-fold cross-validating several times) so that you can perform a statistical test.
Comparison of two classifiers based on precision/recall/F1 only? If all you have is P/R/F1 scores for the two systems/classifiers there's no way of testing whether the difference between the two is statistically significant. For the McNemar's test, as you suggested
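For completeness, a small sketch of McNemar's test once the per-document predictions of both systems are available; the labels and predictions below are simulated placeholders:

set.seed(1)
truth  <- sample(c("pos", "neg"), 200, replace = TRUE)
pred_A <- ifelse(runif(200) < 0.8, truth, sample(c("pos", "neg"), 200, replace = TRUE))  # roughly 80% accurate
pred_B <- ifelse(runif(200) < 0.7, truth, sample(c("pos", "neg"), 200, replace = TRUE))  # roughly 70% accurate
tab <- table(A_correct = pred_A == truth, B_correct = pred_B == truth)  # 2x2 table of correctness
mcnemar.test(tab)   # tests whether the two off-diagonal (disagreement) counts differ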
51,126
Comparison of two classifiers based on precision/recall/F1 only?
You have different options about how to compare different classifiers in the text domain. However, you will need quality levels per class or per document. You can check this paper about Re-examining text categorisation methods.
Comparison of two classifiers based on precision/recall/F1 only?
You have different options about how to compare different classifiers in the text domain. However, you will need quality levels per class or per document. You can check this paper about Re-examining t
Comparison of two classifiers based on precision/recall/F1 only? You have different options about how to compare different classifiers in the text domain. However, you will need quality levels per class or per document. You can check this paper about Re-examining text categorisation methods.
Comparison of two classifiers based on precision/recall/F1 only? You have different options about how to compare different classifiers in the text domain. However, you will need quality levels per class or per document. You can check this paper about Re-examining t
51,127
Testing independence hypothesis in logistic regression
Yes, it is possible. You have to be somewhat careful about how to define the residuals. To be concrete, suppose we have a time series. Let the binary outcome be $y_t$ and the vector of regressors be $x_t$. An obvious thing is to take $\hat{\epsilon}_t = y_t - \frac{e^{x_t \hat{\beta}} } {1+e^{x_t \hat{\beta}} }$. However, these residuals are heteroskedastic, which makes their asymptotic distribution under the null slightly more complicated. It's easier to work with $\tilde{\epsilon}_t = \frac{\hat{\epsilon}_t}{ \sqrt{ (\frac{e^{x_t \hat{\beta}} } {1+e^{x_t \hat{\beta}} })(1 - \frac{e^{x_t \hat{\beta}} } {1+e^{x_t \hat{\beta}} }) } } $. Hsiao, Pesaran, and Pick (2007) discuss this approach for testing independence in panel data. Alternatively, you can work with what Gourieroux, Monfort, and Trognon (1985) call generalized residuals, $\hat{\varepsilon}_t = \mathrm{E}[y_t^\ast - x_t \beta | y_t, x_t] = y_t - \frac{1}{1+e^{-x_t\beta}}$. They show that $S_1 = \frac{(\sum_{t=2}^T \hat{\varepsilon}_t \hat{\varepsilon}_{t-1})^2}{\sum_{t=2}^T \hat{\varepsilon}_t^2 \hat{\varepsilon}_{t-1}^2} \overset{d}{\rightarrow} \chi^2_1$. They also discuss how $S_1$ relates to the usual Durbin-Watson test statistic.
Testing independence hypothesis in logistic regression
Yes, it is possible. You have to be somewhat careful about how to define the residuals. To be concrete, suppose we have a time series. Let the binary outcome be $y_t$ and the vector of regressors be $
Testing independence hypothesis in logistic regression Yes, it is possible. You have to be somewhat careful about how to define the residuals. To be concrete, suppose we have a time series. Let the binary outcome be $y_t$ and the vector of regressors be $x_t$. An obvious thing is to take $\hat{\epsilon}_t = y_t - \frac{e^{x_t \hat{\beta}} } {1+e^{x_t \hat{\beta}} }$. However, these residuals are heteroskedastic, which makes their asymptotic distribution under the null slightly more complicated. It's easier to work with $\tilde{\epsilon}_t = \frac{\hat{\epsilon}_t}{ \sqrt{ (\frac{e^{x_t \hat{\beta}} } {1+e^{x_t \hat{\beta}} })(1 - \frac{e^{x_t \hat{\beta}} } {1+e^{x_t \hat{\beta}} }) } } $. Hsiao, Pesaran, and Pick (2007) discuss this approach for testing independence in panel data. Alternatively, you can work with what Gourieroux, Monfort, and Trognon (1985) call generalized residuals, $\hat{\varepsilon}_t = \mathrm{E}[y_t^\ast - x_t \beta | y_t, x_t] = y_t - \frac{1}{1+e^{-x_t\beta}}$. They show that $S_1 = \frac{(\sum_{t=2}^T \hat{\varepsilon}_t \hat{\varepsilon}_{t-1})^2}{\sum_{t=2}^T \hat{\varepsilon}_t^2 \hat{\varepsilon}_{t-1}^2} \overset{d}{\rightarrow} \chi^2_1$. They also discuss how $S_1$ relates to the usual Durbin-Watson test statistic.
Testing independence hypothesis in logistic regression Yes, it is possible. You have to be somewhat careful about how to define the residuals. To be concrete, suppose we have a time series. Let the binary outcome be $y_t$ and the vector of regressors be $
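A quick R sketch of the $S_1$ statistic described above, computed from the generalized (response-scale) residuals of a fitted logistic regression; the data are simulated with independent outcomes, so the statistic should be compatible with its $\chi^2_1$ null distribution:

set.seed(1)
n <- 300
x <- rnorm(n)
y <- rbinom(n, 1, plogis(0.5 + x))            # independent outcomes given x
fit <- glm(y ~ x, family = binomial)
e <- y - fitted(fit)                          # generalized residuals y_t - 1/(1 + exp(-x_t beta))
S1 <- sum(e[-1] * e[-n])^2 / sum(e[-1]^2 * e[-n]^2)
c(S1 = S1, critical = qchisq(0.95, df = 1))   # reject independence if S1 exceeds the critical value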
51,128
Testing two trendlines for statistical significance [closed]
You can try log-transforming the data on either or both axes to get a normal distribution. Then do linear regression and compare the slopes to see if they are significantly different (calculate a Z statistic and then look up the p-value in a statistics table). Alternatively, you can calculate the area under the curve (AUC) using the trapezoidal rule for each rep; this should give a normal distribution, and you can then do a standard Student's t-test. You can do this for different sections of your curve.
Testing two trendlines for statistical significance [closed]
you can try log transforming the data on either of both axis to get a normal distribution. Then do linear regression and compare the slopes to see if they are significantly different (calculate Z and
Testing two trendlines for statistical significance [closed] You can try log-transforming the data on either or both axes to get a normal distribution. Then do linear regression and compare the slopes to see if they are significantly different (calculate a Z statistic and then look up the p-value in a statistics table). Alternatively, you can calculate the area under the curve (AUC) using the trapezoidal rule for each rep; this should give a normal distribution, and you can then do a standard Student's t-test. You can do this for different sections of your curve.
Testing two trendlines for statistical significance [closed] you can try log transforming the data on either of both axis to get a normal distribution. Then do linear regression and compare the slopes to see if they are significantly different (calculate Z and
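A sketch of the slope-comparison idea above in R, with made-up data; the two regressions are fitted on the log scale and the slopes compared with a Z statistic:

set.seed(1)
x <- 1:20
y1 <- exp(0.20 * x + rnorm(20, sd = 0.3))   # treatment 1
y2 <- exp(0.15 * x + rnorm(20, sd = 0.3))   # treatment 2
f1 <- lm(log(y1) ~ x); f2 <- lm(log(y2) ~ x)
b1 <- coef(summary(f1))["x", ]; b2 <- coef(summary(f2))["x", ]
z <- (b1["Estimate"] - b2["Estimate"]) / sqrt(b1["Std. Error"]^2 + b2["Std. Error"]^2)
2 * pnorm(-abs(z))                          # two-sided p-value for equal slopes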
51,129
Clustering longitudinal (trajectory) data
If you have a particular longitudinal variable you are interested in, you could take an unsupervised approach on the covariates using either a mixed-effects regression tree or latent growth curve structural equation modeling tree. For SEM trees, see this for more info: http://brandmaier.de/semtree/user-guide/
Clustering longitudinal (trajectory) data
If you have a particular longitudinal variable you are interested in, you could take an unsupervised approach on the covariates using either a mixed-effects regression tree or latent growth curve stru
Clustering longitudinal (trajectory) data If you have a particular longitudinal variable you are interested in, you could take an unsupervised approach on the covariates using either a mixed-effects regression tree or latent growth curve structural equation modeling tree. For SEM trees, see this for more info: http://brandmaier.de/semtree/user-guide/
Clustering longitudinal (trajectory) data If you have a particular longitudinal variable you are interested in, you could take an unsupervised approach on the covariates using either a mixed-effects regression tree or latent growth curve stru
51,130
Recommend classification algorithms to try
Can you describe simply what these features are? If the features come from some complex data, like images or audio files, the size of your dataset allows you to use a classifier that learns the intermediate representation itself, for example deep neural networks. I don't know if R provides good resources for deep learning; you can start, for example, with RBMs, as shown in this question. I don't think the common libraries implement autoencoders. See these lecture notes and Baldi, 2010 for an introduction.
Recommend classification algorithms to try
Can you describe simply what are these features ? If the features come from some complex data, like images or audio files, the size of your dataset allows you to use a classifier which learn itself th
Recommend classification algorithms to try Can you describe simply what these features are? If the features come from some complex data, like images or audio files, the size of your dataset allows you to use a classifier that learns the intermediate representation itself, for example deep neural networks. I don't know if R provides good resources for deep learning; you can start, for example, with RBMs, as shown in this question. I don't think the common libraries implement autoencoders. See these lecture notes and Baldi, 2010 for an introduction.
Recommend classification algorithms to try Can you describe simply what are these features ? If the features come from some complex data, like images or audio files, the size of your dataset allows you to use a classifier which learn itself th
51,131
Which model for panel data with dependent variables from [0,1]?
Addressing unobserved heterogeneity in panel models with fixed effects for fractional response variables (or nonlinear models in general) is not trivial due to the incidental parameter problem (for $N\rightarrow\infty$ and $T$ fixed), see for example Lancaster (2000) or this answer here at CrossValidated. If $T$ is small (and fixed), fixed effects are inconsistent (and random effects rely probably strongly on the distributional assumptions). So you cannot just compare a random effects and a fixed effects model via Hausman test. Proposals for panel models for fractional response variables can be found in Papke and Wooldridge (2008) or here.
Which model for panel data with dependent variables from [0,1]?
Addressing unobserved heterogeneity in panel models with fixed effects for fractional response variables (or nonlinear models in general) is not trivial due to the incidental parameter problem (for $N
Which model for panel data with dependent variables from [0,1]? Addressing unobserved heterogeneity in panel models with fixed effects for fractional response variables (or nonlinear models in general) is not trivial due to the incidental parameter problem (for $N\rightarrow\infty$ and $T$ fixed), see for example Lancaster (2000) or this answer here at CrossValidated. If $T$ is small (and fixed), fixed effects are inconsistent (and random effects rely probably strongly on the distributional assumptions). So you cannot just compare a random effects and a fixed effects model via Hausman test. Proposals for panel models for fractional response variables can be found in Papke and Wooldridge (2008) or here.
Which model for panel data with dependent variables from [0,1]? Addressing unobserved heterogeneity in panel models with fixed effects for fractional response variables (or nonlinear models in general) is not trivial due to the incidental parameter problem (for $N
51,132
Timeline of machine learning and data mining breakthroughs
This is far from complete, but Volker Tresp (Siemens) gives a nice timeline in the third slide of this talk.
Timeline of machine learning and data mining breakthroughs
This is far from complete, but Volker Tresp (Siemens) gives a nice timeline in the third slide of this talk.
Timeline of machine learning and data mining breakthroughs This is far from complete, but Volker Tresp (Siemens) gives a nice timeline in the third slide of this talk.
Timeline of machine learning and data mining breakthroughs This is far from complete, but Volker Tresp (Siemens) gives a nice timeline in the third slide of this talk.
51,133
What optimization problem does least angle regression try to solve?
If I'm not missing anything, LAR tries to solve the same optimization problem as LASSO, in a way that gives the solutions for all possible equivalent $\epsilon$s (i.e., the so-called LASSO path).
What optimization problem does least angle regression try to solve?
If I don't miss anything, LAR tries to solve the same optimization problem with LASSO in a way that the solutions for all possible equivalent $\epsilon$s are given (i.e., the so-called LASSO path)
What optimization problem does least angle regression try to solve? If I'm not missing anything, LAR tries to solve the same optimization problem as LASSO, in a way that gives the solutions for all possible equivalent $\epsilon$s (i.e., the so-called LASSO path).
What optimization problem does least angle regression try to solve? If I don't miss anything, LAR tries to solve the same optimization problem with LASSO in a way that the solutions for all possible equivalent $\epsilon$s are given (i.e., the so-called LASSO path)
51,134
Linear regression in matrix notation
Note: In an attempt to add more details to my question, I ended up with a possible correct answer. Further comments will be highly appreciated Starting from: $$\nabla E_{D} = \sum_{n=1}^{N} \{y_{n}-\beta^{T}x_{n}\}x_{n}^{T}=0$$ Simplifying a bit: $$\sum_{n=1}^{N}y_{n}x_{n}^{T}=\beta^{T}\left(\sum_{n=1}^{N}x_{n}x_{n}^{T}\right)$$ The term inside the parenthesis is a sum of matrices, so assuming the inverse exists: $$\beta^{T}=\sum_{n=1}^{N}y_{n}x_{n}^{T}\left(\sum_{n=1}^{N}x_{n}x_{n}^{T}\right)^{-1}$$ Applying the transpose: $$\beta=\left(\sum_{n=1}^{N}x_{n}x_{n}^{T}\right)^{-1}\sum_{n=1}^{N}x_{n}y_{n}$$ Taking $\bf{X}$ as described before, we find the correct denominator because $${\bf{X}}^{T}{\bf{X}}=( \bf{x}_{1} \thinspace \bf{x}_{2} ... \bf{x}_{N} ) \begin{pmatrix}{\bf{x}}_{1}^{T}\\{\bf{x}}_{2}^{T}\\ ...\\ {\bf{x}}_{n}^{T}\end{pmatrix}=\sum_{n=1}^{N}x_{n}x_{n}^{T}$$ and $${\bf{X^{T}y}}=( {\bf{x}}_{1} \thinspace {\bf{x}}_{2} ... {\bf{x}}_{N} ) \begin{pmatrix}y_{1}\\y_{2}\\ ...\\ y_{n}\end{pmatrix}={{\bf{x}}_{1}}y_{1}+{\bf{x}}_{2}y_{2}+...+{\bf{x}}_{N}y_{n}$$ which is the first and second part of the equation: $$\beta = (X^{T}X)^{-1}X^{T}{\bf{y}}$$ Looks like I got the correct result. Can you catch a mistake? I still find this calculations somewhat uncomfortable to deal with, particularly finding the correct derivatives.
Linear regression in matrix notation
Note: In an attempt to add more details to my question, I ended up with a possible correct answer. Further comments will be highly appreciated Starting from: $$\nabla E_{D} = \sum_{n=1}^{N} \{y_{n}-\
Linear regression in matrix notation Note: In an attempt to add more details to my question, I ended up with a possible correct answer. Further comments will be highly appreciated Starting from: $$\nabla E_{D} = \sum_{n=1}^{N} \{y_{n}-\beta^{T}x_{n}\}x_{n}^{T}=0$$ Simplifying a bit: $$\sum_{n=1}^{N}y_{n}x_{n}^{T}=\beta^{T}\left(\sum_{n=1}^{N}x_{n}x_{n}^{T}\right)$$ The term inside the parenthesis is a sum of matrices, so assuming the inverse exists: $$\beta^{T}=\sum_{n=1}^{N}y_{n}x_{n}^{T}\left(\sum_{n=1}^{N}x_{n}x_{n}^{T}\right)^{-1}$$ Applying the transpose: $$\beta=\left(\sum_{n=1}^{N}x_{n}x_{n}^{T}\right)^{-1}\sum_{n=1}^{N}x_{n}y_{n}$$ Taking $\bf{X}$ as described before, we find the correct denominator because $${\bf{X}}^{T}{\bf{X}}=( \bf{x}_{1} \thinspace \bf{x}_{2} ... \bf{x}_{N} ) \begin{pmatrix}{\bf{x}}_{1}^{T}\\{\bf{x}}_{2}^{T}\\ ...\\ {\bf{x}}_{n}^{T}\end{pmatrix}=\sum_{n=1}^{N}x_{n}x_{n}^{T}$$ and $${\bf{X^{T}y}}=( {\bf{x}}_{1} \thinspace {\bf{x}}_{2} ... {\bf{x}}_{N} ) \begin{pmatrix}y_{1}\\y_{2}\\ ...\\ y_{n}\end{pmatrix}={{\bf{x}}_{1}}y_{1}+{\bf{x}}_{2}y_{2}+...+{\bf{x}}_{N}y_{n}$$ which is the first and second part of the equation: $$\beta = (X^{T}X)^{-1}X^{T}{\bf{y}}$$ Looks like I got the correct result. Can you catch a mistake? I still find this calculations somewhat uncomfortable to deal with, particularly finding the correct derivatives.
Linear regression in matrix notation Note: In an attempt to add more details to my question, I ended up with a possible correct answer. Further comments will be highly appreciated Starting from: $$\nabla E_{D} = \sum_{n=1}^{N} \{y_{n}-\
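A quick numerical check of the final formula $\beta = (X^{T}X)^{-1}X^{T}{\bf{y}}$ derived above against R's lm(), with a simulated design matrix:

set.seed(1)
N <- 100
X <- cbind(1, rnorm(N), rnorm(N))            # design matrix including a constant column
beta_true <- c(2, -1, 0.5)
y <- drop(X %*% beta_true + rnorm(N))
beta_hat <- solve(t(X) %*% X, t(X) %*% y)    # normal-equations solution
cbind(beta_hat, coef(lm(y ~ X - 1)))         # the two columns should agree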
51,135
Linear regression in matrix notation
I think I found the problem: when you went from the gradient to solving for $\beta$, you didn't reverse the order of the terms. The inversion should have been $$(x_nx^T_n)^{-1}=\frac{1}{x^T_nx_n},$$ so that $$\beta^T = \frac{\sum y_nx_n^T}{\sum x^T_nx_n}.$$ See the matrix cookbook (pdf) on page 5.
Linear regression in matrix notation
I think I found the problem: when you went from the gradient to solving for $\beta$, you didn't reverse the order of the terms. The inversion should have been $$(x_nx^T_n)^{-1}=\frac{1}{x^T_nx_n},$$ s
Linear regression in matrix notation I think I found the problem: when you went from the gradient to solving for $\beta$, you didn't reverse the order of the terms. The inversion should have been $$(x_nx^T_n)^{-1}=\frac{1}{x^T_nx_n},$$ so that $$\beta^T = \frac{\sum y_nx_n^T}{\sum x^T_nx_n}.$$ See the matrix cookbook (pdf) on page 5.
Linear regression in matrix notation I think I found the problem: when you went from the gradient to solving for $\beta$, you didn't reverse the order of the terms. The inversion should have been $$(x_nx^T_n)^{-1}=\frac{1}{x^T_nx_n},$$ s
51,136
Fitting a mixture of two Gaussians
Just so this thread gets an answer (since we can't access your data any more, I don't think much more answering will be happening): what you are doing seems to be perfectly fine. You could change your search for a quantile of a mixture to use KScorrect::qmixnorm(), as per Compute quantile function from a mixture of Normal distribution.
Fitting a mixture of two Gaussians
Just so this thread gets an answer (since we can't access your data any more, I don't think much more answering will be happening): what you are doing seems to be perfectly fine. You could change your
Fitting a mixture of two Gaussians Just so this thread gets an answer (since we can't access your data any more, I don't think much more answering will be happening): what you are doing seems to be perfectly fine. You could change your search for a quantile of a mixture to use KScorrect::qmixnorm(), as per Compute quantile function from a mixture of Normal distribution.
Fitting a mixture of two Gaussians Just so this thread gets an answer (since we can't access your data any more, I don't think much more answering will be happening): what you are doing seems to be perfectly fine. You could change your
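If you prefer to stay in base R, the quantile search mentioned above can also be done by root-finding on the mixture CDF; the parameters below are made up, and KScorrect::qmixnorm packages the same idea:

w <- c(0.6, 0.4); mu <- c(0, 3); s <- c(1, 0.5)   # hypothetical fitted mixture parameters
pmix <- function(q) sum(w * pnorm(q, mu, s))      # mixture CDF at a single point q
qmix <- function(p) uniroot(function(q) pmix(q) - p, interval = c(-20, 20))$root
qmix(0.95)                                        # e.g. the 95th percentile of the fitted mixture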
51,137
How to predict match outcome in team game based on participating players
By using Naive Bayes you assume that there is no such thing as team-play (two players performing best when playing together). Keeping that in mind, you can do just the following: for each player, evaluate the number of games he wins divided by the total number of games he plays. That will be the player's rate. A team's rate can be evaluated as the sum of all its members' rates. The team with the greater rate is most likely to win. If you would like to take team-play into account, you need to introduce some measure of it. Say, keep a rate for each pair of players. Note that this value could be negative (if two players hate each other), so I would compute it as ((number of games the pair wins) - (number of games each player wins on his own)) / (total number of games). This rate can be added to the team's rate, I think. Also, in this approach, you should consider groups of 3, 4 and 5 players.
How to predict match outcome in team game based on participating players
By using Naive Bayes you assume that there is no such thing as team-play (Two players being best when play together). Keeping that in mind, you can do just the following: For each player evaluate numb
How to predict match outcome in team game based on participating players By using Naive Bayes you assume that there is no such thing as team-play (two players performing best when playing together). Keeping that in mind, you can do just the following: for each player, evaluate the number of games he wins divided by the total number of games he plays. That will be the player's rate. A team's rate can be evaluated as the sum of all its members' rates. The team with the greater rate is most likely to win. If you would like to take team-play into account, you need to introduce some measure of it. Say, keep a rate for each pair of players. Note that this value could be negative (if two players hate each other), so I would compute it as ((number of games the pair wins) - (number of games each player wins on his own)) / (total number of games). This rate can be added to the team's rate, I think. Also, in this approach, you should consider groups of 3, 4 and 5 players.
How to predict match outcome in team game based on participating players By using Naive Bayes you assume that there is no such thing as team-play (Two players being best when play together). Keeping that in mind, you can do just the following: For each player evaluate numb
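A toy R sketch of the per-player win rates and team rate described above; the match log is simulated and the pair-rate extension is left out for brevity:

set.seed(1)
players <- LETTERS[1:10]
n_games <- 200
team1 <- t(replicate(n_games, sample(players, 5)))                               # 5 players per side
team2 <- t(sapply(1:n_games, function(i) sample(setdiff(players, team1[i, ]), 5)))
team1_wins <- rbinom(n_games, 1, 0.5)                                            # made-up outcomes
win_rate <- sapply(players, function(p) {
  in1 <- apply(team1, 1, function(r) p %in% r)
  in2 <- apply(team2, 1, function(r) p %in% r)
  (sum(team1_wins[in1]) + sum(1 - team1_wins[in2])) / sum(in1 | in2)             # wins / games played
})
team_rate <- function(team) sum(win_rate[team])                                  # team score = sum of member rates
team_rate(c("A", "B", "C", "D", "E")) > team_rate(c("F", "G", "H", "I", "J"))    # predicted winner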
51,138
How do I calculate the probabilities in this decision making scenario?
From probability theory: $P(T|You) = P(T \& You) / P(You)$ $P(T|Them) = P(T \& Them) / P(Them)$ $P(T) = P(T|You)P(You) + P(T|Them)P(Them)$ In your situation: $P(You) + P(Them) = 1$ The $P(You)$ is the probability that you're answering the question and $P(Them)$ is the probability that they are answering the question, and P(T) is the marginal of the true answer. To answer both your questions: $P(T) = P(T|You)(1 - P(Them)) + P(T|Them)P(Them)$ $= P(T|You) + P(Them)[P(T|Them) - P(T|You)]$ thus $P(Them) = [P(T) - P(T|You)] / [P(T|Them) - P(T|You)]$ $P(Them) = [0.6 - 0.9] / [0.1-0.9] = 0.375$ So yes your calculation is correct. In the event that their choice is sequentially dependent on yours then: $P(T|Them,X) = P(T \& Them | X) / P(Them | X)$ where $P(X)$ is the probability of you making some decision. Hope that helps.
How do I calculate the probabilities in this decision making scenario?
From probability theory: $P(T|You) = P(T \& You) / P(You)$ $P(T|Them) = P(T \& Them) / P(Them)$ $P(T) = P(T|You)P(You) + P(T|Them)P(Them)$ In your situation: $P(You) + P(Them) = 1$ The $P(You)$ is
How do I calculate the probabilities in this decision making scenario? From probability theory: $P(T|You) = P(T \& You) / P(You)$ $P(T|Them) = P(T \& Them) / P(Them)$ $P(T) = P(T|You)P(You) + P(T|Them)P(Them)$ In your situation: $P(You) + P(Them) = 1$ The $P(You)$ is the probability that you're answering the question and $P(Them)$ is the probability that they are answering the question, and P(T) is the marginal of the true answer. To answer both your questions: $P(T) = P(T|You)(1 - P(Them)) + P(T|Them)P(Them)$ $= P(T|You) + P(Them)[P(T|Them) - P(T|You)]$ thus $P(Them) = [P(T) - P(T|You)] / [P(T|Them) - P(T|You)]$ $P(Them) = [0.6 - 0.9] / [0.1-0.9] = 0.375$ So yes your calculation is correct. In the event that their choice is sequentially dependent on yours then: $P(T|Them,X) = P(T \& Them | X) / P(Them | X)$ where $P(X)$ is the probability of you making some decision. Hope that helps.
How do I calculate the probabilities in this decision making scenario? From probability theory: $P(T|You) = P(T \& You) / P(You)$ $P(T|Them) = P(T \& Them) / P(Them)$ $P(T) = P(T|You)P(You) + P(T|Them)P(Them)$ In your situation: $P(You) + P(Them) = 1$ The $P(You)$ is
51,139
How do I calculate the probabilities in this decision making scenario?
Let's say there are only two choices A and B. If you make decision A, then the naive person is 30 percent correct in their choice; if you make decision B, then the naive person is 20 percent correct in their choice. Historically, let A be the right answer 60% of the time. You making decision A happens with probability $0.6 * 0.9 + 0.4 * 0.1$. Hence the probability of the other person being correct would be $(0.6 * 0.9 + 0.4 * 0.1)*0.3 + (1-0.6 * 0.9 - 0.4 * 0.1)*0.2$. Then solve for x as before? Just my 2 cents.
How do I calculate the probabilities in this decision making scenario?
Let's say there are only two choices A and B. If you make decision A, then the naive person is 30 percent correct in their choice; if you make decision B, then the naive person is 20 correct in their
How do I calculate the probabilities in this decision making scenario? Let's say there are only two choices A and B. If you make decision A, then the naive person is 30 percent correct in their choice; if you make decision B, then the naive person is 20 percent correct in their choice. Historically, let A be the right answer 60% of the time. You making decision A happens with probability $0.6 * 0.9 + 0.4 * 0.1$. Hence the probability of the other person being correct would be $(0.6 * 0.9 + 0.4 * 0.1)*0.3 + (1-0.6 * 0.9 - 0.4 * 0.1)*0.2$. Then solve for x as before? Just my 2 cents.
How do I calculate the probabilities in this decision making scenario? Let's say there are only two choices A and B. If you make decision A, then the naive person is 30 percent correct in their choice; if you make decision B, then the naive person is 20 correct in their
51,140
Unbiased estimator from two SRS less duplicates
An option is to derive two independent HT estimates, one from each of the samples $s_1$ and $s_2$: $\hat{t}_1 = \sum_{i \in s_1} \frac{y_i}{\pi_i} = \frac{N}{n} \sum_{i \in s_1} y_i$, $\hat{t}_2 = \sum_{i \in s_2} \frac{y_i}{\pi_i} = \frac{N}{m} \sum_{i \in s_2} y_i$. And then you can use the average of both estimates to derive another unbiased estimate of $t$: $\hat{t} = \frac{1}{2} \left( \hat{t}_1 + \hat{t}_2 \right) = \frac{N}{2} \left( \frac{1}{n} \sum_{i \in s_1} y_i + \frac{1}{m} \sum_{i \in s_2} y_i \right)$.
Unbiased estimator from two SRS less duplicates
An option is to derive two independent HT estimates from each sample: $s_1$ and $s_2$: $\hat{t}_1 = \sum_{i \in s_1} \frac{y_i}{\pi_i} = \frac{N}{n} \sum_{i \in s_1} y_i$, $\hat{t}_2 = \sum_{i \in s_2
Unbiased estimator from two SRS less duplicates An option is to derive two independent HT estimates, one from each of the samples $s_1$ and $s_2$: $\hat{t}_1 = \sum_{i \in s_1} \frac{y_i}{\pi_i} = \frac{N}{n} \sum_{i \in s_1} y_i$, $\hat{t}_2 = \sum_{i \in s_2} \frac{y_i}{\pi_i} = \frac{N}{m} \sum_{i \in s_2} y_i$. And then you can use the average of both estimates to derive another unbiased estimate of $t$: $\hat{t} = \frac{1}{2} \left( \hat{t}_1 + \hat{t}_2 \right) = \frac{N}{2} \left( \frac{1}{n} \sum_{i \in s_1} y_i + \frac{1}{m} \sum_{i \in s_2} y_i \right)$.
Unbiased estimator from two SRS less duplicates An option is to derive two independent HT estimates from each sample: $s_1$ and $s_2$: $\hat{t}_1 = \sum_{i \in s_1} \frac{y_i}{\pi_i} = \frac{N}{n} \sum_{i \in s_1} y_i$, $\hat{t}_2 = \sum_{i \in s_2
51,141
Unbiased estimator from two SRS less duplicates
There is another solution. You can compute the probability for each individual to be included in the combined sample $S$ as $$\pi_i = \frac{n}{N} + \frac{m}{N} - \frac{mn}{N^2} = \frac{n + m - \frac{mn}{N}}{N}$$ You can now apply the standard HT estimator for the population total: $$\hat{t} = \sum_{i \in S} \frac{y_i}{\pi_i} = \frac{N}{n + m - \frac{mn}{N}} \sum_{i \in S} y_i$$
Unbiased estimator from two SRS less duplicates
There is other solution. You can compute the sampling probability for each individual to be sampled in combined sample $S$ by $$\pi_i = \frac{n}{N} + \frac{m}{N} - \frac{mn}{N^2} = \frac{n + m - \frac
Unbiased estimator from two SRS less duplicates There is another solution. You can compute the probability for each individual to be included in the combined sample $S$ as $$\pi_i = \frac{n}{N} + \frac{m}{N} - \frac{mn}{N^2} = \frac{n + m - \frac{mn}{N}}{N}$$ You can now apply the standard HT estimator for the population total: $$\hat{t} = \sum_{i \in S} \frac{y_i}{\pi_i} = \frac{N}{n + m - \frac{mn}{N}} \sum_{i \in S} y_i$$
Unbiased estimator from two SRS less duplicates There is other solution. You can compute the sampling probability for each individual to be sampled in combined sample $S$ by $$\pi_i = \frac{n}{N} + \frac{m}{N} - \frac{mn}{N^2} = \frac{n + m - \frac
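A small simulation sketch checking that both estimators discussed in the two answers above (the average of the two HT estimates, and the HT estimator over the de-duplicated combined sample with $\pi_i = (n + m - nm/N)/N$) are unbiased for the population total; the population values are made up:

set.seed(1)
N <- 1000; n <- 60; m <- 40
y <- rgamma(N, shape = 2, scale = 50)           # hypothetical population values
t_true <- sum(y)
one_draw <- function() {
  s1 <- sample(N, n); s2 <- sample(N, m)        # two independent SRS without replacement
  S <- union(s1, s2)                            # combined sample, duplicates removed
  pi_i <- (n + m - n * m / N) / N
  c(ht_union = sum(y[S] / pi_i),
    ht_avg = (N / 2) * (mean(y[s1]) + mean(y[s2])))
}
sims <- replicate(10000, one_draw())
rowMeans(sims) - t_true                         # both biases should be close to zero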
51,142
Classification based on several marginal probabilities
Only $P(Y|X_n)$ and $P(X_n)$ are available in practice. For example, the $X_n$ are "whether a user has visited a specific website", and $Y$ is the gender. We have data of $X_n$, and we can buy the distribution of gender for some specific websites. Now I'll try the following approach: Assume that $P(X_1, X_2, ..., X_n|Y) = P(X_1|Y) P(X_2|Y) ... P(X_n|Y)$, i.e. $X_1, X_2, ..., X_n$ are independent given $Y$ Evaluate $P(X_n|Y)$ according to $P(Y|X_n)$ and $P(X_n)$ The following steps are straightforward. Maybe there are better assumptions for this case.
Classification based on several marginal probabilities
Only $P(Y|X_n)$ and $P(X_n)$ are available in practice. For example, the $X_n$ are "whether a user has visited a specific website", and $Y$ is the gender. We have data of $X_n$, and we can buy the dis
Classification based on several marginal probabilities Only $P(Y|X_n)$ and $P(X_n)$ are available in practice. For example, the $X_n$ are "whether a user has visited a specific website", and $Y$ is the gender. We have data of $X_n$, and we can buy the distribution of gender for some specific websites. Now I'll try the following approach: Assume that $P(X_1, X_2, ..., X_n|Y) = P(X_1|Y) P(X_2|Y) ... P(X_n|Y)$, i.e. $X_1, X_2, ..., X_n$ are independent given $Y$ Evaluate $P(X_n|Y)$ according to $P(Y|X_n)$ and $P(X_n)$ The following steps are straightforward. Maybe there are better assumptions for this case.
Classification based on several marginal probabilities Only $P(Y|X_n)$ and $P(X_n)$ are available in practice. For example, the $X_n$ are "whether a user has visited a specific website", and $Y$ is the gender. We have data of $X_n$, and we can buy the dis
51,143
Is there a standard name for probabilistic graphical models like this?
I think you need to recast your graphs into a different form. There is probably more than one way you could do this. A really important thing to notice about your model is that at any given time, you can make a best guess about the hidden state. What is special is at the next time step you can make an even better guess as to what the previous hidden state was. At each time step up to 4 your estimate of past hidden state significantly improves. Since you have 6 hidden states, you can in the bottom case replace that with 36 states representing all possible pairs. You then have a Markov process where most elements in the transition matrix are zero, since the tuple state $(A,D)$ must be followed by $(D,?)$. Now you have 36 output probability distributions to consider, which is too many vs your original diagram. The reason for the inflation is that we are now allowing interactions between $y(t_i)$ and $y(t_{i+1})$. If you ensure that your output probability model does not include interaction terms (i.e. it factorises completely) then you are back to the model you requested. The same for the case where you have 4 histories that are important. Change this to a single 4-tuple and you have $6^4$ states. Then ensure that your output probability model factorises on each element of the state and you are back to the model above. Notice that the Viterbi algorithm run on the vector hidden state is going to give you estimates, at each time point, of what the current set of 4 states are. If you "flatten" this back to a single series $...y(t_i), y(t_{i+1}),...$ then you have multiple estimates for each y, and they won't agree. This is correct and goes back to the point in the intro, that because each step depends on past values, we learn more on the past and can revise our estimate of what happened.
Is there a standard name for probabilistic graphical models like this?
I think you need to recast your graphs into a different form. There is probably more than one way you could do this. A really important thing to notice about your model is that at any given time, you
Is there a standard name for probabilistic graphical models like this? I think you need to recast your graphs into a different form. There is probably more than one way you could do this. A really important thing to notice about your model is that at any given time, you can make a best guess about the hidden state. What is special is at the next time step you can make an even better guess as to what the previous hidden state was. At each time step up to 4 your estimate of past hidden state significantly improves. Since you have 6 hidden states, you can in the bottom case replace that with 36 states representing all possible pairs. You then have a Markov process where most elements in the transition matrix are zero, since the tuple state $(A,D)$ must be followed by $(D,?)$. Now you have 36 output probability distributions to consider, which is too many vs your original diagram. The reason for the inflation is that we are now allowing interactions between $y(t_i)$ and $y(t_{i+1})$. If you ensure that your output probability model does not include interaction terms (i.e. it factorises completely) then you are back to the model you requested. The same for the case where you have 4 histories that are important. Change this to a single 4-tuple and you have $6^4$ states. Then ensure that your output probability model factorises on each element of the state and you are back to the model above. Notice that the Viterbi algorithm run on the vector hidden state is going to give you estimates, at each time point, of what the current set of 4 states are. If you "flatten" this back to a single series $...y(t_i), y(t_{i+1}),...$ then you have multiple estimates for each y, and they won't agree. This is correct and goes back to the point in the intro, that because each step depends on past values, we learn more on the past and can revise our estimate of what happened.
Is there a standard name for probabilistic graphical models like this? I think you need to recast your graphs into a different form. There is probably more than one way you could do this. A really important thing to notice about your model is that at any given time, you
51,144
Particle filtering importance weights
Short Answer: No, you can't do that. It doesn't make sense. Notation: Let $s_{1:t}$ be the states from time $1$ to time $t$. Let $z_{1:t}$ denote the observations. Let $q(s_{1:t}|z_{1:t})$ be the proposal distribution that you sample from to approximate the entire sequence of unobserved states. This proposal distribution is assumed to be factorizable, so you can sample at each time point as new $z_t$ data arrives. So say you have samples up to time $t-1$, then you sample the new states at time $t$ with $q_t(s_t|s_{t-1},z_t)$--this gives you samples from $$ q_t(s_t|z_{1:t},s_{1:t-1}) q(s_{1:t-1}|z_{1:t-1}). $$ And so on and so forth. Every time you extend the sample paths, you adjust the weights with some weight adjustment. Background: Here is the decomposition that suggests Sequential Importance Sampling (SIS). This is Sequential Importance Sampling with Resampling (SISR/SIR) without the resampling. It is the simplest possible particle filter that will answer your question. \begin{align*} p(s_{1:t}|z_{1:t}) &= C_t^{-1} \frac{p(s_{1:t},z_{1:t})}{q(s_{1:t}|z_{1:t})}q(s_{1:t}|z_{1:t}) \\ &= C_t^{-1} \frac{g(z_t|s_t)f(s_t|s_{t-1})}{q_t(s_t|z_{1:t},s_{1:t-1}) } \frac{p(s_{1:t-1},z_{1:t-1})}{ q(s_{1:t-1}|z_{1:t-1})} q_t(s_t|z_{1:t},s_{1:t-1}) q(s_{1:t-1}|z_{1:t-1}). \end{align*} Here's the progression. At the previous time, time $t-1$, you have samples distributed according to $q(s_{1:t-1}|z_{1:t-1})$, and their weights are $\frac{p(s_{1:t-1},z_{1:t-1})}{ q(s_{1:t-1}|z_{1:t-1})} $. Then, at time $t$, you sample and extend the particle paths by sampling from $q_t(s_t|z_{1:t},s_{1:t-1})$, and multiplying your old weights by $\frac{g(z_t|s_t)f(s_t|s_{t-1})}{q_t(s_t|z_{1:t},s_{1:t-1}) }$ to give you new, adjusted, and un-normalized weights at time $t$. $C_t^{-1}$ is a normalizing constant. $g(z_t|s_t)$ is the observation density, and $f(s_t|s_{t-1})$ is the state transition density or "motion model." In the other answer he assumes that $q_t(s_t|s_{t-1},z_t)$ is the same as $f(s_t|s_{t-1})$, so they cancel in the weight adjustment term. Answer: So yes, your un-normalized weights have to involve a probability distribution. It's a ratio, with the denominator being the guy you just sampled from. You can't sample from a thing that isn't a probability distribution. And the numerator has to be $g$ and $f$, otherwise you aren't using a state space model, and particle filtering won't make sense (there are probably ways to use Sequential Monte Carlo/particle filters on models that aren't state space models, but I have never done it).
Particle filtering importance weights
Short Answer: No, you can't do that. It doesn't make sense. Notation: Let $s_{1:t}$ be the states from time $1$ to time $t$. Let $z_{1:t}$ denote the observations. Let $q(s_{1:t}|z_{1:t})$ be the prop
Particle filtering importance weights Short Answer: No, you can't do that. It doesn't make sense. Notation: Let $s_{1:t}$ be the states from time $1$ to time $t$. Let $z_{1:t}$ denote the observations. Let $q(s_{1:t}|z_{1:t})$ be the proposal distribution that you sample from to approximate the entire sequence of unobserved states. This proposal distribution is assumed to be factorizable, so you can sample at each time point as new $z_t$ data arrives. So say you have samples up to time $t-1$, then you sample the new states at time $t$ with $q_t(s_t|s_{t-1},z_t)$--this gives you samples from $$ q_t(s_t|z_{1:t},s_{1:t-1}) q(s_{1:t-1}|z_{1:t-1}). $$ And so on and so forth. Every time you extend the sample paths, you adjust the weights with some weight adjustment. Background: Here is the decomposition that suggests Sequential Importance Sampling (SIS). This is Sequential Importance Sampling with Resampling (SISR/SIR) without the resampling. It is the simplest possible particle filter that will answer your question. \begin{align*} p(s_{1:t}|z_{1:t}) &= C_t^{-1} \frac{p(s_{1:t},z_{1:t})}{q(s_{1:t}|z_{1:t})}q(s_{1:t}|z_{1:t}) \\ &= C_t^{-1} \frac{g(z_t|s_t)f(s_t|s_{t-1})}{q_t(s_t|z_{1:t},s_{1:t-1}) } \frac{p(s_{1:t-1},z_{1:t-1})}{ q(s_{1:t-1}|z_{1:t-1})} q_t(s_t|z_{1:t},s_{1:t-1}) q(s_{1:t-1}|z_{1:t-1}). \end{align*} Here's the progression. At the previous time, time $t-1$, you have samples distributed according to $q(s_{1:t-1}|z_{1:t-1})$, and their weights are $\frac{p(s_{1:t-1},z_{1:t-1})}{ q(s_{1:t-1}|z_{1:t-1})} $. Then, at time $t$, you sample and extend the particle paths by sampling from $q_t(s_t|z_{1:t},s_{1:t-1})$, and multiplying your old weights by $\frac{g(z_t|s_t)f(s_t|s_{t-1})}{q_t(s_t|z_{1:t},s_{1:t-1}) }$ to give you new, adjusted, and un-normalized weights at time $t$. $C_t^{-1}$ is a normalizing constant. $g(z_t|s_t)$ is the observation density, and $f(s_t|s_{t-1})$ is the state transition density or "motion model." In the other answer he assumes that $q_t(s_t|s_{t-1},z_t)$ is the same as $f(s_t|s_{t-1})$, so they cancel in the weight adjustment term. Answer: So yes, your un-normalized weights have to involve a probability distribution. It's a ratio, with the denominator being the guy you just sampled from. You can't sample from a thing that isn't a probability distribution. And the numerator has to be $g$ and $f$, otherwise you aren't using a state space model, and particle filtering won't make sense (there are probably ways to use Sequential Monte Carlo/particle filters on models that aren't state space models, but I have never done it).
Particle filtering importance weights Short Answer: No, you can't do that. It doesn't make sense. Notation: Let $s_{1:t}$ be the states from time $1$ to time $t$. Let $z_{1:t}$ denote the observations. Let $q(s_{1:t}|z_{1:t})$ be the prop
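A minimal bootstrap-filter sketch in R for a toy linear-Gaussian state space model (all parameters are made up); because the proposal is the motion model $f$, the weight adjustment reduces to the observation likelihood $g(z_t|s_t)$, as discussed in the answers above:

set.seed(1)
T_len <- 100; phi <- 0.9; sig_s <- 1; sig_z <- 0.5
s <- numeric(T_len); s[1] <- rnorm(1, 0, sig_s)
for (t in 2:T_len) s[t] <- phi * s[t - 1] + rnorm(1, 0, sig_s)   # hidden states
z <- s + rnorm(T_len, 0, sig_z)                                  # observations
Np <- 1000
particles <- rnorm(Np, 0, sig_s)
filt_mean <- numeric(T_len)
for (t in 1:T_len) {
  if (t > 1) particles <- phi * particles + rnorm(Np, 0, sig_s)  # propose from the motion model f
  w <- dnorm(z[t], mean = particles, sd = sig_z)                 # un-normalized weights = g(z_t | s_t)
  w <- w / sum(w)
  filt_mean[t] <- sum(w * particles)                             # filtering estimate of s_t
  particles <- sample(particles, Np, replace = TRUE, prob = w)   # resampling step (SISR)
}
cor(filt_mean, s)                                                # should track the hidden states closely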
51,145
Particle filtering importance weights
There is a reason why the weights turn out to be probabilities. In most simple implementations of SIR particle filters, the proposal distribution is the motion model. Under this setting the weight update equation simplifies to measurement likelihood. Check these resources: http://www.cs.berkeley.edu/~pabbeel/cs287-fa12/slides/ParticleFilters.pdf http://www.cns.nyu.edu/~eorhan/notes/particle-filtering.pdf
Particle filtering importance weights
There is a reason why the weights turn out to be probabilities. In most simple implementations of SIR particle filters, the proposal distribution is the motion model. Under this setting the weight upd
Particle filtering importance weights There is a reason why the weights turn out to be probabilities. In most simple implementations of SIR particle filters, the proposal distribution is the motion model. Under this setting the weight update equation simplifies to measurement likelihood. Check these resources: http://www.cs.berkeley.edu/~pabbeel/cs287-fa12/slides/ParticleFilters.pdf http://www.cns.nyu.edu/~eorhan/notes/particle-filtering.pdf
Particle filtering importance weights There is a reason why the weights turn out to be probabilities. In most simple implementations of SIR particle filters, the proposal distribution is the motion model. Under this setting the weight upd
51,146
lm() - model specification
Question 1 - There are two differences between the two approaches. First, by estimating it all together you are constraining the variance of the random part to be the same for each of variables 1, 2 and 5. If you fit the three models separately you will have a different estimate of the variance each time. A useful thing to do would be to check if this is necessary ie if there really is evidence of this heteroscedasticity (ie variable variance). Fitting the models simultaneously is more efficient, if it is acceptable to keep the variance the same. Second, because you do not have an interaction effect between var and f2, a second (perhaps even bigger) difference is that the coefficients for f2 are the same whatever the value of var. Fitting the models separately would allow those coefficients to change. Question 2 - absolutely you should. This is an important decision in your model selection. Question 3 - this is because you have the same number of observations of each level of var. Nothing to worry about here. Question 4 - No, this is not an appropriate interpretation. Note your model is specified in an unusual way. You have an interaction effect between f1 and var but no original effect. This makes it difficult to interpret. But in any event you cannot make inferences about factors with more than two levels that are explanatory variables by looking at the individual coefficients and their standard errors. At a minimum, you need to use anova(). And certainly you cannot say that f1 makes no difference. > mod <- lm(y ~ (var-1):f1 + (var-1) + f2, data = df) > anova(mod) Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) var 3 74.246 24.7486 55.268 <2e-16 *** f2 2 0.330 0.1652 0.369 0.692 var:f1 3 67.887 22.6289 50.534 <2e-16 *** Residuals 172 77.020 0.4478 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 > > mod2 <- lm(y~var*f1 + f2-1, data=df) > anova(mod2) Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) var 3 74.246 24.749 55.2678 < 2.2e-16 *** f1 1 61.569 61.569 137.4938 < 2.2e-16 *** f2 2 0.611 0.305 0.6819 0.507011 var:f1 2 6.038 3.019 6.7415 0.001518 ** Residuals 172 77.020 0.448 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
lm() - model specification
Question 1 - There are two differences between the two approaches. First, by estimating it all together you are constraining the variance of the random part to be the same for each of variables 1, 2 a
lm() - model specification Question 1 - There are two differences between the two approaches. First, by estimating it all together you are constraining the variance of the random part to be the same for each of variables 1, 2 and 5. If you fit the three models separately you will have a different estimate of the variance each time. A useful thing to do would be to check if this is necessary ie if there really is evidence of this heteroscedasticity (ie variable variance). Fitting the models simultaneously is more efficient, if it is acceptable to keep the variance the same. Second, because you do not have an interaction effect between var and f2, a second (perhaps even bigger) difference is that the coefficients for f2 are the same whatever the value of var. Fitting the models separately would allow those coefficients to change. Question 2 - absolutely you should. This is an important decision in your model selection. Question 3 - this is because you have the same number of observations of each level of var. Nothing to worry about here. Question 4 - No, this is not an appropriate interpretation. Note your model is specified in an unusual way. You have an interaction effect between f1 and var but no original effect. This makes it difficult to interpret. But in any event you cannot make inferences about factors with more than two levels that are explanatory variables by looking at the individual coefficients and their standard errors. At a minimum, you need to use anova(). And certainly you cannot say that f1 makes no difference. > mod <- lm(y ~ (var-1):f1 + (var-1) + f2, data = df) > anova(mod) Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) var 3 74.246 24.7486 55.268 <2e-16 *** f2 2 0.330 0.1652 0.369 0.692 var:f1 3 67.887 22.6289 50.534 <2e-16 *** Residuals 172 77.020 0.4478 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 > > mod2 <- lm(y~var*f1 + f2-1, data=df) > anova(mod2) Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) var 3 74.246 24.749 55.2678 < 2.2e-16 *** f1 1 61.569 61.569 137.4938 < 2.2e-16 *** f2 2 0.611 0.305 0.6819 0.507011 var:f1 2 6.038 3.019 6.7415 0.001518 ** Residuals 172 77.020 0.448 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
lm() - model specification Question 1 - There are two differences between the two approaches. First, by estimating it all together you are constraining the variance of the random part to be the same for each of variables 1, 2 a
51,147
lm() - model specification
I suggest that you use a model with a limited dependent variable. For example ordered probit, multinomial logit, negative binomial ...etc. The model choice depends on what you are modelling.
lm() - model specification
I suggest that you use a model with a limited dependent variable. For example ordered probit, multinomial logit, negative binomial ...etc. The model choice depends on what you are modelling.
lm() - model specification I suggest that you use a model with a limited dependent variable. For example ordered probit, multinomial logit, negative binomial ...etc. The model choice depends on what you are modelling.
lm() - model specification I suggest that you use a model with a limited dependent variable. For example ordered probit, multinomial logit, negative binomial ...etc. The model choice depends on what you are modelling.
51,148
Constructing most powerful critical region basics
Anyway you look at it, the answer looks good. You have discovered that the log likelihood ratio is linear: $$\log\left(\frac{f(x;\theta_1)}{f(x;\theta_0)}\right) = 25\log(2) - x\log(3).$$ The left hand panel of the figure plots this. (Since only the shape of the log likelihood ratio matters, no value axis is shown.) This reflects the fact that the relative probability that $X$ came from $\theta_1$ compared to $\theta_0$ decreases as $X$ grows larger. Those probabilities appear in the middle panel of the figure: red, peaking near $25\theta_1=12.5$, for $\theta_1$, and blue, peaking near $25\theta_0=18.75$, for $\theta_0$. That suffices to show the UMP critical region must be some interval starting at $X=0$. A good quantitative check of the result is afforded by the cumulative distribution functions, shown in the right panel with comparable colors. The dotted horizontal line is at a value of $\alpha=0.05$. The CDF corresponding to $\theta_0$ rises above $\alpha$ between $X=14$ and $X=15$, showing that the false positive rate must be less than $\alpha$ for the critical region $\{0,1,\ldots, 14\}$ but greater than $\alpha$ if $15$ is also included. Consequently $\{0,1,\ldots, 14\}$ is the unique solution.
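A quick numerical check of the cutoff in R, assuming the setup implied by the stated log likelihood ratio, i.e. $X \sim \text{Binomial}(25, \theta)$ with $\theta_0 = 3/4$ and $\theta_1 = 1/2$ (an inference from the answer, not stated explicitly here):

```r
pbinom(14, size = 25, prob = 0.75)   # stays below alpha = 0.05, so {0, ..., 14} is admissible
pbinom(15, size = 25, prob = 0.75)   # exceeds alpha = 0.05, so 15 cannot be added
```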
Constructing most powerful critical region basics
Anyway you look at it, the answer looks good. You have discovered that the log likelihood ratio is linear: $$\log\left(\frac{f(x;\theta_1)}{f(x;\theta_0)}\right) = 25\log(2) - x\log(3).$$ The left han
Constructing most powerful critical region basics Anyway you look at it, the answer looks good. You have discovered that the log likelihood ratio is linear: $$\log\left(\frac{f(x;\theta_1)}{f(x;\theta_0)}\right) = 25\log(2) - x\log(3).$$ The left hand panel of the figure plots this. (Since only the shape of the log likelihood ratio matters, no value axis is shown.) This reflects the fact that the relative probability that $X$ came from $\theta_1$ compared to $\theta_0$ decreases as $X$ grows larger. Those probabilities appear in the middle panel of the figure: red, peaking near $25\theta_1=12.5$, for $\theta_1$, and blue, peaking near $25\theta_0=18.75$, for $\theta_0$. That suffices to show the UMP critical region must be some interval starting at $X=0$. A good quantitative check of the result is afforded by the cumulative distribution functions, shown in the right panel with comparable colors. The dotted horizontal line is at a value of $\alpha=0.05$. The CDF corresponding to $\theta_0$ rises above $\alpha$ between $X=14$ and $X=15$, showing that the false positive rate must be less than $\alpha$ for the critical region $\{0,1,\ldots, 14\}$ but greater than $\alpha$ if $15$ is also included. Consequently $\{0,1,\ldots, 14\}$ is the unique solution.
Constructing most powerful critical region basics Anyway you look at it, the answer looks good. You have discovered that the log likelihood ratio is linear: $$\log\left(\frac{f(x;\theta_1)}{f(x;\theta_0)}\right) = 25\log(2) - x\log(3).$$ The left han
51,149
Understanding output of a GEE model in R - Naive S.E vs. Robust S.E?
The naive SE is the same SE you would get if you fit an ordinary GLM, while the robust SE is calculated using the robust sandwich estimator. This vignette explains GEE pretty well with an example: https://cran.r-project.org/web/packages/HSAUR2/vignettes/Ch_analysing_longitudinal_dataII.pdf Hope that helps!
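A small sketch of where the two columns come from, using the gee package with hypothetical variable names (outcome, time, treat, id):

```r
library(gee)
fit <- gee(outcome ~ time * treat, id = id, data = dat,
           family = gaussian, corstr = "exchangeable")
summary(fit)$coefficients   # includes both Naive S.E. and Robust S.E. columns
```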
Understanding output of a GEE model in R - Naive S.E vs. Robust S.E?
Naive SE is the same SE you would get if you perform GLM Robust SE is SE calculated using robust sandwich estimator. This vignettes explained GEE pretty well with an example https://cran.r-project.or
Understanding output of a GEE model in R - Naive S.E vs. Robust S.E? Naive SE is the same SE you would get if you perform GLM Robust SE is SE calculated using robust sandwich estimator. This vignettes explained GEE pretty well with an example https://cran.r-project.org/web/packages/HSAUR2/vignettes/Ch_analysing_longitudinal_dataII.pdf Hope that helps!
Understanding output of a GEE model in R - Naive S.E vs. Robust S.E? Naive SE is the same SE you would get if you perform GLM Robust SE is SE calculated using robust sandwich estimator. This vignettes explained GEE pretty well with an example https://cran.r-project.or
51,150
What are the most popular domain adaptation methods (for transfer learning)?
Domain Adaptation is the process of altering the source domain so as to bring its distribution closer to that of the target domain. In many cases Domain Adaptation methods are modifications of basic algorithms from traditional Machine Learning, adjusted to take into account the difference between the distributions of the training and test sets. For example, one popular DA algorithm is TrAdaBoost, a boosting algorithm that is a modification of AdaBoost. Another issue is that DA methods are developed according to the kind of difference between the two domains that they are trying to tackle: some try to bring the marginal distributions closer together, others the conditional distributions. In my opinion, there are no basic or popular methods in the same sense as in traditional Machine Learning, because DA methods are strongly tied to the application or to the assumptions they are based on. You can get an idea of the state of the art of the existing approaches by studying these surveys: Pan, S. J.; Yang, Q. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, v. 22, n. 10, p. 1345–1359, Oct. 2010. Weiss, K., Khoshgoftaar, T. M. & Wang, D. A Survey of Transfer Learning. Journal of Big Data, 2016.
What are the most popular domain adaptation methods (for transfer learning)?
Domain Adaptation is the process that attempts to alter the source domain in a way to bring the distribution of the source closer to that of the target domain. In many cases the Domain Adaptation meth
What are the most popular domain adaptation methods (for transfer learning)? Domain Adaptation is the process that attempts to alter the source domain in a way to bring the distribution of the source closer to that of the target domain. In many cases the Domain Adaptation methods are modifications of the basic algorithms from the area of the traditional Machine Learning, that they are trying to take into account the difference between the distributions of the train and the test set. For example, one popular DA algorithm is TrAdaboost, which is a Boosting algorithm and it is a modification of the Adaboost algorithm. Another issue is that the DA methods are developed based on the kind of the difference between the two domains that they are trying to tackle. Some of them are trying to get closer the marginal distributions and others the conditional distributions. In my opinion, they do not exist basic or popular methods in same sense as in the traditional Machine Learning, because the DA methods are strongly related with the application or with the assumption that they are based on. You can get an idea of the state of the art of the existing apporaches studying these surveys: PAN, S. J.; YANG, Q. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, v. 22, n. 10, p. 1345–1359, out. 2010. Weiss, K., Khoshgoftaar, T.M. & Wang: A Survey of Transfer Learning. D. J Big Data, 2016.
What are the most popular domain adaptation methods (for transfer learning)? Domain Adaptation is the process that attempts to alter the source domain in a way to bring the distribution of the source closer to that of the target domain. In many cases the Domain Adaptation meth
51,151
How to apply Isomap to test data?
Applying the mapping to test data is called the out-of-sample problem. Take a look at the following paper to see a solution for Isomap: Bengio, Yoshua, et al. Out-of-sample extensions for lle, isomap, mds, eigenmaps, and spectral clustering. Advances in neural information processing systems 16 (2004): 177-184.
How to apply Isomap to test data?
Applying the mapping to test data is called the out-of-sample problem. Take a look at the following paper to see a solution for Isomap: Bengio, Yoshua, et al. Out-of-sample extensions for lle, isomap,
How to apply Isomap to test data? Applying the mapping to test data is called the out-of-sample problem. Take a look at the following paper to see a solution for Isomap: Bengio, Yoshua, et al. Out-of-sample extensions for lle, isomap, mds, eigenmaps, and spectral clustering. Advances in neural information processing systems 16 (2004): 177-184.
How to apply Isomap to test data? Applying the mapping to test data is called the out-of-sample problem. Take a look at the following paper to see a solution for Isomap: Bengio, Yoshua, et al. Out-of-sample extensions for lle, isomap,
51,152
How to apply Isomap to test data?
As far as I know, the Isomap in scikit-learn implements out-of-sample isomap: http://scikit-learn.org/stable/modules/generated/sklearn.manifold.Isomap.html
How to apply Isomap to test data?
As far as I know, the Isomap in scikit-learn implements out-of-sample isomap: http://scikit-learn.org/stable/modules/generated/sklearn.manifold.Isomap.html
How to apply Isomap to test data? As far as I know, the Isomap in scikit-learn implements out-of-sample isomap: http://scikit-learn.org/stable/modules/generated/sklearn.manifold.Isomap.html
How to apply Isomap to test data? As far as I know, the Isomap in scikit-learn implements out-of-sample isomap: http://scikit-learn.org/stable/modules/generated/sklearn.manifold.Isomap.html
51,153
How to apply Isomap to test data?
I solved the problem like this: solve Train*W = Y for W (in the least-squares sense), then map the test data using Test*W. But other contributions are welcome.
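A sketch of that idea in R, assuming Train is the n x d training feature matrix, Y its n x k Isomap embedding, and Test holds the new points (all names hypothetical):

```r
W     <- qr.solve(Train, Y)   # least-squares solution of Train %*% W = Y
Y_new <- Test %*% W           # approximate embedding of the new points
```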
How to apply Isomap to test data?
I solved the problem like this; solve Train*W = Y for W, map using Test*W. But other contributions are welcome.
How to apply Isomap to test data? I solved the problem like this; solve Train*W = Y for W, map using Test*W. But other contributions are welcome.
How to apply Isomap to test data? I solved the problem like this; solve Train*W = Y for W, map using Test*W. But other contributions are welcome.
51,154
Why arrange variables by causality in bivariate regression?
Distinguish two quantities that you might ask a regression to estimate: (1) The expected value of $Y_{t+1}$ given that you observe $X_t$. This is always estimated by the regression of $Y_{t+1}$ on $X_t$ and targets the conditional distribution $P(Y_{t+1}\mid X_t)$. When you condition on different quantities you get (correctly) different answers because you are targeting different conditional distributions. (2) The causal effect of changes in $X_t$ on the expected value of $Y_{t+1}$. This is only sometimes estimated by the regression of $Y_{t+1}$ on $X_t$ because it targets $P(Y_{t+1}\mid \text{do}(X_t))$ which is the expected value of $Y_{t+1}$ when you intervene to set the value of $X_t$, e.g. in an experiment where $X$ is a treatment variable. This is a stable feature of the system under study, even when quantity 1 varies, and may sometimes be identified by regression by conditioning on confounders, e.g. common causes of $X$ and $Y$. There are other strategies, but that's the one relevant to your question. With this distinction in mind it's clear that regressing $Y_{t+1}$ on $X_t$ could be a good idea if you are interested in the first sort of quantity, and a bad idea if you are interested in the second sort of quantity. If $t$ is time then the expected value of $X_t$ when you intervene to set $Y_{t+1}$ is not obviously well-defined because it is already observed; a hypothetical 'intervention' in the future would be changing the present. This is not just a time issue though: if $X$ causes $Y$ but not vice versa then the expected value of $X$ when you intervene to set $Y$ is simply the expected value of $X$ (and in the regression corresponding to this 'experiment' the coefficient of $Y$ is zero.) In contrast the expected value of $X$ given that you simply observe $Y$ is perfectly well defined (indeed for real-valued variables, the slopes of the two corresponding regressions $Y \mid X$ and $X \mid Y$ will be functions of the same correlation coefficient). In short, it's perfectly legitimate to respecify things any way you like, including all the ways you mention, provided you are interested only in the first type of quantity. What is potentially confusing is that although the two quantities are quite different you might use regression to estimate both of them.
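A tiny R simulation of the parenthetical point about the two observational regressions sharing the same correlation coefficient (the data-generating process here is only an illustrative assumption):

```r
set.seed(1)
x <- rnorm(1e4)
y <- 0.5 * x + rnorm(1e4)               # X causes Y
r <- cor(x, y)
coef(lm(y ~ x))[2]; r * sd(y) / sd(x)   # slope of Y on X
coef(lm(x ~ y))[2]; r * sd(x) / sd(y)   # slope of X on Y: same r, different slope
```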
Why arrange variables by causality in bivariate regression?
Distinguish two quantities that you might ask a regression to estimate: The expected value of $Y_{t+1}$ given that you observe $X_t$. This is always estimated by the regression of $Y_{t+1}$ on $X_t$
Why arrange variables by causality in bivariate regression? Distinguish two quantities that you might ask a regression to estimate: The expected value of $Y_{t+1}$ given that you observe $X_t$. This is always estimated by the regression of $Y_{t+1}$ on $X_t$ and targets the conditional distribution $P(Y_{t+1}\mid X_t)$. When you condition on different quantities you get (correctly) different answers because you are targeting different conditional distributions. The causal effect of changes in $X_t$ on the expected value of $Y_{t+1}$. This is only sometimes estimated by the regression of $Y_{t+1}$ on $X_t$ because it targets $P(Y_{t+1}\mid \text{do}(X_t))$ which is the expected value of $Y_{t+1}$ when you intervene to set the value of $X_t$, e.g. in an experiment where $X$ is a treatment variable. This is a stable feature of the system under study, even when quantity 1 varies, and may sometimes be identified by regression by conditioning on confounders, e.g. common causes of $X$ and $Y$. There are other strategies, but that's the one relevant to your question. With this distinction in mind it's clear that regressing $Y_{t+1}$ on $X_t$ could be a good idea if you are interested in the first sort of quantity, and a bad idea if you are interested in the second sort of quantity. If $t$ is time then the expected value of $X_t$ when you intervene to set $Y_{t+1}$ is not obviously well-defined because it is already observed; a hypothetical 'intervention' in the future would be changing the present. This is not just a time issue though: if $X$ causes $Y$ but not vice versa then the expected value of $X$ when you intervene to set $Y$ is simply the expected value of $X$ (and in the regression corresponding to this 'experiment' the coefficient of $Y$ is zero.) In contrast the expected value of $X$ given that you simply observe $Y$ perfectly well defined (indeed for real valued variables, the slopes of the two corresponding regressions $Y \mid X$ and $X \mid Y$ will be functions of the same correlation coefficient). In short, it's perfectly legitimate to respecify things any way you like, including all the ways you mention, provided you are interested only in the first type of quantity. What is potentially confusing is that although the two quantities are quite different you might use regression to estimate both of them.
Why arrange variables by causality in bivariate regression? Distinguish two quantities that you might ask a regression to estimate: The expected value of $Y_{t+1}$ given that you observe $X_t$. This is always estimated by the regression of $Y_{t+1}$ on $X_t$
51,155
Classification with increasing number of classes
This problem is sometimes called Open set recognition, or classification. There was a recent survey on open set recognition on Arxiv https://arxiv.org/pdf/1811.08581 The problem is also called zero shot learning https://en.wikipedia.org/wiki/Zero-shot_learning when the data points are images (I think)
Classification with increasing number of classes
This problem is sometimes called Open set recognition, or classification. There was a recent survey on open set recognition on Arxiv https://arxiv.org/pdf/1811.08581 The problem is also called zero sh
Classification with increasing number of classes This problem is sometimes called Open set recognition, or classification. There was a recent survey on open set recognition on Arxiv https://arxiv.org/pdf/1811.08581 The problem is also called zero shot learning https://en.wikipedia.org/wiki/Zero-shot_learning when the data points are images (I think)
Classification with increasing number of classes This problem is sometimes called Open set recognition, or classification. There was a recent survey on open set recognition on Arxiv https://arxiv.org/pdf/1811.08581 The problem is also called zero sh
51,156
Classification with increasing number of classes
It seems it is a clustering problem - if you don't know what and how many classes you will have, then unsupervised learning is for you. For example, if your trained classifier is very confident on some new example, then you conclude that it belongs to a known class. And if your classifier is unsure (say, it predicts probability less than 30% for EVERY existing class), then you conclude that you have encountered some previously unknown class. In such a case you can use unsupervised learning on those unknown samples or label them manually, introducing some new classes.
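A rough R sketch of the confidence-threshold idea, using nnet::multinom and hypothetical names (train, new_obs, class); the 30% cutoff is just the arbitrary threshold mentioned above:

```r
library(nnet)
fit <- multinom(class ~ ., data = train)                 # model over the k currently known classes
p   <- predict(fit, newdata = new_obs, type = "probs")   # matrix of class probabilities
unknown <- apply(p, 1, max) < 0.30   # flag observations the model is unsure about
```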
Classification with increasing number of classes
It seems it is a clustering problem - if you don't know what and how many classes will you have, then unsupervised learning is for you. For example, if your trained classifier is very confident on som
Classification with increasing number of classes It seems it is a clustering problem - if you don't know what and how many classes will you have, then unsupervised learning is for you. For example, if your trained classifier is very confident on some training example, then you conclude that it belongs to a known class. And if your classifier is unsure (say, it predicts probability less then 30% for EVERY existing class), then you conclude that you have encountered some previously unknown class. In such case you can use unsupervised learning on those unknown classes or label those samples manually, introducing some new classes.
Classification with increasing number of classes It seems it is a clustering problem - if you don't know what and how many classes will you have, then unsupervised learning is for you. For example, if your trained classifier is very confident on som
51,157
Classification with increasing number of classes
You may treat all known classes as one class, and run some outlier detection / novelty detection algorithm, for example a one-class SVM. Therefore we may have hierarchical classifiers: the first one checks whether the data point is some new species; if not, a second classifier categorizes it into one of the $k$ known classes.
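A sketch of that two-stage idea with the e1071 package (the names x_train, labels_train and x_new are placeholders):

```r
library(e1071)
# stage 1: one-class SVM trained on all known classes pooled together
nov <- svm(x_train, y = NULL, type = "one-classification", nu = 0.05)
is_known <- predict(nov, x_new)                 # TRUE = looks like a known class

# stage 2: ordinary k-class classifier, applied only to the non-novel points
clf  <- svm(x_train, factor(labels_train))
pred <- predict(clf, x_new[is_known, , drop = FALSE])
```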
Classification with increasing number of classes
You may treat all known classes as one class, and run some outlier detection / novel detection algorithms. For example, one class SVM Therefore we may have hierarchical classifiers, the first one to c
Classification with increasing number of classes You may treat all known classes as one class, and run some outlier detection / novel detection algorithms. For example, one class SVM Therefore we may have hierarchical classifiers, the first one to check if the data is some new species, if no, then second classifier will categorize data to $k$ known classes.
Classification with increasing number of classes You may treat all known classes as one class, and run some outlier detection / novel detection algorithms. For example, one class SVM Therefore we may have hierarchical classifiers, the first one to c
51,158
Using doctor's data to identify hospitalisations
Machine learning idea: you could train a binary or one-class classifier of your choice to recognize what patterns in CPRD data indicate a hospitalization. You can use the HES data to label positive patterns in the CPRD data (e.g. those patterns that signify a hospitalization). I assume you could also use HES data to label negatives, if not, you'll need a one-class technique.
Using doctor's data to identify hospitalisations
Machine learning idea: you could train a binary or one-class classifier of your choice to recognize what patterns in CPRD data indicate a hospitalization. You can use the HES data to label positive pa
Using doctor's data to identify hospitalisations Machine learning idea: you could train a binary or one-class classifier of your choice to recognize what patterns in CPRD data indicate a hospitalization. You can use the HES data to label positive patterns in the CPRD data (e.g. those patterns that signify a hospitalization). I assume you could also use HES data to label negatives, if not, you'll need a one-class technique.
Using doctor's data to identify hospitalisations Machine learning idea: you could train a binary or one-class classifier of your choice to recognize what patterns in CPRD data indicate a hospitalization. You can use the HES data to label positive pa
51,159
How do I analyse data with 2 independent variables and 2 dependent variables?
Are you interested in examining the correlation between the dependent variables in the same model? I can't speak to the multiple independent variables part of the question, but you could investigate using a linear mixed model with multiple response variables (if your data will be longitudinal). I don't know of a website (I'm sure there is stuff out there, I just don't have a reference), but the book by Jeffrey Long, 'Longitudinal data analysis for the behavioral sciences using R' may be of use. Chapter 13 (p 501) has a section on models with multiple dv's.
How do I analyse data with 2 independent variables and 2 dependent variables?
Are you interested in examining the correlation between the dependent variables in the same model? I can't speak to the multiple independent variables part of the question, but you could investigate u
How do I analyse data with 2 independent variables and 2 dependent variables? Are you interested in examining the correlation between the dependent variables in the same model? I can't speak to the multiple independent variables part of the question, but you could investigate using a linear mixed model with multiple response variables (if your data will be longitudinal). I don't know of a website (I'm sure there is stuff out there, I just don't have a reference), but the book by Jeffrey Long, 'Longitudinal data analysis for the behavioral sciences using R' may be of use. Chapter 13 (p 501) has a section on models with multiple dv's.
How do I analyse data with 2 independent variables and 2 dependent variables? Are you interested in examining the correlation between the dependent variables in the same model? I can't speak to the multiple independent variables part of the question, but you could investigate u
51,160
How do I analyse data with 2 independent variables and 2 dependent variables?
You probably know about this, but I often start at this point: http://www.ats.ucla.edu/stat/stata/whatstat/default.htm
How do I analyse data with 2 independent variables and 2 dependent variables?
You probably know about this, but I often start at this point: http://www.ats.ucla.edu/stat/stata/whatstat/default.htm
How do I analyse data with 2 independent variables and 2 dependent variables? You probably know about this, but I often start at this point: http://www.ats.ucla.edu/stat/stata/whatstat/default.htm
How do I analyse data with 2 independent variables and 2 dependent variables? You probably know about this, but I often start at this point: http://www.ats.ucla.edu/stat/stata/whatstat/default.htm
51,161
How do I analyse data with 2 independent variables and 2 dependent variables?
You said "How do I see what effect the two DV's have combined". So, I believe your two DV's are somehow related based on theory. In this case, running independent equations each for one DV is totally wrong! Imagine you want to measure "speaking skill" and one of your DV's is "accuracy of talking" and the other one is "fluency of talking". In this respect, theory says that fluency and accuracy are related and are two sub-dimensions of the same concept-speaking skill. In this case, the two DV's should be considered together. Whenever it is the case, you have a continuous DV and categorical IV(s), Multivariate ANOVA (MANOVA) should be used. MANOVA is a variant of ANOVA that can incorporate multiple continuous DV's. Also, the number of IV's is irrelevant here as both ANOVA and MANOVA can accommodate 1 IV or 2 IV's or 3 IV's, etc. (called one way, two way, three-way etc. ANOVA or MANOVA) Also, pay attention that MANOVA is a parametric test, meaning that your measurement must be continuous. it is hard to assume that measurement by any Likert-type scale with less than 7 anchors is continuous.
How do I analyse data with 2 independent variables and 2 dependent variables?
You said "How do I see what effect the two DV's have combined". So, I believe your two DV's are somehow related based on theory. In this case, running independent equations each for one DV is totally
How do I analyse data with 2 independent variables and 2 dependent variables? You said "How do I see what effect the two DV's have combined". So, I believe your two DV's are somehow related based on theory. In this case, running independent equations each for one DV is totally wrong! Imagine you want to measure "speaking skill" and one of your DV's is "accuracy of talking" and the other one is "fluency of talking". In this respect, theory says that fluency and accuracy are related and are two sub-dimensions of the same concept-speaking skill. In this case, the two DV's should be considered together. Whenever it is the case, you have a continuous DV and categorical IV(s), Multivariate ANOVA (MANOVA) should be used. MANOVA is a variant of ANOVA that can incorporate multiple continuous DV's. Also, the number of IV's is irrelevant here as both ANOVA and MANOVA can accommodate 1 IV or 2 IV's or 3 IV's, etc. (called one way, two way, three-way etc. ANOVA or MANOVA) Also, pay attention that MANOVA is a parametric test, meaning that your measurement must be continuous. it is hard to assume that measurement by any Likert-type scale with less than 7 anchors is continuous.
How do I analyse data with 2 independent variables and 2 dependent variables? You said "How do I see what effect the two DV's have combined". So, I believe your two DV's are somehow related based on theory. In this case, running independent equations each for one DV is totally
51,162
adjust rating based on number of experiences
If the rating is fairly predictable based on the number of people who've been to the restaurant, it occurs to me that one possibility is to build a model of the ratings given experiences. Then you could use the residuals (obviously, constant variance is fairly crucial here) instead of the raw data.
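As a sketch, assuming a data frame ratings with columns rating and n_visits (number of experiences); the log transform is just one plausible choice:

```r
fit <- lm(rating ~ log(n_visits), data = ratings)  # or a smoother if the trend is nonlinear
ratings$adj_rating <- resid(fit)                   # rating adjusted for popularity
```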
adjust rating based on number of experiences
If the rating is fairly predictable based on the number of people who've been to the restaurant, it occurs to me that one possibility is to build a model of the ratings given experiences. Then you co
adjust rating based on number of experiences If the rating is fairly predictable based on the number of people who've been to the restaurant, it occurs to me that one possibility is to build a model of the ratings given experiences. Then you could use the residuals (obviously, constant variance is fairly crucial here) instead of the raw data.
adjust rating based on number of experiences If the rating is fairly predictable based on the number of people who've been to the restaurant, it occurs to me that one possibility is to build a model of the ratings given experiences. Then you co
51,163
Visualizing probability of event over time, based on epoch time
Visualization options include circular histograms and circular kernel density plots. Basically, these are histograms or KDE visualizations wrapped around a circle, better to stress the cyclical nature of the variable, in your case that Sunday is much nearer to Monday than Wednesday. To the question of which times are more likely to produce the event, you may want to look into harmonic regression (try this handout) or regression on circular statistics (see this question).
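Two quick R sketches, assuming a data frame df with an hour-of-day column hour (0-23) derived from the epoch times (names are placeholders): a circular histogram via coord_polar, and a simple one-harmonic Poisson regression on the hourly counts.

```r
library(ggplot2)
ggplot(df, aes(hour)) +
  geom_histogram(binwidth = 1, boundary = 0) +
  coord_polar()                                  # wrap the histogram around the clock

# harmonic regression: expected event count as a smooth function of the daily cycle
counts <- as.data.frame(table(hour = floor(df$hour)))
counts$hour <- as.numeric(as.character(counts$hour))
fit <- glm(Freq ~ sin(2 * pi * hour / 24) + cos(2 * pi * hour / 24),
           family = poisson, data = counts)
```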
Visualizing probability of event over time, based on epoch time
Visualization options include circular histograms and circular kernel density plots. Basically, these are histograms or KDE visualizations wrapped around a circle, better to stress the cyclical nature
Visualizing probability of event over time, based on epoch time Visualization options include circular histograms and circular kernel density plots. Basically, these are histograms or KDE visualizations wrapped around a circle, better to stress the cyclical nature of the variable, in your case that Sunday is much nearer to Monday than Wednesday. To the question of which times are more likely to produce the event, you may want to look into harmonic regression (try this handout) or regression on circular statistics (see this question).
Visualizing probability of event over time, based on epoch time Visualization options include circular histograms and circular kernel density plots. Basically, these are histograms or KDE visualizations wrapped around a circle, better to stress the cyclical nature
51,164
Prediction/generalisation error derivation
OK, I got it: $E[(Y-\hat{f})^2] = E[\epsilon^2 + 2f\epsilon -2\epsilon\hat{f} + f^2 - 2f\hat{f} + \hat{f}^2]$ $E[(Y-\hat{f})^2] = \sigma_\epsilon^2 + f^2 - 2fE\hat{f} + E[\hat{f}^2]$ Note that: $E[(\hat{f}-E\hat{f})^2] = E[\hat{f}^2] - 2E\hat{f}E\hat{f} + (E[\hat{f}])^2$ $E[(\hat{f}-E\hat{f})^2] = E[\hat{f}^2] - (E[\hat{f}])^2$ It follows: $E[(Y-\hat{f})^2] = \sigma_\epsilon^2 + f^2 - 2fE\hat{f} + (E[\hat{f}])^2 + E[(\hat{f}-E\hat{f})^2]$ $E[(Y-\hat{f})^2] = \sigma_\epsilon^2 + (E[\hat{f}]-f)^2 + E[(\hat{f}-E\hat{f})^2]$
Prediction/generalisation error derivation
OK, I got it: $E[(Y-\hat{f})^2] = E[\epsilon^2 + 2f\epsilon -2\epsilon\hat{f} + f^2 - 2f\hat{f} + \hat{f}^2]$ $E[(Y-\hat{f})^2] = \sigma_\epsilon^2 + f^2 - 2fE\hat{f} + E[\hat{f}^2]$ Note that: $E[(\h
Prediction/generalisation error derivation OK, I got it: $E[(Y-\hat{f})^2] = E[\epsilon^2 + 2f\epsilon -2\epsilon\hat{f} + f^2 - 2f\hat{f} + \hat{f}^2]$ $E[(Y-\hat{f})^2] = \sigma_\epsilon^2 + f^2 - 2fE\hat{f} + E[\hat{f}^2]$ Note that: $E[(\hat{f}-E\hat{f})^2] = E[\hat{f}^2] - 2E\hat{f}E\hat{f} + (E[\hat{f}])^2$ $E[(\hat{f}-E\hat{f})^2] = E[\hat{f}^2] - (E[\hat{f}])^2$ It follows: $E[(Y-\hat{f})^2] = \sigma_\epsilon^2 + f^2 - 2fE\hat{f} + (E[\hat{f}])^2 + E[(\hat{f}-E\hat{f})^2]$ $E[(Y-\hat{f})^2] = \sigma_\epsilon^2 + (E[\hat{f}]-f)^2 + E[(\hat{f}-E\hat{f})^2]$
Prediction/generalisation error derivation OK, I got it: $E[(Y-\hat{f})^2] = E[\epsilon^2 + 2f\epsilon -2\epsilon\hat{f} + f^2 - 2f\hat{f} + \hat{f}^2]$ $E[(Y-\hat{f})^2] = \sigma_\epsilon^2 + f^2 - 2fE\hat{f} + E[\hat{f}^2]$ Note that: $E[(\h
51,165
Reporting chi-square tests for weighted data
Personally, I always report the unweighted n, not the weighted values, and some restricted surveys ask you to round n to the nearest 50, but that depends on the survey. My 2 cents CS
Reporting chi-square tests for weighted data
Personally, I always report the unweighted n, not the weighted values, and some restricted surveys ask you to round n to the nearest 50, but that depends on the survey. My 2 cents CS
Reporting chi-square tests for weighted data Personally, I always report the unweighted n, not the weighted values, and some restricted surveys ask you to round n to the nearest 50, but that depends on the survey. My 2 cents CS
Reporting chi-square tests for weighted data Personally, I always report the unweighted n, not the weighted values, and some restricted surveys ask you to round n to the nearest 50, but that depends on the survey. My 2 cents CS
51,166
Derivation of the formula for partial correlation coefficient of second order
You can find a proof of the general case in Section 2.5.3 (pp. 42-43) of Anderson (1984). The proof covers about a page and half to obtain the general formula $$\rho_{ij\cdot q+1,...,p} = \frac {\rho_{ij\cdot q+2,...,p} - \rho_{i, q+1\cdot q+2,...,p} \rho_{j, q+1\cdot q+2,...,p}} { \sqrt{1 - \rho^2_{i,q+1\cdot q+2,...,p}} \sqrt{1 - \rho^2_{j,q+1\cdot q+2,...,p}} }.$$ Your formula follows on substitution and on a relabeling of indices if needed. T.W. Anderson (1984) An Introduction to Multivariate Statistical Analysis. Second Edition. John Wiley & Sons.
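The recursion is easy to check numerically in R; here R is a correlation matrix and the helper functions follow the $\rho_{ij\cdot k}$ / $\rho_{ij\cdot kl}$ notation (a sketch, not tied to any particular data):

```r
pcor1 <- function(R, i, j, k) {                    # first order: rho_{ij . k}
  (R[i, j] - R[i, k] * R[j, k]) /
    sqrt((1 - R[i, k]^2) * (1 - R[j, k]^2))
}
pcor2 <- function(R, i, j, k, l) {                 # second order: rho_{ij . kl}
  (pcor1(R, i, j, l) - pcor1(R, i, k, l) * pcor1(R, j, k, l)) /
    sqrt((1 - pcor1(R, i, k, l)^2) * (1 - pcor1(R, j, k, l)^2))
}
# for a 4x4 correlation matrix this agrees with the precision-matrix route:
# Q <- solve(R); -Q[i, j] / sqrt(Q[i, i] * Q[j, j])
```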
Derivation of the formula for partial correlation coefficient of second order
You can find a proof of the general case in Section 2.5.3 (pp. 42-43) of Anderson (1984). The proof covers about a page and half to obtain the general formula $$\rho_{ij\cdot q+1,...,p} = \frac {\rho
Derivation of the formula for partial correlation coefficient of second order You can find a proof of the general case in Section 2.5.3 (pp. 42-43) of Anderson (1984). The proof covers about a page and half to obtain the general formula $$\rho_{ij\cdot q+1,...,p} = \frac {\rho_{ij\cdot q+2,...,p} - \rho_{i, q+1\cdot q+2,...,p} \rho_{j, q+1\cdot q+2,...,p}} { \sqrt{1 - \rho^2_{i,q+1\cdot q+2,...,p}} \sqrt{1 - \rho^2_{j,q+1\cdot q+2,...,p}} }.$$ Your formula follows on substitution and on a relabeling of indices if needed. T.W. Anderson (1984) An Introduction to Multivariate Statistical Analysis. Second Edition. John Wiley & Sons.
Derivation of the formula for partial correlation coefficient of second order You can find a proof of the general case in Section 2.5.3 (pp. 42-43) of Anderson (1984). The proof covers about a page and half to obtain the general formula $$\rho_{ij\cdot q+1,...,p} = \frac {\rho
51,167
Parameter estimation of a power spectrum equal to a power law + white noise
First, do a spectrum estimate (how depends on your process; you might need some multitaper method) and fit your function to it. Second, use wavelets: calculate the wavelet variance, which is like the spectrum but integrated over a band, then fit the correspondingly integrated model to it. Third, similar to the wavelet approach, but make use of the fact that the wavelet transform approximately de-correlates different frequency bands of a stationary process: work out the approximate likelihood function for your process by assuming Gaussianity, and do maximum likelihood estimation. All of these can be found in the book Wavelet Methods for Time Series Analysis by Percival and Walden.
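A sketch of the first (spectrum-fitting) option in base R, assuming x is the observed stationary series sampled at unit intervals and the target model is S(f) = A f^(-alpha) + C; the nls starting values are placeholders and may need tuning:

```r
sp  <- spectrum(x, taper = 0.1, plot = FALSE)   # tapered periodogram
dat <- data.frame(f = sp$freq, S = sp$spec)
fit <- nls(S ~ A * f^(-alpha) + C, data = dat,
           start = list(A = 1, alpha = 1, C = mean(dat$S)))
coef(fit)   # estimates of the power-law amplitude, exponent and white-noise floor
```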
Parameter estimation of a power spectrum equal to a power law + white noise
do a spectrum estimate(How? depends on your process, you might need some multi tapering method.) and fit your function to it. use wavelet, calculate wavelet variance, which is like spectrum but integr
Parameter estimation of a power spectrum equal to a power law + white noise do a spectrum estimate(How? depends on your process, you might need some multi tapering method.) and fit your function to it. use wavelet, calculate wavelet variance, which is like spectrum but integrated over a band, then fit the integrated line to it. Similar to wavelet, but make use of the fact that wavelet approximated de-correlats, different frequency band for stationary process, figure out the approximated likelihood function to your process by assuming Gaussian, and do a maximum likelihood estimation. All of these can be found in the book Wavelet Methods for Time Series Analysis by Percival and Walden.
Parameter estimation of a power spectrum equal to a power law + white noise do a spectrum estimate(How? depends on your process, you might need some multi tapering method.) and fit your function to it. use wavelet, calculate wavelet variance, which is like spectrum but integr
51,168
Residual, linear least squares and quality engineering
I did not go through your question completely since it is quite long, but I think it concerns multiple regression and goodness-of-fit analysis. I would recommend reading Chapter 3, 'Diagnostics and Remedial Measures', from the textbook 'Applied Linear Statistical Models'.
Residual, linear least squares and quality engineering
I did not go through your question completely, it is quite long. But I think it concerns with multiple regression and goodness of fit analysis. I would recommend reading chapter 3 titled 'Diagnostics
Residual, linear least squares and quality engineering I did not go through your question completely, it is quite long. But I think it concerns with multiple regression and goodness of fit analysis. I would recommend reading chapter 3 titled 'Diagnostics and Remedial Measures' from textbook 'Applied Linear Statistical Models'.
Residual, linear least squares and quality engineering I did not go through your question completely, it is quite long. But I think it concerns with multiple regression and goodness of fit analysis. I would recommend reading chapter 3 titled 'Diagnostics
51,169
Is the following an error-in-variance problem, and is there a recommended R (or SAS) package for it?
Your situation sounds very much like the situation I described with the last two questions I wrote here. I think those questions and the responses will help if you haven't already read them. If there are two variables that are linearly related and each is observed with measurement error then you have an error-in-variables problem. OLS is not appropriate because it considers one of the variables to be fixed and known. As I mentioned in my question the "interference" or "correlation" in errors for the repeated samples is bogus because they are applied to the same points but the error in each of the variables is still independent. This means that the original error-in-variables method is valid even with repeated measurement. Bill Huber indicated that a more standard approach to showing equivalence of two measurement techniques is something different that is referred to as inverse regression in books such as Draper and Smith. That method is apparently different from error-in-variables/Deming regression and it does variance component analysis, which Deming regression does not do. I plan to look into it but have not yet. Your problem may not be the same as mine and Deming regression may be fine for you. You will note that the R package mcr was recommended to me as a tool that does Deming regression. Bill Huber mentioned it to me either in comment or chat.
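For reference, Deming regression itself is short enough to write down directly rather than relying on a particular package; a sketch assuming paired measurements x and y and a known (or assumed) ratio lambda of the y-error variance to the x-error variance (lambda = 1 gives orthogonal regression):

```r
deming <- function(x, y, lambda = 1) {
  sxx <- var(x); syy <- var(y); sxy <- cov(x, y)
  slope <- (syy - lambda * sxx +
            sqrt((syy - lambda * sxx)^2 + 4 * lambda * sxy^2)) / (2 * sxy)
  c(intercept = mean(y) - slope * mean(x), slope = slope)
}
```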
Is the following an error-in-variance problem, and is there a recommended R (or SAS) package for it?
Your situation sounds very much like the situation I described with the last two questions I wrote here. I think those questions and the responses will help if you haven't already read them. If ther
Is the following an error-in-variance problem, and is there a recommended R (or SAS) package for it? Your situation sounds very much like the situation I described with the last two questions I wrote here. I think those questions and the responses will help if you haven't already read them. If there are two variables that are linearly related and each is observed with measurement error then you have an error-in-variables problem. OLS is not appropriate because it considers one of the variables to be fixed and known. As I mentioned in my question the "interference" or "correlation" in errors for the repeated samples is bogus because they are applied to the same points but the error in each of the variables is still independent. This means that the original error-in-variables method is valid even with repeated measurement. Bill Huber indicated that a more standard approach to showing equivalence of two measurement techniques is something different that is referred to as inverse regression in books such as Draper and Smith. That method is apparently different from error-in-variables/Deming regression and it doesvariance component analysis that Deming regression does not do. I plan to look into it but have not yet. Your problem may not be the same as mine and Deming regression may be fine for you. You will note that the R package mrc was recommended to me as tool that does Deming regression. Bill Hiiber mentioned it to me either in comment or chat.
Is the following an error-in-variance problem, and is there a recommended R (or SAS) package for it? Your situation sounds very much like the situation I described with the last two questions I wrote here. I think those questions and the responses will help if you haven't already read them. If ther
51,170
How can the robustness of observational studies be increased?
The weakness of observational studies is the lack of randomization. The best way to make these studies valid (not biased) is to do them by matching cases to controls through methods such as propensity scoring. This makes each experimental case very similar to its control case in terms of the various covariates that are expected to affect the response (e.g. age, gender, status, etc.). My assumption is that you are using the term robust to mean close to unbiased, which a randomized controlled study would be.
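A propensity-score matching sketch with the MatchIt package, using hypothetical column names (treat, age, gender, status):

```r
library(MatchIt)
m_out   <- matchit(treat ~ age + gender + status, data = dat,
                   method = "nearest")   # 1:1 nearest-neighbour matching on the propensity score
summary(m_out)                # covariate balance before and after matching
matched <- match.data(m_out)  # matched sample for the outcome analysis
```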
How can the robustness of observational studies be increased?
The weakness of observational studies is the lack of randomization. The best way to make these studies valid (not biased), is to do them by matching cases to controls through methods such as propensi
How can the robustness of observational studies be increased? The weakness of observational studies is the lack of randomization. The best way to make these studies valid (not biased), is to do them by matching cases to controls through methods such as propensity scoring. This makes each experimental case very similar to its control case in terms of the various covariates that are expected to affect the response (e.g. age, gender, status etc). My assumption is that you are using the term robust to mean close to unbiased which a randomized control study would be.
How can the robustness of observational studies be increased? The weakness of observational studies is the lack of randomization. The best way to make these studies valid (not biased), is to do them by matching cases to controls through methods such as propensi
51,171
Measurement error in dependent variable?
Essentially what you have is an error term with a nonzero mean. Consequently the assumptions required for OLS to be appropriate are violated, so you should not use OLS directly. However, if q is known, subtract it from the observed $Y_i$'s and apply OLS; the variance of the error term is then $\sigma^2_\nu+\sigma^2_e$. Given that this is a time series, even after inclusion of the covariate $X_i$ there may still be some autocorrelation structure left in the series. So it might be appropriate to also include some autoregressive and/or moving average terms.
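A sketch of that two-step idea in R, with y, x and the known offset q as placeholders:

```r
fit_ols <- lm(I(y - q) ~ x)       # OLS after removing the known bias q
acf(resid(fit_ols))               # any autocorrelation left in the residuals?
fit_arma <- arima(y - q, order = c(1, 0, 1), xreg = x)   # ARMA(1,1) errors plus the covariate
```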
Measurement error in dependent variable?
Essentially what you have is an error term with a nonzero mean. Consequently the assumptions required for OLS to be appropriate are violated. So you should use OLS directly. However if q is known su
Measurement error in dependent variable? Essentially what you have is an error term with a nonzero mean. Consequently the assumptions required for OLS to be appropriate are violated. So you should use OLS directly. However if q is known subtract it from the Y^$_i$s and apply OLS using the variance of the error term which is σ$^2$$_ν$+σ$^2$$_e$. Given that this is a time series even after inclusion of the covariate X$_i$ there may still be some autocorrelation structure left in the series. So it might be appropriate to also include some autoregressive and/or moving average terms.
Measurement error in dependent variable? Essentially what you have is an error term with a nonzero mean. Consequently the assumptions required for OLS to be appropriate are violated. So you should use OLS directly. However if q is known su
51,172
How is the intercept calculated in a generalized linear model and why is it different from a linear model?
The difference in estimated intercepts is not because of the overdispersion in the data. Peter Flom's comment is the correct answer. To see this, change the lm() model into a glm() model with a gaussian family: glm(data ~ 1, family = gaussian) glm(data ~ 1, family = gaussian(link="log"),start=c(20)) The canonical link for the gaussian family is the identity link, so you get exactly the same estimate as for lm(). Changing the link to the log link function gives you the same estimate of the intercept that you're getting from the Poisson and NB models. The gaussian model with log link is $log(E(Y|X))=θ^′X$, while the glm with identity link is $E(Y|X)=θ^′X$. That's why exponentiating the estimated intercept for the log link models $e^{5.453} = 233$ gives you the estimated intercept for the identity link models - you are using the inverse link function. Getting the expected # of saplings per plot for this simple model is easy with just the value of the coefficient, but once you add treatment effects and other covariates it will be more difficult. You should use the predict() function like this: data = data.frame(saplings=data, treat=gl(2,6,24,label=c("control","treat")), year=gl(2,12,label=c("2004","2011"))) test.glm = glm.nb(saplings~treat*year,link=log,data=data) nd = data.frame(treat=gl(2,1,4,label=c("control","treat")), year=gl(2,2,label=c("2004","2011"))) predict(test.glm,newdata=nd,type="response") Also see this question, and read Chapter 6 of Zuur et al (2007) "Analysing Ecological Data"
How is the intercept calculated in a generalized linear model and why is it different from a linear
The difference in estimated intercepts is not because of the overdispersion in the data. Peter Flom's comment is the correct answer. To see this, change the lm() model into a glm() model with a gaussi
How is the intercept calculated in a generalized linear model and why is it different from a linear model? The difference in estimated intercepts is not because of the overdispersion in the data. Peter Flom's comment is the correct answer. To see this, change the lm() model into a glm() model with a gaussian family: glm(data ~ 1, family = gaussian) glm(data ~ 1, family = gaussian(link="log"),start=c(20)) The canonical link for the gaussian family is the identity link, so you get exactly the same estimate as for lm(). Changing the link to the log link function gives you the same estimate of the intercept that you're getting from the Poisson and NB models. The gaussian model with log link is $log(E(Y|X))=θ^′X$, while the glm with identity link is $E(Y|X)=θ^′X$. That's why exponentiating the estimated intercept for the log link models $e^{5.453} = 233$ gives you the estimated intercept for the identity link models - you are using the inverse link function. Getting the expected # of saplings per plot for this simple model is easy with just the value of the coefficient, but once you add treatment effects and other covariates it will be more difficult. You should use the predict() function like this: data = data.frame(saplings=data, treat=gl(2,6,24,label=c("control","treat")), year=gl(2,12,label=c("2004","2011"))) test.glm = glm.nb(saplings~treat*year,link=log,data=data) nd = data.frame(treat=gl(2,1,4,label=c("control","treat")), year=gl(2,2,label=c("2004","2011"))) predict(test.glm,newdata=nd,type="response") Also see this question, and read Chapter 6 of Zuur et al (2007) "Analysing Ecological Data"
How is the intercept calculated in a generalized linear model and why is it different from a linear The difference in estimated intercepts is not because of the overdispersion in the data. Peter Flom's comment is the correct answer. To see this, change the lm() model into a glm() model with a gaussi
51,173
Edge correction of Ripley's K-function for two 1D point processes?
Have you seen Gavin's K1D code? In his vignette, synchronicity is $K(t) > 2t$ and asynchrony between $2t$ and $0$. I believe the $2t$ arises from the evaluation of the pair in both ways which is necessary as Gavin explains that sometimes the edge correction $w(t_i,t_j)\neq w(t_j,t_i)$.
Edge correction of Ripley's K-function for two 1D point processes?
Have you seen Gavin's K1D code? In his vignette, synchronicity is $K(t) > 2t$ and asynchrony between $2t$ and $0$. I believe the $2t$ arises from the evaluation of the pair in both ways which is nece
Edge correction of Ripley's K-function for two 1D point processes? Have you seen Gavin's K1D code? In his vignette, synchronicity is $K(t) > 2t$ and asynchrony between $2t$ and $0$. I believe the $2t$ arises from the evaluation of the pair in both ways which is necessary as Gavin explains that sometimes the edge correction $w(t_i,t_j)\neq w(t_j,t_i)$.
Edge correction of Ripley's K-function for two 1D point processes? Have you seen Gavin's K1D code? In his vignette, synchronicity is $K(t) > 2t$ and asynchrony between $2t$ and $0$. I believe the $2t$
51,174
MCMC for structured matrix
First of all, let's simplify things. Assume that we have a zero-mean Gaussian Markov field with an unknown covariance matrix $\Sigma$: $$X\mid \Sigma\sim N\left ( 0,\Sigma \right )$$ Conditional independence of two variables $X_{i}$ and $X_{j}$, given the rest, is guaranteed iff the $(i,j)$ element of the precision matrix (sometimes referred to as the concentration matrix) $Q=\Sigma^{-1}$ is equal to zero. This precision matrix has to be invertible and positive definite. So, in order to have posterior samples of $\Sigma$ or $Q$ we have to have a way to randomly draw such structured matrices. Håvard Rue has advised to look for information about the so-called G-Wishart distribution and A. Lenkoski's work on it. The expression for this distribution is as follows (up to a normalizing constant): $$f\left ( K\mid D \right )\propto \det(K)^{\frac{\delta-2}{2}}\exp\left \{ -\operatorname{tr}\left ( KD \right ) \right \}$$ where $K$ is a matrix with the structure we need and is positive definite. Now let's put the G-Wishart distribution as a prior on $Q$, so that the posterior will also be a G-Wishart distribution: $$Q\mid X\sim W_{G}\left ( \delta +n, D+\sum_{k=1}^{n}X_{k}^{T}X_{k} \right )$$ To obtain random samples I have employed the iterative proportional scaling algorithm and a Gibbs sampler as described in the paper of Lenkoski and Dobra (see reference below), so that each random draw is a positive definite matrix with zero entries where variables are conditionally independent. The sampling algorithm is quite easy to implement. Lenkoski, A. and Dobra, A. (2010). Computational aspects related to inference in Gaussian graphical models with the G-Wishart prior. J. Comput. Graph. Statist.
MCMC for structured matrix
First of all, lets simplify things. assume, that we have a zero-mean Gaussian Markov field with an unknown covariance matrix $\Sigma$ $$X\mid \Sigma\sim N\left ( 0,\Sigma \right )$$ Conditional indepe
MCMC for structured matrix First of all, lets simplify things. assume, that we have a zero-mean Gaussian Markov field with an unknown covariance matrix $\Sigma$ $$X\mid \Sigma\sim N\left ( 0,\Sigma \right )$$ Conditional independence of two variables $X_{i}$ and $X_{j}$, given the rest, is garantued iff the $(i,j)$ element of the precision matrix (sometimes refered as concentraition matrix) $Q=\Sigma^{-1}$ is equal to zero. This precision matrix has to be invertable and positive definite. So, in order to have a posterior samples of $\Sigma$ or $Q$ we have to have a way to randomly draw such structured matrices. Håvard Rue have advised to look for information about so called G-Wishart distribution and a. Lenkoski work on it. The expression for this distribution is as follows (up to a normalizing constant): $$f\left ( K|D \right )\propto \left ( det(K)) \right )^\frac{\delta-2}{2}exp\left \{ -tr\left ( KD \right ) \right \}$$ K is a matrix with the structure we need and is positive definite. Now lets put G-Wishart distribution as prior on $Q$, so that posterior will be also G-Wishart distribution: $$Q|X\sim W_{G}\left ( \delta +n, D+\sum_{k=1}^{n}X_{k}^{T}X_{k} \right )$$ To obtain random samples I have employed the iterated proportional scaling algorithm and Gibbs sampler as described in the paper of Lentoski and Dobra (see reference below). So that each random draw is a positive definetely matrix with zero entries where variables are conditionaly independent. The sampling algorithm is quite easy to implement. Lenkoski, A. and Dobra, A. (2010). Computational aspects related to inference in Gaussian graphical models with the G-Wishart prior. J. Comput. Graph. Statist.
MCMC for structured matrix First of all, lets simplify things. assume, that we have a zero-mean Gaussian Markov field with an unknown covariance matrix $\Sigma$ $$X\mid \Sigma\sim N\left ( 0,\Sigma \right )$$ Conditional indepe
51,175
Likelihood test for dividing a distribution into two separate distributions
If I were approaching this I would: Try to use a random forest of gradient boosted trees to predict the patients (or patient characteristics of interest) based only on the amino acids. These tools handle categorical inputs. This would allow reduction of the region of interest from being 600-dimension (or whatever) to on the order of 5-30 dimensional. In the much smaller dimensionality data-set you would likely find more textbook approaches to be more successful. Reference: http://www.journalogy.net/Publication/6491785/feature-selection-with-ensembles-artificial-variables-and-redundancyelimination
Likelihood test for dividing a distribution into two separate distributions
If I were approaching this I would: Try to use a random forest of gradient boosted trees to predict the patients (or patient characteristics of interest) based only on the amino acids. These tools ha
Likelihood test for dividing a distribution into two separate distributions If I were approaching this I would: Try to use a random forest of gradient boosted trees to predict the patients (or patient characteristics of interest) based only on the amino acids. These tools handle categorical inputs. This would allow reduction of the region of interest from being 600-dimension (or whatever) to on the order of 5-30 dimensional. In the much smaller dimensionality data-set you would likely find more textbook approaches to be more successful. Reference: http://www.journalogy.net/Publication/6491785/feature-selection-with-ensembles-artificial-variables-and-redundancyelimination
Likelihood test for dividing a distribution into two separate distributions If I were approaching this I would: Try to use a random forest of gradient boosted trees to predict the patients (or patient characteristics of interest) based only on the amino acids. These tools ha
51,176
What are finite window effects?
This is a guess but an educated one. Since the authors are referring to a histogram of interarrival times they might be referring to a smoothed version of the histogram. A kernel density estimate is one way to smooth a histogram. The bandwidth of the kernel is called the window. Based on the article that Peter Flom linked I have a little more confidence that my guess is correct. The article deals with spectral density estimates and the rectangular and Hanning windows are particular shaped kernels.
What are finite window effects?
This is a guess but an educated one. Since the authors are referring to a histogram of interarrival times they might be referring to a smoothed version of the histogram. A kernel density estimate is
What are finite window effects? This is a guess but an educated one. Since the authors are referring to a histogram of interarrival times they might be referring to a smoothed version of the histogram. A kernel density estimate is one way to smooth a histogram. The bandwidth of the kernel is called the window. Based on the article that Peter Flom linked I have a little more confidence that my guess is correct. The article deals with spectral density estimates and the rectangular and Hanning windows are particular shaped kernels.
What are finite window effects? This is a guess but an educated one. Since the authors are referring to a histogram of interarrival times they might be referring to a smoothed version of the histogram. A kernel density estimate is
51,177
Model for circular statistics
This book by Nick Fisher is very popular. It covers many circular distributions including the cardioid, wrapped Cauchy, wrapped normal and von Mises distributions. I don't know if it tells you how to generate observations from these distributions, but it gives you a lot of distributions, theory and methods. Statistical Analysis of Circular Data
Model for circular statistics
This book by Nick Fisher is very popular. It covers many circular distributions including Cardioid, wrapped Cauchy, wrapped Normal and von MIses distribution. I don't know if it tells you how to gen
Model for circular statistics This book by Nick Fisher is very popular. It covers many circular distributions including Cardioid, wrapped Cauchy, wrapped Normal and von MIses distribution. I don't know if it tells you how to generate observations from these distributions, but it gives you a lot of distributions, theory and methods. Statistical Analysis of Circular Data
Model for circular statistics This book by Nick Fisher is very popular. It covers many circular distributions including Cardioid, wrapped Cauchy, wrapped Normal and von MIses distribution. I don't know if it tells you how to gen
51,178
Sample size for multiple regression: How much more data do I need?
Consider this: given no population effect, and given an initial sample effect classified as significant and a 2nd-stage effect (controlling for the dummy variable) classified as non-significant, what is the probability that a researcher will continue to amass more observations to the point where the effect in question appears significant? I suspect it is very high, but that doesn't shed much light on your research question. The literature on Type I errors, researcher bias, and the File Drawer Problem will be relevant here. I realize this will not seem to directly answer your question, but hopefully it will reveal a problem with the question and will prompt additional thinking about the role of significance tests. More directly to the point: you can treat the partial correlation coefficient you've obtained when controlling for the dummy as the best estimate and can then conduct a garden-variety power analysis to see what sample size would be needed for such a correlation coefficient to appear significantly different from zero. There are plenty of online correlation power calculators that will enable you to do this (e.g., at Vassar's site), or you can download the free program GPower. If you need a deeper understanding of the uncertainty surrounding this power question, you could conduct a Monte Carlo simulation in which both your focal variable and the dummy have coefficients that can vary in proportion to their standard errors.
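For the garden-variety power analysis, a quick sketch in R with the pwr package (the r value is a placeholder for your estimated partial correlation):
library(pwr)
pwr.r.test(r = 0.15, sig.level = 0.05, power = 0.80)   # returns the required sample size n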
Sample size for multiple regression: How much more data do I need?
Consider this: given no population effect, and given an initial sample effect classified as significant and a 2nd-stage effect (controlling for dummy variable) classified as non-significant, what is
Sample size for multiple regression: How much more data do I need? Consider this: given no population effect, and given an initial sample effect classified as significant and a 2nd-stage effect (controlling for dummy variable) classified as non-significant, what is the probability that a researcher will continue to amass more observations to the point where the effect in question appears significant? I suspect it is very high, but that doesn't shed much light on your research question. The literature on Type I errors, researcher bias, and the File Drawer Problem will be relevant here. I realize this will not seem to directly answer your question, but hopefully it will reveal a problem with the question and will prompt additional thinking about the role of significance tests. More directly to the point: you can treat the partial correlation coefficient you've obtained when controlling for the dummy as the best estimate and can then conduct a garden-variety power analysis to see what sample size would be needed for such a correlation coefficient to appear significantly different from zero. There are plenty of online correlation power calculators that will enable you to do this (e.g., at Vassar's site), or you can download the free program GPower. If you need a deeper understanding of the uncertaintly surrounding this power question, you could conduct a Monte Carlo simulation in which both your focal variable and the dummy have coefficients that can vary in proportion to their standard errors.
Sample size for multiple regression: How much more data do I need? Consider this: given no population effect, and given an initial sample effect classified as significant and a 2nd-stage effect (controlling for dummy variable) classified as non-significant, what is
51,179
Calculating and comparing histograms of complex numbers
If I have a 1D sequence of complex numbers with real and imaginary parts, how can I compute the histogram of this sequence? Do I separately compute the histogram of both the real and imaginary parts? A complex number is just an ordered pair of real numbers (i.e., a two-element vector of real numbers) with extended definitions of arithmetic operators like addition, multiplication, etc. Visualisation of a data set of complex numbers can be done just like any other data-set consisting of bivariate vectors of real numbers. It is important to treat the values as bivariate values, to retain any statistical relationships between the parts, so it is not generally a good idea to construct separate histograms of the real and imaginary parts. The best way to visualise a vector of complex numbers is to generate the kernel density estimator (KDE) of this data and then visualise this using either a surface plot, a contour plot, or a heat plot, taken over the two-dimensions of the complex numbers. (A histogram is just a clunky version of a KDE; you are better off using a KDE with a smooth kernel that does not depend on arbitrary bin choices.) If you use a contour plot or heat plot you can easily superimpose a scatterplot of the data values, which allows you to see the raw data, plus the estimated density of that data. This is what I would recommend. Here is an example of what this looks like when implemented in R. In this code we generate some random complex data, and plot a scatterplot of the data with contours from the KDE using the ggalt package. This gives a clear visualisation of the raw data, plus giving an outline of the contours of its estimated density. #Generate random vector X consisting of n complex values set.seed(1) n <- 100; ARG <- 2*pi*runif(n); MOD <- rgamma(n, shape = 2, scale = 10) + rgamma(n, shape = 5, scale = 12); X <- complex(modulus = MOD, argument = ARG); #Generate scatterplot with KDE contours library(ggplot2); library(ggalt); DATA <- data.frame(Real = Re(X), Imaginary = Im(X)); THEME <- theme(plot.title = element_text(hjust = 0.5, size = 14, face = 'bold'), plot.subtitle = element_text(hjust = 0.5, face = 'bold'), axis.title.x = element_text(hjust = 0.5, face = 'bold'), axis.title.y = element_text(hjust = 0.5, face = 'bold')); ggplot(DATA, aes(x = Real, y = Imaginary)) + THEME + geom_point(size = 4, alpha = 0.4) + geom_bkde2d(bandwidth = c(20, 20)) + ggtitle('Scatterplot of complex data values with kernel density contours');
Calculating and comparing histograms of complex numbers
If I have a 1D sequence of complex numbers with real and imaginary parts, how can I compute the histogram of this sequence? Do I separately compute the histogram of both the real and imaginary parts?
Calculating and comparing histograms of complex numbers If I have a 1D sequence of complex numbers with real and imaginary parts, how can I compute the histogram of this sequence? Do I separately compute the histogram of both the real and imaginary parts? A complex number is just an ordered pair of real numbers (i.e., a two-element vector of real numbers) with extended definitions of arithmetic operators like addition, multiplication, etc. Visualisation of a data set of complex numbers can be done just like any other data-set consisting of bivariate vectors of real numbers. It is important to treat the values as bivariate values, to retain any statistical relationships between the parts, so it is not generally a good idea to construct separate histograms of the real and imaginary parts. The best way to visualise a vector of complex numbers is to generate the kernel density estimator (KDE) of this data and then visualise this using either a surface plot, a contour plot, or a heat plot, taken over the two-dimensions of the complex numbers. (A histogram is just a clunky version of a KDE; you are better off using a KDE with a smooth kernel that does not depend on arbitrary bin choices.) If you use a contour plot or heat plot you can easily superimpose a scatterplot of the data values, which allows you to see the raw data, plus the estimated density of that data. This is what I would recommend. Here is an example of what this looks like when implemented in R. In this code we generate some random complex data, and plot a scatterplot of the data with contours from the KDE using the ggalt package. This gives a clear visualisation of the raw data, plus giving an outline of the contours of its estimated density. #Generate random vector X consisting of n complex values set.seed(1) n <- 100; ARG <- 2*pi*runif(n); MOD <- rgamma(n, shape = 2, scale = 10) + rgamma(n, shape = 5, scale = 12); X <- complex(modulus = MOD, argument = ARG); #Generate scatterplot with KDE contours library(ggplot2); library(ggalt); DATA <- data.frame(Real = Re(X), Imaginary = Im(X)); THEME <- theme(plot.title = element_text(hjust = 0.5, size = 14, face = 'bold'), plot.subtitle = element_text(hjust = 0.5, face = 'bold'), axis.title.x = element_text(hjust = 0.5, face = 'bold'), axis.title.y = element_text(hjust = 0.5, face = 'bold')); ggplot(DATA, aes(x = Real, y = Imaginary)) + THEME + geom_point(size = 4, alpha = 0.4) + geom_bkde2d(bandwidth = c(20, 20)) + ggtitle('Scatterplot of complex data values with kernel density contours');
Calculating and comparing histograms of complex numbers If I have a 1D sequence of complex numbers with real and imaginary parts, how can I compute the histogram of this sequence? Do I separately compute the histogram of both the real and imaginary parts?
51,180
Finding the correct data mining approach
I am in agreement with the commentators: this is a simple time-series problem. If you are unconcerned with seasonal changes, I'm not sure what you expect to get out of simple day and hourly counts. ARIMA is what you want. If you really need something that is specifically machine-learning-ish, just try basic Bayesian modeling. It would incorporate your prior data and it is the basis of most ML paradigms.
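A minimal sketch of the ARIMA route in R, assuming the forecast package and a numeric vector counts of hourly event counts:
library(forecast)
y <- ts(counts, frequency = 24)   # 24 observations per day
fit <- auto.arima(y)              # automatic model selection
forecast(fit, h = 48)             # forecast the next two days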
Finding the correct data mining approach
I am in agreement with the commentators, this is a simple time-series problem. If you are unconcerned with seasonal changes, I'm not sure what you expect to get out of simple day and hourly counts. A
Finding the correct data mining approach I am in agreement with the commentators, this is a simple time-series problem. If you are unconcerned with seasonal changes, I'm not sure what you expect to get out of simple day and hourly counts. ARIMA is what you want. If you really need something that is specifically machine learningish, just try basic Bayesian modeling. It would incorporate your prior data and it is the basis of most ML paradigms....
Finding the correct data mining approach I am in agreement with the commentators, this is a simple time-series problem. If you are unconcerned with seasonal changes, I'm not sure what you expect to get out of simple day and hourly counts. A
51,181
Gaussian Process goodness of fit
What do you think would cause the change? Is it a change in the mean, the variance or something else? The type of change you expect should determine the parameter you should test for change. Is this a sudden change? If so, this could be done using intervention analysis for time series, such as is done with Box-Jenkins models for stationary Gaussian time series. The autobox software can handle this for you.
Gaussian Process goodness of fit
What do you think would cause the change? Is it a change in the mean, the variance or something else? The type of change you expect should determine the parameter you should test for change. Is this a
Gaussian Process goodness of fit What do you think would cause the change? Is it a change in the mean, the variance or something else? The type of change you expect should determine the parameter you should test for change. Is this a sudden change? If so this could be done using intervention analysis for time series such as is done with Box Jenkins models for statiomary Gaussian time series. The autobox software can handle this for you.
Gaussian Process goodness of fit What do you think would cause the change? Is it a change in the mean, the variance or something else? The type of change you expect should determine the parameter you should test for change. Is this a
51,182
Gaussian Process goodness of fit
So following is the solution I came up with. Please correct me if I'm wrong :) Assumptions The model change is abrupt. Idea My idea was the following: We determine the goodness of the fit by means of a model comparison. So we create a very simple model for which we know how to calculate the likelihood. As soon as this model becomes more likely than the original GP model we assume there was a model change. We assume the data $D$ is normalized to have a zero mean. null model likelihood As a simplification of the GP we choose a normal distribution with zero mean and a large variance at each test location. $\log p(D|M_0) = \sum\limits_{d_i \in D}\log p(d_i|M_0)$ with $d_i = (x_i, y_i)$ $\log p(d_i|M_0) = -\frac{1}{2}\log(2\pi)-\frac{1}{2}\log(\sigma^2)-\frac{1}{2\sigma^2}y_i^2$ GP model likelihood For a GP with covariance matrix $K_y$ the likelihood is given by: $\log p(D|M_{GP}) = -\frac{1}{2}\mathbf{y}^\top K_y^{-1} \mathbf{y} -\frac{1}{2}\log |K_y| -\frac{n}{2}\log{2\pi}$ Making a decision based on the Bayes factor Assuming a uniform prior over the models, we calculate the log Bayes factor as follows: $\log B_{01} = \log p(D|M_0) - \log p(D|M_{GP})$ According to this paper if $\log B_{01}$ is greater than 2 we assume there was a change in the underlying model.
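A rough sketch in R of what this comparison could look like, where x and y are the observed inputs and (zero-mean) outputs, assuming fixed, known hyperparameters for a squared-exponential GP kernel and a null variance of 10 (all of these values are illustrative and not part of the original answer):
gp_loglik <- function(x, y, ell = 1, sf2 = 1, sn2 = 0.1) {
  # GP log marginal likelihood with a squared-exponential kernel plus noise
  K <- sf2 * exp(-0.5 * outer(x, x, "-")^2 / ell^2) + sn2 * diag(length(x))
  drop(-0.5 * t(y) %*% solve(K, y) - 0.5 * determinant(K)$modulus - 0.5 * length(y) * log(2 * pi))
}
null_loglik <- function(y, s2 = 10) {
  # zero-mean normal with a large variance at each test location
  sum(dnorm(y, mean = 0, sd = sqrt(s2), log = TRUE))
}
logB01 <- null_loglik(y) - gp_loglik(x, y)
logB01 > 2    # flag a possible change in the underlying model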
Gaussian Process goodness of fit
So following is the solution I came up with. Please correct me if I'm wrong :) Assumptions The model change is abrupt. Idea My idea was the following: We determine the goodness of the fit by means of
Gaussian Process goodness of fit So following is the solution I came up with. Please correct me if I'm wrong :) Assumptions The model change is abrupt. Idea My idea was the following: We determine the goodness of the fit by means of a model comparison. So we create a very simple model for which we know how to calculate the likelihood. As soon as this models becomes more likely than the original GP model we assume there was a model change. We assume the data $D$ is normalized to have a zero mean. null model likelihood As a simplification of the GP we choose a normal distribution with zero mean and a large variance at each test location. $\log p(D|M_0) = \sum\limits_{d \in D}\log p(d_i|M_0)$ with $d_i = (x_i, y_i)$ $\log p(d|M_0) = -\frac{1}{2}\log(2\pi)-\frac{1}{2}\log(\sigma^2)-\frac{1}{2\sigma^2}x_i^2$ GP model likelihood For a GP with covariance matrix $K_y$ the likelihood is given by: $\log p(D|M_{GP}) = -\frac{1}{2}\mathbf{y}^\top K_y^{-1} \mathbf{y} -\frac{1}{2}\log |K_y| -\frac{n}{2}\log{2\pi}$ Making a decision based on the Bayes factor Assuming a uniform prior over the models, we calculate the log Bayes factor as follows: $\log B_{01} = \log p(D|M_0) - \log p(D|M_{GP})$ According to this paper if $\log B_{01}$ is greater than 2 we assume there was a change in the underlying model.
Gaussian Process goodness of fit So following is the solution I came up with. Please correct me if I'm wrong :) Assumptions The model change is abrupt. Idea My idea was the following: We determine the goodness of the fit by means of
51,183
Risk assesment and non-statistician's perception of percentages
As mentioned in the comment by @Tim, prospect theory proposes that probabilities are perceived in a particular way, such as in this figure: The "weighted probability" could also be called subjective probability or perceived probability. This function suggests you might be justified in proposing three categories of low probability, middle probability, and high probability. While you might not be able to find a paper that provides thresholds for categories of divisions among these, you should be able to find some papers that attempt to parameterize this function, and then just come up with your own thresholds based on that. As an example, here is a citation for one of the first papers that came up on Google when I used the search string "prospect theory probability weighting function medical risk": Bleichrodt, H. and Pinto, J. L. 2000. A Parameter-free elicitation of the probability weighting function in medical decision analysis. Management Science 46(11): 1485-1496. I only briefly looked at this paper, but this along with the search terms given should give you a start into that literature.
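For illustration, a sketch in R of a commonly used one-parameter weighting function (the Tversky-Kahneman form; the gamma value below is just an example estimate from that literature, not something you should rely on for your thresholds):
w <- function(p, gamma = 0.61) p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma)
p <- seq(0, 1, 0.01)
plot(p, w(p), type = "l", xlab = "stated probability", ylab = "weighted (perceived) probability")
abline(0, 1, lty = 2)   # identity line: perception equals the stated percentage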
Risk assesment and non-statistician's perception of percentages
As mentioned in the comment @Tim, prospect theory proposes that probabilities are perceived in a particular way, such as in this figure: The "weighted probability" could also be called subjective pro
Risk assesment and non-statistician's perception of percentages As mentioned in the comment @Tim, prospect theory proposes that probabilities are perceived in a particular way, such as in this figure: The "weighted probability" could also be called subjective probability or perceived probability. This function suggests you might be justified to propose three categories of low probability, middle probability, and high probability. While you might not be able to find a paper that provides thresholds for categories of divisions among these, you should be able to find some papers that attempt to parameterize this function, and then just come up with your own thresholds based on that. As an example, here is a citation for one of the first papers that came up on Google when I used the search string "prospect theory probability weighting function medical risk": Bleichrodt, H. and Pinto, J. L. 2000. A Parameter-free elicitation of the probability weighting function in medical decision analysis. Management Science 46(11): 1485-1496. I only briefly looked at this paper, but this along with the search terms given should give you a start into that literature.
Risk assesment and non-statistician's perception of percentages As mentioned in the comment @Tim, prospect theory proposes that probabilities are perceived in a particular way, such as in this figure: The "weighted probability" could also be called subjective pro
51,184
Risk assesment and non-statistician's perception of percentages
I imagine this would depend on the background of the individual. Someone with a strong statistical background would probably have a lower threshold on what is a reasonably high percentage than someone who doesn't.
Risk assesment and non-statistician's perception of percentages
I imagine this would depend on the background of the individual. Someone with a strong statisticsl background would probably have a lower threshold on what is a reasoanbly high percentage than somone
Risk assesment and non-statistician's perception of percentages I imagine this would depend on the background of the individual. Someone with a strong statisticsl background would probably have a lower threshold on what is a reasoanbly high percentage than somone who doesn't.
Risk assesment and non-statistician's perception of percentages I imagine this would depend on the background of the individual. Someone with a strong statisticsl background would probably have a lower threshold on what is a reasoanbly high percentage than somone
51,185
Method to estimate the prediction interval for GLM and negative binomial distribution
It makes sense if you're taking the $\theta$ as known. If you want to incorporate the uncertainty in $\theta$ you would either simulate from the joint distribution (asymptotically Gaussian) of the other parameters and $\theta$, or you could simulate one conditionally on the other (since $\theta$ is an input to the GLM, I'd suggest simulating $\theta$ from the normal approximation to its profile likelihood and then $\mu$ from the conditional of $\mu$ on $\theta$, since you get that bit of information out of the GLM). Then proceed as above to simulate the negative binomial. This is quite similar to an approach Ripley has discussed (for example on the R-help mailing list) -- e.g. here (though he was particularly terse in that one). [If anyone has a reference for that approach, it would round out the answer to this question nicely.]
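A sketch of how that simulation could look in R with MASS::glm.nb (variable and data names are placeholders; this is one way to implement the conditional approach described above, not the only one):
library(MASS)
fit  <- glm.nb(y ~ x1 + x2, data = dat)
nsim <- 10000
# 1. theta from the normal approximation to its profile likelihood (truncated to stay positive)
theta_sim <- pmax(rnorm(nsim, mean = fit$theta, sd = fit$SE.theta), 1e-6)
# 2. coefficients (conditional on theta) from their asymptotic Gaussian
beta_sim <- mvrnorm(nsim, mu = coef(fit), Sigma = vcov(fit))
X        <- model.matrix(delete.response(terms(fit)), newdat)   # one new observation
mu_sim   <- exp(beta_sim %*% t(X))
# 3. simulate new responses; quantiles give the prediction interval
y_sim <- rnbinom(nsim, mu = mu_sim, size = theta_sim)
quantile(y_sim, c(0.025, 0.975))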
Method to estimate the prediction interval for GLM and negative binomial distribution
It makes sense if you're taking the $\theta$ as known. If you want to incorporate the uncertainty in $\theta$ you would either simulate from the joint distribution (asymptotically Gaussian) of the oth
Method to estimate the prediction interval for GLM and negative binomial distribution It makes sense if you're taking the $\theta$ as known. If you want to incorporate the uncertainty in $\theta$ you would either simulate from the joint distribution (asymptotically Gaussian) of the other parameters and $\theta$, or you could simulate one conditionally on the other (since $\theta$ is an input to the GLM, I'd suggest simulating $\theta$ from the normal approximation to its profile likelihood and then $\mu$ from the conditional of $\mu$ on $\theta$, since you get that bit of information out of the GLM). Then proceed as above to simulate the negative binomial. This is quite similar to an approach Ripley has discussed (for example on the R-help mailing list) -- e.g. here (though he was particularly terse in that one). [If anyone has a reference for that approach, it would round out the answer to this question nicely.]
Method to estimate the prediction interval for GLM and negative binomial distribution It makes sense if you're taking the $\theta$ as known. If you want to incorporate the uncertainty in $\theta$ you would either simulate from the joint distribution (asymptotically Gaussian) of the oth
51,186
How to compare response to same question asked at different time points?
It depends on several things. One is what your scale was. Unless it can plausibly be treated as a continuous scale, a t test would not be appropriate. Some purists argue that you should never treat an ordinal scale as continuous for such purposes. The other, probably more important issue, is what you are interested in. Do you want to see if there is a perceived net improvement/decline in life, or are you interested in the correlation between the two? For example (assuming you can treat the variable as continuous) a paired t test would show you if on average people think life is better; but a correlation coefficient might be useful if what you are interested in is whether "Obama" is a decisive factor one way or another. Consider a highly politically partisan sample balanced between Republicans and Democrats, where everyone who agrees with the first statement disagrees with the second and vice versa. A paired t statistic will be exactly zero, showing no impact of Obama - but the reality is there was a big impact, in switching around who thinks life is good. A correlation coefficient of some sort would show this up. If you want to do a test with a correlation coefficient you should use a bootstrap method. If you want an appropriate correlation coefficient for ordinal data you should consider a polychoric correlation. If in fact you are interested in the net increase (up or down), then a paired t test would be ok if the data is sufficiently "close" to being continuous that you can pretend it is; but a non-parametric paired test as @psj suggests might be better.
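In R, both ideas are easy to sketch, assuming q1 and q2 hold the two sets of 1-5 responses per respondent and the psych package is available:
library(psych)
polychoric(data.frame(q1, q2))$rho    # polychoric correlation between the two items
wilcox.test(q1, q2, paired = TRUE)    # non-parametric paired test of a net shift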
How to compare response to same question asked at different time points?
It depends on several things. One is what your scale was. Unless it can plausibly be treated as a continuous scale, a t test could not be appropriate. Some purists argue that you should never treat
How to compare response to same question asked at different time points? It depends on several things. One is what your scale was. Unless it can plausibly be treated as a continuous scale, a t test could not be appropriate. Some purists argue that you should never treat an ordinal scale as continuous for such purposes. The other, probably more important issue, is what you are interested in. Do you want to see if there is a perceived net improvement/decline in life; or are you interested in the correlation between the two. For example (assuming you can treat the variable as continuous) a paired t test would show you if on average people think life is better; but a correlation coefficient might be useful if what you are interested in is whether "Obama" is a decisive factor one way or another. Consider a highly politically partisan sample balanced between Republicans and Democrats, where everyone who agrees with the first statement disagrees with the second and vice versa. A paired t statistic will be exactly zero, showing no impact of Obama - but the reality is there was a big impact, in switching around who thinks life is good. A correlation coefficient of some sort would show this up. If you want to do a test with a correlation coefficient you should use a bootstrap method. If you want an appropriate correlation coefficient for ordinal data you should consider a polychoric correlation. If in fact you are interested in the net increase (up or down), then a paired t test would be ok if the data is sufficiently "close" to being continuous that you can pretend it is; but a non-parametric paired test as @psj suggests might be better.
How to compare response to same question asked at different time points? It depends on several things. One is what your scale was. Unless it can plausibly be treated as a continuous scale, a t test could not be appropriate. Some purists argue that you should never treat
51,187
How to compare response to same question asked at different time points?
One approach is to visualize the changes by individuals with what is sometimes called a slopegraph: a chart in which each individual's response to the first question is plotted in its y-axis position on the left side, and each response to the second question is plotted on the right side, with a line connecting the two datapoints for each individual. By the slopes of the lines, we can visually interpret the changes that individuals make in their responses. Because there are only 5 possible values for each axis, many of the lines will overlap. In order to make the density of the lines apparent, you can fake this by making the axes 1-500 instead of 1-5 and applying a random jitter to the exact location of each plotted point. Then they won't be precisely overlapping, and you will be able to distinguish areas of high and low density. A numerical approach would be to compute a variable for each respondent showing the direction and extent of the shift in their answer between the two questions. If somebody responded with a 1 on the first question and 3 on the second, you would calculate this change variable to be +2. You can then report on this many different ways; for example, x, y, and z% of respondents responded higher, lower, or the same between the two questions; you can give yourself a nice stacked bar chart going from +4 to -4, etc. These approaches I think make sense in light of Peter Ellis's comment that the overall group behavior obscures the shifts that individuals make, and the interesting thing is the individual shifts. You can look at the groups that have positive values, 0, or negative values and see how they differ in demographics or political affiliation. My main concern is that the data is being framed as "time 1" and "time 2", and it's really not that. It's "question 1" and "question 2". That's not to say that it is without value; it's just an entirely different kind of data collection, which would be expected to tell us more about their current positive and negative opinions and their current perception of any shift. That's still valuable, it's just different.
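A quick sketch of both ideas in R, assuming q1 and q2 are the 1-5 responses per respondent (names are placeholders):
n  <- length(q1)
j1 <- q1 + runif(n, -0.15, 0.15)   # jitter so overlapping lines become visible
j2 <- q2 + runif(n, -0.15, 0.15)
plot(NULL, xlim = c(1, 2), ylim = c(0.5, 5.5), xaxt = "n", xlab = "", ylab = "response")
axis(1, at = 1:2, labels = c("Question 1", "Question 2"))
segments(1, j1, 2, j2, col = rgb(0, 0, 0, 0.2))    # one line per respondent

change <- q2 - q1                                  # change score per respondent
table(sign(change))                                # moved down / same / moved up
barplot(table(factor(change, levels = -4:4)))      # distribution of shifts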
How to compare response to same question asked at different time points?
One approach is to visualize the changes by individuals. I can't recall the name of this chart type offhand, but this would be a chart in which each individual response to the first question is plotte
How to compare response to same question asked at different time points? One approach is to visualize the changes by individuals. I can't recall the name of this chart type offhand, but this would be a chart in which each individual response to the first question is plotted in its y-axis position on the left side, and each response is plotted for the second question on the right side, with a line connecting the two datapoints for each individual. By the slopes of the lines, we can visually interpret the changes that individuals make in their responses. Because there are only 5 possible values for each axis, many of the lines will overlap. In order to make the density of the lines apparent, you can fake this by making the axes 1-500 instead of 1-5, and apply a random jitter to the exact location of each plot. Then they won't be precisely overlapping, and you will be able to distinguish areas of high and low density. A numerical approach would be to compute a variable for each respondent showing the direction and extent of the shift in their answer between the two questions. If somebody responded with a 1 on the first question and 3 on the second, you would calculate this change variable to be +2. You can then report on this many different ways; for example, x, y, and z% of respondents responded higher, lower, or the same between the two questions; you can give yourself a nice stacked bar chart going from +4 to -4, etc. These approaches I think make sense in light of Peter Ellis's comment that the overall group behavior obscures the shifts that individuals make, and the interesting thing is the individual shifts. You can look at the groups that have positive values, 0, or negative values and see how they differ in demographics or political affiliation. My main concern is that the data is being framed as "time 1" and "time 2", and it's really not that. It's "question 1" and "question 2". That's not to say that it is without value, it's just an entirely different kind of data collection which we would be expected to tell us more about their current positive and negative opinions, and their current perception of any shift. That's still valuable, it's just different.
How to compare response to same question asked at different time points? One approach is to visualize the changes by individuals. I can't recall the name of this chart type offhand, but this would be a chart in which each individual response to the first question is plotte
51,188
How to compare response to same question asked at different time points?
Your outcomes are paired ordered categories, so the t-test is not appropriate, but the generalizations of McNemar's test would be. Read the linked page to get the general idea, then follow the "Related Tests" links to find which is best for your situation.
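For what it's worth, R's built-in mcnemar.test already handles the k x k case (it then performs Bowker's test of symmetry, one of those generalizations). A sketch, assuming q1 and q2 are the paired 1-5 responses:
tab <- table(q1, q2)
mcnemar.test(tab)   # k x k paired table: generalized McNemar / Bowker test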
How to compare response to same question asked at different time points?
Your outcomes are paired ordered categories, so the t-test is not appropriate, but the generalizations of McNemar's test would be. Read the linked page to get the general idea, then follow the "Relat
How to compare response to same question asked at different time points? Your outcomes are paired ordered categories, so the t-test is not appropriate, but the generalizations of McNemar's test would be. Read the linked page to get the general idea, then follow the "Related Tests" links to find which is best for your situation.
How to compare response to same question asked at different time points? Your outcomes are paired ordered categories, so the t-test is not appropriate, but the generalizations of McNemar's test would be. Read the linked page to get the general idea, then follow the "Relat
51,189
Variance stabilization "rule" for MCMC jumps...anyone?
As far as I know, in general it strongly depends on the landscape of your observable. Last year, my supervisors and I presented one way to adaptively obtain the step sizes for one particular class of landscapes - fractal landscapes occurring in chaotic systems. However, I'm not aware of a systematic way to approach this problem in general. (A better answer than this one would also be very helpful for me!)
Variance stabilization "rule" for MCMC jumps...anyone?
As far as I know, in general it strongly depends on the landscape of your observable. Last year, me and my supervisors presented one way to adaptively obtain the step-sizes for one particular example
Variance stabilization "rule" for MCMC jumps...anyone? As far as I know, in general it strongly depends on the landscape of your observable. Last year, me and my supervisors presented one way to adaptively obtain the step-sizes for one particular example of landscapes - Fractal Landscapes occurring in chaotic systems. However, I'm not aware of a systematic way to approach this problem in general (A better answer than this one would also be very helpful for me!)
Variance stabilization "rule" for MCMC jumps...anyone? As far as I know, in general it strongly depends on the landscape of your observable. Last year, me and my supervisors presented one way to adaptively obtain the step-sizes for one particular example
51,190
How to assess correlation when each variable is measured by independent replicates?
From your description I think the only viable way to go is what you don't want to do: use the bucket as the level of analysis. That is, aggregate the 3 measurements within each bucket and you have your pairings. With this approach you should effectively aggregate out the measurement error. I made a small simulation and compared this approach with a second approach in which I used all possible pairings per bucket to estimate the correlation. The results show that the aggregation method is better at recovering the original correlation:
# I use R and the mvtnorm library to generate the data
library(mvtnorm)
set.seed(12345)   # make reproducible
nbuckets <- 50    # number of buckets
r.buckets <- 0.5  # correlation across buckets

# generate data
Cor <- array(c(1, r.buckets, r.buckets, 1), dim = c(2, 2))
d.bucket <- rmvnorm(nbuckets, sigma = Cor)
measurement.error <- 0.5  # size of error in relation to sd of the data
data <- vector("list", nbuckets)
for (bucket in seq_len(nbuckets)) {
  data[[bucket]] <- list(x = rep(d.bucket[bucket, 1], 3) + rnorm(3, sd = measurement.error),
                         y = rep(d.bucket[bucket, 2], 3) + rnorm(3, sd = measurement.error))
}
# Note that there are separate error terms for the two types of measurements

# aggregating per bucket:
data.agg <- lapply(data, function(x) data.frame(x = mean(x[[1]]), y = mean(x[[2]])))
data.agg <- do.call("rbind", data.agg)
cor(data.agg$x, data.agg$y)   # correlation estimated from the aggregated data

# using all pairs:
all.pairs <- lapply(data, function(x) data.frame(x = x[[1]], y = x[[2]][c(1:3,3:1,2,1,3,2,3,1,1,3,2,3,1,2)]))
all.pairs <- do.call("rbind", all.pairs)
cor(all.pairs$x, all.pairs$y) # correlation estimated from all pairings

If you allow for even larger measurement error (although it already is large) the difference remains. If you allow for a single error term within each bucket the results will be a lot nearer to the real value of r and the difference between the methods will decrease. However, aggregating remains the better tactic. I recommend you play around with it a little with more realistic values. You may even implement a bootstrap approach, as was your initial thought.
How to assess correlation when each variable is measured by independent replicates?
From your description I think the only viable way to go is what you don't want to do: Use the bucket as the level of analyses. That is, aggregate the 3 measurements within each bucket and you have you
How to assess correlation when each variable is measured by independent replicates? From your description I think the only viable way to go is what you don't want to do: Use the bucket as the level of analyses. That is, aggregate the 3 measurements within each bucket and you have your pairings. With this approach you should effectively aggegate out the measurement error. I made a small simulation and compared this approach with a second approach in which I used all possible pairings per bucket to estimate the correlation. The results show, the aggregation method is better in recovering the original correlation: # I use R and the mvtnorm library to generate the data library(mvtnorm) set.seed(12345) # make reproducible nbuckets <- 50 #number of buckets r.buckets <- 0.5 # correlation across buckets # generate data Cor <- array(c(1, r.buckets, r.buckets, 1), dim=c(2,2)) d.bucket <- rmvnorm(nbuckets, sigma = Cor) measurement.error = 0.5 # size of eror in relation to sd of the data data <- vector("list", nbuckets) for (bucket in seq_len(nbuckets)) { data[[bucket]] <- list(x = rep(d.bucket[bucket, 1], 3) + rnorm(3, measurement.error), y = rep(d.bucket[bucket, 2], 3) + rnorm(3, sd = measurement.error)) } # Note that there are separate error terms for the two types of measurements # aggregating per bucket: data.agg <- lapply(data, function(x) data.frame(x = mean(x[[1]]), y = mean(x[[2]]))) data.agg <- do.call("rbind", data.agg) cor(data.agg$x, data.agg$y) # should give .408 # using all pairs: all.pairs <- lapply(data, function(x) data.frame(x = x[[1]], y = x[[2]][c(1:3,3:1,2,1,3,2,3,1,1,3,2,3,1,2)])) all.pairs <- do.call("rbind", all.pairs) cor(all.pairs$x, all.pairs$y) # should give .321 If you allow for even larger measurement error (although it already is large) the difference remains. If you allow for a single error term within each bucket the results will be a lot nearer to the real value of r and the difference between the methods will decrease. However, aggregating remains the better tactic. I recommend you play around with it a little with more realistic values. You may even implement a bootstrap approach as was your initial thought.
How to assess correlation when each variable is measured by independent replicates? From your description I think the only viable way to go is what you don't want to do: Use the bucket as the level of analyses. That is, aggregate the 3 measurements within each bucket and you have you
51,191
Abusing Linear Models under Multicollinearity: Simulation for 'realistic' movement of predictors
Try lavaan. It's an R package that is supposed to handle link functions as well. The problem with your question is the lack of a purpose. Statistical modeling is very difficult to translate and interpret when dealing with variables and abstract hypotheses. X and Z are correlated. If there is large variation in either, you're bound to have a poor model when there is multicollinearity. The information from one is confounded by the other since they "move together". On the other hand, if you're dealing with variables that are relatively reliable in their measurement, and you have an ample sample, it's worth keeping both since the correlation is not as high as, say, 0.85-0.95. Lastly, if the goal is accurate prediction, keep them both. If the goal is statistical validity, use your fit statistics and use Wald tests, LR tests, AIC, BIC, etc. I'd also suggest writing code from scratch to ensure you really understand what you're doing. Packages are for the non-academics. If you want valid answers, you need to have a firm grip on everything happening "under the hood". And it is usually true: the ends justify the means.
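As a small illustration of how correlated predictors behave, here is a sketch in R (all parameter values are arbitrary; it assumes the MASS and car packages):
library(MASS)
set.seed(1)
n  <- 200
XZ <- mvrnorm(n, mu = c(0, 0), Sigma = matrix(c(1, 0.7, 0.7, 1), 2))
x  <- XZ[, 1]; z <- XZ[, 2]
y  <- 1 + 2 * x + 0.5 * z + rnorm(n)
summary(lm(y ~ x + z))    # standard errors inflate as the x-z correlation grows
car::vif(lm(y ~ x + z))   # variance inflation factors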
Abusing Linear Models under Multicollinearity: Simulation for 'realistic' movement of predictors
Try lavaan. It's an R package that is supposed to being built to handle link functions as well. The problem with your question is the lack of a purpose. Statistical modeling is very difficult to trans
Abusing Linear Models under Multicollinearity: Simulation for 'realistic' movement of predictors Try lavaan. It's an R package that is supposed to being built to handle link functions as well. The problem with your question is the lack of a purpose. Statistical modeling is very difficult to translate and interpret when dealing with variables and abstract hypotheses. X and Z are correlated. If there is large variation in either, you're bound to have a poor model when there is multicollinearity. The information from one is confounded by the other since they "move together". On the other hand, if you're dealing with variables that are relatively reliable in their measurement, and you have an ample sample, it's worth keeping both since the correlation is not as high as, say, 0.85-0.95. Lastly, if the goal is accurate prediction, keep them both. If the goal is statistical validity, use your fit statistics and use Wald tests, LR tests, AIC, BIC... etc. I'd also suggest writing code from scratch to ensure you really understand what you're doing. Packages are for the non-academics. If you want valid answers, you need to have a firm grip on everything happening "under the hood". And it is usually true: the ends justify the means.
Abusing Linear Models under Multicollinearity: Simulation for 'realistic' movement of predictors Try lavaan. It's an R package that is supposed to being built to handle link functions as well. The problem with your question is the lack of a purpose. Statistical modeling is very difficult to trans
51,192
Confidence interval for the biggest difference between means
To estimate confidence intervals we could use simulations of the distribution of the biggest difference of the mean when we place half the means on one end and the other half on the other end. It would look for instance like this. Then based on some observed range you can compute which boundaries associate with that, and make your confidence interval. In the example image, a demonstration is given for the case that the observed studentized range would be 4 with a green line that corresponds to an estimate of a 95% confidence interval. The line is between the red lines, which give the upper and lower 2.5% of the samples. A few more things need to be done: We used only the range and compute a worst case for the distribution of the range when all the groups are at the furthest edges. This does not use all the information. When only two means are very far away and the rest is in the middle then this is a different situation from when many means are at the edges. We used a simulation but it would be nice if some expression would be used. This is not so easy. The case for the situation that all the means are equal is the studentized range distribution. That is already difficult to compute, and now you need a generalization of it. computer code for the image ### nu = number in sample ### nl = number of samples in each left side ### d = distance between means ### nr = number of samples in right side ### m = number of samples in the middle ### d2 = position of the middle samples simdif = function(nu, nl, d, nr = nl, m = 0, d2 = d/2) { ### means of the different groups mean = c(rep(0,nl),rep(d2,m),rep(d,nr)) ### compute samples x = matrix(rnorm(nu*(nl+m+nr),mean = mean), nu, byrow = 1) ### compute sample means mu = colMeans(x) ### compute pooled sample variance RSS = sum((x - rep(1,nu) %*% t(as.matrix(mu)))^2) sig = sqrt(RSS/(nu*(nl+m+nr)-nu)) ### studentized range range = (max(mu)-min(mu))/sig return(range) } ### compute for different distances dv = seq(0,5,0.1) ### smp contains the simulation smp = c() ### pct975 stores the upper 97.5-th quantile ### pct025 stores the lower 2.5-th quantile pct975 = c() pct025 = c() ### do the sampling nrep = 10^3 set.seed(1) for (di in dv) { sample = replicate(nrep, simdif(5,4,di)) perc = quantile(sample, probs = c(0.025,0.975)) smp = cbind(smp,sample) pct975 = c(pct975,perc[2]) pct025 = c(pct025,perc[1]) } dcor = rep(1,nrep) %*% t(as.matrix(dv)) ### plot experimental distribution with confidence boundaries plot(dcor, smp, pch = 21, col = rgb(0,0,0,0.05), bg = rgb(0,0,0,0.05), cex = 0.7, main = "simulation for 8 groups of size 5", ylab = "observed studentized range", xlab = "true difference in sigma") lines(dv,pct975, col = 2) lines(dv,pct025, col = 2) ### example confidence interval c1 = which.min(abs(pct975-4)) c2 = which.min(abs(pct025-4)) lines(dv[c(c1,c2)], c(4,4), col = 3, lwd = 4, lty = 2) Edit: I notice a problem with my approach. I looked at the situation of the 8 groups split up into two times 4 groups at the extreme ends. This gives a larger range than other configurations. But that is only good to compute the upper limit and would be good for a one-sided confidence interval. For the lower boundary, we should not look at the worst-case but at the least worse case. That is when only 2 out of the 8 groups are at the far ends and the rest is exactly in the middle. The code above already anticipated this possibility to compute the least worse case. 
With the code above we would use simdif(5,1,di,1,6) (which places 1 group mean at the lower end, 1 group mean on the upper end, and the other 6 in the middle) instead of simdif(5,4,di,4,0). Then we will see lower values for the range and the confidence interval boundaries shift to the right (higher values). So, with this approach, we would need to combine those intervals. It is a bit like the curse of the composite null hypothesis: we need to cover the multiple situations that correspond with the null hypothesis. The effect of different configurations becomes even more extreme when we are comparing more group means. In the image below we look at the distribution of the range of 100 group means. In the left image, the distribution is computed for two times 50 groups with the means at the ends. In the right image, there are only two times one group with the mean at the end, and the rest is in the middle. In the left image, the range is much larger. The limits for the confidence interval are very far from each other. This means that it would be useful to incorporate more data than just the range, and I should not have ignored the distribution of the other 98 groups.
Confidence interval for the biggest difference between means
To estimate confidence intervals we could use simulations of the distribution of the biggest difference of the mean when we place half the means on one end and the other half on the other end. It woul
Confidence interval for the biggest difference between means To estimate confidence intervals we could use simulations of the distribution of the biggest difference of the mean when we place half the means on one end and the other half on the other end. It would look for instance like this. Then based on some observed range you can compute which boundaries associate with that, and make your confidence interval. In the example image, a demonstration is given for the case that the observed studentized range would be 4 with a green line that corresponds to an estimate of a 95% confidence interval. The line is between the red lines, which give the upper and lower 2.5% of the samples. A few more things need to be done: We used only the range and compute a worst case for the distribution of the range when all the groups are at the furthest edges. This does not use all the information. When only two means are very far away and the rest is in the middle then this is a different situation from when many means are at the edges. We used a simulation but it would be nice if some expression would be used. This is not so easy. The case for the situation that all the means are equal is the studentized range distribution. That is already difficult to compute, and now you need a generalization of it. computer code for the image ### nu = number in sample ### nl = number of samples in each left side ### d = distance between means ### nr = number of samples in right side ### m = number of samples in the middle ### d2 = position of the middle samples simdif = function(nu, nl, d, nr = nl, m = 0, d2 = d/2) { ### means of the different groups mean = c(rep(0,nl),rep(d2,m),rep(d,nr)) ### compute samples x = matrix(rnorm(nu*(nl+m+nr),mean = mean), nu, byrow = 1) ### compute sample means mu = colMeans(x) ### compute pooled sample variance RSS = sum((x - rep(1,nu) %*% t(as.matrix(mu)))^2) sig = sqrt(RSS/(nu*(nl+m+nr)-nu)) ### studentized range range = (max(mu)-min(mu))/sig return(range) } ### compute for different distances dv = seq(0,5,0.1) ### smp contains the simulation smp = c() ### pct975 stores the upper 97.5-th quantile ### pct025 stores the lower 2.5-th quantile pct975 = c() pct025 = c() ### do the sampling nrep = 10^3 set.seed(1) for (di in dv) { sample = replicate(nrep, simdif(5,4,di)) perc = quantile(sample, probs = c(0.025,0.975)) smp = cbind(smp,sample) pct975 = c(pct975,perc[2]) pct025 = c(pct025,perc[1]) } dcor = rep(1,nrep) %*% t(as.matrix(dv)) ### plot experimental distribution with confidence boundaries plot(dcor, smp, pch = 21, col = rgb(0,0,0,0.05), bg = rgb(0,0,0,0.05), cex = 0.7, main = "simulation for 8 groups of size 5", ylab = "observed studentized range", xlab = "true difference in sigma") lines(dv,pct975, col = 2) lines(dv,pct025, col = 2) ### example confidence interval c1 = which.min(abs(pct975-4)) c2 = which.min(abs(pct025-4)) lines(dv[c(c1,c2)], c(4,4), col = 3, lwd = 4, lty = 2) Edit: I notice a problem with my approach. I looked at the situation of the 8 groups split up into two times 4 groups at the extreme ends. This gives a larger range than other configurations. But that is only good to compute the upper limit and would be good for a one-sided confidence interval. For the lower boundary, we should not look at the worst-case but at the least worse case. That is when only 2 out of the 8 groups are at the far ends and the rest is exactly in the middle. The code above already anticipated this possibility to compute the least worse case. 
With the code above we would use simdif(5,1,di,1,6) (which places 1 group mean at the lower end, 1 group mean on the upper end, and the other 6 in the middle) instead of simdif(5,4,di,4,0). Then we will see lower values for the range and the confidence interval boundaries shift to the right (higher values). So, with this approach, we would need to combine those intervals. It is a bit the curse of the composite null hypothesis. We need to cover the multiple situations that correspond with the null hypothesis. The effect of different configurations becomes even more extreme when we are comparing more group means. In the image below we look at the distribution of the range of 100 group means. In the left image, the distribution is computed for two times 50 groups with the mean on the ends. In the right image, there are only two times one group with the mean at the end, and the rest is in the middle. In the left image, the range is much larger. The limits for the confidence interval are very far from each other. This means that it would be useful to incorporate more data than just the range, and I should not should have ignored the distribution of the other 98 groups.
Confidence interval for the biggest difference between means To estimate confidence intervals we could use simulations of the distribution of the biggest difference of the mean when we place half the means on one end and the other half on the other end. It woul
51,193
Classifier success rate and confidence intervals
Under the assumption that your data is normally distributed, the standard error can be used, as it is the error that is expected from normally distributed data with the same expectation (mean). We are interested in how many samples fall into the "tails" of the distribution - i.e. how many samples fall outside of a certain range. $\alpha$ here is the confidence level - i.e. if we set $\alpha = 0.95$ then this defines the boundaries within which 95% of the data should lie, in ideal circumstances. We then use the inverse CDF $\Phi^{-1}$ (the quantile function) to calculate what these boundaries are. The corresponding tail probability is given by the "Q-function", $Q(x) = 1-\Phi(x)$, which can be expressed in terms of the error function as: $Q(x) =\tfrac{1}{2} - \tfrac{1}{2} \operatorname{erf} \Bigl( \frac{x}{\sqrt{2}} \Bigr)=\tfrac{1}{2}\operatorname{erfc}\Bigl(\frac{x}{\sqrt{2}}\Bigr).$ (hopefully maths will render soon!) This is available in matlab: the boundary for $\alpha = 0.95$ is -sqrt(2)*erfcinv(2*0.975), which is about 1.96, since $\Phi^{-1}(p) = -\sqrt{2}\,\operatorname{erfcinv}(2p)$. This is actually related to another question that I asked. The answer would be yes if you expect the classification scores to be normally distributed. However I'm not sure this is true - you might expect the scores to be biased towards 1 (if you're using accuracy) and almost certainly not symmetric (i.e. skewed). As given by one of the answers to my question, perhaps something like McNemar's test might be useful, although that's really for comparing classifiers. I guess the best you can do for a single classifier is provide the mean and standard deviation of many train/test splits, as is common practice in research papers.
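As an aside, for a single test set you can sidestep the normality assumption entirely by treating correct/incorrect as binomial. A sketch in R (the counts are made up):
binom.test(85, 100)$conf.int   # exact (Clopper-Pearson) interval for the success rate
prop.test(85, 100)$conf.int    # Wilson-type interval with continuity correction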
Classifier success rate and confidence intervals
Under the assumption that your data is normally distributed, then the standard error can be used as it is the error that is expected from normally distributed data with the same expectation (mean). We
Classifier success rate and confidence intervals Under the assumption that your data is normally distributed, then the standard error can be used as it is the error that is expected from normally distributed data with the same expectation (mean). We are interested in how many samples fall into the "tails" of the distribution - i.e. how many samples fall outside of a certain range. $\alpha$ here is the confidence interval - i.e. if we set $\alpha = 0.95$ then this defines the boundaries of where 95% of the data should lie, in ideal circumstances. We the use the inverse CDF $\phi^-1$ to calculate what these boundaries are. This is also called the "Q-function", and can be expressed in terms of the error function as: $Q(x) =\tfrac{1}{2} - \tfrac{1}{2} \operatorname{erf} \Bigl( \frac{x}{\sqrt{2}} \Bigr)=\tfrac{1}{2}\operatorname{erfc}(\frac{x}{\sqrt{2}}).$ (hopefully maths will render soon!) This is available in matlab. The calculation required is 2*(1-erfcinv(0.975)) or 1-erfcinv(0.95) since $Q(x) = 1-\phi(x)$ This is actually related to another question that I asked. The answer would be yes if you expect the classification scores to be normally distributed. However I'm not sure this is true - you might expect the scores to be biased towards 1 (if you're using accuracy) and almost certainly not symmetric (i.e. skewed). As given by one of the answers to my question, perhaps something like McNemar's test might be useful, although that's really for comparing classifiers. I guess the best you can do for a single classifier is provide the mean and standard deviation of many train/test splits, as is common practice in research papers.
Classifier success rate and confidence intervals Under the assumption that your data is normally distributed, then the standard error can be used as it is the error that is expected from normally distributed data with the same expectation (mean). We
51,194
Non-parametric test for repeated measures and post-hoc single comparisons in R?
It may be worth looking at the Brunner-Munzel test; see e.g. Brunner, Munzel, Puri (1999), "Rank-score tests in factorial designs with repeated measures", Journal of Multivariate Analysis. You can use it even if you have any kind of ordinal scale and not-too-small sample sizes. Yet I'm not sure about your particular model and hypotheses. But the first author wrote some books about nonparametric statistics (some of them in German) with plenty of examples. There or in another of his publications you'll likely also find more about multiple comparisons.
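For a quick look at the two-sample version in R, there is an implementation in the lawstat package (a sketch; x and y stand for the two groups of ordinal scores, and I am assuming the package is installed - the nparLD package implements Brunner's repeated-measures designs more fully):
library(lawstat)
brunner.munzel.test(x, y)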
Non-parametric test for repeated measures and post-hoc single comparisons in R?
It may be worth looking at the Brunner-Munzel-Test, see e.g. Brunner, Munzel, Puri (1999) "Rank-score tests in factorial designs with repeated measures", Journal of Multivariate Analysis. You can use
Non-parametric test for repeated measures and post-hoc single comparisons in R? It may be worth looking at the Brunner-Munzel-Test, see e.g. Brunner, Munzel, Puri (1999) "Rank-score tests in factorial designs with repeated measures", Journal of Multivariate Analysis. You can use it even if you have any kind of ordinal scale and not too small sample sizes. Yet I'm not sure about your particular model and hypotheses. But the first author wrote some books about nonparametric statistics (some of them in German) with plenty of examples. There or in another of his publications you'll likely also find more about multiple comparisons.
Non-parametric test for repeated measures and post-hoc single comparisons in R? It may be worth looking at the Brunner-Munzel-Test, see e.g. Brunner, Munzel, Puri (1999) "Rank-score tests in factorial designs with repeated measures", Journal of Multivariate Analysis. You can use
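As a concrete starting point for the test recommended above, here is a minimal R sketch. It uses the two-sample (independent groups) version of the Brunner-Munzel test from the lawstat package, which is an assumption on my part and not something named in the answer; the full repeated-measures factorial machinery of the cited paper is implemented elsewhere (e.g. in the nparLD package).

# Two-sample Brunner-Munzel test; the 'lawstat' package and the toy ordinal
# data below are assumptions for illustration only.
# install.packages("lawstat")
library(lawstat)

set.seed(1)
ratings_a <- sample(1:5, 30, replace = TRUE)   # hypothetical ordinal ratings
ratings_b <- sample(2:5, 30, replace = TRUE)

brunner.munzel.test(ratings_a, ratings_b)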
51,195
Can I somehow compute variance explained by PC after Oblique rotation in PCA?
I think you're on solid ground. Another useful thing to do is to call up a loading plot ('/plot rotation'), then to reanalyze using varimax rotation and again ask for a loading plot. But I hope you're analyzing objective data and not opinion data: using PCA on the latter is well-known to be a mistake, because it treats as part of an underlying dimension some information that should be treated as unique to particular variables, or even as measurement error.
Can I somehow compute variance explained by PC after Oblique rotation in PCA?
I think you're on solid ground. Another useful thing to do is to call up a loading plot ('/plot rotation'), then to reanalyze using varimax rotation and again ask for a loading plot. But I hope you'
Can I somehow compute variance explained by PC after Oblique rotation in PCA? I think you're on solid ground. Another useful thing to do is to call up a loading plot ('/plot rotation'), then to reanalyze using varimax rotation and again ask for a loading plot. But I hope you're analyzing objective data and not opinion data: using PCA on the latter is well-known to be a mistake, because it treats as part of an underlying dimension some information that should be treated as unique to particular variables, or even as measurement error.
Can I somehow compute variance explained by PC after Oblique rotation in PCA? I think you're on solid ground. Another useful thing to do is to call up a loading plot ('/plot rotation'), then to reanalyze using varimax rotation and again ask for a loading plot. But I hope you'
51,196
Kullback–Leibler divergence between two Wishart distributions
I think it's a typo and $a_s = a, B_s = B$. I don't know what the $\tilde\Gamma$ notation is supposed to represent. If you set it this way, this is consistent with the equations at http://people.ee.duke.edu/~shji/papers/AL_TPAMI.pdf on page 25.
Kullback–Leibler divergence between two Wishart distributions
I think it's a typo and $a_s = a, B_s = B$. I don't know what the $\tilde\Gamma$ notation is supposed to represent. If you set it this way, this is consistent with the equations at http://people.ee.du
Kullback–Leibler divergence between two Wishart distributions I think it's a typo and $a_s = a, B_s = B$. I don't know what the $\tilde\Gamma$ notation is supposed to represent. If you set it this way, this is consistent with the equations at http://people.ee.duke.edu/~shji/papers/AL_TPAMI.pdf on page 25.
Kullback–Leibler divergence between two Wishart distributions I think it's a typo and $a_s = a, B_s = B$. I don't know what the $\tilde\Gamma$ notation is supposed to represent. If you set it this way, this is consistent with the equations at http://people.ee.du
51,197
What is the distribution of the sample variance of the Skellam distribution?
Some of the comments have pointed out that there may be better ways to solve your problem that do not involve finding the distribution of the sample variance of the Skellam distribution. Here I will answer the title question irrespective of those other issues. The exact distribution of the sample variance from a Skellam distribution is complicated, since it is a quadratic function of Poisson random variables. Using the large-sample approximation with the chi-squared distribution (see e.g., O'Neill 2004) you can get the approximate distribution: $$S_n^2 \sim 2 \mu \cdot \frac{\text{Chi-Sq}(DF_n)}{DF_n} \quad \quad \quad DF_n = \frac{2n \mu^2}{3 \mu^2 + (n-3) / (n-1)}.$$ This should get you a reasonable approximation to the distribution of the sample variance, so long as $n$ is not too small.
What is the distribution of the sample variance of the Skellam distribution?
Some of the comments have pointed out that there may be better ways to solve your problem that do not involve finding the distribution of the sample variance of the Skellam distribution. Here I will
What is the distribution of the sample variance of the Skellam distribution? Some of the comments have pointed out that there may be better ways to solve your problem that do not involve finding the distribution of the sample variance of the Skellam distribution. Here I will answer the title question irrespective of those other issues. The exact distribution of the sample variance from a Skellam distribution is complicated, since it is a quadratic function of Poisson random variables. Using the large-sample approximation with the chi-squared distribution (see e.g., O'Neill 2004) you can get the approximate distribution: $$S_n^2 \sim 2 \mu \cdot \frac{\text{Chi-Sq}(DF_n)}{DF_n} \quad \quad \quad DF_n = \frac{2n \mu^2}{3 \mu^2 + (n-3) / (n-1)}.$$ This should get you a reasonable approximation to the distribution of the sample variance, so long as $n$ is not too small.
What is the distribution of the sample variance of the Skellam distribution? Some of the comments have pointed out that there may be better ways to solve your problem that do not involve finding the distribution of the sample variance of the Skellam distribution. Here I will
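To make the quoted approximation concrete, here is a small R simulation sketch; the values of $\mu$ and $n$ are arbitrary, the two Poisson components are taken to have equal rates (so the population variance is $2\mu$), and the DF formula below simply transcribes the expression given in the answer.

# Simulated sample variances of a symmetric Skellam(mu, mu) sample versus the
# scaled chi-square approximation quoted above. Values of mu and n are arbitrary.
set.seed(42)
mu <- 4
n  <- 50

s2 <- replicate(10000, var(rpois(n, mu) - rpois(n, mu)))    # simulated S_n^2

df_n      <- 2 * n * mu^2 / (3 * mu^2 + (n - 3) / (n - 1))  # DF_n from the answer
approx_s2 <- 2 * mu * rchisq(10000, df = df_n) / df_n       # approximate law

probs <- c(0.05, 0.25, 0.5, 0.75, 0.95)
round(rbind(simulated   = quantile(s2, probs),
            approximate = quantile(approx_s2, probs)), 2)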
51,198
What is the distribution of the sample variance of the Skellam distribution?
I'm not sure what the problem is: sample variance is a random variable. I would guess it is distributed as a (re-scaled) Chi-square for this distribution. For a fixed number of samples, the standard error of the variance statistic will be proportional to the population variance you are trying to estimate. Thus I would expect these histograms to get 'wider' as the population variance increases.
What is the distribution of the sample variance of the Skellam distribution?
I'm not sure what the problem is: sample variance is a random variable. I would guess it is distributed as a (re-scaled) Chi-square for this distribution. For a fixed number of samples, the standard e
What is the distribution of the sample variance of the Skellam distribution? I'm not sure what the problem is: sample variance is a random variable. I would guess it is distributed as a (re-scaled) Chi-square for this distribution. For a fixed number of samples, the standard error of the variance statistic will be proportional to the population variance you are trying to estimate. Thus I would expect these histograms to get 'wider' as the population variance increases.
What is the distribution of the sample variance of the Skellam distribution? I'm not sure what the problem is: sample variance is a random variable. I would guess it is distributed as a (re-scaled) Chi-square for this distribution. For a fixed number of samples, the standard e
51,199
Sample-size calculations for Benjamini-Hochberg, Westfall-Young, Holm-Bonferroni methods
What is typically done (though it is easier said than done), is this: do a pilot study which gives you an idea of the data you're handling; based on this (or if a pilot study is not an option, knowledge about the domain), create a data generating model so you can sample data that 'looks like' the true data. Make this so that you can control which observations are cases (in the sense that you want them picked up by the tests) and which are controls (again, in the sense that you want them not to be picked up by the tests). For each of a reasonable set of sample sizes, run 100 or 1000 simulations (i.e., create that many datasets, the more the merrier), run the analysis on each, and calculate how well, e.g., the false discovery rate performs. Now you have estimates, for each sample size, of how well each of your measures will perform, so pick your sample size for the performance you need (if at all attainable), and be conservative about it (i.e., if you can, add another batch of observations). The difficulty in the above is obviously in creating that data generating model. Once again, when in doubt: make the 'true effects' small and add lots of noise to keep it on the conservative side. I'm pretty sure that the original article on FDR contains examples where FWER control performs really badly, so it is to be expected that the sample size calculations could be very different for the different measures.
Sample-size calculations for Benjamini-Hochberg, Westfall-Young, Holm-Bonferroni methods
What is typically done (though it is easier said than done), is this: do a pilot study which gives you an idea of the data you're handling based on this (or if a pilot study is not an option, knowled
Sample-size calculations for Benjamini-Hochberg, Westfall-Young, Holm-Bonferroni methods What is typically done (though it is easier said than done), is this: do a pilot study which gives you an idea of the data you're handling; based on this (or if a pilot study is not an option, knowledge about the domain), create a data generating model so you can sample data that 'looks like' the true data. Make this so that you can control which observations are cases (in the sense that you want them picked up by the tests) and which are controls (again, in the sense that you want them not to be picked up by the tests). For each of a reasonable set of sample sizes, run 100 or 1000 simulations (i.e., create that many datasets, the more the merrier), run the analysis on each, and calculate how well, e.g., the false discovery rate performs. Now you have estimates, for each sample size, of how well each of your measures will perform, so pick your sample size for the performance you need (if at all attainable), and be conservative about it (i.e., if you can, add another batch of observations). The difficulty in the above is obviously in creating that data generating model. Once again, when in doubt: make the 'true effects' small and add lots of noise to keep it on the conservative side. I'm pretty sure that the original article on FDR contains examples where FWER control performs really badly, so it is to be expected that the sample size calculations could be very different for the different measures.
Sample-size calculations for Benjamini-Hochberg, Westfall-Young, Holm-Bonferroni methods What is typically done (though it is easier said than done), is this: do a pilot study which gives you an idea of the data you're handling based on this (or if a pilot study is not an option, knowled
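For what it's worth, here is a rough R sketch of that simulation loop under a toy data-generating model (independent one-sample t-tests with a fixed fraction of true effects); every number, name, and modelling choice in it is an illustrative assumption rather than something from the original answer. p.adjust handles the "BH" and "holm" adjustments; Westfall-Young resampling would need permutation machinery from a package such as multtest.

# Toy power/FDR simulation: m hypotheses, a fraction pi1 of true effects of
# size delta, n observations per test, BH adjustment at level q.
sim_once <- function(n, m = 200, pi1 = 0.1, delta = 0.5, q = 0.05) {
  is_case <- rbinom(m, 1, pi1) == 1
  pvals <- sapply(is_case, function(case) {
    x <- rnorm(n, mean = if (case) delta else 0)
    t.test(x)$p.value
  })
  rejected <- p.adjust(pvals, method = "BH") <= q
  c(power = mean(rejected[is_case]),                           # true effects found
    fdp   = sum(rejected & !is_case) / max(1, sum(rejected)))  # false discovery prop.
}

set.seed(1)
sample_sizes <- c(10, 20, 40, 80)
results <- sapply(sample_sizes, function(n) rowMeans(replicate(200, sim_once(n))))
colnames(results) <- sample_sizes
round(results, 3)   # average power and false discovery proportion per sample size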
51,200
How to model the relationship between geocoded and ungeocoded sales data?
I don't have a succinct answer, but I do have advice and comments too long for a single comment... Having 70% of your sales come from transactions with a name and zip code (making it possible to match to an address in a given trade area) seems to be really really good. I would recommend not getting too bogged down with the untrackable transactions and simply scale the forecast up as needed. But, to this end, you should clarify if your current approach is able to model the sales from existing stores. Specifically, what is the distribution of forecast errors for existing stores? Regarding the 30% of sales that you cannot geocode, I suspect that this is roughly the percentage of cash transactions at each store, and these are of course untrackable. But, I also suspect that cash transactions are generally lower ticket value, and that the ratio of cash vs credit card transactions correlates with the median income of the trade area. So, a useful forecast may be to predict the ratio of cash (and therefore lower ticket) transactions to credit card transactions based on the new store's real estate and trade area. That would give you the "effect size" of the ungeocoded transactions that you need to scale total sales by.
How to model the relationship between geocoded and ungeocoded sales data?
I don't have a succinct answer, but I do have advice and comments too long for a single comment... Having 70% of your sales come from transactions with a name and zip code (making it possible to match
How to model the relationship between geocoded and ungeocoded sales data? I don't have a succinct answer, but I do have advice and comments too long for a single comment... Having 70% of your sales come from transactions with a name and zip code (making it possible to match to an address in a given trade area) seems to be really really good. I would recommend not getting too bogged down with the untrackable transactions and simply scale the forecast up as needed. But, to this end, you should clarify if your current approach is able to model the sales from existing stores. Specifically, what is the distribution of forecast errors for existing stores? Regarding the 30% of sales that you cannot geocode, I suspect that this is roughly the percentage of cash transactions at each store, and these are of course untrackable. But, I also suspect that cash transactions are generally lower ticket value, and that the ratio of cash vs credit card transactions correlates with the median income of the trade area. So, a useful forecast may be to predict the ratio of cash (and therefore lower ticket) transactions to credit card transactions based on the new store's real estate and trade area. That would give you the "effect size" of the ungeocoded transactions that you need to scale total sales by.
How to model the relationship between geocoded and ungeocoded sales data? I don't have a succinct answer, but I do have advice and comments too long for a single comment... Having 70% of your sales come from transactions with a name and zip code (making it possible to match
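To illustrate the scaling idea in the last paragraph of that answer, here is a small R sketch; the store-level numbers, the linear form of the model, and all variable names are hypothetical.

# Predict the untracked (cash) share of sales from trade-area median income,
# then gross up a geocoded-sales forecast for a new store. All data are made up.
existing_stores <- data.frame(
  median_income = c(35, 42, 48, 55, 61, 70) * 1000,
  cash_share    = c(0.38, 0.34, 0.31, 0.27, 0.25, 0.22)
)
fit <- lm(cash_share ~ median_income, data = existing_stores)

new_store       <- data.frame(median_income = 52000)
pred_cash_share <- predict(fit, newdata = new_store)

geocoded_forecast <- 900000                   # hypothetical forecast from tracked sales
pred_cash_share                               # predicted untracked share
geocoded_forecast / (1 - pred_cash_share)     # scaled-up total-sales forecast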