48,701 | R latent class multinomial logit model [closed] | Have you tried the brms package? Its brm function supports multinomial logistic models and category-specific variables as well. Not sure if it will do what you want, though. Something like:
mod <- brm(choice ~ agentVar1 + agentVar2 + cse(choiceVar1),
           family = "categorical", data = yourData,
           prior = c(set_prior("student_t(3, 0, 5)", class = "b")))
might be what you're after. The cse term is a category-specific effect, which I think corresponds to your alternative-specific variables.
Note that brms is Bayesian and does sampling. The Bayesian part may require reasonable priors to converge. (Usually you can get started with its defaults, but some kinds of regressions -- I think categorical is one of them -- are pickier. I copied/pasted a prior I used for a different categorical task that could get you started.) Bayesian statistics requires more thought up front (priors) and more checking of convergence, but it offers a lot of goodness in return. So if you're totally unfamiliar with it, this could be a big leap that doesn't make sense right now, but it's an option to consider.
The model formula is automatically compiled down to Stan code, which compiles down to C++, which compiles to an executable that is run. So you need a C++ compiler on your system for it to work. (One is probably already included if you're running Linux; install the free Developer Tools under MacOS; I'm not sure about Windows.)
brm is unbelievably flexible and supports everything from censoring and truncation to random effects and smoothers. So it pretty much supports any kind of regression you can imagine.
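For a flavor of that flexibility, here is a hedged illustration only (the outcome, censoring indicator, and grouping variable names below are invented, not taken from your problem): a single brm call combining censoring, a smooth term, and a random intercept.
mod2 <- brm(y | cens(censored) ~ s(age) + treatment + (1 | subject),
            family = gaussian(), data = yourData)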
I'm just a very satisfied user of brms, and thought it might solve your current issue and be useful in the future as well.
48,702 | R latent class multinomial logit model [closed] | I have used both mlogit and flexmix. But there is a more general package in R called RSGHB that can easily implement the functions of those packages, as well as some things which are more difficult, such as latent class models. (I don't have enough reputation points to add this as a comment -- strange that one needs more points for a comment...)
48,703 | Plot Pareto tails in QQ-plot for log-normal distributions | If you take logs, it should be normal with an exponential tail.
Just do a normal and an exponential QQ plot of the data; the first should be roughly linear before the kink, the second roughly linear after the kink:
(In this case the change point was at 5.5, and we see what we should - a kink near 5.5, and the first plot roughly linear before and the second roughly linear after the kink. The fact that the first plot looks roughly linear after the kink as well suggests that the Pareto data might in this particular example have been reasonably approximated by a second lognormal.)
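For concreteness, here is a rough R sketch of those two plots on simulated data; the lognormal parameters are made up, and the change point of 5.5 just echoes the one mentioned above (these are not the settings behind the original figures):
set.seed(1)
body <- rlnorm(9000, meanlog = 4, sdlog = 0.7)   # lognormal body
tail <- exp(5.5 + rexp(1000, rate = 2))          # Pareto tail: its log is 5.5 + an exponential
x <- log(c(body, tail))
par(mfrow = c(1, 2))
qqnorm(x, main = "Normal QQ plot of logs"); qqline(x)            # roughly linear before the kink
plot(qexp(ppoints(length(x))), sort(x), main = "Exponential QQ plot of logs",
     xlab = "Exponential quantiles", ylab = "Sorted log(data)")  # roughly linear after the kink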
48,704 | Plot Pareto tails in QQ-plot for log-normal distributions | Here we go. Normal QQ plot and Exponential QQ plot.
48,705 | Plot Pareto tails in QQ-plot for log-normal distributions | If you are interested in testing the Pareto tail, this answer would help you. If you are interested in visualizing the Pareto tail, this gist can plot the empirical CCDF of your data on a log-log scale. A Pareto tail would manifest itself as a straight line.
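Since the gist itself is not reproduced here, a minimal stand-in sketch of an empirical CCDF on log-log axes (the kind of plot meant) might look like this:
plot_ccdf <- function(x) {
  x <- sort(x)
  ccdf <- 1 - (seq_along(x) - 0.5) / length(x)   # empirical P(X > x)
  plot(x, ccdf, log = "xy", type = "l",
       xlab = "x (log scale)", ylab = "P(X > x) (log scale)")
}
# plot_ccdf(your_data)   # a Pareto tail appears as a straight line on the right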
48,706 | I want to learn about ROC curve -- what is the canonical textbook? | I would start with the practically canonical paper by Davis and Goadrich, The relationship between Precision-Recall and ROC curves.
Through it, the origins of ROC analysis can be traced to this book: Evaluation of Diagnostic Systems: Methods from Signal Detection Theory, which is unfortunately hard to get hold of.
Through this, the ROC origins can be tracked to this book: Evaluatio | I want to learn about ROC curve -- what is the canonical textbook?
I would start from this practically canonical paper by Davis and Goadrich The relationship between Precision-Recall and ROC curves.
Through this, the ROC origins can be tracked to this book: Evaluation of Diagnostic Systems:
Methods from Signal Detection Theory which is unfortunately hardly accessible. | I want to learn about ROC curve -- what is the canonical textbook?
I would start from this practically canonical paper by Davis and Goadrich The relationship between Precision-Recall and ROC curves.
Through this, the ROC origins can be tracked to this book: Evaluatio |
48,707 | R rpart cross validation and 1 SE rule, why is the column in cptable called "xstd"? | 'xstd' is simply a poor label; it should say 'xse', since it is actually outputting the standard error, as opposed to the standard deviation. If you select row 7 in the above, then you are properly applying the '1SE Rule' as you intended.
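To make the '1 SE rule' concrete, here is a short sketch (my own, not from the answer) of how it is commonly applied to rpart's cptable, using rpart's built-in kyphosis data for illustration:
library(rpart)
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis, cp = 0.001)
tab <- fit$cptable
best <- which.min(tab[, "xerror"])
threshold <- tab[best, "xerror"] + tab[best, "xstd"]    # minimum xerror plus one standard error
chosen <- min(which(tab[, "xerror"] <= threshold))      # simplest tree within one SE of the minimum
pruned <- prune(fit, cp = tab[chosen, "CP"])            # prune back to that complexity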
48,708 | Posterior variance reduction | As you point out, $$H(X|Y)\le H(X)$$ is generally true, and can be interpreted from a Bayesian perspective as the entropy decrease in $X$ going from the prior to posterior distributions upon incorporating the additional information provided by observation of the data $Y$.
It is also generally true that
$\mathbb{V}ar_X[X] = \mathbb{E}_Y[\mathbb{V}ar_X[X|Y]] + \mathbb{V}ar_Y[\mathbb{E}_X[X|Y]]$
and therefore $$\mathbb{E}_Y[\mathbb{V}ar_X[X|Y]]\le \mathbb{V}ar_X[X]$$ i.e., the mean variance in the posterior, $X|Y$, is less than that of the prior, $X$.
These reductions in both entropy and variance in going from the prior to posterior distributions of $X$ are statements about expectations over $Y$. Recall that conditional entropy is defined as $H(X|Y)=-\mathbb{E}_Y[\mathbb{E}_X[\ln(p(X|Y))]]$, so it really is an average over $Y$ of the entropy of the posterior.
Since these statements are about expectations, they leave open the possibilities that, for some $y$, we could have $$\mathbb{V}ar_X[X|Y=y]\gt\mathbb{V}ar_X[X], \ \ \ \ \text{and/or}\ \ \ \ \ -\mathbb{E}_X[\ln(p(X|Y=y))] \gt H(X).$$
There is an excellent example of this phenomenon provided by this answer. Taking that example a step further, I calculated both the entropy and the variance of the posterior (conditional) distribution using the numbers and the beta/binomial set-up of that example:
In R:
> a0 <- 100
> b0 <- 20
> a <- a0 + 1
> b <- b0 + 9
> (postvar <- a*b / ((a+b)^2 * (a+b+1)))
[1] 0.001323005
> (priorvar <- a0*b0 / ((a0+b0)^2 * (a0+b0+1)))
[1] 0.001147842
> (postentropy <- log(beta(a,b)) - (a-1)*digamma(a) - (b-1)*digamma(b) + (a+b-2)*digamma(a+b))
[1] -1.899637
> (priorentropy <- log(beta(a0,b0)) - (a0-1)*digamma(a0) - (b0-1)*digamma(b0) + (a0+b0-2)*digamma(a0+b0))
[1] -1.97511
So we see that what they found there (that the variance of the parameter's beta distribution increased after the data were collected) holds true of the entropy as well. I used the formula for entropy from here.
Now continuing in python / scipy (I reproduced the above to make contact with that variance example.)
In [1]: from scipy.stats import beta
In [2]: a0, b0 = 100, 20
In [3]: a, b = 100+1, 20+9
In [4]: beta.var(a0, b0) # Prior variance
Out[4]: 0.001147842056932966
In [5]: beta.var(a, b) # Posterior variance
Out[5]: 0.0013230046524233252
In [6]: beta.entropy(a0, b0) # Prior entropy
Out[6]: array(-1.97510984)
In [7]: beta.entropy(a, b) # Posterior entropy
Out[7]: array(-1.89963714)
In [8]: beta.entropy(1, 1) # uniform entropy
Out[8]: array(0.)
In [9]: beta.entropy(1000, 1000) # sharply peaked beta
Out[9]: array(-3.07491)
In [10]: beta.entropy(100000, 100000) # sharply peaked beta
Out[10]: array(-5.37724747)
In [11]: a, b = 100, 20
In [12]: for i in range(14):
...: print(a, b, beta.var(a,b), beta.entropy(a,b))
...: a, b = a+1, b+9
...:
100 20 0.001147842056932966 -1.9751098394063353
101 29 0.0013230046524233252 -1.8996371404250594
102 38 0.0014025184541901867 -1.868390558158729
103 47 0.0014248712288447386 -1.8593872860958125
104 56 0.0014130434782608694 -1.8629279074120686
105 65 0.0013810477751472106 -1.874007722570822
106 74 0.0013375622399563467 -1.889781703473504
107 83 0.0012880161273948166 -1.9085230031335483
108 92 0.001235820895522388 -1.929133400999706
109 101 0.0011831146360597952 -1.9508901652539663
110 110 0.0011312217194570135 -1.9733054939285433
111 119 0.0010809417425674515 -1.9960440293558486
112 128 0.00103273397879207 -2.0188722674726316
113 137 0.0009868366533864541 -2.0416263912578745
So we find that if we keep getting the same result (1 success in 10 trials) 14 times, the variance and entropy of the beta distribution for the parameter first increase, then begin to decrease as the beta distribution becomes better defined by the data.
48,709 | How to handle underdispersion in GLMM (binomial outcome variable) | For binary outcomes, overdispersion or underdispersion are only identifiable (i.e., can only be meaningfully measured) if sets of individuals with identical predictors can be grouped. For example, if the data look like
response fac1 fac2
0 A A
0 A A
1 A B
0 A B
(a ridiculously small sample that will lead to other problems such as complete separation if we actually tried to use it in a model), we could group it by unique sets of predictors:
successes total fac1 fac2
0 2 A A
1 2 A B
and then analyze it as a binomial response with number of trials>1 and use the various techniques suggested above (as well as ordinal models, e.g. the ordinal package in R) to handle over/underdispersion.
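As a sketch of that grouping step in R (the data frame and column names just mirror the toy example above and are otherwise assumed), followed by a crude dispersion check on a plain binomial GLM:
library(dplyr)
grouped <- raw_data %>%
  group_by(fac1, fac2) %>%
  summarise(successes = sum(response), total = n(), .groups = "drop")
fit <- glm(cbind(successes, total - successes) ~ fac1 + fac2,
           family = binomial, data = grouped)
# rough dispersion estimate: Pearson chi-square divided by residual df
sum(residuals(fit, type = "pearson")^2) / df.residual(fit)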
If you have truly binary, ungroupable outcomes (e.g. one of your predictor variables is continuous and unique to individuals, as would be typical in an observational study), then (1) you can't estimate the degree of overdispersion and (2) you can't really worry about it (i.e., there may be additional sources of variability you don't know about, but they just go to inflate your uncertainty; they don't bias your inference). This is well known and stated, e.g., in:
Overdispersion in logistic regression (this answer points out that one can identify dispersion, in a way, by grouping observations with similar (but not identical) predictor values)
Formula for computing the Pearson $\chi^2$, comparison with R
it's impossible to have overdispersion in a binary logistic regression
Gelman and Hill Data Analysis ... 2007 p. 302
logistic regression with binary data ... cannot be overdispersed
48,710 | What does the "different populations" result of a significant group t-test mean exactly? | It certainly doesn't "have to be equivalent to the two groups coming from different underlying populations"; hypothesis testing is a probabilistic endeavor. It could be a type I error. To use your example, if the students were randomized into the two groups and nothing else was done except to give them all a math test (i.e., there was no manipulation), the significant finding would be a type I error by definition. (This is not as strange as it sounds: when subjects are randomized into groups for a longitudinal study in the biomedical field, there is always someone who wants to test that the patients really are the same on their covariates at baseline, which is logically identical to the silly situation I just described.)
On the other hand, you could look at naturally occurring groups. For example, you could assess the students who sit in the front half of the room vs. the students who sit in the back half of the room. It is perfectly reasonable to imagine (both as a former student in various classes, and as an occasional stats teacher) that students who choose to sit in the front or the back may differ in abilities, interest, etc. You could legitimately conclude that those students do come from different populations if you found a significant result. What you could not do, in that exact situation, is assume causality: either that sitting in the front makes you better at math, or that being worse at math makes you sit at the back. In addition, a significant result in this situation could still be a type I error; there is never any guarantee that a significant result isn't a type I error.
Not to belabor the point about causality, but we can form a couple more hypothetical situations / studies. Imagine we randomize the students into two groups and give them each a slightly different version of an otherwise identical math test: one version starts with the following text, "this is a very difficult test comprised of trick questions; most students will fail it", and the other version starts with, "this is a very easy test comprised of basic questions; most students will ace it". Furthermore, imagine that the mean scores of the two groups significantly differ. Now we may legitimately conclude that they come from different populations, and may conclude that the test's introductory statement does have a causal effect on performance (although once again, it could still be a type I error). The meaning of 'come from different populations' is subtle here. The students didn't belong to some pre-existing distinct groups; rather, they have become members of the abstract population of students who have read a certain emotionally charged introductory statement before taking a math test.
In our last hypothetical study, we could mix the assignment of the students by classroom seating preference with the manipulation of the test's introductory statement. If we got significant results, we could legitimately conclude that the students represent different populations, in the sense just described, but would be skating on thin ice if we tried to infer causality for the text. This is because the experimental manipulation is confounded with seating preference (among any number of other possible invisible factors). The result could be due to the text, the seating preference, something unmeasured that is correlated with seating preference, perhaps math anxiety, or be a simple type I error. | What does the "different populations" result of a significant group t-test mean exactly? | It certainly doesn't "have to be equivalent to the two groups coming from different underlying populations", hypothesis testing is a probabilistic endeavor. It could be a type I error. To use your e | What does the "different populations" result of a significant group t-test mean exactly?
48,711 | Fourier bases for a stationary signal & relation to PCA for natural images | I am afraid this will not fully answer your question, but I would still like to write it to give you some keywords, mainly in the hope of sparking further discussion. I will be happy if anybody provides a better answer.
Having said that, I disagree with what @whuber wrote in the comments above ("in no circumstances does a Fourier basis act like PCA" etc.). I think you are right in observing that PCA on smooth more-or-less-translation-invariant signals (or images) results in Fourier-like components. This is a real effect and it must have an explanation.
Mathematically, the covariance matrix with "diagonal structure" that you describe in your question is called a Toeplitz matrix. Toeplitz matrices have many nice properties, one of them being that in the limit of infinite dimension (the Toeplitz operator) the eigenvectors are sines and cosines with increasing frequencies. I think what you are asking for is an intuitive explanation of this fact. I can't provide one.
Of course sines and cosines are also eigenvectors of the Fourier operator, as you rightly observe, which means that Toeplitz and Fourier operators are closely related. I would be very happy if somebody more savvy in math would explain how this works (I strongly suspect that with the correct perspective it becomes almost trivial).
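A quick numerical illustration of the claim (my own sketch, not from the original post): build a Toeplitz covariance matrix for a stationary process and look at its leading eigenvectors, which come out looking like sines and cosines of increasing frequency.
n <- 200
C <- toeplitz(exp(-(0:(n - 1)) / 20))   # stationary, translation-invariant covariance
e <- eigen(C)
matplot(e$vectors[, 1:4], type = "l", lty = 1,
        xlab = "position", ylab = "loading",
        main = "Leading eigenvectors of a Toeplitz covariance matrix")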
Finally, if you are interested in a non-technical description of this phenomenon, I can recommend the paper Interpreting principal component analyses of spatial population genetic variation from Nature Genetics. The authors remark that "sinusoidal mathematical artifacts ... arise generally when PCA is applied to spatial data" and give some nice examples.
48,712 | Expected standard deviation for a sample from a uniform distribution? | The integration is difficult even with as few as $3$ values. Why not estimate the bias in the sample SD by using a surrogate measure of spread? One set of choices is afforded by differences in the order statistics.
Consider, for instance, Tukey's H-spread. For a data set of $n$ values, let $m = \lfloor\frac{n+1}{2}\rfloor$ and set $h = \frac{m+1}{2}$. Let $n$ be such that $h$ is integral; values $n = 4i+1$ will work. In these cases the H-spread is the difference between the $h^\text{th}$ highest value $y$ and $h^\text{th}$ lowest value $x$. (For large $n$ it will be very close to the interquartile range.) The beauty of using the H-spread is that, being based on order statistics, its distribution can be obtained analytically, because the joint PDF of the $j,k$ order statistics $(x,y)$ is proportional to
$$x^{j-1}(1-y)^{n-k}(y-x)^{k-j-1},\ 0\le x\le y\le 1.$$
From this we can obtain the expectation of $y-x$ as
$$s(n; j,k) = \mathbb{E}(y-x) = \frac{k-j}{n+1}.$$
Set $j=h$ and $k=n+1-h$ for the H-spread itself. When $n=4i+1$, $j=i+1$ and $k=3i+1$, whence $s(4i+1; i+1, 3i+1)=\frac{2i}{4i+2}.$
At this point, consider regressing simulated (or even calculated) values of the expected SD ($sd(n)$) against the H-spreads $s(4i+1,i+1,3i+1) = s(n).$ We might expect to find an asymptotic series for $sd(n)/s(n)$ in negative powers of $n$:
$$sd(n)/s(n) = \alpha_0 + \alpha_1 n^{-1} + \alpha_2 n^{-2} + \cdots.$$
By spending two minutes to simulate values of $sd(n)$ and regressing them against computed values of $s(n)$ in the range $5\le n\le 401$ (at which point the bias becomes very small), I find that $\alpha_0 \approx 0.5774$ (which estimates $2\sqrt{1/12}\approx 0.57735$), $\alpha_1\approx 1.091,$ and $\alpha_2 \approx 1.$ The fit is excellent. For instance, basing the regression on the cases $n\ge 9$ and extrapolating down to $n=5$ is a pretty severe test and this fit passes with flying colors. I expect it to give four significant figures of accuracy for all $n\ge 5$.
#
# Expected spread of the j and kth order statistics (k > j) in n
# iid uniform values.
#
sd.r <- function(n,j,k) (k-j)/(n+1)
#
# Expected sd of n iid uniform values.
#
sim <- function(n, effort=10^6) {
x <- matrix(runif(n * ceiling(effort/n)), ncol=n)
y <- apply(x, 1, sd)
mean(y)
}
#
# Study the relationship between sd.r and sim.
#
i <- c(1:7, 9, 15, 30, 300)
system.time({
d <- replicate(9, t(sapply(i, function(i) c(4*i+1, sim(4*i+1), i))))
})
#
# Plot the results.
#
data <- as.data.frame(matrix(aperm(d, c(2,1,3)), ncol=3, byrow=TRUE))
colnames(data) <- c("n", "y", "i")
data$x <- with(data, sd.r(4*i+1,i+1,3*i+1))
plot(subset(data, select=c(x,y)), col="Gray", cex=1.2,
xlab="Expected H-spread", ylab="Expected SD (via simulation)")
fit <- lm(y ~ x + I(x/n) + I(x/n^2) - 1, data=subset(data, n > 5))
j <- seq(1, 1000, by=1/4)
x <- sd.r(4*j+1, j+1, 3*j+1)
y <- cbind(x, x/(4*j+1), x/(4*j+1)^2) %*% coef(fit)
lines(x[-(1:4)], y[-(1:4)], col="#606060", lwd=2, lty=2)
lines(x[(1:5)], y[(1:5)], col="#b0b0b0", lwd=2, lty=3)
points(subset(data, select=c(x,y)), col=rainbow(length(i)), pch=19)
#
# Report the fit.
#
summary(fit)
par(mfrow=c(2,2))
plot(fit)
par(mfrow=c(1,1))
#
# The fit based on all the data.
#
summary(fit <- lm(y ~ x + I(x/n) + I(x/n^2) - 1, data=data))
#
# An alternative fit (fixing alpha_0).
#
summary(fit <- lm((y - sqrt(1/12))/x ~ I(1/n) + I(1/n^2) + I(1/n^3) - 1, data=data)) | Expected standard deviation for a sample from a uniform distribution? | The integration is difficult even with as few as $3$ values. Why not estimate the bias in the sample SD by using a surrogate measure of spread? One set of choices is afforded by differences in the o | Expected standard deviation for a sample from a uniform distribution?
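As a possible follow-up (my own sketch, reusing the objects created above), the last fit can be turned into a quick bias approximation for a new sample size, say n = 21 (i = 5):
i.new <- 5
n.new <- 4 * i.new + 1
s.new <- sd.r(n.new, i.new + 1, 3 * i.new + 1)           # expected H-spread for n = 21
ratio <- predict(fit, newdata = data.frame(n = n.new))   # fitted (E[SD] - sqrt(1/12)) / s(n)
sqrt(1/12) + s.new * ratio                               # approximate expected SD for n = 21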
48,713 | Comparing estimators in Cauchy distribution | Yes, this is the right approach. Let's see where it can go.
Getting Off to a Fast Start
You can usually sneak up on a full-blown simulation by working from the inside out. The R command
x <- rcauchy(10, location=0, scale=1)
generates a sample of $10$ and stores it in the variable x. The next step is to compute the statistics of interest. This is why you stored the original output in a variable: so you could use it multiple times without having to regenerate it.
mean(x)
median(x)
sd(x)
etc, etc.
You can go back to the first calculation (of x), repeat it, and recompute its mean and standard deviation (or any other statistic). That's fun and instructive up to a point. For a simulation, though, you will want to repeat these procedures hundreds (or more) times. This is the point at which you want to encapsulate your work into reliable, reusable, flexible components.
Creating the Simulation
R (and many modern computing platforms) works best by generating all the simulated data at once. Let's respect this preference by writing a modular piece of code--a function--to handle both the sample generation and the simulation iterations (which in older languages would require a double loop). Its inputs evidently will include, at a minimum,
n, the sample size
N, the simulation size (number of samples). Let's default this to $1$ for testing purposes.
..., any other parameters to be passed to rcauchy.
Here we go:
sim <- function(n=10, N=1, ...) {
x <- matrix(rcauchy(n*N, ...), nrow=n) # Each sample is in a separate column
stats <- apply(x, 2, function(y) c(mean(y), sd(y)))# Compute sample statistics
rownames(stats) <- c("Mean", "SD")
return(stats)
}
The reason for giving names to the rows of the output is to help us read it, as in this simulation with three iterations. First the command appears, then its output.
sim(N=3, location=0, scale=1)
[,1] [,2] [,3]
Mean -2.837735 6.471259 0.04831808
SD 4.445549 17.837725 7.00943078
It has been arranged to put all the means in the first row and standard deviations in the second.
With this in hand, let's run a reproducible simulation by setting the random number seed, generating some statistics for independent random samples, and inspecting them. Let's draw histograms rather than using summary statistics to describe the two rows of output (means and sds).
set.seed(17)
x <- sim(n=10, N=500, location=0, scale=1)
hist(x["Mean", ])
hist(x["SD", ])
(To keep myself honest, I try hard to use the same seed every time for any published results so that people will know I didn't play around with the seed in order to tweak the results to make them look more like I expected! For private exploration I do change the seeds, or leave them unset, so that I see new results each time.)
Using, Re-using, and Extending the Simulation Code
Now you can play.
These histograms look awful. Is it because there weren't enough simulations? Rerun the last four lines of code but change $N$ to $5000$, say. It doesn't help. Change location and scale. Still confused? Maybe we should fall back to a more familiar situation. How about generating Normally distributed samples? To do this, let's just extend our workhorse function sim and let the distribution itself also be a parameter!
sim <- function(n=10, N=1, f=rcauchy, ...) {
# `f` is any function to generate random variables. Its first argument must
# be the number of values to output.
x <- matrix(f(n*N, ...), nrow=n) # Samples are in columns
stats <- apply(x, 2, function(y) c(mean(y), sd(y))) # Compute sample statistics
rownames(stats) <- c("Mean", "SD")
return(stats)
}
The only changes made were to include f=rcauchy in the argument list and to replace the one reference to rcauchy by f. Let's try this with a Normal distribution:
set.seed(17)
x <- sim(n=10, N=5000, f=rnorm, mean=0, sd=1)
par(mfrow=c(1,2)) # Shows two plots side-by-side
hist(x["Mean", ], main="Normal Means")
hist(x["SD", ], main="Normal SDs")
This seems to be working. Now you can vary the distribution, the sample size, the simulation size, and the distribution parameters just by editing the x <- sim(...) line and rerunning it. In a minute or two you should obtain a good sense of how a change of distribution parameters changes the simulation results. With a little more exploration (consider plotting the sequences of means and SDs) you should be able to see why the Cauchy sample distributions seem so messed up.
For studying the Cauchy distribution (and other long-tailed distributions), a good modification to make to sim would be to include sample medians in its output.
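One way to make that modification (a sketch that keeps the same structure as sim above):
sim <- function(n=10, N=1, f=rcauchy, ...) {
  x <- matrix(f(n*N, ...), nrow=n)                        # Samples are in columns
  stats <- apply(x, 2, function(y) c(mean(y), median(y), sd(y)))
  rownames(stats) <- c("Mean", "Median", "SD")
  return(stats)
}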
Follow-on Analyses
Finally, what about "mean squared errors" (MSE)? These typically are used to compare an estimator to what it is estimating. I recommend storing (in a variable) the value of anything that will be referred to more than once. Thus, for instance, you can study the mean squared error of the mean like this:
location <- 0
x <- sim(n=10, N=5000, f=rcauchy, location=location, scale=1)
mean((x["Mean", ] - location)^2) # Mean squared error
Consider stashing the calculation of the MSE within a function if you're going to compute it a lot.
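For instance (a sketch), a small helper that works on any row of the simulation output:
mse <- function(estimates, truth) mean((estimates - truth)^2)
mse(x["Mean", ], location)     # MSE of the sample mean
mse(x["Median", ], location)   # MSE of the sample median, if `sim` was extended to report it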
Other assessments of simulation output--plots, descriptive statistics, etc--are just as easily computed because the entire array of relevant simulation output is still available as stored in x.
Going Further: Sources and Principles
For more examples of working simulations in R, search our site for [R] Simulation. Many of these will port nicely to other general-purpose computing platforms like Python, Matlab, and Mathematica. A bit more work might be necessary to port them to specialized statistical platforms like Stata or SAS, but the same principles apply:
Develop the simulation code from the inside out (which is the opposite of good software development practice in general!).
Encapsulate useful blocks of code, such as the simulation code, code to compute the MSE, even code to plot or summarize simulation output.
Respect preferences and idiosyncrasies of the computing environment for best performance (but do not let them dominate your attention: you're doing this to learn about statistics, not about R or Python or whatever).
Extend the functionality of the encapsulated code instead of creating and modifying copies of the original code in order to handle minor variations in what you're doing.
Make the extensions one small step at a time rather than trying to write a do-everything function at the outset.
Use good naming and commenting conventions to document anything that is not immediately clear in the code.
Visualize the output.
Play with your simulations to learn from them. To this end, make them as easy as possible to use and reasonably fast to run.
48,714 | ML vs WLSMV: which is better for categorical data and why? | In one medical research paper, Proitsi et al. (2009) write:
"The WLSMV is a robust estimator which does not assume normally
distributed variables and provides the best option for modelling
categorical or ordered data (Brown, 2006)".
For your convenience, I'm including the cited reference in the reference list below (I use APA format):
Brown, T. (2006). Confirmatory factor analysis for applied research. New York: Guildford.
Proitsi, P., Hamilton, G., Tsolaki, M., Lupton, M., Daniilidou, M., Hollingworth, P., ..., Powell, J. F. (2009, in press). A multiple indicators multiple causes (MIMIC) model of behavioural and psychological symptoms in dementia (BPSD). Neurobiology Aging. doi:10.1016/j.neurobiolaging.2009.03.005
I hope this is helpful and answers your question. | ML vs WLSMV: which is better for categorical data and why? | In one medical research paper, Proitsi et al. (2009) write:
"The WLSMV is a robust estimator which does not assume normally
distributed variables and provides the best option for modelling
catego | ML vs WLSMV: which is better for categorical data and why?
In one medical research paper, Proitsi et al. (2009) write:
"The WLSMV is a robust estimator which does not assume normally
distributed variables and provides the best option for modelling
categorical or ordered data (Brown, 2006)".
For your convenience, I'm including the cited reference in the reference list below (I use APA format):
Brown, T. (2006). Confirmatory factor analysis for applied research. New York: Guildford.
Proitsi, P., Hamilton, G., Tsolaki, M., Lupton, M., Daniilidou, M., Hollingworth, P., ..., Powell, J. F. (2009, in press). A multiple indicators multiple causes (MIMIC) model of behavioural and psychological symptoms in dementia (BPSD). Neurobiology Aging. doi:10.1016/j.neurobiolaging.2009.03.005
I hope this is helpful and answers your question. | ML vs WLSMV: which is better for categorical data and why?
In one medical research paper, Proitsi et al. (2009) write:
"The WLSMV is a robust estimator which does not assume normally
distributed variables and provides the best option for modelling
catego |
48,715 | ML vs WLSMV: which is better for categorical data and why? | The most obvious reason for choosing one over the other would be the kind of fit indices you need. The WLSMV will give you CFI, TLI and RMSEA, which will help you evaluate the fit of a given model. If you need to compare non-nested models, you would need AIC and/or BIC, which aren't available with WLSMV and categorical data. The opposite is true of ML (again, only when dealing with categorical data).
I'm not sure why they recommend WLSMV on the Mplus website, but if you are comparing nested models, the WLSMV is probably the most convenient as it will allow you to both (1) evaluate whether the models provide adequate fit to the data (e.g. CFI > .90 and RMSEA < .5), and (2) use a chi2 difference test to see which model provides the best fit out of a number of competing models. | ML vs WLSMV: which is better for categorical data and why? | The most obvious reason for choosing one over the other would be the kind of fit indices you need. The WLSMV will give you CFI, TLI and RMSEA, which will help you evaluate the fit of a given model. If | ML vs WLSMV: which is better for categorical data and why?
The most obvious reason for choosing one over the other would be the kind of fit indices you need. The WLSMV will give you CFI, TLI and RMSEA, which will help you evaluate the fit of a given model. If you need to compare non-nested models, you would need AIC and/or BIC, which aren't available with WLSMV and categorical data. The opposite is true of ML (again, only when dealing with categorical data).
I'm not sure why they recommend WLSMV on the Mplus website, but if you are comparing nested models, the WLSMV is probably the most convenient as it will allow you to both (1) evalute whether the models provide adequate fit to the data (e.g. CFI > .90 and RMSEA < .5), and (2) use a chi2 difference test to see which models provides the best fit out of a number of competing models. | ML vs WLSMV: which is better for categorical data and why?
The most obvious reason for choosing one over the other would be the kind of fit indices you need. The WLSMV will give you CFI, TLI and RMSEA, which will help you evaluate the fit of a given model. If |
48,716 | When is the standard error of the mean impossibly large for a given data range, when we know the sample size? | I'm looking at the paper now (figures are at the end).
I may have missed something, but so far I see nothing in the paper that states that the 2.9 is intended to be a standard error (I can't find "SE" or "standard error" in the paper, for example).
Edit: Mattias points out in comments that (unlike the html version I linked), the pdf version definitely says 'SE', which invalidates what follows. It means the article has it wrong.
You may have inferred that it is a standard error from the way the information is presented in Table 1, where for example the mean "Age" for age group 60-69 is given as 66.0 $\pm$ 2.9.
However, it's not unusual for $\pm$ to be used in different ways, such as to indicate the standard deviation, or some multiple of the standard error (leaving us to always require it to be spelled out if we are to know for sure what it means).
Bounding the standard deviation and the standard error of the mean
In any case, it's a good question, and such investigation of the information in papers is important.
The largest possible value for a population standard deviation for a bounded continuous variable on $[a,b]$ is $(b-a)/2$; this happens when half the observations are at the lower limit and half at the upper limit.
So for example if we knew that the age group is $60-69$ (assuming ages are recorded only in whole years), the biggest standard deviation possible is $4.5$, and the biggest standard error of the mean would be $4.5/\sqrt{n}$.
Of course, if the sample variance is based on an $n-1$ denominator, then the standard deviation can slightly exceed half the range (in an easily computable way).
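A quick numerical check of both statements (a sketch, placing half of an even-sized sample at each end of the 60-69 range):
x <- rep(c(60, 69), each=50)          # half the observations at each extreme
sqrt(mean((x - mean(x))^2))           # s_n: exactly 4.5, half the range
sd(x)                                 # s_{n-1}: slightly larger, about 4.52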
The simple rule of thumb - the standard deviation shouldn't be more than half the range - is one worth remembering, as long as for small samples we keep in mind it's really $s_n$ that it holds for.
However, we can bound it further. First note that $n$ was odd in the table in question, so that we can't actually place half at each end, and it's possible to compute the (slightly smaller) standard deviation that would allow. Even more importantly, we're told the mean, and that can have a greater impact, reducing the maximum standard deviation to roughly 4.15 ($s_n$) or 4.24 ($s_{n-1}$).
Note that if the age had been uniformly distributed across the ten values 60, 61, ..., 69, it would have given about the right standard deviation: $\sqrt{(10^2-1)/12}\approx 2.87$, close to the reported 2.9.
(It isn't actually uniform - we can tell that because the mean is higher than the center value, but it gives us some idea of the kind of spread we have.)
The "sd < $\frac{1}{2}$ range" rule of thumb remains probably the most useful one to remember, unless you're doing some very detailed investigation. | When is the standard error of the mean impossibly large for a given data range, when we know the sam | I'm looking at the paper now (figures are at the end).
I may have missed something, but so far I see nothing in the paper that states that the 2.9 is intended to be a standard error (I can't find "SE" | When is the standard error of the mean impossibly large for a given data range, when we know the sample size?
I'm looking at the paper now (figures are at the end).
I may have missed something, but so far I see nothing in the paper that states that the 2.9 is intended to be a standard error (I can't find "SE" or "standard error" in the paper, for example).
Edit: Mattias points out in comments that (unlike the html version I linked), the pdf version definitely says 'SE', which invalidates what follows. It means the article has it wrong.
You may have inferred that it is a standard error from the way the information is presented in Table.1, where for example the mean "Age" for age group 60-69 is given as 66.0$\pm$ 2.9.
However, it's not unusual for $\pm$ to be used in different ways, such as to indicate the standard deviation, or some multiple of the standard error (leaving us to always require it to be spelled out if we are to know for sure what it means).
Bounding the standard deviation and the standard error of the mean
In any case, it's a good question, and such investigation of the information in papers is important.
The largest possible value for a population standard deviation for a bounded continuous variable on $[a,b]$ is $(b-a)/2$; this happens when half the observations are at the lower limit and half at the upper limit.
So for example if we knew that the age group is $60-69$ (assuming ages are recorded only in whole years), the biggest standard deviation possible is $4.5$, and the biggest standard error of the mean would be $4.5/\sqrt{n}$.
Of course, if the sample variance is based on an $n-1$ denominator, then the standard deviation can slightly exceed half the range (in an easily computable way).
The simple rule of thumb - the standard deviation shouldn't be more than half the range - is one worth remembering, as long as for small samples we keep in mind it's really $s_n$ that it holds for.
However, we can bound it further. First note that $n$ was odd in the table in question, so that we can't actually place half at each end, and it's possible to compute the (slightly smaller) standard deviation that would allow. Even more importantly, we're told the mean, and that can have a greater impact, reducing the maximum standard deviation to roughly 4.15 ($s_n$) or 4.24 ($s_{n-1}$).
Note that if the age had been uniformly distributed, it would have given about the right standard deviation:
(It isn't actually uniform - we can tell that because the mean is higher than the center value, but it gives us some idea of the kind of spread we have.)
The "sd < $\frac{1}{2}$ range" rule of thumb remains probably the most useful one to remember, unless you're doing some very detailed investigation. | When is the standard error of the mean impossibly large for a given data range, when we know the sam
I'm looking at the paper now (figures are at the end).
I may have missed something, but so far I see nothing in the paper that states that the 2.9 is intended to be a standard error (I can't find "SE" |
48,717 | Data input uncertainty + Monte Carlo simulation + forecasting | You have two sources of uncertainty: the uncertainty in the historical data, and the uncertainty in producing the forecasts given the historical data. The simulation distribution of point forecasts is capturing the uncertainty in the historical data only.
To capture the joint uncertainty, I suggest you simulate a future value from the forecast distribution for each of the synthetic time series. That is, for each synthetic time series compute the point forecast and the forecast variance, and then simulate a value from this distribution. These simulated future values then include the uncertainty in both the forecast distribution and in the historical data.
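Here is a sketch of that recipe with a one-step-ahead forecast; the AR(1) example series, the form of the data uncertainty, and the ARIMA model are all assumptions standing in for whatever you actually use:
set.seed(42)
y <- as.numeric(arima.sim(list(ar=0.6), n=120))     # stand-in for the historical series
draws <- replicate(2000, {
  y.star <- y + rnorm(length(y), sd=0.2)            # one synthetic history (assumed data uncertainty)
  fit <- arima(y.star, order=c(1, 0, 0))            # your forecasting model goes here
  fc <- predict(fit, n.ahead=1)                     # point forecast and its standard error
  rnorm(1, mean=fc$pred, sd=fc$se)                  # simulate a future value from the forecast distribution
})
quantile(draws, c(0.025, 0.975))                    # interval reflecting both sources of uncertainty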
You could compute a prediction interval from the percentiles of the future values and compare its width with the size of the prediction interval produced for each synthetic series. | Data input uncertainty + Monte Carlo simulation + forecasting | You have two sources of uncertainty: the uncertainty in the historical data, and the uncertainty in producing the forecasts given the historical data. The simulation distribution of point forecasts is | Data input uncertainty + Monte Carlo simulation + forecasting
You have two sources of uncertainty: the uncertainty in the historical data, and the uncertainty in producing the forecasts given the historical data. The simulation distribution of point forecasts is capturing the uncertainty in the historical data only.
To capture the joint uncertainty, I suggest you simulate a future value from the forecast distribution for each of the synthetic time series. That is, for each synthetic time series compute the point forecast and the forecast variance, and then simulate a value from this distribution. These simulated future values then include the uncertainty in both the forecast distribution and in the historical data.
You could compute a prediction interval from the percentiles of the future values and compare its width with the size of the prediction interval produced for each synthetic series. | Data input uncertainty + Monte Carlo simulation + forecasting
You have two sources of uncertainty: the uncertainty in the historical data, and the uncertainty in producing the forecasts given the historical data. The simulation distribution of point forecasts is |
48,718 | T-tests for power-law distributed data? | Actually with 4 groups, you'd normally compare means using one ANOVA, not six t-tests (though t-tests would still come in with multiple comparisons or planned contrasts).
If one assumes Pareto distributions, then there are a number of possible approaches. I'll mention only a few, starting with one I think is perhaps easiest and also some nonparametric tests that would work. I'll assume that the left boundary ($x_m$, the scale parameter) is known; if that's not the case, things are more complicated (especially if they're unknown and not necessarily equal).
(1) comparison of the power parameter (shape parameter, $\alpha$) implies a comparison of population means (that is, equal $\alpha$ implies equal mean, different $\alpha$ implies different mean). Take $z = \log(y/x_m)$ and compare scale parameters of the resulting exponential distributions via a generalized linear model; it's straightforward to do ANOVA-type comparisons as well as pairwise comparisons.
(1a) A quick way to deal with unknown-but-equal $x_m$ is to take logs and subtract the smallest value (of the whole set) from all samples (losing that value from the data in the process). You can then proceed as above.
Here's an R example (with $x_m=1$). Similar fits can be done in pretty much any decent statistics package:
# create some (Pareto) data:
y1 <- c(814.660, 1.47520, 1.28029, 2.08808, 13.5882, 25.1290, 10.7137,
10.3032, 13.9075, 1556.73, 1.73512, 1783.04, 2.10658, 56.7400,
1.34085, 4.01592, 1.19537, 2.23376, 22.5796, 12.3961)
y2 <- c(332.949, 13.0680, 1.19512, 9.19466, 1.10640, 11.5778, 4.69242, 2.50173,
1.51986, 184.397, 2.61102, 17.86237, 6.01949, 76.9210, 3.66999)
pdata <- stack(list(y1=y1,y2=y2))
pcompfit <- glm(log(values)~ind,family=Gamma(link=log),data=pdata)
summary(pcompfit, dispersion=1) # dispersion = 1 for exponential
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.9106 0.2236 4.072 4.66e-05 ***
indy2 -0.1311 0.3416 -0.384 0.701
(Dispersion parameter for Gamma family taken to be 1)
Null deviance: 34.674 on 34 degrees of freedom
Residual deviance: 34.528 on 33 degrees of freedom
AIC: 135.73
(Some non-essential lines of output removed)
The test has (correctly) picked up that the parameter was smaller for the second sample, but for such small samples the effect size is too small to tell from random variation.
Exponential-distributed data might also be compared via survreg in R.
(2) With the Pareto assumption (and $x_m\geq 1$), the log of the log (which is monotonic) reduces to a comparison of location with a shift alternative. You could therefore meaningfully compare the original data using Kruskal-Wallis, and it would be fairly easy to interpret the results under such a transformed shift-alternative.
# example - same data as before
# the 'anova-like' test (you don't need to transform to test for equality):
kruskal.test(values~ind,data=pdata)
Kruskal-Wallis rank sum test
data: values by ind
Kruskal-Wallis chi-squared = 0.0544, df = 1, p-value = 0.8155
If you work on the loglog scale you can even produce a confidence interval for the log of the ratio of parameters. [We're not used to thinking of these tests as being a comparison of means, but with the specific distributional assumption and common $x_m$, it is a comparison of population means (as well as medians and so on).]
Here that's shown with a Mann-Whitney, for which interval construction is easy --
# the estimate of shift in log-parameter:
> wilcox.test(log(log(values))~ind,data=pdata,conf.int=TRUE)
Wilcoxon rank sum test
data: log(log(values)) by ind
W = 157, p-value = 0.8307
alternative hypothesis: true location shift is not equal to 0
95 percent confidence interval:
-0.8001905 0.8935245
sample estimates:
difference in location
0.0668152
[However, something more appropriate like Dunn's test should be used for post hoc comparisons with the Kruskal-Wallis; though often suggested (e.g. at the bottom of this section), the Mann-Whitney is not quite the most appropriate choice for that purpose. One reason for that is discussed here.]
Note that this comparison is done 'the opposite way around' from the GLM ("first-second", not "second-first" as in the GLM). In any case it's trivial to flip the sign of the estimate and the end of the confidence interval, but it's important to know that it happens.
[The small difference in p-values is because the K-W is using a chi-square approximation, not the exact distribution of the test statistic. In large samples the approximation is very good.]
In large samples the estimate of the shift parameter should be more consistent between the two tests.
(3) you can do a likelihood ratio test. (This can deal with unknown $x_m$ without difficulty, but you may need to rely on the asymptotic chi-square result) | T-tests for power-law distributed data? | Actually with 4 groups, you'd normally compare means using one ANOVA, not six t-tests (though t-tests would still come in with multiple comparisons or planned contrasts).
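A sketch of that test for two groups, assuming $x_m=1$ is known and reusing y1 and y2 from above (for a Pareto with $x_m=1$ the shape MLE is $\hat\alpha = n/\sum \log y$):
paretoLL <- function(y) {
  a <- length(y) / sum(log(y))                   # MLE of the shape parameter when x_m = 1
  length(y) * log(a) - (a + 1) * sum(log(y))     # maximized log-likelihood
}
LR <- 2 * (paretoLL(y1) + paretoLL(y2) - paretoLL(c(y1, y2)))
pchisq(LR, df=1, lower.tail=FALSE)               # asymptotic chi-square p-value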
If one assumes Pareto distrib | T-tests for power-law distributed data?
Actually with 4 groups, you'd normally compare means using one ANOVA, not six t-tests (though t-tests would still come in with multiple comparisons or planned contrasts).
If one assumes Pareto distributions, then there are a number of possible approaches. I'll mention only a few, starting with one I think is perhaps easiest and also some nonparametric tests that would work. I'll assume that the left boundary ($x_m$, the scale parameter) is known; if that's not the case, things are more complicated (especially if they're unknown and not necessarily equal).
(1) comparison of the power parameter (shape parameter, $\alpha$) implies a comparison of population means (that is, equal $\alpha$ implies equal mean, different $\alpha$ implies different mean). Take $z = \log(y/x_m)$ and compare scale parameters of the resulting exponential distributions via a generalized linear model; it's straightforward to do ANOVA-type comparisons as well as pairwise comparisons.
(1a) A quick way to deal with unknown-but-equal $x_m$ is to take logs and subtract the smallest value (of the whole set) from all samples (losing that value from the data in the process). You can then proceed as above.
Here's an R example (with $x_m=1$). Similar fits can be done in pretty much any decent statistics package:
# create some (Pareto) data:
y1 <- c(814.660, 1.47520, 1.28029, 2.08808, 13.5882, 25.1290, 10.7137,
10.3032, 13.9075, 1556.73, 1.73512, 1783.04, 2.10658, 56.7400,
1.34085, 4.01592, 1.19537, 2.23376, 22.5796, 12.3961)
y2 <- c(332.949, 13.0680, 1.19512, 9.19466, 1.10640, 11.5778, 4.69242, 2.50173,
1.51986, 184.397, 2.61102, 17.86237, 6.01949, 76.9210, 3.66999)
pdata <- stack(list(y1=y1,y2=y2))
pcompfit <- glm(log(values)~ind,family=Gamma(link=log),data=pdata)
summary(pcompfit, dispersion=1) # dispersion = 1 for exponential
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.9106 0.2236 4.072 4.66e-05 ***
indy2 -0.1311 0.3416 -0.384 0.701
(Dispersion parameter for Gamma family taken to be 1)
Null deviance: 34.674 on 34 degrees of freedom
Residual deviance: 34.528 on 33 degrees of freedom
AIC: 135.73
(Some non-essential lines of output removed)
The test has (correctly) picked up that the parameter was smaller for the second sample, but for such small samples the effect size is too small to tell from random variation.
Exponential-distributed data might also be compared via survreg in R.
(2) With the Pareto assumption (and $x_m\geq 1$), the log of the log (which is monotonic) reduces to a comparison of location with a shift alternative. You could therefore meaningfully compare the original data using Kruskal-Wallis, and it would be fairly easy to interpret the results under such a transformed shift-alternative.
# example - same data as before
# the 'anova-like' test (you don't need to transform to test for equality:
kruskal.test(values~ind,data=pdata)
Kruskal-Wallis rank sum test
data: values by ind
Kruskal-Wallis chi-squared = 0.0544, df = 1, p-value = 0.8155
If you work on the loglog scale you can even produce a confidence interval for the log of the ratio of parameters. [We're not used to thinking of these tests as being a comparison of means, but with the specific distributional assumption and common $x_m$, it is a comparison of population means (as well as medians and so on).]
Here that's shown with a Mann-Whitney, for which interval construction is easy --
# the estimate of shift in log-parameter:
> wilcox.test(log(log(values))~ind,data=pdata,conf.int=TRUE)
Wilcoxon rank sum test
data: log(log(values)) by ind
W = 157, p-value = 0.8307
alternative hypothesis: true location shift is not equal to 0
95 percent confidence interval:
-0.8001905 0.8935245
sample estimates:
difference in location
0.0668152
[However, something more appropriate like Dunn's test should be used for post hoc comparisons with the Kruskal-Wallis; though often suggested (e.g. at the bottom of this section, the Mann-Whitney is not quite the most appropriate choice for that purpose. One reason for that is discussed here.]
Note that this comparison is done 'the opposite way around' from the GLM ("first-second", not "second-first" as in the GLM). In any case it's trivial to flip the sign of the estimate and the end of the confidence interval, but it's important to know that it happens.
[The small difference in p-values is because the K-W is using a chi-square approximation, not the exact distribution of the test statistic. In large samples the approximation is very good.]
In large samples the estimate of the shift parameter should be more consistent between the two tests.
(3) you can do a likelihood ratio test. (This can deal with unknown $x_m$ without difficulty, but you may need to rely on the asymptotic chi-square result) | T-tests for power-law distributed data?
Actually with 4 groups, you'd normally compare means using one ANOVA, not six t-tests (though t-tests would still come in with multiple comparisons or planned contrasts).
If one assumes Pareto distrib |
48,719 | Sample mean of random walk | From
$$\bar{y}_T = \frac{1}{T}\sum_{t=1}^T y_t = \frac{T}{T} u_1 + \frac{T-1}{T} u_2 + \cdots + \frac{1}{T}u_T$$
the independence of the $u_i$ implies (along with their unit variance) that
$$\text{Var}(\bar{y}_T) = \left(\frac{T}{T}\right)^2 + \left(\frac{T-1}{T} \right)^2 + \cdots + \left(\frac{1}{T}\right)^2 = \frac{T(1+T)(1+2T)}{6T^2} \gt \frac{T}{3}.$$
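A quick simulation check of this variance formula (a sketch in R):
set.seed(1)
for (T in c(10, 100, 1000)) {
  ybar <- replicate(5000, mean(cumsum(rnorm(T))))   # y_t = u_1 + ... + u_t
  cat(T, var(ybar), T * (1 + T) * (1 + 2 * T) / (6 * T^2), "\n")
}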
Some positive amount of the probability of $\bar{y}_T$ must lie more than $\sqrt{\frac{T}{3}}$ from $\mathbb{E}(\bar{y}_T) = 0$ for otherwise the variance would be less than or equal to $\frac{T}{3}$. Since $\sqrt{\frac{T}{3}}$ diverges as $T\to\infty$, the sample mean cannot converge (in probability). | Sample mean of random walk | From
$$\bar{y}_T = \frac{1}{T}\sum_{i=1}^T y_t = \frac{T}{T} u_1 + \frac{T-1}{T} u_2 + \cdots + \frac{1}{T}u_T$$
the independence of the $u_i$ implies (along with their unit variance) that
$$\text{Var | Sample mean of random walk
From
$$\bar{y}_T = \frac{1}{T}\sum_{i=1}^T y_t = \frac{T}{T} u_1 + \frac{T-1}{T} u_2 + \cdots + \frac{1}{T}u_T$$
the independence of the $u_i$ implies (along with their unit variance) that
$$\text{Var}(\bar{y}_T) = \left(\frac{T}{T}\right)^2 + \left(\frac{T-1}{T} \right)^2 + \cdots + \left(\frac{1}{T}\right)^2 = \frac{T(1+T)(1+2T)}{6T^2} \gt \frac{T}{3}.$$
Some positive amount of the probability of $\bar{y}_T$ must lie more than $\sqrt{\frac{T}{3}}$ from $\mathbb{E}(\bar{y}_T) = 0$ for otherwise the variance would be less than or equal to $\frac{T}{3}$. Since $\sqrt{\frac{T}{3}}$ diverges as $T\to\infty$, the sample mean cannot converge (in probability). | Sample mean of random walk
From
$$\bar{y}_T = \frac{1}{T}\sum_{i=1}^T y_t = \frac{T}{T} u_1 + \frac{T-1}{T} u_2 + \cdots + \frac{1}{T}u_T$$
the independence of the $u_i$ implies (along with their unit variance) that
$$\text{Var |
48,720 | Regression discontinuity design parametric versus non-parametric different result | This is happening because you are restricting the effect of Democratic vote share to be the same on both sides of the cutoff in your third specification, which is a slightly different model. As the magnitude and significance of the interaction term in (2) tells you, the slopes are actually somewhat different:
Graph code:
tw (lfit lne d if inrange(d,-.2,0)) (lfit lne d if inrange(d,0,.2)), legend(off) ylab(#15, angle(0)) ytitle("lne") xtitle("d")
You may want something like my third specification (though it is not clear what you have in mind with the comparison):
. use votex, clear
(102nd Congress)
. /* RD/local linear regression model */
. rd lne d, mbw(100) bw(0.2) ker(rec)
Two variables specified; treatment is
assumed to jump from zero to one at Z=0.
Assignment variable Z is d
Treatment variable X_T unspecified
Outcome variable y is lne
Estimating for bandwidth .2
------------------------------------------------------------------------------
lne | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
lwald | -.1046939 .1147029 -0.91 0.361 -.3295075 .1201197
------------------------------------------------------------------------------
.
. /* OLS Version With Interactions */
. reg lne c.d##i.win if d > -.2 & d < .2 // note that you can specify interaction on the fly
Source | SS df MS Number of obs = 267
-------------+------------------------------ F( 3, 263) = 0.43
Model | .271662281 3 .090554094 Prob > F = 0.7339
Residual | 55.7885045 263 .212123591 R-squared = 0.0048
-------------+------------------------------ Adj R-squared = -0.0065
Total | 56.0601668 266 .210752507 Root MSE = .46057
------------------------------------------------------------------------------
lne | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
d | .84506 .7855123 1.08 0.283 -.7016333 2.391753
1.win | -.1046939 .1257913 -0.83 0.406 -.3523801 .1429923
|
win#c.d |
1 | -.8707604 1.048807 -0.83 0.407 -2.935887 1.194366
|
_cons | 21.44195 .0925378 231.71 0.000 21.25974 21.62415
------------------------------------------------------------------------------
.
. /* OLS Model Without Interaction */
. reg lne d if d >= -.2 & d < 0 // fit a line to the left
Source | SS df MS Number of obs = 109
-------------+------------------------------ F( 1, 107) = 1.58
Model | .245503732 1 .245503732 Prob > F = 0.2116
Residual | 16.6357215 107 .155474033 R-squared = 0.0145
-------------+------------------------------ Adj R-squared = 0.0053
Total | 16.8812252 108 .156307641 Root MSE = .3943
------------------------------------------------------------------------------
lne | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
d | .84506 .6724925 1.26 0.212 -.4880779 2.178198
_cons | 21.44195 .0792234 270.65 0.000 21.28489 21.599
------------------------------------------------------------------------------
. reg lne d if d >= 0 & d < .2 // fit a line to the right
Source | SS df MS Number of obs = 158
-------------+------------------------------ F( 1, 156) = 0.00
Model | .000290102 1 .000290102 Prob > F = 0.9729
Residual | 39.152783 156 .250979378 R-squared = 0.0000
-------------+------------------------------ Adj R-squared = -0.0064
Total | 39.1530731 157 .249382631 Root MSE = .50098
------------------------------------------------------------------------------
lne | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
d | -.0257003 .755932 -0.03 0.973 -1.518883 1.467483
_cons | 21.33725 .0926828 230.22 0.000 21.15418 21.52033
------------------------------------------------------------------------------
.
. di "RD is "21.33725 - 21.44195 // TE is the diff in the intercepts
RD is -.1047 | Regression discontinuity design parametric versus non-parametric different result | This is happening because you are restricting the effect of Democratic vote share to be the same on both sides of the cutoff in your third specification, which is a slightly different model. As the ma | Regression discontinuity design parametric versus non-parametric different result
This is happening because you are restricting the effect of Democratic vote share to be the same on both sides of the cutoff in your third specification, which is a slightly different model. As the magnitude and significance of the interaction term in (2) tells you, the slopes are actually somewhat different:
Graph code:
tw (lfit lne d if inrange(d,-.2,0)) (lfit lne d if inrange(d,0,.2)), legend(off) ylab(#15, angle(0)) ytitle("lne") xtitle("d")
You may want something like my third specification (though it it not clear what you have in mind with the comparison):
. use votex, clear
(102nd Congress)
. /* RD/local linear regression model */
. rd lne d, mbw(100) bw(0.2) ker(rec)
Two variables specified; treatment is
assumed to jump from zero to one at Z=0.
Assignment variable Z is d
Treatment variable X_T unspecified
Outcome variable y is lne
Estimating for bandwidth .2
------------------------------------------------------------------------------
lne | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
lwald | -.1046939 .1147029 -0.91 0.361 -.3295075 .1201197
------------------------------------------------------------------------------
.
. /* OLS Version With Interactions */
. reg lne c.d##i.win if d > -.2 & d < .2 // note that you can specify interaction on the fly
Source | SS df MS Number of obs = 267
-------------+------------------------------ F( 3, 263) = 0.43
Model | .271662281 3 .090554094 Prob > F = 0.7339
Residual | 55.7885045 263 .212123591 R-squared = 0.0048
-------------+------------------------------ Adj R-squared = -0.0065
Total | 56.0601668 266 .210752507 Root MSE = .46057
------------------------------------------------------------------------------
lne | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
d | .84506 .7855123 1.08 0.283 -.7016333 2.391753
1.win | -.1046939 .1257913 -0.83 0.406 -.3523801 .1429923
|
win#c.d |
1 | -.8707604 1.048807 -0.83 0.407 -2.935887 1.194366
|
_cons | 21.44195 .0925378 231.71 0.000 21.25974 21.62415
------------------------------------------------------------------------------
.
. /* OLS Model Without Interaction */
. reg lne d if d >= -.2 & d < 0 // fit a line to the left
Source | SS df MS Number of obs = 109
-------------+------------------------------ F( 1, 107) = 1.58
Model | .245503732 1 .245503732 Prob > F = 0.2116
Residual | 16.6357215 107 .155474033 R-squared = 0.0145
-------------+------------------------------ Adj R-squared = 0.0053
Total | 16.8812252 108 .156307641 Root MSE = .3943
------------------------------------------------------------------------------
lne | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
d | .84506 .6724925 1.26 0.212 -.4880779 2.178198
_cons | 21.44195 .0792234 270.65 0.000 21.28489 21.599
------------------------------------------------------------------------------
. reg lne d if d >= 0 & d < .2 // fit a line to the right
Source | SS df MS Number of obs = 158
-------------+------------------------------ F( 1, 156) = 0.00
Model | .000290102 1 .000290102 Prob > F = 0.9729
Residual | 39.152783 156 .250979378 R-squared = 0.0000
-------------+------------------------------ Adj R-squared = -0.0064
Total | 39.1530731 157 .249382631 Root MSE = .50098
------------------------------------------------------------------------------
lne | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
d | -.0257003 .755932 -0.03 0.973 -1.518883 1.467483
_cons | 21.33725 .0926828 230.22 0.000 21.15418 21.52033
------------------------------------------------------------------------------
.
. di "RD is "21.33725 - 21.44195 // TE is the diff in the intercepts
RD is -.1047 | Regression discontinuity design parametric versus non-parametric different result
This is happening because you are restricting the effect of Democratic vote share to be the same on both sides of the cutoff in your third specification, which is a slightly different model. As the ma |
48,721 | Corrected AIC (AICC) for k-means | The form of $AICc$ of
$$
AICc = AIC + \frac{2k(k+1)}{n-k-1}
$$
was proposed by
Hurvich, C. M.; Tsai, C.-L. (1989), "Regression and time series model selection in small samples", Biometrika 76: 297–307
specifically for a linear regression model with normally distributed errors. For different models, a different correction will need to be derived.
These derivations are often difficult and the resulting correction may be challenging to calculate. For instance
Hurvich, Clifford M., Jeffrey S. Simonoff, and Chih‐Ling Tsai. "Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 60, no. 2 (1998): 271-293.
propose a correction to be used in the case of nonparametric regression models which takes the form
$$
AICc = -2ln(L) + n^2\int_0^1(1-t)^{r/2-2}\prod_{j=1}^{r}(1-t+2d_j)^{-1/2}dt+n\int_0^{\infty}\sum_{i=1}^n\frac{c_{ii}}{1+2d_it}\prod_{i=1}^n(1+2d_it)^{-1/2}dt
$$
I will not go into the details here as they are largely irrelevant but I wanted to illustrate the complexity involved. Actual calculation of this value involves eigen-analysis and numerical integration.
For reasons like this, many authors such as
Burnham, K. P.; Anderson, D. R. (2002), Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd ed.), Springer-Verlag, ISBN 0-387-95364-7
suggest using the form
$$
AICc = AIC + \frac{2k(k+1)}{n-k-1}
$$
regardless of model. Even Hurvich et al. (1998), despite deriving their complicated $AICc$ for nonparametric regression, ultimately conclude that you might as well use the much simpler version for linear regression.
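In R the generic form is a one-liner; here is a sketch (the example model and its parameter count, which includes the residual variance, are purely illustrative):
aicc <- function(loglik, k, n) -2 * loglik + 2 * k + 2 * k * (k + 1) / (n - k - 1)
fit <- lm(dist ~ speed, data=cars)        # any fitted model with a log-likelihood
k <- attr(logLik(fit), "df")              # parameters counted by the log-likelihood
aicc(as.numeric(logLik(fit)), k, nobs(fit))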
Generally this advice seems to work well, giving practically useful results. However, there are circumstances, such as the one you've highlighted, where it doesn't work. You would need to find an appropriate $AICc$ for k-means, or derive one yourself, or simply use $AIC$, which is more generally applicable. | Corrected AIC (AICC) for k-means | The form of $AICc$ of
$$
AICc = AIC + \frac{2k(k+1)}{n-k-1}
$$
was proposed by
Hurvich, C. M.; Tsai, C.-L. (1989), "Regression and time series model selection in small samples", Biometrika 76: 297–307 | Corrected AIC (AICC) for k-means
The form of $AICc$ of
$$
AICc = AIC + \frac{2k(k+1)}{n-k-1}
$$
was proposed by
Hurvich, C. M.; Tsai, C.-L. (1989), "Regression and time series model selection in small samples", Biometrika 76: 297–307
specifically for a linear regression model with normally distributed errors. For different models, a different correction will need to be derived.
These derivations are often difficult and the resulting correction may be challenging to calculate. For instance
Hurvich, Clifford M., Jeffrey S. Simonoff, and Chih‐Ling Tsai. "Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 60, no. 2 (1998): 271-293.
propose a correction to be used in the case of nonparametric regression models which takes the form
$$
AICc = -2ln(L) + n^2\int_0^1(1-t)^{r/2-2}\prod_{j=1}^{r}(1-t+2d_j)^{-1/2}dt+n\int_0^{\infty}\sum_{i=1}^n\frac{c_{ii}}{1+2d_it}\prod_{i=1}^n(1+2d_it)^{-1/2}dt
$$
I will not go into the details here as they are largely irrelevant but I wanted to illustrate the complexity involved. Actual calculation of this value involves eigen-analysis and numerical integration.
For reasons like this, many authors such as
Burnham, K. P.; Anderson, D. R. (2002), Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd ed.), Springer-Verlag, ISBN 0-387-95364-7
suggest to use the form
$$
AICc = AIC + \frac{2k(k+1)}{n-k-1}
$$
regardless of model. Even Hurvich et al. (1998) despite deriving their complicated $AICc$ for nonparametric regression ultimately conclude that you might as well use the much simpler version for linear regression.
Generally this advice seems to work well, giving practically useful results. However there are circumstances, such as the one you've highlighted where it doesn't work. You would need to find an appropriate $AICc$ for k-means, or derive one yourself, or simply use $AIC$ which is more generally applicable. | Corrected AIC (AICC) for k-means
The form of $AICc$ of
$$
AICc = AIC + \frac{2k(k+1)}{n-k-1}
$$
was proposed by
Hurvich, C. M.; Tsai, C.-L. (1989), "Regression and time series model selection in small samples", Biometrika 76: 297–307 |
48,722 | Estimate population quantiles from subpopulations' quantiles | Because the subpopulations are arbitrary and not random, the best you can do is to use interval arithmetic.
Let the quantiles of a subpopulation be $x_{[1]} \le x_{[2]} \le \cdots \le x_{[m]}$ corresponding to percentiles $100 q_1, 100 q_2, \ldots, 100 q_m,$ respectively. This information means that
$100 q_i \%$ of the values are less than or equal to $x_{[i]}$ and
$100 q_{i-1} \%$ of the values are less than or equal to $x_{[i-1]},$ whence
$100(q_i - q_{i-1}) \%$ of the values lie in the interval $(x_{[i-1]}, x_{[i]}].$
In the case $i=1$, take $q_0 = 0$ and $x_{[0]}=-\infty.$ Similarly take $q_{m+1}=1$ and $x_{[m+1]} = \infty.$
Consider the set of all possible distributions consistent with this information. Let $F$ be the CDF of one of them and suppose $x\in (x_{[i-1]}, x_{[i]}]$ for some $i \in \{0, 1, \ldots, m\}.$ From the preceding information we know
$$q_{i-1} \le F(x) \le q_{i}.$$
The set of all possible CDFs therefore forms a "p-box" filling up these intervals. For example, let the quartiles be $\{-1, 0, 1\}$. The corresponding p-box lies between the upper (red) and lower (blue) curves. A possible distribution $F$ consistent with this p-box is shown in black.
The horizontal gray line shows how quantiles can be read off this plot: the 60th percentile, shown, must lie between $0$ and $1$ given that the 50th percentile is at $0$ and the 75th percentile is at $1$. The solid part of the gray line depicts the interval of possible values of the 60th percentile.
When presented with information of this sort for separate populations of sizes $n_1, n_2, \ldots, n_k$, having associated distributions $F_i, i=1, 2, \ldots, k,$ the distribution for the total population will be the weighted average of the $F_i$:
$$F(x) = \frac{n_1 F_1(x) + n_2 F_2(x) + \cdots + n_k F_k(x)}{n_1 + n_2 + \cdots + n_k}.$$
Because we do not know $F$, we replace it by the p-boxes obtained from the available information and use interval arithmetic to perform the computation. Interval arithmetic in this case is simple: when a value $u$ is known to be in an interval $[u_{-}, u_{+}]$ and $v$ is known to lie in $[v_{-}, v_{+}],$ then certainly $u+v$ is in $[u_{-}+v_{-}, u_{+} + v_{+}]$ and a constant positive multiple $\alpha u$ is in $[\alpha u_{-}, \alpha u_{+}].$ And that's all we can say.
For example, suppose we have the following quantile information for subpopulations of sizes $n_i = 5, 4, 7$:
Subpopulation 1 has quartiles at $-2, 0, 1$,
Subpopulation 2 has quintiles at $-1, 0, 1/2, 3/2,$ and
Subpopulation 3 has tertiles at $-4/3, -1/3.$
The resulting p-box computed using interval arithmetic is shown here:
Its 60th percentile (shown by the dashed gray line) must lie between $-4/3$ and $1$, but that is all we know for certain. The distribution of the collective population of $5+4+7=15$ individuals will have a CDF lying somewhere between the upper and lower bounds.
R code to compute and manipulate p-boxes is relatively straightforward to write because R supports step functions (the piecewise constant functions that form the envelopes of empirical p-boxes). The hard work is performed by the functions f (which converts quantile specifications into p-boxes) and mix (which forms positive linear combinations of p-boxes).
#
# Create a pair of functions giving the p-box of a set of quantiles.
#
f <- function(quantiles, quants=seq(0, 1, length.out=length(quantiles)+2)) {
n <- length(quants)
return (list(lower=stepfun(quantiles, quants[-n]),
upper=stepfun(quantiles, quants[-1])))
}
#
# Figure 1: show the p-box for a single population.
#
g <- f(quantiles <- qnorm(c(1,2,3)/4) / qnorm(3/4))
curve(g$upper(x), from=-3, to=2, ylim=c(0,1), n=1001, col="Red", lwd=2,
ylab="Probability", main="Quartiles {-1, 0, 1}")
curve(g$lower(x), add=TRUE, n=1001, col="Blue", lwd=2)
curve(pnorm(x * qnorm(3/4)), add=TRUE)
lines(c(-3, quantiles[2]), c(0.6, 0.6), col="Gray", lty=2)
lines(c(quantiles[2],quantiles[3]), c(0.6, 0.6), col="Gray")
#
# Figure 2: show how to combine p-boxes using interval arithmetic.
#
quantiles <- list(c(-2, 0, 1), c(-1, 0, 1/2, 3/2), c(-4/3, -1/3))
weights <- c(5, 4, 7); weights <- weights / sum(weights)
mix <- function(x, components, weights) {
matrix(unlist(lapply(components, function(u) u(x))), ncol=length(weights)) %*% weights
}
g.upper <- lapply(quantiles, function(q) f(q)$upper)
g.lower <- lapply(quantiles, function(q) f(q)$lower)
curve(mix(x, g.upper, weights), from=-5/2, to=2, ylim=c(0,1),
ylab="Probability", main="P-box for Three Subpopulations", n=1001, col="Red")
curve(mix(x, g.lower, weights), add=TRUE, n=1001, col="Blue")
abline(h=0.6, lty=2, col="Gray") | Estimate population quantiles from subpopulations' quantiles | Because the subpopulations are arbitrary and not random, the best you can do is to use interval arithmetic.
Let the quantiles of a subpopulation be $x_{[1]} \le x_{[2]} \le \cdots \le x_{[m]}$ corresp | Estimate population quantiles from subpopulations' quantiles
Because the subpopulations are arbitrary and not random, the best you can do is to use interval arithmetic.
Let the quantiles of a subpopulation be $x_{[1]} \le x_{[2]} \le \cdots \le x_{[m]}$ corresponding to percentiles $100 q_1, 100 q_2, \ldots, 100 q_m,$ respectively. This information means that
$100 q_i \%$ of the values are less than or equal to $x_{[i]}$ and
$100 q_{i-1} \%$ of the values are less than or equal to $x_{[i-1]},$ whence
$100(q_i - q_{i-1}) \%$ of the values lie in the interval $(x_{[i-1]}, x_{[i]}].$
In the case $i=1$, take $q_0 = 0$ and $x_{[0]}=-\infty.$ Similarly take $q_{m+1}=1$ and $x_{[m+1]} = \infty.$
Consider the set of all possible distributions consistent with this information. Let $F$ be the CDF of one of them and suppose $x\in (x_{[i-1]}, x_{[i]}]$ for some $i \in \{0, 1, \ldots, m\}.$ From the preceding information we know
$$q_{i-1} \le F(x) \le q_{i}.$$
The set of all possible CDFs therefore forms a "p-box" filling up these intervals. For example, let the quartiles be $\{-1, 0, 1\}$. The corresponding p-box lies between the upper (red) and lower (blue) curves. A possible distribution $F$ consistent with this p-box is shown in black.
The horizontal gray line shows how quantiles can be read off this plot: the 60th percentile, shown, must lie between $0$ and $1$ given that the 50th percentile is at $0$ and the 75th percentile is at $1$. The solid part of the gray line depicts the interval of possible values of the 60th percentile.
When presented with information of this sort for separate populations of sizes $n_1, n_2, \ldots, n_k$, having associated distributions $F_i, i=1, 2, \ldots, k,$ the distribution for the total population will be the weighted average of the $F_i$:
$$F(x) = \frac{n_1 F_1(x) + n_2 F_2(x) + \cdots + n_k F_k(x)}{n_1 + n_2 + \cdots + n_k}.$$
Because we do not know $F$, we replace it by the p-boxes obtained from the available information and use interval arithmetic to perform the computation. Interval arithmetic in this case is simple: when a value $u$ is known to be in an interval $[u_{-}, u_{+}]$ and $v$ is known to lie in $[v_{-}, v_{+}],$ then certainly $u+v$ is in $[u_{-}+v_{-}, u_{+} + v_{+}]$ and a constant positive multiple $\alpha u$ is in $[\alpha u_{-}, \alpha u_{+}].$ And that's all we can say.
For example, suppose we have the following quantile information for subpopulations of sizes $n_i = 5, 4, 7$:
Subpopulation 1 has quartiles at $-2, 0, 1$,
Subpopulation 2 has quintiles at $-1, 0, 1/2, 3/2,$ and
Subpopulation 3 has tertiles at $-4/3, -1/3.$
The resulting p-box computed using interval arithmetic is shown here:
Its 60th percentile (shown by the dashed gray line) must lie between $-4/3$ and $1$, but that is all we know for certain. The distribution of the collective population of $5+4+7=15$ individuals will have a CDF lying somewhere between the upper and lower bounds.
R code to compute and manipulate p-boxes is relatively straightforward to write because R supports step functions (the piecewise constant functions that form the envelopes of empirical p-boxes). The hard work is performed by the functions f (which converts quantile specifications into p-boxes) and mix (which forms positive linear combinations of p-boxes).
#
# Create a pair of functions giving the p-box of a set of quantiles.
#
f <- function(quantiles, quants=seq(0, 1, length.out=length(quantiles)+2)) {
n <- length(quants)
return (list(lower=stepfun(quantiles, quants[-n]),
upper=stepfun(quantiles, quants[-1])))
}
#
# Figure 1: show the p-box for a single population.
#
g <- f(quantiles <- qnorm(c(1,2,3)/4) / qnorm(3/4))
curve(g$upper(x), from=-3, to=2, ylim=c(0,1), n=1001, col="Red", lwd=2,
ylab="Probability", main="Quartiles {-1, 0, 1}")
curve(g$lower(x), add=TRUE, n=1001, col="Blue", lwd=2)
curve(pnorm(x * qnorm(3/4)), add=TRUE)
lines(c(-3, quantiles[2]), c(0.6, 0.6), col="Gray", lty=2)
lines(c(quantiles[2],quantiles[3]), c(0.6, 0.6), col="Gray")
#
# Figure 2: show how to combine p-boxes using interval arithmetic.
#
quantiles <- list(c(-2, 0, 1), c(-1, 0, 1/2, 3/2), c(-4/3, -1/3))
weights <- c(5, 4, 7); weights <- weights / sum(weights)
mix <- function(x, components, weights) {
matrix(unlist(lapply(components, function(u) u(x))), ncol=length(weights)) %*% weights
}
g.upper <- lapply(quantiles, function(q) f(q)$upper)
g.lower <- lapply(quantiles, function(q) f(q)$lower)
curve(mix(x, g.upper, weights), from=-5/2, to=2, ylim=c(0,1),
ylab="Probability", main="P-box for Three Subpopulations", n=1001, col="Red")
curve(mix(x, g.lower, weights), add=TRUE, n=1001, col="Blue")
abline(h=0.6, lty=2, col="Gray") | Estimate population quantiles from subpopulations' quantiles
Because the subpopulations are arbitrary and not random, the best you can do is to use interval arithmetic.
Let the quantiles of a subpopulation be $x_{[1]} \le x_{[2]} \le \cdots \le x_{[m]}$ corresp |
48,723 | Estimate population quantiles from subpopulations' quantiles | Since your samples collectively constitute the entire population, you can re-aggregate to get some ideas about the overall distribution.
Let $N_s$ be the number of subpopulations, $n_i$ be the size of subpopulation $i$, $N_q$ be the number of quantiles, $q_{ij}$ be the set of quantiles for subpopulation $i$, and $p_{ij}$ be the associated percentile for that quantile (e.g. the first quartile would have $p=0.25$), where the quantile levels are common across subpopulations, i.e. $p_{kj}=p_{lj}$ for all $k$ and $l$.
To get an estimate of the overall population's percentiles, you can aggregate your subpopulation data as follows:
Form the set of paired observations $(q_{ij},n_{ij})$ for each subpopulation, where $n_{ij} = p_{ij}n_i$. Let $N=\sum n_{i}$.
Now, you need to form the Kaplan-Meier estimate of the cumulative distribution function. Since the Kaplan-Meier estimator is for right-censored data, you will need to multiply your quantiles by -1 to get "reversed quantiles", $r_{ij}$.
Group all quantiles together and form an ordered set of reversed quantiles, $r_{(k)}$ where k=1 corresponds to the smallest reversed quantile and k=N corresponds to the largest.
You will be estimating the survival function, S(t), at each $r_{(k)}$. The number "at risk" (from the Kaplan-Meier estimate) at $r_{(k)}$ will be N minus the sum of the $n_{ij}$ whose reversed quantiles are less than (but NOT equal to) $r_{(k)}$. The number of "deaths" will be the $n_{ij}$ whose reversed quantile equals $r_{(k)}$.
Use the formulas for S(t) in the wikipedia link to estimate the survival function using the above terms.
Now, the estimated CDF can be calculated at each actual ordered quantile, $q_{(k)}$: $\hat F(q_{(k)}) = 1 - S(r_{(N-k+1)})$.
This is a nonparametric estimate of the left-censored CDF. It will not be able to give an estimate below the lowest quantile.
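Here is a rough sketch of the aggregation. With no censoring in the grouped data the Kaplan-Meier step reduces to a weighted empirical CDF, so the sketch computes that directly, placing at each reported quantile the probability increment since the previous one times the subpopulation size; the three subpopulations (sizes 5, 4, 7 with quartiles, quintiles and tertiles) are assumed for illustration:
quantiles <- list(c(-2, 0, 1), c(-1, 0, 1/2, 3/2), c(-4/3, -1/3))   # assumed subpopulation quantiles
probs     <- list(c(1, 2, 3)/4, c(1, 2, 3, 4)/5, c(1, 2)/3)          # their percentile levels
n         <- c(5, 4, 7)                                              # assumed subpopulation sizes
mass <- unlist(Map(function(p, ni) diff(c(0, p)) * ni, probs, n))    # n_ij placed at each quantile
q    <- unlist(quantiles)
o    <- order(q)
data.frame(quantile = q[o], Fhat = cumsum(mass[o]) / sum(n))         # estimated CDF at the pooled quantiles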
A more refined estimate can be made if you are willing to assume a particular distribution for the population (or perhaps a mixture of several distributions). In that case, you can fit a distribution via maximum likelihood:
Use the empirical CDF from the nonparametric estimate above to guess at the general features of the parametric distribution you want to use (e.g., normal, lognormal, gamma, exponential, etc.). Let $F(x|\theta)$ be the CDF for your assumed distribution given a set of parameters, $\theta$. For example, the normal distribution has $\theta = (\mu, \sigma^2)$.
Form the "censored" likelihood function for your data as follows: $L(\{\# q_{(k)},q_{(k)})\}|\theta) = \prod\limits_{\{(\# q_{(k)},q_{(k)})\}} (F(q_{(k)}|\theta))^{\#q_{(k)}}$. Where the $q_{(k)}$ are the ordered quantiles as before, and $\# q_{(k)}$ is the sum of $n_{ij}$ with $q_{ij}$ less than or equal to $q_{(k)}$
Now, you need to maximize L as a function of its parameters. To make the math easier, you can take the logarithm of L to get the log-likelihood, which turns the product into a sum.
The set of parameters $\theta$ that maximizes the likelihood will give you a fitted distribution for your "censored" data. Now, you can estimate any desired percentiles from the fitted distribution.
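A sketch of such a fit for an assumed normal population, using optim. Note that this sketch maximizes a standard grouped-data likelihood built from the probability of each inter-quantile interval, $F(q_{(k)}|\theta)-F(q_{(k-1)}|\theta)$, a mild variant of the product written above; the pooled quantiles and counts are made up for illustration:
q   <- c(-2, -1, 0, 1, 2)                 # assumed pooled ordered quantiles
cnt <- c(2, 3, 5, 3, 2, 1)                # assumed counts in (-Inf,-2], (-2,-1], ..., (2, Inf)
negll <- function(par) {
  p <- diff(pnorm(c(-Inf, q, Inf), mean=par[1], sd=exp(par[2])))
  -sum(cnt * log(p))
}
fit <- optim(c(0, 0), negll)              # Nelder-Mead over (mean, log sd)
est <- c(mean=fit$par[1], sd=exp(fit$par[2]))
qnorm(0.9, est["mean"], est["sd"])        # any desired percentile of the fitted distribution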
Those are two possible approaches to your data, depending on how "refined" you need your estimates to be. Of course, with greater precision comes more potential bias if your data do not actually follow your assumed distribution. | Estimate population quantiles from subpopulations' quantiles | Since your samples collectively constitute the entire population, you can re-aggregate to get some ideas about the overall distribution.
Let $N_s$ be the number of subpopulations, $n_i$ be the size of | Estimate population quantiles from subpopulations' quantiles
Since your samples collectivey constitiute the entire population, you can re-aggregate to get some ideas about the overall distribution.
Let $N_s$ be the number of subpopulations, $n_i$ be the size of subpopulation $i$,$N_q$ be the number of quantiles, and $q_{ij}$ be the set of quantiles for subpopulation $i$, and let $p_{ij}$ be the associated percentile for that quantile (e.g. the first quartile would have p=0.25) where $p_{kj}=p_{lj}$ .
To get an estimate of the overall population's percentiles, you can aggregate your subpopulation data as follows:
Form the set of paired observations $(q_{ij},n_{ij})$ for each subpopulation, where $n_{ij} = p_{ij}n_i$ for each subpopulation. Let $N=\sum n_{i}$
Now, you need to form the Kaplan Meier estimate of the cumulative distribution function. Since the Kaplan meier estimator is for right censored data, you will need to multiply your quantiles by -1 to get "reversed quantiles", $r_{ij}$.
Group all quantiles together and form an ordered set of reversed quantiles, $r_{(k)}$ where k=1 corresponds to the smallest reversed quantile and k=N corresponds to the largest.
You will be estimaing the survival function, S(t), at each $r_{(k)}$. The number "at risk" (from the Kaplan meier estimate) at $r_{(k)}$ will be N minus the sum of the $n_{ij}$ whose reversed quantiles are less than (but NOT equal to) $r_{(k)}$. The number of "deaths" will be the the $n_{ij}$ whose reversed quantile equals $r_{(k)}$.
Use the formulas for S(t) in the wikipedia link to estimate the survival function using the above terms.
Now, the estimated CDF can be calculated at each actual ordered quantile, $q_{(k)}$: $\hat F(q_{(k)}) = 1 - S(r_{(N-k+1)})$.
This is a nonparametric estimate of the left-censored CDF. It will not be able to give an estimate below the lowest quantile.
A more refined estimate can be made if you are willing to assume a particular distribution for the population (or perhaps a mixture of several distributions). In that case, you can fit a distribution via maximum likelihood:
Use the empirial CDF from (6) to guess at the general features of the parametric distribution you want to use (e.g., normal, lognormal, gamma, exponential etc). Let $F(x|\theta)$ be the CDF for your assumed distribution given a set of parameters, $\theta$. For example, the normal distribution has $\theta = (\mu, \sigma^2)$.
Form the "censored" likelihood function for your data as follows: $L(\{\# q_{(k)},q_{(k)})\}|\theta) = \prod\limits_{\{(\# q_{(k)},q_{(k)})\}} (F(q_{(k)}|\theta))^{\#q_{(k)}}$. Where the $q_{(k)}$ are the ordered quantiles as before, and $\# q_{(k)}$ is the sum of $n_{ij}$ with $q_{ij}$ less than or equal to $q_{(k)}$
Now, you need to maximize L as a function of its parameters. To make the math easier, you can take the logarithm of L to get the LogLikelihood, which turns the product into a sum
The set of parameters, $\theta$ that maximizes the likelihood will give you a fitted distribution to your "censored" data. Now, you can estimate any desired percentiles from the fitted distribution.
Those are two possible approaches to your data, depending on how "refined" you need your esimates to be..of couse, with greater precision comes more potential bias if your data do not actually follow your assumed distriubtion. | Estimate population quantiles from subpopulations' quantiles
Since your samples collectivey constitiute the entire population, you can re-aggregate to get some ideas about the overall distribution.
Let $N_s$ be the number of subpopulations, $n_i$ be the size of |
48,724 | How to summarize and understand the results of DBSCAN clustering on big data? | Are you sure that clustering big data is actually used anywhere?
As far as I can tell, it is not used. Everybody uses classification, nobody uses clustering. Because the clustering problem is much harder, and will require manual analysis of the results.
K-means: the usual Lloyd algorithm is naive parallel, and thus trivial to implement on Hadoop. But at the same time, it does not make sense to use k-means on big data. The reason is simple: there is no dense vector big data. K-means works well for say up to 10 dimensions. With double precision, I need 80 bytes per record then. A modest computer with 1 GB of RAM can then already fit some 13 million vectors into main memory. I have machines with 128 GB of RAM...
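The arithmetic behind that estimate, as a one-liner:
2^30 / (10 * 8)    # bytes in 1 GB divided by 80 bytes per record: about 13.4 million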
So you will have a hard time coming up with a real data set where:
I run out of memory on a single computer.
k-means produces notable results. (On high dimensional data, k-means is usually only as effective as random voronoi partitions!)
the result improves over a sample.
The last point is important: k-means computes means. The quality of a mean does not infinitely improve when you add more data. You only get marginal changes (if the result is stable, i.e. k-means worked). Most likely, your distributed computation already lost more precision on the way than you gain in the end...
Now for DBSCAN: I'm not aware of a popular distributed implementation. Every now and then a new parallel DBSCAN is proposed, usually using grids, but I've never seen one being used in practice or made publicly available. Again, there are problems with the availability of interesting data where it would make sense to use DBSCAN.
For big data, how do you set the minPts and epsilon parameters? If you get this wrong, you won't have any clusters, or everything will be a single large cluster.
If your data is low-dimensional, see above for k-means. Using techniques such as R*-trees and grids, a single computer can already cluster low-dimensional data with billions of points using DBSCAN.
If you have complex data, where indexing no longer works, DBSCAN will scale quadratically and thus be an inappropriate choice for big data.
Many platforms/companies like to pretend they can reasonably run k-means on their cluster. But the fact is, it does not make sense this way, and it's just marketing and a tech demo. That is why they usually use random data to show off, or the dreaded broken KDDCup1999 data set (which I still can cluster faster on a single computer than on any Hadoop cluster!).
So what is really done in practice
The Hadoop cluster is your data warehouse (rebranded as fancy new big data).
You run distributed preprocessing on your raw data, to massage it into shape.
The preprocessed data is small enough to be clustered on a single computer, with more advanced algorithms (that may even scale quadratically, and do not have to be naive parallel)
You sell it to your marketing department
Your marketing department sells it to the CSomethingO.
Everybody is happy, because they are now big data experts. | How to summarize and understand the results of DBSCAN clustering on big data? | Are you sure that clustering big data is actually used anywhere?
As far as I can tell, it is not used. Everybody uses classification, nobody uses clustering. Because the clustering problem is much har | How to summarize and understand the results of DBSCAN clustering on big data?
Are you sure that clustering big data is actually used anywhere?
As far as I can tell, it is not used. Everybody uses classification, nobody uses clustering. Because the clustering problem is much harder, and will require manual analysis of the results.
K-means: the usual Lloyd algorithm is naive parallel, and thus trivial to implement on Hadoop. But at the same time, it does not make sense to use k-means on big data. The reason is simple: there is no dense vector big data. K-means works well for say up to 10 dimensions. With double precision, I need 80 bytes per record then. A modest computer with 1 GB of RAM can then already fit some 13 million vectors into main memory. I have machines with 128 GB of RAM...
So you will have a hard time coming up with a real data set where:
I run out of memory on a single computer.
k-means produces notable results. (On high dimensional data, k-means is usually only as effective as random voronoi partitions!)
the result improves over a sample.
The last point is important: k-means computes means. The quality of a mean does not infinitely improve when you add more data. You only get marginal changes (if the result is stable, i.e. k-means worked). Most likely, your distributed computation already lost more precision on the way than you gain in the end...
Now for DBSCAN: I'm not aware of a popular distributed implementation. Every now and then a new parallel DBSCAN is proposed, usually using grids, but I've never seen one being used in practice or publicly available. Again, there are problems with the availability of interesting data where it would make sense to use DBSCAN.
For big data, how do you set the minPts and epsilon parameters? If you get this wrong, you won't have any clusters; or everything will be a single large cluster.
If your data is low-dimensional, see above for k-means. Using techniques such as R*-trees and grids, a single computer can already cluster low-dimensional data with billions of points using DBSCAN.
If you have complex data, where indexing no longer works, DBSCAN will scale quadratically and thus be an inappropriate choice for big data.
Many platforms/companies like to pretend they can reasonably run k-means on their cluster. But the fact is, it does not make sense this way, and it's just marketing and a tech demo. That is why they usually use random data to show off, or the dreaded broken KDDCup1999 data set (which I still can cluster faster on a single computer than on any Hadoop cluster!).
So what is really done in practice
The Hadoop cluster is your data warehouse (rebranded as fancy new big data).
You run distributed preprocessing on your raw data, to massage it into shape.
The preprocessed data is small enough to be clustered on a single computer, with more advanced algorithms (that may even scale quadratically, and do not have to be naive parallel)
You sell it to your marketing department
Your marketing department sells it to the CSomethingO.
Everybody is happy, because they are now big data experts. | How to summarize and understand the results of DBSCAN clustering on big data?
Are you sure that clustering big data is actually used anywhere?
As far as I can tell, it is not used. Everybody uses classification, nobody uses clustering. Because the clustering problem is much har |
48,725 | How to summarize and understand the results of DBSCAN clustering on big data? | It is not true that we need to understand the clusters in every application. Actually if you have few well established clusters, probably you will soon end up doing some supervised learning rather than clustering: do the clustering of choice, check results, assign labels to cluster members, train using a supervised method based on the assigned labels.
I can give you a simple example where the number of clusters can be huge and still useful: Grouping of similar news stories or tweets. For example, in a web site we want to provide links from a news story to other similar stories. This requires finding similar news, i.e. the cluster where each news story belongs, without necessarily needing to assign "labels" to each cluster. | How to summarize and understand the results of DBSCAN clustering on big data? | It is not true that we need to understand the clusters in every application. Actually if you have few well established clusters, probably you will soon end up doing some supervised learning rather tha | How to summarize and understand the results of DBSCAN clustering on big data?
It is not true that we need to understand the clusters in every application. Actually if you have few well established clusters, probably you will soon end up doing some supervised learning rather than clustering: do the clustering of choice, check results, assign labels to cluster members, train using a supervised method based on the assigned labels.
I can give you a simple example where the number of clusters can be huge and still useful: Grouping of similar news stories or tweets. For example, in a web site we want to provide links from a news story to other similar stories. This requires finding similar news, i.e. the cluster where each news story belongs, without necessarily needing to assign "labels" to each cluster. | How to summarize and understand the results of DBSCAN clustering on big data?
It is not true that we need to understand the clusters in every application. Actually if you have few well established clusters, probably you will soon end up doing some supervised learning rather tha |
48,726 | Minimizing number of questions of questionnaire from past binary responses | Sounds a lot like a computerized adaptive testing (CAT) application. This is just one small hint, not an attempt at a comprehensive solution, so I hope others will keep the answers coming.
I'm assuming that you're hoping to predict responses to the unasked questions from an optimally small subset of questions to such a degree of accuracy that there is effectively no need to actually ask the questions to which the answers can be predicted from previous responses. Specifically, I'm assuming a couple things about your original meaning:
"Some positive/negative features exclude others." = Some features can be used to predict the absence of others very accurately, maybe even without any error at all.
"In order to 'cut' the number of plausible subsequent questions" = The purpose is to reduce the number of follow-up questions that mostly provide information that is redundant with information collected by already-asked questions.
If I've misinterpreted these parts, my hint may be misleading; otherwise, I think I'm at least pointing in the right general direction. I don't know much more about CAT than this general purpose that it serves, so I expect you'd be better equipped than me to efficiently study it further.
One other idea concerns a slightly different approach, whereby you'd try to reduce the overall number of questions you care to ask at all of future users. You could begin to do this by analyzing the latent factor structure of your existing data using something like multidimensional item response theory (MIRT; see, for instance, Maydeu-Olivares, 2001; Osteen, 2010). If you find that a lot of your items provide information about the same underlying factors, this could help you understand your total pool of information in terms of a shorter list of broader factors. If you find that list (of the latent factors in your set of questions) contains enough of what you really want to know, you might choose to eliminate some questions that don't predict the latent factors very well and don't provide other important information. You might even consider retaining only one or two of the items that best predict each latent factor, depending on what you ultimately want to do with these data. This tangential idea of mine assumes that some of your questions are disposable. Also, disposing some questions would probably only simplify your problem somewhat, not really solve it.
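As a very rough sketch of that item-reduction idea (my own illustration, not from this answer): an ordinary linear factor analysis of the 0/1 response matrix, which is only a crude stand-in for a proper MIRT fit of the binary items in dedicated software.
# Crude stand-in for MIRT (illustration only): linear factor analysis of a 0/1 response
# matrix, then inspecting loadings to see which items carry roughly the same information.
# A real analysis would fit an item response model (e.g. 2PL) to the binary data instead.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(500, 20))   # hypothetical 500 users x 20 yes/no items

fa = FactorAnalysis(n_components=3).fit(responses)
loadings = fa.components_                        # shape: (3 factors, 20 items)
# items whose loadings sit almost entirely on a factor already covered by several
# other items are candidates for dropping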
Also, I think both CAT and MIRT would assume that your binary data are indicators of (an) underlying continuous dimension(s). If that's not the case, both ideas may be misleading, and you might want to say a little more about the nature of your data to help inform future answers (or edits to my own). | Minimizing number of questions of questionnaire from past binary responses | Sounds a lot like a computerized adaptive testing (CAT) application. This is just one small hint, not an attempt at a comprehensive solution, so I hope others will keep the answers coming.
I'm assumi | Minimizing number of questions of questionnaire from past binary responses
Sounds a lot like a computerized adaptive testing (CAT) application. This is just one small hint, not an attempt at a comprehensive solution, so I hope others will keep the answers coming.
I'm assuming that you're hoping to predict responses to the unasked questions from an optimally small subset of questions to such a degree of accuracy that there is effectively no need to actually ask the questions to which the answers can be predicted from previous responses. Specifically, I'm assuming a couple things about your original meaning:
"Some positive/negative features exclude others." = Some features can be used to predict the absence of others very accurately, maybe even without any error at all.
"In order to 'cut' the number of plausible subsequent questions" = The purpose is to reduce the number of follow-up questions that mostly provide information that is redundant with information collected by already-asked questions.
If I've misinterpreted these parts, my hint may be misleading; otherwise, I think I'm at least pointing in the right general direction. I don't know much more about CAT than this general purpose that it serves, so I expect you'd be better equipped than me to efficiently study it further.
One other idea concerns a slightly different approach, whereby you'd try to reduce the overall number of questions you care to ask at all of future users. You could begin to do this by analyzing the latent factor structure of your existing data using something like multidimensional item response theory (MIRT; see, for instance, Maydeu-Olivares, 2001; Osteen, 2010). If you find that a lot of your items provide information about the same underlying factors, this could help you understand your total pool of information in terms of a shorter list of broader factors. If you find that list (of the latent factors in your set of questions) contains enough of what you really want to know, you might choose to eliminate some questions that don't predict the latent factors very well and don't provide other important information. You might even consider retaining only one or two of the items that best predict each latent factor, depending on what you ultimately want to do with these data. This tangential idea of mine assumes that some of your questions are disposable. Also, disposing some questions would probably only simplify your problem somewhat, not really solve it.
Also, I think both CAT and MIRT would assume that your binary data are indicators of (an) underlying continuous dimension(s). If that's not the case, both ideas may be misleading, and you might want to say a little more about the nature of your data to help inform future answers (or edits to my own). | Minimizing number of questions of questionnaire from past binary responses
Sounds a lot like a computerized adaptive testing (CAT) application. This is just one small hint, not an attempt at a comprehensive solution, so I hope others will keep the answers coming.
I'm assumi |
48,727 | Can I use a z-test on heteroscedastic data? | Note that the $\beta$'s in your $z$ formula should be $\hat \beta$'s (both in the numerator and denominator).
The short answer is 'yes'; as long as (a) the sample sizes are sufficiently large that (i) the $\hat \beta$ terms are close to normal (i.e. the CLT 'kicks in'), and (ii) the two $\hat\sigma$ terms (on which the $SE$ terms are based) are very accurately estimated (i.e. Slutsky's theorem 'kicks in'); and (b) the parameter estimates are independent (i.e. the above formula for the denominator is correct).
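As a concrete illustration of that statistic (my own sketch, with placeholder numbers):
# Sketch with placeholder numbers: z for the difference of two independent estimates,
# and its two-sided p-value under the standard normal reference distribution.
import numpy as np
from scipy.stats import norm

b1, se1 = 1.8, 0.4     # hypothetical estimate and standard error, group 1
b2, se2 = 0.9, 0.3     # hypothetical estimate and standard error, group 2

z = (b1 - b2) / np.sqrt(se1**2 + se2**2)
p_value = 2 * norm.sf(abs(z))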
With all the appropriate caveats in place for it to work (even unweighted), it doesn't matter that it's weighted. | Can I use a z-test on heteroscedastic data? | Note that the $\beta$'s in your $z$ formula should be $\hat \beta$'s (both in the numerator and denominator).
The short answer is 'yes'; as long as (a) the sample sizes are sufficiently large that (i) | Can I use a z-test on heteroscedastic data?
Note that the $\beta$'s in your $z$ formula should be $\hat \beta$'s (both in the numerator and denominator).
The short answer is 'yes'; as long as (a) the sample sizes are sufficiently large that (i) the $\hat \beta$ terms are close to normal (i.e. the CLT 'kicks in'), and (ii) the two $\hat\sigma$ terms (on which the $SE$ terms are based) are very accurately estimated (i.e. Slutsky's theorem 'kicks in'); and (b) the parameter estimates are independent (i.e. the above formula for the denominator is correct).
With all the appropriate caveats in place for it to work (even unweighted), it doesn't matter that it's weighted. | Can I use a z-test on heteroscedastic data?
Note that the $\beta$'s in your $z$ formula should be $\hat \beta$'s (both in the numerator and denominator).
The short answer is 'yes'; as long as (a) the sample sizes are sufficiently large that (i) |
48,728 | Confusion about hidden Markov models definition and graphical notation? | I think the problem is a misinterpretation of the notation. $X_1$ stands for $X(t)=1$, not for $X(t=1)$, and $a_{ij}=P(X(t)=j| X(t-1)=i)$. $X_1, X_2$ and $X_3$ are the three possible values of the variable $X$ at a time $t$, not three variables a $t=1, t=2$ and $t=3$.
So at time $t=1$ you have an initial value. Let's take, for the sake of the example, $X(1)=1$. The probability of $X(2)=2$ is $a_{12}$ and the probability of $X(2)=1$ is $1-a_{12}$.
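To make the bookkeeping concrete, here is a small simulation sketch (my own illustration, with made-up numbers): row $i$ of the transition matrix collects the probabilities $a_{ij}$ of moving from state $i$ to each state $j$, so every row sums to 1.
# Illustration with made-up numbers: A[i, j] = P(X(t) = j | X(t-1) = i), rows sum to 1.
import numpy as np

A = np.array([[0.7, 0.3, 0.0],   # a_11, a_12, a_13
              [0.2, 0.5, 0.3],   # a_21, a_22, a_23
              [0.0, 0.4, 0.6]])  # a_31, a_32, a_33

rng = np.random.default_rng(1)
state = 0                        # start in state 1 (0-indexed), i.e. X(1) = 1
path = [state]
for _ in range(10):              # each step depends only on the previous state
    state = rng.choice(3, p=A[state])
    path.append(state)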
The value of $X(t)$ is only dependent on the value of $X(t-1)$ because there is no arrow pointing from a $Y$ node to an $X$ node. | Confusion about hidden Markov models definition and graphical notation? | I think the problem is a misinterpretation of the notation. $X_1$ stands for $X(t)=1$, not for $X(t=1)$, and $a_{ij}=P(X(t)=j| X(t-1)=i)$. $X_1, X_2$ and $X_3$ are the three possible values of the var | Confusion about hidden Markov models definition and graphical notation?
I think the problem is a misinterpretation of the notation. $X_1$ stands for $X(t)=1$, not for $X(t=1)$, and $a_{ij}=P(X(t)=j| X(t-1)=i)$. $X_1, X_2$ and $X_3$ are the three possible values of the variable $X$ at a time $t$, not three variables at $t=1$, $t=2$ and $t=3$.
So at time $t=1$ you have an initial value. Let's take, for the sake of the example, $X(1)=1$. The probability of $X(2)=2$ is $a_{12}$ and the probability of $X(2)=1$ is $1-a_{12}$.
The value of $X(t)$ is only dependent on the value of $X(t-1)$ because there is no arrow pointing from a $Y$ node to an $X$ node.
I think the problem is a misinterpretation of the notation. $X_1$ stands for $X(t)=1$, not for $X(t=1)$, and $a_{ij}=P(X(t)=j| X(t-1)=i)$. $X_1, X_2$ and $X_3$ are the three possible values of the var |
48,729 | Confusion about hidden Markov models definition and graphical notation? | Your confusion is coming from different graphical notations about HMM. Specifically, there are two types of notations.
Type 1 is using nodes to represent possible values of random variables and use arrows to represent transitions
Type 2 is using nodes to represent random variables, and use arrows to represent conditional dependencies.
Here is the same HMM using two different notations
Type 1 notation (Note, for the model shown below, random variable $X$ has $3$ possible values, and random variable $Y$ has $4$ possible values. But it does not show how many observations.)
Type 2 notation (Note, the model shown below has $N$ random variables / observations, but does not show how many possible values for $X$ and $Y$)
PS, I think the figure in Wikipedia is confusing; if you want to use vertices to represent possible values, you should not capitalize $X$, but make it consistent with $y$.
Type 1 is using nodes to represent possible values of random variables and use a | Confusion about hidden Markov models definition and graphical notation?
Your confusion is coming from different graphical notations about HMM. Specifically, there are two types of notations.
Type 1 is using nodes to represent possible values of random variables and use arrows to represent transitions
Type 2 is using nodes to represent random variables, and use arrows to represent conditional dependencies.
Here is the same HMM using two different notations
Type 1 notation (Note, for the model shown below, random variable $X$ has $3$ possible values, and random variable $Y$ has $4$ possible values. But it does not show how many observations.)
Type 2 notation (Note, the model shown below has $N$ random variables / observations, but does not show how many possible values for $X$ and $Y$)
PS, I think the figure in Wikipedia is confusing; if you want to use vertices to represent possible values, you should not capitalize $X$, but make it consistent with $y$. | Confusion about hidden Markov models definition and graphical notation?
Your confusion is coming from different graphical notations about HMM. Specifically, there are two types of notations.
Type 1 is using nodes to represent possible values of random variables and use a |
48,730 | Confusion about hidden Markov models definition and graphical notation? | Transition probability is a potential for switching between any 2 states, not a statement about the relationship of events that have happened. These transition probabilities are independent of each other, as shown in the diagram by the fact that $a_{12}$ has no explicit relationship to $a_{21}$ or $a_{23}$.
The diagram is showing that there are probabilities of transition between hidden states $X_1$ and $X_2$. Moreover, any transition probability is referent to 2 states: the initial state and the state that will be transitioned to.
A loop in this case means that transitions between states 1 and 2 can happen in both directions (to and from $X_1$, to and from $X_2$) with some probabilities $a_{12}$, $a_{21}$. So, a process can start at $X_1$ and has the probability of moving to $X_2$ with $P(X_1 \rightarrow X_2) = a_{12}$, which, if it were to do so, would then have the probability of moving back to $X_1$ with $P(X_2 \rightarrow X_1) = a_{21}$.
This is what is meant when it is said the state at time $t$, $x(t)$, is dependent on the immediately prior state, which was at $t-1$, $x(t-1)$: a given state will be moved out of only with the probabilities of its immediately adjacent states. | Confusion about hidden Markov models definition and graphical notation? | Transition probability is a potential for switching between any 2 states, not a statement about the relationship of events that have happened. These transition probabilities are independent of each o | Confusion about hidden Markov models definition and graphical notation?
Transition probability is a potential for switching between any 2 states, not a statement about the relationship of events that have happened. These transition probabilities are independent of each other, as shown in the diagram by the fact that $a_{12}$ has no explicit relationship to $a_{21}$ or $a_{23}$.
The diagram is showing that there are probabilities of transition between hidden states $X_1$ and $X_2$. Moreover, any transition probability is referent to 2 states: the initial state and the state that will be transitioned to.
A loop in this case means that transitions between states 1 and 2 can happen in both directions (to and from $X_1$, to and from $X_2$) with some probabilities $a_{12}$, $a_{21}$. So, a process can start at $X_1$ and has the probability of moving to $X_2$ with $P(X_1 \rightarrow X_2) = a_{12}$, which, if it were to do so, would then have the probability of moving back to $X_1$ with $P(X_2 \rightarrow X_1) = a_{21}$.
This is what is meant when it is said the state at time $t$, $x(t)$, is dependent on the immediately prior state, which was at $t-1$, $x(t-1)$: a given state will be moved out of only with the probabilities of its immediately adjacent states. | Confusion about hidden Markov models definition and graphical notation?
Transition probability is a potential for switching between any 2 states, not a statement about the relationship of events that have happened. These transition probabilities are independent of each o |
48,731 | CDF for uncorrelated bivariate normal | If $X \sim N(0, \sigma_1^2)$ and $Y \sim N(0, \sigma_2^2)$ are independent random variables, then the joint pdf of $(X,Y)$ is say $f(x,y)$:
Given $Z = \sqrt{X^2 + Y^2}$, you seek $\text{Var}(Z)$:
where Var is the Variance function from the mathStatica add-on to Mathematica (to compute the pleasantries), and EllipticE is the complete elliptic integral: http://reference.wolfram.com/mathematica/ref/EllipticE.html
Here is a plot of the solution $\text{Var}(Z)$, as a function of $\sigma_1$ and $\sigma_2$, as you desired:
For the cdf calculation, I would suggest that a transformation to polar coordinates should do the trick. | CDF for uncorrelated bivariate normal | If $X \sim N(0, \sigma_1^2)$ and $Y \sim N(0, \sigma_2^2)$ are independent random variables, then the joint pdf of $(X,Y)$ is say $f(x,y)$:
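If a closed form is not essential, both the variance and the cdf are easy to check numerically; a small Monte Carlo sketch (my own, not from the answer):
# Monte Carlo sketch (not from the answer): Var(Z) and the empirical CDF of
# Z = sqrt(X^2 + Y^2) for chosen sigma_1, sigma_2.
import numpy as np

sigma1, sigma2 = 1.0, 2.0
rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma1, size=1_000_000)
y = rng.normal(0.0, sigma2, size=1_000_000)
z = np.hypot(x, y)

var_z = z.var()                      # compare against the EllipticE expression
cdf_at = lambda t: np.mean(z <= t)   # empirical P(Z <= t), e.g. cdf_at(2.5)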
Given $Z = \sqrt{X^2 + Y^2}$, you seek $\text{Var}(Z)$:
w | CDF for uncorrelated bivariate normal
If $X \sim N(0, \sigma_1^2)$ and $Y \sim N(0, \sigma_2^2)$ are independent random variables, then the joint pdf of $(X,Y)$ is say $f(x,y)$:
Given $Z = \sqrt{X^2 + Y^2}$, you seek $\text{Var}(Z)$:
where Var is the Variance function from the mathStatica add-on to Mathematica (to compute the pleasantries), and EllipticE is the complete elliptic integral: http://reference.wolfram.com/mathematica/ref/EllipticE.html
Here is a plot of the solution $\text{Var}(Z)$, as a function of $\sigma_1$ and $\sigma_2$, as you desired:
For the cdf calculation, I would suggest that a transformation to polar coordinates should do the trick. | CDF for uncorrelated bivariate normal
If $X \sim N(0, \sigma_1^2)$ and $Y \sim N(0, \sigma_2^2)$ are independent random variables, then the joint pdf of $(X,Y)$ is say $f(x,y)$:
Given $Z = \sqrt{X^2 + Y^2}$, you seek $\text{Var}(Z)$:
w |
48,732 | Difference of two random variable distributions | Are the two random variables $X$ and $Y$ supposed to be independent? If so, it is easy to prove that the distribution function of $Z=X-Y$ is given by the convolution
$$
F_Z(z) = P(X-Y\leq z) = \int F_X(z+y) \, dF_Y(y) \, .
$$
Hence, one idea is to compute the empirical distribution functions $\hat{F}_m$ of $(x_1,\dots,x_m)$, and $\hat{G}_n$ of $(y_1,\dots,y_n)$, and use
$$
\hat{H}(z) = \int \hat{F}_m(z+y)\,d\hat{G}_n(y) = \frac{1}{m\,n}\sum_{i=1}^n \sum_{j=1}^m I_{[x_j,\infty)}(z + y_i)
$$
as an estimate for $F_Z(z)$. Note that the corresponding estimator is strongly consistent for each $z$. | Difference of two random variable distributions | Are the two random variables $X$ and $Y$ supposed to be independent? If so, it is easy to prove that the distribution function of $Z=X-Y$ is given by the convolution
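In code, that double sum is a single broadcast comparison (a minimal numpy sketch of the estimator above, my own, with made-up samples):
# Minimal numpy sketch of the estimator above:
# H_hat(z) = (1/(m*n)) * sum_i sum_j 1{ x_j <= z + y_i }.
import numpy as np

def H_hat(z, x, y):
    return np.mean(x[None, :] <= (z + y[:, None]))   # average the indicator over all (i, j)

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=200)    # sample from X
y = rng.normal(0.0, 2.0, size=150)    # sample from Y
print(H_hat(0.5, x, y))               # estimate of P(X - Y <= 0.5)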
$$
F_Z(z) = P(X-Y\leq z) = \int | Difference of two random variable distributions
Are the two random variables $X$ and $Y$ supposed to be independent? If so, it is easy to prove that the distribution function of $Z=X-Y$ is given by the convolution
$$
F_Z(z) = P(X-Y\leq z) = \int F_X(z+y) \, dF_Y(y) \, .
$$
Hence, one idea is to compute the empirical distribution functions $\hat{F}_m$ of $(x_1,\dots,x_m)$, and $\hat{G}_n$ of $(y_1,\dots,y_n)$, and use
$$
\hat{H}(z) = \int \hat{F}_m(z+y)\,d\hat{G}_n(y) = \frac{1}{m\,n}\sum_{i=1}^n \sum_{j=1}^m I_{[x_j,\infty)}(z + y_i)
$$
as an estimate for $F_Z(z)$. Note that the corresponding estimator is strongly consistent for each $z$. | Difference of two random variable distributions
Are the two random variables $X$ and $Y$ supposed to be independent? If so, it is easy to prove that the distribution function of $Z=X-Y$ is given by the convolution
$$
F_Z(z) = P(X-Y\leq z) = \int |
48,733 | Difference of two random variable distributions | I don't think you need a special package to do this; ordinary numpy is enough. I've appended example code and its output below. Note that the cdf of (A-B) looks very similar to the cdfs of A and B separately, but actually it's not. You can see a subtle difference at around +/- 2 or 3 sigma. The cdf of (A-B) is a little wider than the individual cdfs of A and B separately.
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
# Number of random draws to use
ndraws = 1000
# Set this distance (in sigmas) large enough to capture all of the outliers
plotrange = 5
# Number of bins to use for pdf/cdf
nbin = 100
# Get random draws from a Gaussian
A = np.random.randn(1,ndraws)
B = np.random.randn(1,ndraws)
dfAB = A - B
# Histogram (pdf) estimates for A, B, and A-B, then cumulative sums for the cdfs
Apdf, edges = np.histogram(A, bins=nbin, range=(-plotrange, plotrange))
Bpdf, edges = np.histogram(B, bins=nbin, range=(-plotrange, plotrange))
dfABpdf, edges = np.histogram(dfAB, bins=nbin, range=(-plotrange, plotrange))
xrng = (edges[0:-1] + edges[1:]) / 2
Acdf = np.cumsum(Apdf.astype(float)) / ndraws
Bcdf = np.cumsum(Bpdf.astype(float)) / ndraws
dfABcdf = np.cumsum(dfABpdf.astype(float)) / ndraws
# Plot cdfs and differences of cdfs
fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax1.plot(xrng, Acdf)
ax1.set_title("A cdf")
ax2 = fig.add_subplot(2,2,2)
ax2.plot(xrng, Bcdf)
ax2.set_title("B cdf")
ax3 = fig.add_subplot(2,2,3)
ax3.plot(xrng, dfABcdf)
ax3.set_title("(A-B) cdf")
ax4 = fig.add_subplot(2,2,4)
ax4.plot(xrng, Acdf - Bcdf)
ax4.set_title("(A cdf) - (B cdf)")
plt.show() | Difference of two random variable distributions | I don't think you need a special package to do this; ordinary numpy is enough. I've appended example code and its output below. Note that the cdf of (A-B) looks very similar to the cdfs of A and B s | Difference of two random variable distributions
I don't think you need a special package to do this; ordinary numpy is enough. I've appended example code and its output below. Note that the cdf of (A-B) looks very similar to the cdfs of A and B separately, but actually it's not. You can see a subtle difference at around +/- 2 or 3 sigma. The cdf of (A-B) is a little wider than the individual cdfs of A and B separately.
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
# Number of random draws to use
ndraws = 1000
# Set this distance (in sigmas) large enough to capture all of the outliers
plotrange = 5
# Number of bins to use for pdf/cdf
nbin = 100
# Get random draws from a Gaussian
A = np.random.randn(1,ndraws)
B = np.random.randn(1,ndraws)
dfAB = A - B
# Histogram (pdf) estimates for A, B, and A-B, then cumulative sums for the cdfs
Apdf, edges = np.histogram(A, bins=nbin, range=(-plotrange, plotrange))
Bpdf, edges = np.histogram(B, bins=nbin, range=(-plotrange, plotrange))
dfABpdf, edges = np.histogram(dfAB, bins=nbin, range=(-plotrange, plotrange))
xrng = (edges[0:-1] + edges[1:]) / 2
Acdf = np.cumsum(Apdf.astype(float)) / ndraws
Bcdf = np.cumsum(Bpdf.astype(float)) / ndraws
dfABcdf = np.cumsum(dfABpdf.astype(float)) / ndraws
# Plot cdfs and differences of cdfs
fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax1.plot(xrng, Acdf)
ax1.set_title("A cdf")
ax2 = fig.add_subplot(2,2,2)
ax2.plot(xrng, Bcdf)
ax2.set_title("B cdf")
ax3 = fig.add_subplot(2,2,3)
ax3.plot(xrng, dfABcdf)
ax3.set_title("(A-B) cdf")
ax4 = fig.add_subplot(2,2,4)
ax4.plot(xrng, Acdf - Bcdf)
ax4.set_title("(A cdf) - (B cdf)")
plt.show() | Difference of two random variable distributions
I don't think you need a special package to do this; ordinary numpy is enough. I've appended example code and its output below. Note that the cdf of (A-B) looks very similar to the cdfs of A and B s |
48,734 | Getting the units right for the Pareto distribution of wealth: x = people, dollars, dollars per person? | I elaborate on @stachyra answer, trying to explain how the Lorenz
curve emerges.
We can regard the wealth of an individual chosen at random in the
population as a r.v. $X$ having density $f(x)$ over $(x_{\text{min}},
\,\infty)$ and survival $S(x):= \Pr\{X>x\}$. Assume that $n$
independent individuals $X_i$ are chosen at random. Their total
wealth $S := \sum_{i=1}^n X_i$ has expectation
$n\,\mathbb{E}[X]$. If $u \geq x_{\text{min}}$ is fixed, the
total wealth for those with wealth exceeding $u$, i.e. such that $X_i > u$
is $S_u := \sum_{i=1}^n X_i 1_{\{X_i >u\}}$ with expectation
$n\,\mathbb{E}[X 1_{\{X >u\}}]$. The ratio of sums $S_u/S$
writes as well as a ratio of means, and by the strong law of large
numbers it tends almost surely for large $n$ to the ratio of
expectations:
$$
\frac{S_u}{S} \underset{\text{a.s.}}{\to}
\frac{ \mathbb{E}\left[X 1_{\{X >u\}}\right] }{\mathbb{E}\left[X\right]} =
\frac{\int_{u}^\infty x f(x)\,\text{d}x}{\int_{x_{\text{min}}}^\infty x f(x)\,\text{d}x }.
$$
This result is valid as soon as $X$ has a finite expectation, i.e.
$\mathbb{E}\left[X\right] < \infty$. By choosing $u$ as the quantile
of probability $p$, we get at the right hand side $1-L(p)$ where
$L(p)$ is the value of the Lorenz curve, which is unitless. Provided that $n$ is
large enough, $L(p)$ (e.g. $L(0.8)$) thus gives the percent of the total wealth
owned by the fraction $p$ (e.g. $80\%$) of individuals with lowest wealth.
Now consider the Pareto distribution with shape $\alpha$ and scale
$x_{\text{min}}$; it has survival $S(x) = (x_{\text{min}}/x)^\alpha$
with $\alpha>0$. It has an interesting stability property: for any $u
\geq x_{\text{min}}$ the conditional distribution $X \, \vert \{X >u\} $
is Pareto with shape $\alpha$ and scale $u$. So the same
"concentration" of the total wealth applies to the sub-population of
individuals with wealth $>u$. The expectation is finite for $\alpha >
1$ and the Lorenz curve is then given by $L(p) = 1 -
(1-p)^{1-1/\alpha}$. Only one shape parameter leads to the
80-20 rule. It is obtained by solving $L(0.8) = 0.2$ in $\alpha$,
leading to $\alpha \approx 1.16$. | Getting the units right for the Pareto distribution of wealth: x = people, dollars, dollars per pers | I elaborate on @stachyra answer, trying to explain how the Lorenz
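A quick numerical check of that last step (my own sketch):
# Check (my own sketch) that L(0.8) = 0.2 pins down alpha ~ 1.16.
import numpy as np
from scipy.optimize import brentq

L = lambda p, alpha: 1.0 - (1.0 - p) ** (1.0 - 1.0 / alpha)
alpha_star = brentq(lambda a: L(0.8, a) - 0.2, 1.01, 10.0)   # ~ 1.161
# closed form: alpha = 1 / (1 - np.log(0.8) / np.log(0.2))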
curve emerges.
We can regard the wealth of an individual chosen at random in the
population as a r.v. $X$ having density $f(x)$ over $ | Getting the units right for the Pareto distribution of wealth: x = people, dollars, dollars per person?
I elaborate on @stachyra answer, trying to explain how the Lorenz
curve emerges.
We can regard the wealth of an individual chosen at random in the
population as a r.v. $X$ having density $f(x)$ over $(x_{\text{min}},
\,\infty)$ and survival $S(x):= \Pr\{X>x\}$. Assume that $n$
independent individuals $X_i$ are chosen at random. Their total
wealth $S := \sum_{i=1}^n X_i$ has expectation
$n\,\mathbb{E}[X]$. If $u \geq x_{\text{min}}$ is fixed, the
total wealth for those with wealth exceeding $u$, i.e. such that $X_i > u$
is $S_u := \sum_{i=1}^n X_i 1_{\{X_i >u\}}$ with expectation
$n\,\mathbb{E}[X 1_{\{X >u\}}]$. The ratio of sums $S_u/S$
writes as well as a ratio of means, and by the strong law of large
numbers it tends almost surely for large $n$ to the ratio of
expectations:
$$
\frac{S_u}{S} \underset{\text{a.s.}}{\to}
\frac{ \mathbb{E}\left[X 1_{\{X >u\}}\right] }{\mathbb{E}\left[X\right]} =
\frac{\int_{u}^\infty x f(x)\,\text{d}x}{\int_{x_{\text{min}}}^\infty x f(x)\,\text{d}x }.
$$
This result is valid as soon as $X$ has a finite expectation, i.e.
$\mathbb{E}\left[X\right] < \infty$. By choosing $u$ as the quantile
of probability $p$, we get at the right hand side $1-L(p)$ where
$L(p)$ is the value of the Lorenz curve, which is unitless. Provided that $n$ is
large enough, $L(p)$ (e.g. $L(0.8)$) thus gives the percent of the total wealth
owned by the fraction $p$ (e.g. $80\%$) of individuals with lowest wealth.
Now consider the Pareto distribution with shape $\alpha$ and scale
$x_{\text{min}}$; it has survival $S(x) = (x_{\text{min}}/x)^\alpha$
with $\alpha>0$. It has an interesting stability property: for any $u
\geq x_{\text{min}}$ the conditional distribution $X \, \vert \{X >u\} $
is Pareto with shape $\alpha$ and scale $u$. So the same
"concentration" of the total wealth applies to the sub-population of
individuals with wealth $>u$. The expectation is finite for $\alpha >
1$ and the Lorenz curve is then given by $L(p) = 1 -
(1-p)^{1-1/\alpha}$. Only one shape parameter leads to the
80-20 rule. It is obtained by solving $L(0.8) = 0.2$ in $\alpha$,
leading to $\alpha \approx 1.16$. | Getting the units right for the Pareto distribution of wealth: x = people, dollars, dollars per pers
I elaborate on @stachyra answer, trying to explain how the Lorenz
curve emerges.
We can regard the wealth of an individual chosen at random in the
population as a r.v. $X$ having density $f(x)$ over $ |
48,735 | Getting the units right for the Pareto distribution of wealth: x = people, dollars, dollars per person? | I believe the concept that you are grasping for is the Lorenz curve. As stated in the link, it plots percentage of people vs. percentage of wealth, and points along the Lorenz curve represent statements such as "the bottom 20% of all households have 10% of the total income."
If you want to understand explicitly the relationship between the Lorenz curve and the Pareto CDF, I think a good explanation may be found here. | Getting the units right for the Pareto distribution of wealth: x = people, dollars, dollars per pers | I believe the concept that you are grasping for is the Lorenz curve. As stated in the link, it plots percentage of people vs. percentage of wealth, and points along the Lorenz curve represent stateme | Getting the units right for the Pareto distribution of wealth: x = people, dollars, dollars per person?
I believe the concept that you are grasping for is the Lorenz curve. As stated in the link, it plots percentage of people vs. percentage of wealth, and points along the Lorenz curve represent statements such as "the bottom 20% of all households have 10% of the total income."
If you want to understand explicitly the relationship between the Lorenz curve and the Pareto CDF, I think a good explanation may be found here. | Getting the units right for the Pareto distribution of wealth: x = people, dollars, dollars per pers
I believe the concept that you are grasping for is the Lorenz curve. As stated in the link, it plots percentage of people vs. percentage of wealth, and points along the Lorenz curve represent stateme |
48,736 | How to measure test set error with logistic regression | There is no standard way to define goodness-of-fit. It depends on your application and what problem you are going to solve. As in classification, you may define the goodness-of-fit as 0-1 loss.
For a logistic regression, you can compute the likelihood function. I would use a McFadden pseudo-$R^2$, which is defined as:
$$
R^2 = 1 - \frac{\operatorname{L}(\theta)}{\operatorname{L}(\mathbf{0})}
$$
$\operatorname{L}$ is the log-likelihood function, $\theta$ is the parameter of the model and $\mathbf{0}$ denotes a zero vector (i.e. you compare the likelihood ratio of your model against a model with all coefficients 0)
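A hedged sketch of computing that on a held-out test set (my own illustration; the answer does not prescribe a package):
# Sketch (my own): McFadden pseudo-R^2 on a held-out set. The benchmark below is the
# intercept-only model (base-rate probability); pass p_null=0.5 for the literal
# "all coefficients zero" benchmark in the formula above.
import numpy as np

def mcfadden_r2(model, X_test, y_test, p_null=None):
    p = np.clip(model.predict_proba(X_test)[:, 1], 1e-12, 1 - 1e-12)
    ll_model = np.sum(y_test * np.log(p) + (1 - y_test) * np.log(1 - p))
    p0 = np.mean(y_test) if p_null is None else p_null
    ll_null = np.sum(y_test * np.log(p0) + (1 - y_test) * np.log(1 - p0))
    return 1.0 - ll_model / ll_null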
Moreover, given a probability measure $\mu(x) = P(Y = 1|X=x)$, define the loss function of a classifier $g$ as $L(g) = P(g(X) \neq Y)$.
The Bayes decision rule:
$$
g^*(x) = \begin{cases} 1 & \mbox{if } \mu(x) \geq 0.5 \\ 0 & \mbox{if } \mu(x) < 0.5 \end{cases}
$$
is the rule that minimizes $L(g)$. There is nothing wrong with classifying as 1 when your logistic regression output probability is $\geq 0.5$, as long as you are thinking of the loss function as above. | How to measure test set error with logistic regression | There is no standard way to define goodness-of-fit. It depends on your application and what problem you are going to solve. As in classification, you may define the goodness-of-fit as 0-1 loss.
F | How to measure test set error with logistic regression
There is no standard way to define goodness-of-fit. It depends on your application and what problem you are going to solve. As in classification, you may define the goodness-of-fit as 0-1 loss.
For a logistic regression, you can compute the likelihood function. I would use a McFadden pseudo-$R^2$, which is defined as:
$$
R^2 = 1 - \frac{\operatorname{L}(\theta)}{\operatorname{L}(\mathbf{0})}
$$
$\operatorname{L}$ is the log-likelihood function, $\theta$ is the parameter of the model and $\mathbf{0}$ denote a zero vector (i.e. you compare the likelihood ratio of your model against a model with all coefficients 0)
Moreover, given a probability measure $\mu(x) = P(Y = 1|X=x)$, define the loss function of a classifier $g$ as $L(g) = P(g(X) \neq Y)$.
The Bayes decision rule:
$$
g^*(x) = \begin{cases} 1 & \mbox{if } \mu(x) \geq 0.5 \\ 0 & \mbox{if } \mu(x) < 0.5 \end{cases}
$$
is the rule that minimizes $L(g)$. There is nothing wrong with classifying as 1 when your logistic regression output probability is $\geq 0.5$, as long as you are thinking of the loss function as above. | How to measure test set error with logistic regression
There is no standard way to define goodness-of-fit. It depends on your application and what problem you are going to solve. As in classification, you may define the goodness-of-fit as 0-1 loss.
F |
48,737 | How to measure test set error with logistic regression | (1) You're describing split sample internal validation that has become less popular (in favor of bootstrapping) given the large dataset size you need to produce reliable estimates.
(2) You don't have to choose 0.5 as your classification cut-point. You can choose anything, depending on what suits your objective/utility function
(3) I don't understand your last sentence. You may be trying to distinguish between discrimination and calibration, this is an important distinction.
(4) Good overview: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3575184/ | How to measure test set error with logistic regression | (1) You're describing split sample internal validation that has become less popular (in favor of bootstrapping) given the large dataset size you need to produce reliable estimates.
(2) You don't have | How to measure test set error with logistic regression
(1) You're describing split sample internal validation that has become less popular (in favor of bootstrapping) given the large dataset size you need to produce reliable estimates.
(2) You don't have to choose 0.5 as your classification cut-point. You can choose anything, depending on what suits your objective/utility function
(3) I don't understand your last sentence. You may be trying to distinguish between discrimination and calibration, this is an important distinction.
(4) Good overview: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3575184/ | How to measure test set error with logistic regression
(1) You're describing split sample internal validation that has become less popular (in favor of bootstrapping) given the large dataset size you need to produce reliable estimates.
(2) You don't have |
48,738 | How to measure test set error with logistic regression | Besides data splitting sometimes requiring $n > 10,000$ to be reliable, you are using an "error" measure that is unnatural to probability models. Besides a generalized $R^2$ as mentioned above, consider the Brier score and the $c$-index (concordance probability) or the related Somers' $D_{xy}$ rank correlation between predicted probability and $Y$. @wonghang note that the Bayes decision rule requires an external loss (utility) function. | How to measure test set error with logistic regression | Besides data splitting sometimes requiring $n > 10,000$ to be reliable, you are using an "error" measure that is unnatural to probability models. Besides a generalized $R^2$ as mentioned above, consi | How to measure test set error with logistic regression
Besides data splitting sometimes requiring $n > 10,000$ to be reliable, you are using an "error" measure that is unnatural to probability models. Besides a generalized $R^2$ as mentioned above, consider the Brier score and the $c$-index (concordance probability) or the related Somers' $D_{xy}$ rank correlation between predicted probability and $Y$. @wonghang note that the Bayes decision rule requires an external loss (utility) function. | How to measure test set error with logistic regression
Besides data splitting sometimes requiring $n > 10,000$ to be reliable, you are using an "error" measure that is unnatural to probability models. Besides a generalized $R^2$ as mentioned above, consi |
48,739 | How to measure test set error with logistic regression | Measure the F statistic or proportion of explained variation. Also consider the covariate measurement error as it was performed by this study: S Rabe-Hesketh et al.,Correcting for covariate measurement error in logistic regression using nonparametric maximum likelihood estimation, 2003 | How to measure test set error with logistic regression | Measure the F statistic or proportion of explained variation. Also consider the covariate measurement error as it was performed by this study: S Rabe-Hesketh et al.,Correcting for covariate measuremen | How to measure test set error with logistic regression
Measure the F statistic or proportion of explained variation. Also consider the covariate measurement error as it was performed by this study: S Rabe-Hesketh et al.,Correcting for covariate measurement error in logistic regression using nonparametric maximum likelihood estimation, 2003 | How to measure test set error with logistic regression
Measure the F statistic or proportion of explained variation. Also consider the covariate measurement error as it was performed by this study: S Rabe-Hesketh et al.,Correcting for covariate measuremen |
48,740 | How to measure test set error with logistic regression | There isn't a perfect analogy to say R squared for a binary GLM though there are a few approximations using residual deviance and such.
Another way to approach measuring performance is to look at the AUC of the ROC curve which looks more at the ability of the model to separate goods and bads as probabilities increase.
You would expect a good model to score a larger proportion of the actual 1's with higher probabilities and more 0's at the lower probabilities.
AUC is more of a global measure of the power of the model. It doesn't require you to rely on a strictly greater than or less than .5 label as the measure of quality.
The actual mechanics of it are best explained visually with a ROC chart and confusion matrix.
More here.
And here | How to measure test set error with logistic regression | There isn't a perfect analogy to say R squared for a binary GLM though there are a few approximations using residual deviance and such.
Another way to approach measuring performance is to look at the | How to measure test set error with logistic regression
There isn't a perfect analogy to say R squared for a binary GLM though there are a few approximations using residual deviance and such.
Another way to approach measuring performance is to look at the AUC of the ROC curve which looks more at the ability of the model to separate goods and bads as probabilities increase.
You would expect a good model to score a larger proportion of the actual 1's with higher probabilities and more 0's at the lower probabilities.
AUC is more of a global measure of the power of the model. It doesn't require you to rely on a strictly greater than or less than .5 label as the measure of quality.
The actual mechanics of it are best explained visually with a ROC chart and confusion matrix.
More here.
And here | How to measure test set error with logistic regression
There isn't a perfect analogy to say R squared for a binary GLM though there are a few approximations using residual deviance and such.
Another way to approach measuring performance is to look at the |
48,741 | scikit-learn score metric on the coefficient of determination $R^2$ | Actually there are two different measures that are called correlations. Let us then call them little $r$, which is the Pearson correlation coefficient, and big $R$, which is what you have; a correlation (usually as $R^2$) adjusted for a generalized residual. Now $|r|=|R|$ only when we restrict ourselves to ordinary least squares linear regression in $Y$. If for example, we restrict our linear regression to slope only and set the intercept to zero, we would then use $R$, not $r$. Little $r$ is still the same, it just won't describe the correlation between the new regression line and the data anymore.
Little r is normalized covariance, i.e., $ r= \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y}=\frac{\sum ^n _{i=1}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum ^n _{i=1}(x_i - \bar{x})^2} \sqrt{\sum ^n _{i=1}(y_i - \bar{y})^2}}$. Finally, $r^2$ is called the coefficient of determination only for the linear case.
Big $R$ is usually explained using ANOVA intermediary quantities:
The total sum of squares proportional to the variance of the data: $\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }$ $SS_\text{tot}=\sum_i (y_i-\bar{y})^2,$
The regression sum of squares, also called the explained sum of squares:
$SS_\text{reg}=\sum_i (f_i -\bar{y})^2,$
The sum of squares of residuals, also called the residual sum of squares: $SS_\text{res}=\sum_i (y_i - f_i)^2=\sum_i e_i^2\,$
The most general definition of the coefficient of determination is
$R^2 \equiv 1 - {SS_{\rm res}\over SS_{\rm tot}}.\,$
Now, what is the meaning of this $r^2$ or more generally $R^2$? $R^2$ is the explained fraction and $1-R^2$ is the unexplained fraction of the total variance.
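The distinction is easy to see numerically; a small sketch (my own, using an ordinary linear fit rather than SVR, purely to illustrate the $r^2$ vs $R^2$ point):
# Sketch (my own): r^2 equals R^2 for ordinary least squares with an intercept,
# but not for a slope-only fit, where R^2 can even be negative.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)
y = 5.0 + 0.5 * x + rng.normal(0, 0.05, 50)      # large offset, shallow slope
X = x.reshape(-1, 1)
r2_pearson = pearsonr(x, y)[0] ** 2

fit = LinearRegression().fit(X, y)               # with intercept: the two agree
print(r2_pearson, r2_score(y, fit.predict(X)))

fit0 = LinearRegression(fit_intercept=False).fit(X, y)   # slope only: they differ
print(r2_pearson, r2_score(y, fit0.predict(X)))          # this R^2 is negative here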
What is a good coefficient and what is a bad one? That depends on who says so in what context. Most medical papers say that a correlation is strong when $|r|\geq 0.8$. Since that only explains $0.64$ of the variance, I would call $0.8$ a moderate correlation, and in most of my work $R^2\geq 0.95$ with $<5\%$ unexplained variance is called good. In some of my biological work $R^2>0.999$ is required for proper results. On the other hand, for experiments with only short $x$-axis data ranges, and copious noise, one is lucky to even get a significant correlation, usually circa $0.5$, as a borderline significant (non-zero) result.
Perhaps the best way to communicate how variable the answer is, is to back calculate what the critical $r$ and $r^2$ values are for a $p<0.05$ significance.
First to calculate the t-value from an r-value let us use
$t=\frac{r}{\sqrt{(1-r^2)/(n-2)}}$, where $n\geq 6$
Then $r=\frac{t}{\sqrt{n-2+t^2}}$, where $n\geq 6$, and using the t-significance tables
the critical two-tailed values of $r$ for significance are:
n r r^2
6 0.9496 0.9018
7 0.8541 0.7296
8 0.7827 0.6125
9 0.7267 0.5281
10 0.6812 0.4640
11 0.6434 0.4140
12 0.6113 0.3737
13 0.5836 0.3405
14 0.5594 0.3129
15 0.5377 0.2891
16 0.5187 0.2690
17 0.5013 0.2513
18 0.4857 0.2359
19 0.4715 0.2223
20 0.4584 0.2101
21 0.4463 0.1992
22 0.4352 0.1894
23 0.4249 0.1806
24 0.4152 0.1724
25 0.4063 0.1650
26 0.3978 0.1582
27 0.3899 0.1520
28 0.3824 0.1462
29 0.3753 0.1408
30 0.3685 0.1358
40 0.3167 0.1003
50 0.2821 0.0796
60 0.2568 0.0659
70 0.2371 0.0562
80 0.2215 0.0491
90 0.2086 0.0435
100 0.1977 0.0391
Note that the explained fraction ($r^2$) needed for a significant $r$-value varies from 90% for $n=6$ to 3.9% for $n=100$. Nor does it stop there: the higher the value of $n$, the less explained fraction is needed for significance.
Finally, asking what a 'good' $R^2$ is, is also a bit ambiguous. Unlike $r^2$, $R^2$ can (surprise, shock and awe) actually take on negative values. So, although $R^2$ is more general than $r^2$, it also has problems that never occur with $r^2$. Moreover, like $r$ (see above), $R$ is $n$ biased, and if we adjust $R$ for degrees of freedom using adjusted $R^2$, negative $R^2$ values become even more frequent. | scikit-learn score metric on the coefficient of determination $R^2$ | Actually there are two different measures that are called correlations. Let us then call them little $r$, which is the Pearson correlation coefficient, and big $R$, which is what you have; a correlati | scikit-learn score metric on the coefficient of determination $R^2$
Actually there are two different measures that are called correlations. Let us then call them little $r$, which is the Pearson correlation coefficient, and big $R$, which is what you have; a correlation (usually as $R^2$) adjusted for a generalized residual. Now $|r|=|R|$ only when we restrict ourselves to ordinary least squares linear regression in $Y$. If for example, we restrict our linear regression to slope only and set the intercept to zero, we would then use $R$, not $r$. Little $r$ is still the same, it just won't describe the correlation between the new regression line and the data anymore.
Little r is normalized covariance, i.e., $ r= \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y}=\frac{\sum ^n _{i=1}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum ^n _{i=1}(x_i - \bar{x})^2} \sqrt{\sum ^n _{i=1}(y_i - \bar{y})^2}}$. Finally, $r^2$ is called the coefficient of determination only for the linear case.
Big $R$ is usually explained using ANOVA intermediary quantities:
The total sum of squares proportional to the variance of the data: $\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ }$ $SS_\text{tot}=\sum_i (y_i-\bar{y})^2,$
The regression sum of squares, also called the explained sum of squares:
$SS_\text{reg}=\sum_i (f_i -\bar{y})^2,$
The sum of squares of residuals, also called the residual sum of squares: $SS_\text{res}=\sum_i (y_i - f_i)^2=\sum_i e_i^2\,$
The most general definition of the coefficient of determination is
$R^2 \equiv 1 - {SS_{\rm res}\over SS_{\rm tot}}.\,$
Now, what is the meaning of this $r^2$ or more generally $R^2$? $R^2$ is the explained fraction and $1-R^2$ is the unexplained fraction of the total variance.
What is a good coefficient and what is a bad one? That depends on who says so in what context. Most medical papers say that a correlation is strong when $|r|\geq 0.8$. Since that only explains $0.64$ of the variance, I would call $0.8$ a moderate correlation, and in most of my work $R^2\geq 0.95$ with $<5\%$ unexplained variance is called good. In some of my biological work $R^2>0.999$ is required for proper results. On the other hand, for experiments with only short $x$-axis data ranges, and copious noise, one is lucky to even get a significant correlation, usually circa $0.5$, as a borderline significant (non-zero) result.
Perhaps the best way to communicate how variable the answer is, is to back calculate what the critical $r$ and $r^2$ values are for a $p<0.05$ significance.
First to calculate the t-value from an r-value let us use
$t=\frac{r}{\sqrt{(1-r^2)/(n-2)}}$, where $n\geq 6$
Then $r=\frac{t}{\sqrt{n-2+t^2}}$, where $n\geq 6$, and using the t-significance tables
the critical two-tailed values of $r$ for significance are:
n r r^2
6 0.9496 0.9018
7 0.8541 0.7296
8 0.7827 0.6125
9 0.7267 0.5281
10 0.6812 0.4640
11 0.6434 0.4140
12 0.6113 0.3737
13 0.5836 0.3405
14 0.5594 0.3129
15 0.5377 0.2891
16 0.5187 0.2690
17 0.5013 0.2513
18 0.4857 0.2359
19 0.4715 0.2223
20 0.4584 0.2101
21 0.4463 0.1992
22 0.4352 0.1894
23 0.4249 0.1806
24 0.4152 0.1724
25 0.4063 0.1650
26 0.3978 0.1582
27 0.3899 0.1520
28 0.3824 0.1462
29 0.3753 0.1408
30 0.3685 0.1358
40 0.3167 0.1003
50 0.2821 0.0796
60 0.2568 0.0659
70 0.2371 0.0562
80 0.2215 0.0491
90 0.2086 0.0435
100 0.1977 0.0391
Note that the explained fraction ($r^2$) needed for a significant $r$-value varies from 90% for $n=6$ to 3.9% for $n=100$. Nor does it stop there: the higher the value of $n$, the less explained fraction is needed for significance.
Finally, asking what a 'good' $R^2$ is, is also a bit ambiguous. Unlike $r^2$, $R^2$ can (surprise, shock and awe) actually take on negative values. So, although $R^2$ is more general than $r^2$, it also has problems that never occur with $r^2$. Moreover, like $r$ (see above), $R$ is $n$ biased, and if we adjust $R$ for degrees of freedom using adjusted $R^2$, negative $R^2$ values become even more frequent. | scikit-learn score metric on the coefficient of determination $R^2$
Actually there are two different measures that are called correlations. Let us then call them little $r$, which is the Pearson correlation coefficient, and big $R$, which is what you have; a correlati |
48,742 | scikit-learn score metric on the coefficient of determination $R^2$ | Consider using the precision and recall scores of scikit-learn: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html. It may give you a more tangible number to consider.
Precision and recall are defined as:
The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.
The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.
I, too, often find the score function of the classifiers to be somewhat abstract or not applicable to my use case, but precision and recall give you a percentage of how many of the predicted items were actually predicted correctly, and how many the classifier missed. | scikit-learn score metric on the coefficient of determination $R^2$ | Consider using the precision and recall scores of scikit-learn: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html. It may give you a more tangible number to consider.
Consider using the precision and recall scores of scikit-learn: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html. It may give you a more tangible number to consider.
Precision and recall are defined as:
The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.
The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.
I, too, often find the score function of the classifiers to be somewhat abstract or not applicable to my use case, but precision and recall give you a percentage of how many of the predicted items were actually predicted correctly, and how many the classifier missed. | scikit-learn score metric on the coefficient of determination $R^2$
Consider using the precision and recall scores of scikit-learn: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html. It may give you a more tangible n |
48,743 | scikit-learn score metric on the coefficient of determination $R^2$ | The Coefficient of Determination (R^2) generalizes the correlation coefficient (r) to multiple predictors, and is often summarized as the proportion of variance explained by the model. It will be quite comfortable for anyone used to analyzing linear regression models, and will be discussed in any text or course you might have taken.
1.0 is a perfect score, good is relative.
Note that the answer which discusses precision and recall does not answer the question posed, which was about Support Vector Regression, not Support Vector Classification. True Positive and False Positive assume binary (True/False) responses. | scikit-learn score metric on the coefficient of determination $R^2$ | The Coefficient of Determination (R^2) generalizes the correlation coefficient (r) to multiple predictors, and is often summarized as the proportion of variance explained by the model. It will be qui | scikit-learn score metric on the coefficient of determination $R^2$
The Coefficient of Determination (R^2) generalizes the correlation coefficient (r) to multiple predictors, and is often summarized as the proportion of variance explained by the model. It will be quite comfortable for anyone used to analyzing linear regression models, and will be discussed in any text or course you might have taken.
1.0 is a perfect score, good is relative.
Note that the answer which discusses precision and recall does not answer the question posed, which was about Support Vector Regression, not Support Vector Classification. True Positive and False Positive assume binary (True/False) responses. | scikit-learn score metric on the coefficient of determination $R^2$
The Coefficient of Determination (R^2) generalizes the correlation coefficient (r) to multiple predictors, and is often summarized as the proportion of variance explained by the model. It will be qui |
48,744 | Bayesian Aproach: Infering the N and $\theta$ values from a binomial distribution | The hierarchical model seems to be
$$
\begin{eqnarray}
X_i\mid N=n,\Theta=\theta,\Lambda=\lambda &\sim& \mathrm{Bin}(n,\theta) \qquad\qquad\qquad i=1,\dots,m \\
N\mid\Theta=\theta,\Lambda=\lambda &\sim& \mathrm{Poisson}(\lambda/\theta) \\
\Theta &\sim& \mathrm{Beta}(\alpha,\beta) \\
\Lambda &\sim& \mathrm{Gamma}(\kappa_1,\kappa_2) \, , \\
\end{eqnarray}
$$
in which $\alpha,\beta,\kappa_1$, and $\kappa_2$ are known. Defining $X=(X_1,\dots,X_m)$ and $x=(x_1,\dots,x_m)$, and using Bayes' Theorem, we have
$$
\begin{eqnarray}
f_{N,\Theta,\Lambda\mid X}(n,\theta,\lambda\mid x) &\propto& f_{X\mid N,\Theta,\Lambda}(x\mid n,\theta,\lambda)\; f_{N,\Theta,\Lambda}(n,\theta,\lambda) \\
&=& \left(\prod_{i=1}^m f_{X_i\mid N,\Theta,\Lambda}(x_i\mid n,\theta,\lambda)\right) \; f_{N\mid\Theta,\Lambda}(n\mid\theta,\lambda) \; f_\Theta(\theta) \; f_{\Lambda}(\lambda) \, .\\
\end{eqnarray}
$$
Therefore,
$$
\begin{eqnarray}
f_{N\mid X}(n\mid x) &=& \int_0^1 \int_0^\infty f_{N,\Theta,\Lambda\mid X}(n,\theta,\lambda\mid x)\,d\lambda\,d\theta \\
&\propto& \frac{1}{n!} \left( \prod_{i=1}^m {n\choose x_i}\right) \int_0^1 \int_0^\infty \theta^{t-n+\alpha -1} (1-\theta)^{mn-t+\beta-1} \\
&& \qquad\qquad\qquad\qquad\qquad\times\lambda^{n+\kappa_1-1} \exp\left(-\left(\frac{1+\kappa_2\theta}{\theta}\right)\lambda\right)\,d\lambda\,d\theta \, ,
\end{eqnarray}
$$
in which $t=\sum_{i=1}^m x_i$. The integration in $\lambda$ is easy. Just identify the kernel of a
$$
\mathrm{Gamma}\left( n+\kappa_1, \frac{1+\kappa_2\theta}{\theta} \right)
$$
distribution, and adjust the normalization constant. Doing this, we have
$$
f_{N\mid X}(n\mid x) \propto \frac{\Gamma(n+\kappa_1)}{n!} \left( \prod_{i=1}^m {n\choose x_i}\right) \int_0^1 \frac{\theta^{t+\alpha-mn-1}(1-\theta)^{mn-t+\beta-1}}{(1+\kappa_2\theta)^{n+\kappa_1}} \,d\theta \, .
$$
Of course, the last expression holds for $n\geq \max\{x_1,\dots,x_m\}$. If this is not clear, write the indicators $I_{\{0,\dots,n\}}(x_i)$ in the binomials explicitly.
If we use an improper prior $f_\Lambda(\lambda)\propto 1/\lambda$, the integrations are easy. Using a uniform prior $(\alpha=\beta=1)$ for $\Theta$, we find the closed form
$$
f_{N\mid X}(n\mid x) \propto \frac{n\,\Gamma(t+1)\Gamma(mn-t+1)}{\Gamma(mn+2)} .
$$
How do we normalize this? What is the Bayes estimate with quadratic loss $\mathrm{E}[N\mid X=x]$? Can we find analytically at least the posterior mode? Can we sample from this distribution to find an estimate and a credible set?
I didn't check the algebra. Please, do it. | Bayesian Aproach: Infering the N and $\theta$ values from a binomial distribution | The hierarchical model seems to be
$$
\begin{eqnarray}
X_i\mid N=n,\Theta=\theta,\Lambda=\lambda &\sim& \mathrm{Bin}(n,\theta) \qquad\qquad\qquad i=1,\dots,m \\
N\mid\Theta=\theta,\Lambda=\lambda | Bayesian Aproach: Infering the N and $\theta$ values from a binomial distribution
The hierarchical model seems to be
$$
\begin{eqnarray}
X_i\mid N=n,\Theta=\theta,\Lambda=\lambda &\sim& \mathrm{Bin}(n,\theta) \qquad\qquad\qquad i=1,\dots,m \\
N\mid\Theta=\theta,\Lambda=\lambda &\sim& \mathrm{Poisson}(\lambda/\theta) \\
\Theta &\sim& \mathrm{Beta}(\alpha,\beta) \\
\Lambda &\sim& \mathrm{Gamma}(\kappa_1,\kappa_2) \, , \\
\end{eqnarray}
$$
in which $\alpha,\beta,\kappa_1$, and $\kappa_2$ are known. Defining $X=(X_1,\dots,X_m)$ and $x=(x_1,\dots,x_m)$, and using Bayes' Theorem, we have
$$
\begin{eqnarray}
f_{N,\Theta,\Lambda\mid X}(n,\theta,\lambda\mid x) &\propto& f_{X\mid N,\Theta,\Lambda}(x\mid n,\theta,\lambda)\; f_{N,\Theta,\Lambda}(n,\theta,\lambda) \\
&=& \left(\prod_{i=1}^m f_{X_i\mid N,\Theta,\Lambda}(x_i\mid n,\theta,\lambda)\right) \; f_{N\mid\Theta,\Lambda}(n\mid\theta,\lambda) \; f_\Theta(\theta) \; f_{\Lambda}(\lambda) \, .\\
\end{eqnarray}
$$
Therefore,
$$
\begin{eqnarray}
f_{N\mid X}(n\mid x) &=& \int_0^1 \int_0^\infty f_{N,\Theta,\Lambda\mid X}(n,\theta,\lambda\mid x)\,d\lambda\,d\theta \\
&\propto& \frac{1}{n!} \left( \prod_{i=1}^m {n\choose x_i}\right) \int_0^1 \int_0^\infty \theta^{t-n+\alpha -1} (1-\theta)^{mn-t+\beta-1} \\
&& \qquad\qquad\qquad\qquad\qquad\times\lambda^{n+\kappa_1-1} \exp\left(-\left(\frac{1+\kappa_2\theta}{\theta}\right)\lambda\right)\,d\lambda\,d\theta \, ,
\end{eqnarray}
$$
in which $t=\sum_{i=1}^m x_i$. The integration in $\lambda$ is easy. Just identify the kernel of a
$$
\mathrm{Gamma}\left( n+\kappa_1, \frac{1+\kappa_2\theta}{\theta} \right)
$$
distribution, and adjust the normalization constant. Doing this, we have
$$
f_{N\mid X}(n\mid x) \propto \frac{\Gamma(n+\kappa_1)}{n!} \left( \prod_{i=1}^m {n\choose x_i}\right) \int_0^1 \frac{\theta^{t+\alpha-mn-1}(1-\theta)^{mn-t+\beta-1}}{(1+\kappa_2\theta)^{n+\kappa_1}} \,d\theta \, .
$$
Of course, the last expression holds for $n\geq \max\{x_1,\dots,x_m\}$. If this is not clear, write the indicators $I_{\{0,\dots,n\}}(x_i)$ in the binomials explicitly.
If we use an improper prior $f_\Lambda(\lambda)\propto 1/\lambda$, the integrations are easy. Using a uniform prior $(\alpha=\beta=1)$ for $\Theta$, we find the closed form
$$
f_{N\mid X}(n\mid x) \propto \frac{n\,\Gamma(t+1)\Gamma(mn-t+1)}{\Gamma(mn+2)} .
$$
How do we normalize this? What is the Bayes estimate with quadratic loss $\mathrm{E}[N\mid X=x]$? Can we find analytically at least the posterior mode? Can we sample from this distribution to find an estimate and a credible set?
I didn't check the algebra. Please, do it. | Bayesian Aproach: Infering the N and $\theta$ values from a binomial distribution
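A rough numerical answer to those questions, assuming the closed form above is correct (a sketch of my own; the data vector x and the grid cut-off are made up): evaluate the unnormalized pmf on a grid of n, normalize, and read off the posterior mean and mode.
x <- c(5, 3, 4, 6, 2)                    # hypothetical observed counts
m <- length(x); t <- sum(x)
n.grid <- max(x):200                     # truncate the support at some large value
log.w <- log(n.grid) + lgamma(t + 1) + lgamma(m * n.grid - t + 1) - lgamma(m * n.grid + 2)
post <- exp(log.w - max(log.w)); post <- post / sum(post)  # normalized posterior over the grid
sum(n.grid * post)                       # approximate E[N | X = x]
n.grid[which.max(post)]                  # approximate posterior mode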
The hierarchical model seems to be
$$
\begin{eqnarray}
X_i\mid N=n,\Theta=\theta,\Lambda=\lambda &\sim& \mathrm{Bin}(n,\theta) \qquad\qquad\qquad i=1,\dots,m \\
N\mid\Theta=\theta,\Lambda=\lambda |
48,745 | Why does correlation come out the same on raw data and z-scored (standardized) data? | Two facts:
(i) Correlation is the covariance of the z-scores.
(e.g. see here about four-fifths of the way down the page; alternatively, try
zx = scale(x) # this returns z-scores directly, but you can use your form instead
zy = scale(y)
cov(zx,zy);cor(x,y)
to see that the covariance of the z-scores and the correlation are the same.)
(ii) If you take z-scores of z-scores you get z-scores. You can see this by direct reasoning (if the mean and standard deviation are already 0 and 1, you change nothing by subtracting 0 and dividing by 1), and you can double-check by looking at scale(scale(x))
Hence the correlation of the z-scores is the covariance of the z-scores of the z-scores, which is just the covariance of the z-scores, which is just the correlation of the original scores. | Why does correlation come out the same on raw data and z-scored (standardized) data? | Two facts:
(i) Correlation is the covariance of the z-scores.
(e.g. see here about four-fifths of the way down the page; alternatively, try
zx = scale(x) # this returns z-scores directly, but you c | Why does correlation come out the same on raw data and z-scored (standardized) data?
Two facts:
(i) Correlation is the covariance of the z-scores.
(e.g. see here about four-fifths of the way down the page; alternatively, try
zx = scale(x) # this returns z-scores directly, but you can use your form instead
zy = scale(y)
cov(zx,zy);cor(x,y)
to see that covariance of z-scores and correlation are the same.
(ii) If you take z-scores of z-scores you get z-scores. You can see this by direct reasoning (if the mean and standard deviation are already 0 and 1, you change nothing by subtracting 0 and dividing by 1), and you can double-check by looking at scale(scale(x))
Hence the correlation of the z-scores is the covariance of the z-scores of the z-scores, which is just the covariance of the z-scores, which is just the correlation of the original scores. | Why does correlation come out the same on raw data and z-scored (standardized) data?
Two facts:
(i) Correlation is the covariance of the z-scores.
(e.g. see here about four-fifths of the way down the page; alternatively, try
zx = scale(x) # this returns z-scores directly, but you c |
48,746 | Why does correlation come out the same on raw data and z-scored (standardized) data? | Correlation is scale-invariant. Try
> cor(zx, y)
and you'll see that the correlation between the raw and z scored data is also the same. | Why does correlation come out the same on raw data and z-scored (standardized) data? | Correlation is scale-invariant. Try
> cor(zx, y)
and you'll see that the correlation between the raw and z scored data is also the same. | Why does correlation come out the same on raw data and z-scored (standardized) data?
Correlation is scale-invariant. Try
> cor(zx, y)
and you'll see that the correlation between the raw and z scored data is also the same. | Why does correlation come out the same on raw data and z-scored (standardized) data?
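A quick check one might run (my own illustration with simulated x and y): any positive linear rescaling of x leaves the correlation untouched.
set.seed(1)
x <- rnorm(30); y <- x + rnorm(30)
cor(x, y)
cor(2 * x + 3, y)   # identical: correlation is invariant to positive linear rescaling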
Correlation is scale-invariant. Try
> cor(zx, y)
and you'll see that the correlation between the raw and z scored data is also the same. |
48,747 | Comparing means with unequal variance and very different sample sizes | The traditional test for comparing two sample means is the t-test. There are no assumptions about the sizes of the samples, so it is OK if they are different.
However, you touch upon the normality assumption. Even if the population is not normally distributed, the Central Limit Theorem lets us treat the sample means as approximately normal as the sample sizes increase. This means your test will be approximate, but the sample size for females is a little low.
Finally, the result of the t-test will be different for the original and log-transformed data. Do you have a specific reason based on your data to use the logarithm? Perhaps there is another assumption you would like to test about the behavior of the log of your data? Do not take the log simply to create a normal curve if there is no deeper meaning, but for fun compare the difference between the two results anyway! | Comparing means with unequal variance and very different sample sizes | The traditional test for comparing two sample means is the t-test. There are no assumptions about the sizes of the samples, so it is OK if they are different.
However, you touch upon the normality as | Comparing means with unequal variance and very different sample sizes
The traditional test for comparing two sample means is the t-test. There are no assumptions about the sizes of the samples, so it is OK if they are different.
However, you touch upon the normality assumption. Even if the population is not normally distributed, the Central Limit Theorem allows us to infer normality as the sample sizes increase. This means your test will be approximate, but the sample size for female is a little low.
Finally, the result of the t-test will be different for the original and log-ed data. Do you have a specific reason based on your data to use the logarithm? Perhaps there is another assumption you would like to test about the behavior of the log of your data? Do not take the log simply to create a normal curve if there is no deeper meaning, but for fun compare the difference between the two results anyway! | Comparing means with unequal variance and very different sample sizes
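As a hedged sketch of what such a test could look like in R (the group sizes and lognormal parameters below are invented to mimic the 1000-vs-50 setting; t.test uses the Welch unequal-variance version by default):
set.seed(42)
male   <- rlnorm(1000, meanlog = 2.0, sdlog = 0.6)  # hypothetical data
female <- rlnorm(50,   meanlog = 2.2, sdlog = 0.9)
t.test(female, male)              # Welch t-test on the raw scale
t.test(log(female), log(male))    # same test after a log transform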
The traditional test for comparing two sample means is the t-test. There are no assumptions about the sizes of the samples, so it is OK if they are different.
However, you touch upon the normality as |
48,748 | Comparing means with unequal variance and very different sample sizes | Taking logs and testing the mean on the log scale would normally not correspond to a difference in means on the original scale.
However:
[Edit: my comments apply to an earlier version of the data, and don't apply to the data that are presently in the question. As such, my comments really apply to the situation where the coefficient of variation in two close-to-lognormal samples are very similar, rather than to the case now at hand.]
The coefficient of variation is almost identical in the two samples, which does suggest that you might consider these as having a scale shift; if you think the logs look reasonably close to normal, then that would suggest lognormal distributions with common coefficient of variation. In that case a difference of means on the log-scale would actually indicate a scale-shift on the original scale (and hence that one of the means is a multiple of the other mean on the original scale).
That is, under an assumption of equal variance and normal distribution on the log-scale, a rejection of equality of means implies that the means on the original scale have a ratio that differs from 1.
It seems like that would be a reasonable assumption.
There are other things you could do, though. | Comparing means with unequal variance and very different sample sizes | Taking logs and testing the mean on the log scale would normally not correspond to a difference in means on the original scale.
However:
[Edit: my comments apply to an earlier version of the data, and | Comparing means with unequal variance and very different sample sizes
Taking logs and testing the mean on the log scale would normally not correspond to a difference in means on the original scale.
However:
[Edit: my comments apply to an earlier version of the data, and don't apply to the data that are presently in the question. As such, my comments really apply to the situation where the coefficient of variation in two close-to-lognormal samples are very similar, rather than to the case now at hand.]
The coefficient of variation is almost identical in the two samples, which does suggest that you might consider these as having a scale shift; if you think the logs look reasonably close to normal, then that would suggest lognormal distributions with common coefficient of variation. In that case a difference of means on the log-scale would actually indicate a scale-shift on the original scale (and hence that one of the means is a multiple of the other mean on the original scale).
That is, under an assumption of equal variance and normal distribution on the log-scale, a rejection of equality of means implies that the means on the original scale have a ratio that differs from 1.
It seems like that would be a reasonable assumption.
There are other things you could do, though. | Comparing means with unequal variance and very different sample sizes
Taking logs and testing the mean on the log scale would normally not correspond to a difference in means on the original scale.
However:
[Edit: my comments apply to an earlier version of the data, and |
48,749 | Comparing means with unequal variance and very different sample sizes | From the data you cannot infer that the variance between males and females is the same; in fact, the opposite is almost certainly true. Also, since 50 is indeed a bit low, suppose you cannot assume normality.
Compare each female's value with the median of men's values. If the median female is neither better nor worse than the median male (null hypothesis), then each female has a 1/2 chance of being better than the median male. The chance that K or fewer females are worse than the median male is $P(K) = 2^{-50} \sum_{m=0}^K {50 \choose m}$. Here we consider the error in the males' median to be negligible, since there are many more males than females, and the variance between males is smaller than the variance between females. | Comparing means with unequal variance and very different sample sizes | From the data you cannot infer that the variance between males and females is same, in fact the opposite is almost certainly true. Also, since 50 is indeed a bit low, suppose you cannot assume normal | Comparing means with unequal variance and very different sample sizes
From the data you cannot infer that the variance between males and females is same, in fact the opposite is almost certainly true. Also, since 50 is indeed a bit low, suppose you cannot assume normality.
Compare each female's value with the median of men's values. If the median female is neither better nor worse than the median male (null hypothesis), then each female has a 1/2 chance of being better than the median male. The chance that K or fewer females are worse than the median male is $P(K) = 2^{-50} \sum_{m=0}^K {50 \choose m}$. Here we consider the error in the males' median to be negligible, since there are many more males than females, and the variance between males is smaller than the variance between females. | Comparing means with unequal variance and very different sample sizes
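In R this tail probability is just a binomial CDF; a minimal sketch (K below is a made-up count of females falling below the male median):
K <- 17
pbinom(K, size = 50, prob = 0.5)   # P(at most K of 50 fall below the median) under H0
sum(choose(50, 0:K)) / 2^50        # same thing written out as the sum above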
From the data you cannot infer that the variance between males and females is same, in fact the opposite is almost certainly true. Also, since 50 is indeed a bit low, suppose you cannot assume normal |
48,750 | Goodness of fit tests for quantile regression in R | quantreg includes several AIC functions: "AIC.nlrq", "AIC.rq", "AIC.rqs" and "AIC.rqss" and similar log likelihood functions.
It also has a vignette at vignette("rq",package="quantreg").
Do these do what you want? | Goodness of fit tests for quantile regression in R | quantreg includes several AIC functions: "AIC.nlrq", "AIC.rq", "AIC.rqs" and "AIC.rqss" and similar log likelihood functions.
It also has a vignette at vignette("rq",package="quantreg").
Do these do | Goodness of fit tests for quantile regression in R
quantreg includes several AIC functions: "AIC.nlrq", "AIC.rq", "AIC.rqs" and "AIC.rqss" and similar log likelihood functions.
It also has a vignette at vignette("rq",package="quantreg").
Do these do what you want? | Goodness of fit tests for quantile regression in R
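A minimal usage sketch, assuming the quantreg package and its bundled engel data set (AIC() and logLik() dispatch to the rq methods mentioned above):
library(quantreg)
data(engel)
fit <- rq(foodexp ~ income, tau = 0.5, data = engel)  # median regression
AIC(fit)
logLik(fit)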
quantreg includes several AIC functions: "AIC.nlrq", "AIC.rq", "AIC.rqs" and "AIC.rqss" and similar log likelihood functions.
It also has a vignette at vignette("rq",package="quantreg").
Do these do |
48,751 | Large scale ridge regression | I've found that LSQR is ideal for problems like this - I've used it successfully for operators of about 3e5 * 1e6 or so. Check http://www.stanford.edu/group/SOL/software/lsqr.html for details. I've used Friedlander's (I think) C port and the python port, which I have (hastily and sloppily) ported to R. | Large scale ridge regression | I've found that LSQR is ideal for problems like this - I've used it successfully for operators of about 3e5 * 1e6 or so. Check http://www.stanford.edu/group/SOL/software/lsqr.html for details. I've | Large scale ridge regression
I've found that LSQR is ideal for problems like this - I've used it successfully for operators of about 3e5 * 1e6 or so. Check http://www.stanford.edu/group/SOL/software/lsqr.html for details. I've used Friedlander's (I think) C port and the python port, which I have (hastily and sloppily) ported to R. | Large scale ridge regression
I've found that LSQR is ideal for problems like this - I've used it successfully for operators of about 3e5 * 1e6 or so. Check http://www.stanford.edu/group/SOL/software/lsqr.html for details. I've |
48,752 | Optimal orthogonal polynomial chaos basis functions for log-normally distributed random variables | There's no optimal polynomial basis for the log-normal distribution, because it's not determinate in the Hamburger sense. The approach indicated by Xiu and Karniadakis (Generalized Polynomial Chaos) doesn't always work. I'll be a bit informal here. For a rigorous treatment, see here.
Let $X$ denote a continuous random variable and $f_X(x)$ its pdf. Also, let's assume that $X$ has absolute finite moments of all order, i.e.,
$$ \int_\mathbb{R}|x|^nf_X(x)dx < \infty \quad \forall n \in N$$
This implies that $X$ has finite moments of all orders, i.e.
$$ m_n=\int_\mathbb{R}x^nf_X(x)dx < \infty \quad \forall n \in N$$
With these conditions, you can determine the family of orthogonal polynomials associated to $f_X(x)$, for example applying the Gram-Schmidt procedure to the monomial basis, or using the more stable Stieltjes procedure. The lognormal distribution satisfies these assumptions, and thus there exists a set of orthonormal polynomials associated with it, the Stieltjes–Wigert polynomials.
However, this is not enough to be able to expand each mean-square integrable random variable $Y=g(X)$ in a series of orthogonal polynomials of $X$, which converges in square mean to $Y$. The reason is that with the conditions we stated, we cannot assure that the orthogonal polynomials are dense in $A=L^2(\Omega,\sigma(X),P)$, where $\Omega$ is the sample space of the probability space to which $X$ is associated, $P$ is the probability measure of the probability space and $\sigma(X)$ is the $\sigma$-algebra generated by $X$.
If this sounds too complicated, think of it like this: the space of all square-mean integrable random variables $Y$ which are a measurable function $g(X)$ of $X$ is a very large space. Unless you are sure that the polynomials orthonormal with respect to $f_X(x)$ are dense in such a space, there may be some $Y$ which cannot be expressed as a series of said polynomials.
Under our assumptions, a necessary and sufficient condition for the density of the orthonormal polynomials in $A$ is that $f_X(x)$ is determinate in the Hamburger sense, i.e., that it is uniquely determined by the sequence of its moments of all orders. The lognormal distribution is not determinate in the Hamburger sense, and thus there exist some square-mean integrable RVs $Y=g(X)$ such that the expansion of $Y$ in a series of Stieltjes–Wigert polynomials either doesn't converge, or it converges to a limit which is not $Y$ (recall that here convergence is always intended in the mean square sense). For some examples, again see the paper I linked above.
End of the story, for your random coefficient Poisson PDE it's probably better to represent $u$ as a series of Hermite polynomials. The convergence rate will be quite slow, though. | Optimal orthogonal polynomial chaos basis functions for log-normally distributed random variables | There's no optimal polynomial basis for the log-normal distribution, because it's not determinate in the Hamburger sense. The approach indicated by Xiu and Karniadakis (Generalized Polynomial Chaos) d | Optimal orthogonal polynomial chaos basis functions for log-normally distributed random variables
There's no optimal polynomial basis for the log-normal distribution, because it's not determinate in the Hamburger sense. The approach indicated by Xiu and Karniadakis (Generalized Polynomial Chaos) doesn't always work. I'll be a bit informal here. For a rigorous treatment, see here.
Let $X$ denote a continuous random variable and $f_X(x)$ its pdf. Also, let's assume that $X$ has absolute finite moments of all order, i.e.,
$$ \int_\mathbb{R}|x|^nf_X(x)dx < \infty \quad \forall n \in N$$
This implies that $X$ has finite moments of all orders, i.e.
$$ m_n=\int_\mathbb{R}x^nf_X(x)dx < \infty \quad \forall n \in N$$
With these conditions, you can determine the family of orthogonal polynomials associated to $f_X(x)$, for example applying the Gram-Schmidt procedure to the monomial basis, or using the more stable Stieltjes procedure. The lognormal distribution satisfies these assumptions, and thus there exists a set of orthonormal polynomials associated with it, the Stieltjes–Wigert polynomials.
However, this is not enough to be able to expand each mean-square integrable random variable $Y=g(X)$ in a series of orthogonal polynomials of $X$, which converges in square mean to $Y$. The reason is that with the conditions we stated, we cannot assure that the orthogonal polynomials are dense in $A=L^2(\Omega,\sigma(X),P)$, where $\Omega$ is the sample space of the probability space to which $X$ is associated, $P$ is the probability measure of the probability space and $\sigma(X)$ is the $\sigma$-algebra generated by $X$.
If this sounds too complicated, think of it like this: the space of all square-mean integrable random variables $Y$ which are a measurable function $g(X)$ of $X$ is a very large space. Unless you are sure that the polynomials orthonormal with respect to $f_X(x)$ are dense in such a space, there may be some $Y$ which cannot be expressed as a series of said polynomials.
Under our assumptions, a necessary and sufficient condition for the density of the orthonormal polynomials in $A$ is that $f_X(x)$ is determinate in the Hamburger sense, i.e., that it is uniquely determined by the sequence of its moments of all orders. The lognormal distribution is not determinate in the Hamburger sense, and thus there exist some square-mean integrable RVs $Y=g(X)$ such that the expansion of $Y$ in a series of Stieltjes–Wigert polynomials either doesn't converge, or it converges to a limit which is not $Y$ (recall that here convergence is always intended in the mean square sense). For some examples, again see the paper I linked above.
End of the story, for your random coefficient Poisson PDE it's probably better to represent $u$ as a series of Hermite polynomials. The convergence rate will be quite slow, though. | Optimal orthogonal polynomial chaos basis functions for log-normally distributed random variables
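For reference (my addition, not part of the original answer), the Hermite expansion of a lognormal variable itself is available in closed form; with $\xi \sim N(0,1)$ and $He_n$ the probabilists' Hermite polynomials, the generating function of the $He_n$ gives
$$
e^{\mu + \sigma\xi} = e^{\mu + \sigma^2/2} \sum_{n=0}^{\infty} \frac{\sigma^n}{n!}\, He_n(\xi) \, ,
$$
which is the representation one would typically truncate when expanding the lognormal coefficient before solving for $u$.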
There's no optimal polynomial basis for the log-normal distribution, because it's not determinate in the Hamburger sense. The approach indicated by Xiu and Karniadakis (Generalized Polynomial Chaos) d |
48,753 | Understanding tail dependence coefficients | The tail dependence coefficients $\lambda_U$ and $\lambda_L$ are measures of extremal dependence that quantify the dependence in the upper and lower tails of a bivariate distribution with continuous margins $F$ and $G$.
The coefficients $\lambda_U$ and $\lambda_L$ are defined in terms of quantile exceedences.
For the upper tail dependence coefficient $\lambda_U$ one looks at the probability that $Y$ exceeds the $u$-quantile $G^{-1}(u)$, given that $X$ exceeds the $u$-quantile $F^{-1}(u)$, and then considers the limit as $u$ goes to $1$, provided it exists.
Large values of $\lambda_U$ imply that joint extremes are more likely than for low values of $\lambda_U$. Interpretation for the lower tail dependence coefficient $\lambda_L$ is analogous.
If $\lambda_U = 0$, then $X$ and $Y$ are said to be asymptotically independent; if $\lambda_U \in (0, 1]$ they are asymptotically dependent.
For independent variables $\lambda_U = 0$; for perfectly (positively) dependent, i.e. comonotonic, variables $\lambda_U = 1$. Note that $\lambda_U = 0$ does NOT imply independence; the comment/statement in the question's reference is wrong. Indeed, consider a bivariate normal distribution with correlation $\rho \notin \{0, 1\}$. Then, one can show that $\lambda_U = 0$ but the variables are dependent.
A distribution with, say, $\lambda_U = 0.5$ does not mean that there is a linear dependence between $X$ and $Y$; it means that $X$ and $Y$ are asymptotically dependent in the upper tail, and the strength of the dependence is 0.5. For example, the Gumbel copula with parameter $\theta = \log(2)/\log(1.5)$ has
a coefficient of upper tail dependence equal to $0.5$.
The following figure shows a sample of 1000 points of the Gumbel copula with parameter $\theta = \log(2)/\log(1.5)$ with uniform margins (left) and standard normal margins (right). The data were generated with the copula package in R (code is provided below).
## Parameters
theta <- log(2)/log(1.5)
n <- 1000
## Generate a sample
library(copula)
set.seed(234)
gumbel.cop <- archmCopula("gumbel", theta)
x <- rCopula(n, gumbel.cop)
## Visualization
par(mfrow = c(1, 2))
plot(x, pch = 16, col = "gray", xlab = "U", ylab = "V")
plot(qnorm(x), pch = 16, col = "gray", xlab = "X", ylab = "Y")
par(mfrow = c(1, 1)) | Understanding tail dependence coefficients | The tail dependence coefficients $\lambda_U$ and $\lambda_L$ are measures of extremal dependence that quantify the dependence in the upper and lower tails of a bivariate distribution with continuous m | Understanding tail dependence coefficients
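As a follow-up check (my own addition, reusing theta and x from the code above): the Gumbel copula has $\lambda_U = 2 - 2^{1/\theta}$, and a crude empirical counterpart can be read off the simulated sample at a high threshold.
2 - 2^(1/theta)                                     # theoretical upper tail dependence: 0.5
u <- 0.95
mean(x[, 1] > u & x[, 2] > u) / mean(x[, 1] > u)    # empirical P(V > u | U > u)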
The tail dependence coefficients $\lambda_U$ and $\lambda_L$ are measures of extremal dependence that quantify the dependence in the upper and lower tails of a bivariate distribution with continuous margins $F$ and $G$.
The coefficients $\lambda_U$ and $\lambda_L$ are defined in terms of quantile exceedences.
For the upper tail dependence coefficient $\lambda_U$ one looks at the probability that $Y$ exceeds the $u$-quantile $G^{-1}(u)$, given that $X$ exceeds the $u$-quantile $F^{-1}(u)$, and then consider the limit as $u$ goes to $1$, provided it exists.
Large values of $\lambda_U$ imply that joint extremes are more likely than for low values of $\lambda_U$. Interpretation for the lower tail dependence coefficient $\lambda_L$ is analogous.
If $\lambda_U = 0$, then $X$ and $Y$ are said to be asymptotically independent; if $\lambda_U \in (0, 1]$ they are asymptotically dependent.
For independent variables $\lambda_U = 0$; for perfectly (positively) dependent, i.e. comonotonic, variables $\lambda_U = 1$. Note that $\lambda_U = 0$ does NOT imply independence; the comment/statement in the question's reference is wrong. Indeed, consider a bivariate normal distribution with correlation $\rho \notin \{0, 1\}$. Then, one can show that $\lambda_U = 0$ but the variables are dependent.
A distribution with, say, $\lambda_U = 0.5$ does not mean that there is a linear dependence between $X$ and $Y$; it means that $X$ and $Y$ are asymptotically dependent in the upper tail, and the strength of the dependence is 0.5. For example, the Gumbel copula with parameter $\theta = \log(2)/\log(1.5)$ has
a coefficient of upper tail dependence equal to $0.5$.
The following figure shows a sample of 1000 points of the Gumbel copula with parameter $\theta = \log(2)/\log(1.5)$ with uniform margins (left) and standard normal margins (right). The data were generated with the copula package in R (code is provided below).
## Parameters
theta <- log(2)/log(1.5)
n <- 1000
## Generate a sample
library(copula)
set.seed(234)
gumbel.cop <- archmCopula("gumbel", theta)
x <- rCopula(n, gumbel.cop)
## Visualization
par(mfrow = c(1, 2))
plot(x, pch = 16, col = "gray", xlab = "U", ylab = "V")
plot(qnorm(x), pch = 16, col = "gray", xlab = "X", ylab = "Y")
par(mfrow = c(1, 1)) | Understanding tail dependence coefficients
The tail dependence coefficients $\lambda_U$ and $\lambda_L$ are measures of extremal dependence that quantify the dependence in the upper and lower tails of a bivariate distribution with continuous m |
48,754 | Mean difference for count data | A t-test has already been proposed, a Poisson-like generalized linear regression has been proposed. With > 1000 sample size, how about bootstrapping the difference between both samples? It's easy, it's fast, it gives not only a point estimate but also a distribution, and it gets rid of all assumptions of normality or Poisson or negative-binomial and so on. Even if they are small counts, bootstrapping will do the job. | Mean difference for count data | A t-test has already been proposed, a Poisson-like generalized linear regression has been proposed. With > 1000 sample size, how about bootstrapping the difference between both samples? It's easy, it' | Mean difference for count data
A t-test has already been proposed, a Poisson-like generalized linear regression has been proposed. With > 1000 sample size, how about bootstrapping the difference between both samples? It's easy, it's fast, it gives not only a point estimate but also a distribution, and it gets rid of all assumptions of normality or Poisson or negative-binomial and so on. Even if they are small counts, bootstrapping will do the job. | Mean difference for count data
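A rough sketch of such a bootstrap (illustration only; s1 and s2 below are simulated stand-ins for the two samples):
set.seed(1)
s1 <- rpois(1500, 4); s2 <- rpois(1200, 3.8)   # made-up count data
boot.diff <- replicate(10000, mean(sample(s1, replace = TRUE)) -
                              mean(sample(s2, replace = TRUE)))
quantile(boot.diff, c(0.025, 0.975))           # bootstrap 95% CI for the difference in means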
A t-test has already been proposed, a Poisson-like generalized linear regression has been proposed. With > 1000 sample size, how about bootstrapping the difference between both samples? It's easy, it' |
48,755 | Mean difference for count data | Given your large sample sizes, you could probably use a t-test on the means. If your sample sizes are equal, you are in pretty good shape whether you want to use a pooled estimate of the variance or unpooled (Welch's test). Do a one-sided test if you are sure that the population of s1 has a mean at least as large as the mean of the population of s2.
Note: If the variances are much larger than the means, your counts are not Poisson. But what matters here is the distribution of the sample averages, and that should be nearly normal, unless the data are super-skewed. In that case, you could do a non-parametric test like the Kruskal-Wallis. | Mean difference for count data | Give your large sample sizes, you could probably use a t-test on the means. If your sample sizes are equal, you are in pretty good shape whether you want to use a pooled estimate of the variance or un | Mean difference for count data
Given your large sample sizes, you could probably use a t-test on the means. If your sample sizes are equal, you are in pretty good shape whether you want to use a pooled estimate of the variance or unpooled (Welch's test). Do a one-sided test if you are sure that the population of s1 has a mean at least as large as the mean of the population of s2.
Note: If the variances are much larger than the means, your counts are not Poisson. But what matters here is the distribution of the sample averages, and that should be nearly normal, unless the data are super-skewed. In that case, you could do a non-parametric test like the Kruskal-Wallis. | Mean difference for count data
Give your large sample sizes, you could probably use a t-test on the means. If your sample sizes are equal, you are in pretty good shape whether you want to use a pooled estimate of the variance or un |
48,756 | Mean difference for count data | I would suggest you fit a Poisson or loglinear regression model with just one dummy variable created for the two groups and then test the slope parameter, say, $H_a: \beta_1 >0$. Any test method (LRT, Wald, or score) under the maximum likelihood framework can be used. As for over-dispersion problem, you may consider other count models such as negative binomial or generalized Poisson models. This should essentially give you a two-sample test for count data. | Mean difference for count data | I would suggest you fit a Poisson or loglinear regression model with just one dummy variable created for the two groups and then test the slope parameter, say, $H_a: \beta_1 >0$. Any test method (LRT, | Mean difference for count data
I would suggest you fit a Poisson or loglinear regression model with just one dummy variable created for the two groups and then test the slope parameter, say, $H_a: \beta_1 >0$. Any test method (LRT, Wald, or score) under the maximum likelihood framework can be used. As for over-dispersion problem, you may consider other count models such as negative binomial or generalized Poisson models. This should essentially give you a two-sample test for count data. | Mean difference for count data
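A hedged sketch of this approach (the counts and group sizes are made up; glm.nb from MASS is one way to handle over-dispersion):
set.seed(2)
counts <- c(rpois(1500, 4), rpois(1200, 3.5))      # hypothetical data
group  <- factor(rep(c("s1", "s2"), c(1500, 1200)))
summary(glm(counts ~ group, family = poisson))     # Wald test of the group coefficient
# library(MASS); summary(glm.nb(counts ~ group))   # negative binomial alternative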
I would suggest you fit a Poisson or loglinear regression model with just one dummy variable created for the two groups and then test the slope parameter, say, $H_a: \beta_1 >0$. Any test method (LRT, |
48,757 | Whether to test items for normality (and transform) when performing confirmatory factor analysis on a set of scales? | Ordered categorical items and normality:
First, ordered categorical items are discrete and lumpy. In particular, 3-point response scales lack the granularity required to even provide a rudimentary approximation of normality. When you have more response options in your ordered categorical variable, the item has more potential to approximate a normally distributed variable.
Ordered categorical items have ceilings and floors. If the mean is close to the ceiling or floor, then they are almost always skewed with the tail pointing away from the floor or ceiling as the case may be.
There is a model of responding to ordered categorical items, which suggests that a latent continuous numeric variable underlies responses. But that is different to examining the raw variable.
Whether to look at normality of items when doing CFA:
When looking at typical self-report scales on 3,4,5 point scales, I don't think it makes much sense to look at normality.
Generally, you find that the size of the skew is related to the degree to which the item deviates from the scale mid-point, and moves towards either the minimum or maximum possible value. Thus, I imagine your most skewed items would be those on a 0-1-2 scale that have means either close to 0 or close to 2. That said, there are exceptions where people can focus their responses towards either extreme (e.g., see some contaminated Amazon 5-star ratings).
So in summary, I think it is useful to get a sense of how people have used the response scale. Is it consistent with the most common responses being around the mean for the item, or are scores around the extremes, or is there something else guiding responses? The aim here is just to build an understanding of how people have used the scale.
In most psychological contexts, I find that the mean captures most of what is going on with the distribution. The main preliminary step I like to do is to see whether any items suffer from severe floor or ceiling effects. So for example, if you had an item with a mean below 0.4 or above 1.6 on a 0-1-2 scale, you might want to think about whether the item is adequately discriminating people. I wouldn't automatically drop such an item, but I would think about what it is contributing.
Should non-normal items be transformed before CFA?
As already mentioned, the items are not normally distributed anyway. Furthermore, the natural scale of ordered categorical items prevents extreme outliers and limits extreme skew. In addition, items will typically be scored using their original scaling, so it is best to leave them as is. For this reason, I tend not to transform individual items when performing CFA.
A second point relates generally to how to model ordered categorical items. While you can do CFA on individual items, there are also a range of more advanced, and arguably better, alternative approaches that are designed to explicitly model ordered categorical items:
MPlus has various models for ordinal items with thresholds between items
Amos has some models for modelling ordinal items
Optimal scaling PCA
Factor analysis on polychoric correlations
You might want to read this presentation by Bowen and Wegmann where they discuss solutions. | Whether to test items for normality (and transform) when performing confirmatory factor analysis on | Ordered categorical items and normality:
First, ordered categorical items are discrete and lumpy. In particular, 3-point response scales lack the granularity required to even provide a rudimentary app | Whether to test items for normality (and transform) when performing confirmatory factor analysis on a set of scales?
Ordered categorical items and normality:
First, ordered categorical items are discrete and lumpy. In particular, 3-point response scales lack the granularity required to even provide a rudimentary approximation of normality. When you have more response options in your ordered categorical variable, the item has more potential to approximate a normally distributed variable.
Ordered categorical items have ceilings and floors. If the mean is close to the ceiling or floor, then they are almost always skewed with the tail pointing away from the floor or ceiling as the case may be.
There is a model of responding to ordered categorical items, which suggests that a latent continuous numeric variable underlies responses. But that is different to examining the raw variable.
Whether to look at normality of items when doing CFA:
When looking at typical self-report scales on 3,4,5 point scales, I don't think it makes much sense to look at normality.
Generally, you find that the size of the skew is related to the degree to which the item deviates from the scale mid-point, and moves towards either the minimum or maximum possible value. Thus, I imagine your most skewed items would be those on a 0-1-2 scale that have means either close to 0 or close to 2. That said, there are exceptions where people can focus their responses towards either extreme (e.g., see some contaminated Amazon 5-star ratings).
So in summary, I think it is useful to get a sense of how people have used the response scale. Is it consistent with the most common responses being around the mean for the item or are scores around the extremes, or is there something else guiding responses. The aim here is just to build an understanding of how people have used the scale.
In most psychological contexts, I find that the mean captures most of what is going on with the distribution. The main preliminary step I like to do is to see whether any items suffer from severe floor or ceiling effects. So for example, if you had an item with a mean below 0.4 or above 1.6 on a 0-1-2 scale, you might want to think about whether the item is adequately discriminating people. I wouldn't automatically drop such an item, but I would think about what it is contributing.
Should non-normal items be transformed before CFA?
As already mentioned, the items are not normally distributed anyway. Furthermore, the natural scale of ordered categorical items, prevents extreme outliers, and limits extreme skew. Furthermore, items will typically be scored using their original scaling, so it is best to leave them as is. For this reason, I tend not to transform individual items when performing CFA.
A second point relates generally to how to model ordered categorical items. While you can do CFA on individual items, there are also a range of more advanced, and arguably better, alternative approaches that are designed to explicitly model ordered categorical items:
MPlus has various models for ordinal items with thresholds between items
Amos has some models for modelling ordinal items
Optimal scaling PCA
Factor analysis on polychoric correlations
You might want to read this presentation by Bowen and Wegmann where they discuss solutions. | Whether to test items for normality (and transform) when performing confirmatory factor analysis on
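As one concrete (and hedged) illustration of the polychoric/threshold approach listed above, an ordered-categorical CFA in lavaan could look like this; lavaan is not mentioned in the answer, and the data frame d and the item names are hypothetical:
library(lavaan)
model <- 'anx =~ item1 + item2 + item3 + item4'   # one-factor CFA
fit <- cfa(model, data = d, ordered = c("item1", "item2", "item3", "item4"))
summary(fit, fit.measures = TRUE)                 # thresholds + polychoric correlations, WLSMV estimator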
Ordered categorical items and normality:
First, ordered categorical items are discrete and lumpy. In particular, 3-point response scales lack the granularity required to even provide a rudimentary app |
48,758 | Can you use the chi-squared test when the expected values are not determined? | Testing for equality of ratio of females::males across groups is the same as testing for equality of proportion of females across all groups ($r_i = f_i/m_i$, $p_i = f_i/(f_i+m_i)$, so $p_i = r_i/(1+r_i)$ and $r_i = p_i/(1-p_i)$). If the $p_i$ differ, so do the $r_i$.
If you check that the proportions, $p_i$ differ significantly, you can then conclude that the ratios, $r_i$ differ significantly when the $p_i$ do.
So you are testing for equality of proportion across groups.
This is actually the same as testing for independence in the two-way table.
As a result, for the data,
Group 1 2 3 4 5 6 Total
Men : 9 17 13 12 11 19 81
Women: 16 9 11 14 11 5 66
Total: 25 26 24 26 22 23 147
your expected values are just $E_i = \text{row total}\times\text{column total}/\text{overall total}$. E.g. the expected values to go with the first group are:
25 x 81 / 147 = 13.78
25 x 66 / 147 = 11.22
The table of expecteds is:
Group 1 2 3 4 5 6
Men 13.78 14.33 13.22 14.33 12.12 12.67
Women 11.22 11.67 10.78 11.67 9.88 10.33
As a result, you can just calculate the chisquare for the table - just find
(observed - expected)^2/expected
for all $6\times 2$ numbers and add the 12 terms up.
The first column:
(9- 13.78)^2/13.78 = 1.66
(16 - 11.22)^2/11.22 = 2.03
though it's better if you keep more than two decimal places for all the intermediate calculations. If you do it right you should get a chi-square of somewhere close to 11.5 on 5 df. Looking at the Pearson residuals, almost all of that is coming from the first and sixth groups (especially the sixth group). | Can you use the chi-squared test when the expected values are not determined? | Testing for equality of ratio of females::males across groups is the same as testing for equality of proportion of females across all groups ($r_i = f_i/m_i$, $p_i = f_i/(f_i+m_i)$, so $p_i = r_i/(1+r | Can you use the chi-squared test when the expected values are not determined?
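The same calculation can be done in one call; a short sketch using the observed table above:
obs <- rbind(men   = c(9, 17, 13, 12, 11, 19),
             women = c(16, 9, 11, 14, 11, 5))
chisq.test(obs)            # X-squared close to 11.5 on 5 df
chisq.test(obs)$expected   # reproduces the table of expected counts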
Testing for equality of ratio of females::males across groups is the same as testing for equality of proportion of females across all groups ($r_i = f_i/m_i$, $p_i = f_i/(f_i+m_i)$, so $p_i = r_i/(1+r_i)$ and $r_i = p_i/(1-p_i)$). If the $p_i$ differ, so do the $r_i$.
If you check that the proportions, $p_i$ differ significantly, you can then conclude that the ratios, $r_i$ differ significantly when the $p_i$ do.
So you are testing for equality of proportion across groups.
This is actually the same as testing for independence in the two-way table.
As a result, for the data,
Group 1 2 3 4 5 6 Total
Men : 9 17 13 12 11 19 81
Women: 16 9 11 14 11 5 66
Total: 25 26 24 26 22 23 147
your expected values are just $E_i = \text{row total}\times\text{column total}/\text{overall total}$. E.g. the expected values to go with the first group are:
25 x 81 / 147 = 13.78
25 x 66 / 147 = 11.22
The table of expecteds is:
Group 1 2 3 4 5 6
Men 13.78 14.33 13.22 14.33 12.12 12.67
Women 11.22 11.67 10.78 11.67 9.88 10.33
As a result, you can just calculate the chisquare for the table - just find
(observed - expected)^2/expected
for all $6\times 2$ numbers and add the 12 terms up.
The first column:
(9- 13.78)^2/13.78 = 1.66
(16 - 11.22)^2/11.22 = 2.03
though it's better if you keep more than two decimal places for all the intermediate calculations. If you do it right you should get a chi-square of somewhere close to 11.5 on 5 df. Looking at the Pearson residuals, almost all of that is coming from the first and sixth groups (especially the sixth group). | Can you use the chi-squared test when the expected values are not determined?
Testing for equality of ratio of females::males across groups is the same as testing for equality of proportion of females across all groups ($r_i = f_i/m_i$, $p_i = f_i/(f_i+m_i)$, so $p_i = r_i/(1+r |
48,759 | Can you use the chi-squared test when the expected values are not determined? | It depends on your null hypothesis. If your hypothesis is that the ratio of males to females is equal then the expected value is 0.5.
EDIT:
So your data should look something like this:
1 2 3 4 5 6 TOTAL
MEN | a
WOMEN | b
---------------------------------------
TOTAL c d e f g h
$\chi^2 = \sum_{i=1}^2\sum_{j=1}^6 \dfrac{(O_{i,j} - E_{i,j})^2}{E_{i,j}}$
where the entries in your table are your observed values (e.g. How many men in group 1 is $O_{1,1}$). Under the null hypothesis of independence $E_{1,1} = a*c/N$, $E_{1,2} = a*d/N$, etc. where $N$ is total number of observations.
in R you can do
men <- c( 9 , 17 , 13 , 12 , 11 , 19 )
women<- c(16 , 9 , 11 , 14 , 11 , 5)
prop.test(x=men, n=(men+women))
6-sample test for equality of proportions without
continuity correction
data: men out of (men + women)
X-squared = 11.4978, df = 5, p-value = 0.04236
alternative hypothesis: two.sided
sample estimates:
prop 1 prop 2 prop 3 prop 4 prop 5
0.3600000 0.6538462 0.5416667 0.4615385 0.5000000
prop 6
0.7916667
So since the p-value is below 0.05 I would say we have enough evidence to reject the null hypothesis that ALL 6 proportions are equal. At least one of the proportions is not equal to the rest. | Can you use the chi-squared test when the expected values are not determined? | It depends on your null hypothesis. If your hypothesis is that the ratio of males to females is equal then the expected value is 0.5.
EDIT:
So your data should look something like this:
1 | Can you use the chi-squared test when the expected values are not determined?
It depends on your null hypothesis. If your hypothesis is that the ratio of males to females is equal then the expected value is 0.5.
EDIT:
So your data should look something like this:
1 2 3 4 5 6 TOTAL
MEN | a
WOMEN | b
---------------------------------------
TOTAL c d e f g h
$\chi^2 = \sum_{i=1}^2\sum_{j=1}^6 \dfrac{(O_{i,j} - E_{i,j})^2}{E_{i,j}}$
where the entries in your table are your observed values (e.g. How many men in group 1 is $O_{1,1}$). Under the null hypothesis of independence $E_{1,1} = a*c/N$, $E_{1,2} = a*d/N$, etc. where $N$ is total number of observations.
in R you can do
men <- c( 9 , 17 , 13 , 12 , 11 , 19 )
women<- c(16 , 9 , 11 , 14 , 11 , 5)
prop.test(x=men, n=(men+women))
6-sample test for equality of proportions without
continuity correction
data: men out of (men + women)
X-squared = 11.4978, df = 5, p-value = 0.04236
alternative hypothesis: two.sided
sample estimates:
prop 1 prop 2 prop 3 prop 4 prop 5
0.3600000 0.6538462 0.5416667 0.4615385 0.5000000
prop 6
0.7916667
So since the p-value is below 0.05 I would say we have enough evidence to reject the null hypothesis that ALL 6 proportions are equal. At least one of the proportions is not equal to the rest. | Can you use the chi-squared test when the expected values are not determined?
It depends on your null hypothesis. If your hypothesis is that the ratio of males to females is equal then the expected value is 0.5.
EDIT:
So your data should look something like this:
1 |
48,760 | Can you use the chi-squared test when the expected values are not determined? | Is it mandatory to run a test based on the chi2 statistic?
If not, I would suggest you use a likelihood ratio test which can properly account for the fact you do not know the expected fraction of females $p$.
If I write $p_i = p + \Delta p_i$ (with $\Delta p_0 = 0$) for the expected fraction of females in category $i$, then the null hypothesis can be written as:
$$
H_0 : p_0 = p_1 = ... = p_6 = p
$$
or equivalently
$$
H_0 : \Delta p_1 = \Delta p_2 = ... = \Delta p_6 = 0
$$
with $p$ an unknown nuisance parameter.
With a likelihood ratio test, you would build your test statistics as
$$
D = -2 log(L({\bf n},{\bf N}, {\bf \Delta p} = 0, \hat{\hat{p}})) + 2 log(L({\bf n},{\bf N}, {\bf \hat{\Delta p}}, \hat{p}))
$$
with
$$
log(L({\bf n},{\bf N}, {\bf \Delta p}, p)) = \sum_i log f_i(n_i, N_i, p_i)
$$
Now if you believe that your data are normally distributed (I do not think so... I would rather choose a binomial distribution), then $-2 log(L)$ simplifies to
$$
-2 log(L) = \sum_i \frac{(n_i - p_i N_i)^2}{n_i} = \chi^2
$$
so the test statistic would be a difference of $\chi^2$ values
$$
D = \chi^2({\bf n},{\bf N}, {\bf \Delta p} = 0, p) - \chi^2({\bf n},{\bf N}, {\bf \Delta p}, p)
$$
In one case you run your least-square fit with all parameters free and in the other case you run your least-square fit with constrained ${\bf \Delta p} = 0$. If $H_0$ is satisfied $D$ should be distributed as a $\chi^2$ distribution, but as I said I think it is much better to work with a likelihood function and $f_i(n_i, N_i, p_i) = C_{N_i}^{n_i} p_i^{n_i} (1 - p_i)^{N_i - n_i}$ binomially distributed. | Can you use the chi-squared test when the expected values are not determined? | Is it mandatory to run a test based on the chi2 statistics ?
If not, I would suggest you use a likelihood ratio test which can properly account for the fact you do not know the expected fraction of fe | Can you use the chi-squared test when the expected values are not determined?
Is it mandatory to run a test based on the chi2 statistics ?
If not, I would suggest you use a likelihood ratio test which can properly account for the fact you do not know the expected fraction of females $p$.
If I write $p_i = p + \Delta p_i$ (with $\Delta p_0 = 0$) for the expected fraction of females in category $i$, then the null hypothesis can be written as:
$$
H_0 : p_0 = p_1 = ... = p_6 = p
$$
or equivalently
$$
H_0 : \Delta p_1 = \Delta p_2 = ... = \Delta p_6 = 0
$$
with $p$ an unknown nuisance parameter.
With a likelihood ratio test, you would build your test statistics as
$$
D = -2 log(L({\bf n},{\bf N}, {\bf \Delta p} = 0, \hat{\hat{p}})) + 2 log(L({\bf n},{\bf N}, {\bf \hat{\Delta p}}, \hat{p}))
$$
with
$$
log(L({\bf n},{\bf N}, {\bf \Delta p}, p)) = \sum_i log f_i(n_i, N_i, p_i)
$$
Now if you believe that your data are normally distributed (I do not think so... I would rather choose a binomial distribution), then $-2 log(L)$ simplifies to
$$
-2 log(L) = \sum_i \frac{(n_i - p_i N_i)^2}{n_i} = \chi^2
$$
so the test statistic would be a difference of $\chi^2$ values
$$
D = \chi^2({\bf n},{\bf N}, {\bf \Delta p} = 0, p) - \chi^2({\bf n},{\bf N}, {\bf \Delta p}, p)
$$
In one case you run your least-square fit with all parameters free and in the other case you run your least-square fit with constrained ${\bf \Delta p} = 0$. If $H_0$ is satisfied $D$ should be distributed as a $\chi^2$ distribution, but as I said I think it is much better to work with a likelihood function and $f_i(n_i, N_i, p_i) = C_{N_i}^{n_i} p_i^{n_i} (1 - p_i)^{N_i - n_i}$ binomially distributed. | Can you use the chi-squared test when the expected values are not determined?
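With binomially distributed counts, this likelihood ratio test is easy to run in R; a sketch using the counts from the other answers (the model comparison below is the deviance test):
men   <- c(9, 17, 13, 12, 11, 19)
women <- c(16, 9, 11, 14, 11, 5)
group <- factor(1:6)
full <- glm(cbind(women, men) ~ group, family = binomial)  # free p_i per group
null <- glm(cbind(women, men) ~ 1, family = binomial)      # common p
anova(null, full, test = "Chisq")                          # likelihood-ratio (deviance) test on 5 df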
Is it mandatory to run a test based on the chi2 statistics ?
If not, I would suggest you use a likelihood ratio test which can properly account for the fact you do not know the expected fraction of fe |
48,761 | How to draw ROC curve with three response variable? [duplicate] | You might want to have a look at the Volume Under the ROC Surface as defined in the following articles:
Ferri C, Hernández-Orallo J, Salido MA. Volume Under the ROC Surface for Multi-class Problems. Exact Computation and Evaluation of Approximations. Proc. of 14th European Conference on Machine Learning. 2003;108‑120.
He X, Frey EC. The Meaning and Use of the Volume Under a Three-Class ROC Surface (VUS). IEEE Transactions on Medical Imaging. 2008;27(5):577‑588. | How to draw ROC curve with three response variable? [duplicate] | You might want to have a look at the Volume Under the ROC Surface as defined in the following articles:
Ferri C, Hernández-orallo J, Salido MA. Volume Under the ROC Surface for Multi-class Problems. | How to draw ROC curve with three response variable? [duplicate]
You might want to have a look at the Volume Under the ROC Surface as defined in the following articles:
Ferri C, Hernández-orallo J, Salido MA. Volume Under the ROC Surface for Multi-class Problems. Exact Computation and Evaluation of Approximations. PROC OF 14TH EUROPEAN CONFERENCE ON MACHINE LEARNING. 2003;108‑120.
He X, Frey EC. The Meaning and Use of the Volume Under a Three-Class ROC Surface (VUS). IEEE Transactions on Medical Imaging. 2008;27(5):577‑588. | How to draw ROC curve with three response variable? [duplicate]
You might want to have a look at the Volume Under the ROC Surface as defined in the following articles:
Ferri C, Hernández-orallo J, Salido MA. Volume Under the ROC Surface for Multi-class Problems. |
48,762 | How to draw ROC curve with three response variable? [duplicate] | ROC Analysis was designed for dealing with only two variables: noise and no noise, so using it for 3 or more variables makes little sense.
However, for any multi-class classification problem it's possible to use a bunch of binary classifiers and do so-called One-vs-All classification.
E.g. consider the IRIS data set: there are 3 classes: setosa, versicolor, and virginica. So we can build 3 classifiers (e.g. Naive Bayes): for setosa, for versicolor and for virginica. And then draw a ROC curve for each and tune the threshold for each model separately. AUC in such a case could be just the average across AUCs for individual models.
Here's a ROC curve for the IRIS data set:
AUC in this case is $\approx 0.98 = \frac{1 + 0.98 + 0.97}{3}$
R Code:
library(ROCR)
library(klaR)
data(iris)
lvls = levels(iris$Species)
testidx = which(1:length(iris[, 1]) %% 5 == 0)
iris.train = iris[testidx, ]
iris.test = iris[-testidx, ]
aucs = c()
plot(x=NA, y=NA, xlim=c(0,1), ylim=c(0,1),
ylab='True Positive Rate',
xlab='False Positive Rate',
bty='n')
for (type.id in 1:3) {
type = as.factor(iris.train$Species == lvls[type.id])
nbmodel = NaiveBayes(type ~ ., data=iris.train[, -5])
nbprediction = predict(nbmodel, iris.test[,-5], type='raw')
score = nbprediction$posterior[, 'TRUE']
actual.class = iris.test$Species == lvls[type.id]
pred = prediction(score, actual.class)
nbperf = performance(pred, "tpr", "fpr")
roc.x = unlist(nbperf@x.values)
roc.y = unlist(nbperf@y.values)
lines(roc.y ~ roc.x, col=type.id+1, lwd=2)
nbauc = performance(pred, "auc")
nbauc = unlist(slot(nbauc, "y.values"))
aucs[type.id] = nbauc
}
lines(x=c(0,1), c(0,1))
mean(aucs)
Source of inspiration: http://karchinlab.org/fcbb2_spr14/Lectures/Machine_Learning_R.pdf | How to draw ROC curve with three response variable? [duplicate] | ROC Analysis was designed for dealing with only two variables: noise and no noise, so using it for 3 or more variables makes little sense.
However, you for any multi-classification problem it's possi | How to draw ROC curve with three response variable? [duplicate]
ROC Analysis was designed for dealing with only two variables: noise and no noise, so using it for 3 or more variables makes little sense.
However, for any multi-class classification problem it's possible to use a bunch of binary classifiers and do so-called One-Vs-All Classification
E.g. consider the IRIS data set: there are 3 classes: setosa, versicolor, and virginica. So we can build 3 classifiers (e.g. Naive Bayes): for setosa, for versicolor and for virginica. And then draw a ROC curve for each and tune the threshold for each model separately. AUC in such a case could be just the average across AUCs for individual models.
Here's a ROC curve for the IRIS data set:
AUC in this case is $\approx 0.98 = \frac{1 + 0.98 + 0.97}{3}$
R Code:
library(ROCR)
library(klaR)
data(iris)
lvls = levels(iris$Species)
testidx = which(1:length(iris[, 1]) %% 5 == 0)
iris.train = iris[testidx, ]
iris.test = iris[-testidx, ]
aucs = c()
plot(x=NA, y=NA, xlim=c(0,1), ylim=c(0,1),
ylab='True Positive Rate',
xlab='False Positive Rate',
bty='n')
for (type.id in 1:3) {
type = as.factor(iris.train$Species == lvls[type.id])
nbmodel = NaiveBayes(type ~ ., data=iris.train[, -5])
nbprediction = predict(nbmodel, iris.test[,-5], type='raw')
score = nbprediction$posterior[, 'TRUE']
actual.class = iris.test$Species == lvls[type.id]
pred = prediction(score, actual.class)
nbperf = performance(pred, "tpr", "fpr")
roc.x = unlist(nbperf@x.values)
roc.y = unlist(nbperf@y.values)
lines(roc.y ~ roc.x, col=type.id+1, lwd=2)
nbauc = performance(pred, "auc")
nbauc = unlist(slot(nbauc, "y.values"))
aucs[type.id] = nbauc
}
lines(x=c(0,1), c(0,1))
mean(aucs)
Source of inspiration: http://karchinlab.org/fcbb2_spr14/Lectures/Machine_Learning_R.pdf | How to draw ROC curve with three response variable? [duplicate]
ROC Analysis was designed for dealing with only two variables: noise and no noise, so using it for 3 or more variables makes little sense.
However, you for any multi-classification problem it's possi |
48,763 | How can a categorical variable where respondents can choose more than one response be used as a predictor in multiple regression? | Use dichotomous indicators (often referred to as dummy variables) to represent the items within this one question. For example, a variable called internet, and then a variable called newspaper, so on so forth. If a person picked both, they got a 1 in each. If a person only picked newspaper, then enter 0 for internet and then 1 for newspaper. | How can a categorical variable where respondents can choose more than one response be used as a pred | Use dichotomous indicators (often referred to as dummy variables) to represent the items within this one question. For example, a variable called internet, and then a variable called newspaper, so on | How can a categorical variable where respondents can choose more than one response be used as a predictor in multiple regression?
Use dichotomous indicators (often referred to as dummy variables) to represent the items within this one question. For example, a variable called internet, and then a variable called newspaper, so on so forth. If a person picked both, they got a 1 in each. If a person only picked newspaper, then enter 0 for internet and then 1 for newspaper. | How can a categorical variable where respondents can choose more than one response be used as a pred
Use dichotomous indicators (often referred to as dummy variables) to represent the items within this one question. For example, a variable called internet, and then a variable called newspaper, so on |
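For illustration, a minimal R sketch of the indicator coding described in the answer above; the variable names (internet, newspaper, tv) and the response y are made up:
d <- data.frame(internet  = c(1, 1, 0, 1, 0, 1),   # 1 = option selected, 0 = not selected
                newspaper = c(1, 0, 1, 0, 1, 1),
                tv        = c(0, 0, 1, 1, 1, 0),
                y         = c(3.2, 2.8, 4.1, 3.9, 3.5, 3.0))
fit <- lm(y ~ internet + newspaper + tv, data = d)  # each indicator is its own predictor
summary(fit)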
48,764 | Bias correction of logarithmic transformations | Your variable is defined as
$$X_{t} = e^{\alpha t + \beta} e^{z_{t}} $$
Say you have a sample $S_n$ of $n$ observations of past values of the variable and you want to forecast period $n+1$.
Then
$$E\Big (X_{n+1}\mid S_n \Big ) = E\Big (e^{\alpha (n+1) + \beta} e^{z_{n+1}}\mid S_n\Big) = E\Big (e^{\alpha (n+1) + \beta}\mid S_n\Big) E\left(e^{z_{n+1}}\right)$$ $$= E\Big (e^{\alpha (n+1) + \beta}\mid S_n\Big)e^{\sigma^2/2}$$
...since $z_t$ is Gaussian white noise.
The "adjusted forecast with an empirical correction factor" uses rather confusing if not incorrect notation, ignores various biases, and approximates the above by
$$E\Big (e^{\alpha (n+1) + \beta}\mid S_n\Big) \approx e^{\hat \alpha (n+1) + \hat \beta} = e^{\hat{\log x_{n+1}}} $$
and
$$ e^{\sigma^2/2} = E\left(e^{z_{n+1}}\right) \approx \frac {1}{n}\sum_{t=1}^{n} e^{\hat z_t}$$
and so
$$ \widehat E\Big (X_{n+1}\mid S_n \Big )= e^{\hat{\log x_{n+1}}}\frac {1}{n}\sum_{t=1}^{n} e^{\hat z_t} $$ | Bias correction of logarithmic transformations | Your variable is defined as
$$X_{t} = e^{\alpha t + \beta} e^{z_{t}} $$
Say you have a sample $S_n$ of $n$ observations of past values of the variable and you want to forecast period $n+1$.
Then
$$E\ | Bias correction of logarithmic transformations
Your variable is defined as
$$X_{t} = e^{\alpha t + \beta} e^{z_{t}} $$
Say you have a sample $S_n$ of $n$ observations of past values of the variable and you want to forecast period $n+1$.
Then
$$E\Big (X_{n+1}\mid S_n \Big ) = E\Big (e^{\alpha (n+1) + \beta} e^{z_{n+1}}\mid S_n\Big) = E\Big (e^{\alpha (n+1) + \beta}\mid S_n\Big) E\left(e^{z_{n+1}}\right)$$ $$= E\Big (e^{\alpha (n+1) + \beta}\mid S_n\Big)e^{\sigma^2/2}$$
...since $z_t$ is Gaussian white noise.
The "adjusted forecast with an empirical correction factor" uses rather confusing if not incorrect notation, ignores various biases, and approximates the above by
$$E\Big (e^{\alpha (n+1) + \beta}\mid S_n\Big) \approx e^{\hat \alpha (n+1) + \hat \beta} = e^{\hat{\log x_{n+1}}} $$
and
$$ e^{\sigma^2/2} = E\left(e^{z_{n+1}}\right) \approx \frac {1}{n}\sum_{t=1}^{n} e^{\hat z_t}$$
and so
$$ \widehat E\Big (X_{n+1}\mid S_n \Big )= e^{\hat{\log x_{n+1}}}\frac {1}{n}\sum_{t=1}^{n} e^{\hat z_t} $$ | Bias correction of logarithmic transformations
Your variable is defined as
$$X_{t} = e^{\alpha t + \beta} e^{z_{t}} $$
Say you have a sample $S_n$ of $n$ observations of past values of the variable and you want to forecast period $n+1$.
Then
$$E\ |
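A minimal R sketch of the adjusted forecast described in the answer above, on a simulated series that follows the stated model (all numbers are made up):
set.seed(1)
n <- 60
t <- 1:n
x <- exp(0.05 * t + 1 + rnorm(n, sd = 0.3))   # X_t = e^(alpha t + beta) e^(z_t)
fit      <- lm(log(x) ~ t)
naive    <- exp(predict(fit, newdata = data.frame(t = n + 1)))  # e^(hat log x_{n+1})
smearing <- mean(exp(residuals(fit)))                           # (1/n) sum_t e^(hat z_t)
forecast <- naive * smearing                                    # adjusted point forecast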
48,765 | Computing Out of Bag error in Random Forest: is it the average only over trees that didn't use each sample? | You second last paragraph is the correct answer. As you say, this is the estimate that uses the whole ensemble, but never uses any data that was used to construct the trees making the individual predictions. | Computing Out of Bag error in Random Forest: is it the average only over trees that didn't use each | You second last paragraph is the correct answer. As you say, this is the estimate that uses the whole ensemble, but never uses any data that was used to construct the trees making the individual predi | Computing Out of Bag error in Random Forest: is it the average only over trees that didn't use each sample?
Your second last paragraph is the correct answer. As you say, this is the estimate that uses the whole ensemble, but never uses any data that was used to construct the trees making the individual predictions. | Computing Out of Bag error in Random Forest: is it the average only over trees that didn't use each
Your second last paragraph is the correct answer. As you say, this is the estimate that uses the whole ensemble, but never uses any data that was used to construct the trees making the individual predi
48,766 | Treatment of triple seasonal data | Possible yes, sensible no from most time series perspectives.
The main problem with your approach is an apparent assumption that removal of seasonality is, or should be, a trivial matter. But in practice most modern procedures require some kind of estimation of seasonal components based on some choice(s) on how to model it, especially because seasonal components usually vary from year to year. Conversely, if your seasonal components are essentially deterministic, this would be trivial.
Weeks are especially awkward as they don't nest in years.
If you are primarily interested in methods that ignore seasonality, datasets with major seasonality don't seem pertinent. Why make the problem more difficult than it is already? | Treatment of triple seasonal data | Possible yes, sensible no from most time series perspectives.
The main problem with your approach is an apparent assumption that removal of seasonality is, or should be, a trivial matter. But in prac | Treatment of triple seasonal data
Possible yes, sensible no from most time series perspectives.
The main problem with your approach is an apparent assumption that removal of seasonality is, or should be, a trivial matter. But in practice most modern procedures require some kind of estimation of seasonal components based on some choice(s) on how to model it, especially because seasonal components usually vary from year to year. Conversely, if your seasonal components are essentially deterministic, this would be trivial.
Weeks are especially awkward as they don't nest in years.
If you are primarily interested in methods that ignore seasonality, datasets with major seasonality don't seem pertinent. Why make the problem more difficult than it is already? | Treatment of triple seasonal data
Possible yes, sensible no from most time series perspectives.
The main problem with your approach is an apparent assumption that removal of seasonality is, or should be, a trivial matter. But in prac |
48,767 | Treatment of triple seasonal data | You would find it easier to use the tbats() function in the forecast package. It will estimate the seasonality and produce the forecasts. | Treatment of triple seasonal data | You would find it easier to use the tbats() function in the forecast package. It will estimate the seasonality and produce the forecasts. | Treatment of triple seasonal data
You would find it easier to use the tbats() function in the forecast package. It will estimate the seasonality and produce the forecasts. | Treatment of triple seasonal data
You would find it easier to use the tbats() function in the forecast package. It will estimate the seasonality and produce the forecasts. |
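A minimal usage sketch of that suggestion, assuming the half-hourly taylor series that ships with the forecast package and treating 48 and 336 (daily and weekly cycles) as the seasonal periods:
library(forecast)
y   <- msts(taylor, seasonal.periods = c(48, 336))  # daily and weekly seasonality
fit <- tbats(y)                                     # can take a while to run
fc  <- forecast(fit, h = 48 * 7)                    # forecast one week ahead
plot(fc)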
48,768 | Gamma and exponential distributions | Usually, the support of a distribution is defined to always be a closed set, so in the case of the gamma distribution, $[0,\infty)$. Even if the density function defined by some formula, for some parameter values, then is undefined, that is not a problem. The reason is that density functions are not really functions! Their values at any given point do not give meaning, they have meaning only through being integrated.
So, from one point of view, densities are equivalence classes of functions, two functions $f, g$ being equivalent if they give the same probabilities (that is, integrals) for all events $A$: $\int_A f(x)\; dx = \int_A g(x)\; dx$ for all events $A$. So the value you assign to the density function at zero does not matter (this is the $L^1$ view). See also Can a probability distribution value exceeding 1 be OK? for discussion.
Another point of view (leading to the same conclusions) is that densities are differential forms, see Intuitive explanation for density of transformed variable? | Gamma and exponential distributions | Usually, the support of a distribution is defined to always be a closed set, so in the case of the gamma distribution, $[0,\infty)$. Even if the density function defined by some formula, for some para | Gamma and exponential distributions
Usually, the support of a distribution is defined to always be a closed set, so in the case of the gamma distribution, $[0,\infty)$. Even if the density function, as defined by some formula, is undefined at zero for some parameter values, that is not a problem. The reason is that density functions are not really functions! Their values at any given point have no meaning on their own; they have meaning only through being integrated.
So, from one point of view, densities are equivalence classes of functions, two functions $f, g$ being equivalent if they give the same probabilities (that is, integrals) for all events $A$: $\int_A f(x)\; dx = \int_A g(x)\; dx$ for all events $A$. So the value you assign to the density function at zero does not matter (this is the $L^1$ view). See also Can a probability distribution value exceeding 1 be OK? for discussion.
Another point of view (leading to the same conclusions) is that densities are differential forms, see Intuitive explanation for density of transformed variable? | Gamma and exponential distributions
Usually, the support of a distribution is defined to always be a closed set, so in the case of the gamma distribution, $[0,\infty)$. Even if the density function defined by some formula, for some para |
48,769 | Visualising a linear model with 6 predictors in R | Here is some code that is hopefully self-explanatory:
set.seed(20987) # for reproducibility
N = 200
# variables
days_since = rpois(N, lambda=60)
site = factor(sample(c("site1", "site2"), N, replace=T), c("site1", "site2"))
age = factor(sample(c("juv", "adult"), N, replace=T), c("juv", "adult"))
year = factor(sample(c("2012", "2013"), N, replace=T), c("2012", "2013"))
PC1 = rnorm(N, mean=100, sd=25)
arrival_date = sample.int(365, N, replace=T)
# betas
B0 = 13
Bds = 74
Bs = 114
Ba = 160
By = 191
Bpc = 59
Bad = 11
# response variable
weight = B0 + Bds*days_since + Bs*(site=="site2") + Ba*(age=="adult") +
By*(year=="2013") + Bpc*PC1 + Bad*arrival_date + rnorm(N, mean=0, sd=10)
model = lm(weight~days_since+site+age+year+PC1+arrival_date)
# predicted values for plot
ds = seq(min(days_since), max(days_since))
ds1j2 = predict(model, data.frame(days_since=ds, site="site1", age="juv",
year="2012", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds1j3 = predict(model, data.frame(days_since=ds, site="site1", age="juv",
year="2013", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds1a2 = predict(model, data.frame(days_since=ds, site="site1", age="adult",
year="2012", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds1a3 = predict(model, data.frame(days_since=ds, site="site1", age="adult",
year="2013", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds2j2 = predict(model, data.frame(days_since=ds, site="site2", age="juv",
year="2012", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds2j3 = predict(model, data.frame(days_since=ds, site="site2", age="juv",
year="2013", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds2a2 = predict(model, data.frame(days_since=ds, site="site2", age="adult",
year="2012", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds2a3 = predict(model, data.frame(days_since=ds, site="site2", age="adult",
year="2013", PC1=mean(PC1), arrival_date=mean(arrival_date)))
# plot
windows()
plot(x=ds, y=ds1j2, ylim=c(11000, 14500), type="l", lty=1,
ylab="predicted weight", xlab="days since 1st Sept")
points(range(ds), range(ds1j2), pch=5)
lines(x=ds, y=ds1j3, lty=1); points(range(ds), range(ds1j3), pch=18)
lines(x=ds, y=ds1a2, lty=2); points(range(ds), range(ds1a2), pch=5)
lines(x=ds, y=ds1a3, lty=2); points(range(ds), range(ds1a3), pch=18)
lines(x=ds, y=ds2j2, lty=1); points(range(ds), range(ds2j2), pch=1)
lines(x=ds, y=ds2j3, lty=1); points(range(ds), range(ds2j3), pch=16)
lines(x=ds, y=ds2a2, lty=2); points(range(ds), range(ds2a2), pch=1)
lines(x=ds, y=ds2a3, lty=2); points(range(ds), range(ds2a3), pch=16)
legend("bottomright", lty=rep(1:2, 4), pch=c(5,18,5,18,1,16,1,16),
legend=c("site 1, juveniles, 2012", "site 1, juveniles, 2013",
"site 1, adults, 2012", "site 1, adults, 2013",
"site 2, juveniles, 2012", "site 2, juveniles, 2013",
"site 2, adults, 2012", "site 2, adults, 2013"))
You can write code that's much shorter by writing functions that will read in a list and do all of this for you rather than copying and pasting the same thing eight times in a row, but this should be easier to follow. Here is the plot:
This kind of plot is more interesting / useful when there are interactions (the lines aren't parallel). In this case, we just have a set of eight lines that are shifted vertically relative to each other. | Visualising a linear model with 6 predictors in R | Here is some code that is hopefully self-explanatory:
set.seed(20987) # for reproducibility
N = 200
# variables
days_since = rpois(N, lambda=60)
site = factor(sample(c("site1", "si | Visualising a linear model with 6 predictors in R
Here is some code that is hopefully self-explanatory:
set.seed(20987) # for reproducibility
N = 200
# variables
days_since = rpois(N, lambda=60)
site = factor(sample(c("site1", "site2"), N, replace=T), c("site1", "site2"))
age = factor(sample(c("juv", "adult"), N, replace=T), c("juv", "adult"))
year = factor(sample(c("2012", "2013"), N, replace=T), c("2012", "2013"))
PC1 = rnorm(N, mean=100, sd=25)
arrival_date = sample.int(365, N, replace=T)
# betas
B0 = 13
Bds = 74
Bs = 114
Ba = 160
By = 191
Bpc = 59
Bad = 11
# response variable
weight = B0 + Bds*days_since + Bs*(site=="site2") + Ba*(age=="adult") +
By*(year=="2013") + Bpc*PC1 + Bad*arrival_date + rnorm(N, mean=0, sd=10)
model = lm(weight~days_since+site+age+year+PC1+arrival_date)
# predicted values for plot
ds = seq(min(days_since), max(days_since))
ds1j2 = predict(model, data.frame(days_since=ds, site="site1", age="juv",
year="2012", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds1j3 = predict(model, data.frame(days_since=ds, site="site1", age="juv",
year="2013", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds1a2 = predict(model, data.frame(days_since=ds, site="site1", age="adult",
year="2012", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds1a3 = predict(model, data.frame(days_since=ds, site="site1", age="adult",
year="2013", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds2j2 = predict(model, data.frame(days_since=ds, site="site2", age="juv",
year="2012", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds2j3 = predict(model, data.frame(days_since=ds, site="site2", age="juv",
year="2013", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds2a2 = predict(model, data.frame(days_since=ds, site="site2", age="adult",
year="2012", PC1=mean(PC1), arrival_date=mean(arrival_date)))
ds2a3 = predict(model, data.frame(days_since=ds, site="site2", age="adult",
year="2013", PC1=mean(PC1), arrival_date=mean(arrival_date)))
# plot
windows()
plot(x=ds, y=ds1j2, ylim=c(11000, 14500), type="l", lty=1,
ylab="predicted weight", xlab="days since 1st Sept")
points(range(ds), range(ds1j2), pch=5)
lines(x=ds, y=ds1j3, lty=1); points(range(ds), range(ds1j3), pch=18)
lines(x=ds, y=ds1a2, lty=2); points(range(ds), range(ds1a2), pch=5)
lines(x=ds, y=ds1a3, lty=2); points(range(ds), range(ds1a3), pch=18)
lines(x=ds, y=ds2j2, lty=1); points(range(ds), range(ds2j2), pch=1)
lines(x=ds, y=ds2j3, lty=1); points(range(ds), range(ds2j3), pch=16)
lines(x=ds, y=ds2a2, lty=2); points(range(ds), range(ds2a2), pch=1)
lines(x=ds, y=ds2a3, lty=2); points(range(ds), range(ds2a3), pch=16)
legend("bottomright", lty=rep(1:2, 4), pch=c(5,18,5,18,1,16,1,16),
legend=c("site 1, juveniles, 2012", "site 1, juveniles, 2013",
"site 1, adults, 2012", "site 1, adults, 2013",
"site 2, juveniles, 2012", "site 2, juveniles, 2013",
"site 2, adults, 2012", "site 2, adults, 2013"))
You can write code that's much shorter by writing functions that will read in a list and do all of this for you rather than copying and pasting the same thing eight times in a row, but this should be easier to follow. Here is the plot:
This kind of plot is more interesting / useful when there are interactions (the lines aren't parallel). In this case, we just have a set of eight lines that are shifted vertically relative to each other. | Visualising a linear model with 6 predictors in R
Here is some code that is hopefully self-explanatory:
set.seed(20987) # for reproducibility
N = 200
# variables
days_since = rpois(N, lambda=60)
site = factor(sample(c("site1", "si |
48,770 | Hausman test: Include or not year effects and/or interaction variables | 1) Are you using the hausman or the xtoverid command? You can try the hausman command with the sigmamore option which sometimes resolves the negative test statistic. A negative test statistic can be due to small sample size and the sigmamore option takes this into account. It is also useful with respect to the point made by Wooldridge because this option bases the test on a common estimate of the disturbance variance.
2) It's not a problem for the FE estimator (and certainly not for the RE estimator) to have time-invariance in only one of the panels. Of course it is helpful to have within variation in all panels since the FE estimator relies on it but identification does not require within variability in all of the panels.
3) Given your particular research question (banks are dependent on macroeconomic factors that again relate to time) and that both FE and RE can make use of time dummies, you should include the time dummies. Wooldridge refers to comparisons between models where the RE model contains variables which are entirely time-invariant and thus cannot be used in the FE model - which then amounts to comparing two completely different models. You might find p. 4-10 of this lecture useful in which some of your questions about the Hausman test are discussed, including time dummies etc. | Hausman test: Include or not year effects and/or interaction variables | 1) Are you using the hausman or the xtoverid command? You can try the hausman command with the sigmamore option which sometimes resolves the negative test statistic. A negative test statistic can be d | Hausman test: Include or not year effects and/or interaction variables
1) Are you using the hausman or the xtoverid command? You can try the hausman command with the sigmamore option which sometimes resolves the negative test statistic. A negative test statistic can be due to small sample size and the sigmamore option takes this into account. It is also useful with respect to the point made by Wooldridge because this option bases the test on a common estimate of the disturbance variance.
2) It's not a problem for the FE estimator (and certainly not for the RE estimator) to have time-invariance in only one of the panels. Of course it is helpful to have within variation in all panels since the FE estimator relies on it but identification does not require within variability in all of the panels.
3) Given your particular research question (banks are dependent on macroeconomic factors that again relate to time) and that both FE and RE can make use of time dummies, you should include the time dummies. Wooldridge refers to comparisons between models where the RE model contains variables which are entirely time-invariant and thus cannot be used in the FE model - which then amounts to comparing two completely different models. You might find p. 4-10 of this lecture useful in which some of your questions about the Hausman test are discussed, including time dummies etc. | Hausman test: Include or not year effects and/or interaction variables
1) Are you using the hausman or the xtoverid command? You can try the hausman command with the sigmamore option which sometimes resolves the negative test statistic. A negative test statistic can be d |
48,771 | How to assess multilevel model assumptions using residual plots | A multilevel model is defined as $y = Xβ + Zη + ǫ$
Thus there are 3 different kinds of residuals:
Marginal residuals: $y − Xβ\ (= Zη + ǫ)$
Conditional residuals: $y − Xβ − Zη\ (= ǫ)$
Random effects: $y − Xβ − ǫ\ (= Zη)$
Marginal residuals:
Should be mean 0, but may show grouping structure
May not be homoskedastic!
Good for checking fixed effects, just like linear regression.
Conditional residuals:
Should be mean zero with no grouping structure
Should be homoskedastic!
Good for checking normality of ǫ, outliers
Random effects:
Should be mean zero with no grouping structure
May not be homoskedastic!
Good for checking normality of η, outliers
In R (if results is an mer object), the command residuals(results) gives you the conditional residuals.
# checking the normality of conditional residuals:
qqnorm(resid(results), main="Q-Q plot for conditional residuals")
# checking the normality of the random effects (here random intercept):
qqnorm(ranef(results)$Name_of_group_variable$`(Intercept)`,
main="Q-Q plot for the random intercept")
The answer is partly copied from the following PowerPoint slide deck pdf. | How to assess multilevel model assumptions using residual plots | A multilevel model is defined as $y = Xβ + Zη + ǫ$
Thus there are 3 different kinds of residuals:
Marginal residuals: $y − Xβ\ (= Zη + ǫ)$
Conditional residuals: $y − Xβ − Zη\ (= ǫ)$
Random effects: | How to assess multilevel model assumptions using residual plots
A multilevel model is defined as $y = Xβ + Zη + ǫ$
Thus there are 3 different kinds of residuals:
Marginal residuals: $y − Xβ\ (= Zη + ǫ)$
Conditional residuals: $y − Xβ − Zη\ (= ǫ)$
Random effects: $y − Xβ − ǫ\ (= Zη)$
Marginal residuals:
Should be mean 0, but may show grouping structure
May not be homoskedastic!
Good for checking fixed effects, just like linear regression.
Conditional residuals:
Should be mean zero with no grouping structure
Should be homoskedastic!
Good for checking normality of ǫ, outliers
Random effects:
Should be mean zero with no grouping structure
May not be homoskedastic!
Good for checking normality of η, outliers
In R (if results is an mer object), the command residuals(results) gives you the conditional residuals.
# checking the normality of conditional residuals:
qqnorm(resid(results), main="Q-Q plot for conditional residuals")
# checking the normality of the random effects (here random intercept):
qqnorm(ranef(results)$Name_of_group_variable$`(Intercept)`,
main="Q-Q plot for the random intercept")
The answer is partly copied from the following PowerPoint slide deck pdf. | How to assess multilevel model assumptions using residual plots
A multilevel model is defined as $y = Xβ + Zη + ǫ$
Thus there are 3 different kinds of residuals:
Marginal residuals: $y − Xβ\ (= Zη + ǫ)$
Conditional residuals: $y − Xβ − Zη\ (= ǫ)$
Random effects: |
48,772 | Relations between probabilities of "almost" independent random variables | Let $D_{\mathrm{KL}}(P\|Q)$ denote the Kullback–Leibler divergence between discrete probability distributions $P$ and $Q$. It is well-known that the following relation holds between the KL-divergence and mutual information:
$$I(X;Y)=D_{\mathrm{KL}}(P(X,Y)\|P(X)P(Y)) \enspace,$$
where $P(Z)$ denotes the probability distribution corresponding to the random variable $Z$.
Now, consider the definition of total variation distance between discrete probability distributions $P$ and $Q$:
$$\Delta = \frac 1 2 \sum_x \left| P(x) - Q(x) \right|\enspace.$$
Pinsker's inequality gives the relation between the KL divergence and the total variation distance:
$$\Delta(P,Q) \le \sqrt{\frac{\ln 2}{2} D_{\mathrm{KL}}(P\|Q)} \enspace.$$
(The term $\ln 2$ appears since I'm measuring the entropy in bits, while the respective Wikipedia formula uses nats.)
Finally, we note that for any $x,y$, we have ($\delta$ and $\epsilon$ are defined in the question):
$$|\delta| = \Big|\Pr[X=x,Y=y]-\Pr[X=x]\cdot\Pr[Y=y] \Big| \le 2\Delta(P(X,Y),P(X)P(Y)) \le 2\sqrt{\frac{\ln2}{2}D_{\mathrm{KL}}(P(X,Y),P(X)P(Y))}=2\sqrt{\frac{\ln2}{2}I(X;Y)}=\sqrt{2\epsilon\ln2} \enspace.$$
PS: This especially appears to be consistent with an example by @whuber (see comments below the question).
Let $D_{\mathrm{KL}}(P\|Q)$ denote the Kullback–Leibler divergence between discrete probability distributions $P$ and $Q$. It is well-known that the following relation holds between the KL-divergence and mutual information:
$$I(X;Y)=D_{\mathrm{KL}}(P(X,Y)\|P(X)P(Y)) \enspace,$$
where $P(Z)$ denotes the probability distribution corresponding to the random variable $Z$.
Now, consider the definition of total variation distance between discrete probability distributions $P$ and $Q$:
$$\Delta = \frac 1 2 \sum_x \left| P(x) - Q(x) \right|\enspace.$$
Pinsker's inequality gives the relation between the KL divergence and the total variation distance:
$$\Delta(P,Q) \le \sqrt{\frac{\ln 2}{2} D_{\mathrm{KL}}(P\|Q)} \enspace.$$
(The term $\ln 2$ appears since I'm measuring the entropy in bits, while the respective Wikipedia formula uses nats.)
Finally, we note that for any $x,y$, we have ($\delta$ and $\epsilon$ are defined in the question):
$$|\delta| = \Big|\Pr[X=x,Y=y]-\Pr[X=x]\cdot\Pr[Y=y] \Big| \le 2\Delta(P(X,Y),P(X)P(Y)) \le 2\sqrt{\frac{\ln2}{2}D_{\mathrm{KL}}(P(X,Y),P(X)P(Y))}=2\sqrt{\frac{\ln2}{2}I(X;Y)}=\sqrt{2\epsilon\ln2} \enspace.$$
PS: This especially appears to be consistent with an example by @whuber (see comments below the question).
Let $D_{\mathrm{KL}}(P\|Q)$ denote the Kullback–Leibler divergence between discrete probability distributions $P$ and $Q$. It is well-known that the following relation holds between the KL-divergence |
48,773 | How to handle high dimensional feature vector in probability graph model? | A Hidden Markov Model is defined by two different probability distributions, namely
$$
\begin{align*}
p(s_t \mid s_{t-1}),&\;\;\;\text{the transition probabilities, and}\\
p(x_t \mid s_t),&\;\;\;\text{the emission probabilities.}
\end{align*}
$$
In typical presentations of HMMs the emission probabilities are taken to be categorical as this is the natural choice if each observation in the sequence is a word.
You can easily modify the emission distribution to deal with other types of observations. For example, if your observation $x_t$ is a continuous vector then it may make sense to assume $p(x_t|s_t)$ is a multivariate normal distribution with state dependent mean vector and covariance matrix, i.e.,
$$
x_t \mid s_t \sim N(\mu_{s_t}, \Sigma_{s_t}).
$$
On the other hand if $x_t$ is a binary vector you may want to make a conditional independence assumption (as you would for Naive Bayes') and assume
$$
p(x_t\mid s_t) = \prod_{i=1}^n p(x_{ti}\mid s_t) = \prod_{i=1}^n \left(\theta_{s_ti}\mathbb{I}\left\{x_{ti} = 1\right\} + (1-\theta_{s_ti})\mathbb{I}\left\{x_{ti} = 0\right\}\right).
$$
These are just some common distributions you'll see people work with. You can really use anything you want, provided you can compute the values you need and learn the parameters. | How to handle high dimensional feature vector in probability graph model? | A Hidden Markov Model is defined by two different probability distributions, namely
$$
\begin{align*}
p(s_t \mid s_{t-1}),&\;\;\;\text{the transition probabilities, and}\\
p(x_t \mid s_t),&\;\;\;\text | How to handle high dimensional feature vector in probability graph model?
A Hidden Markov Model is defined by two different probability distributions, namely
$$
\begin{align*}
p(s_t \mid s_{t-1}),&\;\;\;\text{the transition probabilities, and}\\
p(x_t \mid s_t),&\;\;\;\text{the emission probabilities.}
\end{align*}
$$
In typical presentations of HMMs the emission probabilities are taken to be categorical as this is the natural choice if each observation in the sequence is a word.
You can easily modify the emission distribution to deal with other types of observations. For example, if your observation $x_t$ is a continuous vector then it may make sense to assume $p(x_t|s_t)$ is a multivariate normal distribution with state dependent mean vector and covariance matrix, i.e.,
$$
x_t \mid s_t \sim N(\mu_{s_t}, \Sigma_{s_t}).
$$
On the other hand if $x_t$ is a binary vector you may want to make a conditional independence assumption (as you would for Naive Bayes') and assume
$$
p(x_t\mid s_t) = \prod_{i=1}^n p(x_{ti}\mid s_t) = \prod_{i=1}^n \left(\theta_{s_ti}\mathbb{I}\left\{x_{ti} = 1\right\} + (1-\theta_{s_ti})\mathbb{I}\left\{x_{ti} = 0\right\}\right).
$$
These are just some common distributions you'll see people work with. You can really use anything you want, provided you can compute the values you need and learn the parameters. | How to handle high dimensional feature vector in probability graph model?
A Hidden Markov Model is defined by two different probability distributions, namely
$$
\begin{align*}
p(s_t \mid s_{t-1}),&\;\;\;\text{the transition probabilities, and}\\
p(x_t \mid s_t),&\;\;\;\text |
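For concreteness, a small R sketch of the Bernoulli-product emission probability in the answer above; the state-specific parameters theta and the observation x_t are made-up numbers:
emission_logprob <- function(x, theta) {
  # x: binary observation vector, theta: per-dimension P(x_i = 1 | state)
  sum(ifelse(x == 1, log(theta), log(1 - theta)))
}
theta_state1 <- c(0.9, 0.2, 0.5)   # assumed parameters for one hidden state
x_t <- c(1, 0, 1)
emission_logprob(x_t, theta_state1)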
48,774 | How to handle high dimensional feature vector in probability graph model? | If you think the observations interact with the observations, there is a very elegant HMM with Features model proposed by Berkeley that might be useful for you.
See Paper and Slides
Me and my collaborator recently implemented this algorithm. Although currently it is mostly written for educational applications, it is general enough to get you started. | How to handle high dimensional feature vector in probability graph model? | If you think the observations interact with the observations, there is a very elegant HMM with Features model proposed by Berkeley that might be useful for you.
See Paper and Slides
Me and my collabor | How to handle high dimensional feature vector in probability graph model?
If you think the observations interact with the observations, there is a very elegant HMM with Features model proposed by Berkeley that might be useful for you.
See Paper and Slides
Me and my collaborator recently implemented this algorithm. Although currently it is mostly written for educational applications, it is general enough to get you started. | How to handle high dimensional feature vector in probability graph model?
If you think the observations interact with the observations, there is a very elegant HMM with Features model proposed by Berkeley that might be useful for you.
See Paper and Slides
Me and my collabor |
48,775 | How to handle high dimensional feature vector in probability graph model? | You should check out the TrueSkill model proposed by Ralf Herbrich and Thore Graepel at Microsoft Research. (Some related papers are listed at the end of that page, where some implementation details are given.) Simply put, TrueSkill does very large scale learning with graphical models. It is able to handle very large data sets with millions of features, both nominal and numerical features. | How to handle high dimensional feature vector in probability graph model? | You should check out the TrueSkill model proposed by Ralf Herbrich and Thore Graepel at Microsoft Research. (Some related papers are listed at the end of that page, where some implementation details a | How to handle high dimensional feature vector in probability graph model?
You should check out the TrueSkill model proposed by Ralf Herbrich and Thore Graepel at Microsoft Research. (Some related papers are listed at the end of that page, where some implementation details are given.) Simply put, TrueSkill does very large scale learning with graphical models. It is able to handle very large data sets with millions of features, both nominal and numerical features. | How to handle high dimensional feature vector in probability graph model?
You should check out the TrueSkill model proposed by Ralf Herbrich and Thore Graepel at Microsoft Research. (Some related papers are listed at the end of that page, where some implementation details a |
48,776 | How to handle high dimensional feature vector in probability graph model? | See this answer: How to handle high dimensional feature vector in probability graph model? | How to handle high dimensional feature vector in probability graph model? | See this answer: How to handle high dimensional feature vector in probability graph model? | How to handle high dimensional feature vector in probability graph model?
See this answer: How to handle high dimensional feature vector in probability graph model? | How to handle high dimensional feature vector in probability graph model?
See this answer: How to handle high dimensional feature vector in probability graph model? |
48,777 | Understanding multinomial distribution | Suppose you roll a 6-sided die $N$ times.
The outcome of roll $i$, $i=1,\ldots,N$, is represented by the random variable $X_i$. The tuple $\mathbf{X}=\left(X_1,\ldots,X_N\right)$ contains the outcome of each roll.
We can obtain category-level count information from $\mathbf{X}$ by taking $N_j=\sum_{i=1}^{N}\delta\left(X_i=j\right)$, $j=1,\ldots,6$. The tuple $\mathbf{N}=\left(N_1,\ldots,N_6\right)$ contains the counts for each category.
What's the difference between having $\mathbf{X}$ and $\mathbf{N}$? They both arise from $N$ trials of a multinomial distribution with six possible outcomes, each with equal probability of occurring. However, when we discuss probability with respect to $\mathbf{X}$ we are talking about the probability of a specific sequence of outcomes. When we discuss probability with respect to $\mathbf{N}$ we are talking about the probability of a specific set of counts. There is a normalizing factor with the trial-level information, but it's just $1$ because there is only one way to get any specific sequence of outcomes.
EDIT The second section of the paper actually discusses when to use counts and when to use samples. | Understanding multinomial distribution | Suppose you roll a 6-sided die $N$ times.
The outcome of roll $i$, $i=1,\ldots,N$, is represented by the random variable $X_i$. The tuple $\mathbf{X}=\left(X_1,\ldots,X_N\right)$ contains the outcome | Understanding multinomial distribution
Suppose you roll a 6-sided die $N$ times.
The outcome of roll $i$, $i=1,\ldots,N$, is represented by the random variable $X_i$. The tuple $\mathbf{X}=\left(X_1,\ldots,X_N\right)$ contains the outcome of each roll.
We can obtain category-level count information from $\mathbf{X}$ by taking $N_j=\sum_{i=1}^{N}\delta\left(X_i=j\right)$, $j=1,\ldots,6$. The tuple $\mathbf{N}=\left(N_1,\ldots,N_6\right)$ contains the counts for each category.
What's the difference between having $\mathbf{X}$ and $\mathbf{N}$? They both arise from $N$ trials of a multinomial distribution with six possible outcomes, each with equal probability of occurring. However, when we discuss probability with respect to $\mathbf{X}$ we are talking about the probability of a specific sequence of outcomes. When we discuss probability with respect to $\mathbf{N}$ we are talking about the probability of a specific set of counts. There is a normalizing factor with the trial-level information, but it's just $1$ because there is only one way to get any specific sequence of outcomes.
EDIT The second section of the paper actually discusses when to use counts and when to use samples. | Understanding multinomial distribution
Suppose you roll a 6-sided die $N$ times.
The outcome of roll $i$, $i=1,\ldots,N$, is represented by the random variable $X_i$. The tuple $\mathbf{X}=\left(X_1,\ldots,X_N\right)$ contains the outcome |
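A quick numerical check of the point about the normalizing factor, using a fair die in R (the particular sequence is arbitrary):
p <- rep(1/6, 6)
x <- c(3, 3, 5)                            # one specific sequence of N = 3 rolls
prob_sequence <- prod(p[x])                # no combinatorial factor: (1/6)^3
counts <- tabulate(x, nbins = 6)           # N = (0, 0, 2, 0, 1, 0)
prob_counts <- dmultinom(counts, prob = p) # includes the multinomial coefficient
prob_counts / prob_sequence                # = 3, the number of orderings of (3, 3, 5)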
48,778 | What exactly is the "proportion of variability explained"? | When you hear "more than 70% variability is explained by ...", the speaker is referring to the sums of squares (SS), not the mean squares (MS). I should note that exactly what they mean is not certain; they could be referring to either eta-squared or partial eta-squared:
\begin{align}
\eta^2&=\frac{SS~IV_j}{SS~Total} \\
~\\
~\\
\eta^2_\text{partial}&=\frac{SS~IV_j}{SS~IV_j+ SS~Residuals}
\end{align}
Part of the reason why is that the SS can be partitioned (at least if you are using type I SS, see here), but the MS cannot.
You raise a good point that there is more opportunity for a given factor to contribute to the variability in the response when there are more groups in that factor (this assumes, of course, that there is real variability in the levels of the factor). Many people forget, or are ignorant of, this fact. Unfortunately, it is not possible to get around this issue. The implication of this is that the question 'which factor is most important' may not be answerable in an absolute sense, but only relative to something else. | What exactly is the "proportion of variability explained"? | When you hear "more than 70% variability is explained by ...", the speaker is referring to the sums of squares (SS), not the mean squares (MS). I should note that exactly what they mean is not certai | What exactly is the "proportion of variability explained"?
When you hear "more than 70% variability is explained by ...", the speaker is referring to the sums of squares (SS), not the mean squares (MS). I should note that exactly what they mean is not certain; they could be referring to either eta-squared or partial eta-squared:
\begin{align}
\eta^2&=\frac{SS~IV_j}{SS~Total} \\
~\\
~\\
\eta^2_\text{partial}&=\frac{SS~IV_j}{SS~IV_j+ SS~Residuals}
\end{align}
Part of the reason why is that the SS can be partitioned (at least if you are using type I SS, see here), but the MS cannot.
You raise a good point that there is more opportunity for a given factor to contribute to the variability in the response when there are more groups in that factor (this assumes, of course, that there is real variability in the levels of the factor). Many people forget, or are ignorant of, this fact. Unfortunately, it is not possible to get around this issue. The implication of this is that the question 'which factor is most important' may not be answerable in an absolute sense, but only relative to something else. | What exactly is the "proportion of variability explained"?
When you hear "more than 70% variability is explained by ...", the speaker is referring to the sums of squares (SS), not the mean squares (MS). I should note that exactly what they mean is not certai |
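As a rough illustration of the two formulas in the answer above, a sketch using the npk data set that ships with R (the choice of factors is arbitrary):
fit <- aov(yield ~ block + N, data = npk)
tab <- summary(fit)[[1]]
ss  <- tab[["Sum Sq"]]                        # block, N, Residuals (type I SS)
eta_sq         <- ss[1:2] / sum(ss)           # SS_IV / SS_Total
partial_eta_sq <- ss[1:2] / (ss[1:2] + ss[3]) # SS_IV / (SS_IV + SS_Residuals)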
48,779 | When does quantile regression produce biased coefficients (if ever)? | If you have a model
$$
Y_i = X_i B(\tau) + \epsilon_i(\tau)
$$
then a sufficient condition for $\tau$-quantile regression to give an unbiased estimate of $B(\tau)$ is that the $\tau$-th quantile of $\epsilon(\tau)$ conditional on $X$ is zero. This follows from the fact that (i) the sample quantile regression objective function, $\mathbb{E}_n[ \rho_\tau(Y-X'\beta)]$ converges uniformly to $\mathrm{E}[\rho_\tau(Y-X'\beta)]$ and (ii) $\mathrm{E}[\rho_\tau(Y-X'\beta)]$ is "uniquely" (actually something slightly stronger is needed) minimized at $B(\tau)$. (i) will be true under standard regularity conditions. (ii) is true because $\mathrm{E}[\rho_\tau(Y-X'\beta)]$ is convex as a function of $\beta$ and its first order condition can be written
$$
\tau \mathrm{E}[ P(Y - X\beta>0|X) X] = (1-\tau) \mathrm{E}[ P(Y - X\beta<0|X) X]
$$
which is satisfied at $\beta = B(\tau)$ if $Q_\tau(\epsilon(\tau)|X) = 0$. You can also see from this that $Q_\tau(\epsilon(\tau)|X) = 0$ is stronger than needed, but there doesn't seem to be an easy to interpret weaker condition.
None of the above tells you what the bias in $\hat{B}(\tau)$ would be if $Q_\tau(\epsilon(\tau)|X) \neq 0$. I don't know of a general expression for the bias like you can get for OLS, but you can get some nice results in a few cases. For example, Angrist, Chernozhukov, and Fernandez-Val (2006) give an omitted variables bias formula for quantile regression. If your model satisfies the conditions above, and $X = (X_1, X_2)$, and then you estimate a quantile regression leaving out $X_2$, then the expectation of your estimated coefficient on $X_1$ is
$$
\beta_1(\tau) + \mathrm{E}[w_\tau(X) X_1'X_1]^{-1} \mathrm{E}[w_\tau(X)X_1' (X_2' \beta_2(\tau))]
$$
where $w_\tau(X)$ are some weights that depend on $X$, $\tau$, and the distribution of $\epsilon$. | When does quantile regression produce biased coefficients (if ever)? | If you have a model
$$
Y_i = X_i B(\tau) + \epsilon_i(\tau)
$$
then a sufficient condition for $\tau$-quantile regression to give an unbiased estimate of $B(\tau)$ is that the $\tau$-th quantile of | When does quantile regression produce biased coefficients (if ever)?
If you have a model
$$
Y_i = X_i B(\tau) + \epsilon_i(\tau)
$$
then a sufficient condition for $\tau$-quantile regression to give an unbiased estimate of $B(\tau)$ is that the $\tau$-th quantile of $\epsilon(\tau)$ conditional on $X$ is zero. This follows from the fact that (i) the sample quantile regression objective function, $\mathbb{E}_n[ \rho_\tau(Y-X'\beta)]$ converges uniformly to $\mathrm{E}[\rho_\tau(Y-X'\beta)]$ and (ii) $\mathrm{E}[\rho_\tau(Y-X'\beta)]$ is "uniquely" (actually something slightly stronger is needed) minimized at $B(\tau)$. (i) will be true under standard regularity conditions. (ii) is true because $\mathrm{E}[\rho_\tau(Y-X'\beta)]$ is convex as a function of $\beta$ and its first order condition can be written
$$
\tau \mathrm{E}[ P(Y - X\beta>0|X) X] = (1-\tau) \mathrm{E}[ P(Y - X\beta<0|X) X]
$$
which is satisfied at $\beta = B(\tau)$ if $Q_\tau(\epsilon(\tau)|X) = 0$. You can also see from this that $Q_\tau(\epsilon(\tau)|X) = 0$ is stronger than needed, but there doesn't seem to be an easy to interpret weaker condition.
None of the above tells you what the bias in $\hat{B}(\tau)$ would be if $Q_\tau(\epsilon(\tau)|X) \neq 0$. I don't know of a general expression for the bias like you can get for OLS, but you can get some nice results in a few cases. For example, Angrist, Chernozhukov, and Fernandez-Val (2006) give an omitted variables bias formula for quantile regression. If your model satisfies the conditions above, and $X = (X_1, X_2)$, and then you estimate a quantile regression leaving out $X_2$, then the expectation of your estimated coefficient on $X_1$ is
$$
\beta_1(\tau) + \mathrm{E}[w_\tau(X) X_1'X_1]^{-1} \mathrm{E}[w_\tau(X)X_1' (X_2' \beta_2(\tau))]
$$
where $w_\tau(X)$ are some weights that depend on $X$, $\tau$, and the distribution of $\epsilon$. | When does quantile regression produce biased coefficients (if ever)?
If you have a model
$$
Y_i = X_i B(\tau) + \epsilon_i(\tau)
$$
then a sufficient condition for $\tau$-quantile regression to give an unbiased estimate of $B(\tau)$ is that the $\tau$-th quantile of |
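A tiny simulation sketch of the omitted-variables point in the answer above, using the quantreg package (the data-generating process and coefficients are made up):
library(quantreg)
set.seed(42)
n  <- 5000
x1 <- rnorm(n)
x2 <- 0.7 * x1 + rnorm(n)              # correlated regressors
y  <- 1 + 2 * x1 + 3 * x2 + rt(n, df = 5)
coef(rq(y ~ x1 + x2, tau = 0.5))       # close to (1, 2, 3)
coef(rq(y ~ x1,      tau = 0.5))       # x1 picks up part of the omitted x2 effect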
48,780 | Implementation of M-spline in R | Though I know it is not a new question, just want to mention one existing implementation of M-splines in R for reference.
Package splines2 provides a function named mSpline for M-splines. If you have experience using package splines, you probably already know how to use mSpline, since its user interface is exactly the same as that of bs for B-splines in package splines.
In addition, the M-spline bases are evaluated by taking advantage of a simple transformation between B-spline and M-spline bases, while the evaluation of B-splines is efficiently done by package splines and implemented in C. Therefore, the function mSpline should have a better performance in speed, compared with direct implementation by recursive formulas purely in R.
A quick demonstration is available in the package vignettes. | Implementation of M-spline in R | Though I know it is not a new question, just want to mention one existing implementation of M-splines in R for reference.
Package splines2 provides function named mSpline for M-splines. If you had exp | Implementation of M-spline in R
Though I know it is not a new question, just want to mention one existing implementation of M-splines in R for reference.
Package splines2 provides a function named mSpline for M-splines. If you have experience using package splines, you probably already know how to use mSpline, since its user interface is exactly the same as that of bs for B-splines in package splines.
In addition, the M-spline bases are evaluated by taking advantage of a simple transformation between B-spline and M-spline bases, while the evaluation of B-splines is efficiently done by package splines and implemented in C. Therefore, the function mSpline should have a better performance in speed, compared with direct implementation by recursive formulas purely in R.
A quick demonstration is available in the package vignettes. | Implementation of M-spline in R
Though I know it is not a new question, just want to mention one existing implementation of M-splines in R for reference.
Package splines2 provides function named mSpline for M-splines. If you had exp |
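A minimal usage sketch of mSpline (the knots and degree are arbitrary choices):
library(splines2)
x <- seq(0, 1, by = 0.01)
basis <- mSpline(x, knots = c(0.3, 0.6), degree = 2, intercept = TRUE)
matplot(x, basis, type = "l", ylab = "M-spline basis")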
48,781 | Implementation of M-spline in R | Well, after some tweaking with my code I tried to add this line to the definition of Mk :
if(ts[i+k]-ts[i]==0){0}
So that when it goes out of the list, the spline is simply zero. It worked and I could confirm that the basis was right. | Implementation of M-spline in R | Well, after some tweaking with my code I tried to add this line to the definition of Mk :
if(ts[i+k]-ts[i]==0){0}
So that when it goes out of the list, the spline is simply zero. It worked and I | Implementation of M-spline in R
Well, after some tweaking with my code I tried to add this line to the definition of Mk :
if(ts[i+k]-ts[i]==0){0}
So that when it goes out of the list, the spline is simply zero. It worked and I could confirm that the basis was right. | Implementation of M-spline in R
Well, after some tweaking with my code I tried to add this line to the definition of Mk :
if(ts[i+k]-ts[i]==0){0}
So that when it goes out of the list, the spline is simply zero. It worked and I |
48,782 | Is the Gaussian copula (for d=2) with normal margins identical to the bivariate normal? | Since the Gaussian copula results from taking a multivariate normal and transforming the margins to uniformity, a multivariate distribution with Gaussian copula and normal margins is multivariate normal.
Transforming the margins to normality merely undoes the original transform to uniform margins to obtain the copula.
See the second sentence at the Gaussian copula section of the Wikipedia article on Copulas for confirmation. | Is the Gaussian copula (for d=2) with normal margins identical to the bivariate normal? | Since the Gaussian copula results from taking a multivariate normal and transforming the margins to uniformity, a multivariate distribution with Gaussian copula and normal margins is multivariate norm | Is the Gaussian copula (for d=2) with normal margins identical to the bivariate normal?
Since the Gaussian copula results from taking a multivariate normal and transforming the margins to uniformity, a multivariate distribution with Gaussian copula and normal margins is multivariate normal.
Transforming the margins to normality merely undoes the original transform to uniform margins to obtain the copula.
See the second sentence at the Gaussian copula section of the Wikipedia article on Copulas for confirmation. | Is the Gaussian copula (for d=2) with normal margins identical to the bivariate normal?
Since the Gaussian copula results from taking a multivariate normal and transforming the margins to uniformity, a multivariate distribution with Gaussian copula and normal margins is multivariate norm |
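A short R sketch of the point in the answer above: transform a bivariate normal sample to uniform margins (the d = 2 Gaussian copula), then map back to normal margins and recover the original sample (the correlation 0.6 is arbitrary):
set.seed(1)
library(MASS)
z <- mvrnorm(10000, mu = c(0, 0), Sigma = matrix(c(1, 0.6, 0.6, 1), 2))  # bivariate normal
u <- pnorm(z)           # Gaussian copula sample: uniform margins
x <- qnorm(u)           # back to normal margins
max(abs(x - z))         # essentially zero: same joint distribution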
48,783 | Explanatory variables with many zeros | You are focusing on zeros as part of the distributions of several predictors, but the central questions for modelling include (a) what kind of response variable you have and (b) what kind of relationship you expect between the response and the predictors or explanatory variables.
Zeros in the predictors themselves rule out little except straight logarithmic transformation.
From your description, the starting point is that price is the response and prices are necessarily positive. That suggests immediately a regression model with log link and quite possibly Poisson regression. (The fact that price is not a count is secondary here. See for example http://blog.stata.com/tag/poisson-regression/ and its literature for explanation.)
From that, how to represent your predictors depends on their relationship with the response as much as, or more than, their marginal distributions. Your post supplies no information to guide advice, but I'd start with including them as they come and then consider if you need other representations, e.g. as roots, squares, set of indicator variables. | Explanatory variables with many zeros | You are focusing on zeros as part of the distributions of several predictors, but the central questions for modelling include (a) what kind of response variable you have and (b) what kind of relations | Explanatory variables with many zeros
You are focusing on zeros as part of the distributions of several predictors, but the central questions for modelling include (a) what kind of response variable you have and (b) what kind of relationship you expect between the response and the predictors or explanatory variables.
Zeros in the predictors themselves rule out little except straight logarithmic transformation.
From your description, the starting point is that price is the response and prices are necessarily positive. That suggests immediately a regression model with log link and quite possibly Poisson regression. (The fact that price is not a count is secondary here. See for example http://blog.stata.com/tag/poisson-regression/ and its literature for explanation.)
From that, how to represent your predictors depends on their relationship with the response as much as, or more than, their marginal distributions. Your post supplies no information to guide advice, but I'd start with including them as they come and then consider if you need other representations, e.g. as roots, squares, set of indicator variables. | Explanatory variables with many zeros
You are focusing on zeros as part of the distributions of several predictors, but the central questions for modelling include (a) what kind of response variable you have and (b) what kind of relations |
48,784 | Justifying the distribution for the maximum likelihood estimator in a linear regression example | Step 1: General: Recognize that $(x_i - \bar x)$, and its square, and $\frac{1}{\sum (x_i - \bar x)^2}$ are constants, not random variables. Further note that $\alpha$ and $\beta$ and $\sigma$ are also constants. First write $B$ as a constant times an expression containing a random variable. Focus carefully on the part that's not just a constant and then deal with that constant after you have that part worked out.
Step 2: Expectation: Remember that the expectation of a sum is the sum of expectations. Notice that you can take the expectation inside the summation and then move $(x_i - \bar x)$ outside that expectation. Notice you have another expression for $Y_i$ that you can use. Look again at step 1 and split up the expectation and pull out constants in the appropriate way. What's left is trivial to find the expectation of.
Step 3: Variance: Since the $Y$s are independent, the variance of a sum is the sum of the variances. You should also know a fact about the variance of a constant times a random variable to use here (more than once).
It's really nothing more than the basic properties of expectation and variance and using the facts already there in your question. | Justifying the distribution for the maximum likelihood estimator in a linear regression example | Step 1: General: Recognize that $(x_i - \bar x)$, and its square, and $\frac{1}{\sum (x_i - \bar x)^2}$ are constants, not random variables. Further note that $\alpha$ and $\beta$ and $\sigma$ are als | Justifying the distribution for the maximum likelihood estimator in a linear regression example
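If it helps to see these three steps numerically, here is a minimal R sketch (the true values alpha = 1, beta = 2, sigma = 3, the fixed x grid, and the number of replications are assumptions made up for illustration); it simulates the sampling distribution of $B = \sum (x_i - \bar x) Y_i / \sum (x_i - \bar x)^2$ and compares its mean and variance with $\beta$ and $\sigma^2 / \sum (x_i - \bar x)^2$:
set.seed(1)
alpha <- 1; beta <- 2; sigma <- 3      # assumed true values
x <- 1:20
Sxx <- sum((x - mean(x))^2)            # a constant, as in Step 1
B <- replicate(10000, {
  Y <- alpha + beta * x + rnorm(length(x), 0, sigma)
  sum((x - mean(x)) * Y) / Sxx
})
mean(B)          # close to beta = 2 (Step 2)
var(B)           # close to sigma^2 / Sxx (Step 3)
sigma^2 / Sxx    # theoretical variance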
48,785 | Understanding the mean shift algorithm with Gaussian kernel | If I were you, I would refer to one of the main mean shift papers: Mean Shift: A Robust Approach Toward Feature Space Analysis. The short answer to your questions: yes, $c$ is always positive, and no, the kernel (window) is not a circle.
Now the long version. The kernel density estimator is:
$\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^n K\Big(\frac{x-x_i}{h}\Big)$,
where $K(\cdot)$ is the kernel and $h$ is called the bandwidth. Your question is about two-dimensional data, and in practice we usually have to deal with multivariate data, where things are a bit trickier. That's why in mean shift we are only interested in the special case of radially symmetric kernels satisfying $K(x)= c_{k,d}\, k(\|x\|^2)$, where $c_{k,d}$, the normalization constant, makes $K(x)$ integrate to one, and $k(x)$ is called the profile of the kernel. This simplifies the calculation in the multivariate case.
The profile of the Gaussian kernel is:
$e^{-\frac{1}{2}x^{2}}$, and therefore the multivariate Gaussian kernel with standard deviation $\sigma$ will be:
$K(x)=\frac{1}{(2\pi)^{d/2}\sigma^d}e^{-\frac{1}{2}\frac{\|x\|^2}{\sigma^2}}$,
where $d$ is the number of dimensions. It's also worth mentioning that the standard deviation for the Gaussian kernel works as the bandwidth parameter, $h$.
Now, given sample points $\{x_i\}_{i=1..n}$, each mean shift procedure starts from a sample point $y_j = x_j$ and updates $y_j$ until convergence as follows:
$$y_{j}^{t+1}=\frac{\sum_{i=1}^{n}x_{i}\,e^{-\frac{1}{2}\frac{\|y_{j}^{t}-x_{i}\|^{2}}{\sigma^{2}}}}{\sum_{i=1}^{n}e^{-\frac{1}{2}\frac{\|y_{j}^{t}-x_{i}\|^{2}}{\sigma^{2}}}}$$
So basically all the points are considered in the calculation of the mean shift, but each point carries a weight that decays exponentially as its distance from the current mean increases, and the value of $\sigma$ determines how fast that decay is.
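For concreteness, here is a minimal R sketch of that update rule (the two-dimensional toy data, the choice $\sigma = 0.5$, and the fixed number of iterations are assumptions for illustration only):
set.seed(1)
X <- rbind(matrix(rnorm(100, 0, 0.3), ncol = 2),   # blob near (0, 0)
           matrix(rnorm(100, 2, 0.3), ncol = 2))   # blob near (2, 2)
sigma <- 0.5
y <- X[1, ]                                        # start one procedure at x_1
for (t in 1:50) {
  d2 <- rowSums((X - matrix(y, nrow(X), 2, byrow = TRUE))^2)
  w  <- exp(-0.5 * d2 / sigma^2)                   # Gaussian weights
  y  <- colSums(X * w) / sum(w)                    # weighted mean = y^(t+1)
}
y   # ends up near the mode of the blob containing x_1
Every point contributes, but the exponentially decaying weights mean that only points near the current $y$ matter in practice.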
48,786 | Outliers in importance sampling | This is my first answer on stackexchange, so feel free to point out anything I'm doing wrong. Also, I am a student studying this subject so I may make mistakes.
Let's consider the importance weights, which are often written in the literature as $P(x)/Q(x)$. If the proposal density $Q$ does not have heavy tails while the target density $P$ does, then the importance weights give very large values to draws that are relatively common under $P(x)$ but much rarer under $Q(x)$. As $Q$ becomes very small, the ratio $P/Q$ becomes very large. This will cause outliers to unduly influence the estimate.
Conversely, if the proposal has heavy tails while the target does not, then the estimate will not be grossly distorted, as the ratio P/Q is very small and so this sample would be weighted lightly. This is not optimal, because then we are not incorporating the full value of this sample's information, but at least it is not leading to major distortions of the estimator.
TL;DR: I think outliers are worst when the ratio $P/Q$ is very large, because they'll impact the estimator the most. Since your target was Laplace, which has heavier tails than the normal, I would not expect that to be an issue in this specific case.
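To see the mechanism in the first paragraph concretely (light-tailed proposal, heavier-tailed target), here is a small R sketch; the standard Laplace target, standard normal proposal, sample size, seed, and the hand-rolled dlaplace helper are all assumptions for illustration:
set.seed(1)
dlaplace <- function(x) 0.5 * exp(-abs(x))   # standard Laplace density
x <- rnorm(10000)                            # draws from the light-tailed proposal Q
w <- dlaplace(x) / dnorm(x)                  # importance weights P(x)/Q(x)
summary(w)
max(w) / sum(w)   # fraction of all weight carried by the single most extreme draw
The largest weights come from the few draws far out in the tails, which is exactly the $P/Q$ blow-up described above.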
48,787 | Wald test and F distribution | To emphasize that the distance between the point estimate $\hat{\mu}$ and the hypothesized value $\mu$ is being scaled by its standard error, I find it easier to write $W$ as
$$
W = \frac{(\hat{\mu} - \mu_0)^2}{\hat{\sigma}^2/n}.
$$
Recall the relationship between $\hat{\sigma}^2$ and the unbiased sample variance $s^2$:
$$
\hat{\sigma}^2 = \frac{n-1}{n} s^2.
$$
Now, notice that we can write
$$
\frac{n-1}{n} W = \frac{(\hat{\mu} - \mu_0)^2}{s^2/n}.
$$
This is useful for a couple of reasons. First, recognize that the square root of the right-hand side is the statistic for the one-sample Student's t-test, which has a Student's t sampling distribution with $n-1$ degrees of freedom under the null hypothesis, assuming the sample is i.i.d. normally distributed. That is,
$$
\sqrt{\frac{n-1}{n} W} = \frac{\hat{\mu} - \mu_0}{s/\sqrt{n}} \sim t_{n-1}.
$$
Next, recall a relationship between the Student's t and (central) F distributions: if $Y \sim t_{\nu}$, then $Y^2 \sim F$ with degrees of freedom 1 and $\nu$. Therefore,
$$
\frac{n-1}{n} W \sim F
$$
with degrees of freedom 1 and $n-1$.
The note that you linked in point 2 does not explicitly apply here. First, as you stated, $\sigma^2$ is unknown. Also, you have described the classic one-sample Student's t-test, whereas the link is describing a more general case (i.e., testing regression coefficients). The dimension concepts you quoted are referring to multivariate problems. You can see a connection though by noting that the dimension here is $p = 1$.
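A quick numerical check of that scaling in R (the simulated normal sample, its size, and $\mu_0 = 0$ are assumptions for illustration):
set.seed(1)
n   <- 25; mu0 <- 0
y   <- rnorm(n, mean = 0.3, sd = 2)
sig2.hat <- mean((y - mean(y))^2)                   # divides by n, so equals (n-1)/n * s^2
W   <- (mean(y) - mu0)^2 / (sig2.hat / n)
tstat <- (mean(y) - mu0) / (sd(y) / sqrt(n))        # one-sample t statistic (uses s)
c((n - 1) / n * W, tstat^2)                         # identical up to rounding
pf((n - 1) / n * W, 1, n - 1, lower.tail = FALSE)   # F(1, n-1) tail area
2 * pt(abs(tstat), n - 1, lower.tail = FALSE)       # equals the two-sided t-test p-value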
48,788 | Visualising relationship data | There is much of relevance at
Graph for relationship between two ordinal variables
The detail there of using ordinal variables does not bite with your problem where the workers are just different.
You might need to expand on "appealing": there is often tension here between clever and unusual but difficult to decode and basic and simple but easy to decode.
48,789 | hyperspherical nature of K means (and other squared error clustering method) | Well, mathematically, k-means clusters are not spherical, but Voronoi cells.
However, the claim is not invalid: the actual data usually do not fill the whole cell, and if you take the convex hull of the data it is indeed somewhat spherical in nature.
The reason probably is that when minimizing variance (and k-means minimizes the within-cluster variance, a.k.a. the sum of squares) you also minimize Euclidean distances: the squared Euclidean distance is a sum of squares. And since the square root does not change the ordering (it's monotone!), the assignment rule of k-means prefers spherical clusters, by implicitly preferring Euclidean-distance assignment.
Yes, k-means can be changed. Use k-medoids/PAM with the maximum norm instead (don't just swap the norm inside k-means; you may lose convergence. K-medoids/PAM is guaranteed to converge with arbitrary distances!)
Still, the result will not enforce a rectangular shape of clusters. They may still overlap in unexpected ways. The result will likely look like this (actually rotated by 45 degrees, but obviously this does not change the nature much: a strong preference for 45-degree angles):
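If you want to see the Voronoi-cell behaviour directly, here is a small R sketch (the toy data and the choice of three centres are assumptions): at convergence, k-means assigns each point to its nearest centre in Euclidean distance, which is exactly the Voronoi partition induced by the centres.
set.seed(1)
X  <- matrix(rnorm(200), ncol = 2)
km <- kmeans(X, centers = 3, nstart = 10)
d  <- as.matrix(dist(rbind(km$centers, X)))[-(1:3), 1:3]  # point-to-centre distances
nearest <- apply(d, 1, which.min)                         # nearest-centre labels
table(nearest, km$cluster)   # one-to-one match: the two partitions coincide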
48,790 | hyperspherical nature of K means (and other squared error clustering method) | Here is one way that one might think of k-means in terms of hyperspheres. A point $x$ belongs to the cluster centered at $c \in CENTERS$ if there exists a radius $r$ such that $x$ belongs to the ball centered at $c$ of radius $r$ but does not belong to the ball of radius $r$ centered at any $c' \neq c \in CENTERS$. What this means, intuitively, is that clusters gobble up points by looking around themselves in a sphere. As pointed out elsewhere, this does not imply that the shape of the cluster is a sphere, but this is an artifact of the fact that we make a discrete cutoff for membership in a cluster. If one considers the undiscretized membership scores (which are basically just the $L_2$ distances) as the real metric of interest, then clusters will look like a ball, in the sense that the set of all points that are at least $l$ "like" a cluster center is a ball.
48,791 | Fit poisson regression | If the probability model for $Y$ is this:
$$P(Y_i=y) = \exp(-\lambda_i)\, {\lambda_i}^y / y!$$
and the $i$-th observation's rate parameter is in fact given by:
$$ \log(\lambda_i) = \beta_0 + \beta_1 x_i$$
(with no model misspecification per others' comments here)
Then the answer is yes: you can calculate the PMF for a new $Y$ observation with a given $X$.
So if $X_i=x$, $$P(Y_i=y) = \exp(-\exp(\beta_0 + \beta_1 x))\, \exp(\beta_0 + \beta_1 x)^y / y!$$
If, however, the new $X$ observation is not known, then the marginal $Y$ distribution is a complex mixture of Poisson RVs.
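In R this is just a Poisson GLM followed by dpois at the fitted rate; a minimal sketch (the simulated data, the true coefficients, and the new value $x = 2$ are assumptions for illustration):
set.seed(1)
x <- runif(200, 0, 3)
y <- rpois(200, exp(0.5 + 0.8 * x))              # assumed true beta0 = 0.5, beta1 = 0.8
fit <- glm(y ~ x, family = poisson)
lambda.new <- predict(fit, newdata = data.frame(x = 2), type = "response")
dpois(0:10, lambda.new)                          # estimated P(Y = y | x = 2) for y = 0, ..., 10
As noted above, this is the conditional PMF given the new $x$; it is not the marginal distribution of $Y$.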
48,792 | Fit poisson regression | Partially answered in comments:
Besides the fact that a conditional Poisson distribution (conditional on $x$) does not imply a marginal Poisson distribution, this would seem to be exactly the idea of Poisson regression, yes. – Nick Sabbe
For more information on Poisson regression, see Scaling vs Offsetting in Quasi-Poisson GLM
48,793 | Sufficient statistic and hypothesis testing | Not sure if this is an answer. But perhaps a few comments. If I am restating what you are probably already aware of, my apologies.
First, based on the Fisher–Neyman factorization, if $T(\mathbf{x})$ is a sufficient statistic, then the likelihood function factorizes into the product of (1) a function that does not involve $\theta$ and (2) a function that depends on the sample only through the sufficient statistic $T(\mathbf{x})$. So the first function in the factorization cancels when one looks at the likelihood ratio. In other words, if there is a sufficient statistic $T(\mathbf{x})$, the likelihood ratio's dependence on the sample is only through $T(\mathbf{x})$. Then, assessing the plausibility of a bigger $\theta$ versus a smaller $\theta$ (i.e., whether or not to reject the null hypothesis) based on the sample $\mathbf{x}$ has to be tied to $T(\mathbf{x})$.
Second, if the ratio is a "non-increasing (as opposed to non-decreasing)" function of $T(\mathbf{x})$, wouldn't the ratio automatically be a non-decreasing function of $-T(\mathbf{x})$? Doesn't the theorem about the UMP test then apply, now using the "different" sufficient statistic $-T(\mathbf{x})$?
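In symbols, the first comment is just the factorization
$$f(\mathbf{x}\mid\theta) = h(\mathbf{x})\, g\big(T(\mathbf{x})\mid\theta\big),$$
so that for any $\theta_2 > \theta_1$
$$\frac{f(\mathbf{x}\mid\theta_2)}{f(\mathbf{x}\mid\theta_1)} = \frac{h(\mathbf{x})\, g\big(T(\mathbf{x})\mid\theta_2\big)}{h(\mathbf{x})\, g\big(T(\mathbf{x})\mid\theta_1\big)} = \frac{g\big(T(\mathbf{x})\mid\theta_2\big)}{g\big(T(\mathbf{x})\mid\theta_1\big)},$$
which depends on the data only through $T(\mathbf{x})$.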
48,794 | How to determine the sample size of a Latin Hypercube sampling? | The total number of sample combinations you have is $2\times 3 \times 2 \times 3 \times 3 = 108$ (or whatever it works out to). Depending on your experiment (and the difficulty of taking samples), you should ideally just sample everything. If not, there are a few other options.
You can't technically do standard LHC sampling, or orthogonal sampling, because it requires each dimension to have the same number of levels. However, you can do LHC if you use $6n$ (lowest common multiple of 3 and 2) levels, and then map that to your 2- and 3-level spaces.
The number of samples you choose is up to you, but more samples will give you more reliable results, and will also help avoid correlation between variables (you should check this when you decide what your samples are, before you actually take them). If you expect that your effect size is going to be small relative to noise, then choose a larger sample size.
Another method that might be sensible is to use a Low-discrepancy sequence, like the Sobol sequence. Basically, you take a sequence over the real space $[0,1]^5$, and then map each dimension to your variables (so if you get something in the lower half of your $[0,1]$ dimension for your first 2-level variable, then you choose level 1, etc.). This has the advantage over LHC that you can decide to add more samples later, while retaining relatively even sample coverage, and low correlations between variables. Also, you're not restricted to sample sizes of $6n$. I successfully used this method with sample sizes as low as 25.
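A base-R sketch of the "use $6n$ levels, then map down" idea (the sample size n = 12 and the generic variable names are made up for illustration; the level counts 2, 3, 2, 3, 3 follow the example above):
set.seed(1)
n <- 12                                          # a multiple of 6
lhs <- sapply(1:5, function(j) sample(n))        # one permutation of 1..n per variable
levels.per.var <- c(2, 3, 2, 3, 3)
design <- sapply(1:5, function(j) ceiling(lhs[, j] * levels.per.var[j] / n))
colnames(design) <- paste0("V", 1:5)
head(design)
cor(design)    # worth checking before you run anything: correlations should be small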
48,795 | What regression analysis should I perform on my data and why? | Whether a variable is categorical depends only on the variable, not on any "sharing" of common values. In your case, LAW_FAM is categorical because it has four discrete categories: FRA, SCA, ENG, GER. In particular, LAW_FAM is nominal: the categories have no ordering. You could have several countries which happen to have exactly the same DEP_AVG, but that doesn't make DEP_AVG a categorical variable.
I would suggest that you look at Multilevel/Hierarchical Models, since you have hierarchical data: country-level data and company-level data nested within countries.
Your post is very good: you include enough details to help us help you. One more thing that would also help us point you in the right direction is to know what software you will be using for your analysis.
EDIT: You ask about Generalized Linear Models, which are chosen for specific kinds of dependent variables. For example, if you wanted to predict a categorical variable, you'd use logistic regression (which is done with a GLM).
48,796 | What regression analysis should I perform on my data and why? | Your situation is a bit complicated. We just need to take a step back.
In order for us to run this regression, we need to know what your research question / hypothesis is.
You might not have to use the GLM, but could build a model from the linear regression and use the "test method" (which is not available in the drop-down menu of SPSS and only in syntax) described below in the syntax.
Please run this syntax and let me know if the output is what you were looking for:
DATASET ACTIVATE DataSet1.
REGRESSION
/DESCRIPTIVES MEAN STDDEV CORR SIG N
/MISSING LISTWISE
/STATISTICS COEFF OUTS CI(95) R ANOVA COLLIN TOL CHANGE ZPP
/CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT GSE_RAW
/METHOD=ENTER DEP_AVG CON_AVG
/METHOD=ENTER SIGN INCID
/METHOD=TEST (LAW_FRA, LAW_SCA, LAW_ENG, LAW_GER)
/SCATTERPLOT=(*ZPRED ,*ZRESID)
/RESIDUALS HISTOGRAM(ZRESID) NORMPROB(ZRESID).
48,797 | What regression analysis should I perform on my data and why? | OK, let me get this straight. In response to your older question here, you're trying to fit a more complicated mixed/multilevel/hierarchical model (yah for terminology). Not having any experience with SPSS, this is going to be more general, along with some guesses at what SPSS is looking for via the screenshots provided (one-eyed leading the blind and all that).
Analyze->Mixed Models->Linear is the correct choice here.
A note on terminology: you mention GLM or GLS several times. This isn't what you're trying to fit. A GLM ("Generalized Linear Model") is for when your response variable is not normally distributed (for example, success or failure). GLS is something I'm unfamiliar with.
The warning message you're getting seems to be because you haven't specified any effects. You'll notice that the model result only returns a coefficient for the intercept. It looks like SPSS wants you to declare your variables in the first menu, then declare what they are under Fixed and Random. Now, for fixed vs random (warning: terminology differs):
For our purposes (and apparently for SPSS'), a fixed effect is an independent variable you're inserting into your model and estimating a coefficient for. So MKT_AVG_LN, SIGN, etc. All those country-wide variables you carried down, and the source of all your questions, go here.
You'll need to go into the Fixed menu and specify them.
A random effect is what makes this tick and different from OLS. This is where the grouping/multilevel stuff comes into play. Rather than estimating a coefficient for these variables, a covariance structure is estimated which imposes further structure in your model, mediating the non-independence of your country-level variables being carried down to the firm-level. The structuring of these can get exceedingly complicated very quickly, but let's keep things simple here.
You will need a variable indicating the country (let's call it COUNTRY). This should be placed under Random->Subjects
Further notes:
It would appear that factors = categorical variables and covariates = continuous variables here. I see you have DEP_AVG and CON_AVG under factors. These are (probably) not categorical variables, and should be moved.
It looks like COUNTRY, LAW_FAM should be your only factors. Perhaps the other two LAW variables as well.
As I mentioned before, I don't use SPSS, so this is me eyeballing things and hoping things work out, while hopefully imparting some idea of how mixed models work.
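Not SPSS, but if it helps to see the same fixed/random split written out, here is roughly what the model would look like in R with the lme4 package (the package, the data-frame name firms, and the exact set of fixed effects are assumptions; the variable names are the ones used in the thread):
library(lme4)
m <- lmer(GSE_RAW ~ MKT_AVG_LN + SIGN + INCID + LAW_FAM + (1 | COUNTRY), data = firms)
summary(m)
Everything before the parentheses is a fixed effect with an estimated coefficient; (1 | COUNTRY) is the random intercept that handles the country-level grouping.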
48,798 | What regression analysis should I perform on my data and why? | I think that you are missing two critical factors. If you try to make a model of gravity but do not take into account mass or inter-mass distance, then no model will work well.
http://www.ted.com/talks/geoffrey_west_the_surprising_math_of_cities_and_corporations.html
I just love standing on the shoulders of giants. As much as I wish I were a giant, I always see farther with their help.
You need the company-specific variables that include "the current number of employees", "the cumulative sum of all employees over the life of the company", and "the age of the company".
I would also include the "cumulative revenue of the company" and "the current gross revenue".
Now, I do not use SPSS. I do not speak its language. I do, however, know a little about models. I would suggest use of a random forest to determine which of the variables in this collection inform GSR_Raw. Once you get an idea of which variables are worthless, then you can remove them from your model and simplify your analysis.
After you have a reduced model and are sure the inputs inform your output, then you can start trying to fit models. Start with the basics. Don't leap into crazy stuff until you are sure that the basic models don't do a "good enough" job.
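A minimal R version of that suggestion (the randomForest package and the data-frame name firms are assumptions; GSR_Raw is the outcome named above):
library(randomForest)
rf <- randomForest(GSR_Raw ~ ., data = firms, importance = TRUE)
importance(rf)   # which candidate variables actually inform GSR_Raw
varImpPlot(rf)
Drop the variables that contribute essentially nothing, then go back to the simpler regression models.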
48,799 | Is ARIMA better in comparison with Neural Networks? | It looks like you are using both of these models for time-series forecasts. I would cross-validate both models and compare their out-of-sample error.
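One way to make that comparison concrete is rolling-origin (time-series) cross-validation; here is a base-R sketch for the ARIMA side (the series y, its length, the order (1,1,1), and the one-step horizon are assumptions; plug in whatever neural-network forecaster you are using to get the second error series):
h <- 1
origins <- 80:119                                  # forecast origins in a series of length 120
err.arima <- sapply(origins, function(k) {
  fit <- arima(y[1:k], order = c(1, 1, 1))
  predict(fit, n.ahead = h)$pred[h] - y[k + h]
})
sqrt(mean(err.arima^2))                            # out-of-sample RMSE; compute the same for the NN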
48,800 | Is ARIMA better in comparison with Neural Networks? | NNs ignore outliers. If you ignore outliers, then you are in big trouble.
Your ARIMA model is also ignoring outliers, so you are also in big trouble.
As for cross-validating, that is for those that are fitting models to data instead of actually modeling. Only the 849-page textbook "Principles of Forecasting" agrees with me on this statement, BUT if you have taken care of all the things you need to, then you can be so dumb. See the reference here.
4.6 Obtain the most recent data
See more on outliers here.