source_id | question | response | metadata
---|---|---|---|
78,839 | For some tests in R , there is a lower limit on the p-value calculations of $2.22 \cdot 10^{-16}$ . I'm not sure why it's this number, if there is a good reason for it or if it's just arbitrary. A lot of other stats packages just go to 0.0001 , so this is a much higher level of precision. But I haven't seen too many papers reporting $p < 2.22\cdot 10^{-16}$ or $p = 2.22\cdot 10^{-16}$ . Is it a common/best practice to report this computed value or is it more typical to report something else (like p < 0.000000000000001 )? | There's a good reason for it. The value can be found via noquote(unlist(format(.Machine))) double.eps double.neg.eps double.xmin
2.220446e-16 1.110223e-16 2.225074e-308
double.xmax double.base double.digits
1.797693e+308 2 53
double.rounding double.guard double.ulp.digits
5 0 -52
double.neg.ulp.digits double.exponent double.min.exp
-53 11 -1022
double.max.exp integer.max sizeof.long
1024 2147483647 4
sizeof.longlong sizeof.longdouble sizeof.pointer
8 12 4 If you look at the help, ( ?".Machine" ): double.eps
the smallest positive floating-point number x such that 1 + x != 1. It equals
double.base ^ ulp.digits if either double.base is 2 or double.rounding is 0;
otherwise, it is (double.base ^ double.ulp.digits) / 2. Normally 2.220446e-16. It's essentially a value below which you can be quite confident the value will be pretty numerically meaningless - in that any smaller value isn't likely to be an accurate calculation of the value we were attempting to compute. (Having studied a little numerical analysis, depending on what computations were performed by the specific procedure, there's a good chance numerical meaninglessness comes in a fair way above that.) But statistical meaning will have been lost far earlier. Note that p-values depend on assumptions, and the further out into the extreme tail you go the more heavily the true p-value (rather than the nominal value we calculate) will be affected by the mistaken assumptions, in some cases even when they're only a little bit wrong. Since the assumptions are simply not going to be all exactly satisfied, middling p-values may be reasonably accurate (in terms of relative accuracy, perhaps only out by a modest fraction), but extremely tiny p-values may be out by many orders of magnitude. Which is to say that usual practice (something like the "<0.0001" that you say is common in packages, or the APA rule that Jaap mentions in his answer) is probably not so far from sensible practice, but the approximate point at which things lose meaning beyond saying 'it's very very small' will of course vary quite a lot depending on circumstances. This is one reason why I can't suggest a general rule - there can't be a single rule that's even remotely suitable for everyone in all circumstances - change the circumstances a little and the broad grey line marking the change from somewhat meaningful to relatively meaningless will change, sometimes by a long way. If you were to specify sufficient information about the exact circumstances (e.g. it's a regression, with this much nonlinearity, that amount of variation in this independent variable, this kind and amount of dependence in the error term, that kind and amount of heteroskedasticity, this shape of error distribution), I could simulate 'true' p-values for you to compare with the nominal p-values, so you could see when they were too different for the nominal value to carry any meaning. But that leads us to the second reason why - even if you specified enough information to simulate the true p-values - I still couldn't responsibly state a cut-off for even those circumstances. What you report depends on people's preferences - yours, and your audience's. Imagine you told me enough about the circumstances for me to decide that I wanted to draw the line at a nominal $p$ of $10^{-6}$. All well and good, we might think - except your own preference function (what looks right to you, were you to look at the difference between nominal p-values given by stats packages and the ones resulting from simulation when you suppose a particular set of failures of assumptions) might put it at $10^{-5}$ and the editors of the journal you want to submit to might have their blanket rule to cut off at $10^{-4}$, while the next journal might put it at $10^{-3}$ and the next may have no general rule and the specific editor you got might accept even lower values than I gave ... but one of the referees may then have a specific cut-off! In the absence of knowledge of their preference functions and rules, and the absence of knowledge of your own utilities, how do I responsibly suggest any general choice of what actions to take?
I can at least tell you the sorts of things that I do (and I don't suggest this is a good choice for you at all): There are few circumstances (outside of simulating p-values) in which I would make much of a p less than $10^{-6}$ (I may or may not mention the value reported by the package, but I wouldn't make anything of it other than that it was very small; I would usually emphasize the meaninglessness of the exact number). Sometimes I take a value somewhere in the region of $10^{-5}$ to $10^{-4}$ and say that p was much less than that. On occasion I do actually do as suggested above - perform some simulations to see how sensitive the p-value is in the far tail to various violations of the assumptions, particularly if there's a specific kind of violation I am worried about. That's certainly helpful in informing a choice - but I am as likely to discuss the results of the simulation as to use them to choose a cut-off value, giving others a chance to choose their own. An alternative to simulation is to look at some procedures that are more robust* to the various potential failures of assumption and see how much difference to the p-value that might make. Their p-values will also not be particularly meaningful, but they do at least give some sense of how much impact there might be. If some are very different from the nominal one, it also gives more of an idea which violations of assumptions to investigate the impact of. Even if you don't report any of those alternatives, it gives a better picture of how meaningful your small p-value is. * Note that here we don't really need procedures that are robust to gross violations of some assumption; ones that are less affected by relatively mild deviations of the relevant assumption should be fine for this exercise. I will say that when/if you do come to do such simulations, even with quite mild violations, in some cases it can be surprising how far even not-that-small p-values can be wrong. That has done more to change the way I personally interpret a p-value than it has shifted the specific cut-offs I might use. When submitting the results of an actual hypothesis test to a journal, I try to find out if they have any rule. If they don't, I tend to please myself, and then wait for the referees to complain. | {
"source": [
"https://stats.stackexchange.com/questions/78839",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3984/"
]
} |
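As a hedged, minimal R illustration of the reporting behaviour discussed in the answer above (the exact printed strings depend on your digits option, and the threshold argument shown for a coarser cut-off is chosen purely for illustration):
.Machine$double.eps              # 2.220446e-16, the machine epsilon discussed above
p <- pnorm(-15)                  # an extremely small (but still representable) nominal p-value
format.pval(p)                   # collapsed to a "less than" threshold because p < .Machine$double.eps, the default eps
format.pval(p, eps = 1e-4)       # a coarser reporting threshold, closer to the "< 0.0001" convention mentioned in the question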
79,028 | With respect to unsupervised learning (like clustering), are there any metrics to evaluate performance? | In some sense I think this question is unanswerable. I say this because how well a particular unsupervised method performs will largely depend on why one is doing unsupervised learning in the first place, i.e., does the method perform well in the context of your end goal? Obviously this isn't completely true; people work on these problems and publish results which include some sort of evaluation. I'll outline a few of the approaches I'm familiar with below. A good resource (with references) for clustering is sklearn's documentation page, Clustering Performance Evaluation . This covers several methods, but all but one, the Silhouette Coefficient, assumes ground truth labels are available. This method is also mentioned in the question Evaluation measure of clustering , linked in the comments for this question. If your unsupervised learning method is probabilistic, another option is to evaluate some probability measure (log-likelihood, perplexity, etc.) on held out data. The motivation here is that if your unsupervised learning method assigns high probability to similar data that wasn't used to fit parameters, then it has probably done a good job of capturing the distribution of interest. A domain where this type of evaluation is commonly used is language modeling. The last option I'll mention is using a supervised learner on a related auxiliary task. If your unsupervised method produces latent variables, you can think of these latent variables as being a representation of the input. Thus, it is sensible to use these latent variables as input for a supervised classifier performing some task related to the domain the data is from. The performance of the supervised method can then serve as a surrogate for the performance of the unsupervised learner. This is essentially the setup you see in most work on representation learning. This description is probably a little nebulous, so I'll give a concrete example. Nearly all of the work on word representation learning uses the following approach for evaluation: Learn representations of words using an unsupervised learner. Use the learned representations as input for a supervised learner performing some NLP task like part-of-speech tagging or named entity recognition. Assess the performance of the unsupervised learner by its ability to improve the performance of the supervised learner compared to a baseline using a standard representation, like binary word presence features, as input. For an example of this approach in action see the paper Training Restricted Boltzmann Machines on Word Observations by Dahl et al. | {
"source": [
"https://stats.stackexchange.com/questions/79028",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3125/"
]
} |
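As a concrete companion to the first point in the answer above, here is a hedged R sketch of an internal (label-free) evaluation using the silhouette coefficient; it assumes the cluster package is installed and uses k-means on the built-in iris measurements purely as a stand-in clustering method:
library(cluster)
set.seed(1)
X   <- scale(iris[, 1:4])                  # any numeric feature matrix will do
km  <- kmeans(X, centers = 3, nstart = 25)
sil <- silhouette(km$cluster, dist(X))
mean(sil[, "sil_width"])                   # average silhouette width; values near 1 suggest compact, well-separated clusters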
79,289 | I have completed my data analysis and got "statistically significant results" which is consistent with my hypothesis. However, a student in statistics told me this is a premature conclusion. Why? Is there anything else needed to be included in my report? | Hypothesis testing versus parameter estimation Typically, hypotheses are framed in a binary way. I'll put directional hypotheses to one side, as they don't change the issue much. It is common, at least in psychology, to talk about hypotheses such as: the difference between group means is or is not zero; the correlation is or is not zero; the regression coefficient is or is not zero; the r-square is or is not zero. In all these cases, there is a null hypothesis of no effect, and an alternative hypothesis of an effect. This binary thinking is generally not what we are most interested in. Once you think about your research question, you will almost always find that you are actually interested in estimating parameters. You are interested in the actual difference between group means, or the size of the correlation, or the size of the regression coefficient, or the amount of variance explained. Of course, when we get a sample of data, the sample estimate of a parameter is not the same as the population parameter. So we need a way of quantifying our uncertainty about what the value of the parameter might be. From a frequentist perspective, confidence intervals provide a means of doing, although Bayesian purists might argue that they don't strictly permit the inference you might want to make. From a Bayesian perspective, credible intervals on posterior densities provide a more direct means of quantifying your uncertainty about the value of a population parameter. Parameters / effect sizes Moving away from the binary hypothesis testing approach forces you to think in a continuous way. For example, what size difference in group means would be theoretically interesting? How would you map difference between group means onto subjective language or practical implications? Standardised measures of effect along with contextual norms are one way of building a language for quantifying what different parameter values mean. Such measures are often labelled "effect sizes" (e.g., Cohen's d, r, $R^2$, etc.). However, it is perfectly reasonable, and often preferable, to talk about the importance of an effect using unstandardised measures (e.g., the difference in group means on meaningful unstandardised variables such as income levels, life expectancy, etc.). There's a huge literature in psychology (and other fields) critiquing a focus on p-values, null hypothesis significance testing, and so on (see this Google Scholar search ). This literature often recommends reporting effect sizes with confidence intervals as a resolution (e.g., APA Task force by Wilkinson, 1999). Steps for moving away from binary hypothesis testing If you are thinking about adopting this thinking, I think there are progressively more sophisticated approaches you can take: Approach 1a. Report the point estimate of your sample effect (e.g., group mean differences) in both raw and standardised terms. When you report your results discuss what such a magnitude would mean for theory and practice. Approach 1b. Add to 1a, at least at a very basic level, some sense of the uncertainty around your parameter estimate based on your sample size. Approach 2. 
Also report confidence intervals on effect sizes and incorporate this uncertainty into your thinking about the plausible values of the parameter of interest. Approach 3. Report Bayesian credible intervals, and examine the implications of various assumptions on that credible interval, such as choice of prior, the data generating process implied by your model, and so on. Among many possible references, you'll see Andrew Gelman talk a lot about these issues on his blog and in his research. References Nickerson, R. S. (2000). Null hypothesis significance testing: a review of an old and continuing controversy. Psychological methods, 5(2), 241. Wilkinson, L. (1999). Statistical methods in psychology journals: guidelines and explanations. American psychologist, 54(8), 594. PDF | {
"source": [
"https://stats.stackexchange.com/questions/79289",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36006/"
]
} |
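A hedged R sketch of Approaches 1a and 2 from the answer above: report the raw group-mean difference with a confidence interval and a standardised effect size rather than only a p-value (the two groups here are simulated purely for illustration):
set.seed(1)
g1 <- rnorm(40, mean = 100, sd = 15)       # e.g. a control group
g2 <- rnorm(40, mean = 108, sd = 15)       # e.g. a treatment group
tt <- t.test(g2, g1)                       # Welch test; tt$conf.int is a CI for the raw mean difference
d  <- (mean(g2) - mean(g1)) / sqrt((var(g1) + var(g2)) / 2)   # Cohen's d (equal-n pooled SD)
c(raw_diff = mean(g2) - mean(g1), ci_low = tt$conf.int[1], ci_high = tt$conf.int[2], cohens_d = d)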
79,360 | I have data collected from an experiment organized as follows: Two sites, each with 30 trees. 15 are treated, 15 are control at each site. From each tree, we sample three pieces of the stem, and three pieces of the roots, so 6 level 1 samples per tree which is represented by one of two factor levels (root, stem). Then, from those stem / root samples, we take two samples by dissecting different tissues within the sample, which is represented by one of two factor levels for tissue type (tissue type A, tissue type B). These samples are measured as a continuous variable. Total number of observations is 720; 2 sites * 30 trees * (three stem samples + three root samples) * (one tissue A sample + one tissue B sample). Data looks like this... Site Tree Treatment Organ Sample Tissue Total_Length
1 L LT1 T R 1 Phloem 30
2 L LT1 T R 1 Xylem 28
3 L LT1 T R 2 Phloem 46
4 L LT1 T R 2 Xylem 38
5 L LT1 T R 3 Phloem 103
6 L LT1 T R 3 Xylem 53
7 L LT1 T S 1 Phloem 29
8 L LT1 T S 1 Xylem 21
9 L LT1 T S 2 Phloem 56
10 L LT1 T S 2 Xylem 49
11 L LT1 T S 3 Phloem 41
12 L LT1 T S 3 Xylem 30 I am attempting to fit a mixed effects model using R and lme4, but am new to mixed models. I'd like to model the response as the Treatment + Level 1 Factor (stem, root) + Level 2 Factor (tissue A, tissue B), with random effects for the specific samples nested within the two levels. In R, I am doing this using lmer, as follows fit <- lmer(Response ~ Treatment + Organ + Tissue + (1|Tree/Organ/Sample)) From my understanding (...which is not certain, and why I am posting!) the term: (1|Tree/Organ/Sample) Specifies that 'Sample' is nested within the organ samples, which is nested within the tree. Is this sort of nesting relevant / valid? Sorry if this question is not clear, if so, please specify where I can elaborate. | I think this is correct. (1|Tree/Organ/Sample) expands to/is equivalent to (1|Tree)+(1|Tree:Organ)+(1|Tree:Organ:Sample) (where : denotes an interaction). The fixed factors Treatment , Organ and Tissue automatically get handled at the correct level. You should probably include Site as a fixed effect (conceptually it's a random effect, but it's not practical to try to estimate among-site variance with only two sites); this will reduce the among-tree variance slightly. You should probably include all the data within a data frame, and pass this explicitly to lmer via a data=my.data.frame argument. You may find the glmm FAQ helpful (it's focused on GLMMs but does have stuff relevant to linear mixed models as well). | {
"source": [
"https://stats.stackexchange.com/questions/79360",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36046/"
]
} |
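To make the expansion described in the answer above concrete, here is a hedged, self-contained R sketch on simulated toy data (the variable names mimic the question, lme4 is assumed to be installed, and the pure-noise response may trigger a harmless singular-fit message):
library(lme4)
set.seed(1)
d <- expand.grid(Tree = factor(1:10), Organ = c("R", "S"),
                 Sample = factor(1:3), Tissue = c("Phloem", "Xylem"))
d$Treatment <- ifelse(as.integer(d$Tree) <= 5, "T", "C")
d$y <- rnorm(nrow(d), mean = 50, sd = 10)            # noise-only response, just to make the code run
f1 <- lmer(y ~ Treatment + Organ + Tissue + (1 | Tree/Organ/Sample), data = d)
f2 <- lmer(y ~ Treatment + Organ + Tissue +
           (1 | Tree) + (1 | Tree:Organ) + (1 | Tree:Organ:Sample), data = d)
VarCorr(f1)   # identical variance components to f2, confirming the expansion of the nesting shorthand
VarCorr(f2)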
79,399 | I have run a multiple regression in which the model as a whole is significant and explains about 13% of the variance. However, I need to find the amount of variance explained by each significant predictor. How can I do this using R? Here's some sample data and code: D = data.frame(
dv = c( 0.75, 1.00, 1.00, 0.75, 0.50, 0.75, 1.00, 1.00, 0.75, 0.50 ),
iv1 = c( 0.75, 1.00, 1.00, 0.75, 0.75, 1.00, 0.50, 0.50, 0.75, 0.25 ),
iv2 = c( 0.882, 0.867, 0.900, 0.333, 0.875, 0.500, 0.882, 0.875, 0.778, 0.867 ),
iv3 = c( 1.000, 0.067, 1.000, 0.933, 0.875, 0.500, 0.588, 0.875, 1.000, 0.467 ),
iv4 = c( 0.889, 1.000, 0.905, 0.938, 0.833, 0.882, 0.444, 0.588, 0.895, 0.812 ),
iv5 = c( 18, 16, 21, 16, 18, 17, 18, 17, 19, 16 ) )
fit = lm( dv ~ iv1 + iv2 + iv3 + iv4 + iv5, data=D )
summary( fit ) Here's the output with my actual data: Call: lm(formula = posttestScore ~ pretestScore + probCategorySame +
probDataRelated + practiceAccuracy + practiceNumTrials, data = D)
Residuals:
Min 1Q Median 3Q Max
-0.6881 -0.1185 0.0516 0.1359 0.3690
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.77364 0.10603 7.30 8.5e-13 ***
iv1 0.29267 0.03091 9.47 < 2e-16 ***
iv2 0.06354 0.02456 2.59 0.0099 **
iv3 0.00553 0.02637 0.21 0.8340
iv4 -0.02642 0.06505 -0.41 0.6847
iv5 -0.00941 0.00501 -1.88 0.0607 .
--- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.18 on 665 degrees of freedom
Multiple R-squared: 0.13, Adjusted R-squared: 0.123
F-statistic: 19.8 on 5 and 665 DF, p-value: <2e-16 This question has been answered here , but the accepted answer only addresses uncorrelated predictors, and while there is an additional response that addresses correlated predictors, it only provides a general hint, not a specific solution. I would like to know what to do if my predictors are correlated. | The percentage explained depends on the order entered. If you specify a particular order, you can compute this trivially in R (e.g. via the update and anova functions, see below), but a different order of entry would yield potentially very different answers. [One possibility might be to average across all orders or something, but it would get unwieldy and might not be answering a particularly useful question.] -- As Stat points out, with a single model, if you're after one variable at a time, you can just use 'anova' to produce the incremental sums of squares table. This would follow on from your code: anova(fit)
Analysis of Variance Table
Response: dv
Df Sum Sq Mean Sq F value Pr(>F)
iv1 1 0.033989 0.033989 0.7762 0.4281
iv2 1 0.022435 0.022435 0.5123 0.5137
iv3 1 0.003048 0.003048 0.0696 0.8050
iv4 1 0.115143 0.115143 2.6294 0.1802
iv5 1 0.000220 0.000220 0.0050 0.9469
Residuals 4 0.175166 0.043791 -- So there we have the incremental variance explained; how do we get the proportion? Pretty trivially, scale them by 1 divided by their sum. (Replace the 1 with 100 for percentage variance explained.) Here I've displayed it as an added column to the anova table: af <- anova(fit)
afss <- af$"Sum Sq"
print(cbind(af,PctExp=afss/sum(afss)*100))
Df Sum Sq Mean Sq F value Pr(>F) PctExp
iv1 1 0.0339887640 0.0339887640 0.77615140 0.4280748 9.71107544
iv2 1 0.0224346357 0.0224346357 0.51230677 0.5137026 6.40989591
iv3 1 0.0030477233 0.0030477233 0.06959637 0.8049589 0.87077807
iv4 1 0.1151432643 0.1151432643 2.62935731 0.1802223 32.89807550
iv5 1 0.0002199726 0.0002199726 0.00502319 0.9468997 0.06284931
Residuals 4 0.1751656402 0.0437914100 NA NA 50.04732577 -- If you decide you want several particular orders of entry, you can do something even more general like this (which also allows you to enter or remove groups of variables at a time if you wish): m5 = fit
m4 = update(m5, ~ . - iv5)
m3 = update(m4, ~ . - iv4)
m2 = update(m3, ~ . - iv3)
m1 = update(m2, ~ . - iv2)
m0 = update(m1, ~ . - iv1)
anova(m0,m1,m2,m3,m4,m5)
Analysis of Variance Table
Model 1: dv ~ 1
Model 2: dv ~ iv1
Model 3: dv ~ iv1 + iv2
Model 4: dv ~ iv1 + iv2 + iv3
Model 5: dv ~ iv1 + iv2 + iv3 + iv4
Model 6: dv ~ iv1 + iv2 + iv3 + iv4 + iv5
Res.Df RSS Df Sum of Sq F Pr(>F)
1 9 0.35000
2 8 0.31601 1 0.033989 0.7762 0.4281
3 7 0.29358 1 0.022435 0.5123 0.5137
4 6 0.29053 1 0.003048 0.0696 0.8050
5 5 0.17539 1 0.115143 2.6294 0.1802
6 4 0.17517 1 0.000220 0.0050 0.9469 (Such an approach might also be automated, e.g. via loops and the use of get . You can add and remove variables in multiple orders if needed) ... and then scale to percentages as before. (NB. The fact that I explain how to do these things should not necessarily be taken as advocacy of everything I explain.) | {
"source": [
"https://stats.stackexchange.com/questions/79399",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12647/"
]
} |
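The bracketed remark in the answer above (averaging the incremental contributions over all orders of entry) is implemented in the relaimpo package as the LMG measure; a hedged sketch, assuming that package is installed and continuing from the fit object defined in the question:
library(relaimpo)
calc.relimp(fit, type = "lmg", rela = TRUE)   # per-predictor shares of R^2, averaged over all orders of entry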
79,454 | I'm trying to add a softmax layer to a neural network trained with backpropagation, so I'm trying to compute its gradient. The softmax output is $h_j = \frac{e^{z_j}}{\sum{e^{z_i}}}$ where $j$ is the output neuron number. If I derive it then I get $\frac{\partial{h_j}}{\partial{z_j}}=h_j(1-h_j)$ Similar to logistic regression.
However this is wrong since my numerical gradient check fails. What am I doing wrong? I had a thought that I need to compute the cross derivatives as well (i.e. $\frac{\partial{h_j}}{\partial{z_k}}$) but I'm not sure how to do this and keep the dimension of the gradient the same so it will fit for the back propagation process. | I feel a little bit bad about providing my own answer for this because it is pretty well captured by amoeba and juampa, except for maybe the final intuition about how the Jacobian can be reduced back to a vector. You correctly derived the gradient of the diagonal of the Jacobian matrix, which is to say that $ {\partial h_i \over \partial z_j}= h_i(1-h_j)\;\;\;\;\;\;: i = j $ and as amoeba stated it, you also have to derive the off diagonal entries of the Jacobian, which yield $ {\partial h_i \over \partial z_j}= -h_ih_j\;\;\;\;\;\;: i \ne j $ These two concepts definitions can be conveniently combined using a construct called the Kronecker Delta , so the definition of the gradient becomes $ {\partial h_i \over \partial z_j}= h_i(\delta_{ij}-h_j) $ So the Jacobian is a square matrix $ \left[J \right]_{ij}=h_i(\delta_{ij}-h_j) $ All of the information up to this point is already covered by amoeba and juampa. The problem is of course, that we need to get the input errors from the output errors that are already computed. Since the gradient of the output error $\nabla h_i$ depends on all of the inputs, then the gradient of the input $x_i$ is $[\nabla x]_k = \sum\limits_{i=1} \nabla h_{i,k} $ Given the Jacobian matrix defined above, this is implemented trivially as the product of the matrix and the output error vector: $ \vec{\sigma_l} = J\vec{\sigma_{l+1}} $ If the softmax layer is your output layer, then combining it with the cross-entropy cost model simplifies the computation to simply $ \vec{\sigma_l} = \vec{h}-\vec{t} $ where $\vec{t}$ is the vector of labels, and $\vec{h}$ is the output from the softmax function. Not only is the simplified form convenient, it is also extremely useful from a numerical stability standpoint. | {
"source": [
"https://stats.stackexchange.com/questions/79454",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5946/"
]
} |
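Since the question arose from a failed numerical gradient check, here is a hedged R sketch that checks the full Jacobian $J_{ij} = h_i(\delta_{ij}-h_j)$ derived above against finite differences (the input vector z is arbitrary):
softmax <- function(z) { e <- exp(z - max(z)); e / sum(e) }
z <- c(0.5, -1.2, 2.0)
h <- softmax(z)
J_analytic <- diag(h) - h %*% t(h)            # entry (i, j) equals h_i * (delta_ij - h_j)
eps <- 1e-6
J_numeric <- sapply(seq_along(z), function(j) {
  zp <- z; zp[j] <- zp[j] + eps
  (softmax(zp) - h) / eps                     # column j approximates the partial derivatives w.r.t. z_j
})
max(abs(J_analytic - J_numeric))              # should be tiny, on the order of eps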
79,455 | I have 8, 1-minute audio excerpts, all of which feature the same music. Four of them were recorded by a middle school music ensemble (2 expressive, 2 unexpressive) and four by a high school music ensemble (2 expressive, 2 unexpressive). I am getting participants in the MS ensemble, the HS ensemble, and a set of expert evaluators to listen to all 8 excerpts and assign a single rating. Because all of these excerpts feature the same 1-minute piece of music (although by two different groups and under two different conditions - expressive and unexpressive), do I need to have 3 different audio presentation orders to help control for order effects? I am going to average the 2 expressive and 2 unexpressive audio excerpt ratings for each group (MS, HS, Experts). My thought was that by averaging the ratings (to get scores for each group - MS Expressive, MS Unexpressive, HS Expressive, HS Unexpressive) I wouldn't really need to have separate orders. Any help about counterbalancing and/or ways to avoid fatigue effects (since it is 8, 1-minute excerpts of the same piece of music, although recorded by two different music groups under two different conditions) would be most helpful. Thanks for your help! | | {
"source": [
"https://stats.stackexchange.com/questions/79455",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36093/"
]
} |
79,905 | I have a question regarding the Cross-validation process. I am in the middle of a course of the Machine Learning on the Cursera. One of the topic is about the Cross-validation. I found it slightly difficult to follow. I do know why we need CV because we want our models to work well on future (unknown) data and CV prevents from overfitting. However, the process itself is confusing. What I have understood is that I split data into 3 subsets: training, validation, and test. Train and Validation is to find optimum complexity of a model. What I don't understand is the third subset. I understand I take a number of features for the model, train it and validate it on Validation subset and look for the minimum Cost Function when I change the structure. When I found it, I do test the model on Test subset. If I have already found minimum Cost Function on Validation subset, why would I need to test it again on Test subset ??? Could someone please clarify this for me? Thank you | The training set is used to choose the optimum parameters for a given model. Note that evaluating some given set of parameters using the training set should give you an unbiased estimate of your cost function - it is the act of choosing the parameters which optimise the estimate of your cost function based on the training set that biases the estimate they provide. The parameters were chosen which perform best on the training set; hence, the apparent performance of those parameters, as evaluated on the training set, will be overly optimistic. Having trained using the training set, the validation set is used to choose the best model. Again, note that evaluating any given model using the validation set should give you a representative estimate of the cost function - it is the act of choosing the model which performs best on the validation set that biases the estimate they provide. The model was chosen which performs best on the validation set; hence, the apparent performance of that model, as evaluated on the validation set, will be overly optimistic. Having trained each model using the training set, and chosen the best model using the validation set, the test set tells you how good your final choice of model is. It gives you an unbiased estimate of the actual performance you will get at runtime, which is important to know for a lot of reasons. You can't use the training set for this, because the parameters are biased towards it. And you can't use the validation set for this, because the model itself is biased towards those. Hence, the need for a third set. | {
"source": [
"https://stats.stackexchange.com/questions/79905",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31065/"
]
} |
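A hedged R sketch of the three distinct roles described in the answer above, using a toy split of the built-in iris data (the two candidate models are arbitrary placeholders):
set.seed(1)
idx   <- sample(rep(c("train", "valid", "test"), length.out = nrow(iris)))
train <- iris[idx == "train", ]; valid <- iris[idx == "valid", ]; test <- iris[idx == "test", ]
rmse  <- function(m, d) sqrt(mean((d$Sepal.Length - predict(m, d))^2))
m1 <- lm(Sepal.Length ~ Sepal.Width, data = train)                 # candidate model 1, fitted on the training set
m2 <- lm(Sepal.Length ~ Sepal.Width + Petal.Length, data = train)  # candidate model 2, fitted on the training set
best <- if (rmse(m1, valid) < rmse(m2, valid)) m1 else m2          # model selection uses only the validation set
rmse(best, test)                                                   # the test set gives the final, unbiased performance estimate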
80,050 | I have two main effects, V1 and V2. The effects of V1 and V2 on the response variables are negative. However, for some reason I am getting positive coefficient for the interaction term V1*V2. How can I interpret this? is such situation possible? | This situation is certainly possible. As a simple example, consider an experiment where you are adding certain volumes of hot (V1) and cold (V2) water to a fish tank that begins at the correct temperature. The response variable (V3) is the number of fish that survive after a day. Intuitively, if you add only hot water (V1 increases), lots of fish will die (V3 goes down). If you add only cold water (V2 increases), lots of fish will die (V3 goes down). But if you add both hot and cold water (Both V1 and V2 increase, thus V1*V2 increases), the fish will be fine (V3 stays high), so the interaction must counteract the two main effects and be positive. Below, I made up 18 data points mimicking the above situation and fit multiple linear regression in R and included the output. You can see the two negative main effects and positive interaction in the last line. You can let V1 = Liters of hot water, V2 = Liters of cold water, and V3 = Number of fish alive after one day. V1 V2 V3
1 0 0 100
2 0 1 90
3 1 0 89
4 1 1 99
5 2 0 79
6 0 2 80
7 2 1 91
8 1 2 92
9 2 2 99
10 3 3 100
11 2 3 88
12 3 2 91
13 0 3 70
14 3 0 69
15 3 3 100
16 4 0 61
17 0 4 60
18 4 2 82
A = matrix(c(0,0,100, 0,1,90, 1,0,89, 1,1,99, 2,0,79, 0,2,80, 2,1,91, 1,2,92,
2,2,99, 3,3,100, 2,3,88, 3,2,91, 0,3,70, 3,0,69, 3,3,100, 4,0,61, 0,4,60,
4,2, 82), byrow=T, ncol=3)
A = as.data.frame(A)
summary(lm(V3 ~ V1 + V2 + V1:V2 , data=A))
Coefficients:
(Intercept) V1 V2 V1:V2
103.568 -10.853 -10.214 6.563 | {
"source": [
"https://stats.stackexchange.com/questions/80050",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35176/"
]
} |
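A hedged follow-up using the coefficients reported in the answer above: plugging a few combinations of V1 and V2 into the fitted equation shows how the positive interaction offsets the two negative main effects.
b <- c(103.568, -10.853, -10.214, 6.563)                  # intercept, V1, V2, V1:V2 from the output above
pred <- function(v1, v2) b[1] + b[2] * v1 + b[3] * v2 + b[4] * v1 * v2
c(only_hot = pred(3, 0), only_cold = pred(0, 3), both = pred(3, 3))   # roughly 71, 73, and 99 surviving fish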
80,196 | I've just come across Anscombe's quartet (four datasets that have almost indistinguishable descriptive statistics but look very different when plotted) and I am curious if there are other more or less well-known datasets that have been created to demonstrate the importance of certain aspects of statistical analyses. | Data sets that act as counterexamples to popular misunderstandings* do exist - I've constructed many myself under various circumstances, but most of them wouldn't be interesting to you, I'm sure. *(which is what the Anscombe data does, since it's a response to people operating under the misunderstanding that the quality of a model can be discerned from the identical statistics you mentioned) I'll include a few here that might be of greater interest than most of the ones I generate: 1) One example (of quite a few) are some example discrete distributions (and thereby data sets) I constructed to counter the common assertion that zero third-moment skewness implies symmetry. (Kendall and Stuart's Advanced Theory of Statistics offers a more impressive continuous family.) Here's one of those discrete distribution examples: \begin{array}{cccc}
\\
x&-4&1&5\\
\hline
P(X=x)&2/6&3/6&1/6
\\
\end{array} (A data set for a counterexample in the sample case is thereby obvious: $-4, -4, 1, 1, 1, 5$) As you can see, this distribution isn't symmetric, yet its third moment skewness is zero. Similarly, one can readily construct counterexamples to a similar assertion with respect to the second most common skewness measure, the second Pearson skewness coefficient ($3(\frac{mean-median}{\sigma})$). Indeed I have also come up with distributions and/or data sets for which the two measures are opposite in sign - which suffices to counter the idea that skewness is a single, easily understood concept, rather than a somewhat slippery idea we don't really know how to suitably measure in many cases. 2) There's a set of data constructed in this answer Box-and-whisker plot for multimodal distribution , following the approach of Choonpradub & McNeil (2005), which shows four very different-looking data sets with the same boxplot. In particular, the distinctly skewed distribution with the symmetric boxplot tends to surprise people. 3) There are another couple of collections of counterexample data sets I constructed in response to people's over-reliance on histograms, especially with only a few bins and only at one bin-width and bin-origin; which leads to mistakenly confident assertions about distributional shape. These data sets and example displays can be found here Here's one of the examples from there. This is the data: 1.03, 1.24, 1.47, 1.52, 1.92, 1.93, 1.94, 1.95, 1.96, 1.97, 1.98,
1.99, 2.72, 2.75, 2.78, 2.81, 2.84, 2.87, 2.90, 2.93, 2.96, 2.99, 3.60,
3.64, 3.66, 3.72, 3.77, 3.88, 3.91, 4.14, 4.54, 4.77, 4.81, 5.62 And here are two histograms: That's the the 34 observations above in both cases, just with different breakpoints, one with binwidth $1$ and the other with binwidth $0.8$. The plots were generated in R as follows: x <- c(1.03, 1.24, 1.47, 1.52, 1.92, 1.93, 1.94, 1.95, 1.96, 1.97, 1.98,
1.99, 2.72, 2.75, 2.78, 2.81, 2.84, 2.87, 2.9, 2.93, 2.96, 2.99, 3.6,
3.64, 3.66, 3.72, 3.77, 3.88, 3.91, 4.14, 4.54, 4.77, 4.81, 5.62)
hist(x,breaks=seq(0.3,6.7,by=0.8),xlim=c(0,6.7),col="green3",freq=FALSE)
hist(x,breaks=0:8,col="aquamarine",freq=FALSE) 4) I recently constructed some data sets to demonstrate the intransitivity of the Wilcoxon-Mann-Whitney test - that is, to show that one might reject a one tailed alternative for each of three or four pairs of data sets, A, B, and C, (and D in the four sample case) such that one concluded that $P(B>A)>\frac{1}{2}$ (i.e. conclude that B tends to be bigger than A), and similarly for C against B, and A against C (or D against C and A against D for the 4 sample case); each tends to be larger (in the sense that it has more than even chance of being larger) than the
previous one in the cycle. Here's one such data set, with 30 observations in each sample, labelled A to D: 1 2 3 4 5 6 7 8 9 10 11 12
A 1.58 2.10 16.64 17.34 18.74 19.90 1.53 2.78 16.48 17.53 18.57 19.05
B 3.35 4.62 5.03 20.97 21.25 22.92 3.12 4.83 5.29 20.82 21.64 22.06
C 6.63 7.92 8.15 9.97 23.34 24.70 6.40 7.54 8.24 9.37 23.33 24.26
D 10.21 11.19 12.99 13.22 14.17 15.99 10.32 11.33 12.65 13.24 14.90 15.50
13 14 15 16 17 18 19 20 21 22 23 24
A 1.64 2.01 16.79 17.10 18.14 19.70 1.25 2.73 16.19 17.76 18.82 19.08
B 3.39 4.67 5.34 20.52 21.10 22.29 3.38 4.96 5.70 20.45 21.67 22.89
C 6.18 7.74 8.63 9.62 23.07 24.80 6.54 7.37 8.37 9.09 23.22 24.16
D 10.20 11.47 12.54 13.08 14.45 15.38 10.87 11.56 12.98 13.99 14.82 15.65
25 26 27 28 29 30
A 1.42 2.56 16.73 17.01 18.86 19.98
B 3.44 4.13 6.00 20.85 21.82 22.05
C 6.57 7.58 8.81 9.08 23.43 24.45
D 10.29 11.48 12.19 13.09 14.68 15.36 Here's an example test: > wilcox.test(adf$A,adf$B,alt="less",conf.int=TRUE)
Wilcoxon rank sum test
data: adf$A and adf$B
W = 300, p-value = 0.01317
alternative hypothesis: true location shift is less than 0
95 percent confidence interval:
-Inf -1.336372
sample estimates:
difference in location
-2.500199 As you see, the one-sided test rejects the null; values from A tend to be smaller than values from B. The same conclusion (at the same p-value) applies to B vs C, C vs D and D vs A. This cycle of rejections, of itself, is not automatically a problem, if we don't interpret it to mean something it doesn't. (It's a simple matter to obtain much smaller p-values with similar, but larger, samples.) The larger "paradox" here comes when you compute the (one-sided in this case) intervals for a location shift -- in every case 0 is excluded (the intervals aren't identical in each case). This leads us to the conclusion that as we move across the data columns from A to B to C to D, the location moves to the right, and yet the same happens again when we move back to A. With a larger versions of these data sets (similar distribution of values, but more of them), we can get significance (one or two tailed) at substantially smaller significance levels, so that one might use Bonferroni adjustments for example, and still conclude each group came from a distribution which was shifted up from the next one. This shows us, among other things, that a rejection in the Wilcoxon-Mann-Whitney doesn't of itself automatically justify a claim of a location shift. (While it's not the case for these data, it's also possible to construct sets where the sample means are constant, while results like the above apply.) Added in later edit: A very informative and educational reference on this is Brown BM, and Hettmansperger TP. (2002) Kruskal-Wallis, multiple comaprisons and Efron dice. Aust&N.Z. J. Stat. , 44 , 427–438. 5) Another couple of related counterexamples come up here - where an ANOVA may be significant, but all pairwise comparisons aren't (interpreted two different ways there, yielding different counterexamples). So there's several counterexample data sets that contradict misunderstandings one might encounter. As you might guess, I construct such counterexamples reasonably often (as do many other people), usually as the need arises. For some of these common misunderstandings, you can characterize the counterexamples in such a way that new ones may be generated at will (though more often, a certain level of work is involved). If there are particular kinds of things you might be interested in, I might be able to locate more such sets (mine or those of other people), or perhaps even construct some. One useful trick for generating random regression data that has coefficients that you want is as follows (the part in parentheses is an outline of R code): a) set up the coefficients you want with no noise ( y = b0 + b1 * x1 + b2 * x2 ) b) generate error term with desired characteristics ( n = rnorm(length(y),s=0.4 ) c) set up a regression of noise on the same x's ( nfit = lm(n~x1+x2) ) d) add the residuals from that to the y variable ( y = y + nfit$residuals ) Done. (the whole thing can actually be done in a couple of lines of R) | {
"source": [
"https://stats.stackexchange.com/questions/80196",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/273/"
]
} |
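The four-step trick outlined at the end of the answer above, written out as a hedged, runnable R sketch (the coefficient values and sample size are arbitrary):
set.seed(17)
n  <- 50
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 2 - 1 * x1 + 0.5 * x2            # (a) the exact coefficients we want, with no noise
e  <- rnorm(n, sd = 0.4)               # (b) an error term with the desired spread
nfit <- lm(e ~ x1 + x2)                # (c) regress the noise on the same x's
y  <- y + nfit$residuals               # (d) add only the part of the noise orthogonal to the x's
coef(lm(y ~ x1 + x2))                  # recovers 2, -1 and 0.5 exactly (up to rounding)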
80,380 | Maximum likelihood estimators (MLE) are asymptotically efficient; we see the practical upshot in that they often do better than method of moments (MoM) estimates (when they differ), even at small sample sizes Here 'better than' means in the sense of typically having smaller variance when both are unbiased, and typically smaller mean square error (MSE) more generally. The question, occurs, however: Are there cases where the MoM can beat the MLE - on MSE , say - in small samples? (where this isn't some odd/degenerate situation - i.e. given that conditions for ML to exist/be asymptotically efficient hold) A followup question would then be 'how big can small be?' - that is, if there are examples, are there some which still hold at relatively large sample sizes, perhaps even all finite sample sizes? [I can find an example of a biased estimator that can beat ML in finite samples, but it isn't MoM.] Note added retrospectively: my focus here is primarily on the univariate case (which is actually where my underlying curiosity is coming from). I don't want to rule out multivariate cases, but I also don't particularly want to stray into extended discussions of James-Stein estimation. | This may be considered... cheating, but the OLS estimator is a MoM estimator. Consider a standard linear regression specification (with $K$ stochastic regressors, so magnitudes are conditional on the regressor matrix), and a sample of size $n$. Denote $s^2$ the OLS estimator of the variance $\sigma^2$ of the error term. It is unbiased so $$ MSE(s^2) = \operatorname {Var}(s^2) = \frac {2\sigma^4}{n-K} $$ Consider now the MLE of $\sigma^2$. It is $$\hat \sigma^2_{ML} = \frac {n-K}{n}s^2$$
It is biased. Its MSE is $$MSE (\hat \sigma^2_{ML}) = \operatorname {Var}(\hat \sigma^2_{ML}) + \Big[E(\hat \sigma^2_{ML})-\sigma^2\Big]^2$$
Expressing the MLE in terms of the OLS and using the expression for the OLS estimator variance we obtain $$MSE (\hat \sigma^2_{ML}) = \left(\frac {n-K}{n}\right)^2\frac {2\sigma^4}{n-K} + \left(\frac {K}{n}\right)^2\sigma^4$$
$$\Rightarrow MSE (\hat \sigma^2_{ML}) = \frac {2(n-K)+K^2}{n^2}\sigma^4$$ We want the conditions (if they exist) under which $$MSE (\hat \sigma^2_{ML}) > MSE (s^2) \Rightarrow \frac {2(n-K)+K^2}{n^2} > \frac {2}{n-K}$$ $$\Rightarrow 2(n-K)^2+K^2(n-K)> 2n^2$$
$$ 2n^2 -4nK + 2K^2 +nK^2 - K^3 > 2n^2 $$
Simplifying we obtain
$$ -4n + 2K +nK - K^2 > 0 \Rightarrow K^2 - (n+2)K + 4n < 0 $$
Is it feasible for this quadratic in $K$ to obtain negative values? We need its discriminant to be positive. We have
$$\Delta_K = (n+2)^2 -16n = n^2 + 4n + 4 - 16n = n^2 -12n + 4$$
which is another quadratic, in $n$ this time. This discriminant is
$$\Delta_n = 12^2 - 4^2 = 8\cdot 16$$
so
$$n_1,n_2 = \frac {12\pm \sqrt{8\cdot 16}}{2} = 6 \pm 4\sqrt2 \Rightarrow n_1,n_2 = \{1, 12\}$$
to take into account the fact that $n$ is an integer. If $n$ is inside this interval we have that $\Delta_K <0$ and the quadratic in $K$ takes always positive values, so we cannot obtain the required inequality. So: we need a sample size larger than 12. Given this the roots for $K$-quadratic are $$K_1, K_2 = \frac {(n+2)\pm \sqrt{n^2 -12n + 4}}{2} = \frac n2 +1 \pm \sqrt{\left(\frac n2\right)^2 +1 -3n}$$ Overall : for sample size $n>12$ and number of regressors $K$ such that $\lceil K_1\rceil <K<\lfloor K_2\rfloor $
we have
$$MSE (\hat \sigma^2_{ML}) > MSE (s^2)$$
For example, if $n=50$ then one finds that the number of regressors must be $5<K<47$ for the inequality to hold. It is interesting that for small numbers of regressors the MLE is better in MSE sense. ADDENDUM The equation for the roots of the $K$-quadratic can be written $$K_1, K_2 = \left(\frac n2 +1\right) \pm \sqrt{\left(\frac n2 +1\right)^2 -4n}$$
which by a quick look I think implies that the lower root will always be $5$ (taking into account the "integer-value" restriction) -so MLE will be MSE-efficient when regressors are up to $5$ for any (finite) sample size. | {
"source": [
"https://stats.stackexchange.com/questions/80380",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/805/"
]
} |
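A hedged numerical check of the two MSE expressions derived above, taking $\sigma^4 = 1$ since it cancels in the comparison; for a chosen sample size this reproduces the kind of K-range discussed in the answer:
mse_ols <- function(n, K) 2 / (n - K)                  # MSE of the unbiased estimator s^2
mse_mle <- function(n, K) (2 * (n - K) + K^2) / n^2    # MSE of the ML estimator
n <- 50
K <- 1:(n - 1)
range(K[mse_mle(n, K) > mse_ols(n, K)])                # the regressor counts for which the MLE has the larger MSE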
80,398 | What is the intuition behind the fact that an SVM with a Gaussian Kernel has infinite dimensional feature space? | This answer explains the following: Why perfect separation is always possible with distinct points and a Gaussian kernel (of sufficiently small bandwidth) How this separation may be interpreted as linear, but only in an abstract feature space distinct from the space where the data lives How the mapping from data space to feature space is "found". Spoiler: it's not found by SVM, it's implicitly defined by the kernel you choose. Why the feature space is infinite-dimensional. 1. Achieving perfect separation Perfect separation is always possible with a Gaussian kernel (provided no two points from different classes are ever exactly the same) because of the kernel's locality properties, which lead to an arbitrarily flexible decision boundary. For sufficiently small kernel bandwidth, the decision boundary will look like you just drew little circles around the points whenever they are needed to separate the positive and negative examples: (Credit: Andrew Ng's online machine learning course ). So, why does this occur from a mathematical perspective? Consider the standard setup: you have a Gaussian kernel $K(\mathbf{x},\mathbf{z}) = \exp(-||\mathbf{x}-\mathbf{z}||^2 / \sigma^2)$ and training data $(\mathbf{x}^{(1)},y^{(1)}), (\mathbf{x}^{(2)},y^{(2)}), \ldots, (\mathbf{x}^{(n)},y^{(n)})$ where the $y^{(i)}$ values are $\pm 1$. We want to learn a classifier function $$\hat{y}(\mathbf{x}) = \sum_i w_i y^{(i)} K(\mathbf{x}^{(i)},\mathbf{x})$$ Now how will we ever assign the weights $w_i$? Do we need infinite dimensional spaces and a quadratic programming algorithm? No, because I just want to show that I can separate the points perfectly. So I make $\sigma$ a billion times smaller than the smallest separation $||\mathbf{x}^{(i)} - \mathbf{x}^{(j)}||$ between any two training examples, and I just set $w_i = 1$. This means that all the training points are a billion sigmas apart as far as the kernel is concerned, and each point completely controls the sign of $\hat{y}$ in its neighborhood. Formally, we have $$ \hat{y}(\mathbf{x}^{(k)})
= \sum_{i=1}^n y^{(i)} K(\mathbf{x}^{(i)},\mathbf{x}^{(k)})
= y^{(k)} K(\mathbf{x}^{(k)},\mathbf{x}^{(k)}) + \sum_{i \neq k} y^{(i)} K(\mathbf{x}^{(i)},\mathbf{x}^{(k)})
= y^{(k)} + \epsilon$$ where $\epsilon$ is some arbitrarily tiny value. We know $\epsilon$ is tiny because $\mathbf{x}^{(k)}$ is a billion sigmas away from any other point, so for all $i \neq k$ we have $$K(\mathbf{x}^{(i)},\mathbf{x}^{(k)}) = \exp(-||\mathbf{x}^{(i)} - \mathbf{x}^{(k)}||^2 / \sigma^2) \approx 0.$$ Since $\epsilon$ is so small, $\hat{y}(\mathbf{x}^{(k)})$ definitely has the same sign as $y^{(k)}$, and the classifier achieves perfect accuracy on the training data. 2. Kernel SVM learning as linear separation The fact that this can be interpreted as "perfect linear separation in an infinite dimensional feature space" comes from the kernel trick, which allows you to interpret the kernel as an inner product in a (potentially infinite-dimensional) feature space: $$K(\mathbf{x}^{(i)},\mathbf{x}^{(j)}) = \langle\Phi(\mathbf{x}^{(i)}),\Phi(\mathbf{x}^{(j)})\rangle$$ where $\Phi(\mathbf{x})$ is the mapping from the data space into the feature space. It follows immediately that the $\hat{y}(\mathbf{x})$ function as a linear function in the feature space: $$ \hat{y}(\mathbf{x}) = \sum_i w_i y^{(i)} \langle\Phi(\mathbf{x}^{(i)}),\Phi(\mathbf{x})\rangle = L(\Phi(\mathbf{x}))$$ where the linear function $L(\mathbf{v})$ is defined on feature space vectors $\mathbf{v}$ as $$ L(\mathbf{v}) = \sum_i w_i y^{(i)} \langle\Phi(\mathbf{x}^{(i)}),\mathbf{v}\rangle$$ This function is linear in $\mathbf{v}$ because it's just a linear combination of inner products with fixed vectors. In the feature space, the decision boundary $\hat{y}(\mathbf{x}) = 0$ is just $L(\mathbf{v}) = 0$, the level set of a linear function. This is the very definition of a hyperplane in the feature space. 3. Understanding the mapping and feature space Note: In this section, the notation $\mathbf{x}^{(i)}$ refers to an arbitrary set of $n$ points and not the training data. This is pure math; the training data does not figure into this section at all! Kernel methods never actually "find" or "compute" the feature space or the mapping $\Phi$ explicitly. Kernel learning methods such as SVM do not need them to work; they only need the kernel function $K$. That said, it is possible to write down a formula for $\Phi$. The feature space that $\Phi$ maps to is kind of abstract (and potentially infinite-dimensional), but essentially, the mapping is just using the kernel to do some simple feature engineering. In terms of the final result, the model you end up learning, using kernels is no different from the traditional feature engineering popularly applied in linear regression and GLM modeling, like taking the log of a positive predictor variable before feeding it into a regression formula. The math is mostly just there to help make sure the kernel plays well with the SVM algorithm, which has its vaunted advantages of sparsity and scaling well to large datasets. If you're still interested, here's how it works. Essentially we take the identity we want to hold, $\langle \Phi(\mathbf{x}), \Phi(\mathbf{y}) \rangle = K(\mathbf{x},\mathbf{y})$, and construct a space and inner product such that it holds by definition. To do this, we define an abstract vector space $V$ where each vector is a function from the space the data lives in, $\mathcal{X}$, to the real numbers $\mathbb{R}$. A vector $f$ in $V$ is a function formed from a finite linear combination of kernel slices:
$$f(\mathbf{x}) = \sum_{i=1}^n \alpha_i K(\mathbf{x}^{(i)},\mathbf{x})$$
It is convenient to write $f$ more compactly as
$$f = \sum_{i=1}^n \alpha_i K_{\mathbf{x}^{(i)}}$$
where $K_\mathbf{x}(\mathbf{y}) = K(\mathbf{x},\mathbf{y})$ is a function giving a "slice" of the kernel at $\mathbf{x}$. The inner product on the space is not the ordinary dot product, but an abstract inner product based on the kernel: $$\langle
\sum_{i=1}^n \alpha_i K_{\mathbf{x}^{(i)}},
\sum_{j=1}^n \beta_j K_{\mathbf{x}^{(j)}}
\rangle = \sum_{i,j} \alpha_i \beta_j K(\mathbf{x}^{(i)},\mathbf{x}^{(j)})$$ With the feature space defined in this way, $\Phi$ is a mapping $\mathcal{X} \rightarrow V$, taking each point $\mathbf{x}$ to the "kernel slice" at that point: $$\Phi(\mathbf{x}) = K_\mathbf{x}, \quad \text{where} \quad K_\mathbf{x}(\mathbf{y}) = K(\mathbf{x},\mathbf{y}). $$ You can prove that $V$ is an inner product space when $K$ is a positive definite kernel. See this paper for details. (Kudos to f coppens for pointing this out!) 4. Why is the feature space infinite-dimensional? This answer gives a nice linear algebra explanation, but here's a geometric perspective, with both intuition and proof. Intuition For any fixed point $\mathbf{z}$, we have a kernel slice function $K_\mathbf{z}(\mathbf{x}) = K(\mathbf{z},\mathbf{x})$. The graph of $K_\mathbf{z}$ is just a Gaussian bump centered at $\mathbf{z}$. Now, if the feature space were only finite dimensional, that would mean we could take a finite set of bumps at a fixed set of points and form any Gaussian bump anywhere else. But clearly there's no way we can do this; you can't make a new bump out of old bumps, because the new bump could be really far away from the old ones. So, no matter how many feature vectors (bumps) we have, we can always add new bumps, and in the feature space these are new independent vectors. So the feature space can't be finite dimensional; it has to be infinite. Proof We use induction. Suppose you have an arbitrary set of points $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(n)}$ such that the vectors $\Phi(\mathbf{x}^{(i)})$ are linearly independent in the feature space. Now find a point $\mathbf{x}^{(n+1)}$ distinct from these $n$ points, in fact a billion sigmas away from all of them. We claim that $\Phi(\mathbf{x}^{(n+1)})$ is linearly independent from the first $n$ feature vectors $\Phi(\mathbf{x}^{(i)})$. Proof by contradiction. Suppose to the contrary that $$\Phi(\mathbf{x}^{(n+1)}) = \sum_{i=1}^n \alpha_i \Phi(\mathbf{x}^{(i)})$$ Now take the inner product on both sides with an arbitrary $\mathbf{x}$. By the identity $\langle \Phi(\mathbf{z}), \Phi(\mathbf{x}) \rangle = K(\mathbf{z},\mathbf{x})$, we obtain $$K(\mathbf{x}^{(n+1)},\mathbf{x})
= \sum_{i=1}^n \alpha_i K(\mathbf{x}^{(i)},\mathbf{x})$$ Here $\mathbf{x}$ is a free variable, so this equation is an identity stating that two functions are the same. In particular, it says that a Gaussian centered at $\mathbf{x}^{(n+1)}$ can be represented as a linear combination of Gaussians at other points $\mathbf{x}^{(i)}$. It is obvious geometrically that one cannot create a Gaussian bump centered at one point from a finite combination of Gaussian bumps centered at other points, especially when all those other Gaussian bumps are a billion sigmas away. So our assumption of linear dependence has led to a contradiction, as we set out to show. | {
"source": [
"https://stats.stackexchange.com/questions/80398",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36162/"
]
} |
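A hedged R sketch echoing the independence argument in section 4 of the answer above: for distinct points, the Gaussian Gram matrix is numerically full rank, and adding a new point adds a new linearly independent feature vector (the points and bandwidth here are arbitrary):
set.seed(1)
gauss_gram <- function(X, sigma = 1) exp(-as.matrix(dist(X))^2 / sigma^2)
X <- matrix(rnorm(20), ncol = 2)              # 10 distinct points in the plane
qr(gauss_gram(X))$rank                        # 10: the feature vectors Phi(x_i) are linearly independent
qr(gauss_gram(rbind(X, rnorm(2))))$rank       # 11: the new point's feature vector is independent of the rest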
80,407 | In almost all of the analysis work that I've ever done I use: set.seed(42) It's an homage to Hitchhiker's Guide to the Galaxy . But I'm wondering if I'm creating bias by using the same seed over and over. | There is no bias if the RNG is any good. By always using the same seed you are, however, creating a strong interdependence among all the simulations you perform in your career. This creates an unusual kind of risk. By using the same seed each time, either you are always getting a pretty nice pseudorandom sequence and all your work goes well or--with very low but non-zero probability--you are always using a pretty bad sequence and your simulations are not as representative of the underlying distributions as you think they might be. Either all your work is pretty good or all of it is pretty lousy! Contrast this with using truly random starting seeds each time. Once in a very long while you might obtain a sequence of random values that is not representative of the distribution you are modeling, but most of the time you would be just fine. If you never attempted to reproduce your own work (with a new seed), then once or twice in your career you might get misleading results, but the vast majority of the time you will be ok. There is a simple and obvious cure: Always, always check your work by restarting with another seed. It's virtually impossible that two seeds accidentally will give misleading results in the same way. On the other hand, there is extraordinary merit in having a well-known "personal seed": it shows the world you are being honest. A sly, subtle way to lie with simulations is to repeat them until they give you a predetermined outcome. Here's a working R example to "demonstrate" that even a fair coin is highly likely to land heads more than half the time: n.flips <- 100
seeds <- 1:10^3
#
# Run some preliminary simulations.
#
results <- sapply(seeds, function(seed) {
set.seed(seed)
mean(runif(n.flips) > 1/2)
})
#
# Now do the "real" simulation.
#
seed <- seeds[which.max(results)]
set.seed(seed)
x <- mean(runif(n.flips) > 1/2)
z <- (x - 1/2) * 2 * sqrt(n.flips)   # z-statistic for the observed proportion of heads
cat("Mean:", x, "Z:", z, "p-value:", pnorm(z, lower.tail=FALSE), "\n") By looking at a wider range of seeds (from $1$ through $10^6$), I was able to find a congenial one: 218134. When you start with this as the seed, the resulting $100$ simulated coin flips exhibit $75$ heads! That is significantly different from the expected value of $50$ ($p=0.000004$). The implications can be fascinating and important. For instance, if I knew in advance whom I would be recruiting into a randomized double-blind controlled trial, and in what order (which I might be able to control as a university professor testing a group of captive undergraduates or lab rats), then beforehand I could run such a set of simulations to find a seed that groups the students more to my liking to favor whatever I was hoping to "prove." I could include the planned order and that seed in my experimental plan before conducting the experiment, thereby creating a procedure that no critical reviewer could ever impeach--but nevertheless stacking the deck in my favor. (I believe there are entire branches of pseudoscience that use some variant of this trick to gain credibility. Would you believe I actually used ESP to control the computer? I can do it at a distance with yours, too!) Somebody whose default seed is known cannot play this game. My personal seed is 17 , as a large proportion of my posts attest (currently 155 out of 161 posts that set a seed use this one). In R it is a difficult seed to work with, because (as it turns out) most small datasets I create with it have a strong outlier. That's not a bad characteristic ... . | {
"source": [
"https://stats.stackexchange.com/questions/80407",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/776/"
]
} |
81,000 | In a multiple linear regression it is possible to find out the coeffient with the following formula. $b = (X'X)^{-1}(X')Y$ beta = solve(t(X) %*% X) %*% (t(X) %*% Y) ; beta For instance: > y <- c(9.3, 4.8, 8.9, 6.5, 4.2, 6.2, 7.4, 6, 7.6, 6.1)
> x0 <- c(1,1,1,1,1,1,1,1,1,1)
> x1 <- c(100,50,100,100,50,80,75,65,90,90)
> x2 <- c(4,3,4,2,2,2,3,4,3,2)
> Y <- as.matrix(y)
> X <- as.matrix(cbind(x0,x1,x2))
> beta = solve(t(X) %*% X) %*% (t(X) %*% Y);beta
[,1]
x0 -0.8687015
x1 0.0611346
x2 0.9234254
> model <- lm(y~+x1+x2) ; model$coefficients
(Intercept) x1 x2
-0.8687015 0.0611346 0.9234254 I would like how to calculate in the same "manual" way the beta for a logistic regression. Where of course the y would be 1 or 0. Assuming I'm using the binomial family with a logit link. | The OLS estimator in the linear regression model is quite rare in
having the property that it can be represented in closed form, that is
without needing to be expressed as the optimizer of a function. It is, however, an optimizer of a function -- the residual sum of squares
function -- and can be computed as such. The MLE in the logistic regression model is also the optimizer of a
suitably defined log-likelihood function, but since it is not available
in a closed form expression, it must be computed as an optimizer. Most statistical estimators are only expressible as optimizers of
appropriately constructed functions of the data called criterion functions.
Such optimizers require the use of appropriate numerical optimization
algorithms.
Optimizers of functions can be computed in R using the optim() function
that provides some general purpose optimization algorithms, or one of the more
specialized packages such as optimx . Knowing which
optimization algorithm to use for different types of models and statistical criterion
functions is key. Linear regression residual sum of squares The OLS estimator is defined as the optimizer of the well-known residual sum of
squares function:
$$
\begin{align}
\hat{\boldsymbol{\beta}} &= \arg\min_{\boldsymbol{\beta}}\left(\boldsymbol{Y} - \mathbf{X}\boldsymbol{\beta}\right)'\left(\boldsymbol{Y} - \mathbf{X}\boldsymbol{\beta}\right) \\
&= (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\boldsymbol{Y}
\end{align}
$$ In the case of a twice differentiable, convex function like the residual sum of squares,
most gradient-based optimizers do a good job. In this case, I will be using the BFGS
algorithm. #================================================
# reading in the data & pre-processing
#================================================
urlSheatherData = "http://www.stat.tamu.edu/~sheather/book/docs/datasets/MichelinNY.csv"
dfSheather = as.data.frame(read.csv(urlSheatherData, header = TRUE))
# create the design matrices
vY = as.matrix(dfSheather['InMichelin'])
mX = as.matrix(dfSheather[c('Service','Decor', 'Food', 'Price')])
# add an intercept to the predictor variables
mX = cbind(1, mX)
# the number of variables and observations
iK = ncol(mX)
iN = nrow(mX)
#================================================
# compute the linear regression parameters as
# an optimal value
#================================================
# the residual sum of squares criterion function
fnRSS = function(vBeta, vY, mX) {
return(sum((vY - mX %*% vBeta)^2))
}
# arbitrary starting values
vBeta0 = rep(0, ncol(mX))
# minimise the RSS function to get the parameter estimates
optimLinReg = optim(vBeta0, fnRSS,
mX = mX, vY = vY, method = 'BFGS',
hessian=TRUE)
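# (Added cross-check, not part of the original answer.) The closed-form OLS
# solution from the question can also be computed directly and should agree
# with the optim() result up to numerical tolerance:
vBetaClosedForm = solve(t(mX) %*% mX) %*% (t(mX) %*% vY)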
#================================================
# compare to the LM function
#================================================
linregSheather = lm(InMichelin ~ Service + Decor + Food + Price,
data = dfSheather) This yields: > print(cbind(coef(linregSheather), optimLinReg$par))
[,1] [,2]
(Intercept) -1.492092490 -1.492093965
Service -0.011176619 -0.011176583
Decor 0.044193000 0.044193023
Food 0.057733737 0.057733770
Price 0.001797941 0.001797934 Logistic regression log-likelihood The criterion function corresponding to the MLE in the logistic regression model is
the log-likelihood function. $$
\begin{align}
\log L_n(\boldsymbol{\beta}) &= \sum_{i=1}^n \left(Y_i \log \Lambda(\boldsymbol{X}_i'\boldsymbol{\beta}) +
(1-Y_i)\log(1 - \Lambda(\boldsymbol{X}_i'\boldsymbol{\beta}))\right)
\end{align}
$$
where $\Lambda(k) = 1/(1+ \exp(-k))$ is the logistic function. The parameter estimates are the optimizers of this function
$$
\hat{\boldsymbol{\beta}} = \arg\max_{\boldsymbol{\beta}}\log L_n(\boldsymbol{\beta})
$$ I show how to construct and optimize the criterion function using the optim() function
once again employing the BFGS algorithm. #================================================
# compute the logistic regression parameters as
# an optimal value
#================================================
# define the logistic transformation
logit = function(mX, vBeta) {
return(exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta)) )
}
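# Note (added): the logit() helper above is kept for reference but is not called
# in the stable log-likelihood below. Working directly with the linear predictor
# mX %*% vBeta inside log(1 + exp(.)) avoids forming fitted probabilities and then
# taking their logs, which is the numerically safer route when the linear
# predictor is large in magnitude.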
# stable parametrisation of the log-likelihood function
# Note: The negative of the log-likelihood is being returned, since we will be
# /minimising/ the function.
logLikelihoodLogitStable = function(vBeta, mX, vY) {
return(-sum(
vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta)))
+ (1-vY)*(-log(1 + exp(mX %*% vBeta)))
)
)
}
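# (Added sketch, not part of the original answer.) optim() also accepts an
# analytic gradient via its 'gr' argument, which typically makes BFGS faster
# and more stable. For the negative log-likelihood above, the gradient is
# -t(mX) %*% (vY - p), with p the vector of fitted probabilities:
gradNegLogLikelihoodLogit = function(vBeta, mX, vY) {
  vP = 1 / (1 + exp(-mX %*% vBeta))            # fitted probabilities
  return(as.vector(-crossprod(mX, vY - vP)))   # gradient of the negative log-likelihood
}
# e.g. optim(vBeta0, logLikelihoodLogitStable, gr = gradNegLogLikelihoodLogit,
#            mX = mX, vY = vY, method = 'BFGS', hessian = TRUE)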
# initial set of parameters
vBeta0 = c(10, -0.1, -0.3, 0.001, 0.01) # arbitrary starting parameters
# minimise the (negative) log-likelihood to get the logit fit
optimLogit = optim(vBeta0, logLikelihoodLogitStable,
mX = mX, vY = vY, method = 'BFGS',
hessian=TRUE)
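# (Added sketch, not part of the original answer.) Because hessian=TRUE was
# requested, approximate standard errors of the MLE can be recovered from the
# observed information matrix, i.e. the Hessian of the negative log-likelihood
# at the optimum:
vStdErrLogit = sqrt(diag(solve(optimLogit$hessian)))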
#================================================
# test against the implementation in R
# NOTE glm uses IRWLS:
# http://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares
# rather than the BFGS algorithm that we have reported
#================================================
logitSheather = glm(InMichelin ~ Service + Decor + Food + Price,
data = dfSheather,
family = binomial, x = TRUE) This yields > print(cbind(coef(logitSheather), optimLogit$par))
[,1] [,2]
(Intercept) -11.19745057 -11.19661798
Service -0.19242411 -0.19249119
Decor 0.09997273 0.09992445
Food 0.40484706 0.40483753
Price 0.09171953 0.09175369 As a caveat, note that numerical optimization algorithms require careful use or you can end up with
all sorts of pathological solutions. Until you understand them well, it is best to
use the available packaged options that allow you to concentrate on specifying the model
rather than worrying about how to numerically compute the estimates. | {
"source": [
"https://stats.stackexchange.com/questions/81000",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/19692/"
]
} |
81,395 | I remember having read somewhere on the web a connection between ridge regression (with $\ell_2$ regularization) and PCA regression: while using $\ell_2$-regularized regression with hyperparameter $\lambda$, if $\lambda \to 0$, then the regression is equivalent to removing the PC variable with the smallest eigenvalue. Why is this true? Does this have anything to do with the optimization procedure? Naively, I would have expected it to be equivalent to OLS. Does anybody have a reference for this? | Let $\mathbf X$ be the centered $n \times p$ predictor matrix and consider its singular value decomposition $\mathbf X = \mathbf{USV}^\top$ with $\mathbf S$ being a diagonal matrix with diagonal elements $s_i$ . The fitted values of ordinary least squares (OLS) regression are given by $$\hat {\mathbf y}_\mathrm{OLS} = \mathbf X \beta_\mathrm{OLS} = \mathbf X (\mathbf X^\top \mathbf X)^{-1} \mathbf X^\top \mathbf y = \mathbf U \mathbf U^\top \mathbf y.$$ The fitted values of the ridge regression are given by $$\hat {\mathbf y}_\mathrm{ridge} = \mathbf X \beta_\mathrm{ridge} = \mathbf X (\mathbf X^\top \mathbf X + \lambda \mathbf I)^{-1} \mathbf X^\top \mathbf y = \mathbf U\: \mathrm{diag}\left\{\frac{s_i^2}{s_i^2+\lambda}\right\}\mathbf U^\top \mathbf y.$$ The fitted values of the PCA regression (PCR) with $k$ components are given by $$\hat {\mathbf y}_\mathrm{PCR} = \mathbf X_\mathrm{PCA} \beta_\mathrm{PCR} = \mathbf U\: \mathrm{diag}\left\{1,\ldots, 1, 0, \ldots 0\right\}\mathbf U^\top \mathbf y,$$ where there are $k$ ones followed by zeroes. From here we can see that: If $\lambda=0$ then $\hat {\mathbf y}_\mathrm{ridge} = \hat {\mathbf y}_\mathrm{OLS}$ . If $\lambda>0$ then the larger the singular value $s_i$ , the less it will be penalized in ridge regression. Small singular values ( $s_i^2 \approx \lambda$ and smaller) are penalized the most. In contrast, in PCA regression, large singular values are kept intact, and the small ones (after certain number $k$ ) are completely removed. This would correspond to $\lambda=0$ for the first $k$ ones and $\lambda=\infty$ for the rest. This means that ridge regression can be seen as a "smooth version" of PCR. (This intuition is useful but does not always hold; e.g. if all $s_i$ are approximately equal, then ridge regression will only be able to penalize all principal components of $\mathbf X$ approximately equally and can strongly differ from PCR). Ridge regression tends to perform better in practice (e.g. to have higher cross-validated performance). Answering now your question specifically: if $\lambda \to 0$ , then $\hat {\mathbf y}_\mathrm{ridge} \to \hat {\mathbf y}_\mathrm{OLS}$ . I don't see how it can correspond to removing the smallest $s_i$ . I think this is wrong. One good reference is The Elements of Statistical Learning , Section 3.4.1 "Ridge regression". See also this thread: Interpretation of ridge regularization in regression and in particular the answer by @BrianBorchers. | {
"source": [
"https://stats.stackexchange.com/questions/81395",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36923/"
]
} |
81,427 | I typically use BIC as my understanding is that it values parsimony more strongly than does AIC. However, I have decided to use a more comprehensive approach now and would like to use AIC as well. I know that Raftery (1995) presented nice guidelines for BIC differences: 0-2 is weak, 2-4 is positive evidence for one model being better, etc. I looked in textbooks and they seem strange on AIC (it looks like a larger difference is weak and a smaller difference in AIC means one model is better). This goes against what I know I have been taught. My understanding is that you want lower AIC. Does anyone know if Raftery's guidelines extend to AIC as well, or where I might cite some guidelines for "strength of evidence" for one model vs. another? And yes, cutoffs are not great (I kind of find them irritating) but they are helpful when comparing different kinds of evidence. | AIC and BIC hold the same interpretation in terms of model comparison. That is, the larger the difference in either AIC or BIC between two models, the stronger the evidence for one model over the other (lower values being better). It's just that the AIC doesn't penalize the number of parameters as strongly as BIC does. There is also a correction to the AIC (the AICc) that is used for smaller sample sizes. More information on the comparison of AIC/BIC can be found here . | {
"source": [
"https://stats.stackexchange.com/questions/81427",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36938/"
]
} |
81,434 | I would like to know if there is a way to compare the relationship between a factor (length) and a response variable (lipid content) among multiple groups? I have given up on ANCOVA, as length is continuous. I am unsure if a multiple regression allows the use of a categorical variable (group membership) as an IV. | AIC and BIC hold the same interpretation in terms of model comparison. That is, the larger the difference in either AIC or BIC between two models, the stronger the evidence for one model over the other (lower values being better). It's just that the AIC doesn't penalize the number of parameters as strongly as BIC does. There is also a correction to the AIC (the AICc) that is used for smaller sample sizes. More information on the comparison of AIC/BIC can be found here . | {
"source": [
"https://stats.stackexchange.com/questions/81434",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36943/"
]
} |
81,457 | I wonder if there is a clear-cut difference between the so-called zero-inflated distributions (models) and so-called hurdle-at-zero distributions (models)? The terms occur quite often in the literature and I suspect they are not the same, but would you please explain me the difference in simple terms? | Thank you for the interesting question! Difference: One limitation of standard count models is that the zeros and the nonzeros (positives) are assumed to come from the same data-generating process. With hurdle models , these two processes are not constrained to be the same. The basic idea is that a Bernoulli probability governs the binary outcome of whether a count variate has a zero or positive realization. If the realization is positive, the hurdle is crossed, and the conditional distribution of the positives is governed by a truncated-at-zero count data model. With zero-inflated models , the response variable is modelled as a mixture of a Bernoulli distribution (or call it a point mass at zero) and a Poisson distribution (or any other count distribution supported on
non-negative integers). For more detail and formulae, see, for example, Gurmu and Trivedi (2011) and Dalrymple, Hudson, and Ford (2003). Example: Hurdle models can be motivated by sequential decision-making processes confronted by individuals. You first decide if you need to buy something, and then you decide on the quantity of that something (which must be positive). A situation in which you are allowed to (or can potentially) buy nothing after deciding to buy something is an example of where a zero-inflated model is appropriate. Zeros may come from two sources: a) no decision to buy; b) wanted to buy but ended up buying nothing (e.g. out of stock). Beta: The hurdle model is a special case of the two-part model described in Chapter
16 of Frees (2011). There, we will see that for two-part models, the amount of health care utilized may be a continuous as well as a count variable. So what has been somewhat confusingly termed the "zero-inflated beta distribution" in the literature in fact belongs to the class of two-part distributions and models (so common in actuarial science), which is consistent with the above definition of a hurdle model. This excellent book discusses zero-inflated models in section 12.4.1 and hurdle models in section 12.4.2, with formulas and examples from actuarial applications. History: zero-inflated Poisson (ZIP) models without covariates have a long history (see e.g., Johnson and Kotz, 1969). The general form of ZIP regression models incorporating covariates is due to Lambert (1992). Hurdle models were first proposed by the Canadian statistician Cragg (1971), and later developed further by Mullahy (1986). You may also consider Croston (1972), where positive geometric counts are used together with a Bernoulli process to describe an integer-valued process dominated by zeros. R: Finally, if you use R, there is the package pscl ("Classes and Methods for R developed in the Political Science Computational Laboratory") by Simon Jackman, containing the hurdle() and zeroinfl() functions by Achim Zeileis. The following references have been consulted to produce the above:
Gurmu, S. & Trivedi, P. K. Excess Zeros in Count Models for Recreational Trips. Journal of Business & Economic Statistics, 1996, 14, 469-477
Johnson, N. & Kotz, S. Distributions in Statistics: Discrete Distributions. 1969, Houghton Mifflin, Boston
Lambert, D. Zero-inflated Poisson regression with an application to defects in manufacturing. Technometrics, 1992, 34 (1), 1–14
Cragg, J. G. Some Statistical Models for Limited Dependent Variables with Application to the Demand for Durable Goods. Econometrica, 1971, 39, 829-844
Mullahy, J. Specification and testing of some modified count data models. Journal of Econometrics, 1986, 33, 341-365
Frees, E. W. Regression Modeling with Actuarial and Financial Applications. Cambridge University Press, 2011
Dalrymple, M. L.; Hudson, I. L. & Ford, R. P. K. Finite Mixture, Zero-inflated Poisson and Hurdle models with application to SIDS. Computational Statistics & Data Analysis, 2003, 41, 491-504
Croston, J. D. Forecasting and Stock Control for Intermittent Demands. Operational Research Quarterly, 1972, 23, 289-303 | {
"source": [
"https://stats.stackexchange.com/questions/81457",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36317/"
]
} |
81,481 | Is there a specific purpose in terms of efficiency or functionality why the k-means algorithm does not use for example cosine (dis)similarity as a distance metric, but can only use the Euclidean norm? In general, will K-means method comply and be correct when other distances than Euclidean are considered or used? [Addition by @ttnphns. The question is two-fold. "(Non)Euclidean distance" may concern distance between two data points or distance between a data point and a cluster centre. Both ways have been attempted to address in the answers so far.] | K-Means procedure - which is a vector quantization method often used as a clustering method - does not explicitly use pairwise distances between data points at all (in contrast to hierarchical and some other clusterings which allow for arbitrary proximity measure). It amounts to repeatedly assigning points to the closest centroid thereby using Euclidean distance from data points to a centroid . However, K-Means is implicitly based on pairwise Euclidean distances between data points, because the sum of squared deviations from centroid is equal to the sum of pairwise squared Euclidean distances divided by the number of points . The term "centroid" is itself from Euclidean geometry. It is multivariate mean in euclidean space. Euclidean space is about euclidean distances. Non-Euclidean distances will generally not span Euclidean space. That's why K-Means is for Euclidean distances only. But a Euclidean distance between two data points can be represented in a number of alternative ways . For example, it is closely tied with cosine or scalar product between the points. If you have cosine, or covariance, or correlation, you can always (1) transform it to (squared) Euclidean distance, and then (2) create data for that matrix of Euclidean distances (by means of Principal Coordinates or other forms of metric Multidimensional Scaling) to (3) input those data to K-Means clustering. Therefore, it is possible to make K-Means "work with" pairwise cosines or such; in fact, such implementations of K-Means clustering exist. See also about "K-means for distance matrix" implementation. It is possible to program K-means in a way that it directly calculate on the square matrix of pairwise Euclidean distances, of course. But it will work slowly, and so the more efficient way is to create data for that distance matrix (converting the distances into scalar products and so on - the pass that is outlined in the previous paragraph) - and then apply standard K-means procedure to that dataset. Please note I was discussing the topic whether euclidean or noneuclidean dissimilarity between data points is compatible with K-means. It is related to but not quite the same question as whether noneuclidean deviations from centroid (in wide sense, centre or quasicentroid) can be incorporated in K-means or modified "K-means". See related question K-means: Why minimizing WCSS is maximizing Distance between clusters? . | {
"source": [
"https://stats.stackexchange.com/questions/81481",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9694/"
]
} |
81,483 | I have data showing fire fighter entrance exam results. I am testing the hypothesis that exam results and ethnicity are not mutually independent. To test this, I ran a Pearson chi-square test in R. The results show what I expected, but it gave a warning that " In chisq.test(a) : Chi-squared approximation may be incorrect ." > a
white black asian hispanic
pass 5 2 2 0
noShow 0 1 0 0
fail 0 2 3 4
> chisq.test(a)
Pearson's Chi-squared test
data: a
X-squared = 12.6667, df = 6, p-value = 0.04865
Warning message:
In chisq.test(a) : Chi-squared approximation may be incorrect Does anyone know why it gave a warning? Is it because I am using the wrong method? | It gave the warning because many of the expected cell counts will be very small, and therefore the chi-squared approximation used to compute the p-value may not be right. In R you can use chisq.test(a, simulate.p.value = TRUE) to use simulated p-values. However, with such small cell sizes, all estimates will be poor. It might be good to just test pass vs. fail (deleting "no show") either with chi-square or logistic regression. Indeed, since it is pretty clear that the pass/fail grade is a dependent variable, logistic regression might be better. | {
"source": [
"https://stats.stackexchange.com/questions/81483",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5276/"
]
} |
81,659 | Why and when we should use Mutual Information over statistical correlation measurements such as "Pearson", "spearman", or "Kendall's tau" ? | Let's consider one fundamental concept of (linear) correlation, covariance (which is Pearson's correlation coefficient "un-standardized"). For two discrete random variables $X$ and $Y$ with probability mass functions $p(x)$, $p(y)$ and joint pmf $p(x,y)$ we have $$\operatorname{Cov}(X,Y) = E(XY) - E(X)E(Y) = \sum_{x,y}p(x,y)xy - \left(\sum_xp(x)x\right)\cdot \left(\sum_yp(y)y\right)$$ $$\Rightarrow \operatorname{Cov}(X,Y) = \sum_{x,y}\left[p(x,y)-p(x)p(y)\right]xy$$ The Mutual Information between the two is defined as $$I(X,Y) = E\left (\ln \frac{p(x,y)}{p(x)p(y)}\right)=\sum_{x,y}p(x,y)\left[\ln p(x,y)-\ln p(x)p(y)\right]$$ Compare the two: each contains a point-wise "measure" of "the distance of the two rv's from independence" as it is expressed by the distance of the joint pmf from the product of the marginal pmf's: the $\operatorname{Cov}(X,Y)$ has it as difference of levels, while $I(X,Y)$ has it as difference of logarithms. And what do these measures do? In $\operatorname{Cov}(X,Y)$ they create a weighted sum of the product of the two random variables. In $I(X,Y)$ they create a weighted sum of their joint probabilities. So with $\operatorname{Cov}(X,Y)$ we look at what non-independence does to their product, while in $I(X,Y)$ we look at what non-independence does to their joint probability distribution. Reversely, $I(X,Y)$ is the average value of the logarithmic measure of distance from independence, while $\operatorname{Cov}(X,Y)$ is the weighted value of the levels-measure of distance from independence, weighted by the product of the two rv's. So the two are not antagonistic—they are complementary, describing different aspects of the association between two random variables. One could comment that Mutual Information "is not concerned" whether the association is linear or not, while Covariance may be zero and the variables may still be stochastically dependent. On the other hand, Covariance can be calculated directly from a data sample without the need to actually know the probability distributions involved (since it is an expression involving moments of the distribution), while Mutual Information requires knowledge of the distributions, whose estimation, if unknown, is a much more delicate and uncertain work compared to the estimation of Covariance. | {
"source": [
"https://stats.stackexchange.com/questions/81659",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37052/"
]
} |
81,693 | Following experimental design was done and some data as shown in table below was obtained: pretest> intervention1> intervention2> posttest > perception survey Number of students=60 Number of Students subjected to intervention=20 Independant variable is "Intervention2" which possible influences posttest .Perception survey is the students own evaluation of the intervention, lower the better. pretest posttest intervention1 intervention2 perceptionrank
37 35 10 10 4.2
38 28 8 10 2
20 30 8 10 #N/A
34 22 10 10 1.7
28 21 10 10 3.2
23 19 8 10 3.5
14 8 10 10 2
26 33 7 8 3.2
24 35 8 8 2.2
33 21 7 8 1.8
29 25 7 8 2
36 20 7 8 2.9 The subjects were students in the same classroom, the interventions were also evaluated , the score for which is also tabulated above.
What I am interested to find the causality between intervention2 and posttest scores. How should I go about solving this problem , should I use t-test , correlation and how? Update: I forgot to add that the control group is the students who did not receive the interventions , so I have their pretest and post test scores. Moreover I also have perception survey scores for the students who received the intervention. | Let's consider one fundamental concept of (linear) correlation, covariance (which is Pearson's correlation coefficient "un-standardized"). For two discrete random variables $X$ and $Y$ with probability mass functions $p(x)$, $p(y)$ and joint pmf $p(x,y)$ we have $$\operatorname{Cov}(X,Y) = E(XY) - E(X)E(Y) = \sum_{x,y}p(x,y)xy - \left(\sum_xp(x)x\right)\cdot \left(\sum_yp(y)y\right)$$ $$\Rightarrow \operatorname{Cov}(X,Y) = \sum_{x,y}\left[p(x,y)-p(x)p(y)\right]xy$$ The Mutual Information between the two is defined as $$I(X,Y) = E\left (\ln \frac{p(x,y)}{p(x)p(y)}\right)=\sum_{x,y}p(x,y)\left[\ln p(x,y)-\ln p(x)p(y)\right]$$ Compare the two: each contains a point-wise "measure" of "the distance of the two rv's from independence" as it is expressed by the distance of the joint pmf from the product of the marginal pmf's: the $\operatorname{Cov}(X,Y)$ has it as difference of levels, while $I(X,Y)$ has it as difference of logarithms. And what do these measures do? In $\operatorname{Cov}(X,Y)$ they create a weighted sum of the product of the two random variables. In $I(X,Y)$ they create a weighted sum of their joint probabilities. So with $\operatorname{Cov}(X,Y)$ we look at what non-independence does to their product, while in $I(X,Y)$ we look at what non-independence does to their joint probability distribution. Reversely, $I(X,Y)$ is the average value of the logarithmic measure of distance from independence, while $\operatorname{Cov}(X,Y)$ is the weighted value of the levels-measure of distance from independence, weighted by the product of the two rv's. So the two are not antagonistic—they are complementary, describing different aspects of the association between two random variables. One could comment that Mutual Information "is not concerned" whether the association is linear or not, while Covariance may be zero and the variables may still be stochastically dependent. On the other hand, Covariance can be calculated directly from a data sample without the need to actually know the probability distributions involved (since it is an expression involving moments of the distribution), while Mutual Information requires knowledge of the distributions, whose estimation, if unknown, is a much more delicate and uncertain work compared to the estimation of Covariance. | {
"source": [
"https://stats.stackexchange.com/questions/81693",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27145/"
]
} |
81,986 | In the textbook "New Comprehensive Mathematics for O Level" by Greer (1983), I see average deviation calculated like this: Sum up absolute differences between single values and the mean. Then
get its average. Throughout the chapter the term mean deviation is
used. But I've recently seen several references that use the term standard deviation and this is what they do: Calculate squares of differences between single values and the mean.
Then get their average and finally the root of the answer. I tried both methods on a common set of data and their answers differ. I'm not a statistician. I got confused while trying to teach deviation to my kids. So in short, are the terms standard deviation and mean deviation the same or is my old textbook wrong? | Both answer how far your values are spread around the mean of the observations. An observation that is 1 under the mean is equally "far" from the mean as a value that is 1 above the mean. Hence you should neglect the sign of the deviation. This can be done in two ways: Calculate the absolute value of the deviations and sum these. Square the deviations and sum these squares. Due to the squaring, you give more weight to large deviations, so this second sum will differ from the sum of the absolute deviations. Averaging the absolute deviations gives the "mean deviation"; averaging the squared deviations and then taking the square root gives the "standard deviation". That is why the two calculations give different answers on the same data. The mean deviation is rarely used. | {
"source": [
"https://stats.stackexchange.com/questions/81986",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37210/"
]
} |
82,105 | I have a binary logistic regression model with a McFadden's pseudo R-squared of 0.192 with a dependent variable called payment (1 = payment and 0 = no payment). What is the interpretation of this pseudo R-squared? Is it a relative comparison for nested models (e.g. a 6 variable model has a McFadden's pseudo R-squared of 0.192, whereas a 5 variable model (after removing one variable from the aforementioned 6 variable model), this 5 variable model has a pseudo R-squared of 0.131. Would we would want to keep that 6th variable in the model?) or is it an absolute quantity (e.g. a given model that has a McFadden's pseudo R-squared of 0.192 is better than any existing model with a McFadden's pseudo R-squared of 0.180 (for even non-nested models)? These are just possible ways to look at McFadden’s pseudo R-squared; however, I assume these two views are way off, thus the reason why I am asking this question here. I have done a great deal of research on this topic, and I have yet to find the answer that I am looking for in terms of being able to interpret a McFadden's pseudo R-squared of 0.192. Any insight and/or references are greatly appreciated! Before answering this question, I am aware that this isn't the best measure to describe a logistic regression model, but I would like to have a greater understanding of this statistic regardless! | So I figured I'd sum up what I've learned about McFadden's pseudo $R^2$ as a proper answer. The seminal reference that I can see for McFadden's pseudo $R^2$ is: McFadden, D. (1974) “Conditional logit analysis of qualitative choice behavior.” Pp. 105-142 in P. Zarembka (ed.), Frontiers in Econometrics. Academic Press. http://eml.berkeley.edu/~mcfadden/travel.html Figure 5.5 shows the relationship between $\rho^2$ and traditional $R^2$ measures from OLS. My interpretation is that larger values of $\rho^2$ (McFadden's pseudo $R^2$ ) are better than smaller ones. The interpretation of McFadden's pseudo $R^2$ between 0.2-0.4 comes from a book chapter he contributed to: Bahvioural Travel Modelling. Edited by David Hensher and Peter Stopher. 1979. McFadden contributed Ch. 15 "Quantitative Methods for Analyzing Travel Behaviour on Individuals: Some Recent Developments". Discussion of model evaluation (in the context of multinomial logit models) begins on page 306 where he introduces $\rho^2$ (McFadden's pseudo $R^2$ ). McFadden states "while the $R^2$ index is a more familiar concept to planner who are experienced in OLS, it is not as well behaved as the $\rho^2$ measure, for ML estimation. Those unfamiliar with $\rho^2$ should be forewarned that its values tend to be considerably lower than those of the $R^2$ index...For example, values of 0.2 to 0.4 for $\rho^2$ represent EXCELLENT fit." So basically, $\rho^2$ can be interpreted like $R^2$ , but don't expect it to be as big. And values from 0.2-0.4 indicate (in McFadden's words) excellent model fit. | {
"source": [
"https://stats.stackexchange.com/questions/82105",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29068/"
]
} |
82,113 | I'm reading a paper on data mining, and the authors mentioned that "we bootstrapped a classifier". I understand that bootstrapping means sampling with replacement, but I didn't understand how that relates to classifiers. | So I figured I'd sum up what I've learned about McFadden's pseudo $R^2$ as a proper answer. The seminal reference that I can see for McFadden's pseudo $R^2$ is: McFadden, D. (1974) “Conditional logit analysis of qualitative choice behavior.” Pp. 105-142 in P. Zarembka (ed.), Frontiers in Econometrics. Academic Press. http://eml.berkeley.edu/~mcfadden/travel.html Figure 5.5 shows the relationship between $\rho^2$ and traditional $R^2$ measures from OLS. My interpretation is that larger values of $\rho^2$ (McFadden's pseudo $R^2$ ) are better than smaller ones. The interpretation of McFadden's pseudo $R^2$ between 0.2-0.4 comes from a book chapter he contributed to: Bahvioural Travel Modelling. Edited by David Hensher and Peter Stopher. 1979. McFadden contributed Ch. 15 "Quantitative Methods for Analyzing Travel Behaviour on Individuals: Some Recent Developments". Discussion of model evaluation (in the context of multinomial logit models) begins on page 306 where he introduces $\rho^2$ (McFadden's pseudo $R^2$ ). McFadden states "while the $R^2$ index is a more familiar concept to planner who are experienced in OLS, it is not as well behaved as the $\rho^2$ measure, for ML estimation. Those unfamiliar with $\rho^2$ should be forewarned that its values tend to be considerably lower than those of the $R^2$ index...For example, values of 0.2 to 0.4 for $\rho^2$ represent EXCELLENT fit." So basically, $\rho^2$ can be interpreted like $R^2$ , but don't expect it to be as big. And values from 0.2-0.4 indicate (in McFadden's words) excellent model fit. | {
"source": [
"https://stats.stackexchange.com/questions/82113",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27779/"
]
} |
82,162 | I am reading a data mining book and it mentioned the Kappa statistic as a means for evaluating the prediction performance of classifiers. However, I just can't understand this. I also checked Wikipedia but it didn't help too: https://en.wikipedia.org/wiki/Cohen's_kappa . How does Cohen's kappa help in evaluating the prediction performance of classifiers? What does it tell? I understand that 100% kappa means that the classifier is in total agreement with a random classifier, but I don't understand how does this help in evaluating the performance of the classifier? What does 40% kappa mean? Does it mean that 40% of the time, the classifier is in agreement with the random classifier? If so, what does that tell me or help me in evaluating the classifier? | Introduction The Kappa statistic (or value) is a metric that compares an Observed Accuracy with an Expected Accuracy (random chance). The kappa statistic is used not only to evaluate a single classifier, but also to evaluate classifiers amongst themselves. In addition, it takes into account random chance (agreement with a random classifier), which generally means it is less misleading than simply using accuracy as a metric (an Observed Accuracy of 80% is a lot less impressive with an Expected Accuracy of 75% versus an Expected Accuracy of 50%). Computation of Observed Accuracy and Expected Accuracy is integral to comprehension of the kappa statistic, and is most easily illustrated through use of a confusion matrix. Lets begin with a simple confusion matrix from a simple binary classification of Cats and Dogs : Computation Cats Dogs
Cats| 10 | 7 |
Dogs| 5 | 8 | Assume that a model was built using supervised machine learning on labeled data. This doesn't always have to be the case; the kappa statistic is often used as a measure of reliability between two human raters. Regardless, columns correspond to one "rater" while rows correspond to another "rater". In supervised machine learning, one "rater" reflects ground truth (the actual values of each instance to be classified), obtained from labeled data, and the other "rater" is the machine learning classifier used to perform the classification. Ultimately it doesn't matter which is which to compute the kappa statistic, but for clarity's sake lets say that the columns reflect ground truth and the rows reflect the machine learning classifier classifications. From the confusion matrix we can see there are 30 instances total (10 + 7 + 5 + 8 = 30). According to the first column 15 were labeled as Cats (10 + 5 = 15), and according to the second column 15 were labeled as Dogs (7 + 8 = 15). We can also see that the model classified 17 instances as Cats (10 + 7 = 17) and 13 instances as Dogs (5 + 8 = 13). Observed Accuracy is simply the number of instances that were classified correctly throughout the entire confusion matrix, i.e. the number of instances that were labeled as Cats via ground truth and then classified as Cats by the machine learning classifier , or labeled as Dogs via ground truth and then classified as Dogs by the machine learning classifier . To calculate Observed Accuracy , we simply add the number of instances that the machine learning classifier agreed with the ground truth label, and divide by the total number of instances. For this confusion matrix, this would be 0.6 ((10 + 8) / 30 = 0.6). Before we get to the equation for the kappa statistic, one more value is needed: the Expected Accuracy . This value is defined as the accuracy that any random classifier would be expected to achieve based on the confusion matrix. The Expected Accuracy is directly related to the number of instances of each class ( Cats and Dogs ), along with the number of instances that the machine learning classifier agreed with the ground truth label. To calculate Expected Accuracy for our confusion matrix, first multiply the marginal frequency of Cats for one "rater" by the marginal frequency of Cats for the second "rater", and divide by the total number of instances. The marginal frequency for a certain class by a certain "rater" is just the sum of all instances the "rater" indicated were that class. In our case, 15 (10 + 5 = 15) instances were labeled as Cats according to ground truth , and 17 (10 + 7 = 17) instances were classified as Cats by the machine learning classifier . This results in a value of 8.5 (15 * 17 / 30 = 8.5). This is then done for the second class as well (and can be repeated for each additional class if there are more than 2). 15 (7 + 8 = 15) instances were labeled as Dogs according to ground truth , and 13 (8 + 5 = 13) instances were classified as Dogs by the machine learning classifier . This results in a value of 6.5 (15 * 13 / 30 = 6.5). The final step is to add all these values together, and finally divide again by the total number of instances, resulting in an Expected Accuracy of 0.5 ((8.5 + 6.5) / 30 = 0.5). 
In our example, the Expected Accuracy turned out to be 50%, as will always be the case when either "rater" classifies each class with the same frequency in a binary classification (both Cats and Dogs contained 15 instances according to ground truth labels in our confusion matrix). The kappa statistic can then be calculated using both the Observed Accuracy ( 0.60 ) and the Expected Accuracy ( 0.50 ) and the formula: Kappa = (observed accuracy - expected accuracy)/(1 - expected accuracy) So, in our case, the kappa statistic equals: (0.60 - 0.50)/(1 - 0.50) = 0.20. As another example, here is a less balanced confusion matrix and the corresponding calculations: Cats Dogs
Cats| 22 | 9 |
Dogs| 7 | 13 | Ground truth: Cats (29), Dogs (22) Machine Learning Classifier: Cats (31), Dogs (20) Total: (51) Observed Accuracy: ((22 + 13) / 51) = 0.69 Expected Accuracy: ((29 * 31 / 51) + (22 * 20 / 51)) / 51 = 0.51 Kappa: (0.69 - 0.51) / (1 - 0.51) = 0.37 In essence, the kappa statistic is a measure of how closely the instances classified by the machine learning classifier matched the data labeled as ground truth , controlling for the accuracy of a random classifier as measured by the expected accuracy. Not only can this kappa statistic shed light into how the classifier itself performed, the kappa statistic for one model is directly comparable to the kappa statistic for any other model used for the same classification task. Interpretation There is not a standardized interpretation of the kappa statistic. According to Wikipedia (citing their paper), Landis and Koch considers 0-0.20 as slight, 0.21-0.40 as fair, 0.41-0.60 as moderate, 0.61-0.80 as substantial, and 0.81-1 as almost perfect. Fleiss considers kappas > 0.75 as excellent, 0.40-0.75 as fair to good, and < 0.40 as poor. It is important to note that both scales are somewhat arbitrary. At least two further considerations should be taken into account when interpreting the kappa statistic. First, the kappa statistic should always be compared with an accompanied confusion matrix if possible to obtain the most accurate interpretation. Consider the following confusion matrix: Cats Dogs
Cats| 60 | 125 |
Dogs| 5 | 5000| The kappa statistic is 0.47, well above the threshold for moderate according to Landis and Koch and fair-good for Fleiss. However, notice the hit rate for classifying Cats . Less than a third of all Cats were actually classified as Cats ; the rest were all classified as Dogs . If we care more about classifying Cats correctly (say, we are allergic to Cats but not to Dogs , and all we care about is not succumbing to allergies as opposed to maximizing the number of animals we take in), then a classifier with a lower kappa but better rate of classifying Cats might be more ideal. Second, acceptable kappa statistic values vary on the context. For instance, in many inter-rater reliability studies with easily observable behaviors, kappa statistic values below 0.70 might be considered low. However, in studies using machine learning to explore unobservable phenomena like cognitive states such as day dreaming, kappa statistic values above 0.40 might be considered exceptional. So, in answer to your question about a 0.40 kappa, it depends. If nothing else, it means that the classifier achieved a rate of classification 2/5 of the way between whatever the expected accuracy was and 100% accuracy. If expected accuracy was 80%, that means that the classifier performed 40% (because kappa is 0.4) of 20% (because this is the distance between 80% and 100%) above 80% (because this is a kappa of 0, or random chance), or 88%. So, in that case, each increase in kappa of 0.10 indicates a 2% increase in classification accuracy. If accuracy was instead 50%, a kappa of 0.4 would mean that the classifier performed with an accuracy that is 40% (kappa of 0.4) of 50% (distance between 50% and 100%) greater than 50% (because this is a kappa of 0, or random chance), or 70%. Again, in this case that means that an increase in kappa of 0.1 indicates a 5% increase in classification accuracy. Classifiers built and evaluated on data sets of different class distributions can be compared more reliably through the kappa statistic (as opposed to merely using accuracy) because of this scaling in relation to expected accuracy. It gives a better indicator of how the classifier performed across all instances, because a simple accuracy can be skewed if the class distribution is similarly skewed. As mentioned earlier, an accuracy of 80% is a lot more impressive with an expected accuracy of 50% versus an expected accuracy of 75%. Expected accuracy as detailed above is susceptible to skewed class distributions, so by controlling for the expected accuracy through the kappa statistic, we allow models of different class distributions to be more easily compared. That's about all I have. If anyone notices anything left out, anything incorrect, or if anything is still unclear, please let me know so I can improve the answer. References I found helpful: Includes a succinct description of kappa: http://standardwisdom.com/softwarejournal/2011/12/confusion-matrix-another-single-value-metric-kappa-statistic/ Includes a description of calculating expected accuracy: http://epiville.ccnmtl.columbia.edu/popup/how_to_calculate_kappa.html | {
"source": [
"https://stats.stackexchange.com/questions/82162",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27779/"
]
} |
82,664 | In Bishop's PRML book, he says that, overfitting is a problem with Maximum Likelihood Estimation (MLE), and Bayesian can avoid it. But I think, overfitting is a problem more about model selection, not about the method used to do parameter estimation. That is, suppose I have a data set $D$, which is generated via $$f(x)=sin(x),\;x\in[0,1]$$, now I might choose different models $H_i$ to fit the data and find out which one is the best. And the models under consideration are polynomial ones with different orders, $H_1$ is order 1, $H_2$ is order 2, $H_3$ is order 9. Now I try to fit the data $D$ with each of the 3 models, each model has its paramters, denoted as $w_i$ for $H_i$. Using ML, I will have a point estimate of the model parameters $w$, and $H_1$ is too simple and will always underfit the data, whereas $H_3$ is too complex and will overfit the data, only $H_2$ will fit the data well. My questions are, 1) Model $H_3$ will overfit the data, but I don't think it's the problem of ML, but the problem of the model per se. Because, using ML for $H_1,H_2$ doesn't result into overfitting. Am I right? 2) Compared to Bayesian, ML does have some disadvantages, since it just gives the point estimate of the model parameters $w$, and it's overconfident. Whereas Bayesian doesn't rely on just the most probable value of the parameter, but all the possible values of the parameters given the observed data $D$, right? 3) Why can Bayesian avoid or decrease overfitting? As I understand it, we can use Bayesian for model comparison, that is, given data $D$, we could find out the marginal likelihood (or model evidence) for each model under consideration, and then pick the one with the highest marginal likelihood, right? If so, why is that? | Optimisation is the root of all evil in statistics. Any time you make choices about your model$^1$ by optimising some suitable criterion evaluated on a finite sample of data you run the risk of over-fitting the criterion, i.e. reducing the statistic beyond the point where improvements in generalisation performance are obtained and the reduction is instead gained by exploiting the peculiarities of the sample of data, e.g. noise). The reason the Bayesian method works better is that you don't optimise anything, but instead marginalise (integrate) over all possible choices. The problem then lies in the choice of prior beliefs regarding the model, so one problem has gone away, but another one appears in its place. $^1$ This includes maximising the evidence (marginal likelihood) in a Bayesian setting. For an example of this, see the results for Gaussian Process classifiers in my paper, where optimising the marginal likelihood makes the model worse if you have too many hyper-parameters (note selection according to marginal likelihood will tend to favour models with lots of hyper-parameters as a result of this form of over-fitting). G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, 2010. Research, vol. 11, pp. 2079-2107, July 2010. ( pdf ) | {
"source": [
"https://stats.stackexchange.com/questions/82664",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30540/"
]
} |
82,705 | I'm peer reviewing an academic journal article and the authors wrote the following as justification for not reporting any inferential statistics (I deidentified the nature of the two groups): In total, 25 of the 2,349 (1.1%) respondents reported X . We appropriately refrain from presenting analyses that statistically compare group X to group Y (the other 2,324 participants) since those results could be heavily driven by chance with an outcome this rare. My question is: are the authors of this study justified in throwing in the towel with respect to comparing groups? If not, what might I recommend to them? | Statistical tests do not make assumptions about sample size. There are, of course, differing assumptions with various tests (e.g., normality), but the equality of sample sizes is not one of them. Unless the test used is inappropriate in some other way (I can't think of an issue right now), the type I error rate will not be affected by drastically unequal group sizes. Moreover, their phrasing implies (to my mind) that they believe it will. Thus, they are confused about these issues. On the other hand, type II error rates very much will be affected by highly unequal $n$s. This will be true no matter what the test (e.g., the $t$-test, Mann-Whitney $U$-test, or $z$-test for equality of proportions will all be affected in this way). For an example of this, see my answer here: How should one interpret the comparison of means from different sample sizes? Thus, they may well be "justified in throwing in the towel" with respect to this issue. (Specifically, if you expect to get a non-significant result whether the effect is real or not, what is the point of the test?) As the sample sizes diverge, statistical power will converge to $\alpha$. This fact actually leads to a different suggestion, which I suspect few people have ever heard of and would probably have trouble getting past reviewers (no offense intended): a compromise power analysis . The idea is relatively straightforward: In any power analysis, $\alpha$, $\beta$, $n_1$, $n_2$, and the effect size $d$, exist in relationship to each other. Having specified all but one, you can solve for the last. Typically, people do what is called an a-priori power analysis , in which you solve for $N$ (generally you are assuming $n_1=n_2$). On the other hand, you can fix $n_1$, $n_2$, and $d$, and solve for $\alpha$ (or equivalently $\beta$), if you specify the ratio of type I to type II error rates that you are willing to live with. Conventionally, $\alpha=.05$ and $\beta=.20$, so you are saying that type I errors are four times worse than type II errors. Of course, a given researcher might disagree with that, but having specified a given ratio, you can solve for what $\alpha$ you should be using in order to possibly maintain some adequate power. This approach is a logically valid option for the researchers in this situation, although I acknowledge the exoticness of this approach may make it a tough sell in the larger research community that probably has never heard of such a thing. | {
"source": [
"https://stats.stackexchange.com/questions/82705",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11758/"
]
} |
82,720 | What is the best technique to calculate a confidence interval of a binomial experiment, if your estimate is that $p=0$ (or similarly $p=1$) and sample size is relatively small, for example $n=25$? | Do not use the normal approximation Much has been written about this problem. The general advice is never to use the normal approximation (i.e., the asymptotic/Wald confidence interval), as it has terrible coverage properties. R code for illustrating this: library(binom)
p = seq(0,1,.001)
coverage = binom.coverage(p, 25, method="asymptotic")$coverage
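# 'coverage' holds the actual coverage probability of the nominal 95% Wald (asymptotic) interval at each true p, for n = 25.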
plot(p, coverage, type="l")
binom.confint(0,25)
abline(h=.95, col="red") For small success probabilities, you might ask for a 95% confidence interval, but actually get, say, a 10% confidence interval! Recommendations So what should we use? I believe the current recommendations are the ones listed in the paper Interval Estimation for a Binomial Proportion by Brown, Cai and DasGupta in Statistical Science 2001, vol. 16, no. 2, pages 101–133. The authors examined several methods for calculating confidence intervals, and came to the following conclusion. [W]e recommend the Wilson interval or the equal-tailed Jeffreys prior interval for small n and the interval suggested in Agresti and Coull for larger n . The Wilson interval is also sometimes called the score interval, since it’s based on inverting a score test. Calculating the intervals To calculate these confidence intervals, you can use this online calculator or the binom.confint() function in the binom package in R. For example, for 0 successes in 25 trials, the R code would be: > binom.confint(0, 25, method=c("wilson", "bayes", "agresti-coull"),
type="central")
method x n mean lower upper
1 agresti-coull 0 25 0.000 -0.024 0.158
2 bayes 0 25 0.019 0.000 0.073
3 wilson 0 25 0.000 0.000 0.133 Here bayes is the Jeffreys interval. (The argument type="central" is needed to get the equal-tailed interval.) Note that you should decide on which of the three methods you want to use before calculating the interval. Looking at all three and selecting the shortest will naturally give you too small coverage probability. A quick, approximate answer As a final note, if you observe exactly zero successes in your n trials and just want a very quick approximate confidence interval, you can use the rule of three . Simply divide the number 3 by n . In the above example n is 25, so the upper bound is 3/25 = 0.12 (the lower bound is of course 0). | {
"source": [
"https://stats.stackexchange.com/questions/82720",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36462/"
]
} |
82,963 | A colleague of mine sent me this problem apparently making the rounds on the internet: If $3 = 18, 4 = 32, 5 = 50, 6 = 72, 7 = 98$, Then, $10 =$ ? The answer seems to be 200. 3*6
4*8
5*10
6*12
7*14
8*16
9*18
10*20=200 When I do a linear regression in R: data <- data.frame(a=c(3,4,5,6,7), b=c(18,32,50,72,98))
lm1 <- lm(b~a, data=data)
new.data <- data.frame(a=c(10,20,30))
predict <- predict(lm1, newdata=new.data, interval='prediction') I get: fit lwr upr
1 154 127.5518 180.4482
2 354 287.0626 420.9374
3 554 444.2602 663.7398 So my linear model is predicting $10 = 154$. When I plot the data it looks linear... but obviously I assumed something that is not correct. I'm trying to learn how to best use linear models in R. What is the proper way to analyze this series? Where did I go wrong? | A regression model, such as the one fit by lm() implicitly assumes that the underlying data generating process is probabilistic . You are assuming that the rule you are trying to model is deterministic . Therefore, there is a mismatch between what you are trying to do and the way you are trying to do it. There are other software (i.e., not R) that is explicitly designed to find / fit the simplest function to deterministic data (an example would be Eureqa ). There may be an R package for that (that I don't know of), but R is intended for statistical modeling of probabilistic data. As for the answer that lm() gave you, it looks reasonable, and could be right. However, I gather the context in which this problem was presented strongly implied that it should be understood as deterministic. If that hadn't been the case, and you were wondering if the fit was reasonable, one thing you might notice is that the two extreme data points are above the regression line, while the middle data are all below it. This suggests a mis-specified functional form. This can also be seen in the residuals vs. fitted plot ( plot(lm1, which=1 ): As for the model fit by @AlexWilliams, it looks much better: | {
"source": [
"https://stats.stackexchange.com/questions/82963",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31289/"
]
} |
83,136 | Recently, this paper has received a lot of attention (e.g. from WSJ ). Basically, the authors conclude that Facebook will lose 80% of its members by 2017. They base their claims on an extrapolation of the SIR model , a compartmental model frequently used in epidemiology. Their data is drawn from Google searches for "Facebook", and the authors use the demise of Myspace to validate their conclusion. Question: Are the authors making a "correlation does not imply causation" mistake? This model and logic may have worked for Myspace, but is it valid for any social network? Update : Facebook hits back In keeping with the scientific principle "correlation equals causation," our research unequivocally demonstrated that Princeton may be in danger of disappearing entirely. We don’t really think Princeton or the world’s air supply is going anywhere soon. We love Princeton (and air),” and adding a final reminder that “not all research is created equal – and some methods of analysis lead to pretty crazy conclusions. | The answers so far have focused on the data itself, which makes sense with the site this is on, and the flaws about it. But I'm a computational/mathematical epidemiologist by inclination, so I'm also going to talk about the model itself for a little bit, because it's also relevant to the discussion. In my mind, the biggest problem with the paper is not the Google data. Mathematical models in epidemiology handle messy data all the time, and to my mind the problems with it could be addressed with a fairly straightforward sensitivity analysis. The biggest problem, to me, is that the researchers have "doomed themselves to success" — something that should always be avoided in research. They do this in the model they decided to fit to the data: a standard SIR model. Briefly, a SIR model (which stands for susceptible (S) infectious (I) recovered (R)) is a series of differential equations that track the health states of a population as it experiences an infectious disease. Infected individuals interact with susceptible individuals and infect them, and then in time move on to the recovered category. This produces a curve that looks like this: Beautiful, is it not? And yes, this one is for a zombie epidemic. Long story. In this case, the red line is what's being modeled as "Facebook users". The problem is this: In the basic SIR model, the I class will eventually, and inevitably, asymptotically approach zero . It must happen. It doesn't matter if you're modeling zombies, measles, Facebook, or Stack Exchange, etc. If you model it with a SIR model, the inevitable conclusion is that the population in the infectious (I) class drops to approximately zero. There are extremely straightforward extensions to the SIR model that make this not true — either you can have people in the recovered (R) class come back to susceptible (S) (essentially, this would be people who left Facebook changing from "I'm never going back" to "I might go back someday"), or you can have new people come into the population (this would be little Timmy and Claire getting their first computers). Unfortunately, the authors didn't fit those models. This is, incidentally, a widespread problem in mathematical modeling. A statistical model is an attempt to describe the patterns of variables and their interactions within the data. A mathematical model is an assertion about reality . You can get a SIR model to fit lots of things, but your choice of a SIR model is also an assertion about the system. 
Namely, that once it peaks, it's heading to zero. Incidentally, Internet companies do use user-retention models that look a heck of a lot like epidemic models, but they're also considerably more complex than the one presented in the paper. | {
"source": [
"https://stats.stackexchange.com/questions/83136",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37729/"
]
} |
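As a footnote to the answer above, the inevitability of the infectious class dying out in a plain SIR model is easy to see numerically. The R sketch below is not from the paper; it just integrates the standard SIR equations with a crude Euler scheme and made-up parameter values: # basic SIR model, crude Euler integration (illustrative parameter values only)
beta <- 0.5; gamma <- 0.2            # made-up transmission and recovery rates
S <- 0.99; I <- 0.01; R <- 0         # initial fractions of the population
dt <- 0.1; steps <- 2000
I.path <- numeric(steps)
for(k in 1:steps){
  dS <- -beta*S*I
  dI <-  beta*S*I - gamma*I
  dR <-  gamma*I
  S <- S + dt*dS; I <- I + dt*dI; R <- R + dt*dR
  I.path[k] <- I
}
max(I.path)      # the epidemic peak
I.path[steps]    # essentially zero by the end of the run
Adding a flow from R back to S (people willing to return) or an inflow of new susceptibles is what breaks this conclusion, which is exactly the extension the answer says the authors did not fit.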
83,163 | Let's say I have two samples. If I want to tell whether they are pulled from different populations, I can run a t-test. But let's say I want to test whether the samples are from the same population. How does one do this? That is, how do I calculate the statistical probability that these two samples were pulled from the same population? | The tests that compare distributions are rule-out tests. They start with the null hypothesis that the 2 populations are identical, then try to reject that hypothesis. We can never prove the null to be true, just reject it, so these tests cannot really be used to show that 2 samples come from the same population (or identical populations). This is because there could be minor differences in the distributions (meaning they are not identical), but so small that tests cannot really find the difference. Consider 2 distributions, the first is uniform from 0 to 1, the second is a mixture of 2 uniforms, so it is 1 between 0 and 0.999, and also 1 between 9.999 and 10 (0 elsewhere). So clearly these distributions are different (whether the difference is meaningful is another question), but if you take a sample size of 50 from each (total 100) there is over a 90% chance that you will only see values between 0 and 0.999 and be unable to see any real difference. There are ways to do what is called equivalence testing where you ask if the 2 distributions/populations are equivalent, but you need to define what you consider to be equivalent. It is usually that some measure of difference is within a given range, i.e. the difference in the 2 means is less than 5% of the average of the 2 means, or the KS statistic is below a given cut-off, etc. If you can then calculate a confidence interval for the difference statistic (difference of means could just be the t confidence interval, bootstrapping, simulation, or other methods may be needed for other statistics). If the entire confidence interval falls in the "equivalence region" then we consider the 2 populations/distributions to be "equivalent". The hard part is figuring out what the equivalence region should be. | {
"source": [
"https://stats.stackexchange.com/questions/83163",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37742/"
]
} |
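The uniform-mixture example in the answer above can be reproduced directly; in this R sketch (sample sizes and seed are arbitrary, and the sampling scheme is my reading of the described density) a standard two-sample test usually fails to notice that the populations differ: set.seed(1)
x <- runif(50)                               # uniform on (0, 1)
in.main <- runif(50) < 0.999                 # the mixture from the answer:
y <- ifelse(in.main,
            runif(50, 0, 0.999),             #   mass 0.999 on (0, 0.999)
            runif(50, 9.999, 10))            #   mass 0.001 on (9.999, 10)
ks.test(x, y)$p.value                        # usually large: the difference goes undetected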
83,347 | Let's say I have two 1-dimensional arrays, $a_1$ and $a_2$ . Each contains 100 data points. $a_1$ is the actual data, and $a_2$ is the model prediction. In this case, the $R^2$ value would be: $$
R^2 = 1 - \frac{SS_{res}}{SS_{tot}} \quad\quad\quad\quad\quad\ \ \quad\quad(1).
$$ In the meantime, this would be equal to the square value of the correlation coefficient, $$
R^2 = (\text{Correlation Coefficient})^2 \quad (2).
$$ Now if I swap the two: $a_2$ is the actual data, and $a_1$ is the model prediction. From equation $(2)$ , because the correlation coefficient does not care which comes first, the $R^2$ value would be the same. However, from equation $(1)$ , $SS_{tot}=\sum_i(y_i - \bar y )^2$ , the $R^2$ value will change, because the $SS_{tot}$ has changed if we switch $y$ from $a_1$ to $a_2$ ; in the meantime, $SS_{res}=\sum_i(y_i -f_i)^2$ does not change. My question is: How can these contradict each other? Edit : I was wondering whether the relationship in Eq. (2) still stands if it is not a simple linear regression, i.e., if the relationship between IV and DV is not linear (could be exponential / log). Will this relationship still stand if the sum of the prediction errors does not equal zero? | It is true that $SS_{tot}$ will change ... but you forgot the fact that the regression sum of squares will change as well. So let's consider the simple regression model and denote the squared correlation coefficient as $r_{xy}^2=\dfrac{S_{xy}^2}{S_{xx}S_{yy}}$, where I used the sub-index $xy$ to emphasize the fact that $x$ is the independent variable and $y$ is the dependent variable. Obviously, $r_{xy}^2$ is unchanged if you swap $x$ with $y$. We can easily show that $SSR_{xy}=S_{yy}R_{xy}^2$, where $SSR_{xy}$ is the regression sum of squares and $S_{yy}$ is the total sum of squares when $x$ is the independent and $y$ the dependent variable. Therefore: $$R_{xy}^2=\dfrac{SSR_{xy}}{S_{yy}}=\dfrac{S_{yy}-SSE_{xy}}{S_{yy}},$$ where $SSE_{xy}$ is the corresponding residual sum of squares when $x$ is the independent and $y$ the dependent variable. Note that in this case we have $SSR_{xy}=b^2_{xy}S_{xx}$ with $b_{xy}=\dfrac{S_{xy}}{S_{xx}}$ (See e.g. Eq. (34)-(41) here .) Therefore: $$R_{xy}^2=\dfrac{\dfrac{S^2_{xy}}{S^2_{xx}}\,S_{xx}}{S_{yy}}=\dfrac{S^2_{xy}}{S_{xx}S_{yy}}.$$ Clearly the last expression is symmetric with respect to $x$ and $y$. In other words: $$R_{xy}^2=R_{yx}^2.$$ To summarize, when you swap $x$ with $y$ in the simple regression model, both the numerator and the denominator of $R_{xy}^2=\dfrac{SSR_{xy}}{S_{yy}}$ change in such a way that $R_{xy}^2=R_{yx}^2.$ | {
"source": [
"https://stats.stackexchange.com/questions/83347",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29321/"
]
} |
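A quick numerical check of the answer's conclusion (simulated data, arbitrary seed): the R-squared reported by lm() is the same whichever variable plays the role of the response, and both equal the squared correlation: set.seed(1)
x <- rnorm(100)
y <- 2*x + rnorm(100)
summary(lm(y ~ x))$r.squared   # R^2 with y as the response
summary(lm(x ~ y))$r.squared   # R^2 with x as the response (identical)
cor(x, y)^2                    # the squared correlation coefficient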
83,373 | Suppose I have a coin toss experiment in which I want to calculate the maximum likelihood estimate of the coin parameter $p$ when tossing the coin $n$ times. After calculating the derivative of the binomial likelihood function $ L(p) = { n \choose x } p^x (1-p)^{n-x} $, I get the optimal value for $p$ to be $p^{*} = \frac{x}{n}$, with $x$ being the number of successes. My questions now are: How would I calculate the expected value/variance of this maximum likelihood estimate for $p$? Do I need to calculate the expected value/variance for $L(p^{*})$? If yes, how would I do that? | This is true that $SS_{tot}$ will change ... but you forgot the fact that the regression sum of of squares will change as well. So let's consider the simple regression model and denote the Correlation Coefficient as $r_{xy}^2=\dfrac{S_{xy}^2}{S_{xx}S_{yy}}$, where I used the sub-index $xy$ to emphasize the fact that $x$ is the independent variable and $y$ is the dependent variable. Obviously, $r_{xy}^2$ is unchanged if you swap $x$ with $y$. We can easily show that $SSR_{xy}=S_{yy}(R_{xy}^2)$, where $SSR_{xy}$ is the regression sum of of squares and $S_{yy}$ is the total sum of squares where $x$ is independent and $y$ is dependent variable. Therefore: $$R_{xy}^2=\dfrac{SSR_{xy}}{S_{yy}}=\dfrac{S_{yy}-SSE_{xy}}{S_{yy}},$$ where $SSE_{xy}$ is the corresponding residual sum of of squares where $x$ is independent and $y$ is dependent variable. Note that in this case, we have $SSE_{xy}=b^2_{xy}S_{xx}$ with $b=\dfrac{S_{xy}}{S_{xx}}$ (See e.g. Eq. (34)-(41) here .) Therefore: $$R_{xy}^2=\dfrac{S_{yy}-\dfrac{S^2_{xy}}{S^2_{xx}}.S_{xx}}{S_{yy}}=\dfrac{S_{yy}S_{xx}-S^2_{xy}}{S_{xx}.S_{yy}}.$$ Clearly above equation is symmetric with respect to $x$ and $y$. In other words: $$R_{xy}^2=R_{yx}^2.$$ To summarize when you change $x$ with $y$ in the simple regression model, both numerator and denominator of $R_{xy}^2=\dfrac{SSR_{xy}}{S_{yy}}$ will change in a way that $R_{xy}^2=R_{yx}^2.$ | {
"source": [
"https://stats.stackexchange.com/questions/83373",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37948/"
]
} |
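For the coin-tossing question above, the standard binomial facts give the result directly: since $\hat p = X/n$ with $X \sim \text{Binomial}(n, p)$, we have $E[\hat p] = p$ and $\operatorname{Var}[\hat p] = p(1-p)/n$. A short simulation (arbitrary choices of $n$ and $p$) confirms this: set.seed(1)
n <- 100; p <- 0.3
p.hat <- rbinom(100000, size = n, prob = p) / n   # many realisations of the MLE x/n
mean(p.hat)    # close to p
var(p.hat)     # close to p*(1-p)/n
p*(1-p)/n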
83,387 | I have read a couple of explanations of the EM algorithm (e.g. from Bishop's Pattern Recognition and Machine Learning and from Rogers and Girolami's First Course in Machine Learning). The derivation of EM is OK, I understand it. I also understand why the algorithm converges to something: at each step we improve the result and the likelihood is bounded by 1.0, so by using a simple fact (if a function increases and is bounded then it converges) we know that the algorithm converges to some solution. However, how do we know it is a local maximum? At each step we are considering only one coordinate (either the latent variables or the parameters), so we might miss something, for example that reaching the local maximum requires moving along both coordinates at once. This, I believe, is a similar problem to that of the general class of hill-climbing algorithms, of which EM is an instance. For a general hill-climbing algorithm we have this problem with the function f(x, y) = x*y: if we start from the point (0, 0), then only by considering both directions at once are we able to move upwards from the value 0. | EM is not guaranteed to converge to a local maximum. It is only guaranteed to converge to a point with zero gradient with respect to the parameters. So it can indeed get stuck at saddle points. | {
"source": [
"https://stats.stackexchange.com/questions/83387",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37953/"
]
} |
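The f(x, y) = x*y example from the question is easy to poke at numerically; this small R sketch shows that at (0, 0) neither coordinate direction offers any improvement on its own, which is how a coordinate-wise scheme can sit at a saddle point, as the answer says: f <- function(x, y) x*y
grid <- seq(-1, 1, by = 0.01)
max(f(grid, 0))   # vary x with y held at 0: the objective never rises above 0
max(f(0, grid))   # vary y with x held at 0: same story
f(0.5, 0.5)       # moving both coordinates at once does improve it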
83,731 | In Bayesian data analysis, parameters are treated as random variables. This stems from the Bayesian subjective conceptualization of probability. But do Bayesians theoretically acknowledge that there is one true fixed parameter value out in the 'real world?' It seems like the obvious answer is 'yes', because then trying to estimate the parameter would almost be nonsensical. An academic citation for this answer would be greatly appreciated. | IMHO "yes"! Here is one of my favorite quotes by Greenland (2006: 767): It is often said (incorrectly) that ‘parameters are treated as fixed
by the frequentist but as random by the Bayesian’. For frequentists
and Bayesians alike, the value of a parameter may have been fixed from
the start or may have been generated from a physically random
mechanism. In either case, both suppose it has taken on some fixed
value that we would like to know. The Bayesian uses formal probability
models to express personal uncertainty about that value. The
‘randomness’ in these models represents personal uncertainty about the
parameter’s value; it is not a property of the parameter (although we
should hope it accurately reflects properties of the mechanisms that
produced the parameter). Greenland, S. (2006). Bayesian perspectives for epidemiological research: I. Foundations and basic methods. International Journal of Epidemiology , 35(3), 765–774. | {
"source": [
"https://stats.stackexchange.com/questions/83731",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/24848/"
]
} |
83,771 | I've been looking at numerous questions on this site regarding bootstrapping and confidence intervals, but I'm still confused. Part of the reason for my confusion is probably that I'm not advanced enough in my statistics knowledge to understand a lot of the answers. I'm about half-way through an introductory statistics course and my math level is only about mid-Algebra II, so anything past that level just confuses me. If one of the knowledgeable people on this site could explain this issue at my level it would be extremely helpful. We were learning in class how to take resamples using the bootstrap method and use those to build up a confidence interval for some statistic we'd like to measure. So for example, say we take a sample from a large population and find that 40% say they'll vote for Candidate A. We assume that this sample is a pretty accurate reflection of the original population, in which case we can take resamples from it to discover something about the population. So we take resamples and find (using a 95% confidence level) that the resulting confidence interval ranges from 35% to 45%. My question is, what does this confidence interval actually mean ? I keep reading that there's a difference between (Frequentist) Confidence Intervals and (Bayesian) Credible Intervals. If I understood correctly, a credible interval would say that there's a 95% chance that in our situation the true parameter is within the given interval (35%-45%), while a confidence interval would say that there's a 95% that in this type of situation (but not necessarily in our situation specifically) the method we're using would accurately report that the true parameter is within the given interval. Assuming this definition is correct, my question is: What's the "true parameter" that we're talking about when using confidence intervals built up using the bootstrap method? Are we referring to (a) the true parameter of the original population , or (b) the true parameter of the sample ? If (a), then we'd be saying that 95% of the time the bootstrap method will accurately report true statements about the original population. But how could we possibly know that? Doesn't the whole bootstrap method rest on the assumption that the original sample is an accurate reflection of the population it was taken from? If (b) then I don't understand the meaning of the confidence interval at all. Don't we already know the true parameter of the sample? It's a straightforward measurement! I discussed this with my teacher and she was quite helpful. But I'm still confused. | If the bootstrapping procedure and the formation of the confidence interval were performed correctly, it means the same as any other confidence interval. From a frequentist perspective, a 95% CI implies that if the entire study were repeated identically ad infinitum , 95% of such confidence intervals formed in this manner will include the true value. Of course, in your study, or in any given individual study, the confidence interval either will include the true value or not, but you won't know which. To understand these ideas further, it may help you to read my answer here: Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? Regarding your further questions, the 'true value' refers to the actual parameter of the relevant population. (Samples don't have parameters, they have statistics ; e.g., the sample mean, $\bar x$, is a sample statistic, but the population mean, $\mu$, is a population parameter.) 
As to how we know this, in practice we don't. You are correct that we are relying on some assumptions--we always are. If those assumptions are correct, it can be proven that the properties hold. This was the point of Efron's work back in the late 1970's and early 1980's, but the math is difficult for most people to follow. For a somewhat mathematical explanation of the bootstrap, see @StasK's answer here: Explaining to laypeople why bootstrapping works . For a quick demonstration short of the math, consider the following simulation using R : # a function to perform bootstrapping
boot.mean.sampling.distribution = function(raw.data, B=1000){
# this function will take 1,000 (by default) bootsamples, calculate the mean of
# each one, store it, & return the bootstrapped sampling distribution of the mean
boot.dist = vector(length=B) # this will store the means
N = length(raw.data) # this is the N from your data
for(i in 1:B){
boot.sample = sample(x=raw.data, size=N, replace=TRUE)
boot.dist[i] = mean(boot.sample)
}
boot.dist = sort(boot.dist)
return(boot.dist)
}
# simulate bootstrapped CI from a population w/ true mean = 0 on each pass through
# the loop, we will get a sample of data from the population, get the bootstrapped
# sampling distribution of the mean, & see if the population mean is included in the
# 95% confidence interval implied by that sampling distribution
set.seed(00) # this makes the simulation reproducible
includes = vector(length=1000) # this will store our results
for(i in 1:1000){
sim.data = rnorm(100, mean=0, sd=1)
boot.dist = boot.mean.sampling.distribution(raw.data=sim.data)
includes[i] = boot.dist[25]<0 & 0<boot.dist[976]
}
mean(includes) # this tells us the % of CIs that included the true mean
[1] 0.952 | {
"source": [
"https://stats.stackexchange.com/questions/83771",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/38186/"
]
} |
83,826 | I estimated a robust linear model in R with MM weights using the rlm() in the MASS package. `R`` does not provide an $R^2$ value for the model, but I would like to have one if it is a meaningful quantity. I am also interested to know if there is any meaning in having an $R^2$ value that weighs the total and residual variance in the same way that observations were weighted in the robust regression. My general thinking is that, if, for the purposes of the regression, we are essentially with the weights giving some of the estimates less influence because they are outliers in some way, then maybe for the purpose of calculating $r^2$ we should also give those same estimates less influence? I wrote two simple functions for the $R^2$ and the weighted $R^2$, they are below. I also included the results of running these functions for my model which is called HI9. EDIT: I found web page of Adelle Coster of UNSW that gives a formula for R2 that includes the weights vector in calculating the calculation of both SSe and SSt just as I did, and asked her for a more formal reference: http://web.maths.unsw.edu.au/~adelle/Garvan/Assays/GoodnessOfFit.html (still looking for help from Cross Validated on how to interpret this weighted $r^2$.) #I used this function to calculate a basic r-squared from the robust linear model
r2 <- function(x){
  SSe <- sum((x$resid)^2)
  observed <- x$resid + x$fitted
  SSt <- sum((observed - mean(observed))^2)
  value <- 1 - SSe/SSt
  return(value)
}
r2(HI9)
[1] 0.2061147
#I used this function to calculate a weighted r-squared from the robust linear model
r2ww <- function(x){
  SSe <- sum((x$w*x$resid)^2)                       # the residual sum of squares is weighted
  observed <- x$resid + x$fitted
  SSt <- sum((x$w*(observed - mean(observed)))^2)   # the total sum of squares is weighted
  value <- 1 - SSe/SSt
  return(value)
}
r2ww(HI9)
[1] 0.7716264 Thanks to anyone who spends time answering this. Please accept my apologies if there is already some very good reference on this which I missed, or if my code above is hard to read (I am not a code guy). | The following answer is based on: (1) my interpretation of Willett and Singer (1988) Another Cautionary Note about R-squared: It's use in weighted least squates regression analysis. The American Statistician. 42(3). pp236-238, and (2) the premise that robust linear regression is essentially weighted least squares regression with the weights estimated by an iterative process. The formula I gave in the question for r2w needs a small correction to correspond to equation 4 in Willet and Singer (1988) for r2wls: the SSt calculation should also use a weighted mean: the correction is SSt <- sum((x$w*observed-mean(x$w*observed))^2)]. What is the meaning of this (corrected) weighted r-squared? Willett and Singer interpret it as: "the coefficient of determination in the transformed [weighted] dataset. It is a measure of the proportion of the variation in weighted Y that can be accounted for by weighted X, and is the quantity that is output as R2 by the major statistical computer packages when a WLS regression is performed". Is it meaningful as a measure of goodness of fit? This depends on how it is presented and interpreted. Willett and Singer caution that it is typically quite a bit higher than the r-squared obtained in ordinary least squares regression, and the high value encourages prominent display... but this display may be deceptive IF it is interpreted in the conventional sense of r-squared (as the proportion of unweighted variation explained by a model). Willett and Singer propose that a less 'deceptive' alternative is pseudoR2wls (their equation 7), which is equivalent to my function r2 in the original question. In general, Willett and Singer also caution that it is not good to rely on any r2 (even their pseudor2wls) as a sole measure of goodness of fit. Despite these cautions, the whole premise of robust regression is that some cases are judged 'not as good' and don't count as much in the model fitting, and it may be good to reflect this in part of the model assessment process. The weighted r-squared described, can be one good measure of goodness of fit - as long as the correct interpretation is clearly given in the presentation and it is not relied on as the sole assessment of goodness of fit. | {
"source": [
"https://stats.stackexchange.com/questions/83826",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36766/"
]
} |
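Folding the correction described in the answer (a weighted mean inside the SSt term) back into the question's function gives something like the sketch below; HI9 is the rlm() fit from the question, and the answer's caveats about interpreting any such weighted R-squared still apply: # r2ww with the weighted-mean correction suggested in the answer
r2ww <- function(x){
  SSe <- sum((x$w * x$resid)^2)                           # weighted residual sum of squares
  observed <- x$resid + x$fitted
  SSt <- sum((x$w * observed - mean(x$w * observed))^2)   # weighted total sum of squares
  1 - SSe/SSt
}
r2ww(HI9)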
84,314 | When we do multiple regressions and say we are looking at the average change in the $y$ variable for a change in an $x$ variable, holding all other variables constant, what values are we holding the other variables constant at? Their mean? Zero? Any value? I'm inclined to think it's at any value; just looking for clarification. If anyone had a proof, that would be great too. | You are right. Technically, it is any value . However, when I teach this I usually tell people that you are getting the effect of a one unit change in $X_j$ when all other variables are held at their respective means. I believe this is a common way to explain it that is not specific to me. I usually go on to mention that if you don't have any interactions, $\beta_j$ will be the effect of a one unit change in $X_j$, no matter what the values of your other variables are. But I like to start with the mean formulation. The reason is that there are two effects of including multiple variables in a regression model. First, you get the effect of $X_j$ controlling for the other variables (see my answer here ). The second is that the presence of the other variables (typically) reduces the residual variance of the model, making your variables (including $X_j$) 'more significant'. It is hard for people to understand how this works if the other variables have values that are all over the place. That seems like it would increase the variability somehow. If you think of adjusting each data point up or down for the value of each other variable until all the rest of the $X$ variables have been moved to their respective means, it is easier to see that the residual variability has been reduced. I don't get to interactions until a class or two after I've introduced the basics of multiple regression. However, when I do get to them, I return to this material. The above applies when there are not interactions. When there are interactions, it is more complicated. In that case, the interacting variable[s] is being held constant (very specifically) at $0$, and at no other value. If you want to see how this plays out algebraically, it is rather straight-forward. We can start with the no-interaction case. Let's determine the change in $\hat Y$ when all other variables are held constant at their respective means. Without loss of generality, let's say that there are three $X$ variables and we are interested in understanding how the change in $\hat Y$ is associated with a one unit change in $X_3$, holding $X_1$ and $X_2$ constant at their respective means: \begin{align}
\hat Y_i &= \hat\beta_0 + \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 + \hat\beta_3X_{3i} \\
\hat Y_{i'} &= \hat\beta_0 + \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 + \hat\beta_3(X_{3i}\!+\!1) \\
~ \\
&\text{subtracting the first equation from the second:} \\
~ \\
\hat Y_{i'} - \hat Y_i &= \hat\beta_0 - \hat\beta_0 + \hat\beta_1\bar X_1 - \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 - \hat\beta_2\bar X_2 + \hat\beta_3(X_{3i}\!+\!1) - \hat\beta_3X_{3i} \\
\Delta Y &= \hat\beta_3X_{3i} + \hat\beta_3 - \hat\beta_3X_{3i} \\
\Delta Y &= \hat\beta_3
\end{align} Now it is obvious that we could have put any value in for $X_1$ and $X_2$ in the first two equations, so long as we put the same value for $X_1$ ($X_2$) in both of them. That is, so long as we are holding $X_1$ and $X_2$ constant . On the other hand, it does not work out this way if you have an interaction. Here I show the case where there is an $X_1X_3$ interaction term: \begin{align}
\hat Y_i &= \hat\beta_0 + \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 + \hat\beta_3X_{3i} \quad\quad\ \! + \hat\beta_4\bar X_1X_{3i} \\
\hat Y_{i'} &= \hat\beta_0 + \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 + \hat\beta_3(X_{3i}\!+\!1) + \hat\beta_4\bar X_1(X_{3i}\!+\!1) \\
~ \\
&\text{subtracting the first equation from the second:} \\
~ \\
\hat Y_{i'} - \hat Y_i &= \hat\beta_0 - \hat\beta_0 + \hat\beta_1\bar X_1 - \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 - \hat\beta_2\bar X_2 + \hat\beta_3(X_{3i}\!+\!1) - \hat\beta_3X_{3i} + \\
&\quad\ \hat\beta_4\bar X_1(X_{3i}\!+\!1) - \hat\beta_4\bar X_1X_{3i} \\
\Delta Y &= \hat\beta_3X_{3i} + \hat\beta_3 - \hat\beta_3X_{3i} + \hat\beta_4\bar X_1 X_{3i} + \hat\beta_4\bar X_1 - \hat\beta_4\bar X_1X_{3i} \\
\Delta Y &= \hat\beta_3 + \hat\beta_4\bar X_1
\end{align} In this case, it is not possible to hold all else constant. Because the interaction term is a function of $X_1$ and $X_3$, it is not possible to change $X_3$ without the interaction term changing as well. Thus, $\hat\beta_3$ equals the change in $\hat Y$ associated with a one unit change in $X_3$ only when the interacting variable ($X_1$) is held at $0$ instead of $\bar X_1$ (or any other value but $0$), in which case the last term in the bottom equation drops out. In this discussion, I have focused on interactions, but more generally, the issue is when there is any variable that is a function of another such that it is not possible to change the value of the first without changing the respective value of the other variable. In such cases, the meaning of $\hat\beta_j$ becomes more complicated. For example, if you had a model with $X_j$ and $X_j^2$, then $\hat\beta_j$ is the derivative $\frac{dY}{dX_j}$ holding all else equal, and holding $X_j=0$ (see my answer here ). Other, still more complicated formulations are possible as well. | {
"source": [
"https://stats.stackexchange.com/questions/84314",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31188/"
]
} |
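A small simulated example (made-up coefficients, arbitrary seed) makes the interaction point concrete: the coefficient that lm() reports for $X_3$ is its effect when the interacting variable equals $0$, and centering that variable turns it into the effect at the mean: set.seed(1)
x1 <- rnorm(200, mean = 5)
x3 <- rnorm(200)
y  <- 1 + 2*x1 + 3*x3 + 0.5*x1*x3 + rnorm(200)
coef(lm(y ~ x1*x3))["x3"]     # effect of x3 when x1 = 0 (about 3)
x1c <- x1 - mean(x1)
coef(lm(y ~ x1c*x3))["x3"]    # effect of x3 at the mean of x1 (about 3 + 0.5*mean(x1))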
84,326 | I'm trying to understand how I might best model a variable where over time I've obtained increasingly detailed predictors. For example, consider modeling recovery rates on defaulted loans. Suppose we have a dataset with 20 years of data, and in the first 15 of those years we only know whether the loan was collateralized or not, but nothing about the characteristics of that collateral. For the last five years, however, we can break the collateral into a range of categories which are expected to be a good predictor of the recovery rate. Given this setup I want to fit a model to the data, determine measures such as the statistical significance of the predictors, and then forecast with the model. What missing data framework does this fit into? Are there any special considerations related to the fact that the more detailed explanatory variables only become available after a given point in time, as opposed to being scattered throughout the historical sample? | You are right. Technically, it is any value . However, when I teach this I usually tell people that you are getting the effect of a one unit change in $X_j$ when all other variables are held at their respective means. I believe this is a common way to explain it that is not specific to me. I usually go on to mention that if you don't have any interactions, $\beta_j$ will be the effect of a one unit change in $X_j$, no matter what the values of your other variables are. But I like to start with the mean formulation. The reason is that there are two effects of including multiple variables in a regression model. First, you get the effect of $X_j$ controlling for the other variables (see my answer here ). The second is that the presence of the other variables (typically) reduces the residual variance of the model, making your variables (including $X_j$) 'more significant'. It is hard for people to understand how this works if the other variables have values that are all over the place. That seems like it would increase the variability somehow. If you think of adjusting each data point up or down for the value of each other variable until all the rest of the $X$ variables have been moved to their respective means, it is easier to see that the residual variability has been reduced. I don't get to interactions until a class or two after I've introduced the basics of multiple regression. However, when I do get to them, I return to this material. The above applies when there are not interactions. When there are interactions, it is more complicated. In that case, the interacting variable[s] is being held constant (very specifically) at $0$, and at no other value. If you want to see how this plays out algebraically, it is rather straight-forward. We can start with the no-interaction case. Let's determine the change in $\hat Y$ when all other variables are held constant at their respective means. Without loss of generality, let's say that there are three $X$ variables and we are interested in understanding how the change in $\hat Y$ is associated with a one unit change in $X_3$, holding $X_1$ and $X_2$ constant at their respective means: \begin{align}
\hat Y_i &= \hat\beta_0 + \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 + \hat\beta_3X_{3i} \\
\hat Y_{i'} &= \hat\beta_0 + \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 + \hat\beta_3(X_{3i}\!+\!1) \\
~ \\
&\text{subtracting the first equation from the second:} \\
~ \\
\hat Y_{i'} - \hat Y_i &= \hat\beta_0 - \hat\beta_0 + \hat\beta_1\bar X_1 - \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 - \hat\beta_2\bar X_2 + \hat\beta_3(X_{3i}\!+\!1) - \hat\beta_3X_{3i} \\
\Delta Y &= \hat\beta_3X_{3i} + \hat\beta_3 - \hat\beta_3X_{3i} \\
\Delta Y &= \hat\beta_3
\end{align} Now it is obvious that we could have put any value in for $X_1$ and $X_2$ in the first two equations, so long as we put the same value for $X_1$ ($X_2$) in both of them. That is, so long as we are holding $X_1$ and $X_2$ constant . On the other hand, it does not work out this way if you have an interaction. Here I show the case where there is an $X_1X_3$ interaction term: \begin{align}
\hat Y_i &= \hat\beta_0 + \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 + \hat\beta_3X_{3i} \quad\quad\ \! + \hat\beta_4\bar X_1X_{3i} \\
\hat Y_{i'} &= \hat\beta_0 + \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 + \hat\beta_3(X_{3i}\!+\!1) + \hat\beta_4\bar X_1(X_{3i}\!+\!1) \\
~ \\
&\text{subtracting the first equation from the second:} \\
~ \\
\hat Y_{i'} - \hat Y_i &= \hat\beta_0 - \hat\beta_0 + \hat\beta_1\bar X_1 - \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2 - \hat\beta_2\bar X_2 + \hat\beta_3(X_{3i}\!+\!1) - \hat\beta_3X_{3i} + \\
&\quad\ \hat\beta_4\bar X_1(X_{3i}\!+\!1) - \hat\beta_4\bar X_1X_{3i} \\
\Delta Y &= \hat\beta_3X_{3i} + \hat\beta_3 - \hat\beta_3X_{3i} + \hat\beta_4\bar X_1 X_{3i} + \hat\beta_4\bar X_1 - \hat\beta_4\bar X_1X_{3i} \\
\Delta Y &= \hat\beta_3 + \hat\beta_4\bar X_1
\end{align} In this case, it is not possible to hold all else constant. Because the interaction term is a function of $X_1$ and $X_3$, it is not possible to change $X_3$ without the interaction term changing as well. Thus, $\hat\beta_3$ equals the change in $\hat Y$ associated with a one unit change in $X_3$ only when the interacting variable ($X_1$) is held at $0$ instead of $\bar X_1$ (or any other value but $0$), in which case the last term in the bottom equation drops out. In this discussion, I have focused on interactions, but more generally, the issue is when there is any variable that is a function of another such that it is not possible to change the value of the first without changing the respective value of the other variable. In such cases, the meaning of $\hat\beta_j$ becomes more complicated. For example, if you had a model with $X_j$ and $X_j^2$, then $\hat\beta_j$ is the derivative $\frac{dY}{dX_j}$ holding all else equal, and holding $X_j=0$ (see my answer here ). Other, still more complicated formulations are possible as well. | {
"source": [
"https://stats.stackexchange.com/questions/84326",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/38471/"
]
} |
85,426 | I am a student taking my first Statistics course now. I am confused by the term "test statistic". In the following (I saw this in some textbooks), $t$ seems to be a specific value calculated from a specific sample.
$$
t=\frac{\overline{x} - \mu_0}{s / \sqrt{n}}
$$ However, in the following (I saw this in some other textbooks), $T$ seems to be a random variable.
$$
T=\frac{\overline{X} - \mu_0}{S / \sqrt{n}}
$$ So, does the term "test statistic" mean a specific value or a random variable, or both ? | The short answer is "yes". The tradition in notation is to use an upper case letter (T in the above) to represent a random variable, and a lower case letter (t) to represent a specific value computed or observed of that random variable. T is a random variable because it represents the results of calculating from a sample chosen randomly. Once you take the sample (and the randomness is over) then you can calculate t, the specific value, and make conclusions based on how t compares to the distribution of T. So the test statistic is a random variable when we think about all the values it could take on based on all the different samples we could collect. But once we collect a single sample, we calculate a specific value of the test statistic. | {
"source": [
"https://stats.stackexchange.com/questions/85426",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16097/"
]
} |
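A short simulation (arbitrary $n$ and null value) may make the distinction concrete: repeating the sampling gives many draws of the random variable $T$, while one sample in hand gives a single observed value $t$: set.seed(1)
n <- 20; mu0 <- 0
T.draws <- replicate(5000, {                   # T as a random variable:
  x <- rnorm(n, mean = mu0)                    # each new sample gives a different value
  (mean(x) - mu0) / (sd(x) / sqrt(n))
})
hist(T.draws, freq = FALSE)                    # follows a t distribution with n - 1 df
x <- rnorm(n, mean = mu0)
t.obs <- (mean(x) - mu0) / (sd(x) / sqrt(n))   # t: one specific value from one sample
t.obs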
85,431 | I would like to determine the relationship between two variables after controlling for a third. Specifically, I want to know if the prices of mercury and gold over time are correlated with each other more than or less than they are correlated with a generic metals price index. Is there some special signal in the relationship between these commodities or are they just moving in line with the overall index? I have a data set with 32 annual values (1980-2011) for mercury price, gold price, and metals index. I am using R. Below I have posted time series plots of each of the three variables. | The short answer is "yes". The tradition in notation is to use an upper case letter (T in the above) to represent a random variable, and a lower case letter (t) to represent a specific value computed or observed of that random variable. T is a random variable because it represents the results of calculating from a sample chosen randomly. Once you take the sample (and the randomness is over) then you can calculate t, the specific value, and make conclusions based on how t compares to the distribution of T. So the test statistic is a random variable when we think about all the values it could take on based on all the different samples we could collect. But once we collect a single sample, we calculate a specific value of the test statistic. | {
"source": [
"https://stats.stackexchange.com/questions/85431",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/39525/"
]
} |
85,560 | I have noticed that the confidence interval for predicted values in an linear regression tends to be narrow around the mean of the predictor and fat around the minimum and maximum values of the predictor. This can be seen in plots of these 4 linear regressions: I initially thought this was because most values of the predictors were concentrated around the mean of the predictor. However, I then noticed that the narrow middle of the confidence interval would occur even if many values of were concentrated around the extremes of the predictor, as in the bottom left linear regression, which lots of values of the predictor are concentrated around the minimum of the predictor. is anyone able to explain why confidence intervals for the predicted values in an linear regression tend to be narrow in the middle and fat at the extremes? | I'll discuss it in intuitive terms. Both confidence intervals and prediction intervals in regression take account of the fact that the intercept and slope are uncertain - you estimate the values from the data, but the population values may be different (if you took a new sample, you'd get different estimated values). A regression line will pass through $(\bar x, \bar y)$ , and it's best to center the discussion about changes to the fit around that point - that is to think about the line $y= a + b(x-\bar x)$ (in this formulation, $\hat a = \bar y$ ). If the line went through that $(\bar x, \bar y)$ point, but the slope were little higher or lower (i.e. if the height of the line at the mean was fixed but the slope was a little different), what would that look like? You'd see that the new line would move further away from the current line near the ends than near the middle, making a kind of slanted X that crossed at the mean (as each of the purple lines below do with respect to the red line; the purple lines represent the estimated slope $\pm$ two standard errors of the slope). If you drew a collection of such lines with the slope varying a little from its estimate, you'd see the distribution of predicted values near the ends 'fan out' (imagine the region between the two purple lines shaded in grey, for example, because we sampled again and drew many such slopes near the estimated one; We can get a sense of this by bootstrapping a line through the point ( $\bar{x},\bar{y}$ )). Here's an example using 2000 resamples with a parametric bootstrap: If instead you take account of the uncertainty in the constant (making the line pass close to but not quite through $(\bar x, \bar y)$ ), that moves the line up and down, so intervals for the mean at any $x$ will sit above and below the fitted line. (Here the purple lines are $\pm$ two standard errors of the constant term either side of the estimated line). When you do both at once (the line may be up or down a tiny bit, and the slope may be slightly steeper or shallower), then you get some amount of spread at the mean, $\bar x$ , because of the uncertainty in the constant, and you get some additional fanning out due to the slope's uncertainty, between them producing the characteristic hyperbolic shape of your plots. That's the intuition. Now, if you like, we can consider a little algebra (but it's not essential): It's actually the square root of the sum of the squares of those two effects - you can see it in the confidence interval's formula. 
Let's build up the pieces: The $a$ standard error with $b$ known is $\sigma /\sqrt{n}$ (remember $a$ here is the expected value of $y$ at the mean of $x$ , not the usual intercept; it's just a standard error of a mean). That's the standard error of the line's position at the mean ( $\bar x$ ). The $b$ standard error with $a$ known is $\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}$ . The effect of uncertainty in slope at some value $x^*$ is multiplied by how far you are from the mean ( $x^*-\bar x$ ) (because the change in level is the change in slope times the distance you move), giving $(x^*-\bar x)\cdot\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}$ . Now the overall effect is just the square root of the sum of the squares of those two things (why? because variances of uncorrelated things add, and if you write your line in the $y= a + b(x-\bar x)$ form, the estimates of $a$ and $b$ are uncorrelated. So the overall standard error is the square root of the overall variance, and the variance is the sum of the variances of the components - that is, we have $\sqrt{(\sigma /\sqrt{n})^2+ \left[(x^*-\bar x)\cdot\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}\right]^2 }$ A little simple manipulation gives the usual term for the standard error of the estimate of the mean value at $x^*$ : $\sigma\sqrt{\frac{1}{n}+ \frac{(x^*-\bar x)^2}{\sum_{i=1}^n (x_i-\bar{x})^2} }$ If you draw that as a function of $x^*$ , you'll see it forms a curve (looks like a smile) with a minimum at $\bar x$ , that gets bigger as you move out. That's what gets added to / subtracted from the fitted line (well, a multiple of it is, in order to get a desired confidence level). [With prediction intervals, there's also the variation in position due to the process variability; this adds another term that shifts the limits up and down, making a much wider spread, and because that term usually dominates the sum under the square root, the curvature is much less pronounced.] | {
"source": [
"https://stats.stackexchange.com/questions/85560",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12492/"
]
} |
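The hyperbolic shape is easy to see directly from lm(); in this sketch (simulated data, arbitrary seed) the width of the confidence band returned by predict() is smallest near the mean of the predictor and grows toward the ends: set.seed(1)
x <- runif(100)
y <- 1 + 2*x + rnorm(100)
fit   <- lm(y ~ x)
newx  <- data.frame(x = seq(0, 1, length.out = 50))
ci    <- predict(fit, newdata = newx, interval = "confidence")
width <- ci[, "upr"] - ci[, "lwr"]
newx$x[which.min(width)]   # the band is narrowest close to mean(x)
mean(x)
plot(newx$x, width, type = "l")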
85,583 | Could anyone please let me know how to implement Naive Bayesian Algorithm in R or SAS?I have got a training dataset with all the categorical predictors and target variable(3 levels).I got to build a model and apply it on a different test dataset along with the probable target and its predicted probability. To be more clear,my first dataset 'A' contact 4 categorical input variables a,b,c,d and a target class 'T' of 3 levels.I need to train the model for this dataset initially.Then,I have got one more dataset 'B' with input categorical variables w,x,y,z and I need to predict the probable target class 'S' along with its probability here based on my previous built model.I want the entire thing to be done in R or SAS but couldn't find much resources.Sorry,if the question has been repeated. | I'll discuss it in intuitive terms. Both confidence intervals and prediction intervals in regression take account of the fact that the intercept and slope are uncertain - you estimate the values from the data, but the population values may be different (if you took a new sample, you'd get different estimated values). A regression line will pass through $(\bar x, \bar y)$ , and it's best to center the discussion about changes to the fit around that point - that is to think about the line $y= a + b(x-\bar x)$ (in this formulation, $\hat a = \bar y$ ). If the line went through that $(\bar x, \bar y)$ point, but the slope were little higher or lower (i.e. if the height of the line at the mean was fixed but the slope was a little different), what would that look like? You'd see that the new line would move further away from the current line near the ends than near the middle, making a kind of slanted X that crossed at the mean (as each of the purple lines below do with respect to the red line; the purple lines represent the estimated slope $\pm$ two standard errors of the slope). If you drew a collection of such lines with the slope varying a little from its estimate, you'd see the distribution of predicted values near the ends 'fan out' (imagine the region between the two purple lines shaded in grey, for example, because we sampled again and drew many such slopes near the estimated one; We can get a sense of this by bootstrapping a line through the point ( $\bar{x},\bar{y}$ )). Here's an example using 2000 resamples with a parametric bootstrap: If instead you take account of the uncertainty in the constant (making the line pass close to but not quite through $(\bar x, \bar y)$ ), that moves the line up and down, so intervals for the mean at any $x$ will sit above and below the fitted line. (Here the purple lines are $\pm$ two standard errors of the constant term either side of the estimated line). When you do both at once (the line may be up or down a tiny bit, and the slope may be slightly steeper or shallower), then you get some amount of spread at the mean, $\bar x$ , because of the uncertainty in the constant, and you get some additional fanning out due to the slope's uncertainty, between them producing the characteristic hyperbolic shape of your plots. That's the intuition. Now, if you like, we can consider a little algebra (but it's not essential): It's actually the square root of the sum of the squares of those two effects - you can see it in the confidence interval's formula. Let's build up the pieces: The $a$ standard error with $b$ known is $\sigma /\sqrt{n}$ (remember $a$ here is the expected value of $y$ at the mean of $x$ , not the usual intercept; it's just a standard error of a mean). 
That's the standard error of the line's position at the mean ( $\bar x$ ). The $b$ standard error with $a$ known is $\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}$ . The effect of uncertainty in slope at some value $x^*$ is multiplied by how far you are from the mean ( $x^*-\bar x$ ) (because the change in level is the change in slope times the distance you move), giving $(x^*-\bar x)\cdot\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}$ . Now the overall effect is just the square root of the sum of the squares of those two things (why? because variances of uncorrelated things add, and if you write your line in the $y= a + b(x-\bar x)$ form, the estimates of $a$ and $b$ are uncorrelated. So the overall standard error is the square root of the overall variance, and the variance is the sum of the variances of the components - that is, we have $\sqrt{(\sigma /\sqrt{n})^2+ \left[(x^*-\bar x)\cdot\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}\right]^2 }$ A little simple manipulation gives the usual term for the standard error of the estimate of the mean value at $x^*$ : $\sigma\sqrt{\frac{1}{n}+ \frac{(x^*-\bar x)^2}{\sum_{i=1}^n (x_i-\bar{x})^2} }$ If you draw that as a function of $x^*$ , you'll see it forms a curve (looks like a smile) with a minimum at $\bar x$ , that gets bigger as you move out. That's what gets added to / subtracted from the fitted line (well, a multiple of it is, in order to get a desired confidence level). [With prediction intervals, there's also the variation in position due to the process variability; this adds another term that shifts the limits up and down, making a much wider spread, and because that term usually dominates the sum under the square root, the curvature is much less pronounced.] | {
"source": [
"https://stats.stackexchange.com/questions/85583",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/39605/"
]
} |
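For the Naive Bayes question above, one common route in R is naiveBayes() from the e1071 package (assuming that package is available; A, B, the predictors and the target T are the hypothetical names from the question, and B's columns would need to carry the same predictor names and levels as the training data for predict() to use them): library(e1071)
# A: training data with categorical predictors a, b, c, d and a 3-level target T
model <- naiveBayes(T ~ a + b + c + d, data = A)
# B: scoring data, with columns renamed/mapped to match the training predictors
pred.class <- predict(model, newdata = B, type = "class")  # most probable target level
pred.prob  <- predict(model, newdata = B, type = "raw")    # predicted probability of each level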
85,804 | Background: I'm giving a presentation to colleagues at work on hypothesis testing, and understand most of it fine but there's one aspect that I'm tying myself up in knots trying to understand as well as explain it to others. This is what I think I know (please correct if wrong!) Statistics that would be normal if variance was known, follow a $t$-distribution if the variance is unknown CLT (Central Limit Theorem): The sampling distribution of the sample mean is approximately normal for sufficiently large $n$ (could be $30$, could be up to $300$ for highly skewed distributions) The $t$-distribution can be considered Normal for degrees of freedom $> 30$ You use the $z$-test if: Population normal and variance known (for any sample size) Population normal, variance unknown and $n>30$ (due to CLT) Population binomial, $np>10$, $nq>10$ You use the $t$-test if: Population normal, variance unknown and $n<30$ No knowledge about population or variance and $n<30$, but sample data looks normal / passes tests etc so population can be assumed normal So I'm left with: For samples $>30$ and $<\approx 300$(?), no knowledge about population and variance known / unknown. So my questions are: At what sample size can you assume (where no knowledge about population distribution or variance) that the sampling distribution of the mean is normal (i.e. CLT has kicked in) when the sampling distribution looks non-normal? I know that some distributions need $n>300$, but some resources seem to say use the $z$-test whenever $n>30$... For the cases I'm unsure about, I presume I look at the data for normality. Now, if the sample data does looks normal do I use the $z$-test (since assume population normal, and since $n>30$)? What about where the sample data for cases I'm uncertain about don't look normal? Are there any circumstances where you'd still use a $t$-test or $z$-test or do you always look to transform / use non-parametric tests? I know that, due to CLT, at some value of $n$ the sampling distribution of the mean will approximate to normal but the sample data won't tell me what that value of $n$ is; the sample data could be non-normal whilst the sample mean follows a normal / $t$. Are there cases where you'd be transforming / using a non-parametric test when in fact the sampling distribution of the mean was normal / $t$ but you couldn't tell? | @AdamO is right, you simply always use the $t$ -test if you don't know the population standard deviation a-priori. You don't have to worry about when to switch to the $z$ -test, because the $t$ -distribution 'switches' for you. More specifically, the $t$ -distribution converges to the normal, thus it is the correct distribution to use at every $N$ . There is also a confusion here about the meaning of the traditional line at $N=30$ . There are two kinds of convergence that people talk about: The first is that the sampling distribution of the test statistic (i.e., $t$ ) computed from normally distributed (within group) raw data converges to a normal distribution as $N\rightarrow\infty$ despite the fact that the SD is estimated from the data. (The $t$ -distribution takes care of this for you, as noted above.) The second is that the sampling distribution of the mean of non-normally distributed (within group) raw data converges to a normal distribution (more slowly than above) as $N\rightarrow\infty$ . People count on the Central Limit Theorem to take care of this for them. 
However, there is no guarantee that it will converge within any reasonable sample size--there is certainly no reason to believe $30$ (or $300$ ) is the magic number. Depending on the magnitude and nature of the non-normality, it can take very long (cf. @Macro's answer here: Regression when the OLS residuals are not normally distributed ). If you believe your (within group) raw data are not very normal, it may be better to use a different type of test, such as the Mann-Whitney $U$ -test . Note that with non-normal data, the Mann-Whitney $U$ -test is likely to be more powerful than the $t$ -test, and can be so even if the CLT has kicked in. (It is also worth pointing out that testing for normality is likely to lead you astray, see: Is normality testing 'essentially useless'? ) At any rate, to answer your questions more explicitly, if you believe your (within group) raw data are not normally distributed, use the Mann-Whitney $U$ -test; if you believe you data are normally distributed, but you don't know the SD a-priori, use the $t$ -test; and if you believe your data are normally distributed and you know the SD a-priori, use the $z$ -test. It may help you to read @GregSnow's recent answer here: Interpretation of p-value in comparing proportions between two small groups in R regarding these issues as well. | {
"source": [
"https://stats.stackexchange.com/questions/85804",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/39704/"
]
} |
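As a small illustration of the closing recommendation (simulated skewed data, arbitrary rates), both tests are one-liners in R: set.seed(1)
x <- rexp(40, rate = 1.0)    # clearly non-normal within-group data
y <- rexp(40, rate = 0.7)
t.test(x, y)                 # relies on the sampling distribution of the mean being close to normal
wilcox.test(x, y)            # Mann-Whitney U test: no normality assumption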
85,903 | Traditional statistical tests, like the two sample t-test, focus on trying to eliminate the hypothesis that there is no difference between a function of two independent samples. Then, we choose a confidence level and say that if the difference of means is beyond the 95% level, we can reject the null hypothesis. If not, we "can't reject the null hypothesis". This seems to imply that we can't accept it either. Does it mean we're not sure if the null hypothesis is true? Now, I want to design a test where my hypothesis is that a function of two samples is the same (which is the opposite of traditional statistics tests where the hypothesis is that the two samples are different). So, my null hypothesis becomes that the two samples are different. How should I design such a test? Will it be as simple as saying that if the p-value is lesser than 5% we can accept the hypothesis that there is no significant difference? | Traditionally, the null hypothesis is a point value. (It is typically $0$, but can in fact be any point value.) The alternative hypothesis is that the true value is any value other than the null value . Because a continuous variable (such as a mean difference) can take on a value which is indefinitely close to the null value but still not quite equal and thus make the null hypothesis false, a traditional point null hypothesis cannot be proven. Imagine your null hypothesis is $0$, and the mean difference you observe is $0.01$. Is it reasonable to assume the null hypothesis is true? You don't know yet; it would be helpful to know what our confidence interval looks like. Let's say that your 95% confidence interval is $(-4.99,\ 5.01)$. Now, should we conclude that the true value is $0$? I would not feel comfortable saying that, because the CI is very wide, and there are many, large non-zero values that we might reasonably suspect are consistent with our data. So let's say we gather much, much more data, and now our observed mean difference is $0.01$, but the 95% CI is $(0.005,\ 0.015)$. The observed mean difference has stayed the same (which would be amazing if it really happened), but the confidence interval now excludes the null value. Of course, this is just a thought experiment, but it should make the basic ideas clear. We can never prove that the true value is any particular point value; we can only (possibly) disprove that it is some point value. In statistical hypothesis testing, the fact that the p-value is > 0.05 (and that the 95% CI includes zero) means that we are not sure if the null hypothesis is true . As for your concrete case, you cannot construct a test where the alternative hypothesis is that the mean difference is $0$ and the null hypothesis is anything other than zero. This violates the logic of hypothesis testing. It is perfectly reasonable that it is your substantive, scientific hypothesis, but it cannot be your alternative hypothesis in a hypothesis testing situation. So what can you do? In this situation, you use equivalence testing. (You might want to read through some of our threads on this topic by clicking on the equivalence tag.) The typical strategy is to use the two one sided tests approach. Very briefly, you select an interval within which you would consider that the true mean difference might as well be $0$ for all you could care, then you perform a one-sided test to determine if the observed value is less than the upper bound of that interval, and another one-sided test to see if it is greater than the lower bound. 
If both of these tests are significant, then you have rejected the hypothesis that the true value is outside the interval you care about. If one (or both) are non-significant, you fail to reject the hypothesis that the true value is outside the interval. For example, suppose anything within the interval $(-0.02,\ 0.02)$ is so close to zero that you think it is essentially the same as zero for your purposes, so you use that as your substantive hypothesis. Now imagine that you get the first result described above. Although $0.01$ falls within that interval, you would not be able to reject the null hypothesis on either one-sided t-test, so you would fail to reject the null hypothesis. On the other hand, imagine that you got the second result described above. Now you find that the observed value falls within the designated interval, and it can be shown to be both less than the upper bound and greater than the lower bound, so you can reject the null. (It is worth noting that you can reject both the hypothesis that the true value is $0$, and the hypothesis that the true value lies outside of the interval $(-0.02,\ 0.02)$, which may seem perplexing at first, but is fully consistent with the logic of hypothesis testing.) | {
"source": [
"https://stats.stackexchange.com/questions/85903",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/25186/"
]
} |
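A bare-bones sketch of the two one-sided tests approach described above (simulated data; the equivalence margin delta is whatever the analyst decides is 'as good as zero'): set.seed(1)
x <- rnorm(50); y <- rnorm(50)
delta <- 0.02                                    # equivalence region is (-delta, delta)
p.upper <- t.test(x, y, mu =  delta, alternative = "less")$p.value
p.lower <- t.test(x, y, mu = -delta, alternative = "greater")$p.value
max(p.upper, p.lower)   # if this maximum is below alpha, both one-sided tests reject and equivalence is concluded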
85,909 | The plm function of the plm library in R is giving me grief over having duplicate time-id couples, even when I'm running a model that I don't think should need a time variable at all (see reproducible example below). I can think of three possibilities: My understanding of fixed effects regression is wrong, and they really do require unique time indices (or time indices at all!). plm() is just being overly-finicky here and should relax this requirement. The particular estimation technique that plm() uses--the within transformation--requires time indices, even though the order doesn't seem to matter and the less computationally-efficient version (including dummies in a straight-up OLS model) doesn't need them. Any thoughts? set.seed(1)
n <- 1000
test <- data.frame( grp = as.factor(rep( letters, (n/length(letters))+1 ))[seq(n)], x = runif(n), z = runif(n) )
test$y <- with( test, 2*x + 3*z + rnorm(n) )
lm( y ~ x + z, data = test )
lm( y ~ x + z + grp, data = test )
require(plm)
# Model fails if I don't specify a time index, despite effect = "individual"
plm( y ~ x + z, data = test, model = "within", effect="individual", index = "grp" )
# Create time variable and add it to the index but still specify individual FE not time FE also
library(plyr)
test <- ddply( test, .(grp), function(dat) transform( dat, t = seq(nrow(dat)) ) )
# Now plm() works; note coefficients clearly include the fixed effects, as they match the lm() version above
plm( y ~ x + z, data = test, model = "within", effect="individual", index = c("grp","t") )
# Scramble time variables and show they don't matter as long as they're unique within a cluster
test <- ddply( test, .(grp), function(dat) transform( dat, t = sample(t) ) )
plm( y ~ x + z, data = test, model = "within", effect="individual", index = c("grp","t") )
# Add a duplicate time entry and show that it causes plm() to fail
test[ 2, "t" ] <- test[ 1, "t" ]
plm( y ~ x + z, data = test, model = "within", effect="individual", index = c("grp","t") ) Why this matters I'm trying to bootstrap my model, and when I do the requirement that the index-time pairs be unique is causing headaches which seem unnecessary if (2) is true. | Traditionally, the null hypothesis is a point value. (It is typically $0$, but can in fact be any point value.) The alternative hypothesis is that the true value is any value other than the null value . Because a continuous variable (such as a mean difference) can take on a value which is indefinitely close to the null value but still not quite equal and thus make the null hypothesis false, a traditional point null hypothesis cannot be proven. Imagine your null hypothesis is $0$, and the mean difference you observe is $0.01$. Is it reasonable to assume the null hypothesis is true? You don't know yet; it would be helpful to know what our confidence interval looks like. Let's say that your 95% confidence interval is $(-4.99,\ 5.01)$. Now, should we conclude that the true value is $0$? I would not feel comfortable saying that, because the CI is very wide, and there are many, large non-zero values that we might reasonably suspect are consistent with our data. So let's say we gather much, much more data, and now our observed mean difference is $0.01$, but the 95% CI is $(0.005,\ 0.015)$. The observed mean difference has stayed the same (which would be amazing if it really happened), but the confidence interval now excludes the null value. Of course, this is just a thought experiment, but it should make the basic ideas clear. We can never prove that the true value is any particular point value; we can only (possibly) disprove that it is some point value. In statistical hypothesis testing, the fact that the p-value is > 0.05 (and that the 95% CI includes zero) means that we are not sure if the null hypothesis is true . As for your concrete case, you cannot construct a test where the alternative hypothesis is that the mean difference is $0$ and the null hypothesis is anything other than zero. This violates the logic of hypothesis testing. It is perfectly reasonable that it is your substantive, scientific hypothesis, but it cannot be your alternative hypothesis in a hypothesis testing situation. So what can you do? In this situation, you use equivalence testing. (You might want to read through some of our threads on this topic by clicking on the equivalence tag.) The typical strategy is to use the two one sided tests approach. Very briefly, you select an interval within which you would consider that the true mean difference might as well be $0$ for all you could care, then you perform a one-sided test to determine if the observed value is less than the upper bound of that interval, and another one-sided test to see if it is greater than the lower bound. If both of these tests are significant, then you have rejected the hypothesis that the true value is outside the interval you care about. If one (or both) are non-significant, you fail to reject the hypothesis that the true value is outside the interval. For example, suppose anything within the interval $(-0.02,\ 0.02)$ is so close to zero that you think it is essentially the same as zero for your purposes, so you use that as your substantive hypothesis. Now imagine that you get the first result described above. Although $0.01$ falls within that interval, you would not be able to reject the null hypothesis on either one-sided t-test, so you would fail to reject the null hypothesis. 
On the other hand, imagine that you got the second result described above. Now you find that the observed value falls within the designated interval, and it can be shown to be both less than the upper bound and greater than the lower bound, so you can reject the null. (It is worth noting that you can reject both the hypothesis that the true value is $0$, and the hypothesis that the true value lies outside of the interval $(-0.02,\ 0.02)$, which may seem perplexing at first, but is fully consistent with the logic of hypothesis testing.) | {
"source": [
"https://stats.stackexchange.com/questions/85909",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3488/"
]
} |
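A hedged side note on the plm question above: the within (demeaning) transformation itself needs only a group identifier. The sketch below reuses the question's simulated setup, but the demean() helper and the slightly different construction of grp are illustrative choices added here; it checks that demeaning by group and running plain lm() reproduces the dummy-variable (LSDV) slopes without any time index.
set.seed(1)
n <- 1000
test <- data.frame(grp = as.factor(rep(letters, length.out = n)), x = runif(n), z = runif(n))
test$y <- with(test, 2 * x + 3 * z + rnorm(n))
# Within transformation: subtract the group mean from y, x and z
demean <- function(v, g) v - ave(v, g)
wd <- with(test, data.frame(y = demean(y, grp), x = demean(x, grp), z = demean(z, grp)))
coef(lm(y ~ x + z - 1, data = wd))                  # slopes from the demeaned regression ...
coef(lm(y ~ x + z + grp, data = test))[c("x", "z")] # ... equal the dummy-variable (LSDV) slopes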
85,916 | If $\mathbf{x}$ and $\mathbf{y}$ are two independent random unit vectors in $\mathbb{R}^D$ (uniformly distributed on a unit sphere), what is the distribution of their scalar product (dot product) $\mathbf x \cdot \mathbf y$ ? I guess as $D$ grows the distribution quickly (?) becomes normal with zero mean and variance decreasing in higher dimensions $$\lim_{D\to\infty}\sigma^2(D) \to 0,$$ but is there an explicit formula for $\sigma^2(D)$ ? Update I ran some quick simulations. First, generating 10000 pairs of random unit vectors for $D=1000$ it is easy to see that the distribution of their dot products is perfectly Gaussian (in fact it is quite Gaussian already for $D=100$ ), see the subplot on the left. Second, for each $D$ ranging from 1 to 10000 (with increasing steps) I generated 1000 pairs and computed the variance. Log-log plot is shown on the right, and it is clear that the formula is very well approximated by $1/D$ . Note that for $D=1$ and $D=2$ this formula even gives exact results (but I am not sure what happens later). | Because ( as is well-known ) a uniform distribution on the unit sphere $S^{D-1}$ is obtained by normalizing a $D$ -variate normal distribution and the dot product $t$ of normalized vectors is their correlation coefficient, the answers to the three questions are: $u= (t+1)/2$ has a Beta $((D-1)/2,(D-1)/2)$ distribution. The variance of $t$ equals $1/D$ (as speculated in the question). The standardized distribution of $t$ approaches normality at a rate of $O\left(\frac{1}{D}\right).$ Method The exact distribution of the dot product of unit vectors is easily obtained geometrically, because this is the component of the second vector in the direction of the first. Since the second vector is independent of the first and is uniformly distributed on the unit sphere, its component in the first direction is distributed the same as any coordinate of the sphere. (Notice that the distribution of the first vector does not matter.) Finding the Density Letting that coordinate be the last, the density at $t \in [-1,1]$ is therefore proportional to the surface area lying at a height between $t$ and $t+dt$ on the unit sphere. That proportion occurs within a belt of height $dt$ and radius $\sqrt{1-t^2},$ which is essentially a conical frustum constructed out of an $S^{D-2}$ of radius $\sqrt{1-t^2},$ of height $dt$ , and slope $1/\sqrt{1-t^2}$ . Whence the probability is proportional to $$\frac{\left(\sqrt{1 - t^2}\right)^{D-2}}{\sqrt{1 - t^2}}\,dt = (1 - t^2)^{(D-3)/2} dt.$$ Letting $u=(t+1)/2 \in [0,1]$ entails $t = 2u-1$ . 
Substituting that into the preceding gives the probability element up to a normalizing constant: $$f_D(u)du \; \propto \; (1 - (2u-1)^2)^{(D-3)/2} d(2u-1) = 2^{D-2}(u-u^2)^{(D-3)/2}du.$$ It is immediate that $u=(t+1)/2$ has a Beta $((D-1)/2, (D-1)/2)$ distribution, because (by definition) its density also is proportional to $$u^{(D-1)/2-1}\left(1-u\right)^{(D-1)/2-1} = (u-u^2)^{(D-3)/2} \; \propto \; f_D(u).$$ Determining the Limiting Behavior Information about the limiting behavior follows easily from this using elementary techniques: $f_D$ can be integrated to obtain the constant of proportionality $\frac{\Gamma \left(\frac{D}{2}\right)}{\sqrt{\pi } \Gamma \left(\frac{D-1}{2}\right)}$ ; $t^k f_D(t)$ can be integrated (using properties of Beta functions, for instance) to obtain moments, showing that the variance is $1/D$ and shrinks to $0$ (whence, by Chebyshev's Theorem, the probability is becoming concentrated near $t=0$ ); and the limiting distribution is then found by considering values of the density of the standardized distribution, proportional to $f_D(t/\sqrt{D}),$ for small values of $t$ : $$\eqalign{
\log(f_D(t/\sqrt{D})) &= C(D) + \frac{D-3}{2}\log\left(1 - \frac{t^2}{D}\right) \\
&=C(D) -\left(1/2 + \frac{3}{2D}\right)t^2 + O\left(\frac{t^4}{D}\right) \\
&\to C -\frac{1}{2}t^2
}$$ where the $C$ 's represent (log) constants of integration. Evidently the rate at which this approaches normality (for which the log density equals $-\frac{1}{2}t^2$ ) is $O\left(\frac{1}{D}\right).$ This plot shows the densities of the dot product for $D=4, 6, 10$ , as standardized to unit variance, and their limiting density. The values at $0$ increase with $D$ (from blue through red, gold, and then green for the standard normal density). The density for $D=1000$ would be indistinguishable from the normal density at this resolution. | {
"source": [
"https://stats.stackexchange.com/questions/85916",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28666/"
]
} |
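A small simulation sketch, added here as an illustration rather than taken from the answer (the rand_unit() helper is my own), that checks the two headline claims: the variance of the dot product is 1/D, and u = (t+1)/2 follows a Beta((D-1)/2, (D-1)/2) distribution.
set.seed(1)
D <- 10; n <- 100000                      # dimension and number of simulated pairs
rand_unit <- function(n, D) {             # rows are uniform random unit vectors in R^D
  m <- matrix(rnorm(n * D), n, D)
  m / sqrt(rowSums(m^2))
}
t_vals <- rowSums(rand_unit(n, D) * rand_unit(n, D))   # dot products of independent pairs
var(t_vals)                                            # should be close to 1/D = 0.1
u <- (t_vals + 1) / 2
qqplot(qbeta(ppoints(1000), (D - 1) / 2, (D - 1) / 2),
       quantile(u, ppoints(1000)),
       xlab = "Beta((D-1)/2,(D-1)/2) quantiles", ylab = "simulated quantiles")
abline(0, 1)                                           # points should lie on the diagonal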
86,015 | I got this question during an interview with Amazon: 50% of all people who receive a first interview receive a second interview 95% of your friends that got a second interview felt they had a good first interview 75% of your friends that DID NOT get a second interview felt they had a good first interview If you feel that you had a good first interview, what is the probability you will receive a second interview? Can someone please explain how to solve this? I'm having trouble breaking down the word problem into math (the interview is long over now). I understand there may not be an actual numerical solution, but an explanation of how you would walk through this problem would help. edit: Well I did get a second interview. If anyone is curious I had gone with an explanation that was a combination of a bunch of the responses below: not enough info, friends not representative sample, etc and just talked through some probabilities. The question left me puzzled at the end though, thanks for all of the responses. | Say 200 people took the interview, so that 100 received a 2nd interview and 100 did not. Out of the first lot, 95 felt they had a great first interview. Out of the 2nd lot, 75 felt they had a great first interview. So in total 95 + 75 people felt they had a great first interview. Of those 95 + 75 = 170 people, only 95 actually got a 2nd interview. Thus the probability is:
$$\frac{95}{(95 + 75)}=\frac{95}{170}=\frac{19}{34}$$ Note that, as many commenters graciously point out, this computation is only justifiable if you assume that your friends form an unbiased and well distributed sampling set, which may be a strong assumption. | {
"source": [
"https://stats.stackexchange.com/questions/86015",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37418/"
]
} |
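The same counting argument, written as Bayes' rule in a few lines of R (added here for illustration; the 50/50 split and the use of friends as an unbiased sample are exactly the assumptions already flagged in the answer).
p_second       <- 0.50   # P(second interview), from the first bullet
p_good_given_2 <- 0.95   # P(felt good | got a second interview)
p_good_given_n <- 0.75   # P(felt good | no second interview)
p_good <- p_good_given_2 * p_second + p_good_given_n * (1 - p_second)  # total probability
p_good_given_2 * p_second / p_good   # Bayes' rule: 0.475 / 0.85 = 19/34, about 0.559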
86,040 | I wonder if someone knows any general rules of thumb regarding the number of bootstrap samples one should use, based on characteristics of the data (number of observations, etc.) and/or the variables included? | My experience is that statisticians won't take simulations or bootstraps seriously unless the number of iterations exceeds 1,000. MC error is a big issue that's a little under appreciated. For instance, this paper used Niter=50 to demonstrate LASSO as a feature selection tool. My thesis would have taken a lot less time to run had 50 iterations been deemed acceptable! I recommend that you should always inspect the histogram of the bootstrap samples . Their distribution should appear fairly regular. I don't think any plain numerical rule will suffice, and it would be overkill to perform, say, a double-bootstrap to assess MC error. Suppose you were estimating the mean from a ratio of two independent standard normal random variables, some statistician might recommend bootstrapping it since the integral is difficult to compute. If you have basic probability theory under your belt, you would recognize that this ratio forms a Cauchy random variable with a non-existent mean. Any other leptokurtic distribution would require several additional bootstrap iterations compared to a more regular Gaussian density counterpart. In that case, 1000, 100000, or 10000000 bootstrap samples would be insufficient to estimate that which doesn't exist. The histogram of these bootstraps would continue to look irregular and wrong. There are a few more wrinkles to that story. In particular, the bootstrap is only really justified when the moments of the data generating probability model exist. That's because you are using the empirical distribution function as a straw man for the actual probability model, and assuming they have the same mean, standard deviation, skewness, 99th percentile, etc. In short, a bootstrap estimate of a statistic and its standard error is only justified when the histogram of the bootstrapped samples appears regular beyond reasonable doubt and when the bootstrap is justified. | {
"source": [
"https://stats.stackexchange.com/questions/86040",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37765/"
]
} |
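A short sketch of the "inspect the histogram" advice, added here as an illustration: the sample sizes, the number of resamples B, and the boot_means() helper are my own choices. It bootstraps the mean of a well-behaved normal sample and the mean of a ratio of two standard normals, whose population mean does not exist.
set.seed(1)
B <- 2000                            # number of bootstrap resamples
x_nice  <- rnorm(200)                # well-behaved sample
x_ratio <- rnorm(200) / rnorm(200)   # ratio of standard normals: Cauchy, no mean
boot_means <- function(x, B) replicate(B, mean(sample(x, replace = TRUE)))
par(mfrow = c(1, 2))
hist(boot_means(x_nice, B),  breaks = 50, main = "bootstrap means, normal sample")
hist(boot_means(x_ratio, B), breaks = 50, main = "bootstrap means, Cauchy-type ratio")
The first histogram looks regular; the second stays ragged and heavy-tailed no matter how large B is made, because the quantity being estimated does not exist.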
86,057 | Here's a link to a good question regarding Textbooks on Bayesian statistics from some time ago. People suggested John Kruschke's "Doing Bayesian Data Analysis: A Tutorial Introduction with R and BUGS" as one of the best options to get an introduction to Bayesian statistics.
Meanwhile, a potentially interesting book called "Bayesian and Frequentist Regression Methods" by Jon Wakefield was released, which also provides code for R and BUGS.
Thus, they esentially both seem to cover the same topics. Question 1: If you have read the book, would you recommend it to a frequentist economics masters graduate as both an introduction to Bayesian statstics and reference book for both frequentist and bayesian approaches? Question 2: If you have read both Wakefield's and Kruschke's book, which one would you recommend better? | My experience is that statisticians won't take simulations or bootstraps seriously unless the number of iterations exceeds 1,000. MC error is a big issue that's a little under appreciated. For instance, this paper used Niter=50 to demonstrate LASSO as a feature selection tool. My thesis would have taken a lot less time to run had 50 iterations been deemed acceptable! I recommend that you should always inspect the histogram of the bootstrap samples . Their distribution should appear fairly regular. I don't think any plain numerical rule will suffice, and it would be overkill to perform, say, a double-bootstrap to assess MC error. Suppose you were estimating the mean from a ratio of two independent standard normal random variables, some statistician might recommend bootstrapping it since the integral is difficult to compute. If you have basic probability theory under your belt, you would recognize that this ratio forms a Cauchy random variable with a non-existent mean. Any other leptokurtic distribution would require several additional bootstrap iterations compared to a more regular Gaussian density counterpart. In that case, 1000, 100000, or 10000000 bootstrap samples would be insufficient to estimate that which doesn't exist. The histogram of these bootstraps would continue to look irregular and wrong. There are a few more wrinkles to that story. In particular, the bootstrap is only really justified when the moments of the data generating probability model exist. That's because you are using the empirical distribution function as a straw man for the actual probability model, and assuming they have the same mean, standard deviation, skewness, 99th percentile, etc. In short, a bootstrap estimate of a statistic and its standard error is only justified when the histogram of the bootstrapped samples appears regular beyond reasonable doubt and when the bootstrap is justified. | {
"source": [
"https://stats.stackexchange.com/questions/86057",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/39554/"
]
} |
86,269 | I learned in my linear models class that if two predictors are correlated and both are included in a model, one will be insignificant. For example, assume the size of a house and the number of bedrooms are correlated. When predicting the cost of a house using these two predictors, one of them can be dropped because they are both providing a lot of the same information. Intuitively, this makes sense, but I have a some more technical questions: How does this effect manifest itself in p-values of the regression coefficients when including only one or including both predictors in the model? How does the variance of the regression coefficients get affected by including both predictors in the model or just having one? How do I know which predictor the model will choose to be less significant? How does including only one or including both predictors change the value/variance of my forecasted cost? | The topic you are asking about is multicollinearity . You might want to read some of the threads on CV categorized under the multicollinearity tag. @whuber's answer linked above in particular is also worth your time. The assertion that "if two predictors are correlated and both are included in a model, one will be insignificant", is not correct. If there is a real effect of a variable, the probability that variable will be significant is a function of several things, such as the magnitude of the effect, the magnitude of the error variance, the variance of the variable itself, the amount of data you have, and the number of other variables in the model. Whether the variables are correlated is also relevant, but it doesn't override these facts. Consider the following simple demonstration in R : library(MASS) # allows you to generate correlated data
set.seed(4314) # makes this example exactly replicable
# generate sets of 2 correlated variables w/ means=0 & SDs=1
X0 = mvrnorm(n=20, mu=c(0,0), Sigma=rbind(c(1.00, 0.70), # r=.70
c(0.70, 1.00)) )
X1 = mvrnorm(n=100, mu=c(0,0), Sigma=rbind(c(1.00, 0.87), # r=.87
c(0.87, 1.00)) )
X2 = mvrnorm(n=1000, mu=c(0,0), Sigma=rbind(c(1.00, 0.95), # r=.95
c(0.95, 1.00)) )
y0 = 5 + 0.6*X0[,1] + 0.4*X0[,2] + rnorm(20) # y is a function of both
y1 = 5 + 0.6*X1[,1] + 0.4*X1[,2] + rnorm(100) # but is more strongly
y2 = 5 + 0.6*X2[,1] + 0.4*X2[,2] + rnorm(1000) # related to the 1st
# results of fitted models (skipping a lot of output, including the intercepts)
summary(lm(y0~X0[,1]+X0[,2]))
# Estimate Std. Error t value Pr(>|t|)
# X0[, 1] 0.6614 0.3612 1.831 0.0847 . # neither variable
# X0[, 2] 0.4215 0.3217 1.310 0.2075 # is significant
summary(lm(y1~X1[,1]+X1[,2]))
# Estimate Std. Error t value Pr(>|t|)
# X1[, 1] 0.57987 0.21074 2.752 0.00708 ** # only 1 variable
# X1[, 2] 0.25081 0.19806 1.266 0.20841 # is significant
summary(lm(y2~X2[,1]+X2[,2]))
# Estimate Std. Error t value Pr(>|t|)
# X2[, 1] 0.60783 0.09841 6.177 9.52e-10 *** # both variables
# X2[, 2] 0.39632 0.09781 4.052 5.47e-05 *** # are significant The correlation between the two variables is lowest in the first example and highest in the third, yet neither variable is significant in the first example and both are in the last example. The magnitude of the effects is identical in all three cases, and the variances of the variables and the errors should be similar (they are stochastic, but drawn from populations with the same variance). The pattern we see here is due primarily to my manipulating the $N$s for each case. The key concept to understand to resolve your questions is the variance inflation factor (VIF). The VIF is how much the variance of your regression coefficient is larger than it would otherwise have been if the variable had been completely uncorrelated with all the other variables in the model. Note that the VIF is a multiplicative factor, if the variable in question is uncorrelated the VIF=1. A simple understanding of the VIF is as follows: you could fit a model predicting a variable (say, $X_1$) from all other variables in your model (say, $X_2$), and get a multiple $R^2$. The VIF for $X_1$ would be $1/(1-R^2)$. Let's say the VIF for $X_1$ were $10$ (often considered a threshold for excessive multicollinearity), then the variance of the sampling distribution of the regression coefficient for $X_1$ would be $10\times$ larger than it would have been if $X_1$ had been completely uncorrelated with all the other variables in the model. Thinking about what would happen if you included both correlated variables vs. only one is similar, but slightly more complicated than the approach discussed above. This is because not including a variable means the model uses less degrees of freedom, which changes the residual variance and everything computed from that (including the variance of the regression coefficients). In addition, if the non-included variable really is associated with the response, the variance in the response due to that variable will be included into the residual variance, making it larger than it otherwise would be. Thus, several things change simultaneously (the variable is correlated or not with another variable, and the residual variance), and the precise effect of dropping / including the other variable will depend on how those trade off. The best way to think through this issue is based on the counterfactual of how the model would differ if the variables were uncorrelated instead of correlated, rather than including or excluding one of the variables. Armed with an understanding of the VIF, here are the answers to your questions: Because the variance of the sampling distribution of the regression coefficient would be larger (by a factor of the VIF) if it were correlated with other variables in the model, the p-values would be higher (i.e., less significant) than they otherwise would. The variances of the regression coefficients would be larger, as already discussed. In general, this is hard to know without solving for the model. Typically, if only one of two is significant, it will be the one that had the stronger bivariate correlation with $Y$. How the predicted values and their variance would change is quite complicated. It depends on how strongly correlated the variables are and the manner in which they appear to be associated with your response variable in your data. Regarding this issue, it may help you to read my answer here: Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression? | {
"source": [
"https://stats.stackexchange.com/questions/86269",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/24612/"
]
} |
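A hedged sketch, added here, of the VIF computed exactly as described in the answer, 1/(1 - R^2), using simulated data of the same form as the answer's third example.
library(MASS)
set.seed(4314)
X <- mvrnorm(n = 1000, mu = c(0, 0), Sigma = rbind(c(1.00, 0.95),
                                                   c(0.95, 1.00)))
y <- 5 + 0.6 * X[, 1] + 0.4 * X[, 2] + rnorm(1000)
r2 <- summary(lm(X[, 1] ~ X[, 2]))$r.squared   # regress X1 on the other predictor
1 / (1 - r2)                                   # VIF; roughly 1/(1 - 0.95^2), i.e. about 10
So the sampling variance of the coefficient on X1 is inflated roughly tenfold relative to the uncorrelated case, matching the threshold quoted in the answer.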
86,285 | I realize that one uses set.seed() in R for pseudo-random number generation. I also realize that using the same number, like set.seed(123) insures you can reproduce results. But what I don't get is what do the values themselves mean. I am playing with several functions, and some use set.seed(1) or set.seed(300) or set.seed(12345) . What does that number mean (if anything)- and when should I use a different one. Example, in a book I am working through- they use set.seed(12345) when creating a training set for decision trees. Then in another chapter, they are using set.seed(300) for creating a Random Forest. Just don't get the number. | The seed number you choose is the starting point used in the generation of a sequence of random numbers, which is why (provided you use the same pseudo-random number generator) you'll obtain the same results given the same seed number. As far as your second question is concerned, this short snippet from the description of the equivalent functionality in Stata might be helpful: We cannot emphasize this enough: Do not set the seed too often. To see
why this is such a bad idea, consider the limiting case: You set the
seed, draw one pseudorandom number, reset the seed, draw again, and so
continue. The pseudorandom numbers you obtain will be nothing more
than the seeds you run through a mathematical function. The results
you obtain will not pass for random unless the seeds you choose pass
for random. If you already had such numbers, why are you even
bothering to use the pseudorandom-number generator? http://www.stata.com/manuals13/rsetseed.pdf | {
"source": [
"https://stats.stackexchange.com/questions/86285",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37648/"
]
} |
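A short illustration, added here, of what the seed actually does: the same seed restarts the generator at the same point, so the stream of pseudo-random numbers is reproduced exactly, while a different seed starts the stream somewhere else.
set.seed(123); a <- runif(3)
set.seed(123); b <- runif(3)
set.seed(300); d <- runif(3)
identical(a, b)   # TRUE: same seed, identical sequence
identical(a, d)   # FALSE: a different seed gives a different sequence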
86,293 | Let's say $X$ and $Y$ are continuous and discrete random variables, respectively, with $f(x)$ and $g(y)$ being a probability density for $X$ and probability mass for $Y$, respectively. Can I say that $Z=XY$ has density function equal to $$\sum_{y} f(z/y)g(y)$$ and can I generalize this for all pairs of random variables? | The seed number you choose is the starting point used in the generation of a sequence of random numbers, which is why (provided you use the same pseudo-random number generator) you'll obtain the same results given the same seed number. As far as your second question is concerned, this short snippet from the description of the equivalent functionality in Stata might be helpful: We cannot emphasize this enough: Do not set the seed too often. To see
why this is such a bad idea, consider the limiting case: You set the
seed, draw one pseudorandom number, reset the seed, draw again, and so
continue. The pseudorandom numbers you obtain will be nothing more
than the seeds you run through a mathematical function. The results
you obtain will not pass for random unless the seeds you choose pass
for random. If you already had such numbers, why are you even
bothering to use the pseudorandom-number generator? http://www.stata.com/manuals13/rsetseed.pdf | {
"source": [
"https://stats.stackexchange.com/questions/86293",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/39592/"
]
} |
86,351 | I'm quite new to tests on binomial data, but needed to do one and now I'm not sure how to interpret the outcome. The y-variable, the response variable, is binomial and the explanatory factors are continuous. This is what I got when summarizing the outcome: glm(formula = leaves.presence ~ Area, family = binomial, data = n)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.213 -1.044 -1.023 1.312 1.344
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.3877697 0.0282178 -13.742 < 2e-16 ***
leaves.presence 0.0008166 0.0002472 3.303 0.000956 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 16662 on 12237 degrees of freedom
Residual deviance: 16651 on 12236 degrees of freedom
(314 observations deleted due to missingness)
AIC: 16655
Number of Fisher Scoring iterations: 4 There are a number of things I don't get here. What does this really say: Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.3877697 0.0282178 -13.742 < 2e-16 ***
leaves.presence 0.0008166 0.0002472 3.303 0.000956 *** And what do AIC and Number of Fisher Scoring iterations mean? > fit
Call: glm(formula = Lövförekomst ~ Areal, family = binomial, data = n)
Coefficients:
(Intercept) Areal
-0.3877697 0.0008166
Degrees of Freedom: 12237 Total (i.e. Null); 12236 Residual
(314 observations deleted due to missingness)
Null Deviance: 16660
Residual Deviance: 16650 AIC: 16650 And here, what does this mean: Coefficients:
(Intercept) Areal
-0.3877697 0.0008166 | What you have done is logistic regression . This can be done in basically any statistical software, and the output will be similar (at least in content, albeit the presentation may differ). There is a guide to logistic regression with R on UCLA's excellent statistics help website. If you are unfamiliar with this, my answer here: difference between logit and probit models , may help you understand what LR is about (although it is written in a different context). You seem to have two models presented, I will primarily focus on the top one. In addition, there seems to have been an error in copying and pasting the model or output, so I will swap leaves.presence with Area in the output to make it consistent with the model. Here is the model I'm referring to (notice that I added (link="logit") , which is implied by family=binomial ; see ?glm and ?family ): glm(formula = leaves.presence ~ Area, family = binomial(link="logit"), data = n) Let's walk through this output (notice that I changed the name of the variable in the second line under Coefficients ): Deviance Residuals:
Min 1Q Median 3Q Max
-1.213 -1.044 -1.023 1.312 1.344
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.3877697 0.0282178 -13.742 < 2e-16 ***
Area 0.0008166 0.0002472 3.303 0.000956 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 16662 on 12237 degrees of freedom
Residual deviance: 16651 on 12236 degrees of freedom
(314 observations deleted due to missingness)
AIC: 16655
Number of Fisher Scoring iterations: 4 Just as there are residuals in linear (OLS) regression, there can be residuals in logistic regression and other generalized linear models. They are more complicated when the response variable is not continuous, however. GLiMs can have five different types of residuals, but what comes listed standard are the deviance residuals. ( Deviance and deviance residuals are more advanced, so I'll be brief here; if this discussion is somewhat hard to follow, I wouldn't worry too much, you can skip it): Deviance Residuals:
Min 1Q Median 3Q Max
-1.213 -1.044 -1.023 1.312 1.344 For every data point used in your model, the deviance associated with that point is calculated. Having done this for each point, you have a set of such residuals, and the above output is simply a non-parametric description of their distribution. Next we see the information about the covariates, which is what people typically are primarily interested in: Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.3877697 0.0282178 -13.742 < 2e-16 ***
Area 0.0008166 0.0002472 3.303 0.000956 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 For a simple logistic regression model like this one, there is only one covariate ( Area here) and the intercept (also sometimes called the 'constant'). If you had a multiple logistic regression, there would be additional covariates listed below these, but the interpretation of the output would be the same. Under Estimate in the second row is the coefficient associated with the variable listed to the left. It is the estimated amount by which the log odds of leaves.presence would increase if Area were one unit higher. The log odds of leaves.presence when Area is $0$ is just above in the first row. (If you are not sufficiently familiar with log odds, it may help you to read my answer here: interpretation of simple predictions to odds ratios in logistic regression .) In the next column, we see the standard error associated with these estimates. That is, they are an estimate of how much, on average, these estimates would bounce around if the study were re-run identically, but with new data, over and over. (If you are not very familiar with the idea of a standard error, it may help you to read my answer here: how to interpret coefficient standard errors in linear regression .) If we were to divide the estimate by the standard error, we would get a quotient which is assumed to be normally distributed with large enough samples. This value is listed in under z value . Below Pr(>|z|) are listed the two-tailed p-values that correspond to those z-values in a standard normal distribution. Lastly, there are the traditional significance stars (and note the key below the coefficients table). The Dispersion line is printed by default with GLiMs, but doesn't add much information here (it is more important with count models, e.g.). We can ignore this. Lastly, we get information about the model and its goodness of fit: Null deviance: 16662 on 12237 degrees of freedom
Residual deviance: 16651 on 12236 degrees of freedom
(314 observations deleted due to missingness)
AIC: 16655
Number of Fisher Scoring iterations: 4 The line about missingness is often, um, missing. It shows up here because you had 314 observations for which either leaves.presence , Area , or both were missing. Those partial observations were not used in fitting the model. The Residual deviance is a measure of the lack of fit of your model taken as a whole, whereas the Null deviance is such a measure for a reduced model that only includes the intercept. Notice that the degrees of freedom associated with these two differs by only one. Since your model has only one covariate, only one additional parameter has been estimated (the Estimate for Area ), and thus only one additional degree of freedom has been consumed. These two values can be used in conducting a test of the model as a whole, which would be analogous to the global $F$-test that comes with a multiple linear regression model. Since you have only one covariate, such a test would be uninteresting in this case. The AIC is another measure of goodness of fit that takes into account the ability of the model to fit the data. This is very useful when comparing two models where one may fit better but perhaps only by virtue of being more flexible and thus better able to fit any data. Since you have only one model, this is uninformative. The reference to Fisher scoring iterations has to do with how the model was estimated. A linear model can be fit by solving closed form equations. Unfortunately, that cannot be done with most GLiMs including logistic regression. Instead, an iterative approach (the Newton-Raphson algorithm by default) is used. Loosely, the model is fit based on a guess about what the estimates might be. The algorithm then looks around to see if the fit would be improved by using different estimates instead. If so, it moves in that direction (say, using a higher value for the estimate) and then fits the model again. The algorithm stops when it doesn't perceive that moving again would yield much additional improvement. This line tells you how many iterations there were before the process stopped and output the results. Regarding the second model and output you list, this is just a different way of displaying results. Specifically, these Coefficients:
(Intercept) Areal
-0.3877697 0.0008166 are the same kind of estimates discussed above (albeit from a different model and presented with less supplementary information). | {
"source": [
"https://stats.stackexchange.com/questions/86351",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/40116/"
]
} |
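A small follow-up sketch, added here with simulated stand-in data (the original data set is not available, and the variable names area and leaves are hypothetical), showing the usual next step after reading such output: exponentiating the log-odds coefficients to get odds ratios and Wald confidence intervals.
set.seed(1)
area   <- runif(500, 0, 1000)                            # hypothetical predictor
leaves <- rbinom(500, 1, plogis(-0.4 + 0.0008 * area))   # hypothetical binary response
fit <- glm(leaves ~ area, family = binomial(link = "logit"))
exp(coef(fit))            # odds ratios: multiplicative change in the odds per unit of area
exp(confint.default(fit)) # Wald confidence intervals on the odds-ratio scale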
86,429 | (This is based on a question that just came to me via email; I've added some context from a previous brief conversation with the same person.) Last year I was told that the gamma distribution is heavier tailed than the lognormal, and I've since been told that's not the case. Which is heavier tailed? What are some resources I can use to explore the relationship? | The (right) tail of a distribution describes its behavior at large values. The correct object to study is not its density--which in many practical cases does not exist--but rather its distribution function $F$ . More specifically, because $F$ must rise asymptotically to $1$ for large arguments $x$ (by the Law of Total Probability), we are interested in how rapidly it approaches that asymptote: we need to investigate the behavior of its survival function $1- F(x)$ as $x \to \infty$ . Specifically, one distribution $F$ for a random variable $X$ is "heavier" than another one $G$ provided that eventually $F$ has more probability at large values than $G$ . This can be formalized: there must exist a finite number $x_0$ such that for all $x \gt x_0$ , $${\Pr}_F(X\gt x) = 1 - F(x) \gt 1 - G(x) = {\Pr}_G(X\gt x).$$ The red curve in this figure is the survival function for a Poisson $(3)$ distribution. The blue curve is for a Gamma $(3)$ distribution, which has the same variance. Eventually the blue curve always exceeds the red curve, showing that this Gamma distribution has a heavier tail than this Poisson distribution. These distributions cannot readily be compared using densities, because the Poisson distribution has no density. It is true that when the densities $f$ and $g$ exist and $f(x) \gt g(x)$ for $x \gt x_0$ then $F$ is heavier-tailed than $G$ . However, the converse is false--and this is a compelling reason to base the definition of tail heaviness on survival functions rather than densities, even if often the analysis of tails may be more easily carried out using the densities. Counter-examples can be constructed by taking a discrete distribution $H$ of positive unbounded support that nevertheless is no heavier-tailed than $G$ (discretizing $G$ will do the trick). Turn this into a continuous distribution by replacing the probability mass of $H$ at each of its support points $k$ , written $h(k)$ , by (say) a scaled Beta $(2,2)$ distribution with support on a suitable interval $[k-\varepsilon(k), k+\varepsilon(k)]$ and weighted by $h(k)$ . Given a small positive number $\delta,$ choose $\varepsilon(k)$ sufficiently small to ensure that the peak density of this scaled Beta distribution exceeds $f(k)/\delta$ . By construction, the mixture $\delta H + (1-\delta )G$ is a continuous distribution $G^\prime$ whose tail looks like that of $G$ (it is uniformly a tiny bit lower by an amount $\delta$ ) but has spikes in its density at the support of $H$ and all those spikes have points where they exceed the density of $f$ . Thus $G^\prime$ is lighter-tailed than $F$ but no matter how far out in the tail we go there will be points where its density exceeds that of $F$ . The red curve is the PDF of a Gamma distribution $G$ , the gold curve is the PDF of a lognormal distribution $F$ , and the blue curve (with spikes) is the PDF of a mixture $G^\prime$ constructed as in the counterexample. (Notice the logarithmic density axis.) 
The survival function of $G^\prime$ is close to that of a Gamma distribution (with rapidly decaying wiggles): it will eventually grow less than that of $F$ , even though its PDF will always spike above that of $F$ no matter how far out into the tails we look. Discussion Incidentally, we can perform this analysis directly on the survival functions of lognormal and Gamma distributions, expanding them around $x=\infty$ to find their asymptotic behavior, and conclude that all lognormals have heavier tails than all Gammas. But, because these distributions have "nice" densities, the analysis is more easily carried out by showing that for sufficiently large $x$ , a lognormal density exceeds a Gamma density. Let us not, however, confuse this analytical convenience with the meaning of a heavy tail. Similarly, although higher moments and their variants (such as skewness and kurtosis) say a little about the tails, they do not provide sufficient information. As a simple example, we may truncate any lognormal distribution at such a large value that any given number of its moments will scarcely change--but in so doing we will have removed its tail entirely, making it lighter-tailed than any distribution with unbounded support (such as a Gamma). A fair objection to these mathematical contortions would be to point out that behavior so far out in the tail has no practical application, because nobody would ever believe that any distributional model will be valid at such extreme (perhaps physically unattainable) values. That shows, however, that in applications we ought to take some care to identify which portion of the tail is of concern and analyze it accordingly. (Flood recurrence times, for instance, can be understood in this sense: 10-year floods, 100-year floods, and 1000-year floods characterize particular sections of the tail of the flood distribution.) The same principles apply, though: the fundamental object of analysis here is the distribution function and not its density. | {
"source": [
"https://stats.stackexchange.com/questions/86429",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/805/"
]
} |
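A short numerical sketch, added here as an illustration (the Gamma(3, 1) starting point is an arbitrary choice), that compares tails the way the answer recommends, through survival functions rather than densities, for a Gamma distribution and a lognormal with the same mean and variance.
shape <- 3; rate <- 1                      # Gamma with mean 3 and variance 3
m <- shape / rate; v <- shape / rate^2
s2 <- log(1 + v / m^2)                     # lognormal parameters matched to that
mu <- log(m) - s2 / 2                      # mean and variance
x <- c(5, 10, 20, 30, 40, 60)
surv_gamma <- pgamma(x, shape, rate, lower.tail = FALSE)   # P(X > x)
surv_lnorm <- plnorm(x, mu, sqrt(s2), lower.tail = FALSE)
signif(cbind(x, surv_gamma, surv_lnorm, ratio = surv_lnorm / surv_gamma), 3)
Far enough out, the lognormal survival probability exceeds the Gamma's by an ever-growing factor, which is the sense in which the lognormal has the heavier tail.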
86,708 | How do I calculate relative error when the true value is zero? Say I have $x_{true} = 0$ and $x_{test}$ . If I define relative error as: $$\text{relative error} = \frac{x_{true}-x_{test}}{x_{true}}$$ Then the relative error is always undefined. If instead I use the definition: $$\text{relative error} = \frac{x_{true}-x_{test}}{x_{test}}$$ Then the relative error is always 100%. Both methods seem useless. Is there another alternative? | There are many alternatives, depending on the purpose. A common one is the "Relative Percent Difference," or RPD, used in laboratory quality control procedures. Although you can find many seemingly different formulas, they all come down to comparing the difference of two values to their average magnitude: $$d_1(x,y) = \frac{x - y}{(|x| + |y|)/2} = 2\frac{x - y}{|x| + |y|}.$$ This is a signed expression, positive when $x$ exceeds $y$ and negative when $y$ exceeds $x$. Its value always lies between $-2$ and $2$. By using absolute values in the denominator it handles negative numbers in a reasonable way. Most of the references I can find, such as the New Jersey DEP Site Remediation Program Data Quality Assessment and Data Usability Evaluation Technical Guidance , use the absolute value of $d_1$ because they are interested only in the magnitude of the relative error. A Wikipedia article on Relative Change and Difference observes that $$d_\infty(x,y) = \frac{|x - y|}{\max(|x|, |y|)}$$ is frequently used as a relative tolerance test in floating point numerical algorithms. The same article also points out that formulas like $d_1$ and $d_\infty$ may be generalized to $$d_f(x,y) = \frac{x - y}{f(x,y)}$$ where the function $f$ depends directly on the magnitudes of $x$ and $y$ (usually assuming $x$ and $y$ are positive). As examples it offers their max, min, and arithmetic mean (with and without taking the absolute values of $x$ and $y$ themselves), but one could contemplate other sorts of averages such as the geometric mean $\sqrt{|x y|}$, the harmonic mean $2/(1/|x| + 1/|y|)$ and $L^p$ means $((|x|^p + |y|^p)/2)^{1/p}$. ($d_1$ corresponds to $p=1$ and $d_\infty$ corresponds to the limit as $p\to \infty$.) One might choose an $f$ based on the expected statistical behavior of $x$ and $y$. For instance, with approximately lognormal distributions the geometric mean would be an attractive choice for $f$ because it is a meaningful average in that circumstance. Most of these formulas run into difficulties when the denominator equals zero. In many applications that either is not possible or it is harmless to set the difference to zero when $x=y=0$. Note that all these definitions share a fundamental invariance property: whatever the relative difference function $d$ may be, it does not change when the arguments are uniformly rescaled by $\lambda \gt 0$: $$d(x,y) = d(\lambda x, \lambda y).$$ It is this property that allows us to consider $d$ to be a relative difference. Thus, in particular, a non-invariant function like $$d(x,y) =?\ \frac{|x-y|}{1 + |y|}$$ simply does not qualify. Whatever virtues it might have, it does not express a relative difference. The story does not end here. We might even find it fruitful to push the implications of invariance a little further. The set of all ordered pairs of real numbers $(x,y)\ne (0,0)$ where $(x,y)$ is considered to be the same as $(\lambda x, \lambda y)$ is the Real Projective Line $\mathbb{RP}^1$. In both a topological sense and an algebraic sense, $\mathbb{RP}^1$ is a circle. 
Any $(x,y)\ne (0,0)$ determines a unique line through the origin $(0,0)$. When $x\ne 0$ its slope is $y/x$; otherwise we may consider its slope to be "infinite" (and either negative or positive). A neighborhood of this vertical line consists of lines with extremely large positive or extremely large negative slopes. We may parameterize all such lines in terms of their angle $\theta = \arctan(y/x)$, with $-\pi/2 \lt \theta \le \pi/2$. Associated with every such $\theta$ is a point on the circle, $$(\xi, \eta) = (\cos(2\theta), \sin(2\theta)) = \left(\frac{x^2-y^2}{x^2+y^2}, \frac{2xy}{x^2+y^2}\right).$$ Any distance defined on the circle can therefore be used to define a relative difference. As an example of where this can lead, consider the usual (Euclidean) distance on the circle, whereby the distance between two points is the size of the angle between them. The relative difference is least when $x=y$, corresponding to $2\theta = \pi/2$ (or $2\theta = -3\pi/2$ when $x$ and $y$ have opposite signs). From this point of view a natural relative difference for positive numbers $x$ and $y$ would be the distance to this angle: $$d_S(x,y) = \left|2\arctan\left(\frac{y}{x}\right) - \pi/2\right|.$$ To first order, this is the relative distance $|x-y|/|y|$--but it works even when $y=0$. Moreover, it doesn't blow up, but instead (as a signed distance) is limited between $-\pi/2$ and $\pi/2$, as this graph indicates: This hints at how flexible the choices are when selecting a way to measure relative differences. | {
"source": [
"https://stats.stackexchange.com/questions/86708",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/40336/"
]
} |
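The three measures discussed above, written out as a small sketch; the function names are mine, and d_S is stated, as in the answer, for non-negative arguments.
d1   <- function(x, y) 2 * (x - y) / (abs(x) + abs(y))     # relative percent difference
dinf <- function(x, y) abs(x - y) / pmax(abs(x), abs(y))   # floating-point style relative tolerance
dS   <- function(x, y) abs(2 * atan2(y, x) - pi / 2)       # angular (projective) measure, x, y >= 0
d1(0.1, 0)      # 2, the largest possible magnitude of d1
dinf(0.1, 0)    # 1
dS(0.1, 0)      # pi/2: finite even though the reference value is zero
dS(1, 1.01)     # about 0.01, close to the ordinary relative error for nearby values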
86,720 | Can anyone provide a clear list of differences between log-linear regression and logistic regression? I understand the former is a simple linear regression model but I am not clear on when each should be used. | The name is a bit of a misnomer. Log-linear models were traditionally used for the analysis of data in a contingency table format. While "count data" need not necessarily follow a Poisson distribution, the log-linear model is actually just a Poisson regression model. Hence the "log" name (Poisson regression models contain a "log" link function). A "log transformed outcome variable" in a linear regression model is not a log-linear model, (neither is an exponentiated outcome variable, as "log-linear" would suggest). Both log-linear models and logistic regressions are examples of generalized linear models , in which the relationship between a linear predictor (such as log-odds or log-rates) is linear in the model variables. They are not "simple linear regression models" (or models using the usual $E[Y|X] = a + bX$ format). Despite all that, it's possible to obtain equivalent inference on associations between categorical variables using logistic regression and poisson regression. It's just that in the poisson model, the outcome variables are treated like covariates. Interestingly, you can set up some models that borrow information across groups in a way much similar to a proportional odds model, but this is not well understood and rarely used. Examples of obtaining equivalent inference in logistic and poisson regression models using R illustrated below: y <- c(0, 1, 0, 1)
x <- c(0, 0, 1, 1)
w <- c(10, 20, 30, 40)
## odds ratio for relationship between x and y from logistic regression
glm(y ~ x, family=binomial, weights=w)
## the odds ratio is the same interaction parameter between contingency table frequencies
glm(w ~ y * x, family=poisson) Interesting, lack of association between $y$ and $x$ means the odds ratio is 1 in the logistic regression model and, likewise, the interaction term is 0 in the loglinear model. Gives you an idea of how we measure conditional independence in contingency table data. | {
"source": [
"https://stats.stackexchange.com/questions/86720",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/38133/"
]
} |
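A brief follow-up check, added here (the object names logit_fit and loglin_fit are mine), that the slope of the logistic fit and the interaction term of the Poisson (log-linear) fit in the code above are numerically the same log odds ratio.
y <- c(0, 1, 0, 1); x <- c(0, 0, 1, 1); w <- c(10, 20, 30, 40)
logit_fit  <- glm(y ~ x, family = binomial, weights = w)
loglin_fit <- glm(w ~ y * x, family = poisson)
coef(logit_fit)[["x"]]      # log odds ratio: log((40/30) / (20/10)) = log(2/3)
coef(loglin_fit)[["y:x"]]   # interaction term of the log-linear model
all.equal(coef(logit_fit)[["x"]], coef(loglin_fit)[["y:x"]])   # agree up to numerical tolerance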
86,734 | I am doing master in statistics and I am advised to learn differential geometry. I would be happier to hear about statistical applications for differential geometry since this would make me motivated. Does anyone happen to know applications for differential geometry in statistics? | Two canonical books on the subject, with reviews, then two other references: Differential Geometry and Statistics , M.K. Murray, J.W. Rice Ever since the introduction by Rao in 1945 of the Fisher information metric on a family of probability distributions there has been interest among statisticians in the application of differential geometry to statistics. This interest has increased rapidly in the last couple of decades with the work of a large number of researchers. Until now an impediment to the spread of these ideas into the wider community of statisticians is the lack of a suitable text introducing the modern co-ordinate free approach to differential geometry in a manner accessible to statisticians. This book aims to fill this gap. The authors bring to the book extensive research experience in differential geometry and its application to statistics. The book commences with the study of the simplest differential manifolds - affine spaces and their relevance to exponential families and passes into the general theory, the Fisher information metric, the Amari connection and asymptotics. It culminates in the theory of the vector bundles, principle bundles and jets and their application to the theory of strings - a topic presently at the cutting edge of research in statistics and differential geometry. Methods of Information Geometry , S.-I. Amari, H. Nagaoka Information geometry provides the mathematical sciences with a new framework of analysis. It has emerged from the investigation of the natural differential geometric structure on manifolds of probability distributions, which consists of a Riemannian metric defined by the Fisher information and a one-parameter family of affine connections called the $\alpha$-connections. The duality between the $\alpha$-connection and the $(-\alpha)$-connection together with the metric play an essential role in this geometry. This kind of duality, having emerged from manifolds of probability distributions, is ubiquitous, appearing in a variety of problems which might have no explicit relation to probability theory. Through the duality, it is possible to analyze various fundamental problems in a unified perspective. The first half of this book is devoted to a comprehensive introduction to the mathematical foundation of information geometry, including preliminaries from differential geometry, the geometry of manifolds or probability distributions, and the general theory of dual affine connections. The second half of the text provides an overview of many areas of applications, such as statistics, linear systems, information theory, quantum mechanics, convex analysis, neural networks, and affine differential geometry. The book can serve as a suitable text for a topics course for advanced undergraduates and graduate students. Differential geometry in statistical inference , S.-I. Amari, O. E. Barndorff-Nielsen, R. E. Kass, S. L. Lauritzen, and C. R. Rao, IMS Lecture Notes Monogr. Ser. Volume 10, 1987, 240 pp. The Role of Differential Geometry in Statistical Theory , O. E. Barndorff-Nielsen, D. R. Cox and N. Reid, International Statistical Review / Revue Internationale de Statistique, Vol. 54, No. 1 (Apr., 1986), pp. 83-96 | {
"source": [
"https://stats.stackexchange.com/questions/86734",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36575/"
]
} |
86,739 | I was looking for an intuition for the perceptron algorithm with the offset rule, specifically why the update rule is as follows: cycle through all points until convergence: $\textbf{if }\, y^{(t)} \neq \operatorname{sign}(\theta^{T}x^{(t)} + \theta_0) \, \textbf{ then}\\
\quad \theta^{(k+1)} \leftarrow \theta^{(k)} + y^{(t)}x^{(t)}\\
\quad \theta^{(k+1)}_0 \leftarrow \theta^{(k)}_0 + y^{(t)}\\
$ When the offset is zero, I think the update rule is completely intuitive. However, without it, it seems a little odd just adding 1 or -1 to the offset. The only reason I could come up with to explain it was the following but I don't really think its very intuitive explanation and was looking for a different explanation. My non-intuitive answer: When the perceptron makes a mistake then: $y^{(t)}(\theta^{T}x + \theta_0) \leq 0$ But we can re-write the top part as: $<\theta, \theta_0> \cdot <x^{(t)}, 1> = \theta^{T}x + \theta_0$ and now if we just appeal to the original perceptron rule and change the feature vector to have the one attached at the end and the normal now includes $\theta_0$ , now the update would occur as following: $ \theta'^{(k+1)} = \theta'^{(k)} + y^{(t)}x'^{(t)}$ which is: $<\theta, \theta_0> + y^{(t)}<x^{(t)}, 1> = <\theta + y^{(t)}x^{(t)}, \theta_0+y^{(t)}>$ I think this might be correct, but even if it is, I didn't really think it was intuitive or "obvious" and was wondering if anyone had a different argument? Thanks! PS: Feel free to edit my algorithm to have indentation and spaces, I couldn't make it have indentation without losing the latex :( | Two canonical books on the subject, with reviews, then two other references: Differential Geometry and Statistics , M.K. Murray, J.W. Rice Ever since the introduction by Rao in 1945 of the Fisher information metric on a family of probability distributions there has been interest among statisticians in the application of differential geometry to statistics. This interest has increased rapidly in the last couple of decades with the work of a large number of researchers. Until now an impediment to the spread of these ideas into the wider community of statisticians is the lack of a suitable text introducing the modern co-ordinate free approach to differential geometry in a manner accessible to statisticians. This book aims to fill this gap. The authors bring to the book extensive research experience in differential geometry and its application to statistics. The book commences with the study of the simplest differential manifolds - affine spaces and their relevance to exponential families and passes into the general theory, the Fisher information metric, the Amari connection and asymptotics. It culminates in the theory of the vector bundles, principle bundles and jets and their application to the theory of strings - a topic presently at the cutting edge of research in statistics and differential geometry. Methods of Information Geometry , S.-I. Amari, H. Nagaoka Information geometry provides the mathematical sciences with a new framework of analysis. It has emerged from the investigation of the natural differential geometric structure on manifolds of probability distributions, which consists of a Riemannian metric defined by the Fisher information and a one-parameter family of affine connections called the $\alpha$-connections. The duality between the $\alpha$-connection and the $(-\alpha)$-connection together with the metric play an essential role in this geometry. This kind of duality, having emerged from manifolds of probability distributions, is ubiquitous, appearing in a variety of problems which might have no explicit relation to probability theory. Through the duality, it is possible to analyze various fundamental problems in a unified perspective. 
The first half of this book is devoted to a comprehensive introduction to the mathematical foundation of information geometry, including preliminaries from differential geometry, the geometry of manifolds or probability distributions, and the general theory of dual affine connections. The second half of the text provides an overview of many areas of applications, such as statistics, linear systems, information theory, quantum mechanics, convex analysis, neural networks, and affine differential geometry. The book can serve as a suitable text for a topics course for advanced undergraduates and graduate students. Differential geometry in statistical inference , S.-I. Amari, O. E. Barndorff-Nielsen, R. E. Kass, S. L. Lauritzen, and C. R. Rao, IMS Lecture Notes Monogr. Ser. Volume 10, 1987, 240 pp. The Role of Differential Geometry in Statistical Theory , O. E. Barndorff-Nielsen, D. R. Cox and N. Reid, International Statistical Review / Revue Internationale de Statistique, Vol. 54, No. 1 (Apr., 1986), pp. 83-96 | {
"source": [
"https://stats.stackexchange.com/questions/86739",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28986/"
]
} |
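A minimal R sketch of the algebra in the question above (the perceptron update with the offset absorbed into an augmented weight vector); the numbers are made up purely to check the identity numerically.
theta <- c(0.2, -0.1); theta0 <- 0.5          # current weights and offset (toy values)
x <- c(1.5, -2.0); y <- -1                    # a misclassified example
theta_sep <- theta + y * x                    # update theta on its own
theta0_sep <- theta0 + y                      # update the offset by +/- 1
aug <- c(theta, theta0) + y * c(x, 1)         # update the augmented vector with x' = (x, 1)
all.equal(c(theta_sep, theta0_sep), aug)      # TRUE: the two updates coincide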
86,856 | With all the media talk and hype about deep learning these days, I read some elementary stuff about it. I found that it is just another machine learning method to learn patterns from data. But my question is: where and why does this method shine? Why all the talk about it right now? I.e., what is all the fuss about? | The main purported benefits: (1) No need to hand-engineer features for non-linear learning problems (this saves time and is scalable to the future, since hand engineering is seen by some as a short-term band-aid). (2) The learnt features are sometimes better than the best hand-engineered features, and can be so complex (computer vision - e.g. face-like features) that it would take far too much human time to engineer them. (3) Can use unlabeled data to pre-train the network. Suppose we have 1000000 unlabeled images and 1000 labeled images. We can now drastically improve a supervised learning algorithm by pre-training on the 1000000 unlabeled images with deep learning. In addition, in some domains we have plenty of unlabeled data but labeled data is hard to find. An algorithm that can use this unlabeled data to improve classification is valuable. (4) Empirically, it smashed many benchmarks that were only seeing incremental improvements until the introduction of deep learning methods. (5) The same algorithm works in multiple areas with raw (perhaps minimally pre-processed) inputs. (6) It keeps improving as more data is fed to the network (assuming stationary distributions, etc.). | {
"source": [
"https://stats.stackexchange.com/questions/86856",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27779/"
]
} |
86,991 | For a linear model $y=\beta_0+x\beta+\varepsilon$, the shrinkage term is always $P(\beta) $. What is the reason that we do not shrink the bias (intercept) term $\beta_0$? Should we shrink the bias term in the neural network models? | The Elements of Statistical Learning by Hastie et al. define ridge regression as follows (Section 3.4.1, equation 3.41): $$\hat \beta{}^\mathrm{ridge} = \underset{\beta}{\mathrm{argmin}}\left\{\sum_{i=1}^N(y_i - \beta_0 - \sum_{j=1}^p x_{ij}\beta_j)^2 + \lambda \sum_{j=1}^p \beta_j^2\right\},$$ i.e. explicitly exclude the intercept term $\beta_0$ from the ridge penalty. Then they write: [...] notice that the intercept $\beta_0$ has been left out of the penalty term. Penalization of the intercept would make the procedure depend on the origin
chosen for $Y$ ; that is, adding a constant $c$ to each of the targets $y_i$ would
not simply result in a shift of the predictions by the same amount $c$ . Indeed, in the presence of the intercept term, adding $c$ to all $y_i$ will simply lead to $\beta_0$ increasing by $c$ as well and correspondingly all predicted values $\hat y_i$ will also increase by $c$ . This is not true if the intercept is penalized: $\beta_0$ will have to increase by less than $c$ . In fact, there are several nice and convenient properties of linear regression that depend on there being a proper (unpenalized) intercept term. E.g. the average value of $y_i$ and the average value of $\hat y_i$ are equal, and (consequently) the squared multiple correlation coefficient $R$ is equal to the coefficient of determination $R^2$ : $$(R)^2 = \text{cor}^2(\hat {\mathbf y}, \mathbf y) = \frac{\|\hat{\mathbf y}\|^2}{\|\mathbf y\|^2} = R^2,$$ see e.g. this thread for an explanation: Geometric interpretation of multiple correlation coefficient $R$ and coefficient of determination $R^2$ . Penalizing the intercept would lead to all of that not being true anymore. | {
"source": [
"https://stats.stackexchange.com/questions/86991",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37399/"
]
} |
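A small R sketch of the point in the answer above, using a hand-rolled ridge solution rather than any particular package; the data and the penalty value are made up for illustration.
set.seed(1)
n <- 100
x <- rnorm(n)
y <- 2 + 3 * x + rnorm(n)
lambda <- 10
ridge <- function(y, x, lambda, penalize_intercept) {
  X <- cbind(1, x)                                    # design matrix with an intercept column
  P <- diag(c(if (penalize_intercept) lambda else 0, lambda))
  solve(t(X) %*% X + P, t(X) %*% y)                   # (X'X + P)^{-1} X'y
}
b1 <- ridge(y,      x, lambda, penalize_intercept = FALSE)
b2 <- ridge(y + 10, x, lambda, penalize_intercept = FALSE)
b2 - b1   # intercept shifts by exactly 10, slope unchanged
b3 <- ridge(y,      x, lambda, penalize_intercept = TRUE)
b4 <- ridge(y + 10, x, lambda, penalize_intercept = TRUE)
b4 - b3   # intercept shifts by less than 10 (and the slope can move as well)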
87,021 | I collected data on one sample; the DV can be separated into two groups (success: yes vs. no), and I have several IVs on an interval scale. I just don't know whether to use the Wilcoxon or the Mann-Whitney test. Also, I don't know if it's necessary to use a Bonferroni correction or if that is only important for parametric tests. | The Elements of Statistical Learning by Hastie et al. define ridge regression as follows (Section 3.4.1, equation 3.41): $$\hat \beta{}^\mathrm{ridge} = \underset{\beta}{\mathrm{argmin}}\left\{\sum_{i=1}^N(y_i - \beta_0 - \sum_{j=1}^p x_{ij}\beta_j)^2 + \lambda \sum_{j=1}^p \beta_j^2\right\},$$ i.e. they explicitly exclude the intercept term $\beta_0$ from the ridge penalty. Then they write: [...] notice that the intercept $\beta_0$ has been left out of the penalty term. Penalization of the intercept would make the procedure depend on the origin
chosen for $Y$ ; that is, adding a constant $c$ to each of the targets $y_i$ would
not simply result in a shift of the predictions by the same amount $c$ . Indeed, in the presence of the intercept term, adding $c$ to all $y_i$ will simply lead to $\beta_0$ increasing by $c$ as well and correspondingly all predicted values $\hat y_i$ will also increase by $c$ . This is not true if the intercept is penalized: $\beta_0$ will have to increase by less than $c$ . In fact, there are several nice and convenient properties of linear regression that depend on there being a proper (unpenalized) intercept term. E.g. the average value of $y_i$ and the average value of $\hat y_i$ are equal, and (consequently) the squared multiple correlation coefficient $R$ is equal to the coefficient of determination $R^2$ : $$(R)^2 = \text{cor}^2(\hat {\mathbf y}, \mathbf y) = \frac{\|\hat{\mathbf y}\|^2}{\|\mathbf y\|^2} = R^2,$$ see e.g. this thread for an explanation: Geometric interpretation of multiple correlation coefficient $R$ and coefficient of determination $R^2$ . Penalizing the intercept would lead to all of that not being true anymore. | {
"source": [
"https://stats.stackexchange.com/questions/87021",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/40114/"
]
} |
87,132 | In a famous plot, Charles Minard visualised the losses of the French Army in the Russian campaign of Napoleon: (another nice example is this xkcd plot) Is there a canonical name for this type of visualisation? I'm actually looking for an R package to create such plots, but I don't even know how to look for it. EDIT: As I could not find a good package in R do do this type of plots, I have created my own, called "riverplot" -- you can download it from CRAN . Here is a simplified version of the above diagram: And an example of what other diagrams can be created with the package: | It is a map, and so cartographers would likely refer to it as a thematic map (as opposed to a topographical map). The fact that many statistical diagrams have unique names (e.g. a bar chart, a scatterplot, a dotplot) as opposed to just describing their contents can sometime be a hindrance. Both because not everything is named (as is the case here) and the same name can refer to different types of displays ( dotplot is a good example). In the Grammar of Graphics Wilkinson describes a graph as geometric elements displayed in a particular coordinate system. Here he refers to Napoleon's March as a path element whose width represents the number of troops. In this example the path is drawn in a Cartesian coordinate system whose points represent actual locations in Europe. The points are connected as a representation of the journey Napoleon and his army took, although it likely does not exactly trace the journey (nor does the wider element at the start mean the army took up more space on the road!) There are many different software programs that have the capabilities to to draw this type of diagram. Michael Friendly has a whole page of examples . Below is a slightly amended example using the ggplot2 package in R (as you requested an example in R), although it could certainly be replicated in base graphics. mydir <- "your directory here"
setwd(mydir)
library(ggplot2)
troops <- read.table("troops.txt", header=T)
#data is from Friendly link
cities <- read.table("cities.txt", header=T)
#http://www.datavis.ca/gallery/minard/ggplot2/ggplot2-minard-gallery.zip
temps <- read.table("temps.txt", header=T)
temps$date <- as.Date(strptime(temps$date,"%d%b%Y"))
xlim <- scale_x_continuous(limits = c(24, 39))
p <- ggplot(cities, aes(x = long, y = lat)) +
geom_path(
aes(size = survivors, colour = direction, group = group),
data=troops, linejoin = "round", lineend = "round"
) +
geom_point() +
geom_text(aes(label = city), hjust=0, vjust=1, size=4) +
scale_size(range = c(1, 10)) +
scale_colour_manual(values = c("grey50","red")) +
xlim + coord_fixed(ratio = 1)
p
ggsave(file = "march.png", width=16, height=4) Here are a few of the things that make this different than the original: I did not display the temperature graph at the bottom of the plot. In ggplot2 you can make a separate graph, you cannot draw lines across the separate graph windows though. Minard's original graph shows the path diminishing in steps between cities. This graph does not interpolate the losses like that, and shows abrupt changes from city to city. (Troop sizes are taken from a diary of a physician who traveled with the army I believe) This graph shows the exact location of the contemporary cities, Minard tended to bend space slightly to make the graph nicer. A more blatant example is the location of England in Minards map of migration flows . | {
"source": [
"https://stats.stackexchange.com/questions/87132",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14803/"
]
} |
87,182 | Shannon's entropy is the negative of the sum of the probabilities of each outcome multiplied by the logarithm of the probabilities of each outcome. What purpose does the logarithm serve in this equation? An intuitive or visual answer (as opposed to a deeply mathematical answer) will be given bonus points! | Shannon entropy is a quantity satisfying a set of relations. In short, the logarithm makes it grow linearly with system size and "behave like information". The first point means that the entropy of tossing a coin $n$ times is $n$ times the entropy of tossing it once: $$
- \sum_{i=1}^{2^n} \frac{1}{2^n} \log\left(\tfrac{1}{2^n}\right)
= - \sum_{i=1}^{2^n} \frac{1}{2^n} n \log\left(\tfrac{1}{2}\right)
= n \left( - \sum_{i=1}^{2} \frac{1}{2} \log\left(\tfrac{1}{2}\right) \right) = n.
$$ Or just to see how it works when tossing two different coins (perhaps unfair - with heads with probability $p_1$ and tails $p_2$ for the first coin, and $q_1$ and $q_2$ for the second) $$
-\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \log(p_i q_j)
= -\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \left( \log(p_i) + \log(q_j) \right)
$$ $$
= -\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \log(p_i)
-\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \log(q_j)
= -\sum_{i=1}^2 p_i \log(p_i)
- \sum_{j=1}^2 q_j \log(q_j)
$$ so the properties of logarithm (logarithm of product is sum of logarithms) are crucial. But also Rényi entropy has this property (it is entropy parametrized by a real number $\alpha$ , which becomes Shannon entropy for $\alpha \to 1$ ). However, here comes the second property - Shannon entropy is special, as it is related to information.
To get some intuitive feeling, you can look at $$
H = \sum_i p_i \log \left(\tfrac{1}{p_i} \right)
$$ as the average of $\log(1/p)$ . We can call $\log(1/p)$ information. Why? Because if all events happen with probability $p$ , it means that there are $1/p$ events. To tell which event has happened, we need to use $\log(1/p)$ bits
(each bit doubles the number of events we can tell apart). You may feel anxious "OK, if all events have the same probability it makes sense to use $\log(1/p)$ as a measure of information. But if they are not, why averaging information makes any sense?" - and it is a natural concern. But it turns out that it makes sense - Shannon's source coding theorem says that a string with uncorrelated letters with probabilities $\{p_i\}_i$ of length $n$ cannot be compressed (on average) to binary string shorter than $n H$ . And in fact, we can use Huffman coding to compress the string and get very close to $n H$ . See also: A nice introduction is Cosma Shalizi's Information theory entry What is entropy, really? - MathOverflow Dissecting the GZIP format | {
"source": [
"https://stats.stackexchange.com/questions/87182",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10598/"
]
} |
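A quick R check of the additivity property discussed in the answer above, with two made-up unfair coins.
H <- function(p) -sum(p * log2(p))   # Shannon entropy in bits
p <- c(0.7, 0.3)                     # first coin
q <- c(0.4, 0.6)                     # second coin
joint <- outer(p, q)                 # joint distribution of two independent tosses
H(joint)                             # entropy of the pair ...
H(p) + H(q)                          # ... equals the sum of the individual entropies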
87,188 | I have this regularized least square formula:
$$\sum\limits_{i=1}^N (\omega^T x_i - y_i)^2 + \lambda \left\|\omega\right\|^2$$ And the gradient:
$$2 \sum\limits_{i=1}^N ((\sum\limits_{j=1}^d x_{ij}\omega_j)x_{ik} - x_{ik} y_i) + 2\lambda \omega_k$$ I want to use gradient descent to find the vector w. I am using MATLAB. I thought I would be able to make two loops and calculate the ws, but my solution is very unstable and I need to use a very small learning rate a (a=0.000000001) in order not to get a NaN solution. I also thought the values of w should head towards 0 when lambda is large, but that does not happen...
My data set is a matrix X (400x64) and y (400x1). This is a two-class problem where y contains the class labels (+1 for class 1 and -1 for class 2). Here is my MATLAB code: function [ w ] = gradDecent( X, Y, a, lambda, iter )
% GRADIENT DESCENT
w = zeros(size(X(1,:)))';
for it=1:iter % For each iteration
for k = 1:size(w,1)
s = 0;
for i = 1:size(X,1)
s = s + (X(i,:)*w - Y(i))*X(i,k);
end
w(k) = w(k) - a*(2*s+2*lambda*w(k));
end
end Am I making some stupid mistakes? | Shannon entropy is a quantity satisfying a set of relations. In short, logarithm is to make it growing linearly with system size and "behaving like information". The first means that entropy of tossing a coin $n$ times is $n$ times entropy of tossing a coin once: $$
- \sum_{i=1}^{2^n} \frac{1}{2^n} \log\left(\tfrac{1}{2^n}\right)
= - \sum_{i=1}^{2^n} \frac{1}{2^n} n \log\left(\tfrac{1}{2}\right)
= n \left( - \sum_{i=1}^{2} \frac{1}{2} \log\left(\tfrac{1}{2}\right) \right) = n.
$$ Or just to see how it works when tossing two different coins (perhaps unfair - with heads with probability $p_1$ and tails $p_2$ for the first coin, and $q_1$ and $q_2$ for the second) $$
-\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \log(p_i q_j)
= -\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \left( \log(p_i) + \log(q_j) \right)
$$ $$
= -\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \log(p_i)
-\sum_{i=1}^2 \sum_{j=1}^2 p_i q_j \log(q_j)
= -\sum_{i=1}^2 p_i \log(p_i)
- \sum_{j=1}^2 q_j \log(q_j)
$$ so the properties of logarithm (logarithm of product is sum of logarithms) are crucial. But also Rényi entropy has this property (it is entropy parametrized by a real number $\alpha$ , which becomes Shannon entropy for $\alpha \to 1$ ). However, here comes the second property - Shannon entropy is special, as it is related to information.
To get some intuitive feeling, you can look at $$
H = \sum_i p_i \log \left(\tfrac{1}{p_i} \right)
$$ as the average of $\log(1/p)$ . We can call $\log(1/p)$ information. Why? Because if all events happen with probability $p$ , it means that there are $1/p$ events. To tell which event have happened, we need to use $\log(1/p)$ bits
(each bit doubles the number of events we can tell apart). You may feel anxious "OK, if all events have the same probability it makes sense to use $\log(1/p)$ as a measure of information. But if they are not, why averaging information makes any sense?" - and it is a natural concern. But it turns out that it makes sense - Shannon's source coding theorem says that a string with uncorrelated letters with probabilities $\{p_i\}_i$ of length $n$ cannot be compressed (on average) to binary string shorter than $n H$ . And in fact, we can use Huffman coding to compress the string and get very close to $n H$ . See also: A nice introduction is Cosma Shalizi's Information theory entry What is entropy, really? - MathOverflow Dissecting the GZIP format | {
"source": [
"https://stats.stackexchange.com/questions/87188",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31112/"
]
} |
87,248 | Is it correct to say that binary logistic regression is a special case of multinomial logistic regression when the outcome has 2 levels? | Short answer: Yes. Longer answer: Consider a dependent variable $y$ consisting of $J$ categories; then a multinomial logit model would model the probability that $y$ falls in category $m$ as: $
\mathrm{Pr}(y=m | x) = \frac{\exp(x\beta_m)}{\sum_{j=1}^J \exp(x\beta_j)}
$ where $\beta_1 = 0$. So if $y$ has three categories (1,2,3), you could get the three probabilities as: $
\mathrm{Pr}(y=1 | x) = \frac{\exp(x0)}{\exp(x0) + \exp(x\beta_2) + \exp(x\beta_3)} = \frac{1}{1 + \exp(x\beta_2) + \exp(x\beta_3)}
$ $
\mathrm{Pr}(y=2 | x) = \frac{\exp(x\beta_2)}{1 + \exp(x\beta_2) + \exp(x\beta_3)}
$ $
\mathrm{Pr}(y=3 | x) = \frac{\exp(x\beta_3)}{1 + \exp(x\beta_2) + \exp(x\beta_3)}
$ In your special case where $y$ has two categories this condenses to: $
\mathrm{Pr}(y=1 | x) = \frac{1}{1 + \exp(x\beta_2) }
$ $
\mathrm{Pr}(y=2 | x) = \frac{\exp(x\beta_2)}{1 + \exp(x\beta_2) }
$ This is exactly a binary logistic regression. | {
"source": [
"https://stats.stackexchange.com/questions/87248",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9671/"
]
} |
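A short R illustration of the equivalence claimed above, fitting the same simulated two-level outcome with glm() and with nnet::multinom(); the simulated data are arbitrary.
set.seed(1)
x <- rnorm(300)
y <- factor(rbinom(300, 1, plogis(-0.3 + 1.1 * x)))
fit_bin <- glm(y ~ x, family = binomial)   # binary logistic regression
library(nnet)
fit_mn <- multinom(y ~ x, trace = FALSE)   # multinomial logit with two categories
coef(fit_bin)   # log-odds of level "1" versus level "0"
coef(fit_mn)    # same contrast, so essentially the same estimates (up to optimizer tolerance)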
87,321 | I know that priors need not be proper and that the likelihood function does not integrate to 1 either. But does the posterior need to be a proper distribution? What are the implications if it is/is not? | (It is somewhat of a surprise to read the previous answers, which focus on the potential impropriety of the posterior when the prior is proper, since, as far as I can tell, the question is whether or not the posterior has to be proper (i.e., integrable to one) to be a proper (i.e., acceptable for Bayesian inference) posterior.) In Bayesian statistics, the posterior distribution has to be a probability distribution, from which one can derive moments like the posterior mean $\mathbb{E}^\pi[h(\theta)|x]$ and probability statements like the coverage of a credible region, $\mathbb{P}(\pi(\theta|x)>\kappa|x)$. If $$\int f(x|\theta)\,\pi(\theta)\,\text{d}\theta = +\infty\,,\qquad (1)$$ the posterior $\pi(\theta|x)$ cannot be normalised into a probability density and Bayesian inference simply cannot be conducted. The posterior simply does not exist in such cases. Actually, (1) must hold for all $x$'s in the sample space and not only for the observed $x$ for, otherwise, selecting the prior would depend on the data . This means that priors like Haldane's prior, $\pi(p)\propto \{1/p(1-p)\}$, on the probability $p$ of a Binomial or a Negative Binomial variable $X$ cannot be used, since the posterior is not defined for $x=0$. I know of one exception when one can consider "improper posteriors": it is found in "The Art of Data Augmentation" by David van Dyk and Xiao-Li Meng. The improper measure is over a so-called working parameter $\alpha$ such that the observation is produced by the marginal of an augmented distribution
$$f(x|\theta)=\int_{T(x^\text{aug})=x} f(x^\text{aug}|\theta,\alpha)\,\text{d}x^\text{aug}$$
and van Dyk and Meng put an improper prior $p(\alpha)$ on this working parameter $\alpha$ in order to speed up the simulation of $\pi(\theta|x)$ (which remains well-defined as a probability density) by MCMC. In another perspective, somewhat related to the answer by eretmochelys , namely a perspective of Bayesian decision theory , a setting where (1) occurs could still be acceptable if it led to optimal decisions. Namely, if $L(\delta,\theta)\ge 0$ is a loss function evaluating the impact of using the decision $\delta$, a Bayesian optimal decision under the prior $\pi$ is given by
$$\delta^\star(x)=\arg\min_\delta \int L(\delta,\theta) f(x|\theta)\,\pi(\theta)\,\text{d}\theta$$ and all that matters is that this integral is not everywhere (in $\delta$) infinite. Whether or not (1) holds is secondary for the derivation of $\delta^\star(x)$, even though properties like admissibility are only guaranteed when (1) holds. | {
"source": [
"https://stats.stackexchange.com/questions/87321",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/24848/"
]
} |
87,956 | I have a repeated-measures experiment where the dependent variable is a percentage, and I have multiple factors as independent variables. I'd like to use glmer from the R package lme4 to treat it as a logistic regression problem (by specifying family=binomial ) since it seems to accommodate this setup directly. My data looks like this: > head(data.xvsy)
foldnum featureset noisered pooldur dpoolmode auc
1 0 mfcc-ms nr0 1 mean 0.6760438
2 1 mfcc-ms nr0 1 mean 0.6739482
3 0 melspec-maxp nr075 1 max 0.8141421
4 1 melspec-maxp nr075 1 max 0.7822994
5 0 chrmpeak-tpor1d nr075 1 max 0.6547476
6 1 chrmpeak-tpor1d nr075 1 max 0.6699825 and here's the R command that I was hoping would be appropriate: glmer(auc~1+featureset*noisered*pooldur*dpoolmode+(1|foldnum), data.xvsy, family=binomial) The problem with this is that the command complains about my dependent variable not being integers: In eval(expr, envir, enclos) : non-integer #successes in a binomial glm! and the analysis of this (pilot) data gives weird answers as a result. I understand why the binomial family expects integers (yes-no counts), but it seems it should be OK to regress percentage data directly. How to do this? | In order to use a vector of proportions as the response variable with glmer(., family = binomial) , you need to set the number of trials that led to each proportion using the weights argument. For example, using the cbpp data from the lme4 package: glmer(incidence / size ~ period + (1 | herd), weights = size,
family = binomial, data = cbpp) If you do not know the total number of trials, then a binomial model is not appropriate, as is indicated in the error message. | {
"source": [
"https://stats.stackexchange.com/questions/87956",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/20486/"
]
} |
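For the answer above, an equivalent way to supply the counts is a two-column (successes, failures) response; a quick check with the same cbpp data (lme4 is assumed to be installed):
library(lme4)
data(cbpp, package = "lme4")
m_prop <- glmer(incidence / size ~ period + (1 | herd),
                weights = size, family = binomial, data = cbpp)
m_cbind <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
                 family = binomial, data = cbpp)
cbind(fixef(m_prop), fixef(m_cbind))   # the fixed-effect estimates agree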
88,065 | I see that one time out of the twenty total tests they run, $p < 0.05$, so they wrongly assume that during one of the twenty tests, the result is significant ($0.05 = 1/20$). xkcd jelly bean comic - "Significant" Title: Significant Hover text: "'So, uh, we did the green study again and got no link. It was probably a--' 'RESEARCH CONFLICTED ON GREEN JELLY BEAN/ACNE LINK; MORE STUDY RECOMMENDED!'" | Humor is a very personal thing - some people will find it amusing, but it may not be funny to everyone - and attempts to explain what makes something funny often fail to convey the funny, even if they explain the underlying point. Indeed not all xkcd's are even intended to be actually funny. Many do, however make important points in a way that's thought provoking, and at least sometimes they're amusing while doing that. (I personally find it funny, but I find it hard to clearly explain what, exactly, makes it funny to me. I think partly it's the recognition of the way that a doubtful, or even dubious result turns into a media circus ( on which see also this PhD comic ), and perhaps partly the recognition of the way some research may actually be done - if usually not consciously.) However, one can appreciate the point whether or not it tickles your funnybone. The point is about doing multiple hypothesis tests at some moderate significance level like 5%, and then publicizing the one that came out significant. Of course, if you do 20 such tests when there's really nothing of any importance going on, the expected number of those tests to give a significant result is 1. Doing a rough in-head approximation for $n$ tests at significance level $\frac{1}{n}$, there's roughly a 37% chance of no significant result, roughly 37% chance of one and roughly 26% chance of more than one (I just checked the exact answers; they're close enough to that). In the comic, Randall depicted 20 tests, so this is no doubt his point (that you expect to get one significant even when there's nothing going on). The fictional newspaper article even emphasizes the problem with the subhead "Only 5% chance of coincidence!". (If the one test that ended up in the papers was the only one done, that might be the case.) Of course, there's also the subtler issue that an individual researcher may behave much more reasonably, but the problem of rampant publicizing of false positives still occurs. Let's say that these researchers only do 5 tests, each at the 1% level, so their overall chance of discovering a bogus result like that is only about five percent. So far so good. But now imagine there are 20 such research groups, each testing whichever random subset of colors they think they have reason to try. Or 100 research groups... what chance of a headline like the one in the comic now? So more broadly, the comic may be referencing publication bias more generally. If only significant results are trumpeted, we won't hear about the dozens of groups that found nothing for green jellybeans, only the one that did. Indeed, that's one of the major points being made in this article , which has been in the news in the last few months ( e.g. here , even though it's a 2005 article). A response to that article emphasizes the need for replication. Note that if there were to be several replications of the study that was published, the "Green jellybeans linked to acne" result would be very unlikely to stand. (And indeed, the hover text for the comic makes a clever reference to the same point.) | {
"source": [
"https://stats.stackexchange.com/questions/88065",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36451/"
]
} |
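The rough percentages quoted in the answer above can be reproduced with a couple of lines of R (20 independent tests, each at level 1/20, with nothing actually going on):
n <- 20; alpha <- 1 / 20
dbinom(0, n, alpha)       # ~0.358: no "significant" colour at all
dbinom(1, n, alpha)       # ~0.377: exactly one false positive
1 - pbinom(1, n, alpha)   # ~0.264: more than one false positive
1 - (1 - alpha)^n         # ~0.642: at least one false positive somewhere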
88,125 | So far i have evaluated mn Bayes and Bernoulli, so my question is if i take the counts of the words of each document and use them for assigning the document to the particular class will it work with Multivariate Gaussian classifier (Bayes with Gaussian model)? | Humor is a very personal thing - some people will find it amusing, but it may not be funny to everyone - and attempts to explain what makes something funny often fail to convey the funny, even if they explain the underlying point. Indeed not all xkcd's are even intended to be actually funny. Many do, however make important points in a way that's thought provoking, and at least sometimes they're amusing while doing that. (I personally find it funny, but I find it hard to clearly explain what, exactly, makes it funny to me. I think partly it's the recognition of the way that a doubtful, or even dubious result turns into a media circus ( on which see also this PhD comic ), and perhaps partly the recognition of the way some research may actually be done - if usually not consciously.) However, one can appreciate the point whether or not it tickles your funnybone. The point is about doing multiple hypothesis tests at some moderate significance level like 5%, and then publicizing the one that came out significant. Of course, if you do 20 such tests when there's really nothing of any importance going on, the expected number of those tests to give a significant result is 1. Doing a rough in-head approximation for $n$ tests at significance level $\frac{1}{n}$, there's roughly a 37% chance of no significant result, roughly 37% chance of one and roughly 26% chance of more than one (I just checked the exact answers; they're close enough to that). In the comic, Randall depicted 20 tests, so this is no doubt his point (that you expect to get one significant even when there's nothing going on). The fictional newspaper article even emphasizes the problem with the subhead "Only 5% chance of coincidence!". (If the one test that ended up in the papers was the only one done, that might be the case.) Of course, there's also the subtler issue that an individual researcher may behave much more reasonably, but the problem of rampant publicizing of false positives still occurs. Let's say that these researchers only do 5 tests, each at the 1% level, so their overall chance of discovering a bogus result like that is only about five percent. So far so good. But now imagine there are 20 such research groups, each testing whichever random subset of colors they think they have reason to try. Or 100 research groups... what chance of a headline like the one in the comic now? So more broadly, the comic may be referencing publication bias more generally. If only significant results are trumpeted, we won't hear about the dozens of groups that found nothing for green jellybeans, only the one that did. Indeed, that's one of the major points being made in this article , which has been in the news in the last few months ( e.g. here , even though it's a 2005 article). A response to that article emphasizes the need for replication. Note that if there were to be several replications of the study that was published, the "Green jellybeans linked to acne" result would be very unlikely to stand. (And indeed, the hover text for the comic makes a clever reference to the same point.) | {
"source": [
"https://stats.stackexchange.com/questions/88125",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/40539/"
]
} |
88,348 | This is my first question on Cross Validated here, so please help me out even if it seems trivial :-) First of all, the question might be an outcome of language differences or perhaps me having real deficiencies in statistics. Nevertheless, here it is: In population statistics, are variation and variance the same terms? If not, what is the difference between the two? I know that variance is the square of standard deviation. I also know that it is a measure of how sparse the data is, and I know how to compute it. However, I've been following a Coursera.org course called "Model Thinking", and the lecturer clearly described variance but was constantly calling it variation. That got me confused a bit. To be fair, he always talked about computing variation of some particular instance in a population. Could someone make it clear to me if those are interchangeable, or perhaps I'm missing something? | Here's a full wikipedia article discussing this topic: http://en.wikipedia.org/wiki/Statistical_dispersion As described by others in the comments here, the short answer is: no, variation $\ne$ variance. Synonyms for "variation" are spread, dispersion, scatter and variability. It's just a way of talking about the behavior of the data in a general sense as either having a lot of density over a narrow interval (generally near the mean, but not necessarily if the distribution is skewed) or spread out over a wide range. Variance is a particular measure of variability, but others exist (and several are enumerated in the linked article). | {
"source": [
"https://stats.stackexchange.com/questions/88348",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41079/"
]
} |
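A tiny R illustration of the point above: the variance is just one of several measures of dispersion that can be computed for the same (simulated) sample.
set.seed(1)
x <- rexp(1000)   # an arbitrary skewed sample
var(x)            # variance
sd(x)             # standard deviation (square root of the variance)
IQR(x)            # interquartile range
mad(x)            # median absolute deviation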
88,461 | In simple linear regression, we have $y = \beta_0 + \beta_1 x + u$, where $u \sim iid\;\mathcal N(0,\sigma^2)$. I derived the estimator:
$$
\hat{\beta_1} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}\ ,
$$
where $\bar{x}$ and $\bar{y}$ are the sample means of $x$ and $y$. Now I want to find the variance of $\hat\beta_1$. I derived something like the following:
$$
\text{Var}(\hat{\beta_1}) = \frac{\sigma^2(1 - \frac{1}{n})}{\sum_i (x_i - \bar{x})^2}\ .
$$ The derivation is as follow: \begin{align}
&\text{Var}(\hat{\beta_1})\\
& =
\text{Var} \left(\frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2} \right) \\
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2} \text{Var}\left( \sum_i (x_i - \bar{x})\left(\beta_0 + \beta_1x_i + u_i - \frac{1}{n}\sum_j(\beta_0 + \beta_1x_j + u_j) \right)\right)\\
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2}
\text{Var}\left( \beta_1 \sum_i (x_i - \bar{x})^2 +
\sum_i(x_i - \bar{x})
\left(u_i - \sum_j \frac{u_j}{n}\right) \right)\\
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2}\text{Var}\left( \sum_i(x_i - \bar{x})\left(u_i - \sum_j \frac{u_j}{n}\right)\right)\\
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2}\;\times \\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;E\left[\left( \sum_i(x_i - \bar{x})(u_i - \sum_j \frac{u_j}{n}) - \underbrace{E\left[\sum_i(x_i - \bar{x})(u_i - \sum_j \frac{u_j}{n})\right] }_{=0}\right)^2\right]\\
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2}
E\left[\left( \sum_i(x_i - \bar{x})(u_i - \sum_j \frac{u_j}{n})\right)^2 \right] \\
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2} E\left[\sum_i(x_i - \bar{x})^2(u_i - \sum_j \frac{u_j}{n})^2 \right]\;\;\;\;\text{ , since } u_i \text{ 's are iid} \\
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2}\sum_i(x_i - \bar{x})^2E\left(u_i - \sum_j \frac{u_j}{n}\right)^2\\
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2}\sum_i(x_i - \bar{x})^2 \left(E(u_i^2) - 2 \times E \left(u_i \times (\sum_j \frac{u_j}{n})\right) + E\left(\sum_j \frac{u_j}{n}\right)^2\right)\\
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2}\sum_i(x_i - \bar{x})^2
\left(\sigma^2 - \frac{2}{n}\sigma^2 + \frac{\sigma^2}{n}\right)\\
& =
\frac{\sigma^2}{\sum_i (x_i - \bar{x})^2}\left(1 - \frac{1}{n}\right)
\end{align} Did I do something wrong here? I know if I do everything in matrix notation, I would get ${\rm Var}(\hat{\beta_1}) = \frac{\sigma^2}{\sum_i (x_i - \bar{x})^2}$. But I am trying to derive the answer without using the matrix notation just to make sure I understand the concepts. | At the start of your derivation you multiply out the brackets $\sum_i (x_i - \bar{x})(y_i - \bar{y})$, in the process expanding both $y_i$ and $\bar{y}$. The former depends on the sum variable $i$, whereas the latter doesn't. If you leave $\bar{y}$ as is, the derivation is a lot simpler, because
\begin{align}
\sum_i (x_i - \bar{x})\bar{y}
&= \bar{y}\sum_i (x_i - \bar{x})\\
&= \bar{y}\left(\left(\sum_i x_i\right) - n\bar{x}\right)\\
&= \bar{y}\left(n\bar{x} - n\bar{x}\right)\\
&= 0
\end{align} Hence \begin{align}
\sum_i (x_i - \bar{x})(y_i - \bar{y})
&= \sum_i (x_i - \bar{x})y_i - \sum_i (x_i - \bar{x})\bar{y}\\
&= \sum_i (x_i - \bar{x})y_i\\
&= \sum_i (x_i - \bar{x})(\beta_0 + \beta_1x_i + u_i )\\
\end{align} and \begin{align}
\text{Var}(\hat{\beta_1})
& = \text{Var} \left(\frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2} \right) \\
&= \text{Var} \left(\frac{\sum_i (x_i - \bar{x})(\beta_0 + \beta_1x_i + u_i )}{\sum_i (x_i - \bar{x})^2} \right), \;\;\;\text{substituting in the above} \\
&= \text{Var} \left(\frac{\sum_i (x_i - \bar{x})u_i}{\sum_i (x_i - \bar{x})^2} \right), \;\;\;\text{noting only $u_i$ is a random variable} \\
&= \frac{\sum_i (x_i - \bar{x})^2\text{Var}(u_i)}{\left(\sum_i (x_i - \bar{x})^2\right)^2} , \;\;\;\text{independence of } u_i \text{ and, Var}(kX)=k^2\text{Var}(X) \\
&= \frac{\sigma^2}{\sum_i (x_i - \bar{x})^2} \\
\end{align} which is the result you want. As a side note, I spent a long time trying to find an error in your derivation. In the end I decided that discretion was the better part of valour and it was best to try the simpler approach. However for the record I wasn't sure that this step was justified
$$\begin{align}
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2}
E\left[\left( \sum_i(x_i - \bar{x})(u_i - \sum_j \frac{u_j}{n})\right)^2 \right] \\
& =
\frac{1}{(\sum_i (x_i - \bar{x})^2)^2} E\left[\sum_i(x_i - \bar{x})^2(u_i - \sum_j \frac{u_j}{n})^2 \right]\;\;\;\;\text{ , since } u_i \text{ 's are iid} \\
\end{align}$$
because it misses out the cross terms due to $\sum_j \frac{u_j}{n}$. | {
"source": [
"https://stats.stackexchange.com/questions/88461",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/40761/"
]
} |
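A simulation check in R of the result derived in the answer above, holding the design x fixed across replications; the particular numbers are arbitrary.
set.seed(1)
n <- 50; sigma <- 2
x <- runif(n, 0, 10)                  # fixed design
beta1_hat <- replicate(20000, {
  y <- 1 + 0.5 * x + rnorm(n, sd = sigma)
  coef(lm(y ~ x))[2]
})
var(beta1_hat)                        # empirical variance of the slope estimator
sigma^2 / sum((x - mean(x))^2)        # theoretical value sigma^2 / sum_i (x_i - xbar)^2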
88,603 | I want to know why logistic regression is called a linear model. It uses a sigmoid function, which is not linear. So why is logistic regression a linear model? | The logistic regression model is of the form
$$
\mathrm{logit}(p_i) = \mathrm{ln}\left(\frac{p_i}{1-p_i}\right) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \cdots + \beta_p x_{p,i}.
$$
It is called a generalized linear model not because the estimated probability of the response event is linear, but because the logit of the estimated response probability is a linear function of the parameters. More generally, the Generalized Linear Model is of the form
$$
\mathrm{g}(\mu_i) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \cdots + \beta_p x_{p,i},
$$
where $\mu$ is the expected value of the response given the covariates. Edit: Thank you whuber for the correction. | {
"source": [
"https://stats.stackexchange.com/questions/88603",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12329/"
]
} |
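A short R check of the statement above: on the link (logit) scale the fitted model is exactly linear in the parameters, while on the probability scale it is the sigmoid of that linear predictor; the simulated data are arbitrary.
set.seed(1)
x1 <- rnorm(200); x2 <- rnorm(200)
y <- rbinom(200, 1, plogis(-0.5 + 1.2 * x1 - 0.8 * x2))
fit <- glm(y ~ x1 + x2, family = binomial)
eta <- predict(fit, type = "link")   # logit-scale predictions
all.equal(as.numeric(eta), as.numeric(cbind(1, x1, x2) %*% coef(fit)))               # TRUE: linear in the coefficients
all.equal(as.numeric(predict(fit, type = "response")), as.numeric(plogis(eta)))      # TRUE: probabilities are the sigmoid of eta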
88,819 | In MCMC methods, I keep reading about burn-in time or the number of samples to "burn" . What is this exactly, and why is it needed? Update: Once MCMC stabilizes, does it remain stable? How is the notion of burn-in time related to that of mixing time? | Burn-in is intended to give the Markov Chain time to reach its equilibrium distribution, particularly if it has started from a lousy starting point. To "burn in" a chain, you just discard the first $n$ samples before you start collecting points. The idea is that a "bad" starting point may over-sample regions that are actually very low probability under the equilibrium distribution before it settles into the equilibrium distribution. If you throw those points away, then the points which should be unlikely will be suitably rare. This page gives a nice example, but it also points out that burn-in is more of a hack/artform than a principled technique. In theory, you could just sample for a really long time or find some way to choose a decent starting point instead. Edit: Mixing time refers to how long it takes the chain to approach its steady-state, but it's often difficult to calculate directly. If you knew the mixing time, you'd just discard that many samples, but in many cases, you don't. Thus, you choose a burn-in time that is hopefully large enough instead. As far as stability--it depends. If your chain has converged, then...it's converged. However, there are also situations where the chain appears to have converged but actually is just "hanging out" in one part of the state space. For example, imagine that there are several modes, but each mode is poorly connected to the others. It might take a very long time for the sampler to make it across that gap and it will look like the chain converged right until it makes that jump. There are diagnostics for convergence, but many of them have a hard time telling true convergence and pseudo-convergence apart. Charles Geyer's chapter (#1) in the Handbook of Markov Chain Monte Carlo is pretty pessimistic about everything but running the chain for as long as you can. | {
"source": [
"https://stats.stackexchange.com/questions/88819",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2798/"
]
} |
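A minimal random-walk Metropolis sampler in R illustrating the idea in the answer above: start the chain at a deliberately bad value and compare summaries with and without discarding a burn-in; the target, proposal, and burn-in length are all arbitrary choices.
set.seed(1)
log_target <- function(x) dnorm(x, 0, 1, log = TRUE)  # standard normal target
n_iter <- 5000
x <- numeric(n_iter)
x[1] <- 50                                            # lousy starting point
for (i in 2:n_iter) {
  prop <- x[i - 1] + rnorm(1)                         # random-walk proposal
  if (log(runif(1)) < log_target(prop) - log_target(x[i - 1])) x[i] <- prop else x[i] <- x[i - 1]
}
burn <- 1000
mean(x)             # pulled away from 0 by the early excursion from x = 50
mean(x[-(1:burn)])  # much closer to the true mean of 0 once the burn-in is discarded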
88,880 | I performed principal component analysis (PCA) with R using two different functions ( prcomp and princomp ) and observed that the PCA scores differed in sign. How can it be? Consider this: set.seed(999)
prcomp(data.frame(1:10,rnorm(10)))$x
PC1 PC2
[1,] -4.508620 -0.2567655
[2,] -3.373772 -1.1369417
[3,] -2.679669 1.0903445
[4,] -1.615837 0.7108631
[5,] -0.548879 0.3093389
[6,] 0.481756 0.1639112
[7,] 1.656178 -0.9952875
[8,] 2.560345 -0.2490548
[9,] 3.508442 0.1874520
[10,] 4.520055 0.1761397
set.seed(999)
princomp(data.frame(1:10,rnorm(10)))$scores
Comp.1 Comp.2
[1,] 4.508620 0.2567655
[2,] 3.373772 1.1369417
[3,] 2.679669 -1.0903445
[4,] 1.615837 -0.7108631
[5,] 0.548879 -0.3093389
[6,] -0.481756 -0.1639112
[7,] -1.656178 0.9952875
[8,] -2.560345 0.2490548
[9,] -3.508442 -0.1874520
[10,] -4.520055 -0.1761397 Why do the signs ( +/- ) differ for the two analyses? If I was then using principal components PC1 and PC2 as predictors in a regression, i.e. lm(y ~ PC1 + PC2) , this would completely change my understanding of the effect of the two variables on y depending on which method I used! How could I then say that PC1 has e.g. a positive effect on y and PC2 has e.g. a negative effect on y ? In addition: If the sign of PCA components is meaningless, is this true for factor analysis (FA) as well? Is it acceptable to flip (reverse) the sign of individual PCA/FA component scores (or of loadings, as a column of loading matrix)? | PCA is a simple mathematical transformation. If you change the signs of the component(s), you do not change the variance that is contained in the first component. Moreover, when you change the signs, the weights ( prcomp( ... )$rotation ) also change the sign, so the interpretation stays exactly the same: set.seed( 999 )
a <- data.frame(1:10,rnorm(10))
pca1 <- prcomp( a )
pca2 <- princomp( a )
pca1$rotation shows PC1 PC2
X1.10 0.9900908 0.1404287
rnorm.10. -0.1404287 0.9900908 and pca2$loadings show Loadings:
Comp.1 Comp.2
X1.10 -0.99 -0.14
rnorm.10. 0.14 -0.99
Comp.1 Comp.2
SS loadings 1.0 1.0
Proportion Var 0.5 0.5
Cumulative Var 0.5 1.0 So, why does the interpretation stays the same? You do the PCA regression of y on component 1. In the first version ( prcomp ), say the coefficient is positive: the larger the component 1, the larger the y. What does it mean when it comes to the original variables? Since the weight of the variable 1 ( 1:10 in a) is positive, that shows that the larger the variable 1, the larger the y. Now use the second version ( princomp ). Since the component has the sign changed, the larger the y, the smaller the component 1 -- the coefficient of y< over PC1 is now negative. But so is the loading of the variable 1; that means, the larger variable 1, the smaller the component 1, the larger y -- the interpretation is the same. Possibly, the easiest way to see that is to use a biplot. library( pca3d )
pca2d( pca1, biplot= TRUE, shape= 19, col= "black" ) shows The same biplot for the second variant shows pca2d( pca2$scores, biplot= pca2$loadings[,], shape= 19, col= "black" ) As you see, the images are rotated by 180°. However, the relation between the weights / loadings (the red arrows) and the data points (the black dots) is exactly the same; thus, the interpretation of the components is unchanged. | {
"source": [
"https://stats.stackexchange.com/questions/88880",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/19744/"
]
} |
88,980 | I have run across the assertion that each bootstrap sample (or bagged tree) will contain on average approximately $2/3$ of the observations. I understand that the chance of not being selected in any of $n$ draws from $n$ samples with replacement is $(1- 1/n)^n$, which works out to approximately $1/3$ chance of not being selected. What is a mathematical explanation for why this formula always gives $\approx 1/3$ ? | More precisely, each bootstrap sample (or bagged tree) will contain $1-\frac{1}{e} \approx 0.632$ of the sample. Let's go over how the bootstrap works. We have an original sample $x_1, x_2, \ldots x_n$ with $n$ items in it. We draw items with replacement from this original set until we have another set of size $n$. From that, it follows that the probability of choosing any one item (say, $x_1$) on the first draw is $\frac{1}{n}$. Therefore, the probability of not choosing that item is $1 - \frac{1}{n}$. That's just for the first draw; there are a total of $n$ draws, all of which are independent, so the probability of never choosing this item on any of the draws is $(1-\frac{1}{n})^n$. Now, let's think about what happens when $n$ gets larger and larger. We can take the limit as $n$ goes towards infinity, using the usual calculus tricks (or Wolfram Alpha):
$$ \lim_{n \rightarrow \infty} \big(1-\frac{1}{n}\big)^n = \frac{1}{e} \approx 0.368$$ That's the probability of an item not being chosen. Subtract it from one to find the probability of the item being chosen, which gives you 0.632. | {
"source": [
"https://stats.stackexchange.com/questions/88980",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/39930/"
]
} |
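A quick R simulation of the limit derived above: the expected fraction of distinct original observations appearing in a bootstrap sample approaches 1 - 1/e.
set.seed(1)
n <- 1000
frac <- replicate(2000, {
  idx <- sample.int(n, n, replace = TRUE)
  length(unique(idx)) / n          # fraction of original points drawn at least once
})
mean(frac)        # close to 0.632
1 - (1 - 1/n)^n   # exact expectation for finite n
1 - exp(-1)       # the limit 1 - 1/e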
89,154 | I can't seem to find a general method for deriving standard errors anywhere. I've looked on google, this website and even in text books but all I can find is the formula for standard errors for the mean, variance, proportion, risk ratio, etc... and not how these formulas were arrived at. If any body could explain it in simple terms or even link me to a good resource which explains it I'd be grateful. | What you want to find is the standard deviation of the sampling distribution of the mean. I.e., in plain English, the sampling distribution is when you pick $n$ items from your population, add them together, and divide the sum by $n$. We than find the variance of this quantity and get the standard deviation by taking the square root of its variance. So, let the items that you pick be represented by the random variables $X_i, 1\le i \le n$, each of them identically distributed with variance $\sigma^2$. They are independently sampled, so the variance of the sum is just the sum of the variances.
$$
\text{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n\text{Var}\left(X_i\right) = \sum_{i=1}^n\sigma^2 = n\sigma^2
$$ Next we divide by $n$. We know in general that $\text{Var}(kY)=k^2 \text{Var}(Y)$, so putting $k=1/n$ we have $$
\text{Var}\left(\frac{\sum_{i=1}^n X_i}{n}\right) = \frac{1}{n^2}
\text{Var}\left(\sum_{i=1}^n X_i\right) = \frac{1}{n^2} n\sigma^2 = \frac{\sigma^2}{n}
$$ Finally take the square root to get the standard deviation $\dfrac{\sigma}{\sqrt{n}}$. When the population standard deviation isn't available the sample standard deviation $s$ is used as an estimate, giving $\dfrac{s}{\sqrt{n}}$. All of the above is true regardless of the distribution of the $X_i$s, but it begs the question of what do you actually want to do with the standard error? Typically you might want to construct confidence intervals, and it is then important assign a probability to constructing a confidence interval that contains the mean. If your $X_i$s are normally distributed, this is easy, because then the sampling distribution is also normally distributed. You can say 68% of samples of the mean will lie within 1 standard error of the true mean, 95% will be within 2 standard errors, etc. If you have a large enough sample (or a smaller sample and the $X_i$s are not too abnormal) then you can invoke the central limit theorem and say that the sampling distribution is approximately normally distributed, and your probability statements are also approximate. A case in point is estimating a proportion $p$, where you draw $n$ items each from a Bernouilli distribution. The variance of each $X_i$ distribution is $p(1-p)$ and hence the standard error is $\sqrt{p(1-p)/n}$ (the proportion $p$ is estimated using the data). To then jump to saying that approximately some % of samples are within so many standard deviations of the mean, you need to understand when the sampling distribution is approximately normal. Repeatedly sampling from a Bernouilli distribution is the same as sampling from a Binomial distribution, and one common rule of thumb is to approximate only when $np$ and $n(1-p)$ are $\ge5$. (See wikipedia for a more in-depth discussion on approximating binomial with normal. See here for a worked example of standard errors with a proportion.) If, on the other hand, your sampling distribution can't be approximated by a normal distribution, then the standard error is a lot less useful. For example, with a very skewed, asymmetric distribution you can't say that the same % of samples would be $\pm1$ standard deviation either side of the mean, and you might want to find a different way to associate probabilities with samples. | {
"source": [
"https://stats.stackexchange.com/questions/89154",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41499/"
]
} |
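A small R simulation of the main result in the answer above: the standard deviation of the sampling distribution of the mean matches sigma / sqrt(n); the population parameters are made up.
set.seed(1)
n <- 30; sigma <- 2
means <- replicate(10000, mean(rnorm(n, mean = 5, sd = sigma)))
sd(means)         # empirical standard deviation of the sample means
sigma / sqrt(n)   # theoretical standard error of the mean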
89,214 | In a comment to the answer of this question , it was stated that using AIC in model selection was equivalent to using a p-value of 0.154. I tried it in R, where I used a "backward" subset selection algorithm to throw out variables from a full specification. First, by sequentially throwing out the variable with the highest p-value and stopping when all p-values are below 0.154 and, secondly, by dropping the variable which results in lowest AIC when removed until no improvement can be made. It turned out that they give roughly the same results when I use a p-value of 0.154 as threshold. Is this actually true? If so, does anyone know why or can refer to a source which explains it? P.S. I couldn't ask the person commenting or write a comment, because just signed up. I am aware that this is not the most suitable approach to model selection and inference etc. | Variable selection done using statistical testing or AIC is highly problematic. If using $\chi^2$ tests, AIC uses a cutoff of $\chi^2$=2.0 which corresponds to $\alpha=0.157$. AIC when used on individual variables does nothing new; it just uses a more reasonable $\alpha$ than 0.05. A more reasonable (less inference-disturbing) $\alpha$ is 0.5. | {
"source": [
"https://stats.stackexchange.com/questions/89214",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41519/"
]
} |
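The correspondence stated above can be checked directly in R: dropping one parameter changes AIC by 2, and a chi-square(1) cutoff of 2 corresponds to an alpha of about 0.157.
1 - pchisq(2, df = 1)    # ~0.157: the significance level implied by a 2-unit AIC penalty
qchisq(1 - 0.05, df = 1) # ~3.84: the chi-square cutoff a 0.05-level test would use instead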
89,635 | I share the same birthdate as my boyfriend, same date but also same year, our births are seperated by merely 5 hours or so. I know that the chances of meeting someone who was born on the same date than me is fairly high and I know a few people with whom I share my birthday although for the little I've read about the birthday paradox, it doesn't take same year into account. We've argued before about the probabilities and I am still not satisfied. My point was that the chances are tiny if you consider the probabilities of being in a relationship (+ being successful at it for X amount of time). I find the amount of factors to take into account quite vast (up to a point, gender and age, availability, probabilities of separation in our region, etc.) Is it even possible to calculate the probabilities on something like this? How would you go about it? | For any one relationship, the odds of sharing the same month and day are approximately 1 in 365 (not exactly because of leap year and because births are not exactly evenly spaced within a year. If you add in year, it's probably something like 1 in 3000 or 4000 (most people have relationships with people relatively close in age). But that' a priori. That is, if you had asked, before meeting your current boyfriend "What are the odds that the next man I have a relationship with will be born on same day and year?" the odds would have been 1 in 3000 or so. However, post hoc (that is, while in the relationship) it's trickier because you would have noticed a lot of other coincidences too: My boyfriend was born the day before me! My boyfriend's mother has the same name as my mother!" etc etc. The odds of "some weird connection with my boyfriend" are impossible to calculate. | {
"source": [
"https://stats.stackexchange.com/questions/89635",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/39721/"
]
} |
89,747 | I'm trying to fit a multiple linear regression model to my data with a couple of input parameters, say 3. \begin{align}
F(x) &= Ax_1 + Bx_2 + Cx_3 + d \tag{i} \\
&\text{or} \\
F(x) &= (A\ B\ C)^T (x_1\ x_2\ x_3) + d \tag{ii}
\end{align} How do I explain and visualize this model? I could think of the following options: Mention the regression equation as described in $(i)$ (coefficients, constant) along with standard deviation and then a residual error plot to show the accuracy of this model. Pairwise plots of independent and dependent variables, like this: Once the coefficients are known, can the data points used to obtain equation $(i)$ be condensed to their real values. That is, the training data have new values, in the form $x$ instead of $x_1$, $x_2$, $x_3$, $\ldots$ where each of independent variable is multiplied by its respective coefficient. Then this simplified version can be visually shown as a simple regression as this: I'm confused on this in spite of going through appropriate material on this topic. Can someone please explain to me how to "explain" a multiple linear regression model and how to visually show it. | My favorite way of showing the results of a basic multiple linear regression is to first fit the model to normalized (continuous) variables. That is, z-transform the $X$ s by subtracting the mean and dividing by the standard deviation, then fit the model and estimate the parameters. When the variables are transformed in this way, the estimated coefficients are 'standardized' to have unit $\Delta Y/\Delta sd(X)$ . In this way, the distance the coefficients are from zero ranks their relative 'importance' and their CI gives the precision. I think it sums up the relationships rather well and offers a lot more information than the coefficients and p.values on their natural and often disparate numerical scales. An example is below: EDIT : Another possibility is to use an 'added variable plot' (i.e. plot the partial regressions). This gives another perspective in that it shows the bivariate relations between $Y$ and $X_i$ AFTER THE OTHER VARIABLES ARE ACCOUNTED FOR. For example, the partial regressions of $Y \sim X_1 + X_2 + X_3$ would give bivariate relations between $X_i$ against the residuals of $Y$ after regressing against the other two terms. You would go on to do this for each variable. Function avPlots() from library car gives these plots from a fitted lm object. An example is below: | {
"source": [
"https://stats.stackexchange.com/questions/89747",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/24003/"
]
} |
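A base-R sketch of the first suggestion in the answer above (standardize the variables, then display the coefficients with their confidence intervals as a dot-and-whisker plot); the simulated data and plotting details are just placeholders.
set.seed(1)
n <- 200
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n, sd = 10), x3 = rnorm(n))
d$y <- 1 + 2 * d$x1 + 0.1 * d$x2 - 1.5 * d$x3 + rnorm(n)
dz <- data.frame(scale(d))                        # z-transform every variable
fit <- lm(y ~ x1 + x2 + x3, data = dz)
est <- coef(fit)[-1]
ci <- confint(fit)[-1, ]
plot(est, seq_along(est), xlim = range(ci), yaxt = "n",
     xlab = "standardized coefficient", ylab = "", pch = 19)
segments(ci[, 1], seq_along(est), ci[, 2], seq_along(est))   # CI whiskers
axis(2, at = seq_along(est), labels = names(est), las = 1)
abline(v = 0, lty = 2)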
89,752 | Can anybody please help me convert yearly data into monthly and quarterly data? | My favorite way of showing the results of a basic multiple linear regression is to first fit the model to normalized (continuous) variables. That is, z-transform the $X$ s by subtracting the mean and dividing by the standard deviation, then fit the model and estimate the parameters. When the variables are transformed in this way, the estimated coefficients are 'standardized' to have unit $\Delta Y/\Delta sd(X)$ . In this way, the distance the coefficients are from zero ranks their relative 'importance' and their CI gives the precision. I think it sums up the relationships rather well and offers a lot more information than the coefficients and p.values on their natural and often disparate numerical scales. An example is below: EDIT : Another possibility is to use an 'added variable plot' (i.e. plot the partial regressions). This gives another perspective in that it shows the bivariate relations between $Y$ and $X_i$ AFTER THE OTHER VARIABLES ARE ACCOUNTED FOR. For example, the partial regressions of $Y \sim X_1 + X_2 + X_3$ would give bivariate relations between $X_i$ against the residuals of $Y$ after regressing against the other two terms. You would go on to do this for each variable. Function avPlots() from library car gives these plots from a fitted lm object. An example is below: | {
"source": [
"https://stats.stackexchange.com/questions/89752",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41793/"
]
} |
89,809 | I found this tutorial, which suggests that you should run the scale function on features before clustering (I believe that it converts data to z-scores). I'm wondering whether that is necessary. I'm asking mostly because there's a nice elbow point when I don't scale the data, but it disappears when it's scaled. :) | The issue is what represents a good measure of distance between cases. If you have two features, one where the differences between cases are large and the other where they are small, are you prepared to have the former as almost the only driver of distance? So for example if you clustered people on their weights in kilograms and heights in metres, is a 1kg difference as significant as a 1m difference in height? Does it matter that you would get different clusterings on weights in kilograms and heights in centimetres? If your answers are "no" and "yes" respectively then you should probably scale. On the other hand, if you were clustering Canadian cities based on distances east/west and distances north/south then, although there will typically be much bigger differences east/west, you may be happy just to use unscaled distances in either kilometres or miles (though you might want to adjust degrees of longitude and latitude for the curvature of the earth). | {
"source": [
"https://stats.stackexchange.com/questions/89809",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28729/"
]
} |
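A small R sketch of the height/weight example in the answer above: k-means on the raw variables is dominated by the large-scale variable, so the clustering changes once the features are standardized; the data are simulated.
set.seed(1)
d <- data.frame(height_m = rnorm(100, 1.7, 0.1),   # small numeric range
                weight_kg = rnorm(100, 75, 12))    # much larger numeric range
km_raw <- kmeans(d, centers = 3, nstart = 25)
km_std <- kmeans(scale(d), centers = 3, nstart = 25)
table(km_raw$cluster, km_std$cluster)   # the two partitions generally disagree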
90,004 | In many online games, when players complete a difficult task, sometimes a special reward is given which everyone who completed the task can use. this is usually a mount (method of transportation) or another vanity item (items which don't improve the performance of the character and are mainly used for appearance customization). When such a reward is given, the most common way of determining who gets the reward is through random numbers. The game usually has a special command which generates a random (likely pseudorandom, not crypto secure random) number between 1 and 100 (sometimes the player can choose another spread, but 100 is the most common). Each player uses this command, all the players can see who rolled what, and the item is awarded to the person who rolls highest. Most games even have a a built-in system where players just press a button and once everyone pressed their button, the game does the rest automatically. Sometimes, some players generate the same high number and noone beats them. this is usually resolved by those players regenerating their numbers, until there is a unique highest number. My question is the following: Assume a random number generator which can generate any number between 1 and 100 with the same probability. Assume that you have a group of 25 players who each generate 1 number with such a random number generator (each with their own seed). You'll have 25 numbers between 1 and 100, with no limitations on how many players roll a specific numbder and no relation between the numbers. What is the chance that the highest generated number is generated by more than 1 player? In other words, what is the likelihood of a tie? | Let $x$ be the top end of your range, $x=100$ in your case. $n$ be the total number of draws, $n=25$ in your case. For any number $y\le x$, the number of sequences of $n$ numbers with each number in the sequence $\le y$ is $y^n$. Of these sequence, the number containing no $y$s is $(y-1)^n$, and the number containing one $y$ is $n(y-1)^{n-1}$. Hence the number of sequences with two or more $y$s is
$$y^n - (y-1)^n - n(y-1)^{n-1}$$
Summing over all possible values of the highest number $y$, the total number of sequences of $n$ numbers whose maximum appears at least twice is
\begin{align}
\sum_{y=1}^x \left(y^n - (y-1)^n - n(y-1)^{n-1}\right)
&= \sum_{y=1}^x y^n - \sum_{y=1}^x(y-1)^n - \sum_{y=1}^xn(y-1)^{n-1}\\
&= x^n - n\sum_{y=1}^x(y-1)^{n-1}\\
&= x^n - n\sum_{y=1}^{x-1}y^{n-1}\\
\end{align} The total number of sequences is simply $x^n$. All sequences are equally likely and so the probability is
$$ \frac{x^n - n\sum_{y=1}^{x-1}y^{n-1}}{x^n}$$ With $x=100,n=25$ I make the probability 0.120004212454. I've tested this using the following Python program, which counts the sequences that match manually (for low $x,n$), simulates and calculates using the above formula. import itertools
import numpy.random as np
def countinlist(x, n):
count = 0
total = 0
for perm in itertools.product(range(1, x+1), repeat=n):
total += 1
if perm.count(max(perm)) > 1:
count += 1
print "Counting: x", x, "n", n, "total", total, "count", count
def simulate(x,n,N):
count = 0
for i in range(N):
perm = np.randint(x, size=n)
m = max(perm)
if sum(perm==m) > 1:
count += 1
print "Simulation: x", x, "n", n, "total", N, "count", count, "prob", count/float(N)
x=100
n=25
N = 1000000 # number of trials in simulation
#countinlist(x,n) # only call this for reasonably small x and n!!!!
simulate(x,n,N)
formula = x**n - n*sum([i**(n-1) for i in range(x)])
print "Formula count", formula, "out of", x**n, "probability", float(formula) / x**n This program outputted Simulation: x 100 n 25 total 1000000 count 120071 prob 0.120071
Formula count 12000421245360277498241319178764675560017783666750 out of 100000000000000000000000000000000000000000000000000 probability 0.120004212454 | {
"source": [
"https://stats.stackexchange.com/questions/90004",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41767/"
]
} |
90,490 | I don't mean a value close to zero (rounded to zero by some statistical software) but rather a value of literally zero. If so, would it mean that the probability of getting the obtained data assuming the null hypothesis is true is also zero? What are (some examples) of statistical tests that can return results of this sort? Edited the second sentence to remove the phrase "the probability of the null hypothesis". | It will be the case that if you observed a sample that's impossible under the null (and if the statistic is able to detect that), you can get a p-value of exactly zero. That can happen in real world problems. For example, if you do an Anderson-Darling test of goodness of fit of data to a standard uniform with some data outside that range - e.g. where your sample is (0.430, 0.712, 0.885, 1.08) - the p-value is actually zero (but a Kolmogorov-Smirnov test by contrast would give a p-value that isn't zero, even though we can rule it out by inspection). Likelihood ratio tests will likewise give a p-value of zero if the sample is not possible under the null. As whuber mentioned in comments, hypothesis tests don't evaluate the probability of the null hypothesis (or the alternative). We don't (can't, really) talk about the probability of the null being true in that framework (we can do it explicitly in a Bayesian framework, though -- but then we cast the decision problem somewhat differently from the outset). | {
"source": [
"https://stats.stackexchange.com/questions/90490",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9162/"
]
} |
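To make the answer above concrete, here is a small base-R sketch (not part of the original answer) using the sample quoted there, (0.430, 0.712, 0.885, 1.08), against a standard uniform null. The likelihood of the sample under the null is exactly zero because 1.08 lies outside [0, 1], so a likelihood ratio test yields p = 0, while the Kolmogorov-Smirnov test still returns a nonzero p-value.
x <- c(0.430, 0.712, 0.885, 1.08)
prod(dunif(x, min = 0, max = 1))  # likelihood under the null: exactly 0, because 1.08 > 1
ks.test(x, "punif")               # KS test: small p-value, but not zero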
90,605 | Why are the geometric distribution and hypergeometric distribution called "geometric" and "hypergeometric" respectively? Is it because their pmfs take some special form? Thanks! | Yes, the terms refer to the probability mass functions (pmfs). 2,500 years ago, Euclid (in Books VIII and IX of his Elements ) studied sequences of lengths having common proportions. At some point such sequences came to be known as "geometric progressions" (although the term "geometric" could for a similar reason just as easily have been applied to many other regular series, including those now called "arithmetic"). The probability mass function of a geometric distribution with parameter $p$ forms a geometric progression $$p, p(1-p), p(1-p)^2, \ldots, p(1-p)^n, \ldots.$$ Here the common proportion is $1-p$. Several hundred years ago a vast generalization of such progressions became important in the studies of elliptic curves, differential equations, and many other deeply interconnected areas of mathematics. The generalization supposes that the relative proportions among successive terms at positions $k$ and $k+1$ could vary, but it limits the nature of that variation: the proportions must be a given rational function of $k$. Because these go "over" or "beyond" the geometric progression (for which the rational function is constant), they were termed hypergeometric from the ancient Greek prefix $\grave\upsilon^\prime\pi\varepsilon\rho$ ("hyper"). The probability mass function of a hypergeometric distribution with parameters $N, K,$ and $n$ has the form $$p(k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$$ for suitable $k$. The ratio of successive probabilities therefore equals $$\frac{p(k+1)}{p(k)} = \frac{(K-k)(n-k)}{(k+1)(N-K-n+k+1)},$$ a rational function of $k$ of degree $(2,2)$. This places the probabilities into a (particular kind of) hypergeometric progression. | {
"source": [
"https://stats.stackexchange.com/questions/90605",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1005/"
]
} |
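As a quick numerical check of the ratio formula in the answer above (not part of the original answer), R's dgeom and dhyper can be used directly; note that R parameterizes the geometric pmf as dgeom(k, p) = p(1-p)^k and the hypergeometric as dhyper(k, K, N - K, n).
# geometric pmf: successive probabilities form a geometric progression with ratio 1 - p
p <- 0.3
dgeom(0:5, p) / c(1, dgeom(0:4, p))   # first entry is p itself; the rest equal 1 - p = 0.7
# hypergeometric pmf: ratio of successive probabilities is a rational function of k
N <- 20; K <- 8; n <- 6; k <- 0:(n - 1)
ratio.pmf     <- dhyper(k + 1, K, N - K, n) / dhyper(k, K, N - K, n)
ratio.formula <- (K - k) * (n - k) / ((k + 1) * (N - K - n + k + 1))
all.equal(ratio.pmf, ratio.formula)   # TRUE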
90,659 | I have two classifiers A: naive Bayesian network B: tree (singly-connected) Bayesian network In terms of accuracy and other measures, A performs comparatively worse than B. However, when I use the R packages ROCR and AUC to perform ROC analysis, it turns out that the AUC for A is higher than the AUC for B. Why is this happening? The true positive (tp), false positive (fp), false negative (fn), true negative (tn), sensitivity (sen), specificity (spec), positive predictive value (ppv), negative predictive value (npv), and accuracy (acc) for A and B are as follows. +------+---------+---------+
| | A | B |
+------+---------+---------+
| tp | 3601 | 769 |
| fp | 0 | 0 |
| fn | 6569 | 5918 |
| tn | 15655 | 19138 |
| sens | 0.35408 | 0.11500 |
| spec | 1.00000 | 1.00000 |
| ppv | 1.00000 | 1.00000 |
| npv | 0.70442 | 0.76381 |
| acc | 0.74563 | 0.77084 |
+------+---------+---------+ With the exception of sens and ties (spec and ppv) on the marginals (excluding tp, fp, fn, and tn), B seems to perform better than A. When I compute the AUC for sens (y-axis) vs 1-spec (x-axis) aucroc <- auc(roc(data$prediction,data$labels)); here is the AUC comparison. +----------------+---------+---------+
| | A | B |
+----------------+---------+---------+
| sens vs 1-spec | 0.77540 | 0.64590 |
| sens vs spec | 0.70770 | 0.61000 |
+----------------+---------+---------+ So here are my questions: Why is the AUC for A better than B, when B "seems" to outperform A with respect to accuracy? So, how do I really judge / compare the classification performances of A and B? I mean, do I use the AUC value? Do I use the acc value, and if so why? Furthermore, when I apply proper scoring rules to A and B, B outperforms A in terms of log loss, quadratic loss, and spherical loss (p < 0.001). How do these weigh in on judging classification performance with respect to AUC? The ROC graph for A looks very smooth (it is a curved arc), but the ROC graph for B looks like a set of connected lines. Why is this? As requested, here are the plots for model A. Here are the plots for model B. Here are the histogram plots of the distribution of the probabilities for A and B. (breaks are set to 20). Here is the scatter plot of the probabilities of B vs A. | Improper scoring rules such as proportion classified correctly, sensitivity, and specificity are not only arbitrary (in choice of threshold) but are improper, i.e., they have the property that maximizing them leads to a bogus model, inaccurate predictions, and selecting the wrong features. It is good that they disagree with proper scoring (log-likelihood; logarithmic scoring rule; Brier score) rules and the $c$-index (a semi-proper scoring rule - area under ROC curve; concordance probability; Wilcoxon statistic; Somers' $D_{xy}$ rank correlation coefficient); this gives us more confidence in proper scoring rules. | {
"source": [
"https://stats.stackexchange.com/questions/90659",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11832/"
]
} |
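For readers who want to act on the recommendation above, here is a minimal sketch (not from the original answer) of computing the proper scores it mentions from predicted probabilities in R; the label vector and the two probability vectors are hypothetical stand-ins for classifier output.
y  <- c(1, 0, 1, 1, 0, 0, 1, 0)                       # true classes
pA <- c(0.9, 0.2, 0.6, 0.8, 0.1, 0.4, 0.7, 0.3)       # predicted P(class = 1), model A
pB <- c(0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.4)       # predicted P(class = 1), model B
brier   <- function(p, y) mean((p - y)^2)                           # quadratic (Brier) score, lower is better
logloss <- function(p, y) -mean(y * log(p) + (1 - y) * log(1 - p))  # logarithmic score, lower is better
c(A = brier(pA, y),   B = brier(pB, y))
c(A = logloss(pA, y), B = logloss(pB, y))
c(A = mean((pA > 0.5) == y), B = mean((pB > 0.5) == y))  # accuracy at an arbitrary 0.5 cutoff, for contrast
Unlike accuracy, the two proper scores need no threshold and reward well-calibrated probabilities.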
90,668 | I'm working through the examples in Kruschke's Doing Bayesian Data Analysis , specifically the Poisson exponential ANOVA in ch. 22, which he presents as an alternative to frequentist chi-square tests of independence for contingency tables. I can see how we get information about interactions that occur more or less frequently than would be expected if the variables were independent (i.e., when the HDI excludes zero). My question is how can I compute or interpret an effect size in this framework? For example, Kruschke writes "the combination of blue eyes with black hair happens less frequently than would be expected if eye color and hair color were independent", but how can we describe the strength of that association? How can I tell which interactions are more extreme than others? If we did a chi-square test of these data we might compute Cramér's V as a measure of the overall effect size. How do I express effect size in this Bayesian context? Here's the self-contained example from the book (coded in R ), just in case the answer is hidden from me in plain sight ... df <- structure(c(20, 94, 84, 17, 68, 7, 119, 26, 5, 16, 29,
14, 15, 10, 54, 14), .Dim = c(4L, 4L),
.Dimnames = list(c("Black", "Blond",
"Brunette", "Red"), c("Blue", "Brown", "Green", "Hazel")))
df
Blue Brown Green Hazel
Black 20 68 5 15
Blond 94 7 16 10
Brunette 84 119 29 54
Red 17 26 14 14 Here's the frequentist output, with effect size measures (not in the book): vcd::assocstats(df)
X^2 df P(> X^2)
Likelihood Ratio 146.44 9 0
Pearson 138.29 9 0
Phi-Coefficient : 0.483
Contingency Coeff.: 0.435
Cramer's V : 0.279 Here's the Bayesian output, with HDIs and cell probabilities (directly from the book): # prepare to get Krushkes' R codes from his web site
Krushkes_codes <- c(
"http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/Programs/openGraphSaveGraph.R",
"http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/Programs/PoissonExponentialJagsSTZ.R")
# download Krushkes' scripts to working directory
lapply(Krushkes_codes, function(i) download.file(i, destfile = basename(i)))
# run the code to analyse the data and generate output
lapply(Krushkes_codes, function(i) source(basename(i))) And here are plots of the posterior of Poisson exponential model applied to the data: And plots of the posterior distribution on estimated cell probabilities: | Improper scoring rules such as proportion classified correctly, sensitivity, and specificity are not only arbitrary (in choice of threshold) but are improper, i.e., they have the property that maximizing them leads to a bogus model, inaccurate predictions, and selecting the wrong features. It is good that they disagree with proper scoring (log-likelihood; logarithmic scoring rule; Brier score) rules and the $c$-index (a semi-proper scoring rule - area under ROC curve; concordance probability; Wilcoxon statistic; Somers' $D_{xy}$ rank correlation coefficient); this gives us more confidence in proper scoring rules. | {
"source": [
"https://stats.stackexchange.com/questions/90668",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7744/"
]
} |
90,754 | I have data on a large Italian firm's employees over ten years and I would like to see how the gender gap in male-female earnings has changed over time. For this purpose I run pooled OLS:
$$
y_{it} = X'_{it}\beta + \delta {\rm male}_i + \sum^{10}_{t=1}\gamma_t d_t + \varepsilon_{it}
$$
where $y$ is log earnings per year, $X_{it}$ includes covariates that differ by individual and time, $d_t$ are year dummies and ${\rm male}_i$ equals one if a worker is male and is zero otherwise. Now I have a concern that some of the covariates may be correlated with unobserved fixed effects. But when I use the fixed effects (within) estimator or first differences I lose the gender dummy because this variable does not change over time. I don't want to use the random effects estimator because I often hear people saying that it imposes assumptions that are very unrealistic and are unlikely to hold. Are there any ways to keep the gender dummy and control for fixed effects at the same time? If there is a way, do I need to cluster or take care of other problems with the errors for hypothesis tests on the gender variable? | There are a few potential ways for you to keep the gender dummy in a fixed effects regression. Within Estimator Suppose you have a model similar to your pooled OLS model, which is
$$y_{it} = \beta_1 + \sum^{10}_{t=2} \beta_t d_t + \gamma_1 (male_i) + \sum^{10}_{t=1} \gamma_t (d_t \cdot male_i) + X'_{it}\theta + c_i + \epsilon_{it}$$
where the variables are as before. Now note that $\beta_1$ and $\beta_1 + \gamma_1 (male_i)$ cannot be identified because the within estimator cannot distinguish them from the fixed effect $c_i$. Given that $\beta_1$ is the intercept for the base year $t=1$, $\gamma_1$ is the gender effect on earnings in this period. What we can identify in this case are $\gamma_2, ..., \gamma_{10}$ because they are interacted with your time dummies and they measure the differences in the partial effects of your gender variable relative to the first time period. This means if you observe an increase in your $\gamma_2,...,\gamma_{10}$ over time this is an indication for a widening of the earnings gap between men and women. First-Difference Estimator If you want to know the overall effect of the difference between men and women over time, you can try the following model:
$$y_{it} = \beta_1 + \sum^{10}_{t=2} \beta_t d_t + \gamma (t\cdot male_i) + X'_{it}\theta + c_i + \epsilon_{it}$$
where the variable $t = 1, 2,...,10$ is interacted with the time-invariant gender dummy. Now if you take first differences $\beta_1$ and $c_i$ drop out and you get
$$y_{it} - y_{i(t-1)} = \sum^{10}_{t=3} \beta_t (d_t - d_{(t-1)}) + \gamma (t\cdot male_i - [(t-1)male_i]) + (X'_{it}-X'_{i(t-1)})\theta + \epsilon_{it}-\epsilon_{i(t-1)}$$
Then $\gamma(t\cdot male_i - [(t-1)male_i]) = \gamma[(t - (t-1))\cdot male_i] = \gamma (male_i)$ and you can identify the gender difference in earnings $\gamma$. So the final regression equation will be:
$$\Delta y_{it} = \sum_{t=3}^{10}\beta_t \Delta d_t + \gamma(male_i) + \Delta X'_{it}\theta + \Delta \epsilon_{it}$$
and you get your effect of interest. The nice thing is that this is easily implemented in any statistical software but you lose a time period. Hausman-Taylor Estimator This estimator distinguishes between regressors that you can assume to be uncorrelated with the fixed effect $c_i$ and those that are potentially correlated with it. It further distinguishes between time-varying and time-invariant variables. Let $1$ denote variables that are uncorrelated with $c_i$ and $2$ those who are and let's say your gender variable is the only time-invariant variable. The Hausman-Taylor estimator then applies the random effects transformation:
$$\tilde{y}_{it} = \tilde{X}'_{1it} + \tilde{X}'_{2it} + \gamma (\widetilde{male}_{i2}) + \tilde{c}_i + \tilde{\epsilon}_{it}$$
where tilde notation means $\tilde{X}_{1it} = X_{1it} - \hat{\theta}_i \overline{X}_{1i}$ where $\hat{\theta}_i$ is used for the random effects transformation and $\overline{X}_{1i}$ is the time-average over each individual. This isn't like the usual random effects estimator that you wanted to avoid because group $2$ variables are instrumented for in order to remove the correlation with $c_i$. For $\tilde{X}_{2it}$ the instrument is $X_{2it} - \overline{X}_{2i}$. The same is done for the time-invariant variables, so if you specify the gender variable to be potentially correlated with the fixed effect it gets instrumented with $\overline{X}_{1i}$, so you must have more time-varying than time-invariant variables. All of this might sound a little complicated but there are canned packages for this estimator. For instance, in Stata the corresponding command is xthtaylor . For further information on this method you could read Cameron and Trivedi (2009) "Microeconometrics Using Stata". Otherwise you can just stick with the two previous methods which are a bit easier. Inference For your hypothesis tests there is not much that needs to be considered other than what you would need to do anyway in a fixed effects regression. You need to take care for the autocorrelation in the errors, for example by clustering on the individual ID variable. This allows for an arbitrary correlation structure among clusters (individuals) which deals with autocorrelation. For a reference see again Cameron and Trivedi (2009). | {
"source": [
"https://stats.stackexchange.com/questions/90754",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/42263/"
]
} |
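For readers working in R rather than Stata, here is a minimal sketch of the first-difference approach described above using the plm package. The data frame d and its column names (id, year, logearn, male, x1) are hypothetical placeholders; this is a sketch under those assumptions, not a drop-in solution.
library(plm)
library(lmtest)
pd <- pdata.frame(d, index = c("id", "year"))
# t * male_i; assumes consecutive yearly observations so its first difference equals male_i
pd$t_male <- as.numeric(as.character(pd$year)) * pd$male
fd <- plm(logearn ~ factor(year) + t_male + x1, data = pd, model = "fd")
summary(fd)                                        # coefficient on t_male estimates the gender gap
coeftest(fd, vcov = vcovHC(fd, cluster = "group")) # standard errors clustered by individual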
90,779 | I have some doubts about which performance measure to use, area under the ROC curve (TPR as a function of FPR) or area under the precision-recall curve (precision as a function of recall). My data is imbalanced, i.e., the number of negative instances is much larger than positive instances. I am using the output prediction of weka, a sample is: inst#,actual,predicted,prediction
1,2:0,2:0,0.873
2,2:0,2:0,0.972
3,2:0,2:0,0.97
4,2:0,2:0,0.97
5,2:0,2:0,0.97
6,2:0,2:0,0.896
7,2:0,2:0,0.973 And I am using pROC and ROCR R libraries. | The question is quite vague so I am going to assume you want to choose an appropriate performance measure to compare different models. For a good overview of the key differences between ROC and PR curves, you can refer to the following paper: The Relationship Between Precision-Recall and ROC Curves by Davis and Goadrich . To quote Davis and Goadrich: However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance. ROC curves plot FPR vs TPR. To be more explicit: $$FPR = \frac{FP}{FP+TN}, \quad TPR=\frac{TP}{TP+FN}.$$ PR curves plot precision versus recall (TPR), or more explicitly: $$recall = \frac{TP}{TP+FN} = TPR,\quad precision = \frac{TP}{TP+FP}$$ Precision is directly influenced by class (im)balance since $FP$ is affected, whereas TPR only depends on positives. This is why ROC curves do not capture such effects. Precision-recall curves are better at highlighting differences between models for highly imbalanced data sets. If you want to compare different models in imbalanced settings, area under the PR curve will likely exhibit larger differences than area under the ROC curve. That said, ROC curves are much more common (even if they are less suited). Depending on your audience, ROC curves may be the lingua franca so using those is probably the safer choice. If one model completely dominates another in PR space (e.g. always has higher precision over the entire recall range), it will also dominate in ROC space. If the curves cross in either space they will also cross in the other. In other words, the main conclusions will be similar no matter which curve you use. Shameless advertisement . As an additional example, you could have a look at one of my papers in which I report both ROC and PR curves in an imbalanced setting. Figure 3 contains ROC and PR curves for identical models, clearly showing the difference between the two. To compare area under the PR versus area under ROC you can compare tables 1-2 (AUPR) and tables 3-4 (AUROC) where you can see that AUPR shows much larger differences between individual models than AUROC. This emphasizes the suitability of PR curves once more. | {
"source": [
"https://stats.stackexchange.com/questions/90779",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11917/"
]
} |
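Since the question mentions ROCR, here is a minimal sketch (not from the original answer) of drawing both curves from a vector of scores and 0/1 labels; scores and labels are hypothetical placeholders for the Weka output read into R.
library(ROCR)
pred <- prediction(scores, labels)              # scores = P(positive), labels = true 0/1 class
roc.curve <- performance(pred, "tpr", "fpr")    # ROC: TPR versus FPR
pr.curve  <- performance(pred, "prec", "rec")   # PR: precision versus recall
par(mfrow = c(1, 2))
plot(roc.curve); abline(0, 1, lty = 2)
plot(pr.curve)
performance(pred, "auc")@y.values[[1]]          # area under the ROC curve
With a highly imbalanced label vector, the PR plot will typically separate the models far more visibly than the ROC plot, as the answer explains.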
91,034 | What is the difference between the Wilcoxon Rank-Sum Test and the Wilcoxon Signed-Rank Test using paired observations? I know that the Rank-Sum test allows for a different number of observations in two different samples, whereas the Signed-Rank test for paired samples does not allow that; however, they both seem to test the same thing. Can someone give me some background and/or theoretical information on when one should use the Wilcoxon Rank-Sum Test and when one should use the Wilcoxon Signed-Rank Test using paired observations? | You should use the signed rank test when the data are paired . You'll find many definitions of pairing, but at heart the criterion is something that makes pairs of values at least somewhat positively dependent, while unpaired values are not dependent. Often the dependence-pairing occurs because they're observations on the same unit (repeated measures), but it doesn't have to be on the same unit, just in some way tending to be associated (while measuring the same kind of thing), to be considered as 'paired'. You should use the rank-sum test when the data are not paired. That's basically all there is to it. Note that having the same $n$ doesn't mean the data are paired, and having different $n$ doesn't mean that there isn't pairing (it may be that a few pairs lost an observation for some reason). Pairing comes from consideration of what was sampled. The effect of using a paired test when the data are paired is that it generally gives more power to detect the changes you're interested in. If the association leads to strong dependence*, then the gain in power may be substantial. * specifically, but speaking somewhat loosely, if the effect size is large compared to the typical size of the pair-differences, but small compared to the typical size of the unpaired-differences, you may pick up the difference with a paired test at a quite small sample size but with an unpaired test only at a much larger sample size. However, when the data are not paired, it may be (at least slightly) counterproductive to treat the data as paired. That said, the cost - in lost power - may in many circumstances be quite small - a power study I did in response to this question seems to suggest that on average the power loss in typical small-sample situations (say for n of the order of 10 to 30 in each sample, after adjusting for differences in significance level) may be surprisingly small, essentially negligible. [If you're somehow really uncertain whether the data are paired or not, the loss in treating unpaired data as paired is usually relatively minor, while the gains may be substantial if they are paired. This suggests if you really don't know, and have a way of figuring out what is paired with what assuming they were paired -- such as the values being in the same row in a table, it may in practice make sense to act as if the data were paired to be safe -- though some people may tend to get quite exercised over you doing that.] | {
"source": [
"https://stats.stackexchange.com/questions/91034",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26569/"
]
} |
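A small base-R illustration of the distinction drawn above (not part of the original answer): the same numbers analysed as paired and as unpaired samples.
set.seed(1)
before <- rnorm(15, mean = 10, sd = 2)
after  <- before + rnorm(15, mean = 0.8, sd = 0.5)   # strongly dependent pairs, small shift
wilcox.test(after, before, paired = TRUE)    # signed-rank test: uses the pairing
wilcox.test(after, before, paired = FALSE)   # rank-sum test: ignores it, typically far less power here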
91,044 |                Predicted class
                 Cat   Dog   Rabbit
 Actual  Cat       5     3        0
 class   Dog       2     3        1
         Rabbit    0     2       11
How can I calculate precision and recall so that it becomes easy to calculate the F1-score? A normal confusion matrix is 2 x 2. However, when it becomes 3 x 3 I don't know how to calculate precision and recall. | If you spell out the definitions of precision (aka positive predictive value PPV) and recall (aka sensitivity), you see that they relate to one class independent of any other classes: Recall or sensitivity is the proportion of cases correctly identified as belonging to class c among all cases that truly belong to class c . (Given we have a case truly belonging to " c ", what is the probability of predicting this correctly?) Precision or positive predictive value PPV is the proportion of cases correctly identified as belonging to class c among all cases of which the classifier claims that they belong to class c . In other words, of those cases predicted to belong to class c , which fraction truly belongs to class c ? (Given the prediction " c ", what is the probability of being correct?) Negative predictive value NPV : of those cases predicted not to belong to class c , which fraction truly doesn't belong to class c ? (Given the prediction "not c ", what is the probability of being correct?) So you can calculate precision and recall for each of your classes. For multi-class confusion tables, that's the diagonal elements divided by their row and column sums, respectively: Source: Beleites, C.; Salzer, R. & Sergo, V. Validation of soft classification models using partial class memberships: An extended concept of sensitivity & co. applied to grading of astrocytoma tissues, Chemom Intell Lab Syst, 122, 12 - 22 (2013). DOI: 10.1016/j.chemolab.2012.12.003 | {
"source": [
"https://stats.stackexchange.com/questions/91044",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/42433/"
]
} |
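In R, with rows as the actual class and columns as the predicted class as in the table above, the per-class recall and precision described in the answer are just the diagonal divided by the row and column totals. A small sketch, not part of the original answer:
cm <- matrix(c(5, 3, 0,
               2, 3, 1,
               0, 2, 11),
             nrow = 3, byrow = TRUE,
             dimnames = list(actual = c("Cat", "Dog", "Rabbit"),
                             predicted = c("Cat", "Dog", "Rabbit")))
recall    <- diag(cm) / rowSums(cm)   # per-class sensitivity
precision <- diag(cm) / colSums(cm)   # per-class positive predictive value
f1        <- 2 * precision * recall / (precision + recall)
round(cbind(precision, recall, f1), 3)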
91,462 | I'm trying to use a LASSO model for prediction, and I need to estimate standard errors. Surely someone has already written a package to do this. But as far as I can see, none of the packages on CRAN that do predictions using a LASSO will return standard errors for those predictions. So my question is: Is there a package or some R code available to compute standard errors for LASSO predictions? | Kyung et al. (2010), "Penalized regression, standard errors, & Bayesian lassos", Bayesian Analysis , 5 , 2 , suggest that there might not be a consensus on a statistically valid method of calculating standard errors for the lasso predictions. Tibshirani seems to agree (slide 43) that standard errors are still an unresolved issue. | {
"source": [
"https://stats.stackexchange.com/questions/91462",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/159/"
]
} |
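Given the caveat in the answer above, one pragmatic workaround people use is to bootstrap the whole lasso fit and report percentile intervals for the predictions, keeping in mind that the validity of bootstrap intervals for the lasso is itself debated. A hedged sketch with glmnet; x, y and newx are hypothetical inputs.
library(glmnet)
boot_pred <- function(x, y, newx, B = 200) {
  preds <- replicate(B, {
    i   <- sample(nrow(x), replace = TRUE)          # resample rows
    fit <- cv.glmnet(x[i, , drop = FALSE], y[i])    # refit the lasso and re-tune lambda
    as.numeric(predict(fit, newx = newx, s = "lambda.min"))
  })
  t(apply(preds, 1, quantile, probs = c(0.025, 0.5, 0.975)))  # rough percentile intervals
}
# boot_pred(x, y, newx)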
91,512 | It would be appreciated if the following examples could be given: A distribution with infinite mean and infinite variance. A distribution with infinite mean and finite variance. A distribution with finite mean and infinite variance. A distribution with finite mean and finite variance. It comes from me seeing these unfamiliar terms (infinite mean, infinite variance) used in an article I am reading, googling and reading a thread on the Wilmott forum/website , and not finding a sufficiently clear explanation. I also haven't found any explanations in any of my own textbooks. | The mean and variance are defined in terms of (sufficiently general) integrals. What it means for the mean or variance to be infinite is a statement about the limiting behavior of those integrals. For example, for a continuous density the mean is $\lim_{a,b\to\infty}\int_{-a}^b x f(x)\ dx$ (which might here be considered as a Riemann integral, say). This can happen, for example, if the tail is "heavy enough"; either the upper or the lower part (or both) may not converge to a finite value. Consider the following examples for four cases of finite/infinite mean and variance: A distribution with infinite mean and non-finite variance. Examples: Pareto distribution with $\alpha= 1$ , a zeta(2) distribution. A distribution with infinite mean and finite variance. Not possible. A distribution with finite mean and infinite variance. Examples: $t_2$ distribution . Pareto with $\alpha=\frac{3}{2}$ . A distribution with finite mean and finite variance. Examples: Any normal. Any uniform (indeed, any bounded variable has all moments). $t_3$ . These notes by Charles Geyer talk about how to compute relevant integrals in simple terms. It looks like it's dealing with Riemann integrals there, which only covers the continuous case but more general definitions of integrals will cover all the cases you will be likely to require [Lebesgue integration is the form of integration used in measure theory (which underlies probability) but the point here works just fine with more basic methods]. It also covers (Sec 2.5, p13-14) why "2." isn't possible (the mean exists if the variance exists). | {
"source": [
"https://stats.stackexchange.com/questions/91512",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9162/"
]
} |
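A quick simulation sketch (not part of the original answer) of what a non-convergent mean looks like in practice: the running mean of Cauchy draws, whose mean does not exist, never settles down, while the running mean of normal draws does.
set.seed(7)
n <- 1e5
running.mean <- function(x) cumsum(x) / seq_along(x)
par(mfrow = c(1, 2))
plot(running.mean(rnorm(n)),   type = "l", ylab = "running mean", main = "Normal: settles down")
plot(running.mean(rcauchy(n)), type = "l", ylab = "running mean", main = "Cauchy: keeps jumping")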
91,863 | I am a pure math grad student with little background in applied mathematics. Since last fall I have been taking classes on Casella & Berger's book, and I have finished hundreds (230+) of pages of exercise problems in the book. Right now I am at Chapter 10. However, since I have not majored in statistics or planned to be a statistician, I do not think I will be able to invest time regularly to continue learning data analysis. My experience so far is telling me that, to be a statistician, one needs to bear with a lot of tedious computation involving various distributions (Weibull, Cauchy, $t$, $F$...). I found while the fundamental ideas are simple, the implementation (for example the LRT in hypothesis testing) can still be difficult due to technicalities. Is my understanding correct? Is there a way I can learn probability & statistics that not only covers more advanced material, but can also help in case I need data analysis in real life? Will I need to spend $\ge$20 hrs per week on it like I used to? While I believe there is no royal road in learning mathematics, I often cannot help wondering – most of the time we do not know what the distribution is for real life data, so what is the purpose for us to focus exclusively on various families of distributions? If the sample size is small and the central limit theorem does not apply, how can we properly analyze the data besides the sample average and variance if the distribution is unknown? My semester will end in a month, and I do not want my knowledge to evaporate after I start to focus on my PhD research. So I decided to ask. I am learning R, and I have some programming background, but my level is about the same as a code monkey. | I do not think I will be able to give regular time investment to continue learning data analysis I don't think Casella & Berger is a place to learn data much in the way of data analysis . It's a place to learn some of the tools of statistical theory. My experience so far telling me to be a statistican one needs to bear with a lot of tedious computation involving various distributions(Weibull, Cauchy, t, F...). I've spent a lot of time as a statistician doing data analysis. It rarely (almost never) involves me doing tedious calculation. It sometimes involves a little simple algebra, but the common problems are usually solved and I don't need to expend any effort on replicating that each time. The computer does all the tedious calculation. If I am in a situation where I'm not prepared to assume a reasonably standard case (e.g. not prepared to use a GLM), I generally don't have enough information to assume any other distribution either, so the question of the calculations in LRT is usually moot (I can do them when I need to, they just either tend to be already solved or come up so rarely that it's an interesting diversion). I tend to do a lot of simulation; I also frequently try to use resampling in some form either alongside or in place of parametric assumptions. Will I need to spend 20hr+ per week on it like I used to be? It depends on what you want to be able to do and how soon you want to get good at it. Data analysis is a skill, and it takes practice and a large base of knowledge. You'll have some of the knowledge you need already. If you want to be a good practitioner at a wide variety of things, it will take a lot of time - but to my mind it's a lot more fun than the algebra and such of doing Casella and Berger exercises. 
Some of the skills I built up on say regression problems are helpful with time series, say -- but a lot of new skills are needed. So learning to interpret residual plots and QQ plots is handy, but they don't tell me how much I need to worry about a little bump in a PACF plot and don't give me tools like the use of one-step-ahead prediction errors. So for example, I don't need to expend effort figuring out how to do reasonably ML for typical gamma or weibull models , because they're standard enough to be solved problems that have already been largely put into a convenient form. If you come to do research , you'll need a lot more of the skills you pick up in places like Casella & Berger (but even with those kind of skills, you should also read more than one book). Some suggested things: You should definitely build up some regression skills, even if you do nothing else. There are a number of quite good books, but perhaps Draper & Smith Applied Regression Analysis plus Fox and Weisberg An R Companion to Applied Regression ; I'd also suggest you consider following with Harrell's Regression Modelling Strategies (You could substitute any number of good books for Draper and Smith - find one or two that suit you.) The second book has a number of online additional chapters that are very much worth reading (and its own R-package) -- A good second serving would be Venables & Ripley's Modern Applied Statistics with S . That's some grounding in a fairly broad swathe of ideas. It may turn out that you need some more basic material in some topics (I don't know your background). Then you'd need to start thinking about what areas of statistics you want/need -- Bayesian stats, time series, multivariate analysis, etc etc | {
"source": [
"https://stats.stackexchange.com/questions/91863",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/25259/"
]
} |
91,903 | Is the probability calculated by a logistic regression model (the one that is logit transformed) the fit of cumulative distribution function of successes of original data (ordered by the X variable)? EDIT: In other words - how to plot the probability distribution of the original data that you get when you fit a logistic regression model? The motivation for the question was Jeff Leak's example of regression on the Raven's score in a game and whether they won or not (from Coursera's Data Analysis course). Admittedly, the problem is artificial (see @FrankHarrell's comment below). Here is his data with a mix of his and my code: download.file("http://dl.dropbox.com/u/7710864/data/ravensData.rda",
destfile="ravensData.rda", method="internal")
load("ravensData.rda")
plot(ravenWinNum~ravenScore, data=ravensData) It doesn't seem like good material for logistic regression, but let's try anyway: logRegRavens <- glm(ravenWinNum ~ ravenScore, data=ravensData, family=binomial)
summary(logRegRavens)
# the beta is not significant
# sort table by ravenScore (X)
rav2 = ravensData[order(ravensData$ravenScore), ]
# plot CDF
plot(sort(ravensData$ravenScore), cumsum(rav2$ravenWinNum)/sum(rav2$ravenWinNum),
pch=19, col="blue", xlab="Score", ylab="Prob Ravens Win", ylim=c(0,1),
xlim=c(-10,50))
# overplot fitted values (Jeff's)
points(ravensData$ravenScore, logRegRavens$fitted, pch=19, col="red")
# overplot regression curve
curve(1/(1+exp(-(logRegRavens$coef[1]+logRegRavens$coef[2]*x))), -10, 50, add=T) If I understand logistic regression correctly, R does a pretty bad job at finding the right coefficients in this case. blue = original data to be fitted, I believe (CDF) red = prediction from the model (fitted data = projection of original data onto regression curve) SOLVED - lowess seems to be a good non-parametric estimator of the original data = what is being fitted (thanks @gung). Seeing it allows us to choose the right model, which in this case would be adding squared term to the previous model (@gung) - Of course, the problem is pretty artificial and modelling it rather pointless in general (@FrankHarrell) - in regular logistic regression it's not CDF, but point probabilities - first pointed out by @FrankHarrell; also my embarrassing inability to calculate CDF pointed out by @gung. | I do not think I will be able to give regular time investment to continue learning data analysis I don't think Casella & Berger is a place to learn data much in the way of data analysis . It's a place to learn some of the tools of statistical theory. My experience so far telling me to be a statistican one needs to bear with a lot of tedious computation involving various distributions(Weibull, Cauchy, t, F...). I've spent a lot of time as a statistician doing data analysis. It rarely (almost never) involves me doing tedious calculation. It sometimes involves a little simple algebra, but the common problems are usually solved and I don't need to expend any effort on replicating that each time. The computer does all the tedious calculation. If I am in a situation where I'm not prepared to assume a reasonably standard case (e.g. not prepared to use a GLM), I generally don't have enough information to assume any other distribution either, so the question of the calculations in LRT is usually moot (I can do them when I need to, they just either tend to be already solved or come up so rarely that it's an interesting diversion). I tend to do a lot of simulation; I also frequently try to use resampling in some form either alongside or in place of parametric assumptions. Will I need to spend 20hr+ per week on it like I used to be? It depends on what you want to be able to do and how soon you want to get good at it. Data analysis is a skill, and it takes practice and a large base of knowledge. You'll have some of the knowledge you need already. If you want to be a good practitioner at a wide variety of things, it will take a lot of time - but to my mind it's a lot more fun than the algebra and such of doing Casella and Berger exercises. Some of the skills I built up on say regression problems are helpful with time series, say -- but a lot of new skills are needed. So learning to interpret residual plots and QQ plots is handy, but they don't tell me how much I need to worry about a little bump in a PACF plot and don't give me tools like the use of one-step-ahead prediction errors. So for example, I don't need to expend effort figuring out how to do reasonably ML for typical gamma or weibull models , because they're standard enough to be solved problems that have already been largely put into a convenient form. If you come to do research , you'll need a lot more of the skills you pick up in places like Casella & Berger (but even with those kind of skills, you should also read more than one book). Some suggested things: You should definitely build up some regression skills, even if you do nothing else. 
There are a number of quite good books, but perhaps Draper & Smith Applied Regression Analysis plus Fox and Weisberg An R Companion to Applied Regression ; I'd also suggest you consider following with Harrell's Regression Modelling Strategies (You could substitute any number of good books for Draper and Smith - find one or two that suit you.) The second book has a number of online additional chapters that are very much worth reading (and its own R-package) -- A good second serving would be Venables & Ripley's Modern Applied Statistics with S . That's some grounding in a fairly broad swathe of ideas. It may turn out that you need some more basic material in some topics (I don't know your background). Then you'd need to start thinking about what areas of statistics you want/need -- Bayesian stats, time series, multivariate analysis, etc etc | {
"source": [
"https://stats.stackexchange.com/questions/91903",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/42778/"
]
} |
92,065 | If polynomial regression models nonlinear relationships, how can it be considered a special case of multiple linear regression? Wikipedia notes that "Although polynomial regression fits a nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression function $\mathbb{E}(y | x)$ is linear in the unknown parameters that are estimated from the data." How is polynomial regression linear in the unknown parameters if the parameters are coefficients for terms with order $\ge$ 2? | When you fit a regression model such as $\hat y_i = \hat\beta_0 + \hat\beta_1x_i + \hat\beta_2x^2_i$, the model and the OLS estimator doesn't 'know' that $x^2_i$ is simply the square of $x_i$, it just 'thinks' it's another variable. Of course there is some collinearity, and that gets incorporated into the fit (e.g., the standard errors are larger than they might otherwise be), but lots of pairs of variables can be somewhat collinear without one of them being a function of the other. We don't recognize that there are really two separate variables in the model, because we know that $x^2_i$ is ultimately the same variable as $x_i$ that we transformed and included in order to capture a curvilinear relationship between $x_i$ and $y_i$. That knowledge of the true nature of $x^2_i$, coupled with our belief that there is a curvilinear relationship between $x_i$ and $y_i$ is what makes it difficult for us to understand the way that it is still linear from the model's perspective. In addition, we visualize $x_i$ and $x^2_i$ together by looking at the marginal projection of the 3D function onto the 2D $x, y$ plane. If you only have $x_i$ and $x^2_i$, you can try to visualize them in the full 3D space (although it is still rather hard to really see what is going on). If you did look at the fitted function in the full 3D space, you would see that the fitted function is a 2D plane, and moreover that it is a flat plane. As I say, it is hard to see well because the $x_i, x^2_i$ data exist only along a curved line going through that 3D space (that fact is the visual manifestation of their collinearity). We can try to do that here. Imagine this is the fitted model: x = seq(from=0, to=10, by=.5)
x2 = x**2
y = 3 + x - .05*x2
d.mat = data.frame(X1=x, X2=x2, Y=y)
# 2D plot
plot(x, y, pch=1, ylim=c(0,11), col="red",
main="Marginal projection onto the 2D X,Y plane")
lines(x, y, col="lightblue") # 3D plot
library(scatterplot3d)
s = scatterplot3d(x=d.mat$X1, y=d.mat$X2, z=d.mat$Y, color="gray", pch=1,
xlab="X1", ylab="X2", zlab="Y", xlim=c(0, 11), ylim=c(0,101),
zlim=c(0, 11), type="h", main="In pseudo-3D space")
s$points(x=d.mat$X1, y=d.mat$X2, z=d.mat$Y, col="red", pch=1)
s$plane3d(Intercept=3, x.coef=1, y.coef=-.05, col="lightblue") It may be easier to see in these images, which are screenshots of a rotated 3D figure made with the same data using the rgl package. When we say that a model that is "linear in the parameters" really is linear, this isn't just some mathematical sophistry. With $p$ variables, you are fitting a $p$-dimensional hyperplane in a $p\!+\!1$-dimensional hyperspace (in our example a 2D plane in a 3D space). That hyperplane really is 'flat' / 'linear'; it isn't just a metaphor. | {
"source": [
"https://stats.stackexchange.com/questions/92065",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/42848/"
]
} |
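To see "linear in the parameters" computationally, here is a small sketch (not part of the original answer): the quadratic fit comes out of exactly the same linear least-squares algebra as any other linear model, with $x^2$ treated as just another column of the design matrix.
set.seed(3)
x <- runif(100, 0, 10)
y <- 3 + x - 0.05 * x^2 + rnorm(100, sd = 0.5)
fit  <- lm(y ~ x + I(x^2))                          # "polynomial regression"
X    <- model.matrix(fit)                           # columns: intercept, x, x^2
beta <- solve(crossprod(X), crossprod(X, y))        # ordinary linear least squares
cbind(lm = coef(fit), normal.eq = as.numeric(beta)) # identical coefficients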
92,141 | What is the difference between probability plots, PP-plots and QQ-plots when trying to analyse a fitted distribution to data? | As @vector07 notes , probability plot is the more abstract category of which pp-plots and qq-plots are members. Thus, I will discuss the distinction between the latter two. The best way to understand the differences is to think about how they are constructed, and to understand that you need to recognize the difference between the quantiles of a distribution and the proportion of the distribution that you have passed through when you reach a given quantile. You can see the relationship between these by plotting the cumulative distribution function (CDF) of a distribution. For example, consider the standard normal distribution: We see that approximately 68% of the y-axis (region between red lines) corresponds to 1/3 of the x-axis (region between blue lines). That means that when we use the proportion of the distribution we have passed through to evaluate the match between two distributions (i.e., we use a pp-plot), we will get a lot of resolution in the center of the distributions, but less at the tails. On the other hand, when we use the quantiles to evaluate the match between two distributions (i.e., we use a qq-plot), we will get very good resolution at the tails, but less in the center. (Because data analysts are typically more concerned about the tails of a distribution, which will have more effect on inference for example, qq-plots are much more common than pp-plots.) To see these facts in action, I will walk through the construction of a pp-plot and a qq-plot. (I also walk through the construction of a qq-plot verbally / more slowly here: QQ-plot does not match histogram .) I don't know if you use R, but hopefully it will be self-explanatory: set.seed(1) # this makes the example exactly reproducible
N = 10 # I will generate 10 data points
x = sort(rnorm(n=N, mean=0, sd=1)) # from a normal distribution w/ mean 0 & SD 1
n.props = pnorm(x, mean(x), sd(x)) # here I calculate the probabilities associated
# w/ these data if they came from a normal
# distribution w/ the same mean & SD
# I calculate the proportion of x we've gone through at each point
props = 1:N / (N+1)
n.quantiles = qnorm(props, mean=mean(x), sd=sd(x)) # this calculates the quantiles (ie
# z-scores) associated w/ the props
my.data = data.frame(x=x, props=props, # here I bundle them together
normal.proportions=n.props,
normal.quantiles=n.quantiles)
round(my.data, digits=3) # & display them w/ 3 decimal places
# x props normal.proportions normal.quantiles
# 1 -0.836 0.091 0.108 -0.910
# 2 -0.820 0.182 0.111 -0.577
# 3 -0.626 0.273 0.166 -0.340
# 4 -0.305 0.364 0.288 -0.140
# 5 0.184 0.455 0.526 0.043
# 6 0.330 0.545 0.600 0.221
# 7 0.487 0.636 0.675 0.404
# 8 0.576 0.727 0.715 0.604
# 9 0.738 0.818 0.781 0.841
# 10 1.595 0.909 0.970 1.174 Unfortunately, these plots aren't very distinctive, because there are few data and we are comparing a true normal to the correct theoretical distribution, so there isn't anything special to see in either the center or the tails of the distribution. To better demonstrate these differences, I plot a (fat-tailed) t-distribution with 4 degrees of freedom, and a bi-modal distribution below. The fat tails are much more distinctive in the qq-plot, whereas the bi-modality is more distinctive in the pp-plot. | {
"source": [
"https://stats.stackexchange.com/questions/92141",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/42842/"
]
} |
92,157 | I saw an LDA (linear discriminant analysis) plot with decision boundaries from The Elements of Statistical Learning : I understand that data are projected onto a lower-dimensional subspace. However, I would like to know how we get the decision boundaries in the original dimension such that I can project the decision boundaries onto a lower-dimensional subspace (likes the black lines in the image above). Is there a formula that I can use to compute the decision boundaries in the original (higher) dimension? If yes, then what inputs does this formula need? | This particular figure in Hastie et al. was produced without computing equations of class boundaries. Instead, algorithm outlined by @ttnphns in the comments was used, see footnote 2 in section 4.3, page 110: For this figure and many similar figures in the book we compute the decision boundaries by an exhaustive contouring method. We compute the decision rule on a fine lattice of points, and then use contouring algorithms to compute the boundaries. However, I will proceed with describing how to obtain equations of LDA class boundaries. Let us start with a simple 2D example. Here is the data from the Iris dataset ; I discard petal measurements and only consider sepal length and sepal width. Three classes are marked with red, green and blue colours: Let us denote class means (centroids) as $\boldsymbol\mu_1, \boldsymbol\mu_2, \boldsymbol\mu_3$. LDA assumes that all classes have the same within-class covariance; given the data, this shared covariance matrix is estimated (up to the scaling) as $\mathbf{W} = \sum_i (\mathbf{x}_i-\boldsymbol \mu_k)(\mathbf{x}_i-\boldsymbol \mu_k)^\top$, where the sum is over all data points and centroid of the respective class is subtracted from each point. For each pair of classes (e.g. class $1$ and $2$) there is a class boundary between them. It is obvious that the boundary has to pass through the middle-point between the two class centroids $(\boldsymbol \mu_{1} + \boldsymbol \mu_{2})/2$. One of the central LDA results is that this boundary is a straight line orthogonal to $\mathbf{W}^{-1} \boldsymbol (\boldsymbol \mu_{1} - \boldsymbol \mu_{2})$. There are several ways to obtain this result, and even though it was not part of the question, I will briefly hint at three of them in the Appendix below. Note that what is written above is already a precise specification of the boundary. If one wants to have a line equation in the standard form $y=ax+b$, then coefficients $a$ and $b$ can be computed and will be given by some messy formulas. I can hardly imagine a situation when this would be needed. Let us now apply this formula to the Iris example. For each pair of classes I find a middle point and plot a line perpendicular to $\mathbf{W}^{-1}
\boldsymbol (\boldsymbol \mu_{i} - \boldsymbol \mu_{j})$: Three lines intersect in one point, as should have been expected. Decision boundaries are given by rays starting from the intersection point: Note that if the number of classes is $K\gg 2$, then there will be $K(K-1)/2$ pairs of classes and so a lot of lines, all intersecting in a tangled mess. To draw a nice picture like the one from the Hastie et al., one needs to keep only the necessary segments, and it is a separate algorithmic problem in itself (not related to LDA in any way, because one does not need it to do the classification; to classify a point, either check the Mahalanobis distance to each class and choose the one with the lowest distance, or use a series or pairwise LDAs). In $D>2$ dimensions the formula stays exactly the same : boundary is orthogonal to $\mathbf{W}^{-1} \boldsymbol (\boldsymbol \mu_{1} - \boldsymbol \mu_{2})$ and passes through $(\boldsymbol \mu_{1} + \boldsymbol \mu_{2})/2$. However, in higher dimensions this is not a line anymore, but a hyperplane of $D-1$ dimensions. For illustration purposes, one can simply project the dataset to the first two discriminant axes, and thus reduce the problem to the 2D case (that I believe is what Hastie et al. did to produce that figure). Appendix How to see that the boundary is a straight line orthogonal to $\mathbf{W}^{-1} (\boldsymbol \mu_{1} - \boldsymbol \mu_{2})$? Here are several possible ways to obtain this result: The fancy way: $\mathbf{W}^{-1}$ induces Mahalanobis metric on the plane; the boundary has to be orthogonal to $\boldsymbol \mu_{1} - \boldsymbol \mu_{2}$ in this metric, QED. The standard Gaussian way: if both classes are described by Gaussian distributions, then the log-likelihood that a point $\mathbf x$ belongs to class $k$ is proportional to $(\mathbf x - \boldsymbol \mu_k)^\top \mathbf W^{-1}(\mathbf x - \boldsymbol \mu_k)$. On the boundary the likelihoods of belonging to classes $1$ and $2$ are equal; write it down, simplify, and you will immediately get to $\mathbf x^\top \mathbf W^{-1} (\boldsymbol \mu_{1} - \boldsymbol \mu_{2}) = \mathrm{const}$, QED. The laboursome but intuitive way. Imagine that $\mathbf{W}$ is an identity matrix, i.e. all classes are spherical. Then the solution is obvious: boundary is simply orthogonal to $\boldsymbol \mu_1 - \boldsymbol \mu_2$. If classes are not spherical, then one can make them such by sphering. If the eigen-decomposition of $\mathbf{W}$ is $\mathbf{W} = \mathbf U \mathbf D \mathbf U^\top$, then matrix $\mathbf S = \mathbf D^{-1/2} \mathbf U^\top$ will do the trick (see e.g. here ). So after applying $\mathbf S$, the boundary is orthogonal to $\mathbf S (\boldsymbol \mu_{1} - \boldsymbol \mu_{2})$. If we take this boundary, transform it back with $\mathbf S^{-1}$ and ask what is it now orthogonal to, the answer (left as an exercise) is: to $\mathbf S^\top \mathbf S \boldsymbol (\boldsymbol \mu_{1} - \boldsymbol \mu_{2})$. Plugging in the expression for $\mathbf S$, we get QED. | {
"source": [
"https://stats.stackexchange.com/questions/92157",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/40761/"
]
} |
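Here is a short R sketch (not from the original answer) of the boundary recipe described above, using the same two iris variables: pool the within-class scatter, then the boundary between classes 1 and 2 passes through the midpoint of their centroids and is orthogonal to $\mathbf{W}^{-1}(\boldsymbol\mu_1 - \boldsymbol\mu_2)$.
X  <- as.matrix(iris[, c("Sepal.Length", "Sepal.Width")])
cl <- iris$Species
mu1 <- colMeans(X[cl == "setosa", ])
mu2 <- colMeans(X[cl == "versicolor", ])
# pooled within-class scatter matrix W (the shared covariance, up to scaling)
W <- Reduce(`+`, lapply(levels(cl), function(k) {
  Xk <- scale(X[cl == k, ], center = TRUE, scale = FALSE)
  crossprod(Xk)
}))
normal.dir <- solve(W) %*% (mu1 - mu2)   # the boundary is orthogonal to this direction
midpoint   <- (mu1 + mu2) / 2            # and passes through this point
# for a pair of classes, a new point x0 is assigned by the sign of this discriminant
x0 <- c(Sepal.Length = 5.5, Sepal.Width = 3.0)
drop(t(normal.dir) %*% (x0 - midpoint))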
92,419 | I'd like to determine the relative importance of sets of variables toward a randomForest classification model in R. The importance function provides the MeanDecreaseGini metric for each individual predictor--is it as simple as summing this across each predictor in a set? For example: # Assumes df has variables a1, a2, b1, b2, and outcome
rf <- randomForest(outcome ~ ., data=df)
importance(rf)
# To determine whether the "a" predictors are more important than the "b"s,
# can I sum the MeanDecreaseGini for a1 and a2 and compare to that of b1+b2? | First I would like to clarify what the importance metric actually measures. MeanDecreaseGini is a measure of variable importance based on the Gini impurity index used for the calculation of splits during training. A common misconception is that the variable importance metric refers to the Gini used for asserting model performance which is closely related to AUC, but this is wrong. Here is the explanation from the randomForest package written by Breiman and Cutler: Gini importance Every time a split of a node is made on variable m the gini impurity criterion for the two descendent nodes is less than the parent node. Adding up the gini decreases for each individual variable over all trees in the forest gives a fast variable importance that is often very consistent with the permutation importance measure. The Gini impurity index is defined as
$$
G = \sum_{i=1}^{n_c} p_i(1-p_i)
$$
where $n_c$ is the number of classes in the target variable and $p_i$ is the proportion of this class. For a two-class problem, this results in the following curve, which is maximized for the 50-50 sample and minimized for homogeneous sets:
$$
I = G_{parent} - G_{split1} - G_{split2}
$$
averaged over all splits in the forest involving the predictor in question. As this is an average, it can easily be extended to an average over all splits on the variables contained in a group. Looking closer, each variable importance is an average conditional on the variable used, and the MeanDecreaseGini of the group is just the mean of these importances, weighted by the share with which each variable is used in the forest relative to the other variables in the same group. This holds because of the tower property
$$
\mathbb{E}[\mathbb{E}[X|Y]] = \mathbb{E}[X]
$$ Now, to answer your question directly it is not as simple as just summing up all importances in each group to get the combined MeanDecreaseGini but computing the weighted average will get you the answer you are looking for. We just need to find the variable frequencies within each group. Here is a simple script to get these from a random forest object in R: var.share <- function(rf.obj, members) {
count <- table(rf.obj$forest$bestvar)[-1]
names(count) <- names(rf.obj$forest$ncat)
share <- count[members] / sum(count[members])
return(share)
} Just pass in the names of the variables in the group as the members parameter. I hope this answers your question. I can write up a function to get the group importances directly if it is of interest. EDIT: Here is a function that gives the group importance given a randomForest object and a list of vectors with variable names. It uses var.share as previously defined. I have not done any input checking so you need to make sure you use the right variable names. group.importance <- function(rf.obj, groups) {
var.imp <- as.matrix(sapply(groups, function(g) {
sum(importance(rf.obj, 2)[g, ]*var.share(rf.obj, g))
}))
colnames(var.imp) <- "MeanDecreaseGini"
return(var.imp)
} Example of usage: library(randomForest)
data(iris)
rf.obj <- randomForest(Species ~ ., data=iris)
groups <- list(Sepal=c("Sepal.Width", "Sepal.Length"),
Petal=c("Petal.Width", "Petal.Length"))
group.importance(rf.obj, groups) > MeanDecreaseGini
Sepal 6.187198
Petal 43.913020 It also works for overlapping groups: overlapping.groups <- list(Sepal=c("Sepal.Width", "Sepal.Length"),
Petal=c("Petal.Width", "Petal.Length"),
Width=c("Sepal.Width", "Petal.Width"),
Length=c("Sepal.Length", "Petal.Length"))
group.importance(rf.obj, overlapping.groups) > MeanDecreaseGini
Sepal 6.187198
Petal 43.913020
Width 30.513776
Length 30.386706 | {
"source": [
"https://stats.stackexchange.com/questions/92419",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/19676/"
]
} |