Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
3496
1
null
null
27
14804
(I'm a newbie at stats. I'm a mathematician and a programmer and I'm trying to build something like a naive Bayesian spam filter.) I've noticed in many places that people tend to break down the denominator in the equation from Bayes' Theorem. So instead of this:

$$\frac{P(A|B)\cdot P(B)}{P(A)}$$

We are presented with this:

$$\frac{P(A|B)\cdot P(B)}{P(A|B)\cdot P(B)+P(A|\neg B)\cdot P(\neg B)}$$

You can see that this convention is used in [this Wikipedia article](http://en.wikipedia.org/wiki/Bayesian_spam_filtering#Computing_the_probability_that_a_message_containing_a_given_word_is_spam) and in this [insightful post](http://mail.python.org/pipermail/python-dev/2002-August/028216.html) by Tim Peters.

I am baffled by this. Why is the denominator broken down like this? How does that help things at all? What's so complicated about calculating $P(A)$, which in the case of spam filters would be `The probability that the word "cheese" appears in an email, regardless of whether it's spam or not`?
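A minimal numeric check of the point being asked about, with made-up probabilities (reading $B$ as "the email is spam" and $A$ as "the email contains the word cheese"): the expanded denominator is just one way of computing $P(A)$, so both forms give the same posterior.

```
# Hedged sketch with made-up numbers; B = "spam", A = "contains the word cheese".
p_B      <- 0.4    # P(B): prior probability of spam
p_A_B    <- 0.05   # P(A | B): "cheese" appears in a spam email
p_A_notB <- 0.01   # P(A | not B): "cheese" appears in a non-spam email

p_A <- p_A_B * p_B + p_A_notB * (1 - p_B)          # the expanded denominator equals P(A)
posterior_expanded <- p_A_B * p_B / (p_A_B * p_B + p_A_notB * (1 - p_B))
posterior_direct   <- p_A_B * p_B / p_A            # identical, if P(A) were known directly
c(posterior_expanded, posterior_direct)
```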
Why break down the denominator in Bayes' Theorem?
CC BY-SA 2.5
null
2010-10-11T23:45:58.103
2010-10-12T08:30:55.770
null
null
5793
[ "bayesian" ]
3497
1
null
null
8
953
I have a fairly large file (100M rows and 30 columns or so) on which I would like to run multiple regressions. I have specialized code to run the regressions on the entire file, but what I would like to do is draw random samples from the file and run them in R. The strategy is:

- randomly sample N rows from the file without replacement
- run a regression and save the coefficients of interest
- repeat this process M times with different samples
- for each coefficient, calculate the means and standard errors of the coefficients over M runs

I would like to interpret the mean computed over M runs as an estimate of the values of the coefficients computed on the whole data set, and the standard errors of the means as estimates of the standard errors of the coefficients computed on the entire data set. Experiments show this to be a promising strategy, but I am not sure about the underlying theory. Are my estimators consistent, efficient and unbiased? If they are consistent, how quickly should they converge? What tradeoffs of M and N are best? I would very much appreciate it if someone could point me to the papers, books etc. with the relevant theory. Best regards and many thanks, Joe Rickert
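A minimal R sketch of the strategy described above (the data frame, column names and sizes are all made up; in practice the rows would be read from the large file):

```
# Hedged sketch: repeated random subsamples, one regression per subsample.
set.seed(1)
big <- data.frame(y = rnorm(1e5), x1 = rnorm(1e5), x2 = rnorm(1e5))  # stand-in for the big file
N <- 5000   # rows per subsample
M <- 200    # number of repetitions

coefs <- t(replicate(M, {
  idx <- sample(nrow(big), N)                 # sample N rows without replacement
  coef(lm(y ~ x1 + x2, data = big[idx, ]))    # fit and keep the coefficients
}))

colMeans(coefs)                  # means of the coefficients over the M runs
apply(coefs, 2, sd) / sqrt(M)    # standard errors of those means
```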
Doing regressions on samples from a very large file: are the means and SEs of the sample coefficients consistent estimators?
CC BY-SA 2.5
null
2010-10-11T23:47:31.373
2010-12-19T21:49:54.970
2010-10-12T08:23:45.100
8
null
[ "r", "regression", "large-data", "bootstrap" ]
3500
2
null
3496
9
null
One reason for using the total probability rule is that we often deal with the component probabilities in that expression, and it's straightforward to find the marginal probability by simply plugging in the values. For an illustration of this, see the example on Wikipedia: Bayes' Theorem > Example 1: Drug Testing.

Another reason is recognizing equivalent forms of Bayes' Rule by manipulating that expression. For example:

$$P(B|A) = \frac{P(A|B) P(B)}{P(A|B)P(B) + P(A|\lnot B)P(\lnot B)}$$

Divide numerator and denominator of the RHS by the numerator:

$$P(B|A) = \frac{1}{1 + \frac{P(A|\lnot B)}{P(A|B)} \frac{P(\lnot B)}{P(B)}}$$

which is a nice equivalent form for Bayes' Rule, made even handier by noting that $P(\lnot B|A) = 1 - P(B|A)$, which leads to:

$$\frac{P(\lnot B|A)}{P(B|A)} = \frac{P(A|\lnot B)}{P(A|B)} \frac{P(\lnot B)}{P(B)}$$

This is Bayes' Rule stated in terms of odds, i.e. posterior odds against B = Bayes factor against B times the prior odds against B. (Or you could invert it to get an expression in terms of odds for B.) The Bayes factor is the ratio of the likelihoods of your models. Given that we're uncertain about the underlying data generating mechanism, we observe data and update our beliefs.

I'm not sure if you find this useful, but hopefully it's not baffling; you should obviously work with the expression that works best for your scenario. Maybe someone else can pipe in with even better reasons.
null
CC BY-SA 2.5
null
2010-10-12T01:53:18.930
2010-10-12T01:53:18.930
null
null
251
null
3501
2
null
3496
18
null
The short answer to your question is, "most of the time we don't know what P(cheese) is, and it is often (relatively) difficult to calculate." The longer answer why Bayes' Rule/Theorem is normally stated in the way that you wrote is because in Bayesian problems we have - sitting in our lap - a prior distribution (the P(B) above) and likelihood (the P(A|B), P(A|notB) above) and it is a relatively simple matter of multiplication to compute the posterior (the P(B|A)). Going to the trouble to reexpress P(A) in its summarized form is effort that could be spent elsewhere. It might not seem so complicated in the context of an email because, as you rightly noted, it's just P(cheese), right? The trouble is that with more involved on-the-battlefield Bayesian problems the denominator is an unsightly integral, which may or may not have a closed-form solution. In fact, sometimes we need sophisticated Monte Carlo methods just to approximate the integral and churning the numbers can be a real pain in the rear. But more to the point, we usually don't even care what P(cheese) is. Bear in mind, we are trying to hone our belief regarding whether or not an email is spam, and couldn't care less about the marginal distribution of the data (the P(A), above). It is just a normalization constant, anyway, which doesn't depend on the parameter; the act of summation washes out whatever info we had about the parameter. The constant is a nuisance to calculate and is ultimately irrelevant when it comes to zeroing in on our beliefs about whether or not the email's spam. Sometimes we are obliged to calculate it, in which case the quickest way to do so is with the info we already have: the prior and likelihood.
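A tiny illustration of the "we usually don't even care what P(cheese) is" point, with made-up numbers: comparing the unnormalised products prior × likelihood is enough to decide spam vs. ham; the denominator only matters if you want the actual posterior probability.

```
# Hedged sketch with made-up numbers (B = spam, A = the word "cheese" appears).
prior_spam <- 0.4
lik_cheese_given_spam <- 0.05   # P(A | B)
lik_cheese_given_ham  <- 0.01   # P(A | not B)

unnorm <- c(spam = lik_cheese_given_spam * prior_spam,
            ham  = lik_cheese_given_ham  * (1 - prior_spam))
names(which.max(unnorm))   # the classification ignores the normalising constant entirely
unnorm / sum(unnorm)       # normalise only if the posterior probability itself is needed
```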
null
CC BY-SA 2.5
null
2010-10-12T02:16:56.347
2010-10-12T03:49:44.033
2010-10-12T03:49:44.033
null
null
null
3502
2
null
3489
5
null
it sounds as tho the following simplified situation may capture the essence of your problem: there are two populations of individuals: A = acceptable individuals and U = unacceptables. associated with each individual is a 'score' $X$. suppose in each of the two populations, the scores have gaussian distributions, where in A, the [true] mean is $\mu_A$ and in U, it is $\mu_U$. we can also suppose [altho need not] that the distributions have the same SD = $\sigma$. all three [or four] parameters are presumably known.

suppose $\mu_A > \mu_U$, so it makes sense to accept an individual if their 'score' $X$ is above some threshold $c$, say. there are two ways this rule can go wrong:

- an $X$ from U can exceed $c$, leading to a false acceptance.
- an $X$ from A can be below $c$, leading to a false rejection.

the probabilities
$$err_{falseacc} = P(N(\mu_U, \sigma^2) > c)$$
and
$$err_{falserej} = P(N(\mu_A, \sigma^2) < c)$$
are the two error rates associated with the rule. you are focusing on $err_{falseacc}$. it is not difficult to see that as the threshold $c$ is changed, one error-rate will decrease and the other will increase. so $c$ has to be chosen to give values of both error-rates that one can live with. once you choose $c$, as others have already remarked, the error rates can be calculated.

in the language of statistics, you are testing two hypotheses about the $\mu$ of the population that the individual with observed score $X$ came from. one hypothesis is H$_A: \mu = \mu_A$ and the other is H$_U: \mu = \mu_U$. the 'test' to decide between these two hypotheses is the above rule and the error rates given above are [somewhat unhelpfully] called the type I and type II errors, or [equally unhelpfully, IMHO] the sensitivity and the specificity or [likewise] the producer's risk and the consumer's risk. which is which depends on which of the two hypotheses is designated as the 'null hypothesis', a distinction that may not be entirely helpful in this context.
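A short numeric illustration of the two error rates above, with made-up parameter values:

```
# Hedged sketch: error rates for a given threshold, using made-up parameters.
mu_A <- 10; mu_U <- 6; sigma <- 2; c_thr <- 8

err_false_acc <- 1 - pnorm(c_thr, mean = mu_U, sd = sigma)  # P(X from U exceeds c)
err_false_rej <- pnorm(c_thr, mean = mu_A, sd = sigma)      # P(X from A falls below c)
c(false_acceptance = err_false_acc, false_rejection = err_false_rej)

# moving the threshold up lowers false acceptances but raises false rejections
1 - pnorm(9, mean = mu_U, sd = sigma)
pnorm(9, mean = mu_A, sd = sigma)
```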
null
CC BY-SA 2.5
null
2010-10-12T03:00:29.283
2010-10-12T03:00:29.283
null
null
1112
null
3503
2
null
3497
2
null
The greater the sample N, the smaller the standard errors (higher t statistics, and smaller respective p values) associated with all your regression coefficients. The greater M, the more data points you will have and the smaller will be the standard error of the mean of the coefficients over M runs. Such means should be approximately normally distributed per the Central Limit Theorem. In terms of convergence of such means, I am not sure there are any statistical principles that dictate this. I suspect if your random sampling is well done (no structural bias, etc...) the convergence should occur fairly rapidly. That is something you just may have to observe empirically. Otherwise, your method seems good; I don't see any problem with it.
null
CC BY-SA 2.5
null
2010-10-12T04:24:05.447
2010-10-12T16:10:17.650
2010-10-12T16:10:17.650
1329
1329
null
3504
1
3506
null
5
10221
I measured response variable $Y$ at three levels of factor $A$ and four levels of factor $B$, $n=6$ reps/treatment. Results include:

- $A$ has strong effects on $Y$.
- $B$ has no effect on $Y$.
- There is no $A*B$ interaction.

I would like to report all three results (the second two are actually more interesting than the first since all were expected). So far I have:

"The effect of A on Y was significant (ANOVA, $P<0.001$) and $Y$ was different at each level of $A$ (Tukey HSD, $P<0.001$). There was no effect of $B$ on $Y$, and there was no interaction between $A$ and $B.$ **This implies that $B$ had no effect on $Y$ and that the effect of $B$ on $Y$ was similar for all levels of $A$.**"

(Text with bold emphasis is in question)

This approach minimizes the double negative as in "the effect of $B$ on $Y$ was not different at different levels of $A$". Still, neither sounds 'right' to me, thus my post. In my field, as in many fields of science, non-significant responses are rarely mentioned in the text, and even more rarely are they interpreted, but in this case, previous studies find that $B$ almost always affects $Y$ and the lack of an interaction is notable.
If an ANOVA indicates no main effect and no interaction, should the lack of interaction be stated?
CC BY-SA 2.5
null
2010-10-12T04:26:28.497
2010-10-12T11:11:42.463
2010-10-12T08:34:15.140
1381
1381
[ "anova", "multiple-comparisons", "interpretation" ]
3505
2
null
3504
6
null
Please let me know if you have replicates in your experiment. As you mention using Tukey HSD, I am guessing you don't have any replicates. If your experiment analyzes a test of additivity in a two-way factorial Analysis of Variance (ANOVA) with one observation per cell, then please read ahead; otherwise ignore my solution.

Assuming I am thinking in the right direction, it may be possible that B affects A, but that it doesn't affect Y in the form of a multiplicative term like A*B. In general, B can affect Y even if the interaction A*B is not significant. The reason for this is the assumption of a special form of relation between A and B (i.e. A*B) that affects Y in Tukey's test. Please see the Wikipedia article about [Tukey HSD](http://en.wikipedia.org/wiki/Tukey%27s_test_of_additivity) for further details. I am sorry if this was not the test that you were conducting. But even if the interaction is not statistically significant, it doesn't imply that B doesn't affect Y. Even if this is a randomized experiment and you are allowed to relate significance and causation, not being able to reject the null hypothesis $H_0$ (i.e. p-value is $\geq \alpha$) doesn't mean that you accept $H_0$. Please accept my apologies if I completely misunderstood you.

---

Update after David's details

A few common themes that should be addressed first:

1. If you expect interaction in your main effects, here A and B, then testing for the significance of the main effects doesn't make sense. (cf. an elegant paper by Dr. Venables, first link)
2. I think doing sub-sampling doesn't help you (when I say help, I mean in the context of the number of degrees of freedom of the F distribution which is used to test if the treatment effect = 0). By sub-sampling, I mean replication at the Block by Treatment level. In an RCB, the degrees of freedom wasted in replication can only be used to test whether the block effect is present or not. Using blocks in experimental design is a "trick" to control for the extra variation which is not the main interest. In your case the variance because of trays (blocks) is not of importance, so I don't know if it is useful to waste extra resources (i.e. money) in testing whether the variance due to trays is significant or not.
3. I would be afraid to come to the conclusion that there was no block effect. The block effect in your case is random, and the test uses some "strong assumptions" to determine the sampling distribution (which may or may not be true depending on those assumptions). If your experiment started with an RCB, please retain the structure of the data. Pooling of samples across blocks may confound your conclusions. You are again committing the error of accepting $H_0$ if you say there is no block effect (I am assuming you conclude no block effect by testing hypotheses, right?).
4. The previous point (about the block effect) further stresses my earlier point about choosing not to subsample, as essentially subsampling lets you test something (the tray effect) which is not of primary interest to you. Even if it were of interest, you invest your money in replication to test something whose sampling distribution is not appropriately determined.
5. In the comment you say 12 treatments, but you only mention A and B as your treatment variables. Did you mean 12 levels for treatments A and B respectively? If yes, then you have only 2 treatments with 12 levels. If no, then you need to change the model.

For further details about subsampling in RCB, refer to chapter 3 of George Casella's [design book](http://rads.stackoverflow.com/amzn/click/0387759646).
Sincere apologies again, if I misunderstood you.
null
CC BY-SA 2.5
null
2010-10-12T07:34:16.077
2010-10-12T11:11:42.463
2010-10-12T11:11:42.463
1307
1307
null
3506
2
null
3504
8
null
Well, it depends if the interaction was your main hypothesis or not. If this is the case, then you are encouraged to report the negative result, otherwise you can simply refit your model (without the B and A:B terms) to get a better estimate of A. Now, the part of your conclusion that you emphasized doesn't sound correct to me. You can only prove that an observed difference of means is different from 0 (or any other fixed value, according to your alternative hypothesis), you cannot "accept" the null. If your test is non-significant, it simply means that you cannot reject $H_0$. Non-significant results can also reflect lack of power (Type II error). Also, rather than simply reporting crude p-values, it would be better (and it actually follows the [APA](http://apastyle.apa.org/) recommendations) to also report some kind of effect size or difference of means, together with your inferential results. Here is an example for reporting results from a factorial ANOVA (it has to be reworked to fit your specific experimental design since your factors have a lot of levels):

A two-way analysis of variance yielded a main effect for the A factor, $F(\nu_1,\nu_2) = 0.00$, $p < .05$, such that the average "value" was significantly higher in the $a_1$ condition (Mean=0.00, SD=0.00) compared to $a_2$ (Mean=0.00, SD=0.00) and $a_3$ (Mean=0.00, SD=0.00, Tukey HSD, all p<0.05). The main effect of B was non-significant ($F(\nu_1,\nu_2) = 0.00$, $p = 0.00$), and no interaction effect was found significant ($F(\nu_1,\nu_2) = 0.00$, $p = 0.00$), indicating that the effect of A on Y was not significantly different across the B levels.
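A minimal R sketch (with simulated data and assumed factor names A and B) showing where the F statistics, degrees of freedom and p-values plugged into such a report would come from:

```
# Hedged sketch: two-way ANOVA on made-up data with a strong A effect and no B effect.
set.seed(3)
d <- expand.grid(A = factor(1:3), B = factor(1:4), rep = 1:6)
d$Y <- 2 * as.numeric(d$A) + rnorm(nrow(d))

fit <- aov(Y ~ A * B, data = d)
summary(fit)          # F and p for the A, B and A:B terms
TukeyHSD(fit, "A")    # pairwise comparisons among the levels of A
```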
null
CC BY-SA 2.5
null
2010-10-12T07:39:17.673
2010-10-12T07:39:17.673
null
null
930
null
3507
2
null
3497
5
null
If you can assume that the rows of your data matrix are exchangeable, then your modelling strategy should work well. Your method should be fine under the conditions stated by Gaetan Lion before. The reason why your method will work (given that the exchangeability assumption holds) is that it can be taken as a special case of the bootstrap in which you re-sample N rows of the big sample, fit a model and store the coefficients, repeat this M times (in traditional bootstrap terminology your M is equivalent to B), and take the average of the M coefficient estimates. You can also look at it from a permutation-testing viewpoint as well. But all these results hold only if the (hard to verify) exchangeability assumption holds. If the exchangeability assumption doesn't hold, the answer becomes a bit complicated. Probably you need to take care of the subgroups in your data which are exchangeable and perform your process conditioned on these subgroups. Basically, hierarchical modeling.
null
CC BY-SA 2.5
null
2010-10-12T07:49:44.903
2010-10-12T07:57:05.120
2010-10-12T07:57:05.120
1307
1307
null
3508
2
null
3489
12
null
Just to add to other responses, here is a brief recap on terminology. For any biometric or classification system, the main performance indicator is the [receiver operating characteristic](http://en.wikipedia.org/wiki/Receiver_operating_characteristic) (ROC) curve, which is a plot of the true acceptance rate (TAR = 1 − FRR, the false rejection rate) against the false acceptance rate (FAR), which is computed as the number of false instances classified as positive among all intruder and impostor cases. The closer the curve is to the top left corner, the better it is (this corresponds to maximizing the so-called area under the curve, or AUC). Generally, such curves are generated offline from a database of previous records. In the biometric literature, FAR is sometimes defined such that the "impostor" makes zero effort to obtain a match. Here, I'm roughly quoting [Biometrics](http://books.google.fr/books?id=fefutm-Dhy0C&printsec=frontcover&dq=biometrics+boulgouris&source=bl&ots=etdSIp3ahh&sig=0D8m0AG45RQSIMmGLU7nFbGo4LE&hl=fr&ei=lmW0TOfwMNTa4AbG_8mgDQ&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBgQ6AEwAA), from Boulgouris et al. (chap. 26). So, you may choose your cutoff by using standard ROC tools (search for "ROC analysis" on [Rseek](http://www.rseek.org/)) to find the best compromise between FAR and TAR (this is not necessarily the cutoff that maximizes the AUC, it depends on your objectives). Now, as has been highlighted in other responses, this compromise between FAR and TAR leads to similar interpretations in psychophysics, classification, or biomedical science. It's just a matter of terminology, and we often speak of hit rate vs. false alarm rate, or sensitivity vs. specificity.

Note: Here are some pictures to complement other responses, which I hope will help you to draw the parallel with decision theory and statistical testing. Let an individual be facing a two-alternative choice experiment. Depending on the location of his internal criterion, his response may lead to a Hit or a False Alarm (response > criterion), or alternatively a Correct Rejection or a Miss (response < criterion). The corresponding probabilistic response curves resemble your situation. ![alt text](https://i.stack.imgur.com/2yunb.png) Most classical textbooks on statistics provide a table similar to the one below, where we describe the probabilities of incorrectly rejecting a null hypothesis ($\alpha$) vs. falsely "accepting" the null ($\beta$) when in fact the alternative is true. ![alt text](https://i.stack.imgur.com/Cwhns.png) This leads to quite the same picture as with the psychophysical threshold model: ![alt text](https://i.stack.imgur.com/KRJ3B.png)
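A hedged base-R sketch of how FAR and TAR can be computed from scores and turned into an ROC curve (the score distributions here are simulated, not from any real biometric system):

```
# Hedged sketch: manual ROC from simulated genuine-user and impostor scores.
set.seed(7)
genuine  <- rnorm(500, mean = 2)   # match scores for genuine users
impostor <- rnorm(500, mean = 0)   # match scores for impostors

thresholds <- sort(unique(c(genuine, impostor)))
TAR <- sapply(thresholds, function(thr) mean(genuine  >= thr))  # 1 - FRR
FAR <- sapply(thresholds, function(thr) mean(impostor >= thr))

plot(FAR, TAR, type = "l",
     xlab = "False acceptance rate (FAR)", ylab = "True acceptance rate (TAR)")
abline(0, 1, lty = 2)   # chance-level reference line
```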
null
CC BY-SA 2.5
null
2010-10-12T08:19:36.063
2010-10-16T09:07:13.943
2010-10-16T09:07:13.943
930
930
null
3509
2
null
3496
7
null
Previous replies are detailed enough, but here is an intuitive way of looking at why $P(A)$ (i.e. the denominator in Bayes' theorem) is broken into two cases. It is hard to say much about $P(A)$ without any knowledge of whether the email is ham or spam. You are correct that "cheese" appears in spam as well as in ham, but if you look at the probability of the appearance of "cheese" given that the email is ham ($P(A|B)$, where $B$ stands for ham), you can definitely say a lot about it. At least in my case, I don't receive a lot of spam containing "cheese", therefore in my case $P(A|B)$ will be high (say 90%). Similarly, $P(A|\neg B)$ will be low in my case, as not a lot of spam contains the word "cheese". Basically, we try to look at the occurrence of the event of interest (here A) partitioned into two disjoint events, $B$ and $\neg B$. If we partition A into two separate events, we can say more about the conditional probabilities $P(A|B)$ and $P(A|\neg B)$. In order to get the total probability we also need to weight these conditional probabilities by the probabilities of the events on which we condition, i.e. $P(B)$ and $P(\neg B)$. Therefore the final expression is
$$P(A) = P(A|B)\cdot P(B)+P(A|\neg B)\cdot P(\neg B)$$
null
CC BY-SA 2.5
null
2010-10-12T08:30:55.770
2010-10-12T08:30:55.770
null
null
1307
null
3510
2
null
3458
15
null
It's important to bear in mind that there's no one algorithm that's always better than others. As stated by Wolpert and Macready, "any two algorithms are equivalent when their performance is averaged across all possible problems." (See [Wikipedia](http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization) for details.) For a given application, the "best" one is generally one that is most closely aligned to your application in terms of the assumptions it makes, the kinds of data it can handle, the hypotheses it can represent, and so on. So it's a good idea to characterise your data according to criteria such as:

- Do I have a very large data set or a modest one?
- Is the dimensionality high?
- Are variables numerical (continuous/discrete) or symbolic, or a mix, and/or can they be transformed if necessary?
- Are variables likely to be largely independent or quite dependent?
- Are there likely to be redundant, noisy, or irrelevant variables?
- Do I want to be able to inspect the model generated and try to make sense of it?

By answering these, you can eliminate some algorithms and identify others as potentially relevant, and then maybe end up with a small set of candidate methods that you have intelligently chosen as likely to be useful. Sorry not to give you a simple answer, but I hope this helps nonetheless!
null
CC BY-SA 2.5
null
2010-10-12T08:56:16.850
2010-10-12T08:56:16.850
null
null
1436
null
3511
1
3513
null
12
8881
I have programmed a logistic regression using the [IRLS algorithm](http://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares). I would like to apply a [LASSO penalization](http://en.wikipedia.org/wiki/Least_squares#LASSO_method) in order to automatically select the right features. At each iteration, the following is solved:

$$\mathbf{\left(X^TWX\right) \delta\hat\beta=X^T\left(y-p\right)}$$

Let $\lambda$ be a non-negative real number. I am not penalizing the intercept as suggested in [The Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/). Ditto for the already zero coefficients. Otherwise, I subtract a term from the right-hand side:

$$\mathbf{X^T\left(y-p\right)-\lambda\times \mathrm{sign}\left(\hat\beta\right)}$$

However, I am unsure about the modification of the IRLS algorithm. Is this the right way to do it?

---

Edit: Although I was not confident about it, here is one of the solutions I finally came up with. What is interesting is that this solution corresponds to what I now understand about LASSO. There are indeed two steps at each iteration instead of merely one:

- the first step is the same as before: we make an iteration of the algorithm (as if $\lambda=0$ in the formula for the gradient above),
- the second step is the new one: we apply soft-thresholding to each component (except for the component $\beta_0$, which corresponds to the intercept) of the vector $\beta$ obtained at the first step. This is called the Iterative Soft-Thresholding Algorithm.

$$\forall i \geq 1, \quad \beta_{i}\leftarrow\mathrm{sign}\left(\beta_{i}\right)\times\max\left(0,\,\left|\beta_{i}\right|-\lambda\right)$$
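A small R sketch of the soft-thresholding update written at the end of the question (just the thresholding step, not the asker's full IRLS code; the example coefficient vector is made up):

```
# Hedged sketch: soft-thresholding of all coefficients except the intercept.
soft_threshold <- function(beta, lambda) {
  sign(beta) * pmax(0, abs(beta) - lambda)
}

beta   <- c(1.2, 2.3, -0.4, 0.05, -1.7)   # beta[1] plays the role of the intercept
lambda <- 0.5
c(beta[1], soft_threshold(beta[-1], lambda))  # intercept untouched, small coefficients set to 0
```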
How to Apply the Iteratively Reweighted Least Squares (IRLS) Method to the LASSO Model?
CC BY-SA 4.0
null
2010-10-12T09:01:26.290
2019-08-11T13:44:03.553
2019-08-11T13:44:03.553
6244
1351
[ "logistic", "generalized-linear-model", "feature-selection", "lasso", "convex" ]
3512
2
null
3511
5
null
The LASSO loss function has a discontinuity (in its derivative) at zero along each axis, so IRLS is going to have problems with it. I have found a sequential minimal optimisation (SMO) type approach very effective, see e.g. [http://bioinformatics.oxfordjournals.org/content/19/17/2246](http://bioinformatics.oxfordjournals.org/content/19/17/2246); a version with MATLAB software is [http://bioinformatics.oxfordjournals.org/content/22/19/2348](http://bioinformatics.oxfordjournals.org/content/22/19/2348), and the software is available here: [http://theoval.cmp.uea.ac.uk/~gcc/cbl/blogreg/](http://theoval.cmp.uea.ac.uk/~gcc/cbl/blogreg/). The basic idea is to optimise the coefficients one at a time, and to test whether you cross the discontinuity one coefficient at a time, which is easy as you are performing a scalar optimisation. It may sound slow, but it is actually pretty efficient (although I expect better algorithms have been developed since - probably by Keerthi or Chih-Jen Lin, who are both leading experts in that sort of thing).
null
CC BY-SA 2.5
null
2010-10-12T09:53:32.290
2010-10-12T09:53:32.290
null
null
887
null
3513
2
null
3511
13
null
This problem is typically solved by coordinate descent ([see here](http://www.jstatsoft.org/v33/i01/paper)). This method is safer and more efficient numerically, algorithmically easier to implement, and applicable to a more general array of models (also including Cox regression). An R implementation is available in the R package [glmnet](http://cran.r-project.org/web/packages/glmnet/index.html). The code is open source (partly in C, partly in R), so you can use it as a blueprint.
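A hedged usage sketch, assuming the glmnet package is installed (the data here are simulated; `alpha = 1` selects the lasso penalty and `cv.glmnet` picks $\lambda$ by cross-validation):

```
# Hedged sketch: L1-penalised logistic regression with glmnet on made-up data.
library(glmnet)
set.seed(1)
x <- matrix(rnorm(200 * 10), 200, 10)
y <- rbinom(200, 1, plogis(x[, 1] - 0.5 * x[, 2]))   # only two informative predictors

cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 1)
coef(cvfit, s = "lambda.min")   # sparse coefficient vector at the CV-chosen lambda
```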
null
CC BY-SA 2.5
null
2010-10-12T10:47:21.650
2010-10-12T10:47:21.650
null
null
603
null
3514
1
null
null
6
15725
While most of the tests for normality work for continuous variables, is there a way to test the normality assumption for binary variables? From what I've read on Wikipedia, the K-S test can be applied to continuous variables. How do we perform normality tests for binary (or even categorical) variables?
What is the normality test for binary data?
CC BY-SA 3.0
null
2010-10-12T12:34:08.707
2011-07-22T14:35:13.750
2011-07-22T14:35:13.750
null
null
[ "normality-assumption" ]
3515
2
null
3514
13
null
There is no such thing as normality of a categorical variable. The normal distribution is a continuous distribution, so a normality assumption cannot cover a categorical outcome.
null
CC BY-SA 2.5
null
2010-10-12T12:38:31.813
2010-10-12T12:38:31.813
null
null
null
null
3516
1
null
null
7
4267
I have a binary variable (which takes values 0, 1). I have about 100k records of it. How do I determine if it follows the binomial distribution? (I'm basically trying to test for normality, and if the data are not normal, I might have to apply a transformation to get the variable into a binomial distribution.)

---

Hey, thank you folks for clearing this up. This was an effort as a prelude to cluster analysis. I also understand that normality of variables is more of a nice-to-have feature for cluster analysis and that the distance measures would be valid even otherwise. Your views?
Binomial test for a binary variable
CC BY-SA 3.0
null
2010-10-12T13:17:55.790
2012-05-25T15:33:12.343
2012-05-25T15:33:12.343
919
null
[ "binomial-distribution", "assumptions" ]
3517
2
null
3516
17
null
You cannot determine this through a statistical test, for a trivial reason and a profound reason. The trivial reason is that your data consist of $k$ ones and $n-k$ zeros with $n$ about 100k. These data conform extremely closely to a Bernoulli($k/n$) distribution. No testing is necessary. The profound reason is that you are implicitly assuming the data are independently random--but they might not be. If, for instance, they are collected by sampling a process over time, then you might be seeing long strings of $0$ followed by long strings of $1$. Modeling these as draws from a Bernoulli distribution would likely be a poor choice. Another possibility is that the values are independent but the probability of a $1$ is varying over time. (This would be an "overdispersed" Binomial model.) No transformation of $ 0, 1 $ will produce a normal distribution! Perhaps what you are hoping is that some statistic, such as the sample mean, is normally distributed. The Central Limit Theorem guarantees that, provided the values are independent and that the probabilities are not tending over time to either $0$ or $1$.
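A quick base-R illustration of the Central Limit Theorem point in the last paragraph: individual 0/1 values can never be normal, but the sample mean of many independent Bernoulli draws is approximately normally distributed.

```
# Hedged sketch: distribution of sample means of Bernoulli(p) data.
set.seed(42)
p <- 0.3
means <- replicate(5000, mean(rbinom(500, 1, p)))  # mean of 500 independent 0/1 draws, 5000 times

hist(means, breaks = 40, main = "Sample means of Bernoulli(0.3) data")
qqnorm(means); qqline(means)   # close to a straight line: approximately normal, centred near p
```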
null
CC BY-SA 2.5
null
2010-10-12T13:31:01.233
2010-10-12T13:31:01.233
null
null
919
null
3518
2
null
3516
5
null
I completely agree with @whuber -- just wanted to add: if you were to try to transform the data, how would you go about doing so? You would map 0 to some number, say -5, and 1 to some other number, say 5? So now instead of having:

```
0 0 0 1 0 1 1 0 1 0 1
```

You have:

```
-5 -5 -5 5 -5 5 5 -5 5 -5 5
```

This cannot possibly be normally distributed because you still only have two values! Each of these entries could however be Binomial(1,p), just as @whuber described [the same as Bernoulli(p)], but not Binomial(N,p), because N is never greater than 1 if you only have binary data.
null
CC BY-SA 2.5
null
2010-10-12T13:40:22.543
2010-10-12T13:40:22.543
null
null
1499
null
3519
1
3521
null
14
9891
I started to do Monte Carlo in R as a hobby, but eventually a financial analyst advised me to migrate to Matlab. I'm an experienced software developer, but a Monte Carlo beginner. I want to construct static models with sensitivity analysis, and later dynamic models. I need good libraries/algorithms to guide me. To me it seems that R has excellent libraries, and I suspect Matlab is preferred by inexperienced programmers because of the easy Pascal-like language. The R language is based on Scheme and this is hard for beginners, but not for me. If Matlab/Octave has no advantages on the numerical/library side, I would stick with R.
Is Matlab/Octave or R better suited for Monte Carlo simulation?
CC BY-SA 2.5
null
2010-10-12T13:58:35.760
2011-01-20T21:11:26.137
null
null
778
[ "r", "matlab", "monte-carlo" ]
3520
1
3523
null
63
23337
I see the concept of 'exchangeability' being used in different contexts (e.g., bayesian models) but I have never understood the term very well. - What does this concept mean? - Under what circumstances is this concept invoked and why?
Can someone explain the concept of 'exchangeability'?
CC BY-SA 2.5
null
2010-10-12T14:59:04.240
2017-11-03T23:18:45.340
2017-11-03T23:18:45.340
11887
1558
[ "bayesian", "intuition", "exchangeability" ]
3521
2
null
3519
18
null
I use both. I often prototype functions & algorithms in Matlab because, as stated, it is easier to express an algorithm in something which is close to a pure mathematical language. R does have excellent libraries. I'm still learning it, but I'm starting to leave Matlab in the dust because once you know R, it's also fairly easy to prototype functions there. However, I find that if you want algorithms to function efficiently within a production environment, it is best to move to a compiled language like C++. I have experience wrapping C++ into both Matlab and R (and Excel for that matter), but I've had a better experience with R. Disclaimer: Being a grad student, I haven't used a recent version of Matlab for my DLLs; I've been working almost exclusively in Matlab 7.1 (which is like 4 years old). Perhaps the newer versions work better, but I can think of two situations off the top of my head where a C++ DLL in the back of Matlab caused Windows XP to blue screen because I walked inappropriately outside an array's bounds -- a very hard problem to debug if your computer reboots every time you make that mistake... Lastly, the R community appears to be growing much faster and with much more momentum than the Matlab community ever had. Further, as it's free you also don't have to deal with the Godforsaken flexlm license manager. Note: Almost all of my development is in MCMC algorithms right now. I do about 90% in production in C++ with the visualization in R using ggplot2.

Update for Parallel Comments:

A fair amount of my development time right now is spent on parallelizing MCMC routines (it's my PhD thesis). I have used Matlab's parallel toolbox and Star-P's solution (which I guess is now owned by [Microsoft??](http://www.microsoft.com/pathways/star-p/) -- jeez, another one is gobbled up...). I found the parallel toolbox to be a configuration nightmare -- when I used it, it required root access to every single client node. I think they've fixed that little "bug" now, but it's still a mess. I found Star-P's solution to be elegant, but often difficult to profile. I have not used [Jacket](http://www.accelereyes.com/), but I've heard good things. I also have not used the more recent versions of the parallel toolbox which also support GPU computation. I have virtually no experience with the R parallel packages. It's been my experience that parallelizing code must occur at the C++ level where you have a finer granularity of control for task decomposition and memory/resource allocation. I find that if you attempt to parallelize programs at a high level, you often only receive a minimal speedup unless your code is trivially decomposable (also called dummy-parallelism). That said, you can even get reasonable speedups using a single line at the C++ level using OpenMP:

```
#pragma omp parallel for
```

More complicated schemes have a learning curve, but I really like where GPGPU things are going. As of JSM this year, the few people I talked to about GPU development in R quote it as being only "toes in the deep end" so to speak. But as stated, I have minimal experience -- though that should change in the near future.
null
CC BY-SA 2.5
null
2010-10-12T15:01:30.577
2010-10-12T17:30:48.863
2010-10-12T17:30:48.863
1499
1499
null
3523
2
null
3520
71
null
Exchangeability is meant to capture symmetry in a problem, symmetry in a sense that does not require independence. Formally, a sequence is exchangeable if its joint probability distribution is a symmetric function of its $n$ arguments. Intuitively it means we can swap around, or reorder, variables in the sequence without changing their joint distribution. For example, every IID (independent, identically distributed) sequence is exchangeable - but not the other way around. Every exchangeable sequence is identically distributed, though. Imagine a table with a bunch of urns on top, each containing different proportions of red and green balls. We choose an urn at random (according to some prior distribution), and then take a sample (without replacement) from the selected urn. Note that the reds and greens that we observe are NOT independent. And it is maybe not a surprise to learn that the sequence of reds and greens we observe is an exchangeable sequence. What is maybe surprising is that EVERY exchangeable sequence can be imagined this way, for a suitable choice of urns and prior distribution. (see Diaconis/Freedman (1980) "Finite Exchangeable Sequences", Ann. Prob.). The concept is invoked in all sorts of places, and it is especially useful in Bayesian contexts because in those settings we have a prior distribution (our knowledge of the distribution of urns on the table) and we have a likelihood running around (a model which loosely represents the sampling procedure from a given, fixed, urn). We observe the sequence of reds and greens (the data) and use that information to update our beliefs about the particular urn in our hand (i.e., our posterior), or more generally, the urns on the table. Exchangeable random variables are especially wonderful because if we have infinitely many of them then we have tomes of mathematical machinery at our fingertips not the least of which being de Finetti's Theorem; see Wikipedia for an introduction.
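A small simulation of the urn story above (a hypothetical two-urn setup with made-up proportions): the two draws have the same marginal distribution but are clearly not independent, which is exactly the exchangeable-but-not-IID situation described.

```
# Hedged sketch: pick an urn at random, then draw two balls without replacement.
draw_two <- function() {
  prop_red <- sample(c(0.9, 0.1), 1)                              # urn chosen with prior 1/2 each
  balls <- c(rep(1, prop_red * 10), rep(0, (1 - prop_red) * 10))  # 10 balls per urn
  sample(balls, 2)                                                # draw without replacement
}

set.seed(10)
draws <- t(replicate(1e5, draw_two()))
colMeans(draws)              # both positions have (nearly) the same marginal probability of red
cor(draws[, 1], draws[, 2])  # positive correlation: identically distributed, but not independent
```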
null
CC BY-SA 2.5
null
2010-10-12T15:42:19.113
2010-10-12T15:42:19.113
null
null
null
null
3524
2
null
3519
2
null
If your simulations will involve relatively sophisticated techniques, then R is the way to go, because it is likely that routines you'll need will be available in R, but not necessarily in matlab.
null
CC BY-SA 2.5
null
2010-10-12T16:10:12.330
2010-10-12T16:10:12.330
null
null
247
null
3525
2
null
3519
9
null
Although I almost exclusively use `R`, I really admire the profiler in `Matlab`. When your program is kind of slow you normally want to know where the bottleneck is. Matlab's profiler is a great tool for achieving this, as it tells you how much time is spent on each line of the code. At least to me, using `Rprof` is incomparably worse. I can't figure out which call is the bottleneck. Using `Rprof` you don't get information on how much time is spent on each line, but on how much time is spent in each primitive function (or so). However, a lot of the same primitive functions are called by a lot of different functions. Although I recommend `R` (because it is just great: free, extremely powerful, ...), if you know you have to profile your code a lot, Matlab is way better. And to be fair, there are multicore and parallel computing toolboxes in Matlab (though extremely pricey).
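For reference, a minimal sketch of the `Rprof` workflow being criticised here (base R only):

```
# Hedged sketch: function-level profiling with Rprof.
Rprof("profile.out")                        # start writing profiling samples to a file
x <- replicate(200, sum(sort(runif(1e4))))  # some code worth profiling
Rprof(NULL)                                 # stop profiling
head(summaryRprof("profile.out")$by.self)   # time attributed to each function, not to each line
```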
null
CC BY-SA 2.5
null
2010-10-12T16:44:23.817
2010-10-12T16:44:23.817
null
null
442
null
3526
1
null
null
10
12500
The Marascuilo procedure as described [here](http://www.itl.nist.gov/div898/handbook/prc/section4/prc474.htm) seems to be a test that addresses the issue of multiple comparisons for proportions when you want to test which specific proportions are different from each other after rejecting the null in an overall chi-square test. However, I am not very familiar with this test. So, my questions:

- What nuances (if any) should I worry about when using this test?
- I know of at least two other approaches (see below) to address the same issue. Which test is the 'better' approach?
  - Performing "partitioned chi-square" as referred to in this answer by @Brett Magill
  - Using a Holm–Bonferroni method to adjust p-values.
Has anyone used the Marascuilo procedure for comparing multiple proportions?
CC BY-SA 3.0
null
2010-10-12T16:53:32.110
2013-05-24T18:02:22.337
2017-04-13T12:44:45.783
-1
null
[ "multiple-comparisons", "chi-squared-test" ]
3527
2
null
3519
15
null
To be honest, I think any question you ask around here about R vs ... will be biased towards R. Remember that R is by far the most used [tag](https://stats.stackexchange.com/tags)!

What I do

My current working practice is to use R to prototype and use C when I need an extra boost of speed. It used to be that I would have to switch to C very quickly (again for my particular applications), but the R [multicore](http://www.rforge.net/doc/packages/multicore/multicore.html) libraries have helped delay that switch. Essentially, you make a `for` loop run in parallel with a trivial change (see the sketch below). I should mention that my applications are very computationally intensive.

Recommendation

To be perfectly honest, it really depends on exactly what you want to do. So I'm basing my answer on this statement in your question.

> I want to construct static models with sensitivity analysis, later dynamic models. Need good libraries/ algorithms that guide me

I'd imagine that this problem would be ideally suited to prototyping in R and using C when needed (or some other compiled language). On saying that, typically Monte Carlo/sensitivity analysis doesn't involve particularly advanced statistical routines - of course it may need other advanced functionality. So I think (without more information) that you could carry out your analysis in any language, but being completely biased, I would recommend R!
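A hedged example of the "trivial change" idea using `parallel::mclapply` (the successor to the multicore package); the computation is a toy stand-in:

```
# Hedged sketch: a serial loop and its parallel counterpart.
library(parallel)

serial_res   <- lapply(1:100, function(i) mean(rnorm(1e5)))
parallel_res <- mclapply(1:100, function(i) mean(rnorm(1e5)), mc.cores = 4)  # forks on Unix-alikes
```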
null
CC BY-SA 2.5
null
2010-10-12T17:22:31.393
2010-10-12T17:22:31.393
2017-04-13T12:44:49.837
-1
8
null
3528
2
null
1874
2
null
I tend to use Gaussian process models for this and similar surface estimation (Possible relevant examples [here](http://www.ece.uvic.ca/~btill/papers/learning/Evans_etal_1993.pdf) and [here](http://www.springerlink.com/content/mv8888524v86043g/)). But perhaps your question would be best asked over on [Stack Overflow?](https://stackoverflow.com/questions/343108/3d-surface-reconstruction-algorithm) Could you provide more details on your input data (contours from a surface model of MRI data?) as well as your desired outputs: scale parameter which minimizes L2 distance between two sets of contours? Volume? Distance? Which distance? The answer will probably also be specific to the programming language you're working with because, for example, I believe OpenGL has a built in function for determining the minimum distance from a point to the surface ([possible example?](http://www.nvidia.com/content/GTC/posters/2010/A15-GPU-Algorithms-for-NURBS-Minimum-Distance-and-Clearance-Computations.pdf)).
null
CC BY-SA 2.5
null
2010-10-12T18:05:37.703
2010-10-12T18:11:52.460
2017-05-23T12:39:27.620
-1
1499
null
3529
2
null
3476
13
null
ars has the right, and succinct answer. I'll add that when learning how to use matplotlib, I found the [thumbnail gallery](http://matplotlib.sourceforge.net/gallery.html#) to be really useful for finding relevant code and examples. For your case, I submitted [this boxplot example](http://matplotlib.sourceforge.net/examples/pylab_examples/boxplot_demo2.html) that shows you other functionality that could be useful (like rotating the tick mark text, adding upper Y-axis tick marks and labels, adding color to the boxes, etc.) ![alt text](https://i.stack.imgur.com/I2acK.png)
null
CC BY-SA 2.5
null
2010-10-12T18:59:33.793
2010-10-12T18:59:33.793
null
null
1080
null
3530
2
null
298
217
null
I always hesitate to jump into a thread with as many excellent responses as this, but it strikes me that few of the answers provide any reason to prefer the logarithm to some other transformation that "squashes" the data, such as a root or reciprocal. Before getting to that, let's recapitulate the wisdom in the existing answers in a more general way. Some non-linear re-expression of the dependent variable is indicated when any of the following apply:

- The residuals have a skewed distribution. The purpose of a transformation is to obtain residuals that are approximately symmetrically distributed (about zero, of course).
- The spread of the residuals changes systematically with the values of the dependent variable ("heteroscedasticity"). The purpose of the transformation is to remove that systematic change in spread, achieving approximate "homoscedasticity."
- To linearize a relationship.
- When scientific theory indicates. For example, chemistry often suggests expressing concentrations as logarithms (giving activities or even the well-known pH).
- When a more nebulous statistical theory suggests the residuals reflect "random errors" that do not accumulate additively.
- To simplify a model. For example, sometimes a logarithm can simplify the number and complexity of "interaction" terms.

(These indications can conflict with one another; in such cases, judgment is needed.)

So, when is a logarithm specifically indicated instead of some other transformation?

- The residuals have a "strongly" positively skewed distribution. In his book on EDA, John Tukey provides quantitative ways to estimate the transformation (within the family of Box-Cox, or power, transformations) based on rank statistics of the residuals. It really comes down to the fact that if taking the log symmetrizes the residuals, it was probably the right form of re-expression; otherwise, some other re-expression is needed.
- When the SD of the residuals is directly proportional to the fitted values (and not to some power of the fitted values).
- When the relationship is close to exponential.
- When residuals are believed to reflect multiplicatively accumulating errors.
- You really want a model in which marginal changes in the explanatory variables are interpreted in terms of multiplicative (percentage) changes in the dependent variable.

Finally, some non-reasons to use a re-expression:

- Making outliers not look like outliers. An outlier is a datum that does not fit some parsimonious, relatively simple description of the data. Changing one's description in order to make outliers look better is usually an incorrect reversal of priorities: first obtain a scientifically valid, statistically good description of the data and then explore any outliers. Don't let the occasional outlier determine how to describe the rest of the data!
- Because the software automatically did it. (Enough said!)
- Because all the data are positive. (Positivity often implies positive skewness, but it does not have to. Furthermore, other transformations can work better. For example, a root often works best with counted data.)
- To make "bad" data (perhaps of low quality) appear well behaved.
- To be able to plot the data. (If a transformation is needed to be able to plot the data, it's probably needed for one or more good reasons already mentioned. If the only reason for the transformation truly is for plotting, go ahead and do it--but only to plot the data. Leave the data untransformed for analysis.)
null
CC BY-SA 3.0
null
2010-10-12T18:59:34.423
2011-10-16T16:27:44.830
2011-10-16T16:27:44.830
919
919
null
3531
1
7011
null
6
494
I would like to perform reversible jump on a network model, but before arriving there, I'm wondering if there are any R packages which support reversible jump for a user specified generalized linear model or spatial-GLM? Something as simple as an RJMCMC procedure (in R) for the selection of predictors in a logistic regression would be a nice place for me to start? Does such a function exist? Through googling, I've only found [RJaCGH](http://cran.r-project.org/web/packages/RJaCGH/index.html) which appears to be a bit more complicated (and application specific) than I was hoping for.
Are there any R functions which support Reversible Jump MCMC for a GLM or SGLM?
CC BY-SA 2.5
null
2010-10-12T19:54:05.787
2011-02-09T04:39:53.350
2010-10-12T20:07:43.090
8
1499
[ "r", "bayesian", "markov-chain-montecarlo" ]
3532
1
3535
null
15
1493
When programming in R, I've used the [multicore](http://www.rforge.net/doc/packages/multicore/multicore.html) package a few times. However, I've never seen a statement about how it handles its random numbers. When I use OpenMP with C, I'm careful to use a proper parallel RNG, but with R I've assumed that something sensible happens. Can anyone confirm that something sensible does happen?

Example

From the documentation, we have

```
x <- foreach(icount(1000), .combine = "+") %do% rnorm(4)
```

How are the `rnorm`s generated?
Random numbers and the multicore package
CC BY-SA 2.5
null
2010-10-12T20:14:49.203
2010-10-14T08:11:42.007
2010-10-14T08:11:42.007
8
8
[ "r", "random-generation", "parallel-computing", "multicore" ]
3533
2
null
298
22
null
For more on whuber's excellent point about reasons to prefer the logarithm to some other transformations such as a root or reciprocal, but focussing on the unique interpretability of the regression coefficients resulting from log-transformation compared to other transformations, see: Oliver N. Keene. The log transformation is special. Statistics in Medicine 1995; 14(8):811-819. DOI:[10.1002/sim.4780140810](http://dx.doi.org/10.1002/sim.4780140810). (PDF of dubious legality available at [https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.530.9640&rep=rep1&type=pdf](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.530.9640&rep=rep1&type=pdf)). If you log the independent variable x to base b, you can interpret the regression coefficient (and CI) as the change in the dependent variable y per b-fold increase in x. (Logs to base 2 are therefore often useful as they correspond to the change in y per doubling in x, or logs to base 10 if x varies over many orders of magnitude, which is rarer). Other transformations, such as square root, have no such simple interpretation. If you log the dependent variable y (not the original question but one which several of the previous answers have addressed), then I find Tim Cole's idea of 'sympercents' attractive for presenting the results (i even used them in a paper once), though they don't seem to have caught on all that widely: Tim J Cole. Sympercents: symmetric percentage differences on the 100 log(e) scale simplify the presentation of log transformed data. Statistics in Medicine 2000; 19(22):3109-3125. DOI:[10.1002/1097-0258(20001130)19:22<3109::AID-SIM558>3.0.CO;2-F](http://www3.interscience.wiley.com/journal/75500451/abstract) [I'm so glad Stat Med stopped using [SICIs](http://en.wikipedia.org/wiki/SICI) as DOIs...]
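A tiny simulated illustration of the base-2 interpretation described above (all numbers made up): the coefficient on `log2(x)` estimates the change in y per doubling of x.

```
# Hedged sketch: regression on log2(x) and its per-doubling interpretation.
set.seed(1)
x <- runif(200, 1, 100)
y <- 3 * log2(x) + rnorm(200)   # true change in y per doubling of x is 3

fit <- lm(y ~ log2(x))
coef(fit)["log2(x)"]            # estimated change in y for each doubling of x
confint(fit)["log2(x)", ]       # and its confidence interval
```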
null
CC BY-SA 4.0
null
2010-10-12T20:26:52.947
2021-12-09T20:01:45.203
2021-12-09T20:01:45.203
321901
449
null
3534
2
null
3532
7
null
You might want to look at page 5 of this [document](http://cran.r-project.org/web/packages/multicore/multicore.pdf) and of this [document](http://cran.r-project.org/web/packages/doMC/vignettes/gettingstartedMC.pdf). By default, under R, each core sets its own seed (I seem to recall using high-precision time). NB: if you use foreach() from Revolution Computing under Windows then I suspect something sensible will not happen. Windows is not POSIX compliant, and this should pose problems when each core needs a different high-precision starting time to set its seed (unfortunately I don't have Windows handy so I can't check this empirically).
null
CC BY-SA 2.5
null
2010-10-12T20:52:14.757
2010-10-13T06:40:54.883
2010-10-13T06:40:54.883
603
603
null
3535
2
null
3532
8
null
I'm not sure how the `foreach` works (from the doMC package, I guess), but in multicore if you did something like `mclapply` the `mc.set.seed` parameter defaults to `TRUE` which gives each process a different seed (e.g. `mclapply(1:1000, rnorm)`). I assume your code is translated into something similar, i.e. it boils down to calls to `parallel` which has the same convention. But also see page 16 of the [slides](http://www.stat.umn.edu/~charlie/parallel/) by Charlie Geyer, which recommends the [rlecuyer](http://cran.r-project.org/web/packages/rlecuyer/index.html) package for parallel independent streams with theoretical guarantees. Geyer's page also has sample code in R for the different setups.
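A hedged sketch of the reproducible-streams approach in current R, using the parallel package (the successor to multicore) and the L'Ecuyer-CMRG generator rather than the rlecuyer package mentioned above:

```
# Hedged sketch: per-worker random streams with mclapply.
library(parallel)
RNGkind("L'Ecuyer-CMRG")   # generator designed for multiple independent streams
set.seed(123)

res <- mclapply(1:4, function(i) rnorm(3), mc.cores = 2, mc.set.seed = TRUE)
res   # each worker draws from its own stream rather than duplicating numbers
```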
null
CC BY-SA 2.5
null
2010-10-12T20:54:46.470
2010-10-12T20:54:46.470
null
null
251
null
3536
2
null
3526
8
null
Just a partial answer, because I've never heard of this method. From what I read in the link you provided, it seems to be a single-step procedure (much like Bonferroni, except we rework the test statistic instead of the p-value) which is likely to be too conservative. In R, there is a function `pairwise.prop.test()` which allows any correction for multiple comparisons (single-step or step-down FWER methods, or FDR-based), but it is essentially what you already suggested (and although Bonferroni is by far too conservative, it is still very much used in practice). A resampling approach, using permutation, might be interesting too. The `coin` R package provides a well-established testing framework in this respect, see §5 of [Implementing a Class of Permutation Tests: The coin Package](http://cran.r-project.org/web/packages/coin/vignettes/coin_implementation.pdf), but I never had to deal with permutation tests on categorical data in a post-hoc way. About the analysis of subdivided contingency tables, I generally consider specific associations as a guide to develop additional hypotheses (as for any unplanned comparisons), but this is another question. I generally just use visualization tools, like the mosaicplot from [Michael Friendly](http://www.math.yorku.ca/SCS/friendly.html#grcat), Pearson's residuals, and if I seek to explain specific patterns of association I use log-linear models.
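A minimal sketch of the `pairwise.prop.test()` option mentioned above, with made-up counts for four groups:

```
# Hedged sketch: overall test of equal proportions, then Holm-adjusted pairwise tests.
successes <- c(42, 55, 30, 66)
trials    <- c(100, 110, 95, 120)

prop.test(successes, trials)                                      # overall chi-square test
pairwise.prop.test(successes, trials, p.adjust.method = "holm")   # which pairs differ
```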
null
CC BY-SA 2.5
null
2010-10-12T20:56:39.273
2010-10-12T21:23:24.407
2010-10-12T21:23:24.407
930
930
null
3537
1
null
null
1
998
I have 3 factors:

- factor1 (treatment) with 2 levels (control, stress)
- factor2 (variates) with 12 levels (Var1, Var2, ..., Var12)
- factor3 (time) with 12 levels (Week1, Week2, ..., Week12)

The treatment has 3 replicates for control and 6 replicates for stress. Would this be an unbalanced design? What design would you suggest in this case? Looking forward to your response! Thanks in advance! Regards, Jacki
Unbalanced repeated measure design for the given data?
CC BY-SA 2.5
0
2010-10-12T22:33:09.020
2011-01-14T19:30:39.253
2011-01-14T19:30:39.253
449
null
[ "anova", "experiment-design", "split-plot" ]
3538
2
null
3537
2
null
Judging by your description: yes, your design is unbalanced. But you haven't specified what happens with factor2, or, when you say variates, whether you mean that you include covariates in the model as in ANCOVA. As you say repeated measures, I am assuming each of the 3 replicates in control and 6 replicates in stress has 12 measurements across the time points. It is very hard to suggest a design unless you tell us:

- What you want to infer from the experiment, i.e. the parameter(s) of interest.
- Any major reason for 3 and 6 replicates in the different treatment levels of factor1.
- Any other experiment-specific detail (may or may not be technical).

---

Update after Jackie's details: given the limited details, a wild guess of your design (pictorially) is a table whose rows are the weeks W1, W2, ..., W12 and whose columns are the varieties V1, V2, ..., V12, once under Control and once under Stress; each cell (".") holds a chlorophyll reading. I wanted to ask if all the varieties were present in control and stress, respectively. Control groups include plants not treated with NO3 and the stress group was treated with NO3. If the above design is true, then your design is a simple split-plot design with control/stress as the whole-plot treatment, and you can treat V1, V2, ..., V12 as blocks or "biological replicates" (the choice depends on your specific field and whether you want to answer questions related to varieties). Time would be the repeated measure. The correlation between the time points can be modeled in various ways; one of the popular choices is AR(1) (see the R sketch below; SAS also gives you a bunch of options in `PROC GLIMMIX`). If you want to answer questions related to how treatment (control and stress) interacts with time, you can look at the interaction term in the ANOVA analysis. Please let me know if this answers your question, or whether more details need to be added. I would be particularly interested to know the use of the variety variable; specifically, is it just a block, or do you actually want to answer questions related to the variety of the plants?
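A hedged R sketch of the AR(1) repeated-measures idea mentioned above, using the nlme package; the data frame `d` and its columns (`chlorophyll`, `treatment`, `week`, `plant`) are hypothetical, `week` is assumed to be an integer 1 to 12, and the model glosses over some split-plot subtleties for brevity:

```
# Hedged sketch: repeated measures over weeks with an AR(1) within-plant correlation.
library(nlme)

fit <- lme(chlorophyll ~ treatment * week,               # fixed effects: treatment, time, interaction
           random = ~ 1 | plant,                         # plant = the repeatedly measured unit
           correlation = corAR1(form = ~ week | plant),  # AR(1) across the 12 weekly measurements
           data = d)
anova(fit)   # tests for treatment, week and the treatment-by-time interaction
```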
null
CC BY-SA 2.5
null
2010-10-12T22:43:53.553
2010-10-13T17:37:12.843
2010-10-13T17:37:12.843
1307
1307
null
3539
1
3546
null
30
33502
Which inter-rater reliability methods are most appropriate for ordinal or interval data? I believe that "Joint probability of agreement" or "Kappa" are designed for nominal data. Whilst "Pearson" and "Spearman" can be used, they are mainly used for two raters (although they can be used for more than two raters). What other measures are suitable for ordinal or interval data, i.e. more than two raters?
Inter-rater reliability for ordinal or interval data
CC BY-SA 2.5
null
2010-10-12T22:48:56.690
2022-05-12T09:30:29.330
2011-07-29T01:01:31.163
183
1564
[ "reliability", "psychometrics", "agreement-statistics", "cohens-kappa" ]
3540
2
null
3539
6
null
The [Intraclass correlation](http://en.wikipedia.org/wiki/Intra-class_correlation_coefficient) may be used for ordinal data. But there are some caveats, primarily that the raters cannot be distinguished. For more on this and how to choose among different versions of the ICC, see: - Intraclass correlations: uses in assessing rater reliability (Shrout, Fleiss, 1979)
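A hedged example of computing an ICC in R, assuming the irr package and a made-up subjects-by-raters matrix of ordinal scores:

```
# Hedged sketch: two-way agreement ICC for a small made-up ratings matrix.
library(irr)

ratings <- cbind(r1 = c(3, 5, 2, 4, 4, 1),
                 r2 = c(3, 4, 2, 5, 4, 2),
                 r3 = c(4, 5, 1, 4, 3, 1))   # 6 subjects rated by 3 raters

icc(ratings, model = "twoway", type = "agreement", unit = "single")
```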
null
CC BY-SA 2.5
null
2010-10-12T23:11:53.597
2010-10-12T23:11:53.597
null
null
251
null
3542
1
3543
null
51
2701
I've seen various theoretical treatments of graphics, such as the [Grammar of Graphics](http://rads.stackoverflow.com/amzn/click/0387987746), but I have seen nothing equivalent with regard to tables. Over time I have developed an informal model of good practice in table design. However, I'd like to be able to provide a good reference to students. The [APA Style Manual](http://rads.stackoverflow.com/amzn/click/1557987912) has a few tips on table design, but it is only a starting point.

Question: What is a good resource that provides theoretical and practical advice on the presentation of numeric results in tables?

UPDATE: It would be particularly useful to have a good free online resource.

Note: I'm not sure if this should be community wiki. I feel as if there might be a correct answer.
What is a good resource on table design?
CC BY-SA 2.5
null
2010-10-13T01:57:30.580
2019-09-11T05:16:40.350
2010-11-11T00:44:27.600
183
183
[ "tables" ]
3543
2
null
3542
28
null
Ed Tufte has a few pages on this in his classic ["The Visual Display of Quantitative Information"](http://rads.stackoverflow.com/amzn/click/0961392142). For a much more detailed treatment, there is Jane Miller's [Chicago Guide to Writing about Numbers](http://rads.stackoverflow.com/amzn/click/0226526313). I've never seen anything else like it. It has a whole chapter on "Creating Effective Tables".
null
CC BY-SA 2.5
null
2010-10-13T02:26:22.177
2010-10-13T02:26:22.177
null
null
159
null
3544
2
null
3526
3
null
I would like to see the Marascuilo procedure used more often. Quite frequently I see people calculating the chi-square on a subset of the main table, i.e. two categories at a time, but without actually doing the partitioning correctly. As far as I understood, the reason they do it this way is that they can't bear grouping the categories, as that would make the interpretation really hard. At the end of the day it also depends on the audience, because if they don't know the procedure they might just recommend the usual Bonferroni approach.
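For reference, a minimal R sketch of the Marascuilo procedure on made-up counts (the group sizes and successes below are assumptions, not real data): each pairwise difference in proportions is compared with a critical range based on the chi-square quantile with k - 1 degrees of freedom.

```
## Hypothetical data: successes and sample sizes for k groups
successes <- c(45, 60, 30)
n         <- c(100, 110, 90)
p         <- successes / n
k         <- length(p)
crit      <- sqrt(qchisq(0.95, df = k - 1))

pairs <- combn(k, 2)
for (m in seq_len(ncol(pairs))) {
  i <- pairs[1, m]; j <- pairs[2, m]
  d <- abs(p[i] - p[j])
  r <- crit * sqrt(p[i] * (1 - p[i]) / n[i] + p[j] * (1 - p[j]) / n[j])
  cat(sprintf("groups %d vs %d: |diff| = %.3f, critical range = %.3f, significant: %s\n",
              i, j, d, r, d > r))
}
```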
null
CC BY-SA 3.0
null
2010-10-13T03:14:21.180
2013-05-24T18:02:22.337
2013-05-24T18:02:22.337
805
10229
null
3545
2
null
3542
15
null
Stephen Few's book [Show Me the Numbers: Designing Tables and Graphs to Enlighten](http://www.powells.com/biblio/62-9780970601995-0) has a couple of chapters devoted to tabular display of information. It's good and recommended, but it's not quite Grammar of Graphics if that's what you're after. Update This sounds interesting, but I haven't read it: [Handbook of tabular presentation: How to design and edit statistical tables, a style manual and case book](http://rads.stackoverflow.com/amzn/click/B0007DKT4C). (Curious to hear any comments from someone in the know ..)
null
CC BY-SA 2.5
null
2010-10-13T03:34:07.880
2010-10-13T03:39:35.697
2010-10-13T03:39:35.697
251
251
null
3546
2
null
3539
35
null
The Kappa ($\kappa$) statistic is a quality index that compares observed agreement between 2 raters on a nominal or ordinal scale with the agreement expected by chance alone (as if the raters were guessing at random). Extensions for the case of multiple raters exist (2, pp. 284–291). In the case of ordinal data, you can use the [weighted](https://en.wikipedia.org/wiki/Cohen%27s_kappa#Weighted_Kappa) $\kappa$, which basically reads as the usual $\kappa$ with off-diagonal elements contributing to the measure of agreement. Fleiss (3) provided guidelines to interpret $\kappa$ values but these are merely rules of thumb. The $\kappa$ statistic is asymptotically equivalent to the ICC estimated from a two-way random effects ANOVA, but significance tests and SEs coming from the usual ANOVA framework are no longer valid with binary data. It is better to use the bootstrap to get a confidence interval (CI). Fleiss (8) discussed the connection between weighted kappa and the intraclass correlation (ICC). It should be noted that some psychometricians don't much like $\kappa$ because it is affected by the prevalence of the object of measurement, much like predictive values are affected by the prevalence of the disease under consideration, and this can lead to paradoxical results. Inter-rater reliability for $k$ raters can be estimated with Kendall’s coefficient of concordance, $W$. When the number of items or units that are rated is $n > 7$, $k(n − 1)W \sim \chi^2(n − 1)$ (2, pp. 269–270). This asymptotic approximation is valid for moderate values of $n$ and $k$ (6), but with fewer than 20 items $F$ or permutation tests are more suitable (7). There is a close relationship between Spearman’s $\rho$ and Kendall’s $W$ statistic: $W$ can be directly calculated from the mean of the pairwise Spearman correlations (for untied observations only). The [polychoric](https://en.wikipedia.org/wiki/Polychoric_correlation) correlation (for ordinal data) may also be used as a measure of inter-rater agreement. Indeed, it allows one to - estimate what the correlation would be if ratings were made on a continuous scale, - test marginal homogeneity between raters. In fact, it can be shown that it is a special case of latent trait modeling, which allows one to relax distributional assumptions (4). For continuous (or assumed continuous) measurements, the ICC, which quantifies the proportion of variance attributable to between-subject variation, is fine. Again, bootstrapped CIs are recommended. As @ars said, there are basically two versions -- agreement and consistency -- that are applicable in the case of agreement studies (5), and that mainly differ in the way sums of squares are computed; the “consistency” ICC is generally estimated without considering the Item×Rater interaction. The ANOVA framework is useful with specific block designs where one wants to minimize the number of ratings ([BIBD](https://en.wikipedia.org/wiki/Block_design)) -- in fact, this was one of the original motivations of Fleiss's work. It is also the best way to go for multiple raters. The natural extension of this approach is called [Generalizability Theory](https://en.wikipedia.org/wiki/Generalizability_theory).
A brief overview is given in [Rater Models: An Introduction](http://www.google.fr/url?sa=t&source=web&cd=17&ved=0CEAQFjAGOAo&url=http%3A%2F%2Flegacy.samsi.info%2F200910%2Fpsycho%2Fpresentations%2FJohnson_raters.pdf&ei=KXu1TKy3IMv44Aaju7SgDQ&usg=AFQjCNGzJnLQqrpaS8x0MiV2YO79w0SYyg&sig2=YgINb6uPkwAwz8eHgLWxng), otherwise the standard reference is Brennan's book, reviewed in [Psychometrika 2006 71(3)](https://doi.org/10.1007/s11336-005-1366-y). As for general references, I recommend chapter 3 of Statistics in Psychiatry, from Graham Dunn (Hodder Arnold, 2000). For a more complete treatment of reliability studies, the best reference to date is > Dunn, G (2004). Design and Analysis of Reliability Studies. Arnold. See the review in the International Journal of Epidemiology. A good online introduction is available on John Uebersax's website, [Intraclass Correlation and Related Methods](https://www.john-uebersax.com/stat/icc.htm); it includes a discussion of the pros and cons of the ICC approach, especially with respect to ordinal scales. Relevant R packages for two-way assessment (ordinal or continuous measurements) are found in the [Psychometrics](https://cran.r-project.org/web/views/Psychometrics.html) Task View; I generally use either the [psy](https://cran.r-project.org/web/packages/psy/index.html), [psych](https://cran.r-project.org/web/packages/psych/index.html), or [irr](https://cran.r-project.org/web/packages/irr/index.html) packages. There's also the [concord](https://cran.r-project.org/web/packages/concord/index.html) package but I never used it. For dealing with more than two raters, the [lme4](https://cran.r-project.org/web/packages/lme4/index.html) package is the way to go for it allows to easily incorporate random effects, but most of the reliability designs can be analysed using the `aov()` because we only need to estimate variance components. References - J Cohen. Weighted kappa: Nominal scale agreement with provision for scales disagreement of partial credit. Psychological Bulletin, 70, 213–220, 1968. - S Siegel and Jr N John Castellan. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, Second edition, 1988. - J L Fleiss. Statistical Methods for Rates and Proportions. New York: Wiley, Second edition, 1981. - J S Uebersax. The tetrachoric and polychoric correlation coefficients. Statistical Methods for Rater Agreement web site, 2006. Available at: http://john-uebersax.com/stat/tetra.htm. Accessed February 24, 2010. - P E Shrout and J L Fleiss. Intraclass correlation: Uses in assessing rater reliability. Psychological Bulletin, 86, 420–428, 1979. - M G Kendall and B Babington Smith. The problem of m rankings. Annals of Mathematical Statistics, 10, 275–287, 1939. - P Legendre. Coefficient of concordance. In N J Salkind, editor, Encyclopedia of Research Design. SAGE Publications, 2010. - J L Fleiss. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement, 33, 613-619, 1973.
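To make some of this concrete, here is a short, hypothetical sketch with the irr package (one of the packages mentioned above); the ratings are made up, with rows as subjects and columns as raters:

```
## Hypothetical ordinal ratings: rows = subjects, columns = raters
library(irr)

two <- data.frame(r1 = c(2, 3, 1, 4, 3, 2),
                  r2 = c(2, 4, 1, 4, 2, 2))
kappa2(two, weight = "squared")            # weighted kappa for two raters

four <- cbind(two,
              r3 = c(3, 3, 1, 5, 3, 2),
              r4 = c(2, 4, 2, 4, 3, 1))
kendall(four, correct = TRUE)              # Kendall's W for several raters
icc(four, model = "twoway", type = "agreement")  # "agreement" version of the ICC
```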
null
CC BY-SA 4.0
null
2010-10-13T06:12:24.867
2022-05-12T09:30:29.330
2022-05-12T09:30:29.330
79696
930
null
3547
2
null
3463
4
null
How do you define correlation for non-stationary time series? Do you plan to take the correlation of the differences of these time series? If not, I suggest you look at cointegration rather than correlation (cf. Granger, etc.).
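As a rough illustration of both options, a hedged R sketch (it assumes `x` and `y` are two price-level series of equal length): correlate the differences, or test for cointegration with the Phillips-Ouliaris test from the tseries package.

```
## x and y are assumed to be two (non-stationary) price series of equal length
library(tseries)

cor(diff(x), diff(y))   # correlation of the changes/returns

po.test(cbind(x, y))    # Phillips-Ouliaris test; null hypothesis = no cointegration
```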
null
CC BY-SA 2.5
null
2010-10-13T06:37:12.110
2010-10-13T06:37:12.110
null
null
1709
null
3548
2
null
555
10
null
ANOVA can be used with categorical explanatory variables (factors) that take more than 2 values (levels), and it gives a single basic test that the mean response is the same at every level. This avoids the problems of carrying out multiple pairwise t-tests between those levels: - With multiple t-tests at a fixed 5% significance level, roughly 5% of them would give spurious significant results even if all the means were equal. - These tests are not independent of each other: comparing level A with level B is connected with comparing A to C, as A's data are used in both tests. It is better to use [contrasts](http://en.wikipedia.org/wiki/Contrast_%28statistics%29) for the specific combinations of factor levels you want to test.
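A small illustrative sketch in R (made-up data): an overall F test for a three-level factor, followed by one planned contrast instead of all pairwise t-tests.

```
## Made-up example: one factor with three levels
set.seed(1)
dat <- data.frame(group = gl(3, 10, labels = c("A", "B", "C")),
                  y     = rnorm(30, mean = rep(c(0, 0.5, 1), each = 10)))

summary(aov(y ~ group, data = dat))        # overall test of equal means

## Planned contrast: A versus the average of B and C
contrasts(dat$group) <- cbind(AvsBC = c(2, -1, -1))
summary.lm(aov(y ~ group, data = dat))     # t-test for the single contrast
```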
null
CC BY-SA 2.5
null
2010-10-13T08:53:32.477
2010-10-29T13:26:46.320
2010-10-29T13:26:46.320
1077
1077
null
3549
1
3555
null
88
82996
In a multiple linear regression, why is it possible to have a highly significant F statistic (p<.001) but have very high p-values on all the regressor's t tests? In my model, there are 10 regressors. One has a p-value of 0.1 and the rest are above 0.9 --- For dealing with this problem see the [follow-up question](https://stats.stackexchange.com/questions/3561/dealing-with-correlated-regressors).
Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
CC BY-SA 3.0
null
2010-10-13T09:40:17.420
2020-10-17T02:24:14.720
2020-10-17T02:24:14.720
11887
1077
[ "regression", "hypothesis-testing", "t-test", "multicollinearity", "faq" ]
3550
2
null
1610
5
null
(a bit of a joke answer I invented just a minute ago) - A first class person thinks he is always right. - A second class person thinks he is always wrong. --- - The first class person can only make a type I error (because sometimes he will be wrong). - The second class person can only make a type II error (because sometimes he will be right).
null
CC BY-SA 2.5
null
2010-10-13T10:15:00.153
2010-10-13T10:15:00.153
null
null
1219
null
3551
2
null
3516
1
null
ALL binary variables follow the binomial distribution (a single observation being the Bernoulli special case), provided that the probability of success (the probability of observing a 1) does not change and that all their instances are independent. A rule of thumb says that the binomial distribution can be fairly well approximated by the normal distribution when n*p>30, with n = number of instances and p = probability of success. So, I would argue that your question is really about testing for independence and for a constant success rate. For the former, you can use Bradley's runs test: [http://www.itl.nist.gov/div898/handbook/eda/section3/eda35d.htm](http://www.itl.nist.gov/div898/handbook/eda/section3/eda35d.htm) (I suppose it is also known under other names). For the latter, I have only a rough answer: you can split your sample into k subgroups and then build a test using the k proportions of success in the subgroups.
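For the independence part, here is a minimal hand-rolled sketch of a runs test for a 0/1 sequence (normal approximation; the input vector below is simulated, not real data):

```
## Wald-Wolfowitz style runs test for a binary (0/1) vector x
runs_test <- function(x) {
  n1 <- sum(x == 1); n0 <- sum(x == 0); n <- n1 + n0
  runs <- 1 + sum(diff(x) != 0)              # observed number of runs
  mu   <- 2 * n1 * n0 / n + 1                # expected runs under independence
  v    <- 2 * n1 * n0 * (2 * n1 * n0 - n) / (n^2 * (n - 1))
  z    <- (runs - mu) / sqrt(v)
  c(runs = runs, expected = mu, z = z, p.value = 2 * pnorm(-abs(z)))
}

set.seed(42)
runs_test(rbinom(200, 1, 0.3))
```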
null
CC BY-SA 2.5
null
2010-10-13T11:07:31.753
2010-10-13T11:07:31.753
null
null
1219
null
3552
1
null
null
5
762
I've got a model that I've developed in R, but also need to express in SAS. It's a double GLM, that is, I fit both the mean and (log-)variance as linear combinations of the predictors: $E(Y) = X_1'b_1$ $\log V(Y) = X_2'b_2$ where Y has a normal distribution, $X_1$ and $X_2$ are the vectors of independent variables, and $b_1$ and $b_2$ are the coefficients to be estimated. $X_1$ and $X_2$ can be the same, but need not be. I can fit this in R using gls() and the varComb and varIdent functions. I've also written a custom function that maximises the likelihood using optim/nlminb, and verified that it returns the same output as gls. I would now like to translate this into SAS. I know that I can use PROC MIXED:

```
proc mixed;
  class x2;
  model y = x1;
  repeated / group = x2;
run;
```

However, this only gives me what I want if I have 1 variable in the /GROUP option. If I enter 2 or more variables, MIXED can only handle this by treating each individual combination of levels as a distinct group (that is, it takes the Cartesian product). For example, if I have 2 variables in $X_2$, with 3 and 4 levels respectively, MIXED will fit 12 parameters for the variance. What I want is for the log-variance to be additive in the variables specified, i.e. 6 parameters. Is there a way of doing this in MIXED or any other proc? I could probably code something in PROC NLP, but I'd really prefer not to.
Replicating R model in SAS
CC BY-SA 2.5
null
2010-10-13T11:31:21.963
2010-10-14T03:36:05.547
2010-10-13T12:34:00.090
null
1569
[ "r", "sas" ]
3553
2
null
3549
42
null
This happens when the predictors are highly correlated. Imagine a situation where there are only two predictors with very high correlation. Individually, they both also correlate closely with the response variable. Consequently, the F-test has a low p-value (it is saying that the predictors together are highly significant in explaining the variation in the response variable). But the t-test for each predictor has a high p-value because after allowing for the effect of the other predictor there is not much left to explain.
null
CC BY-SA 2.5
null
2010-10-13T11:45:32.477
2010-10-13T11:45:32.477
null
null
159
null
3554
2
null
3352
1
null
If $x_1=0$, then $Y \sim (\beta_0,\sigma^2)$ in model 1 and $Y \sim (0,\sigma^2)$ in model 2. If $x_1=1$, then $Y \sim (\beta_0+\beta_1,\sigma^2)$ in model 1 and $Y \sim (\beta_1,\sigma^2)$ in model 2. Look, for example, at the first line: is the mean $\beta_0$, or is it a constant equal to zero?
null
CC BY-SA 2.5
null
2010-10-13T11:46:32.707
2010-10-13T11:46:32.707
null
null
1219
null
3555
2
null
3549
61
null
As Rob mentions, this occurs when you have highly correlated variables. The standard example I use is predicting weight from shoe size. You can predict weight equally well with the right or left shoe size. But together it doesn't work out. Brief simulation example:

```
RSS <- 3:10                              # right shoe sizes
LSS <- rnorm(length(RSS), RSS, 0.1)      # left shoe sizes - almost identical to RSS
cor(LSS, RSS)                            # correlation ~ 0.99

weights <- 120 + rnorm(length(RSS), 10 * RSS, 10)

## Fit a joint model
m <- lm(weights ~ LSS + RSS)

## The p-value for the overall F-statistic is very small,
## but neither LSS nor RSS is individually significant
summary(m)

## Fitting RSS or LSS separately gives a significant result
summary(lm(weights ~ LSS))
```
null
CC BY-SA 2.5
null
2010-10-13T12:29:11.170
2010-10-13T12:29:11.170
null
null
8
null
3556
1
null
null
13
4636
Can anyone report on their experience with an adaptive kernel density estimator? (There are many synonyms: adaptive | variable | variable-width, KDE | histogram | interpolator ...) [Variable kernel density estimation](http://en.wikipedia.org/wiki/Variable_kernel_density_estimation) says "we vary the width of the kernel in different regions of the sample space. There are two methods ..." actually, more: neighbors within some radius, KNN nearest neighbors (K usually fixed), Kd trees, multigrid... Of course no single method can do everything, but adaptive methods look attractive. See for example the nice picture of an adaptive 2d mesh in [Finite element method](http://en.wikipedia.org/wiki/Finite_element_method). I'd like to hear what worked / what didn't work for real data, especially >= 100k scattered data points in 2d or 3d. Added 2 Nov: here's a plot of a "clumpy" density (piecewise x^2 * y^2), a nearest-neighbor estimate, and Gaussian KDE with Scott's factor. While one (1) example doesn't prove anything, it does show that NN can fit sharp hills reasonably well (and, using KD trees, is fast in 2d, 3d ...) ![alt text](https://i.stack.imgur.com/ulkUd.png)
Adaptive kernel density estimators?
CC BY-SA 2.5
null
2010-10-13T14:22:54.863
2011-11-26T20:26:33.493
2011-01-17T23:59:29.293
449
557
[ "kde", "k-nearest-neighbour" ]
3557
2
null
3549
10
null
A keyword to search for would be "collinearity" or "multicollinearity". This can be detected using diagnostics like [Variance Inflation Factors](http://en.wikipedia.org/wiki/Variance_inflation_factor) (VIFs) or methods such as those described in the textbook ["Regression Diagnostics: Identifying Influential Data and Sources of Collinearity"](http://rads.stackoverflow.com/amzn/click/0471691178) by Belsley, Kuh and Welsch. VIFs are much easier to understand, but they can't deal with collinearity involving the intercept (i.e., predictors that are almost constant by themselves or in a linear combination) - conversely, the BKW diagnostics are far less intuitive but can deal with collinearity involving the intercept.
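For instance, a quick hedged sketch of the VIF diagnostic using the vif() function from the car package, applied to the shoe-size simulation in the earlier answer (it assumes the objects weights, LSS and RSS from that example exist):

```
## Assumes weights, LSS and RSS from the shoe-size simulation above
library(car)

fit <- lm(weights ~ LSS + RSS)
vif(fit)   # values far above ~10 indicate severe collinearity
```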
null
CC BY-SA 2.5
null
2010-10-13T14:38:26.963
2010-10-13T14:38:26.963
null
null
1352
null
3558
2
null
1337
10
null
This one's from the xkcd forums: > Three statisticians are out hunting. Bird flies up out of the bush, and the first statistician aims and fires. Unfortunately for them, he missed, the bullet going about a foot below the bird. The second one fires, but the bullet goes about a foot above the bird. The third statistician puts down his gun and says, "All right! We got him!..."
null
CC BY-SA 2.5
null
2010-10-13T15:07:55.360
2010-10-13T15:07:55.360
null
null
144
null
3559
1
3562
null
72
85563
I have `SPSS` output for a logistic regression model. The output reports two measures for the model fit, `Cox & Snell` and `Nagelkerke`. So as a rule of thumb, which of these $R^2$ measures would you report as the model fit? Or, which of these fit indices is the one that is usually reported in journals? --- Some Background: The regression tries to predict the presence or absence of a bird (capercaillie) from some environmental variables (e.g., steepness, vegetation cover, ...). Unfortunately, the bird did not appear very often (35 hits to 468 misses) so the regression performs rather poorly. Cox & Snell is .09, Nagelkerke, .23. The subject is environmental sciences or ecology.
Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
CC BY-SA 3.0
null
2010-10-13T16:12:57.630
2022-04-08T13:17:46.170
2022-04-07T19:24:49.050
11887
442
[ "logistic", "goodness-of-fit", "r-squared", "pseudo-r-squared" ]
3560
2
null
3559
3
null
I would prefer the Nagelkerke, as this measure attains 1 when the model fits perfectly, giving the reader a sense of how far your model is from a perfect fit. The Cox & Snell measure does not attain 1 even for a perfect fit, and hence interpreting a value of 0.09 is a bit harder. See [Pseudo RSquared](http://www.ats.ucla.edu/stat/mult_pkg/faq/general/Psuedo_RSquareds.htm) for an explanation of the various types of pseudo-$R^2$ measures.
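If it helps, both values are easy to compute by hand from a fitted glm; a hedged sketch (the mtcars model below is just a stand-in for the capercaillie data):

```
## Illustrative logistic fit (mtcars stands in for the real data)
fit  <- glm(am ~ wt + hp, data = mtcars, family = binomial)
fit0 <- update(fit, . ~ 1)                 # intercept-only model
n    <- nobs(fit)

LR <- as.numeric(2 * (logLik(fit) - logLik(fit0)))
cox_snell  <- 1 - exp(-LR / n)
nagelkerke <- cox_snell / (1 - exp(as.numeric(2 * logLik(fit0)) / n))
c(CoxSnell = cox_snell, Nagelkerke = nagelkerke)
```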
null
CC BY-SA 2.5
null
2010-10-13T16:36:53.413
2010-10-13T16:36:53.413
null
null
null
null
3561
1
3569
null
27
16311
In a multiple linear regression with highly correlated regressors, what is the best strategy to use? Is it a legitimate approach to add the product of all the correlated regressors?
Dealing with correlated regressors
CC BY-SA 2.5
null
2010-10-13T17:34:00.057
2010-10-14T07:24:54.397
2010-10-13T19:36:47.703
1352
1077
[ "regression", "multicollinearity" ]
3562
2
null
3559
94
null
Normally I wouldn't report $R^2$ at all. Hosmer and Lemeshow, in their textbook [Applied Logistic Regression](http://rads.stackoverflow.com/amzn/click/0471356328) (2nd Ed.), explain why: > In general, [$R^2$ measures] are based on various comparisons of the predicted values from the fitted model to those from [the base model], the no data or intercept only model and, as a result, do not assess goodness-of-fit. We think that a true measure of fit is one based strictly on a comparison of observed to predicted values from the fitted model. [At p. 164.] Concerning various ML versions of $R^2$, the "pseudo $R^2$" stat, they mention that it is not "recommended for routine use, as it is not as intuitively easy to explain," but they feel obliged to describe it because various software packages report it. They conclude this discussion by writing, > ...low $R^2$ values in logistic regression are the norm and this presents a problem when reporting their values to an audience accustomed to seeing linear regression values. ... Thus [arguing by reference to running examples in the text] we do not recommend routine publishing of $R^2$ values with results from fitted logistic models. However, they may be helpful in the model building state as a statistic to evaluate competing models. [At p. 167.] My experience with some large logistic models (100k to 300k records, 100 - 300 explanatory variables) has been exactly as H & L describe. I could achieve relatively high $R^2$ with my data, up to about 0.40. These corresponded to classification error rates between 3% and 15% (false negatives and false positives, balanced, as confirmed using 50% hold-out datasets). As H & L hinted, I had to spend a lot of time disabusing the client (a sophisticated consultant himself, who was familiar with $R^2$) concerning $R^2$ and getting him to focus on what mattered in the analysis (the classification error rates). I can warmly recommend describing the results of your analysis without reference to $R^2$, which is more likely to mislead than not.
null
CC BY-SA 2.5
null
2010-10-13T17:46:39.857
2010-10-13T17:46:39.857
null
null
919
null
3563
2
null
3561
1
null
I'm no expert on this, but my first thought would be to run a principal component analysis on the predictor variables, then use the resulting principal components to predict your dependent variable.
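A rough sketch of that idea (principal component regression) on simulated, deliberately collinear data; this is only an illustration, not a recommendation for how many components to keep:

```
## Simulated predictors, the third nearly collinear with the first
set.seed(1)
X <- matrix(rnorm(200), 100, 2)
X <- cbind(X, X[, 1] + rnorm(100, sd = 0.1))
y <- drop(X %*% c(1, 2, 1)) + rnorm(100)

pc  <- prcomp(X, scale. = TRUE)
dat <- data.frame(y = y, pc$x[, 1:2])      # keep the first two components
summary(lm(y ~ PC1 + PC2, data = dat))
```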
null
CC BY-SA 2.5
null
2010-10-13T17:48:25.090
2010-10-13T18:00:55.567
2010-10-13T18:00:55.567
364
364
null
3564
1
3581
null
5
6863
I would like to apply KDE to inventory replenishment, but I am not sure how to use the analysis to predict future sales based on past sales. Given a set of data and having applied KDE to it (probably using a Gaussian distribution), how do I make a prediction about the future? Thanks for any help! Please let me know if I can clarify - I'm only starting to pick up the language for talking about KDE ... I'm glad to do reading on my own - pointers to any resources would be welcome.
How to use Kernel Density Estimation for Prediction?
CC BY-SA 2.5
null
2010-10-13T17:48:45.233
2015-04-23T05:53:04.807
2015-04-23T05:53:04.807
9964
1574
[ "time-series", "forecasting", "smoothing", "kernel-smoothing" ]
3565
2
null
3561
-1
null
One of the ways to reduce the effects of correlation is to standardize the regressors. In standardizing, each regressor has its mean subtracted and is then divided by its standard deviation. Specifically, if $X$ is the regression matrix: $$x_{ij}^{standardized}=\frac {x_{ij}-\overline{x_{.j}}} {s_{j}}$$ This is not a remedy, but it is definitely a step in the right direction.
null
CC BY-SA 2.5
null
2010-10-13T17:48:53.860
2010-10-13T18:09:04.173
2010-10-13T18:09:04.173
1307
1307
null
3566
2
null
3559
32
null
Both indices are measures of strength of association (i.e. whether any predictor is associated with the outcome, as for an LR test), and can be used to quantify predictive ability or model performance. A single predictor may have a significant effect on the outcome but it might not necessarily be so useful for predicting individual response, hence the need to assess model performance as a whole (wrt. the null model). The Nagelkerke $R^2$ is useful because it has a maximum value of 1.0, as Srikant said. This is just a normalized version of the $R^2$ computed from the likelihood ratio, $R^2_{\text{LR}}=1-\exp(-\text{LR}/n)$, which has connection with the Wald statistic for overall association, as originally proposed by Cox and Snell. Other indices of predictive ability are Brier score, the C index (concordance probability or ROC area), or Somers' D, the latter two providing a better measure of predictive discrimination. The only assumptions made in logistic regression are that of linearity and additivity (+ independence). Although many global goodness-of-fit tests (like the Hosmer & Lemeshow $\chi^2$ test, but see my [comment](https://stats.stackexchange.com/questions/3559/logistic-regression-which-pseudo-r-squared-measure-is-the-one-to-report-cox-s/3574#3574) to @onestop) have been proposed, they generally lack power. For assessing model fit, it is better to rely on visual criteria (stratified estimates, nonparametric smoothing) that help to spot local or global departure between predicted and observed outcomes (e.g. non-linearity or interaction), and this is largely detailed in Harrell's [RMS handout](https://hbiostat.org/rms). On a related subject (calibration tests), Steyerberg ([Clinical Prediction Models](http://www.clinicalpredictionmodels.org/), 2009) points to the same approach for assessing the agreement between observed outcomes and predicted probabilities: > Calibration is related to goodness-of-fit, which relates to the ability of a model to fit a given set of data. Typically, there is no single goodness-of-fit test that has good power against all kinds of lack of fit of a prediction model. Examples of lack of fit are missed non-linearities, interactions, or an inappropriate link function between the linear predictor and the outcome. Goodness-of-fit can be tested with a $\chi^2$ statistic. (p. 274) He also suggests to rely on the absolute difference between smoothed observed outcomes and predicted probabilities either visually, or with the so-called Harrell's E statistic. More details can be found in Harrell's book, Regression Modeling Strategies (pp. 203-205, 230-244, 247-249). For a more recent discussion, see also Steyerberg, EW, Vickers, AJ, Cook, NR, Gerds, T, Gonen, M, Obuchowski, N, Pencina, MJ, and Kattan, MW (2010). [Assessing the Performance of Prediction Models, A Framework for Traditional and Novel Measures](http://www.epibiostat.ucsf.edu/courses/RoadmapK12/SIGS/Steyerberg_Epidemiol_2010.pdf). Epidemiology, 21(1), 128-138.
null
CC BY-SA 4.0
null
2010-10-13T18:02:06.783
2022-03-18T18:34:41.897
2022-03-18T18:34:41.897
4253
930
null
3567
2
null
3561
11
null
You can use principal components or ridge regression to deal with this problem. On the other hand, if you have two variables that are correlated highly enough to cause problems with parameter estimation, then you could almost certainly drop either one of the two without losing much in terms of prediction--because the two variables carry the same information. Of course, that only works when the problem is due to two highly correlated independents. When the problem involves more than two variables that are together nearly collinear (any two of which may have only moderate correlations), you'll probably need one of the other methods.
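As a quick, hedged illustration of the ridge option, using MASS::lm.ridge on simulated collinear predictors (the data and the lambda grid below are made up):

```
## Simulated, nearly collinear predictors
library(MASS)
set.seed(1)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.05)
y  <- 1 + x1 + x2 + rnorm(100)

fit <- lm.ridge(y ~ x1 + x2, lambda = seq(0, 10, by = 0.1))
select(fit)   # HKB, L-W and GCV suggestions for the ridge constant
```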
null
CC BY-SA 2.5
null
2010-10-13T18:12:33.317
2010-10-13T19:44:22.127
2010-10-13T19:44:22.127
485
485
null
3568
2
null
1610
10
null
I'll try not to be redundant with other responses (although it overlaps a bit with what J. M. already suggested), but I generally like showing the following two pictures: ![alt text](https://i.stack.imgur.com/9TEBZ.jpg) ![alt text](https://i.stack.imgur.com/YBAL1.png)
null
CC BY-SA 2.5
null
2010-10-13T18:43:21.297
2010-10-13T18:43:21.297
null
null
930
null
3569
2
null
3561
16
null
Principal components make a lot of sense... mathematically. However, I'd be wary of simply using some mathematical trick in this case and hoping that I don't need to think about my problem. I'd recommend thinking a little about what kind of predictors I have, what the independent variable is, why my predictors are correlated, whether some of my predictors are actually measuring the same underlying reality (if so, whether I can just work with a single measurement and which of my predictors would be best for this), what I am doing the analysis for - if I'm not interested in inference, only in prediction, then I could actually leave things just as they are, as long as future predictor values are similar to past ones.
null
CC BY-SA 2.5
null
2010-10-13T19:25:05.967
2010-10-13T19:25:05.967
null
null
1352
null
3570
2
null
3559
18
null
I found Tue Tjur's short [paper "Coefficients of Determination in Logistic Regression Models - A New Proposal: The Coefficient of Discrimination" (2009, The American Statistician)](http://pubs.amstat.org/doi/abs/10.1198/tast.2009.08210) on various proposals for a coefficient of determination in logistic models quite enlightening. He does a good job highlighting pros and cons - and of course offers a new definition. Very much recommended (though I have no favorite myself).
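Tjur's coefficient of discrimination is also trivially easy to compute: it is the mean fitted probability among the observed 1s minus the mean among the observed 0s. A hedged sketch (using a stand-in model, not the paper's data):

```
## Stand-in logistic fit
fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)
p   <- fitted(fit)
y   <- fit$y
D   <- mean(p[y == 1]) - mean(p[y == 0])   # Tjur's coefficient of discrimination
D
```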
null
CC BY-SA 4.0
null
2010-10-13T19:33:11.767
2018-07-12T21:29:56.463
2018-07-12T21:29:56.463
1352
1352
null
3571
2
null
3561
3
null
Here is another thought that is inspired by Stephan's [answer](https://stats.stackexchange.com/questions/3561/dealing-with-correlated-regressors/3569#3569): If some of your correlated regressors are meaningfully related (e.g., they are different measures of intelligence i.e., verbal, math etc) then you can create a single variable that measures the same variable using one of the following techniques: - Sum the regressors (appropriate if the regressors are components of a whole, e.g., verbal IQ + math IQ = Overall IQ) - Average of the regressors (appropriate if the regressors are measuring the same underlying construct e.g., size of left shoe, size of right shoe to measure length of feet) - Factor analysis (to account for errors in measurements and to extract a latent factor) You can then drop all the correlated regressors and replace them with the one variable that emerges from the above analysis.
null
CC BY-SA 2.5
null
2010-10-13T19:43:22.190
2010-10-13T19:43:22.190
2017-04-13T12:44:45.783
-1
null
null
3572
2
null
3561
2
null
I was about to say much the same thing as Stephan Kolassa above (so have upvoted his answer). I'd only add that sometimes multicollinearity can be due to using [extensive variables](http://en.wikipedia.org/wiki/Intensive_and_extensive_properties) which are all highly correlated with some measure of size, and things can be improved by using intensive variables, i.e. dividing everything through by some measure of size. E.g. if your units are countries, you might divide by population, area, or GNP, depending on context. Oh - and to answer the second part of the original question: I can't think of any situation when adding the product of all the correlated regressors would be a good idea. How would it help? What would it mean?
null
CC BY-SA 2.5
null
2010-10-13T19:48:14.617
2010-10-14T07:24:54.397
2010-10-14T07:24:54.397
449
449
null
3573
2
null
3564
5
null
I would have thought that KDE bears little if any relationship to predicting future sales based on past sales. It sounds more like [time series analysis](http://en.wikipedia.org/wiki/Time_series) to me, though that's really not my area.
null
CC BY-SA 2.5
null
2010-10-13T19:54:25.117
2010-10-13T19:54:25.117
null
null
449
null
3574
2
null
3559
9
null
I was also going to say 'neither of them', so I've upvoted whuber's answer. As well as criticising R^2, Hosmer & Lemeshow did propose an alternative measure of goodness-of-fit for logistic regression that is sometimes useful. This is based on dividing the data into (say) 10 groups of equal size (or as near as possible) by ordering on the predicted probability (or equivalently, the linear predictor), then comparing the observed to expected number of positive responses in each group and performing a chi-squared test. This 'Hosmer-Lemeshow goodness-of-fit test' is implemented in most statistical software packages.
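A minimal hand-rolled version of that statistic, purely for illustration (here on a stand-in glm; dedicated implementations exist in several packages):

```
## Stand-in logistic fit; 10 groups formed by deciles of predicted probability
fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)
p   <- fitted(fit)
y   <- fit$y
g   <- 10
grp <- cut(p, breaks = quantile(p, probs = seq(0, 1, length.out = g + 1)),
           include.lowest = TRUE)

obs   <- tapply(y, grp, sum)       # observed positives per group
expct <- tapply(p, grp, sum)       # expected positives per group
n_g   <- tapply(y, grp, length)

HL <- sum((obs - expct)^2 / (expct * (1 - expct / n_g)))
pchisq(HL, df = g - 2, lower.tail = FALSE)   # approximate p-value
```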
null
CC BY-SA 2.5
null
2010-10-13T20:08:11.973
2010-10-13T20:08:11.973
null
null
449
null
3575
1
3576
null
5
16484
I've got a data table like: ``` ID Low Color Med Color High Color 1 234 123 324 2 4 432 3423 ``` The rows are widgets, the columns are color levels. Would you call this table "widgets by color level" or "color levels by widget"?
Is it "rows by columns" or "columns by rows"?
CC BY-SA 2.5
null
2010-10-13T20:45:25.290
2021-02-01T13:30:41.447
2021-02-01T13:30:41.447
101426
1531
[ "terminology" ]
3576
2
null
3575
9
null
"That depends." Rows are usually considered observations, and columns are variables. So I would say widgets by color level in your context. But it really depends on which are your [dependent and independent variables](http://en.wikipedia.org/wiki/Dependent_and_independent_variables) (or how you're interpreting the data).
null
CC BY-SA 2.5
null
2010-10-13T20:50:40.067
2010-10-13T20:50:40.067
null
null
5
null
3577
2
null
3575
5
null
Of course you can view this table either way by transposing it. Conventionally, in a database rows represent objects and columns contain their attributes, whence this presentation would typically be viewed as a list of widgets, not a list of color levels.
null
CC BY-SA 2.5
null
2010-10-13T20:51:02.110
2010-10-13T20:51:02.110
null
null
919
null
3578
2
null
3575
3
null
Typically, we talk about a r(ow) X (c)olumn matrix, from Linear Algebra. So, a matrix with 2 rows and 3 columns is a 2 X 3 matrix. By that logic, I'd call your data frame a "Widgets by Color" table.
null
CC BY-SA 2.5
null
2010-10-13T20:54:42.053
2010-10-13T20:54:42.053
null
null
485
null
3579
2
null
3575
2
null
for a table like that, I say it the same way I'd say "n by k" for a matrix with n rows and k columns (i.e. rows first).
null
CC BY-SA 2.5
null
2010-10-13T21:46:38.637
2010-10-13T21:46:38.637
null
null
805
null
3581
2
null
3564
7
null
You can use conditional kernel density estimation to obtain the density of sales at time $t+h$ conditional on the values of sales at times $t, t-1, t-2, \dots$ This gives you a density forecast rather than a point forecast. The problem is that the conditioning is difficult in a density setting when the number of conditioning variables is more than 2. See [this paper](http://www.jstor.org/stable/1390887) for a discussion of the basic idea. An alternative procedure that imposes more assumptions (but allows more conditioning variables) is to fit an additive autoregression such as described in [Chen and Tsay (1993)](http://www.jstor.org/stable/2290787) and then use kde on the residuals to obtain the forecast densities. However, I suspect that both of these are more complicated than what you really want. I suggest you read a textbook on demand forecasting such as [Levenbach and Cleary (2006)](http://rads.stackoverflow.com/amzn/click/0534262686).
null
CC BY-SA 2.5
null
2010-10-13T22:49:26.513
2010-10-13T22:49:26.513
null
null
159
null
3582
2
null
3556
-1
null
Loess/lowess is basically a variable KDE method, with the width of the kernel being set by the nearest-neighbour approach. I've found that it works pretty well, certainly much better than any fixed-width model when the density of data points varies markedly. One thing to be aware of with KDE and multi-dimensional data is the curse of dimensionality. Other things being equal, there are far fewer points within a set radius when p ~ 10, than when p ~ 2. This may not be a problem for you if you only have 3d data, but it's something to keep in mind.
null
CC BY-SA 2.5
null
2010-10-14T01:16:51.910
2010-10-14T01:16:51.910
null
null
1569
null
3583
2
null
3552
4
null
Rather than the code you present, I assume you're doing something like `class x21 x22` followed by the `repeated` clause `group=x21*x22` to end up with 12 parameters. This is the only option I'm aware of within SAS, i.e. I don't think you can get a straightforward stratification of the variation across the combined levels of two variables. But, unless I'm missing something, you should be able to re-code your variable into a combined `x3` to achieve this. Also, just to check -- I assumed that your `x2` variable is prepared such that it holds the appropriate exponential values for your log-linear variance model. Otherwise, you may want to check out the `local` option to the repeated statement. For example, the following model in R:

```
wgt <- varExp(form = ~ x)
fit <- gls(y ~ x, weights = wgt, data = dat)
```

corresponds to something like the following in SAS:

```
proc mixed data=dat;
  model y = x / solution;
  repeated / local=exp(x);
run;
```

The covariance parameter estimates will differ by a factor of 2 between R and SAS. This is because the R version models $var(\epsilon_{ij}) = \sigma^2 e^{ 2bx_{i} }$ (see `?varExp`). I'm guessing you're processing your input and not using varExp because this isn't easy to accomplish in R if you have factor variables, but I thought I'd mention it in case it's helpful.
null
CC BY-SA 2.5
null
2010-10-14T03:22:37.420
2010-10-14T03:36:05.547
2010-10-14T03:36:05.547
251
251
null
3584
1
3585
null
10
1350
I have a dataset asking people whether they have been to certain places (e.g. A, B, C, D), and they can make more than one choice; a specimen is then taken from their nose to see whether they are infected with a certain disease. I need to find the relative risk of getting infected for someone going to a given place. I can only think of logistic regression right now; are there any other suggestions? Thanks.
How to deal with survey question with multiple response?
CC BY-SA 2.5
null
2010-10-14T03:36:34.627
2010-10-14T20:37:16.073
2010-10-14T20:37:16.073
1352
588
[ "logistic" ]
3585
2
null
3584
2
null
You can still use logistic regression because your outcome is dichotomous, infected vs not-infected. I would simply take a dummy-variable approach, with "did not visit" as the reference category for each place (i.e. for each place you have a variable coded 1 if the person visited that place and 0 if they did not). If you transform the beta coefficients to odds ratios (i.e. exponentiate the log odds), the coefficient for location A is interpreted as the odds ratio of infection for visiting location A versus not visiting it, controlling for the other places visited. Also note that multicollinearity is a concern in this approach (e.g. if many of the people who travel to A also travel to B, the standard errors of both coefficients will be inflated).
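A toy sketch of that approach on simulated data (all variable names and effect sizes below are made up):

```
## Simulated example: four 0/1 visit indicators and a 0/1 infection outcome
set.seed(1)
n <- 500
dat <- data.frame(visited_A = rbinom(n, 1, 0.3),
                  visited_B = rbinom(n, 1, 0.4),
                  visited_C = rbinom(n, 1, 0.2),
                  visited_D = rbinom(n, 1, 0.1))
dat$infected <- rbinom(n, 1, plogis(-2 + 1.0 * dat$visited_A + 0.5 * dat$visited_B))

fit <- glm(infected ~ visited_A + visited_B + visited_C + visited_D,
           data = dat, family = binomial)
exp(cbind(OR = coef(fit), confint(fit)))   # odds ratios with 95% CIs
```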
null
CC BY-SA 2.5
null
2010-10-14T04:40:09.423
2010-10-14T04:40:09.423
null
null
1036
null
3586
1
3593
null
2
1537
I am trying to calculate the relative risk of testing positive for people aged >25 versus <=25, and here is the result.

```
$data
      Negative Positive Total
>25        115       11   126
<=25       117        3   120
Total      232       14   246

$measure
     risk ratio with 95% C.I.
      estimate      lower    upper
>25  1.0000000         NA       NA
<=25 0.2863636 0.08189025 1.001391

$p.value
     two-sided
      midp.exact fisher.exact chi.square
>25           NA           NA         NA
<=25  0.03815237   0.05147711 0.03500415

$correction
[1] FALSE

attr(,"method")
[1] "Unconditional MLE & normal approximation (Wald) CI"
```

One thing that puzzles me a lot is why I get a statistically significant result from the chi-square test (I don't use Fisher's exact test because NONE OF THE expected values is smaller than 5), but the confidence interval for the risk ratio includes 1. How can I explain this? Thanks.

### CORRECTION

Sorry for the mistake: I rechecked the data and NONE of the expected cell counts is smaller than 5.
Inconsistency between Chi-sq and CI Estimation using Wald test
CC BY-SA 2.5
null
2010-10-14T05:14:57.183
2010-10-15T06:16:16.390
2010-10-14T07:09:09.693
588
588
[ "chi-squared-test", "relative-risk" ]
3587
2
null
3586
0
null
I am not sure what the $H_0$ is in the case of the hypothesis test. Usually it is $H_0: OR=1$. If that is the case, your situation can arise for the following reason: the asymptotics you are comparing might be very different in the two cases. The sampling distributions of the test statistics for the relative risk and for the OR can approach their asymptotic distributions at different rates, hence the discrepancy. But if $H_0: RR=1$, then: as you mention yourself, one of the cell counts is < 5. This condition should hold for all the cells for the chi-square test as well ([AFAIK](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test)). Because of this, the asymptotic chi-square distribution may not be completely accurate. The p-value may be calculated based on the normal approximation whereas the CI may be calculated based on the chi-square approximation, but this is pure guesswork. Which package are you using? The p-value and the CI should agree, as they are calculated under the same $H_0$ (as you expect). The main reason for the discrepancy may be the asymptotic nature of the Wald test. No one can tell when the asymptotics hold; all you can do is hope for the best and assume your sample is "good enough", so that with a "large enough" sample the asymptotics are approximately true. When such a situation arises, it is better to trust a non-parametric test like Fisher's exact test, because the parametric tests are based on asymptotic assumptions. The FET, being non-parametric (you are basically counting all possible tables that can occur as extreme cases, conditional on the observed table), is less restrictive and hence more trustworthy. Your point is valid that the FET is not trustworthy when a cell count is < 5, but in that case the chi-square distribution is not trustworthy either.
null
CC BY-SA 2.5
null
2010-10-14T06:34:56.423
2010-10-14T06:53:23.313
2010-10-14T06:53:23.313
1307
1307
null
3588
2
null
3556
7
null
The article *G. R. Terrell; D. W. Scott (1992). "Variable kernel density estimation". Annals of Statistics 20: 1236–1265*, cited at the end of the Wikipedia article you yourself cite, clearly states that unless the observation space is very sparse, the variable kernel method is not recommended on the basis of (local and global) root mean squared error for Gaussian-distributed random variables: (through theoretical arguments) they cite the figure of $n\leq 450$ ($n$ is the sample size) and (through bootstrapping results) $p\geq 4$ ($p$ is the number of dimensions) as the settings in which the variable kernel method becomes competitive with fixed-width ones (judging from your question, you are not in these settings). The intuition behind these results is that if you are not in very sparse settings, then the local density simply does not vary enough for the gain in bias to outweigh the loss in efficiency (and hence the AMISE of the variable-width kernel increases relative to the AMISE of the fixed-width one). Also, given the large sample size you have (and the small dimension), the fixed-width kernel will already be very local, diminishing any potential gains in terms of bias.
null
CC BY-SA 2.5
null
2010-10-14T06:52:05.923
2010-10-14T06:52:05.923
null
null
603
null
3589
1
3605
null
10
32308
In a problem I am working on, I have two random variables, X and Y. I need to figure out how closely correlated the two of them are, but they are of different dimensions. The rank of the row space of X is 4350, and the rank of the row space of Y is substantially larger, in the tens of thousands. Both X and Y have the same number of columns. I need a measure of correlation between the two variables, and Pearson's r requires X and Y to have equal dimension (at least R requires the two r.v.'s to be). Do I have any hope of doing a correlation between these two, or should I find some way of pruning off observations from Y? ``` EDIT ``` Adding information from the comments, which should be in the question. I suppose I forgot to mention this. X and Y are stock prices. Company X has been public for a much shorter time period than Y. I wanted to tell how correlated the prices of X and Y are. I could definitely get a correlation for the period of time that X and Y both exist. I wanted to know if knowing the stock prices for several extra years of Y that X did not exist yielded me any additional information.
Correlation between two variables of unequal size
CC BY-SA 3.0
null
2010-10-14T06:56:42.463
2017-03-03T08:41:00.493
2017-03-03T08:41:00.493
11887
1118
[ "time-series", "correlation", "missing-data", "finance" ]
3590
2
null
2948
8
null
Gephi implements the Louvain Modularity method: [http://wiki.gephi.org/index.php/Modularity](http://wiki.gephi.org/index.php/Modularity) cheers
null
CC BY-SA 2.5
null
2010-10-14T07:15:50.350
2010-10-14T07:15:50.350
null
null
null
null
3591
2
null
3586
1
null
I think suncoolsu is on the right track, though RR = 1 if and only if OR = 1, so those null hypotheses are in fact one and the same. Pearson's chi-square test can be derived as a score test. In finite samples, score tests are, in general, better behaved than Wald tests, but less good than likelihood-ratio tests. If you want your CIs and p-values to look more consistent, you may be best switching to a different (more accurate) way of calculating the CI. I see the R package you're using offers a small-sample adjustment or a bootstrap method as alternatives to the standard Wald method.
null
CC BY-SA 2.5
null
2010-10-14T07:44:10.447
2010-10-14T07:44:10.447
null
null
449
null
3592
1
3604
null
7
352
I'm wondering about this question. Short version: how to adequately compare the effect of a reclassification of the same subjects on survival. Long version: I have one cancer cohort that was sorted into TNM classes a long time ago. The definition of the classes has now been updated and I would like to know if, say, class 1 of the old classification has a better/worse survival than class 1 of the updated classification. My problem is that the cohort stays the same, only the sorting into classes changes. Mini example: patients 1,2 are in class I_old and patients 3,4 in class II_old; patient 1 stays in class I_new but patient 2 goes to class II_new, patient 3 stays in II_new, patient 4 changes to I_new. If I want to compare the effect of the reclassification in class I, I would have to compare the survival of patients 1 and 2 versus the survival of patients 1 and 4. So patient 1 is in both of the groups. I doubt that standard procedures like the logrank test are adequate here. But what can be done? I have a very vague idea about proportional hazards regression with a time-varying covariate (namely an indicator for "before" and "after" the update) or a frailty model, but cannot tell at all if I'm going in the right direction. What do you think? psj
Survival analysis, one cohort, two classifications
CC BY-SA 2.5
null
2010-10-14T07:45:21.910
2010-11-11T23:24:58.520
2010-11-11T23:24:58.520
449
1573
[ "hypothesis-testing", "survival", "non-independent" ]
3593
2
null
3586
4
null
The Wald test for contingency tables is known to be misleading or conservative and the general advice is to prefer the Likelihood Ratio Test:

```
> library(vcd)
> mat <- matrix(c(115, 117, 11, 3), 2, 2)
> assocstats(mat)$chisq_tests
                      X^2 df   P(> X^2)
Likelihood Ratio 4.730771  1 0.02962760
Pearson          4.444973  1 0.03500415
```

Wald-type tests are of the form $\frac{\hat{\theta} - \theta_0}{SE(\hat{\theta})}$ with the assumption that the difference is normal, which is often untenable with smaller samples or low counts in frequency tables. An intuitive reason for discounting the Wald test is that you're using the available data to estimate both $\hat\theta$ and the standard error to conduct the test. Contrast this with the LRT which uses only one estimate from the data. In a similar vein, I have [another answer](https://stats.stackexchange.com/questions/359/the-trinity-of-tests-in-maximum-likelihood-what-to-do-when-faced-with-contradict) that tries to explain why the Wald CI can be kooky at times and why the LRT should generally be preferred. You might also be interested in a paper that explored some of the ways in which the Wald test can be misleading in these types of analyses: - Wald's Test as Applied to Hypotheses in Logit Analysis
null
CC BY-SA 2.5
null
2010-10-14T07:45:42.830
2010-10-15T06:16:16.390
2017-04-13T12:44:25.243
-1
251
null
3594
2
null
3589
10
null
So the problem is one of missing data (not all Y have a corresponding X, where correspondence is operationalized via time points). I don't think there is much more to do here than to throw away the Y you don't have an X for and calculate the correlation on the complete pairs. You may want to read up on financial time series, though I don't have a good reference handy at this point (ideas, anyone?). Stock prices often exhibit time-varying volatility, which can be modeled, e.g., by [GARCH](http://en.wikipedia.org/wiki/GARCH). It is conceivable that your two time series X and Y exhibit positive correlations during periods of low volatility (when the economy grows, all stock prices tend to increase), but negative correlations when overall volatility is high (on 9/11, airlines tanked while money fled to safer investments). So just calculating an overall correlation may be too dependent on your observation time frame. UPDATE: I think you may want to look at [VAR (vector autoregressive)](http://en.wikipedia.org/wiki/Vector_autoregression) models.
null
CC BY-SA 2.5
null
2010-10-14T08:59:31.170
2010-10-14T13:21:45.070
2010-10-14T13:21:45.070
1352
1352
null
3595
1
3597
null
28
15815
Currently I have to analyze approximately 20M records and build prediction models. So far I have tried out Statistica, SPSS, RapidMiner and R. Among these, Statistica seems to be the most suitable for data mining and RapidMiner's user interface is also very handy, but it seems that Statistica, RapidMiner and SPSS are suitable only for smaller datasets. Could anyone recommend a good tool for large datasets? Thank you!
Statistics and data mining software tools for dealing with large datasets
CC BY-SA 2.5
null
2010-10-14T10:28:15.937
2014-12-22T15:47:25.553
2010-10-14T17:54:48.267
null
315
[ "large-data", "software", "data-mining" ]
3596
2
null
2061
3
null
[http://datajournalism.stanford.edu/](http://datajournalism.stanford.edu/) : Video on visualisation
null
CC BY-SA 2.5
null
2010-10-14T10:31:18.257
2010-10-14T10:31:18.257
null
null
183
null
3597
2
null
3595
19
null
I'll second @suncoolsu's comment: the dimensionality of your data set is not the only criterion that should orient you toward a specific software. For instance, if you're just planning to do unsupervised clustering or use PCA, there are several dedicated tools that cope with large data sets, as commonly encountered in genomic studies. Now, R (64 bits) handles large data pretty well, and you still have the option to use disk storage instead of RAM access; see the CRAN Task View [High-Performance and Parallel Computing with R](http://cran.r-project.org/web/views/HighPerformanceComputing.html). A standard GLM will easily accommodate 20,000 obs. (but see also [speedglm](http://cran.r-project.org/web/packages/speedglm/index.html)) within reasonable time, as shown below:

```
> require(MASS)
> n <- 20000
> X <- mvrnorm(n, mu=c(0,0), Sigma=matrix(c(1,.8,.8,1), 2, 2))
> df <- cbind.data.frame(X, grp=gl(4, n/4), y=sample(c(0,1), n, rep=TRUE))
> system.time(glm(y ~ ., data=df))
   user  system elapsed
  0.361   0.018   0.379
```

To give a more concrete illustration, I used R to process and analyse large genetic data (800 individuals x 800k [SNPs](http://www.google.fr/url?sa=t&source=web&cd=2&ved=0CCIQFjAB&url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FSingle-nucleotide_polymorphism&rct=j&q=single%20nucleotide%20polymorphisms&ei=hDu4TKyRLJDp4gai8MGNDg&usg=AFQjCNGPTlkC2hY6vz7EXSbJe4_YYrumgw&sig2=sweuWH-AMVYdmg_NT2uX8Q&cad=rja)), where the main statistical model was a stratified GLM with several covariates (2 min); that was made possible thanks to the efficient R and C code available in the [snpMatrix](http://www.google.fr/url?sa=t&source=web&cd=1&ved=0CBUQFjAA&url=http%3A%2F%2Fwww.bioconductor.org%2Fpackages%2F2.3%2Fbioc%2Fhtml%2FsnpMatrix.html&rct=j&q=snpMatrix&ei=Qjy4TNDJOIXG4AbD1JHbDQ&usg=AFQjCNFJM_G_MFWKEDrIILpdWh8AVATn4g&sig2=wMAoR3l7SB41EG-Lvx9MOQ&cad=rja) package (in comparison, the same kind of model took about 8 min using dedicated C++ software, [plink](http://www.google.fr/url?sa=t&source=web&cd=1&ved=0CBUQFjAA&url=http%3A%2F%2Fpngu.mgh.harvard.edu%2F~purcell%2Fplink%2F&rct=j&q=plink&ei=kDy4TMmgAZTe4AbPwcm9Dg&usg=AFQjCNHI9oFtR2YOxmlkKaGIMkTnNxTbSA&sig2=yWlRajpLg9eCIIOKoqy5hA&cad=rja)). I also worked on a clinical study (12k patients x 50 variables of interest) and R fits my needs there too. Finally, as far as I know, the [lme4](http://cran.r-project.org/web/packages/lme4/index.html) package is the only software that allows one to fit mixed-effects models with unbalanced and large data sets (as is the case in large-scale educational assessment). Stata/SE is another software that can handle [large data sets](http://www.stata.com/order/new/corp/difference.html). SAS and SPSS are file based software, so they will handle large volumes of data. A comparative review of software for data mining is available in [Data Mining Tools: Which One is Best for CRM](http://www.information-management.com/specialreports/20060321/1049954-1.html). For visualization, there are also plenty of options; maybe a good start is [Graphics of large datasets: visualizing a million](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-32906-2) ([reviewed](http://www.jstatsoft.org/v17/b01/paper) in the JSS by P Murrell), and all related threads on this site.
null
CC BY-SA 2.5
null
2010-10-14T11:15:57.903
2010-10-15T11:39:14.457
2010-10-15T11:39:14.457
930
930
null
3598
2
null
3589
1
null
Given the extra information in your comments I'd recommend looking at two correlations. The first would be the common time periods that the companies were both around. So, if one was around 2 years earlier you'd just drop that data and look at the rest. The second would be the relative time periods. In the second one you're not correlating actual time but time measured since the company went public. The former would be strongly influenced by general economic forces shared within the same time period. The latter would be influenced by properties shared by companies as they change after the IPO.
null
CC BY-SA 2.5
null
2010-10-14T11:45:25.467
2010-10-14T19:09:49.620
2010-10-14T19:09:49.620
601
601
null
3599
1
3607
null
7
176
Could anyone give me hints as to a model framework that can be used in the following setting: The outcome A is dichotomous. I want to investigate the effect of a continuous variable B and a continuous variable C on A in a longitudinal setting. So far so good. The problem is that C depends on B, in that values of B above a threshold (unfortunately we do not know the threshold, and the threshold itself might not be constant over time) will change C. The change in C will furthermore reduce B in the next measurement. How can I decipher the isolated effects of B and C on A? Would a mixed logistic regression model be OK? I.e., with t = time: A ~ B*t + C*t + B*C*t Any other ideas? //M
Model suggestion
CC BY-SA 2.5
null
2010-10-14T12:36:05.257
2010-10-14T19:09:39.820
2010-10-14T19:09:39.820
1291
1291
[ "modeling", "causality" ]
3600
2
null
3589
2
null
@Jeromy Anglim specified this correctly. Having the extra information when only one of the time series existed would provide no value here. And in principle, the data should be sampled at the same time for it to be meaningful using conventional correlation measures. As a more general problem, I would add that there are techniques to deal with irregularly spaced time series data. You can search for "irregularly spaced time series correlation". Some of the recent work has been done on ["Realized Volatility and Correlation"](http://www.ssc.upenn.edu/~fdiebold/papers/paper29/temp.pdf) (Andersen, Bollerslev, Diebold, and Labys 1999) using high-frequency data.
null
CC BY-SA 2.5
null
2010-10-14T13:31:19.233
2010-10-14T13:31:19.233
null
null
5
null