Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8129
|
2
| null |
8089
|
7
| null |
Recall (in combination with precision) is generally used in areas where one is primarily interested in finding the Positives. Examples of such areas are performance marketing or (as already suggested by chl's link) information retrieval.
So:
If you are primarily interested in finding the negatives, "True Negative Rate" (as already suggested by chl) is the way to go. But don't forget to look at a "precision for focus on negatives"-metric (i.e. $\frac{TN}{TN + FN}$), because otherwise the "True Negative Rate" can be optimized by setting the prediction to "Negative" for all data points.
If you are interested in optimizing recall for both negatives AND positives, you should look at "Accuracy" (see again chl's link). But beware of class skew (i.e. you have many more positives than negatives or vice versa ... in this case one can "optimize" accuracy by setting the prediction to the majority class for all data points).
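For concreteness, a minimal R sketch (with made-up confusion-matrix counts) of the quantities discussed above:
```
# Hypothetical 2x2 confusion-matrix counts
TP <- 40; FP <- 10; FN <- 5; TN <- 45

recall    <- TP / (TP + FN)                  # sensitivity / true positive rate
precision <- TP / (TP + FP)
tnr       <- TN / (TN + FP)                  # "recall for negatives"
npv       <- TN / (TN + FN)                  # "precision for focus on negatives"
accuracy  <- (TP + TN) / (TP + FP + FN + TN)

c(recall = recall, precision = precision, TNR = tnr, NPV = npv, accuracy = accuracy)
```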
| null |
CC BY-SA 2.5
| null |
2011-03-10T14:38:23.600
|
2011-03-10T14:51:01.483
|
2011-03-10T14:51:01.483
|
264
|
264
| null |
8130
|
1
| null | null |
11
|
783
|
I am a software designer by trade and I am working on a project for a client, and I would like to make sure that my analysis is statistically sound.
Consider the following: We have n advertisements (n < 10), and we simply want to know which ad performs the best. Our ad server will randomly serve one of these ads. Success is if a user clicks on the ad -- our server keeps track of that.
Given: Confidence level: 95%
Question: What is the estimated sample size (how many total ads must we serve), and why? (Remember, I am a dummy.)
Thanks
|
Sample size required to determine which of a set of advertisements has the highest click through rate
|
CC BY-SA 2.5
| null |
2011-03-10T14:52:20.337
|
2011-03-15T02:41:01.220
|
2011-03-11T08:06:25.987
|
183
|
3658
|
[
"anova",
"sample-size",
"t-test",
"rule-of-thumb"
] |
8131
|
2
| null |
8106
|
37
| null |
That the author has forced someone as thoughtful as you to have to ask a question like this is a compelling illustration of why the practice -- still way too common -- of confining the reporting of regression model results to a table like this is so unacceptable.
- You can, as pointed out, try to transform the logit coefficient into some meaningful indication of the effect being estimated for the predictor in question but that's cumbersome and doesn't convey information about the precision of the prediction, which is usually pretty important in a logistic regression model (on voting in particular).
- Also, the use of multiple asterisks to report "levels" of significance reinforces the misconception that p-values are some meaningful index of effect size ("wow--that one has 3 asterisks!!"); for crying out loud, w/ N's of 10,000 to 20,000, completely trivial differences will be "significant" at p < .001 blah blah.
- There is absolutely no need to mystify in this way. The logistic regression model is an equation that can be used (through determinate calculation or better still simulation) to predict probability of an outcome conditional on specified values for predictors, subject to measurement error. So the researcher should report what the impact of each predictor of interest is on the probability of the outcome variable of interest, & associated CI, as measured in units the practical importance of which can readily be grasped. To assure ready grasping, the results should be graphically displayed. Here, for example, the researcher could report that being a rural as opposed to an urban voter increases the likelihood of voting Republican, all else equal, by X pct points (I'm guessing around 17 in 2000; "divide by 4" is a reasonable heuristic -- see the sketch after this list) +/- x% at 0.95 level of confidence-- if that's something that is useful to know.
- The reporting of pseudo R^2 is also a sign that the modeler is engaged in statistical ritual rather than any attempt to illuminate. There are scores of ways to compute "pseudo R^2"; one might complain that the one used here is not specified, but why bother? All are next to meaningless. The only reason anyone uses pseudo R^2 is that they or the reviewer who is torturing them learned (likely 25 or more yrs ago) that OLS linear regression is the holy grail of statistics & thinks the only thing one is ever trying to figure out is "variance explained." There are plenty of defensible ways to assess the adequacy of overall model fit for logistic analysis, and likelihood ratio conveys meaningful information for comparing models that reflect alternative hypotheses. King, G. How Not to Lie with Statistics. Am. J. Pol. Sci. 30, 666-687 (1986).
- If you read a paper in which reporting is more or less confined to a table like this don't be confused, don't be intimidated, & definitely don't be impressed; instead be angry & tell the researcher he or she is doing a lousy job (particularly if he or she is polluting your local intellectual environment w/ mysticism & awe--amazing how many completely mediocre thinkers trick smart people into thinking they know something just b/c they can produce a table that the latter can't understand). For smart, & temperate, expositions of these ideas, see King, G., Tomz, M. & Wittenberg., J. Making the Most of Statistical Analyses: Improving Interpretation and Presentation. Am. J. Pol. Sci. 44, 347-361 (2000); and Gelman, A., Pasarica, C. & Dodhia, R. Let's Practice What We Preach: Turning Tables into Graphs. Am. Stat. 56, 121-130 (2002).
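A small, hedged illustration of the "divide by 4" heuristic mentioned in the list above (the coefficient value is made up):
```
# "Divide by 4": a logit coefficient b changes the predicted probability by at
# most b/4 per unit change in x, because the logistic density p(1-p) peaks at 1/4.
b_rural <- 0.68                      # hypothetical logit coefficient for "rural"
b_rural / 4                          # upper bound on the change in Pr(vote Republican)
# [1] 0.17  -- roughly 17 percentage points, as in the example above

# Exact change at a chosen baseline probability p0:
p0 <- 0.5
plogis(qlogis(p0) + b_rural) - p0
```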
| null |
CC BY-SA 3.0
| null |
2011-03-10T15:03:04.673
|
2017-03-02T09:41:19.690
|
2017-03-02T09:41:19.690
|
138830
|
11954
| null |
8132
|
2
| null |
8106
|
6
| null |
Let me just stress the importance of what rolando2 and dmk38 both noted: significance is commonly misread, and there is a high risk of that happening with that tabular presentation of results.
Paul Schrodt [recently](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1661045) offered a nice description of the issue:
>
Researchers find it nearly impossible to adhere to the correct interpretation of the significance test. The p-value tells you only the likelihood that you would get a result under the [usually] completely unrealistic conditions of the null hypothesis. Which is not what you want to know—you usually want to know the magnitude of the effect of an independent variable, given the data. That’s a Bayesian question, not a frequentist question. Instead we see—constantly—the p-value interpreted as if it gave the strength of association: this is the ubiquitous Mystical Cult of the Stars and P-Values which permeates our journals.(fn) This is not what the p-value says, nor will it ever.
In my experience, this mistake is almost impossible to avoid: even very careful analysts who are fully aware of the problem will often switch modes when verbally discussing their results, even if they’ve avoided the problem in a written exposition. And let’s not even speculate on the thousands of hours and gallons of ink we’ve expended correcting this in graduate papers.
(fn) The footnote also informs on another issue, mentioned by dmk38: “[the ubiquitous Mystical Cult of the Stars and P-Values] supplanted the earlier—and equally pervasive—Cult of the Highest R2, demolished… by [King (1986)](http://gking.harvard.edu/files/mist.pdf).”
| null |
CC BY-SA 2.5
| null |
2011-03-10T15:34:15.527
|
2011-03-10T15:34:15.527
| null | null |
3582
| null |
8133
|
1
| null | null |
4
|
1177
|
I read from a textbook that Gauss-Newton regression is also called 'artificial regression'. Please give me an example, how does it work? And what's the relation with Newton's method? Thank you.
|
Intuitive explanation of Gauss-Newton regression
|
CC BY-SA 2.5
| null |
2011-03-10T15:41:13.017
|
2011-03-11T04:05:28.043
|
2011-03-11T04:05:28.043
|
919
|
3525
|
[
"econometrics",
"nonlinear-regression"
] |
8134
|
2
| null |
8096
|
3
| null |
- Obtain the five-number summary for your sample.
- Calculate the standard error for your sample.
- Select a level of confidence, based on the z-score.
- Calculate the lower and upper bound of the interval.
If you are doing this by hand, use the formulae provided by any statistics handbook, such as Agresti and Finlay's Statistical Methods for the Social Sciences, to move through each stage. If you are using a computer solution such as R or Stata:
- Obs = 28, Mean = 27.17, Std. Dev. = 8.82 (su age in Stata)
- Std. Err. = 1.66 (ci age in Stata)
- z = 1.96 at 95% confidence, z = 2.58 at 99% confidence (level(95) or level(99) in Stata)
- 95% CI = [23.75, 30.60] and 99% CI = [22.55, 31.80] (truncated to two digits)
To me, the key element of learning that you should get from this exercise is that the standard error will decrease with the square root of the sample size. The only way to avoid having to trade off precision against efficiency is to maximise sample size.
Agresti and Finlay cover this in detail in Chapter 5 of their handbook, starting at page 126. Previous chapters provide the formulae for calculating the mean, variance, square root of variance (standard deviation) and standard error of the mean (SEM).
If you were dealing with proportions, you would be calculating the standard error slightly differently, but the underlying logic would remain: √N, the square root of the sample size, would stay critical in minimising the standard error.
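As a hedged R sketch (not part of the original answer), the figures above can be reproduced from the summary statistics alone; the reported Stata bounds appear to be t-based rather than z-based, which matters with n = 28:
```
n  <- 28
m  <- 27.17
s  <- 8.82
se <- s / sqrt(n)                         # standard error of the mean (~1.67)

# Normal (z) based intervals, as described in the text:
m + c(-1, 1) * qnorm(0.975) * se          # 95% CI
m + c(-1, 1) * qnorm(0.995) * se          # 99% CI

# t-based intervals, which match the reported bounds more closely:
m + c(-1, 1) * qt(0.975, df = n - 1) * se
m + c(-1, 1) * qt(0.995, df = n - 1) * se
```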
| null |
CC BY-SA 3.0
| null |
2011-03-10T15:50:11.723
|
2014-08-25T17:05:57.230
|
2014-08-25T17:05:57.230
|
25292
|
3582
| null |
8135
|
1
| null | null |
11
|
8263
|
I am running a large OLS regression where all the independent variables (around 400) are dummy variables. If all are included, there is perfect multicollinearity (the dummy variable trap), so I have to omit one of the variables before running the regression.
My first question is, which variable should be omitted? I have read that it is better to omit a variable that is present in many of the observations rather than one that is present in only a few (e.g. if almost all observations are "male" or "female" and just a few are "unknown", omit either "male" or "female"). Is this justified?
After running the regression with a variable omitted, I am able to estimate the coefficient value of the omitted variable because I know that the overall mean of all my independent variables should be 0. So I use this fact to shift the coefficient values for all the included variables, and get an estimate for the omitted variable. My next question is whether there is some similar technique that can be used to estimate the standard error for the coefficient value of the omitted variable. As it is I have to re-run the regression omitting a different variable (and including the variable I had omitted in the first regression) in order to acquire a standard error estimate for the coefficient of the originally omitted variable.
Finally, I notice that the coefficient estimates I get (after re-centering around zero) vary slightly depending on which variable is omitted. In theory, would it be better to run several regressions, each omitting a different variable, and then average the coefficient estimates from all the regressions?
|
Dummy variable trap issues
|
CC BY-SA 2.5
| null |
2011-03-10T16:33:50.490
|
2014-12-03T18:10:49.713
| null | null |
1090
|
[
"categorical-data"
] |
8136
|
2
| null |
8052
|
4
| null |
Shouldn't you favour [Fisher's exact test](http://en.wikipedia.org/wiki/Fisher%27s_exact_test) on a 2x2 contingency table? The advantages of the Chi-squared test would be preserved, with the additional advantages of an exact test. Based on a few handbook readings, I believe that Fisher's exact test is recommended chiefly with low cell counts, but it is also often recommended for 2x2 tables.
| null |
CC BY-SA 2.5
| null |
2011-03-10T17:37:32.670
|
2011-03-10T17:37:32.670
| null | null |
3582
| null |
8137
|
1
|
8235
| null |
7
|
36494
|
I have a boxplot output in R using ggplot2:
```
p <- ggplot(data, aes(y = age, x = group))
p <- p + geom_boxplot()
p <- p + scale_x_discrete(name= "Group",)
p <- p + scale_y_continuous(name= "Age")
p
```

I need to add horizontal lines like on a common boxplot (and to change the vertical line style if possible):
```
boxplot(age~group,data=data,names=c('1','2'),ylab="Age", xlab="Group")
```

How could I do this using ggplot2?
|
How to add horizontal lines to ggplot2 boxplot?
|
CC BY-SA 2.5
| null |
2011-03-10T17:52:10.160
|
2015-11-17T03:45:07.367
| null | null |
3376
|
[
"r",
"boxplot",
"ggplot2"
] |
8139
|
2
| null |
7982
|
3
| null |
Assuming data are considered missing completely at random (cf. @whuber's comment), using an ensemble learning technique as described in the following paper might be interesting to try:
>
Polikar, R. et al. (2010).
Learn++.MF: A random subspace
approach for the missing feature
problem. Pattern Recognition,
43(11), 3817-3832.
The general idea is to train multiple classifiers on subsets of the variables that compose your dataset (as in Random Forests), but to use only the classifiers trained with the non-missing features for building the classification rule. Be sure to check what the authors call the "distributed redundancy" assumption (p. 3 in the preprint linked above), that is, there must be some equally balanced redundancy in your feature set.
| null |
CC BY-SA 2.5
| null |
2011-03-10T21:29:22.377
|
2011-03-11T07:37:13.680
|
2011-03-11T07:37:13.680
|
930
|
930
| null |
8140
|
2
| null |
7982
|
1
| null |
If the features in the subset are random you can still impute values. However, if you have that much missing data, I would think twice about whether or not you really have enough valid data to do any kind of analysis.
The multiple imputation FAQ page ---->
[http://www.stat.psu.edu/~jls/mifaq.html](http://www.stat.psu.edu/~jls/mifaq.html)
| null |
CC BY-SA 2.5
| null |
2011-03-10T21:50:11.123
|
2011-03-10T21:50:11.123
| null | null |
3489
| null |
8141
|
2
| null |
8135
|
4
| null |
James, first of all, why regression analysis and not ANOVA (there are many specialists in that kind of analysis who could help you)? The case for ANOVA is that all you are actually interested in are differences in the means of different groups described by combinations of dummy variables (unique categories, or profiles). That said, if you do study the impact of each categorical variable you include, you may run a regression as well.
I think the type of data you have here is described in the sense of conjoint analysis: many attributes of the object (gender, age, education, etc.), each having several categories, so you omit the whole largest profile, not just one dummy variable. A common practice is to code the categories within an attribute as follows (this [link](http://dissertations.ub.rug.nl/FILES/faculties/eco/1999/m.e.haaijer/c2.pdf) may be useful; you probably are not doing conjoint analysis here, but the coding is similar): suppose you have $n$ categories (three, as you suggested: male, female, unknown); then you include dummies for the first two (male, female), giving $(1, 0)$ if male, $(0, 1)$ if female, and $(-1, -1)$ if unknown. In this way the results will indeed be centered around the intercept term. You may code in a different way, but you will lose the mentioned interpretation advantage. To sum up, you drop one category from each attribute and code your observations in the described way. You do include the intercept term as well.
Omitting the largest profile's categories seems fine to me, though it is not so important, as long as the omitted cell is not empty. Since you code the variables in this specific manner, joint statistical significance of the included dummy variables (male and female together, which could be tested by an F test) implies the significance of the omitted one.
It may happen that the results are slightly different, but perhaps it is the coding that influences this.
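A minimal R sketch of the $(1,0)/(0,1)/(-1,-1)$ coding described above, using base R's `contr.sum` on a hypothetical three-level factor:
```
sex <- factor(c("male", "female", "unknown"))
contrasts(sex) <- contr.sum(3)
contrasts(sex)
#         [,1] [,2]
# female     1    0
# male       0    1
# unknown   -1   -1
# In a regression the intercept is then the unweighted mean of the category means,
# and the implied coefficient for the omitted level is minus the sum of the others.
```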
| null |
CC BY-SA 2.5
| null |
2011-03-10T22:14:10.927
|
2011-03-10T22:14:10.927
| null | null |
2645
| null |
8142
|
1
| null | null |
2
|
194
|
Is there an analytic form for the Hellinger distance between von Mises distributions?
|
Is there an analytic form for the Hellinger distance between von Mises distributions?
|
CC BY-SA 2.5
| null |
2011-03-11T00:45:27.277
|
2021-11-11T03:00:46.740
|
2021-11-11T03:00:46.740
|
11887
|
3595
|
[
"distributions",
"distance-functions",
"von-mises-distribution"
] |
8143
|
1
| null | null |
1
|
2070
|
If, for example, I have `0011` as a set of known bits $x$, how do I determine the probability that a sequence of randomly generated bits is equal to $x$?
Thanks for the help! I'm sure this is a dumb question, but probability has always been my weakness (which my algorithms class is highlighting!).
EDIT: This is what I should've asked:
How do you find the probability of randomly getting `0011` on the left-side of an 8-bit sequence?
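(A quick check, not part of the original question: for a uniformly random 8-bit sequence, the left-most four bits equal `0011` with probability $(1/2)^4 = 1/16 = 0.0625$, since the remaining four bits are unconstrained. A short R simulation sketch:)
```
set.seed(1)
nsim <- 1e5
hits <- replicate(nsim, all(sample(0:1, 8, replace = TRUE)[1:4] == c(0, 0, 1, 1)))
mean(hits)   # should be close to 1/16 = 0.0625
```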
|
How to calculate the probability of a random sequence of bits being a specific sequence?
|
CC BY-SA 2.5
| null |
2011-03-11T03:01:15.340
|
2011-03-11T16:27:33.847
|
2011-03-11T16:27:33.847
| null | null |
[
"probability"
] |
8144
|
2
| null |
7676
|
1
| null |
While you didn't say anything about the actual conditions (experimental treatment), think of the following example: you measure the weight of 40 people at some point during the day, then set them the task to drink (as much as possible of) 5 quarts of water (experimental manipulation), and then weigh them again (hinting at "yes" being the answer to your final question...).
Now, the values for the means of each group will, pre and post treatment, very likely overlap (given that, unless you select people with the same pre-treatment weight, the general population shows great variability). The mean of the difference between post and pre (post - pre), on the other hand, is likely to be significantly different from (i.e. greater than) 0.
In other words, it is hard to say more without knowing a little more about your conditions (e.g. how does the metric change between conditions? do you expect it to show signs of stability?) and also about the variability of that measure across the population (if you think of the measure as a random variable, each subject has his/her own probability distribution of getting a certain score under a certain condition, and the measurement error might be very small compared to the reliable difference of each participant's measure from the population mean).
Does any of this help?
| null |
CC BY-SA 2.5
| null |
2011-03-11T05:25:46.947
|
2011-03-11T05:25:46.947
| null | null |
3667
| null |
8145
|
1
|
8149
| null |
11
|
410
|
I solve Rubik's cubes as a hobby. I record the time it took me to solve the cube using some software, and so now I have data from thousands of solves. The data is basically a long list of numbers representing the time each sequential solve took (e.g. 22.11, 20.66, 21.00, 18.74, ...)
The time it takes me to solve the cube naturally varies somewhat from solve to solve, so there are good solves and bad solves.
I want to know whether I "get hot" - whether the good solves come in streaks. For example, if I've just had a few consecutive good solves, is it more likely that my next solve will be good?
What sort of analysis would be appropriate? I can think of a few specific things to do, for example treating the solves as a Markov process and seeing how well one solve predicts the next and comparing to random data, seeing how long the longest streaks of consecutive solves below the median for the last 100 are and comparing to what would be expected in random data, etc. I am not sure how insightful these tests would be, and wonder whether there are some well-developed approaches to this sort of problem.
|
How do you tell whether good performances come in streaks?
|
CC BY-SA 2.5
| null |
2011-03-11T09:46:09.730
|
2020-11-29T13:42:14.743
|
2020-11-29T11:52:37.083
|
11887
|
2665
|
[
"time-series",
"probability"
] |
8146
|
1
| null | null |
1
|
456
|
Does anybody know of a Java implementation of McNemar's Test?
|
McNemar's test implementation in Java
|
CC BY-SA 3.0
| null |
2011-03-11T10:25:28.287
|
2012-12-28T18:06:28.233
|
2012-12-28T18:06:28.233
|
17662
| null |
[
"java"
] |
8147
|
1
| null | null |
1
|
265
|
I have almost the same question as:
[How can I efficiently model the sum of Bernoulli random variables?](https://stats.stackexchange.com/questions/5347/how-can-i-efficiently-model-the-sum-of-bernoulli-random-variables)
But:
(1) The number of random variables for summation is ~ N=20 (case 1) or N=90 (case 2).
(2) $p_i$ ~ 0.13 (case 1)
(3) The precision of the model based on Poisson law is not enough.
(4) We need the approximation to be good enough to model partial sums like $\sum_{i=k}^{N} X_i$ ($k = 1, \dots, N$) as well.
(5) We have empirical data for every $X_i$. The diagram shows an almost linear dependence of $\Pr(X_i=1)$ on $i$ for $i = 1, \dots, 6$, and then an almost constant function or some small linear dependence (for $i = 7, \dots, 20$).
Really, I'm not sure about (3), since Le Cam's inequality looks very general...
What class of model could we try?
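(One concrete option, a hedged sketch not from the original post: compute the exact Poisson-binomial distribution of the sum by iterated convolution, which sidesteps the Poisson approximation entirely for N of 20-90.)
```
# Exact pmf of S = sum of independent Bernoulli(p_i): pmf[k + 1] = P(S = k)
pois_binom <- function(p) {
  pmf <- 1                                            # empty sum: P(S = 0) = 1
  for (pi in p) {
    pmf <- c(pmf * (1 - pi), 0) + c(0, pmf * pi)      # convolve with Bernoulli(pi)
  }
  pmf
}

p   <- rep(0.13, 20)                 # hypothetical probabilities (case 1)
pmf <- pois_binom(p)
sum(pmf * (seq_along(pmf) - 1))      # mean of S, equals sum(p)
```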
|
How can I efficiently approximate the sum of Bernoulli random variables for any number of summands in partial sum?
|
CC BY-SA 4.0
| null |
2011-03-11T10:29:28.877
|
2023-04-29T06:27:37.467
|
2023-04-29T06:27:37.467
|
362671
|
3670
|
[
"distributions",
"modeling"
] |
8148
|
1
| null | null |
18
|
17700
|
Apologies for the rudimentary question, I am new to this form of analysis and have a very limited understanding of the principles so far.
I was just wondering whether many of the parametric assumptions for multivariate/univariate tests apply to cluster analysis. Many of the sources of information I have read regarding cluster analysis fail to specify any assumptions.
I am particularly interested in the assumption of independence of observations. My understanding is that violation of this assumption (in ANOVA and MANOVA for example) is serious because it influences estimates of error. From my reading so far, it seems that cluster analysis is largely a descriptive technique (that only involves statistical inference in certain specified cases). Accordingly, are assumptions such as independence and normally distributed data required?
Any recommendations of texts that discuss this issue would be greatly appreciated.
Many Thanks.
|
Assumptions of cluster analysis
|
CC BY-SA 2.5
| null |
2011-03-11T10:34:36.383
|
2011-06-02T10:51:50.320
|
2011-03-11T13:26:42.053
| null |
3619
|
[
"clustering",
"assumptions"
] |
8149
|
2
| null |
8145
|
8
| null |
The [Wald-Wolfowitz Runs Test](http://en.wikipedia.org/wiki/Wald%E2%80%93Wolfowitz_runs_test) seems to be a possible candidate, where a "run" is what you called a "streak". It requires dichotomous data, so you'd have to label each solve as "bad" vs. "good" according to some threshold - like the median time as you suggested. The null hypothesis is that "good" and "bad" solves alternate randomly. A one-sided alternative hypothesis corresponding to your intuition is that "good" solves clump together in long streaks, implying that there are fewer runs than expected with random data. Test statistic is the number of runs. In R:
```
> N <- 200 # number of solves
> DV <- round(runif(N, 15, 30), 1) # simulate some uniform data
> thresh <- median(DV) # threshold for binary classification
# do the binary classification
> DVfac <- cut(DV, breaks=c(-Inf, thresh, Inf), labels=c("good", "bad"))
> Nj <- table(DVfac) # number of "good" and "bad" solves
> n1 <- Nj[1] # number of "good" solves
> n2 <- Nj[2] # number of "bad" solves
> (runs <- rle(as.character(DVfac))) # analysis of runs
Run Length Encoding
lengths: int [1:92] 2 1 2 4 1 4 3 4 2 5 ...
values : chr [1:92] "bad" "good" "bad" "good" "bad" "good" "bad" ...
> (nRuns <- length(runs$lengths)) # test statistic: observed number of runs
[1] 92
# theoretical maximum of runs for given n1, n2
> (rMax <- ifelse(n1 == n2, N, 2*min(n1, n2) + 1))
199
```
When you only have a few observations, you can calculate the exact probabilities for each number of runs under the null hypothesis. Otherwise, the distribution of "number of runs" can be approximated by a standard normal distribution.
```
> (muR <- 1 + ((2*n1*n2) / N)) # expected value
100.99
> varR <- (2*n1*n2*(2*n1*n2 - N)) / (N^2 * (N-1)) # theoretical variance
> rZ <- (nRuns-muR) / sqrt(varR) # z-score
> (pVal <- pnorm(rZ, mean=0, sd=1)) # one-sided p-value
0.1012055
```
The p-value is for the one-sided alternative hypothesis that "good" solves come in streaks.
| null |
CC BY-SA 2.5
| null |
2011-03-11T10:41:21.237
|
2011-03-11T14:42:16.717
|
2011-03-11T14:42:16.717
|
1909
|
1909
| null |
8150
|
1
| null | null |
3
|
228
|
I have a prospective study with no prior estimates of results that could be used to determine the required sample size. The data look like this:
```
caseID;groupID;value,result
1;1;12.3;0
2;1;15.6;1
3;2;11.3;0
4;2;13.4;1
...
```
Is it possible to determine how many observations should be made to complete the study after adding each new portion of data?
I need both descriptive statistics (% of cases that have sign X; result in the example) and comparative statistics (2-4 groups compared by a continuous variable; value in the example). The probability of a type I error ($\alpha$) is 10% and of a type II error ($\beta$) is 5% (for example), i.e. statistical significance and power of 0.90 and 0.95. The best solution would be a grid like this for making the decision to stop or continue the study:
```
a;b;n
5;5;30
10;5;28
5;10;29
...
```
Where a is $\alpha$, b is $\beta$, and n is the sample size required for a study with these type I and type II error probabilities.
My data is imported into R. Is it possible to perform some calculations to get:
- Number observations required to complete study
- Required number of observations for each group
So I need to estimate the sample size based on the actual data, to know when it is time to finish the study.
The question is:
How to do this in R, if it is possible?
Any suggestions are welcomed.
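(A hedged sketch of how such a grid could be built in base R with `power.t.test`, using placeholder effect-size estimates; `delta` and `sdev` would come from the data collected so far.)
```
delta <- 2.0       # estimated difference in `value` between two groups (hypothetical)
sdev  <- 4.0       # estimated pooled standard deviation (hypothetical)

grid <- expand.grid(alpha = c(0.05, 0.10), beta = c(0.05, 0.10))
grid$n_per_group <- mapply(
  function(a, b) ceiling(power.t.test(delta = delta, sd = sdev,
                                      sig.level = a, power = 1 - b)$n),
  grid$alpha, grid$beta)
grid
# For the proportion ("% of cases have sign X") part, power.prop.test works the
# same way; for more than two groups, pwr::pwr.anova.test is one option.
```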
|
Dynamic estimation of required sample size in R
|
CC BY-SA 2.5
| null |
2011-03-11T10:44:36.757
|
2011-03-15T19:46:01.623
|
2011-03-15T19:46:01.623
|
3376
|
3376
|
[
"r",
"estimation",
"sample-size"
] |
8151
|
1
| null | null |
4
|
5806
|
Is there a way to easily create factors based on quantiles of selected variables in a dataframe? Say, in a datatset D, I have variables V1 to V10, which are all numeric. I would like to create dummies for V7 to V10 based on their respective quantiles.
|
Quantile transformation in R
|
CC BY-SA 2.5
| null |
2011-03-11T11:42:59.117
|
2016-08-13T17:27:56.023
|
2016-08-13T17:27:56.023
|
919
|
3671
|
[
"r",
"categorical-data"
] |
8152
|
1
|
8153
| null |
2
|
1536
|
I have a few questions regarding multiple imputation for nested data.
Context: I have repeated measures (4 times) from a survey and these are clustered in workplaces (205 workplaces). There are about 180 items on this survey.
q1. Is it possible to take both the repeated measures and the workplace clustering into consideration, or do I have to decide on one of the two?
q2. If I can only take into consideration one of the two clusterings (repeated measures vs. workplace), which one would you recommend?
q3. I have about 10000 observations and about 400 of them have missing values for the workplace. What would you recommend doing in this case? (I should also mention that the 205 workplaces are nested in 17 organizations; for the moment I use general categories based on the organization, e.g. Organization1-Unclassified.) Is there a meaningful way to actually impute these categories?
q4. Would you recommend using all 180 items for imputation, or only the items that I intend to use in each of my models?
I use R for analysis and it would be greatly appreciated if you can recommend any packages for multiple imputation for clustered data.
Thanks in advance
|
Multiple imputation for clustered data
|
CC BY-SA 2.5
| null |
2011-03-11T12:06:54.577
|
2022-11-28T05:33:13.907
|
2011-03-11T12:40:58.963
|
1871
|
1871
|
[
"r",
"multilevel-analysis",
"panel-data",
"data-imputation"
] |
8153
|
2
| null |
8152
|
3
| null |
Take a look at the [Amelia II](http://gking.harvard.edu/amelia/) package, by Honaker, King and Blackwell.
>
Amelia II "multiply imputes" missing data in a single cross-section (such as a survey), from a time series (like variables collected for each year in a country), or from a time-series-cross-sectional data set (such as collected by years for each of several countries).
q1. Yes, it is possible with the above package.
q3. I guess it would be possible to impute workplace using a more general imputation method. ([MICE](https://web.archive.org/web/20110629161116/http://web.inter.nl.net/users/S.van.Buuren/mi/hmtl/mice.htm) could help).
q4. As a general rule, do not throw information out. The imputation model, at a minimum, should include all covariates in all your models. But if extra information helps predict the missing data, include it!
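A hedged sketch of what the Amelia call might look like for q1; the data-frame and column names here are assumptions, not from the question:
```
library(Amelia)
# survey_data, "wave" and "workplace" are hypothetical names
a.out <- amelia(survey_data, m = 5,
                ts = "wave",          # repeated-measures index
                cs = "workplace")     # cluster identifier
# The m = 5 completed data sets are then in a.out$imputations.
```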
| null |
CC BY-SA 4.0
| null |
2011-03-11T13:24:39.780
|
2022-11-28T05:33:13.907
|
2022-11-28T05:33:13.907
|
362671
|
375
| null |
8154
|
2
| null |
8151
|
1
| null |
To get the factors, use:
```
cut(dataset, quantile(dataset), include.lowest = TRUE)  # include.lowest keeps the minimum from becoming NA
```
From the help:
```
## Default S3 method:
cut(x, breaks)
x a numeric vector which is to be converted to a factor by cutting.
breaks either a numeric vector of two or more cut points or a single number (greater than or equal to 2) giving the number of intervals into which x is to be cut.
```
To use it on multiple columns you could use the data.table package:
```
library(data.table)
df <- data.table(df)
cut_quantile <- function (x) cut(x, quantile(x), include.lowest = TRUE)
df[, lapply(.SD, cut_quantile), .SDcols = c('V7', 'V8', 'V9', 'V10')]
```
| null |
CC BY-SA 3.0
| null |
2011-03-11T13:38:19.330
|
2016-08-13T14:03:22.870
|
2016-08-13T14:03:22.870
|
93550
|
1351
| null |
8155
|
2
| null |
8148
|
1
| null |
Cluster analysis does not involve hypothesis testing per se, but is really just a collection of different similarity algorithms for exploratory analysis. You can force hypothesis testing somewhat but the results are often inconsistent, since cluster changes are very sensitive to changes in parameters.
[http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_introclus_sect010.htm](http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_introclus_sect010.htm)
| null |
CC BY-SA 2.5
| null |
2011-03-11T14:17:04.913
|
2011-03-11T14:17:04.913
| null | null |
3489
| null |
8156
|
1
| null | null |
1
|
297
|
I work with a lot of bar charts. In particular, these bar charts are of basecalls along segments of the human genome. Each point along the x-axis is one of the four nitrogenous bases (A, C, T, G) that compose DNA, and the y-axis is essentially how many times a base was able to be "called" (or recognized by a sequencer machine, so as to sequence the genome, which is simply determining the identity of each base along the genome).
Many of these bar charts display roughly linear dropoffs (when the machines aren't able to get sufficient read depth) that fall to 0 (or almost 0) from plateau-like regions. Is there a straightforward algorithm for assigning these plots a score that reflects this tendency? Is the standard deviation a good place to start?
I'm a programmer but not much of a statistician.
|
Assessing DNA sequencing quality
|
CC BY-SA 2.5
| null |
2011-03-11T15:08:43.463
|
2011-03-12T10:45:54.777
|
2011-03-12T10:45:54.777
| null |
3672
|
[
"histogram",
"bioinformatics"
] |
8157
|
1
|
8395
| null |
10
|
2236
|
I need to create random vectors of real numbers a_i satisfying the following constraints:
```
abs(a_i) < c_i;
sum(a_i)< A; # sum of elements smaller than A
sum(b_i * a_i) < B; # weighted sum is smaller than B
aT*A*a < D # quadratic multiplication with A smaller than D
where c_i, b_i, A, B, D are constants.
```
What would be the typical algorithm to generate efficiently this kind of vector?
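(One simple baseline, sketched here for illustration and not necessarily efficient, is rejection sampling: draw uniformly inside the box $|a_i| < c_i$ and keep draws satisfying the other constraints. Since `A` is used both as a scalar bound and as the quadratic-form matrix above, the matrix is renamed `Amat` below, and all inputs are hypothetical.)
```
n <- 5
c_i <- rep(1, n);  b_i <- runif(n)
A_bound <- 2;  B_bound <- 1.5;  D_bound <- 3
Amat <- diag(n)                       # the quadratic-form matrix

draw_one <- function() {
  repeat {
    a <- runif(n, -c_i, c_i)          # satisfies |a_i| < c_i by construction
    if (sum(a) < A_bound &&
        sum(b_i * a) < B_bound &&
        drop(t(a) %*% Amat %*% a) < D_bound) return(a)
  }
}
a <- draw_one()   # note: can be slow if the feasible region is small
```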
|
Generating random vectors with constraints
|
CC BY-SA 2.5
| null |
2011-03-11T15:32:55.813
|
2011-03-17T02:56:04.293
|
2011-03-15T18:07:51.207
|
3673
|
3673
|
[
"random-generation"
] |
8159
|
1
| null | null |
1
|
299
|
>
Possible Duplicate:
What is the difference between data mining, statistics, machine learning and AI?
How to compare
machine learning vs. data mining?
data mining vs. statistical analysis?
machine learning vs. statistical analysis?
|
The relationship between machine learning, data mining and statistical analysis?
|
CC BY-SA 2.5
| null |
2011-03-11T16:57:52.407
|
2011-03-11T16:57:52.407
|
2017-04-13T12:44:39.283
|
-1
|
3026
|
[
"machine-learning",
"multivariate-analysis",
"data-mining"
] |
8160
|
1
|
8173
| null |
7
|
2076
|
I have 3,000 observations (administrative communities) characterized by five variables. Four of them work in the direction 'the more, the worse' and one goes in the opposite direction.
I'd like to create one score or an ordered list of these observations that will best take into account all of those five variables.
I have tried clustering using the MCLUST package in R, and it gives some meaningful results, but it's hard to decide on an ordering of the observations on the basis of cluster membership.
My second attempt was to run PCA and extract the first component, which is closer to what I'd like to get.
What other solutions (R- or Stata-based preferably) could I use to deal with this problem?
|
How to create one score from a mixed set of positive and negative variables?
|
CC BY-SA 2.5
| null |
2011-03-11T17:23:08.010
|
2011-03-30T18:54:57.467
|
2011-03-12T06:15:11.397
|
183
|
22
|
[
"clustering",
"pca",
"composite"
] |
8161
|
1
|
8200
| null |
3
|
461
|
Suppose I have two I(1) time series X and Y, and I want to know whether X and Y are "related" (for some definition of "related").
The standard cointegration approach defines relationship as cointegration, and says that X and Y are cointegrated if some linear combination of X and Y is stationary. To test whether X and Y are cointegrated, you perform a regression on X and Y, and test for stationarity of the residual errors.
It seems to me like another approach might be to difference the I(1) time series X and Y, to get new I(0) time series X' and Y', and to use a standard linear regression relationship test on X' and Y' (i.e., perform a regression to get Y' = aX' + b, and use a t-test to see whether a is significantly non-zero). You could then define X and Y to be related if X' and Y' pass this test.
Is this second approach valid, or do you get spurious relationships? What's the difference between this approach and the cointegration approach, or what are the advantages of the cointegration definition?
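(A quick simulation sketch, added for illustration, of why the levels regression is dangerous while the differenced regression is not:)
```
set.seed(42)
n <- 500
X <- cumsum(rnorm(n));  Y <- cumsum(rnorm(n))    # two independent I(1) random walks

summary(lm(Y ~ X))$coefficients                  # levels: typically "significant" (spurious)
summary(lm(diff(Y) ~ diff(X)))$coefficients      # differences: typically not
```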
|
Testing two I(1) vectors for a relationship
|
CC BY-SA 2.5
| null |
2011-03-11T17:25:48.310
|
2014-10-31T12:40:35.140
|
2011-03-12T10:53:24.833
| null |
1106
|
[
"time-series",
"correlation",
"cointegration",
"stationarity"
] |
8162
|
2
| null |
8107
|
4
| null |
Are you asking about these results in particular or the Breusch-Pagan test more generally? For these particular tests, see @mpiktas's answer. Broadly, the BP test asks whether the squared residuals from a regression can be predicted using some set of predictors. These predictors may be the same as those from the original regression. The White test version of the BP test includes all the predictors from the original regression, plus their squares and interactions in a regression against the squared residuals. If the squared residuals are predictable using some set of covariates, then the estimated squared residuals and thus the variances of the residuals (which follows because the mean of the residuals is 0) appear to vary across units, which is the definition of heteroskedasticity or non-constant variance, the phenomenon that the BP test considers.
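A minimal sketch of that mechanic on simulated data; the packaged `lmtest::bptest` call is noted for comparison, assuming the package is installed:
```
set.seed(1)
x <- runif(200)
y <- 1 + 2 * x + rnorm(200, sd = 0.5 + x)   # errors with variance increasing in x

fit <- lm(y ~ x)
aux <- lm(residuals(fit)^2 ~ x)             # auxiliary regression of squared residuals
summary(aux)                                # n * R^2 of this regression is the BP statistic

# library(lmtest); bptest(fit)              # packaged equivalent
```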
| null |
CC BY-SA 2.5
| null |
2011-03-11T17:42:32.797
|
2011-03-11T17:42:32.797
| null | null |
401
| null |
8163
|
2
| null |
8088
|
3
| null |
Why are you worried about multicollinearity? The only reason we need this assumption in regression is to ensure that we get unique estimates. Multicollinearity only matters for estimation when it is perfect---when one variable is an exact linear combination of the others.
If your experimentally-manipulated variables were randomly assigned, then their correlations with the observed predictors as well as unobserved factors should be (roughly) 0; it is this assumption that helps you get unbiased estimates.
That said, non-perfect multicollinearity can make your standard errors larger, but only for those variables involved in the multicollinearity. In your context, the standard errors of the coefficients on your experimental variables should not be affected.
| null |
CC BY-SA 2.5
| null |
2011-03-11T17:47:46.867
|
2011-03-11T17:47:46.867
| null | null |
401
| null |
8165
|
2
| null |
8135
|
8
| null |
You should get the "same" estimates no matter which variable you omit; the coefficients may be different, but the estimates of particular quantities or expectations should be the same across all the models.
In a simple case, let $x_i=1$ for men and 0 for women. Then, we have the model:
$$\begin{align*}
E[y_i \mid x_i] &= x_iE[y_i \mid x_i = 1] + (1 - x_i)E[y_i \mid x_i = 0] \\
&= E[y_i \mid x_i=0] + \left[E[y_i \mid x_i= 1] - E[y_i \mid x_i=0]\right]x_i \\
&= \beta_0 + \beta_1 x_i.
\end{align*}$$
Now, let $z_i=1$ for women. Then
$$\begin{align*}
E[y_i \mid z_i] &= z_iE[y_i \mid z_i = 1] + (1 - z_i)E[y_i \mid z_i = 0] \\
&= E[y_i \mid z_i=0] + \left[E[y_i \mid z_i= 1] - E[y_i \mid z_i=0]\right]z_i \\
&= \gamma_0 + \gamma_1 z_i .
\end{align*}$$
The expected value of $y$ for women is $\beta_0$ and also $\gamma_0 + \gamma_1$. For men, it is $\beta_0 + \beta_1$ and $\gamma_0$.
These results show how the coefficients from the two models are related. For example, $\beta_1 = -\gamma_1$. A similar exercise using your data should show that the "different" coefficients that you get are just sums and differences of one another.
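A small simulated check of this algebra (illustrative only):
```
set.seed(3)
x <- rbinom(100, 1, 0.5)               # 1 = men, 0 = women
y <- 2 + 1.5 * x + rnorm(100)

coef(lm(y ~ x))                        # beta0 (women), beta1 (men - women)
coef(lm(y ~ I(1 - x)))                 # gamma0 (men),  gamma1 = -beta1
```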
| null |
CC BY-SA 3.0
| null |
2011-03-11T18:02:32.733
|
2012-12-04T16:28:20.270
|
2012-12-04T16:28:20.270
|
401
|
401
| null |
8166
|
1
|
8177
| null |
5
|
115
|
The UN has a convention against corruption (UNCAC) that has been signed by some 140 countries and ratified by most ([link](http://www.unodc.org/unodc/en/treaties/CAC/signatories.html)).
Transparency International publishes an annual report on corruption. They have data for most countries for the period 2000-2010, each country is given a score between zero and ten where ten is best (lowest level of corruption). There is usually a slight trend in the corruption score, indicating that there is some inertia in the level of corruption ([link](http://www.transparency.org/)).
I want to test whether ratifying the convention has any positive effect on the level of corruption in the country.
My initial idea is to calculate the mean level of corruption 3 years before and after signing the treaty and use a Student's paired t-test to see if the latter mean is significantly larger than the former.
Another approach, that I would know less about, is to use some kind of event model where dummy variables are used for the years after the convention is ratified.
What are your thoughts on this? Would the first approach work, is there anything I have overlooked? All feedback is appreciated.
### Notes
I should note that this is a minor part of a course term paper. We weren't supposed to do anything quantitative, just a write-up on a topic of our choice. However I want to take a quick look at the statistics since data is readily available and I prefer to look at numbers if possible. Hence, I don't need the ideal PhD-level statistical model, just something quick and simple.
|
Test for the effect of UN ratification on corruption in a set of countries
|
CC BY-SA 2.5
| null |
2011-03-11T18:17:02.197
|
2015-11-29T00:13:44.090
|
2015-11-29T00:13:44.090
|
805
|
3182
|
[
"time-series",
"statistical-significance",
"t-test",
"mean",
"panel-data"
] |
8167
|
1
|
8174
| null |
8
|
3596
|
I have a set of around 500 responses to an online survey that offered an incentive to complete. While most of the data appear to be valid, it's clear that some people were able to get around the (inadequate) browser-cookie-based duplicate-survey protection. Some respondents clearly randomly clicked through the survey to receive the incentive and then repeated the process via a couple of methods. My question is: what is the best way to try and filter out the invalid responses?
The information that I have is limited to:
- The amount of time it took to complete the survey (time started and ended)
- The IP address of each respondent
- The User Agent (browser identifier) of each respondent
- The survey answers of each respondent (over 100 questions in the survey)
The most obvious sign of an invalid response is when (sorted by time started) there will be a group all from the same IP address or similar IP (sharing the same first three octets for example, 255.255.255.*) which were all completed in a much shorter amount of time than the total average in quick succession.
With this information there must be a thoughtful way to weed out the people who were exploiting the survey for the incentive from the rest of the survey population. I know that someone from the community here would have an interesting idea about how to approach this. I'm willing to accept false positives as long as I can be confident that I've gotten rid of most of the invalid responses. Thanks for your advice!
|
How to identify invalid online survey responses?
|
CC BY-SA 2.5
| null |
2011-03-11T18:27:08.630
|
2011-03-11T19:48:07.653
|
2011-03-11T18:45:41.050
|
1220
|
1220
|
[
"classification",
"survey"
] |
8168
|
1
| null | null |
9
|
1037
|
Building on the post [How to efficiently manage a statistical analysis project](https://stats.stackexchange.com/questions/2910/how-to-efficiently-manage-a-statistical-analysis-project) and the [ProjectTemplate package](http://www.johnmyleswhite.com/notebook/2010/08/26/projecttemplate/) in R...
Q: How do you build your statistical project directory structure when multiple languages feature heavily (e.g, R AND Splus)?
Most of the discussions on this topic have been limited to projects which primarily use one language. I'm concerned with how to minimize sloppiness, confusion, and breakage, when using multiple languages.
I've included below my current project structure and methods for doing things. An alternative might be to separate code so that I have `./R` and `./Splus` directories---each containing their own `/lib`, `/src`, `/util`, `/tests`, and `/munge` directories.
Q: Which approach would be closest to "best practices" (if any exist)?
- /data - data shared across projects
- /libraries - scripts shared across projects
- /projects/myproject - my working directory. Currently, if I use multiple languages they share this location as their working directory.
- ./data/ - data specific to /myproject and symlinks to data in /data
- ./cache/ - cached workspaces (e.g., .RData files saved using save.image() in R or .sdd files saved using data.dump() in Splus)
- ./lib/ - main project files. Same across all projects. An R project will be run via source("./lib/main.R") which in turn runs load.R, clean.R, test.R, analyze.R, .report.R. Currently, if multiple languages are being used, say, Splus in addition to R, I'll throw main.ssc, clean.ssc, etc. into this directory as well. Not sure I like this though.
- ./src/ - project-specific functions. Collected one function per file.
- ./util/ - general functions eventually to be packaged. Collected one function per file.
- ./tests/ - files for running test cases. Used by ./lib/test.R
- ./munge/ - files for cleaning data. Used by ./lib/clean.R
- ./figures/ - tables and figure output from ./lib/report.R to be used in the final report
- ./report/ - .tex files and symlinks to files in ./figures
- ./presentation/ - .tex files for presentations (usually the Beamer class)
- ./temp/ - location for temporary scripts
- ./README
- ./TODO
- ./.RData - for storing R project workspaces
- ./.Data/ - for storing S project workspaces
|
Statistical project directory structure with multiple languages (e.g., R and Splus)?
|
CC BY-SA 2.5
| null |
2011-03-11T18:50:29.743
|
2011-09-29T03:21:10.087
|
2017-04-13T12:44:37.583
|
-1
|
3577
|
[
"r",
"project-management",
"splus"
] |
8169
|
1
|
8193
| null |
1
|
879
|
Given the prior probability of 2 distributions, $N(x,y)$ and $N(a,b)$, where $N(\mu,\sigma^2)$:
How do you make a decision rule to minimize the probability of error, if the prior probabilities are equal? Can you give an example?
What if the prior probabilities are different, such as
$P(\text{Distribution 1}) = 0.70$
$P(\text{Distribution 2}) = 0.30$?
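(A hedged sketch of the minimum-error rule, i.e. assign each observation to the class with the larger prior times likelihood; all parameter values below are made up:)
```
classify <- function(obs, mu1, sd1, mu2, sd2, prior1 = 0.5, prior2 = 0.5) {
  post1 <- prior1 * dnorm(obs, mu1, sd1)    # prior x likelihood for class 1
  post2 <- prior2 * dnorm(obs, mu2, sd2)    # prior x likelihood for class 2
  ifelse(post1 >= post2, 1L, 2L)
}

# Equal priors: the boundary sits where the two densities cross.
classify(c(0.4, 0.6, 2), mu1 = 0, sd1 = 1, mu2 = 2, sd2 = 1)
# Unequal priors (0.70 / 0.30): the boundary shifts toward the less likely class.
classify(c(0.4, 0.6, 2), mu1 = 0, sd1 = 1, mu2 = 2, sd2 = 1,
         prior1 = 0.7, prior2 = 0.3)
```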
|
Bayes classifier
|
CC BY-SA 2.5
| null |
2011-03-11T19:29:25.397
|
2016-05-06T13:29:48.270
|
2011-03-12T10:41:39.367
| null |
3681
|
[
"machine-learning",
"naive-bayes"
] |
8170
|
2
| null |
8052
|
2
| null |
Fisher's exact was once only recommended for low cell counts because back in "the dark times" it was computationally infeasible to use it for large counts. Indeed, with some approximations doing small counts properly demands a correction be applied.
No matter, the hypergeometric tests involved are good and powerful. They can be generalized to NxM tables by decomposing these into 2x2 tables, where a count of significance for a given row and column is kept in row 1 and column 1, and their complements are summed to form row 2 and column 2, respectively.
See [http://www.stat.psu.edu/online/courses/stat504/03_2way/30_2way_exact.htm](http://www.stat.psu.edu/online/courses/stat504/03_2way/30_2way_exact.htm) for a nice overview and the text(s) by Bishop and Fienberg as well as Agresti for a detailed presentation.
Realizing the connection with the hypergeometric lets you pick two aspects of whether an intersection represents independence or not, letting both false alarms and effect size be specified.
I like the Bishop and Fienberg, and Fienberg books:
[http://www.amazon.com/Discrete-Multivariate-Analysis-Theory-Practice/dp/0387728058/](http://rads.stackoverflow.com/amzn/click/0387728058)
[http://www.amazon.com/Analysis-Cross-Classified-Categorical-Data/dp/0387728244/](http://rads.stackoverflow.com/amzn/click/0387728244)
as well as the related one by Zelterman:
[http://www.amazon.com/Models-Discrete-Data-Daniel-Zelterman/dp/0198567014/](http://rads.stackoverflow.com/amzn/click/0198567014)
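For a quick 2x2 illustration in R (with made-up counts), the exact test discussed above is built in:
```
tab <- matrix(c(12, 5, 7, 16), nrow = 2,
              dimnames = list(group = c("A", "B"), outcome = c("yes", "no")))
fisher.test(tab)   # also handles larger r x c tables (exact or simulated p-value)
chisq.test(tab)    # the chi-squared test, for comparison
```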
| null |
CC BY-SA 2.5
| null |
2011-03-11T19:33:09.387
|
2011-03-11T20:05:12.323
|
2011-03-11T20:05:12.323
|
3437
|
3437
| null |
8171
|
5
| null | null |
0
| null |
"Regression" is a general term for a wide variety of techniques to analyze the relationship between one (or more) dependent variables and independent variables. Typically the dependent variables are modeled with probability distributions whose parameters are assumed to vary (deterministically) with the independent variables.
Ordinary least squares (OLS) regression affords a simple example in which the expectation of one dependent variable is assumed to depend linearly on the independent variables. The unknown coefficients in the assumed linear function are estimated by choosing values for them that minimize the sum of squared differences between the values of the dependent variable and the corresponding fitted values.
| null |
CC BY-SA 2.5
| null |
2011-03-11T19:34:18.293
|
2011-03-11T19:34:18.293
|
2011-03-11T19:34:18.293
|
919
|
919
| null |
8172
|
4
| null | null |
0
| null |
Techniques for analyzing the relationship between one (or more) "dependent" variables and "independent" variables.
| null |
CC BY-SA 2.5
| null |
2011-03-11T19:34:18.293
|
2011-03-11T19:34:18.293
|
2011-03-11T19:34:18.293
|
919
|
919
| null |
8173
|
2
| null |
8160
|
7
| null |
You might consider u-scores as defined in [1] Wittkowski, K. M., Lee, E., Nussbaum, R., Chamian, F. N. and Krueger, J. G. (2004), Combining several ordinal measures in clinical studies. Statistics in Medicine, 23: 1579–1592. ([PDF](http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=46258B9F5D853340BDE8FAF578CFE5C2?doi=10.1.1.60.9819&rep=rep1&type=pdf))
The basic idea is that for each observation you count how many observations there are compared to which it is definitely better (four variables lower, one higher), and how many are definitely worse, and then create a combined score.
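A naive, hedged O(n^2) sketch of that counting idea (not the authors' code), assuming the first four columns are "more is worse" and the fifth is "more is better":
```
u_scores <- function(X) {           # X: rows = observations, oriented so higher = better
  n <- nrow(X)
  sapply(seq_len(n), function(i) {
    dominates <- sapply(seq_len(n), function(j)
      all(X[i, ] >= X[j, ]) && any(X[i, ] > X[j, ]))   # definitely better than j
    dominated <- sapply(seq_len(n), function(j)
      all(X[i, ] <= X[j, ]) && any(X[i, ] < X[j, ]))   # definitely worse than j
    sum(dominates) - sum(dominated)
  })
}

# Orient all five variables so that higher = better before scoring:
# X_oriented <- cbind(-X[, 1:4], X[, 5])
# score <- u_scores(X_oriented)     # higher score = better community
```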
| null |
CC BY-SA 2.5
| null |
2011-03-11T19:47:59.110
|
2011-03-13T01:08:30.113
|
2011-03-13T01:08:30.113
|
183
|
279
| null |
8174
|
2
| null |
8167
|
9
| null |
1) Flag all responses with duplicate IP addresses. Create a new variable for this purpose -- say FLAG1, which takes on values of 1 or 0.
2) Choose a threshold for an impossibly fast response time based on common sense (e.g., less than 1 second per question) and the aid of a histogram of response times -- flag people faster than this threshold again using another variable, FLAG2.
3) "Some respondents clearly randomly clicked through..." -- Apparently you can manually identify some respondents who cheated. Sort the data by response time and look at the fastest 5% or 10% (25 or 50 respondents for your data). Manually examine these respondents and flag any "clearly random" ones using FLAG3.
4) Apply Sheldon's suggestion by creating an inconsistency score -- 1 point for each inconsistency. You can do this by creating a new variable that identifies inconsistencies for each pair of redundant items, and then adding across these variables. You could keep this variable as is, as higher inconsistency scores obviously correspond to higher probabilities of cheating. But a reasonable approach is to flag people who fall above a cut-off chosen by inspecting a histogram -- call this FLAG4.
Anyone who is flagged on each of FLAG1-4 is highly likely to have cheated, but you can set aside flagged people for a separate analysis based on any weighting scheme of FLAG1-4 you want. Given your tolerance for false positives, I would eliminate anyone flagged on FLAG1, FLAG2, or FLAG4.
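A rough R sketch of flags 1, 2 and 4 (the data frame `resp` and its column names are assumptions, not from the question):
```
# FLAG1: every member of a duplicate-IP group
resp$FLAG1 <- as.integer(duplicated(resp$ip) | duplicated(resp$ip, fromLast = TRUE))
# FLAG2: impossibly fast completion, e.g. under 1 second per question for 100 questions
resp$secs  <- as.numeric(resp$end_time - resp$start_time)
resp$FLAG2 <- as.integer(resp$secs < 100)
# FLAG4: inconsistency score across pre-defined pairs of redundant items
pairs <- list(c("q12", "q87"), c("q33", "q105"))   # hypothetical item pairs
resp$incons <- rowSums(sapply(pairs, function(p) resp[[p[1]]] != resp[[p[2]]]))
resp$FLAG4  <- as.integer(resp$incons >= 2)        # cut-off chosen from a histogram
```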
| null |
CC BY-SA 2.5
| null |
2011-03-11T19:48:07.653
|
2011-03-11T19:48:07.653
| null | null |
3432
| null |
8176
|
2
| null |
7979
|
1
| null |
There is an estimator for the minimum (or the maximum) of a set of numbers given a sample. See Laurens de Haan, "Estimation of the minimum of a function using order statistics," JASA, 76(374), June 1981, 467-469.
| null |
CC BY-SA 2.5
| null |
2011-03-11T20:26:09.263
|
2011-03-11T20:26:09.263
| null | null |
3437
| null |
8177
|
2
| null |
8166
|
5
| null |
It's probably not a good idea without digging deeper into the source surveys. TI themselves note that changes in a country's index can come from either a change in corruption or just a change in methodology of the sources that they use. The sources themselves change over time as well, so comparing the index from year to year is actually fairly complicated to do properly.
See the references at [http://en.wikipedia.org/wiki/Corruption_Perceptions_Index](http://en.wikipedia.org/wiki/Corruption_Perceptions_Index)
| null |
CC BY-SA 2.5
| null |
2011-03-11T20:45:47.983
|
2011-03-11T20:45:47.983
| null | null |
26
| null |
8179
|
2
| null |
8146
|
1
| null |
I do not know such a library, but the statistics part of the [apache commons math library](http://commons.apache.org/math/userguide/stat.html) (written in java) provides a series of distributions, including chi2. Since the McNemar's Test does not seem to be thaaat complicated, you may figure an implementation out on your own.
| null |
CC BY-SA 2.5
| null |
2011-03-11T21:15:49.980
|
2011-03-11T21:15:49.980
| null | null |
264
| null |
8180
|
1
| null | null |
3
|
2796
|
Does anybody know how to run a post hoc comparison in a 2x2 ANOVA with a covariate in R? The multcomp package seems very nice, but I could not find a clear answer or example for my question with this package. Thanks so much in advance.
|
Post hoc comparison in two way ANOVA with covariate using R
|
CC BY-SA 2.5
| null |
2011-03-11T21:24:19.273
|
2011-03-12T19:35:02.983
| null | null |
3682
|
[
"r",
"multiple-comparisons",
"contrasts"
] |
8181
|
2
| null |
8180
|
5
| null |
This is data from Maxwell & Delaney (2004), artificially extended to include a second between-subjects IV, yielding a 3x2 design. Using `multcomp`'s `glht()` function is easier once you switch to the associated one-factorial design by combining your two IVs into one with `interaction()`.
DV is depression scores pre-treatment and post-treatment. Treatment is one of SSRI, Placebo or Waiting List. Pre-treatment score is the covariate. I included a second IV.
```
P <- 3 # number of groups in IV1
Q <- 2 # number of groups in IV2
Njk <- 5 # cell size
SSRIpre <- c(18, 16, 16, 15, 14, 20, 14, 21, 25, 11)
SSRIpost <- c(12, 0, 10, 9, 0, 11, 2, 4, 15, 10)
PlacPre <- c(18, 16, 15, 14, 20, 25, 11, 25, 11, 22)
PlacPost <- c(11, 4, 19, 15, 3, 14, 10, 16, 10, 20)
WLpre <- c(15, 19, 10, 29, 24, 15, 9, 18, 22, 13)
WLpost <- c(17, 25, 10, 22, 23, 10, 2, 10, 14, 7)
IV1 <- factor(rep(1:3, each=Njk*Q), labels=c("SSRI", "Placebo", "WL"))
IV2 <- factor(rep(1:2, times=Njk*P), labels=c("A", "B"))
# combine both IVs into 1 to get the associated one-factorial design
IVi <- interaction(IV1, IV2)
DVpre <- c(SSRIpre, PlacPre, WLpre)
DVpost <- c(SSRIpost, PlacPost, WLpost)
dfAncova <- data.frame(IV1, IV2, IVi, DVpre, DVpost)  # data frame used in the aov() calls below
```
Now do the ANCOVA with `IVi` as between-subjects factor and `DVpre` as covariate. Using the associated one-factorial design is possible since it has the same Error-MS as the two-factorial design.
```
> aovAncova1 <- aov(DVpost ~ IVi + DVpre, data=dfAncova) # one-factorial design
> aovAncova2 <- aov(DVpost ~ IV1*IV2 + DVpre, data=dfAncova) # two-factorial design
> summary(aovAncova1)[[1]][["Mean Sq"]][3] # Error MS one-factorial design
[1] 32.84399
> summary(aovAncova2)[[1]][["Mean Sq"]][5] # Error MS two-factorial design
[1] 32.84399
```
Next comes the matrix defining 3 cell comparisons with sum-to-zero coefficients. The coefficients follow the order of 2*3 levels of `IVi`.
```
> levels(IVi)
[1] "SSRI.A" "Placebo.A" "WL.A" "SSRI.B" "Placebo.B" "WL.B"
> cntrMat <- rbind("SSRI-Placebo" = c(-1, 1, 0, -1, 1, 0),
+ "SSRI-0.5(P+WL)"= c(-2, 1, 1, -2, 1, 1),
+ "A-B" = c( 1, 1, 1, -1, -1, -1))
> library(multcomp)
> summary(glht(aovAncova1, linfct=mcp(IVi=cntrMat), alternative="greater"),
+ test=adjusted("none"))
Simultaneous Tests for General Linear Hypotheses
Multiple Comparisons of Means: User-defined Contrasts
Fit: aov(formula = DVpost ~ IVi + DVpre, data = dfAncova)
Linear Hypotheses:
Estimate Std. Error t value Pr(>t)
SSRI-Placebo <= 0 8.887 5.135 1.731 0.04846 *
SSRI-WL <= 0 12.878 5.129 2.511 0.00976 **
A-B <= 0 1.024 6.492 0.158 0.4380
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Adjusted p values reported -- none method)
```
| null |
CC BY-SA 2.5
| null |
2011-03-11T22:06:14.867
|
2011-03-12T19:35:02.983
|
2011-03-12T19:35:02.983
|
1909
|
1909
| null |
8182
|
1
| null | null |
10
|
8407
|
Is it possible to use kernel principal component analysis (kPCA) for Latent Semantic Indexing (LSI) in the same way as PCA is used?
I perform LSI in R using the `prcomp` PCA function and extract the features with highest loadings from the first $k$ components. By that I get the features describing the component best.
I tried to use the `kpca` function (from the `kernlib` package) but cannot see how to access the weights of the features to a principal component. Is this possible overall when using kernel methods?
|
Is it possible to use kernel PCA for feature selection?
|
CC BY-SA 3.0
| null |
2011-03-11T22:58:23.873
|
2015-08-11T16:46:11.983
|
2015-08-11T16:45:44.310
|
28666
|
3683
|
[
"r",
"pca",
"feature-selection",
"kernel-trick"
] |
8183
|
2
| null |
8146
|
1
| null |
I found this on [Google](http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=Mcnemar+test+java):
[http://www.jsc.nildram.co.uk/api/jsc/contingencytables/McNemarTest.html](http://www.jsc.nildram.co.uk/api/jsc/contingencytables/McNemarTest.html)
Does it do anything for you? McNemar's Test is implemented in the Java Statistical Classes Library, specifically in [jsc.contingencytables](http://www.jsc.nildram.co.uk/api/jsc/contingencytables/package-summary.html).
| null |
CC BY-SA 2.5
| null |
2011-03-11T23:19:14.227
|
2011-03-11T23:19:14.227
| null | null |
1118
| null |
8184
|
1
|
12221
| null |
8
|
1002
|
It's well established that both Anderson-Darling and Shapiro-Wilk have a much higher power to detect departures from normality than a KS-Test.
I have been told that Shapiro-Wilk is usually the best test to use if you want to test if a distribution is normal because it has one of the highest powers to detect lack of normality, but in my limited experience, it seems that Shapiro-Wilk gives me the same result as Anderson-Darling every time.
I thus have two questions:
- When does the Shapiro-Wilk test out-perform Anderson-Darling?
- Is there a uniformly most powerful lack of normality test, or, barring that possibility, a normality test that out-performs nearly all other normality tests, or is Shapiro-Wilk the best bet?
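One way to make such power comparisons concrete is a small simulation. A minimal sketch, using only base-R tests against an arbitrary skewed alternative (the `nortest` package's `ad.test` could be swapped in for the Anderson-Darling test):
```
# rough, illustrative power comparison of Shapiro-Wilk vs Kolmogorov-Smirnov
# against exponential data, at alpha = 0.05
set.seed(1)
n <- 50
reps <- 2000
rej <- replicate(reps, {
  x <- rexp(n)   # non-normal data
  c(sw = shapiro.test(x)$p.value < 0.05,
    # note: estimating mean/sd before ks.test is not strictly valid (Lilliefors issue);
    # this is only a rough illustration
    ks = ks.test(as.numeric(scale(x)), "pnorm")$p.value < 0.05)
})
rowMeans(rej)   # estimated power of each test
```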
|
Most powerful GoF test for normality
|
CC BY-SA 2.5
| null |
2011-03-11T23:36:27.563
|
2011-06-22T19:13:01.430
|
2011-03-12T10:43:08.117
| null |
1118
|
[
"goodness-of-fit"
] |
8185
|
1
| null | null |
9
|
6159
|
From basic statistics and hearing "correlation is not causation" all the time, I tend to think it's fine to say that "X and Y are correlated" even if X and Y aren't in a causal relationship. For example, I'd normally think it's perfectly okay to say that ice cream sales and swimsuit sales are correlated, since high swimsuit sales probably means high ice cream sales (even though increases in swimsuit sales don't cause an increase in ice cream sales).
However, when studying time series analysis, I get a little confused about this terminology. It seems like a time series analyst would not say that ice cream sales are correlated with swimsuit sales, but rather that ice cream sales are spuriously correlated with swimsuit sales. An unmodified "X is correlated with Y" seems to be reserved for the case where X actually causes Y, so it's fine to say temperature (but not ice cream) is correlated with swimsuit sales.
Is this correct? My problem is that there seem to be two meanings to spurious correlation:
- Regress two independent random walks against each other, and ordinary statistical tests will say that they're correlated, even though the two random walks are obviously unrelated in any fashion. (I'm fine with this meaning of spurious correlation, since there really is no relationship; a small simulation illustrating it is sketched below.)
- Regress ice cream sales against swimsuit sales. It confuses me that this correlation is called spurious, since there really is a relationship between ice cream sales and swimsuit sales, even though this relationship isn't causal.
So I guess my question is: do time series analysts reserve the term "(non-spurious) correlation" for causal relationships -- so that for time series analysts, correlation is meant to suggest causation! -- while statisticians in general are fine with using "correlation" to indicate any kind of (possibly non-causal) relationship?
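To illustrate the first meaning above, a minimal simulation sketch (two independent random walks that nevertheless produce a "significant" regression):
```
# two independent random walks: no relationship at all, yet the naive
# levels-on-levels regression typically reports a highly "significant" slope
set.seed(123)
n <- 500
x <- cumsum(rnorm(n))
y <- cumsum(rnorm(n))
summary(lm(y ~ x))$coefficients   # spurious "significance"
cor(x, y)                         # often far from 0 despite independence
```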
|
"Correlation" terminology in time series analysis
|
CC BY-SA 2.5
| null |
2011-03-11T23:54:24.433
|
2011-03-12T16:37:12.907
|
2011-03-12T10:39:22.473
| null | null |
[
"time-series",
"correlation"
] |
8186
|
2
| null |
8185
|
7
| null |
In order to avoid the spurious correlation problem, you should regress two stationary time series against one another. This can (potentially) provide a causal story. It is non-stationary series that lead to spurious correlation. See the reasoning given by my answer to [this question](https://stats.stackexchange.com/questions/7975/what-to-make-of-explanatories-in-time-series/8037#8037) (As a footnote, you may not need stationary series if they are cointegrated, but I'd point you to any of the applied time series books to learn more about that.)
| null |
CC BY-SA 2.5
| null |
2011-03-12T00:04:38.953
|
2011-03-12T00:04:38.953
|
2017-04-13T12:44:37.793
|
-1
|
401
| null |
8187
|
1
|
8279
| null |
4
|
460
|
I am a beginner in statistics. I have these unpublished data (cases and deaths) of a disease for 7 years (2004-2010) from 2 neighbouring states. The study was started in 2004. State 1 and State 2 received different treatments. The death rate is high in one State. I want to prove that treatment given in State 1 is superior compared to that given in State 2. Time series was not accepted because of many possible confounding variables.
I have SPSS and Comprehensive Meta-analysis software. Please guide me.
- What is the best design? Prospective cohort study or Case-control study?
- What is the best effect measure (Odds ratio, risk ratio, any other)?
- What is the best statistical test (?Chi square etc)
- Can I use meta-analysis in this case?
- Any other information you think that would be of use.
```
         State 1            State 2
      Cases  Deaths      Cases  Deaths
2004   1125       5       2024     254
2005   1213       5       1978     209
2006   1003       4       2294     217
2007   1425       6       2312     249
2008   1172       4       1528     197
2009   1092       3       1683     204
2010   1316       4       2024     218
```
|
How to assess effect of intervention in one state versus another using annual death rate data?
|
CC BY-SA 2.5
| null |
2011-03-12T01:05:02.263
|
2012-12-19T13:34:54.220
|
2011-03-13T04:16:15.557
|
2956
|
2956
|
[
"meta-analysis",
"experiment-design",
"effect-size",
"odds-ratio",
"relative-risk"
] |
8188
|
2
| null |
8145
|
2
| null |
Calculate the [correlogram](http://en.wikipedia.org/wiki/Correlogram) for your process. If your process is Gaussian (by the looks of your sample it is) you can establish lower/upper bounds (B) and check whether the correlations at a given lag are significant. Positive autocorrelation at lag 1 would indicate the existence of "streaks of luck".
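A minimal sketch of that check in R, assuming the solution times are in a vector `times` (a hypothetical name):
```
times <- rnorm(200)   # placeholder for the observed solution times
# sample autocorrelations; acf() draws approximate 95% bounds automatically
r <- acf(times, lag.max = 20, main = "Correlogram of solution times")
# bound for an individual lag under the white-noise null
bound <- qnorm(0.975) / sqrt(length(times))
r$acf[2] > bound      # is the lag-1 autocorrelation significantly positive?
```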
| null |
CC BY-SA 2.5
| null |
2011-03-12T03:35:21.030
|
2011-03-12T03:35:21.030
| null | null | null | null |
8189
|
2
| null |
8185
|
2
| null |
As to your main question, my answer is No. If you've seen such a distinction between the way the terms are used in time series contexts vs. cross-sectional contexts, it must be due to one or two idiosyncratic authors you've read. Rigorous authors would never be correct in using "correlation" to mean "causation." You've gotten ahold of a spurious terminology distinction there. Interesting question, though.
| null |
CC BY-SA 2.5
| null |
2011-03-12T03:46:28.557
|
2011-03-12T03:46:28.557
| null | null |
2669
| null |
8190
|
2
| null |
8145
|
5
| null |
A few thoughts:
- Plot the distribution of times.
My guess is that they will be positively skewed, such that some solution times are really slow. In that case you might want to consider a log or some other transformation of solution times.
- Create a scatter plot of trial on the x-axis and solution time (or log solution time) on the y-axis.
This should give you an intuitive understanding of the data.
It may also reveal other kinds of trends besides the "hot streak".
- Consider whether there is a learning effect over time.
With most puzzles, you get quicker with practice.
The plot should help to reveal whether this is the case.
Such an effect is different to a "hot streak" effect.
It will lead to correlation between trials because when you are first learning, slow trials will co-occur with other slow trials, and as you get more experienced, faster trials will co-occur with faster trials.
- Consider your conceptual definition of "hot streaks".
For example, does it only apply to trials that are proximate in time, or is it about proximity of order? Say you solved the cube quickly on Tuesday, and then had a break and on the next Friday you solved it quickly. Is this a hot streak, or does it only count if you do it on the same day?
- Are there other effects that might be distinct from a hot streak effect?
E.g., time of day that you solve the puzzle (e.g., fatigue), degree to which you are actually trying hard? etc.
- Once the alternative systematic effects have been understood, you could develop a model that includes as many of them as possible.
You could plot the residual on the y axis and trial on the x-axis.
Then you could see whether there are auto-correlations in the residuals in the model.
This auto-correlation would provide some evidence of hot streaks.
However, an alternative interpretation is that there is some other systematic effect that you have not excluded.
| null |
CC BY-SA 2.5
| null |
2011-03-12T05:40:11.007
|
2011-03-12T05:40:11.007
| null | null |
183
| null |
8191
|
2
| null |
8160
|
6
| null |
### Data or Theory Driven?
The first issue is whether you want the composite to be data driven or theory driven?
If you are wishing to form a composite variable, it is likely that you think that each component variable is important in measuring some overall domain.
In this case, you are likely going to prefer a theoretical set of weights. If, alternatively, you are interested in whatever is shared or common amongst the component variables, at the risk of not including one of the variables because it measures something that is orthogonal or less related to the remaining set, then you might want to explore data driven approaches.
This question maps on to the discussion in the structural equation modelling literature between reflective and formative measures
( e.g., see [here](http://smib.vuw.ac.nz:8081/WWW/ANZMAC2004/CDsite/papers/Bucic1.PDF)).
Whatever you do it is important to align your measurement with your actual research question.
### Theory Driven
If the composite is theoretically driven then you will want to form a weighted composite of the component variables where the weight assigned aligns with your theoretical weighting of the component.
If the variables are ordinal, then you'll have to think about how to scale the variable.
After scaling each component variable, you'll have to think about theoretical relative weighting and issues related to differential standard deviations of the variable.
One simple strategy is to convert all component variables into z-scores, and sum the z-scores.
If you have component variables, where some are positive and others are negative, then you'll need to reverse either just the negative or just the positive component variables.
I wrote a [post on forming composites](http://jeromyanglim.blogspot.com/2009/03/calculating-composite-scores-of-ability.html) which addresses several scenarios for forming composites.
Theory-driven approaches can be implemented easily in any statistical package.
`score.items` in the `psych` package is one function that makes it a little easier, but it is limited.
You might just write your own equation using simple arithmetic, and perhaps the `scale` function.
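A minimal sketch of the z-score composite just described (column names and the reverse-keyed item are hypothetical):
```
# hypothetical data: three component variables, one of them negatively keyed
dat <- data.frame(v1 = rnorm(50), v2 = rnorm(50), neg1 = rnorm(50))
z <- scale(dat)                 # convert each component to z-scores
z[, "neg1"] <- -z[, "neg1"]     # reverse the negatively keyed component
composite <- rowSums(z)         # unit-weighted composite; apply theoretical weights here if needed
head(composite)
```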
### Data Driven
If you are more interested in being data driven, then there are many possible approaches.
Taking the first principal component sounds like a reasonable idea.
If you have ordinal variables you might think about categorical PCA which would allow the component variables to be reweighted. This could automatically handle the quantification given the constraints you provide.
| null |
CC BY-SA 2.5
| null |
2011-03-12T06:02:41.313
|
2011-03-13T01:06:48.167
|
2011-03-13T01:06:48.167
|
183
|
183
| null |
8192
|
1
| null | null |
2
|
1420
|
I want to test whether a correlation coefficient is greater than 0.5.
- What should the null and alternative hypothesis be?
- Should I do a two tailed test or one tailed test?
Thanks
|
Hypothesis test for whether correlation coefficient is greater than specified value?
|
CC BY-SA 2.5
| null |
2011-03-12T06:59:04.090
|
2011-03-13T03:40:01.790
|
2011-03-12T07:18:23.997
|
183
|
3725
|
[
"hypothesis-testing",
"correlation"
] |
8193
|
2
| null |
8169
|
4
| null |
Let's say we have two classes $C$, i.e. $c_1$ and $c_2$, and further that the points of class $c_1$ are drawn from $N(a,b)$ and the points of $c_2$ from $N(c,d)$ respectively (note the change in notation). The point variable shall be $X$, a value of this variable $x$.
Here is the Bayes-Theorem:
$p(C|X)=\frac{p(X|C)*p(C)}{p(X)}$
The bayes decision rule for binary classification states: Given x, one decides...
- in favor of $c_1$ If $p(c_1|x) > p(c_2|x)$
- in favor of $c_2$ If $p(c_2|x) > p(c_1|x)$
The case where both posterior probabilities are equal has to be treated separately. Normally one favors one class more than the other and hence decides in favor of that class (if a decision is forced). Let's say that, if the posteriors are equal, we decide in favor of class $c_2$.
The above decision rule can be reformulated to:
If $\frac{p(c_1|x)}{p(c_2|x)} > 1$, decide in favor of $c_1$, else $c_2$
Applying Bayes Theorem ...
$\frac{p(c_1|x)}{p(c_2|x)}$ = $\frac{p(x|c_1)*p(c_1)}{p(x|c_2)*p(c_2)}$
If the priors are equal, the decision rule reduces to checking whether $\frac{p(x|c_1)}{p(x|c_2)} > 1$.
If $p(c_1)=0.7$ and $p(c_2)=0.3$, the rule becomes $\frac{0.7}{0.3}*\frac{p(x|c_1)}{p(x|c_2)}\approx 2.33*\frac{p(x|c_1)}{p(x|c_2)} > 1$
The conditional densities p(X|C) are defined by the corresponding normal distributions. The density of a normal distribution for (not at) a point x can be calculated as $pdf(x|N(a,b))=\frac{1}{\sqrt{2\pi*b}} * e^{-\frac{(x-a)^2}{2b}}$.
Inserting this into our formula (for case of equal priors) one gets
$\frac{p(x|c_1)}{p(x|c_2)}=\frac{pdf(x|N(a,b))}{pdf(x|N(c,d))}$. Starting from here you can "simplify" the formula even further.
Edit:
Here is some R - code to visualize the priors * conditional probabilities (ignoring the normalization factor $p(X)$):
```
require(lattice)
#
set.seed(42)
# parameters for distribution of class 1
a <- 0
b <- 1
# parameters for distribution of class 2
c <- 3
d <- 1
x <- c(sort(rnorm(1000,mean=a,sd=b)),sort(c(rnorm(1000,mean=c,sd=d))))
y <- c(dnorm(x[1:1000],mean=a,sd=b),dnorm(x[1001:2000],mean=c,sd=d))
labels <- factor(rep(c("class 1","class 2"),each=1000))
dat <- data.frame("x"=x,"density"=y,"groups"=labels)
xyplot(density~x,data=dat,groups=labels,type="b",auto.key=T)
```
which results in this plot, where you can see the decision "rule" (line). Starting from here, I suggest playing around a little bit with the priors to get a good feeling for what's going on.

| null |
CC BY-SA 3.0
| null |
2011-03-12T08:08:58.610
|
2016-05-06T13:29:48.270
|
2016-05-06T13:29:48.270
|
107098
|
264
| null |
8194
|
2
| null |
8192
|
1
| null |
I came up with the following by taking a look at some video lectures. Are they correct?
The null hypothesis : correlation = 0.5
The alternative hypothesis : correlation > 0.5
Please confirm if they are correct.
| null |
CC BY-SA 2.5
| null |
2011-03-12T08:18:32.070
|
2011-03-12T09:21:54.810
|
2011-03-12T09:21:54.810
|
3725
|
3725
| null |
8195
|
2
| null |
8185
|
7
| null |
There is a good definition of spurious relationship in [wikipedia](http://en.wikipedia.org/wiki/Spurious_relationship). Spurious means that there is some hidden variable or feature which causes both of the variables. In both time series and in usual regression the terminology means the same: the relationship between two variables is spurious when something else causes both variables. In the time-series context this something else is an inherent property of random walks, in usual regression analysis some other variable.
| null |
CC BY-SA 2.5
| null |
2011-03-12T08:51:13.900
|
2011-03-12T11:44:43.593
|
2011-03-12T11:44:43.593
|
2116
|
2116
| null |
8196
|
1
| null | null |
25
|
5437
|
This is somewhat related to [another question](https://stats.stackexchange.com/questions/8192/hypothesis-test-for-whether-correlation-coefficient-is-greater-than-specified-val) that I asked. The question I have is, when doing hypothesis testing, when the alternative hypothesis is a range, the null hypothesis is still a point value. As an example, when testing whether a correlation coefficient is greater than 0.5, the null hypothesis is "correlation = 0.5" instead of "correlation <= 0.5". Why is this the case? (or have I got it wrong?)
|
Why is the null hypothesis always a point value rather than a range in hypothesis testing?
|
CC BY-SA 2.5
| null |
2011-03-12T09:27:27.410
|
2013-07-19T14:26:19.000
|
2017-04-13T12:44:44.767
|
-1
|
3725
|
[
"hypothesis-testing"
] |
8197
|
2
| null |
8166
|
4
| null |
One problem with your pre-post paired t-test idea is that it could give a small p-value if there's a general upward (or downward) trend in the corruption score over time regardless of ratification. You need some sort of comparison group in which the ratification status did not change between the two time points.
In principle, one approach to forming a comparison group could be to find a matched 'control' country that didn't change its ratification status during the same time period for each country (of a sample of those) that did. The 'control' country should have similar baseline corruption score, and ideally be as similar as possible in other respects that might affect the rate of change of corruption over time (that's what I mean by matching). I haven't looked at the data so I've no idea if this is feasible. You'd want to consider removing the country that changed status from your analysis if you can't find a 'control' country that matches closely enough (but what's 'close enough'?).
In any case it certainly wouldn't be simple, but drawing valid and defensible conclusions about causal effects from observational data seldom is. I agree with @JMS that it's probably not worth attempting in your situation.
| null |
CC BY-SA 2.5
| null |
2011-03-12T09:34:31.413
|
2011-03-12T09:34:31.413
| null | null |
449
| null |
8198
|
2
| null |
8196
|
21
| null |
First, it is not always the case. There might be a composite null.
Most standard tests have a simple null because in the framework of Neyman and Pearson the aim is to provide a decision rule that permits you to control the error of rejecting the null when it is true. To control this error you need to specify one distribution for the null.
When you have a composite hypothesis there are many possibilities. In this case, there are two natural types of strategies: either a Bayesian one (i.e. put weights on the different null distributions) or a minimax one (where you want to construct a test that has a controlled error in the worst case).
In the Bayesian setting, using the posterior, you are rapidly back to the case of a simple null. In the minimax setting, if the null is something like correlation $\leq$ 0.5 it might be that the problem is equivalent to using the simple null correlation = 0.5. Hence, to avoid talking about minimax, people directly take the simple null that is the 'extreme point' of the composite setting. In the general case it is often possible to transform the composite minimax null into a simple null... hence treating the case of a composite null rigorously is, to my knowledge, mostly done by going back somehow to a simple null.
| null |
CC BY-SA 3.0
| null |
2011-03-12T10:13:40.070
|
2013-07-19T14:26:19.000
|
2013-07-19T14:26:19.000
|
17230
|
223
| null |
8199
|
2
| null |
8196
|
2
| null |
I don't think that the null hypothesis should always be something like correlation=0.5. At least in the problems which I have come across that wasn't the case. For example in information theoretic statistics the following problem is considered. Suppose that $X_1, X_2, \cdots, X_n$ are coming from an unknown distribution $Q$. In the simplest case we want to test between two distributions $P_1$ and $P_2$. So the hypotheses are $H_1:Q=P_1$ and $H_2:Q=P_2$.
| null |
CC BY-SA 2.5
| null |
2011-03-12T10:24:55.427
|
2011-03-12T10:31:47.807
|
2011-03-12T10:31:47.807
|
3485
|
3485
| null |
8200
|
2
| null |
8161
|
4
| null |
If time series are cointegrated they admit VECM representation according to Granger Representation theorem. This is scantily explained in [this wikipedia page](http://en.wikipedia.org/wiki/Johansen_test). So if we have I(1) process:
$$X_t=\mu+\Phi D_t+\Pi_1 X_{t-1}+...+\Pi_p X_{t-p}+\varepsilon_t$$
it admits VECM representation
$$\Delta X_t=\mu+\Phi D_t+\Pi X_{t-p}+ \Gamma_1 \Delta X_{t-1}+...+\Gamma_p \Delta X_{t-p+1}+\varepsilon_t$$
What this means is that if you difference time series and do a linear regression as per your second approach you are not including the cointegration term $\Pi X_{t-p}$. So your regression suffers from [omitted variable problem](http://en.wikipedia.org/wiki/Omitted-variable_bias), which in turn makes the test you are trying to use not viable.
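As a minimal sketch (assuming the `urca` package and simulated data) of estimating the VECM instead of a regression in differences:
```
# minimal sketch with simulated cointegrated series, assuming the urca package
library(urca)
set.seed(1)
x <- cumsum(rnorm(200))                 # common stochastic trend, I(1)
y <- 0.5 * x + rnorm(200)               # cointegrated with x
dat <- cbind(y, x)
jo <- ca.jo(dat, type = "trace", ecdet = "const", K = 2)  # Johansen test
summary(jo)
vecm <- cajorls(jo, r = 1)              # VECM with one cointegrating relation
vecm$rlm                                # note the error-correction term (ect1)
```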
| null |
CC BY-SA 3.0
| null |
2011-03-12T12:25:42.250
|
2014-10-31T12:40:35.140
|
2014-10-31T12:40:35.140
|
2116
|
2116
| null |
8201
|
1
|
8205
| null |
7
|
319
|
EDIT: I reduced my problem to a more specific question:
[https://math.stackexchange.com/questions/26573/](https://math.stackexchange.com/questions/26573/)
But I am still interested in other ideas.
Let's say our data is generated by
$$Y_i = f(X_i) + \epsilon_i$$
where $X_i$ are observed vectors, and $f$ is an unknown function. We know that $f$ is invariant with respect to permutation of the elements of $X$. For example, if $X_i=[x_{i1},x_{i2},x_{i3}]$, then we have
$$
f([x_{i1},x_{i2},x_{i3}]) = f([x_{i1},x_{i3},x_{i2}]) = f([x_{i2},x_{i1},x_{i3}])=\cdots
$$
Are there modified versions of linear regression, support vector machines, forests, etc. which can be used to estimate $f$? I'm specifically interested in the case when $X_i$ are eigenvalues of matrices (so they have complex-valued entries).
EDIT: A desperation move would be to make replicates of each data point with all permutations of each $X_i$ vector and then apply standard methods, but this is clearly computationally impractical.
|
Learning from unordered tuples?
|
CC BY-SA 2.5
| null |
2011-03-12T15:58:35.533
|
2019-12-03T20:36:38.550
|
2019-12-03T17:19:59.853
|
11887
|
3567
|
[
"regression",
"modeling",
"functional-data-analysis"
] |
8204
|
1
| null | null |
2
|
445
|
I have used factor analysis with the regression method. I was trying to develop an index from this regression equation, using the output of the component score coefficient matrix (SPSS) generated for each factor. Should I include the residuals found under the "Reproduced Correlations Table" (SPSS) in the regression equation?
|
Factor analysis: regression equation and residuals
|
CC BY-SA 2.5
| null |
2011-03-12T16:38:34.327
|
2011-03-12T19:26:13.213
|
2011-03-12T19:26:13.213
| null | null |
[
"spss",
"factor-analysis"
] |
8205
|
2
| null |
8201
|
4
| null |
For regression models, one way would be to generate derived variables that are invariant to permutation of the labelling of the $x_i$s.
E.g. in your three-variable example, considering only polynomials of total order up to 3, such combinations would be:
- $w_1 = x_1 + x_2 + x_3$
- $w_2 = x_1x_2 + x_1x_3 + x_2x_3$
- $w_3 = x_1^2 + x_2^2 + x_3^2$
- $w_4 = x_1x_2x_3$
- $w_5 = x_1^2x_2 + x_1^2x_3 + x_2^2x_1 + x_2^2x_3 + x_3^2x_1 + x_3^2x_2$
- $w_6 = x_1^3 + x_2^3 + x_3^3$
You could then use any form of regression that includes some function $f(a_1w_1 + a_2w_2 + \cdots + a_6w_6)$ and find values for the $a_i$s by non-linear least squares, generalized linear modelling or other methods. The combination $a_1w_1 + a_2w_2 + \cdots + a_6w_6$ fits a response surface that's a polynomial of order 3 that is symmetric in permutation of $x_1, x_2, x_3$.
Clearly there would be many more possibilities if you wished to allow functions other than polynomials such as logs, fractional powers...
(EDIT I finished this post before I saw your edit to the answer with the link to the more specific question on mathoverflow. I was starting to think there must be some mathematical framework for listing all such polynomials of a given total order, but it sounds like you already know more than I do about the relevant area of maths!)
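A minimal sketch of building a few of these permutation-invariant features in R before fitting any regression (data and names are hypothetical):
```
# hypothetical 3-column data matrix X and response y
X <- matrix(rnorm(300), ncol = 3)
y <- rnorm(100)
W <- cbind(
  w1 = rowSums(X),
  w2 = X[, 1] * X[, 2] + X[, 1] * X[, 3] + X[, 2] * X[, 3],
  w3 = rowSums(X^2),
  w4 = X[, 1] * X[, 2] * X[, 3],
  w6 = rowSums(X^3)
)
# any model fitted on W is automatically symmetric in x1, x2, x3, e.g.
fit <- lm(y ~ W)
summary(fit)
```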
| null |
CC BY-SA 2.5
| null |
2011-03-12T17:21:16.623
|
2011-03-12T17:33:03.023
|
2011-03-12T17:33:03.023
|
449
|
449
| null |
8206
|
1
| null | null |
11
|
55793
|
I need to build a boxplot without any axes and add it to the current plot (ROC curve), but I need to add more text information to the boxplot: the labels for min and max. Current line of code is below (current graph also).
Thanks a lot for assistance.
```
boxplot(data, horizontal = TRUE, range = 0, axes=FALSE, col = "grey", add = TRUE)
```
The other solution is to add a line from 0 to 1 (instead of the x-axis), but I want it to go through the central line... for example like this graphic

|
Labeling boxplots in R
|
CC BY-SA 2.5
| null |
2011-03-12T17:31:58.277
|
2018-02-24T05:07:17.837
|
2011-03-12T22:36:01.140
|
919
|
3345
|
[
"r",
"boxplot"
] |
8207
|
2
| null |
8201
|
3
| null |
To add to onestop's response, it was confirmed on math.SE that the polynomials
$$w_1 = x_1 + \cdots + x_n$$
$$w_2 = x_1^2 + \cdots + x_n^2$$
$$\cdots$$
$$w_n = x_1^n + \cdots + x_n^n$$
give you all the information needed to determine the original $X=(x_1, \cdots, x_n)$ up to permutation, which is exactly what is needed here.
This is a neat result because it also applies to moments of a discrete distribution with uniform probabilities.
| null |
CC BY-SA 2.5
| null |
2011-03-12T18:08:32.600
|
2011-03-12T18:08:32.600
| null | null |
3567
| null |
8208
|
2
| null |
7946
|
2
| null |
To echo everyone else: MORE DETAILS ABOUT YOUR DATA. Please give a qualitative description of what your independent and dependent variable(s) is/are.
EDIT: Yes this is confusing; hopefully it's cleared up now.
In general, you probably want to avoid using sample statistics to estimate population parameters if you have the population data. This is because sample statistics are estimates of population parameters, thus the methods used to compute sample statistics always have less power than those same methods in their population parameter version(s). Of course, most of the time you have to use sample statistics because you don't have complete population data.
In your case either way you slice it inferring anything about a population from a case study is dubious because case studies are, by definition, case by case. You could make an inference about the case on which you collected data, but how useful is that? Maybe in your case it is.
Either way, forget about whether or not you can/should use a sample method when you have the population data. You don't have population data if it's a case study. Also, sample vs. population has to do with making inferences. You do not need to worry about sample vs. population methods if all you want is a correlation coefficient, because it is a purely descriptive statistic.
Your fourth bullet point is completely unintelligible. Please clear that up if you would like people to help you with it.
@mpiktas A Spearman rank correlation is NOT the proper correlation coefficient to use here. To use that test all data must be ranked and discrete (unless >= 2 values compete for a rank), i.e., they must be ordinal data. Maybe the HVOC table could be analyzed via Spearman's $\rho$, however more information must be provided by the poster to make that conclusion.
@whuber Yes all data are discrete when represented on a computer, however in this case it seems like what BB01 was referring to was the scale of measurement, not the electronic representation of numbers.
| null |
CC BY-SA 2.5
| null |
2011-03-12T18:10:30.497
|
2011-03-21T03:33:02.500
|
2011-03-21T03:33:02.500
|
2660
|
2660
| null |
8209
|
2
| null |
8148
|
5
| null |
There is a very wide variety of clustering methods, which are exploratory by nature, and I do not think that any of them, whether hierarchical or partition-based, relies on the kind of assumptions that one has to meet for analysing variance.
Having a look at the [MV] documentation in Stata to answer your question, I found this amusing quote at page 85:
>
Although some have said that there are as many cluster-analysis methods as there are people performing cluster analysis. This is a gross understatement! There exist infinitely more ways to perform a cluster analysis than people who perform them.
In that context, I doubt that there are any assumptions applying across clustering method. The rest of the text just sets out as a general rule that you need some form of "dissimilarity measure", which need not even be a metric distance, to create clusters.
There is one exception, though, which is when you are clustering observations as part of a post-estimation analysis. In Stata, the `vce` command comes with the following warning, at page 86 of the same source:
>
If you are familiar with Stata’s large array of estimation commands, be careful to distinguish between cluster analysis (the cluster command) and the vce(cluster clustvar) option allowed with many estimation commands. Cluster analysis finds groups in data. The vce(cluster clustvar) option allowed with various estimation commands indicates that the observations are independent across the groups defined by the option but are not necessarily independent within those groups. A grouping variable produced by the cluster command will seldom satisfy the assumption behind the use of the vce(cluster clustvar) option.
Based on that, I would assume that independent observations are not required outside of that particular case. Intuitively, I would add that cluster analysis might even be used to the precise purpose of exploring the extent to which the observations are independent or not.
I'll finish by mentioning that, at [page 356](http://books.google.fr/books?id=jVTEmrenxTQC&lpg=PA337&ots=J0HpEFmuN3&hl=en&pg=PA356) of Statistics with Stata, Lawrence Hamilton mentions standardized variables as an "essential" aspect of cluster analysis, although he does not go into more depth on the issue.
| null |
CC BY-SA 2.5
| null |
2011-03-12T18:11:59.990
|
2011-03-12T18:24:45.247
|
2011-03-12T18:24:45.247
|
3582
|
3582
| null |
8210
|
2
| null |
8187
|
2
| null |
I might have wrongly understood the data (see my comment above), but what would be so wrong in simply t-testing the difference in means of the cases-to-deaths ratios between the two series, assuming that the states are clearly independent from each other?
A t-test shows that State 1 performs far better than State 2 in preventing deaths in its cases of the disease (the ratio below is cases per death, so higher is better):
```
year state ratio
2004 2 7.968504
2004 1 225
2005 2 9.464115
2005 1 242.6
2006 1 250.75
2006 2 10.57143
2007 1 237.5
2007 2 9.285141
2008 2 7.756345
2008 1 293
2009 1 364
2009 2 8.25
2010 2 9.284404
2010 1 329
```
Given the huge difference between the two series, and given that you cannot design a proper experiment, it seems to me that no superior level of "proof" is really needed to proclaim one state's approach superior to the other.
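For completeness, a minimal sketch of the t-test mentioned above, using the ratios tabulated here (Welch's two-sample t-test by default):
```
# cases-per-death ratios from the table above
state1 <- c(225, 242.6, 250.75, 237.5, 293, 364, 329)
state2 <- c(7.968504, 9.464115, 10.57143, 9.285141, 7.756345, 8.25, 9.284404)
t.test(state1, state2)
```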
| null |
CC BY-SA 2.5
| null |
2011-03-12T18:45:14.493
|
2011-03-13T02:49:01.330
|
2011-03-13T02:49:01.330
|
3582
|
3582
| null |
8211
|
2
| null |
8206
|
8
| null |
Try something like this for a standalone version:
```
bxp <- boxplot(rnorm(100), horizontal=TRUE, axes=FALSE)
mtext(c("Min","Max"), side=3, at=bxp$stats[c(1,5)], line=-3)
```
Note that you can get some information when calling `boxplot`, in particular the "five numbers".
If you want it to be superimposed onto another graphic, use `add=T` but replace `mtext` by `text`; you will need to set a $y$ value (which depends on the way you plot the other graphic).
A more complete example was given by [John Maindonald](http://maths.anu.edu.au/~johnm/) (code should be on his website):

| null |
CC BY-SA 2.5
| null |
2011-03-12T18:55:43.080
|
2011-03-12T18:55:43.080
| null | null |
930
| null |
8212
|
2
| null |
8206
|
9
| null |
I think you will find this produces something like your hand-drawn diagram.
```
data <- c(0.4, 0.7, 0.75, 0.82, 0.9)
endaxis <- c(0, 1) # endpoints of axis
datamm <- c(min(data), max(data))
boxplot(data, horizontal = TRUE, range = 0, ylim = endaxis,
axes = FALSE, col = "grey", add = FALSE)
arrows(endaxis, 1, datamm, 1, code = 1, angle = 90, length = 0.1)
valuelabels <- c(endaxis[1], round(fivenum(data)[2], digits = 2) ,
round(fivenum(data)[4], digits = 2), endaxis[2] )
text(x = valuelabels, y = c(1.05, 1.25, 1.25, 1.05), labels = valuelabels)
```

There are probably better ways of doing it.
You may need to adapt it to fit your ROC plot, including changing `add = FALSE`
| null |
CC BY-SA 2.5
| null |
2011-03-12T19:06:21.527
|
2011-03-13T09:09:30.617
|
2011-03-13T09:09:30.617
|
930
|
2958
| null |
8213
|
1
|
8214
| null |
11
|
3911
|
For some volume reconstruction algorithm I'm working on, I need to detect an arbitrary number of circular patterns in 3d point data (coming from a LIDAR device). The patterns can be arbitrarily oriented in space, and be assumed to lie (although not perfectly) in thin 2d planes. Here is an example with two circles in the same plane (although remember this is a 3d space):

I tried many approaches.. the simplest (but the one working best so far) is clustering based on disjoint sets of the nearest neighbor graph. This works reasonably well when the patterns are far apart, but less so with circles like the ones in the example, really close to each other.
I tried K-means, but it doesn't do well: I suspect the circular point arrangement might not be well suited for it. Plus I have the additional problem of not knowing in advance the value of K.
I tried more complicated approaches, based on the detection of cycles in the nearest neighbor graph, but what I got was either too fragile or computationally expensive.
I also read about a lot of related topics (Hough transform, etc) but nothing seems to apply perfectly in this specific context. Any idea or inspiration would be appreciated.
|
Detect circular patterns in point cloud data
|
CC BY-SA 2.5
| null |
2011-03-12T19:50:46.497
|
2019-02-28T07:11:56.823
|
2011-03-12T20:39:07.317
|
3693
|
3693
|
[
"clustering",
"image-processing"
] |
8214
|
2
| null |
8213
|
10
| null |
A generalized [Hough transform](http://en.wikipedia.org/wiki/Hough_transform) is exactly what you want. The difficulty is to do it efficiently, because the space of circles in 3D has six dimensions (three for the center, two to orient the plane, one for the radius). This seems to rule out a direct calculation.
One possibility is to sneak up on the result through a sequence of simpler Hough transforms. For instance, you could start with the (usual) Hough transform to detect planar subsets: those require only a 3D grid for the computation. For each planar subset detected, slice the original points along that plane and perform a generalized Hough transform for circle detection. This should work well provided the original image does not have a lot of coplanar points (other than the ones formed by the circles) that could drown out the signal generated by the circles.
If the circle sizes have a predetermined upper bound you can potentially save a lot of computation: rather than looking at all pairs or triples of points in the original image, you can focus on pairs or triples within a bounded neighborhood of each point.
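Not the Hough transform itself, but as an illustration of the "slice and fit" step, here is a simple algebraic (Kåsa-style) least-squares circle fit one might apply to points already projected into a detected plane; it is a sketch, not a robust detector:
```
# algebraic least-squares circle fit in 2D:
# a circle satisfies x^2 + y^2 + D*x + E*y + F = 0, which is linear in (D, E, F)
fit_circle <- function(x, y) {
  A <- cbind(x, y, 1)
  b <- -(x^2 + y^2)
  coefs <- qr.solve(A, b)                  # least-squares solution for (D, E, F)
  centre <- -coefs[1:2] / 2
  radius <- sqrt(sum(centre^2) - coefs[3])
  list(centre = centre, radius = radius)
}

# noisy points on a circle of radius 2 centred at (1, -1)
theta <- runif(200, 0, 2 * pi)
x <- 1 + 2 * cos(theta) + rnorm(200, sd = 0.05)
y <- -1 + 2 * sin(theta) + rnorm(200, sd = 0.05)
fit_circle(x, y)
```
Combined with a RANSAC-style loop, a fit like this can also reject points that do not belong to any circle.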
| null |
CC BY-SA 2.5
| null |
2011-03-12T22:32:41.720
|
2011-03-12T22:32:41.720
| null | null |
919
| null |
8215
|
2
| null |
125
|
15
| null |
My favourite first undergraduate text for bayesian statistics is by Bolstad, [Introduction to Bayesian Statistics](http://rads.stackoverflow.com/amzn/click/0470141158). If you're looking for something graduate level, this will be too elementary, but for someone who is new to statistics this is ideal.
| null |
CC BY-SA 2.5
| null |
2011-03-12T23:55:47.807
|
2011-03-12T23:55:47.807
| null | null |
3694
| null |
8216
|
2
| null |
125
|
9
| null |
I've at least glanced at most of these on this list and none are as good as the new [Bayesian Ideas and Data Analysis](http://rads.stackoverflow.com/amzn/click/1439803544) in my opinion.
Edit: It is easy to immediately begin doing Bayesian analysis while reading this book. Not just model the mean from a Normal distribution with known variance, but actual data analysis after the first couple of chapters. All code examples and data are on the book's website. Covers a decent amount of theory but the focus is applications. Lots of examples over a wide range of models. Nice chapter on Bayesian Nonparametrics. WinBUGS, R, and SAS examples. I prefer it over Doing Bayesian Data Analysis (I have both). Most of the books on here (Gelman, Robert, ...) are not introductory in my opinion and unless you have someone to talk to you will probably be left with more questions than answers. Albert's book does not cover enough material to feel comfortable analyzing data different from what is presented in the book (again my opinion).
| null |
CC BY-SA 3.0
| null |
2011-03-13T00:58:39.420
|
2012-07-28T20:29:52.300
|
2012-07-28T20:29:52.300
|
2310
|
2310
| null |
8217
|
2
| null |
8160
|
2
| null |
For a non-ordinal measure, you could try MDS (multi-dimensional scaling). This can be done easily in R. This will try to arrange the points on a line (1d in your case) in such a way that distances between points will be preserved.
Some general comments: as you probably realize, the question is pretty vague, and not much can be said without knowing more about the data. For example, normalizing the variables (to zero mean and unit variance) may or may not be appropriate; weighing all variables equally may or may not be appropriate; etc. If this is not an exploratory analysis and you do have some 'correct' score in mind, then it may be appropriate to learn a set of weights either on a different dataset, or on a subset of your current dataset, and to use these weights instead.
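A minimal sketch of the MDS suggestion, assuming the component variables are columns of a data frame `dat` (a hypothetical name) and that standardizing them is acceptable:
```
dat <- data.frame(a = rnorm(30), b = rnorm(30), c = rnorm(30))  # hypothetical data
d <- dist(scale(dat))          # Euclidean distances between observations
score <- cmdscale(d, k = 1)    # classical MDS: 1-d configuration preserving distances
head(score)
```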
| null |
CC BY-SA 2.5
| null |
2011-03-13T03:12:26.717
|
2011-03-13T03:12:26.717
| null | null |
3369
| null |
8218
|
2
| null |
8192
|
1
| null |
One thing which may be difficult to determine using sampling theory is $P(data|\rho>0.5)$. I don't know of a sampling theory based decision problem which evaluates this directly (that is not to say it is impossible, I may just be ignorant).
You usually have to take a "minimax" approach. That is, find the "most favorable" $\rho_1$ within the range $\rho>0.5$, and do a specific test of $H_0:\rho=0.5$ vs $H_1:\rho=\rho_1$. Usually the "most favourable" value is the MLE of $\rho$, which is the sample correlation, so you would test against $H_1:\rho=r$. I believe this is called a "severe test" because if you reject $H_1$ in favor of $H_0$, then you will also reject all the other possibilities in favor of $H_0$ (because $H_1$ was the "best" of the alternatives).
But note that this hypothesis test is all under the assumption that the model/structure is true. If the correlation were to enter the model in a different functional form, then the test may be different. It is very easy to think that the test is "absolute" because usually you don't explicitly specify a class of alternatives. But the class of alternatives is always there, whether you specify it implicitly or explicitly. So if you were being pedantic about it, you should really include the full model specification and any additional assumptions in both $H_0$ and $H_1$. This will help you understand exactly what the test has and has not achieved.
| null |
CC BY-SA 2.5
| null |
2011-03-13T03:40:01.790
|
2011-03-13T03:40:01.790
| null | null |
2392
| null |
8220
|
1
| null | null |
3
|
321
|
I'd like to perform semi-supervised LDA (Latent Dirichlet Allocation) in the following sense:
I have several topics that I'd like to use, and have seed documents that relate to these topics. I'd like to run LDA to classify other documents, and potentially discover other topics.
I would guess there is work done on that, as the problem is natural, and the LDA framework seems to suggest it, nevertheless, I'm not an expert and do not know about such work.
Can you guide me to papers or tools ?
|
References on semi-supervised LDA
|
CC BY-SA 3.0
| null |
2011-03-13T07:09:44.603
|
2020-02-07T16:15:35.063
|
2020-02-07T16:15:35.063
|
11887
|
3696
|
[
"references",
"text-mining",
"unsupervised-learning",
"semi-supervised-learning"
] |
8222
|
1
| null | null |
9
|
4665
|
I have been looking for computer game datasets, but so far I've only been able to find the 'Avatar History' dataset for WoW.
Are there any other interesting datasets out there, possibly for other genres?
|
Computer game datasets
|
CC BY-SA 2.5
| null |
2011-03-13T09:58:02.480
|
2011-03-13T13:13:15.433
|
2011-03-13T11:54:19.637
| null |
37
|
[
"data-mining",
"dataset"
] |
8223
|
2
| null |
4099
|
6
| null |
Multicollinearity problem is well studied in actually most econometric textbooks. Moreover there is a good article in [wikipedia](http://en.wikipedia.org/wiki/Multicollinearity) which actually summarizes most of the key issues.
In practice one starts to bear in mind the multicollinearity problem if it causes some visual signs of parameter instability (most of them are implied by non- or poor invertibility of the $X^TX$ matrix):
- large changes in parameter estimates while performing rolling regressions or estimates on smaller sub-samples of the data
- averaging of parameter estimates; the latter may fail to be significant (by $t$ tests) even though the junk-regression $F$ test shows high joint significance of the results
- the VIF statistic (based on the $R^2$ of the auxiliary regressions) merely depends on your requirements for the tolerance level; most practical suggestions put an acceptable tolerance lower than 0.2 or 0.1, meaning that the corresponding auxiliary-regression $R^2$ should be higher than 0.8 or 0.9 respectively to detect the problem. Thus VIF should be larger than the rule-of-thumb values of 5 or 10. In small samples (less than 50 points) 5 is preferable; in larger ones you can go to larger values. A small computational sketch of these checks is given at the end of this answer.
- the condition index is an alternative to VIF; in your case neither VIF nor CI shows that the problem remains, so you may be satisfied statistically with this result, but...
probably not theoretically, since it may happen (and usually is the case) that you need all variables to be present in the model. Excluding relevant variables (omitted variable problem) will make biased and inconsistent parameter estimates anyway. On the other hand you may be forced to include all focus variables simply because your analysis is based on it. In data-mining approach though you are more technical in searching for the best fit.
So keep in mind the alternatives (that I would use myself):
- obtain more data points (recall that the VIF requirements are smaller for larger data sets, and slowly varying explanatory variables may change only at some crucial points in time or in the cross-section)
- search for latent factors through principal components (the latter are orthogonal combinations, so not multi-collinear by construction; moreover they involve all explanatory variables)
- ridge-regression (it introduces small bias in parameter estimates, but makes them highly stable)
Some other tricks are in the wiki article noted above.
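A minimal sketch of the VIF and condition-index checks mentioned above, using only base R and hypothetical variable names:
```
# hypothetical design with two highly collinear regressors
set.seed(1)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.1)   # nearly a copy of x1
x3 <- rnorm(100)
X  <- data.frame(x1, x2, x3)

# VIF_j = 1 / (1 - R^2 of the auxiliary regression of x_j on the other regressors)
vif <- sapply(names(X), function(v) {
  r2 <- summary(lm(reformulate(setdiff(names(X), v), v), data = X))$r.squared
  1 / (1 - r2)
})
vif

# condition number of the standardized design matrix
kappa(scale(as.matrix(X)), exact = TRUE)
```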
| null |
CC BY-SA 2.5
| null |
2011-03-13T10:44:00.427
|
2011-03-13T10:44:00.427
| null | null |
2645
| null |
8224
|
2
| null |
8222
|
7
| null |
Starcraft I
- Starcraft Data Mining Project, providing some game data.
- Starcraft AI Competition, does not directly provide data, but allows you to connect a program written by you with the game. Although I did not try it, I expect that data collection is possible this way ;).
If you are generally interested in data mining + gaming, you may find the project [Robocode aka Java Robot Wars](http://robocode.sourceforge.net/) interesting, where you can program a bot for a simpler environment (simpler than Starcraft) and let it battle against other bots.
| null |
CC BY-SA 2.5
| null |
2011-03-13T11:20:58.870
|
2011-03-13T11:20:58.870
| null | null |
264
| null |
8225
|
1
|
8230
| null |
188
|
722977
|
I have R data frame like this:
```
age group
1 23.0883 1
2 25.8344 1
3 29.4648 1
4 32.7858 2
5 33.6372 1
6 34.9350 1
7 35.2115 2
8 35.2115 2
9 35.2115 2
10 36.7803 1
...
```
I need to get data frame in the following form:
```
group mean sd
1 34.5 5.6
2 32.3 4.2
...
```
Group number may vary, but their names and quantity could be obtained by calling `levels(factor(data$group))`
What manipulations should be done with the data to get the result?
|
How to summarize data by group in R?
|
CC BY-SA 2.5
| null |
2011-03-13T12:02:38.533
|
2015-11-11T08:03:29.147
|
2011-03-13T15:20:27.067
|
3376
|
3376
|
[
"r",
"data-transformation"
] |
8226
|
2
| null |
8225
|
103
| null |
One possibility is to use the [aggregate function](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/aggregate.html).
For instance,
```
aggregate(data$age, by=list(data$group), FUN=mean)[2]
```
gives you the second column of the desired result.
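If both the mean and the sd are wanted at once, one possible extension of the same idea is the formula interface with a function returning both statistics (the result column is then a two-column matrix):
```
aggregate(age ~ group, data = data, FUN = function(x) c(mean = mean(x), sd = sd(x)))
```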
| null |
CC BY-SA 3.0
| null |
2011-03-13T12:12:54.217
|
2013-10-23T16:11:05.887
|
2013-10-23T16:11:05.887
|
7290
|
3019
| null |
8227
|
2
| null |
8225
|
8
| null |
Edited: According to chl's suggestions
The function you are looking for is called "tapply" which applies a function per group specified by a factor.
```
# create some artificial data
set.seed(42)
groups <- 5
agedat <- c()
groupdat <- c()
for(group in 1:groups){
agedat <- c(agedat,rnorm(100,mean=0 + group,1/group))
groupdat <- c(groupdat,rep(group,100))
}
dat <- data.frame("age"=agedat,"group"=factor(groupdat))
# calculate mean and stdev age per group
res <- rbind.data.frame(group=1:5, with(dat, tapply(age, group, function(x) c(mean(x), sd(x)))))
names(res) <- paste("group",1:5)
row.names(res)[2:3] <- c("mean","sd")
```
I really suggest to work through a basic R tutorial explaining all commonly used datastructures and methods. Otherwise you will get stuck every inch during programming. See [this question](https://stats.stackexchange.com/questions/138/resources-for-learning-r) for a collection of free available resources.
| null |
CC BY-SA 2.5
| null |
2011-03-13T12:15:29.533
|
2011-03-13T18:21:19.523
|
2017-04-13T12:44:25.283
|
-1
|
264
| null |
8228
|
2
| null |
1352
|
2
| null |
Answering with an aphorism, I believe that your study design will be successful as soon as it actually exists in its full-fledged form. The game of reviewing as it is played in academia is primarily a game of academics showing to each other that they have not completed that step in its full depth, e.g. by violating assumptions or omitting biases where they should be expected. If study design is a skill, it's the skill of making your research bulletproof to these critics.
Your question is very interesting but I am afraid that there is no short answer. To the best of my knowledge, the only way to learn thoroughly about research designs, whether experimental or observational, is to read the literature in your field of specialisation, and then to go the extra mile by connecting with academics in order to learn even more on how they work, in order to, eventually, write up your own research design.
In my field (European political science), we generically offer "research design" courses that span over all types of studies, but even then we miss important trends and also lack a deep understanding of our methods. After taking at least three of these courses, I have become convinced that no academic resource can replace learning from other academics, before confronting real-world settings directly.
I guess that your field also has these 'methods journals' that can be as painfully boring and complex to the outsider as they are helpful and interesting to actual 'study designers' -- and so would recommend that you start digging into this literature first, eventually tracking down the recurring bibliographic items that might help you most with study design in biology/ecology. Google Scholar definitely flags a few books with the words 'ecology research methods'.
| null |
CC BY-SA 2.5
| null |
2011-03-13T12:26:51.003
|
2011-03-13T12:26:51.003
| null | null |
3582
| null |
8229
|
2
| null |
8225
|
11
| null |
In addition to existing suggestions, you might want to check out the `describe.by` function in the `psych` package.
It provides a number of descriptive statistics including the mean and standard deviation based on a grouping variable.
| null |
CC BY-SA 2.5
| null |
2011-03-13T12:38:57.817
|
2011-03-13T12:38:57.817
| null | null |
183
| null |
8230
|
2
| null |
8225
|
144
| null |
Here is the plyr one line variant using ddply:
```
library(plyr)
dt <- data.frame(age=rchisq(20,10),group=sample(1:2,20,rep=T))
ddply(dt,~group,summarise,mean=mean(age),sd=sd(age))
```
Here is another one line variant using new package data.table.
```
library(data.table)
dtf <- data.frame(age=rchisq(100000,10),group=factor(sample(1:10,100000,rep=T)))
dt <- data.table(dtf)
dt[,list(mean=mean(age),sd=sd(age)),by=group]
```
This one is faster, though this is noticeable only on a table with 100k rows. Timings on my Macbook Pro with 2.53 Ghz Core 2 Duo processor and R 2.11.1:
```
> system.time(aa <- ddply(dtf,~group,summarise,mean=mean(age),sd=sd(age)))
   user  system elapsed
0.513 0.180 0.692
> system.time(aa <- dt[,list(mean=mean(age),sd=sd(age)),by=group])
   user  system elapsed
0.087 0.018 0.103
```
Further savings are possible if we use `setkey`:
```
> setkey(dt,group)
> system.time(dt[,list(mean=mean(age),sd=sd(age)),by=group])
   user  system elapsed
0.040 0.007 0.048
```
| null |
CC BY-SA 3.0
| null |
2011-03-13T12:44:13.913
|
2014-10-25T10:16:17.373
|
2014-10-25T10:16:17.373
|
2116
|
2116
| null |
8231
|
2
| null |
8222
|
1
| null |
- John Myles White has a dataset and analysis of Canabalt scores as posted on Twitter
- Stats at Berkeley has a dataset for a Video Games Survey.
| null |
CC BY-SA 2.5
| null |
2011-03-13T12:45:03.647
|
2011-03-13T13:13:15.433
|
2011-03-13T13:13:15.433
|
183
|
183
| null |
8232
|
1
| null | null |
3
|
933
|
I'm trying to figure out the usage of the $\chi^2$ distribution. In other words, in what kind of situations it occurs and how is it useful in that situations.
I read the wikipedia definition of $\chi^2$. Can someone give an example of a $\chi^2$ distribution so that I can better understand it?
|
What is an example of a chi-squared distribution?
|
CC BY-SA 2.5
| null |
2011-03-13T13:17:04.273
|
2021-02-02T12:08:06.727
|
2021-02-02T12:08:06.727
|
11887
|
3725
|
[
"chi-squared-distribution"
] |
8233
|
2
| null |
8232
|
2
| null |
I found [this video](http://www.youtube.com/watch?v=dXB3cUGnaxQ) useful in finding the solution to my question.
| null |
CC BY-SA 2.5
| null |
2011-03-13T14:09:18.563
|
2011-03-13T14:09:18.563
| null | null |
3725
| null |
8235
|
2
| null |
8137
|
8
| null |
Found solution myself. Maybe someone could use it:
```
#step 1: preparing data
library(plyr)     # for ddply
library(ggplot2)  # for the plotting steps below
ageMetaData <- ddply(data,~group,summarise,
mean=mean(age),
sd=sd(age),
min=min(age),
max=max(age),
median=median(age),
Q1=summary(age)['1st Qu.'],
Q3=summary(age)['3rd Qu.']
)
#step 2: correction for outliers
out <- data.frame() #initialising storage for outliers
for(group in 1:length((levels(factor(data$group))))){
bps <- boxplot.stats(data$age[data$group == group],coef=1.5)
ageMetaData[ageMetaData$group == group,]$min <- bps$stats[1] #lower wisker
ageMetaData[ageMetaData$group == group,]$max <- bps$stats[5] #upper wisker
if(length(bps$out) > 0){ #adding outliers
for(y in 1:length(bps$out)){
pt <-data.frame(x=group,y=bps$out[y])
out<-rbind(out,pt)
}
}
}
#step 3: drawing
p <- ggplot(ageMetaData, aes(x = group,y=mean))
p <- p + geom_errorbar(aes(ymin=min,ymax=max),linetype = 1,width = 0.5) #main range
p <- p + geom_crossbar(aes(y=median,ymin=Q1,ymax=Q3),linetype = 1,fill='white') #box
# drawning outliers if any
if(length(out) >0) p <- p + geom_point(data=out,aes(x=x,y=y),shape=4)
p <- p + scale_x_discrete(name= "Group")
p <- p + scale_y_continuous(name= "Age")
p
```
The quantile data resolution is ugly, but works. Maybe there is another way.
The result looks like this:

Also improved boxplot a little:
- added second smaller dotted errorbar to reflect sd range.
- added point to reflect mean
- removed background
maybe this also could be useful to someone:
```
p <- ggplot(ageMetaData, aes(x = group,y=mean))
p <- p + geom_errorbar(aes(ymin=min,ymax=max),linetype = 1,width = 0.5) #main range
p <- p + geom_crossbar(aes(y=median,ymin=Q1,ymax=Q3),linetype = 1,fill='white') #box
p <- p + geom_errorbar(aes(ymin=mean-sd,ymax=mean+sd),linetype = 3,width = 0.25) #sd range
p <- p + geom_point() # mean
# drawning outliers if any
if(length(out) >0) p <- p + geom_point(data=out,aes(x=x,y=y),shape=4)
p <- p + scale_x_discrete(name= "Group")
p <- p + scale_y_continuous(name= "Age")
p + opts(panel.background = theme_rect(fill = "white",colour = NA))
```
The result is:

and the same data with smaller range (boxplot `coef = 0.5`)

| null |
CC BY-SA 2.5
| null |
2011-03-13T14:51:25.300
|
2011-03-13T19:22:24.783
|
2011-03-13T19:22:24.783
|
3376
|
3376
| null |
8236
|
1
|
8275
| null |
10
|
875
|
I have a dataset of events that happened during the same period of time. Each event has a type (there are few different types, less then ten) and a location, represented as a 2D point.
I would like to check if there is any correlation between types of events, or between the type and the location. For example, maybe events of type A usually don't occur where events of type B do. Or maybe in some area, there are mostly events of type C.
What kind of tools could I use to perform this? Being a novice in statistical analysis, my first idea was to use some kind of PCA (Principal Component Analysis) on this dataset to see if each type of event had its own component, or maybe some shared the same (i.e. were correlated)?
I have to mention that my dataset is of the order of 500'000 points $(x, y, type)$, thus making things a bit harder to deal with.
EDIT: As noted in the answers below and the comments, the way to go is to model this as a marked point process, and then use R to do all the heavy-lifting, as explained in details in this workshop report : [http://www.csiro.edu.au/resources/Spatial-Point-Patterns-in-R.html](http://www.csiro.edu.au/resources/Spatial-Point-Patterns-in-R.html)
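As a minimal sketch of setting this up as a marked point pattern (assuming the `spatstat` package and a hypothetical data frame `events` with columns `x`, `y`, `type`):
```
library(spatstat)
events <- data.frame(x = runif(1000), y = runif(1000),
                     type = sample(LETTERS[1:3], 1000, replace = TRUE))
pp <- ppp(events$x, events$y, window = owin(c(0, 1), c(0, 1)),
          marks = factor(events$type))
summary(pp)                 # intensity per event type
plot(split(pp))             # one panel per type
K <- Kcross(pp, "A", "B")   # cross-type K function: attraction/repulsion between types A and B
plot(K)
```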
|
How to find relationships between different types of events (defined by their 2D location)?
|
CC BY-SA 2.5
| null |
2011-03-13T14:52:13.300
|
2011-03-15T16:27:00.380
|
2011-03-15T16:27:00.380
|
3699
|
3699
|
[
"correlation",
"pca",
"multivariate-analysis",
"point-process"
] |
8237
|
1
| null | null |
19
|
12559
|
I'm trying to understand the logic behind the ANOVA F-test in simple linear regression analysis. The question I have is as follows: when the F value, i.e.
`MSR/MSE`, is large we accept the model as significant. What is the logic behind this?
|
Logic behind the ANOVA F-test in simple linear regression
|
CC BY-SA 2.5
| null |
2011-03-13T15:15:05.713
|
2011-03-13T21:38:19.687
|
2011-03-13T20:22:49.233
| null |
3725
|
[
"regression",
"anova"
] |
8238
|
2
| null |
290
|
3
| null |
I have listed some resources on my [Stata page](http://f.briatte.org/teaching/quanti/stata.html). It includes links to the UCLA and Princeton tutorials, as well as a few more resources in several formats. The "[Stata Guide](http://f.briatte.org/teaching/quanti/stata.html#download)" document, which is very much a work in progress at that stage, is my personal contribution.
One of the fantastic things about Stata is the wealth of online and offline documentation, but never forget that the first resource to learn Stata is the very good set of documentation pages that you can access right away from Stata through the `help` command ([also available online](http://www.stata.com/help.cgi?help)).
If you prefer using books, Alan Acock's A Gentle Introduction to Stata combined with Lawrence Hamilton's Statistics with Stata will get you at a fairly high level of proficiency. You might then add a third book (like [Long and Freese](http://www.stata.com/bookstore/regmodcdvs.html)'s excellent handbook) focused on a special research interest.
| null |
CC BY-SA 2.5
| null |
2011-03-13T15:23:49.450
|
2011-03-13T15:23:49.450
| null | null |
3582
| null |