Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3815 | 2 | null | 3814 | 41 | null | Irene Stratton and a colleague published a short paper about a closely related question:
Stratton IM, Neil A. [How to ensure your paper is rejected by the statistical reviewer](https://onlinelibrary.wiley.com/doi/full/10.1111/j.1464-5491.2004.01443.x). Diabetic Medicine 2005; 22(4):371-373.
| null | CC BY-SA 4.0 | null | 2010-10-20T19:13:42.330 | 2019-06-25T12:56:38.810 | 2019-06-25T12:56:38.810 | 173082 | 449 | null |
3816 | 2 | null | 3814 | 73 | null | What particularly irritates me personally is people who have clearly used user-written packages for statistical software but don't cite them properly, or at all, thereby failing to give any credit to the authors. Proper citation is particularly important when the authors are in academia and their jobs depend on publishing papers that get cited. (Perhaps I should add that, in my field, many of the culprits are not statisticians.)
| null | CC BY-SA 2.5 | null | 2010-10-20T19:28:41.487 | 2010-10-20T19:28:41.487 | null | null | 449 | null |
3817 | 2 | null | 3814 | 70 | null | Goodness me, so many things come to mind...
- Stepwise regression
- Splitting continuous data into
groups
- Giving p-values but no measure of
effect size
- Describing data using the mean and
the standard deviation without
indicating whether the data were
more-or-less symmetric and unimodal
- Figures without clear captions (are
those error bars standard errors of
the mean, or standard deviations
within groups, or what?)
| null | CC BY-SA 2.5 | null | 2010-10-20T19:29:29.613 | 2010-10-20T19:29:29.613 | null | null | 1352 | null |
3818 | 1 | null | null | 1 | 3666 | I have three variables,
1. Irrational Beliefs (Categorical)
2. Anxiety State/ Trait (Categorical)
3. Personality Traits (Categorical)
Which statistical analyses can be used?
| Analysis with three categorical variables | CC BY-SA 2.5 | null | 2010-10-20T19:37:42.267 | 2010-10-21T07:23:20.837 | 2010-10-20T20:18:42.270 | null | 1649 | [
"categorical-data"
]
|
3819 | 2 | null | 3814 | 32 | null | The code used to generate the simulated results is not provided, and after asking for the code, it takes additional work to get it to run on a referee-generated dataset.
| null | CC BY-SA 2.5 | null | 2010-10-20T19:41:32.543 | 2010-10-21T15:43:30.690 | 2010-10-21T15:43:30.690 | 8 | 603 | null |
3820 | 1 | 3825 | null | 7 | 6562 | I've been asked a question: if I calculate some SD, can I change one value and still keep the same SD?
The answer is simply yes. For example:
```
sd(c(2,3,4))
sd(c(3,4,5))
```
But what I wondered then is: assuming you change k values, under what rules must you change them so as to always keep the same SD (is it right to ask how many degrees of freedom there are here?!)
I am not sure how this applies to anything practical, but I imagine there has been a good deal of theoretical work on such questions - I just don't know where to even look for it.
Thanks.
| When does the sd stay the same, even after values in the sample were changed? | CC BY-SA 2.5 | null | 2010-10-20T19:43:42.297 | 2014-03-11T17:29:55.577 | 2010-10-20T20:56:22.620 | 919 | 253 | [
"standard-deviation"
]
|
3821 | 2 | null | 3820 | 2 | null | Suppose that you have a random variable $X$ and you wish to find the set of transformations $Y=f(X)$ such that the standard deviation of $Y$ is identical to the standard deviation of $X$.
Consider first the set of linear transformations:
$Y = a X + b$ where $a, b$ are constants.
It is clear that:
$Var(Y) = a^2 Var(X)$.
Thus, the only linear transformations that preserve the standard deviation are translations, possibly combined with scaling by $a = \pm 1$, but not scaling by any other factor (see the comment by mbq to this answer). I suspect that non-linear transformations do not, in general, preserve standard deviations.
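A quick numerical check in R (a sketch, using randomly generated data):
```
x <- rnorm(100)
sd(x)
sd(x + 7)    # translation: same sd
sd(-x + 7)   # a = -1 plus translation: same sd
sd(2 * x)    # a = 2: sd doubles
```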
| null | CC BY-SA 2.5 | null | 2010-10-20T19:48:29.650 | 2010-10-20T20:15:24.157 | 2010-10-20T20:15:24.157 | null | null | null |
3822 | 1 | null | null | 4 | 432 | I have 6 similarity matrices reflecting the proximity of 100 pairs of sounds. Similarity indices are in the range [0,1], with higher values indicating sounds perceived as more alike. What is the most appropriate metric I can use to estimate the inter-rater reliability between them?
Thanks a lot.
| Inter-rater reliability between similarity matrices | CC BY-SA 2.5 | null | 2010-10-20T19:57:52.263 | 2010-10-22T07:59:16.753 | 2010-10-20T20:09:39.270 | 930 | 1564 | [
"reliability",
"agreement-statistics"
]
|
3823 | 2 | null | 3814 | 31 | null | Plagiarism (theoretical or methodological). My first review was in fact of a paper featuring many unreferenced copy/pastes from a well-established methodological paper published 10 years earlier.
Just found a couple of interesting papers on this topic: [Authorship and plagiarism in science](http://wrt-howard.syr.edu/Bibs/PlagScience.htm).
In the same vein, I find [falsification](http://en.wikipedia.org/wiki/Fabrication_%28science%29) (of data or results) the worst of all.
| null | CC BY-SA 2.5 | null | 2010-10-20T20:23:38.740 | 2010-10-21T06:27:00.043 | 2010-10-21T06:27:00.043 | 930 | 930 | null |
3824 | 2 | null | 3814 | 26 | null | When we ask the authors for
- (1) a minor comment about an idea we have (in this sense, not considered a reason for rejecting the paper, but just to make sure the authors are able to discuss another POV), or
- (2) clarification of unclear or contradictory results,
and the authors don't really answer in case (1), or the incriminated results in case (2) simply disappear from the MS.
| null | CC BY-SA 2.5 | null | 2010-10-20T20:29:24.090 | 2010-10-20T20:29:24.090 | null | null | 930 | null |
3825 | 2 | null | 3820 | 7 | null | The question is about the data, not random variables.
Let $X = (x_1, x_2, \ldots, x_n)$ be the data and $Y = (y_1, y_2, \ldots, y_n)$ be additive changes to the data so that the new values are $(x_1+y_1, \ldots, x_n+y_n)$. From
$$\text{Var}(X) = \text{Var}(X+Y) = \text{Var}(X) + 2 \text{Cov}(X,Y) + \text{Var}(Y)$$
we deduce that
$$(*) \quad \text{Var}(Y) + 2 \text{Cov}(X,Y) = 0$$
is necessary for the variance to be unchanged. Add in $n-k$ additional constraints to zero out all but $k$ of the $y_i$ (there are ${n \choose k}$ ways to do this) and note that all $n-k+1$ constraints almost everywhere have linearly independent derivatives. By the Implicit Function Theorem, this defines a manifold of $n - (n-k+1)$ = $k-1$ dimensions (plus perhaps a few singular points): those are your degrees of freedom.
For example, with $X = (2, 3, 4)$ we compute
$$3 \text{Var}(y) = y_1^2 + y_2^2 + y_3^2 - (y_1+y_2+y_3)^2/3$$
$$3 \text{Cov}(x,y) = (2 y_1 + 3 y_2 + 4 y_3) - 3(y_1 + y_2 + y_3)$$
If we set (arbitrarily) $y_2 = y_3 = 0$ the solutions to $(*)$ are $y_1 = 0$ (giving the original data) and $y_1 = 3$ (the posted solution). If instead we require $y_1=y_3 = 0$ the only solution is $y_2 = 0$: you can't keep the SD constant by changing $y_2$. Similarly we can set $y_3 = -3$ while zeroing the other two values. That exhausts the possibilities for $k=1$. If we set only $y_3 = 0$ (one of the cases where $k = 2$) then we get a set of solutions
$$y_2^2 - y_1 y_2 + y_1^2 - 3y_1 = 0$$
which consists of an ellipse in the $(y_1, y_2)$ plane. Similar sets of solutions arise in the choices $y_2 = 0$ and $y_1 = 0$.
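A quick numerical check of these solutions in R (a sketch; `sd()` uses the sample standard deviation, but the constraint preserves either variance convention):
```
sd(c(2, 3, 4))          # original data
sd(c(2 + 3, 3, 4))      # y1 = 3, y2 = y3 = 0: same sd
sd(c(2, 3, 4 - 3))      # y3 = -3, y1 = y2 = 0: same sd
# a point on the ellipse y2^2 - y1*y2 + y1^2 - 3*y1 = 0 (with y3 = 0): y1 = y2 = 3
sd(c(2 + 3, 3 + 3, 4))  # same sd again
```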
| null | CC BY-SA 3.0 | null | 2010-10-20T20:47:18.890 | 2014-03-11T17:29:55.577 | 2014-03-11T17:29:55.577 | 919 | 919 | null |
3826 | 1 | 3877 | null | 4 | 6550 | How does one calculate Cohen's d and confidence intervals after logit in Stata?
| How does one calculate Cohen's d and confidence intervals after logit in Stata? | CC BY-SA 2.5 | null | 2010-10-20T20:51:59.437 | 2010-10-22T07:24:42.113 | null | null | null | [
"confidence-interval",
"stata",
"effect-size",
"cohens-d"
]
|
3827 | 2 | null | 3826 | 8 | null | Cohen’s d is not directly available in Stata, and you have to resort to external user-written commands, e.g. [sizefx](http://econpapers.repec.org/software/bocbocode/s456738.htm) (`ssc install sizefx`). It works fine if you have two series of values, but I found it less handy when you work with a full data set because there's no way to pass options to this command (e.g. `by()`).
Anyway, you can still use the original formula (with pooled SDs),
$$
\delta_c=\frac{\bar x_1-\bar x_2}{s_p}
$$
where $s_p=\sqrt{\frac{(n_1-1)s_1^2+(n_2-1)s_2^2}{n_1+n_2-2}}$.
Here is an example by hand:
```
. webuse lbw
. logit low age smoke
. graph box age, by(low)
. tabstat age, by(low) statistics(mean sd N)
Summary for variables: age
by categories of: low (birthweight<2500g)
low | mean sd N
---------+------------------------------
0 | 23.66154 5.584522 130
1 | 22.30508 4.511496 59
---------+------------------------------
Total | 23.2381 5.298678 189
----------------------------------------
. display "Cohen's d: = " (23.66154-22.30508) / sqrt((129*(5.584522)^2+58*(4.511496)^2)/187)
Cohen's d: = .25714326
```
This is in agreement with what R would give:
```
library(MBESS)
res <- smd(Mean.1=23.66154, Mean.2=22.30508,
s.1=5.584522, s.2=4.511496, n.1=130, n.2=59)
ci.smd(smd=res, n.1=130, n.2=59, conf.level=0.95)
```
that is an effect size of 0.257 with 95% CI [-0.052;0.566].
In contrast, `sizefx` gives results that differ a little (I used `separate age, by(low)` and collapsed the results into a new data window, here two columns labeled `age0` and `age1`); the ES version calculated above corresponds to what is referred to as Hedges' g below (unless I missed something in the [code](http://fmwww.bc.edu/repec/bocode/s/sizefx.ado) I read):
```
. sizefx age0 age1
Cohen's d and Hedges' g for: age0 vs. age1
Cohen's d statistic (pooled variance) = .26721576
Hedges' g statistic = .26494154
Effect size correlation (r) for: age0 vs. age1
ES correlation r = .13243109
```
| null | CC BY-SA 2.5 | null | 2010-10-20T21:02:48.473 | 2010-10-21T09:57:00.433 | 2010-10-21T09:57:00.433 | 930 | 930 | null |
3828 | 2 | null | 3818 | 2 | null | If you're after a general way to model multiway [contingency tables](http://en.wikipedia.org/wiki/Contingency_table), one powerful approach is to use [Poisson regression](http://en.wikipedia.org/wiki/Poisson_regression), often called a log-linear model in this context.
A classic paper on this is [Nelder (1974)](http://www.jstor.org/stable/2347125), but there are now more suitable resources for pedagogical purposes freely available online thanks to the generosity of some lecturers or their universities, such as Princeton's Germán Rodríguez: [pdf version](http://data.princeton.edu/wws509/notes/c5.pdf), [html version](http://data.princeton.edu/wws509/notes/c5s1.html). There must be less mathematical introductory material out there though?
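As a rough sketch of what such a log-linear (Poisson) model looks like in R, using made-up counts for a hypothetical three-way table with factors A, B and C:
```
set.seed(1)
cells <- expand.grid(A = c("a1", "a2"), B = c("b1", "b2"), C = c("c1", "c2"))
cells$Freq <- rpois(nrow(cells), lambda = 20)                           # fake cell counts
fit.sat <- glm(Freq ~ A * B * C, family = poisson, data = cells)       # saturated model
fit.red <- glm(Freq ~ (A + B + C)^2, family = poisson, data = cells)   # no three-way term
anova(fit.red, fit.sat, test = "Chisq")   # test the three-way interaction
```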
| null | CC BY-SA 2.5 | null | 2010-10-20T21:46:53.727 | 2010-10-20T21:46:53.727 | null | null | 449 | null |
3829 | 2 | null | 3814 | 19 | null | When they don't sufficiently explain their analysis and/or include simple errors that make it difficult to work out what actually was done. This often includes throwing around a lot of jargon, by way of explanation, which is more ambiguous than the author seems to realize and also may be misused.
| null | CC BY-SA 2.5 | null | 2010-10-20T22:55:12.767 | 2010-10-20T22:55:12.767 | null | null | null | null |
3831 | 2 | null | 3788 | 2 | null | Whether a set of observations is iid or not is a decision that is typically made after considering the underlying data generating process. In your case, the underlying data generating process seems to be the measurement of the speed of a river. I would not consider these observations to be independent. If a particular measurement is on the high end of the scale, the next measurement is also likely to be on the high end of the scale. In other words, if I know one measurement I can infer something about the likely values of my next measurement. However, the values are likely to be identically distributed, as the errors in your measurements probably come from the methodology used to measure velocity. I would imagine that you would use the same methodology to collect multiple measurements. You could of course use formal statistical tests to see if your observations are iid, but for that you have to specify a data generating process, estimate the model under iid assumptions, and examine the residuals for deviations from iid.
However, do note that I know nothing about engineering, wavelets, or wavelet space, so I may be way off in my assumptions/answer above.
| null | CC BY-SA 2.5 | null | 2010-10-21T00:47:55.787 | 2010-10-21T00:53:22.890 | 2010-10-21T00:53:22.890 | null | null | null |
3832 | 2 | null | 3804 | 2 | null | Let's see if I understand Harlan's (and Srikant's) formulation correctly.
$$\pi_1 \sim beta(\alpha_1,\beta_1)$$
$$\pi_2 \sim beta(\alpha_2,\beta_2)$$
Say $\pi_1$ corresponds to the data set for which you have less information a priori, and $\pi_2$ is for the more precise data set.
Using Srikant's formulation:
$$\pi(p) \propto \pi_1(p) \alpha + \pi_2(p) (1-\alpha)$$
Therefore, the complete hierarchical formulation will be:
$$\alpha \sim beta(\alpha_0,\beta_0)$$
$$p|\alpha \sim \pi(p)$$
$$y_i | p \sim B(n_i,p) $$
I assume here that $y_i|p$ are iid. I don't know if this is a valid assumption in your case. Am I correct?
You can choose $\alpha_0$ and $\beta_0$ in such a way that the mean of this beta distribution is 0.8 (or 0.2), according to your formulation.
Now the MCMC sampling can be done using OpenBUGS or JAGS (untested).
I will add more to this (and recheck formulation) as soon as I get more time. But please point out if you see a fallacy in my argument.
| null | CC BY-SA 2.5 | null | 2010-10-21T04:34:05.837 | 2010-10-21T05:51:11.413 | 2010-10-21T05:51:11.413 | 1307 | 1307 | null |
3833 | 2 | null | 3818 | 4 | null | @onestop is literally correct. If you have three unordered categorical variables, techniques like loglinear modelling are appropriate.
@Neelam However, I doubt that your data is unordered categorical.
From my experience with measures of psychological scales measuring irrational beliefs, anxiety states and traits, and personality traits, the scales are numeric.
For example, a typical personality scale might have 10 items with each item measured on a 5-point scale. Thus, if you were to sum the scale you get scores ranging from a minimum possible score of 10 to a maximum possible score of 50. I.e., there are 41 possible values. Thus, in some sense the variables are categorical, but they are also ordinal, and they are also typically treated as numeric variables. See my comments to this earlier question regarding [discrete and continuous variables](https://stats.stackexchange.com/questions/206/discrete-and-continuous/210#210).
Thus, in the empirical research that I read with similar scales, researchers typically treat such scales as numeric variables. Thus, methods such as correlation, regression, and PCA are standard.
| null | CC BY-SA 2.5 | null | 2010-10-21T06:43:36.237 | 2010-10-21T07:23:20.837 | 2017-04-13T12:44:46.680 | -1 | 183 | null |
3834 | 2 | null | 3822 | 3 | null | My first idea would be to try some kind of cluster analysis (e.g. [hierarchical clustering](http://en.wikipedia.org/wiki/Hierarchical_clustering)) on each similarity matrix, and compare the classification trees across raters. We can derive a similarity index from all dendrograms, as discussed here, [A measure to describe the distribution of a dendrogram](https://stats.stackexchange.com/questions/3672/a-measure-to-describe-the-distribution-of-a-dendrogram), or in this review, [Comparing Clusterings - An Overview](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.164.6189&rep=rep1&type=pdf) from Wagner and Wagner.
You benefit from working with already existing distance matrices, thus such methods will really reflect the nature of your data, and you can still derive a single numerical value to quantify the closeness of method-specific assessments. The following article may be interesting, if you need to refer to existing work:
>
Hamer, RM and Cunningham, JW. Cluster
Analyzing Profile Data Confounded with
Interrater Differences: A Comparison
of Profile Association Measures.
Applied Psychological Measurement
(1981) 5(1): 63-72.
Another approach would be to apply some kind of [Principal Component Analysis](http://en.wikipedia.org/wiki/Principal_component_analysis) on each similarity matrix, and keep only the first principal component (the linear combination of all 100 items that accounts for the maximum variance). More precisely, as you work with (dis)similarity indices or a particular distance/proximity metric, it is sometimes referred to as Principal Coordinate Analysis or [Multidimensional Scaling](http://en.wikipedia.org/wiki/Multidimensional_scaling) (MDS), although PCA and MDS would yield similar results when dissimilarities are defined as Euclidean distances. There is a working example in Izenman's book ([Modern Multivariate Statistical Techniques](http://books.google.fr/books?id=1CuznRORa3EC&printsec=frontcover&dq=modern+multivariate+statistical+techniques&source=bl&ots=Uq98CVdyIf&sig=LE3N0YX4M1dP93cvZC4cotrnsRE&hl=fr&ei=Bum_TPbhGsuO4gb7_ZiCDQ&sa=X&oi=book_result&ct=result&resnum=4&ved=0CCsQ6AEwAw), chapter 13, "perceptions of color in human vision", pp. 468-470) and a discussion of the so-called all-pairs design on pp. 471-472. You can then compare the 6 linear combinations (i.e., the weights associated with each sound by rater-specific MDS) to assess their consistency across raters. There, an ICC (as described in my [previous answer](http://stats.stackexchange.com/questions/3762/the-best-measure-of-reliability-for-interval-data-between-0-and-1-closed)) could make sense, but I don't know of any application of it in this particular case.
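As a rough sketch of these two ideas in R, assuming the six similarity matrices are held in a list (fake symmetric matrices are generated here for illustration):
```
sims <- replicate(6, { m <- matrix(runif(100 * 100), 100); (m + t(m)) / 2 },
                  simplify = FALSE)                      # stand-ins for the 6 raters
dists <- lapply(sims, function(s) as.dist(1 - s))        # similarity -> dissimilarity
trees <- lapply(dists, hclust, method = "average")       # one dendrogram per rater
coord1 <- sapply(dists, function(d) cmdscale(d, k = 1))  # first principal coordinate
cor(coord1)   # consistency of the leading MDS dimension across raters
```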
| null | CC BY-SA 2.5 | null | 2010-10-21T07:22:40.340 | 2010-10-22T07:59:16.753 | 2017-04-13T12:44:53.777 | -1 | 930 | null |
3835 | 1 | 3846 | null | 4 | 104 | This example was taken from Mathematical Statistics : A Unified Introduction (ISBN 9780387227696), page 58, under the section 'The Principle of Least Squares'. I think my problem has more to do with algebra but since this is a statistics book, I thought I should post here...
Anyway, I need to estimate k based on this equation:

The goal is to reach here:

I managed to follow the example until I got stuck here:

The explanation to reach to the goal from above is described as follows:

I know how to get k but I totally don't understand how the 'middle term' was eliminated. Please help. I will provide more details if needed.
| Estimating k in d=kv | CC BY-SA 2.5 | null | 2010-10-21T07:58:40.967 | 2010-10-22T01:18:58.907 | null | null | 1636 | [
"least-squares"
]
|
3837 | 2 | null | 3051 | 4 | null | ```
library(zoo)
x=c(4, 5, 7, 3, 9, 8)
rollmean(x,3)
```
or
```
library(TTR)
x=c(4, 5, 7, 3, 9, 8)
SMA(x,3)
```
| null | CC BY-SA 2.5 | null | 2010-10-21T09:17:02.463 | 2010-10-21T09:17:02.463 | null | null | 1709 | null |
3840 | 1 | null | null | 6 | 1198 | I have two time series, Series1 and Series2. My aim is to find how much Series2 differs from Series1, automatically/quantitatively.
Image can be seen in its original size by [clicking here](http://img713.imageshack.us/img713/3401/image001eg.gif).

Series1 is the expected result.
Series2 is the test/incoming series.
I am providing a histogram plot, where Series2 is represented in dark brown colour. You can also note that on the x-axis, between 221 and 353, there is a significant variation, i.e. Series2 is less than Series1. I am coding in C++.
I think cross-correlation will help, but it produces a value based on similarity rather than dissimilarity. I see people talk about the Kolmogorov-Smirnov test. Is this the test I should be performing?
UPDATE 1: I am trying to perform template matching. I have divided my template image into 8x8 blocks, as well as my incoming test image. I am trying to compare one block in the template image with the same block (based on the spatial pixel positions) in the test image. I calculate the intensity sum within each block. I obtain Series1 for the template image and Series2 for the test image.
| Automatic test measuring dissimilarity between two time series | CC BY-SA 2.5 | null | 2010-10-21T10:49:45.380 | 2010-10-21T14:06:00.943 | 2010-10-21T11:20:27.300 | 1352 | 1655 | [
"distributions",
"time-series",
"computational-statistics",
"cross-correlation",
"kolmogorov-smirnov-test"
]
|
3841 | 1 | 3866 | null | 11 | 186 | I have two years of data which looks basically like this:
```
Date        Violence Y/N?  Number of patients
1/1/2008    0              11
2/1/2008    0              11
3/1/2008    1              12
4/1/2008    0              12
...
31/12/2009  0              14
```
i.e. two years of observations, one per day, of a psychiatric ward, which indicate whether there was a violence incident on that day (1 is yes, 0 no) as well as the number of patients on the ward. The hypothesis that we wish to test is that more patients on the ward is associated with an increased probability of violence on the ward.
We realise, of course, that we will have to adjust for the fact that when there are more patients on the ward, violence is more likely because there are just more of them- we are interested in whether each individual’s probability of violence goes up when there are more patients on the ward.
I've seen several papers which just use logistic regression, but I think that is wrong because there is an autoregressive structure (although, looking at the autocorrelation function, the autocorrelation doesn't get above .1 at any lag, it is still above the "significant" blue dashed line that R draws for me).
Just to make things more complicated, I can if I wish to break down the results into individual patients, so the data would look just as it does above, except I would have the data for each patient, 1/1/2008, 2/1/2008 etc. and an ID code going down the side so the data would show the whole history of incidents for each patient separately (although not all patients are present for all days, not sure whether that matters).
I would like to use lme4 in R to model the autoregressive structure within each patient, but some Googling comes up with the quotation “lme4 is not set up to deal with autoregressive structures”. Even if it were, I’m not sure I grasp how to write the code anyway.
Just in case anyone notices, I asked a question like this a while ago, they are different datasets with different problems, although actually solving this problem will help with that one (someone suggested I use mixed methods previously, but this autoregression thing has made me unsure how to do this).
So I’m a bit stuck and lost to be honest. Any help gratefully received!
| Two years of data describing occurence of violence- testing association with number of patients on ward | CC BY-SA 2.5 | null | 2010-10-21T11:07:03.723 | 2010-10-21T19:22:27.857 | null | null | 199 | [
"r",
"mixed-model",
"autocorrelation",
"panel-data"
]
|
3842 | 1 | 3843 | null | 9 | 105155 | I want to create a bar diagram for these data in R (read from a CSV file):
```
Experiment_Name  MetricA  MetricB
Just_X           2        10
Just_X_and_Y     3        20
```
to have the following diagram:

I am a beginner and do not even know how to start.
| How to create a barplot diagram where bars are side-by-side in R | CC BY-SA 2.5 | null | 2010-10-21T11:13:44.080 | 2015-02-01T03:27:55.130 | 2010-10-21T11:18:38.177 | 1352 | 1389 | [
"r",
"data-visualization",
"barplot"
]
|
3843 | 2 | null | 3842 | 13 | null | I shall assume that you are able to import your data in R with `read.table()` or the short-hand `read.csv()` functions. Then you can apply any summary functions you want, for instance `table` or `mean`, as below:
```
x <- replicate(4, rnorm(100))
apply(x, 2, mean)
```
or
```
x <- replicate(2, sample(letters[1:2], 100, rep=T))
apply(x, 2, table)
```
The idea is to end up with a matrix or table for the summary values you want to display.
For the graphical output, look at the `barplot()` function with the option `beside=TRUE`, e.g.
```
barplot(matrix(c(5,3,8,9),nr=2), beside=T,
col=c("aquamarine3","coral"),
names.arg=LETTERS[1:2])
legend("topleft", c("A","B"), pch=15,
col=c("aquamarine3","coral"),
bty="n")
```
The `space` argument can be used to add an extra space between juxtaposed bars.

| null | CC BY-SA 2.5 | null | 2010-10-21T11:44:58.290 | 2010-10-21T11:56:50.343 | 2010-10-21T11:56:50.343 | 930 | 930 | null |
3844 | 2 | null | 1053 | 2 | null | I found "Analysis of survival data" by Cox and Oakes (Chapman and Hall Monographs on Statistics and Applied Probability - vol. 21) to be very readable and informative. No material on survival analysis in R though.
| null | CC BY-SA 2.5 | null | 2010-10-21T12:17:31.507 | 2010-10-21T12:17:31.507 | null | null | 887 | null |
3845 | 1 | 5257 | null | 6 | 2133 | Please see [question](https://stats.stackexchange.com/questions/3708/forecasting-unemployment-rate) for the background.
Following the advice of @kwak and @Andy W, I have decided to use the package plm in `R` to fit my model. Here is an excerpt of the data `df` (the numbers are made up, not real data!):
```
    reg year          ur         mur cl
1     1 2001 0.000698717 0.012483361  1
2     2 2001 0.008283762 0.011899896  1
3     3 2001 0.001863817 0.012393738  1
4     4 2001 0.005344206 0.012126016  1
5     5 2001 0.007475083 0.011962102  1
6     6 2001 0.002785111 0.012322869  1
```
where reg is the region indicator, year is the year of measurement, ur is the unemployment rate, and mur is the average unemployment rate in the neighboring regions (given by cl) excluding the current region (see @kwak's suggestion). Below is the code that I used in `R`:
```
fm <- pgmm(log(ur)~lag(log(ur),1)+log(mur)|lag(log(ur),2:40),data=df)
```
I have several questions regarding this model:
- I guess I should choose effect="individual" to avoid having time-dependent intercepts (?). But when I do, the code crashes (Error in names(coefficients) <- c(namesX, namest))!
- Which model should I choose, onestep or twosteps?
- How do I decide on the number of instruments (40 is just a guess)?
Assuming that I have fitted this model successfully, how do I simulate from it?
Regards
| Forecasting unemployment rate with plm | CC BY-SA 2.5 | null | 2010-10-21T12:32:34.307 | 2013-01-31T19:47:46.013 | 2020-06-11T14:32:37.003 | -1 | 1443 | [
"r",
"panel-data",
"generalized-moments"
]
|
3846 | 2 | null | 3835 | 1 | null | There is a constraint imposed a few lines above the highlighted text which states:
$\sum_i{v_i (d_i-k v_i)} = 0$
Does that help?
Edit
In response to your second comment:
Consider: $\sum_i{(d_i - l v_i)^2}$. This can be re-written as:
$\sum_i{((d_i - k v_i) + (k v_i - l v_i))^2}$
Expanding the square, we have:
$\sum_i{(d_i - k v_i)^2 + \sum_i(k v_i - l v_i)^2 + \sum_i{2 (d_i - k v_i) (k v_i - l v_i)}}$
Simplifying the above, we have:
$\sum_i{(d_i - k v_i)^2 + (k-l)^2 \sum_i{v_i^2} + 2 (k-l) \sum_i{v_i (d_i - k v_i)}}$
So, if we choose $k$ such that $\sum_i{v_i (d_i-k v_i)} = 0$ then it follows that:
$\sum_i{(d_i - l v_i)^2} = \sum_i{(d_i - k v_i)^2} + (k-l)^2 \sum_i{v_i^2}$
But, then it follows that:
$\sum_i{(d_i - l v_i)^2} > \sum_i{(d_i - k v_i)^2}$
as long as $k \ne l$.
Thus, what we have shown is the following: If $k$ satisfies the constraint we imposed then it must be the case that the corresponding SSE is less than the SSE for any other $l$ that we can choose. Thus, the value of $k$ that satisfies the constraint is our least squares estimate.
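A small numerical check in R (a sketch with made-up data): the $k$ satisfying the constraint, i.e. $k = \sum_i v_i d_i / \sum_i v_i^2$, coincides with the least-squares slope of a regression through the origin.
```
v <- c(1, 2, 3, 4, 5)
d <- c(2.1, 3.9, 6.2, 7.8, 10.3)
k <- sum(v * d) / sum(v^2)   # value satisfying sum(v * (d - k*v)) = 0
k
coef(lm(d ~ 0 + v))          # least-squares fit of d = k*v: same value
```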
| null | CC BY-SA 2.5 | null | 2010-10-21T12:33:03.850 | 2010-10-22T01:18:58.907 | 2010-10-22T01:18:58.907 | null | null | null |
3847 | 2 | null | 3842 | 13 | null | Here is a ggplot version:
```
library(ggplot2)
library(reshape2)  # for melt()
df <- melt(data.frame(A=c(2, 10), B=c(3, 20),
                      experiment=c("X", "X & Y")),
           id.vars="experiment", variable.name="metric")
ggplot(df, aes(experiment, value, fill=metric)) +
  geom_bar(stat="identity", position="dodge")
```

| null | CC BY-SA 2.5 | null | 2010-10-21T12:58:47.983 | 2010-10-22T10:16:06.170 | 2010-10-22T10:16:06.170 | 8 | 1443 | null |
3848 | 2 | null | 3840 | 3 | null | There are many different distance measures. For starters, there's always the correlation.
You can look at the [mean square error](http://en.wikipedia.org/wiki/Mean_squared_error). In R, you can see the algorithm for time series in Rob Hyndman's [ftsa](http://cran.us.r-project.org/web/packages/ftsa/) package (see the `error` function).
See [Liao (2005)](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.115.6594) for a nice short survey of time series similarity measures, including Euclidean distance, root mean square distance, Minkowski distance, Pearson's correlation, dynamic time warping distance, Kullback-Leibler distance, symmetric Chernoff information divergence, and cross-correlation.
- T. Warren Liao, "Clustering of time series data—a survey" Pattern Recognition, 2005
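A few of these measures are one-liners in R; a sketch with two made-up series of equal length:
```
s1 <- sin(seq(0, 10, length.out = 200))
s2 <- s1 + rnorm(200, sd = 0.3)
sqrt(sum((s1 - s2)^2))     # Euclidean distance
mean((s1 - s2)^2)          # mean squared error
cor(s1, s2)                # Pearson correlation (a similarity, not a distance)
ccf(s1, s2, lag.max = 10)  # cross-correlation at several lags
```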
| null | CC BY-SA 2.5 | null | 2010-10-21T13:21:24.800 | 2010-10-21T14:06:00.943 | 2010-10-21T14:06:00.943 | 5 | 5 | null |
3849 | 1 | 5429 | null | 3 | 2136 | Are there any tools that can format and write to an external file the output of principal component analysis in Stata?
I'm thinking of something that would work in a similar vein to the [excellent] family of [estout](http://repec.org/bocode/e/estout/index.html) commands.
| How can I format and export PCA output in Stata? | CC BY-SA 2.5 | null | 2010-10-21T13:38:32.160 | 2010-12-13T12:09:19.943 | null | null | 22 | [
"pca",
"stata"
]
|
3850 | 2 | null | 1053 | 4 | null | David Collett. Modelling Survival Data in Medical Research, Second Edition. Chapman & Hall/CRC. 2003. [ISBN 978-1584883258](http://en.wikipedia.org/wiki/Special%3aBookSources/9781584883258)
Software section focuses on SAS not R though.
| null | CC BY-SA 2.5 | null | 2010-10-21T14:42:28.390 | 2010-10-21T14:42:28.390 | null | null | 449 | null |
3853 | 1 | 3854 | null | 7 | 107634 | I have the following R code for creating barplot:
```
# ... here I read cvs file
mx <- rbind(results$"AVG.P10.")
colnames(mx) <- results$"RUN"
rownames(mx) <- "AVG P"
postscript(file="avg_p_result.ps")
barplot(mx, beside=T, col=c("grey"), names.arg= results$"RUN",
cex.axis = 1.5, cex.lab=1.5)
```
I have tried `cex.lab=1.5` but it does not work at all.
| How to increase size of label fonts in barplot | CC BY-SA 2.5 | null | 2010-10-21T15:01:50.123 | 2010-10-21T16:21:07.947 | 2010-10-21T16:20:10.117 | 930 | 1389 | [
"r",
"boxplot"
]
|
3854 | 2 | null | 3853 | 10 | null | According to `?barplot`, you need to use `cex.names=1.5`.
```
barplot(mx, beside=TRUE, col=c("grey"), names.arg=results$"RUN",
cex.axis=1.5, cex.names=1.5)
```
| null | CC BY-SA 2.5 | null | 2010-10-21T15:15:49.950 | 2010-10-21T16:21:07.947 | 2010-10-21T16:21:07.947 | 930 | 1657 | null |
3855 | 1 | 3861 | null | 6 | 998 | Imagine I have a test for a disease that returns either a positive or a negative result. I administer the test to one group twice in a single session; in another group, I administer it twice with two weeks separating the tests. The hypothesis is that the tests taken at the same time will be more consistent with each other than the tests taken two weeks apart. How can I test this?
Bonus question: What if the two groups consisted of the same people?
| Comparing test-retest reliabilities | CC BY-SA 2.5 | null | 2010-10-21T15:48:52.837 | 2010-10-21T17:14:20.710 | null | null | 71 | [
"reliability"
]
|
3856 | 1 | 3865 | null | 6 | 1513 | Are there standard ways of analyzing and generating "clumpy" distributions?
- analyze: how clumpy is a given point cloud (in 1d, 2d, nd),
what are its clumpy coefficients?
- generate or synthesize a pseudo-random cloud with coefficients C
(These are the basics for any family of distributions, e.g. normal.)
There are many kinds of clumpiness in nature
(traffic jams, clumpiness of climate changes),
so it's a wide term with I imagine various attempts at description
and various links to "classical" statistics.
I'm looking for an overview; pictures would be nice.
Added Friday 22 Oct: I had hoped to find methods for both analysis and synthesis,
abstract <-> real both ways; surely a wide-ranging abstraction must do both.
Still looking ...
(Experts please add tags).
| Analyze and generate "clumpy" distributions? | CC BY-SA 2.5 | null | 2010-10-21T16:20:09.727 | 2011-08-26T17:19:23.190 | 2010-10-22T13:32:28.723 | 557 | 557 | [
"clustering",
"spatial",
"autocorrelation"
]
|
3857 | 1 | 3863 | null | 43 | 876 | [My workplace](http://www.hsl.gov.uk/) has employees from a very wide range of disciplines, so we generate data in lots of different forms. Consequently, each team has developed its own system for storing data. Some use Access or SQL databases; some teams (to my horror) are reliant almost entirely on Excel spreadsheets. Often, data formats change from project to project. In some cases calling it a 'system' is too kind.
The problems this entails are that I have to write new code to clean the data for every project, which is expensive; people manually editing spreadsheets make reproducibility and auditing of data near impossible; and even worse, there is a chance that data gets lost or made incorrect.
I've been given the opportunity to discuss these problems with a board member of the company and I need to work out what to tell him. I think I've already persuaded him that we have a problem and that getting this right will enable better science and saving money. The question is: what should we be aiming for, and how do we get there?
More specifically:
How should we store data, in a way that lets us track it from creation to publishing in a paper? (Databases stored on a central server?)
How do you go about standardising database formats?
Are there any good resources for educating people on how to care for data? (As a general rule, occupational hygienists and explosives engineers aren't data nerds, so non-technical content is preferred.)
| How do I get people to take better care of data? | CC BY-SA 2.5 | null | 2010-10-21T16:26:22.880 | 2013-09-04T01:59:52.387 | 2011-10-07T02:13:18.347 | 183 | 478 | [
"dataset",
"reproducible-research",
"quality-control"
]
|
3858 | 2 | null | 3855 | 1 | null | Perhaps, computing the tetrachoric correlation would be useful. See this url: [Introduction to the Tetrachoric and Polychoric Correlation Coefficients](http://www.john-uebersax.com/stat/tetra.htm)
| null | CC BY-SA 2.5 | null | 2010-10-21T16:28:13.923 | 2010-10-21T16:28:13.923 | null | null | null | null |
3859 | 2 | null | 3856 | 4 | null | I think suitable 'clumpy coefficients' are measures of spatial autocorrelation such as [Moran's I](http://en.wikipedia.org/wiki/Moran%27s_I) and [Geary's C](http://en.wikipedia.org/wiki/Geary%27s_C). Spatial statistics is not my area and I don't know about simulation though.
| null | CC BY-SA 2.5 | null | 2010-10-21T16:36:55.843 | 2010-10-21T16:36:55.843 | null | null | 449 | null |
3860 | 2 | null | 3856 | 4 | null | You could calculate an [index of dispersion](http://www.passagesoftware.net/webhelp/Dispersion_Indices.htm) measure over your space to gauge clumpiness. One starting point for more information would be the ecology packages and literature to see how they simulate such things.
| null | CC BY-SA 2.5 | null | 2010-10-21T16:36:59.993 | 2010-10-21T16:36:59.993 | null | null | 251 | null |
3861 | 2 | null | 3855 | 7 | null | Both situations are specific cases of test-retest, except that the recall period is null in the first case you described. I would also expect a larger agreement in the former case, but that may be confounded with a learning or memory effect. A chance-corrected measure of agreement, like [Cohen's kappa](http://en.wikipedia.org/wiki/Cohen%27s_kappa), can be used with binary variables, and bootstraped confidence intervals might be compared in the two situations (this is better than using $\kappa$ sampling variance directly). This should give an indication of the reliability of your measures, or in this case diagnostic agreement, at the two occasions. A [McNemar test](http://j.mp/cI0TZr) which tests for marginal homogeneity in matched pairs can also be used.
An approach based on the [intraclass correlation](http://www.john-uebersax.com/stat/icc.htm) is still valid and, provided your prevalence is not extreme, should be close to
- a simple Pearson correlation (which, for binary data, is also called a phi coefficient) or the tetrachoric version suggested by @Skrikant,
- the aforementioned kappa (for a large sample, and assuming that the marginal distributions for case at the two occasions are the same, $\kappa\approx\text{ICC}$ from a one-way ANOVA).
About your bonus question, you generally need 3 time points to separate the lack of (temporal) stability -- which can occur if the latent class or trait you are measuring evolves over time -- from the lack of reliability (see for an illustration the model proposed by [Wiley and Wiley](http://www.jstor.org/pss/2093858), 1970, American Sociological Review 35).
| null | CC BY-SA 2.5 | null | 2010-10-21T17:14:20.710 | 2010-10-21T17:14:20.710 | null | null | 930 | null |
3862 | 2 | null | 3857 | 12 | null | One free online resource is the set of [Statistical Good Practice Guidelines](http://www.ilri.org/biometrics/GoodStatisticalPractice/publications/guides.html) from the [Statistical Services Centre at the University of Reading](http://www.ssc.rdg.ac.uk/).
In particular:
- Data Management Guidelines for Experimental Projects
- The Role of a Database Package in Managing Research Data
| null | CC BY-SA 3.0 | null | 2010-10-21T17:34:29.003 | 2011-10-06T22:23:20.723 | 2011-10-06T22:23:20.723 | 4756 | 449 | null |
3863 | 2 | null | 3857 | 16 | null | It's worth considering ideas from the software world. In particular you might think of setting up: a [version control repository](http://en.wikipedia.org/wiki/Revision_control) and a central database server.
Version control probably helps you out with otherwise free floating files, such as Excel and text files, etc. But this could also include files associated with data, such as R, SAS, etc. The idea is that there's a system which tracks changes to your files allowing you to know what happened when and rollback to a point in the past if needed.
Where you already have SQL databases, the best thing you can do is set up a central server and hire a capable [DBA](http://en.wikipedia.org/wiki/Database_administrator). The DBA is the person tasked with ensuring and maintaining the integrity of the data. Part of the job description involves things like backups and tuning. But another part is more relevant here -- controlling how data enters the system, ensuring that constraints are met, access policies are in place to prevent harm to the data, setting up views to expose custom or simplified data formats, etc. In short, implementing a methodology around the data process. Even if you don't hire an actual DBA (the good ones are very hard to recruit), having a central server still enables you to start thinking about instituting some kind of methodology around data.
| null | CC BY-SA 2.5 | null | 2010-10-21T17:45:14.913 | 2010-10-21T17:45:14.913 | null | null | 251 | null |
3864 | 2 | null | 3857 | 6 | null | I think first of all you have to ask yourself: why do people use Excel to do tasks Excel was not made for?
1) They already know how to use it
2) It works. Maybe in a clumsy way but it works and that's what they want
I copy a series of numbers in, press a button and I have a plot. As easy as that.
So, make them understand what advantages they can gain by using centralized datasets, proper databases (note that Access is NOT one of those) and so on. But remember the two points above: you need to set up a system that works and is easy to use.
I've seen badly made systems too many times, systems that made me want to go back not to Excel but to pen and paper!
Just as an example, we have a horrible ordering system where I work.
We used to have to fill in an order form, which was an Excel spreadsheet where you would input the name of the product, the quantity, the cost, etc. It would add everything up, add VAT and so on; you printed it, gave it to the secretary who would place the order, and that was it. Inefficient, but it worked.
Now we have an online ordering system, with a centralized DB and everything. It's a horror.
It should not take me 10 minutes to fill in a damn form because of the unintuitive keyboard shortcuts and the various oddities of the software. And note that I am quite computer-savvy, so imagine what happens to people who don't like computers...
| null | CC BY-SA 2.5 | null | 2010-10-21T17:55:24.157 | 2010-10-21T17:55:24.157 | null | null | 582 | null |
3865 | 2 | null | 3856 | 5 | null | If assessing spatial autocorrelation is what you're interested in, here is a paper that simulates data and evaluates different autoregressive models in R.
[Spatial autocorrelation and the selection of simultaneous autoregressive models](http://dx.doi.org/10.1111/j.1466-8238.2007.00334.x)
by: W. D. Kissling, G. Carl
Global Ecology and Biogeography, Vol. 17, No. 1. (January 2008), pp. 59-71. (PDF available [here](http://www.iaps2010.ufz.de/data/kissling_carl_geb3346435.pdf))
Unfortunately they do not provide the R code they used to generate the simulated data, but the code showing how they fit each of the models is available in the [supplementary material](http://onlinelibrary.wiley.com/doi/10.1111/j.1466-8238.2007.00334.x/suppinfo).
It would definitely help, though, if you could be a little clearer about the nature of your data. Many of the techniques intended for spatial analysis will probably not be implemented for higher-dimensional data, and I am sure there are other techniques that are more suitable. Some type of k-nearest-neighbors technique might be useful, and make sure to change your search term from clumpy to cluster.
Here are some other references you may find helpful. I would imagine the best resources for simulating data in this manner are packages in R.
Websites: I suggest you check out the [Spatstat](http://www.spatstat.org/) R package page and the R CRAN Task View for [spatial data](http://cran.r-project.org/web/views/Spatial.html). I would also suggest you check out the [GeoDa](http://geodacenter.asu.edu/) center page, and, you never know, the [OpenSpace](http://groups.google.com/group/openspace-list) Google group may have some helpful info. I also came across this [R mailing list concerning geo data](http://www.mail-archive.com/[email protected]/info.html), but I have not combed the archive very much at this point (though I'm sure there is useful material in there).
---
Edit: For those interested in simulating a pre-specified amount of spatial auto-correlation in a distribution, I recently came across a paper that gives a quite simple recommended procedure (Dray, 2011, page 136);
>
I used the following steps to obtain a sample with a given
autocorrelation level $\rho$: (1) generate a vector $y$ containing 100 iid
normally distributed random values, (2) compute the inverse matrix $(I - \rho{W})^{-1}$
, and (3) premultiply the vector $y$ by the matrix obtained in
(2) to obtain autocorrelated data in the vector $x$ (i.e., $x = (I - \rho{W})^{-1}y$ ).
The only thing not defined here is that $W$ is an a priori defined spatial weighting matrix. I'm not sure how this would translate to the multivariate case, but hopefully it is helpful to someone!
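A minimal R sketch of the recipe above, assuming a simple row-standardised rook-neighbour matrix $W$ on a 10 x 10 lattice:
```
set.seed(1)
n   <- 100
rho <- 0.6                       # target autocorrelation level
xy  <- expand.grid(1:10, 1:10)   # lattice coordinates
d   <- as.matrix(dist(xy))
W   <- (d > 0 & d <= 1) * 1      # rook neighbours
W   <- W / rowSums(W)            # row-standardise
y   <- rnorm(n)                  # step (1): iid values
x   <- solve(diag(n) - rho * W) %*% y   # steps (2)-(3): autocorrelated values
```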
Citation:
Dray, Stephane. 2011. A new perspective about Moran's coefficient: Spatial autocorrelation as a linear regression problem. [Geographical Analysis 43(2):127-141](http://dx.doi.org/10.1111/j.1538-4632.2011.00811.x). (unfortunately I did not come across a public pdf of the document)
| null | CC BY-SA 3.0 | null | 2010-10-21T18:58:11.697 | 2011-08-26T17:19:23.190 | 2011-08-26T17:19:23.190 | 1036 | 1036 | null |
3866 | 2 | null | 3841 | 2 | null | Here is an idea that connects your binary dependent variable to a continuous, unobserved variable; a connection that may let you leverage the power of time series models for continuous variables.
Define:
$V_{w,t} = 1$ if violent incident happened in ward $w$ during time period $t$ and 0 otherwise
$P_{w,t}$ : Propensity for violence in ward $w$ at time $t$.
$P_{w,t}$ is assumed to be a continuous variable that in some sense represents 'pent-up' feelings of the inmates, which boil over at some time and result in violence. Following this reasoning, we have:
$V_{w,t} = \begin{cases} 1 & \mbox{if } P_{w,t} \ge \tau \\
0 & \mbox{otherwise}
\end{cases}$
where,
$\tau$ is an unobserved threshold which triggers a violent act.
You can then use a time series model for $P_{w,t}$ and estimate the relevant parameters. For example, you could model $P_{w,t}$ as:
$P_{w,t} = \alpha_0 + \alpha_1 P_{w,t-1} + ... + \alpha_p P_{w,t-p}+ \beta n_{w,t} + \epsilon_t$
where,
$n_{w,t}$ is the number of patients in ward $w$ at time $t$.
You could see if $\beta$ is significantly different from 0 to test your hypothesis that "more patients lead to an increase in probability of violence".
The challenge of the above model specification is that you do not really observe $P_{w,t}$ and thus the above is not your usual time series model. I do not know anything about R so perhaps someone else will chip in if there is a package that would let you estimate models like the above.
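As a point of reference (a hedged sketch, not the full model): if one drops the autoregressive terms entirely ($p = 0$), the threshold model above reduces to an ordinary probit regression, which is easy to fit in R. The data below are simulated purely for illustration; in practice `violence` and `npatients` would come from the ward records.
```
set.seed(1)
npatients <- sample(10:16, 730, replace = TRUE)           # fake daily patient counts
violence  <- rbinom(730, 1, pnorm(-2 + 0.1 * npatients))  # fake incident indicators
fit <- glm(violence ~ npatients, family = binomial(link = "probit"))
summary(fit)   # is the coefficient on npatients (the beta above) different from 0?
```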
| null | CC BY-SA 2.5 | null | 2010-10-21T19:22:27.857 | 2010-10-21T19:22:27.857 | null | null | null | null |
3867 | 2 | null | 1053 | 4 | null | Take a look at the course page for [Sociology 761: Statistical Applications in Social Research](http://socserv.socsci.mcmaster.ca/jfox/Courses/soc761/index.html). Professor [John Fox](http://socserv.socsci.mcmaster.ca/jfox/) at McMaster University has [course notes on survival analysis](http://socserv.socsci.mcmaster.ca/jfox/Courses/soc761/survival-analysis.pdf) as well as an example [R script](http://socserv.socsci.mcmaster.ca/jfox/Courses/soc761/survival-analysis.R) and [several](http://socserv.socsci.mcmaster.ca/jfox/Courses/soc761/Canada-mortality.txt) [data](http://socserv.socsci.mcmaster.ca/jfox/Courses/soc761/Henning.txt) [files](http://socserv.socsci.mcmaster.ca/jfox/Courses/soc761/Rossi.txt).
For another perspective, see [Models for Quantifying Risk, 3/e](http://rads.stackoverflow.com/amzn/click/1566986737), the standard textbook for [actuarial exam 3/MLC](http://www.soa.org/education/exam-req/edu-exam-m-detail.aspx). The bulk of the book, chapters 3-10, covers survival-contingent payment models.
| null | CC BY-SA 2.5 | null | 2010-10-21T20:35:15.033 | 2010-10-21T20:35:15.033 | null | null | null | null |
3868 | 1 | null | null | 5 | 606 | The nonlinear model I am fitting (a gamma distribution with an inverse or log link) is not converging. There is one observation with a zero value in the response variable. Does this zero affect the modelling of the data? Any answer warmly appreciated!
| Undefined link function in gamma distribution | CC BY-SA 3.0 | 0 | 2010-10-21T23:47:03.803 | 2013-02-11T08:13:51.703 | 2013-02-11T08:13:51.703 | 19879 | null | [
"generalized-linear-model",
"fitting",
"link-function"
]
|
3869 | 1 | 3871 | null | 4 | 513 | I know that if events $A$ and $B$ are independent, then $P(A\cap B) = P(A).P(B)$.
Let's say I roll a die, and define the following events:
$x$ = Number is a multiple of $3 = \lbrace 3, 6\rbrace$
$y$ = Number is more than $3 = \lbrace 4, 5, 6\rbrace$
$z$ = Number is even = $\lbrace 2, 4, 6 \rbrace$
Therefore,
$P(x) * P(y)$: 0.166666666667
$P(x \cap y)$ : 0.166666666667
$P(x) * P(z)$ : 0.166666666667
$P(x \cap z)$ : 0.166666666667
$P(y) * P(z)$ : 0.25
$P(y \cap z)$ : 0.333333333333
My question is: why are $x$ and $y$ independent events, and $x$ and $z$ independent events, but $y$ and $z$ are not? I had expected all pairs of events to be independent, as none of the events seems to affect any other.
| What are independent events? | CC BY-SA 4.0 | null | 2010-10-22T01:02:20.173 | 2022-07-09T18:12:00.490 | 2022-07-09T18:12:00.490 | 44269 | 1636 | [
"probability",
"independence"
]
|
3870 | 2 | null | 3869 | 4 | null | I find it easiest to think of independence in terms of conditional probabilities: $X$ and $Y$ are independent if $P(Y|X)=P(Y)$ and $P(X|Y)=P(X)$. That is knowing one does not change the probability of the other. In this example,
$$\begin{aligned}
P(Y|X) &= P(Y \cap X) / P(X) = 1/2 = P(Y)\\
P(Y|Z) &= P(Y \cap Z) / P(Z) = 2/3 \ne P(Y)\\
P(Z|X) &= P(Z \cap X) / P(X) = 1/2 = P(Z)
\end{aligned}$$
So knowing $X$ does not affect the probability of either $Y$ or $Z$. But knowing $Z$ does affect the probability of $Y$.
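A quick enumeration check in R, over the six equally likely die outcomes with the events as defined in the question:
```
die <- 1:6
x <- die %% 3 == 0   # multiple of 3
y <- die > 3         # more than 3
z <- die %% 2 == 0   # even
mean(y & x) / mean(x)   # P(Y|X) = 0.5  = P(Y)
mean(y & z) / mean(z)   # P(Y|Z) = 0.67 != P(Y)
mean(z & x) / mean(x)   # P(Z|X) = 0.5  = P(Z)
```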
| null | CC BY-SA 2.5 | null | 2010-10-22T01:16:57.240 | 2010-10-22T01:16:57.240 | null | null | 159 | null |
3871 | 2 | null | 3869 | 5 | null | To add to Rob's [answer](https://stats.stackexchange.com/questions/3869/what-are-independent-events/3870#3870): the joint probability of two events in the discrete case can be decomposed as follows (see the wiki definition for the [discrete case](http://en.wikipedia.org/wiki/Joint_probability_distribution#Discrete_case)):
$P(A \cap B) = P(A|B) P(B)$
Therefore, if the fact that we observe the event $B$ is not informative about the probability of our observing the event $A$ then it must be the case that:
$P(A|B) = P(A)$.
Thus, if the two variables are independent (i.e., the occurrence of one event does not tell us anything about the other event) then it must be the case that:
$P(A \cap B) = P(A) P(B)$
The crucial condition for independence of two events is: $P(A|B) = P(A)$. This condition is essentially stating that the "Probability of our observing event A over the unrestricted sample space = Probability of our observing the event A if the sample space is restricted to the set of possible outcomes compatible with the event B". Thus, the knowledge that the event B occurred has no information about the probability of observing the event A.
| null | CC BY-SA 2.5 | null | 2010-10-22T01:33:49.437 | 2010-10-22T01:33:49.437 | 2017-04-13T12:44:20.903 | -1 | null | null |
3872 | 1 | null | null | 5 | 34780 | What statistical test can I use to compare two ratios from two independent samples? The ratios are after-to-before results. I need to compare the after/before ratios for two independent models and show whether or not they differ significantly. Please help!
| Statistical test to compare two ratios from two independent models | CC BY-SA 2.5 | null | 2010-10-22T04:37:04.527 | 2020-04-27T01:57:37.307 | null | null | null | [
"statistical-significance"
]
|
3873 | 2 | null | 3650 | 2 | null | So here is my motivation for the questions. Although I know this is not necessary, I like it when people follow up on their questions, so I will do the same. I would like to thank both Srikant and whuber for their helpful answers. (I ask that no-one upvote this, as it is not an answer to the question; both whuber's and Srikant's answers deserve to be above this, and you should upvote their excellent answers.)
The other day for a class field trip I sat in on the proceedings of an appeals court. Several of the criminal appeals brought before that day concerned issues surrounding [Batson challenges](http://en.wikipedia.org/wiki/Batson_v._Kentucky). A Batson challenge concerns the use of racial discrimination when an attorney uses what are called [peremptory challenges](http://en.wikipedia.org/wiki/Peremptory_challenge) during the [voir dire](http://en.wikipedia.org/wiki/Voir_dire) process of jury selection. (I'm in the US so this is entirely in the context of USA criminal law).
Two separate questions arose in the deliberations that were statistical in nature.
The first question was the chance that two out of two Asian jurors (the purple balls) currently seated in the venire panel (the urn consisting of the total number of balls) would be selected by chance (the total number of peremptory challenges used equals the number of balls drawn). The attorney in this case stated the probability that both Asian jurors would be selected was $1/28$. I don't have the materials the attorneys presented to the court of appeals, so I do not know how the attorney calculated this probability. But this is essentially my question #1, so given the formula for the [Hypergeometric distribution](http://en.wikipedia.org/wiki/Hypergeometric_distribution) I calculated the expected probability given,
$n = 20$ The number of jurors seated in the venire panel
$p = 2$ The number of Asian jurors seated in the venire panel, both of whom were picked
$d = 6$ The number of peremptory challenges of use to an attorney
which then leads to a probability of
$$\frac{\binom{p}{p} \binom{n-p}{d-p}}{\binom{n}{d}}=\frac{\binom{2}{2} \binom{20-2}{6-2}}{\binom{20}{6}}=\frac{3}{38}.$$
*Note: these are my best guesses of the values based on what I know of the case; if I had the court record I would know for sure. Values [calculated](http://www.wolframalpha.com/input/?i=binomial%5b18,4%5d/binomial%5b20,6%5d) using Wolfram Alpha.
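The same hypergeometric probability can be checked directly in R:
```
choose(2, 2) * choose(18, 4) / choose(20, 6)   # 3/38 = 0.0789...
dhyper(2, m = 2, n = 18, k = 6)                # same value via the built-in density
```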
Although the court is unlikely to establish a bright line rule which states the probability threshold that establishes a prima facie case of racial discrimination, at least one judge thought the use of such statistics is applicable to establishing the validity of a Batson challenge.
In this case there wasn't much doubt in the eyes of the court that Asian jurors carried a stereotype among attorneys of being pro-prosecution. But in a subsequent case a defense attorney used a Batson challenge to claim a prosecutor was being racially biased by eliminating 4 out of 6 Black women. The judges in this appeal were somewhat skeptical that Black women were a group with a cognizable stereotype attached to them, but this again is a question amenable to statistical knowledge. Hence my question #2: given 100 observations, could I determine whether Black women were eliminated using peremptory challenges in a non-random manner? In reality the attorneys are not eliminating prospective jurors based only on the race and sex of the juror, but that would not preclude someone from determining whether the pattern of peremptory challenges at least appears random or not (although non-randomness does not necessarily indicate racial discrimination).
Again I'd like to say thank you both to whuber and Srikant for their answers.
| null | CC BY-SA 2.5 | null | 2010-10-22T04:55:00.487 | 2010-10-22T16:38:05.127 | 2010-10-22T16:38:05.127 | 919 | 1036 | null |
3874 | 1 | 3876 | null | 17 | 17253 | I have data from patients given 2 different kinds of treatment during surgery.
I need to analyze the treatments' effect on heart rate.
The heart rate measurement is taken every 15 minutes.
Given that the surgery length can be different for each patient, each patient can have between 7 and 10 heart rate measurements.
So an unbalanced design should be used.
I'm doing my analysis in R and have been using the ez package to do repeated-measures mixed-effects ANOVA, but I do not know how to analyse unbalanced data. Can anyone help?
Suggestions on how to analyze the data are also welcomed.
Update:
As suggested, I fitted the data using the `lmer` function and found that the best model is:
```
heart.rate~ time + treatment + (1|id) + (0+time|id) + (0+treatment|time)
```
with the following result:
```
Random effects:
Groups Name Variance Std.Dev. Corr
id time 0.00037139 0.019271
id (Intercept) 9.77814104 3.127002
time treat0 0.09981062 0.315928
treat1 1.82667634 1.351546 -0.504
Residual 2.70163305 1.643665
Number of obs: 378, groups: subj, 60; time, 9
Fixed effects:
Estimate Std. Error t value
(Intercept) 72.786396 0.649285 112.10
time 0.040714 0.005378 7.57
treat1 2.209312 1.040471 2.12
Correlation of Fixed Effects:
(Intr) time
time -0.302
treat1 -0.575 -0.121
```
Now I'm lost interpreting the result.
Am I right in concluding that the two treatments differed in their effect on heart rate? What does the correlation of -0.504 between treat0 and treat1 mean?
| Unbalanced mixed effect ANOVA for repeated measures | CC BY-SA 3.0 | null | 2010-10-22T05:03:26.650 | 2011-04-08T01:04:33.640 | 2011-04-08T01:04:33.640 | 930 | 1663 | [
"r",
"mixed-model",
"repeated-measures",
"lme4-nlme"
]
|
3875 | 2 | null | 3872 | 3 | null | Any test for independence of a 2x2 contingency table will do! A chi-square or t-test are the textbook simple solutions. The "best" test in this situation is called Barnard's test for superiority -- the StatXact software will happily calculate this for you.
| null | CC BY-SA 2.5 | null | 2010-10-22T05:16:26.863 | 2010-10-22T05:16:26.863 | null | null | 1122 | null |
3876 | 2 | null | 3874 | 18 | null | The lme/lmer functions from the nlme/lme4 packages are able to deal with unbalanced designs. You should make sure that time is a numeric variable. You would probably also want to test for different types of curves. The code will look something like this:
```
library(lme4)
library(lattice)  # xyplot() below comes from the lattice package
#plot data with a plot per person including a regression line for each
xyplot(heart.rate ~ time|id, groups=treatment, type= c("p", "r"), data=heart)
#Mixed effects modelling
#variation in intercept by participant
lmera.1 <- lmer(heart.rate ~ treatment * time + (1|id), data=heart)
#variation in intercept and slope without correlation between the two
lmera.2 <- lmer(heart.rate ~ treatment * time + (1|id) + (0+time|id), data=heart)
#As lmera.1 but with correlation between slope and intercept
lmera.3 <- lmer(heart.rate ~ treatment * time + (1+time|id), data=heart)
#Determine which random effects structure fits the data best
anova(lmera.1, lmera.2, lmera.3)
```
To get quadratic models use the formula "heart.rate ~ treatment * time * I(time^2) + (random effects)".
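As a concrete sketch (my own illustration, using the same hypothetical `heart` data frame and the random-effects structure of `lmera.3` above; fit with ML so that models differing in their fixed effects can be compared):
```
lmera.lin  <- lmer(heart.rate ~ treatment * time + (1 + time | id),
                   data = heart, REML = FALSE)
lmera.quad <- lmer(heart.rate ~ treatment * time * I(time^2) + (1 + time | id),
                   data = heart, REML = FALSE)
anova(lmera.lin, lmera.quad)  # likelihood-ratio comparison of linear vs quadratic fit
```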
Update:
In this case, where treatment is a between-subjects factor, I would stick with the model specifications above. I don't think the term (0+treatment|time) is one that you want included in the model; to me it doesn't make sense in this instance to treat time as a random-effects grouping variable.
But to answer your question of what the correlation of -0.504 between treat0 and treat1 means: it is the correlation coefficient between the two treatments, where each time grouping contributes one pair of values. This makes more sense when id is the grouping factor and treatment is a within-subjects variable; then you have an estimate of the correlation between the intercepts of the two conditions.
Before drawing any conclusions from the model, refit lmera.2 with REML=FALSE. Then load the "languageR" package and run:
```
plmera.2 <- pvals.fnc(lmera.2)
plmera.2
```
Then you can get p-values, but by the looks of it, there is probably a significant effect of time and a significant effect of treatment.
| null | CC BY-SA 2.5 | null | 2010-10-22T05:36:47.643 | 2010-10-28T15:44:12.870 | 2010-10-28T15:44:12.870 | 966 | 966 | null |
3877 | 2 | null | 3826 | 4 | null | Given the current issues with the user-written `sizefx` command for Stata that chl and I have uncovered, here's an alternative way of doing things in Stata using the user-written `metan` command. This is designed for meta-analysis of study-level summary data, so you need to enter the means, SDs and Ns (which I've taken from chl's answer) in one row, then run a 'meta-analysis' on this single study (so the pooled SMD is the same as the SMD for the single study, and heterogeneity measures are undefined):
```
input mean0 sd0 n0 mean1 sd1 n1
mean0 sd0 n0 mean1 sd1 n1
1. 23.66154 5.584522 130 22.30508 4.511496 59
2. end
. list
+------------------------------------------------------+
| mean0 sd0 n0 mean1 sd1 n1 |
|------------------------------------------------------|
1. | 23.66154 5.584522 130 22.30508 4.511496 59 |
+------------------------------------------------------+
. metan n1 mean1 sd1 n0 mean0 sd0, cohen
Study | SMD [95% Conf. Interval] % Weight
---------------------+---------------------------------------------------
1 | -0.257 -0.566 0.052 100.00
---------------------+---------------------------------------------------
I-V pooled SMD | -0.257 -0.566 0.052 100.00
---------------------+---------------------------------------------------
Heterogeneity chi-squared = 0.00 (d.f. = 0) p = .
I-squared (variation in SMD attributable to heterogeneity) = .%
Test of SMD=0 : z= 1.63 p = 0.103
```
The resulting point estimate and CI for Cohen's d agree with those that chl calculated by hand and using `ci.smd()` from R's MBESS library.
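For anyone who wants to cross-check these numbers outside Stata, here is a small R sketch of the hand calculation from the same summary statistics (my own addition, not part of the `metan` output):
```
m0 <- 23.66154; s0 <- 5.584522; n0 <- 130   # group 0 summaries from above
m1 <- 22.30508; s1 <- 4.511496; n1 <- 59    # group 1 summaries from above
sp <- sqrt(((n0 - 1) * s0^2 + (n1 - 1) * s1^2) / (n0 + n1 - 2))  # pooled SD
(m1 - m0) / sp   # Cohen's d, approximately -0.257
```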
I'm less clear whether this entirely answers the original question, which asked how to calculate Cohen's d after `logit`. But Cohen's d is one way of calculating the standardized mean difference, which I thought applied only to a continuous outcome measured in two groups. Perhaps the questioner could clarify.
| null | CC BY-SA 2.5 | null | 2010-10-22T07:24:42.113 | 2010-10-22T07:24:42.113 | null | null | 449 | null |
3878 | 2 | null | 2125 | 11 | null | Correlation analysis only quantifies the relation between two variables, ignoring which variable is dependent and which is independent. But before applying regression you have to decide which variable's impact on the other you want to examine.
| null | CC BY-SA 2.5 | null | 2010-10-22T09:17:26.323 | 2010-10-22T09:51:55.363 | 2010-10-22T09:51:55.363 | null | null | null |
3879 | 1 | 3880 | null | 16 | 92346 | This question is related to [my previous question](https://stats.stackexchange.com/questions/3842/how-to-create-a-barplot-diagram-where-bars-are-side-by-side-in-r). I would like to put values over the bars in a barplot. I am a beginner at plotting in R.
| How to put values over bars in barplot in R | CC BY-SA 2.5 | null | 2010-10-22T10:05:15.167 | 2018-01-24T13:15:14.310 | 2017-04-13T12:44:21.160 | -1 | 1389 | [
"r",
"data-visualization"
]
|
3880 | 2 | null | 3879 | 16 | null | To add text to a plot, just use the `text` command. From @chl's [answer](https://stats.stackexchange.com/questions/3842/how-to-create-a-barplot-diagram-where-bars-are-side-by-side-in-r/3843#3843) to your previous question:
```
##Create data
x = replicate(2, sample(letters[1:2], 100, rep=T))
apply(x, 2, table)
##Create plot
barplot(matrix(c(5, 3, 8, 9), nr=2), beside=TRUE,
col=c("aquamarine3", "coral"),
names.arg=LETTERS[1:2],
        ylim=range(0,12))
##Add text at coordinates: x=2, y=6
##Use trial and error to place them
text(2, 6, "A label")
text(5, 10, "Another label")
```

| null | CC BY-SA 2.5 | null | 2010-10-22T10:10:01.447 | 2010-10-22T10:10:01.447 | 2017-04-13T12:44:21.160 | -1 | 8 | null |
3882 | 2 | null | 3879 | 3 | null | If you're learning to plot in R you might look at the [R graph gallery](https://www.r-graph-gallery.com/) (original [here](https://web.archive.org/web/20100910025837/http://addictedtor.free.fr/graphiques/)).
All the graphs there are posted with the code used to build them. It's a good resource.
| null | CC BY-SA 3.0 | null | 2010-10-22T11:20:54.903 | 2018-01-24T13:15:14.310 | 2018-01-24T13:15:14.310 | 7290 | 1475 | null |
3883 | 2 | null | 3879 | 7 | null | Another example of the use of the `text` command:
```
u <- c(3.2,6.6,11.7,16.3,16.6,15.4,14.6,12.7,11.4,10.2,9.8,9.1,9.1,9.0,8.8,8.4,7.7)
p <- c(3737,3761,3784,3802,3825,3839,3850,3862,3878,3890,3901,3909,3918,3926,3935,3948) -
     c(385,394,401,409,422,430,434,437,437,435,436,437,439,442,447,452)
e <- c(2504,2375,2206,2071,2054,2099,2127,2170,2222,2296,2335,2367,2372,2365,2365,2401)
par(mar=c(2,3,1,2),las=1,mgp=c(2.2,1,0))
x <- barplot(u,names.arg=1990:2006,col="palegreen4",space=.3,ylim=c(0,20),
ylab="%",xaxs="r",xlim=c(.8,21.6))
text(x,u+.4,format(u,T))
lines(x[-17],100*e/p-55,lwd=2)
points(x[-17],100*e/p-55,lwd=2,cex=1.5,pch=1)
axis(4,seq(0,20,5),seq(55,75,5))
text(c(x[1]+1,x[5],x[16]+1),c(100*e/p-55)[c(1,5,16)]+c(0,-1,0),
format(c(100*e/p)[c(1,5,16)],T,di=3))
box()
```
| null | CC BY-SA 2.5 | null | 2010-10-22T12:13:40.343 | 2010-10-22T12:13:40.343 | null | null | 339 | null |
3884 | 2 | null | 3857 | 5 | null | I underscore all the answers given already, but let's call a cat a cat: in many workplaces it is nearly impossible to convince management that investment in "exotic" software tools (exotic to them, that is) is necessary, let alone hiring somebody who could set them up and maintain them. I have told quite a few clients that they would benefit greatly from hiring a statistician with a thorough background in software and databases, but "no can do" is the general response.
So as long as that isn't going to happen, there are some simple things you can do with Excel that will make life easier. And the first of these is without doubt version control. More info on version control with Excel can be found [here](https://stackoverflow.com/questions/131605/best-way-to-do-version-control-for-ms-excel).
## Some things about using excel
People using EXCEL very often like its formula features. Yet, in my experience, these are the most important source of errors within EXCEL sheets, and of problems when trying to read EXCEL files into other software. I refuse to work with sheets containing formulas.
I also force everybody I work with to deliver the EXCEL sheets in a plain format, meaning that:
- The first row contains the names of the different variables
- The spreadsheet starts in cell A1
- All data is put in columns, without interruptions and without formatting.
- If possible, the data is saved in .csv format as well. It's not difficult to write a VBA script that will extract the data, reformat it and put it in a .csv file. This also allows for better version control, as you can make a .csv dump of the data every day.
If there is a general structure that the data always have, then it might be good to develop a template with underlying VBA macros to add data and generate the dataset for analysis. This generally prevents every employee from coming up with his or her own "genius" system of data storage, and it allows you to write your code around this structure.
This said, if you can convince everybody to use SQL (and a front end for entering data), you can link R directly to the database. This will greatly increase performance.
## Data structure and management
As a general rule, the data stored in databases (or EXCEL sheets if they insist) should be the absolute minimum, meaning that any variable that can be calculated from other variables should not be contained in the database. Mind you, sometimes it can be beneficial to store those derived or transformed variables as well, if the calculations are tedious and take a long time. But these should be stored in a separate database, if necessary linked to the original one.
Thought should also be given to what is considered one case (and hence one row). As an example, people tend to produce time series by making a new variable for each time point. While this makes sense in EXCEL, reading in these data demands quite a bit of reshaping of the data matrix. The same goes for comparing groups: there should be one group indicator and one response variable, not a response variable for each group. This way data structures can be standardized as well.
A last thing I run into frequently is the use of different measurement units. Lengths are given in meters or centimeters, temperatures in Celsius, Kelvin or Fahrenheit, ... Any front end or template should indicate the unit in which each variable is measured.
And even after all these things, you still want to have a data control step before you actually start the analysis. Again, this can be any script that runs daily (e.g. overnight) on new entries and flags problems immediately (out of range, wrong type, missing fields, ...) so they can be corrected as fast as possible. If you have to return to an entry that was made 2 months ago to find out what is wrong and why, you had better have some good "Sherlock skills" to correct it.
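As an illustration of such a control step, here is a minimal R sketch (the file name, column names and valid ranges are all hypothetical) that flags out-of-range or missing values in a daily .csv dump:
```
## Minimal sketch; 'entries.csv' and the checks below are hypothetical examples
d <- read.csv("entries.csv", stringsAsFactors = FALSE)
checks <- list(
  "missing temperature"      = is.na(d$temperature),
  "temperature out of range" = !is.na(d$temperature) & (d$temperature < -50 | d$temperature > 60),
  "unknown temperature unit" = !d$unit %in% c("C", "K", "F")
)
problems <- do.call(rbind, lapply(names(checks), function(nm)
  if (any(checks[[nm]])) data.frame(row = which(checks[[nm]]), issue = nm)))
if (!is.null(problems)) write.csv(problems, "problems_to_fix.csv", row.names = FALSE)
```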
my 2 cents
| null | CC BY-SA 2.5 | null | 2010-10-22T12:28:30.873 | 2010-10-22T12:28:30.873 | 2020-06-11T14:32:37.003 | -1 | 1124 | null |
3885 | 1 | 3886 | null | 4 | 859 | Three unbiased estimators of standard deviation have been presented in the literature for m samples of size n. The first one is based on the sample ranges and can be obtained by
$\hat \sigma_1 = \frac{\bar R}{d_2}$
The second one is based on sample standard deviations and can be calculated by
$\hat \sigma_2 = \frac{\bar S}{c_4}$
The third estimator is due to sample variance and can be estimated as
$\hat \sigma_3 = \frac{\sqrt{\bar {S^2}}}{k_4}$
The denominators are constant values dependent on the sample size n and make the estimators unbiased. Does anybody know of a method to get their value in R (instead of using tables in statistical quality control books)?
| Computing unbiased estimators of $\sigma$ for m samples of size n | CC BY-SA 2.5 | null | 2010-10-22T13:05:30.993 | 2010-10-22T13:22:57.057 | null | null | 339 | [
"r",
"standard-deviation",
"quality-control"
]
|
3886 | 2 | null | 3885 | 2 | null | Ok, I finally found them [here](http://finzi.psych.upenn.edu/R/library/IQCC/html/00Index.html)
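For what it's worth, the constants can also be computed from first principles. Here is a hedged sketch based on the standard formulas (the Gamma-function form of $c_4$ and the expected-range integral behind $d_2$); the values agree with the usual SPC tables, e.g. $c_4(5)\approx 0.940$ and $d_2(5)\approx 2.326$:
```
## Sketch: control-chart constants computed directly
c4 <- function(n) sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)
d2 <- function(n) integrate(function(x) 1 - pnorm(x)^n - (1 - pnorm(x))^n,
                            lower = -Inf, upper = Inf)$value
c4(5)  # ~0.9400
d2(5)  # ~2.326
```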
| null | CC BY-SA 2.5 | null | 2010-10-22T13:22:57.057 | 2010-10-22T13:22:57.057 | null | null | 339 | null |
3887 | 2 | null | 3857 | 3 | null | [VisTrails: A Python-Based Scientific Workflow and Provenance System](http://us.pycon.org/2010/conference/schedule/event/13/).
This talk, given at PyCon 2010, has some good ideas. It is worth listening to even if you are not interested in using VisTrails or Python. In the end, I think the key is to be able to require that there be a clearly documented way to reproduce the data, and some validation that people can actually do so.
Quoting:
>
"In this talk, we will give an overview of VisTrails (http://www.vistrails.org), a python-based open-source scientific workflow that transparently captures provenance (i.e., lineage) of both data products and the processes used to derive these products. We will show how VisTrails can be used to streamline data exploration and visualization. Using real examples, we will demonstrate key features of the system, including the ability to visually create information processing pipelines that combine multiple tools and Iibraries such as VTK, pylab, and matplotlib. We will also show how VisTrails leverages provenance information not only to support result reproducibility, but also to simplify the creation and refinement of pipelines."
| null | CC BY-SA 3.0 | null | 2010-10-22T13:35:54.403 | 2013-09-04T00:37:14.277 | 2013-09-04T00:37:14.277 | 9007 | 1189 | null |
3888 | 1 | 3889 | null | 2 | 1054 | I have a series of boxplots that I am generating with ggplot2. I want to control the order in which they are displayed. Is there a way to control this order? I have two genotypes and I want them displayed as WT then KO rather than the reverse (which is what I am getting by default). My code right now is:
```
p <- qplot(genotype, activity.ratio, data = df)
p + geom_boxplot() + geom_jitter()
```
Also, if this is better suited as an SO question than one for this forum, please let me know (and if this is the correct forum, could someone create a ggplot tag?).
| Reorder categorical data in ggplot2 | CC BY-SA 2.5 | null | 2010-10-22T13:58:09.980 | 2010-11-30T16:42:29.257 | 2010-11-30T16:42:29.257 | 8 | 1327 | [
"r",
"boxplot",
"ggplot2"
]
|
3889 | 2 | null | 3888 | 5 | null | Would that help?
```
x <- gl(2, 20, 40, labels=c("KO","WT"))
y <- rnorm(40)
qplot(x,y)
qplot(relevel(x, "WT"),y)
```
| null | CC BY-SA 2.5 | null | 2010-10-22T14:06:48.103 | 2010-10-22T14:06:48.103 | null | null | 930 | null |
3890 | 1 | 3891 | null | 3 | 7936 | I would like to use the function weighted.median() in the package [R.basic](http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/R.basic/html/weighted.median.html) (and some other functions too, although this one is the most important).
```
install.packages("R.basic")
```
gives me:
```
package ‘R.basic’ is not available
```
and
```
library(R.basic)
```
says
```
Error in library("R.basic") : there is no package called 'R.basic'
```
There is a downloadable version [here](http://www1.maths.lth.se/help/R/R.classes/). But, typing:
```
install.packages("~/Documents/R.classes_0.62.tar.gz", CRAN=NULL)
```
Gives
```
Error in do_install(pkg) :
this seems to be a bundle -- and they are defunct
Warning message:
In install.packages("~/Documents/R.classes_0.62.tar.gz", :
installation of package '~/Documents/R.classes_0.62.tar.gz' had non-zero exit status
```
How do I get through this?
| package R.basic | CC BY-SA 2.5 | null | 2010-10-22T14:08:01.243 | 2014-04-23T13:54:08.877 | 2010-10-22T21:42:16.497 | 603 | 603 | [
"r"
]
|
3891 | 2 | null | 3890 | 6 | null | That's because the package doesn't exist on CRAN (see [the package list](http://cran.r-project.org/web/packages/)). It may have failed a build in a recent version of R.
You can install it directly from the author's site like so:
```
install.packages(c("R.basic"), contriburl="http://www.braju.com/R/repos/")
```
See [Henrik Bengtsson's home page](http://www.braju.com/R/) for more detail.
Edit
Just to add a little further: it looks like this package fails to build on later versions of R. You should probably get the source and build it yourself, or else contact the package author.
| null | CC BY-SA 2.5 | null | 2010-10-22T14:12:55.267 | 2010-10-22T14:21:15.993 | 2010-10-22T14:21:15.993 | 5 | 5 | null |
3892 | 1 | 3895 | null | 11 | 4965 | Why does OLS estimation involve taking vertical deviations of the points to the line rather than horizontal distances?
| Why vertical distances? | CC BY-SA 2.5 | null | 2010-10-22T15:08:17.383 | 2019-02-19T10:51:20.247 | null | null | 333 | [
"least-squares"
]
|
3893 | 1 | 4095 | null | 38 | 4205 | In all contexts I am familiar with, cross-validation is used solely with the goal of increasing predictive accuracy. Can the logic of cross-validation be extended to estimating unbiased relationships between variables?
While [this](http://dx.doi.org/10.1007/s10940-009-9077-7) paper by Richard Berk demonstrates the use of a hold-out sample for parameter selection in the "final" regression model (and demonstrates why step-wise parameter selection is not a good idea), I still don't see how that ensures unbiased estimates of the effect X has on Y any more than choosing a model based on logic and prior knowledge of the subject does.
I ask that people cite examples in which a hold-out sample was used to aid causal inference, or general essays that may help my understanding. I also don't doubt that my conception of cross-validation may be naive, so if it is, say so. Offhand, it seems the use of a hold-out sample would be amenable to causal inference, but I do not know of any work that does this or how it would be done.
Citation for the Berk Paper:
[Statistical Inference After Model Selection](http://dx.doi.org/10.1007/s10940-009-9077-7)
by: Richard Berk, Lawrence Brown, Linda Zhao
Journal of Quantitative Criminology, Vol. 26, No. 2. (1 June 2010), pp. 217-236.
PDF version [here](http://www-stat.wharton.upenn.edu/~lzhao/papers/MyPublication/StatInfAfterMS_JQC_2010.pdf)
[This](https://stats.stackexchange.com/q/3252/1036) question on exploratory data analysis in small sample studies by chl prompted this question.
| Can cross validation be used for causal inference? | CC BY-SA 2.5 | null | 2010-10-22T15:12:03.423 | 2012-05-02T18:57:33.890 | 2017-04-13T12:44:41.967 | -1 | 1036 | [
"cross-validation",
"causality"
]
|
3894 | 1 | null | null | 5 | 677 | I was wondering whether I am misremembering something, or is there a dimensionless measure of dispersion calculated as range/(mean or median), where the range is the difference between the max and min values?
P.S. Could this be called "relative range"?
| Is there a parameter that is calculated as a range to mean ratio? | CC BY-SA 2.5 | null | 2010-10-22T15:15:30.043 | 2010-10-22T15:33:55.257 | 2010-10-22T15:24:29.867 | 219 | 219 | [
"distributions"
]
|
3895 | 2 | null | 3892 | 13 | null | OLS ([ordinary least squares](http://en.wikipedia.org/wiki/Ordinary_least_squares)) assumes that the values represented by the horizontal distances are either predetermined by the experimenter or measured with high accuracy (relative to the vertical distances). When there is a question of uncertainty in the horizontal distances, you shouldn't be using OLS, but instead should look into [errors-in-variables models](http://en.wikipedia.org/wiki/Errors-in-variables_models) or, possibly, [principal components analysis](http://en.wikipedia.org/wiki/Principal_component_analysis).
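To see the difference in practice, here is a small simulated sketch (my own illustration, not part of the linked references) comparing the OLS slope, which minimizes vertical distances, with the slope of the first principal component, which minimizes orthogonal distances:
```
set.seed(1)
x <- rnorm(100)
y <- 2 + 0.5 * x + rnorm(100, sd = 0.5)
coef(lm(y ~ x))[2]                                  # OLS slope (vertical distances)
pc <- prcomp(cbind(x, y))                           # PCA on the two variables
pc$rotation["y", "PC1"] / pc$rotation["x", "PC1"]   # orthogonal-distance slope
```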
| null | CC BY-SA 2.5 | null | 2010-10-22T15:16:37.303 | 2011-01-19T04:56:27.243 | 2011-01-19T04:56:27.243 | 919 | 919 | null |
3896 | 2 | null | 3894 | 2 | null | The [coefficient of variation](http://en.wikipedia.org/wiki/Coefficient_of_variation) is defined as (standard deviation) / mean. Is that what you're thinking of??
| null | CC BY-SA 2.5 | null | 2010-10-22T15:26:45.180 | 2010-10-22T15:26:45.180 | null | null | 449 | null |
3897 | 2 | null | 3894 | 3 | null | The measure can be seen in a few publications. I remember a paper by Lord (1947) using the range as an approximation to the standard deviation for the t-test. So the range-to-mean ratio would be a (scaled) approximation to the coefficient of variation, which onestop has mentioned already.
| null | CC BY-SA 2.5 | null | 2010-10-22T15:33:55.257 | 2010-10-22T15:33:55.257 | null | null | 1573 | null |
3898 | 1 | null | null | 7 | 1871 | I am working with a given factor model of the form
$$y=B x +\epsilon$$
where $x$ is a random vector in $R^M$, $B$ is an $N\times M$ matrix of factor loadings, $y$ is a random vector in $R^N$ (with $N \gg M$) and $\epsilon$ is a vector of mean-zero innovations, independent of $x$. For simplicity, the innovations are normally distributed and pairwise independent. My question is relatively simple to state: How can I test whether a factor is significant, in the same sense in which a predictor is significant in linear regression?
Clarification: I should perhaps have emphasized that I am working with a given model, and I have to assess the predictive value of each factor of the model. In other words, is there a simple way to assess whether dropping a factor from the model would result in a loss of predictive power, without comparing two different models with and without the factor? In the latter case, there is a great deal of literature on model selection, based on ML, GMM, etc. (The rotational invariance of factor models doesn't play an essential role, btw).
| Factor Significance for Factor Model | CC BY-SA 2.5 | null | 2010-10-22T15:41:29.937 | 2012-09-19T06:30:07.777 | 2010-10-23T22:35:32.570 | 30 | 30 | [
"hypothesis-testing",
"factor-analysis"
]
|
3899 | 2 | null | 1053 | 4 | null | I learned from Hosmer, Lemeshow & May ["Applied Survival Analysis: Regression Modeling of Time-to-Event Data"](https://rads.stackoverflow.com/amzn/click/com/0471754994) (2nd ed., 2008), which covers the basics. It also helped that I found a really cheap copy...
| null | CC BY-SA 4.0 | null | 2010-10-22T16:15:02.407 | 2022-05-10T10:07:08.253 | 2022-05-10T10:07:08.253 | 53690 | 795 | null |
3900 | 2 | null | 3898 | 1 | null | The short answer is: There is something you can do but I am not sure how meaningful it will be.
The Long answer: I will give the long answer for a simple model where we have only one unknown latent factor. The idea carries over to the more general case albeit with more complications. It follows from your factor model that:
$E(y) = B E(x)$ and
$Var(y) = B B^T Var(x) + \sigma^2 I$
(Note: keep in mind that in this simplified model $x$ is a scalar, since we are dealing with only one factor.)
Since the data is normally distributed the above two equations determine your likelihood function.
However, do note that you have identification issues here as you have to estimate $B$ and $x$ simultaneously. The traditional way to solve the identification issue is to assume that:
$B(1) = 1$
(i.e., set the factor loading of the first element to 1. Otherwise we can scale $B$ by $\alpha$ and scale $E(x)$ by $\frac{1}{\alpha}$ and obtain an identical E(y)).
It follows that:
$E(x) = E(y(1))$
In other words, the above identification constraint on B has effectively constrained the mean of our factor to be the sample mean of our first dependent variable (i.e., y(1)).
For similar reasons, we assume that:
$Var(x) = 1$
(Otherwise you could just scale $B$ by $\sqrt{\alpha}$ and scale $Var(x)$ by $\frac{1}{\alpha}$ and your likelihood function will not change).
Since we impose an identification constraint on the distribution of x (which is in some sense arbitrary) I am not sure how meaningful it is to perform statistical testing for a factor.
You could compute factor scores and perform standard statistical testing using the mean of $E(x) = E(y(1))$ and $Var(x) = 1$.
So, the answer to your question depends on:
>
Is the conclusion from the above statistical test invariant to your choice of identification constraints?
I do not know the answer to the above question.
| null | CC BY-SA 2.5 | null | 2010-10-22T16:21:05.233 | 2010-10-22T16:21:05.233 | null | null | null | null |
3901 | 2 | null | 2504 | 5 | null | The counterexample is not relevant to the question asked. You want to test the null hypothesis that a sample of i.i.d. random variables is drawn from a distribution having finite variance, at a given significance level. I recommend a good reference text like "Statistical Inference" by Casella to understand the use and the limit of hypothesis testing. Regarding h.t. on finite variance, I don't have a reference handy, but the following paper addresses a similar, but stronger, version of the problem, i.e., if the distribution tails follow a power law.
[POWER-LAW DISTRIBUTIONS IN EMPIRICAL DATA](http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.1062v2.pdf) SIAM Review 51 (2009): 661--703.
| null | CC BY-SA 2.5 | null | 2010-10-22T19:15:54.743 | 2010-10-22T19:15:54.743 | null | null | 30 | null |
3902 | 1 | 3906 | null | 1 | 1643 | Does the 'corr' function in Matlab apply a correction for tied ranks when calculating the Spearman correlation coefficient?
Thanks.
| Does corr function in Matlab consider correction for tied ranks? | CC BY-SA 2.5 | null | 2010-10-22T19:26:11.547 | 2019-06-29T08:19:46.140 | 2019-06-29T08:19:46.140 | 3277 | 1564 | [
"correlation",
"matlab",
"ties"
]
|
3903 | 2 | null | 3892 | 1 | null | Interesting question. My answer would be that when we fit an OLS model we are implicitly and primarily trying to predict/explain the dependent variable at hand - the "Y" in "Y vs X." As such, our main concern is to minimize the distance from our fitted line to the actual observations with respect to the outcome, which means minimizing the vertical distances. These of course define the residuals.
Also, least squares formulas are easier to derive than most other competing methods, which is perhaps why it came around first. :P
As 'whuber' alludes to above, there are other approaches that treat X and Y with equal emphasis when fitting a best-fit line. One such approach that I'm aware of is "principal lines" or "principal curves" regression, which minimizes the orthogonal distances between the points and the line (instead of vertical error lines you have ones at 90 degrees to the fitted line). I post one reference below for your reading. It's lengthy but very accessible and enlightening.
Hope this helps,
Brenden
- Trevor Hastie. Principal Curves and Surfaces, PhD thesis, Stanford University; 1984
| null | CC BY-SA 2.5 | null | 2010-10-22T20:36:49.587 | 2010-10-22T20:36:49.587 | null | null | 1501 | null |
3904 | 2 | null | 3734 | 168 | null | I apologize in advance for the length of this post: it is with some trepidation that I let it out in public at all, because it takes some time and attention to read through and undoubtedly has typographic errors and expository lapses. But here it is for those who are interested in the fascinating topic, offered in the hope that it will encourage you to identify one or more of the many parts of the CLT for further elaboration in responses of your own.
---
Most attempts at "explaining" the CLT are illustrations or just restatements that assert it is true. A really penetrating, correct explanation would have to explain an awful lot of things.
Before looking at this further, let's be clear about what the CLT says. As you all know, there are versions that vary in their generality. The common context is a sequence of random variables, which are certain kinds of functions on a common probability space. For intuitive explanations that hold up rigorously I find it helpful to think of a probability space as a box with distinguishable objects. It doesn't matter what those objects are but I will call them "tickets." We make one "observation" of a box by thoroughly mixing up the tickets and drawing one out; that ticket constitutes the observation. After recording it for later analysis we return the ticket to the box so that its contents remain unchanged. [A "random variable" basically is a number written on each ticket.](https://stats.stackexchange.com/questions/50/what-is-meant-by-a-random-variable/54894#54894)
In 1733, [Abraham de Moivre](http://en.wikipedia.org/wiki/Abraham_de_Moivre#Probability) considered the case of a single box where the numbers on the tickets are only zeros and ones ("Bernoulli trials"), with some of each number present. He imagined making $n$ physically independent observations, yielding a sequence of values $x_1, x_2, \ldots, x_n$, all of which are zero or one. The sum of those values, $y_n = x_1 + x_2 + \ldots + x_n$, is random because the terms in the sum are. Therefore, if we could repeat this procedure many times, various sums (whole numbers ranging from $0$ through $n$) would appear with various frequencies--proportions of the total. (See the histograms below.)
Now one would expect--and it's true--that for very large values of $n$, all the frequencies would be quite small. If we were to be so bold (or foolish) as to attempt to "take a limit" or "let $n$ go to $\infty$", we would conclude correctly that all frequencies reduce to $0$. But if we simply draw a histogram of the frequencies, without paying any attention to how its axes are labeled, we see that the histograms for large $n$ all begin to look the same: in some sense, these histograms approach a limit even though the frequencies themselves all go to zero.

These histograms depict the results of repeating the procedure of obtaining $y_n$ many times. $n$ is the "number of trials" in the titles.
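(As an aside, a few lines of R -- my own sketch, not part of the original demonstration -- reproduce histograms of this kind by simulating many sums of $n$ Bernoulli tickets.)
```
## Sketch: histograms of y_n = sum of n draws from a box of 0/1 tickets
par(mfrow = c(1, 3))
for (n in c(10, 100, 1000)) {
  y <- replicate(10000, sum(sample(0:1, n, replace = TRUE)))
  hist(y, breaks = 50, main = paste(n, "trials"), xlab = "sum")
}
```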
The insight here is to draw the histogram first and label its axes later. With large $n$ the histogram covers a large range of values centered around $n/2$ (on the horizontal axis) and a vanishingly small interval of values (on the vertical axis), because the individual frequencies grow quite small. Fitting this curve into the plotting region has therefore required both a shifting and rescaling of the histogram. The mathematical description of this is that for each $n$ we can choose some central value $m_n$ (not necessarily unique!) to position the histogram and some scale value $s_n$ (not necessarily unique!) to make it fit within the axes. This can be done mathematically by changing $y_n$ to $z_n = (y_n - m_n) / s_n$.
Remember that [a histogram represents frequencies by areas](https://stats.stackexchange.com/a/24590/919) between it and the horizontal axis. The eventual stability of these histograms for large values of $n$ should therefore be stated in terms of area. So, pick any interval of values you like, say from $a$ to $b \gt a$ and, as $n$ increases, track the area of the part of the histogram of $z_n$ that horizontally spans the interval $(a, b]$. The CLT asserts several things:
- No matter what $a$ and $b$ are, if we choose the sequences $m_n$ and $s_n$ appropriately (in a way that does not depend on $a$ or $b$ at all), this area indeed approaches a limit as $n$ gets large.
- The sequences $m_n$ and $s_n$ can be chosen in a way that depends only on $n$, the average of values in the box, and some measure of spread of those values--but on nothing else--so that regardless of what is in the box, the limit is always the same. (This universality property is amazing.)
- Specifically, that limiting area is the area under the curve $y = \exp(-z^2/2) / \sqrt{2 \pi}$ between $a$ and $b$: this is the formula of that universal limiting histogram.
The first generalization of the CLT adds,
- When the box can contain numbers in addition to zeros and ones, exactly the same conclusions hold (provided that the proportions of extremely large or small numbers in the box are not "too great," a criterion that has a precise and simple quantitative statement).
The next generalization, and perhaps the most amazing one, replaces this single box of tickets with an ordered indefinitely long array of boxes with tickets. Each box can have different numbers on its tickets in different proportions. The observation $x_1$ is made by drawing a ticket from the first box, $x_2$ comes from the second box, and so on.
- Exactly the same conclusions hold provided the contents of the boxes are "not too different" (there are several precise, but different, quantitative characterizations of what "not too different" has to mean; they allow an astonishing amount of latitude).
These five assertions, at a minimum, need explaining. There's more. Several intriguing aspects of the setup are implicit in all the statements. For example,
- What is special about the sum? Why don't we have central limit theorems for other mathematical combinations of numbers such as their product or their maximum? (It turns out we do, but they are not quite so general nor do they always have such a clean, simple conclusion unless they can be reduced to the CLT.) The sequences of $m_n$ and $s_n$ are not unique but they're almost unique in the sense that eventually they have to approximate the expectation of the sum of $n$ tickets and the standard deviation of the sum, respectively (which, in the first two statements of the CLT, equals $\sqrt{n}$ times the standard deviation of the box).
The standard deviation is one measure of the spread of values, but it is by no means the only one nor is it the most "natural," either historically or for many applications. (Many people would choose something like a median absolute deviation from the median, for instance.)
- Why does the SD appear in such an essential way?
- Consider the formula for the limiting histogram: who would have expected it to take such a form? It says the logarithm of the probability density is a quadratic function. Why? Is there some intuitive or clear, compelling explanation for this?
---
I confess I am unable to reach the ultimate goal of supplying answers that are simple enough to meet Srikant's challenging criteria for intuitiveness and simplicity, but I have sketched this background in the hope that others might be inspired to fill in some of the many gaps. I think a good demonstration will ultimately have to rely on an elementary analysis of how values between $\alpha_n = a s_n + m_n$ and $\beta_n = b s_n + m_n$ can arise in forming the sum $x_1 + x_2 + \ldots + x_n$. Going back to the single-box version of the CLT, the case of a symmetric distribution is simpler to handle: its median equals its mean, so there's a 50% chance that $x_i$ will be less than the box's mean and a 50% chance that $x_i$ will be greater than its mean. Moreover, when $n$ is sufficiently large, the positive deviations from the mean ought to compensate for the negative deviations in the mean. (This requires some careful justification, not just hand waving.) Thus we ought primarily to be concerned about counting the numbers of positive and negative deviations and only have a secondary concern about their sizes. (Of all the things I have written here, this might be the most useful at providing some intuition about why the CLT works. Indeed, the technical assumptions needed to make the generalizations of the CLT true essentially are various ways of ruling out the possibility that rare huge deviations will upset the balance enough to prevent the limiting histogram from arising.)
This shows, to some degree anyway, why the first generalization of the CLT does not really uncover anything that was not in de Moivre's original Bernoulli trial version.
At this point it looks like there is nothing for it but to do a little math: we need to count the number of distinct ways in which the number of positive deviations from the mean can differ from the number of negative deviations by any predetermined value $k$, where evidently $k$ is one of $-n, -n+2, \ldots, n-2, n$. But because vanishingly small errors will disappear in the limit, we don't have to count precisely; we only need to approximate the counts. To this end it suffices to know that
$$\text{The number of ways to obtain } k \text{ positive and } n-k \text{ negative values out of } n$$
$$\text{equals } \frac{n-k+1}{k}$$
$$\text{times the number of ways to get } k-1 \text{ positive and } n-k+1 \text { negative values.}$$
(That's a perfectly elementary result so I won't bother to write down the justification.) Now we approximate wholesale. The maximum frequency occurs when $k$ is as close to $n/2$ as possible (also elementary). Let's write $m = n/2$. Then, relative to the maximum frequency, the frequency of $m+j+1$ positive deviations ($j \ge 0$) is estimated by the product
$$\frac{m+1}{m+1} \frac{m}{m+2} \cdots \frac{m-j+1}{m+j+1}$$
$$=\frac{1 - 1/(m+1)}{1 + 1/(m+1)} \frac{1-2/(m+1)}{1+2/(m+1)} \cdots \frac{1-j/(m+1)}{1+j/(m+1)}.$$
135 years before de Moivre was writing, John Napier invented logarithms to simplify multiplication, so let's take advantage of this. Using the approximation
$$\log\left(\frac{1-x}{1+x}\right) = -2x - \frac{2x^3}{3} + O(x^5),$$
we find that the log of the relative frequency is approximately
$$-\frac{2}{m+1}\left(1 + 2 + \cdots + j\right) - \frac{2}{3(m+1)^3}\left(1^3+2^3+\cdots+j^3\right) = -\frac{j^2}{m} + O\left(\frac{j^4}{m^3}\right).$$
Because the error in approximating this sum by $-j^2/m$ is on the order of $j^4/m^3$, the approximation ought to work well provided $j^4$ is small relative to $m^3$. That covers a greater range of values of $j$ than is needed. (It suffices for the approximation to work for $j$ only on the order of $\sqrt{m}$ which asymptotically is much smaller than $m^{3/4}$.)
Consequently, writing $$z = \sqrt{2}\,\frac{j}{\sqrt{m}} = \frac{j/n}{1 / \sqrt{4n}}$$ for the standardized deviation, the relative frequency of deviations of size given by $z$ must be proportional to $\exp(-z^2/2)$ for large $m.$ Thus appears the Gaussian law of #3 above.
---
Obviously much more analysis of this sort should be presented to justify the other assertions in the CLT, but I'm running out of time, space, and energy and I've probably lost 90% of the people who started reading this anyway. This simple approximation, though, suggests how de Moivre might originally have suspected that there is a universal limiting distribution, that its logarithm is a quadratic function, and that the proper scale factor $s_n$ must be proportional to $\sqrt{n}$ (as shown by the denominator of the preceding formula). It is difficult to imagine how this important quantitative relationship could be explained without invoking some kind of mathematical information and reasoning; anything less would leave the precise shape of the limiting curve a complete mystery.
| null | CC BY-SA 4.0 | null | 2010-10-22T21:11:42.460 | 2021-03-25T13:32:15.133 | 2021-03-25T13:32:15.133 | 919 | 919 | null |
3906 | 2 | null | 3902 | 3 | null | Looking in the `corr` function, I see that it calls `tiedrank`:
```
$ grep -B 1 tiedrank corr.m
case 'k' % Kendall's tau
[xrank, xadj] = tiedrank(xi,1);
[yrank, yadj] = tiedrank(yj,1);
--
case 's' % Spearman's rank correlation
[xrank, xadj] = tiedrank(xi,0);
[yrank, yadj] = tiedrank(yj,0);
```
Issuing a `help tiedrank` in Matlab should help you further along.
| null | CC BY-SA 2.5 | null | 2010-10-23T00:16:03.480 | 2010-10-23T00:16:03.480 | null | null | 795 | null |
3907 | 1 | 4073 | null | 6 | 4375 | We have prospective data from an observational registry and wish to consider the effects of a gene on time to cardiovascular events. The data include standard variables like age, gender, ... and also the date (age) of a cardiovascular event.
In general terms, what are the conditions/assumptions/requirements for the starting time in a survival analysis? Entry into the registry occurs at a random time for each individual (as he or she is recruited into the registry).
There is obviously no date at which the registry participants' genes changed, and there is no intervention. Is it valid to use the date of enrollment in the registry as the start time for the survival analysis?
We (the group working on this) are asking because some work has already been done in which the enrollment date was used as the start time for the survival interval, and there is now some disagreement as to whether this is a valid approach.
| Start time requirements or assumptions for survival analysis | CC BY-SA 2.5 | null | 2010-10-23T01:00:11.397 | 2023-03-03T10:44:07.670 | 2010-10-24T16:31:20.160 | null | 1189 | [
"regression",
"survival"
]
|
3909 | 2 | null | 3893 | 18 | null | This is a really interesting question and I don't offer any specific citations. However, in general, I'd say NO: in and of itself, cross-validation does not offer any insight into causality. In the absence of a designed experiment, the issue of causality is always uncertain. As you suggest, cross-validation can and will improve predictive accuracy. This, alone, says nothing about causality.
Absent a designed experiment, causal inference would require a model that includes all of the relevant predictors--something that we can rarely guarantee in an observational study. Moreover, a simple lag variable, for example (or anything highly correlated with whatever outcome we were trying to predict) would produce a good model and one which could be validated in multiple samples. That does not mean, however, that we can infer causation. Cross-validation assures repeatability in predictions and nothing more. Causality is a matter of design and logic.
EDIT:
Here's an example to illustrate. I could build a model with good predictive accuracy that predicts the population of a city based on the amount of money the city spends on trash removal. I could use cross-validation to test the accuracy of that model as well as other methods to improve the accuracy of prediction and get more stable parameters. Now, while this model works great for prediction, the causal logic is wrong--the causal direction is reversed. No matter what the folks in the Public Works Department might argue, increasing their budget for trash removal would not be a good strategy to increase the city's population (the causal interpretation).
The issues of accuracy and repeatability of a model are separate from our ability to make causal inferences about the relationships we observe. Cross-validation helps us with the former and not with the latter. Now, IF we are estimating a "correct" model in terms of specifying a causal relationship (for example, trying to determine what our trash removal budget should be based on our expected population next year), cross-validation can help us to have greater confidence in our estimate of that effect. However, cross-validation does nothing to help us choose the "correct" model with regard to causal relationships. Again, here we need to rely on the design of the study, our subject matter expertise, theory, and logic.
| null | CC BY-SA 2.5 | null | 2010-10-23T05:14:26.467 | 2010-10-24T18:29:37.743 | 2010-10-24T18:29:37.743 | 485 | 485 | null |
3910 | 2 | null | 3724 | 6 | null | Use VAR (with n-1 denominator) when you wish to estimate the variance of the underlying population from the sample, or VARP (with n denominator) when the sample is the population.
I find the name "population variance" quite ambiguous...
| null | CC BY-SA 2.5 | null | 2010-10-23T06:00:07.847 | 2010-10-23T06:00:07.847 | null | null | 1679 | null |
3911 | 1 | null | null | 33 | 3525 | If I understand correctly, a confidence interval of a parameter is an interval constructed by a method which yields intervals containing the true value for a specified proportion of samples. So the 'confidence' is about the method rather than the interval I compute from a particular sample.
As a user of statistics I have always felt cheated by this since the space of all samples is hypothetical. All I have is one sample and I want to know what that sample tells me about a parameter.
Is this judgement wrong? Are there ways of looking at confidence intervals, at least in some circumstances, which would be meaningful to users of statistics?
[This question arises from second thoughts after dissing confidence intervals in a math.se answer [https://math.stackexchange.com/questions/7564/calculating-a-sample-size-based-on-a-confidence-level/7572#7572](https://math.stackexchange.com/questions/7564/calculating-a-sample-size-based-on-a-confidence-level/7572#7572) ]
| When are confidence intervals useful? | CC BY-SA 2.5 | null | 2010-10-23T07:48:41.417 | 2011-06-07T12:24:47.820 | 2017-04-13T12:19:38.447 | -1 | 1393 | [
"confidence-interval",
"interpretation"
]
|
3912 | 2 | null | 3911 | 15 | null | I like to think of CIs as a way to escape the Hypothesis Testing (HT) framework, at least the binary decision framework following [Neyman](http://j.mp/awJEkH)'s approach, and to stay in line with the theory of measurement in some way. More precisely, I view them as closer to the reliability of an estimate (a difference of means, for instance), whereas HT is closer to hypothetico-deductive reasoning, with its pitfalls (we cannot accept the null, the alternative is often stochastic, etc.). Still, with both interval estimation and HT we have to rely on distributional assumptions most of the time (e.g. a sampling distribution under $H_0$), which allows us to make inferences from our sample to the general population or a representative one (at least in the frequentist approach).
In many contexts, CIs are complementary to the usual HT, and I view them as in the following picture (it is drawn under $H_0$):

that is, under the HT framework (left), you look at how far your statistic is from the null, while with CIs (right) you are looking at the null effect "from your statistic", in a certain sense.
Also, note that for certain kinds of statistics, like the odds ratio, HT is often meaningless and it is better to look at the associated CI, which is asymmetrical and provides more relevant information as to the direction and precision of the association, if any.
| null | CC BY-SA 2.5 | null | 2010-10-23T08:09:48.700 | 2010-10-23T08:17:26.293 | 2010-10-23T08:17:26.293 | 930 | 930 | null |
3913 | 2 | null | 3892 | 1 | null | It possibly also relates to designed experiments - if x is a controlled quantity that is part of the experimental design, it is treated as deterministic; whilst y is the outcome, and is a random quantity. x might be a continuous quantity (eg concentration of some drug) but could be a 0/1 split (leading to a 2 sample t-test assuming y is Gaussian). If x is a continuous quantity there may be some measurement error, but typically if this is much smaller than the variability of y then this is ignored.
| null | CC BY-SA 2.5 | null | 2010-10-23T08:22:28.610 | 2010-10-23T08:22:28.610 | null | null | 1680 | null |
3914 | 2 | null | 3911 | 7 | null | An alternative approach relevant to your 2nd Q, "Are there ways of looking at confidence intervals, at least in some circumstances, which would be meaningful to users of statistics?":
You should take a look at [Bayesian inference](http://en.wikipedia.org/wiki/Bayesian_statistics) and the resulting [credible intervals](http://en.wikipedia.org/wiki/Credible_interval). A 95% credible interval can be interpreted as an interval which you believe has 95% probability of including the true parameter value. The price you pay is that you need to put a [prior probability](http://en.wikipedia.org/wiki/Prior_probability) distribution on the values you believe the true parameter is likely to take before collecting the data. And your prior may differ from someone else's prior, so your resulting credible intervals may also differ even when you use the same data.
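As a minimal sketch of what such an interval looks like in practice (my own hypothetical example, using a conjugate Beta prior for a proportion):
```
## 12 successes in 40 trials, uniform Beta(1, 1) prior
qbeta(c(0.025, 0.975), 1 + 12, 1 + 40 - 12)  # 95% equal-tailed credible interval
```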
This is only my quick and crude attempt to summarise! A good recent textbook with a practical focus is:
Andrew Gelman, John B. Carlin, Hal S. Stern and Donald B. Rubin. "Bayesian Data Analysis" (2nd edition). Chapman & Hall/CRC, 2003. ISBN [978-1584883883](http://en.wikipedia.org/wiki/Special%3aBookSources/9781584883883)
| null | CC BY-SA 2.5 | null | 2010-10-23T08:45:56.277 | 2010-10-23T08:45:56.277 | null | null | 449 | null |
3915 | 1 | 3916 | null | 12 | 3647 | I am trying to assess a 20-item multiple choice test. I want to perform an item analysis such as can be found in [this example](http://www.utexas.edu/academic/ctl/assessment/iar/students/report/itemanalysis-example.php). So for each question I want the P-value and the correlation with the total, and the distribution of the options selected.
I don't know anything about the various statistical software packages out there, but I'd like to use R as I'm comfortable with programming and R is open source. The pseudo-workflow I envision is:
- prepare data in excel and export to CSV
- load data in R
- load a package that does what I need
- execute that package's commands
- export and report.
I am confident with 1 and 2 but having trouble with 3, probably because I don't have the statistical vocabulary to compare the packages I browsed on CRAN. `ltm` looks like it could be the right package but I can't tell. Whatever package is used, what would the commands be?
Side question: in the linked example, what do you suppose MC and MI stand for?
| Item Analysis for an R newbie | CC BY-SA 2.5 | null | 2010-10-23T11:19:34.040 | 2010-10-25T13:55:00.963 | 2010-10-25T13:55:00.963 | 930 | 1681 | [
"r",
"correlation",
"psychometrics",
"scales"
]
|
3916 | 2 | null | 3915 | 11 | null | I can suggest at least two packages that allow you to perform these tasks: [psych](http://cran.r-project.org/web/packages/psych/index.html) (`score.items`) and [ltm](http://cran.r-project.org/web/packages/ltm/index.html) (`descript`). The [CTT](http://cran.r-project.org/web/packages/CTT/index.html) package also seems to handle MCQs, but I have no experience with it. More information can be found on W Revelle's website, [The Personality Project](http://www.personality-project.org/), esp. the page dedicated to [psychometrics with R](http://www.personality-project.org/r/r.guide.html), which provides step-by-step instructions for importing, analyzing and reporting data. Also, the CRAN Task View on [Psychometrics](http://cran.r-project.org/web/views/Psychometrics.html) includes many additional resources.
As described in your link, MC stands for "Mean total raw score of the persons who answered the item with the correct response", and MI for "Mean total score of the persons who did not answer the item with the correct response.". Point-biserial correlation (R(IT)) is also available in the `ltm` package (`biserial.cor`). This is basically an indicator of the discrimination power of the item (since it is the correlation of item and total score), and is related to the discrimination parameter of a 2-PL IRT model or factor loading in Factor Analysis.
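To give a flavour of the commands, here is a hedged sketch (`scored` stands for a hypothetical 0/1 matrix of item correctness that you would obtain by keying the raw responses):
```
library(ltm)
descript(scored)                            # proportions correct, missingness, etc.
biserial.cor(rowSums(scored), scored[, 1])  # point-biserial correlation for item 1
```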
If you really want to reproduce the table you show, I guess you will have to wrap some of this code with custom code, at least to output the same kind of table. I've made a [quick and dirty example](http://gist.github.com/642219) which reproduces your table:
```
dat <- replicate(10, sample(LETTERS[1:4], 100, rep=TRUE))
dat[3,2] <- dat[67,5] <- NA
itan(dat)
P R MC MI NC OMIT A B C D
[1,] 0.23 -0.222 2.870 2.169 23 0 23 22 32 23
[2,] 0.32 -0.378 3.062 1.985 32 1 32 20 14 33
[3,] 0.18 -0.197 2.889 2.207 18 0 18 33 22 27
[4,] 0.33 -0.467 3.212 1.896 33 0 33 18 29 20
[5,] 0.27 -0.355 3.111 2.056 27 1 27 23 23 26
[6,] 0.17 -0.269 3.118 2.169 17 0 17 25 25 33
[7,] 0.21 -0.260 3.000 2.152 21 0 21 24 25 30
[8,] 0.24 -0.337 3.125 2.079 24 0 24 32 22 22
[9,] 0.13 -0.218 3.077 2.218 13 0 13 29 33 25
[10,] 0.25 -0.379 3.200 2.040 25 0 25 25 31 19
```
As these are random responses, biserial correlation and item difficulty are not very meaningful (except to check that data are truly random :). Also, it is worth checking for possible errors, since I drafted the R function in 10'...
| null | CC BY-SA 2.5 | null | 2010-10-23T11:59:38.010 | 2010-10-23T19:56:24.563 | 2010-10-23T19:56:24.563 | 930 | 930 | null |
3917 | 2 | null | 3893 | 12 | null | This is a good question, but the answer is definitely no: cross-validation will not improve causal inference. If you have a mapping between symptoms and diseases, cross-validation will help to ensure that your model matches their joint distribution better than if you had simply fit your model to the entire raw data set, but it can't ever tell you anything about the directionality of causation.
Cross-validation is very important and worth studying, but it does nothing more than prevent you from overfitting to noise in your data set. If you'd like to understand it more, I'd suggest Chapter 7 of ESL: [http://www-stat.stanford.edu/~hastie/Papers/ESLII.pdf](http://www-stat.stanford.edu/~hastie/Papers/ESLII.pdf)
| null | CC BY-SA 2.5 | null | 2010-10-23T12:53:21.263 | 2010-10-23T12:53:21.263 | null | null | 303 | null |
3918 | 2 | null | 3911 | 3 | null | To address your question directly: Suppose that you are contemplating the use of a machine to fill a cereal box with a certain amount of cereal. Obviously, you do not want to overfill/underfill the box. You want to assess the reliability of the machine. You perform a series of tests like so: (a) Use the machine to fill the box and (b) Measure the amount of cereal that is filled in the box.
Using the data collected, you construct a confidence interval for the amount of cereal that the machine is likely to fill in the box. This confidence interval tells us that the interval we obtained has a 95% probability of containing the true amount of cereal the machine will put in the box. As you say, the interpretation of the confidence interval relies on hypothetical, unseen samples generated by the method under consideration. But this is precisely what we want in our context. In the above context, we will use the machine repeatedly to fill the box and thus we care about hypothetical, unseen realizations of the amount of cereal the machine fills in the box.
To abstract away from the above context: a confidence interval gives us a guarantee that if we were to use the method under investigation (in the above example, method = machine) repeatedly, there is a 95% probability that the confidence interval will contain the true parameter.
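A tiny R sketch of that reasoning (the fill weights are hypothetical):
```
set.seed(42)
fills <- rnorm(30, mean = 500, sd = 5)  # 30 hypothetical test fills, in grams
t.test(fills)$conf.int                  # 95% confidence interval for the mean fill
```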
| null | CC BY-SA 2.5 | null | 2010-10-23T13:06:45.637 | 2010-10-23T13:06:45.637 | null | null | null | null |
3919 | 2 | null | 3898 | 1 | null | Just speaking on a practical level, in my discipline (psychology) I have never seen this done for pure factor analysis.
That being said, the significance (fit, really) of a statistical model is normally tested through Structural Equation Modelling, where you attempt to reproduce the observed covariance matrix from the structure you have proposed through factor analysis.
The sem, lavaan and OpenMx packages for R will all do this.
Technically, the Chi square test will tell you if a factor model fits perfectly, but this statistic is almost always significant with any appreciable (200+) sample size.
The psych package for R also gives you the Bayesian Information Criterion as a measure of fit after you specify a factor model, but I am unsure as to how useful this is.
| null | CC BY-SA 2.5 | null | 2010-10-23T13:33:45.217 | 2010-10-23T13:33:45.217 | null | null | 656 | null |
3920 | 2 | null | 3915 | 8 | null | The psych package function alpha produces what you are looking for.
To export this, save the object and use the xtable function to produce LaTeX markup, which can then be formatted by any LaTeX editor.
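A hedged sketch of that workflow (`my_items` stands for a hypothetical data frame of scored items):
```
library(psych)
library(xtable)
a <- alpha(my_items)                          # reliability and item statistics
print(xtable(a$item.stats), type = "latex")   # LaTeX markup for the item statistics
```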
| null | CC BY-SA 2.5 | null | 2010-10-23T13:45:54.690 | 2010-10-23T13:45:54.690 | null | null | 656 | null |
3921 | 1 | null | null | 26 | 12545 | What are good ways to visualize set of Likert responses?
For example, a set of items inquiring about the importance of X to one's decisions about A, B, C, D, E, F & G? Is there something better than stacked bar charts?
- What should be done with responses of N/A? How might they be represented?
- Should the bar charts report percentages, or number of responses? (i.e. should the bars total the same length?)
- If percentages, should the denominator include invalid and/or N/A responses?
I have my own views, but I am looking for other people's ideas.
| Visualizing Likert Item Response Data | CC BY-SA 2.5 | null | 2010-10-23T15:04:28.670 | 2013-08-11T20:36:17.530 | 2011-03-30T18:40:07.603 | 930 | null | [
"data-visualization",
"scales",
"likert"
]
|
3922 | 2 | null | 3921 | 25 | null | Stacked barcharts are generally well understood by non-statisticians, provided they are gently introduced. It is useful to scale them on a common metric (e.g. 0-100%), with a graded color for each category if these are ordinal items (e.g. Likert). I prefer a [dotchart](http://polisci.msu.edu/jacoby/research/dotplots/tpm/Jacoby,%20Dotplots,%20TPM%20Draft.pdf) (Cleveland dot plot) when there are not too many items and no more than 3-5 response categories. But it is really a matter of visual clarity. I generally provide % as it is a standardized measure, and only report both % and counts with non-stacked barcharts. Here is an example of what I mean:
```
data(Environment, package="ltm")
Environment[sample(1:nrow(Environment), 10),1] <- NA
na.count <- apply(Environment, 2, function(x) sum(is.na(x)))
tab <- apply(Environment, 2, table)/
apply(apply(Environment, 2, table), 2, sum)*100
dotchart(tab, xlim=c(0,100), xlab="Frequency (%)",
sub=paste("N", nrow(Environment), sep="="))
text(100, c(2,7,12,17,22,27), rev(na.count), cex=.8)
mtext("# NA", side=3, line=0, at=100, cex=.8)
```

Better rendering could be achieved with `lattice` or `ggplot2`. All items have the same response categories in this particular example, but in the more general case we might expect different ones, so that showing all of them would not seem as redundant as it does here. It would be possible, however, to give the same color to each response category so as to facilitate reading.
But I would say stacked barcharts are better when all items have the same response category, as they help to appreciate the frequency of one response modality across items:

I can also think of some kind of heatmap, which is useful if there are many items with similar response categories.

Missing responses (esp. when non-negligible or localized on a specific item/question) should be reported, ideally for each item. Generally, the % of responses for each category is computed without NAs. This is what is usually done in surveys or psychometrics (we speak of "expressed or observed responses").
P.S.
I can think of fancier things like the pictures shown below (the first one was made by hand, the second is from `ggplot2`, `ggfluctuation(as.table(tab))`), but I don't think they convey information as accurately as a dotplot or barchart, since surface variations are difficult to appreciate.


| null | CC BY-SA 2.5 | null | 2010-10-23T15:11:30.267 | 2010-10-24T08:44:48.867 | 2010-10-24T08:44:48.867 | 930 | 930 | null |
3923 | 2 | null | 1576 | 13 | null | For me (and I hope this is useful) factor analysis is much more useful than PCA.
Recently, I had the pleasure of analysing a scale through factor analysis. This scale (although it's widely used in industry) was developed by using PCA, and to my knowledge had never been factor analysed.
When I performed the factor analysis (principal axis) I discovered that the communalities for three of the items were less than 30%, which means that over 70% of those items' variance was not being analysed. PCA just transforms the data into new linear combinations and doesn't care about communalities. My conclusion was that the scale was not a very good one from a psychometric point of view, and I've confirmed this with a different sample.
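For what it's worth, here is a hedged sketch of how one might check this in R (`my_scale` is a hypothetical data frame of item responses):
```
library(psych)
f <- fa(my_scale, nfactors = 1, fm = "pa")  # principal-axis factoring
sort(f$communality)                         # items with communalities below ~.30 stand out
```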
Essentially, if you want to predict using the factors, use PCA, while if you want to understand the latent factors, use Factor Analysis.
| null | CC BY-SA 3.0 | null | 2010-10-23T15:11:36.053 | 2013-06-25T00:14:55.990 | 2013-06-25T00:14:55.990 | 22047 | 656 | null |