Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2711 | 1 | null | null | 4 | 146 | I'm trying to quantify how good my measurements are in machine learning. Let's say that I have five choices, and that the errors are 4, 2, 0.002, 3, 6. Naturally, I will pick the third one as the hit, but I would like to be able to say the following:
- I'm X% certain that hit is third pick
- I'm Y% certain that hit is first (last) pick
Of course X >> Y, but I would like to know whether there is some kind of statistical metric for this. I can always introduce my own metric, but I'm interested in whether an established one already exists, so that I can avoid my own explanation and just refer to a particular metric.
The closest one is 'confidence interval', but that gives me the probability that my results are in some interval.
| How good is my error? | CC BY-SA 2.5 | null | 2010-09-16T08:01:38.570 | 2010-10-17T11:45:39.267 | 2010-09-17T11:04:45.970 | 159 | 1313 | [
"machine-learning",
"model-selection"
] |
2712 | 2 | null | 2711 | 2 | null | I would recommend cross-validation; from all iterations you get not only the average error but also an approximation of its distribution, which you can use to estimate a confidence interval. A minimal sketch of this idea is shown below.
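A rough R sketch of this (my own illustration; the data, the linear model, and the percentile interval over fold errors are all assumptions, not part of the original answer):
```
# 10-fold cross-validation of a simple linear model, keeping each fold's test error
set.seed(1)
n <- 200
d <- data.frame(x = rnorm(n))
d$y <- 2 * d$x + rnorm(n)

k <- 10
folds <- sample(rep(1:k, length.out = n))
fold_mse <- sapply(1:k, function(i) {
  fit <- lm(y ~ x, data = d[folds != i, ])
  mean((d$y[folds == i] - predict(fit, newdata = d[folds == i, ]))^2)
})

mean(fold_mse)                       # average cross-validated error
quantile(fold_mse, c(0.025, 0.975))  # rough interval from the distribution of fold errors
```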
| null | CC BY-SA 2.5 | null | 2010-09-16T09:24:21.157 | 2010-09-16T09:24:21.157 | null | null | null | null |
2713 | 2 | null | 2272 | 15 | null | The answers provided before are very helpful and detailed. Here is my $0.25.
A confidence interval (CI) is a concept based on the classical definition of probability (also called the "frequentist definition"), which holds that probability is like a proportion and is based on the axiomatic system of Kolmogorov (and others).
Credible intervals (Highest Posterior Density intervals, HPDs) can be considered to have their roots in decision theory, based on the works of Wald and de Finetti (and extended a lot by others).
As people in this thread have done a great job of giving examples and explaining the differences between hypotheses in the Bayesian and frequentist cases, I will just stress a few important points.
- CIs are based on the fact that inference MUST be made on all possible repetitions of an experiment that can be seen, and NOT only on the observed data, whereas HPDs are based ENTIRELY on the observed data (and obviously on our prior assumptions).
- In general CIs are NOT coherent (this will be explained later) whereas HPDs are coherent (due to their roots in decision theory). Coherence (as I would explain it to my grandmother) means: given a betting problem on a parameter value, if a classical statistician (frequentist) bets on CIs and a Bayesian bets on HPDs, the frequentist IS BOUND to lose (excluding the trivial case when HPD = CI). In short, if you want to summarize the findings of your experiment as a probability based on the data, the probability HAS to be a posterior probability (based on a prior). There is a theorem (cf. Heath and Sudderth, Annals of Statistics, 1978) which (roughly) states: Assignment of probability to $\theta$ based on data will not make a sure loser if and only if it is obtained in a Bayesian way.
- As CIs don't condition on the observed data (this is also called the "Conditionality Principle", CP), there can be paradoxical examples. Fisher was a big supporter of the CP and found a lot of paradoxical examples when it was NOT followed (as in the case of CIs). This is the reason why he used p-values for inference, as opposed to CIs; in his view p-values were based on the observed data (much can be said about p-values, but that is not the focus here). Two of the very famous paradoxical examples are the next two points (Cox's and Welch's examples).
- Cox's example (Annals of Math. Stat., 1958):
$X_i \sim \mathcal{N}(\mu, \sigma^2)$ (iid) for $i\in\{1,\dots,n\}$, and we want to estimate $\mu$. $n$ is NOT fixed but is chosen by tossing a coin: if the toss results in heads, $n=2$ is chosen, otherwise $n=1000$. The "common sense" estimate, the sample mean, is unbiased with an (unconditional) variance of $0.25\sigma^2+0.0005\sigma^2$. But what do we use as the variance of the sample mean when $n = 1000$? Isn't it better (and sensible) to use the conditional variance of the sample mean, $0.001\sigma^2$, instead of the actual unconditional variance of the estimator, which is HUGE ($0.25\sigma^2+0.0005\sigma^2$)? Using the variance $0.001\sigma^2$ when $n=1000$ is a simple illustration of the CP. $n$ on its own carries no information about $\mu$ and $\sigma$ (i.e. $n$ is ancillary for them), but GIVEN its value, you know a lot about the "quality of the data". This relates directly to CIs, as they involve the variance that is not conditioned on $n$; i.e., we end up using the larger variance and are hence overly conservative.
- Welch's example:
This example works for any $n$, but we will take $n=2$ for simplicity.
$X_1, X_2 \sim \mathcal{U}(\theta - 1/2, \theta +1/2)$ (iid), where $\theta$ belongs to the real line. This implies $X_i - \theta \sim \mathcal{U}(-1/2, 1/2)$ (iid). The quantity ${\bar x} - \theta = \frac{1}{2}(X_1 + X_2) - \theta$ (note that this is NOT a statistic) has a distribution independent of $\theta$. We can choose $c > 0$ s.t. $\text{Prob}_\theta(-c \le {\bar x} - \theta \le c) = 1-\alpha\ (\approx 99\%)$, implying that $({\bar x} - c, {\bar x} + c)$ is a 99% CI for $\theta$. The interpretation of this CI is: if we sample repeatedly, we will get different ${\bar x}$, and (at least) 99% of the time the interval will contain the true $\theta$; BUT (the elephant in the room) for GIVEN data, we DON'T know the probability that the CI contains the true $\theta$. Now consider the following data: $X_1 = 0$ and $X_2=1$. As $|X_1 - X_2|=1$, we know FOR SURE that the interval $(X_1, X_2)$ contains $\theta$ (one possible criticism: $\text{Prob}(|X_1 - X_2|=1) = 0$, but we can handle it mathematically and I won't discuss it). This example also illustrates the concept of coherence beautifully. If you are a classical statistician, you will definitely bet on the 99% CI without looking at the value of $|X_1 - X_2|$ (assuming you are true to your profession). However, a Bayesian will bet on the CI only if the value of $|X_1 - X_2|$ is close to 1. If we condition on $|X_1 - X_2|$, the interval is coherent and the player is no longer a sure loser (similar to the theorem by Heath and Sudderth). A small simulation of this example is sketched after this list.
- Fisher had a recommendation for such problems - use the CP. For Welch's example, Fisher suggested conditioning on $X_2-X_1$. As we see, $X_2-X_1$ is ancillary for $\theta$, but it provides information about $\theta$: if $X_2-X_1$ is SMALL, there is not a lot of information about $\theta$ in the data; if $X_2-X_1$ is LARGE, there is a lot. Fisher extended the strategy of conditioning on an ancillary statistic to a general theory called Fiducial Inference (also called his greatest failure, cf. Zabell, Stat. Sci. 1992), but it didn't become popular due to a lack of generality and flexibility. Fisher was trying to find a way different from both classical statistics (of the Neyman school) and the Bayesian school (hence the famous adage from Savage: "Fisher wanted to make a Bayesian omelette (i.e. using the CP) without breaking the Bayesian eggs"). Folklore (no proof) says: Fisher in his debates attacked Neyman (over Type I and Type II errors and CIs) by calling him a Quality Control guy rather than a Scientist, as Neyman's methods didn't condition on the observed data but instead looked at all possible repetitions.
- Statisticians also want to use the Sufficiency Principle (SP) in addition to the CP. But the SP and CP together imply the Likelihood Principle (LP) (cf. Birnbaum, JASA, 1962), i.e. given the CP and SP, one must ignore the sample space and look at the likelihood function only. Thus, we only need to look at the given data and NOT at the whole sample space (looking at the whole sample space is in a way similar to repeated sampling). This has led to concepts like the Observed Fisher Information (cf. Efron and Hinkley, AS, 1978), which measures the information in the data from a frequentist perspective. The amount of information in the data is a Bayesian concept (and hence related to HPDs rather than to CIs).
- Kiefer did some foundational work on CI in the late 1970s, but his extensions haven't become popular. A good source of reference is Berger ("Could Fisher, Neyman and Jeffreys agree about testing of hypotheses", Stat Sci, 2003).
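A quick R simulation of Welch's example (my own illustrative sketch, not part of the original answer; the conditioning thresholds 0.5 and 0.02 are arbitrary): overall coverage of the 99% interval is exactly as advertised, while coverage conditional on $|X_1 - X_2|$ is not.
```
set.seed(42)
theta <- 0
alpha <- 0.01
# For X1, X2 ~ U(theta - 1/2, theta + 1/2), xbar - theta is triangular on (-1/2, 1/2),
# and P(|xbar - theta| <= c) = 1 - (1 - 2c)^2, so c = (1 - sqrt(alpha))/2 gives 99%.
c_val <- (1 - sqrt(alpha)) / 2

x1 <- runif(1e5, theta - 0.5, theta + 0.5)
x2 <- runif(1e5, theta - 0.5, theta + 0.5)
xbar <- (x1 + x2) / 2
covered <- abs(xbar - theta) <= c_val
spread <- abs(x1 - x2)

mean(covered)                  # ~0.99: overall coverage is as advertised
mean(covered[spread > 0.5])    # = 1: a wide spread between X1 and X2 guarantees coverage
mean(covered[spread < 0.02])   # ~0.90: conditional coverage falls below the nominal level
```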
---
## Summary:
(As pointed out by Srikant and others)
CIs can't be interpreted as probabilities, and they don't tell us anything about the unknown parameter GIVEN the observed data. CIs are statements about repeated experiments.
HPDs are probabilistic intervals based on the posterior distribution of the unknown parameter and have a probability-based interpretation given the observed data.
The frequentist (repeated sampling) property is desirable, and HPDs (with appropriate priors) and CIs both have it. HPDs additionally condition on the given data in answering questions about the unknown parameter.
(Objective, NOT subjective) Bayesians agree with the classical statisticians that there is a single TRUE value of the parameter. However, they differ in the way they make inferences about this true parameter.
Bayesian HPDs give us a good way of conditioning on data, but if they fail to agree with the frequentist properties of CIs they are not very useful (analogy: a person who uses HPDs (with some prior) without a good frequentist property is bound to be doomed, like a carpenter who only cares about the hammer and forgets the screwdriver).
At last, I have seen people in this thread (comments by Dr. Joris: "...assumptions involved imply a diffuse prior, i.e. a complete lack of knowledge about the true parameter.") talking about a lack of knowledge about the true parameter being equivalent to using a diffuse prior. I DON'T know if I can agree with the statement (Dr. Keith agrees with me). For example, in the basic linear models case, some distributions can be obtained by using a uniform prior (which some people call diffuse), BUT it DOESN'T mean that the uniform distribution can be regarded as a LOW INFORMATION PRIOR. In general, a NON-INFORMATIVE (objective) prior doesn't mean it has low information about the parameter.
Note: A lot of these points are based on lectures by one of the prominent Bayesians. I am still a student and could have misunderstood him in some way. Please accept my apologies in advance.
| null | CC BY-SA 4.0 | null | 2010-09-16T09:35:44.257 | 2018-11-18T23:03:07.730 | 2018-11-18T23:03:07.730 | 79696 | 1307 | null |
2714 | 2 | null | 1487 | 0 | null | Using PCA alone to analyze your gene expression data can lead to (possibly) wrong conclusions. The fundamental assumption behind PCA is equal genewise variation, and this is very often NOT justified. The links to papers on analyzing time-course data posted by csgillespie point to better ways.
| null | CC BY-SA 2.5 | null | 2010-09-16T10:02:46.633 | 2010-09-16T10:02:46.633 | null | null | 1307 | null |
2715 | 1 | null | null | 88 | 10508 | I like G van Belle's book on [Statistical Rules of Thumb](http://vanbelle.org/), and to a lesser extent [Common Errors in Statistics (and How to Avoid Them)](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1118294394.html) from Phillip I Good and James W. Hardin. They address common pitfalls when interpreting results from experimental and observational studies and provide practical recommendations for statistical inference, or exploratory data analysis. But I feel that "modern" guidelines are somewhat lacking, especially with the ever growing use of computational and robust statistics in various fields, or the introduction of techniques from the machine learning community in, e.g. clinical biostatistics or genetic epidemiology.
Apart from computational tricks or common pitfalls in data visualization which could be addressed elsewhere, I would like to ask: What are the top rules of thumb you would recommend for efficient data analysis? (one rule per answer, please).
I am thinking of guidelines that you might provide to a colleague, a researcher without strong background in statistical modeling, or a student in intermediate to advanced course. This might pertain to various stages of data analysis, e.g. sampling strategies, feature selection or model building, model comparison, post-estimation, etc.
| Rules of thumb for "modern" statistics | CC BY-SA 3.0 | null | 2010-09-16T10:21:36.837 | 2019-04-12T22:15:09.463 | 2017-01-17T11:02:13.353 | 96626 | 930 | [
"modeling",
"exploratory-data-analysis",
"rule-of-thumb"
] |
2716 | 2 | null | 2691 | 36 | null | OK, a totally non-math answer:
If you have a bunch of variables on a bunch of subjects and you want to reduce it to a smaller number of variables on those same subjects, while losing as little information as possible, then PCA is one tool to do this.
It differs from factor analysis, although they often give similar results, in that FA tries to recover a small number of latent variables from a larger number of observed variables that are believed to be related to the latent variables.
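A quick sketch of the two approaches in R (my own addition; the built-in `swiss` data and the two-factor model are arbitrary choices for illustration):
```
data(swiss)

# PCA: components are weighted sums of the observed variables
pc <- prcomp(swiss, scale. = TRUE)
summary(pc)          # variance accounted for by each component
pc$rotation[, 1:2]   # weights defining the first two components

# Factor analysis: observed variables modelled as outcomes of latent factors plus error
fa <- factanal(swiss, factors = 2, rotation = "varimax")
fa$loadings
```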
| null | CC BY-SA 2.5 | null | 2010-09-16T10:23:42.837 | 2010-09-16T10:23:42.837 | null | null | 686 | null |
2717 | 1 | 2721 | null | 72 | 95730 | I have a (symmetric) matrix `M` that represents the distance between each pair of nodes. For example,
A B C D E F G H I J K L
A 0 20 20 20 40 60 60 60 100 120 120 120
B 20 0 20 20 60 80 80 80 120 140 140 140
C 20 20 0 20 60 80 80 80 120 140 140 140
D 20 20 20 0 60 80 80 80 120 140 140 140
E 40 60 60 60 0 20 20 20 60 80 80 80
F 60 80 80 80 20 0 20 20 40 60 60 60
G 60 80 80 80 20 20 0 20 60 80 80 80
H 60 80 80 80 20 20 20 0 60 80 80 80
I 100 120 120 120 60 40 60 60 0 20 20 20
J 120 140 140 140 80 60 80 80 20 0 20 20
K 120 140 140 140 80 60 80 80 20 20 0 20
L 120 140 140 140 80 60 80 80 20 20 20 0
Is there any method to extract clusters from `M` (if needed, the number of clusters can be fixed), such that each cluster contains nodes with small distances between them? In the example, the clusters would be `(A, B, C, D)`, `(E, F, G, H)` and `(I, J, K, L)`.
I've already tried UPGMA and `k`-means but the resulting clusters are very bad.
The distances are the average number of steps a random walker would take to go from node `A` to node `B` (`!= A`) and back to node `A`. It's guaranteed that `M^1/2` is a metric. To run `k`-means, I don't use the centroid; I define the distance between node `n` and cluster `c` as the average distance between `n` and all nodes in `c`.
Thanks a lot :)
| Clustering with a distance matrix | CC BY-SA 2.5 | null | 2010-09-16T11:47:15.633 | 2023-03-25T22:33:04.297 | 2023-03-25T22:33:04.297 | 60613 | 1316 | [
"clustering",
"distance-functions"
] |
2718 | 2 | null | 2715 | 38 | null |
## There is no free lunch
A large part of statistical failures is created by clicking a big shiny button called "Calculate significance" without taking into account its burden of hidden assumptions.
## Repeat
Even if only a single call to a random generator is involved, one may have good or bad luck and so jump to the wrong conclusions.
| null | CC BY-SA 3.0 | null | 2010-09-16T12:08:07.737 | 2013-06-17T18:04:25.600 | 2020-06-11T14:32:37.003 | -1 | null | null |
2719 | 2 | null | 2715 | 18 | null | There can be a long list but to mention a few: (in no specific order)
- A p-value is NOT the probability that the hypothesis is true. In particular, it is not the probability of committing a Type I error. Similarly, CIs have no probabilistic interpretation for the given data; they apply to repeated experiments.
- Problems related to variance dominate bias most of the time in practice, so a biased estimate with small variance is better than an unbiased estimate with large variance (most of the time).
- Model fitting is an iterative process. Before analyzing the data, understand the source of the data and the possible models that do or don't fit its description. Also, try to account for any design issues in your model.
- Use visualization tools and look at the data (for possible abnormalities, obvious trends, etc.) before analyzing it, and use visualization (if possible) to see how the model fits that data.
- Last but not least, use statistical software for what it is made for (making computation easier); it is not a substitute for human thinking.
| null | CC BY-SA 3.0 | null | 2010-09-16T12:15:45.783 | 2017-07-08T08:12:59.197 | 2017-07-08T08:12:59.197 | 79696 | 1307 | null |
2720 | 2 | null | 1815 | 9 | null | I am surprised no one has mentioned Statistical Design by George Casella:
[Google Books Link](http://books.google.com/books?id=sqnbSUtryVAC&printsec=frontcover&dq=statistical++design+casella+book&source=bl&ots=634WifKYf5&sig=nfzDyBldgGgNFTc8rl8i8XYLfp8&hl=en&ei=sQ2STNnXG4Oenwf06_nVCA&sa=X&oi=book_result&ct=result&resnum=9&ved=0CD8Q6AEwCA#v=onepage&q=statistical%20%20design%20casella%20book&f=false)
| null | CC BY-SA 2.5 | null | 2010-09-16T12:31:45.183 | 2010-09-16T12:31:45.183 | null | null | 1307 | null |
2721 | 2 | null | 2717 | 48 | null | There are a number of options.
## k-medoids clustering
First, you could try partitioning around medoids (pam) instead of using k-means clustering. This one is more robust, and could give better results. Van der Laan reworked the algorithm. If you're going to implement it yourself, his [article](https://biostats.bepress.com/cgi/viewcontent.cgi?article=1003&context=ucbbiostat) is worth a read.
There is a specific k-medoids clustering algorithm for large datasets. The algorithm is called Clara in R, and is described in chapter 3 of Finding Groups in Data: An Introduction to Cluster Analysis by Kaufman, L. and Rousseeuw, P.J. (1990).
## hierarchical clustering
Instead of UPGMA, you could try some other hierarchical clustering options. First of all, when you use hierarchical clustering, be sure you define the agglomeration (linkage) method properly. This method essentially determines how the distances between observations and clusters are calculated. I mostly use Ward's method or complete linkage, but other options might be the choice for you.
Don't know if you tried it yet, but the single linkage method (or neighbour joining) is often preferred over UPGMA in phylogenetic applications. If you haven't tried it, you could give it a shot as well, as it often gives remarkably good results.
---
In R you can take a look at the package cluster. All described algorithms are implemented there. See ?pam, ?clara, ?hclust, ... Check also the different implementations of the algorithm in ?kmeans. Sometimes choosing another algorithm can improve the clustering substantially; a minimal example with pam is sketched below.
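For instance, a rough sketch of `pam()` run directly on a dissimilarity object (my own toy distance matrix, not the asker's data):
```
library(cluster)

# a toy symmetric distance matrix (illustrative values only)
m <- matrix(c(0, 2, 2, 9, 9,
              2, 0, 2, 9, 9,
              2, 2, 0, 9, 9,
              9, 9, 9, 0, 2,
              9, 9, 9, 2, 0),
            nrow = 5, dimnames = list(LETTERS[1:5], LETTERS[1:5]))

fit <- pam(as.dist(m), k = 2)   # pam accepts a dissimilarity object directly
fit$clustering                  # cluster membership of each node
fit$silinfo$avg.width           # average silhouette width, handy for comparing values of k
```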
---
EDIT: Just thought about something: if you work with graphs and nodes and the like, you should take a look at the Markov clustering algorithm (MCL) as well. It is used, for example, to group sequences based on BLAST similarities, and performs incredibly well. It can do the clustering for you, or give you some ideas on how to solve the research problem you're focusing on. Even without knowing anything more about your problem, I guess its results are definitely worth looking at. If I may say so, I still consider this method of Stijn van Dongen one of the nicest results in clustering I've ever encountered.
[http://www.micans.org/mcl/](http://www.micans.org/mcl/)
| null | CC BY-SA 4.0 | null | 2010-09-16T12:36:49.757 | 2019-12-17T11:50:35.440 | 2019-12-17T11:50:35.440 | 142758 | 1124 | null |
2722 | 2 | null | 1815 | 2 | null | [The Design of Experiments: Statistical Principles for Practical Applications](http://books.google.co.uk/books?id=CaFZPbCllrMC) by Roger Mead. Examples are drawn from agriculture and biology, so probably most appropriate if you're interested in one of those fields. Rather expensive for a 600-page paperback but you can probably find it second-hand.
| null | CC BY-SA 2.5 | null | 2010-09-16T12:45:22.580 | 2010-09-16T12:45:22.580 | null | null | 449 | null |
2723 | 2 | null | 2715 | 66 | null | Don't forget to do some basic data checking before you start the analysis. In particular, look at a scatter plot of every variable you intend to analyse against ID number, date / time of data collection or similar. The eye can often pick up patterns that reveal problems when summary statistics don't show anything unusual. And if you're going to use a log or other transformation for analysis, also use it for the plot.
| null | CC BY-SA 2.5 | null | 2010-09-16T12:57:26.753 | 2010-09-16T12:57:26.753 | null | null | 449 | null |
2724 | 2 | null | 2715 | 29 | null | One thing I tell my students is to produce an appropriate graph for every p-value. e.g., a scatterplot if they test correlation, side-by-side boxplots if they do a one-way ANOVA, etc.
| null | CC BY-SA 2.5 | null | 2010-09-16T13:13:10.500 | 2010-09-16T13:13:10.500 | null | null | 159 | null |
2726 | 1 | 2735 | null | 5 | 181 | Is there a way to have R plot to an in-memory object or connection, rather than a named file?
I would like to have a plotting server create many graphs without ever going to a file.
The [Cairo](http://cran.r-project.org/web/packages/Cairo/index.html) package documents use of a connection, but it doesn't seem to work. What I would like to do is something like:
```
library(Cairo)
plot.to.var <- function(data) {
  tc <- textConnection("output", "w")  # writes to an in-memory character vector named 'output'
  CairoPDF(tc)                         # try to open the PDF device on the connection
  plot(data)
  dev.off()
  close(tc)
  output
}
```
When I do this, CairoPDF mentions a connections patch that would allow me to do this, but I can't find a reference to it, even though the documentation shows this working.
I have no particular desire to use Cairo, merely saw that the documentation mentioned this.
Any ideas?
| R plots to a connection? | CC BY-SA 2.5 | null | 2010-09-16T13:46:18.170 | 2010-09-16T16:46:17.977 | null | null | 1119 | [
"r",
"data-visualization"
] |
2727 | 2 | null | 2717 | 26 | null | One way to highlight clusters on your distance matrix is by way of [Multidimensional scaling](http://en.wikipedia.org/wiki/Multidimensional_scaling). When projecting individuals (here what you call your nodes) in an 2D-space, it provides a comparable solution to PCA. This is unsupervised, so you won't be able to specify a priori the number of clusters, but I think it may help to quickly summarize a given distance or similarity matrix.
Here is what you would get with your data:
```
tmp <- matrix(c(0,20,20,20,40,60,60,60,100,120,120,120,
20,0,20,20,60,80,80,80,120,140,140,140,
20,20,0,20,60,80,80,80,120,140,140,140,
20,20,20,0,60,80,80,80,120,140,140,140,
40,60,60,60,0,20,20,20,60,80,80,80,
60,80,80,80,20,0,20,20,40,60,60,60,
60,80,80,80,20,20,0,20,60,80,80,80,
60,80,80,80,20,20,20,0,60,80,80,80,
100,120,120,120,60,40,60,60,0,20,20,20,
120,140,140,140,80,60,80,80,20,0,20,20,
120,140,140,140,80,60,80,80,20,20,0,20,
120,140,140,140,80,60,80,80,20,20,20,0),
nr=12, dimnames=list(LETTERS[1:12], LETTERS[1:12]))
d <- as.dist(tmp)
mds.coor <- cmdscale(d)
plot(mds.coor[,1], mds.coor[,2], type="n", xlab="", ylab="")
text(jitter(mds.coor[,1]), jitter(mds.coor[,2]),
rownames(mds.coor), cex=0.8)
abline(h=0,v=0,col="gray75")
```
[Figure: two-dimensional MDS configuration of the 12 nodes]
I added a small jitter to the x and y coordinates to keep overlapping cases distinguishable. Replace `tmp` by `1-tmp` if you'd prefer working with dissimilarities, but this yields essentially the same picture. However, here is the hierarchical clustering solution, with the single-linkage agglomeration criterion:
```
plot(hclust(dist(1-tmp), method="single"))
```
[Figure: single-linkage dendrogram of the 12 nodes]
You might further refine the selection of clusters based on the dendrogram, or more robust methods, see e.g. this related question: [What stop-criteria for agglomerative hierarchical clustering are used in practice?](https://stats.stackexchange.com/questions/2597/what-stop-criteria-for-agglomerative-hierarchical-clustering-are-used-in-practice)
| null | CC BY-SA 2.5 | null | 2010-09-16T13:59:10.410 | 2010-09-16T13:59:10.410 | 2017-04-13T12:44:55.630 | -1 | 930 | null |
2728 | 1 | 2738 | null | 6 | 3119 | I am clustering a dataset using the pam command (from {cluster} package), and I wish to decide on the number of clusters to use.
I was able to implement The_Elbow_Method in R ([see wiki](http://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set#The_Elbow_Method)) for doing that. But that doesn't provide me with any solid criteria (like AIC, [for example](http://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set#Information_Criterion_Approach)) for decision.
I came by the {clValid} package which looks promising, but I wanted to know if there are any other R solutions (you know of) for choosing the number of clusters for pam?
Here's some dummy code if someone wants to show examples:
```
data(iris)
head(iris)
require(cluster)
pam(iris[,1:4], 3)
```
| Algorithm for choosing the number of clusters when using pam in R? | CC BY-SA 2.5 | null | 2010-09-16T15:40:06.907 | 2010-11-18T20:58:25.817 | 2010-09-16T16:20:44.223 | null | 253 | [
"r",
"clustering",
"model-selection"
] |
2729 | 2 | null | 2728 | 3 | null | What about [silhouette](http://bm2.genes.nig.ac.jp/RGM2/R_current/library/cluster/html/silhouette.html)?
| null | CC BY-SA 2.5 | null | 2010-09-16T15:43:10.977 | 2010-09-16T15:43:10.977 | null | null | null | null |
2730 | 1 | 2778 | null | 24 | 9986 | I am running experiments for a paper, and I am looking for an interesting book / website to properly understand how ANOVA and ANCOVA work. I have a good math background, so I don't necessarily need a popularized explanation.
I'd also like to know how to determine when to use ANOVA instead of ANCOVA.
| Good resource to understand ANOVA and ANCOVA? | CC BY-SA 3.0 | null | 2010-09-16T15:53:31.090 | 2022-10-04T13:56:21.240 | 2015-10-07T13:55:52.557 | 84004 | 1320 | [
"anova",
"references",
"ancova"
] |
2731 | 2 | null | 2728 | 4 | null | The [fpc package](http://cran.r-project.org/web/packages/fpc/index.html) provides a few clustering statistics. If you're looking for information criteria in particular, the `cluster.stats` method provides an information based distance. For mixture models based on clustering, the BIC is available.
| null | CC BY-SA 2.5 | null | 2010-09-16T16:02:54.897 | 2010-09-16T16:02:54.897 | null | null | 251 | null |
2732 | 2 | null | 2730 | 19 | null | So, in addition to this paper, [Misunderstanding Analysis of Covariance](https://web.archive.org/web/20120324142111/http://mres.gmu.edu/pmwiki/uploads/Main/ancova.pdf), which enumerates common pitfalls when using ANCOVA, I would recommend starting with:
- Frank Harrell's homepage, especially his handout on Regression Modeling Strategies and Biostatistical Modeling
- John Fox's homepage includes great material on Linear Model
- Practical Regression and Anova using R
This is mostly R-oriented material, but I feel you might better catch the idea if you start playing a little bit with these models on toy examples or real datasets (and R is great for that).
As for a good book, I would recommend [Design and Analysis of Experiments](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-EHEP000137.html) by Montgomery (now in its 7th ed.); ANCOVA is described in chapter 15. [Plane Answers to Complex Questions](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-95361-8) by Christensen is an excellent book on the theory of the linear model (ANCOVA in chapter 9); it assumes a good mathematical background. Any biostatistical textbook should cover both topics, but I like [Biostatistical Analysis](https://web.archive.org/web/20210507035924/https://www.pearson.com/us/higher-education/program/Zar-Biostatistical-Analysis-5th-Edition/PGM263783.html) by Zar (ANCOVA in chapter 12), mainly because it was one of my first textbooks.
And finally, H. Baayen's textbook is very complete, [Practical Data Analysis for the Language Sciences with R](https://www.cambridge.org/core/journals/journal-of-child-language/article/abs/r-h-baayen-analyzing-linguistic-data-a-practical-introduction-to-statistics-using-r-cambridge-cambridge-university-press-2008-pp-368-isbn13-9780521709187/DABA026906B88245F6754AA8645034EE#). Although it focuses on linguistic data, it includes a very comprehensive treatment of the linear model and mixed-effects models.
| null | CC BY-SA 4.0 | null | 2010-09-16T16:06:13.050 | 2022-10-04T13:56:21.240 | 2022-10-04T13:56:21.240 | 7290 | 930 | null |
2733 | 2 | null | 485 | 4 | null | There are a bunch of helpful video tutorials on basic statistics & data mining with R and Weka at SentimentMining.net.
- http://sentimentmining.net/StatisticsVideos/
| null | CC BY-SA 2.5 | null | 2010-09-16T16:27:02.927 | 2010-09-16T16:27:02.927 | null | null | 653 | null |
2734 | 2 | null | 2730 | 4 | null | [The R book](http://rads.stackoverflow.com/amzn/click/0470510242) does a good job on that. You can see that it dedicates one chapter to each one of those methods (11 and 12). If you are new to R, this is a great book to start with.
| null | CC BY-SA 2.5 | null | 2010-09-16T16:37:35.490 | 2010-09-16T16:48:14.703 | 2010-09-16T16:48:14.703 | 339 | 339 | null |
2735 | 2 | null | 2726 | 3 | null | I don't think so; R does not have binary memory buffers. Cairo feature you mentioned needs recompilation of both R and package; trying to do this with plain R does not throw errors, but nothing is written neither to textConnection nor socket.
So I think the best idea will be to use ramdisk; if you still want to try the patch, see [this](http://rwiki.sciviews.org/doku.php?id=developers%3ar_connections_api).
| null | CC BY-SA 2.5 | null | 2010-09-16T16:46:17.977 | 2010-09-16T16:46:17.977 | null | null | null | null |
2736 | 2 | null | 1736 | 8 | null | [PyIMSL](http://www.vni.com/campaigns/pyimslstudioeval/) contains a handful of routines for survival analyses. It is Free As In Beer for noncommercial use, fully supported otherwise. From the documentation in the Statistics User Guide...
- Computes Kaplan-Meier estimates of survival probabilities: kaplanMeierEstimates()
- Analyzes survival and reliability data using Cox's proportional hazards model: propHazardsGenLin()
- Analyzes survival data using the generalized linear model: survivalGlm()
- Estimates using various parametric models: survivalEstimates()
- Estimates a reliability hazard function using a nonparametric approach: nonparamHazardRate()
- Produces population and cohort life tables: lifeTables()
| null | CC BY-SA 2.5 | null | 2010-09-16T16:53:28.610 | 2010-09-16T18:58:26.863 | 2010-09-16T18:58:26.863 | 1080 | 1080 | null |
2737 | 2 | null | 2686 | 1 | null | I have heard of 'time-based boxcar' functions which might solve your problem. A time-based boxcar sum of 'window size' $\Delta t$ is defined at time $t$ to be the sum of all values between $t - \Delta t$ and $t$. This will be subject to discontinuities which you may or may not want. If you want older values to be downweighted, you can employ a simple or exponential moving average within your time based window.
edit:
I interpret the question as follows: suppose some events occur at times $t_i$
with magnitudes $x_i$. (for example, $x_i$ might be the amount of a bill paid.)
Find some function $f(t)$ which estimates the sum of the magnitudes of the
$x_i$ for times "near" $t$. For one of the examples posed by the OP, $f(t)$
would represent "how much one was paying for electricity" around time $t$.
Similar to this problem is that of estimating the "average" value around
time $t$. For example: [regression](http://en.wikipedia.org/wiki/Regression_analysis), [interpolation](http://en.wikipedia.org/wiki/Interpolation) (not
usually applied to noisy data), and [filtering](http://en.wikipedia.org/wiki/Filter_%28signal_processing%29). You could spend a lifetime
studying just one of these three problems.
A seemingly unrelated problem, statistical in nature, is [Density
Estimation](http://en.wikipedia.org/wiki/Density_estimation). Here the goal is, given observations of magnitudes $y_i$
generated by some process, to estimate, roughly, the probability of that
process generating an event of magnitude $y$. One approach to density
estimation is via a [kernel function](http://en.wikipedia.org/wiki/Kernel_density_estimation). My suggestion is to abuse the kernel
approach for this problem.
Let $w(t)$ be a function such that $w(t) \ge 0$ for all $t$, $w(0) = 1$
(ordinary kernels do not all share this property), and $w'(t) \le 0$. Let $h$
be the bandwidth, which controls how much influence each data point has.
Given data $t_i, x_i$, define the sum estimate by
$$f(t) = \sum_{i=1}^n x_i w(|t - t_i|/h).$$
Some possible values of the function $w(t)$ are as follows:
- a uniform (or 'boxcar') kernel: $w(t) = 1$ for $t \le 1$ and $0$ otherwise;
- a triangular kernel: $w(t) = \max{(0,1-t)}$;
- a quadratic kernel: $w(t) = \max{(0,1-t^2)}$;
- a tricube kernel: $w(t) = \max{(0,(1-t^2)^3)}$;
- a Gaussian kernel: $w(t) = \exp{(-t^2 / 2)}$;
I call these kernels, but they are off by a constant factor here and there; see also a [comprehensive list of kernels](http://en.wikipedia.org/wiki/Kernel_%28statistics%29).
Some example code in Matlab:
```
%%kernels
ker0 = @(t)(max(0,ceil(1-t))); %uniform
ker1 = @(t)(max(0,1-t)); %triangular
ker2 = @(t)(max(0,1-t.^2)); %quadratic
ker3 = @(t)(max(0,(1-t.^2).^3)); %tricube
ker4 = @(t)(exp(-0.5 * t.^2)); %Gaussian
%%compute f(t) given x_i,t_i,kernel,h
ff = @(x_i,t_i,t,kerf,h)(sum(x_i .* kerf(abs(t - t_i) / h)));
%%some sample data: irregular electric bills
sdata = [
datenum(2009,12,30),141.73;...
datenum(2010,01,25),100.45;...
datenum(2010,02,23),98.34;...
datenum(2010,03,30),83.92;...
datenum(2010,05,01),56.21;... %late this month;
datenum(2010,05,22),47.33;...
datenum(2010,06,14),62.84;...
datenum(2010,07,30),83.34;...
datenum(2010,09,10),93.34;... %really late this month
datenum(2010,09,22),78.34;...
datenum(2010,10,22),93.25;...
datenum(2010,11,14),83.39;... %early this month;
datenum(2010,12,30),133.82];
%%some irregular observation times at which to sample the filtered version;
t_obs = sort(datenum(2009,12,01) + 400 * rand(1,400));
t_i = sdata(:,1);x_i = sdata(:,2);
%%compute f(t) for each of the kernel functions;
h = 60; %bandwidth of 60 days;
fx0 = arrayfun(@(t)(ff(x_i,t_i,t,ker0,h)),t_obs);
fx1 = arrayfun(@(t)(ff(x_i,t_i,t,ker1,h)),t_obs);
fx2 = arrayfun(@(t)(ff(x_i,t_i,t,ker2,h)),t_obs);
fx3 = arrayfun(@(t)(ff(x_i,t_i,t,ker3,h)),t_obs);
fx4 = arrayfun(@(t)(ff(x_i,t_i,t,ker4,0.5*h)),t_obs); %!!use smaller bandwidth
%%plot them;
lhand = plot(t_i,x_i,'--rs',t_obs,fx0,'m-+',t_obs,fx1,'b-+',t_obs,fx2,'k-+',...
t_obs,fx3,'g-+',t_obs,fx4,'c-+');
set(lhand(1),'MarkerSize',12);
set(lhand(2:end),'MarkerSize',4);
datetick(gca());
legend(lhand,{'data','uniform','triangular','quadratic','tricube','gaussian'});
```
The plot shows the use of a few kernels on some sample electric bill data. [Figure: the raw bill amounts together with the kernel-weighted sums for the uniform, triangular, quadratic, tricube and Gaussian kernels]
Note
that the uniform kernel is subject to the 'stochastic shocks' which the OP is
trying to avoid. The tricube and Gaussian kernels give much smoother
approximations. If this approach is acceptable, one only has to choose the
kernel and the bandwidth (in general that is a hard problem, but given some
domain knowledge, and some code-test-recode loops, it should not be too
difficult.)
| null | CC BY-SA 2.5 | null | 2010-09-16T17:21:56.977 | 2010-09-24T04:24:59.887 | 2010-09-24T04:24:59.887 | 795 | 795 | null |
2738 | 2 | null | 2728 | 3 | null | You may find an [answer to a similar question](https://stats.stackexchange.com/questions/723/how-can-i-test-whether-my-clustering-of-binary-data-is-significant/749#749) useful. I have also used clValid but, as I recall, it was rather slow (at least for relatively large datasets).
| null | CC BY-SA 2.5 | null | 2010-09-16T18:21:20.133 | 2010-09-16T18:21:20.133 | 2017-04-13T12:44:24.947 | -1 | 339 | null |
2739 | 1 | 2741 | null | 4 | 1583 | Can someone give a concise, layman's explanation of an "If-Then" rule (as in rule-based systems). I am finding this term used frequently without anyone really defining it properly.
| What is an "If-Then" rule? | CC BY-SA 2.5 | null | 2010-09-16T18:36:13.733 | 2016-02-24T19:45:40.013 | null | null | 1121 | [
"machine-learning"
] |
2740 | 2 | null | 2684 | 2 | null | Higher correlation within subject gets you more power when the test being done is a differencing, equivalent to a paired t-test. The standard deviation used in calculating effect size is multiplied by $\sqrt{1-\rho}$. The standard deviation for difference scores (for a one-sample test) is $SD\sqrt{2-2\rho}$. This also applies where it is appropriate to model using random intercepts, because that means a chunk of variability is being accounted for by individual random effects, and not being counted as error.
The opposite happens when the test is strictly between subjects. There you are in the world of increasing your effective sample size (as indicated by "Onestop" earlier). By repeating measurements, you increase your sample by the degree to which the observations are uncorrelated. The standard deviation for an average of independent (uncorrelated) numbers drawn from the same distribution is the standard error: $SD\sqrt{\frac{1}{N}}$. For averages of correlated numbers, it is roughly $SD\sqrt{\frac{1}{N}+\rho\frac{N-1}{N}}$. Note that as correlation approaches $1$, you get back the original $SD$, and as it approaches $0$, $SD$ approaches the $SE$ for independent observations.
Paul
| null | CC BY-SA 3.0 | null | 2010-09-16T18:41:33.953 | 2016-03-04T05:59:37.200 | 2016-03-04T05:59:37.200 | 101484 | 1324 | null |
2741 | 2 | null | 2739 | 8 | null | It is just a simple classifier; so simple that it is better to explain it by example. Let's say you have a 3 class classification problem and information system with 4 continous predictors $X1,\ldots, X4$. Now you can define simple rules like:
```
IF X1<3.45 THEN Class1
IF X3>7.2 THEN Class2
IF X2<2.11 THEN Class1
IF X2<1.2 THEN Class3
...
```
and this is it; the final decision is produced by voting. Structures like this can be trained by various methods; probably the best known are [rough sets](http://en.wikipedia.org/wiki/Rough_sets).
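To make the voting step concrete, here is a tiny sketch (my own; the thresholds are taken from the rules above, while the helper names are made up) of how such a rule set could be applied to a new observation:
```
# each rule: a condition on one predictor and the class it votes for
rules <- list(
  list(test = function(x) x["X1"] < 3.45, class = "Class1"),
  list(test = function(x) x["X3"] > 7.20, class = "Class2"),
  list(test = function(x) x["X2"] < 2.11, class = "Class1"),
  list(test = function(x) x["X2"] < 1.20, class = "Class3")
)

classify <- function(x) {
  votes <- unlist(lapply(rules, function(r) if (r$test(x)) r$class))
  if (is.null(votes)) return(NA)   # no rule fired
  names(which.max(table(votes)))   # majority vote over the fired rules
}

classify(c(X1 = 2.9, X2 = 1.8, X3 = 5.0, X4 = 0.3))  # "Class1" (two votes)
```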
| null | CC BY-SA 2.5 | null | 2010-09-16T19:20:07.583 | 2010-09-16T19:27:07.387 | 2010-09-16T19:27:07.387 | null | null | null |
2742 | 1 | 2747 | null | 11 | 14047 | Several statistical packages, such as SAS, SPSS, and R, allow you to perform some kind of factor rotation following a PCA.
- Why is a rotation necessary after a PCA?
- Why would you apply an oblique rotation after a PCA given that the aim of PCA is to produce orthogonal dimensions?
| On the use of oblique rotation after PCA | CC BY-SA 2.5 | null | 2010-09-16T19:28:10.130 | 2015-10-20T11:23:49.137 | 2015-10-20T11:23:49.137 | 28666 | 1154 | [
"pca",
"factor-analysis",
"factor-rotation"
] |
2743 | 1 | null | null | 3 | 3323 | I am attempting to calculate Cohen's d and then pool those estimates into a summary effect size. Can anyone help? (Stata or SPSS software owned).
| How to use Stata to pool Cohen's d? | CC BY-SA 3.0 | 0 | 2010-09-16T19:43:55.820 | 2019-07-18T19:46:54.897 | 2013-07-20T22:58:44.117 | 22047 | null | [
"spss",
"meta-analysis",
"stata",
"effect-size",
"cohens-d"
] |
2745 | 2 | null | 2742 | 4 | null | The problem with orthogonal dimensions is that the components can be uninterpretable. Thus, while oblique rotation (i.e., nonorthogonal dimensions) is technically less satisfying such a rotation sometimes enhances interpretablity of the resulting components.
| null | CC BY-SA 2.5 | null | 2010-09-16T19:58:55.770 | 2010-09-16T19:58:55.770 | null | null | null | null |
2746 | 1 | 2750 | null | 43 | 15847 | I would like to be able to efficiently generate positive-semidefinite (PSD) correlation matrices. My method slows down dramatically as I increase the size of matrices to be generated.
- Could you suggest any efficient solutions? If you are aware of any examples in Matlab, I would be very thankful.
- When generating a PSD correlation matrix how would you pick the parameters to describe matrices to be generated? An average correlation, standard deviation of correlations, eigenvalues?
| How to efficiently generate random positive-semidefinite correlation matrices? | CC BY-SA 3.0 | null | 2010-09-16T20:39:00.603 | 2021-01-30T16:41:25.453 | 2015-11-24T10:35:11.873 | 28666 | 1250 | [
"random-generation",
"correlation-matrix"
] |
2747 | 2 | null | 2742 | 9 | null | I think there are different opinions or views about PCA, but basically we often think of it as either a reduction technique (you reduce your features space to a smaller one, often much more "readable" providing you take care of properly centering/standardizing the data when it is needed) or a way to construct latent factors or dimensions that account for a significant part of the inter-individual dispersion (here, the "individuals" stand for the statistical units on which data are collected; this may be country, people, etc.). In both case, we construct linear combinations of the original variables that account for the maximum of variance (when projected on the principal axis), subject to a constraint of orthogonality between any two principal components. Now, what has been described is purely algebrical or mathematical and we don't think of it as a (generating) model, contrary to what is done in the factor analysis tradition where we include an error term to account for some kind of measurement error. I also like the introduction given by William Revelle in his forthcoming handbook on [applied psychometrics using R](http://personality-project.org/r/book/) (Chapter 6), if we want to analyze the structure of a correlation matrix, then
>
The first [approach, PCA] is a model
that approximates the correlation
matrix in terms of the product of
components where each component is a
weighted linear sum of the variables,
the second model [factor analysis] is
also an approximation of the
correlation matrix by the product of
two factors, but the factors in this
are seen as causes rather than as
consequences of the variables.
In other words, with PCA you are expressing each component (factor) as a linear combination of the variables whereas in FA it is the variables that are expressed as a linear combination of the factors. It is well acknowledged that both methods will generally yield quite similar results (see e.g. Harman, 1976 or Cattell, 1978), especially in the "ideal" case where we have a large number of individuals and a good ratio factor:variables (typically varying between 2 and 10 depending on the authors you consider!). This is because, by estimating the diagonals in the correlation matrix (as is done in FA, and these elements are known as the communalities), the error variance is eliminated from the factor matrix. This is the reason why PCA is often used as a way to uncover latent factors or psychological constructs in place of FA developed in the last century. But, as we go on this way, we often want to reach an easier interpretation of the resulting factor structure (or the so-called pattern matrix). And then comes the useful trick of rotating the factorial axis so that we maximize loadings of variables on a specific factor, or equivalently reach a "simple structure". Using orthogonal rotation (e.g. VARIMAX), we preserve the independence of the factors. With oblique rotation (e.g. OBLIMIN, PROMAX), we break it and factors are allowed to correlate. This has been largely debated in the literature, and has led some authors (not psychometricians, but statisticians in the early 1960's) to conclude that FA is an unfair approach due to the fact that researchers might seek the factor solution that is the most convenient to interpret.
But the point is that rotation methods were originally developed in the context of the FA approach and are now routinely used with PCA. I don't think this contradicts the algorithmic computation of the principal components: You can rotate your factorial axes the way you want, provided you keep in mind that once correlated (by oblique rotation) the interpretation of the factorial space becomes less obvious.
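As a small illustrative sketch (mine, not the original author's; base R only, with `USArrests` as an arbitrary example and the raw eigenvectors used as "loadings" for simplicity), you can compare an orthogonal and an oblique rotation of the first two principal components:
```
data(USArrests)
pc <- prcomp(USArrests, scale. = TRUE)

# loadings of the first two components (raw eigenvectors, kept simple for illustration)
L <- pc$rotation[, 1:2]

varimax(L)$loadings   # orthogonal rotation: rotated components remain uncorrelated
promax(L)$loadings    # oblique rotation: rotated components are allowed to correlate
```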
PCA is routinely used when developing new questionnaires, although FA is probably a better approach in this case because we are trying to extract meaningful factors that take into account measurement errors and whose relationships might be studied on their own (e.g. by factoring out the resulting pattern matrix, we get a second-order factor model). But PCA is also used for checking the factorial structure of already validated ones. Researchers don't really worry about FA vs. PCA when they have, say, 500 representative subjects who are asked to rate a 60-item questionnaire tackling five dimensions (this is the case of the [NEO-FFI](http://en.wikipedia.org/wiki/Revised_NEO_Personality_Inventory), for example), and I think they are right because in this case we aren't very much interested in identifying a generating or conceptual model (the term "representative" is used here to alleviate the issue of measurement invariance).
Now, about the choice of rotation method and why some authors argue against the strict use of orthogonal rotation, I would like to quote Paul Kline, as I did in response to the following question, [FA: Choosing Rotation matrix, based on “Simple Structure Criteria”](https://stats.stackexchange.com/q/1812/930),
>
(...) in the real world, it is not
unreasonable to think that factors, as
important determiners of behavior,
would be correlated. -- P. Kline,
Intelligence. The Psychometric View, 1991, p. 19
I would thus conclude that, depending on the objective of your study (do you want to highlight the main patterns of your correlation matrix, or do you seek to provide a sensible interpretation of the underlying mechanisms that may have caused you to observe such a correlation matrix), it is up to you to choose the method that is most appropriate: this doesn't have to do with the construction of linear combinations, but merely with the way you want to interpret the resulting factorial space.
References
- Harman, H.H. (1976). Modern Factor Analysis. Chicago, University of Chicago Press.
- Cattell, R.B. (1978). The Scientific Use of Factor Analysis. New York, Plenum.
- Kline, P. (1991). Intelligence. The Psychometric View. Routledge.
| null | CC BY-SA 2.5 | null | 2010-09-16T20:45:22.383 | 2010-09-16T21:05:45.597 | 2017-04-13T12:44:24.947 | -1 | 930 | null |
2748 | 1 | null | null | 4 | 16696 | I am attempting to find a program that will let me conduct Cox regression on my matched case-control dataset.
Please assist.
p.s. I have STATA, SPSS, and MedCalc
| How to conduct conditional Cox regression for matched case-control study? | CC BY-SA 2.5 | null | 2010-09-16T20:46:35.240 | 2016-02-21T16:40:15.307 | 2010-09-18T09:50:10.163 | 521 | null | [
"regression",
"survival",
"frailty"
] |
2749 | 2 | null | 2748 | 2 | null | If you are using Stata, you can just look at the `stcox` command. Examples are available from [Stata](http://www.stata.com/capabilities/survivalsession.html) or [UCLA](http://www.ats.ucla.edu/stat/stata/topics/survival.htm) website.
Also, take a look at [Analysis of matched cohort data](http://www.stata-journal.com/sjpdf.html?articlenum=st0070) from the Stata Journal (2004 4(3)).
Under R, you can use the `coxph()` function from the `survival` library.
| null | CC BY-SA 2.5 | null | 2010-09-16T20:56:40.193 | 2010-09-16T20:56:40.193 | null | null | 930 | null |
2750 | 2 | null | 2746 | 18 | null | You can do it backward: every matrix $C \in \mathbb{R}_{++}^p$ (the set of all symmetric $p \times p$ PSD matrices) can be decomposed as
$C=O^{T}DO$ where $O$ is an orthonormal matrix
To get $O$, first generate a random basis $(v_1,\ldots,v_p)$ (where $v_i$ are random vectors, typically in $(-1,1)$). From there, use the Gram-Schmidt orthogonalization process to get $(u_1,\ldots,u_p)=O$
$R$ has a number of packages that can do the G-S orthogonalization of a random basis efficiently, even for large dimensions, for example the 'far' package. Although you will find the G-S algorithm on Wikipedia, it's probably better not to re-invent the wheel and to go for a Matlab implementation (one surely exists, I just can't recommend any).
Finally, $D$ is a diagonal matrix whose elements are all positive (this is, again, easy to generate: generate $p$ random numbers, square them, sort them and place them onto the diagonal of an identity $p \times p$ matrix).
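A rough R sketch of this recipe (my own addition: the QR decomposition is used as a convenient stand-in for Gram-Schmidt, and the final `cov2cor()` rescaling to a unit diagonal, needed to get a correlation rather than a general PSD matrix, is an extra step not described above):
```
set.seed(1)
p <- 5

# random basis -> orthonormal O via QR (numerically equivalent to Gram-Schmidt)
O <- qr.Q(qr(matrix(runif(p * p, -1, 1), p, p)))

# positive diagonal D
D <- diag(sort(runif(p)^2, decreasing = TRUE))

C <- t(O) %*% D %*% O    # symmetric PSD matrix
R <- cov2cor(C)          # rescale to unit diagonal -> correlation matrix
min(eigen(R)$values)     # check: eigenvalues non-negative (up to numerical error)
```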
| null | CC BY-SA 3.0 | null | 2010-09-16T21:05:15.777 | 2015-11-03T11:21:36.323 | 2015-11-03T11:21:36.323 | 603 | 603 | null |
2751 | 2 | null | 2743 | 3 | null | In Stata:
1) Download the user-written package -[metan](http://ideas.repec.org/c/boc/bocode/s456798.html)- from the [SSC software library](http://ideas.repec.org/s/boc/bocode.html):
`. ssc install metan`
2) Assuming you have numbers, means and SDs for two groups, the syntax is of the form:
`. metan n1 mean1 sd1 n0 mean0 sd0`
(Cohen's d is the default method for continuous data)
This gives a fixed-effect meta-analysis. Add the 'random' option for a random-effects meta-analysis, i.e.
`. metan n1 mean1 sd1 n0 mean0 sd0, random`
-metan- has many other options which control various aspects of the output and forest plot.
| null | CC BY-SA 2.5 | null | 2010-09-16T21:09:15.640 | 2010-09-16T21:09:15.640 | null | null | 449 | null |
2752 | 2 | null | 2743 | 3 | null | For SPSS, look at :
- www.mathkb.com/Uwe/Forum.aspx/stat-consult/1201/Effect-Size-in-SPSS
- www.spsstools.net/Syntax/T-Test/StandardizedEffectsSize.txt (maybe better organized)
For Stata, I used [SIZEFX: Stata module to compute effect size correlations](http://ideas.repec.org/c/boc/bocode/s456738.html) (`findit sizefx` at Stata command prompt), but `metan` as suggested by @onestop is probably more featured.
| null | CC BY-SA 2.5 | null | 2010-09-16T21:10:09.010 | 2010-09-16T21:10:09.010 | null | null | 930 | null |
2753 | 2 | null | 2715 | 22 | null | Question your data. In the modern era of cheap RAM, we often work on large amounts of data. One 'fat-finger' error or 'lost decimal place' can easily dominate an analysis. Without some basic sanity checking, (or plotting the data, as suggested by others here) one can waste a lot of time. This also suggests using some basic techniques for 'robustness' to outliers.
| null | CC BY-SA 2.5 | null | 2010-09-16T21:32:16.190 | 2010-09-16T21:32:16.190 | null | null | 795 | null |
2755 | 2 | null | 2715 | 13 | null | For data organization/management, ensure that when you generate new variables in the dataset (for example, calculating body mass index from height and weight), the original variables are never deleted. A non-destructive approach is best from a reproducibility perspective. You never know when you might mis-enter a command and subsequently need to redo your variable generation. Without the original variables, you will lose a lot of time!
| null | CC BY-SA 2.5 | null | 2010-09-16T22:36:18.300 | 2010-09-16T22:36:18.300 | null | null | 561 | null |
2756 | 2 | null | 411 | 3 | null | I can't give you additional reasons to use the Kolmogorov-Smirnov test. But, I can give you an important reason not to use it. It does not fit the tail of the distribution well. In this regard, a superior distribution fitting test is Anderson-Darling. As a second best, the Chi Square test is pretty good. Both are deemed much superior to the K-S test in this regard.
| null | CC BY-SA 2.5 | null | 2010-09-16T22:38:46.897 | 2010-09-16T22:38:46.897 | null | null | 1329 | null |
2757 | 2 | null | 2715 | 21 | null | Use software that shows the chain of programming logic from the raw data through to the final analyses/results. Avoid software like Excel where one user can make an undetectable error in one cell, that only manual checking will pick up.
| null | CC BY-SA 2.5 | null | 2010-09-16T22:39:16.767 | 2010-09-16T22:39:16.767 | null | null | null | null |
2761 | 2 | null | 305 | 1 | null | I would take the opposite view here. Why bother with the Welch test when the standard unpaired student t test gives you nearly identical results. I studied this issue a while back and I explored a range of scenarios in an attempt to break down the t test and favor the Welch test. To do so I used sample sizes up to 5 times greater for one group vs the other. And, I explored variances up to 25 times greater for one group vs the other. And, it really did not make any material difference. The unpaired t test still generated a range of p values that were nearly identical to the Welch test.
You can see my work at the following link and focus especially on slide 5 and 6.
[http://www.slideshare.net/gaetanlion/unpaired-t-test-family](http://www.slideshare.net/gaetanlion/unpaired-t-test-family)
| null | CC BY-SA 2.5 | null | 2010-09-16T23:52:59.773 | 2010-09-16T23:52:59.773 | null | null | 1329 | null |
2763 | 2 | null | 2746 | 16 | null | An even simpler characterization is that for real matrix $A$, $A^TA$ is positive semidefinite. To see why this is the case, one only has to prove that $y^T (A^TA) y \ge 0$ for all vectors $y$ (of the right size, of course). This is trivial: $y^T (A^TA) y = (Ay)^T Ay = ||Ay||$ which is nonnegative. So in Matlab, simply try
```
A = randn(m,n); %here n is the desired size of the final matrix, and m > n
X = A' * A;
```
Depending on the application, this may not give you the distribution of eigenvalues you want; Kwak's answer is much better in that regard. The eigenvalues of `X` produced by this code snippet should follow the Marchenko-Pastur distribution.
For simulating the correlation matrices of stocks, say, you may want a slightly different approach:
```
k = 7; % # of latent dimensions;
n = 100; % # of stocks;
A = 0.01 * randn(k,n); % 'hedgeable risk'
D = diag(0.001 * randn(n,1)); % 'idiosyncratic risk'
X = A'*A + D;
ascii_hist(eig(X)); % this is my own function, you do a hist(eig(X));
-Inf <= x < -0.001 : **************** (17)
-0.001 <= x < 0.001 : ************************************************** (53)
0.001 <= x < 0.002 : ******************** (21)
0.002 <= x < 0.004 : ** (2)
0.004 <= x < 0.005 : (0)
0.005 <= x < 0.007 : * (1)
0.007 <= x < 0.008 : * (1)
0.008 <= x < 0.009 : *** (3)
0.009 <= x < 0.011 : * (1)
0.011 <= x < Inf : * (1)
```
| null | CC BY-SA 2.5 | null | 2010-09-17T00:14:46.137 | 2010-09-17T00:14:46.137 | null | null | 795 | null |
2764 | 1 | null | null | 2 | 1663 | I am attempting to calculate Orwin's (1983) modification of Rosenthal's (1979) Fail-safe N for my meta-analysis of Odds Ratios.
However, all the equations I am finding are using Cohen's d, which I cannot calculate (I don't have two groups).
I have STATA, SPSS, and MedCalc at my disposal.
| Calculating Orwin's (1983) modified Fail-safe N in a meta-analysis with Odds Ratio as summary statistic? | CC BY-SA 2.5 | null | 2010-09-17T00:30:13.537 | 2010-09-17T20:18:34.320 | 2010-09-17T20:18:34.320 | null | null | [
"meta-analysis",
"cohens-d",
"odds-ratio"
] |
2765 | 2 | null | 2730 | 6 | null | In my line of work, I've found this one to be quite useful: [Statistical Methods for Psychology (Howell, 2009)](http://rads.stackoverflow.com/amzn/click/0495597848)
| null | CC BY-SA 2.5 | null | 2010-09-17T00:41:03.327 | 2010-09-17T00:41:03.327 | null | null | 1334 | null |
2766 | 2 | null | 2639 | 0 | null | It's not clear why you must have a confidence interval. As @whuber pointed out, there are better ways to compare distributions. You are losing some information by looking only at the mean. However, if you must, you might want to compute a confidence interval on `k1_mean` based on the 40000 observations. See [Wikipedia](http://en.wikipedia.org/wiki/Standard_error_of_the_mean#Assumptions_and_usage) for the simple explanation of how to compute this. From there, you could just count the number of the 100 values of `k2_mean` that fall into a confidence interval on `k1_mean`, and repeat for the third and fourth methods as well. Again, this isn't the best way to compare distributions.
| null | CC BY-SA 2.5 | null | 2010-09-17T02:40:40.300 | 2010-09-17T02:40:40.300 | null | null | 795 | null |
2767 | 2 | null | 2516 | 24 | null | Hypothesis testing traditionally focused on p values to derive statistical significance when alpha is less than 0.05 has a major weakness. And, that is that with a large enough sample size any experiment can eventually reject the null hypothesis and detect trivially small differences that turn out to be statistically significant.
This is the reason why drug companies structure clinical trials to obtain FDA approval with very large samples. The large sample will reduce the standard error to close to zero. This in turn will artificially boost the t stat and commensurately lower the p value to close to 0%.
I gather that within scientific communities that are not driven by such economic incentives and related conflicts of interest, hypothesis testing is moving away from p value measurements towards effect size measurements. This is because the unit of statistical distance or differentiation in effect size analysis is the standard deviation instead of the standard error, and the standard deviation is independent of the sample size, whereas the standard error depends entirely on it.
So, anyone who is skeptical of hypothesis tests reaching statistically significant results based on large samples and p-value-related methodologies is right to be skeptical. They should rerun the analysis on the same data using effect size measures instead, and then observe whether the effect size is material or not. By doing so, you may find that a bunch of statistically significant differences are associated with effect sizes that are immaterial. That's what clinical trial researchers sometimes mean when a result is statistically significant but not "clinically significant": one treatment may be better than placebo, but the difference is so marginal that it would make no difference to the patient within a clinical context.
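To make the point concrete, here is a small simulation sketch of my own (not from any reference above): with a huge sample, a trivially small difference is declared statistically significant, yet Cohen's d shows it is negligible.
```
set.seed(1)
n  <- 1e6                                   # a very large sample per group
x1 <- rnorm(n, mean = 0,     sd = 1)
x2 <- rnorm(n, mean = 0.005, sd = 1)        # a difference of 0.005 standard deviations
t.test(x1, x2)$p.value                      # typically well below 0.05
(mean(x2) - mean(x1)) / sqrt((var(x1) + var(x2)) / 2)   # Cohen's d: about 0.005, immaterial
```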
| null | CC BY-SA 2.5 | null | 2010-09-17T04:11:52.573 | 2010-09-17T04:11:52.573 | null | null | 1329 | null |
2768 | 1 | null | null | 11 | 15243 | I was asked such a question as "Did you do any consistency check in your daily work?" during a phone interview for a Biostatistician position. I don't know what to answer. Any information is appreciated.
| What is a consistency check? | CC BY-SA 2.5 | null | 2010-09-17T04:36:00.473 | 2022-01-20T05:32:41.387 | 2010-09-17T11:44:38.123 | 159 | 1336 | [
"validation"
] |
2769 | 2 | null | 2742 | 4 | null | Basic Points
- Rotation can make interpretation of components clearer
- Oblique rotation often makes more theoretical sense; i.e., observed variables can be explained in terms of a smaller number of correlated components.
Example
- 10 tests all measuring ability with some measuring verbal and some measuring spatial ability. All tests are intercorrelated, but intercorrelations within verbal or within spatial tests are greater than across test type. A parsimonious PCA might involve two correlated components, a verbal and a spatial. Theory and research suggests that these two abilities are correlated. Thus, an oblique rotation makes theoretical sense.
| null | CC BY-SA 2.5 | null | 2010-09-17T05:01:12.770 | 2010-09-17T05:01:12.770 | null | null | 183 | null |
2770 | 1 | 2773 | null | 13 | 2716 | I'm an undergraduate statistics student looking for a good treatment of clinical trials analysis. The text should cover the fundamentals of experimental design, blocking, power analysis, latin squares design, and cluster randomization designs, among other topics.
I have an undergraduate knowledge of mathematical statistics and real analysis, but if there's a fantastic text that requires a bit higher level of statistics or analysis, I can work up to it.
| Good text on Clinical Trials? | CC BY-SA 2.5 | null | 2010-09-17T05:05:49.097 | 2022-12-06T02:54:37.377 | 2010-09-17T06:36:30.523 | 183 | 1118 | [
"references",
"teaching",
"clinical-trials"
] |
2771 | 2 | null | 2516 | 5 | null | The short answer is "no". Research on hypothesis testing in the asymptotic regime of infinite observations and multiple hypotheses has been very, very active in the past 15-20 years, because of microarray data and financial data applications. The long answer is in the course page of Stat 329, "Large-Scale Simultaneous Inference", taught in 2010 by Brad Efron. A [full chapter (#2)](https://statweb.stanford.edu/%7Eckirby/brad/LSI/monograph_CUP.pdf) is devoted to large-scale hypothesis testing.
| null | CC BY-SA 4.0 | null | 2010-09-17T05:49:34.810 | 2021-10-15T14:16:05.627 | 2021-10-15T14:16:05.627 | 321901 | 30 | null |
2772 | 1 | null | null | 23 | 5444 | In two papers in [1986](http://dx.doi.org/10.1016/0304-405X%2886%2990027-9) and [1988](http://dx.doi.org/10.1016/0304-405X%2888%2990062-1), Connor and Korajczyk proposed an approach to modeling asset returns. Since these data sets usually have more assets than time-period observations, they proposed to perform a PCA on cross-sectional covariances of asset returns. They call this method Asymptotic Principal Component Analysis (APCA, which is rather confusing, since the audience immediately thinks of asymptotic properties of PCA).
I have worked out the equations, and the two approaches seem numerically equivalent. The asymptotics of course differ, since convergence is proved for $N \rightarrow \infty$ rather than $T \rightarrow \infty$. My question is: has anyone used APCA and compared to PCA? Are there concrete differences? If so, which ones?
| What is the difference between PCA and asymptotic PCA? | CC BY-SA 3.0 | null | 2010-09-17T06:11:36.290 | 2022-10-01T13:29:06.397 | 2011-06-19T09:20:09.777 | null | 30 | [
"pca",
"econometrics"
] |
2773 | 2 | null | 2770 | 8 | null | I would definitely recommend [Design and Analysis of Clinical Trials: Concepts and Methodologies](http://books.google.fr/books?id=HXMUEjZ4vyAC&dq=Design+and+Analysis+of+Clinical+Trials:+Concepts+and+Methodologies&printsec=frontcover&source=bn&hl=fr&ei=YQuTTPG-KseLswbt9JH5CQ&sa=X&oi=book_result&ct=result&resnum=4&ved=0CCwQ6AEwAw), which actually seems the most complete one given your request.
[Statistical Issues in Drug Development](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470018771.html) also covers a broad range of concepts but is less oriented toward the design of experiments. [Statistics Applied to Clinical Trials](http://books.google.fr/books?id=LEDJ-6bPa8AC&printsec=frontcover&dq=Statistics+Applied+to+Clinical+Trials&source=bl&ots=LwzvnasZuk&sig=OkDTIEnTaaONeDG5Ik1XVOhIeNM&hl=fr&ei=yAiTTNHoN8LEswbg2_D4CQ&sa=X&oi=book_result&ct=result&resnum=2&ved=0CCQQ6AEwAQ) includes more technical material, but mainly focuses on crossover trials and applications in epidemiology. And there is always the famous Rothman, [Modern Epidemiology](http://books.google.fr/books?id=Z3vjT9ALxHUC&dq=rothman+modern+epidemiology&printsec=frontcover&source=bn&hl=fr&ei=MAmTTMF2zcuzBve20fgJ&sa=X&oi=book_result&ct=result&resnum=4&ved=0CDMQ6AEwAw), which provides valuable help for interpretation and examples of applied clinical biostatistics. Finally, [The Design and Analysis of Clinical Experiments](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0471349917.html) is more centered on the analysis of specific experimental settings, but it does not address other points and is a bit older.
| null | CC BY-SA 4.0 | null | 2010-09-17T06:32:58.730 | 2022-12-06T02:53:13.523 | 2022-12-06T02:53:13.523 | 362671 | 930 | null |
2774 | 2 | null | 2768 | 8 | null | I suppose this has to do with some form of quality control concerning data integrity, and more specifically with regularly checking that your working database isn't corrupted (due to errors during transfer or copying, or after an update or sanity check). This may also mean ensuring that your intermediate computations are double-checked (either manually or through additional code or macros in your statistical software).
Other information may be found here: the ICH E6 (R1) reference guide about [Guideline for Good Clinical Practice](http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500002874.pdf) from the EMEA, [Guidelines on Good Clinical Laboratory Practice](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2213906/), or [Clinical Research Study Investigator's Toolbox](http://www.nia.nih.gov/ResearchInformation/CTtoolbox/).
| null | CC BY-SA 2.5 | null | 2010-09-17T06:43:26.070 | 2010-09-17T06:49:06.947 | 2010-09-17T06:49:06.947 | 930 | 930 | null |
2775 | 2 | null | 2746 | 3 | null | The simplest method is the one above, which is a simulation of a random dataset and the computation of the [Gramian](http://en.wikipedia.org/wiki/Gramian_matrix). A word of caution: The resulting matrix will not be uniformly random, in that its decomposition, say $U^TSU$ will have rotations not distributed according to the Haar Measure. If you want to have "uniformly distributed" PSD matrices then you can use any of the approaches described [here](http://en.wikipedia.org/wiki/Rotation_matrix#Uniform_random_rotation_matrices).
| null | CC BY-SA 2.5 | null | 2010-09-17T06:52:22.090 | 2010-09-17T14:39:00.353 | 2010-09-17T14:39:00.353 | 30 | 30 | null |
2776 | 2 | null | 2764 | 1 | null | So, perhaps check these additional resources: [http://j.mp/d8znoP](http://j.mp/d8znoP) for SPSS. Don't know about Stata.
There is some R code about fail-safe N in the following handout: [Tests for funnel plot asymmetry and failsafe N](http://www.biostat.uzh.ch/teaching/phd/doktorandenseminar/schroedle.pdf), but I didn't check on the [www.metaanalysis.com](http://www.metaanalysis.com) website.
Otherwise, [ClinTools Software](http://www.clintools.com/) may be an option (I hope the demo version lets you do some computations on real data), or better the [MIX](http://www.biomedcentral.com/1471-2288/6/50) software.
| null | CC BY-SA 2.5 | null | 2010-09-17T07:10:43.227 | 2010-09-17T07:10:43.227 | null | null | 930 | null |
2777 | 1 | 2781 | null | 34 | 9555 | Context:
Imagine you had a longitudinal study which measured a dependent variable (DV) once a week for 20 weeks on 200 participants. Although I'm interested in general, typical DVs that I'm thinking of include job performance following hire or various well-being measures following a clinical psychology intervention.
I know that multilevel modelling can be used to model the relationship between time and the DV. You can also allow coefficients (e.g., intercepts, slopes, etc.) to vary between individuals and estimate the particular values for participants. But what if when visually inspecting the data you find that the relationship between time and the DV is any one of the following:
- different in functional form (perhaps some are linear and others are exponential or some have a discontinuity)
- different in error variance (some individuals are more volatile from one time point to the next)
Questions:
- What would be a good way to approach modelling data like this?
- Specifically, what approaches are good at identifying different types of relationships, and categorising individuals with regards to their type?
- What implementations exist in R for such analyses?
- Are there any references on how to do this: textbook or actual application?
| Modelling longitudinal data where the effect of time varies in functional form between individuals | CC BY-SA 2.5 | null | 2010-09-17T07:12:27.057 | 2014-09-19T01:29:53.793 | null | null | 183 | [
"repeated-measures",
"random-effects-model",
"latent-class"
] |
2778 | 2 | null | 2730 | 11 | null | The classics I think are Winer and Kirk, both cover essentially only ANOVA and ANCOVA. You can probably get used copys for cheap (e.g., I own a Winer second edition from 71 bought via AMAZON for less than 10$):
[Winer - Statistical Principles In Experimental Design](http://rads.stackoverflow.com/amzn/click/0070709823)
[Kirk - Experimental Design](http://rads.stackoverflow.com/amzn/click/0534250920)
A more contemporary book is the one by Maxwell & Delaney. Besides ANOVA and ANCOVA it covers other methods, e.g., multivariate and multilevel:
[Maxwell & Delaney - Designing Experiments and Analyzing Data: A Model Comparison Perspective](http://rads.stackoverflow.com/amzn/click/0805837183)
Perhaps it is best to go with this last one. It is pretty good.
| null | CC BY-SA 2.5 | null | 2010-09-17T08:43:52.880 | 2010-09-17T08:43:52.880 | null | null | 442 | null |
2779 | 2 | null | 2711 | 1 | null | For the problem above, I have used a really simple metric.
I wanted to assess how good my hit is.
If I have, for example, errors 4, 2, 0.002, 3, 6, then I choose $h_1 = 0.002$ as the hit and $h_2 = 2$ as the closest competing error.
I then solve $X/h_1 + X/h_2 = 100\%$ for $X$, so that $X/h_1$ is the percentage confidence in the hit and $X/h_2$ the percentage confidence in the runner-up.
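A minimal sketch of this ad hoc weighting in R (my own code; it is not a standard metric):
```
errors <- c(4, 2, 0.002, 3, 6)
h <- sort(errors)[1:2]        # h1 = 0.002 (the hit), h2 = 2 (the closest competitor)
X <- 1 / sum(1 / h)           # solves X/h1 + X/h2 = 100%
round(100 * X / h, 1)         # about 99.9% for the hit and 0.1% for the runner-up
```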
| null | CC BY-SA 2.5 | null | 2010-09-17T08:44:09.753 | 2010-09-17T08:44:09.753 | null | null | 1313 | null |
2780 | 2 | null | 2764 | 2 | null | You can normally find ways to convert between effect sizes. For example, you can calculate $r$ from $d$ and back, so you will surely be able to convert an odds ratio to Cohen's $d$.
One book I have usually found to be a good resource for things like that is the one by Rosenthal & Rosnow. I think it was this one:
[Rosenthal & Rosnow - Essentials of Behavioral Research: Methods and Data Analysis](http://rads.stackoverflow.com/amzn/click/0073531960)
But in your case you should take a look at the following paper; it will probably solve your problem (though I haven't looked at it closely):
[Chinn, S. (2000). A simple method for converting an odds ratio to effect size for use in meta-analysis. Statistic in Medicine, 19, 3127-3131.](http://www.citeulike.org/group/3472/article/2833123)
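If it helps, the conversion Chinn describes is very short to code. Here is a rough sketch in R (my own code, using the $d = \ln(OR)\sqrt{3}/\pi$ relation, which assumes an underlying logistic distribution):
```
or_to_d <- function(or) log(or) * sqrt(3) / pi   # Chinn (2000): d = ln(OR) / 1.81
or_to_d(c(1.5, 2, 3))                            # approx. 0.22, 0.38, 0.61
```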
| null | CC BY-SA 2.5 | null | 2010-09-17T08:57:59.967 | 2010-09-17T08:57:59.967 | null | null | 442 | null |
2781 | 2 | null | 2777 | 21 | null | I would suggest looking at the following three directions:
- longitudinal clustering: this is unsupervised, but you use a k-means approach relying on the Calinski criterion for assessing the quality of the partitioning (package kml, and references included in the online help); so basically, it won't help identify a specific shape for each individual time course, but just separates homogeneous evolution profiles
- some kind of latent growth curve accounting for heteroscedasticity: my best guess would be to look at the extensive references around the MPlus software, especially the FAQ and mailing list. I've also heard of random-effects multiplicative heteroscedastic models (try googling around those keywords). I find these papers (1, 2) interesting, but I didn't look at them in detail. I will update with references on neuropsychological assessment once I am back at my office.
- functional PCA (fpca package) but it may be worth looking at functional data analysis
Other references (just browsed on the fly):
- Willett & Bull (2004), Latent Growth Curve Analysis -- the authors use LGC on non-linear reading trajectories
- Welch (2007), Model Fit and Interpretation of Non-Linear Latent Growth Curve Models -- a PhD on modeling non-linear change in the context of latent growth modeling
- Berkey CS, Laird NM (1986). Nonlinear growth curve analysis: estimating the population parameters. Ann Hum Biol. 1986 Mar-Apr;13(2):111-28
- Rice (2003), Functional and Longitudinal Data Analysis: Perspectives on Smoothing
- Wu, Fan and Müller (2007). Varying-Coefficient Functional Linear Regression
| null | CC BY-SA 2.5 | null | 2010-09-17T08:59:36.203 | 2010-09-17T13:52:39.000 | 2010-09-17T13:52:39.000 | 930 | 930 | null |
2782 | 2 | null | 2715 | 28 | null | If you're deciding between two ways of analysing your data, try it both ways and see if it makes a difference.
This is useful in many contexts:
- To transform or not transform
- Non-parametric or parameteric test
- Spearman's or Pearson's correlation
- PCA or factor analysis
- Whether to use the arithmetic mean or a robust estimate of the mean
- Whether to include a covariate or not
- Whether to use list-wise deletion, pair-wise deletion, imputation, or some other method of missing values replacement
This shouldn't absolve one from thinking through the issue, but it at least gives a sense of the degree to which substantive findings are robust to the choice.
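For instance, a minimal sketch of "trying it both ways" for one of the choices above (Pearson vs Spearman), with made-up data:
```
set.seed(123)
x <- rexp(50)
y <- x + rnorm(50, sd = 0.5)
cor(x, y, method = "pearson")
cor(x, y, method = "spearman")   # if the two disagree badly, think harder about why
```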
| null | CC BY-SA 2.5 | null | 2010-09-17T09:40:03.130 | 2010-09-19T10:13:15.170 | 2010-09-19T10:13:15.170 | 183 | 183 | null |
2783 | 2 | null | 2746 | 4 | null | You haven't specified a distribution for the matrices. Two common ones are the Wishart and inverse Wishart distributions. The [Bartlett decomposition](http://en.wikipedia.org/wiki/Wishart_distribution#Bartlett_decomposition) gives a Cholesky factorisation of a random Wishart matrix (which can also be efficiently solved to obtain a random inverse Wishart matrix).
In fact, the Cholesky space is a convenient way to generate other types of random PSD matrices, as you only have to ensure that the diagonal is non-negative.
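A minimal sketch in R, if that helps (rWishart is in the base stats package; the degrees of freedom and scale matrix here are arbitrary):
```
p <- 4
W <- rWishart(1, df = p + 2, Sigma = diag(p))[, , 1]   # one random Wishart draw
eigen(W, symmetric = TRUE)$values                      # all positive (a.s.), so W is PSD
```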
| null | CC BY-SA 2.5 | null | 2010-09-17T11:10:21.900 | 2010-09-17T11:10:21.900 | null | null | 495 | null |
2784 | 2 | null | 2061 | 3 | null | In the category (3) of humorous videos, check out ['Statz rappers'](http://video.google.com/videoplay?docid=489221653835413043#); general interest. (Pretty funny even to older people ;-).)
| null | CC BY-SA 2.5 | null | 2010-09-17T11:11:41.793 | 2010-09-17T11:11:41.793 | null | null | 919 | null |
2785 | 2 | null | 2768 | 18 | null | To chl's list, which focuses on frank data processing errors, I would add checks for subtler errors to address the following questions and issues (given in no particular order and certainly incomplete):
- Assuming database integrity, are the data reasonable? Do they roughly conform with expectations or conventional models, or would they surprise someone familiar with similar data?
- Are the data internally consistent? For example, if one field is supposed to be the sum of two others, is it?
- How complete are the data? Are they what were specified during the data collection planning phase? Are there any extra data that were not planned for? If so, why are they there?
- Most analyses implicitly or explicitly model the data in a parsimonious way and include the possibility of variation from the general description. Each such model suggests its own particular way to identify outliers--the data that deviate remarkably from the general description. Were attempts made to identify and understand outliers at each stage of exploration and analysis?
- In many cases it is possible for the analyst to introduce additional data into the analysis for quality checking and insight. For example, many data sets in the natural and social sciences as well as business include (at least implicitly) location information: identifiers of Census regions; names of countries, states, counties; customer zip codes; and so on. Even if--perhaps especially if--spatial correlation is not an element of the EDA or modeling, the analyst can join the data to geographic representations of the locations and map them to look for patterns and outliers.
- One of the most insidious errors that can creep into an analysis is loss of data. When extracting fields, summarizing data, reformatting datasets, etc., if one or two items are dropped from a large dataset often there will be nothing to flag it. But occasionally something important is lost, to one's extreme embarrassment if it is ever discovered. Simple checks--such as comparing before and after counts and totals of data--need to occur routinely to guard against such things (a small example of such a check appears at the end of this answer).
- Another insidious error is associated with type conversion in digital computing. For example, recently I had to construct a key (for matching two data files) out of a floating point field. The software (Stata) imported the field as a single precision float in one file but, for whatever reason, as a double precision float in another file. Most of the time the values matched but, in a few cases due to different rounding, they did not. Some data were lost as a result. I caught this only due to the application of (6). In general, it pays to check for consistency of field data types: ints vs. floats, lengths of strings, etc.
- If a spreadsheet is ever used at any stage of analysis, expect the worst. The problem is that even a stray keystroke can invisibly corrupt the data. When the results are critical, it pays to keep going back and forth--export to the spreadsheet, do the analysis, import back, and systematically compare--to make sure nothing untoward happened.
- Whenever a database is updated, it is worthwhile to pause and perform systematic, complete comparisons with the old one to make sure nothing was lost, changed, or corrupted in the process.
- At a higher level, whenever an estimate is performed (such as a regression, PCA, whatever), it can be worthwhile to perform it using a different technique to check for sensitivities or even possible errors in the code. E.g., follow an OLS regression by some form of robust regression and compare the coefficients. For important results, it can be comforting to obtain the answers using two (or more) different software platforms.
Perhaps the best kind of general "consistency check" anyone can perform is to graph everything, early and often.
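As a concrete (and deliberately tiny) illustration of the before-and-after count-and-total checks mentioned in the list above, something along these lines in R, with made-up data frames:
```
orders    <- data.frame(customer_id = c(1, 2, 2, 3), amount = c(10, 20, 5, 8))
customers <- data.frame(customer_id = c(1, 2, 3), region = c("N", "S", "S"))
before_n   <- nrow(orders)
before_sum <- sum(orders$amount)
merged <- merge(orders, customers, by = "customer_id", all.x = TRUE)
stopifnot(nrow(merged) == before_n)                           # no rows silently dropped or duplicated
stopifnot(isTRUE(all.equal(sum(merged$amount), before_sum)))  # totals preserved
```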
| null | CC BY-SA 3.0 | null | 2010-09-17T11:41:19.193 | 2013-07-29T05:58:25.537 | 2013-07-29T05:58:25.537 | 7290 | 919 | null |
2786 | 2 | null | 2746 | 11 | null | As a variation on kwak's answer: generate a diagonal matrix $\mathbf{D}$ with random nonnegative eigenvalues from a distribution of your choice, and then perform a similarity transformation $\mathbf{A}=\mathbf{Q}\mathbf{D}\mathbf{Q}^T$ with $\mathbf{Q}$ a [Haar-distributed pseudorandom orthogonal matrix](http://dx.doi.org/10.1137/0717034).
| null | CC BY-SA 2.5 | null | 2010-09-17T12:12:34.027 | 2010-09-17T12:12:34.027 | null | null | 830 | null |
2787 | 1 | 2790 | null | 1 | 163 | It was recommended to me on the [User Interface Stack Exchange](https://ux.stackexchange.com/questions/1543/whats-recommended-for-the-number-of-users-to-use-in-qualitative-and-quantitative) that I also ask this question here. So....
I'm currently rebuilding our entire intranet from scratch, mostly because the tech behind is out-dated and it has been proved that a lot of information is difficult to find.
Though that is beside the point, what I am wondering is: what would be the optimal number of users to use for qualitative and quantitative testing with a user base of around 1000 users?
Is there a general rule of thumb for both based on the total number of users you have? or is it just say 5 for qualitative and 10 for quantitative?
What would be the best approach?
| What's recommended for the number of users to use in qualitative and quantitative usability studies? | CC BY-SA 2.5 | null | 2010-09-17T12:52:07.833 | 2011-02-23T01:02:50.827 | 2017-04-12T07:29:42.590 | -1 | 593 | [
"sample-size",
"hypothesis-testing"
] |
2788 | 1 | 2789 | null | 7 | 2045 | I hope you all won't mind a basic question.
We are examining the effects of a compound at various concentration on the behaviour of an organism. The compound is administered once at the beginning of the time course. Observations are made every minute for a period of 2 hours. Each concentration is applied to 12 individuals.
If we repeat this experiment with different individuals, what is the best way to combine the 2 experiments? The issue that we're having is that although the distribution and shape of the data from each experiment is very similar, the plot of the medians by time is usually significantly shifted up or down between exp1 and exp2.
It will no doubt be very evident that I'm unskilled in statistics. If this question needs more information to be answerable, let me know.
Thanks,
Jen
| Combining repeated experiments into one dataset | CC BY-SA 3.0 | null | 2010-09-17T12:54:10.173 | 2012-04-19T14:03:34.913 | 2012-04-19T14:03:34.913 | 10322 | 1079 | [
"multilevel-analysis"
] |
2789 | 2 | null | 2788 | 8 | null | Just add "experiment" as an effect to your model; that should account for the shift between experiments and let you gain the power of increased N across experiments to detect effects of concentration and time.
In R, if using ANOVA and treating time as a factor (i.e. not numeric), then do:
```
library(ez)
ezANOVA(
data = my_data
, dv = .(my_dv)
, wid = .(individual)
, within = .(time)
, between = .(concentration,experiment)
)
```
However, this:
- treats experiment as a fixed effect, whereas it might more reasonably be considered a random effect (thanks Henrik!)
- treats time as non-continuous
- assumes sphericity across the levels of time
An approach that solves all three issues is to employ a mixed effects model. If you think that the effect of time is linear, then leave time as a numeric variable and do:
```
library(lme4)
lmer(
data = my_data
, formula = my_dv ~ time*concentration+(1|individual)+(1|experiment)
)
```
If you don't think time is linear, you could convert it to a factor and repeat the above, or use generalized additive mixed modelling:
```
library(gamm4)
fit <- gamm4(
data = my_data
, formula = my_dv ~ time+concentration+s(time,by=concentration,bs='tp')
, random = ~ (1|individual) + (1|experiment)
)
print(fit$gam)
```
That assumes that experiment only shifts the time function, but lets concentration change the shape of the time function. I have a hard time figuring out how to visualise the results from single gamm4 fits, so I usually obtain the fitted model's predictions across the fixed-effects space then bootstrap (in your case, sampling individuals with replacement within each experiment) confidence intervals around these predictions.
Also, all of the above assume that residuals are gaussian; if you're dealing with anything different (eg. binomial data), then you need to change the "family" arguments of lmer and gamm4 (ezANOVA can't do anything but gaussian).
| null | CC BY-SA 2.5 | null | 2010-09-17T13:14:32.587 | 2010-09-17T13:46:22.303 | 2017-04-13T12:44:55.360 | -1 | 364 | null |
2790 | 2 | null | 2787 | 3 | null | Not an exact answer to your question (which is not quite specific enough for concrete answers, see Srikant's comment), but a more general one:
If you want to do statistical testing on your data it is advisable to have a sample large enough for the [central limit theorem](http://en.wikipedia.org/wiki/Central_limit_theorem) to apply. That is, the sampling distribution of the mean should be approximately normal (i.e., bell-shaped). Then you can use a lot of nice methods (as long as the other assumptions are met) on your data that try to estimate population parameters from your sample. A rule of thumb about the central limit theorem is that you need at least 30 participants (or 20, see [here](https://stats.stackexchange.com/questions/276/how-large-should-a-sample-be-for-a-given-estimation-technique-and-parameters)).
My advice: If you think that 10 could be enough, use at least 20.
For qualitative testing, I would guess, just use participants until you get a picture of what the issues are. Ask a couple of people to test your stuff and ask them to tell you their impression. If they agree to some degree, there you go. I think 5 is a good starting point (caveat: this is a non-theoretical answer, based on my stomach, and I just had lunch).
If you are interested in polls from your sample, [this discussion](https://stats.stackexchange.com/questions/166/how-do-you-decide-the-sample-size-when-polling-a-large-population) could be useful.
| null | CC BY-SA 2.5 | null | 2010-09-17T13:27:27.217 | 2010-09-17T13:27:27.217 | 2017-04-13T12:44:55.630 | -1 | 442 | null |
2791 | 2 | null | 2770 | 4 | null | [Experimental Design for the Life Sciences](http://rads.stackoverflow.com/amzn/click/019928511X), by Ruxton & Colegrave, is a nice book and is aimed primarily at undergraduates.
| null | CC BY-SA 2.5 | null | 2010-09-17T13:36:22.340 | 2010-09-17T13:36:22.340 | null | null | 266 | null |
2792 | 2 | null | 2788 | 6 | null | I think a way in which you could analyze the two experiments together is by defining a [multilevel/hierarchical model](http://en.wikipedia.org/wiki/Hierarchical_linear_modeling). The individuals are nested within each experiment.
The standard for this approach is [Gelman's book](http://rads.stackoverflow.com/amzn/click/052168689X) (I think).
The [Journal of Memory and Language had a special issue on analyzing data in 2008](http://www.sciencedirect.com/science?_ob=PublicationURL&_tockey=%23TOC%236896%232008%23999409995%23701947%23FLA%23&_cdi=6896&_pubType=J&view=c&_auth=y&_acct=C000054038&_version=1&_urlVersion=0&_userid=1634520&md5=f69a3f5826f12784ec9f11fa8c4210f0) which covered hierarchical models and even some examples in `R`.
Others here can probably provide good web resources as well.
| null | CC BY-SA 2.5 | null | 2010-09-17T13:39:43.130 | 2010-09-17T13:50:02.507 | 2010-09-17T13:50:02.507 | 442 | 442 | null |
2793 | 2 | null | 2516 | 13 | null | One simple point not made directly in another answer is that it's simply not true that "all null hypotheses are false."
The simple hypothesis that a physical coin has heads probability exactly equal to 0.5, ok, that is false.
But the compound hypothesis that a physical coin has heads probability greater than 0.499 and less than 0.501 may be true. If so, no hypothesis test -- no matter how many coin flips go into it -- is going to be able to reject this hypothesis with a probability greater than $\alpha$ (the test's bound on false positives).
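To illustrate with a quick simulation of my own (the numbers are arbitrary): take a true heads probability of 0.5005, so the point null p = 0.5 is false while the compound null 0.499 < p < 0.501 is true.
```
set.seed(42)
n <- 25e6                             # a huge number of flips
x <- rbinom(1, n, 0.5005)
prop.test(x, n, p = 0.5)$p.value      # the false point null: essentially always rejected at this n
binom.test(x, n, p = 0.501, alternative = "greater")$p.value  # no evidence that p > 0.501
binom.test(x, n, p = 0.499, alternative = "less")$p.value     # no evidence that p < 0.499
```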
The medical industry tests "non-inferiority" hypotheses all the time, for this reason -- e.g. a new cancer drug has to show that its patients' probability of progression-free survival isn't less than 3 percentage points lower than an existing drug's, at some confidence level (the $\alpha$, usually 0.05).
| null | CC BY-SA 2.5 | null | 2010-09-17T15:26:45.237 | 2010-09-17T15:26:45.237 | null | null | 1122 | null |
2794 | 1 | 2798 | null | 6 | 5009 | I know the significance of the slope of a LMS linear regression can be calculated using the r2 coefficient of determination and looking up the appropriate value in an F table. However, I was thinking of perhaps making this "more robust" by replacing the LMS linear regression with a repeated median straight line fit, and perhaps even replacing the average value used to calculate r2 with the median value of the data. Is there any reason why this would NOT be a valid approach? Maybe the values in an F table are predicated on using LMS and averages, for example?
| Significance of the slope of a straight line fit | CC BY-SA 2.5 | null | 2010-09-17T15:44:08.690 | 2014-05-05T09:31:44.037 | null | null | 226 | [
"regression",
"statistical-significance",
"robust"
] |
2796 | 2 | null | 1621 | 4 | null | The best-known argument is that the Fisher metric, being invariant to coordinate transformations, can be used to formulate an uninformative prior (the Jeffreys prior). I am not sure I buy it!
Less well known is that sometimes these "integrated quantities" turn out to be divergences and the like; one may argue that Fisher distances generate a generalised set of divergences (and properties thereof).
But still, I have yet to find a good intuitive description of the Fisher information and the quantities it generates. Please tell me if you find one.
| null | CC BY-SA 2.5 | null | 2010-09-17T16:31:46.073 | 2010-09-17T16:31:46.073 | null | null | 1342 | null |
2797 | 2 | null | 2245 | 9 | null | In addition to the excellent answer above, there is a statistical method that can get you closer to demonstrating causality. It is Granger causality, which tests whether an independent variable that occurs before a dependent variable has a causal effect on it or not. I introduce this method in an easy-to-follow presentation at the following link:
[http://www.slideshare.net/gaetanlion/granger-causality-presentation](http://www.slideshare.net/gaetanlion/granger-causality-presentation)
I also apply this method to testing competing macroeconomic theories:
[http://www.slideshare.net/gaetanlion/economic-theory-testing-presentation](http://www.slideshare.net/gaetanlion/economic-theory-testing-presentation)
Be aware that this method is not perfect. It just confirms that certain events occur before others and that those events appear to have a consistent directional relationship. This seems to entail true causality, but it is not always the case. The rooster's morning call does not cause the sun to rise.
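In R, a quick way to try this kind of test (using the lmtest package and, if I recall its example correctly, its bundled chicken-and-egg data):
```
library(lmtest)
data(ChickEgg)
grangertest(chicken ~ egg, order = 3, data = ChickEgg)   # does egg "Granger-cause" chicken?
grangertest(egg ~ chicken, order = 3, data = ChickEgg)   # and the reverse direction
```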
| null | CC BY-SA 2.5 | null | 2010-09-17T16:49:17.907 | 2010-09-17T16:49:17.907 | null | null | 1329 | null |
2798 | 2 | null | 2794 | 3 | null | No; F tests are based on the assumption that the lowest sum of squared residuals is optimal. That does not hold in the case of robust regression, where the fitting criterion is different.
For instance, one may effectively view robust regression as least squares on data stripped of outliers; computing $r^2$ on all the data in this case adds an undeserved penalty for the large residuals of the outliers.
| null | CC BY-SA 2.5 | null | 2010-09-17T17:37:00.617 | 2010-09-17T17:37:00.617 | null | null | null | null |
2801 | 2 | null | 2794 | 3 | null | No need to reinvent the wheel. There is an alternative, robust, R^2 measure with very good statistical properties:
[A robust coefficient of determination for regression, O Renauda](http://www.google.com/url?sa=t&source=web&cd=3&ved=0CBsQFjAC&url=http%3A%2F%2Fwww.hec.unige.ch%2Fwww%2Fdms%2Fhec_en%2Fvictoriafeser%2Frecherche%2Fpaper-rob-R2.pdf&rct=j&q=robust%20r-2%20dehon&ei=MbSTTIvWMsWYOJHrnIgJ&usg=AFQjCNE_NtnSZK618rQc0bph5VlJGK7mkQ&sig2=HSRGJYQSPJHoHVaYLWkbCQ&cad=rja)
Edit:
*Is there any reason why this would NOT be a valid approach?* For one thing, this does not make your method any more robust. There is a large [literature](http://en.wikipedia.org/wiki/Robust_statistics#Measures_of_robustness) on this issue and, fortunately, good tools have been designed to address these points.
| null | CC BY-SA 2.5 | null | 2010-09-17T18:36:17.920 | 2010-09-21T16:43:01.597 | 2010-09-21T16:43:01.597 | 603 | 603 | null |
2802 | 2 | null | 2794 | 1 | null | I would simply use the standard regression output to evaluate the significance of the slope coefficient. I mean by that looking at the coefficient itself, its standard error, t stat (# of standard errors = Coefficient/Standard error), p value, and confidence interval. The p value directly addresses the statistical significance of the slope or coefficient you have in mind.
R Square of the model tells how well the model explains the dependent variable, or how well the model fits the dependent variable.
The p value of each coefficient tells you how statistically significant those coefficients are.
Very often you can have a model with a high R Square, but that includes one variable with a coefficient that is not statistically significant (its p value is too high). In such a case, it suggests your model would be nearly as good if you took that variable out. By the way, you should really focus on the Adjusted R Square instead of R Square. The Adjusted R Square correctly penalizes the model for having more variables and potentially over-fitting the data with independent variables that are not so relevant.
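For what it's worth, all of these quantities are in R's standard output; a minimal sketch with simulated data:
```
set.seed(1)
x <- rnorm(100)
y <- 2 + 0.3 * x + rnorm(100)
fit <- lm(y ~ x)
summary(fit)$coefficients    # estimate, standard error, t stat and p value for the slope
summary(fit)$adj.r.squared   # adjusted R square
confint(fit)                 # confidence intervals for the coefficients
```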
| null | CC BY-SA 2.5 | null | 2010-09-17T18:57:23.110 | 2010-09-17T18:57:23.110 | null | null | 1329 | null |
2805 | 2 | null | 2077 | 13 | null | I still think using the % change from one period to the next is the best way to render a non-stationary variable stationary, as you first suggested. A transformation such as a log works reasonably well (it flattens the non-stationary quality but does not eliminate it entirely).
The third way is to deseasonalize and de-trend the data simultaneously in one single linear regression. One independent variable would be the trend (or time): 1, 2, 3, ... up to however many time periods you have. The other variable would be a categorical variable with 11 different categories (for 11 out of the 12 months). Then, using the resulting coefficients from this regression, you can simultaneously detrend and de-seasonalize the data. You will see your whole data set essentially flattened. The remaining differences between periods will reflect changes independent of both the growth trend and the season.
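A minimal sketch of this third approach in R, on simulated monthly data (the trend and seasonal pattern are made up):
```
set.seed(1)
n_months <- 72
trend <- 1:n_months
month <- factor(rep(month.abb, length.out = n_months), levels = month.abb)
y <- 100 + 0.8 * trend + 5 * sin(2 * pi * trend / 12) + rnorm(n_months, sd = 2)
fit <- lm(y ~ trend + month)    # one trend term plus 11 monthly dummies
adjusted <- residuals(fit)      # detrended and deseasonalized in one step
plot(adjusted, type = "l")      # should look essentially flat
```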
| null | CC BY-SA 2.5 | null | 2010-09-17T21:44:07.557 | 2010-09-17T21:44:07.557 | null | null | 1329 | null |
2806 | 1 | null | null | 55 | 32217 | I previously asked this on StackOverflow, but it seems like it might be more appropriate here, given that it didn't get any answers on SO. It's kind of at the intersection between statistics and programming.
I need to write some code to do PCA (Principal Component Analysis). I've browsed through the well-known algorithms and implemented [this one](http://en.wikipedia.org/wiki/Principal_component_analysis#Computing_principal_components_with_expectation_maximization), which as far as I can tell is equivalent to the NIPALS algorithm. It works well for finding the first 2-3 principal components, but then seems to become very slow to converge (on the order of hundreds to thousands of iterations). Here are the details of what I need:
- The algorithm must be efficient when dealing with huge numbers of features (order 10,000 to 20,000) and sample sizes on the order of a few hundred.
- It must be reasonably implementable without a decent linear algebra/matrix library, as the target language is D, which doesn't have one yet, and even if it did, I would prefer not to add it as a dependency to the project in question.
As a side note, on the same dataset R seems to find all principal components very fast, but it uses singular value decomposition, which is not something I want to code myself.
| Best PCA algorithm for huge number of features (>10K)? | CC BY-SA 3.0 | null | 2010-09-18T02:08:24.590 | 2019-04-25T07:42:50.480 | 2015-06-09T23:33:35.577 | 7291 | 1347 | [
"pca",
"algorithms",
"model-evaluation",
"high-dimensional"
] |
2807 | 2 | null | 2492 | 68 | null | IMHO normality tests are absolutely useless for the following reasons:
- On small samples, there's a good chance that the true distribution of the population is substantially non-normal, but the normality test isn't powerful enough to pick it up.
- On large samples, things like the T-test and ANOVA are pretty robust to non-normality.
- The whole idea of a normally distributed population is just a convenient mathematical approximation anyhow. None of the quantities typically dealt with statistically could plausibly have distributions with a support of all real numbers. For example, people can't have a negative height. Something can't have negative mass or more mass than there is in the universe. Therefore, it's safe to say that nothing is exactly normally distributed in the real world.
| null | CC BY-SA 2.5 | null | 2010-09-18T02:32:42.093 | 2010-09-18T02:32:42.093 | null | null | 1347 | null |
2808 | 2 | null | 1761 | 3 | null | GD047 linked to a UC Berkeley video on statistics, and I had another one. It's similar to MIT's Open Courseware (maybe a little lower quality). This one is for the Intro Statistics and Probability Class. It unfortunately only has 9 episodes, but it's free :).
[UC Berkeley STATS 20 Webcast](http://webcast.berkeley.edu/course_details_new.php?seriesid=2010-D-87392&semesterid=2010-D)
| null | CC BY-SA 2.5 | null | 2010-09-18T02:43:41.650 | 2010-09-18T02:43:41.650 | null | null | 1118 | null |
2809 | 2 | null | 2777 | 8 | null | I'd recommend taking a look at a couple of papers by Heping Zhang using adaptive splines for modeling longitudinal data:
- Multivariate adaptive splines for analysis of longitudinal data (Free PDF)
- Mixed effects multivariate adaptive splines model for the analysis of longitudinal and growth curve data
In addition, see the [MASAL](http://c2s2.yale.edu/software/masal/) page for software including an R package.
| null | CC BY-SA 2.5 | null | 2010-09-18T02:56:41.683 | 2010-09-19T09:58:00.853 | 2010-09-19T09:58:00.853 | 183 | 251 | null |
2810 | 2 | null | 2806 | 1 | null | See Sam Roweis' paper, [EM Algorithms for PCA and SPCA](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.3330).
| null | CC BY-SA 2.5 | null | 2010-09-18T03:04:07.327 | 2010-09-18T03:04:07.327 | null | null | 251 | null |
2811 | 2 | null | 2777 | 6 | null | It looks to me like [Growth Mixture Models](http://dx.doi.org/10.1007/s10940-007-9036-0) might have potential to allow you to examine your error variance. ([PDF](http://statmodel2.com/download/kreutermuthen2006_34.pdf) here). (I'm not sure what multiplicative heteroscedastic models are, but I will definitely have to check them out).
Latent group based trajectory models have become really popular lately in criminology. But many people simply take for granted that groups actually exist, and some [astute research](http://dx.doi.org/10.1111/j.1745-9125.2010.00185.x) has pointed out that you will find groups even in random data. Also note that Nagin's group-based modelling approach does not allow you to assess your error (and honestly I have never seen a model that would look anything like a discontinuity).
Although it would be difficult with 20 time points, for exploratory purposes creating simple heuristics to identify patterns could be helpful (e.g. always low or always high, coefficient of variation). I'm envisioning sparklines in a spreadsheet or parallel coordinates plots but I doubt they would be helpful (I honestly have not ever seen a parallel coordinate plot that is very enlightening).
Good luck
| null | CC BY-SA 2.5 | null | 2010-09-18T03:45:56.633 | 2010-09-18T03:45:56.633 | null | null | 1036 | null |
2812 | 2 | null | 2806 | 4 | null | It sounds like maybe you want to use the [Lanczos Algorithm](http://en.wikipedia.org/wiki/Lanczos_iteration). Failing that, you might want to consult [Golub & Van Loan.](http://rads.stackoverflow.com/amzn/click/0801854148) I once coded a SVD algorithm (in SML, of all languages) from their text, and it worked reasonably well.
| null | CC BY-SA 2.5 | null | 2010-09-18T04:08:12.430 | 2010-09-18T04:08:12.430 | null | null | 795 | null |
2813 | 2 | null | 2806 | 3 | null | I'd suggest trying [kernel PCA](http://en.wikipedia.org/wiki/Kernel_PCA), which has a time/space complexity dependent on the number of examples (N) rather than the number of features (P), which I think would be more suitable in your setting (P >> N). Kernel PCA basically works with the NxN kernel matrix (the matrix of similarities between the data points), rather than the PxP covariance matrix, which can be hard to deal with for large P. Another good thing about kernel PCA is that it can learn non-linear projections as well if you use it with a suitable kernel. See [this paper on kernel PCA](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.128.7613&rep=rep1&type=pdf).
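If you end up prototyping in R first, a minimal sketch with the kernlab package might look like this (the kernel and its parameters are arbitrary choices, not recommendations):
```
library(kernlab)
n <- 200; p <- 5000                      # many more features than examples
X <- matrix(rnorm(n * p), n, p)
kp <- kpca(X, kernel = "rbfdot", kpar = list(sigma = 1 / p), features = 3)
head(rotated(kp))                        # the data projected onto the first 3 kernel PCs
```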
| null | CC BY-SA 2.5 | null | 2010-09-18T04:25:12.827 | 2010-09-18T04:25:12.827 | null | null | 881 | null |
2814 | 2 | null | 2615 | 2 | null | The proof should be rather simple. Let $C$ be your correlation matrix (it has all ones on the diagonal). You multiply each element on the off diagonal by $(1+k)$. This is equivalent to computing the matrix $\hat{C} = (1+k) C - k \operatorname{diag}(C) = (1+k)C - k I,$ where $\operatorname{diag}$ is the diagonal part of a matrix, which in the case of $C$ is $I$, the identity matrix. It is well known that the eigenvalues of a matrix commute with a polynomial applied to the matrix, and we are computing a polynomial function of $C$, with polynomial function $f(x) = (1+k)x - k$. Thus if $\lambda$ is an eigenvalue of $C$, then $f(\lambda) = (1+k)\lambda - k$ is an eigenvalue of $\hat{C} = f(C)$. Setting $f(\lambda) \le 0$ for $\lambda$ an eigenvalue of $C$ gives the desired condition on $k$.
Note that depending on whether $k$ is positive or negative, you will want to check either $f(\lambda_1)$ or $f(\lambda_n)$, where $\lambda_1, \lambda_n$ are the smallest and largest eigenvalue of $C$. Unless $C$ is the identity matrix, you will have $\lambda_1 \lt 1 \lt \lambda_n$. This is the case because the sum of the eigenvalues equals $n$, the size of the matrix $C$.
The commutativity of eigenvalues and polynomials is easy to check, actually. First if $x, \lambda$ are eigenvector, eigenvalue of matrix $A$, show that $x, c\lambda^j$ are eigenvector, eigenvalue of $c A^j$. This holds for integer $j$, even negative integers or zero. Then show that if $x, \lambda_1$ are eigenvector, -value of $A$ and $x, \lambda_2$ are eigenvector, -value of $B$, then $x, \lambda_1 + \lambda_2$ are eigenvector, -value of $A + B$. From there, one can easily show that $x, c_0 + c_1 \lambda + c_2 \lambda^2 + \ldots c_n \lambda^n$ are eigenvector, -value of the matrix $c_0 I + c_1 A + c_2 A^2 + \ldots c_n A^n$.
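A quick numerical check of the relationship above (an arbitrary 3x3 correlation matrix and an arbitrary k, my own example):
```
C <- matrix(c(1, 0.5, 0.2,
              0.5, 1, 0.3,
              0.2, 0.3, 1), nrow = 3)
k <- 0.4
Chat <- (1 + k) * C - k * diag(3)
eigen(Chat)$values             # eigenvalues of the modified matrix
(1 + k) * eigen(C)$values - k  # f applied to the eigenvalues of C: the same numbers
```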
| null | CC BY-SA 2.5 | null | 2010-09-18T05:03:55.393 | 2010-09-18T20:55:20.910 | 2010-09-18T20:55:20.910 | 795 | 795 | null |
2815 | 2 | null | 2679 | 0 | null | You could model the traffic flow of red cars as a compound Poisson distribution.
Suppose we look at the number N of cars that pass in a fixed time [8 hours, perhaps].
N should have a Poisson distribution, according to your assumptions. Let $\lambda$ be the Poisson parameter for N.
Suppose all of the cars are colorless to begin with. As each car passes, we flip a coin for which P(heads) = $p$. [Presumably $p$ will reflect the proportion of red cars in the population.] Each time the coin lands 'heads' the car is colored red.
Then if M denotes the number of red cars seen in the time period involved, M is also Poisson with parameter $p\lambda$. [Easy homework exercise, or see Feller's book: Intro to Probability Theory$\cdots$, Vol 1, Wiley.]
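A quick simulation check of this thinning property in R, with arbitrary $\lambda$ and $p$:
```
set.seed(1)
lambda <- 10; p <- 0.3
N <- rpois(1e5, lambda)                 # cars passing in the time window
M <- rbinom(1e5, size = N, prob = p)    # the ones coloured red
c(mean(M), var(M))                      # both close to p * lambda = 3, as expected for a Poisson
```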
| null | CC BY-SA 2.5 | null | 2010-09-18T05:07:02.720 | 2010-09-18T05:07:02.720 | null | null | 1112 | null |
2816 | 2 | null | 2770 | 5 | null | It's difficult to go past Piantadosi's [Clinical Trials: A Methodologic Perspective](https://rads.stackoverflow.com/amzn/click/com/0471727814) at least as a starting point. It is quite comprehensive and covers the topics you want and much more - including ethics, history, reporting, fraud, meta-analyses, randomisation, phases, stopping rules etc. - at an advanced undergraduate to early post-grad level.
The few times I was frustrated with it (the 1st edition anyway) were when it lacked sufficient detail - e.g. the derivations for sample size calculations. A comprehensive starting point!
| null | CC BY-SA 4.0 | null | 2010-09-18T08:51:04.823 | 2022-12-06T00:45:33.433 | 2022-12-06T00:45:33.433 | 11887 | 521 | null |
2817 | 2 | null | 2806 | 2 | null | I seem to recall that it is possible to perform PCA by computing the eigen-decomposition of X^TX rather than XX^T and then transforming to get the PCs. I can't remember the details off-hand, but it is in Jolliffe's (excellent) book and I'll look it up when I am next at work. I'd transliterate the linear algebra routines from e.g. Numerical Methods in C, rather than use any other algorithm.
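For what it's worth, here is my own rough sketch of that trick for the many-more-features-than-observations case (assuming rows are observations, so the small matrix is the n x n one; check Jolliffe for the proper treatment):
```
n <- 100; p <- 10000
X <- matrix(rnorm(n * p), n, p)
X <- scale(X, center = TRUE, scale = FALSE)     # centre the columns
G <- X %*% t(X)                                 # small n x n matrix instead of p x p
e <- eigen(G, symmetric = TRUE)
k <- 5                                          # number of components wanted
ev <- e$values[1:k]
scores <- e$vectors[, 1:k] %*% diag(sqrt(ev))               # PC scores (U %*% S)
dirs   <- t(X) %*% e$vectors[, 1:k] %*% diag(1 / sqrt(ev))  # PC directions (columns of V)
```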
| null | CC BY-SA 2.5 | null | 2010-09-18T09:26:40.507 | 2010-09-18T09:26:40.507 | null | null | 887 | null |
2818 | 2 | null | 2748 | 4 | null | Half the battle with many questions is understanding the terminology. Matching implies within-group (or within-pair) correlation. Under appropriate circumstances matching can be dealt with using paired t-tests, conditional logistic regression or mixed effects models.
In survival analysis (or time to event analysis), within group correlation is known as shared [FRAILTY](http://www.demogr.mpg.de/papers/working/wp-2003-032.pdf). [Stata](http://fmwww.bc.edu/RePEc/usug2001/uk7.pdf) covers frailty pretty thoroughly.
An example of the use of Stata for Cox PH with frailty on the UCLA website from Hosmer & Lemeshow's book is [here](http://www.ats.ucla.edu/stat/stata/examples/asa2/chap9.htm).
[Here](http://user.demogr.mpg.de/jwv/pdf/Vaupel-MPS-5-1995-2.pdf) is a paper discussing the concept of shared frailty in twins.
An example is provided with Stata 11. See the 2nd last example [here](http://www.stata.com/help.cgi?stcox). I'll copy from the STATA SURVIVAL ANALYSIS AND EPIDEMIOLOGICAL TABLES REFERENCE MANUAL [ST] (pp141-142).
```
webuse catheter, clear
list in 1/10
+----------------------------------------------------------------+
| patient time infect age female _st _d _t _t0 |
|----------------------------------------------------------------|
1. | 1 16 1 28 0 1 1 16 0 |
2. | 1 8 1 28 0 1 1 8 0 |
3. | 2 13 0 48 1 1 0 13 0 |
4. | 2 23 1 48 1 1 1 23 0 |
5. | 3 22 1 32 0 1 1 22 0 |
|----------------------------------------------------------------|
6. | 3 28 1 32 0 1 1 28 0 |
7. | 4 318 1 31.5 1 1 1 318 0 |
8. | 4 447 1 31.5 1 1 1 447 0 |
9. | 5 30 1 10 0 1 1 30 0 |
10. | 5 12 1 10 0 1 1 12 0 |
+----------------------------------------------------------------+
```
"Consider the data from a study of 38 kidney dialysis patients, as described in McGilchrist and
Aisbett (1991). The study is concerned with the prevalence of infection at the catheter insertion point.
Two recurrence times (in days) are measured for each patient, and each recorded time is the time
from initial insertion (onset of risk) to infection or censoring.
Each patient (patient) has two recurrence times (time) recorded, with each catheter insertion
resulting in either infection (infect==1) or right-censoring (infect==0). Among the covariates
measured are age and sex (female==1 if female, female==0 if male).
```
stset time, fail(infect)
stcox age female, shared(patient)
Cox regression --
Breslow method for ties Number of obs = 76
Gamma shared frailty Number of groups = 38
Group variable: patient
No. of subjects = 76 Obs per group: min = 2
No. of failures = 58 avg = 2
Time at risk = 7424 max = 2
Wald chi2(2) = 11.66
Log likelihood = -181.97453 Prob > chi2 = 0.0029
------------------------------------------------------------------------------
_t | Haz. Ratio Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
age | 1.006202 .0120965 0.51 0.607 .9827701 1.030192
female | .2068678 .095708 -3.41 0.001 .0835376 .5122756
-------------+----------------------------------------------------------------
theta | .4754497 .2673108
------------------------------------------------------------------------------
Likelihood-ratio test of theta=0: chibar2(01) = 6.27 Prob>=chibar2 = 0.006
Note: standard errors of hazard ratios are conditional on theta.
```
From the output, we obtain $\hat{\theta}$ = 0.475, and given the standard error of $\hat{\theta}$ and likelihood-ratio test of H0:$\theta$ = 0, we find a significant frailty effect, meaning that the correlation within patient cannot be ignored."
Me again.
This is similar to a twin study - two measurements in the same unit (i.e. two from one patient or one from each twin)
If we ignore frailty the estimates change, especially the sex effect:
```
stcox age female
Cox regression -- Breslow method for ties
No. of subjects = 76 Number of obs = 76
No. of failures = 58
Time at risk = 7424
LR chi2(2) = 6.67
Log likelihood = -185.10993 Prob > chi2 = 0.0355
------------------------------------------------------------------------------
_t | Haz. Ratio Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
age | 1.002245 .0091153 0.25 0.805 .9845377 1.020271
female | .4499194 .1340786 -2.68 0.007 .2508832 .8068592
------------------------------------------------------------------------------
```
Dummy variable model in response to Andy W's question (below):
```
gen pnum = patient - floor(_n/2)
list in 1/10
+-----------------------------------------------------------------------+
| patient time infect age female pnum _st _d _t _t0 |
|-----------------------------------------------------------------------|
1. | 1 16 1 28 0 1 1 1 16 0 |
2. | 1 8 1 28 0 0 1 1 8 0 |
3. | 2 13 0 48 1 1 1 0 13 0 |
4. | 2 23 1 48 1 0 1 1 23 0 |
5. | 3 22 1 32 0 1 1 1 22 0 |
|-----------------------------------------------------------------------|
6. | 3 28 1 32 0 0 1 1 28 0 |
7. | 4 318 1 31.5 1 1 1 1 318 0 |
8. | 4 447 1 31.5 1 0 1 1 447 0 |
9. | 5 30 1 10 0 1 1 1 30 0 |
10. | 5 12 1 10 0 0 1 1 12 0 |
+-----------------------------------------------------------------------+
stcox age female pnum
Cox regression -- Breslow method for ties
No. of subjects = 76 Number of obs = 76
No. of failures = 58
Time at risk = 7424
LR chi2(3) = 7.63
Log likelihood = -184.63204 Prob > chi2 = 0.0543
------------------------------------------------------------------------------
_t | Haz. Ratio Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
age | 1.001207 .0091269 0.13 0.895 .9834779 1.019257
female | .4259506 .1295227 -2.81 0.005 .2347073 .7730221
pnum | .7639054 .2111721 -0.97 0.330 .4443606 1.313238
------------------------------------------------------------------------------
```
The coefficients are different and the evidence of a sex effect has disappeared.
| null | CC BY-SA 2.5 | null | 2010-09-18T09:49:20.960 | 2010-10-02T08:15:01.607 | 2010-10-02T08:15:01.607 | 521 | 521 | null |
2819 | 1 | 2822 | null | 30 | 14249 | I have p values from a lot of tests and would like to know whether there is actually something significant after correcting for multiple testing. The complication: my tests are not independent. The method I am thinking about (a variant of Fisher's Product Method, Zaykin et al., [Genet Epidemiol](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2569904/), 2002) needs the correlation between the p values.
In order to estimate this correlation, I am currently thinking about bootstrapping cases, running the analyses and correlating the resulting vectors of p values. Does anyone have a better idea? Or even a better idea for my original problem (correcting for multiple testing in correlated tests)?
Background: I am logistically regressing whether or not my subjects are suffering from a particular disease on the interaction between their genotype (AA, Aa or aa) and a covariate. However, the genotype is actually a lot (30-250) of Single Nucleotide Polymorphisms (SNPs), which are certainly not independent but in Linkage Disequilibrium.
| Correcting p values for multiple tests where tests are correlated (genetics) | CC BY-SA 2.5 | null | 2010-09-18T11:21:46.510 | 2012-11-05T10:09:29.377 | 2010-10-09T20:11:09.663 | 449 | 1352 | [
"correlation",
"multiple-comparisons",
"statistical-significance",
"genetics"
] |
2820 | 2 | null | 2819 | 3 | null | Using a method like Bonferroni is fine; the problem is that if you have many tests you are not likely to find many "discoveries".
You can go with the FDR approach for dependent tests (see [here for details](http://en.wikipedia.org/wiki/False_discovery_rate#Dependent_tests)); the problem is that I am not sure whether you can say upfront that your correlations are all positive.
In R you can do simple FDR with p.adjust. For more complex things I would take a look at [multcomp](http://cran.r-project.org/web/packages/multcomp/index.html), but I didn't go through it to see whether it offers solutions for cases with dependencies.
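For what it's worth, a minimal sketch with made-up p-values: method = "BH" controls the FDR under independence or positive dependence, and "BY" under arbitrary dependence.
```
p <- c(0.001, 0.008, 0.02, 0.04, 0.20, 0.50)
p.adjust(p, method = "BH")   # Benjamini-Hochberg
p.adjust(p, method = "BY")   # Benjamini-Yekutieli, valid under arbitrary dependence
```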
Good luck.
| null | CC BY-SA 2.5 | null | 2010-09-18T12:03:38.633 | 2010-09-18T12:03:38.633 | null | null | 253 | null |
2821 | 2 | null | 328 | 1 | null | Oren, it is useful to define what aspects of finance you intend to tackle. Statistics is a tool when seen from the econometrics perspective (in terms of assessing the plausibility of a proposed model / theory) or can be the first or primary line of attack when seen from the machine learning side - that is, you go light on domain knowledge and rely more on constructing a feature space and applying algorithms. (However, the task of constructing a useful feature space depends on deep domain knowledge.)
To get a handle on the theoretical aspects of finance - I'd recommend say:
- Investment Science by Luenberger
- Mathematical Techniques in Finance: Tools for Incomplete Markets by Ales Cerny (perhaps a later read)
- Asset Pricing by Cochrane
To learn how to apply statistics / econometrics coupled with the theory:
- Introductory Econometrics for Finance by Chris Brooks
- Asset Price Dynamics, Volatility, and Prediction by Stephen Taylor
- Econometrics of Financial Markets by Campbell, MacKinlay and Lo
The books recommended above, ones by David Ruppert, Eric Zivot, Ruey Tsay are useful, however, I would recommend Chris Brooks' & Ruppert's texts first, followed by Taylor's.
[Paul Soderlind](http://home.datacomm.ch/paulsoderlind/)'s notes, and [Kevin Sheppard](http://www.kevinsheppard.com/wiki/Main_Page)'s notes (both available online) are quite good.
| null | CC BY-SA 3.0 | null | 2010-09-18T12:21:00.450 | 2012-08-27T15:59:23.840 | 2012-08-27T15:59:23.840 | 7290 | 1349 | null |
2822 | 2 | null | 2819 | 35 | null | This is actually a hot topic in genome-wide association studies (GWAS)! I am not sure the method you are thinking of is the most appropriate in this context. Pooling of p-values has been described by some authors, but in a different context (replication studies or meta-analysis, see e.g. (1) for a recent review). Combining SNP p-values by Fisher's method is generally used when one wants to derive a unique p-value for a given gene; this allows one to work at the gene level and reduces the dimensionality of subsequent testing, but as you said, the non-independence between markers (arising from spatial colocation or linkage disequilibrium, LD) introduces a bias. More powerful alternatives rely on resampling procedures, for example the use of maxT statistics for combining p-values and working at the gene level, or when one is interested in pathway-based approaches; see e.g. (2) (§2.4 p. 93 provides details on their approach).
My main concern with bootstrapping (with replacement) would be that you introduce an artificial form of relatedness, or in other words you create virtual twins, hence altering Hardy-Weinberg equilibrium (but also minor allele frequency and call rate). This would not be the case with a permutation approach, where you permute individual labels and keep the genotyping data as is. Usually, the [plink](http://pngu.mgh.harvard.edu/~purcell/plink/) software can give you raw and permuted p-values, although it uses (by default) an adaptive testing strategy with a sliding window that allows it to stop running all permutations (say 1000 per SNP) if it appears that the SNP under consideration is not "interesting"; it also has an option for computing maxT, see the [online help](http://pngu.mgh.harvard.edu/~purcell/plink/perm.shtml).
But given the low number of SNPs you are considering, I would suggest relying on FDR-based or maxT tests as implemented in the [multtest](http://cran.r-project.org/web/packages/multtest/index.html) R package (see `mt.maxT`). The definitive guide to resampling strategies for genomic applications is [Multiple Testing Procedures with Applications to Genomics](http://www.springer.com/978-0-387-49316-9), by Dudoit & van der Laan (Springer, 2008). See also Andrea Foulkes's book on [genetics with R](http://www.jstatsoft.org/v31/b02/paper), which is reviewed in the JSS; she has great material on multiple testing procedures.
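To make the maxT idea concrete, here is a hand-rolled single-step Westfall-Young permutation sketch on simulated data (the sample size, allele frequency and additive genotype coding below are all made up for illustration; `mt.maxT` or plink's max(T) permutation would be the packaged routes):

```r
set.seed(42)
n_subj <- 200; n_snp <- 30
geno  <- matrix(rbinom(n_subj * n_snp, 2, 0.3), n_subj, n_snp)  # 0/1/2 allele counts
pheno <- rbinom(n_subj, 1, 0.4)                                 # disease status

# Absolute Wald z statistic of the genotype term in a per-SNP logistic model
snp_stat <- function(y, g) {
  fit <- glm(y ~ g, family = binomial)
  abs(coef(summary(fit))["g", "z value"])
}

obs <- apply(geno, 2, function(g) snp_stat(pheno, g))

# Permute the phenotype labels (genotypes, and hence LD, stay intact) and
# record the maximum statistic over all SNPs in each permutation
B <- 200  # small for illustration; use many more in practice
max_null <- replicate(B, {
  y_perm <- sample(pheno)
  max(apply(geno, 2, function(g) snp_stat(y_perm, g)))
})

# Single-step maxT adjusted p-value: how often the permutation maximum
# exceeds each observed statistic
p_maxT <- sapply(obs, function(s) mean(max_null >= s))
round(p_maxT, 3)
```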
Further Notes
Many authors have pointed out that simple multiple testing correction methods such as Bonferroni or Sidak are too stringent for adjusting the results for the individual SNPs. Moreover, neither of these methods takes into account the correlation that exists between SNPs due to LD, which tags the genetic variation across gene regions. Other alternatives have been proposed, like a derivative of Holm's method for multiple comparisons (3), hidden Markov models (4), conditional or positive FDR (5) or derivatives thereof (6), to name a few. So-called gap statistics or sliding-window approaches have proved successful in some cases, but you'll find a good review in (7) and (8).
I've also heard of methods that make effective use of the haplotype structure or LD, e.g. (9), but I have never used them. They seem, however, more related to estimating the correlation between markers than the p-values you meant. In fact, you might better think in terms of the dependency structure between successive test statistics than between correlated p-values.
References
- Cantor, RM, Lange, K and Sinsheimer, JS. Prioritizing GWAS Results: A Review of Statistical Methods and Recommendations for Their Application. Am J Hum Genet. 2010 86(1): 6–22.
- Corley, RP, Zeiger, JS, Crowley, T et al. Association of candidate genes with antisocial drug dependence in adolescents. Drug and Alcohol Dependence 2008 96: 90–98.
- Dalmasso, C, Génin, E and Trégouet DA. A Weighted-Holm Procedure Accounting for Allele Frequencies in Genomewide Association Studies. Genetics 2008 180(1): 697–702.
- Wei, Z, Sun, W, Wang, K, and Hakonarson, H. Multiple Testing in Genome-Wide Association Studies via Hidden Markov Models. Bioinformatics 2009 25(21): 2802-2808.
- Broberg, P. A comparative review of estimates of the proportion unchanged genes and the false discovery rate. BMC Bioinformatics 2005 6: 199.
- Need, AC, Ge, D, Weale, ME, et al. A Genome-Wide Investigation of SNPs and CNVs in Schizophrenia. PLoS Genet. 2009 5(2): e1000373.
- Han, B, Kang, HM, and Eskin, E. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers. PLoS Genetics 2009
- Liang, Y and Kelemen, A. Statistical advances and challenges for analyzing correlated high dimensional SNP data in genomic study for complex diseases. Statistics Surveys 2008 2: 43–60. -- the best recent review ever
- Nyholt, DR. A Simple Correction for Multiple Testing for Single-Nucleotide Polymorphisms in Linkage Disequilibrium with Each Other. Am J Hum Genet. 2004 74(4): 765–769.
- Nicodemus, KK, Liu, W, Chase, GA, Tsai, Y-Y, and Fallin, MD. Comparison of type I error for multiple test corrections in large single-nucleotide polymorphism studies using principal components versus haplotype blocking algorithms. BMC Genetics 2005; 6(Suppl 1): S78.
- Peng, Q, Zhao, J, and Xue, F. PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs. BMC Genetics 2010, 11:6
- Li, M, Romero, R, Fu, WJ, and Cui, Y (2010). Mapping Haplotype-haplotype Interactions with Adaptive LASSO. BMC Genetics 2010, 11:79 -- although not directly related to the question, it covers haplotype-based analysis/epistatic effect
| null | CC BY-SA 2.5 | null | 2010-09-18T14:32:08.507 | 2010-10-06T11:23:22.413 | 2010-10-06T11:23:22.413 | 930 | 930 | null |