Columns (per record, in order): Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
2823
1
null
null
4
291
I've just been given a stack of polling data to analyse. Some of the questions are obviously leading or present subtle incentives (for the poller or polled) for specific answers. Of other questions I'm not so sure, but I have some doubts. I'm also starting to question other factors about how the poll was conducted (environment, privacy, etc.). I'd like to be able to present the analysis with the unreliable data discounted according to some recognised standard. What are actionable rules or methods for assessing whether results are junk based on human factors in data collection? I'd gladly accept specific criteria, or links to resources such as books or web pages on the subject. As I'm not a professional statistician, even being told what terms are used for this field of study and its concerns would help a lot (though maybe best added as comments).
What are good resources/criteria for judging human bias in data collection?
CC BY-SA 2.5
null
2010-09-18T16:14:02.140
2010-09-19T09:30:23.640
2010-09-19T09:30:23.640
183
1343
[ "references", "mathematical-statistics", "survey" ]
2824
1
2853
null
17
4509
A few months ago I posted a question about homoscedasticity tests in R on SO, and Ian Fellows answered that (I'll paraphrase his answer very loosely): Homoscedasticity tests are not a good tool when testing the goodness of fit of your model. With small samples, you don't have enough power to detect departures from homoscedasticity, while with big samples you have "plenty of power", so you are more likely to flag even trivial departures from equality. His great answer came as a slap in my face. I used to check normality and homoscedasticity assumptions each time I ran an ANOVA. What is, in your opinion, best practice when checking ANOVA assumptions?
Checking ANOVA assumptions
CC BY-SA 3.0
null
2010-09-18T17:42:06.037
2016-03-11T12:36:01.467
2016-03-11T12:29:57.890
4253
1356
[ "hypothesis-testing", "anova", "nonparametric", "goodness-of-fit", "heteroscedasticity" ]
2825
2
null
2715
53
null
Keep your analysis reproducible. A reviewer or your boss or someone else will eventually ask you how exactly you arrived at your result - probably six months or more after you did the analysis. You will not remember how you cleaned the data, what analysis you did, why you chose the specific model you used... And reconstructing all this is a pain. Corollary: use a scripting language of some kind, put comments in your analysis scripts, and keep them. What you use (R, SAS, Stata, whatever) is less important than having a completely reproducible script. Reject environments in which this is impossible or awkward.
null
CC BY-SA 3.0
null
2010-09-18T18:15:49.220
2013-06-17T18:03:39.020
2013-06-17T18:03:39.020
22047
1352
null
2826
2
null
2823
1
null
Possibly [Benford's Law](http://en.wikipedia.org/wiki/Benford%27s_law) might help. Check the Application section on the wiki
null
CC BY-SA 2.5
null
2010-09-18T18:33:13.560
2010-09-18T20:43:30.903
2010-09-18T20:43:30.903
795
364
null
2827
2
null
2715
8
null
A good rule of thumb for the number of bins in a histogram: use the square root of the number of data points.
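A minimal R sketch of this rule of thumb, assuming a numeric vector `x` (simulated here for illustration):

```
set.seed(1)
x <- rnorm(200)
k <- ceiling(sqrt(length(x)))   # square-root rule for the number of bins
hist(x, breaks = k)             # R treats `breaks` as a suggestion for roughly k bins
```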
null
CC BY-SA 2.5
null
2010-09-18T19:49:37.593
2010-09-18T19:49:37.593
null
null
438
null
2828
1
2890
null
24
14703
How can we compare the complexity of two models with the same number of parameters? Edit 09/19: To clarify, model complexity is a measure of how hard it is to learn from limited data. When two models fit existing data equally well, a model with lower complexity will give lower error on future data. When approximations are used, this may technically not always be true, but that's OK if it tends to be true in practice. Various approximations give different complexity measures.
Measures of model complexity
CC BY-SA 2.5
null
2010-09-18T20:20:10.293
2017-06-09T02:49:27.430
2010-09-19T22:06:26.193
511
511
[ "model-selection" ]
2829
2
null
527
1
null
I agree with @drnexus. In addition, I might recommend a Morgan-Pitman test for the equality of variances of the two methods. This would tell you if one method has more variance than the other. This in itself might not be a bad thing because presumably the two tests have different bias-variance tradeoffs (for example, one test might always say 50% (biased, but no variance) while the other is unbiased but very noisy). Some domain knowledge might be helpful here in determining how much of a tradeoff you want for your method. Of course, as noted by others, having a 'gold standard' would be much preferred.
null
CC BY-SA 2.5
null
2010-09-18T20:41:05.413
2010-09-18T20:41:05.413
null
null
795
null
2830
2
null
2828
5
null
I think it would depend on the actual model fitting procedure. For a generally applicable measure, you might consider Generalized Degrees of Freedom described in [Ye 1998](http://www.jstor.org/pss/2669609) -- essentially the sensitivity of change of model estimates to perturbation of observations -- which works quite well as a measure of model complexity.
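A hedged sketch of the Monte Carlo perturbation idea behind GDF; the only assumption is a user-supplied `fit_fun(y)` returning fitted values, and the OLS example at the end is just a sanity check where the estimate should come out near the number of coefficients:

```
# Estimate GDF by perturbing the responses and measuring the sensitivity of the fits
gdf <- function(y, fit_fun, tau = 0.5 * sd(y), B = 200) {
  n <- length(y)
  D <- matrix(rnorm(B * n, sd = tau), nrow = B)        # B random perturbation vectors
  Fhat <- t(apply(D, 1, function(d) fit_fun(y + d)))   # refit on each perturbed response
  h <- sapply(seq_len(n), function(i) cov(D[, i], Fhat[, i]) / tau^2)
  sum(h)                                               # estimated generalized degrees of freedom
}

# Sanity check with OLS, where the result should be close to the number of coefficients (2):
set.seed(1)
x <- rnorm(50); y <- 1 + 2 * x + rnorm(50)
gdf(y, function(yy) fitted(lm(yy ~ x)))
```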
null
CC BY-SA 2.5
null
2010-09-18T20:41:57.250
2010-09-18T20:41:57.250
null
null
251
null
2831
2
null
2715
30
null
One rule per answer ;-) Talk to the statistician before conducting the study. If possible, before applying for the grant. Help him/her understand the problem you are studying, get his/her input on how to analyze the data you are about to collect and think about what that means for your study design and data requirements. Perhaps the stats guy/gal suggests doing a hierarchical model to account for who diagnosed the patients - then you need to track who diagnosed whom. Sounds trivial, but it's far better to think about this before you collect data (and fail to collect something crucial) than afterwards. On a related note: do a power analysis before starting. Nothing is as frustrating as not having budgeted for a sufficiently large sample size. In thinking about what effect size you are expecting, remember publication bias - the effect size you are going to find will probably be smaller than what you expected given the (biased) literature.
null
CC BY-SA 2.5
null
2010-09-18T21:07:17.487
2010-09-18T21:07:17.487
null
null
1352
null
2832
2
null
1815
4
null
[Experimental Design for the Life Sciences](http://ukcatalogue.oup.com/product/9780199285112.do), by Ruxton & Colegrave. Aimed primarily at undergraduates.
null
CC BY-SA 2.5
null
2010-09-18T21:07:17.843
2010-09-18T21:07:17.843
null
null
266
null
2833
2
null
2824
11
null
A couple of graphs will usually be much more enlightening than the p value from a test of normality or homoskedasticity. Plot observed dependent variables against independent variables. Plot observations against fits. Plot residuals against independent variables. Investigate anything that looks strange on these plots. If something does not look strange, I would not worry about a significant test of an assumption.
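A minimal sketch of those plots in R, on simulated one-way data (the data frame `d` and its columns are made up for illustration):

```
set.seed(1)
d <- data.frame(group = gl(3, 20))
d$y <- 2 + 0.5 * as.numeric(d$group) + rnorm(60)
fit <- aov(y ~ group, data = d)
plot(d$group, d$y)          # observed response against the independent variable
plot(fitted(fit), d$y)      # observations against fits
plot(d$group, resid(fit))   # residuals against the independent variable
```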
null
CC BY-SA 2.5
null
2010-09-18T21:27:13.180
2010-09-18T21:27:13.180
null
null
1352
null
2834
2
null
527
16
null
The simple correlation approach isn't the right way to analyze results from method comparison studies. There are (at least) two highly recommended books on this topic that I referenced at the end (1,2). Briefly stated, when comparing measurement methods we usually expect that (a) our conclusions should not depend on the particular sample used for the comparison, and (b) measurement error associated with the particular measurement instrument should be accounted for. This precludes any method based on correlations, and we shall turn our attention to variance components or mixed-effects models that allow us to reflect the systematic effect of item (here, item stands for the individual or sample on which data are collected), which results from (a). In your case, you have single measurements collected using two different methods (I assume that none of them can be considered a gold standard) and the very basic thing to do is to plot the differences ($X_1-X_2$) versus the means ($(X_1+X_2)/2$); this is called a [Bland-Altman plot](/questions/tagged/bland-altman-plot). It will allow you to check whether (1) the variations between the two sets of measurements are constant and (2) the variance of the difference is constant across the range of observed values. Basically, this is just a 45° rotation of a simple scatterplot of $X_1$ vs. $X_2$, and its interpretation is close to a plot of fitted vs. residual values used in linear regression. Then, - if the difference is constant (constant bias), you can compute the limits of agreement (see (3)) - if the difference is not constant across the range of measurement, you can fit a linear regression model between the two methods (choose the one you want as predictor) - if the variance of the differences is not constant, try to find a suitable transformation that makes the relationship linear with constant variance Other details may be found in (2), chapter 4. References - Dunn, G (2004). Design and Analysis of Reliability Studies. Arnold. See the review in the International Journal of Epidemiology. - Carstensen, B (2010). Comparing clinical measurement methods. Wiley. See the companion website, including R code. - The original article from Bland and Altman, Statistical methods for assessing agreement between two methods of clinical measurement. - Carstensen, B (2004). Comparing and predicting between several methods of measurement. Biostatistics, 5(3), 399–413.
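A minimal sketch of the Bland-Altman plot described above, on simulated paired measurements (`x1` and `x2` are made up; with real data substitute your two methods):

```
set.seed(42)
x1 <- rnorm(100, 50, 10)
x2 <- x1 + 2 + rnorm(100, 0, 5)                        # second method: constant bias plus noise
d  <- x1 - x2; m <- (x1 + x2) / 2
plot(m, d, xlab = "Mean of the two methods", ylab = "Difference")
abline(h = mean(d))                                    # estimated bias
abline(h = mean(d) + c(-1.96, 1.96) * sd(d), lty = 2)  # approximate 95% limits of agreement
```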
null
CC BY-SA 3.0
null
2010-09-18T21:29:59.203
2016-07-13T08:11:16.300
2016-07-13T08:11:16.300
1352
930
null
2835
2
null
2585
2
null
A slight variation on Jeromy's theme: time on the horizontal axis, price on the vertical axis. Plot multiple lines: one connecting the minimum prices, one connecting the 10% quantiles of prices, one connecting the 25% quantiles of prices. Plot these lines in varying shades of gray: large amounts of product available at that price translate into a black line, small amounts into an almost white line. I like scaling things like these to the interval 0-100, then using the colors named "gray0" to "gray100" in R: [here](https://web.archive.org/web/20101010221443/http://research.stowers-institute.org/efg/R/Color/Chart/ColorChart.pdf). However, this kind of relies on your data not being too irregularly spaced in time. If this is an issue, Jeromy's idea of local quantile regression is appealing. You may be able to tweak the [quantreg](http://stat.ethz.ch/CRAN/web/packages/quantreg/index.html) package in R to help you here.
null
CC BY-SA 4.0
null
2010-09-18T21:46:28.497
2023-03-09T10:11:48.967
2023-03-09T10:11:48.967
362671
1352
null
2836
2
null
2828
3
null
[Minimum Description Length](http://en.wikipedia.org/wiki/Minimum_description_length) may be an avenue worth pursuing.
null
CC BY-SA 2.5
null
2010-09-18T21:50:53.100
2010-09-18T21:50:53.100
null
null
1352
null
2837
2
null
113
2
null
John, I am not sure whether my suggestion will be of help. But, in any case, the book [Intuitive Biostatistics](http://rads.stackoverflow.com/amzn/click/0199730067) by Harvey Motulsky may be of assistance. Chapter 37, 'Choosing a Test', has a pretty good table on page 298 that tells you, given the nature of the data set and the problem you are addressing, what statistical method you should use. Amazon lets you search through this book. Good luck.
null
CC BY-SA 2.5
null
2010-09-19T00:27:18.983
2010-10-08T23:57:02.170
2010-10-08T23:57:02.170
1329
1329
null
2838
2
null
913
1
null
Many have already made excellent suggestions regarding transforming the variables and using robust regression methods. But, when looking at the scatter plot, I observe two separate data sets. One set has a very strong linear relationship where the correlation is a lot higher than the overall 0.6. And, visually it looks like Y = 0.13X. So, when X = 15,000, Y is around 2,000 or so. Thus, a regression line with a similar slope would fit the vast majority of the data points really well. Then, you have a second data set of 300 data points that are wild, seemingly random outliers. I would focus on those 300 outliers. Can you explain them? Are there reasons why they are so far off the regression line? Are those data points only a small fraction of your whole data set? Are they material events you need to keep for your study? Or can you afford to take them out? If you can take them out, you may have a pretty strong regression with a high R Square. You would just have to accept that a small percentage of the time things go wild and your regression model will be off. But, that's the truth of any model you build. If you have to keep those 300 outliers in your overall data set, they will materially affect your regression. And, you will end up with a regression model that does not fit the majority of your data points well. And, it won't fit the outliers either, because they are random and won't fit any regression line.
null
CC BY-SA 2.5
null
2010-09-19T00:52:52.930
2010-09-19T00:52:52.930
null
null
1329
null
2839
2
null
913
0
null
Like the others have said, some sort of transformation is recommended. Your data seems highly clustered, and could be roughly linear, but it's difficult to tell with all the other points around it. Others have suggested trying a log transformation, but it might also be a good idea to try a [Box-Cox Transformation](http://en.wikipedia.org/wiki/Box-Cox_transformation). If the exponent (lambda) it suggests is close to 0, then a log transform is the best choice. All software packages that I know of allow you to do Box-Cox. In R, it's in the MASS package. Here's some information about that: [Doing Box-Cox Transformations in R](http://stat.ethz.ch/R-manual/R-devel/library/MASS/html/boxcox.html) That's not going to give you a perfectly linear fit, but it'll probably make the interpretation of your data a little easier.
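A minimal sketch with `MASS::boxcox()` on simulated positive data (`x` and `y` below are made up; with real data substitute your own response and predictor):

```
library(MASS)
set.seed(1)
x <- runif(100, 1, 10)
y <- exp(0.5 * x + rnorm(100, sd = 0.3))   # roughly log-linear relationship
bc <- boxcox(y ~ x)                        # profile likelihood over the exponent lambda
(lambda <- bc$x[which.max(bc$y)])          # a value near 0 points to a log transform
```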
null
CC BY-SA 2.5
null
2010-09-19T01:47:02.270
2010-09-19T01:47:02.270
null
null
1118
null
2840
2
null
2824
4
null
QQ Plots are pretty good ways to detect non-normality. For homoscedasticity, try Levene's test or a Brown-Forsythe test. Both are similar, though BF is a little more robust. They are less sensitive to non-normality than Bartlett's test, but even still, I've found them not to be the most reliable with small sample sizes. [Q-Q plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot) [Brown-Forsythe test](https://en.wikipedia.org/wiki/Brown%E2%80%93Forsythe_test) [Levene's test](https://en.wikipedia.org/wiki/Levene%27s_test)
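A short sketch of these checks in R on simulated one-way data (the `car` package is assumed to be available for `leveneTest()`):

```
library(car)
set.seed(1)
d <- data.frame(group = gl(3, 20), y = rnorm(60, sd = rep(c(1, 1, 2), each = 20)))
res <- resid(aov(y ~ group, data = d))
qqnorm(res); qqline(res)                          # normality check
leveneTest(y ~ group, data = d, center = mean)    # Levene's test
leveneTest(y ~ group, data = d, center = median)  # Brown-Forsythe variant
```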
null
CC BY-SA 3.0
null
2010-09-19T02:09:59.820
2016-03-11T12:36:01.467
2016-03-11T12:36:01.467
22047
1118
null
2841
2
null
2823
4
null
In regards to the leading questions, here are several options of how I would attempt to investigate whether your suspicions are true; 1 - Conduct your own experiment. One of your conditions will be to mimic the leading questions in the prior surveys, the other condition will be a survey constructed with functionally similar questions and answers but without the leading question(s). Randomly allocate surveys, and differences in answer distributions can be attributed to the different questions (+ sampling error). 2 - Determine if any other surveys have functionally similar questions but are not leading, and look at the distribution of answers for those surveys. The only thing you have to worry about here is the differences in the sample characteristics between your unfair surveys and the fair surveys that could account for some of the observed differences in the answers. 3 - Identify constructs between questions in your unfair survey. If leading questions within that construct have low correlations with other fair items, it could be taken as evidence that the question is not returning the information it should. I'm sure there is some psychological/psychometrics literature on how to ask questions (or about other items such as "environment"). I'm sure it would do you some good to review some of that work. Good luck.
null
CC BY-SA 2.5
null
2010-09-19T03:24:26.727
2010-09-19T03:24:26.727
null
null
1036
null
2842
2
null
2824
4
null
There are some very good web guides to checking the assumptions of ANOVA and what to do if they fail. [Here](http://quality-control-plan.com/StatGuide/oneway_anova_ass_viol.htm) is one. [This](http://homepage.mac.com/bradthiessen/stats/m301/4a.pdf) is another. Essentially your eye is the best judge, so do some [exploratory data analysis](http://en.wikipedia.org/wiki/Exploratory_data_analysis). That means plot the data - histograms and box plots are a good way to assess normality and homoscedasticity. And remember that ANOVA is robust to minor violations of these.
null
CC BY-SA 2.5
null
2010-09-19T04:34:21.487
2010-09-19T04:34:21.487
null
null
521
null
2843
2
null
328
0
null
I like [Risk and Asset Allocation](http://books.google.com/books?id=Qc8KWWtUokcC&lpg=PR1&dq=risk%20and%20asset%20allocation%20meucci&pg=PR1#v=onepage&q&f=false) by A. Meucci. This book is a bit more advanced than Ruppert's book, but still very user-friendly.
null
CC BY-SA 2.5
null
2010-09-19T05:41:03.267
2010-09-19T05:41:03.267
null
null
795
null
2844
1
2847
null
11
8113
Example code:

```
(pc.cr <- princomp(USArrests))
summary(pc.cr)
loadings(pc.cr)  ## note that blank entries are small but not zero
```

I am getting different outputs from each, and I am not sure I understand what the difference is. Here is the output:

```
> summary(pc.cr)
Importance of components:
                           Comp.1      Comp.2      Comp.3       Comp.4
Standard deviation     82.8908472 14.06956001 6.424204055 2.4578367034
Proportion of Variance  0.9655342  0.02781734 0.005799535 0.0008489079
Cumulative Proportion   0.9655342  0.99335156 0.999151092 1.0000000000

> loadings(pc.cr)  ## note that blank entries are small but not zero
...
               Comp.1 Comp.2 Comp.3 Comp.4
SS loadings      1.00   1.00   1.00   1.00
Proportion Var   0.25   0.25   0.25   0.25
Cumulative Var   0.25   0.50   0.75   1.00
```

P.S: how can I get access to the table created by summary(pc.cr)?? (I can't seem to find it in str.)
What is the difference between summary() and loadings() for princomp() object in R?
CC BY-SA 3.0
null
2010-09-19T09:21:07.803
2016-05-24T20:54:37.147
2016-05-24T20:54:37.147
253
253
[ "r", "pca" ]
2845
2
null
1815
2
null
[Experimental Design in Biotechnology](http://rads.stackoverflow.com/amzn/click/0824778812) by Perry D. Haaland, ed Marcel Dekker.
null
CC BY-SA 2.5
null
2010-09-19T09:24:48.077
2010-09-19T09:34:20.480
2010-09-19T09:34:20.480
null
null
null
2846
1
2850
null
13
10140
I have read and seen a lot of Parallel coordinates plots. Can someone answer the following set of questions: - What are parallel coordinates plots (PCP) in simple words, so that a layman can understand? - A mathematical explanation with some intuition if possible - When are PCP useful and when to use them? - When are PCP not useful and when they should be avoided? - Possible advantages and disadvantages of PCP
An easy explanation for the parallel coordinates plot
CC BY-SA 2.5
null
2010-09-19T09:32:28.610
2014-05-04T04:23:06.427
2010-09-19T09:37:17.790
183
1307
[ "r", "data-visualization" ]
2847
2
null
2844
4
null
The first output is the correct and most useful one. Calling `loadings()` on your object just returns a summary where the SS are always equal to 1, hence the % variance is just the SS loadings divided by the number of variables. It makes sense only when using Factor Analysis (like in `factanal`). I never use `princomp` or its SVD-based alternative (`prcomp`), and I prefer the [FactoMineR](http://cran.r-project.org/web/packages/FactoMineR/index.html) or [ade4](http://cran.r-project.org/web/packages/ade4/index.html) packages, which are far more powerful! About your second question, the `summary()` function just returns the SD for each component (`pc.cr$sdev` in your case), and the rest of the table seems to be computed afterwards (through the `print` or `show` method; I didn't investigate this in detail).

```
> getS3method("summary", "princomp")
function (object, loadings = FALSE, cutoff = 0.1, ...)
{
    object$cutoff <- cutoff
    object$print.loadings <- loadings
    class(object) <- "summary.princomp"
    object
}
<environment: namespace:stats>
```

What `princomp()` itself does may be viewed using `getAnywhere("princomp.default")`.
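As a small, hedged illustration of that last point, the table printed by `summary()` can be recomputed directly from the `sdev` component:

```
pc <- princomp(USArrests)
v <- pc$sdev^2
rbind("Standard deviation"     = pc$sdev,
      "Proportion of Variance" = v / sum(v),
      "Cumulative Proportion"  = cumsum(v) / sum(v))
```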
null
CC BY-SA 2.5
null
2010-09-19T10:45:31.837
2010-09-19T11:40:21.617
2010-09-19T11:40:21.617
930
930
null
2848
2
null
1708
3
null
It looks like you are referring to eigenanalysis for SNP data and the article from Nick Patterson, [Population Structure and Eigenanalysis](http://www.plosgenetics.org/article/info%3adoi/10.1371/journal.pgen.0020190) (PLoS Genetics 2006), where the first component explains the largest variance in allele frequency wrt. potential stratification in the sample (due to ethnicity or, more generally, ancestry). So I wonder why you want to consider the first three components, unless they appear to be significant relative to their expected distribution according to the [TW distribution](http://j.mp/cHQnxw). Anyway, in R you can isolate the most informative SNPs (i.e. those that are at the extremes of the successive principal axes) with the `apply()` function, working row-wise, e.g.

```
apply(snp.df, 1, function(x) any(abs(x) > threshold))
```

where `snp.df` stands for the data you show, stored either as a `data.frame` or `matrix` under R, and `threshold` is the value you want to consider (this can be Mean $\pm$ 6 SD, as in Price et al. Nature Genetics 2007 38(8): 904, or whatever value you want). You may also implement the iterative PCA yourself. Finally, the TW test can be implemented as follows:

```
##' Test for the largest eigenvalue in a gaussian covariance matrix
##'
##' This function computes the test statistic and associated p-value
##' for a Wishart matrix focused on individuals (Tracy-Widom distribution).
##'
##' @param C a rectangular matrix of bi-allelic markers (columns) indexed
##' on m individuals. Caution: genotype should be in {0,1,2}.
##' @return test statistic and tabulated p-value
##' @reference \cite{Johnstone:2001}
##' @seealso The RMTstat package provides other interesting functions to
##' deal with Wishart matrices.
##' @example
##' X <- replicate(100, sample(0:2, 20, rep=T))
tw.test <- function(C) {
  m <- nrow(C)  # individuals
  n <- ncol(C)  # markers
  # compute M
  C <- scale(C, scale=F)
  pj <- attr(C, "scaled:center")/2
  M <- C/sqrt(pj*(1-pj))
  # compute X=MM'
  X <- M %*% t(M)
  ev <- sort(svd(X)$d, decr=T)[1:(m-1)]
  nprime <- ((m+1)*sum(ev)^2)/(((m-1)*sum(ev^2))-sum(ev)^2)
  l <- (m-1)*ev[1]/sum(ev)
  # normalize l and compute test statistic
  num <- (sqrt(nprime-1)+sqrt(m))
  mu <- num^2/nprime
  sigma <- num/nprime*(1/sqrt(nprime-1)+1/sqrt(m))^(1/3)
  l <- (l-mu)/sigma
  # estimate associated p-value
  if (require(RMTstat)) pv <- ptw(l, lower.tail=F)
  else pv <- NA
  return(list(stat=l, pval=pv))
}
```
null
CC BY-SA 2.5
null
2010-09-19T11:24:58.573
2010-09-26T20:55:20.310
2010-09-26T20:55:20.310
930
930
null
2849
1
null
null
3
718
I have a static panel data model with small T (T=5), which makes it impossible for me to use Granger causality as it requires a long time span. So my question: - Is there any alternative solution to test for causation even in a small-T context? Any hint will be highly appreciated!
How to test for causation in a static panel data model with small t?
CC BY-SA 2.5
null
2010-09-19T12:46:28.310
2010-11-10T07:40:11.763
2010-11-10T07:40:11.763
930
1251
[ "econometrics", "causality", "panel-data" ]
2850
2
null
2846
6
null
It seems to me that the main function of PCP is to highlight homogeneous groups of individuals, or conversely (in the dual space, by analogy with PCA) specific patterns of association on different variables. It produces an effective graphical summary of a multivariate data set, when there are not too many variables. Variables are automatically scaled to a fixed range (typically, 0–1), which is equivalent to working with standardized variables (to prevent the influence of one variable on the others due to scaling issues), but for a very high-dimensional data set (# of variables > 10), you definitely have to look at other displays, like a [fluctuation plot](http://had.co.nz/ggplot/plot-templates.html) or [heatmap](http://en.wikipedia.org/wiki/Heat_map) as used in microarray studies. It helps answer questions like: - is there any consistent pattern of individual scores that may be explained by specific class membership (e.g. gender difference)? - is there any systematic covariation between scores observed on two or more variables (e.g. low scores observed on variable $X_1$ are always associated with high scores on $X_2$)? In the following plot of the [Iris data](http://en.wikipedia.org/wiki/Iris_flower_data_set), it is clearly seen that species (here shown in different colors) show very discriminant profiles when considering petal length and width, or that Iris setosa (blue) are more homogeneous with respect to their petal length (i.e. their variance is lower), for example. ![alt text](https://i.stack.imgur.com/xKvQv.png) You can even use it as a backend to classification or dimension reduction techniques, like PCA. Most often, when performing a PCA, in addition to reducing the feature space you also want to highlight clusters of individuals (e.g. are there individuals who systematically score higher on some combination of the variables); this is usually done by applying some kind of hierarchical clustering on the factor scores and highlighting the resulting cluster membership in the factorial space (see the [FactoClass](http://cran.r-project.org/web/packages/FactoClass/index.html) R package). It is also used in clustergrams ([Visualizing non-hierarchical and hierarchical cluster analyses](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.1405&rep=rep1&type=pdf)), which aim at examining how cluster allocation evolves when increasing the number of clusters (see also, [What stop-criteria for agglomerative hierarchical clustering are used in practice?](https://stats.stackexchange.com/questions/2597/what-stop-criteria-for-agglomerative-hierarchical-clustering-are-used-in-practice/2609#2609)). Such displays are also useful when linked to usual scatterplots (which by construction are restricted to 2D relationships); this is called brushing, and it is available in the [GGobi](http://www.ggobi.org/) data visualization system, or the [Mondrian](http://stats.math.uni-augsburg.de/Mondrian/) software.
null
CC BY-SA 2.5
null
2010-09-19T12:57:26.100
2010-09-19T16:55:34.567
2017-04-13T12:44:40.807
-1
930
null
2851
2
null
2446
2
null
Concerning your more specific question (i.e. how many degrees of freedom): the question is how many replicates you have. Look at the early pages of chapter 19 of [the R book](http://rads.stackoverflow.com/amzn/click/0470510242) for examples and guidelines for such accounting. We could do the accounting here, but I don't understand the design of your experiment (probably due to a difference in vocabulary; it would be easier if you explained it in formal (i.e. mathematical) notation, taking care to define the indices). You might also want to check the following paper: Hurlbert, S.H. (1984) Pseudoreplication and the design of ecological field experiments. Ecological Monographs, 54, 187–211.
null
CC BY-SA 2.5
null
2010-09-19T13:21:39.003
2010-09-21T20:33:56.847
2010-09-21T20:33:56.847
603
603
null
2852
1
null
null
1
5555
The biological data are listed as follows:

```
   V1    V2    V3    V4    V5    V6
0.064 0.014 0.016 0.012 0.013 0.023
0.056 0.000 0.000 0.008 0.010 0.000
0.042 0.014 0.024 0.008 0.017 0.023
0.031 0.014 0.016 0.008 0.013 0.023
0.068 0.000 0.008 0.004 0.020 0.000
0.081 0.000 0.000 0.004 0.010 0.000
0.060 0.014 0.016 0.006 0.010 0.023
```

or you can download the data from [http://www.mediafire.com/?6yp9l9m47jv433a](http://www.mediafire.com/?6yp9l9m47jv433a).

```
A <- dat[,1]
B <- dat[,2:6]
```

I want to compare the first column with the other columns of the data. Because only dat[,2] and dat[,6] are not normally distributed, I used the wilcox.test function instead of t.test in R. But a warning message was raised: "In wilcox.test.default(A, B[, 1]) : cannot compute exact p-value with ties". Could you give me some suggestions? Thank you.

```
> wilcox.test(A, B[,1])

        Wilcoxon rank sum test with continuity correction

data:  A and B[, 1]
W = 49, p-value = 0.00184
alternative hypothesis: true location shift is not equal to 0

Warning message:
In wilcox.test.default(A, B[, 1]) : cannot compute exact p-value with ties
```
How to analyze these data?
CC BY-SA 2.5
null
2010-09-19T14:02:26.617
2010-09-20T11:29:59.137
2010-09-20T06:33:04.833
930
null
[ "r", "hypothesis-testing" ]
2853
2
null
2824
12
null
In applied settings it is typically more important to know whether any violation of assumptions is problematic for inference. Assumption tests based on significance tests are rarely of interest in large samples, because most inferential tests are robust to mild violations of assumptions. One of the nice features of graphical assessments of assumptions is that they focus attention on the degree of violation and not the statistical significance of any violation. However, it's also possible to focus on numeric summaries of your data which quantify the degree of violation of assumptions and not the statistical significance (e.g., skewness values, kurtosis values, ratio of largest to smallest group variances, etc.). You can also get standard errors or confidence intervals on these values, which will get smaller with larger samples. This perspective is consistent with the general idea that statistical significance is not equivalent to practical importance.
null
CC BY-SA 2.5
null
2010-09-19T14:44:17.667
2010-09-19T14:44:17.667
null
null
183
null
2854
1
5646
null
5
1274
Dear all, I was encouraged to ask this question here as well as on stackoverflow and would be very appreciative of any answers... Due to heteroscedasticity I'm doing bootstrapped linear regression (appeals more to me than robust regression). I'd like to create a plot along the lines of what I've done in the script here. However the `fill=int` is not right since `int` should (I believe) be calculated using a bivariate normal distribution. - Any idea how I could do that in this setting? - Also is there a way for bootcov to return bias-corrected percentiles? Sample script:

```
library(ggplot2)
library(plyr)     # for dlply()/llply() (added; assumed to be the intended source)
library(reshape)  # for melt() (added; assumed to be the intended source)
library(Hmisc)
library(Design)   # for ols()

o <- data.frame(value=rnorm(10,20,5),
                bc=rnorm(1000,60,50),
                age=rnorm(1000,50,20),
                ai=as.factor(round(runif(1000,0,4),0)),
                Gs=as.factor(round(runif(1000,0,6),0)))

reg.s <- function(x){
  ols(value~as.numeric(bc)+as.numeric(age),data=x,x=T,y=T)->temp
  bootcov(temp,B=1000,coef.reps=T)->t2
  return(t2)
}

dlply(o,.(ai,Gs),function(x) reg.s(x))->b.list
llply(b.list,function(x) x[["boot.Coef"]])->b2
ks <- llply(names(b2),function(x){
  s <- data.frame(b2[[x]])
  s$ai <- x
  return(s)
})
ks3 <- do.call(rbind,ks)
ks3$ai2 <- with(ks3,substring(ai,1,1))
ks3$gc2 <- sapply(strsplit(as.character(ks3$ai), "\\."), "[[", 2)
k <- ks3

j <- dlply(k,.(ai2,gc2),function(x){
  i1 <- quantile(x$Intercept,probs=c(0.025,0.975))[1]
  i2 <- quantile(x$Intercept,probs=c(0.025,0.975))[2]
  j1 <- quantile(x$bc,probs=c(0.025,0.975))[1]
  j2 <- quantile(x$bc,probs=c(0.025,0.975))[2]
  o <- x$Intercept>i1 & x$Intercept<i2
  p <- x$bc>j1 & x$bc<j2
  h <- o & p
  return(h)
})
m <- melt(j)
ks3$int <- m[,1]

ggplot(ks3,aes(x=bc,y=Intercept,fill=int)) +
  geom_point(alpha=0.3,size=1,shape=21) +
  facet_grid(gc2~ai2,scales = "free_y") + theme_bw() -> plott
plott <- plott + opts(panel.grid.minor=theme_blank(),panel.grid.major=theme_blank())
plott <- plott + geom_vline(x=0,color="red")
plott + xlab("BC coefficient") + ylab("Intercept")
```
Calculating probability for bivariate normal distributions based on bootstrapped regression coefficients
CC BY-SA 2.5
null
2010-09-19T15:00:04.550
2010-12-24T11:45:34.523
2010-12-19T17:06:45.113
449
1291
[ "r", "bootstrap", "heteroscedasticity", "ggplot2" ]
2855
2
null
2010
2
null
Almost all statistics implicitly condition on N. We treat N as a constant that can come out from the expression $\mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N}{x_i}\right]$, for example. For that to be appropriate, N has to be a fixed value, which we get by conditioning. Without conditioning on N, as you said, we'd need to know the distribution of N, which is kind of strange to think about ("what are my chances of selecting each value of N for my study?"). It is important to assume that the $x_i$ are independent of N, however. This might be violated if, for example, a study has \$100,000 to spend on its subjects and will spend it all no matter how many people are involved. If you are looking at the impact of this spending, then the spending-per-person declines as the number of people increases and N is correlated with your treatment (i.e., spending-per-person).
null
CC BY-SA 2.5
null
2010-09-19T16:41:11.350
2010-09-19T16:41:11.350
null
null
401
null
2856
2
null
2846
4
null
In regards to questions 3, 4, and 5, I would suggest you check out this work: [Perceiving patterns in parallel coordinates: determining thresholds for identification of relationships by: Jimmy Johansson, Camilla Forsell, Mats Lind, Matthew Cooper in Information Visualization, Vol. 7, No. 2. (2008), pp. 152-162.](http://dx.doi.org/10.1057/palgrave.ivs.9500166) To sum up their findings, people are OK at identifying the direction of the slope of the relationship between each node, but aren't that good at identifying the strength of the relationship or the degree of the slope. In the article they give suggested levels of noise at which people can still decipher the relationship. Unfortunately the article does not discuss identifying subgroups via color like chl demonstrates.
null
CC BY-SA 2.5
null
2010-09-19T17:10:15.873
2010-09-19T17:28:17.197
2010-09-19T17:28:17.197
1036
1036
null
2857
2
null
2852
1
null
For most of your variables (e.g. `V2`), some observations have identical values, hence the warning message thrown by R: unique ranks cannot be computed for all observations, and there are ties, precluding the computation of an exact p-value. For your variable named `V2`, there are in fact only two distinct values (out of 7), so I am very puzzled by the approach you took to analyze your data. With such a high number of tied data, I would not trust any Wilcoxon test. Moreover, in most non-parametric tests we assume that the sampled populations are symmetric and have the same dispersion or shape, which is hardly verifiable in your case. Thus, I think a permutation test would be more appropriate in your case, see e.g. `permTS` ([perm](http://cran.r-project.org/web/packages/perm/index.html)), `pperm` ([exactRankTests](http://cran.r-project.org/web/packages/exactRankTests/index.html)), or the [coin](http://cran.r-project.org/web/packages/coin/index.html) package.
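A hedged sketch of a hand-rolled two-sample permutation test on the difference in means, as an alternative to the packages above (`a` and `b` are the first and second columns of the data shown in the question):

```
perm_test <- function(a, b, B = 9999) {
  obs <- mean(a) - mean(b)
  z <- c(a, b); n <- length(a)
  perm <- replicate(B, { idx <- sample(length(z), n); mean(z[idx]) - mean(z[-idx]) })
  mean(abs(c(obs, perm)) >= abs(obs))   # two-sided p-value, observed statistic included
}
a <- c(0.064, 0.056, 0.042, 0.031, 0.068, 0.081, 0.060)  # first column
b <- c(0.014, 0.000, 0.014, 0.014, 0.000, 0.000, 0.014)  # second column
perm_test(a, b)
```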
null
CC BY-SA 2.5
null
2010-09-19T17:36:12.660
2010-09-19T17:36:12.660
null
null
930
null
2858
2
null
2846
4
null
Please visit [http://www.cs.tau.ac.il/~aiisreal/](http://www.cs.tau.ac.il/~aiisreal/) and also look at the new book Parallel Coordinates (www.springer.com/math/cse/book/978-0-387-21507-5). This book is about visualization, systematically incorporating the fantastic human pattern recognition into the problem-solving process. In Ch. 10 there are lots of real examples with multivariate data showing how parallel coordinates (abbr. ||-cs) can be used. It is also worth learning some of the math to visualize and work with multivariate/multidimensional relations (surfaces) and not just point sets. It is fun seeing and working with the analogues of familiar objects in many dimensions, e.g. the Moebius strip, convex sets and more. In short, ||-cs are a multidimensional coordinate system where the axes are parallel to each other, allowing lots of axes to be seen. The methodology has been applied to conflict-resolution algorithms in Air Traffic Control, Computer Vision, Process Control and Decision Support.
null
CC BY-SA 2.5
null
2010-09-19T17:57:05.217
2010-09-19T17:57:05.217
null
null
1366
null
2859
2
null
2852
5
null
Sometimes a formal statistical test is overkill. Row by row, the entries in the first column are the largest. Draw a picture to make this apparent: side-by-side boxplots or dotplots would work nicely. Although this is a post-hoc comparison, if the initial intent had been to compare the first column against the rest for a shift in distribution, the most extreme characterizations would be that either all maxima or all minima occur in the first column (a two-sided test). The chance of this occurring by chance, if all columns contained values drawn at random from a common distribution, would be $2 (\frac{1}{6})^7$ = about 0.0007%. In fact, the first column contains the largest 7 of the 42 values. Again, ex post facto, the chance of such an extreme ordering occurring equals $\frac{2}{42 \choose 7}$ = about 0.000007%. These results indicate that any reasonably powerful test you choose to conduct will conclude there's a highly significant difference. In any event, you don't need a p-value; you need to characterize how large the difference is (the right way to do this depends on what the data mean) and you need to seek an explanation for the difference.
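For the record, a quick check of the two probabilities quoted above:

```
2 * (1/6)^7        # all 7 row maxima (or all minima) in the first column: about 7.1e-06
2 / choose(42, 7)  # the largest 7 of the 42 values all in the first column: about 7.4e-08
```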
null
CC BY-SA 2.5
null
2010-09-19T18:11:38.943
2010-09-19T18:11:38.943
null
null
919
null
2860
1
null
null
5
3572
We know that the projection matrix learned by PCA can be applied to out-of-sample data points to get their low-dimensional embedding. However, how reliable are these embeddings expected to be, as compared to the embedding obtained from PCA with these out-of-sample points combined with the original data? Consider this hypothetical pseudo out-of-sample setting: Let's say I have 1000 data points and I want to do PCA on them. Can I instead do PCA on maybe just 500 of them (in order to have some computational savings) and then use the learned projection matrix to get the embedding of the rest of the points as well (by treating them as out-of-sample data)?
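A minimal sketch of this pseudo out-of-sample setting with `prcomp()`; the simulated matrix `X` is just a stand-in for your 1000 points:

```
set.seed(1)
X <- matrix(rnorm(1000 * 10), ncol = 10)
colnames(X) <- paste0("v", 1:10)
idx <- sample(1000, 500)
pc_half <- prcomp(X[idx, ], center = TRUE, scale. = TRUE)   # PCA on the 500 in-sample points
emb_out <- predict(pc_half, newdata = X[-idx, ])            # project the 500 held-out points
pc_full <- prcomp(X, center = TRUE, scale. = TRUE)          # PCA on all 1000, for comparison
```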
PCA on out-of-sample data
CC BY-SA 2.5
null
2010-09-19T18:53:35.410
2022-05-03T12:10:00.487
2010-09-21T12:01:56.170
183
881
[ "machine-learning", "pca", "dimensionality-reduction" ]
2861
2
null
2615
0
null
I am not sure what the real question is, but suppose instead of changing every non-diagonal element, you changed just 2 (to keep the resulting matrix symmetric). That is let $\hat{C}$ be $C$ with $\hat{C_{i,j}} = C_{i,j} + \Delta C / 2= \hat{C_{j,i}},$ for some choice of $i,j$ with $i \ne j$. (alternatively, imagine $\Delta C$ is added to $C_{i,j}$ only, and so $\hat{C}$ is no longer symmetric.) I will consider the question "how small can $\Delta C$ be before $\hat{C}$ is no longer PSD?" This question is easily solved as well, but the answer is not enlightening in my view. Let $\lambda_k, x^{(k)}$ be eigenvalue and eigenvector of $C$, where the eigenvector has unit norm. $\hat{C}$ is no longer PSD if there is index $k$ such that $\lambda_k + \Delta C x^{(k)}_i x^{(k)}_j < 0$. This can only hold if $x^{(k)}_i x^{(k)}_j < 0$, in which case we have $\Delta C > -\lambda_k / x^{(k)}_i x^{(k)}_j$. Compute the RHS for each $k$ for which the negativity condition holds and take the minimum, and that gives you the sufficient condition on $\Delta C.$
null
CC BY-SA 2.5
null
2010-09-19T20:09:26.417
2010-09-20T01:50:20.380
2010-09-20T01:50:20.380
795
795
null
2863
1
2864
null
6
13782
I want to assess item-total correlations on a 19-item questionnaire (some of the questions are meant to be reverse-scored). My question is: - Do I reverse score the items PRIOR to calculating the item-total correlations (in order to eliminate any variables that do not correlate with the total at >.40)? - Additionally, should the items be reverse-scored prior to running a factor analysis?
Should I reverse score items before running reliability analyses (item-total correlation) and factor analysis?
CC BY-SA 3.0
null
2010-09-19T22:35:50.730
2020-02-29T20:40:23.440
2011-06-06T13:54:23.937
183
null
[ "correlation", "factor-analysis", "reliability" ]
2864
2
null
2863
5
null
Yes, you should reverse score all items as needed to ensure that a particular score means the same thing on all items. You should do this for all types of analysis. For example, suppose you have 'propensity to shoplift' measured via 3 items on a scale of 1 to 5 (where 1 is low propensity to shoplift and 5 is high). Suppose that you reversed item 1 on the survey so that 1 is high and 5 is low. Then you should reverse the score for item one so that 5 means the same thing across all three items (i.e., 5 is high propensity to shoplift).
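A minimal sketch of reverse scoring and corrected item-total correlations in R, assuming a data frame `items` of 1-5 responses and a vector `rev_items` naming the reversed items (both made up here):

```
set.seed(1)
items <- as.data.frame(matrix(sample(1:5, 200 * 3, replace = TRUE), ncol = 3,
                              dimnames = list(NULL, c("i1", "i2", "i3"))))
rev_items <- "i1"
items[rev_items] <- 6 - items[rev_items]   # reverse on a 1-5 scale: 1<->5, 2<->4
total <- rowSums(items)
sapply(names(items), function(j) cor(items[[j]], total - items[[j]]))  # corrected item-total r
```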
null
CC BY-SA 2.5
null
2010-09-19T22:56:29.170
2010-09-19T22:56:29.170
null
null
null
null
2872
2
null
195
2
null
I have been told many times that the Anderson-Darling (AD) test is much better than the Kolmogorov-Smirnov (KS) one because AD does a better job at fitting the tails of the distribution. KS is only good at fitting the mid-range of the distribution, but is not better than AD even in this regard. I think the main advantage of the KS test is its very intuitive visual interpretation (fitting of the respective cumulative distributions). Because of the KS test's easy visual and intuitive interpretation, it has become dominant in certain specialties such as credit scoring models within the financial service industry. But more visually intuitive does not mean better. When using Monte Carlo simulation models that automatically fit a statistical distribution to a data set, their respective software manuals typically recommend leaning more on the AD than the KS test for the reason mentioned above (it fits the tails better).
null
CC BY-SA 2.5
null
2010-09-20T00:29:57.047
2010-09-20T00:29:57.047
null
null
1329
null
2873
2
null
2860
1
null
I have never done this, but my intuition suggests that the answer would depend on the extent to which the covariance matrix for the 500 data points is 'different' from that of the out-of-sample data. If the out-of-sample covariance matrix is very different, then clearly the projection matrix for those points would differ from the projection matrix that emerges from the in-sample data. Thus, to the extent that the covariance matrices for the in-sample and out-of-sample data are 'similar', the results should be about the same. The above intuition suggests that you should carefully select the 500 in-sample points so that the resulting covariance matrices are as close to identical as possible for the in-sample and the out-of-sample data.
null
CC BY-SA 2.5
null
2010-09-20T00:38:48.693
2010-09-20T00:38:48.693
null
null
null
null
2875
1
null
null
3
261
My friend and I are working on a project on distributed datastructures. We were wondering how much is nearest neighbor information used in modern recommendation systems and whether it would be worthwhile to work on a distributed datastructure (say a kd-tree) for that purpose. Thanks
Nearest neighbor information for recommendation engines
CC BY-SA 2.5
null
2010-09-20T01:19:26.897
2013-08-20T00:03:06.297
2013-08-20T00:03:06.297
22468
250
[ "k-nearest-neighbour", "recommender-system" ]
2877
2
null
2860
2
null
This isn't unlike a model selection problem where the goal is to arrive at something close to the "true dimensionality" of the data. You could try a cross validation approach, say 5-fold CV with your 500 data points. This will give you a reasonable metric of generalization error for out-of-sample data. The following paper has a nice survey and review of related methods: - Cross-validation methods in principal component analysis: A comparison (Diana, Tommassi, 2002)
null
CC BY-SA 4.0
null
2010-09-20T03:38:03.447
2022-05-03T12:10:00.487
2022-05-03T12:10:00.487
79696
251
null
2878
2
null
2863
9
null
Reliability Analysis: Yes, you should reverse score the reversed items. Factor Analysis: It does not matter so much. Eigenvalues and associated indices (e.g., variance explained by factors, rules of thumb regarding number of factors to extract, etc.) should be the same. The sign of factor loadings will flip based on whether you reverse reversed items.
null
CC BY-SA 2.5
null
2010-09-20T04:07:18.163
2010-09-20T04:07:18.163
null
null
183
null
2883
2
null
2061
4
null
[BIOSTATISTICS VS. LAB RESEARCH](http://www.xtranormal.com/watch/6878253/): A funny/sad video on statistics consulting.
null
CC BY-SA 2.5
null
2010-09-20T06:05:15.013
2010-09-20T10:34:39.257
2010-09-20T10:34:39.257
183
183
null
2884
2
null
2852
0
null
Thank you very much, chl, whuber and Gaetan Lion. But do you think there is any problem if I instead use the Kruskal-Wallis test to assess the difference among the columns, rather than comparing the first column with each of the other columns?

```
> kruskal.test(as.list(Data))

        Kruskal-Wallis rank sum test

data:  as.list(Data)
Kruskal-Wallis chi-squared = 19.9149, df = 5, p-value = 0.001297

> kruskal.test(as.list(Data[,2:6]))

        Kruskal-Wallis rank sum test

data:  as.list(Data[, 2:6])
Kruskal-Wallis chi-squared = 3.8242, df = 4, p-value = 0.4303
```

The result also shows that the 1st column differs greatly from the other columns. Is that right?
null
CC BY-SA 2.5
null
2010-09-20T06:14:15.227
2010-09-20T11:29:59.137
2010-09-20T11:29:59.137
null
null
null
2885
2
null
2828
5
null
Minimum Description Length (MDL) and Minimum Message Length (MML) are certainly worth checking out. As far as MDL is concerned, a simple paper that illustrates the Normalized Maximum Likelihood (NML) procedure as well as the asymptotic approximation is: > S. de Rooij & P. Grünwald. An empirical study of minimum description length model selection with infinite parametric complexity. Journal of Mathematical Psychology, 2006, 50, 180-192 Here, they look at the model complexity of a Geometric vs. a Poisson distribution. An excellent (free) tutorial on MDL can be found [here](http://www.cwi.nl/~pdg/ftp/mdlintro.pdf). Alternatively, a paper on the complexity of the exponential distribution examined with both MML and MDL can be found [here](http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F18%2F5075865%2F05075876.pdf%3Farnumber%3D5075876&authDecision=-203). Unfortunately, there is no up-to-date tutorial on MML, but the [book](http://rads.stackoverflow.com/amzn/click/038723795X) is an excellent reference, and highly recommended.
null
CC BY-SA 2.5
null
2010-09-20T06:20:50.417
2010-09-20T06:20:50.417
null
null
530
null
2886
1
2887
null
8
10898
As title, I am thinking of merging both into "missing data", which is to name it as NA in R. Since I don't see it will make much sense (or even any sense), to separate the "don't know" row out and to compare the information with other rows. Is it OK for me to do so?
How will you deal with "don't know" and "missing data" in survey data?
CC BY-SA 4.0
null
2010-09-20T06:44:20.220
2019-07-25T10:10:37.353
2019-07-25T10:10:37.353
11887
588
[ "multivariate-analysis", "missing-data", "survey" ]
2887
2
null
2886
12
null
Well, you should also consider that "don't know" is at least some kind of answer, whereas non-response is a purely missing value. Now, we often allow for a "don't know" response in surveys just to avoid forcing people to provide a response anyway (which might bias the results). For example, in the National Health and Nutrition Examination Survey, they are coded differently but subsequently discarded from the analysis. You could try analyzing the data both ways: (1) treating "don't know" as a specific response category and handling the whole set of responses with some kind of multivariate data analysis (e.g. [multiple correspondence analysis](http://en.wikipedia.org/wiki/Multiple_correspondence_analysis) or multiple factor analysis for mixed data, see the [FactoMineR](http://cran.r-project.org/web/packages/FactoMineR/index.html) package), and (2) if it doesn't bring any evidence of distortion in the item distributions, just merge it with missing values. For (2), I would also suggest you check that "don't know" and MV are at least missing at random (MAR), or that they are not specific to one respondent group (e.g. male/female, age class, SES, etc.).
null
CC BY-SA 2.5
null
2010-09-20T07:08:01.583
2010-09-20T07:08:01.583
null
null
930
null
2888
2
null
1856
8
null
I haven't seen this used outside of bioinformatics/machine learning either, but maybe you can be the first one :) As a good representative of small-sample methods from bioinformatics, logistic regression with L1 regularization can give a good fit when the number of parameters is exponential in the number of observations, and non-asymptotic confidence intervals can be crafted using Chernoff-type inequalities (see, e.g., [Dudik (2004)](http://www2.research.att.com/~phillips/pdf/colt_2004.pdf)). Trevor Hastie has done some work applying these methods to identifying gene interactions. In the paper below, he uses it to identify significant effects from a model with 310,637 adjustable parameters fit to a sample of 2200 observations. "Genome-wide association analysis by lasso penalized logistic regression." Authors: Hastie, T; Sobel, E; Wu, T. T; Chen, Y. F; Lange, K. Bioinformatics Vol: 25 Issue: 6 ISSN: 1367-4803 Date: 03/2009 Pages: 714 - 721. Related presentation by Victoria Stodden ([Model Selection with Many More Variables than Observations](http://www.stanford.edu/~vcs/talks/MicrosoftMay082008.pdf))
null
CC BY-SA 2.5
null
2010-09-20T07:29:51.983
2010-10-01T15:56:38.640
2010-10-01T15:56:38.640
511
511
null
2889
2
null
2886
2
null
It depends on the type of question/response in your survey. If they are like "I like", "I dislike", "Don't know", chl's answer partially addresses your question. The first solution is chl's answer: you have to check whether "Don't know" hides anything, analysing these values separately to see if they highlight a specific profile of respondents. I'm no expert on imputation, but "French" software does it for MCA, often under a MAR assumption. It supposes that these answers are randomly distributed (you randomly pick another response category). You can also use a more sophisticated approach: if "Like" is at 30% and "Dislike" at 70%, you pick a uniform random number on (0,1) and choose "Like" if your number is at or below 0.3. If you pick a number between 0.3 and 1, you choose "Dislike". A more modern approach is multiple imputation (see the MI procedure in SAS and the mice package in R). Imputation is very efficient... But it can't recreate atypical profiles... If you're working in educational testing or if you need to compute a score, let me know and I will complete this answer regarding score estimation. Ref: Multiple Imputation for Nonresponse in Surveys, Rubin (1987). Wiley. mice package: [http://cran.r-project.org/web/packages/mice/index.html](http://cran.r-project.org/web/packages/mice/index.html) Survey Methodology, Robert M. Groves, Floyd J. Fowler & al. Wiley.
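A hedged sketch of the simple proportional random draw described above (the factor `resp` is simulated here; this is only an illustration, not the multiple-imputation route):

```
set.seed(1)
resp <- factor(sample(c("Like", "Dislike", NA), 200, replace = TRUE, prob = c(0.25, 0.6, 0.15)),
               levels = c("Like", "Dislike"))
p_like <- mean(resp == "Like", na.rm = TRUE)        # observed proportion of "Like"
na_idx <- which(is.na(resp))
resp[na_idx] <- ifelse(runif(length(na_idx)) <= p_like, "Like", "Dislike")
table(resp)
```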
null
CC BY-SA 2.5
null
2010-09-20T07:31:33.540
2010-09-20T07:39:17.980
2010-09-20T07:39:17.980
930
1154
null
2890
2
null
2828
13
null
Besides the various measures of Minimum Description Length (e.g., normalized maximum likelihood, Fisher Information approximation), there are two other methods worth mentioning: - Parametric Bootstrap. It's a lot easier to implement than the demanding MDL measures. A nice paper is by Wagenmakers and colleagues: Wagenmakers, E.-J., Ratcliff, R., Gomez, P., & Iverson, G. J. (2004). Assessing model mimicry using the parametric bootstrap. Journal of Mathematical Psychology, 48, 28-50. The abstract: We present a general sampling procedure to quantify model mimicry, defined as the ability of a model to account for data generated by a competing model. This sampling procedure, called the parametric bootstrap cross-fitting method (PBCM; cf. Williams (J. R. Statist. Soc. B 32 (1970) 350; Biometrics 26 (1970) 23)), generates distributions of differences in goodness-of-fit expected under each of the competing models. In the data informed version of the PBCM, the generating models have specific parameter values obtained by fitting the experimental data under consideration. The data informed difference distributions can be compared to the observed difference in goodness-of-fit to allow a quantification of model adequacy. In the data uninformed version of the PBCM, the generating models have a relatively broad range of parameter values based on prior knowledge. Application of both the data informed and the data uninformed PBCM is illustrated with several examples. Update: Assessing model mimicry in plain English. You take one of the two competing models and randomly pick a set of parameters for that model (either data informed or not). Then, you produce data from this model with the picked set of parameters. Next, you let both models fit the produced data and check which of the two candidate models gives the better fit. If both models are equally flexible or complex, the model from which you produced the data should give a better fit. However, if the other model is more complex, it could give a better fit even though the data were not produced from it. You repeat this several times with both models (i.e., let both models produce data and see which of the two fits better). The model that "overfits" the data produced by the other model is the more complex one. - Cross-Validation: It is also quite easy to implement. See the answers to this question. However, note that the issue with it is that the choice of sample-splitting rule (leave-one-out, K-fold, etc.) is an unprincipled one.
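As a hedged, minimal illustration of the data-informed PBCM idea, with two toy models (exponential vs. log-normal, fitted by closed-form maximum-likelihood-style estimates) standing in for the competing models:

```
set.seed(1)
y <- rexp(50, rate = 0.5)   # observed data (simulated here as a stand-in)

# log-likelihood of each model at its fitted parameters (the log-normal sd uses sd(), so ML-ish)
ll_exp   <- function(x) sum(dexp(x, rate = 1/mean(x), log = TRUE))
ll_lnorm <- function(x) sum(dlnorm(x, meanlog = mean(log(x)), sdlog = sd(log(x)), log = TRUE))
gof_diff <- function(x) ll_exp(x) - ll_lnorm(x)

B <- 500
d_from_exp   <- replicate(B, gof_diff(rexp(50, rate = 1/mean(y))))            # data from fitted model 1
d_from_lnorm <- replicate(B, gof_diff(rlnorm(50, mean(log(y)), sd(log(y)))))  # data from fitted model 2
obs <- gof_diff(y)   # observed difference in goodness of fit
# compare `obs` against the two difference distributions to quantify model mimicry
```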
null
CC BY-SA 2.5
null
2010-09-20T08:40:00.997
2010-09-21T08:45:45.537
2017-04-13T12:44:23.203
-1
442
null
2891
1
5695
null
1
188
I'm looking for a simple way to store ratios. For a time component, I must store the average ratio between two behaviors. For example, the number of people that turn left compared to the number of people that turn right. I have to detect unusual behavior (people turning right abnormally often). How should I mathematically compare the average ratio against the analyzed ratio, and how should I display the difference on a graph? Thanks a lot in advance.
Best value to store ratio data and compare it to time period average
CC BY-SA 2.5
null
2010-09-20T09:07:06.277
2010-12-22T14:22:27.143
2010-12-22T14:22:27.143
1739
null
[ "data-visualization", "multiple-comparisons", "count-data", "logit", "proportion" ]
2892
1
2905
null
17
14638
What is your intuition / interpretation of a distribution of eigenvalues of a correlation matrix? I tend to hear that usually the 3 largest eigenvalues are the most important, while those close to zero are noise. Also, I've seen a few research papers investigating how naturally occurring eigenvalue distributions differ from those calculated from random correlation matrices (again, distinguishing noise from signal). Please feel free to elaborate on your insights.
Intuition / interpretation of a distribution of eigenvalues of a correlation matrix?
CC BY-SA 2.5
null
2010-09-20T10:26:08.910
2019-02-17T00:32:01.113
null
null
1250
[ "distributions", "correlation" ]
2893
1
2897
null
21
5825
It is usual to use second, third and fourth moments of a distribution to describe certain properties. Do partial moments or moments higher than the fourth describe any useful properties of a distribution?
Moments of a distribution - any use for partial or higher moments?
CC BY-SA 2.5
null
2010-09-20T10:56:57.297
2018-09-04T08:41:06.673
2018-09-04T08:41:06.673
11887
1250
[ "distributions", "moments", "partial-moments" ]
2894
1
null
null
9
415
I am trying to estimate the mean of a more-or-less Gaussian distribution via sampling. I have no prior knowledge about its mean or its variance. Each sample is expensive to obtain. How do I dynamically decide how many samples I need to get a certain level of confidence/accuracy? Alternatively, how do I know when I can stop taking samples? All the answers to questions like this that I can find seem to presume some knowledge of the variance, but I need to discover that along the way as well. Others are geared towards taking polls, and it's not clear to me (beginner that I am) how that generalizes -- my mean isn't w/in [0,1], etc. I think this is probably a simple question with a well-known answer, but my Google-fu is failing me. Even just telling me what to search for would be helpful.
Dynamic calculation of number of samples required to estimate the mean
CC BY-SA 2.5
null
2010-09-20T13:24:09.147
2010-09-20T15:51:22.327
2010-09-20T13:29:12.717
1376
1376
[ "estimation", "sample-size" ]
2895
2
null
665
6
null
Probability studies, well, how probable events are. You intuitively know what probability is. Statistics is the study of data: showing it (using tools such as charts), summarizing it (using means and standard deviations etc.), reaching conclusions about the world from which that data was drawn (fitting lines to data etc.), and -- this is key -- quantifying how sure we can be about our conclusions. In order to quantify how sure we can be about our conclusions we need to use Probability. Let's say you have last year's data about rainfall in the region where you live and where I live. Last year it rained an average of 1/4 inch per week where you live, and 3/8 inch where I live. So we can say that rainfall in my region is on average 50% greater than where you live, right? Not so fast, Sparky. It could be a coincidence: maybe it just happened to rain a lot last year where I live. We can use Probability to estimate how confident we can be in our conclusion that my home is 50% soggier than yours. So basically you can say that Probability is the mathematical foundation for the Theory of Statistics.
null
CC BY-SA 2.5
null
2010-09-20T13:59:30.777
2010-09-20T13:59:30.777
null
null
666
null
2896
2
null
2894
0
null
You would normally want at least 30 observations to invoke the central limit theorem (though this cut-off is somewhat arbitrary). Unlike the case with polls etc., which are modelled using the binomial distribution, you cannot determine a sample size beforehand that guarantees a given level of accuracy with a Gaussian process: the standard error depends on the residual variability you actually observe. It should be noted that with a robust sampling strategy you can get much more accurate results than with a much larger sample collected under a poor strategy.
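As an illustration only (not a recipe), here is a minimal sequential sketch in R: `draw_sample()` is a hypothetical stand-in for the expensive sampling step, and the target half-width and the initial 30 draws are arbitrary choices. Note that a data-dependent stopping rule like this slightly distorts the nominal coverage.
```
## Sketch: keep sampling until the 95% CI half-width drops below eps.
## draw_sample() is a placeholder for the real, expensive sampling procedure.
draw_sample <- function() rnorm(1, mean = 10, sd = 2)   # stand-in for illustration

eps <- 0.5                            # desired half-width of the 95% CI
x   <- replicate(30, draw_sample())   # start with ~30 draws for the CLT to kick in

repeat {
  half_width <- qt(0.975, df = length(x) - 1) * sd(x) / sqrt(length(x))
  if (half_width <= eps) break
  x <- c(x, draw_sample())            # otherwise take one more expensive sample
}
c(n = length(x), mean = mean(x), half_width = half_width)
```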
null
CC BY-SA 2.5
null
2010-09-20T15:19:18.547
2010-09-20T15:44:52.563
2010-09-20T15:44:52.563
229
229
null
2897
2
null
2893
10
null
Aside from special properties of a few numbers (e.g., 2), the only real reason to single out integer moments as opposed to fractional moments is convenience. Higher moments can be used to understand tail behavior. For example, a centered random variable $X$ with variance 1 has subgaussian tails (i.e. $\mathbb{P}(|X| > t) < C e^{-ct^2}$ for some constants $c,C > 0$) if and only if $\mathbb{E}|X|^p \le (A \sqrt{p})^p$ for every $p\ge 1$ and some constant $A > 0$.
null
CC BY-SA 2.5
null
2010-09-20T15:22:46.340
2011-03-24T18:26:30.237
2011-03-24T18:26:30.237
89
89
null
2898
2
null
2894
2
null
You need to search for 'Bayesian adaptive designs'. The basic idea is as follows: - You initialize the prior for the parameters of interest. Before any data collection your priors would be diffuse. As additional data comes in, you reset the prior to be the posterior that corresponds to 'prior + data up to that point in time'. - Collect data. - Compute the posterior based on the data + priors. The posterior is then used as the prior in step 1 if you actually collect additional data. - Assess whether your stopping criteria are met. Stopping criteria could include something like requiring that the 95% credible interval be no wider than $\pm \epsilon$ units for the parameters of interest. You could also have more formal loss functions associated with the parameters of interest and compute expected loss with respect to the posterior distribution for the parameter of interest. You then repeat steps 1, 2 and 3 till your stopping criteria from step 4 are met.
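A minimal, illustrative sketch of that loop for a normal mean, assuming (only to keep the algebra simple) a known observation standard deviation and a conjugate normal prior; `get_batch()` is a hypothetical data-collection step and the stopping width is arbitrary.
```
## Conjugate normal-normal updating with known sigma (an illustrative assumption).
sigma <- 2                       # assumed known observation SD
mu0   <- 0; tau0 <- 10           # diffuse prior: N(mu0, tau0^2)
eps   <- 0.1                     # stop when 95% credible half-width <= eps
get_batch <- function(k) rnorm(k, mean = 1, sd = sigma)  # placeholder for real data

repeat {
  y    <- get_batch(5)                               # step 2: collect data
  prec <- 1 / tau0^2 + length(y) / sigma^2           # step 3: posterior precision
  mu0  <- (mu0 / tau0^2 + sum(y) / sigma^2) / prec   # posterior mean
  tau0 <- sqrt(1 / prec)                             # posterior SD (becomes new prior)
  if (1.96 * tau0 <= eps) break                      # step 4: stopping criterion
}
c(posterior_mean = mu0, posterior_sd = tau0)
```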
null
CC BY-SA 2.5
null
2010-09-20T15:51:22.327
2010-09-20T15:51:22.327
null
null
null
null
2899
2
null
2893
10
null
I get suspicious when I hear people ask about third and fourth moments. There are two common errors people often have in mind when they bring up the topic. I'm not saying that you are necessarily making these mistakes, but they do come up often. First, it sounds like they implicitly believe that distributions can be boiled down to four numbers; they suspect that just two numbers is not enough, but three or four should be plenty. Second, it sounds like hearkening back to the moment-matching approach to statistics that has largely lost out to maximum likelihood methods in contemporary statistics. Update: I expanded this answer into a [blog post](http://www.johndcook.com/blog/2010/09/20/skewness-andkurtosis/).
null
CC BY-SA 2.5
null
2010-09-20T15:54:47.243
2010-09-20T17:31:39.423
2010-09-20T17:31:39.423
319
319
null
2900
2
null
1805
9
null
[This page in MathWorld](http://mathworld.wolfram.com/FishersExactTest.html) explains how the calculations work. It points out that the test can be defined in a variety of ways: > To compute the P-value of the test, the tables must be ordered by some criterion that measures dependence, and those tables that represent equal or greater deviation from independence than the observed table are the ones whose probabilities are added together. There are a variety of criteria that can be used to measure dependence. I have not been able to find other articles or texts that explain how this is done with tables larger than 2x2. [This calculator](http://www.quantitativeskills.com/sisa/statistics/five2hlp.htm) computes the exact Fisher's test for tables with 2 columns and up to 5 rows. The criterion it uses is the hypergeometric probability of each table. The overall P value is the sum of the hypergeometric probability of all tables with the same marginal totals whose probabilities are less than or equal to the probability computed from the actual data.
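For what it's worth, R's `fisher.test()` in the stats package also handles tables larger than 2x2 and, as far as I understand, uses the same hypergeometric-probability ordering described above; a small sketch with made-up counts:
```
## Exact test on a 5x2 table of made-up counts (rows = groups, cols = outcome).
tab <- matrix(c(8, 2,
                6, 4,
                5, 5,
                3, 7,
                1, 9),
              nrow = 5, byrow = TRUE)
fisher.test(tab)   # exact p-value; increase workspace = 2e6 if it runs out of room
```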
null
CC BY-SA 2.5
null
2010-09-20T16:15:58.623
2010-09-20T16:49:00.773
2010-09-20T16:49:00.773
25
25
null
2901
2
null
2893
3
null
One example of use (interpretation is a better qualifier) of a higher moment: the fifth moment of a univariate distribution measures the asymmetry of its tails.
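A quick simulation-only illustration (not a formal argument): the standardized fifth central moment is near zero for a symmetric distribution and clearly positive when the right tail is heavier.
```
## Standardized fifth central moment as a crude measure of tail asymmetry.
std_moment5 <- function(x) mean(((x - mean(x)) / sd(x))^5)

set.seed(1)
std_moment5(rnorm(1e5))   # symmetric: close to 0
std_moment5(rexp(1e5))    # heavy right tail: large and positive
```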
null
CC BY-SA 2.5
null
2010-09-20T16:42:10.380
2010-09-20T20:07:31.340
2010-09-20T20:07:31.340
603
603
null
2903
2
null
2730
7
null
Gelman has a good discussion paper on [ANOVA](http://projecteuclid.org/euclid.aos/1112967698): "Analysis of variance—why it is more important than ever".
null
CC BY-SA 2.5
null
2010-09-20T16:54:56.517
2010-09-20T16:54:56.517
null
null
603
null
2904
1
null
null
7
1091
I am attempting to estimate a model of the following form: ``` W = alphaH * H + alphaM * M + alphaL * L + X * beta ``` where `H, M, L` are indicators for a discrete choice variable, and `beta` is something like 35-dimensional. Because we believe our data/model has endogeneity issues, we have expanded the model to ``` W = alphaH * H' + alphaM * M' + alphaL * L' + X * beta H = Z * betaH M = Z * betaM L = Z * betaL H' = 1( H = max(H,M,L) ) M' = 1( M = max(H,M,L) ) L' = 1( L = max(H,M,L) ) ``` where `Z` are instruments, and `betaH, betaM, betaL` are parameters to be estimated. This "subregression" corresponds to a latent utility-based choice model. We have been able to estimate the second-stage model (estimates of `H, M, L`, implying `H', M', L'`) in Stata using the `mvprobit` command, but can't figure out how to estimate the entire model in one fell swoop. To work around this, we wrote some code in MATLAB to estimate the model using simulated maximum likelihood, but MATLAB is choking on local minima (maxima in this problem, but MATLAB will only minimize the negative...), of which there are plenty. We have attempted to work around this by starting from a few dozen initial conditions, none of which usually converges to the "right" answer; I say this with near certainty since we have been testing the code piecewise and have confirmation (on randomized test data) that if the optimization starts near the "correct" values (in test), it converges to reasonable values, otherwise it gets nowhere close (although the resultant outcome has a far lower overall likelihood). Are there any tricks -- MATLAB, Stata, or otherwise -- to work around this problem? Is this an inherent issue with simulation versus closed-form analysis? Thanks for your help.
How can I work around "lumpiness" in simulated maximum likelihood estimation?
CC BY-SA 2.5
null
2010-09-20T17:45:15.683
2010-11-16T23:35:41.633
2010-11-16T23:35:41.633
159
53
[ "matlab", "stata", "maximum-likelihood", "optimization" ]
2905
2
null
2892
6
null
"I tend to hear that usually the 3 largest eigenvalues are the most important, while those close to zero are noise." You can test for that. See the paper linked in [this](https://stats.stackexchange.com/questions/2860/pca-on-out-of-sample-data/2877#2877) post for more detail. Again, if you're dealing with financial time series you might want to correct for leptokurticity first (i.e. consider the series of GARCH-adjusted returns, not the raw returns). "I've seen a few research papers investigating how naturally occurring eigenvalue distributions differ from those calculated from random correlation matrices (again, distinguishing noise from signal)." Edward: Usually, one would do it the other way around: look at the multivariate distribution of eigenvalues (of correlation matrices) coming from the application you want. Once you have identified a credible candidate for the distribution of eigenvalues, it should be fairly easy to generate from it. The best procedure for identifying the multivariate distribution of your eigenvalues depends on how many assets you want to consider simultaneously (i.e. the dimensions of your correlation matrix). There is a neat trick if $p\leq 10$ ($p$ being the number of assets). Edit (following comments by Shabbychef), a four-step procedure: - Suppose you have $j=1,...,J$ (with $J\geq2$) sub-samples of multivariate data. You need an estimator of the variance-covariance matrix $\tilde{C}_j$ for each sub-sample $j$ (you could use the classical estimator or a robust alternative such as the fast MCD, which is well implemented in Matlab, SAS, S, R, ...). As usual, if you're dealing with financial time series you would want to consider the series of GARCH-adjusted returns, not raw returns. - For each sub-sample $j$, compute $\tilde{\Lambda}_j=\log(\tilde{\lambda}_1^j),...,\log(\tilde{\lambda}_p^j)$, where the $\tilde{\lambda}_i^j$ are the eigenvalues of $\tilde{C}_j$. - Compute $CV(\tilde{\Lambda})$, the convex hull of the $J \times p$ matrix whose j-th row is $\tilde{\Lambda}_j$ (again, this is well implemented in Matlab, R, ...). - Draw points at random from inside $CV(\tilde{\Lambda})$ (this is done by giving weight $w_i$ to each of the vertices of $CV(\tilde{\Lambda})$, where $w_i=\frac{\gamma_i}{\sum_{i=1}^{p}\gamma_i}$ and $\gamma_i$ is a draw from a unit exponential distribution; more details here). A limitation is that fast computation of the convex hull of a set of points becomes extremely slow when the number of dimensions is larger than 10.
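For what it's worth, a rough sketch of steps 1-3 with simulated stand-in returns, plain sample covariances rather than robust or GARCH-adjusted estimates, and the `geometry` package assumed for the hull step:
```
## Steps 1-3 on simulated returns: J sub-samples, p assets.
# library(geometry)   # provides convhulln(); assumed available for step 3
set.seed(42)
J <- 20; p <- 5; n <- 250
ret <- replicate(J, matrix(rnorm(n * p), n, p), simplify = FALSE)  # stand-in returns

Lambda <- t(sapply(ret, function(R) {
  C <- cov(R)                                                 # step 1: covariance per sub-sample
  log(eigen(C, symmetric = TRUE, only.values = TRUE)$values)  # step 2: log-eigenvalues
}))                                                           # Lambda is the J x p matrix of step 3

# hull <- geometry::convhulln(Lambda)   # step 3: convex hull of the rows
# Step 4 would draw points as convex combinations of the hull's vertices, with
# weights w_i = gamma_i / sum(gamma_i) and gamma_i ~ Exp(1).
```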
null
CC BY-SA 2.5
null
2010-09-20T18:49:08.317
2010-09-21T11:59:13.977
2017-04-13T12:44:46.433
-1
603
null
2906
1
2908
null
20
1181
Nassim Taleb, of [Black Swan](http://rads.stackoverflow.com/amzn/click/081297381X) fame (or infamy), has elaborated on the concept and developed what he calls ["a map of the limits of Statistics"](http://www.edge.org/3rd_culture/taleb08/taleb08_index.html). His basic argument is that there is one kind of decision problem where the use of any statistical model is harmful. These would be any decision problems where the consequence of making the wrong decision could be inordinately high, and the underlying PDF is hard to know. One example would be shorting a stock option. This kind of operation can lead to limitless (in theory, at least) loss; and the probability of such a loss is unknown. Many people in fact model the probability, but Taleb argues that the financial markets aren't old enough to allow one to be confident about any model. Just because every swan you have ever seen is white, that doesn't mean black swans are impossible or even unlikely. So here's the question: is there such a thing as a consensus in the Statistics community about Mr. Taleb's arguments? Maybe this should be community wiki. I don't know.
What is the community's take on the Fourth Quadrant?
CC BY-SA 2.5
null
2010-09-20T18:57:24.617
2010-09-20T19:13:08.843
null
null
666
[ "distributions", "modeling", "random-variable" ]
2907
2
null
2686
1
null
I would start with robust time series [filters](http://cran.r-project.org/web/packages/robfilter/index.html) (i.e. time-varying medians) because these are simpler and more intuitive. Basically, the robust time series filter is to time series smoothers what the median is to the mean: a summary measure (in this case a time-varying one) that is not sensitive to 'weird' observations so long as they do not represent the majority of the data. For a summary see [here](http://en.wikipedia.org/wiki/Robust_statistics#Example:_speed_of_light_data). If you need more sophisticated smoothers (i.e. non-linear ones), you could do with robust Kalman [filtering](http://robkalman.r-forge.r-project.org/) (although this requires a slightly higher level of mathematical sophistication). [This](http://cran.r-project.org/web/packages/robfilter/robfilter.pdf) document contains the following example (code to run under [R](http://cran.r-project.org/mirrors.html), the open-source stat software):

```
library(robfilter)
data(Nile)
nile <- as.numeric(Nile)
obj <- wrm.filter(nile, width=11)
plot(obj)
```

![The original time series is in black and the filtered version (filtered by repeated median) is overplotted in red](https://i.stack.imgur.com/O3VJl.jpg). The last document contains a large number of references to papers and books. Other types of filters are implemented in the package, but the repeated median is a very simple one.
null
CC BY-SA 2.5
null
2010-09-20T19:11:45.167
2010-09-22T12:44:41.213
2010-09-22T12:44:41.213
603
603
null
2908
2
null
2906
28
null
I was at a meeting of the ASA (American Statistical Association) a couple years ago where Taleb talked about his "fourth quadrant" and it seemed his remarks were well received. Taleb was much more careful in his language when addressing an auditorium of statisticians than he has been in his popular writing. Some statisticians are offended by the provocative hyperbole in Taleb's books, but when he states his ideas professionally there's not too much to object to. It's hard to argue that one can confidently estimate the probability of rare events with little or no data, or that one should make high-stakes decisions on such estimates if they can at all be avoided. (Here's a [blog post](http://www.johndcook.com/blog/2008/08/07/black-swan-talk/) I wrote about Taleb's ASA talk shortly after the event.)
null
CC BY-SA 2.5
null
2010-09-20T19:13:08.843
2010-09-20T19:13:08.843
null
null
319
null
2909
1
4033
null
9
1133
I am interested in the distribution of the maximum drawdown of a random walk: Let $X_0 = 0, X_{i+1} = X_i + Y_{i+1}$ where $Y_i \sim \mathcal{N}(\mu,1)$. The maximum drawdown after $n$ periods is $\max_{0 \le i \le j \le n} (X_i - X_j)$. A paper by [Magdon-Ismail et. al.](http://www.alumni.caltech.edu/~amir/drawdown-jrnl.pdf) gives the distribution for maximum drawdown of a Brownian motion with drift. The expression involves an infinite sum which includes some terms defined only implicitly. I am having problems writing an implementation which converges. Is anyone aware of an alternative expression of the CDF or a reference implementation in code?
Computing the cumulative distribution of max drawdown of random walk with drift
CC BY-SA 2.5
null
2010-09-20T19:59:31.487
2018-08-27T16:10:00.043
2018-08-27T16:10:00.043
11887
795
[ "distributions", "cumulative-distribution-function", "finance", "random-walk" ]
2910
1
3191
null
92
23330
We often hear of project management and design patterns in computer science, but less frequently in statistical analysis. However, it seems that a decisive step toward designing an effective and durable statistical project is to keep things organized. I often advocate the use of R and a consistent organization of files in separate folders (raw data file, transformed data file, R scripts, figures, notes, etc.). The main reason for this approach is that it may be easier to run your analysis later (when you forgot how you happened to produce a given plot, for instance). What are the best practices for statistical project management, or the recommendations you would like to give from your own experience? Of course, this applies to any statistical software. (one answer per post, please)
How to efficiently manage a statistical analysis project?
CC BY-SA 2.5
null
2010-09-20T20:39:08.183
2018-06-09T04:04:28.840
2016-08-10T15:26:11.967
7290
930
[ "project-management" ]
2911
2
null
2910
21
null
This doesn't specifically provide an answer, but you may want to look at these related stackoverflow questions: - "Workflow for statistical analysis and report writing" - "Organizing R Source Code" - "How to organize large R programs?" - "R and version control for the solo data analyst" - "How does software development compare with statistical programming/analysis ?" - "How do you combine “Revision Control” with “WorkFlow” for R?" You may also be interested in [John Myles White's recent project](http://www.johnmyleswhite.com/notebook/2010/09/19/why-use-projecttemplate-or-any-other-framework/) to create a statistical project template.
null
CC BY-SA 2.5
null
2010-09-20T20:42:21.877
2010-09-25T10:59:17.233
2017-05-23T12:39:26.167
-1
5
null
2912
2
null
2892
4
null
One way I have studied this problem in the past is to construct the 'eigenportfolios' of the correlation matrix. That is, take the eigenvector associated with the $k$th largest eigenvalue of the correlation matrix and scale it to a gross leverage of 1 (i.e. make the absolute sum of the vector equal to one). Then see if you can find any real physical or financial connection between the stocks which have large representation in the portfolio. Usually the first eigenportfolio is almost equally weighted in every name, which is to say the 'market' portfolio consisting of all assets with equal dollar weights. The second eigenportfolio may have some semantic meaning, depending on which time period you look over: e.g. mostly energy stocks, or bank stocks, etc. In my experience, you would be hard-pressed to make any story out of the fifth eigenportfolio or beyond, and this depends in some part on universe selection and the time period considered. This is just fine because usually the fifth eigenvalue or so is not too far beyond the limits imposed by the Marchenko-Pastur distribution.
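A small sketch of that construction, with simulated returns standing in for real data:
```
## Build the k-th 'eigenportfolio' from a correlation matrix of returns.
set.seed(7)
R   <- matrix(rnorm(500 * 10), 500, 10)   # stand-in for 10 assets' return series
Cc  <- cor(R)
eig <- eigen(Cc, symmetric = TRUE)

k <- 2
w <- eig$vectors[, k]
w <- w / sum(abs(w))   # scale to gross leverage 1 (absolute weights sum to 1)
round(w, 3)            # inspect which names dominate the k-th eigenportfolio
```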
null
CC BY-SA 2.5
null
2010-09-20T21:27:28.550
2010-09-20T21:27:28.550
null
null
795
null
2913
2
null
2860
2
null
What computational savings? The PCA computation is based on the covariance (or correlation) matrix, whose size depends on the number of variables, not the number of data points. The calculation of a covariance matrix is fast. Even if you were doing PCA repeatedly (as part of a simulation, for instance), reducing from 1000 data points to 500 wouldn't even reduce the time by 50%.
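A quick illustration of the point (timings are machine-dependent, and the exact numbers do not matter):
```
## Covariance/PCA of 1000 vs 500 observations on 50 variables: both are essentially
## instant, because the cost is driven by the 50x50 matrix, not the number of rows.
X <- matrix(rnorm(1000 * 50), 1000, 50)
system.time(prcomp(X))           # all 1000 rows
system.time(prcomp(X[1:500, ]))  # half the rows; savings are negligible
```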
null
CC BY-SA 2.5
null
2010-09-20T21:29:31.903
2010-09-20T21:29:31.903
null
null
919
null
2914
1
2931
null
14
23534
When you are the one doing the work and are aware of what you are doing, you develop a sense of when you have over-fit the model. For one thing, you can track the trend or deterioration in the adjusted R-squared of the model. You can also track a similar deterioration in the p-values of the regression coefficients of the main variables. But when you just read someone else's study and have no insight into their internal model-development process, how can you clearly detect whether a model is over-fit or not?
How to detect when a regression model is over-fit?
CC BY-SA 2.5
null
2010-09-20T21:35:58.207
2021-11-19T14:13:26.137
2017-08-15T21:41:44.853
12359
1329
[ "regression", "multivariate-analysis", "overfitting" ]
2915
1
2928
null
4
276
I am talking about the regression method that measures the impact of several layers of independent variables upon a dependent variable.
What is a good internet based source of information on Hierarchical Modeling?
CC BY-SA 2.5
null
2010-09-20T22:16:05.190
2010-09-21T11:24:20.990
2010-09-21T11:24:20.990
183
1329
[ "modeling", "regression", "multilevel-analysis" ]
2916
2
null
2892
14
null
Eigenvalues give the magnitudes of the principal components of the data spread. [](https://i.stack.imgur.com/PznUT.png) (source: [yaroslavvb.com](http://yaroslavvb.com/upload/eigenvalues.png)) The first dataset was generated from a Gaussian with covariance matrix $\left(\matrix{3&0\\\\0&1}\right)$; the second dataset is the first dataset rotated by $\pi/4$.
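A small sketch reproducing the idea behind the figure with simulated data (the sample eigenvalues will only be approximately 3 and 1):
```
## Draw from a Gaussian with covariance diag(3, 1), then rotate by pi/4:
## the eigenvalues of the sample covariance stay near 3 and 1 in both cases.
set.seed(1)
n   <- 5000
X1  <- cbind(rnorm(n, sd = sqrt(3)), rnorm(n, sd = 1))
th  <- pi / 4
Rot <- matrix(c(cos(th), sin(th), -sin(th), cos(th)), 2, 2)
X2  <- X1 %*% t(Rot)

eigen(cov(X1))$values   # roughly 3 and 1
eigen(cov(X2))$values   # still roughly 3 and 1; only the eigenvectors rotate
```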
null
CC BY-SA 4.0
null
2010-09-20T23:05:48.593
2019-02-17T00:32:01.113
2019-02-17T00:32:01.113
79696
511
null
2917
1
null
null
11
3365
## Background I am conducting a meta-analysis that includes previously published data. Often, differences between treatments are reported with P-values, least significant differences (LSD), and other statistics but provide no direct estimate of the variance. In the context of the model that I am using, an overestimate of variance is okay. ## Problem Here is a list of transformations to $SE$ where $SE=\sqrt{MSE/n}$ [(Saville 2003)](http://eskes.psychiatry.dal.ca/Files/2003_-_Saville_-_Basic_statistics_and_the_inconsistency_of_m.pdf) that I am considering, feedback appreciated; below, I assume that $\alpha=0.05$ so $1-^{\alpha}/_2=0.975$ and variables are normally distributed unless otherwise stated: ## Questions: - given $P$, $n$, and treatment means $\bar X_1$ and $\bar X_2$ $$SE=\frac{\bar X_1-\bar X_2}{t_{(1-\frac{P}{2},2n-2)}\sqrt{2/n}}$$ - given LSD (Rosenberg 2004), $\alpha$, $n$, $b$ where $b$ is number of blocks, and $n=b$ by default for RCBD $$SE = \frac{LSD}{t_{(0.975,n)}\sqrt{2bn}}$$ - given MSD (minimum significant difference) (Wang 2000), $n$, $\alpha$, df = $2n-2$ $$SE = \frac{MSD}{t_{(0.975, 2n-2)}\sqrt{2}}$$ - given a 95% Confidence Interval (Saville 2003) (measured from mean to upper or lower confidence limit), $\alpha$, and $n$ $$SE = \frac{CI}{t_{(\alpha/2,n)}}$$ - given Tukey's HSD, $n$, where $q$ is the 'studentized range statistic', $$SE = \frac{HSD}{q_{(0.975,n)}}$$ An R function to encapsulate these equations: - Example Data: data <- data.frame(Y=rep(1,5), stat=rep(1,5), n=rep(4,5), statname=c('SD', 'MSE', 'LSD', 'HSD', 'MSD') - Example Use: transformstats(data) - The transformstats function: transformstats <- function(data) { ## Transformation of stats to SE ## transform SD to SE if ("SD" %in% data$statname) { sdi <- which(data$statname == "SD") data$stat[sdi] <- data$stat[sdi] / sqrt(data$n[sdi]) data$statname[sdi] <- "SE" } ## transform MSE to SE if ("MSE" %in% data$statname) { msei <- which(data$statname == "MSE") data$stat[msei] <- sqrt (data$stat[msei]/data$n[msei]) data$statname[msei] <- "SE" } ## 95%CI measured from mean to upper or lower CI ## SE = CI/t if ("95%CI" %in% data$statname) { cii <- which(data$statname == '95%CI') data$stat[cii] <- data$stat[cii]/qt(0.975,data$n[cii]) data$statname[cii] <- "SE" } ## Fisher's Least Significant Difference (LSD) ## conservatively assume no within block replication if ("LSD" %in% data$statname) { lsdi <- which(data$statname == "LSD") data$stat[lsdi] <- data$stat[lsdi] / (qt(0.975,data$n[lsdi]) * sqrt( (2 * data$n[lsdi]))) data$statname[lsdi] <- "SE" } ## Tukey's Honestly Significant Difference (HSD), ## conservatively assuming 3 groups being tested so df =2 if ("HSD" %in% data$statname) { hsdi <- which(data$statname == "HSD" & data$n > 1) data$stat[hsdi] <- data$stat[hsdi] / (qtukey(0.975, data$n[lsdi], df = 2)) data$statname[hsdi] <- "SE" } ## MSD Minimum Squared Difference ## MSD = t_{\alpha/2, 2n-2}*SD*sqrt(2/n) ## SE = MSD*n/(t*sqrt(2)) if ("MSD" %in% data$statname) { msdi <- which(data$statname == "MSD") data$stat[msdi] <- data$stat[msdi] * data$n[msdi] / (qt(0.975,2*data$n[lsdi]-2)*sqrt(2)) data$statname[msdi] <- "SE" } if (FALSE %in% c('SE','none') %in% data$statname) { print(paste(trait, ': ERROR!!! data contains untransformed statistics')) } return(data) } References [Saville 2003Can J. Exptl Psych. 
(pdf)](http://eskes.psychiatry.dal.ca/Files/2003_-_Saville_-_Basic_statistics_and_the_inconsistency_of_m.pdf) [Rosenberg et al 2004 (link)](http://apsjournals.apsnet.org/doi/abs/10.1094/PHYTO.2004.94.9.1013) [Wang et al. 2000 Env. Tox. and Chem 19(1):113-117 (link)](http://onlinelibrary.wiley.com/doi/10.1002/etc.5620190113/full)
Are these formulas for transforming P, LSD, MSD, HSD, CI, to SE as an exact or inflated/conservative estimate of $\hat{\sigma}$ correct?
CC BY-SA 2.5
null
2010-09-20T23:14:27.380
2011-03-17T02:47:49.370
2011-03-17T02:46:32.257
795
1381
[ "multiple-comparisons", "variance", "data-transformation", "meta-analysis" ]
2918
1
null
null
4
1669
Are Lorenz curves and QQ-plots the same? If not, where are the differences? I read about both of them and they appear to be two terms for the same type of plot / statistical technique to compare distributions. I was not able to find any confirmatory source for this. Perhaps you know?
Is Lorenz curve the same as QQ-plot?
CC BY-SA 3.0
null
2010-09-20T23:22:35.163
2014-11-20T01:17:39.007
2014-11-20T01:17:39.007
805
608
[ "data-visualization", "qq-plot", "lorenz-curve" ]
2919
2
null
2915
4
null
I warmly recommend Doug Bates's [book](http://lme4.r-forge.r-project.org/book/).
null
CC BY-SA 2.5
null
2010-09-20T23:37:22.590
2010-09-20T23:37:22.590
null
null
603
null
2920
2
null
2917
7
null
Your LSD equation looks fine. If you want to get back to variance and you have a summary statistic that says something about the variability or significance of an effect, then you can almost always get back to variance; you just need to know the formula. For example, in your equation for LSD you want to solve for MSE: $MSE = \left(\frac{LSD}{t}\right)^2 \cdot \frac{b}{2}$, where $t$ is the t quantile used in the LSD.
null
CC BY-SA 2.5
null
2010-09-20T23:42:21.657
2010-09-20T23:42:21.657
null
null
601
null
2921
2
null
2915
7
null
[The Centre for Multilevel Modelling](http://www.cmm.bristol.ac.uk/learning-training/index.shtml) has free online tutorials for multi-level modeling, and they have software tutorials for fitting models in both their MLwiN software and STATA. You will probably want to check out all the questions with the [multilevel analysis](https://stats.stackexchange.com/questions/tagged/multilevel-analysis) tag here. You will find many other suggestions for books and resources. Also [Harvey Goldstein](http://www.cmm.bristol.ac.uk/team/HG_Personal/multbook1995.pdf) has an online book, but I would suggest you check out the Centre for Multilevel Modelling first. good luck
null
CC BY-SA 2.5
null
2010-09-20T23:52:04.230
2010-09-21T02:52:02.937
2017-04-13T12:44:33.310
-1
1036
null
2922
2
null
2904
2
null
Your likelihood function is non-concave (i.e. the Hessian matrix of your likelihood function is not negative semi-definite everywhere). From this it follows that - You will only find a local maximum of your likelihood function (no guarantee of global optimality). - That maximum will always depend on your choice of starting point. - Your maximization procedure will always be an iterative one. Without directly solving the issues above, one way to handle them would be through Monte Carlo optimization (a review is [here](http://www.google.com/url?sa=t&source=web&cd=14&ved=0CCcQFjADOAo&url=https%3A%2F%2Foa.doria.fi%2Fbitstream%2Fhandle%2F10024%2F30051%2FTMP.objres.467.pdf%3Fsequence%3D1&rct=j&q=monte%20carlo%20optimization%20nonlinear%20models&ei=vgeYTIurAs6gOLK15NYP&usg=AFQjCNEwwWrMgja3vUNM43h4gT6oIKdoAQ&sig2=HPFupT9Xbnuf6RFr89CqWg)), which is basically a recasting of Rob's suggestion within a statistical framework.
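One simple member of that family is random multistart: scatter many starting points and keep the best local optimum. A toy sketch, with a deliberately multimodal function standing in for the (negative) simulated likelihood:
```
## Random multistart: many random starting points, keep the best local optimum.
f <- function(x) sum(x^2) + 10 * sum(sin(3 * x)^2)   # toy multimodal objective

set.seed(123)
starts <- matrix(runif(50 * 2, -5, 5), ncol = 2)     # 50 random 2-d starting points
fits   <- lapply(seq_len(nrow(starts)),
                 function(i) optim(starts[i, ], f, method = "BFGS"))
best   <- fits[[which.min(sapply(fits, `[[`, "value"))]]
best$par; best$value
```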
null
CC BY-SA 2.5
null
2010-09-21T00:04:45.017
2010-09-22T22:33:24.393
2010-09-22T22:33:24.393
603
603
null
2924
2
null
2904
3
null
It sounds like you need to use a more robust optimization algorithm that can handle local minima. Particle swarm methods work quite well in this case. Or you could try other evolutionary optimization methods or simulated annealing.
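For what it's worth, base R's `optim()` already offers a simulated-annealing method ("SANN"), which can be followed by a local polish; a toy sketch with a multimodal stand-in objective:
```
## Simulated annealing first, then a local gradient-based polish from its answer.
f <- function(x) sum(x^2) + 10 * sum(sin(3 * x)^2)   # toy multimodal objective

set.seed(1)
rough  <- optim(c(4, -4), f, method = "SANN",
                control = list(maxit = 20000, temp = 10))
polish <- optim(rough$par, f, method = "BFGS")       # refine the SANN solution
polish$par; polish$value
```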
null
CC BY-SA 2.5
null
2010-09-21T00:42:04.910
2010-09-21T00:42:04.910
null
null
159
null
2925
1
2963
null
6
791
Suppose that we want to generate a draw from the following distribution: $P(X=0) = 0.5$ $P(X=1) = 0.5$ There are two constraints though: (a) The draw has to be on the basis of an external event. (b) Related to (a), the draw must be verifiable by a third party. In other words, a third party should be able to verify that my draw was in fact $X = 0$ (say). Qn 1: Can such a system be devised and if so how? Qn 2: Can the system be extended to discrete variables with more than 2 possible outcomes (like the roll of a dice)? Qn 3: Similarly, can it be extended to continuous variables (e.g., the normal)?
Is there a verifiable way to generate discrete random variables on the basis of an external event?
CC BY-SA 2.5
null
2010-09-21T00:47:15.803
2020-06-29T21:36:10.710
null
null
null
[ "random-variable" ]
2926
2
null
305
4
null
Two reasons I can think of: - Regular Student's T is pretty robust to heteroscedasticity if the sample sizes are equal. - If you believe strongly a priori that the data is homoscedastic, then you lose nothing and might gain a small amount of power by using Student's T instead of Welch's T. One reason that I would not give is that Student's T is exact and Welch's T isn't. IMHO the exactness of Student's T is academic because it's only exact for normally distributed data, and no real data is exactly normally distributed. I can't think of a single quantity that people actually measure and analyze statistically where the distribution could plausibly have a support of all real numbers. For example, there are only so many atoms in the universe, and some quantities can't be negative. Therefore, when you use any kind of T-test on real data, you're making an approximation anyhow.
null
CC BY-SA 2.5
null
2010-09-21T01:36:24.877
2010-09-21T01:36:24.877
null
null
1347
null
2927
2
null
2914
7
null
When I'm fitting a model myself I generally use information criteria during the fitting process, such as [AIC](http://en.wikipedia.org/wiki/Akaike_information_criterion) or [BIC](http://en.wikipedia.org/wiki/Bayesian_information_criterion), or alternatively [Likelihood-ratio tests](http://en.wikipedia.org/wiki/Likelihood_ratio_test) for models fit based on maximum likelihood or the [F-test](http://en.wikipedia.org/wiki/F-test#Regression_problems) for models fit based on least squares. All are conceptually similar in that they penalise additional parameters. They set a threshold of "additional explanatory power" for each new parameter added to a model. They are all a form of [regularisation](http://en.wikipedia.org/wiki/Regularization_%28mathematics%29). For others' models I look at the methods section to see if such techniques are used and also use rules of thumb, such as the number of observations per parameter - if there are around 5 (or fewer) observations per parameter I start to wonder. Always remember that a variable need not be "significant" in a model to be important. It may be a confounder and should be included on that basis if your goal is to estimate the effect of other variables.
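As a concrete illustration of the information-criterion route, a small sketch with nested linear models on simulated data:
```
## Compare a smaller and a larger nested model by AIC and by an F-test.
set.seed(10)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)   # x3 is pure noise here
y  <- 1 + 2 * x1 - x2 + rnorm(n)

m_small <- lm(y ~ x1 + x2)
m_big   <- lm(y ~ x1 + x2 + x3)

AIC(m_small, m_big)    # lower AIC wins; the extra parameter must earn its keep
anova(m_small, m_big)  # F-test for the added term
```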
null
CC BY-SA 2.5
null
2010-09-21T02:14:36.227
2010-09-21T02:14:36.227
null
null
521
null
2928
2
null
2915
6
null
UCLA has some good resources: - Papers on multilevel modelling - Textbook examples (see multilevel modelling) - A free textbook on multilevel modelling by Harvey Goldstein - and more...
null
CC BY-SA 2.5
null
2010-09-21T02:25:20.493
2010-09-21T02:25:20.493
null
null
183
null
2929
2
null
2918
11
null
The Lorenz curve is just a cumulative distribution function for a random variable bounded between 0 and 1, e.g., a proportion. In economics, the Lorenz curve asks, "what fraction of income is earned by the lowest x% of earners?" Typically, it is compared to the uniform distribution over [0,1], a distribution that would arise under perfect equality in income. The Gini coefficient is the area under the perfect equality curve less the area under the Lorenz curve, normalized by the area under the perfect equality curve; note that the area under the perfect equality curve is equal to 0.5. So, to be clear, while a QQ plot compares two distributions to one another, the Lorenz curve considers only one distribution that has a range over [0,1].
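To make the construction concrete, a small base-R sketch with simulated incomes; the Gini value computed here is just a discrete approximation of the area-based definition:
```
## Lorenz curve and Gini coefficient for a vector of incomes, via cumulative shares.
set.seed(3)
income <- rexp(1000, rate = 1/30000)   # stand-in income data

inc <- sort(income)
p   <- seq_along(inc) / length(inc)    # cumulative share of earners
L   <- cumsum(inc) / sum(inc)          # cumulative share of income

plot(p, L, type = "l", xlab = "share of earners", ylab = "share of income")
abline(0, 1, lty = 2)                  # perfect-equality line

gini <- 1 - 2 * mean(L)                # discrete approximation of 1 - 2*integral(L)
gini                                   # exponential incomes give roughly 0.5
```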
null
CC BY-SA 2.5
null
2010-09-21T02:46:15.140
2010-09-21T02:46:15.140
null
null
401
null
2930
2
null
2910
4
null
[van Belle](http://rads.stackoverflow.com/amzn/click/0470144483) is the source for the rules of successful statistical projects.
null
CC BY-SA 2.5
null
2010-09-21T03:00:47.613
2010-09-21T03:00:47.613
null
null
666
null
2931
2
null
2914
17
null
Cross validation is a fairly common way to detect overfitting, while regularization is a technique to prevent it. For a quick take, I'd recommend Andrew Moore's tutorial slides on the use of [cross-validation](https://www.cs.cmu.edu/%7E./awm/tutorials/overfit.html) ([mirror](https://web.archive.org/web/20170815214245/https://www.autonlab.org/_media/tutorials/overfit10.pdf)) -- pay particular attention to the caveats. For more detail, definitely read chapters 3 and 7 of [EOSL](http://www-stat.stanford.edu/%7Etibs/ElemStatLearn/), which cover the topic and associated matter in good depth.
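To make the cross-validation idea concrete, a bare-bones sketch with simulated data (in practice you would probably use an existing package rather than rolling your own):
```
## 5-fold cross-validated prediction error for a linear model.
set.seed(2)
n <- 200
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- 1 + 2 * d$x1 + rnorm(n)

k     <- 5
folds <- sample(rep(1:k, length.out = n))
cv_mse <- sapply(1:k, function(i) {
  fit <- lm(y ~ x1 + x2, data = d[folds != i, ])               # train on k-1 folds
  mean((d$y[folds == i] - predict(fit, d[folds == i, ]))^2)    # test on held-out fold
})
mean(cv_mse)   # compare across candidate models; a big gap vs in-sample fit suggests overfitting
```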
null
CC BY-SA 4.0
null
2010-09-21T03:32:23.540
2021-11-19T14:13:26.137
2021-11-19T14:13:26.137
322742
251
null
2932
1
null
null
6
765
Does the use of metric spaces to describe the support of a random variable provide any greater illumination? I ask this after reading about how metric spaces have been used to unify the mathematical measure-theoretic nature of probability and the physical intuition that most associate with probability. You can read my inspiration here: [http://www.arsmathematica.net/archives/2009/02/14/complete-metric-spaces-and-the-interpretation-of-probability/](http://www.arsmathematica.net/archives/2009/02/14/complete-metric-spaces-and-the-interpretation-of-probability/)
Metric spaces and the support of a random variable
CC BY-SA 2.5
null
2010-09-21T03:38:50.050
2012-01-08T17:18:07.957
2010-09-24T14:00:05.347
930
null
[ "random-variable" ]
2934
1
null
null
4
852
I have distributional data which I represent as a density. The data represent frequencies of user activities on a computer screen (e.g. the number of clicks along the y- or x-axis of the screen, but also other activities that can be related to coordinates and can therefore be binned by those coordinates, e.g. 5-pixel bins). I would like to compare two kinds of that behavior and find out how compatible their distributions are. This is very general: no assumptions exist, and I can't assume parametric conditions such as linearity or normality. I have read that Lorenz curves and the Gini coefficient are very much like what I need to compare distributions, but I also know that those methods are applied primarily to economic and sociological problems and are usually not applied to general distributions. Am I applying the wrong tool for the job? What is your opinion about this? What alternatives do you recommend in order to find out how similar two distributions are?
Using Lorenz curve / Gini coefficient for (non-economic) distribution data
CC BY-SA 2.5
null
2010-09-21T04:08:56.927
2010-09-21T04:57:20.413
null
null
608
[ "distributions" ]
2935
2
null
1735
5
null
Here is my suggestion. Rerun your model(s) as one single regression, with the Summer/Winter variable entering simply as a single dummy variable (1, 0). This way you would have a coefficient for Summer that differentiates it from Winter, and the regression coefficients for your three other variables would be consistent with one single weight ranking.
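In R terms, the suggestion amounts to a single fitted model along these lines (the data and variable names below are made-up placeholders, not your actual variables):
```
## Single regression with a Summer dummy instead of separate seasonal models
## (simulated placeholder data; swap in your own variables).
set.seed(4)
n   <- 60
dat <- data.frame(season = rep(c("Summer", "Winter"), each = n / 2),
                  V1 = rnorm(n), V2 = rnorm(n), V3 = rnorm(n))
dat$W <- 5 + 3 * (dat$season == "Summer") + 2 * dat$V1 - dat$V2 + rnorm(n)

dat$summer <- as.numeric(dat$season == "Summer")   # single (1, 0) dummy
fit <- lm(W ~ summer + V1 + V2 + V3, data = dat)
summary(fit)   # one set of coefficients, plus a Summer shift
```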
null
CC BY-SA 2.5
null
2010-09-21T04:29:35.863
2010-09-21T04:29:35.863
null
null
1329
null
2936
2
null
2925
3
null
This reminds me of a question from Algorithms class a long time ago. Let the external event be a (preferably continuous) random variable $Y$. To generate a value of $X$, take two independent observations of $Y$ and let $X$ be $1$ if the first observation of $Y$ is greater than the second, let it be $0$ if the second is greater than the first, and repeat the experiment if there is a tie. Obviously, this works better if $Y$ is continuous. If only a coinflip is available as a generator, one can let the number of consecutive heads before seeing a tail be the random variable $Y$. (The homework question, I believe, was how to turn a biased coinflip into an unbiased coinflip. One part of the homework was proving that the process would terminate...) This obviously can be extended to the discrete case as well, although one may run into more difficulties with ties. If you have $n$ possible outcomes for $X$, take some $k$ such that $n$ divides $k!$ and then partition the $k!$ possible orderings of $k$ observations from $Y$ into equivalence classes for $X$. Edit: per @srikant's comment, an example of a possible generator of $Y$: (as anticipated by @andyW) Let $Z_i$ be the number of shares traded on a given highly-liquid ETF as reported by a given source over a fixed time period unambiguously indexed by $i$. Let $Y_i = \tan{(Z_i)},$ where the tangent function is computed by a fixed standard library (in a fixed revision of R, say, on a fixed platform). Such a generator of $Y$ is pseudorandom enough for me. Other generators of $Z$ are also amenable to this process if they vary widely enough with respect to $2\pi$.
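A small sketch of the comparison trick, with the external event replaced by a placeholder generator purely for illustration (any observable, hard-to-predict continuous quantity would play that role):
```
## Turn pairs of observations of an external quantity Y into fair bits:
## X = 1 if the first draw exceeds the second, 0 if it is smaller, retry on ties.
observe_Y <- function() rexp(1)   # placeholder for the real external event

draw_X <- function() {
  repeat {
    y1 <- observe_Y(); y2 <- observe_Y()
    if (y1 > y2) return(1)
    if (y1 < y2) return(0)
    # exact tie (probability ~0 for continuous Y): repeat the experiment
  }
}

set.seed(11)
mean(replicate(10000, draw_X()))  # should be close to 0.5
```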
null
CC BY-SA 2.5
null
2010-09-21T04:48:37.157
2010-09-21T16:21:35.873
2010-09-21T16:21:35.873
795
795
null
2937
2
null
2934
2
null
You can use a [2-sample Kolmogorov-Smirnov test](http://en.wikipedia.org/wiki/Kolmogorov_Smirnov_Test#Two-sample_Kolmogorov.E2.80.93Smirnov_test) to compare the two distributions. Other tests for comparing two samples are the Anderson-Darling test (although the two-sample form of this is less frequently used) and the Baumgartner-Weiss-Schindler test. Before you jump into hypothesis testing, though, you may want to graphically inspect the two distributions, either by overlaying their empirical CDFs, or better by an empirical [Q-Q plot](http://en.wikipedia.org/wiki/Q-Q_plot).
null
CC BY-SA 2.5
null
2010-09-21T04:57:20.413
2010-09-21T04:57:20.413
null
null
795
null
2938
1
null
null
12
2568
The qq-plot can be used to visualize how similar two distributions are (e.g. visualizing the similarity of a distribution to a normal distribution, but also comparing two arbitrary data distributions). Is there any statistic that generates a more objective, numerical measure of their similarity (preferably in a normalized (0 <= x <= 1) form)? The Gini coefficient, for example, is used in economics when working with Lorenz curves; is there something similar for QQ-plots?
Quantifying QQ plot
CC BY-SA 3.0
null
2010-09-21T05:15:16.743
2017-04-13T01:52:10.793
2014-11-20T09:49:31.713
22047
608
[ "distributions", "qq-plot" ]
2939
2
null
2938
8
null
As I say in response to your comment on your previous question, check out the Kolmogorov-Smirnov test. It uses the maximum absolute distance between two cumulative distribution functions (alternatively conceived as the maximum absolute distance of the curve in the QQ plot from the 45-degree line) as a statistic. The KS test can be found in R using the command `ks.test()` in the 'stats' library. [Here's more information about its R usage.](http://sekhon.berkeley.edu/stats/html/ks.test.html)
null
CC BY-SA 2.5
null
2010-09-21T05:35:11.600
2010-09-21T05:35:11.600
null
null
401
null